When implementing a thread pool or any other means of limiting concurrency in the MySQL Server, careful thought is required about how to divide the problem so that we don't create unnecessary new hot spots. It is very easy to make a design that manages all connections and threads in one pool. However, this design quickly runs into scalability issues because the common data structures must be locked every time a connection or thread changes its state.
To avoid this issue we decided to implement the thread pool as a set of thread groups. Each thread group is independent of the others. It manages a set of connections and threads, together with the queues and other data structures needed to implement the thread group operations. Each thread group contains a minimum of one thread, and connections are bound to a thread group at connect time using a simple round-robin assignment. The thread pool aims to ensure that each thread group has either zero or one thread actively executing a statement. This means that the interactions between threads within one thread group are extremely limited, and they won't grow as the MySQL Server gets more statements to process. Thus it is very hard to see this model become a scalability issue in itself.
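The connect-time binding described above can be sketched as follows. This is a toy illustration, not MySQL's actual code; the names (`ThreadGroup`, `ThreadPool`, `assign_connection`) are assumptions for the sketch. The key point it shows is that the round-robin cursor is the only shared state, and it is touched only at connect time, not on the per-statement hot path.

```python
# Hypothetical sketch of round-robin connection-to-group assignment.
# Class and method names are illustrative, not the server's real ones.

class ThreadGroup:
    def __init__(self, group_id):
        self.group_id = group_id
        self.connections = []   # connections bound to this group
        self.queue = []         # queued statements awaiting a worker thread

class ThreadPool:
    def __init__(self, num_groups):
        self.groups = [ThreadGroup(i) for i in range(num_groups)]
        self.next_group = 0     # round-robin cursor, used only at connect time

    def assign_connection(self, conn):
        # Bind the connection to a group once, at connect time.
        # All later state changes touch only that group's own data
        # structures, so there is no pool-wide lock on the hot path.
        group = self.groups[self.next_group]
        self.next_group = (self.next_group + 1) % len(self.groups)
        group.connections.append(conn)
        return group
```

After connect time, a connection interacts only with its own group, which is what keeps contention from growing with load.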
So we solved the scalability problem using a Divide-and-Conquer technique.
An obvious problem with this approach, as you describe it, is that while a group is actively executing one connection, all other connections in the same group wait. A CPU-hogging query will thus block everything else, and to fix it you would need to create additional threads and execute them in parallel at the very same time. So it does not look quite like a solution, and frankly, one cannot just apply the same principles of partitioning to everything. You have N groups, and now you have N problems :)
It's quite correct that this problem exists, but we also have a solution to it; I will post the solution in the next blog.
The solution is to introduce a stall timer: once it has expired, the query no longer blocks the thread group.
This problem exists, however, even when you don't partition the problem, so it needs a solution independent of whether you partition or not.
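The stall-timer idea mentioned above can be sketched like this. The names and the 500 ms threshold are assumptions for illustration, not the server's actual defaults or implementation: once a statement has run past the limit, it is declared stalled and stops counting as the group's one active statement, so the group may start another worker.

```python
import time

# Illustrative stall-timer logic; the threshold value and names are
# assumptions for this sketch, not real server defaults.
STALL_LIMIT_SECONDS = 0.5

class ActiveStatement:
    def __init__(self):
        self.started_at = time.monotonic()

def is_stalled(stmt, now=None):
    # A statement that has executed longer than the stall limit no
    # longer blocks its thread group: the group is allowed to wake or
    # create another thread to serve the other connections.
    now = time.monotonic() if now is None else now
    return (now - stmt.started_at) > STALL_LIMIT_SECONDS
```

A long-running query thus delays its group for at most the stall limit before the group resumes serving other connections.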
I think this needs to limit max concurrent queries per db account before it will be interesting to most customers. A thread pool is an implementation detail that people like you and I care about, but most users are only interested in the functionality it provides. This doesn't do much more than innodb_thread_concurrency -- although it probably does that better.
So when will this get support for that? The Facebook patch now has a feature called Admission Control to enforce such a limit. But we didn't use a thread pool because that was too much work for our small team.
@Mark, thread pooling and user quotas are different things. Thread pools are user-unaware. However, maybe techniques introduced with pooling can help somewhat. Why do you absolutely want to limit user activity to a predefined number of concurrent statements?
I think there is something like priority under the covers of this thread pool: I read earlier that first transaction statements have low priority. While I personally would find this automagic feature questionable (why should transaction users suffer if their requests are non-overlapping), in general priorities are interesting. So your problem could also be solved by setting a request's priority low if the same user already has many normal-priority requests.
It is not necessary to fully block a query; on Unix, every blocked query means there is a filler thread stuck in pthread_cond_wait that does not do anything useful.
I want per-account limits because I try to make MySQL support busy OLTP workloads. Good fences make good neighbors.
I understand thread pools and user quotas are not the same. One is an implementation detail that the majority of MySQL users will find boring. The other is functionality that might let them get their work done.
I posted a blog about how this thread pool feature works: priorities for connections based on user settings and started transactions.
User quotas are an interesting feature to consider, as is the thread pool. The thread pool also adds a first step toward priorities on connections, which I think is also an important step forward in designing a flexible MySQL Server solution.
In managing a high load, quotas, limitations and priorities are all useful things.
Limitations make sure that the server operates well even when too many queries come in at once.
Priorities ensure that the right users/connections get things done in those overload situations.
Quotas can help as well, but if all users try to use their full quota at the same time you can still run into an overload situation.
Accounts don't all overload the server at the same time. The typical problem for me is when one account overloads the server, and neither innodb_thread_concurrency nor this thread pool helps in that case.
Multiple accounts are used to run real database workloads and accounts don't all behave the same way. You need to get beyond dbt-2 and sysbench to measure this.
The thread pool will make sure that the server continues to operate in an optimal manner. It will obviously not ensure that run-away users behave well or are kicked out. Quotas are one way of solving that.
I don't think any benchmark exists that mimics all the real-world problems, but benchmarks can still be very useful for understanding basic server behaviour, and they continue to provide us useful information about what works and what doesn't.
Anyways, the thread pool solves one important problem; there are still more interesting problems to be solved, and we've had some interesting discussions on this.
I define optimal here as operating with the optimal number of concurrent threads executing queries. For most InnoDB workloads this varies between 12 and 36, as I've noted. So the thread pool works hard to keep the server operating in this region of concurrency.
With fewer concurrent threads the server doesn't use its full potential since there simply isn't sufficient parallelism.
With more concurrent threads the threads compete for hot spots and CPU cache memory, such that server throughput slows down.
So the server works in its optimal manner when throughput stays close to the optimum even when very many concurrent connections are running queries against the MySQL Server.
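The idea of keeping the number of concurrently executing statements inside an optimal band can be sketched as a simple admission gate. This is a toy model of the concept, not the thread pool's actual mechanism; the class name and the chosen limit of 24 (a value inside the 12-36 band mentioned above) are assumptions.

```python
# Toy admission gate illustrating the "optimal concurrency band" idea:
# at most `limit` statements execute at once; the rest are parked in a
# queue instead of adding more competing threads.

class ConcurrencyGate:
    def __init__(self, limit=24):   # an assumed value inside the 12-36 band
        self.limit = limit
        self.active = 0             # statements currently executing
        self.waiting = []           # work parked while at the limit

    def try_start(self, work):
        if self.active < self.limit:
            self.active += 1        # under the band: execute immediately
            return True
        self.waiting.append(work)   # over the band: queue, don't add a thread
        return False

    def finish(self):
        self.active -= 1
        if self.waiting:
            self.waiting.pop(0)
            self.active += 1        # wake one parked statement
```

With thousands of connections, `active` never exceeds the band, so throughput stays near the peak of the concurrency curve instead of sliding down its right-hand side.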
It isn't that hard to mimic some of the misbehavior that occurs in the real world. Run two sysbench processes against the thread pool with one that misbehaves by using too much concurrency and measure the QPS achieved by the other.
Sounds like an interesting experiment; I might try that. It shouldn't be so hard to adapt my benchmark tool to handle it.
I gather the total throughput would be the same as if I ran one experiment, but the sysbench with more connections would get more throughput since the server gives all connections equal treatment.
We also have a "Justin Bieber" benchmark where all connections do an increment on the same row. Run that concurrent with a well-behaved workload to see the impact on QPS for the well-behaved account.
Being able to handle this will make it easier to sell the feature to companies that do multi-tenant MySQL in addition to enterprise users that do OLTP + ETL + random queries.
Another simplistic method that comes to mind for how the thread pool could achieve something similar to user quotas in an automated manner is to distribute connections to thread groups based on user id instead of round robin.
In this manner a user can only stall the operation of the thread group(s) it resides in.
For a complete solution a bit more is needed, but I think it gives a chance to implement a fairly flexible scheme for how allocations of server resources are given to different users. If the user is lightly loaded they can use more than their fair share, but not in an overload situation.
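The user-id based distribution suggested above could be as simple as a stable hash of the account name. A minimal sketch, assuming a hypothetical `group_for_user` helper (this function name and the choice of hash are illustrative, not anything from the server):

```python
import hashlib

# Hypothetical sketch of binding connections to thread groups by user
# instead of round robin: a misbehaving account can then only stall the
# group its name hashes to, leaving the other groups unaffected.

def group_for_user(user, num_groups):
    # Stable hash so the same account always lands in the same group,
    # across connections and across server restarts.
    digest = hashlib.md5(user.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_groups
```

The trade-off is uneven load: a heavy account still gets only its group's share in an overload situation, which is exactly the fencing effect being discussed, but a lightly loaded server lets it use more than that share.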
The benchmark scripts I mostly use were mentioned in a blog today. Any help on extending those is always welcome. It will help me to automate my benchmark runs.
The more realistic benchmarks I can add to those scripts, the better.
I think both you and I need to upgrade to the new sysbench that uses Lua. It is easier to write custom tests via Lua. Alas I have a lot invested in my current scripts that run the tests and extract performance data. I imagine you are in the same state.
I actually started with the Lua version a few years ago but it really wasn't stable for me, so I went back to the 0.4 version. And yep, I invested a fair amount in my environment, the biggest investment is in the automation scripts and so it's not so hard to add another benchmark to the scripts.
Anyways time to go home, have a nice weekend.