Tuesday, May 25, 2021

HA vs AlwaysOn

In the 1990s I spent a few years studying the requirements on databases used in 3G telecom networks. The requirements centered on three keywords: Latency, Throughput and Availability. In this blog post I will focus on Availability.


If a telecom database is down, no phone calls can be made, internet connections stop working and the apps on your smartphone cease to work. An outage thus more or less immediately impacts everyone's life.


The same requirements are now also starting to appear in AI applications such as online fraud detection, self-driving cars and smartphone apps.


Availability is measured in percent, and for telecom databases the requirement is to reach 99.9999% availability. This is often called Class 6 availability, where 6 is the number of nines in the availability percentage.


Almost every database today promises High Availability, which has led to inflation in the term. Most databases that promise HA today reach Class 4 availability, that is 99.99% availability. Class 4 availability means that your system is down and unavailable for transactions for roughly 50 minutes every year. For many applications this is sufficient, but imagine relying on such a system for self-driving cars, phone networks or modern AI applications that make online decisions for e.g. hospitals. Thus I sometimes use the term AlwaysOn to refer to higher availability, reaching Class 5, Class 6 and beyond.
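
To make the classes concrete, the yearly downtime budget is (1 - availability) x 365 x 24 x 60 minutes. For Class 4 (99.99%) that works out to about 53 minutes per year, for Class 5 (99.999%) about 5 minutes per year, and for Class 6 (99.9999%) only about 32 seconds per year.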


To analyse Availability we need to look at the reasons for unavailability. There are the obvious ones like hardware and software failures. The main solution for those is replication. However, as I will show below, most replication solutions of today incur downtime on every failure. It is therefore necessary to analyse the replication solution in more detail to see what happens at a failure. Does the database deliver the same latency and throughput immediately after a failure, as required by the application?


There are also a lot of planned outages in many HA systems. We can have downtime from the moment a node goes down until the cluster has been reorganised, and we can have downtime at software upgrades, at hardware upgrades and at schema reorganisations. Major upgrades involving lots of systems may cause significant downtime.


Thus, to be called AlwaysOn, a database must have zero-downtime solutions for at least the following events:

  1. Hardware Failure

  2. Software Failure

  3. Software Upgrade

  4. Hardware Upgrade

  5. Schema Reorganisation

  6. Online Scaling

  7. Major Upgrades

  8. Major disasters


Most of these problems require replication as the solution. However, the replication must exist at two levels. There must be local replication that makes it possible to sustain the latency requirements of your application even in the presence of failures. In addition, replication is needed at a global level to make it possible to survive major upgrades and major disasters.


Finally, handling schema reorganisations and scaling the system without downtime requires careful design of the algorithms in your database.


Today replication is used in almost every database. However the replication scheme in most databases leads to significant downtime for AlwaysOn applications at each failure and upgrade.


To handle a SW/HW failure and SW/HW upgrade without any downtime requires that the replicas are ready to start serving requests within one millisecond after discovering the configuration change required by the failure/upgrade.


Most replication solutions today rely on eventual consistency; thus they are by design unavailable even in normal operation, and even more so when the primary replica fails. Many other replication solutions only ship the log records to the backup replica, so at failure the backup replica must replay the log before it can serve all requests. Again this means downtime for the application, which makes it impossible to meet the requirements of AlwaysOn applications.


To reach AlwaysOn availability it is necessary to actually perform all write operations on the backup replicas as well as on the primary replica. This ensures that the data is immediately available in the presence of failures or other configuration changes.
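
A minimal sketch of the idea (purely conceptual, not the actual NDB replication protocol; all names are invented): the write is applied on the primary and on every backup replica before the transaction is reported as committed, so no replica needs log replay in order to take over.

#include <string>
#include <vector>

// Conceptual replica: applying a write updates its data structures directly,
// so it can serve requests immediately if it has to take over as primary.
struct Replica {
  bool applyWrite(const std::string &key, const std::string &value) {
    // ... update in-memory indexes and row storage here ...
    return true;  // assume success in this sketch
  }
};

// Commit only after the write has been applied on the primary and on all
// backup replicas; a failure on any replica would trigger abort/error handling.
bool replicatedWrite(Replica &primary, std::vector<Replica> &backups,
                     const std::string &key, const std::string &value) {
  if (!primary.applyWrite(key, value)) {
    return false;
  }
  for (Replica &backup : backups) {
    if (!backup.applyWrite(key, value)) {
      return false;
    }
  }
  return true;
}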


This has consequences for the latency; many databases take shortcuts when performing writes and don't ensure that the backup replicas are immediately available at failures.


Thus, if you want your application to reach AlwaysOn availability, you need to ensure that your database software delivers a solution that doesn't involve log replay at failure (which immediately rules out all shared-disk solutions and eventual consistency solutions).


I drew these conclusions as part of my Ph.D. research in the 1990s. All those requirements were fed into the development of NDB Cluster. NDB Cluster became the base for MySQL Cluster (nowadays often referred to as MySQL NDB Cluster), and today Logical Clocks AB develops RonDB, a distribution of NDB Cluster.


To reach AlwaysOn it is necessary that your database software can deliver solutions to all 8 problems listed above without any downtime (not even 10 seconds). In practice the limit on availability is the time it takes to discover a failure. For the most part discovery is immediate, since most failures are SW failures, but some HW failures are detected through a heartbeat mechanism, which can lead to a few seconds of downtime.


Thus, to reach AlwaysOn availability, your database software must handle failures with zero downtime, it must implement the algorithms for online schema management and online scaling of your database, and it must support global replication to also handle major upgrades and major disasters where entire regions are affected. Finally, you need an organisation that operates your systems in a reliable manner.


RonDB is designed to handle all these requirements on the database software, and this has been proven in production at sites reaching Class 6 availability for more than 15 years. Logical Clocks is building the competence to operate RonDB for customers at these availability levels.

Wednesday, May 19, 2021

5.5M Key Lookups per second on 16 VCPU VMs

As introduced in a previous blog post, RonDB makes it easy to execute benchmarks against RonDB using the Sysbench benchmark.


In this blog post I will present some results from a RonDB cluster with 2 data nodes, each using an r5.4xlarge VM in AWS with 16 VCPUs and 128 GB of memory. The Sysbench test uses SQL to access RonDB.


In this particular test case we wanted to test the key lookup performance using SQL. Key lookup performance is essential for the RonDB use case as an online Feature Store in Hopsworks.


In this case we use the following settings in the Sysbench configuration file (see the RonDB documentation for more details on how to set up these tests):


SB_USE_IN_STATEMENT="100"

SYSBENCH_TEST="oltp_ro"

SB_SIMPLE_RANGES="0"

SB_ORDER_RANGES="0"

SB_SUM_RANGES="0"

SB_DISTINCT_RANGES="0"


These settings mean that each transaction has 10 SQL queries, and each of those queries fetches 100 rows using primary key lookups. Thus each transaction performs 1000 key lookups.
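
As an illustration, each of those queries is roughly of the form SELECT ... FROM sbtest WHERE id IN (k1, ..., k100), where the 100 values are primary keys (the exact table and column names depend on the Sysbench version). Thus a single round trip from the application to the MySQL Server resolves 100 primary key lookups in RonDB.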


We will look at two things here: how many key lookups can be delivered per second, and the latency of each of those batched SELECT statements.


In the image above we see how the number of key value lookups grows extremely quickly; even with only 2 threads per MySQL Server we reach beyond 1M key lookups per second. The total throughput at 32 threads is 5.56M key lookups per second.


Even more interesting is the response time of each of those SELECT statements. Remember that each SELECT fetches 100 rows using primary key lookups. Using 4 threads per MySQL Server we respond within 1.0 millisecond at a throughput of 2.17M key lookups per second. At 12 threads we deliver around 4.5M key lookups per second with a latency of around 1.5 milliseconds per SELECT query. At 16 threads we deliver 5.27M key lookups per second at a response time of 2.28 milliseconds.


Thus, as can be seen from these numbers, RonDB provides astonishing key lookup performance even when using SQL. Using the C++, Java or JavaScript key-value APIs we obviously expect to perform even better, and in particular to use much less CPU on the client side.




Monday, May 10, 2021

Research on Thread Pipelines using RonDB

 

Introduction

In my previous two blog posts on this topic I first introduced the concept of automatic thread configuration and the thread model we use in RonDB. After receiving some questions on the topic I then dived a bit deeper into explaining the RonDB thread model and its thread pipeline, and compared it to a similar concept called batch pipelines.


Since then I have read up a bit more on the research in this area, with a focus on implementations in other key-value stores. Some researchers argue that a model where one handles the request immediately is superior to a model using a thread pipeline.

RonDB Software Architecture

RonDB has its roots in MySQL NDB Cluster, which was developed at Ericsson in the 1990s. The execution engine in NDB has always used asynchronous programming. NDB in turn has its roots in AXE, a telecom switch that Ericsson developed in the 1970s and actually still develops. The development model in AXE used two concepts called blocks and signals. This architecture was inherited from using a HW architecture as the base for the software architecture.


The idea is that the blocks are independent units that only communicate with other blocks using signals. This is the model that all hardware is built with, due to physical constraints. The reasoning in AXE was that this model is beneficial also for developing large software systems. Blocks can be developed independently of each other as long as they handle the signalling protocols correctly.
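
To make the block-and-signal idea concrete, here is a minimal sketch of the programming model (my own illustration with invented names, not the actual NDB/RonDB code):

#include <cstdint>
#include <deque>
#include <vector>

// A signal is an asynchronous message addressed to a block.
struct Signal {
  uint32_t receiverBlock;       // which block should execute this signal
  uint32_t signalNumber;        // identifies the message in the protocol
  std::vector<uint32_t> data;   // signal payload
};

// A block only reacts to signals; it never calls into another block directly.
class Block {
 public:
  explicit Block(uint32_t id) : m_id(id) {}
  virtual ~Block() = default;
  virtual void executeSignal(const Signal &sig) = 0;  // block-specific protocol logic
  uint32_t id() const { return m_id; }
 private:
  uint32_t m_id;
};

// A scheduler with a job buffer: sending a signal just queues it, and the
// scheduler later executes it in the receiving block. This is what makes
// the programming model asynchronous.
class Scheduler {
 public:
  void addBlock(Block *block) { m_blocks.push_back(block); }
  void sendSignal(const Signal &sig) { m_jobBuffer.push_back(sig); }
  void run() {
    while (!m_jobBuffer.empty()) {
      Signal sig = m_jobBuffer.front();
      m_jobBuffer.pop_front();
      for (Block *block : m_blocks) {
        if (block->id() == sig.receiverBlock) {
          block->executeSignal(sig);  // may in turn send new signals
        }
      }
    }
  }
 private:
  std::deque<Signal> m_jobBuffer;
  std::vector<Block *> m_blocks;
};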


In the 1990s CPUs were single-threaded; sometimes you had 2 CPUs on the motherboard, so a single-threaded model was good enough. When moving to a multi-threaded environment it was necessary to find a way to break up the single-threaded model into a multi-threaded one. In NDB this used two approaches. The first approach was to simply divide the blocks into execution modules. These execution modules are true to the block concept: they only communicate with each other through asynchronous signals. Blocks within an execution module are allowed to communicate with synchronous signals and can, for optimisation purposes, use direct method calls.

RonDB execution modules


In NDB the execution modules are the following:

  1. Transaction Coordinator (tc)

  2. Local Data Manager (ldm)

  3. Main execution module (main)

  4. Replication module (rep)

  5. Network receive (recv)

  6. Network send (send)


It is now natural to map these execution modules onto threads: each execution module can potentially be an independent thread.


However to scale RonDB to larger systems it is also necessary to split a few of those execution modules. The tc module can easily be scaled by allowing each tc module to handle its own set of transactions. This makes for a simple round-robin model to select the tc module to handle a transaction.


The ldm module can be split by partitioning the data and let each ldm module handle a subset of the data. This makes for an easy way to route the data requests to the correct ldm module.


The network receive and the network send modules can be split by partitioning on sockets. Thus each receive socket is handled by one recv module and each send socket is handled by one send module. Each recv and send module can handle multiple sockets.
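
As a rough illustration of the routing described above (invented function names, not the actual RonDB code):

#include <cstdint>
#include <functional>
#include <string>

// tc modules: any tc can coordinate any transaction, so a simple
// round-robin choice spreads transactions evenly over the tc modules.
uint32_t pickTcModule(uint32_t &nextTc, uint32_t numTcModules) {
  uint32_t tc = nextTc;
  nextTc = (nextTc + 1) % numTcModules;
  return tc;
}

// ldm modules: the data is partitioned, so the owning ldm module is a
// function of the partition key of the row being accessed.
uint32_t pickLdmModule(const std::string &partitionKey, uint32_t numLdmModules) {
  return static_cast<uint32_t>(std::hash<std::string>{}(partitionKey)) % numLdmModules;
}

// recv/send modules: each socket is statically assigned to one module,
// and each module can handle several sockets.
uint32_t pickRecvModule(uint32_t socketId, uint32_t numRecvModules) {
  return socketId % numRecvModules;
}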


The send modules actually use a model of their own, where other modules can either perform the send themselves or offload it to a send module. This is not the topic of this blog post, so I won't go into more detail about send modules.


One problem with this design is that partitioning the data makes perfect sense for key-value lookups that access data using a primary key. However, RonDB is a key-value database with SQL capabilities, so it is important to also perform well for more complex queries, even complex join queries accessing a multitude of tables.


In the partitioned model a range scan that does not use the partition key has to scan all partitions. This is both beneficial and inefficient: it allows for automatic parallelisation of the range scan, but the lookup in the index tree has to be done once in each partition, which increases the cost of the range scan. The solution is to have some amount of partitioning, but to limit it.

RonDB Query threads


To alleviate this problem a new execution module, used in RonDB, was invented: the query execution module. It is in reality the same module as the ldm module, but it is only allowed to handle Committed Read queries. Thus all locked reads and all writes are still routed to the ldm module.

Experiment setup

Now to the point of this description. As can be seen above, we have a set of execution modules where each execution module can be assigned to an independent thread. Such a thread can participate in the thread pipeline previously described and shown in the image above.


The important thing here is that we are actually at liberty to place the execution modules into threads in any manner we like. This opens up the floor to use RonDB as a vehicle to research the benefits and deficiencies of thread pipelines.


The code used in these experiments is found in the hopsworks_2525 branch in the RonDB tree.


In this branch of RonDB it is possible to colocate the tc and recv execution modules. Thus each recv thread contains both a tc execution module and a recv execution module. The assignment is always done such that the recv execution module and the tc execution module are placed in the same recv thread.


This makes it possible to create a RonDB data node where we only have separate recv threads and ldm/query threads. However I took another step and even made it possible to run RonDB with only receive threads. In this case each recv thread contains a recv, tc, ldm and query execution module, and we always select the local query module to execute a query when possible.


The above model makes it possible to experiment with the various settings.


To test this I used a micro-benchmark based on variants of the Sysbench test presented here and here. The tests were executed on a Lenovo P620 workstation with an AMD Threadripper processor based on the AMD Zen 2 architecture.


To analyze the tests and to try to understand the reasons for the performance differences in the setups I used a set of perf scripts that can be found in the dbt2 benchmark tree.


The experiments I am going to report here used two different setups of RonDB. The setups do not use the automated thread configuration, but rather the ThreadConfig configuration variable, which makes it possible to set up a large variety of thread configurations and experiment with those. The autobench.conf used in these benchmarks is included at the end of the blog post; its active NDB_THREAD_CONFIG line corresponds to the LDM setup described below, while the commented-out line corresponds to the Receive setup.


In both experiments the RonDB data nodes used 4 threads that were locked to 4 CPUs on 2 CPU cores. One experiment used 4 receive threads, where each thread tries to do all processing on its own; the second experiment used 2 receive threads that handle both the receive and the tc execution modules, plus 1 ldm thread and 1 query thread.


We will call the setup with separated ldm and recv threads the LDM setup; the other setup will be called the Receive setup in this document.

Key-value lookups

The first experiment uses a variant where Sysbench only performs batched primary key lookups with 100 lookups in each SQL statement.


In this case the LDM setup is able to handle 1.34M lookups per second, while the Receive setup is able to handle 1.05M lookups per second. Thus the thread pipeline achieves almost 30% higher throughput in this experiment.


Given that both experiments are executed within RonDB using exactly the same code from the same RonDB version, it is safe to assume that the same code paths are used and that the same number of instructions is executed to perform a key-value lookup.


So what differs when looking into the CPU statistics? What is clear is that modern CPUs are very hard to analyze, given that they are extremely good at parallelising tasks.


The things that stand out are the IPC achieved, the CPU usage and the clock frequency used. The ldm thread in the LDM setup achieves an IPC of 1.40, a frequency of 3.96 GHz and 99.8% CPU usage. The query thread has an IPC of 1.30, a frequency of 3.95 GHz and a CPU usage of 99.9%. The receive threads in the LDM setup have an IPC of 1.26, a frequency of 3.886 GHz and a CPU usage of 66.4%.


The Receive setup threads have an IPC of 1.05, a frequency of 3.468 GHz and a 77.6% CPU usage.


Thus the performance difference comes from a combination of a higher IPC in the LDM setup and the fact that it actually runs at a roughly 500 MHz higher clock frequency. The total CPU usage is also slightly higher in the LDM setup. Those factors combined account for the 30% difference.


So, trying to understand those numbers, one can see that the LDM setup has a higher hit rate in the L1 data cache and a slightly higher hit rate in the L1 instruction cache. There is also a slightly higher branch prediction miss rate in the Receive setup. However, nothing revolutionary is seen here; the L2 miss rate is very similar in both experiments. The other difference is that the Receive setup sees a 10x higher rate of iTLB misses.


The only real difference between the two cases is that each thread in the Receive setup will see more code. Thus it is expected that it has to pay for that with more instruction cache misses, but interestingly this also triggers changes in the CPU frequency.


Thus our conclusion here is that if one can achieve a setup where each thread touches only a limited amount of code to do its work, the performance will be much higher.


In principle the conclusion I draw is that if we can decrease the code size we touch in a thread, this will directly improve performance.


Doing some number crunching, we see that executing one key-value lookup in the ldm thread uses about 6000-7000 x86 assembler instructions. Thus the payback of decreasing the code size is almost logarithmic, so reusing the same code gives great payback in the throughput of your program.
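
As a rough sanity check of that figure (my own back-of-the-envelope calculation, assuming the ldm and query threads split the lookups roughly evenly, since both run at essentially 100% CPU): the ldm thread retires about 1.40 IPC x 3.96 GHz ≈ 5.5 billion instructions per second, and with 1.34M lookups per second split over the two threads each handles roughly 0.67M lookups per second. That works out to on the order of 8000 instructions per lookup, in the same ballpark as the 6000-7000 instructions attributed to the lookup itself.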


To summarize, executing about 10k instructions in a single thread costs almost 30% more than splitting up the execution over 2 threads, where one thread takes care of 4k instructions and the other takes care of 6k instructions.


It isn't expected that this gain is independent of the code size. It is likely that when the code size fits well into the L1 instruction cache, this brings about a large performance enhancement. Thus, if the code size of each of the threads in the thread pipeline is too big for the L1 instruction cache, the gain of the thread pipeline is not expected to be very large. So to optimize performance one needs to keep the code size aligned with the size of the instruction cache. This means that another important factor impacting performance is the choice of compiler flags; this choice plays together with the software architecture.

Range Scans

The next test I singled out only included range scans from Sysbench. This had two effects. First, the code executed was different; in the ldm/query threads this led to an even higher IPC and an even higher CPU frequency. The IPC increased to 2.25 and the frequency increased by 2%. Second, a lot more of the processing moved to the ldm/query threads, so the receive threads were much less involved. This had the effect that the Receive setup actually had higher throughput in this case.

Outcome of tests

What we have shown with those tests is that it is beneficial from an efficiency point of view to use a thread pipeline. It increases the IPC and, surprisingly, also increases the frequency the CPU core operates at. This can increase throughput by 30%, and by even more than that in some of the tests executed.


At the same time the workload can cause the load to be unbalanced, which can leave some CPUs partly unused. This can cause up to a 30-40% loss of performance in some workloads, and no loss at all, or even a small gain, in other workloads.


Thus the outcome is that whether or not to use a thread pipeline depends on the workload. We have concluded that thread pipelines definitely provide an advantage, but also that this advantage can be wiped out by imbalances in the load on the various threads in the pipeline.

Adaptive solution

So actually when I started this exercise I expected to find out which model would work best. I wasn’t really expecting to come up with a new model as the solution. However the results made it clear that thread pipelines should be used, but that one needs to find ways of using the free CPU cycles in the LDM setup.


Given the flexibility of the query threads we have an obvious candidate. It is possible to execute some queries directly in the receive threads instead of passing them on to the ldm/query threads. The introduction of query threads makes it possible to be more flexible in allocating the execution of those queries.


So the conclusion is that a combination of the Receive setup and the LDM setup is the winning concept. However, this introduces a new technical challenge: it is necessary to implement an adaptive algorithm that decides when to execute queries locally in the receive thread and when to execute them using the ldm/query threads.
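
A small sketch of what such a decision could look like (my own illustration with invented names and thresholds, not the RonDB implementation):

#include <cstddef>

// Per-thread load statistics: CPU load and queue sizes are the inputs to
// the adaptive decision, as discussed below.
struct ThreadStats {
  double cpuLoad;     // fraction 0.0 - 1.0 over the last measurement interval
  size_t queuedJobs;  // current length of the thread's job buffer
};

// Decide whether a Committed Read should be executed locally in the receive
// thread (Receive setup behaviour) or passed on to the ldm/query threads
// (LDM setup behaviour).
bool executeLocallyInRecvThread(const ThreadStats &recvStats,
                                const ThreadStats &ldmStats,
                                bool isCommittedRead) {
  if (!isCommittedRead) {
    return false;  // locked reads and writes always go to the ldm thread
  }
  // Invented thresholds; a real implementation would adapt them continuously.
  const double kBusy = 0.90;
  const size_t kLongQueue = 32;
  const bool ldmOverloaded =
      ldmStats.cpuLoad > kBusy || ldmStats.queuedJobs > kLongQueue;
  const bool recvHasHeadroom = recvStats.cpuLoad < kBusy;
  return ldmOverloaded && recvHasHeadroom;
}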


This is in line with the development of RonDB over the last couple of years. Static solutions are rarely optimal; it is often necessary to introduce adaptive algorithms that use CPU load statistics and queue size statistics for the threads involved.


In a sense this could be viewed as a query optimizer for key-value databases. SQL queries are usually complex and require careful selection of the execution plan for a single query. In a key-value database millions of queries per second flow through the database, and by adapting the execution model to the current workload we achieve query optimisation of large flows of small, simple queries.


autobench.conf

#

# Software definition

#

USE_RONDB="no"

USE_BINARY_MYSQL_TARBALL="no"

TARBALL_DIR="/home/mikael/tarballs"

MYSQL_BIN_INSTALL_DIR="/home/mikael/mysql"

BENCHMARK_TO_RUN="sysbench"

MYSQL_VERSION="rondb-gpl-21.10.0"

DBT2_VERSION="dbt2-0.37.50.17"

SYSBENCH_VERSION="sysbench-0.4.12.17"

MYSQL_BASE="8.0"

TASKSET="numactl"

#

# Storage definition

#

DATA_DIR_BASE="/home/mikael/ndb"

#

# MySQL Server definition

#

SERVER_HOST="127.0.0.1"

SERVER_PORT="3316"

ENGINE="ndb"

MYSQL_USER="root"

#

# NDB node definitions

#

NDB_EXTRA_CONNECTIONS="32"

NDB_MULTI_CONNECTION="4"

NDB_TOTAL_MEMORY="25200M"

NDB_MGMD_NODES="127.0.0.1"

NDBD_NODES="127.0.0.1"

NDB_FRAGMENT_LOG_FILE_SIZE="1G"

NDB_NO_OF_FRAGMENT_LOG_FILES="16"

NDB_SPIN_METHOD="StaticSpinning"

NDB_MAX_NO_OF_CONCURRENT_OPERATIONS="262144"

NDB_AUTOMATIC_THREAD_CONFIG="no"

NDB_THREAD_CONFIG="ldm={count=1,cpubind=0},query={count=1,cpubind=1},tc={count=0,cpubind=2},recv={count=2,cpubind=32,33},send={count=0,cpubind=35},main={count=0,cpubind=36},io={count=1,cpubind=33}"

#NDB_THREAD_CONFIG="ldm={count=0,cpubind=0},query={count=0,cpubind=3},tc={count=0,cpubind=2},recv={count=4,cpubind=0,1,32,33},send={count=0,cpubind=35},main={count=0,cpubind=36},io={count=1,cpubind=33}"

#

# Benchmark definition

#

SYSBENCH_TEST="oltp_rw"

MAX_TIME="20000"

#SB_POINT_SELECTS="0"

SB_NUM_TABLES="1"

#SB_SUM_RANGES="0"

#SB_DISTINCT_RANGES="0"

#SB_ORDER_RANGES="0"

#SB_SIMPLE_RANGES="0"

#SB_USE_IN_STATEMENT="100"

THREAD_COUNTS_TO_RUN="48"

SERVER_BIND="0"

SERVER_CPUS="8-27,40-59"

BENCHMARK_BIND="0"

BENCHMARK_CPUS="28-31,60-63"