Tuesday, September 28, 2021

Memory Management in RonDB

Most of the memory allocated in RonDB is handled by the global memory manager. Exceptions are architecture objects and some fixed-size data structures. In this blog we will focus on the parts handled by the global memory manager.

In the global memory manager we have 13 different memory regions, listed below:



- DataMemory

- DiskPageBufferCache

- RedoBuffer

- UndoBuffer

- JobBuffer

- SendBuffers

- BackupSchemaMemory

- TransactionMemory

- ReplicationMemory

- SchemaMemory

- SchemaTransactionMemory

- QueryMemory

- DiskOperationRecords

One could divide these regions into a set of qualities. Some regions are fixed in size. Some regions are critical and cannot handle a failure to allocate memory. Some regions have no natural upper limit and are thus unlimited in size. There is also a set of regions that are flexible in size and work together to achieve the best use of memory. We can also divide regions based on whether the memory is short-term or long-term. Each region can belong to multiple categories.

To handle these qualities we assign a priority to each memory region. This priority can be affected by the amount of memory that the region has already allocated.
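As a rough illustration, the per-region bookkeeping could be imagined along these lines. This is a minimal C++ sketch with invented names, not the actual RonDB data structures:

#include <cstdint>

enum class RegionPriority { Low, Medium, High, Critical };

struct MemoryRegion {
  const char*    name;
  std::uint64_t  reserved_bytes;    // always available to this region
  std::uint64_t  max_bytes;         // hard upper limit (0 = unlimited)
  std::uint64_t  soft_limit_bytes;  // above this the priority is lowered
  std::uint64_t  allocated_bytes;   // currently allocated
  RegionPriority base_priority;

  // The priority used when asking the global memory manager for more memory.
  RegionPriority effective_priority() const {
    if (soft_limit_bytes != 0 && allocated_bytes > soft_limit_bytes &&
        base_priority != RegionPriority::Critical)
      return RegionPriority::Low;   // priority drops once the region has grown too big
    return base_priority;
  }
};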

Fixed regions have a fixed size. This is used for database objects, the Redo log buffer, the Undo log buffer, the DataMemory and the DiskPageBufferCache (the page cache for disk pages). There is code to ensure that we queue up when those resources are no longer available. DataMemory is a bit special and is described separately below.

Critical regions are regions where a failure to allocate memory would cause a crash. This relates to the job buffer, which is used for internal messages inside a node, and to the send buffers, which are used for messages to other nodes. DataMemory is a critical region during recovery: if we fail to allocate memory for database objects during recovery we would not be able to recover the database. Thus DataMemory is a critical region in the startup phase, but not during normal operation. DiskOperationRecords are also a critical resource, since without them we cannot maintain the disk data columns. Finally, we also treat BackupSchemaMemory as critical, since not being able to perform a backup would make it very hard to manage RonDB.

Unlimited regions have no natural upper limit; as long as memory is available at the right priority level, the memory region can continue to grow. The regions in this category are BackupSchemaMemory, QueryMemory and SchemaTransactionMemory. QueryMemory is memory used to handle complex SQL queries such as large join queries. SchemaTransactionMemory can grow indefinitely, but the metadata operations try to avoid growing too big.

Flexible regions are regions that can grow indefinitely but that have to set limits on their own growth to ensure that other flexible regions are also allowed to grow. Thus one flexible resource isn't allowed to grab all the shared memory resources. There are limits to how much memory a region can grab before its priority is significantly lowered.

The flexible regions are TransactionMemory, ReplicationMemory, SchemaMemory, QueryMemory, SchemaTransactionMemory, SendBuffers, BackupSchemaMemory and DiskOperationRecords.

Finally we have short-term versus long-term memory regions. A short-term memory allocation is of smaller significance compared to a long-term one. In particular this relates to SchemaMemory. SchemaMemory contains metadata about tables, indexes, columns, triggers, foreign keys and so forth. This memory, once allocated, will stay for a very long time. Thus if we allow it to grow too far into the shared memory we will not have space left to handle large transactions that require TransactionMemory.

Each region has a reserved space, a maximum space and a priority. In some cases a region can also have a limit where its priority is lowered.

4% of the shared global memory is only accessible to the highest priority regions plus half of the reserved space for job buffers and communication buffers.

10% of the shared global memory is only available to high-priority requesters. The remainder of the shared global memory is accessible to all memory regions that are allowed to allocate from the shared global memory.
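A hedged sketch of how such an admission check could look, reusing the RegionPriority enum from the sketch above; the function and field names, and the exact rules, are invented for illustration:

struct SharedGlobalPool {
  std::uint64_t total_bytes;
  std::uint64_t used_bytes;

  // Can a requester at the given priority take `request` bytes from the pool?
  bool may_allocate(std::uint64_t request, RegionPriority prio) const {
    const std::uint64_t free_bytes = total_bytes - used_bytes;
    if (request > free_bytes) return false;
    const std::uint64_t free_after = free_bytes - request;
    // The last 4% is kept for the highest priority requesters.
    if (prio < RegionPriority::Critical && free_after < total_bytes / 25)
      return false;
    // The last 10% is reserved for high-priority requesters.
    if (prio < RegionPriority::High && free_after < total_bytes / 10)
      return false;
    return true;
  }
};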

The actual limits might change over time as we learn more about how to adapt the memory allocations.

Most regions also have access to the shared global memory. A region will first use its reserved memory, and if there is shared global memory available it can allocate from that as well.

The most important regions are DataMemory and DiskPageBufferMemory. Any row stored in memory and all indexes in RonDB are stored in the DataMemory. The DiskPageBufferMemory contains the page cache for data stored on disk. To ensure that we can always handle recovery, DataMemory is fixed in size, and since recovery can sometimes grow the data size a bit, we don't allow the DataMemory to be filled beyond 95% in normal operation. In recovery it can use the full DataMemory size. Those extra 5% of memory resources are also reserved for critical operations such as growing the cluster with more nodes and reorganising the data inside RonDB. The DiskPageBufferCache is fixed in size; operations towards the disk are queued by using DiskOperationRecords.
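The 95% rule could be sketched as follows, with illustrative names; the real check in RonDB is more involved:

#include <cstdint>

struct DataMemoryLimit {
  std::uint64_t total_bytes;
  std::uint64_t used_bytes;

  bool may_allocate(std::uint64_t request, bool critical_operation) const {
    // Recovery, adding nodes and reorganising data may use the full size;
    // normal operation stops at 95% of DataMemory.
    const std::uint64_t limit =
        critical_operation ? total_bytes : total_bytes / 100 * 95;
    return used_bytes + request <= limit;
  }
};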

The critical regions have higher priority to get memory compared to the rest of the regions. These are the job buffers used for sending messages between modules inside a data node, the send buffers used for sending messages between nodes in the cluster, the metadata required for handling backup operations and finally the operation records to access disk data.

These regions will be able to allocate memory even when all other regions fail to allocate memory. Failure to access memory for these regions would lead to failure of the data node or failure to back up the data, which are not acceptable events in a DBMS.

We have 2 more regions that are fixed in size, the Redo log buffer and the Undo log buffer (the Undo log is only used for operations on disk pages). These allocate memory at startup and use that memory; there is some functionality to handle overload on these buffers by queueing operations when the buffers are full.
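The queueing idea can be illustrated with a small sketch, assuming a fixed-size buffer and a wait queue of operations; this is not the actual RonDB code:

#include <cstdint>
#include <queue>

struct LogRequest { std::uint32_t operation_id; std::uint32_t bytes; };

class LogBuffer {
 public:
  explicit LogBuffer(std::uint32_t size) : free_bytes_(size) {}

  // Either claim space immediately or queue the request for later.
  bool append_or_queue(const LogRequest& req) {
    if (req.bytes <= free_bytes_) {
      free_bytes_ -= req.bytes;
      return true;                 // written to the buffer
    }
    waiters_.push(req);            // resumed when release() frees space
    return false;
  }

  void release(std::uint32_t bytes) {
    free_bytes_ += bytes;
    while (!waiters_.empty() && waiters_.front().bytes <= free_bytes_) {
      free_bytes_ -= waiters_.front().bytes;
      waiters_.pop();              // the queued operation can now continue
    }
  }

 private:
  std::uint32_t free_bytes_;
  std::queue<LogRequest> waiters_;
};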

We will go through the remaining 4 regions in detail.

The first one is TransactionMemory. This memory region is used for all sorts of operations such as transaction records, scan records, key operation records and many more records used to handle the queries issued towards RonDB.

The TransactionMemory region has a reserved space, but it can grow up to 50% of the shared global memory beyond that. It can even grow beyond that, but in that case it only has access to the lowest priority region of the shared global memory. Failure to allocate memory in this region leads to aborted transactions.

The second region in this category is SchemaMemory. This region contains a lot of metadata objects representing tables, fragments, fragment replicas, columns and triggers. These are long-term objects that will stay around for a long time. Thus we want this region to be flexible in size, but we don't want it to grow so much that it diminishes the possibility to execute queries towards RonDB. Thus we calculate a reserved part and allow this region to grow into at most 20% of the shared memory region in addition to its reserved part. This region cannot access the higher priority memory regions of the shared global memory.

Failure to allocate SchemaMemory causes meta data operations to be aborted.

The next region in this category is ReplicationMemory. These are memory structures used to represent replication towards other clusters supporting Global Replication. They can also be used to replicate changes from RonDB to other systems such as ElasticSearch. The memory in this region is of a temporary nature, with memory buffers used to store the changes that are being replicated. The metadata of the replication is stored in the SchemaMemory region.

This region has a reserved space, but it can also grow to use up to 30% of the shared global memory. After that it will only have access to the lower priority regions of the shared global memory.

Failure to allocate memory in this region leads to failed replication. Thus replication has to be set up again. This is a fairly critical error, but it is something that can be handled.

The final region in this category is QueryMemory. This memory has no reserved space; it can use the lower priority regions of the shared global memory. This memory is used to handle complex SQL queries. Failure to allocate memory in this region will lead to complex queries being aborted.
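To summarise the four regions above, their growth rules can be condensed into a small, purely illustrative table:

struct FlexibleRegionRule {
  const char* name;
  bool        has_reserved_space;
  int         shared_percent;   // share of shared global memory quoted above (0 = none quoted)
};

static const FlexibleRegionRule kFlexibleRegionRules[] = {
  {"TransactionMemory", true,  50},  // beyond 50% only the lowest priority part is usable
  {"SchemaMemory",      true,  20},  // no access to the higher priority shared memory
  {"ReplicationMemory", true,  30},  // after 30% only the lower priority regions
  {"QueryMemory",       false,  0},  // no reserved space, uses the lower priority regions
};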

This blog presents the memory management architecture in RonDB that is currently in a branch called schema_mem_21102. This branch is intended for RonDB 21.10.2, but could also be postponed to RonDB 22.04. The main difference in RonDB 21.04 is that the SchemaMemory and ReplicationMemory are fixed in size and cannot use the shared global memory. The BackupSchemaMemory is also introduced in this branch; it was previously part of the TransactionMemory.

In the next blog on this topic I will discuss how one configures the automatic memory management in RonDB.

Friday, September 24, 2021

Automatic Memory Management in RonDB

RonDB has now grown up to the same level of memory management as you find in expensive commercial DBMSs like Oracle, IBM DB2 and Microsoft SQL Server.

Today I made the last development steps in this large project. This project started with a prototype effort by Jonas Oreland already in 2013 after being discussed for a long time before that. After he left for Google the project was taken over by Mauritz Sundell that implemented the first steps for operational records in the transaction manager.

Last year I added the rest of the operational records in NDB. Today I completed the programming of the final step in RonDB. This last step meant moving around 30 more internal data structures towards using the global memory manager. These memory structures are used to represent meta data about tables, fragments, fragment replicas, triggers and global replication objects.

One interesting part of this work is a malloc-like implementation that interacts with the record-level data structures that already exist in RonDB to handle linked lists, hash tables and so forth for internal data structures.
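A minimal sketch of the idea, assuming a page-based arena that carves variable-sized metadata objects out of pages obtained from a memory manager; names and layout are invented for the example, and objects are assumed to be smaller than one page:

#include <cstddef>
#include <cstdint>
#include <vector>

class MetaDataAllocator {
 public:
  explicit MetaDataAllocator(std::size_t page_size = 32768)
      : page_size_(page_size), current_(nullptr), remaining_(0) {}

  // Allocate a metadata object of `bytes` bytes, 8-byte aligned.
  void* alloc(std::size_t bytes) {
    bytes = (bytes + 7) & ~std::size_t(7);
    if (bytes > remaining_) {
      // In RonDB the page would come from the global memory manager;
      // here we simply grab it from the system allocator.
      pages_.push_back(new std::uint8_t[page_size_]);
      current_ = pages_.back();
      remaining_ = page_size_;
    }
    void* p = current_;
    current_ += bytes;
    remaining_ -= bytes;
    return p;
  }

  ~MetaDataAllocator() {
    for (auto* p : pages_) delete[] p;
  }

 private:
  std::size_t page_size_;
  std::uint8_t* current_;
  std::size_t remaining_;
  std::vector<std::uint8_t*> pages_;
};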

So after more than 5 years it feels like a major step forward in the development of RonDB.

What does this mean for a user of RonDB? It means that the user won't have to bother much with memory management configuration. If RonDB is started in a cloud VM, it will simply use all memory in the VM and ensure that the memory is handled as a global resource that can be used by all parts of RonDB. This feature already exists in RonDB 21.04. What this new step means is that the memory management is even more flexible: there is no need to allocate more memory than needed for metadata objects (and vice versa, if more memory is needed it is likely to be accessible).
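As an illustration of the starting point for this kind of automatic memory management, a process can discover the total memory of the VM it runs in roughly like this on Linux; RonDB's own probing logic is not shown here:

#include <unistd.h>
#include <cstdint>

std::uint64_t total_physical_memory_bytes() {
  const long pages     = sysconf(_SC_PHYS_PAGES);
  const long page_size = sysconf(_SC_PAGE_SIZE);
  if (pages <= 0 || page_size <= 0) return 0;
  return static_cast<std::uint64_t>(pages) *
         static_cast<std::uint64_t>(page_size);
}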

Thus memory can be used for other purposes as well. The end result is that more memory is made available in all parts of RonDB, both to store data and to perform more parallel transactions and more query handling.

Another important step is that this step opens up for many new developments to handle larger objects in various parts of RonDB.

In later blogs we will describe how the memory management in RonDB works. This new development will either appear in RonDB 21.10 or in RonDB 22.04.


Friday, August 13, 2021

How to achieve AlwaysOn

When discussing how to achieve High Availability most DBMSs focus on handling it via replication. Most of the attention has thus been directed at various replication algorithms.

However truly achieving AlwaysOn availability requires more than just a clever replication algorithm.

RonDB is based on NDB Cluster. NDB has been able to prove in practice that it can deliver capabilities that make it possible to build systems with less than 30 seconds of downtime per year.

So what is required to achieve this type of availability?

  1. Replication
  2. Instant Failover
  3. Global Replication
  4. Failfast Software Architecture
  5. Modular Software Architecture
  6. Advanced Crash Analysis
  7. Managed software

Thus a clever replication algorithm is only 1 of 7 very important parts needed to achieve the highest possible level of availability. Managed software is one of the additions that RonDB makes to NDB Cluster. It won't be discussed in this blog.

Instant Failover means that the cluster must handle a failover immediately. This is the reason why RonDB implements a shared-nothing DBMS architecture. Other HA DBMSs such as Oracle, MySQL InnoDB Cluster and Galera Cluster rely on replaying the logs at failover to catch up. Before this catch-up has happened the failover hasn't completed. In RonDB every updating transaction updates both data and logs as part of the transaction itself, thus at failover we only need to update the distribution information.

In a DBMS, updating information about node state is required to be a transaction itself. This transaction takes less than one millisecond to perform in a cluster. Thus in RonDB the time it takes to fail over is dependent on the time it takes to discover that the node has failed. In most cases the reason for the failure is a software failure, and this usually leads to dropped network connections which are discovered within microseconds. Thus most failovers are handled within milliseconds and the cluster is repaired and ready to handle all transactions again.

The hardest failures to discover are the silent failures; these can happen e.g. when the power to a server is broken. In this case the time it takes is dependent on the time configured for heartbeat messages. How low this time can be set depends on the operating system and how much one can depend on it sending a message in a highly loaded system. Usually this time is a few seconds.

But even with replication and instant failover we still have to handle failures caused by things like power outages, thunderstorms and many more problems that cause an entire cluster to fail. A DBMS cluster is usually located within a confined space to achieve low latency on database transactions.

To handle this we need to handle failover from one RonDB cluster to another RonDB cluster. This is achieved in RonDB by using asynchronous replication from one cluster to another. This second RonDB cluster needs to be physically separated from the first cluster to ensure higher independence of failures.

Actually, having global replication implemented also means that one can handle complex software changes, such as a massive rewrite of the data model in your application.

Ok, are we done now? Is this sufficient to get a DBMS cluster which is AlwaysOn?

Nope, more is needed. After implementing these features it is also required to be able to quickly find the bugs and be able to support your customers when they hit issues.

The nice thing with this architecture is that a software failure will most of the time not cause anything more than a few aborted transactions which the application layer should be able to handle.

However in order to build an AlwaysOn architecture one has to be able to quickly get rid of bugs as well.

When NDB Cluster joined MySQL two different software architectures met each other. MySQL was a standalone DBMS; this meant that when it failed the database was no longer available. Thus MySQL strove to avoid crashes, since a crash meant that the customer could no longer access its data.

With NDB Cluster the idea was that there would always be another node available to take over if we fail. Thus NDB, and thus also RonDB, implements a Failfast Software Architecture. In RonDB this is implemented using a macro called ndbrequire, which is similar to how most software uses assert. However, ndbrequire stays in the code also when we run production code.

Thus every transaction that is performed in RonDB causes thousands of error checks to be evaluated. If one of those ndbrequire's returns false we will immediately fail the node. Thus RonDB will never proceed when we have an indication that we have reached a disallowed state. This ensures that the likelihood of a software failure leading to incorrect data is minimised.
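A simplified sketch of such a failfast check, written here as a hypothetical my_ndbrequire macro to make clear it is not the real definition:

#include <cstdio>
#include <cstdlib>

#define my_ndbrequire(cond)                                                  \
  do {                                                                       \
    if (!(cond)) {                                                           \
      std::fprintf(stderr, "Failfast: %s failed at %s:%d\n",                 \
                   #cond, __FILE__, __LINE__);                               \
      std::abort();  /* in RonDB a failed check stops the node and dumps trace data */ \
    }                                                                        \
  } while (0)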

However, crashing only solves the problem in the short term. In order to solve the problem for real we also have to fix the bug. To be able to fix bugs in a complex DBMS requires a modular software architecture. The RonDB software architecture is based on experiences from AXE, a switch developed in the 1970s at Ericsson.

The predecessor of AXE at Ericsson was AKE, the first electronic switch developed at Ericsson. It was built as one big piece of code without clear boundaries between the code parts. When this software reached sizes of millions of lines of code it became very hard to maintain.

Thus when AXE was developed in a joint project between Ericsson and Telia (a Swedish telco operator), the engineers needed to find a new software architecture that was more modular.

The engineers had a lot of experience designing hardware as well. In hardware the only path to communicate between two integrated circuits is by using signals on an electrical wire. Since this made it possible to design complex hardware with a small number of failures, the engineers reasoned that this architecture should work as a software architecture as well.

Thus the AXE software architecture used blocks instead of integrated circuits and signals instead of electrical signals. In modern software language these would most likely have been called modules and messages.

A block owns its own data and cannot peek at other blocks' data; the only way to communicate between blocks is by using signals that send messages from one block to another.

RonDB is designed like this, with 23 blocks that implement different parts of the RonDB software architecture. The method to communicate between blocks is mainly through signals. These blocks are implemented as large C++ classes.
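A toy C++ illustration of the block-and-signal idea; class and field names are invented for the example:

#include <cstdint>

struct Signal {
  std::uint32_t signal_id;      // lets a signal be traced across threads and nodes
  std::uint32_t sender_block;   // block number of the sender
  std::uint32_t data[25];       // small fixed payload, like a message
};

class Block {
 public:
  explicit Block(std::uint32_t block_no) : block_no_(block_no) {}
  virtual ~Block() = default;
  // The only way into a block: a signal is executed against it.
  virtual void execute(const Signal& signal) = 0;
  std::uint32_t number() const { return block_no_; }
 private:
  std::uint32_t block_no_;
  // All state the block owns lives here, hidden from other blocks.
};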

This software architecture leads to a modular architecture that makes it easy to find bugs. If a state is wrong in a block it can either be caused by code in the block, or by a signal sent to the block.

In RonDB signals can be sent between blocks in the same thread, to blocks in another thread in the same node and they can be sent to a thread in another node in the cluster.

In order to be able to find the problem in the software we want access to a number of things. The most important one is the code path that led to the crash.

In order to find this the RonDB software contains a macro called jam (Jump Address Memory). This means that we can track a few thousand of the last jumps before the crash. The code is filled with these jam macros. This is obviously an extra overhead that makes RonDB a bit slower, but delivering the best availability is even more important than being fast.
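A simplified sketch of a jam-style trace, assuming a per-thread ring buffer and a hypothetical my_jam macro (not the real definition): every invocation records the current source line, so the last few thousand jumps before a crash can be read back out of the buffer.

#include <cstdint>

constexpr std::uint32_t kJamEntries = 4096;  // must be a power of two

struct JamBuffer {
  std::uint32_t lines[kJamEntries];
  std::uint32_t index = 0;
};

inline thread_local JamBuffer g_jam_buffer;

#define my_jam()                                                        \
  do {                                                                  \
    g_jam_buffer.lines[g_jam_buffer.index] = __LINE__;                  \
    g_jam_buffer.index = (g_jam_buffer.index + 1) & (kJamEntries - 1);  \
  } while (0)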

Just watch Formula 1: the winner of Formula 1 over a season will never be a car that fails every now and then; the car must be both fast and reliable. Thus in RonDB reliability has priority over speed, even though we mainly talk about the performance of RonDB.

Now this isn't enough; the jam only tracks jumps in the software, but it doesn't provide any information about which signals led to the crash. This is also important. In RonDB each thread will track a few thousand of the last signals executed by the thread before the crash. Each signal carries a signal id that makes it possible to follow signals being sent between threads within RonDB.

Let's take an example of how useful this information is. Lately we had an issue in the NDB forum where a user complained that he hadn't been able to produce any backups for the last couple of months, since one of the nodes in the cluster failed each time a backup was taken.

In the forum the point in the code was described in the error log together with a stack trace of the code we executed while crashing. However, this information wasn't sufficient to find the software bug.

I asked for the trace information that includes both the jam's and the signal logs of all the threads in the crashed node.

Using this information one could quickly discover how the fault occurred. It would only happen in high-load situations and required very tricky races to occur, thus the failure wasn't seen by most users. However, with the trace information it was fairly straightforward to find what caused the issue, and based on this information a workaround to the problem was found as well as a fix of the software bug. The user could again be comfortable being able to produce backups.

Thursday, August 12, 2021

RonDB and Docker Compose

After publishing the Docker container for RonDB I got a suggestion to simplify it further by using Docker Compose. After a quick learning using Google I came up with a Docker Compose configuration file that will start the entire RonDB cluster and stop it using a single command.

First of all I had to consider networking. I decided that using an external network was the best solution. This makes it easy to launch an application that uses RonDB as a back-end database. Thus I presume that an external network has been created with the following command before using Docker Compose to start RonDB:

docker network create mynet --subnet=192.168.0.0/16

The docker-compose.yml is available on GitHub at https://github.com/logicalclocks/rondb-docker, in the file rondb/21.04/docker-compose.yml for RonDB 21.04 and in rondb/21.10/docker-compose.yml for RonDB 21.10.

To start a RonDB cluster, run this command from a directory where you have placed docker-compose.yml:

docker-compose up -d

After about 1 minute the cluster should be up and running and you can access it using:

docker exec -it compose_test_my1_1 mysql -uroot -p

password: password

The MySQL Server is available at port 3306 on IP 192.168.0.10 using the mynet subnet.

When you want to stop the RonDB cluster use the command:

docker-compose stop

Docker Compose creates normal Docker containers that can be viewed using docker ps and docker logs commands as usual.