Tuesday, January 10, 2023

The flagship feature in the new LTS version RonDB 22.10.0

In RonDB 22.10.0 we added a major new feature: variable sized disk columns are now stored in variable sized rows instead of fixed size rows.

The history of disk data in RonDB starts already in 2004, after the NDB team at Ericsson had been acquired by MySQL AB. NDB Cluster was originally designed as an in-memory DBMS, since a disk-based DBMS couldn't meet the latency requirements of telecom applications.

Thus NDB was designed as a distributed architecture with Network Durability: a transaction is made durable by writing it into the memory of several computers in a network. Long-term durability of data is achieved by a background process that ensures the data is written to disk.

When the NDB team joined MySQL we targeted many other application categories as well, so increasing the database sizes NDB could handle became important, and we started developing support for disk-based columns. The design decisions behind this feature were accepted as a paper at VLDB 2005 in Trondheim.

The use of this feature didn't really take off in any significant manner for a few years, since both the latency and the throughput of hard drives were too far from the performance of in-memory data.

That problem has been solved by the development of SSDs, the introduction of NVMe drives, and newer versions of PCI Express (3, 4 and now 5). As an anecdote, I installed a set of NVMe drives in my workstation capable of handling millions of IOPS and delivering 66 GBytes per second of bandwidth. While installing them, however, I discovered that I had only one memory card, which meant I had 3x more bandwidth to my NVMe drives than to my memory. So in order to make use of those NVMe drives I had to install a number of memory cards to get the memory bandwidth required to keep up with them.

So with the introduction of NVMe drives the feature became genuinely useful. In fact, one of its main users is HopsFS, the distributed file system in the Hopsworks platform, which uses RonDB for metadata management. HopsFS can use disk columns in RonDB for storing small files.

The performance of disk columns is really good. This blog presents a YCSB benchmark using disk-based columns in NDB Cluster, where we achieved a bandwidth of more than 1 GByte per second of application data read and written.

The latency of NVMe drives is 100x lower than that of hard drives. Previously, the latency of hard drives was far more than 100x higher than in-memory latency for database operations; with modern NVMe drives, the latency difference between in-memory columns and disk columns is down to a factor of 2. We analysed performance and latency with the YCSB benchmark and compared disk columns to in-memory columns in this blog.

One problem with the original implementation was that disk columns were always stored in fixed size rows. In HopsFS we found ways to handle this by using multiple tables for different row sizes.

In traditional applications, and in the Feature Store, it is very common to store data in variable sized columns. To ensure the data always fits, the maximum size of a column can be 10x larger than its average size, so with fixed size rows we can easily waste 90% of the disk space. To make disk columns usable in Feature Store applications, we therefore had to add support for variable sized rows on disk.
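To make this concrete, here is a minimal sketch of how a variable sized disk column can be declared, using the standard NDB Cluster disk data syntax that RonDB inherits; the logfile group, tablespace, table and column names, file names and sizes are all hypothetical examples.

```sql
-- Disk data in RonDB/NDB lives in a tablespace backed by an UNDO
-- logfile group; both must be created before the table.
CREATE LOGFILE GROUP lg_1
  ADD UNDOFILE 'undo_1.log'
  INITIAL_SIZE 128M
  ENGINE NDBCLUSTER;

CREATE TABLESPACE ts_1
  ADD DATAFILE 'data_1.dat'
  USE LOGFILE GROUP lg_1
  INITIAL_SIZE 1G
  ENGINE NDBCLUSTER;

-- A hypothetical feature table where the variable sized value is
-- placed on disk. With the old fixed size disk rows, every row would
-- occupy the full declared 4096 bytes; with variable sized rows in
-- RonDB 22.10.0, a row only occupies the space its value needs.
CREATE TABLE feature_values (
  feature_id BIGINT NOT NULL PRIMARY KEY,
  feature_value VARBINARY(4096) STORAGE DISK
) TABLESPACE ts_1 ENGINE NDBCLUSTER;
```

In this sketch, an average value of around 400 bytes stored in the 4096 byte column would have wasted roughly 90% of the disk space with the old fixed size format, which is exactly the waste the new row format removes.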

Thus, with the release of the new LTS version RonDB 22.10.0, disk columns are now as useful as in-memory columns. They have excellent performance, their latency is very good (even better than the in-memory latency of some competitors), and their storage efficiency is now high as well.

This means that with RonDB 22.10.0 a node can handle TBytes of in-memory columns and many tens of TBytes of disk columns. Thus RonDB can scale all the way up to database sizes at the petabyte level, with read and write latencies below a millisecond.
