Wednesday, September 11, 2024

Rate Limits and Quotas in RonDB

Hopsworks-RonDB Background 

Hopsworks provides a free service for running Hopsworks workloads in a managed cloud. This service has been used by many thousands of individuals and companies wanting to experiment with AI Lakehouse applications of various sorts, such as predicting the weather at your location and other experimental machine learning applications.

This Hopsworks service means that thousands of projects can run concurrently on one Hopsworks cluster. A Hopsworks cluster uses RonDB for three different things. First, it is used as the online feature store. Users import data from their data pipelines; some of this data is directed to the online feature store and some towards the offline feature store. The data in the online feature store is used for machine learning inference in real-time applications, such as personalised search, credit fraud analysis and many, many other applications.

Second, the offline feature store uses HopsFS, a distributed file system built on top of RonDB. HopsFS stores the file data in a backend storage system such as S3, Scality, Google Cloud Storage or Azure Blob Storage, while the metadata of the file system is stored in RonDB.

The data in HopsFS can be used by Hudi, Apache Iceberg, DuckDB and other query services for training machine learning models and performing batch inference.

Third, RonDB is used to store metadata about features, job control and many other things needed to operate Hopsworks.

Having a multi-tenant DBMS with thousands of concurrent projects running in one RonDB cluster is obviously a challenge. RonDB is required to provide response times down to less than a millisecond, and within tens of milliseconds even when fetching a batch of hundreds of rows to serve a single personalised search request.

Tables in RonDB store the payload data, which could be hundreds of features in one table (feature group). This data can be kept either in memory, for the best latency and performance, or in disk columns, which provide cheaper storage at a somewhat higher cost of accessing the data.

To handle these thousands of projects, each project has its own database in RonDB. To handle the resulting number of tables, we have added the capability to configure RonDB to support hundreds of thousands of tables concurrently.

The figure below shows how the data server side is implemented by the RonDB data nodes together with the RonDB management server. HopsFS accesses RonDB through ClusterJ, the native Java NDB API. Applications can access RonDB either through a set of MySQL servers or through the RonDB REST API Server. The REST API server provides key lookups and batched key lookups in real-time. In RonDB 24.10 it also provides a new endpoint, RonSQL, which can handle simple SQL queries that retrieve aggregate information from a table in RonDB. The same query in RonSQL is about 20x faster than sending it through the MySQL Server.
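To give a feel for how a client such as HopsFS talks to the data nodes through ClusterJ, here is a minimal sketch of a primary key lookup. The ClusterJ API (com.mysql.clusterj) is real, but the connect string, database, table and column names below are made-up placeholders for illustration, not an actual Hopsworks schema.

import com.mysql.clusterj.ClusterJHelper;
import com.mysql.clusterj.Session;
import com.mysql.clusterj.SessionFactory;
import com.mysql.clusterj.annotation.PersistenceCapable;
import com.mysql.clusterj.annotation.PrimaryKey;

import java.util.Properties;

public class FeatureLookup {

    // Hypothetical feature group table; table and column names are illustrative.
    @PersistenceCapable(table = "feature_group_1")
    public interface FeatureRow {
        @PrimaryKey
        long getEntityId();
        void setEntityId(long id);

        double getFeatureA();
        void setFeatureA(double v);
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        // Connect string and database name are deployment specific placeholders.
        props.setProperty("com.mysql.clusterj.connectstring", "mgmd-host:1186");
        props.setProperty("com.mysql.clusterj.database", "project_db");

        SessionFactory factory = ClusterJHelper.getSessionFactory(props);
        Session session = factory.getSession();

        // Primary key lookup: the kind of operation the online feature store
        // performs at serving time.
        FeatureRow row = session.find(FeatureRow.class, 12345L);
        if (row != null) {
            System.out.println("feature_a = " + row.getFeatureA());
        }

        session.close();
        factory.close();
    }
}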

Why Rate Limits and Quotas?

Managing thousands of concurrent users in a real-time DBMS, with each user consuming only a fraction of a CPU, is indeed a challenge. In RonDB 24.10, which will be released in a few weeks, we have added the capability to limit the use of CPU, memory and disk space per database.

Managing memory and disk space is fairly straightforward. RonDB tracks each and every memory page and disk page used by a specific database. When the database has reached its limit, it is no longer possible to insert, write or update data. It is still possible to delete and read data.
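As a rough illustration of what this accounting implies, the sketch below shows per-database page tracking with a limit check. It is a simplified picture of the behaviour described above, not RonDB's internal implementation; all names are made up.

// Illustrative sketch of per-database quota enforcement; not RonDB internals.
// Page accounting and operation types are heavily simplified.
enum OpType { READ, DELETE, INSERT, UPDATE, WRITE }

class DatabaseQuota {
    private final long memoryPageLimit;
    private long memoryPagesUsed;

    DatabaseQuota(long memoryPageLimit) {
        this.memoryPageLimit = memoryPageLimit;
    }

    // Called for every page allocated or freed on behalf of this database.
    void onPageAllocated() { memoryPagesUsed++; }
    void onPageFreed()     { memoryPagesUsed--; }

    // Reads and deletes are always allowed; operations that grow the
    // database are rejected once the limit has been reached.
    boolean isAllowed(OpType op) {
        if (op == OpType.READ || op == OpType.DELETE) {
            return true;
        }
        return memoryPagesUsed < memoryPageLimit;
    }
}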

CPU resources are managed by keeping track of the CPU usage of each database in real time. As long as the application using the database stays within its CPU rate limit, it operates as normal.

If the application tries to use more CPU than it has requested, this will soon be discovered. At that point RonDB needs to slow the project down to ensure that all other projects still get a real-time service. The more overload the project tries to create, the more it is slowed down. If slowing it down is not enough, the project will eventually not be able to complete any queries in RonDB until it has paid back its CPU "debt". In each time interval the database pays down this "debt", and if things go as normal, without hitting the rate limit, the "debt" returns to zero every time interval.
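The sketch below illustrates the idea of a per-database CPU "debt" that grows when the budget is exceeded and is paid back each time interval. The interval length, refill policy and slowdown rule are assumptions made for illustration, not RonDB's actual scheduler.

// Illustrative sketch of debt-based CPU throttling per database.
class CpuRateLimiter {
    private final long budgetMicrosPerInterval; // allowed CPU time per interval
    private long usedMicrosThisInterval;        // CPU time consumed so far in this interval
    private long debtMicros;                    // overuse carried over from earlier intervals

    CpuRateLimiter(long budgetMicrosPerInterval) {
        this.budgetMicrosPerInterval = budgetMicrosPerInterval;
    }

    // Record CPU time consumed by an operation on behalf of this database.
    void charge(long cpuMicros) {
        usedMicrosThisInterval += cpuMicros;
    }

    // Called once per accounting interval (e.g. every second).
    void endInterval() {
        long overuse = usedMicrosThisInterval - budgetMicrosPerInterval;
        if (overuse > 0) {
            debtMicros += overuse;                        // over budget: accumulate debt
        } else {
            debtMicros = Math.max(0L, debtMicros + overuse); // pay back debt with unused budget
        }
        usedMicrosThisInterval = 0;
    }

    // How much to slow this database down right now: 0.0 means full speed,
    // values towards 1.0 mean the database is stalled until the debt is paid back.
    double slowdownFactor() {
        if (debtMicros == 0) {
            return 0.0;
        }
        return Math.min(1.0, (double) debtMicros / budgetMicrosPerInterval);
    }
}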

When defining the rate limits, we measure them in microseconds of CPU time per second per data node. Thus, in a 2-node RonDB cluster, setting the rate limit to 100,000 means you get access to 0.1 CPUs in each data node. Setting it to 1,000 means you get 1 millisecond of CPU time per second per data node, thus 0.001 CPUs in each data node.
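A small worked example of this arithmetic, assuming only the definition above (microseconds of CPU time per second per data node); the helper class is purely illustrative:

// Converts a rate limit value into the corresponding fraction of a CPU per
// data node and the total across the cluster. Purely illustrative helper.
public class RateLimitMath {
    static final long MICROS_PER_SECOND = 1_000_000L;

    public static void main(String[] args) {
        long rateLimitMicros = 100_000L; // 100,000 us of CPU per second per data node
        int dataNodes = 2;

        double cpusPerNode = (double) rateLimitMicros / MICROS_PER_SECOND;
        double cpusInCluster = cpusPerNode * dataNodes;

        System.out.printf("CPUs per data node : %.3f%n", cpusPerNode);   // 0.100
        System.out.printf("CPUs in the cluster: %.3f%n", cpusInCluster); // 0.200
    }
}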

Different projects can have very different requirements: one could require 0.001 CPUs and another 1.5 CPUs. These databases can easily co-exist in the RonDB cluster, and both will get a reliable real-time service.

As an example of how this works, we ran a Sysbench OLTP RW benchmark with the rate limit set to 100,000 (0.1 CPUs per data node). This made it possible to run 330 TPS (thus 6,600 SQL queries per second, delivering almost 150,000 rows per second to the application). This TPS was achieved regardless of how many threads were used, from 1 thread to 256 threads. Increasing the rate limit to 500,000 gave us 5x more TPS.
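For reference, the per-second figures above can be roughly reconstructed as follows, assuming the default Sysbench OLTP RW transaction shape of about 20 statements, of which 10 are point selects and 4 are range scans of roughly 100 rows each; this is a back-of-the-envelope check, not benchmark output.

// Back-of-the-envelope check of the benchmark numbers, assuming default
// Sysbench OLTP RW transaction shape. Purely illustrative.
public class SysbenchMath {
    public static void main(String[] args) {
        int tps = 330;                 // transactions per second at rate limit 100,000
        int queriesPerTx = 20;         // statements per OLTP RW transaction
        int pointSelectRows = 10;      // 10 point selects x 1 row
        int rangeScanRows = 4 * 100;   // 4 range scans x ~100 rows (default range size)

        int queriesPerSecond = tps * queriesPerTx;                   // ~6,600
        int rowsPerSecond = tps * (pointSelectRows + rangeScanRows); // ~135,000, same ballpark as almost 150,000

        System.out.println("Queries per second: " + queriesPerSecond);
        System.out.println("Rows per second   : " + rowsPerSecond);
    }
}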

Obviously, this capability can also be useful for a company with many departments that want to share one Hopsworks cluster.

This shows how we can operate the actual data servers in RonDB. The data server clients are stateless and can thus scale up and down as needs come and go. In addition, the clients in Hopsworks can access the data server clients through load balancers. Together with Hopsworks 4.0, which is operated by Kubernetes, this means that RonDB 24.10 will be extremely flexible in scaling CPU, memory and disk up and down on the data server side, as well as CPU on the client side.


