A few months ago I decided to run a benchmark to showcase how RonDB 24.10 can handle 100M key lookups per second using our REST API server from a Python client. The exercise is meant to show how RonDB scales to meet both the throughput and the latency requirements of personalised recommendation systems, which are commonly used by companies such as Spotify, e-commerce sites and so forth.
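To make the setup concrete, here is a minimal sketch of what a single key lookup from a Python client against the RonDB REST API server can look like. The endpoint path, port, database, table and column names below are illustrative assumptions for this sketch; a real benchmark client would issue many concurrent (and typically batched) requests, and the exact request format should be checked against the RonDB REST API documentation for your version.

```python
# Minimal sketch of a Python client issuing a primary-key lookup against
# the RonDB REST API server. The endpoint path and payload shape are
# assumptions for illustration; consult the RonDB REST API docs for the
# exact format in your RonDB version.
import requests

REST_SERVER = "http://localhost:4406"   # assumed REST API server address
DATABASE = "benchmark_db"               # hypothetical database name
TABLE = "user_features"                 # hypothetical table name


def pk_read(key: int) -> dict:
    """Single primary-key lookup over the REST API (assumed pk-read endpoint)."""
    url = f"{REST_SERVER}/0.1.0/{DATABASE}/{TABLE}/pk-read"
    payload = {"filters": [{"column": "id", "value": key}]}
    resp = requests.post(url, json=payload, timeout=1.0)
    resp.raise_for_status()
    return resp.json()


if __name__ == "__main__":
    print(pk_read(42))
```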
The exercise started at 2M key lookups per second. Running a large benchmark like this means hitting all sorts of bottlenecks: some due to configuration issues, some due to load balancers, some due to quota constraints and networking within the cloud vendor, some due to bugs, and some that required new features in RonDB. The exercise also includes a comparison of VM types using Intel, AMD and ARM CPUs, as well as managing multiple Availability Zones.
I thought that reporting on this exercise could be an interesting learning experience for others as well, so the whole process is documented in this blog.
At rondb.com you can find other blogs about RonDB 24.10 and you can even try out RonDB in a Test Cluster. You can start a small benchmark and check 12 dashboards of monitoring information about RonDB while it is running.