Tuesday, April 12, 2011
MySQL Cluster running 2.46M updates per second!
In a previous blog post we showed how MySQL Cluster achieved 6.82M reads per second. That is a high number, but it is equally interesting to see how efficient MySQL Cluster is at executing update transactions. Here we pushed through the 1M transactions per second barrier, past 2M, and all the way up to 2.46M update transactions per second.
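A large part of reaching numbers like this is batching: with the asynchronous NDB API a client thread prepares many update transactions and then sends them together, paying one network round trip per batch rather than per transaction. The sketch below shows that pattern in its simplest form; the table t1 with columns pk and val, the connect string, the database name and the batch size are made up for illustration, and error handling is trimmed.

  // Sketch only: the table t1(pk, val), the connect string and the batch size
  // are made up for illustration; error handling is trimmed.
  #include <NdbApi.hpp>
  #include <cstdio>

  // Called once per completed transaction; hands the transaction object back.
  static void update_done(int result, NdbTransaction *trans, void *ndb_ptr)
  {
    if (result != 0)
      printf("Update failed: %s\n", trans->getNdbError().message);
    static_cast<Ndb *>(ndb_ptr)->closeTransaction(trans);
  }

  int main()
  {
    ndb_init();
    {
      Ndb_cluster_connection conn("mgm_host:1186"); // placeholder connect string
      if (conn.connect() != 0 || conn.wait_until_ready(30, 0) < 0)
        return 1;

      Ndb ndb(&conn, "test");                       // placeholder database name
      ndb.init(1024);                               // allow many parallel transactions

      const int BATCH = 256;                        // transactions prepared before each send
      for (int pk = 0; pk < BATCH; pk++) {
        NdbTransaction *trans = ndb.startTransaction();
        NdbOperation *op = trans->getNdbOperation("t1");
        op->updateTuple();                          // primary key update
        op->equal("pk", pk);                        // which row to update
        op->setValue("val", pk * 2);                // new column value
        trans->executeAsynchPrepare(NdbTransaction::Commit, update_done, &ndb);
      }
      // One call sends all prepared transactions and waits for their completions.
      ndb.sendPollNdb(3000, BATCH);
    }
    ndb_end(0);
    return 0;
  }

The flexAsynch programs mentioned in the comments below are, roughly speaking, multi-threaded versions of this loop, each running against its own table.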
Hi Mikael, this was across 16 data nodes / 8 physical servers, right?
So circa 153k updates per second on each data node, or 300k on each physical server.
Great numbers!
That is impressive.
How many servers did you use? Was the data replicated?
Hi!
Please, can you share more details about the test environment? It would be great if you could provide it as a package for Linux with the configuration files.
Thanks in advance.
Mikael,
As asked by other readers, it's unclear how many data nodes and physical servers were used to get 2.46M updates/s.
Also, is the workload against a single indexed table, and does the index + data fit in memory?
Thanks,
Darpan
This benchmark was executed with 16 data nodes on 8 physical boxes. The data was replicated. The benchmark programs themselves ran on separate servers.
It used flexAsynch programs; each flexAsynch program used its own table, and all data was memory resident.
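To give an idea of the configuration (the exact files are not reproduced here): a cluster of this general shape is described to the management server in config.ini roughly as below. Hostnames and memory sizes are placeholders, the two-replica setting is the usual choice when data is replicated, and only two of the eight hosts are shown.

  [ndb_mgmd]
  NodeId=1
  HostName=mgm_host

  [ndbd default]
  # Data was replicated; two copies is the usual setting
  NoOfReplicas=2
  # Placeholder sizes, chosen so that all tables stay memory resident
  DataMemory=16G
  IndexMemory=2G

  # Two data nodes per physical box, 8 boxes in total (only the first two shown here)
  [ndbd]
  NodeId=2
  HostName=data_host_1

  [ndbd]
  NodeId=3
  HostName=data_host_1

  [ndbd]
  NodeId=4
  HostName=data_host_2

  [ndbd]
  NodeId=5
  HostName=data_host_2

  # API slots for the benchmark programs running on the separate servers
  [api]
  [api]

Each flexAsynch program connects through one of the [api] slots and works against its own table.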