One important change in the GA release is that DataMemory can now be set as high as 16 TB.
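As a minimal sketch, the setting lives in the [ndbd default] section of the management server's config.ini. The parameter names below are the real NDB ones, but the values are purely illustrative, not sizing advice:

```ini
# config.ini (management server) -- illustrative fragment only
[ndbd default]
NoOfReplicas=3
# In MySQL Cluster 8.0 DataMemory can be set as high as 16 TB;
# here expressed with the G suffix (16384 G = 16 TB)
DataMemory=16384G
```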
We are currently working with Intel on machines that can handle 6 TB of memory, and
using these machines we have already loaded more than 5 TB of user data in the DBT2
benchmark. We will publish more details on the results of those benchmark
investigations later.
Given that we have also greatly improved support for disk data in MySQL Cluster 8.0,
it is perfectly sensible to store tens of TB of data in each data node, and even up
to 100 TB. This means that a cluster with 144 data nodes could store all the way up
to 5 PB of data, even with 3 replicas.
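The 5 PB figure follows from simple arithmetic: with 3 replicas, 144 data nodes form 48 node groups, each holding a distinct partition of the data. A back-of-the-envelope sketch (the 100 TB per node is the illustrative upper bound from above, not official sizing guidance):

```python
# Capacity estimate for a 144-node cluster with 3 replicas.
data_nodes = 144
replicas = 3
tb_per_node = 100  # illustrative upper bound per data node

# Each node group stores one distinct slice of the data,
# replicated across its 3 member nodes.
node_groups = data_nodes // replicas   # 48 node groups
total_tb = node_groups * tb_per_node   # 4800 TB of unique data
total_pb = total_tb / 1000             # 4.8 PB, i.e. "up to 5 PB"

print(node_groups, total_tb, total_pb)
```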
The changes that made this possible were actually already done in MySQL Cluster 7.6,
as part of the development of Partial LCP, a key feature of 7.6.
Now that machines with more than 1 TB of memory are becoming common, it is
important for MySQL Cluster to support even larger memories.
Partial LCP was developed to handle up to 16 TB of DataMemory efficiently. It will
still work with even larger DataMemory sizes, but supporting larger memories
efficiently would require some minor changes in the product.