a Red Hat VM.
In order to run MySQL Cluster in a cloud environment I use 4 VMs.
This is sufficient for a demo or proof of concept of MySQL Cluster.
For a production environment it is likely that one would want at least
6 VMs. It is generally a good idea to use one VM per process.
This has the advantage that the VMs can be restarted one at a
time, in the order required for a rolling upgrade of MySQL Cluster.
I used one VM to handle the NDB management server. When this
node is down no new nodes can join the cluster, but the nodes
already running continue to operate as normal. So in a demo
environment a single NDB management server is quite sufficient.
In a production
environment it is quite likely that one wants at least two
management server VMs.
I used two VMs for data nodes. This is the minimal setup for a
highly available MySQL Cluster. Since MySQL Cluster is designed
for high availability it makes sense to use a replicated setup even in
a demo environment.
I used one VM for the MySQL Server, the MySQL client and the
NDB management client. This is good enough for a demo, but in
a production environment it is likely that one wants at least two
MySQL Server VMs, both for failover handling and for load
balancing, and normally even more than two VMs to scale out
the query load.
The first step is to create these four small instances. This is very
straightforward, and I used the Red Hat 7.3 Linux OS.
Each instance comes with three addresses. One is a public IP address
used to SSH into the instance, next there is the public hostname of the
machine, and finally there is a private IP address. The public IP address
is used to connect with SSH; it isn't intended for communication between
the VMs inside the cloud. It is possible to use the public hostnames for
this, but it is better to use the private IP addresses for communication
inside the cluster. Otherwise the cluster also depends on a DNS service
for its availability.
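On an EC2-style cloud the private IP address of an instance can be read
from the instance metadata service. A quick way to check it (assuming the
standard metadata endpoint is available):

# Print the private IP address of this instance
curl http://169.254.169.254/latest/meta-data/local-ipv4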
I will call the private IP address of the NDB management server VM
PrivIP_MGM, that of the first NDB data node PrivIP_DN1, that of the
second data node PrivIP_DN2, and that of the MySQL Server VM
PrivIP_Server.
On each instance you log in as the user ec2-user, so I simply created a
directory called ndb under /home/ec2-user as the data directory on all VMs.
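That is, on every VM:

mkdir /home/ec2-user/ndb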
So the first step towards setting up the cluster is to create a configuration
file for the cluster called config.ini and place this in
/home/ec2-user/ndb/config.ini on the NDB management server VM.
In this file the management server is given nodeid 49 and every node is
tied to its private IP address.
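A minimal config.ini consistent with this setup could look as follows.
This is a sketch: the nodeid values, ServerPort and the data directory
follow from this text, NoOfReplicas = 2 simply matches the two data
nodes, and the placeholders stand for the actual private IP addresses.

[ndbd default]
NoOfReplicas = 2
ServerPort = 11860
DataDir = /home/ec2-user/ndb

[ndb_mgmd]
nodeid = 49
hostname = PrivIP_MGM
DataDir = /home/ec2-user/ndb

[ndbd]
nodeid = 1
hostname = PrivIP_DN1

[ndbd]
nodeid = 2
hostname = PrivIP_DN2

[mysqld]
nodeid = 53
hostname = PrivIP_Server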
We set ServerPort to 11860 to ensure that we always use the same port number
to connect to the NDB data nodes. Otherwise the data nodes pick their
ports dynamically, which makes it hard to set up a secure environment
with firewalls.
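On Red Hat 7 the ports can be opened with firewalld, for example as
follows (a sketch; 1186 is the default management server port and 3306
the default MySQL Server port):

# On the data node VMs
sudo firewall-cmd --permanent --add-port=11860/tcp
# On the management server VM
sudo firewall-cmd --permanent --add-port=1186/tcp
# On the MySQL Server VM
sudo firewall-cmd --permanent --add-port=3306/tcp
# Activate the new rules
sudo firewall-cmd --reload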
A good rule for node ids is to use 1 through 48 for data nodes, 49 through
52 for NDB management servers and 53 to 255 for API nodes and MySQL
Servers. This will work in almost all cases.
In addition we need to create a configuration file, my.cnf, for the
MySQL Server on the MySQL Server VM.
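A minimal my.cnf for this setup could look as follows. This is a sketch:
the NDB-specific settings follow from the setup described here, while the
exact datadir is an assumption that reuses the directory created above.

[mysqld]
ndbcluster
ndb-connectstring = PrivIP_MGM
datadir = /home/ec2-user/ndb

With the configuration files in place, the processes are started in order:
first the management server, then the two data nodes, and finally the
MySQL Server. The commands below sketch these steps; the exact paths
and the use of the multithreaded data node binary ndbmtd are assumptions.

# On the management server VM
ndb_mgmd -f /home/ec2-user/ndb/config.ini --configdir=/home/ec2-user/ndb

# On each data node VM
ndbmtd --ndb-connectstring=PrivIP_MGM

# On the MySQL Server VM (after initializing the data directory,
# e.g. with mysqld --initialize)
mysqld --defaults-file=/home/ec2-user/ndb/my.cnf &

That the nodes have connected can be verified from the NDB management
client with ndb_mgm --ndb-connectstring=PrivIP_MGM -e show.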
Now we have a working setup with a management server, two data nodes
and a MySQL Server up and running. So now one can connect a MySQL
client to the MySQL Server and perform more experiments or run some
other MySQL application. Obviously, for a more realistic application it is
likely that a configuration with more memory, more disk space and more
CPU resources is required.
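As a quick check that the cluster works end to end, one can create a
table that uses the NDB storage engine; the database and table names
below are made up for illustration.

mysql -h PrivIP_Server -u root -p

CREATE DATABASE demo;
USE demo;
CREATE TABLE t1 (id INT PRIMARY KEY, v VARCHAR(32)) ENGINE=NDB;
INSERT INTO t1 VALUES (1, 'hello');
SELECT * FROM t1;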