Friday, April 14, 2017

Setting up MySQL Cluster in the Oracle Bare Metal Cloud

The Oracle Bare Metal Cloud service is an innovative cloud service, and when
looking at how it can be used for MySQL Cluster it turns out to be a great
fit.

MySQL Cluster is a high availability solution. Oracle Bare Metal Cloud
provides the possibility to have servers that only you use, so two Oracle
Bare Metal servers are definitely independent of each other's hardware.
If you also want to run synchronous replication with no shared dependence
on network, housing and electricity, you can place these servers in
different availability domains in the same region. Thus it is possible to
build individual clusters with very high availability.

It is still possible to use smaller virtual machines that share the
physical machine with other simultaneous users.

One of the most important features of MySQL Cluster is predictable
latency. To achieve this it is important that the network latency
is predictable and that the network bandwidth is constant. Oracle Bare
Metal provides this by constructing a non-oversubscribed network that
has round-trip latency below 100 microseconds within an availability
domain and below 1 millisecond round-trip latency between availability
domains.

All machines currently have 10 Gbit Ethernet and it has been announced
that this will soon be upgraded to 25 Gbit Ethernet. This is great for
MySQL Cluster, which uses the network very heavily and relies on it
for predictable latency.

In addition, Oracle Bare Metal comes with the option to use servers
(Oracle BM High IO) with more than a million IOPS and 6.4 TByte of local
NVMe storage even when set up in RAID10, with 36 CPU cores and
512 GByte RAM. So it is a very capable platform for MySQL Cluster, with
both in-memory data and the disk data feature of MySQL Cluster, and
still with predictable latency.

I have long experience of running benchmarks, including some very large
ones, but all of these benchmarks have been run on machines that were
lacking in IO performance. So for me it is great to see a platform that
has very capable CPU performance, a large memory footprint and very
capable IO performance on top of that. It will be interesting to run
benchmarks on this platform and continuously improve MySQL Cluster
performance on this type of platform.

In addition, it is possible to set up a secure network environment
for MySQL Cluster, since each user can set up their own virtual
cloud network that can also be integrated with an on-premise
network.

So today I did an exercise to set up a MySQL Cluster installation on
4 VMs in the Oracle Bare Metal Cloud.

These are the steps I took to prepare each VM. I will also show how
those steps can easily be automated for a DevOps environment.

The first step is to launch an instance. I used the web interface here
and picked the simplest VM1.1 instances. I decided to launch all instances
in the same availability domain; I will later look into how to set things
up to work with more than one availability domain.

I chose Oracle Linux 7.3 as the OS for all instances. Each instance
gets a public IP address that can be used to log in to the instance as
the user opc. In addition the instances also get a private IP address.
It is possible to have a public DNS name for each instance as well, but
since we don't want to use that type of name we don't assign any public
DNS names to our instances. I use the 10.0.0.0/16 range of private IP
addresses.

The next step is to create a block volume. They come in 256 GByte or 2 TByte
sizes, and for this experiment the 256 GByte size was used.

The final step is to attach a block volume to each instance such that every
instance has a block volume.

The next step is to use SSH to log in to the machine using
ssh -l opc PUBLIC_IP_ADDRESS
When the instance was defined the SSH public key was provided, so this
can be done without a password.
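
If one logs in to the instances often it can be convenient to add a host alias
in ~/.ssh/config on the local machine. A minimal sketch, where the alias name
ndb-mgmt and the key file path are just examples:

Host ndb-mgmt
  HostName PUBLIC_IP_ADDRESS
  User opc
  IdentityFile ~/.ssh/id_rsa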

Now the block volume needs to be registered, configured to reconnect at
boot, and one needs to log in to the iSCSI target. These three commands
are copied from the web interface and pasted into the terminal window.
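
The exact commands contain the volume's IQN and portal address, so they must
be taken from the console, but they follow roughly this pattern (the IQN and
portal IP below are placeholders):

sudo iscsiadm -m node -o new -T iqn.2015-12.com.oracleiaas:EXAMPLE_VOLUME -p 169.254.2.2:3260
sudo iscsiadm -m node -o update -T iqn.2015-12.com.oracleiaas:EXAMPLE_VOLUME -n node.startup -v automatic
sudo iscsiadm -m node -T iqn.2015-12.com.oracleiaas:EXAMPLE_VOLUME -p 169.254.2.2:3260 -l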

A final step using the web interface is to enable TCP communication
between the instances in the cloud. To do this a new ingress rule
is added that allows TCP traffic from 10.0.0.0/24 on all ports in
stateless mode. This means that the network will not block any TCP
traffic between the instances in the cloud. It will still block any
communication to the instances from anywhere outside of my virtual
cloud network.

To automate those parts one would make use of the REST API available to
interact with the Oracle Bare Metal Cloud service.

After these steps we now have a virtual machine up and running with an
attached block device (it gets named /dev/sdb). So now it is time to install
the MySQL Cluster software, create a file system on the device, set up
networking and finally set up the firewall for the instances.

First we want to set up the file system on each instance. My personal
experience is that XFS is my preferred file system for MySQL Cluster.
MySQL Cluster works with all sorts of other file systems, but XFS is good
at the high write loads that are common when using MySQL Cluster.

Next we mount the new file system on the directory /ndb_data.

In Oracle Linux 7.3 the firewall is set up to block most traffic by default.
So in order to make it possible to communicate on the appropriate ports
we open up the ports 1186 (NDB management server port), 3306 (MySQL Server
port), 11860 (MySQL Cluster data node port) and 33060 (MySQLX port).

This is performed with the command
sudo firewall-cmd --zone=public --permanent --add-port=1186/tcp
for each of the ports, followed by
sudo firewall-cmd --reload
to ensure that the new firewall rules are used.
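
Since the same command is repeated for each port, the four calls can also be
written as a small loop:

for port in 1186 3306 11860 33060; do
  sudo firewall-cmd --zone=public --permanent --add-port=${port}/tcp
done
sudo firewall-cmd --reload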

To prepare the MySQL repo to use MySQL Cluster we have copied over
the mysql57-community-release-el7-10.noarch.rpm file, which gives
access to the MySQL repos.

So we issue the command to install this into Oracle Linux.
Next we install yum-utils to be able to use yum-config-manager,
rather than editing yum files by hand, to disable the MySQL 5.7
repo and enable the MySQL Cluster 7.5 repo.

Now I put all of this into a script, which in my case looks like this:

#!/bin/bash
# Open the MySQL Cluster ports in the firewall
sudo firewall-cmd --zone=public --permanent --add-port=1186/tcp
sudo firewall-cmd --zone=public --permanent --add-port=3306/tcp
sudo firewall-cmd --zone=public --permanent --add-port=11860/tcp
sudo firewall-cmd --zone=public --permanent --add-port=33060/tcp
sudo firewall-cmd --reload
# Create an XFS file system on the block volume and mount it on /ndb_data
sudo mkfs.xfs -d su=32k,sw=6 /dev/sdb
sudo mkdir /ndb_data
sudo mount /dev/sdb /ndb_data
sudo chown opc /ndb_data
sudo chgrp opc /ndb_data
# Install the MySQL repo definitions and switch from the MySQL 5.7 repo
# to the MySQL Cluster 7.5 repo
sudo rpm -ivh mysql57-community-release-el7-10.noarch.rpm
sudo yum install -y yum-utils
sudo yum-config-manager --disable mysql57-community
sudo yum-config-manager --enable mysql-cluster-7.5-community
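
To prepare all instances in one go, a script like this can be pushed out and
executed over SSH; a minimal sketch, where prepare.sh and the list of public
IP addresses are placeholders:

for ip in PUBLIC_IP_1 PUBLIC_IP_2 PUBLIC_IP_3 PUBLIC_IP_4; do
  scp prepare.sh mysql57-community-release-el7-10.noarch.rpm opc@${ip}:
  ssh opc@${ip} bash prepare.sh
done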

Now the script to prepare the data node VMs also adds the line:
sudo yum install -y mysql-cluster-community-data-node

The script to prepare the NDB management server VM adds the line:
sudo yum install -y mysql-cluster-community-management-server

As usual the installation of the MySQL Server VM is a bit more
involved. Oracle Linux 7.3 comes with MySQL 5.6 preinstalled, and
postfix depends on the MySQL Server packages. In addition we depend
on the EPEL (Extra Packages for Enterprise Linux) repository being
accessible.

So the script here needs to add the following lines:
sudo rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-9.noarch.rpm
sudo yum remove -y mysql-community-{server,client,common,libs}
sudo yum install -y mysql-cluster-community-{server,client,common,libs}

This means that postfix is no longer installed, but this should be
ok since it is a very specialised VM we have prepared.

Now we write the config.ini file for this cluster. We use serverport equal
to 11860 to ensure that all communication to the data node goes through the
same port that we have opened in the firewall.

We define 4 mysqld slots to make it possible to run a few tools in
parallel with the MySQL Server.

One block volume has a maximum throughput of 1500 IOPS (4 kByte IO),
so we configure the MinDiskWriteSpeed and MaxDiskWriteSpeed parameters to
ensure that we don't oversubscribe the disk resources for checkpointing.
If more bandwidth is needed one should add more block volumes. Also,
2 TByte volumes have higher bandwidth.

Even the smallest VM in the Oracle Cloud has 7 GByte of memory, so we set up
the instances with 4 GByte of data memory and 500 MByte of index memory.

[ndb_mgmd]
nodeid=49
hostname=10.0.0.4
datadir=/ndb_data

[ndbd default]
noofreplicas=2
serverport=11860
datadir=/ndb_data
MinDiskWriteSpeed=2M
MaxDiskWriteSpeed=5M
MaxDiskWriteSpeedOtherNodeRestart=5M
MaxDiskWriteSpeedOwnRestart=5M
DataMemory=4G
IndexMemory=500M

[ndbd]
nodeid=1
hostname=10.0.0.5

[ndbd]
nodeid=2
hostname=10.0.0.6

[mysqld]
nodeid=53
[mysqld]
nodeid=54
[mysqld]
nodeid=55
[mysqld]
nodeid=56

Finally we also need a configuration file for the MySQL Server.

[mysqld]
ndbcluster
datadir=/ndb_data/data
socket=/ndb_data/mysql.sock
log-error=/ndb_data/error.log
pid-file=/ndb_data/mysqld.pid

We store the configuration file for the cluster in the NDB management
server VM in /ndb_data/config.ini. We store the MySQL Server configuration
file in the MySQL Server VM in /ndb_data/my.cnf.

Obviously for a real-world use case more care has to be put into setting
up the configuration, but this will be sufficient for a reasonable demo
case.

Now it is time to start things up.
First we start the management server in the NDB management server VM:
ndb_mgmd -f /ndb_data/config.ini --initial --configdir=/ndb_data --ndb-nodeid=49

Next we start the data nodes in the data node VMs. We use ndbd here since
it is the most efficient choice in a very small VM; when going beyond
2 CPUs one should use ndbmtd instead.

ndbd --ndb-connectstring=10.0.0.4 --ndb-nodeid=1
and
ndbd --ndb-connectstring=10.0.0.4 --ndb-nodeid=2
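
On a larger shape the corresponding ndbmtd start looks the same; a sketch,
where the number of threads would additionally be controlled with the
MaxNoOfExecutionThreads parameter in the [ndbd default] section of config.ini:

ndbmtd --ndb-connectstring=10.0.0.4 --ndb-nodeid=1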

Finally we can monitor that the cluster has started using
ndb_mgm --ndb-connectstring=10.0.0.4
and issue the command show, and then quit to exit the NDB management client.
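
For scripting purposes the same check can be done non-interactively with the
-e option of the management client:

ndb_mgm --ndb-connectstring=10.0.0.4 -e "show"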

Now we bootstrap the MySQL Server with
mysqld --defaults-file=/ndb_data/my.cnf --initialize-insecure
and next we start the MySQL Server:
mysqld --defaults-file=/ndb_data/my.cnf --ndb-connectstring=10.0.0.4

and we're done.

We can now connect to the MySQL Server from the client with
mysql --socket=/ndb_data/mysql.sock --user=root

and start issuing whatever database commands we want to test.
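
For example, a quick sanity check that tables end up in NDB can be run
directly from the shell like this (the database and table names are just
examples):

mysql --socket=/ndb_data/mysql.sock --user=root -e "
CREATE DATABASE test_ndb;
CREATE TABLE test_ndb.t1 (id INT PRIMARY KEY, value VARCHAR(32)) ENGINE=NDBCLUSTER;
INSERT INTO test_ndb.t1 VALUES (1, 'hello');
SELECT * FROM test_ndb.t1;"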

Later I will look at some more advanced tests of MySQL Cluster in the
Oracle Cloud and run some benchmarks to see how it performs.
