Friday, April 14, 2017

Setting up MySQL Cluster in the Oracle Bare Metal Cloud

The Oracle Bare Metal Cloud service is an innovative cloud service, and
when looking at how it can be used for MySQL Cluster it turns out to be
a great fit.

MySQL Cluster is a high availability solution. Oracle Bare Metal Cloud
provides the possibility to have servers that only you use, and thus
two Oracle Bare Metal servers are definitely independent of each other's
hardware. If you also want to run synchronous replication with no shared
dependence on network, housing and electricity, you can place these
servers in different availability domains in the same region.
Thus it is possible to build individual clusters with very high
availability.

It is still possible to use smaller virtual machines that share the
machine with other simultaneous users.

One of the most important features of MySQL Cluster is predictable
latency. To achieve this it is important that the network latency
is predictable and that the network bandwidth is constant. Oracle Bare
Metal provides this by constructing a non-oversubscribed network that
has round-trip latency below 100 microseconds within an availability
domain and below 1 millisecond round-trip latency between availability
domains.

All machines currently have 10 Gbit Ethernet and it has been announced
that this will soon be upgraded to 25 Gbit Ethernet. This is great for
MySQL Cluster, which uses the network very heavily and relies on it
for predictable latency.

In addition Oracle Bare Metal comes with the option to use servers
(Oracle BM High IO) with more than a million IOPS, 6.4 TByte of local
NVMe storage even when set up as RAID10, 36 CPU cores and 512 GByte RAM.
So it is a very capable platform for MySQL Cluster, both with in-memory
data and with the disk data feature, while still providing predictable
latency.

I have long experience of running benchmarks, even some very large
benchmarks, but all of these benchmarks have been done on machines that
were lacking in IO performance. So for me it is great to see a
platform that has very capable CPU performance, a large memory
footprint and very capable IO performance on top of that. So it
will be interesting to run benchmarks on this platform and
continuously improve MySQL Cluster performance on this type of
platform.

In addition it is possible to set up a secure network environment
for MySQL Cluster, since each user can set up their own virtual
cloud network that can also be integrated with an on-premise
network.

So I made an exercise today of setting up a MySQL Cluster installation on
4 VMs in the Oracle Bare Metal Cloud.

These are the steps I took to prepare each VM. I will also show how
those steps can easily be automated for a DevOps environment.

The first step is to launch an instance. I used the web interface here
and the simplest VM1.1 instances. I decided to launch all instances
in the same availability domain; I will later look into how to set
things up to work with more than one availability domain.

I chose Oracle Linux 7.3 as the OS for all instances. Each instance
gets a public IP address that can be used to log into the instance as
the user opc. In addition the instances also get a private IP address.
It is possible to also have a public DNS name for each instance, but since
we don't want to depend on such names we don't use any public DNS names
on our instances. I use the 10.0.0.0/16 range of private IP addresses.

The next step is to create a block volume. They come in 256 GByte or 2 TByte
sizes and for this experiment the 256 GByte size was used.

The final step is to attach the block volume to an instance so that each
instance has a block volume.

The next step is to use SSH to log into the machine using
ssh -l opc PUBLIC_IP_ADDRESS
When defining the instance the SSH public key was provided, so this
can be done without a password.

Now the block volume needs to be registered, configured to reconnect at
boot, and one needs to log into iSCSI. These three commands are copied
from the web interface and pasted into the terminal window.
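
For reference, the three commands have roughly the following shape, where
IQN and ISCSI_PORTAL_IP are placeholders; the exact values are generated
per volume and must be copied from the console:
sudo iscsiadm -m node -o new -T IQN -p ISCSI_PORTAL_IP:3260
sudo iscsiadm -m node -o update -T IQN -n node.startup -v automatic
sudo iscsiadm -m node -T IQN -p ISCSI_PORTAL_IP:3260 -l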

A final step using the web interface is to enable TCP communication
between the instances in the cloud. To do this one new ingress rule
is added that allows TCP traffic from 10.0.0.0/24 on all ports in a
stateless mode. This means that the network will not block any TCP
traffic between the instances in the cloud. It will still block any
communication to the instances from anywhere outside of my private
cloud network.

To automate those parts one would make use of the REST API available to
interact with the Oracle Bare Metal Cloud service.

After these steps we now have a virtual machine up and running and a
block device (it gets named /dev/sdb). So now it is time to install the
MySQL Cluster software, create a file system on the device, set up
networking and finally set up the firewall for the instances.

First we want to set up the file system for each instance. My personal
preference is to use XFS as the file system for MySQL Cluster.
MySQL Cluster works with all sorts of other file systems, but XFS is good
at handling the high write loads that are common when using MySQL Cluster.

Next we mount the new file system on the directory /ndb_data.
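
To make the mount survive a reboot one can also add it to /etc/fstab. A
minimal sketch, assuming the device keeps the name /dev/sdb across reboots
(using the UUID reported by blkid is more robust):
echo '/dev/sdb /ndb_data xfs defaults,_netdev 0 2' | sudo tee -a /etc/fstab
The _netdev option makes sure the mount waits until the iSCSI device is
available at boot.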

In Oracle Linux 7.3 the firewall is set up to block most traffic by default.
So in order to make it possible to communicate on the appropriate ports
we open up ports 1186 (NDB management server port), 3306 (MySQL Server
port), 11860 (MySQL Cluster data node port) and 33060 (MySQLX port).

This is performed by the command
sudo firewall-cmd --zone=public --permanent --add-port=1186/tcp
for each of the ports, followed by
sudo firewall-cmd --reload
to ensure that the new firewall rules are used.
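
The same thing can be expressed as a small loop over the four ports, which
is convenient when scripting:
for port in 1186 3306 11860 33060; do
  sudo firewall-cmd --zone=public --permanent --add-port=${port}/tcp
done
sudo firewall-cmd --reload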

To prepare the MySQL repo for MySQL Cluster we have copied over
the mysql57-community-release-el7-10.noarch.rpm file, which gives
access to the MySQL repos.

So we issue the command to install this into Oracle Linux.
Next we install yum-utils to be able to use yum-config-manager,
rather than editing yum files, to disable the MySQL 5.7 repo and
enable the MySQL Cluster 7.5 repo.

Now I put all of this into a script that in my case looks like this.

#!/bin/bash
# Open the ports used by MySQL Cluster in the local firewall
sudo firewall-cmd --zone=public --permanent --add-port=1186/tcp
sudo firewall-cmd --zone=public --permanent --add-port=3306/tcp
sudo firewall-cmd --zone=public --permanent --add-port=11860/tcp
sudo firewall-cmd --zone=public --permanent --add-port=33060/tcp
sudo firewall-cmd --reload
# Create an XFS file system on the block volume and mount it on /ndb_data
sudo mkfs.xfs -d su=32k,sw=6 /dev/sdb
sudo mkdir /ndb_data
sudo mount /dev/sdb /ndb_data
sudo chown opc /ndb_data
sudo chgrp opc /ndb_data
# Install the MySQL repo definition and switch to the MySQL Cluster 7.5 repo
sudo rpm -ivh mysql57-community-release-el7-10.noarch.rpm
sudo yum install -y yum-utils
sudo yum-config-manager --disable mysql57-community
sudo yum-config-manager --enable mysql-cluster-7.5-community
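
After running the script one can verify that only the MySQL Cluster 7.5
repo is enabled with something like:
yum repolist enabled | grep mysql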

Now the script to prepare the data node VMs also adds the line:
sudo yum install -y mysql-cluster-community-data-node

The script to prepare the NDB management server VM adds the line:
sudo yum install -y mysql-cluster-community-management-server

As usual the installation of the MySQL Server VM is a bit more
involved. Oracle Linux 7.3 comes with MySQL 5.6 preinstalled, and
postfix depends on the MySQL Server packages. In addition we depend
on the EPEL (Extra Packages for Enterprise Linux) repository being
accessible.

So the script here needs to add the following lines:
sudo rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-9.noarch.rpm
sudo yum remove -y mysql-community-{server,client,common,libs}
sudo yum install -y mysql-cluster-community-{server,client,common,libs}

This means that postfix is no longer installed, but this should be
ok since it is a very specialised VM we have prepared.

Now we write the config.ini file for this cluster. We set ServerPort to
11860 to ensure that all communication to the data nodes goes through the
same port that we have opened in the firewall.

We create 4 mysqld instances to give the possibility to run a few tools in
parallel to the MySQL Server.

One block volume has a maximum throughput of 1500 IOPS (4 kByte IOs),
so we configure the MinDiskWriteSpeed and MaxDiskWriteSpeed parameters to ensure
that we don't oversubscribe the disk resources for checkpointing.
If more bandwidth is needed one should add more block volumes. Also,
2 TByte volumes have higher bandwidth.

Even the smallest VM in the Oracle Cloud has 7 GByte of memory, so we set up
the instance with 4 GByte of data memory and 500 MByte of index memory.

[ndb_mgmd]
nodeid=49
hostname=10.0.0.4
datadir=/ndb_data

[ndbd default]
NoOfReplicas=2
ServerPort=11860
datadir=/ndb_data
MinDiskWriteSpeed=2M
MaxDiskWriteSpeed=5M
MaxDiskWriteSpeedOtherNodeRestart=5M
MaxDiskWriteSpeedOwnRestart=5M
DataMemory=4G
IndexMemory=500M

[ndbd]
nodeid=1
hostname=10.0.0.5

[ndbd]
nodeid=2
hostname=10.0.0.6

[mysqld]
nodeid=53
[mysqld]
nodeid=54
[mysqld]
nodeid=55
[mysqld]
nodeid=56

Finally we also need a configuration file for the MySQL Server.

[mysqld]
ndbcluster
datadir=/ndb_data/data
socket=/ndb_data/mysql.sock
log-error=/ndb_data/error.log
pid-file=/ndb_data/mysqld.pid

We store the configuration file for the cluster in the NDB management
server VM in /ndb_data/config.ini. We store the MySQL Server configuration
file in the MySQL Server VM in /ndb_data/my.cnf.

Obviously for a real-world use case more care has to be put into setting
up the configuration, but this will be sufficient for a reasonable demo
case.

Now it is time to start things up.
First we start the management server in the NDB management server VM:
ndb_mgmd -f /ndb_data/config.ini --initial
--configdir=/ndb_data --ndb-nodeid=49

Next we start the data nodes in the data node VMs. We use ndbd here since
it is the most efficient choice in a very small VM; when going beyond
2 CPUs one should use ndbmtd instead.
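
If ndbmtd were used, one would typically also configure the number of
execution threads in the [ndbd default] section, for example (the value 4
is only an illustration and should be matched to the available CPUs):
MaxNoOfExecutionThreads=4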

ndbd --ndb-connectstring=10.0.0.4 --ndb-nodeid=1
and
ndbd --ndb-connectstring=10.0.0.4 --ndb-nodeid=2

Finally we can monitor that the cluster has started using
ndb_mgm --ndb-connectstring=10.0.0.4
and issue the command show, and then quit to exit the NDB management client.
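
The same check can be done non-interactively, which is handy in scripts,
since ndb_mgm accepts a command through the -e option:
ndb_mgm --ndb-connectstring=10.0.0.4 -e show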

Now we bootstrap the MySQL Server with
mysqld --defaults-file=/ndb_data/my.cnf --initialize-insecure
and next we start the MySQL Server:
mysqld --defaults-file=/ndb_data/my.cnf --ndb-connectstring=10.0.0.4

and we're done.

We can now connect to the MySQL Server from the client with
mysql --socket=/ndb_data/mysql.sock --user=root

and start doing whatever database commands we want to test.
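
As a quick smoke test that tables actually end up in NDB, one can create a
table with the NDBCLUSTER engine; the database and table names below are
just examples:
mysql --socket=/ndb_data/mysql.sock --user=root -e "
CREATE DATABASE IF NOT EXISTS demo;
CREATE TABLE demo.t1 (id INT PRIMARY KEY, val VARCHAR(32)) ENGINE=NDBCLUSTER;
INSERT INTO demo.t1 VALUES (1, 'hello');
SELECT * FROM demo.t1;"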

Later I will look at a bit more advanced tests of MySQL Cluster in the
Oracle cloud and execute some benchmarks to see how it works.

Thursday, April 13, 2017

Setting up a MySQL Cluster in the Cloud

In my previous post I covered how to install MySQL Cluster on
a Red Hat VM.

In order to run MySQL Cluster in a cloud environment I use 4 VMs.
This is sufficient for a demo or proof of concept of MySQL Cluster.
For a production environment it is likely that one would want at least
6 VMs. It is generally a good idea to use one VM per process.
This has the advantage that one can perform an upgrade in the
order suitable for upgrading MySQL Cluster.

I used one VM to handle the NDB management server. When this
node is down no new nodes can join the cluster, but living nodes
will continue to operate as normal. So in a demo environment it
is quite sufficient to use 1 NDB management server. In a production
environment it is quite likely that one wants at least two
management server VMs.

I used two VMs for data nodes. This is the minimal setup for a
highly available MySQL Cluster. Since MySQL Cluster is designed
for high availability it makes sense to use a replicated setup even in
a demo environment.

I used one VM for the MySQL Server, the MySQL client and the
NDB management client. This is good enough for a demo, but in
a production environment it is likely that one wants at least two
MySQL Server VMs for failover handling but also for load
balancing, and normally probably even more than two MySQL
Server VMs.

The first step is to create these 4 small instances. This is very straightforward
and I used the Red Hat 7.3 Linux OS.

Each instance comes with three addresses. One is a global IP address
used to SSH into the instance, next there is the public hostname of
the machine and finally there is a private IP address.

The public IP address is used to connect with SSH; it isn't intended for
communication between the VMs inside the cloud. It is possible to use
the public hostnames for this, but it is better to use the private IP
addresses for communication inside the cluster. The reason is that otherwise
the cluster also depends on a DNS service for its availability.

I will call PrivIP_MGM the private IP address of the NDB management
server VM, PrivIP_DN1 the private IP address of the first NDB data node,
PrivIP_DN2 that of the second data node and PrivIP_Server that of the
MySQL Server VM.

On each instance you log in as the user ec2-user, so I simply created a directory
called ndb under /home/ec2-user as the data directory on all VMs.

So the first step towards setting up the cluster is to create a configuration
file for the cluster called config.ini and place this in
/home/ec2-user/ndb/config.ini on the NDB management server VM.

Here is the content of this file:
[ndb_mgmd]
nodeid = 49
hostname=PrivIP_MGM
datadir=/home/ec2-user/ndb

[ndbd default]
NoOfReplicas=2
ServerPort=11860
datadir=/home/ec2-user/ndb

[ndbd]
nodeid=1
hostname=PrivIP_DN1

[ndbd]
nodeid=2
hostname=PrivIP_DN2

[mysqld]
nodeid=53
[mysqld]
nodeid=54
[mysqld]
nodeid=55

We set ServerPort to 11860 to ensure that we always use the same port number
to connect to the NDB data nodes. Otherwise it is hard to set up a
secure environment with firewalls.

A good rule for node ids is to use 1 through 48 for data nodes, 49 through
52 for NDB management servers and 53 to 255 for API nodes and MySQL
Servers. This will work in almost all cases.

In addition we need to create a configuration file for the MySQL Server.
In this file we have the following content:

[mysqld]
ndbcluster
datadir=/home/ec2-user/ndb/data
socket=/home/ec2-user/ndb/mysql.sock
log-error=/home/ec2-user/ndb/mysqld.log
pid-file=/home/ec2-user/ndb/mysqld.pid
port=3316

We provide a socket file, a log file, a pid file and a data directory
to house the data of the MySQL Server under the ndb data directory.

The reason I use port 3316 is that I wanted to avoid any problems
with other MySQL Server installations. Oftentimes there is already
a MySQL Server installed for various purposes on a machine. So
I decided to make it easy and use port 3316, this is absolutely not
a necessity, more out of laziness on my part.

Before we move on to start the cluster, there is still an important
part missing before we are ready to go.

The Linux instances are not going to be able to communicate with
each other unless we set them up for that.

To set up Linux instances to communicate with each other one
uses a concept called a Security Group. I created a special security
group that I called NDB Cluster, which opened up the following ports
for TCP traffic: 1186 (NDB management server port), 3306
(MySQL Server port), 3316 (extra MySQL Server port),
8081 (MySQL Cluster Auto-Installer port), 11860
(MySQL Cluster data node port) and 33060 (MySQLX port).
It is important to open up all those ports for both inbound
and outbound traffic.
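
The ec2-user login suggests an EC2 environment, so for automation the same
ingress rules could also be created from the command line. A sketch using
the AWS CLI, where SG_ID and VPC_CIDR are placeholders for the security
group id and the private address range of the VMs:
for port in 1186 3306 3316 8081 11860 33060; do
  aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
    --protocol tcp --port ${port} --cidr "$VPC_CIDR"
done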

Now we are ready to start things up.

First we start the NDB management server in its VM
using the command:
ndb_mgmd -f /home/ec2-user/ndb/config.ini --initial
--configdir=/home/ec2-user/ndb
--ndb-nodeid=49

Next I started the two data nodes in their VMs using the command:
ndbd --ndb-connectstring=PrivIP_MGM --ndb-nodeid=1
and
ndbd --ndb-connectstring=PrivIP_MGM --ndb-nodeid=2

Now after a short time we have a working cluster.

So the next step is to start up the MySQL Server.
This is done in two steps. The first step is a bootstrap step to
create the MySQL data directory and create the first user.
This is just a demo, so we will use the simple insecure
method. In a production environment one should use a
proper initialisation that sets passwords.

The first command is:
mysqld --defaults-file=/home/ec2-user/ndb/my.cnf --initialize-insecure

This command will bootstrap the server. Next we start the MySQL Server
using the command:

mysqld --defaults-file=/home/ec2-user/ndb/my.cnf --ndb-connectstring=PrivIP_MGM

Now we have a working setup with a management server, two data nodes
and a MySQL Server up and running. So now one can connect a MySQL client
to the MySQL Server and perform more experiments or run some other MySQL
application. Obviously, to use a more realistic application it is likely that
a configuration with more memory, more disk space and more CPU resources
is needed.
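
For example, from the MySQL Server VM the MySQL client can connect through
the socket file defined in my.cnf (or over TCP on port 3316):
mysql --socket=/home/ec2-user/ndb/mysql.sock --user=root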

Installing MySQL Cluster 7.5.6 on Red Hat derivatives

MySQL Cluster 7.5.6 comes with a nice new feature. It is now possible
to install MySQL Cluster using the MySQL repos. I made an exercise today
in setting up a cluster in the cloud using these new
MySQL repos. This blog describes the work to install MySQL Cluster
on the VMs.

I set up the cluster using 4 different VMs. I used the standard free
t1.micro instances; the aim was to test an installation, not to actually
make anything useful work. For that one would most likely want a bit
fatter VMs. I used Red Hat 7.3 as the OS for the VMs.

So after creating the 4 VMs I SSHed into each one. The first step
needed is to ensure that the Linux instance knows about the MySQL repos.
I went to http://dev.mysql.com/downloads/repo/yum to download the
small RPM needed for this. This file is called
mysql57-community-release-el7-10.noarch.rpm. After downloading
this to my machine I used scp to copy the file over to each of the 4
VMs.
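
The copy step looks something like the following, where the key file name
and the public IP address are placeholders:
scp -i my-key.pem mysql57-community-release-el7-10.noarch.rpm ec2-user@PUBLIC_IP: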

Next I ran the command
sudo rpm -ivh mysql57-community-release-el7-10.noarch.rpm
in each VM.

Now by default this activates packages from MySQL 5.7. It does
however also have packages available for MySQL 5.5, 5.6, 8.0 and
MySQL Cluster 7.5. In order to use the correct version it is
necessary to edit the file
/etc/yum.repos.d/mysql-community.repo

There is a section in this file for MySQL 5.7 called
mysql57-community, this section contains a variable called
enabled that one should change from 1 to 0. Similarly the
section mysql-cluster-7.5-community has a variable enabled
that one should change from 0 to 1.

Now save the file and the MySQL repo setup is ready for installing
MySQL Cluster binaries. The file needs to be edited on all VMs
using sudo and an editor of choice (vi for me).
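
As an alternative to editing the file by hand, the same switch can be made
with yum-config-manager from the yum-utils package:
sudo yum install -y yum-utils
sudo yum-config-manager --disable mysql57-community
sudo yum-config-manager --enable mysql-cluster-7.5-community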

I installed one VM as the NDB management server and two VMs as
NDB data node VMs. Into the fourth VM I installed the MySQL Server
and client packages.

Installing in the VM for the NDB management server is now easy.
The command is:
yum install mysql-cluster-community-management-server

Installing in the VMs for the NDB data nodes is also very easy.
The command is:
yum install mysql-cluster-community-data-node

Installing the MySQL Server version is a bit more involved since
the MySQL Cluster server package depends on some Perl packages
that are not part of a standard Red Hat installation.

So on the VM used for MySQL Server we first need to install
support for installing EPEL packages (Extra Packages for
Enterprise Linux). This is performed through the command:
rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-9.noarch.rpm

After installing this it is straightforward to also install the MySQL
Server package. This package will also download the MySQL client
package that also contains the NDB management client package.
The command is:
yum install mysql-cluster-community-server
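
To verify which MySQL Cluster packages ended up on each VM one can check
the installed RPMs with something like:
rpm -qa | grep -i mysql-cluster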