How to use Oktawave infrastructure to provide high availability for MySQL databases?
Posted by Pomoc Oktawave on 28.10.2014 21:21

In Oktawave you can independently configure any database replication topology, and in some cases our infrastructure can help you with this, for example through the load balancer and the Autoscaler.

 

1. Master-Slave(s) replication  

In the simplest case, you can configure one node that handles both read and write operations (MASTER) and one node that handles only read operations (SLAVE).

 


This model can be extended with additional slaves, which can then be automated using the Autoscaler.


 


1.1. Assumptions

For further considerations we assume that we have two database instances - db01, which will act as the master, and db02, which will act as a slave. For convenience, we use templates based on the popular MySQL fork - Percona Server. All instances have a network interface with a public address to communicate with the load balancer (LB) and an OPN interface over which replication will take place.

We also assume that communication within the OPN will use the private subnet 192.168.0.0/16. Addressing can be static or can use a pre-prepared DHCP server. Firewall rules should also allow traffic to TCP port 3306 within the OPN and to/from the load balancer on the public interface:

 

-A INPUT -i eth1 -p tcp --dport 3306 -j ACCEPT
-A INPUT -i eth0 -s 195.149.198.1 -p tcp --dport 3306 -j ACCEPT

 

The result of our work will be the following structure, in terms of the services used.


 


1.2. Initial Configuration

The default MySQL configuration is usually not sufficient for correct replication. Below is a sample configuration file (saved as /etc/mysql/conf.d/replication.cnf) that you may need to adapt to your requirements.


[mysqld] 
server-id = 1 
report-host = db01 
log-bin = db01-bin.log 

query-cache-size=0
query-cache-type=0

binlog-format = ROW 
binlog-row-image = minimal 
log-slave-updates = true 
gtid-mode = on 
enforce-gtid-consistency = true 
master-info-repository = TABLE 
relay-log-info-repository = TABLE 
sync-master-info = 1 
binlog-checksum = CRC32 
master-verify-checksum = 1 
slave-sql-verify-checksum = 1 
binlog-rows-query-log-events = 1
report-port = 3306 
sync-binlog = 1 
innodb-flush-log-at-trx-commit = 1 

 

The above configuration should be applied to all instances involved in replication. Please note that server-id and report-host must differ between instances for replication to work properly. For this reason, and also to keep the configuration readable, we use the db${server-id} convention in the values of the report-host and log-bin variables.
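Following this convention, the corresponding entries on db02 would look like this:

server-id = 2
report-host = db02
log-bin = db02-bin.log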

Do not forget about the server-uuid variable, which is generated automatically by MySQL and stored in /var/lib/mysql/auto.cnf. This file should be removed; after removal and a restart of the MySQL service, it will be recreated with a new server-uuid value. Replication cannot operate correctly when two MySQL server instances participating in it share the same server-uuid value (we will verify this right after restarting MySQL below).

After saving the new configuration, you must restart the MySQL server.

 

service mysql restart
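
After the restart it is also worth verifying that each instance reports a unique server-uuid; a quick sketch, assuming the mysql client can connect locally:

mysql -e "SHOW VARIABLES LIKE 'server_uuid';"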


1.3. Setting up replication

On the instance that will act as the master, create a user that will be used for replication.

 

@db01:
mysql> CREATE USER replicator;
mysql> GRANT REPLICATION SLAVE ON *.* TO replicator@'192.168.%' IDENTIFIED BY 'password';

On the instance that will act as a slave, we need to point it at the instance acting as the master.


@db02:
mysql> CHANGE MASTER TO MASTER_HOST='192.168.0.1', MASTER_USER='replicator',
MASTER_PASSWORD='password', MASTER_AUTO_POSITION=1;

To start replication, on instance db02 issue the following command.


@db02:
mysql> START SLAVE;

At this point, replication should be up and running, which can be verified by issuing the following command on instance db02.


@db02:
mysql> SHOW SLAVE STATUS;

For correctly configured replication you should see the messages Waiting for master to send event (the I/O thread) and Slave has read all relay log; waiting for the slave I/O thread to update it (the SQL thread), depending on whether some data has already been replicated.
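
To quickly inspect only the key replication fields from the shell, a sketch like the following can be used (the field names are those reported by SHOW SLAVE STATUS):

@db02:
mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'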

On instance db01 we can check the status of the master and the list of slaves replicating data from it using the following commands.

 

@db01:
mysql> SHOW MASTER STATUS;
mysql> SHOW SLAVE HOSTS;



1.4. Semi-synchronous Replication

If all the steps presented so far have been performed correctly, we have two instances - db01 and db02 - with working replication. This is asynchronous replication, i.e. the data is first stored on instance db01 and then forwarded to instance db02 for replication; there is no guarantee that db02 manages to replicate the data before a possible failure of db01.

To increase reliability, we can use semi-synchronous replication, in which a commit is confirmed by the master only after at least one slave has received a copy of the data.

To enable semi-synchronous replication, you must install the appropriate plugins.

 

@db01:
mysql> INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so'; 

@db02:
mysql> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';

 

Then enable semi-synchronous replication mode.


@db01:
mysql> SET GLOBAL rpl_semi_sync_master_enabled = ON; 

@db02:
mysql> SET GLOBAL rpl_semi_sync_slave_enabled = ON; 
mysql> STOP SLAVE IO_THREAD; START SLAVE IO_THREAD;

Check the status of replication.


@db01:
mysql> SHOW STATUS LIKE 'Rpl_semi_sync_master_status'; 
+-----------------------------+-------+ 
| Variable_name               | Value | 
+-----------------------------+-------+ 
| Rpl_semi_sync_master_status | ON    | 
+-----------------------------+-------+ 
1 row in set (0.00 sec) 

mysql> SHOW STATUS LIKE 'Rpl_semi_sync_master_clients'; 
+------------------------------+-------+ 
| Variable_name                | Value | 
+------------------------------+-------+ 
| Rpl_semi_sync_master_clients | 1     | 
+------------------------------+-------+ 
1 row in set (0.00 sec) 

If we decide to use this type of replication permanently, we should add the appropriate startup options to the configuration file /etc/mysql/conf.d/replication.cnf.


@db01:
rpl_semi_sync_master_enabled = ON 

@db02:
rpl_semi_sync_slave_enabled = ON 


1.5. Configure the load balancer

Once replication is properly configured, we still need a mechanism that routes queries modifying the database only to the MASTER instance, while SELECT queries can go to any of the created database instances. The Oktawave load balancer will help us with this. The following figure shows the role of the LB in our solution.

 

 

We configure a container with a load balancer as follows.

 


Click the Add container button.

Select the instances that should be in the container and choose the Load balancer option.

Select the MySQL service type (3306) and db01 as the master instance; the balancing algorithm can be arbitrary. For the session persistence option select No Persistence. It is also worth enabling the Healthcheck option, which checks the state of each instance involved in replication every second, so that queries are not directed to an instance that is not working properly. It is useful; however, because of the way the LB performs it (a TCP connection to port 3306), it triggers the MySQL mechanism that blocks hosts making failed (according to MySQL) connection attempts. To counteract this behaviour and still use the LB to distribute traffic to MySQL databases, periodically make successful connections through the LB, which resets the per-host counter of failed connection attempts; its limit can also be increased with the max-connect-errors option in /etc/mysql/my.cnf, as in the sketch below.
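
A short sketch of both workarounds (the max-connect-errors value below is only an example):

# /etc/mysql/my.cnf, [mysqld] section - raise the limit of failed attempts per host
max-connect-errors = 100000

# Reset the per-host counters if an instance has already started blocking the LB
mysql -e "FLUSH HOSTS;"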

Now that we have a properly configured container with a load balancer for MySQL, note down the address of the load balancer and the username and password for the MySQL service. This data will be used to configure the application that will use our solution. For example, the entries in the WordPress configuration file (wp-config.php) would look as follows:

 

// ** MySQL settings - You can get this info from your web host ** // 
/** The name of the database for WordPress */ 
define('DB_NAME', 'wordpress'); 

/** MySQL database username */ 
define('DB_USER', '00000686_Cs3p8BN'); 

/** MySQL database password */ 
define('DB_PASSWORD', 'random_password'); 

/** MySQL hostname */ 
define('DB_HOST', '195.149.198.75');

To start using the database, create it and assign the appropriate permissions.


@db01:
mysql> CREATE DATABASE wordpress;
Query OK, 1 row affected (0.00 sec)

mysql> GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP ON `wordpress`.* 
TO '00000686_Cs3p8BN'@'195.149.198.1' IDENTIFIED BY 'random_password';
Query OK, 0 rows affected (0.02 sec)

Because we replicate all databases, the permissions for them (stored in the mysql database) will also be replicated to the slaves.
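
To confirm this, we can check on the slave that the newly created account has already been replicated (a quick sketch):

@db02:
mysql> SELECT user, host FROM mysql.user WHERE user = '00000686_Cs3p8BN';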

In this way we have obtained database replication for our services, together with query load balancing using the Oktawave LB.


1.6. Using the horizontal Autoscaler

In web applications, the number of read queries to the database usually far exceeds the number of statements that modify its contents. To ensure that we can scale with the number of such queries, we will build on the solution presented so far and use the horizontal Autoscaler in our container with the load balancer.

We must remember to provide each instance running in the container with a unique IP address for communication in the OPN, as well as unique MySQL variables. We must therefore prepare an instance that will serve as a template from which the Autoscaler can clone new instances without our intervention, so that they can start automatically.

We can ensure the uniqueness of IP addresses in the OPN by using a DHCP server. An alternative is link-local addressing, described in RFC 3927. The only instance in our scenario that needs a specific address is db01, where in /etc/network/interfaces we set a static address for eth1 from the subnet defined in the RFC, for example 169.254.1.1 (see the sketch below), and then restart the interface.
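
A minimal /etc/network/interfaces stanza for db01 could look like this (the address is only an example from the 169.254.0.0/16 range):

auto eth1
iface eth1 inet static
    address 169.254.1.1
    netmask 255.255.0.0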

 

ifdown eth1; ifup eth1

On the instance that will serve as the template for cloning, we must install avahi-autoipd.


apt-get install avahi-autoipd

Then replace the line "iface eth1 inet dhcp" with "iface eth1 inet ipv4ll" in /etc/network/interfaces and restart the interface; the output of the command 'ip addr show' should then contain a line like the following.


inet 169.254.10.47/16 brd 169.254.255.255 scope link eth1:avahi 
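
When preparing the template image, this edit can also be scripted; a simple sketch, assuming the default Debian location of the interfaces file:

sed -i 's/iface eth1 inet dhcp/iface eth1 inet ipv4ll/' /etc/network/interfaces
ifdown eth1; ifup eth1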

In this way we have provided unique addressing for our database instances in the OPN without using a DHCP server.

In fact, the address also gives us a unique identifier that can be used in the MySQL configuration. Let's now generate the MySQL variables from the instance IP address using a script.


#!/usr/bin/env bash 

 

MYSQL_REPLICATION_CNF='/etc/mysql/conf.d/replication.cnf'

 

SLAVE_IP=$(ip addr show eth1 | sed -e '/169.254/!d' -e 's/^.*inet //' -e 's/\/16.*$//') 
echo "SLAVE_IP: ${SLAVE_IP}" 
SLAVE_ID=$(echo ${SLAVE_IP} | sed -e 's/\.\([0-9][0-9]\)$/\.0\1/;s/\.\([0-9]\)$/\.00\1/' \
  -e 's/169.254.//' -e 's/\.//' -e 's/^0*//')
echo "SLAVE_ID: ${SLAVE_ID}" 

sed -i "s/^server-id = .*/server-id = ${SLAVE_ID}/" $MYSQL_REPLICATION_CNF 
sed -i "s/^report-host = .*/report-host = db${SLAVE_ID}/" $MYSQL_REPLICATION_CNF 
sed -i "s/^log-bin = .*/log-bin = db${SLAVE_ID}-bin.log/" $MYSQL_REPLICATION_CNF 

# Force server-uuid regeneration 
rm /var/lib/mysql/auto.cnf 

Let's save this script as /usr/local/bin/prepare_mysql_slave. It must run before the MySQL service starts, so that the values it generates are actually used. We will therefore write an init script that takes care of this and save it as /etc/init.d/prepare_mysql_slave.


#! /bin/sh 
### BEGIN INIT INFO 
# Provides:             prepare_mysql_slave 
# Required-Start: 
# Required-Stop:        mysql 
# Default-Start:        2 3 4 5 
# Default-Stop:         0 1 6 
# Short-Description:    Prepare MySQL (Percona) replication slave 
# Description: 
### END INIT INFO 

case "$1" in 
start|"") 
/usr/local/bin/prepare_mysql_slave 
;; 
restart|reload|force-reload) 
: 
;; 
stop) 
: 
;; 
*) 
: 
;; 
esac 

exit 0 

We need to ensure that both scripts are executable.


chmod +x /usr/local/bin/prepare_mysql_slave
chmod +x /etc/init.d/prepare_mysql_slave

You should also add the prepare_mysql_slave script to the appropriate runlevels, making sure that it will be launched before the MySQL service.


update-rc.d prepare_mysql_slave start 15 2 3 4 5 . stop 15 0 1 6 . 
Adding system startup for /etc/init.d/prepare_mysql_slave ... 
/etc/rc0.d/K15prepare_mysql_slave -> ../init.d/prepare_mysql_slave 
/etc/rc1.d/K15prepare_mysql_slave -> ../init.d/prepare_mysql_slave 
/etc/rc6.d/K15prepare_mysql_slave -> ../init.d/prepare_mysql_slave 
/etc/rc2.d/S15prepare_mysql_slave -> ../init.d/prepare_mysql_slave 
/etc/rc3.d/S15prepare_mysql_slave -> ../init.d/prepare_mysql_slave 
/etc/rc4.d/S15prepare_mysql_slave -> ../init.d/prepare_mysql_slave 
/etc/rc5.d/S15prepare_mysql_slave -> ../init.d/prepare_mysql_slave

An instance prepared in this way can serve as the source for cloning new instances in the container. After cloning, each new instance will automatically join the replication with a unique server-id.



2. Multi-Master Replication

The load balancer can also be used with a replication model in which multiple instances handle both read and write operations. Consider a ring consisting of three nodes, according to the example shown below.



2.1. Assumptions

We use a static address in OPN.


db01mm: 10.0.0.1
db02mm: 10.0.0.2
db03mm: 10.0.0.3

Replication will occur between pairs of instances, according to the following scheme.


db01mm -> db02mm
db02mm -> db03mm
db03mm -> db01mm

Because of the difficulties involved and the large number of configuration options for such a setup, we do not cover the use of the classic horizontal Autoscaler with Multi-Master replication in this tutorial.


2.2. Initial Configuration

The replication.cnf file from the previous chapter will be used in this example as well (remember that server-id, report-host and log-bin must still differ between instances). Because replication takes place between pairs of instances, if you want to use semi-synchronous replication you must remember to install both plugins - rpl_semi_sync_master and rpl_semi_sync_slave - on each host and enable them in the configuration.
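
A sketch of the per-host setup, based on section 1.4 (run on every node in the ring, and persist the corresponding rpl_semi_sync_*_enabled options in replication.cnf):

mysql> INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
mysql> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
mysql> SET GLOBAL rpl_semi_sync_master_enabled = ON;
mysql> SET GLOBAL rpl_semi_sync_slave_enabled = ON;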



2.3. Setting up replication

We must perform the following steps in sequence.

 

@db01mm:
GRANT REPLICATION SLAVE ON *.* TO replicator@'10.0.0.2' IDENTIFIED BY 'random_password';

@db02mm:
GRANT REPLICATION SLAVE ON *.* TO replicator@'10.0.0.3' IDENTIFIED BY 'random_password';
CHANGE MASTER TO MASTER_HOST='10.0.0.1', MASTER_USER='replicator',
MASTER_PASSWORD='random_password', MASTER_AUTO_POSITION=1, IGNORE_SERVER_IDS=(2);
START SLAVE;

@db03mm:
GRANT REPLICATION SLAVE ON *.* TO replicator@'10.0.0.1' IDENTIFIED BY 'random_password';
CHANGE MASTER TO MASTER_HOST='10.0.0.2', MASTER_USER='replicator',
MASTER_PASSWORD='random_password', MASTER_AUTO_POSITION=1, IGNORE_SERVER_IDS=(3);
START SLAVE;

@db01mm:
CHANGE MASTER TO MASTER_HOST='10.0.0.3', MASTER_USER='replicator',
MASTER_PASSWORD='random_password', MASTER_AUTO_POSITION=1;
START SLAVE;

To check the replication status, we can use the commands known from the chapter on replication in the master-slave model.


2.4. Configure the load balancer

In this scenario, the load balancer will distribute traffic evenly between the three instances involved in replication.

 

 

In the Oktawave panel we configure the load balancer as follows.

 


The result is the IP address of the load balancer, which will serve as the database server address in our application. We can set the database access credentials ourselves by creating the database and granting user rights on any of the instances involved in replication.
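
For example, analogously to section 1.5 (the database name, the user name and the load balancer address below are only examples):

@db01mm:
mysql> CREATE DATABASE wordpress;
mysql> GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP ON `wordpress`.*
TO 'app_user'@'195.149.198.1' IDENTIFIED BY 'random_password';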


3. Alternative: using Percona XtraDB Cluster

Instead of manually setting up replication in the Multi-Master model, as described in sections 2.2-2.3, we can use a ready-made solution - Percona XtraDB Cluster (PXC), a distribution of Galera Cluster, which is easier to set up and maintain. With a PXC cluster it is also possible to take advantage of the horizontal Autoscaler.

3.1 Assumptions

The minimum recommended PXC configuration assumes at least three instances, so the basis of the cluster are the instances db01mm, db02mm and db03mm, created from a Percona OCI template. These instances are connected through the OPN and have static IP addresses, while any additional instances obtain their addresses via DHCP or use link-local addresses. We should also allow unrestricted communication between the instances within the OPN, using the appropriate iptables rules.

 

-A INPUT -i eth1 -p tcp -m multiport --dports  3306,4444,4567,4568 -j ACCEPT

3.2 Initial Configuration

To install Percona XtraDB Cluster, run the following commands:

 

apt-get update
apt-get install percona-xtrabackup percona-xtradb-cluster-full-56

 

We will start by preparing the first instance of our cluster. To do this, stop the MySQL server service if it is running.


service mysql stop

Below is a proposed PXC configuration, which should be saved to the file /etc/mysql/conf.d/galera.cnf.


[mysqld] 

binlog-format=ROW 
default_storage_engine=InnoDB 
innodb-autoinc-lock-mode=2 

query-cache-size=0 
query-cache-type=0 

wsrep-provider=/usr/lib/libgalera_smm.so 
wsrep-cluster-address=gcomm://169.254.1.1,169.254.1.2,169.254.1.3 
wsrep-node-address=169.254.1.1
wsrep-sst-method=xtrabackup-v2 
wsrep-cluster-name=GALERA 
wsrep-sst-auth="sst_service:random_password"

We also need to remember to manually change the wsrep-node-address parameter on the remaining statically addressed nodes. The wsrep-sst-auth parameter contains the credentials of the database user that is used for data replication (state transfer) within the cluster.
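
For example, on the remaining nodes only this line differs (addresses as used in the galera.cnf above):

# db02mm:
wsrep-node-address=169.254.1.2

# db03mm:
wsrep-node-address=169.254.1.3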


3.3 Initialization of cluster

Now that we have properly prepared instances, we can initialize the cluster on one of them. In our case it will be db01mm.

 

# service mysql bootstrap-pxc
* Bootstrapping Percona XtraDB Cluster database server mysqld

The first PXC node has been successfully initialized. Now log in to the database and issue the following queries to create the user used to replicate data between cluster nodes and grant it the appropriate permissions.


CREATE USER 'sst_service'@'localhost' IDENTIFIED BY 'random_password';
GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sst_service'@'localhost';
FLUSH PRIVILEGES;

On the remaining nodes we only have to start the MySQL server; each node will then automatically synchronize with the others and join the cluster.


# service mysql start
* Starting MySQL (Percona XtraDB Cluster) database server mysqld
* SST in progress, setting sleep higher mysqld

3.4 Using the horizontal Autoscaler

As in section 1.6, we need to ensure the correct configuration of each instance that will belong to the cluster. In the case of PXC, we only need to set the IP address in the configuration file. If you are using link-local addresses, you can apply a solution analogous to the one described in section 1.6, using the following script.


#!/usr/bin/env bash 
MYSQL_GALERA_CNF='/etc/mysql/conf.d/galera.cnf' 

MEMBER_IP=$(ip addr show eth1 | sed -e '/169.254/!d' -e 's/^.*inet //' -e 's/\/16.*$//') 

sed -i "s/^wsrep-node-address=.*/wsrep-node-address=${MEMBER_IP}/" $MYSQL_GALERA_CNF 

# Force server-uuid regeneration 
rm /var/lib/mysql/auto.cnf

We choose an instance prepared in this way as the cloning source in the Autoscaler. In a cluster configured like this, instances can be freely added and removed by the Autoscaler as the load on the service/application changes.
