Getting started with MySQL Group Replication on Ubuntu with MySQL Sandbox

Welcome to the next topic dedicated to the Group Replication plugin. Today's goal is to get started with Group Replication. As you remember, we set up our environment using MySQL Sandbox on Ubuntu 14.04 and we also compiled the plugin with MySQL 5.7.8-rc on Ubuntu 14.04. Please refer to these topics respectively:

  1. MySQL Sandbox creating test environment
  2. Compiling MySQL Group replication plugin

So we already have our 3 nodes of MySQL 5.7.8-rc + the group replication plugin.
Before starting group replication we need to play with Corosync. Here is a dedicated article about it -> Corosync Guide

I want to talk about some issues you may run into and help you resolve these problems.
The first thing is what happens when you try to start group replication without the corosync daemon running, or while corosync is stopped:

mysql> START GROUP_REPLICATION;
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.

The error suggests that you will find some details in the error log, but you won't 🙂
So just be sure to start the corosync daemon before getting your hands on group replication.
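If in doubt, check that the daemon is actually up before issuing START GROUP_REPLICATION; a quick sanity check, assuming corosync is installed as a regular system service on Ubuntu 14.04:

# start corosync if it is not running yet
sudo service corosync start

# print the status of the local corosync ring
sudo corosync-cfgtool -s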

The second thing is to save the correct user in the /etc/corosync/uidgid.d directory.
For example, if you start the MySQL service as some Linux user such as 'john', you must put this user into a file there. For me, it is the 'sh' local user on my Ubuntu:

sh@shrzayev:~$ cat /etc/corosync/uidgid.d/users 
uidgid {
 uid: sh
 gid: sh
}

If you save the wrong user you will likely see:

node1 [localhost] {msandbox} ((none)) > start group_replication;
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.

But again there will be no entry in the error log indicating the exact issue.
Just be sure to save the exact user that mysqld runs as for corosync.
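A simple way to double-check which OS user actually runs mysqld (and therefore which uid/gid corosync must allow) is to ask ps; a quick sketch:

# show the OS user that owns the running mysqld process
ps -o user= -C mysqld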

These tips came from Pedro Gomes, a replication developer at Oracle.
For further discussion please refer to the related report -> #78619

Let's assume you have successfully completed the Corosync setup.
Now it is time to configure the MySQL server for the group replication plugin.
For all 3 nodes the my.cnf file should include the following settings:

server-id                      = UNIQUE_NUMBER_HERE
binlog-format                  = row
enforce-gtid-consistency       = true
gtid-mode                      = on
log_slave_updates              = 1
master-info-repository         = TABLE
relay-log-info-repository      = TABLE
binlog-checksum                = NONE
transaction-write-set-extraction=MURMUR32

With MySQL Sandbox there are already unique server-ids and some other useful system variables in the my.sandbox.cnf file. Just add the parameters that are missing to the config file. Start all 3 nodes for the first time:

sh@shrzayev:~/sandboxes/multi_msb_5_7_8/node[1,2,3]$ ./start
.. sandbox server started

Just be sure that all servers started successfully:

sh@shrzayev:~/sandboxes/multi_msb_5_7_8/node[1,2,3]$ ./status
node1 on

Now stop all 3 nodes and pass --plugin-load=group_replication.so to the ./start script:

sh@shrzayev:~/sandboxes/multi_msb_5_7_8/node[1,2,3]$ ./start --plugin-load=group_replication.so
... sandbox server started

You can check plugin status:

node[1,2,3] [localhost] {msandbox} ((none)) > select plugin_name, plugin_status from information_schema.plugins where plugin_name like 'group_%';
+-------------------+---------------+
| plugin_name       | plugin_status |
+-------------------+---------------+
| group_replication | ACTIVE        |
+-------------------+---------------+
1 row in set (0,01 sec)
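As a side note, it may also be possible to load the plugin at runtime instead of passing --plugin-load on every start; I have not tested this path with the labs build, so treat it as an untested alternative:

INSTALL PLUGIN group_replication SONAME 'group_replication.so';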

You have enabled the group_replication plugin.
Now we need to create a replication user; do it only on node1:

CREATE USER 'rpl_user'@'%' IDENTIFIED BY 'rpl_pass';
GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';
GRANT SELECT on performance_schema.global_variables TO rpl_user@'%';

We also need to grant SELECT on the performance_schema.global_variables table, otherwise you will get an error:

2015-10-08T07:32:26.000461Z 8 [ERROR] Slave I/O for channel 'group_replication_recovery': The slave I/O thread stops because a fatal error is e
ncountered when it try to get the value of SERVER_ID variable from master. Error: SELECT command denied to user 'rpl_user'@'localhost' for tabl
e 'global_variables', Error_code: 1142

Now there are some global variables to change. Do it on node1:

SET GLOBAL group_replication_group_name= "8a94f357-aab4-11df-86ab-c80aa9429562";
SET GLOBAL group_replication_recovery_user='rpl_user';
SET GLOBAL group_replication_recovery_password='rpl_pass';
SET GLOBAL group_replication_recovery_retry_count= 2;
SET GLOBAL group_replication_recovery_reconnect_interval= 120;

After that we can start group replication:

node1 [localhost] {msandbox} ((none)) > start group_replication;
Query OK, 0 rows affected (0,16 sec)

In the MySQL error log you will see new entries:

2015-10-08T05:55:22.419952Z 4 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 3307, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 3307, master_log_file='', master_log_pos= 4, master_bind=''.
2015-10-08T05:55:22.578212Z 3 [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
2015-10-08T05:55:22.578283Z 5 [Note] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position 0, relay log './mysql-relay-group_replication_applier.000001' position: 4
2015-10-08T05:55:22.578306Z 3 [Note] Plugin group_replication reported: 'auto_increment_increment is set to 7'
2015-10-08T05:55:22.578359Z 3 [Note] Plugin group_replication reported: 'auto_increment_offset is set to 101'
2015-10-08T05:55:22.579736Z 0 [Note] Plugin group_replication reported: 'Starting group replication recovery with view_id 571734024:1'
2015-10-08T05:55:22.712797Z 6 [Note] Plugin group_replication reported: 'Only one server alive. Declaring this server as online within the replication group'
2015-10-08T05:55:22.713721Z 0 [Note] Plugin group_replication reported: 'This server was declared online within the replication group'

You can even notice the new connection in the corosync log file:

Oct 08 10:55:22 [1257] shrzayev corosync debug   [QB    ] IPC credentials authenticated (1293-7202-19)
Oct 08 10:55:22 [1257] shrzayev corosync debug   [QB    ] connecting to client [7202]
Oct 08 10:55:22 [1257] shrzayev corosync debug   [QB    ] shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 08 10:55:22 [1257] shrzayev corosync debug   [QB    ] shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 08 10:55:22 [1257] shrzayev corosync debug   [QB    ] shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 08 10:55:22 [1257] shrzayev corosync debug   [MAIN  ] connection created
Oct 08 10:55:22 [1257] shrzayev corosync debug   [CPG   ] lib_init_fn: conn=0x7fe2da6a4e60, cpd=0x7fe2da6a5454
Oct 08 10:55:22 [1257] shrzayev corosync debug   [CPG   ] got procjoin message from cluster node 2130706433 (r(0) ip(127.0.0.1) ) for pid 7202

Now you can check group replication member status:

node1 [localhost] {msandbox} ((none)) > SELECT * FROM performance_schema.replication_connection_status\G
*************************** 1. row ***************************
             CHANNEL_NAME: group_replication_applier
               GROUP_NAME: 8a94f357-aab4-11df-86ab-c80aa9429562
              SOURCE_UUID: 8a94f357-aab4-11df-86ab-c80aa9429562
                THREAD_ID: NULL
            SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
 LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
 RECEIVED_TRANSACTION_SET: 8a94f357-aab4-11df-86ab-c80aa9429562:1-2
        LAST_ERROR_NUMBER: 0
       LAST_ERROR_MESSAGE: 
     LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
1 row in set (0,01 sec)

So there are no ERRORs here; it is working.
You can also check the group members:

node1 [localhost] {msandbox} ((none)) > SELECT * FROM performance_schema.replication_group_members\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 00008379-1111-1111-1111-111111111111
 MEMBER_HOST: shrzayev
 MEMBER_PORT: 8379
MEMBER_STATE: ONLINE
1 row in set (0,01 sec)

At this moment we have only 1 member -> node1.
Let's add node2 too. The procedure is the same: set the global variables and start group replication:

SET GLOBAL group_replication_group_name= "8a94f357-aab4-11df-86ab-c80aa9429562";
SET GLOBAL group_replication_recovery_user='rpl_user';
SET GLOBAL group_replication_recovery_password='rpl_pass';
SET GLOBAL group_replication_recovery_retry_count= 2;
SET GLOBAL group_replication_recovery_reconnect_interval= 120;

And start group replication:

node2 [localhost] {msandbox} ((none)) > start group_replication;
Query OK, 0 rows affected (0,16 sec)

The same procedure applies to node3, and then we will have 3 nodes in our group:

node1 [localhost] {msandbox} ((none)) > select * from performance_schema.replication_group_members\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 00008379-1111-1111-1111-111111111111
 MEMBER_HOST: shrzayev
 MEMBER_PORT: 8379
MEMBER_STATE: ONLINE
*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 00008380-2222-2222-2222-222222222222
 MEMBER_HOST: shrzayev
 MEMBER_PORT: 8380
MEMBER_STATE: ONLINE
*************************** 3. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 00008381-3333-3333-3333-333333333333
 MEMBER_HOST: shrzayev
 MEMBER_PORT: 8381
MEMBER_STATE: ONLINE
3 rows in set (0,00 sec)

That's all. This is how we can use MySQL Sandbox on Ubuntu 14.04 to create an environment and get started with the Group Replication plugin.
From now on, you should be able to get started with Group Replication.
If you have your very own test cases, feel free to share or to ask.

MySQL Sandbox, creating test environment for Group Replication plugin

Today's topic is about how to test the Group Replication plugin in an easy way, using MySQL Sandbox.
But first we must compile MySQL with the Group Replication plugin; refer to the previous topic -> Compiling Group Replication Plugin

If you have not used Sandbox before, please refer to the official link and read a bit more. It is a wonderful tool for installing and testing new things with MySQL. Especially if you have a new MySQL release to test and want to install 3 or 5 instances, just use MySQL Sandbox and it will do this work for you.
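If MySQL Sandbox is not installed yet, the usual route is CPAN; a minimal sketch, assuming Perl and a CPAN client are available:

# install MySQL Sandbox from CPAN (one common installation method)
sudo cpan MySQL::Sandbox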

In our case we have already compiled MySQL with our plugin, so we have the source folder in -> /home/sh/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN

With MySQL Sandbox comes a wonderful tool named make_sandbox_from_source.
Let's see the options available with this tool:

sh@shrzayev:~$ make_sandbox_from_source --help
    The MySQL Sandbox,  version 3.1.00
    (C) 2006-2015 Giuseppe Maxia

*** sandbox_type missing
Syntax: /usr/local/bin/make_sandbox_from_source source_directory sandbox_type [options]

source directory is where you have successfully
    run ./configure && make

sandbox_type is one of 
    single
    replication
    circular
    multiple

options is anything that is needed by the sandbox
    application of your choice

We want to test the Group Replication plugin, so we need the multiple option; it installs 3 nodes of the same MySQL version by default. So after running it we will have 3 nodes of Group Replication enabled MySQL on our hands.
We just have to specify the source folder as follows:

sh@shrzayev:~$ make_sandbox_from_source /home/sh/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN multiple
can't find executable script 'make_binary_distribution'

Again, this time appending the BUILD folder:

sh@shrzayev:~$ make_sandbox_from_source /home/sh/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN/BUILD multiple
can't find executable script 'make_binary_distribution'

No success.
If you go on and list the BUILD folder:

sh@shrzayev:~/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN/BUILD$ ls
mysql-server  mysql-server.build  mysql-server.inst

There is a mysql-server.build folder which we will append and run again:

sh@shrzayev:~$ make_sandbox_from_source /home/sh/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN/BUILD/mysql-server.build/ multiple
can't find a version in Makefile

The error has changed -> can't find a version in Makefile
After searching I found a BUG report (LP BUG 853764) related to this issue and a workaround. We should add the MYSQL_NO_DASH_VERSION option to the Makefile:

sh@shrzayev:~/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN/BUILD$ cd mysql-server.build/
sh@shrzayev:~/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN/BUILD/mysql-server.build$ vim Makefile

And put MYSQL_NO_DASH_VERSION=5.7.8 (5.7.8 in my case) anywhere in the Makefile, then save and exit.
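For example, the line can simply be appended from the shell (a minimal sketch, run inside the mysql-server.build directory):

# append the workaround line to the generated Makefile (adjust the version to yours)
echo 'MYSQL_NO_DASH_VERSION=5.7.8' >> Makefile

Then rerun make_sandbox_from_source: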

sh@shrzayev:~$ make_sandbox_from_source /home/sh/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN/BUILD/mysql-server.build/ multiple
no old tarball found
CPack: Create package using TGZ
CPack: Install projects
CPack: - Run preinstall target for: MySQL
CPack: - Install project: MySQL
CPack: Create package
CPack: - package: /home/sh/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN/BUILD/mysql-server.build/mysql-5.7.8-rc-linux-x86_64.tar.gz generated.
installing node 1
installing node 2
installing node 3
group directory installed in $HOME/sandboxes/multi_msb_mysql-5_7_8

That's it, we have 3 nodes and we are ready to test the plugin.
Check node1 for example:

node1 [localhost] {msandbox} ((none)) > select @@plugin_dir;
+-----------------------------------------------------------------------------------------------------+
| @@plugin_dir                                                                                        |
+-----------------------------------------------------------------------------------------------------+
| /home/sh/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN/BUILD/mysql-server.build/5.7.8/lib/plugin/ |
+-----------------------------------------------------------------------------------------------------+
1 row in set (0,00 sec)

sh@shrzayev:~$ ls /home/sh/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN/BUILD/mysql-server.build/5.7.8/lib/plugin/ | grep group_
group_replication_license
group_replication.so

We have our plugin in the plugin directory.
In the subsequent topics I will talk about activating and using the group replication plugin. Again, we are using Ubuntu 14.04 with MySQL 5.7.8-rc2 and mysql-group-replication-0.5.0-dmr.

Compiling MySQL Group Replication plugin with MySQL 5.7.8-rc2 on Ubuntu

The MySQL Group Replication plugin is on labs.mysql.com and is available as a binary only for EL6 x86_64 Linux. But most of us have Ubuntu desktops, where it should be easier to test this new thing, especially with MySQL Sandbox. After getting the source code we have to compile this plugin together with MySQL from source. So let's begin. Extract both the mysql group replication archive and the mysql source archive:

sh@shrzayev:~/Sandboxes$ ls -l
total 650732
drwxr-xr-x 34 sh sh      4096 İyl 20 17:25 mysql-5.7.8-rc
-rw-rw-r--  1 sh sh  49762480 Avq 20 16:19 mysql-5.7.8-rc.tar.gz
drwxrwxr-x  3 sh sh      4096 Sen 28 12:08 mysql-group-replication-0.5.0-dmr
-rw-rw-r--  1 sh sh    251687 Sen 28 11:57 mysql-group-replication-0.5.0-labs.tar.gz

You will have 2 directories as above. Then go to the mysql-group-replication folder:

sh@shrzayev:~/Sandboxes$ cd mysql-group-replication-0.5.0-dmr/
sh@shrzayev:~/Sandboxes/mysql-group-replication-0.5.0-dmr$ mkdir BIN
sh@shrzayev:~/Sandboxes/mysql-group-replication-0.5.0-dmr$ cd BIN

Inside the BIN folder:

> NOTE: there must be no spaces between the cmake options (between ";")

You can run the following cmake command:

cmake ..  -DMYSQL_SERVER_SRC_DIR="/home/sh/Sandboxes/mysql-5.7.8-rc" -DMYSQL_SERVER_CMAKE_ARGS="-DMYSQL_DATADIR=/opt/mysql-5.7.8-grouprepl/datadir;-DSYSCONFDIR=/opt/mysql-5.7.8-grouprepl;-DWITH_SSL=system;-DMYSQL_TCP_PORT=3307;-DMYSQL_UNIX_ADDR=/opt/mysql-5.7.8-grouprepl/mysqld.sock;-DDEFAULT_CHARSET=utf8;-DDEFAULT_COLLATION=utf8_general_ci;-DWITH_DEBUG=1;-DCOMPILATION_COMMENT='Group Replication enabled MySQL-5.7.8';-DOPTIMIZER_TRACE=1;-DWITH_ZLIB=system;-DWITH_VALGRIND=1;-DCMAKE_C_FLAGS=-DHAVE_purify;-DCMAKE_CXX_FLAGS=-DHAVE_purify;-DDOWNLOAD_BOOST=1;-DWITH_BOOST=/home/sh/Sandboxes"

But before running this command you must install corosync and related packages; if you don't, the compilation will succeed but you will get an empty binary package 🙂 For further reading refer to the related BUG report -> #78600

So install dependencies:

sudo apt-get install corosync corosync-dev libcpg-dev

Then run the cmake command above, and after a successful cmake run:

make package

If you list the BIN folder:

sh@shrzayev:~/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN$ ls
BUILD           CMakeFiles           CPackConfig.cmake  CPackSourceConfig.cmake  mysql-group-replication-0.5.0-dmr-amd64.tar.gz  SOURCE
CMakeCache.txt  cmake_install.cmake  _CPack_Packages    Makefile                 mysql_server-prefix

You will notice the mysql-group-replication-0.5.0-dmr-amd64.tar.gz package there, which is where the group_replication.so file is located:

sh@shrzayev:~/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN$ tar -tf mysql-group-replication-0.5.0-dmr-amd64.tar.gz | grep group_replication.so
mysql-group-replication-0.5.0-dmr-amd64/lib/plugin/group_replication.so

From now on we have the mysql group replication plugin for 64-bit Ubuntu 14.04. We will continue our topic about testing the group replication plugin in easy ways. There are some other related issues; if you are interested, refer to my previous article: Group Replication Plugin issues

MySQL Group Replication plugin issues with 5.7.8-rc2

Well, new things are cool to test and learn. But it is not always possible to have a smooth testing process if you can't get the right software into your hands.

There are great articles about this new plugin; you can follow all of them here: Group Replication Topics

I must say that the plugin is available for download as source code and as binaries for Oracle Linux 6 x86_64 from -> labs.mysql.com

So if you want to try it with MySQL 5.7.8-rc2 on Ubuntu you have to compile it together with MySQL. But you will not be able to compile it due to the well-known 'boost' dependency -> see BUG #78600

UPDATE to BUG 78600:

Pedro Gomes clarified the reason for the failed compilation process. There must not be any spaces between the cmake options specified with -DMYSQL_SERVER_CMAKE_ARGS="". So we have a working cmake command as follows:

    cmake ..  -DMYSQL_SERVER_SRC_DIR="/home/sh/Sandboxes/mysql-5.7.8-rc" -DMYSQL_SERVER_CMAKE_ARGS="-DMYSQL_DATADIR=/opt/mysql-5.7.8-grouprepl/datadir;-DSYSCONFDIR=/opt/mysql-5.7.8-grouprepl;-DWITH_SSL=system;-DMYSQL_TCP_PORT=3307;-DMYSQL_UNIX_ADDR=/opt/mysql-5.7.8-grouprepl/mysqld.sock;-DDEFAULT_CHARSET=utf8;-DDEFAULT_COLLATION=utf8_general_ci;-DWITH_DEBUG=1;-DCOMPILATION_COMMENT='Group Replication enabled MySQL-5.7.8';-DOPTIMIZER_TRACE=1;-DWITH_ZLIB=system;-DWITH_VALGRIND=1;-DCMAKE_C_FLAGS=-DHAVE_purify;-DCMAKE_CXX_FLAGS=-DHAVE_purify;-DDOWNLOAD_BOOST=1;-DWITH_BOOST=/home/sh/Sandboxes"

But even if you have Oracle Linux 6 x86_64 and even if you follow the topics: 1. Corosync Guide 2. Getting Started with MySQL Group Replication

You will likely get a "Segmentation Fault" after running START GROUP_REPLICATION:

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffc8ef2700 (LWP 3294)]
0x00007ffff625d532 in _int_malloc () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install corosynclib-1.4.7-2.el6.x86_64
(gdb) bt
#0  0x00007ffff625d532 in _int_malloc () from /lib64/libc.so.6
#1  0x00007ffff625e991 in malloc () from /lib64/libc.so.6
#2  0x00007ffff6acf0bd in operator new(unsigned long) () from /usr/lib64/libstdc++.so.6
#3  0x00007fffc8f5b2ab in Certifier::Certifier (this=0x7fffbc051d20)
    at /export/home/pb2/build/sb_0-16452762-1441960761.59/release-0.5.0-export-8447366/BIN/BUILD/mysql-server/plugin/group_replication/src/certifier.cc:209
#4  0x00007fffc8f6bb36 in Certification_handler::initialize (this=0x7fffbc0507e0)
    at /export/home/pb2/build/sb_0-16452762-1441960761.59/release-0.5.0-export-8447366/BIN/BUILD/mysql-server/plugin/group_replication/src/handlers/certification_handler.cc:32
#5  0x00007fffc8f634d7 in configure_pipeline (pipeline=0x7fffbc051a80, handler_list=0x7fffbc0506f0, num_handlers=3)
    at /export/home/pb2/build/sb_0-16452762-1441960761.59/release-0.5.0-export-8447366/BIN/BUILD/mysql-server/plugin/group_replication/src/pipeline_factory.cc:143
#6  0x00007fffc8f6364d in get_pipeline (pipeline_type=<value optimized out>, pipeline=0x7fffbc051a80)
    at /export/home/pb2/build/sb_0-16452762-1441960761.59/release-0.5.0-export-8447366/BIN/BUILD/mysql-server/plugin/group_replication/src/pipeline_factory.cc:30
#7  0x00007fffc8f554b0 in Applier_module::setup_applier_module (this=0x7fffbc051920, pipeline_type=STANDARD_GROUP_REPLICATION_PIPELINE, reset_logs=false, 
    stop_timeout=31536000, group_sidno=2)
    at /export/home/pb2/build/sb_0-16452762-1441960761.59/release-0.5.0-export-8447366/BIN/BUILD/mysql-server/plugin/group_replication/src/applier.cc:99
#8  0x00007fffc8f63eca in configure_and_start_applier_module ()
    at /export/home/pb2/build/sb_0-16452762-1441960761.59/release-0.5.0-export-8447366/BIN/BUILD/mysql-server/plugin/group_replication/src/plugin.cc:761
#9  0x00007fffc8f657df in plugin_group_replication_start ()
    at /export/home/pb2/build/sb_0-16452762-1441960761.59/release-0.5.0-export-8447366/BIN/BUILD/mysql-server/plugin/group_replication/src/plugin.cc:289
#10 0x00000000016927c9 in Group_replication_handler::start (this=0x378f2d0) at /home/full_datadir/mysql-5.7.8-rc/sql/rpl_group_replication.cc:56
#11 0x0000000001692b81 in group_replication_start () at /home/full_datadir/mysql-5.7.8-rc/sql/rpl_group_replication.cc:192
#12 0x000000000149739c in mysql_execute_command (thd=0x7fffbc007180) at /home/full_datadir/mysql-5.7.8-rc/sql/sql_parse.cc:3029
#13 0x000000000149e15b in mysql_parse (thd=0x7fffbc007180, parser_state=0x7fffc8ef14a0) at /home/full_datadir/mysql-5.7.8-rc/sql/sql_parse.cc:5255
#14 0x0000000001492dec in dispatch_command (thd=0x7fffbc007180, com_data=0x7fffc8ef1cd0, command=COM_QUERY)
    at /home/full_datadir/mysql-5.7.8-rc/sql/sql_parse.cc:1272
#15 0x00000000014917f3 in do_command (thd=0x7fffbc007180) at /home/full_datadir/mysql-5.7.8-rc/sql/sql_parse.cc:852
#16 0x00000000015c9042 in handle_connection (arg=0x36c1a50) at /home/full_datadir/mysql-5.7.8-rc/sql/conn_handler/connection_handler_per_thread.cc:300
#17 0x0000000001c56c41 in pfs_spawn_thread (arg=0x37d1990) at /home/full_datadir/mysql-5.7.8-rc/storage/perfschema/pfs.cc:2178
#18 0x00007ffff7bc79d1 in start_thread () from /lib64/libpthread.so.0
#19 0x00007ffff62ccb6d in clone () from /lib64/libc.so.6

For further reading refer to the related, freshly filed BUG report -> #78627

It is really cool to help the community improve Open Source projects. So go on and test on your own, report interesting things and of course get in contact with community members. They are all open to discussion 🙂

Comprehensive guide to installing PXC on CentOS 7

Recently I wanted to install Percona XtraDB Cluster + HAProxy + KeepAlived on CentOS 7, but could not find any all-in-one guide. So I decided to write down all the steps necessary for getting started and running an HA solution using Open Source projects. Nowadays high availability is one of the main concerns faced by big and small companies. Minimal downtime per year and a sustainably working infrastructure is the mission that all companies are trying to achieve. There are different approaches to this mission. One of them, as large companies do, is to buy expensive software and support from vendors. But small companies cannot go down that path. The true power of Open Source comes up at these moments. You can build your own High Availability solution using free but still powerful Open Source projects. In this article we will show you how to achieve Database Clustering and Load Balancing. We will use:

  1. Percona XtraDB Cluster – Percona XtraDB Cluster is a High Availability and Scalability solution for MySQL users. What is PXC?

  2. HAproxy – HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. See the HAproxy official site for further information.

  3. KeepAlived – Keepalived is routing software written in C. The main goal of this project is to provide simple and robust facilities for load balancing and high availability to Linux systems and Linux-based infrastructures. See the KeepAlived official site for further information.

General Architecture

Before diving into the step-by-step guide, let's see what we want to accomplish at the end of the tutorial: General Architecture Overview

As the diagram shows, there must be at least 3 CentOS 6.5/7 instances for the PXC Galera Cluster setup: 3 CentOS 7 instances in total, all with static local IPs and only 1 public IP for KeepAlived. So another advantage of this architecture is that it reduces public IP usage to 1.

> NOTE: For this tutorial we will not use a public IP as it is a test environment; all CentOS machines will use local IPs.

Prerequisites

We need 3 instances of CentOS 7 with a minimal installation. You should also give static IPs to all servers. Here are sample IPs that you may give to your CentOS instances for testing this tutorial:

  1. Node1 (1 GB RAM, 1 CPU VM) – 192.168.1.99 – (HAproxy1)
  2. Node2 (1 GB RAM, 1 CPU VM) – 192.168.1.88 – (HAproxy2)
  3. Node3 (1 GB RAM, 1 CPU VM) – 192.168.1.77 – (HAproxy3)

Also keep in mind that we need another IP for KeepAlived that will act as the Virtual IP for the HAproxy instances.

Installing and Configuring PXC

Installing and Running PXC:

The easiest way to install is through the official yum repo. You should activate the Percona repo on all 3 Nodes:

    -- CentOS 7
    yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm

Set up the repo on all 3 Nodes. Then run:

    [root@node1, 2, 3 ~]# yum install Percona-XtraDB-Cluster-56

All needed packages and dependencies will be resolved automatically, and at this point you should be able to start the newly installed PXC instances:

    -- CentOS 7
    [root@node1, 2, 3 ~]# systemctl start mysql.service
    [root@node1, 2, 3 ~]# mysql_secure_installation

At this point, we have 3 running Percona Galera Cluster instances.

Installing Several other needed packages:

We need several other packages from the EPEL repo, such as rsync for PXC SST (see the documentation). That's why we must activate the EPEL repo on all 3 Nodes.
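A minimal sketch for enabling EPEL, assuming the epel-release package is available from the stock CentOS 7 repositories:

    -- CentOS 7
    [root@node1, 2, 3 ~]# yum install epel-release

With EPEL enabled, install the packages: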

    -- CentOS 7
    [root@node1, 2, 3 ~]# yum install socat rsync

Create a binary log folder outside MySQL's datadir:

    [root@node1, 2, 3 ~]# mkdir /var/lib/mysql-binlog
    [root@node1, 2, 3 ~]# chown mysql:mysql /var/lib/mysql-binlog

    -- SElinux
    [root@centos7-node1 ~]# yum install policycoreutils-python
    [root@centos7-node1 ~]# semanage fcontext -a -t mysqld_db_t /var/lib/mysql-binlog
    [root@centos7-node1 ~]# restorecon -v /var/lib/mysql-binlog
        restorecon reset /var/lib/mysql-binlog context unconfined_u:object_r:var_lib_t:s0->unconfined_u:object_r:mysqld_db_t:s0

server.cnf configuration file for PXC nodes:

Now we must edit the server.cnf file (default location '/etc/my.cnf.d/server.cnf') on all 3 nodes to reflect our needs. The following content is the same on all 3 nodes:

    [mysqld]

    # General #
    datadir                       = /var/lib/mysql
    socket                        = /var/lib/mysql/mysql.sock
    default_storage_engine        = InnoDB
    lower_case_table_names        = 1


    # MyISAM #
    key_buffer_size                = 8M
    myisam_recover                 = FORCE,BACKUP


    # SAFETY #
    max_allowed_packet             = 16M
    max_connect_errors             = 1000000
    skip_name_resolve
    sysdate_is_now                 = 1
    innodb                         = FORCE
    thread_stack                   = 262144
    back_log                       = 2048
    performance_schema             = OFF
    skip_show_database

    # Binary Logging #
    log_bin                        = /var/lib/mysql-binlog/mysql-bin
    log_bin_index                  = /var/lib/mysql-binlog/mysql-bin
    expire_logs_days               = 14
    sync_binlog                    = 1
    binlog_format                  = row

    # CACHES AND LIMITS #
    tmp_table_size                 = 16M
    max_heap_table_size            = 16M
    query_cache_type               = 0
    query_cache_size               = 0
    query_cache_limit              = 0
    max_connections                = 10000
    thread_cache_size              = 500
    open-files-limit               = 65535
    table_definition_cache         = 7000
    table_open_cache               = 7000

    # InnoDB Related #

    innodb_log_files_in_group      = 2
    innodb_autoinc_lock_mode       = 2
    #innodb_locks_unsafe_for_binlog = 1
    innodb_log_file_size           = 100M
    innodb_file_per_table
    innodb_flush_log_at_trx_commit = 2
    innodb_buffer_pool_size        = 150M

And now we provide the per-node settings for the configuration file:

    **Node1**

    #
    # * Galera-related settings
    #
    [galera]

    wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
    wsrep_cluster_address="gcomm://192.168.1.99,192.168.1.88,192.168.1.77"
    wsrep_cluster_name='TestingCluster'
    wsrep_node_address='192.168.1.99'
    wsrep_node_name='node1'
    wsrep_sst_method=rsync
    wsrep_sst_auth=sstuser:12345
    bind-address=0.0.0.0


    **Node2**

    #
    # * Galera-related settings
    #
    [galera]

    wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
    wsrep_cluster_address="gcomm://192.168.1.99,192.168.1.88,192.168.1.77"
    wsrep_cluster_name='TestingCluster'
    wsrep_node_address='192.168.1.88'
    wsrep_node_name='node2'
    wsrep_sst_method=rsync
    wsrep_sst_auth=sstuser:12345
    bind-address=0.0.0.0


    **Node3**

    #
    # * Galera-related settings
    #
    [galera]

    wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
    wsrep_cluster_address="gcomm://192.168.1.99,192.168.1.88,192.168.1.77"
    wsrep_cluster_name='TestingCluster'
    wsrep_node_address='192.168.1.77'
    wsrep_node_name='node3'
    wsrep_sst_method=rsync
    wsrep_sst_auth=sstuser:12345
    bind-address=0.0.0.0

Save the file on all 3 nodes and exit the editor.

After editing the configuration files, you must create the wsrep_sst_auth user on all 3 Nodes; as mentioned in the server.cnf file, its name is 'sstuser':

    [root@node1, 2, 3 ~]# mysql -u root -p
    mysql> create user 'sstuser'@'%' identified by '12345';
    Query OK, 0 rows affected (0.01 sec)

    mysql> grant all on *.* to 'sstuser'@'%';
    Query OK, 0 rows affected (0.00 sec)

Set SElinux to permissive mode on all 3 Nodes, as SElinux prevents Galera from starting:

    -- CentOS  7
    [root@node1, 2, 3 ~]# setenforce 0

‘/etc/hosts’ file contents for PXC Nodes:

**Node1**

    [root@node1 ~]# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    127.0.0.1   node1.localdomain node1
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

    192.168.1.99 localhost localhost.localdomain node1.localdomain node1
    192.168.1.88 node2
    192.168.1.77 node3




**Node2**

    [root@node2 ~]# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    127.0.0.1   node2.localdomain node2
    192.168.1.88 localhost localhost.localdomain node2.localdomain node2
    192.168.1.99 node1
    192.168.1.77 node3



**Node3**

    [root@node3 ~]# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    127.0.0.1   node3.localdomain node3
    192.168.1.77 localhost localhost.localdomain node3
    192.168.1.99 node1
    192.168.1.88 node2

Firewall (firewalld) settings related to PXC nodes:

-- CentOS 7

** Node1 **

firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="4567" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="3306" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="4567" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="4444" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="3306" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="4444" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="4568" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="4568" protocol="tcp" accept' --permanent

firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="9200" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="9200" protocol="tcp" accept' --permanent

firewall-cmd --reload

** Node 2 **

firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="4567" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="3306" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="4567" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="4444" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="3306" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="4444" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="4568" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="4568" protocol="tcp" accept' --permanent

firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="9200" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="9200" protocol="tcp" accept' --permanent

firewall-cmd --reload

** Node 3 **

firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="4567" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="3306" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="4567" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="4444" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="3306" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="4444" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="4568" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="4568" protocol="tcp" accept' --permanent

firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="9200" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="9200" protocol="tcp" accept' --permanent
firewall-cmd --reload

Bootstrapping (starting) the Cluster

As we have 3 Nodes, one of them must be, so to speak, the "Head" node; for the first start we must choose one of the Nodes as the starting point. For us this is Node1. Stop Node1 and run the following command:

    -- CentOS 7

    [root@node1 ~]# systemctl stop mysql.service
    [root@node1 ~]# systemctl start mysql@bootstrap.service
    Bootstrapping the clusterStarting MySQL.. SUCCESS!

    (There is no bootstrap command for CentOS 7 for MariaDB cluster - already reported as: [MDEV-7752](https://mariadb.atlassian.net/browse/MDEV-7752))

Then you must start the other 2 nodes as usual:

    -- CentOS 7

    [root@node2 ~]# systemctl start mysql.service
    [root@node3 ~]# systemctl start mysql.service

After the success messages, log in to PXC Node1 and run:

    PXC [(none)]> show status like 'wsrep%';

The important part of output is:

    | wsrep_cluster_conf_id        | 5                                                     |
    | wsrep_cluster_size           | 3                                                     |
    | wsrep_connected              | ON                                                    |
    | wsrep_ready                  | ON                                                    |
    | wsrep_cluster_status         | Primary                                               |

As you see the cluster size is 3, which means all 3 Nodes are connected and working. You can also see from the output that the Cluster is ready to accept connections. Let's create a database from Node3 and check that it shows up on the other Nodes:

    [root@node3 ~]# mysql -u root -p
    PXC [(none)]> create database pxc_test;
    Query OK, 1 row affected (0.10 sec)

    [root@node2 ~]# mysql -u root -p
    PXC [(none)]> show databases like 'pxc%';
    +-----------------+
    | Database (pxc%) |
    +-----------------+
    | pxc_test        |
    +-----------------+
    1 row in set (0.00 sec)

    [root@node1 ~]# mysql -u root -p
    PXC [(none)]> show databases like 'pxc%';
    +-----------------+
    | Database (pxc%) |
    +-----------------+
    | pxc_test        |
    +-----------------+
    1 row in set (0.00 sec)

Enabling SElinux:

Disabling SElinux permanently or running in permissive mode all the time is dangerous from a security point of view. While SElinux was in permissive mode it did not prevent any Galera actions; instead it logged all the related information into the audit.log file. Using this file we can create rules and re-enable SElinux. On all 3 nodes:

-- CentOS 7
[root@node1, 2, 3 ~]# yum install policycoreutils-python
[root@node1, 2, 3 ~]# grep mysql /var/log/audit/audit.log | audit2allow -M galera
[root@node1, 2, 3 ~]# semodule -i galera.pp

[root@node1, 2, 3 ~]# setenforce 1

Configuring Clustercheck script for cluster health check

This script is installed with PXC (/usr/bin/clustercheck). Clustercheck is a simple script to make a proxy (i.e. HAProxy) capable of monitoring Percona XtraDB Cluster nodes properly.

Before the setup, install xinetd on all 3 nodes:

    -- CentOS 7

    [root@node1, 2, 3 ~]# yum install xinetd
    [root@node1, 2, 3 ~]# systemctl start xinetd.service
    [root@node1, 2, 3 ~]# systemctl enable xinetd.service

After installing xinetd and ensuring that it will start automatically on system reboot, we need to configure the mysqlchk file:

    [root@node1, 2, 3 ~]# nano /etc/xinetd.d/mysqlchk

    -- Add following lines to this file

    service mysqlchk
    {
        disable = no
        flags = REUSE
        socket_type = stream
        port = 9200
        wait = no
        user = nobody
        server = /usr/bin/clustercheck
        log_on_failure += USERID
        only_from = 0.0.0.0/0
        per_source = UNLIMITED
    }

Then edit the /etc/services file. Locate the wap-wsp entries in this file, comment them out as follows, and add a mysqlchk entry:

    [root@node1, 2, 3 ~]# nano /etc/services
    mysqlchk        9200/tcp                # SERVICE for Checking PXC Cluster      
    #wap-wsp         9200/tcp                # WAP connectionless session service
    #wap-wsp         9200/udp                # WAP connectionless session service

Now create clustercheck database user on ALL Nodes:

    create user 'clustercheckuser'@'localhost' identified by 'clustercheckpassword!';
    grant process on *.* to 'clustercheckuser'@'localhost';

Restart Xinetd on all Nodes:

    -- CentOS 7

    [root@node1, 2, 3 ~]# systemctl restart xinetd

And the final step for this section is to check the new script:

    [root@node1, 2, 3 ~]# clustercheck 
    HTTP/1.1 200 OK
    Content-Type: text/plain
    Connection: close
    Content-Length: 40

    Percona XtraDB Cluster Node is synced.

As you see everything is OK and it works.
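You can also check the xinetd wrapper end to end from another node, since this is what HAProxy will do over TCP port 9200 (a quick sketch, assuming the 9200 firewall rules added earlier are in place):

    [root@node2 ~]# curl -i http://192.168.1.99:9200
    -- this should print the same "200 OK / synced" response as above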

Installing and Configuring HAproxy

As we have mentioned, we do not have VMs dedicated to HAproxy. That's why the HAproxy instances will run on the same machines as the PXC nodes.

Install HAproxy:

    [root@node1, 2, 3 ~]# yum install haproxy
    [root@node1, 2, 3 ~]# systemctl enable haproxy.service

Fix HAproxy logging: by default HAproxy does not log anything; we fix this by adding a haproxy.conf file to the /etc/rsyslog.d directory:

    [root@node1, 2, 3 ~]# cd /etc/rsyslog.d/
    -- Add following lines into file:
    [root@node1, 2, 3 rsyslog.d]# nano haproxy.conf
    $ModLoad imudp
    $UDPServerRun 514
    $template Haproxy,"%msg%n"
    local0.=info -/var/log/haproxy.log;Haproxy
    local0.notice -/var/log/haproxy-status.log;Haproxy
    local0.* ~

    -- SElinux related command:
    [root@node1, 2, 3 ~]# /sbin/restorecon -v -F /etc/rsyslog.d/haproxy.conf
    [root@node1, 2, 3 ~]# systemctl restart rsyslog.service

Edit haproxy.cfg config file:

    **HAproxy1, 2, 3**

    [root@node1, 2, 3 ~]# cd /etc/haproxy/
    [root@node1, 2, 3 ~]# mv haproxy.cfg haproxy.cfg.orig
    [root@node1, 2, 3 ~]# nano haproxy.cfg

    global
        log         127.0.0.1   local0
        log         127.0.0.1   local1 notice
        maxconn     4096
        user        haproxy
        group       haproxy
        nbproc      1
        pidfile     /var/run/haproxy.pid

    defaults
        log         global
        option      tcplog
        option      dontlognull
        retries     3
        maxconn     4096
        option      redispatch
        timeout     connect 50000ms
        timeout     client  50000ms
        timeout     server  50000ms

    listen PXC-writes
        bind 0.0.0.0:3311
        mode tcp
        #    option mysql-check user haproxy
        option httpchk 
        server node1 192.168.1.99:3306 check port 9200
        server node2 192.168.1.88:3306 check port 9200 backup
        server node3 192.168.1.77:3306 check port 9200 backup

    listen PXC-reads
        bind 0.0.0.0:3312
        mode tcp
        balance leastconn
    #    option mysql-check user haproxy
        option httpchk
        server node1 192.168.1.99:3306 check port 9200
        server node2 192.168.1.88:3306 check port 9200
        server node3 192.168.1.77:3306 check port 9200

    # HAProxy web ui
        listen stats 0.0.0.0:9000
        mode http
        stats enable
        stats uri /haproxy
        stats realm HAProxy Statistics
        stats auth haproxy:haproxy
        stats admin if TRUE


    -- SElinux related command:

    [root@node1 ~]# /sbin/restorecon -v -F /etc/haproxy/haproxy.cfg
    [root@node1 ~]# setsebool haproxy_connect_any on
    [root@node1 ~]# systemctl start haproxy.service

Let's explore the config file a bit more. The default port number 9000 is for the HAproxy monitoring WEB UI. Another thing to remember is that when using PXC Galera Cluster with the SST options mysqldump and rsync (the default), nodes can lock each other out of updates while DDL and DML statements are executing. To avoid such situations and not get stuck with deadlocks, we decided to separate write operations. In other words, write operations (e.g. insert, update, delete, etc.) will go only to Node1. So on the application side you should send write operations to port 3311, as we put in the haproxy.cfg file, and read operations to port 3312. There is also a non-locking SST option, XtraBackup (the famous hot online backup tool for MySQL), but it is the subject of another topic.

And of course we should check our work:

Check for listening ports:

    [root@node1 ~]# netstat -ntpl | grep haproxy
    tcp        0      0 0.0.0.0:9000                0.0.0.0:*                   LISTEN      11902/haproxy
    tcp        0      0 0.0.0.0:3311                0.0.0.0:*                   LISTEN      11902/haproxy
    tcp        0      0 0.0.0.0:3312                0.0.0.0:*                   LISTEN      11902/haproxy

    [root@node2 haproxy]# netstat -ntpl | grep haproxy
    tcp        0      0 0.0.0.0:9000                0.0.0.0:*                   LISTEN      12846/haproxy
    tcp        0      0 0.0.0.0:3311                0.0.0.0:*                   LISTEN      12846/haproxy
    tcp        0      0 0.0.0.0:3312                0.0.0.0:*                   LISTEN      12846/haproxy

As we have mentioned, port number 9000 is for the WEB UI with HAproxy statistics. So let's connect to the IP of one of the HAproxy servers. The connection URL for me is: http://192.168.1.88:9000/haproxy. On the very first connection there will be a pop-up screen asking for a login and password. This information is stored in the /etc/haproxy/haproxy.cfg file (stats auth haproxy:haproxy). So the Login is: haproxy and the Password is: haproxy too.
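If you prefer the command line, the same statistics page can be fetched with curl using those credentials (a small sketch; adjust the host and the stats uri to match your haproxy.cfg):

    curl -u haproxy:haproxy http://192.168.1.88:9000/haproxy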

You will see the statistics page with correctly working Cluster Nodes:

HAproxy Web Statistics Page

Now let's check MySQL connectivity over HAproxy. For this purpose create a sample database user on the PXC Cluster nodes. Our cluster is already working, so it is sufficient to run this command on one of the nodes and the others will pick it up automatically:

    create user 'all'@'%' identified by '12345';
    grant all on *.* to 'all'@'%';

Before testing, open these ports:

    [root@node1, 2, 3 ~]# firewall-cmd --add-port=3311/tcp --permanent
    success
    [root@node1, 2, 3 ~]# firewall-cmd --add-port=3312/tcp --permanent
    success
    [root@node1, 2, 3 ~]# firewall-cmd --reload
    success

Our testing result should be something like:

    -- make connection through HAproxy2(node2) server to port 3311 (our write node)

    sh@sh--work:~$ mysql -u all -p --host=192.168.1.88 --port=3311 -e "select @@version, @@hostname";
    Enter password: 
    +--------------------+---------------+
    | @@version          | @@hostname    |
    +--------------------+---------------+
    | 5.6.24-72.2-56-log | node1         |
    +--------------------+---------------+


    -- Second to port 3312 (one of our read nodes)

    sh@sh--work:~$ mysql -u all -p --host=192.168.1.88 --port=3312 -e "select @@version, @@hostname";
    Enter password: 
    +--------------------+---------------+
    | @@version          | @@hostname    |
    +--------------------+---------------+
    | 5.6.24-72.2-56-log | node1         |
    +--------------------+---------------+
    sh@sh--work:~$ mysql -u all -p --host=192.168.1.88 --port=3312 -e "select @@version, @@hostname";
    Enter password: 
    +--------------------+---------------+
    | @@version          | @@hostname    |
    +--------------------+---------------+
    | 5.6.24-72.2-56-log | node2         |
    +--------------------+---------------+
    sh@sh--work:~$ mysql -u all -p --host=192.168.1.88 --port=3312 -e "select @@version, @@hostname";
    Enter password: 
    +--------------------+---------------+
    | @@version          | @@hostname    |
    +--------------------+---------------+
    | 5.6.24-72.2-56-log | node3         |
    +--------------------+---------------+

Installing and Configuring KeepAlived

The final step in our environment is installing and configuring KeepAlived, which provides the Virtual IP and switch-over between the HAproxy instances. The idea is that when one of the HAproxy servers fails, another one takes over the same work without interrupting the whole architecture. The new IP address for us is 192.168.1.199, which is the Virtual IP for our HAproxy servers. We will connect to MySQL (PXC) through this IP and KeepAlived will decide to which HAproxy instance it must send the connection. Let's test:

    -- Pay Attention to host IP address

    sh@sh--work:~$ mysql -u all -p --host=192.168.1.199 --port=3311 -e "select @@version, @@hostname";
    Enter password: 
    +--------------------+---------------+
    | @@version          | @@hostname    |
    +--------------------+---------------+
    | 5.6.24-72.2-56-log | node1         |
    +--------------------+---------------+


    sh@sh--work:~$ mysql -u all -p --host=192.168.1.199 --port=3312 -e "select @@version, @@hostname";
    Enter password: 
    +--------------------+---------------+
    | @@version          | @@hostname    |
    +--------------------+---------------+
    | 5.6.24-72.2-56-log | node2         |
    +--------------------+---------------+

We send the connection to the KeepAlived virtual IP and it passes it on to one of the HAproxy servers. Enough theory, let's achieve our goal.

Installing/Configuring

    **HAproxy1, 2, 3**

    [root@node1, 2, 3 ~]# yum install keepalived
    [root@node1, 2, 3 ~]# systemctl enable keepalived.service

    [root@node1, 2, 3 ~]# cd /etc/keepalived/
    [root@node1, 2, 3 ~]# mv keepalived.conf keepalived.conf.orig
    [root@node1, 2, 3 ~]# nano keepalived.conf
    -- Add following lines to file        
    vrrp_script chk_haproxy {
        script "killall -0 haproxy" # verify the pid is exist or not
        interval 2                      # check every 2 seconds
        weight 2                        # add 2 points of prio if OK
    }

    vrrp_instance VI_1 {
        interface eth0                  # interface to monitor
        state MASTER
        virtual_router_id 51            # Assign one ID for this route
        priority 101                    # 101 on master, 100 on backup1, 99 on backup2
        authentication {
                 auth_type PASS
                 auth_pass 12345
        }
    virtual_ipaddress {
        192.168.1.199/24                # the virtual IP
    }
track_script {
        chk_haproxy
    }
    }

    -- Save and exit file editing.

    [root@node1, 2, 3 ~]# /sbin/restorecon -v -F /etc/keepalived/keepalived.conf

Starting KeepAlived

    [root@node1, 2, 3 ~]# systemctl start keepalived.service
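To see which node currently holds the virtual IP, you can simply inspect the interface (a quick check; eth0 is the interface named in keepalived.conf above):

    # the node that prints a matching line currently holds the VIP
    ip addr show eth0 | grep 192.168.1.199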

Testing MySQL connection with KeepAlived

    sh@sh--work:~$ mysql -u all -p --host=192.168.1.199 --port=3311 -e "select @@version, @@hostname";
    Enter password: 
    +---------------------------+------------+
    | @@version                 | @@hostname |
    +---------------------------+------------+
    | 10.0.15-PXC-wsrep-log | node1      |
    +---------------------------+------------+


    sh@sh--work:~$ mysql -u all -p --host=192.168.1.199 --port=3312 -e "select @@version, @@hostname";
    Enter password: 
    +---------------------------+------------+
    | @@version                 | @@hostname |
    +---------------------------+------------+
    | 10.0.15-PXC-wsrep-log | node2      |
    +---------------------------+------------+

Conclusion

In this tutorial you learned how to install and configure PXC on CentOS 7 and also how to load balance this cluster using the well-known HAproxy. As a bonus you learned how to install and configure KeepAlived and access the database over a Virtual IP, which is a next level of redundancy in the High Availability approach. If you have any suggestions or spot any errors, please do not hesitate to contact me. Thank you for reading.

Error reading GTIDs from binary log: -1

Ever wonder how a MySQL Slave server will act when a disk full condition occurs? In our previous articles we used only a single MySQL server. Now think about a replication topology where the slave server has a problem with a full partition. First we will enable Binary Log/GTID on the Slave side and ensure that the changes are also applied to the binary log on the Slave side:

      # BINARY LOGGING # 
      server_id = 2 
      log_bin = /opt/mysql/datadir/mysql-bin 
      log_bin_index = /opt/mysql/datadir/mysql-bin 
      expire_logs_days = 14 
      sync_binlog = 1 
      binlog_format = row 
      relay_log = /opt/mysql/datadir/mysql-relay-bin 
      log_slave_updates = 1 
      read_only = 1 
      gtid-mode = on 
      enforce-gtid-consistency = true 
      master-info-repository = TABLE 
      relay-log-info-repository = TABLE 
      slave-parallel-workers = 15 
      binlog-checksum = CRC32 
      master-verify-checksum = 1 
      slave-sql-verify-checksum = 1 
      binlog-rows-query-log_events = 1 

When the disk full condition comes up, the error log will be filled as follows:

    2015-05-01 04:42:10 2033 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.000002' at position 557, relay log '/opt/mysql/datadir/mysql-relay-bin.000001' position: 4
    2015-05-01 04:50:50 7f3b79865700  InnoDB: Error: Write to file ./test1/sales.ibd failed at offset 184549376.
    InnoDB: 1048576 bytes should have been written, only 688128 were written.
    InnoDB: Operating system error number 0.
    InnoDB: Check that your OS and file system support files of this size.
    InnoDB: Check also that the disk is not full or a disk quota exceeded.
    InnoDB: Error number 0 means 'Success'.
    InnoDB: Some operating system error numbers are described at
    InnoDB: http://dev.mysql.com/doc/refman/5.6/en/operating-system-error-codes.html
    2015-05-01 04:50:51 2033 [ERROR] /opt/mysql/bin/mysqld: The table 'sales' is full
    2015-05-01 04:50:51 2033 [ERROR] Slave SQL: Worker 14 failed executing transaction '328e26e9-ea51-11e4-8023-080027745404:242' at master log mysql-bin.000002, end_log_pos 275717680; Could not execute Write_rows event on table test1.sales; The table 'sales' is full, Error_code: 1114; handler error HA_ERR_RECORD_FILE_FULL; the event's master log mysql-bin.000002, end_log_pos 275717680, Error_code: 1114
    2015-05-01 04:50:51 2033 [Warning] Slave SQL: ... The slave coordinator and worker threads are stopped, possibly leaving data in inconsistent state. A restart should restore consistency automatically, although using non-transactional storage for data or info tables or DDL queries could lead to problems. In such cases you have to examine your data (see documentation for details). Error_code: 1756
    2015-05-01 04:50:51 2033 [Note] Error reading relay log event: slave SQL thread was killed
    2015-05-01 04:50:51 2033 [Warning] Slave SQL: ... The slave coordinator and worker threads are stopped, possibly leaving data in inconsistent state. A restart should restore consistency automatically, although using non-transactional storage for data or info tables or DDL queries could lead to problems. In such cases you have to examine your data (see documentation for details). Error_code: 1756
    2015-05-01 04:50:51 2033 [Warning] Slave SQL: ... The slave coordinator and worker threads are stopped, possibly leaving data in inconsistent state. A restart should restore consistency automatically, although using non-transactional storage for data or info tables or DDL queries could lead to problems. In such cases you have to examine your data (see documentation for details). Error_code: 1756
    2015-05-01 04:50:52 2033 [Warning] Disk is full writing '/opt/mysql/datadir/mysql-relay-bin.000003' (Errcode: 28 - No space left on device). Waiting for someone to free space...

The interesting thing is the OS error number, which is equal to 0, and 0 means "success". After that, if you try to disable binary log/GTID and store only relay-log information in my.cnf as follows:

    # BINARY LOGGING # # 
    server_id = 2 
    #log_bin = /opt/mysql/datadir/mysql-bin 
    #log_bin_index = /opt/mysql/datadir/mysql-bin 
    #expire_logs_days = 14 
    #sync_binlog = 1 
    #binlog_format = row 
    relay_log = /opt/mysql/datadir/mysql-relay-bin 
    #log_slave_updates = 1 
    read_only = 1 
    #gtid-mode = on 
    #enforce-gtid-consistency = true 
    master-info-repository = TABLE 
    relay-log-info-repository = TABLE 
    #slave-parallel-workers = 15 
    #binlog-checksum = CRC32 
    #master-verify-checksum = 1 
    #slave-sql-verify-checksum = 1 
    #binlog-rows-query-log_events = 1 

If you try to start the server, there will be some interesting errors in the error log:

2015-05-01 05:05:09 2698 [ERROR] /opt/mysql/bin/mysqld: Found a Gtid_log_event or Previous_gtids_log_event when @@GLOBAL.GTID_MODE = OFF. 
2015-05-01 05:05:14 2698 [ERROR] Error in Log_event::read_log_event(): 'read error', data_len: 8178, event_type: 30 
2015-05-01 05:05:14 2698 [Warning] Error reading GTIDs from binary log: -1 
2015-05-01 05:05:15 2698 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2015-05-01 05:05:15 2698 [Note] Slave I/O thread: connected to master 'repl@192.168.1.164:3307',replication started in log 'mysql-bin.000002' at position 204643802 
2015-05-01 05:05:16 2698 [ERROR] Slave I/O: The slave IO thread stops because the master has @@GLOBAL.GTID_MODE ON and this server has @@GLOBAL.GTID_MODE OFF, Error_code: 1593 
2015-05-01 05:05:16 2698 [Note] Slave I/O thread exiting, read up to log 'mysql-bin.000002', position 204643802 
2015-05-01 05:05:16 2698 [Note] Event Scheduler: Loaded 0 events 
2015-05-01 05:05:16 2698 [Note] /opt/mysql/bin/mysqld: ready for connections. Version: '5.6.24-debug' socket: '/opt/mysql/datadir/mysqld-new.sock' port: 3307 Shahriyar Rzayev's MySQL 
2015-05-01 05:05:16 2698 [Note] Slave SQL thread initialized, starting replication in log 'mysql-bin.000002' at position 274388137, relay log '/opt/mysql/datadir/mysql-relay-bin.000003' position: 274387894 
2015-05-01 05:05:16 2698 [ERROR] Slave SQL: @@SESSION.GTID_NEXT cannot be set to UUID:NUMBER when @@GLOBAL.GTID_MODE = OFF. Error_code: 1781 
2015-05-01 05:05:16 2698 [Warning] Slave: @@SESSION.GTID_NEXT cannot be set to UUID:NUMBER when @@GLOBAL.GTID_MODE = OFF. Error_code: 1781 
2015-05-01 05:05:16 2698 [ERROR] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'mysql-bin.000002' position 274388137

The errors about disabled GTID are expected here. The most interesting entries are:

[ERROR] Error in Log_event::read_log_event(): 'read error', data_len: 8178, event_type: 30
[Warning] Error reading GTIDs from binary log: -1

For the "Error reading GTIDs from binary log: -1" warning, see the related BUG report -> #72437. If you have hit another issue related to slave server disk usage, it will be interesting to hear from you.
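As a side note, you can confirm from the client what the error log is telling you about the server's state. A quick check (a sketch, assuming the local mysql client can connect; add -u/-p options as needed):

# with the config above this should report GTID_MODE = OFF and binary logging disabled
mysql -e "SELECT @@GLOBAL.GTID_MODE, @@GLOBAL.LOG_BIN;"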

Change binary log and relay log location to /home? SELinux issue

Recently my slave server hit a full-disk error because of the huge size of its binary logs, which led to a critical error and a BUG report that still stays open: http://bugs.mysql.com/bug.php?id=72437
The root cause was a partitioning decision: not enough space had been allocated to the "/" (root) filesystem, while "/home" still had plenty of free space. The quick fix was to change the binary log and relay log location (path) to "/home", but getting a properly working slave again turned out to be quite a challenge. Now we will explore all the steps on a test virtual machine.
Test conditions:
1. CentOS 6.5 (x86_64) (both master and slave)
2. MySQL 5.6.19 GA (from the official yum repo) (both master and slave)
Master’s my.cnf:

# BINARY LOGGING # 
server_id = 1 
log_bin = /var/lib/mysql/data/mysql-bin 
log_bin_index = /var/lib/mysql/data/mysql-bin 
expire_logs_days = 14 
sync_binlog = 1 
binlog_format = row 
gtid-mode = on 
enforce-gtid-consistency = true 
master-info-repository = TABLE 
relay-log-info-repository = TABLE 
slave-parallel-workers = 2 
binlog-checksum = CRC32 
master-verify-checksum = 1 
slave-sql-verify-checksum = 1 
binlog-rows-query-log_events = 1 
log_slave_updates = 1

Slave’s original my.cnf:

# BINARY LOGGING # 
server_id = 2 
log_bin = /var/lib/mysql/data/mysql-bin 
log_bin_index = /var/lib/mysql/data/mysql-bin 
expire_logs_days = 14 
sync_binlog = 1 
binlog_format = row 
relay_log = /var/lib/mysql/data/mysql-relay-bin 
log_slave_updates = 1 
read_only = 1 
gtid-mode = on 
enforce-gtid-consistency = true 
master-info-repository = TABLE 
relay-log-info-repository = TABLE 
#slave-parallel-workers = 2 
binlog-checksum = CRC32 
master-verify-checksum = 1 
slave-sql-verify-checksum = 1 
binlog-rows-query-log_events = 1

So we assume that we already have a working master -> slave topology. Let's examine the slave host for further explanation:

[root@linuxsrv2 home]# df -h 
Filesystem Size Used Avail Use% Mounted on 
/dev/mapper/vg_linuxsrv1-lv_root 48G 2,3G 43G 6% / 
tmpfs 246M 0 246M 0% /dev/shm 
/dev/sda1 485M 57M 403M 13% /boot 
/dev/mapper/new_lv_linuxsrv1-lv_home 7,4G 145M 6,9G 3% /home
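Before touching any files it is also worth confirming where the slave currently writes its binary and relay logs. A quick check (a sketch, assuming the local mysql client can connect as root; adjust credentials as needed):

# current binary log and relay log locations on the slave
mysql -e "SHOW VARIABLES LIKE 'log_bin_basename'; SHOW VARIABLES LIKE 'relay_log_basename';"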

So we have separate free space mounted on /home, and now we want to move the slave's binary log and relay log files into that directory. Step-by-step:

1. Create directory:

[root@linuxsrv2 ~]# mkdir /home/binlogs 
[root@linuxsrv2 ~]# ls -ld /home/binlogs/ 
drwxr-xr-x. 2 root root 4096 2014-06-13 15:39 /home/binlogs/

2. Change the slave's my.cnf to reflect the new location. The new slave my.cnf should look something like this:

# BINARY LOGGING # 
server_id = 2 
log_bin = /home/binlogs/mysql-bin 
log_bin_index = /home/binlogs/mysql-bin 
expire_logs_days = 14 
sync_binlog = 1 
binlog_format = row 
relay_log = /home/binlogs/mysql-relay-bin 
log_slave_updates = 1 
read_only = 1 
gtid-mode = on 
enforce-gtid-consistency = true 
master-info-repository = TABLE 
relay-log-info-repository = TABLE 
#slave-parallel-workers = 2 
binlog-checksum = CRC32 
master-verify-checksum = 1 
slave-sql-verify-checksum = 1 
binlog-rows-query-log_events = 1
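Note that only the three log paths change compared to the original config. If you keep copies of the old and new files around, a quick diff makes that obvious (a sketch, assuming you saved them as /home/my_original.cnf and /home/my_changed.cnf, the copies visible in the /home listings later in this post):

# only log_bin, log_bin_index and relay_log should differ
diff /home/my_original.cnf /home/my_changed.cnf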

3. Stop MySQL slave server:

[root@linuxsrv2 ~]# service mysqld stop 
Stopping mysqld: [ OK ]

4. Move all binary log and relay log files to new directory “/home/binlogs”:

[root@linuxsrv2 ~]# cd /var/lib/mysql/data/
[root@linuxsrv2 data]# ls 
mysql-bin.000001 mysql-bin.index mysql-relay-bin.000003 mysql-relay-bin.000004 mysql-relay-bin.index mysql-slow.log 
[root@linuxsrv2 data]# mv mysql-bin.* /home/binlogs/ 
[root@linuxsrv2 data]# ls
mysql-relay-bin.000003  mysql-relay-bin.000004  mysql-relay-bin.index  mysql-slow.log
[root@linuxsrv2 data]# mv mysql-relay-bin.* /home/binlogs/
[root@linuxsrv2 data]# ls
mysql-slow.log
[root@linuxsrv2 data]# ls -l /home/binlogs/ 
total 20 
-rw-rw----. 1 mysql mysql 1320 2014-06-13 15:46 mysql-bin.000001 
-rw-rw----. 1 mysql mysql 37 2014-06-13 15:28 mysql-bin.index 
-rw-rw----. 1 mysql mysql 741 2014-06-13 15:31 mysql-relay-bin.000003 
-rw-rw----. 1 mysql mysql 471 2014-06-13 15:46 mysql-relay-bin.000004 
-rw-rw----. 1 mysql mysql 86 2014-06-13 15:31 mysql-relay-bin.index

5. Change owner of /home/binlogs to mysql:mysql:

[root@linuxsrv2 data]# cd /home 
[root@linuxsrv2 home]# ls
binlogs  lost+found  my_changed.cnf  my_original.cnf
[root@linuxsrv2 home]# chown -R mysql:mysql binlogs
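At this point it is also worth looking at the SELinux label the new directory carries, because that is what will trip up mysqld in a moment. A quick check (not part of the original steps; the exact type depends on your policy):

# likely a generic type such as file_t here, matching the AVC denial shown later
ls -Zd /home/binlogs/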

6. Go to the new directory and edit the *.index files of both the binary logs and the relay logs. Why? Open these files and you will see that the old path is still recorded there:

[root@linuxsrv2 binlogs]# cat mysql-bin.index
/var/lib/mysql/data/mysql-bin.000001 

[root@linuxsrv2 binlogs]# cat mysql-relay-bin.index
/var/lib/mysql/data/mysql-relay-bin.000003
/var/lib/mysql/data/mysql-relay-bin.000004 

So after editing it must be something like:

[root@linuxsrv2 binlogs]# cat mysql-bin.index 
/home/binlogs/mysql-bin.000001 

[root@linuxsrv2 binlogs]# cat mysql-relay-bin.index
/home/binlogs/mysql-relay-bin.000003 
/home/binlogs/mysql-relay-bin.000004
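If you prefer not to edit the files by hand, the same change can be done with a one-liner (a sketch, assuming GNU sed and the paths used above; double-check the result with cat afterwards):

# rewrite the old datadir path to the new location in both index files
sed -i 's|/var/lib/mysql/data/|/home/binlogs/|g' /home/binlogs/mysql-bin.index /home/binlogs/mysql-relay-bin.index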

7. Now try to start MySQL:

[root@linuxsrv2 binlogs]# service mysqld start 
MySQL Daemon failed to start. 
Starting mysqld: [FAILED]

See error log:

140613 15:46:19 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended
140613 16:01:50 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
2014-06-13 16:02:15 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2014-06-13 16:02:16 12358 [Warning] Buffered warning: Performance schema disabled (reason: init failed).
/usr/sbin/mysqld: File '/home/binlogs/mysql-bin.index' not found (Errcode: 13 - Permission denied)
2014-06-13 16:02:16 12358 [ERROR] Aborting 
2014-06-13 16:02:16 12358 [Note] Binlog end 
2014-06-13 16:02:16 12358 [Note] /usr/sbin/mysqld: Shutdown complete

But as you can see, we already changed the owner of this directory to the mysql user, yet the error still occurred. This is SELinux doing its job: it blocks mysqld from using directories other than /var/lib/mysql. You could simply disable SELinux altogether, but that is not best practice. The first option is to run the following SELinux-related commands:

$ semanage fcontext -a -t mysqld_db_t "/home(/.*)?"
$ restorecon -Rv /home
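After these two commands you can verify that the rule was recorded and that the label was actually applied (the exact type names depend on your policy version):

# the fcontext rule we just added should be listed
semanage fcontext -l | grep mysqld_db_t | grep home
# and the directory should now carry the mysqld_db_t type
ls -Zd /home/binlogs/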

But if you are not comfortable with these commands, you can generate the needed policy automatically using audit2allow:

8. Install “policycoreutils-python” on Linux:

[root@linuxsrv2 ~]# yum install policycoreutils-python
[root@linuxsrv2 ~]# rpm -qa | grep policy*
selinux-policy-3.7.19-231.el6_5.3.noarch
policycoreutils-python-2.0.83-19.39.el6.x86_64
checkpolicy-2.0.22-1.el6.x86_64
policycoreutils-2.0.83-19.39.el6.x86_64
selinux-policy-targeted-3.7.19-231.el6_5.3.noarch 

9. Configuring SELinux:
##1##

[root@linuxsrv2 home]# audit2allow -w -a
type=AVC msg=audit(1402659916.941:291): avc: denied { search } for pid=14356 comm="mysqld" name="/" dev=dm-2 ino=2 scontext=unconfined_u:system_r:mysqld_t:s0 tcontext=system_u:object_r:file_t:s0 tclass=dir
 Was caused by: Missing type enforcement (TE) allow rule.
 You can use audit2allow to generate a loadable module to allow this access.

##2##

[root@linuxsrv2 home]# audit2allow -a
#============= mysqld_t ==============
#!!!! The source type 'mysqld_t' can write to a 'dir' of the following types:
# var_log_t, mysqld_var_run_t, mysqld_db_t, tmp_t, mysqld_tmp_t, var_lib_t, var_run_t, cluster_var_lib_t, cluster_var_run_t, root_t, cluster_conf_t
allow mysqld_t file_t:dir { write search read remove_name open add_name };
#!!!! The source type 'mysqld_t' can write to a 'file' of the following types:
# mysqld_var_run_t, mysqld_db_t, mysqld_tmp_t, mysqld_log_t, cluster_var_lib_t, cluster_var_run_t, root_t, cluster_conf_t
allow mysqld_t file_t:file { rename read create write getattr unlink open append };

##3##

[root@linuxsrv2 home]# audit2allow -a -M mymysqlallow 
******************** IMPORTANT *********************** 
To make this policy package active, execute:

semodule -i mymysqlallow.pp

##4##

[root@linuxsrv2 home]# ls 
binlogs  lost+found  my_changed.cnf  mymysqlallow.pp  mymysqlallow.te  my_original.cnf
[root@linuxsrv2 home]# semodule -i mymysqlallow.pp 
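You can confirm that the module is actually loaded before retrying MySQL:

# the custom module should appear in the list of loaded policy modules
semodule -l | grep mymysqlallow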

10. Try again to start MySQL:

[root@linuxsrv2 ~]# service mysqld start
Starting mysqld:                                           [  OK  ]

But the slave itself is not working yet.
From SHOW SLAVE STATUS: Slave_IO_Running: No, Slave_SQL_Running: No
And from the error log:

2014-06-13 17:03:36 15360 [ERROR] Failed to open the relay log '/var/lib/mysql/data/mysql-relay-bin.000004' (relay_log_pos 448). 
2014-06-13 17:03:36 15360 [ERROR] Could not find target log file mentioned in relay log info in the index file '/home/binlogs/mysql-relay-bin.index' during relay log initialization. 
2014-06-13 17:03:36 15360 [ERROR] Failed to initialize the master info structure 
2014-06-13 17:03:36 15360 [Note] Check error log for additional messages. You will not be able to start replication until the issue is resolved and the server restarted. 

Here is another issue: we defined the path correctly in the mysql-relay-bin.index file, yet the error still shows the old path. Why? If you examine the mysql database you will see some tables like these:

mysql> show tables like 'slave%'; 
+--------------------------+ 
| Tables_in_mysql (slave%) |
+--------------------------+ 
| slave_master_info | 
| slave_relay_log_info | 
| slave_worker_info | 
+--------------------------+ 
3 rows in set (0,00 sec)

Select from slave_relay_log_info table:

mysql> select * from slave_relay_log_info\G
*************************** 1. row *************************** 
Number_of_lines: 7 
Relay_log_name: /var/lib/mysql/data/mysql-relay-bin.000004 
Relay_log_pos: 448 
Master_log_name: mysql-bin.000002 
Master_log_pos: 478 
Sql_delay: 0 
Number_of_workers: 0 
Id: 1 
1 row in set (0,00 sec) 

Yes, as you can see, although we have defined the new path, it has not been updated in this table. So the most straightforward option is to update this table manually with the new path:

mysql> update slave_relay_log_info set relay_log_name='/home/binlogs/mysql-relay-bin.000004' where id=1; 
Query OK, 1 row affected (0,05 sec) 
Rows matched: 1 Changed: 1 Warnings: 0 

11. Restart MySQL again:

[root@linuxsrv2 ~]# service mysqld restart 
Stopping mysqld: [ OK ] 
Starting mysqld: [ OK ]

Check whether the slave is working properly now: Slave_IO_Running: Yes, Slave_SQL_Running: Yes. And the binary logs and relay logs of the slave server are now located under /home:

[root@linuxsrv2 ~]# ls -l /home/binlogs/ 
total 28 
-rw-rw----. 1 mysql mysql 1320 2014-06-13 15:46 mysql-bin.000001 
-rw-rw----. 1 mysql mysql 214 2014-06-13 17:21 mysql-bin.000002 
-rw-rw----. 1 mysql mysql 333 2014-06-13 17:26 mysql-bin.000003 
-rw-rw----. 1 mysql mysql 93 2014-06-13 17:22 mysql-bin.index 
-rw-rw----. 1 mysql mysql 173 2014-06-13 17:22 mysql-relay-bin.000006 
-rw-rw----. 1 mysql mysql 590 2014-06-13 17:26 mysql-relay-bin.000007 
-rw-rw----. 1 mysql mysql 74 2014-06-13 17:22 mysql-relay-bin.index
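A quick one-liner to confirm that both replication threads stay up after the restart (assuming local root access to the mysql client; add credentials as needed):

# both lines should report Yes
mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_(IO|SQL)_Running:'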

Thanks for reading 😉