Getting Started with MySQL Group Replication on Ubuntu using MySQL Sandbox

Welcome to the next post dedicated to the Group Replication plugin. Today's topic is getting started with Group Replication. As you may remember, we set up our test environment using MySQL Sandbox on Ubuntu 14.04 and compiled the plugin against MySQL 5.7.8-rc. Please refer to these posts respectively:

  1. MySQL Sandbox creating test environment
  2. Compiling MySQL Group replication plugin

So we already have our 3 nodes of MySQL 5.7.8-rc with the Group Replication plugin.
Before starting Group Replication, we need to set up Corosync. Here is a dedicated article on this -> Corosync Guide

First, I want to talk about some issues you may run into, and help you resolve them.
The first issue shows up when you try to start Group Replication without starting the corosync daemon, or while corosync is stopped:

mysql> START GROUP_REPLICATION;
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.

The error suggests that you will find more details in the error log, but you won't 🙂
So just be sure to start the corosync daemon before getting your hands on Group Replication.
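
On Ubuntu 14.04, a minimal way to start the daemon and confirm it is running (a sketch, assuming the stock init script shipped by the corosync package):

sudo service corosync start   # start the cluster engine
sudo service corosync status  # confirm the daemon is actually up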

The second issue is about storing the correct user in the /etc/corosync/uidgid.d directory.
For example, if you start the MySQL service as the Linux user 'john', you must put that user into a file in this directory. For me, it is the local user 'sh' on my Ubuntu machine:

sh@shrzayev:~$ cat /etc/corosync/uidgid.d/users 
uidgid {
 uid: sh
 gid: sh
}

If you save the wrong user there, you will likely see:

node1 [localhost] {msandbox} ((none)) > start group_replication;
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.

But again, there will be no entry in the error log indicating the exact issue.
Just be sure to store the exact user under which MySQL connects to corosync.
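
To double-check which Linux user your mysqld processes actually run as (a quick sketch; it assumes the sandbox nodes are already running under your login user):

ps -o user=,pid=,cmd= -p "$(pidof mysqld)"   # the user column must match the uidgid entry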

These tips came from Pedro Gomes, a replication developer at Oracle.
For further discussion, please refer to the related bug report ->
#78619

Let's assume you have successfully completed the Corosync setup.
Now it is time to configure the MySQL server for the Group Replication plugin.
On all 3 nodes, the my.cnf file should include the following settings:

server-id                      = UNIQUE_NUMBER_HERE
binlog-format                  = row
enforce-gtid-consistency       = true
gtid-mode                      = on
log_slave_updates              = 1
master-info-repository         = TABLE
relay-log-info-repository      = TABLE
binlog-checksum                = NONE
transaction-write-set-extraction=MURMUR32

With MySQL Sandbox, a unique server-id and some other useful system variables already come preset in each node's my.sandbox.cnf file, so you only need to add the parameters that are missing.
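
A quick way to append the missing settings to all 3 nodes is sketched below; it assumes the multi_msb_5_7_8 sandbox layout used in this post and that [mysqld] is the last section in my.sandbox.cnf, as it is in a stock sandbox:

for n in 1 2 3; do
  cat >> ~/sandboxes/multi_msb_5_7_8/node$n/my.sandbox.cnf <<'EOF'
# Group Replication prerequisites; server-id is already set by the sandbox
binlog-format                  = row
enforce-gtid-consistency       = true
gtid-mode                      = on
log_slave_updates              = 1
master-info-repository         = TABLE
relay-log-info-repository      = TABLE
binlog-checksum                = NONE
transaction-write-set-extraction=MURMUR32
EOF
done

With the config in place, start all 3 nodes for the first time: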

sh@shrzayev:~/sandboxes/multi_msb_5_7_8/node[1,2,3]$ ./start
.. sandbox server started

Just be sure that all servers started successfully:

sh@shrzayev:~/sandboxes/multi_msb_5_7_8/node[1,2,3]$ ./status
node1 on

Now stop all 3 nodes and pass --plugin-load=group_replication.so to the ./start script:

sh@shrzayev:~/sandboxes/multi_msb_5_7_8/node[1,2,3]$ ./start --plugin-load=group_replication.so
... sandbox server started

You can check the plugin status:

node[1,2,3] [localhost] {msandbox} ((none)) > select plugin_name, plugin_status from information_schema.plugins where plugin_name like 'group_%';
+-------------------+---------------+
| plugin_name       | plugin_status |
+-------------------+---------------+
| group_replication | ACTIVE        |
+-------------------+---------------+
1 row in set (0,01 sec)
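
If you would rather load the plugin once instead of passing --plugin-load on every start, MySQL can also load plugins at runtime. A sketch (I have not verified this against the 5.7.8-rc lab build, which may still require the startup option):

-- Load the plugin at runtime; the .so name matches the --plugin-load option above
INSTALL PLUGIN group_replication SONAME 'group_replication.so';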

You have now enabled the group_replication plugin.
Next, we need to create the replication user; do it only on node1:

CREATE USER 'rpl_user'@'%' IDENTIFIED BY 'rpl_pass';
GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';
GRANT SELECT on performance_schema.global_variables TO rpl_user@'%';

Note that we also need to grant SELECT on the performance_schema.global_variables table; without it, recovery will fail with an error like this:

2015-10-08T07:32:26.000461Z 8 [ERROR] Slave I/O for channel 'group_replication_recovery': The slave I/O thread stops because a fatal error is encountered when it try to get the value of SERVER_ID variable from master. Error: SELECT command denied to user 'rpl_user'@'localhost' for table 'global_variables', Error_code: 1142
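
You can double-check what the user ended up with; this is plain SHOW GRANTS, nothing specific to Group Replication:

SHOW GRANTS FOR 'rpl_user'@'%';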

Now there are some global variables to set. Do it on node1:

SET GLOBAL group_replication_group_name= "8a94f357-aab4-11df-86ab-c80aa9429562";
SET GLOBAL group_replication_recovery_user='rpl_user';
SET GLOBAL group_replication_recovery_password='rpl_pass';
SET GLOBAL group_replication_recovery_retry_count= 2;
SET GLOBAL group_replication_recovery_reconnect_interval= 120;
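
Note that the group name must be a valid UUID and must be identical on every member. If you want to generate a fresh one instead of reusing the value above, any node can produce it (a trivial sketch):

SELECT UUID();  -- configure the same generated value on all members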

After all that, we can start Group Replication:

node1 [localhost] {msandbox} ((none)) > start group_replication;
Query OK, 0 rows affected (0,16 sec)

In the MySQL error log, you will see new entries:

2015-10-08T05:55:22.419952Z 4 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 3307, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 3307, master_log_file='', master_log_pos= 4, master_bind=''.
2015-10-08T05:55:22.578212Z 3 [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
2015-10-08T05:55:22.578283Z 5 [Note] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position 0, relay log './mysql-relay-group_replication_applier.000001' position: 4
2015-10-08T05:55:22.578306Z 3 [Note] Plugin group_replication reported: 'auto_increment_increment is set to 7'
2015-10-08T05:55:22.578359Z 3 [Note] Plugin group_replication reported: 'auto_increment_offset is set to 101'
2015-10-08T05:55:22.579736Z 0 [Note] Plugin group_replication reported: 'Starting group replication recovery with view_id 571734024:1'
2015-10-08T05:55:22.712797Z 6 [Note] Plugin group_replication reported: 'Only one server alive. Declaring this server as online within the replication group'
2015-10-08T05:55:22.713721Z 0 [Note] Plugin group_replication reported: 'This server was declared online within the replication group'

You can even see the new connection in the corosync log file:

Oct 08 10:55:22 [1257] shrzayev corosync debug   [QB    ] IPC credentials authenticated (1293-7202-19)
Oct 08 10:55:22 [1257] shrzayev corosync debug   [QB    ] connecting to client [7202]
Oct 08 10:55:22 [1257] shrzayev corosync debug   [QB    ] shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 08 10:55:22 [1257] shrzayev corosync debug   [QB    ] shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 08 10:55:22 [1257] shrzayev corosync debug   [QB    ] shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 08 10:55:22 [1257] shrzayev corosync debug   [MAIN  ] connection created
Oct 08 10:55:22 [1257] shrzayev corosync debug   [CPG   ] lib_init_fn: conn=0x7fe2da6a4e60, cpd=0x7fe2da6a5454
Oct 08 10:55:22 [1257] shrzayev corosync debug   [CPG   ] got procjoin message from cluster node 2130706433 (r(0) ip(127.0.0.1) ) for pid 7202

Now you can check the Group Replication connection status:

node1 [localhost] {msandbox} ((none)) > SELECT * FROM performance_schema.replication_connection_status\G
*************************** 1. row ***************************
             CHANNEL_NAME: group_replication_applier
               GROUP_NAME: 8a94f357-aab4-11df-86ab-c80aa9429562
              SOURCE_UUID: 8a94f357-aab4-11df-86ab-c80aa9429562
                THREAD_ID: NULL
            SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
 LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
 RECEIVED_TRANSACTION_SET: 8a94f357-aab4-11df-86ab-c80aa9429562:1-2
        LAST_ERROR_NUMBER: 0
       LAST_ERROR_MESSAGE: 
     LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
1 row in set (0,01 sec)

So there are no errors here; it is working.
You can also check the group members:

node1 [localhost] {msandbox} ((none)) > SELECT * FROM performance_schema.replication_group_members\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 00008379-1111-1111-1111-111111111111
 MEMBER_HOST: shrzayev
 MEMBER_PORT: 8379
MEMBER_STATE: ONLINE
1 row in set (0,01 sec)

At this moment we have only 1 member -> node1.
Let's add node2 too. The procedure is the same: set the global variables and start Group Replication. On node2:

SET GLOBAL group_replication_group_name= "8a94f357-aab4-11df-86ab-c80aa9429562";
SET GLOBAL group_replication_recovery_user='rpl_user';
SET GLOBAL group_replication_recovery_password='rpl_pass';
SET GLOBAL group_replication_recovery_retry_count= 2;
SET GLOBAL group_replication_recovery_reconnect_interval= 120;

And start group replication:

node2 [localhost] {msandbox} ((none)) > start group_replication;
Query OK, 0 rows affected (0,16 sec)

The same procedure goes for node3, and then we will have 3 nodes in our group:

node1 [localhost] {msandbox} ((none)) > select * from performance_schema.replication_group_members\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 00008379-1111-1111-1111-111111111111
 MEMBER_HOST: shrzayev
 MEMBER_PORT: 8379
MEMBER_STATE: ONLINE
*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 00008380-2222-2222-2222-222222222222
 MEMBER_HOST: shrzayev
 MEMBER_PORT: 8380
MEMBER_STATE: ONLINE
*************************** 3. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 00008381-3333-3333-3333-333333333333
 MEMBER_HOST: shrzayev
 MEMBER_PORT: 8381
MEMBER_STATE: ONLINE
3 rows in set (0,00 sec)
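
As the group grows, a compact way to confirm that every member is ONLINE is to aggregate over the same table (a convenience sketch):

SELECT MEMBER_STATE, COUNT(*) AS members
FROM performance_schema.replication_group_members
GROUP BY MEMBER_STATE;  -- expect a single ONLINE row with a count of 3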

That's all. This is how we can use MySQL Sandbox on Ubuntu 14.04 to create a test environment and get started with the Group Replication plugin.
From now on, you should be able to get started with Group Replication on your own.
If you have your own test cases, feel free to share them or to ask questions.
