Using MySQL Connector/Python X (mysqlx module)

This post is about basic usage of the mysqlx module, i.e. the X Plugin, with the latest Connector/Python DMR.
The Connector/Python version is 8.0.4.

Installing:

wget https://dev.mysql.com/get/Downloads/Connector-Python/mysql-connector-python-8.0.4-0.1.dmr.el7.x86_64.rpm
sudo yum install mysql-connector-python-8.0.4-0.1.dmr.el7.x86_64.rpm

Sample Python code:

# Connecting to MySQL and working with a Session
import mysqlx

# Connect to a dedicated MySQL server
session = mysqlx.get_session({
    'host': 'localhost',
    'port': 33060,
    'user': 'bakux',
    'password': 'Baku12345',
    'ssl-mode': mysqlx.SSLMode.DISABLED
})

schema = session.get_schema('generated_columns_test')

# Create 'my_collection' in schema
schema.create_collection('my_collection')

# Get 'my_collection' from schema
collection = schema.get_collection('my_collection')

assert(True == collection.exists_in_database())

# You can also add multiple documents at once
collection.add({'_id': '2', 'name': 'Sakila', 'age': 15},
            {'_id': '3', 'name': 'Jack', 'age': 15},
            {'_id': '4', 'name': 'Clare', 'age': 37}).execute()

collection.remove('_id = 1').execute()

assert(3 == collection.count())

The code is quite clean: basically, we have created a collection and inserted some documents.
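To read the documents back, the same CRUD-style API can be used; a minimal sketch, reusing the session and collection objects from the snippet above (the filter values are just examples):

# Fetch documents matching a bound condition and print a couple of fields
result = collection.find('age > :min_age').bind('min_age', 10).execute()
for doc in result.fetch_all():
    print('{0} is {1} years old'.format(doc['name'], doc['age']))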

From the MySQL client side, the structure of this “table” (i.e. the collection) is:

CREATE TABLE `my_collection` (
  `doc` json DEFAULT NULL,
  `_id` varchar(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$._id'))) STORED NOT NULL,
  PRIMARY KEY (`_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4

The collection has a stored generated column, _id, which is extracted from the doc column.

select * from my_collection;
+-------------------------------------------+-----+
| doc                                       | _id |
+-------------------------------------------+-----+
| {"_id": "2", "age": 15, "name": "Sakila"} | 2   |
| {"_id": "3", "age": 15, "name": "Jack"}   | 3   |
| {"_id": "4", "age": 37, "name": "Clare"}  | 4   |
+-------------------------------------------+-----+
3 rows in set (0.00 sec)
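Because _id is generated from the document itself, inserting straight into the doc column through plain SQL also fills in _id. A small sketch using the X DevAPI SQL interface and the session from above; the document values here are made up:

# Insert a raw JSON document via SQL; the stored generated column _id
# is computed automatically from $._id
session.sql(
    'INSERT INTO generated_columns_test.my_collection (doc) '
    'VALUES (\'{"_id": "5", "name": "Bob", "age": 20}\')'
).execute()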

This default behavior restricted my tests with other storage engines, because it is impossible to change the storage engine of such a default “collection”:

ERROR 3106 (HY000): 'Specified storage engine' is not supported for generated columns.

By the way, while testing I found and reported some bugs:

https://bugs.mysql.com/bug.php?id=87472
https://bugs.mysql.com/bug.php?id=87475
https://bugs.mysql.com/bug.php?id=87492
https://bugs.mysql.com/bug.php?id=87493

Testing MySQL 8.0 – let me create a ton of undo files

This is a belated blog post about 2 recent bug reports 🙂
#85969
#85971

The basic idea came after reading -> http://mysqlserverteam.com/the-mysql-8-0-1-milestone-release-is-available/

So, the result of the test ->
After each restart of MySQL, new undo log files are created and the old files are kept.

shahriyar.rzaev@qaserver-06:~/sandboxes/msb_8_0_1/data$ du -hs
6.4G	
# The count of undo files
shahriyar.rzaev@qaserver-06:~/sandboxes/msb_8_0_1/data$ ls | grep undo | wc -l
539

After another restart:

# New count
shahriyar.rzaev@qaserver-06:~/sandboxes/msb_8_0_1/data$ ls | grep undo | wc -l
616

So how to reproduce?

1. Started the server with the following settings:

early-plugin-load=keyring_file.so
keyring_file_data=/home/shahriyar.rzaev/sandboxes/msb_8_0_1/tmp/keyring

innodb_redo_log_encrypt=ON
innodb_undo_tablespaces=77
innodb_undo_logs=77
innodb_undo_log_encrypt=ON

2. Then killed it with kill -9; on startup it failed with the following error:

2017-05-08T08:54:07.174124Z 1 [ERROR] InnoDB: Expected to open 77 undo tablespaces but was able to find only 0 undo tablespaces. Set the innodb_undo_tablespaces parameter to the correct value and retry. Suggested value is 0
2017-05-08T08:54:07.174147Z 1 [ERROR] InnoDB: Plugin initialization aborted with error Generic error
2017-05-08T08:54:07.674794Z 1 [ERROR] Failed to initialize DD Storage Engine
2017-05-08T08:54:07.674922Z 0 [ERROR] Data Dictionary initialization failed.
2017-05-08T08:54:07.674932Z 0 [ERROR] Aborting

3. To overcome this issue, I changed the config and restarted MySQL:

innodb_redo_log_encrypt=ON
innodb_undo_tablespaces=0
innodb_undo_logs=77
innodb_undo_log_encrypt=ON

And it started.

4. Then I changed the config back again and restarted MySQL:

innodb_redo_log_encrypt=ON
innodb_undo_tablespaces=77
innodb_undo_logs=77
innodb_undo_log_encrypt=ON

From now on, each restart creates new undo files.
See the full test result here:
https://bugs.mysql.com/file.php?id=25383&bug_id=85971
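To watch the growth locally, the shell check above can also be scripted; a tiny sketch (the datadir path is from my sandbox, adjust it for yours):

# Count the undo files in the datadir, mirroring `ls | grep undo | wc -l`;
# run it before and after a restart and compare the numbers.
import os

datadir = os.path.expanduser('~/sandboxes/msb_8_0_1/data')
undo_files = [name for name in os.listdir(datadir) if 'undo' in name]
print('undo files: {0}'.format(len(undo_files)))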

Experimental MySQL UDF code partially written in Python

Hi dear all,
I would like to share our experimental tests on writing a UDF with Python.
The basic idea was to write a UDF for factorial calculation. The interesting part was moving the factorial calculation logic into Python code and then calling this Python file from inside the UDF code.

Here is the repo link -> UDFs-written-in-Python

The factorial calculation is here -> factorial.py
UDF code is here -> factorial.cc

The basic idea of the UDF code is quite simple, as it has 3 parts:
1. initialization -> factorial_init() -> all the necessary checks reside here
2. main logic -> factorial() -> calls the Python script, reads the data back and returns the result (see the sketch of the Python side below)
3. deinitialization -> factorial_deinit()
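The real factorial.py lives in the repo linked above; just to illustrate the idea, a minimal version could look like this (assuming the UDF passes the number as a command-line argument and captures whatever the script prints):

#!/usr/bin/env python
# Minimal factorial helper: read N from argv, print N! to stdout
# so the calling UDF can read the result back.
import sys


def factorial(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result


if __name__ == '__main__':
    print(factorial(int(sys.argv[1])))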

Compiled as:

gcc $(/home/sh/REPOS/MYSQL_DEV_ACTIONS/mysql-5.7/mysql-server/5.7.16/bin/mysql_config --cflags) -fPIC -shared -o factorial.so factorial.cc -std=c++11

Copied to plugin_dir:

cp factorial.so /home/sh/REPOS/MYSQL_DEV_ACTIONS/mysql-5.7/mysql-server/5.7.16/lib/plugin/
cp factorial.py /home/sh/REPOS/MYSQL_DEV_ACTIONS/mysql-5.7/mysql-server/5.7.16/lib/plugin/

Activated as:
mysql > create function factorial returns integer soname 'factorial.so';

Result:

root@test1:~# python --version
Python 2.7.12
mysql> select factorial(5);
+--------------+
| factorial(5) |
+--------------+
|          120 |
+--------------+
1 row in set (0.02 sec)

My first patch for the MySQL world; fixing BUG #76852

Hi dear MySQL Community.
I have reported a bunch of bugs and written some articles, but today is special for me 🙂
because today I sent my first PR for the MySQL source code.

So I logged a bug -> #76852 and have now fixed it, after a year.
Okay, I know I am lazy.

Let’s reproduce the bug.
Simulating a full-disk condition is as easy as:

mkdir /filesystems
dd if=/dev/zero of=/filesystems/tmp_fs bs=1024 count=0 seek=$[1024*10]
mkfs.ext4 /filesystems/tmp_fs
mkdir small_mounted_dir
mount -o loop /filesystems/tmp_fs /home/sh/small_mounted_dir/

Then obtain the MySQL source and compile it with DEBUG. If you don’t know how to do this, just refer to -> installing-mysql-from-source-cmake-issues

Now you can use MySQL Sandbox to install 1 master and 2 slaves:

make_sandbox_from_source /home/sh/MySQLPakcages/mysql-5.6.32 replication

Inside the main sandbox path there is a script for enabling GTID:

/home/sh/sandboxes/rsandbox_mysql-5_6_32/enable_gtid

Run it.

Then stop the master using the ./stop script inside the master folder:

/home/sh/sandboxes/rsandbox_mysql-5_6_32/master/stop

Move the binary log mysql-bin.* files from the datadir to /home/sh/small_mounted_dir/.
Update my.sandbox.cnf with the new path for the log-bin variable, then start MySQL using the ./start script.
You can run sysbench to populate data:

 sysbench --test=/home/sh/REPOS/sysbench/sysbench/tests/db/oltp.lua --oltp-test-mode=complex \
 --num-threads=100 --mysql-db=dbtest --mysql-user=root --mysql-password=msandbox --db-driver=mysql \
 --max-requests=0 --innodb-table-size=1000000 --mysql-socket=/tmp/mysql_sandbox23097.sock prepare
  
sysbench --test=/home/sh/REPOS/sysbench/sysbench/tests/db/insert.lua --oltp-test-mode=complex \
--num-threads=100 --mysql-db=dbtest --mysql-user=root --mysql-password=msandbox --db-driver=mysql  \
--max-requests=0 --innodb-table-size=1000000 --mysql-socket=/tmp/mysql_sandbox23097.sock run

Next, you will likely see something like this:

2016-11-08 00:56:32 13417 [Warning] Disk is full writing '/home/sh/small_mounted_dir/mysql-bin.000003' (Errcode: 28 - No space left on device). Waiting for someone to free space...
2016-11-08 00:56:32 13417 [Warning] Retry in 60 secs. Message reprinted in 600 secs

If you try to stop MySQL, you will likely hit -> #83705

And if you try to start it, you will likely hit -> #76852

According to the core dump:

(gdb) bt
#0  0x0000000000000000 in ?? ()
#1  0x0000000000ae6242 in my_printf_warning (format=0x1026f08 "Disk is full writing '%s' (Errcode: %d - %s). Waiting for someone to free space...")
    at /root/mysql-5.6.24/mysys/my_error.c:260
#2  0x0000000000ac9052 in wait_for_free_space (filename=0x1fe7ee0 "/opt/mysql/datadir/mysql-bin.~rec~", errors=0) at /root/mysql-5.6.24/mysys/errors.c:115
#3  0x0000000000af1ac7 in my_write (Filedes=19, Buffer=0x1964d00 "/opt/mysql/datadir/mysql-bin.000003\n", Count=36, MyFlags=52)
    at /root/mysql-5.6.24/mysys/my_write.c:89
#4  0x0000000000acd5ae in inline_mysql_file_write (src_file=0x1027708 "/root/mysql-5.6.24/mysys/mf_iocache.c", src_line=1788, file=19, 
    buffer=0x1964d00 "/opt/mysql/datadir/mysql-bin.000003\n", count=36, flags=52) at /root/mysql-5.6.24/include/mysql/psi/mysql_file.h:1141
#5  0x0000000000ad078c in my_b_flush_io_cache (info=0x183c1a8, need_append_buffer_lock=0) at /root/mysql-5.6.24/mysys/mf_iocache.c:1787
#6  0x0000000000a7132b in MYSQL_BIN_LOG::sync_purge_index_file (this=0x183b400) at /root/mysql-5.6.24/sql/binlog.cc:4420
#7  0x0000000000a6e206 in MYSQL_BIN_LOG::open_binlog (this=0x183b400, log_name=0x190d130 "/opt/mysql/datadir/mysql-bin", new_name=0x0, 
    io_cache_type_arg=WRITE_CACHE, max_size_arg=1073741824, null_created_arg=false, need_lock_index=true, need_sid_lock=true, extra_description_event=0x0)
    at /root/mysql-5.6.24/sql/binlog.cc:3146
#8  0x000000000063ad8e in init_server_components () at /root/mysql-5.6.24/sql/mysqld.cc:5012
#9  0x000000000063b6e7 in mysqld_main (argc=19, argv=0x186de68) at /root/mysql-5.6.24/sql/mysqld.cc:5455
#10 0x000000000062fc74 in main (argc=10, argv=0x7fffffffe3d8) at /root/mysql-5.6.24/sql/main.cc:25

The problem is around line 260 in mysys/my_error.c:

void(*sql_print_warning_hook)(const char *format,...);
void my_printf_warning(const char *format, ...)
{
  va_list args;
  char wbuff[ERRMSGSIZE];
  DBUG_ENTER("my_printf_warning");
  DBUG_PRINT("my", ("Format: %s", format));
  va_start(args,format);
  (void) my_vsnprintf (wbuff, sizeof(wbuff), format, args);
  va_end(args);
  (*sql_print_warning_hook)(wbuff);
  DBUG_VOID_RETURN;
}

It segfaults exactly on (*sql_print_warning_hook)(wbuff), which makes it clear that sql_print_warning_hook is not initialized at that point.

As suggested by Sergei Glushchenko, the solution is either to check sql_print_warning_hook before calling it inside my_error.c, or to initialize sql_print_warning_hook earlier inside sql/mysqld.cc. The best is to do both.

I decided to initialize sql_print_warning_hook inside init_server_components().
So I removed this line from mysqld_main():

+  /*
    The subsequent calls may take a long time : e.g. innodb log read.
    Thus set the long running service control manager timeout
   */
@@ -5596,7 +5598,7 @@ int mysqld_main(int argc, char **argv)
   */
   error_handler_hook= my_message_sql;
   start_signal_handler();       // Creates pidfile
-  sql_print_warning_hook = sql_print_warning;
+

and added it to init_server_components():


@@ -4663,6 +4663,8 @@ static int init_server_components()
     We need to call each of these following functions to ensure that
     all things are initialized so that unireg_abort() doesn't fail
   */
+  sql_print_warning_hook = sql_print_warning;
+
   mdl_init();
   if (table_def_init() | hostname_cache_init(host_cache_size))
     unireg_abort(1);
@@ -5486,7 +5488,7 @@ int mysqld_main(int argc, char **argv)
 #endif
   }

The second thing is to add a simple check inside mysys/my_error.c:

diff --git a/mysys/my_error.c b/mysys/my_error.c
index b1f57ba..4441228 100644
--- a/mysys/my_error.c
+++ b/mysys/my_error.c
@@ -257,7 +257,14 @@ void my_printf_warning(const char *format, ...)
   va_start(args,format);
   (void) my_vsnprintf (wbuff, sizeof(wbuff), format, args);
   va_end(args);
-  (*sql_print_warning_hook)(wbuff);
+  if(sql_print_warning_hook)
+  {
+    (*sql_print_warning_hook)(wbuff);
+  }
+  else
+  {
+    fprintf(stderr, "%s\n", wbuff);
+  }
   DBUG_VOID_RETURN;
 }

After recompiling and retesting, there is no SEGFAULT anymore.
So it can be considered a fix, except that my IDE added/removed some whitespace in the source code 🙂

The PR link -> PR #109

Playing with MySQL Source code; Adding “STOP ALL SLAVES” command

This blog post is the result of reading a book.
To be clear -> “Expert MySQL” by Charles Bell -> http://www.apress.com/gp/book/9781430246596
This book explains the internals of MySQL based on the 5.6.x versions, so it is relatively recent and a really good resource to use.

So the task is:
Chapter 8, “Extending MySQL High Availability” -> page 310, “Extending Replication”,
where we add a new command -> “SQLCOM_STOP_SLAVES”, executed on the master and causing IO_THREAD to stop on all slaves.
The basic idea is to run “STOP ALL SLAVES” on the master; the statement will appear in the binary log and will then be applied on the slaves.

Great 🙂
MySQL version is 5.6.32.

So, as the book advises, go to sql/lex.h. “We are adding the new symbol for the command. This file contains the symbol array stored in alphabetical order.”

Added:

{ "SLAVE",            SYM(SLAVE)},
  /* BEGIN CAB MODIFICATION */
  /* Reason for Modification */
  /* Add SLAVES command */
  { "SLAVES",           SYM(SLAVES)},
  /* END CAB MODIFICATION*/
  { "SLOW",             SYM(SLOW)},

Then go to sql/sql_cmd.h to add a new enumeration for the big switch. Locate the enum enum_sql_command definition near the top of the file.
Added:

SQLCOM_COMMIT, SQLCOM_SAVEPOINT, SQLCOM_RELEASE_SAVEPOINT,
  /* BEGIN CAB MODIFICATION */
  /* Reason for Modification */
  /*Add SQLCOM_STOP_SLAVES enum */
  SQLCOM_SLAVE_START, SQLCOM_SLAVE_STOP, SQLCOM_STOP_SLAVES,
  /*END CAB MODIFICATION*/
  SQLCOM_BEGIN, SQLCOM_CHANGE_MASTER,

“Next we need to add a new token to be used in the new rule. Once again, the list of tokens is arranged in alphabetical order. Open the sql/sql_yacc.yy and locate the section where new tokens are defined. In this case, we need to add a definition for a token for new command. We will name it SLAVES.”
Added near line 1539/1540:

%token  SLAVE
/* BEGIN CAB MODIFICATION */
/* Reason for Modification */
/*Add SLAVES token */
%token SLAVES
/*END CAB MODIFICATION*/
%token  SLOW

“Next modify the section where the “%type” definition resides. We need to add the new token to this definition.”
Added near line 1855/1856:

slave master_def master_defs master_file_def slave_until_opts
        /* BEGIN CAB MODIFICATION */
        /* Reason for Modification */
        /* Add stop to list of NONE types */
        repair analyze check start stop checksum
        /*END CAB MODIFICATION*/
        field_list field_list_item field_spec kill column_def key_def

“Next, we add a new command definition to the list of commands so that the parser can direct control to the new rule. Notice we add a new ‘or’ condition mapping to a new rule to be evaluated.”
Added near line 2079/2080:

       | start
  /* BEGIN CAB MODIFICATION */
  /* Reason for Modification */
  /* Add stop to list of statement targets */
        | stop
  /*END CAB MODIFICATION*/
        | truncate

“Last, we will add the new rule to process the STOP ALL SLAVES command. The rule simply saves the new enumeration to the lex->sql_command attribute. This is how the code maps the result of the rule (and the processing of the command) to the big switch, to a case equal to the enumeration value.”

Added near line 8114/8115:

/* BEGIN CAB MODIFICATION */
/* Reason for Modification */
/* Add rule for stop ALL SLAVES command*/
stop:
          STOP_SYM ALL SLAVES
          {
            LEX *lex= Lex;
            lex->sql_command= SQLCOM_STOP_SLAVES;
          }
        ;
/*END CAB MODIFICATION*/

“With changes to the YACC file complete, we can add a new case for the big switch to ensure the command, once captured by the parser, is directed to code to write the event to the binary log. Normally, the STOP SLAVE command is not replicated. Our code would also override this restriction. Let us add that case statement. Open the sql/sql_parse.cc.”

Added near line 3159/3160:

    mysql_mutex_unlock(&LOCK_active_mi);
    break;
  }

/* BEGIN CAB MODIFICATION */
/* Reason for Modification */
/* Add case statement for STOP ALL SLAVES command */
  case SQLCOM_STOP_SLAVES:
  {
    if (!lex->no_write_to_binlog)
    {
        res = write_bin_log(thd, TRUE, "STOP SLAVE IO_THREAD", 20);
    }

    break;
  }
/*END CAB MODIFICATION*/


#endif /* HAVE_REPLICATION */

  case SQLCOM_RENAME_TABLE:

“The first statement is designed to check to see if the server is able to write to the binary log. If so, we add a new log event, passing it the STOP SLAVE SQL command. Notice that we use a specific version of the STOP SLAVE command. In this case, we are stopping only IO_THREAD.”

I assume that you are familiar with compiling MySQL from source code. If not then read -> installing-mysql-from-source-cmake-issues

Go ahead and run “make” again. It will likely fail with:

Scanning dependencies of target sql
[ 54%] Building CXX object sql/CMakeFiles/sql.dir/sql_parse.cc.o
[ 54%] Building CXX object sql/CMakeFiles/sql.dir/mysqld.cc.o
In file included from /home/sh/MySQLPakcages/mysql-5.6.32/include/my_compiler.h:151:0,
                 from /home/sh/MySQLPakcages/mysql-5.6.32/include/my_global.h:473,
                 from /home/sh/MySQLPakcages/mysql-5.6.32/sql/mysqld.cc:17:
/home/sh/MySQLPakcages/mysql-5.6.32/sql/mysqld.cc: In function ‘int init_common_variables()’:
/home/sh/MySQLPakcages/mysql-5.6.32/include/my_attribute.h:38:42: error: size of array ‘compile_time_assert’ is negative
 #  define MY_ATTRIBUTE(A) __attribute__(A)
                                          ^
/home/sh/MySQLPakcages/mysql-5.6.32/include/my_global.h:377:52: note: in expansion of macro ‘MY_ATTRIBUTE’
     typedef char compile_time_assert[(X) ? 1 : -1] MY_ATTRIBUTE((unused)); \
                                                    ^
/home/sh/MySQLPakcages/mysql-5.6.32/sql/mysqld.cc:3871:3: note: in expansion of macro ‘compile_time_assert’
   compile_time_assert(sizeof(com_status_vars)/sizeof(com_status_vars[0]) - 1 ==
   ^
sql/CMakeFiles/sql.dir/build.make:3581: recipe for target 'sql/CMakeFiles/sql.dir/mysqld.cc.o' failed
make[2]: *** [sql/CMakeFiles/sql.dir/mysqld.cc.o] Error 1
CMakeFiles/Makefile2:5192: recipe for target 'sql/CMakeFiles/sql.dir/all' failed
make[1]: *** [sql/CMakeFiles/sql.dir/all] Error 2
Makefile:160: recipe for target 'all' failed
make: *** [all] Error 2

As a beginner in the source code world, I suddenly found a BUG in the book 🙂
After reading the comments, I found the following inside sql/sql_cmd.h:

/*
  When a command is added here, be sure it's also added in mysqld.cc
  in "struct show_var_st status_vars[]= {" ...
*/

So the new command should also be added inside sql/mysqld.cc. But there is no such struct show_var_st status_vars[] there. It took me some time to figure out that, instead of status_vars[], the command should be added to SHOW_VAR com_status_vars[].
See the related report -> #83613

So I edited the mysqld.cc file near line 3571/3572:

{"slave_stop",           (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_SLAVE_STOP]), SHOW_LONG_STATUS},
        /* BEGIN CAB MODIFICATION */
        /* Reason for Modification */
  {"stop_slaves",           (char*) offsetof(STATUS_VAR, com_stat[(uint) SQLCOM_STOP_SLAVES]), SHOW_LONG_STATUS},
        /*END CAB MODIFICATION*/

Save and recompile; this time it will succeed. Next is to find an easy way to set up replication and test the newly added command. In the book the author used tools from “MySQL Utilities”, but I used “MySQL Sandbox”:

make_sandbox_from_source /home/sh/MySQLPakcages/mysql-5.6.32 replication

That’s it, 1 master and 2 slaves are ready to test.

Unluckily for me, the “stop all slaves” command failed with the following assertion on the master:

Version: '5.6.32-debug-log'  socket: '/tmp/mysql_sandbox23097.sock'  port: 23097  Shahriyar Rzayev's Ubuntu 16.04 MySQL-5.6.32
mysqld: /home/sh/MySQLPakcages/mysql-5.6.32/sql/protocol.cc:515: void Protocol::end_statement(): Assertion `0' failed.
14:49:07 UTC - mysqld got signal 6 ;

Ideally my statement should return something like “Query OK”. After asking Weixiang Zhai, the issue was fixed by adding my_ok(thd).

So the final code for sql/sql_parse.cc:

    mysql_mutex_unlock(&LOCK_active_mi);
    break;
  }

/* BEGIN CAB MODIFICATION */
/* Reason for Modification */
/* Add case statement for STOP ALL SLAVES command */
  case SQLCOM_STOP_SLAVES:
  {
    if (!lex->no_write_to_binlog)
    {
        res = write_bin_log(thd, TRUE, "STOP SLAVE IO_THREAD", 20);
    }

    if(!res)
    {
        my_ok(thd);
    }

    break;
  }
/*END CAB MODIFICATION*/


#endif /* HAVE_REPLICATION */

  case SQLCOM_RENAME_TABLE:

Recompiled and reinstalled with the sandbox; here is the result of running the new command on the master:

master [localhost] {msandbox} ((none)) > stop all slaves; 
Query OK, 0 rows affected (0.00 sec)

Result on slaves:

Slave_IO_Running: No 
Slave_SQL_Running: Yes

So it works like a charm 🙂

Getting started with MySQL Group Replication on Ubuntu with MySQL Sandbox

Welcome to the next topic dedicated to the Group Replication plugin: getting started with Group Replication. As you remember, we set up our environment using MySQL Sandbox on Ubuntu 14.04 and also compiled the plugin with MySQL 5.7.8-rc on Ubuntu 14.04. Please refer to these topics respectively:

  1. MySQL Sandbox creating test environment
  2. Compiling MySQL Group replication plugin

So we already have our 3 nodes of MySQL 5.7.8-rc + the group replication plugin.
Before starting group replication, we need to play with Corosync. Here is an article dedicated to this -> Corosync Guide

I want to talk about some issues you may see, and help to resolve them.
The first one appears when you try to start group replication without starting the corosync daemon, or while corosync is stopped:

mysql> START GROUP_REPLICATION;
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.

The error suggests that you will find more details in the error log, but you won’t 🙂
So just be sure to start the corosync daemon before getting your hands on group replication.

The second thing is to save the correct user in the /etc/corosync/uidgid.d directory.
For example, if you start the MySQL service as the ‘john’ Linux user, you must put this user into a file there. For me, it is the ‘sh’ local user on my Ubuntu:

sh@shrzayev:~$ cat /etc/corosync/uidgid.d/users 
uidgid {
 uid: sh
 gid: sh
}

If you save the wrong user, you will likely see:

node1 [localhost] {msandbox} ((none)) > start group_replication;
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.

But again, there will be nothing in the error log indicating the exact issue.
Just be sure to save the correct user for corosync.

These tips came from Pedro Gomes, a replication developer at Oracle.
For further discussion, please refer to the related report -> #78619

Let’s assume you have successfully completed the Corosync setup.
Now it is time to configure the MySQL server for the group replication plugin.
For all 3 nodes, the my.cnf file should include the following settings:

server-id                      = UNIQUE_NUMBER_HERE
binlog-format                  = row
enforce-gtid-consistency       = true
gtid-mode                      = on
log_slave_updates              = 1
master-info-repository         = TABLE
relay-log-info-repository      = TABLE
binlog-checksum                = NONE
transaction-write-set-extraction=MURMUR32

With MySQL Sandbox, unique server-ids and some other useful system variables are already set in the my.sandbox.cnf file; just add the missing parameters to the config file. Start all 3 nodes for the first time:

sh@shrzayev:~/sandboxes/multi_msb_5_7_8/node[1,2,3]$ ./start
.. sandbox server started

Just make sure that all servers started successfully:

sh@shrzayev:~/sandboxes/multi_msb_5_7_8/node[1,2,3]$ ./status
node1 on

Now stop all 3 nodes and start them again, passing --plugin-load=group_replication.so to the ./start script:

sh@shrzayev:~/sandboxes/multi_msb_5_7_8/node[1,2,3]$ ./start --plugin-load=group_replication.so
... sandbox server started

You can check the plugin status:

node[1,2,3] [localhost] {msandbox} ((none)) > select plugin_name, plugin_status from information_schema.plugins where plugin_name like 'group_%';
+-------------------+---------------+
| plugin_name       | plugin_status |
+-------------------+---------------+
| group_replication | ACTIVE        |
+-------------------+---------------+
1 row in set (0,01 sec)

You have enabled the group_replication plugin.
We need to create the replication user; do this only on node1:

CREATE USER 'rpl_user'@'%' IDENTIFIED BY 'rpl_pass';
GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';
GRANT SELECT on performance_schema.global_variables TO rpl_user@'%';

We also need to grant SELECT on the performance_schema.global_variables table, otherwise you will get an error:

2015-10-08T07:32:26.000461Z 8 [ERROR] Slave I/O for channel 'group_replication_recovery': The slave I/O thread stops because a fatal error is encountered when it try to get the value of SERVER_ID variable from master. Error: SELECT command denied to user 'rpl_user'@'localhost' for table 'global_variables', Error_code: 1142
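A quick way to confirm the grant from a script is to run the same kind of query the recovery I/O thread issues; a minimal sketch with the classic connector (the sandbox port and the rpl_user credentials are assumptions from my setup):

# Connect as the replication user and read a variable from
# performance_schema.global_variables; error 1142 here means the
# SELECT grant is still missing.
import mysql.connector

cnx = mysql.connector.connect(user='rpl_user', password='rpl_pass',
                              host='127.0.0.1', port=8379)
cursor = cnx.cursor()
cursor.execute("SELECT VARIABLE_VALUE FROM performance_schema.global_variables "
               "WHERE VARIABLE_NAME = 'server_id'")
print(cursor.fetchone())
cursor.close()
cnx.close()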

Now there are some global variables to change; do this on node1:

SET GLOBAL group_replication_group_name= "8a94f357-aab4-11df-86ab-c80aa9429562";
SET GLOBAL group_replication_recovery_user='rpl_user';
SET GLOBAL group_replication_recovery_password='rpl_pass';
SET GLOBAL group_replication_recovery_retry_count= 2;
SET GLOBAL group_replication_recovery_reconnect_interval= 120;

After all that, we can start group replication:

node1 [localhost] {msandbox} ((none)) > start group_replication;
Query OK, 0 rows affected (0,16 sec)

In the MySQL error log, you will see new entries:

2015-10-08T05:55:22.419952Z 4 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 3307, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 3307, master_log_file='', master_log_pos= 4, master_bind=''.
2015-10-08T05:55:22.578212Z 3 [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
2015-10-08T05:55:22.578283Z 5 [Note] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position 0, relay log './mysql-relay-group_replication_applier.000001' position: 4
2015-10-08T05:55:22.578306Z 3 [Note] Plugin group_replication reported: 'auto_increment_increment is set to 7'
2015-10-08T05:55:22.578359Z 3 [Note] Plugin group_replication reported: 'auto_increment_offset is set to 101'
2015-10-08T05:55:22.579736Z 0 [Note] Plugin group_replication reported: 'Starting group replication recovery with view_id 571734024:1'
2015-10-08T05:55:22.712797Z 6 [Note] Plugin group_replication reported: 'Only one server alive. Declaring this server as online within the replication group'
2015-10-08T05:55:22.713721Z 0 [Note] Plugin group_replication reported: 'This server was declared online within the replication group'

You can even notice the new connection in the corosync log file:

Oct 08 10:55:22 [1257] shrzayev corosync debug   [QB    ] IPC credentials authenticated (1293-7202-19)
Oct 08 10:55:22 [1257] shrzayev corosync debug   [QB    ] connecting to client [7202]
Oct 08 10:55:22 [1257] shrzayev corosync debug   [QB    ] shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 08 10:55:22 [1257] shrzayev corosync debug   [QB    ] shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 08 10:55:22 [1257] shrzayev corosync debug   [QB    ] shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 08 10:55:22 [1257] shrzayev corosync debug   [MAIN  ] connection created
Oct 08 10:55:22 [1257] shrzayev corosync debug   [CPG   ] lib_init_fn: conn=0x7fe2da6a4e60, cpd=0x7fe2da6a5454
Oct 08 10:55:22 [1257] shrzayev corosync debug   [CPG   ] got procjoin message from cluster node 2130706433 (r(0) ip(127.0.0.1) ) for pid 7202

Now you can check the group replication connection status:

node1 [localhost] {msandbox} ((none)) > SELECT * FROM performance_schema.replication_connection_status\G
*************************** 1. row ***************************
             CHANNEL_NAME: group_replication_applier
               GROUP_NAME: 8a94f357-aab4-11df-86ab-c80aa9429562
              SOURCE_UUID: 8a94f357-aab4-11df-86ab-c80aa9429562
                THREAD_ID: NULL
            SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
 LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
 RECEIVED_TRANSACTION_SET: 8a94f357-aab4-11df-86ab-c80aa9429562:1-2
        LAST_ERROR_NUMBER: 0
       LAST_ERROR_MESSAGE: 
     LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
1 row in set (0,01 sec)

So there are no errors here; it is working.
You can also check the group members:

node1 [localhost] {msandbox} ((none)) > SELECT * FROM performance_schema.replication_group_members\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 00008379-1111-1111-1111-111111111111
 MEMBER_HOST: shrzayev
 MEMBER_PORT: 8379
MEMBER_STATE: ONLINE
1 row in set (0,01 sec)

At this moment we have only 1 member -> node1.
Let’s add node2 too. The procedure is the same: set the global variables and start group replication:

SET GLOBAL group_replication_group_name= "8a94f357-aab4-11df-86ab-c80aa9429562";
SET GLOBAL group_replication_recovery_user='rpl_user';
SET GLOBAL group_replication_recovery_password='rpl_pass';
SET GLOBAL group_replication_recovery_retry_count= 2;
SET GLOBAL group_replication_recovery_reconnect_interval= 120;

And start group replication:

node2 [localhost] {msandbox} ((none)) > start group_replication;
Query OK, 0 rows affected (0,16 sec)

The same procedure for node3, and we will have 3 nodes in our group:

node1 [localhost] {msandbox} ((none)) > select * from performance_schema.replication_group_members\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 00008379-1111-1111-1111-111111111111
 MEMBER_HOST: shrzayev
 MEMBER_PORT: 8379
MEMBER_STATE: ONLINE
*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 00008380-2222-2222-2222-222222222222
 MEMBER_HOST: shrzayev
 MEMBER_PORT: 8380
MEMBER_STATE: ONLINE
*************************** 3. row ***************************
CHANNEL_NAME: group_replication_applier
   MEMBER_ID: 00008381-3333-3333-3333-333333333333
 MEMBER_HOST: shrzayev
 MEMBER_PORT: 8381
MEMBER_STATE: ONLINE
3 rows in set (0,00 sec)
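If you want to poll membership from a script rather than the mysql client, the same query can be run with the classic connector; a small sketch (the sandbox port and the msandbox credentials are the defaults from my setup, adjust for yours):

# Print host, port and state for every member of the group
import mysql.connector

cnx = mysql.connector.connect(user='msandbox', password='msandbox',
                              host='127.0.0.1', port=8379)
cursor = cnx.cursor()
cursor.execute("SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE "
               "FROM performance_schema.replication_group_members")
for host, port, state in cursor:
    print('{0}:{1} -> {2}'.format(host, port, state))
cursor.close()
cnx.close()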

That’s all. This is how we can use MySQL Sandbox on Ubuntu 14.04 to create an environment and get started with the Group Replication plugin.
From now on, you should be able to get started with Group Replication yourself.
If you have your own test cases, feel free to share or to ask.

MySQL Sandbox, creating test environment for Group Replication plugin

Today’s topic is about how to test the Group Replication plugin in an easy way, using MySQL Sandbox.
But first we must compile MySQL with the Group Replication plugin; refer to the previous topic -> Compiling Group Replication Plugin

If you have not used MySQL Sandbox before, please refer to the official link and read a bit more. It is a wonderful tool for installing and testing new things with MySQL. Especially if you have a new MySQL release to test and want to install 3 or 5 instances, just use MySQL Sandbox and it will do the work for you.

In our case, we have already compiled MySQL with our plugin, so we have the source folder in -> /home/sh/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN

MySQL Sandbox ships with a wonderful tool named make_sandbox_from_source.
Let’s see the options available with this tool:

sh@shrzayev:~$ make_sandbox_from_source --help
    The MySQL Sandbox,  version 3.1.00
    (C) 2006-2015 Giuseppe Maxia

*** sandbox_type missing
Syntax: /usr/local/bin/make_sandbox_from_source source_directory sandbox_type [options]

source directory is where you have successfully
    run ./configure && make

sandbox_type is one of 
    single
    replication
    circular
    multiple

options is anything that is needed by the sandbox
    application of your choice

We want to test the Group Replication plugin, so we need the multiple option; it installs 3 nodes of the same MySQL version by default. So after running it we will have 3 nodes of Group Replication-enabled MySQL on our hands.
We just need to specify the source folder as follows:

sh@shrzayev:~$ make_sandbox_from_source /home/sh/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN multiple
can't find executable script 'make_binary_distribution'

Trying again with the BUILD folder appended:

sh@shrzayev:~$ make_sandbox_from_source /home/sh/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN/BUILD multiple
can't find executable script 'make_binary_distribution'

No success.
If you go on and list the BUILD folder:

sh@shrzayev:~/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN/BUILD$ ls
mysql-server  mysql-server.build  mysql-server.inst

There is a mysql-server.build folder; let’s point the tool at it and run again:

sh@shrzayev:~$ make_sandbox_from_source /home/sh/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN/BUILD/mysql-server.build/ multiple
can't find a version in Makefile

The error has changed -> can’t find a version in Makefile
After searching, I found a bug report (LP BUG 853764) related to this issue, along with a workaround: we should add the MYSQL_NO_DASH_VERSION option to the Makefile:

sh@shrzayev:~/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN/BUILD$ cd mysql-server.build/
sh@shrzayev:~/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN/BUILD/mysql-server.build$ vim Makefile

Put MYSQL_NO_DASH_VERSION=5.7.8 (in my case) anywhere in the Makefile, save and exit. Then rerun:

sh@shrzayev:~$ make_sandbox_from_source /home/sh/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN/BUILD/mysql-server.build/ multiple
no old tarball found
CPack: Create package using TGZ
CPack: Install projects
CPack: - Run preinstall target for: MySQL
CPack: - Install project: MySQL
CPack: Create package
CPack: - package: /home/sh/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN/BUILD/mysql-server.build/mysql-5.7.8-rc-linux-x86_64.tar.gz generated.
installing node 1
installing node 2
installing node 3
group directory installed in $HOME/sandboxes/multi_msb_mysql-5_7_8

That’s it: we have 3 nodes and we are ready to test the plugin.
Check node1, for example:

node1 [localhost] {msandbox} ((none)) > select @@plugin_dir;
+-----------------------------------------------------------------------------------------------------+
| @@plugin_dir                                                                                        |
+-----------------------------------------------------------------------------------------------------+
| /home/sh/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN/BUILD/mysql-server.build/5.7.8/lib/plugin/ |
+-----------------------------------------------------------------------------------------------------+
1 row in set (0,00 sec)

sh@shrzayev:~$ ls /home/sh/Sandboxes/mysql-group-replication-0.5.0-dmr/BIN/BUILD/mysql-server.build/5.7.8/lib/plugin/ | grep group_
group_replication_license
group_replication.so

We have our plugin in the plugin directory.
In subsequent topics I will talk about activating and using the group replication plugin. Again, we are using Ubuntu 14.04 with MySQL 5.7.8-rc2 and mysql-group-replication-0.5.0-dmr.