Altering tablespace of table – new in MySQL 5.7

Dear community, as you may have already tested, there are many new features in MySQL 5.7. See the previous post -> About 5.7.7-rc BUGs

“General Tablespace” feature -> From now on you can place tables into separate tablespaces. Previously, we had only the “System TableSpace” and “InnoDB file per table” options. Today’s topic is altering a table to use such a “General Tablespace”.

Our sample table structure:

mysql> show create table t1;

    | Table | Create Table 
    | t1    | CREATE TABLE `t1` (
      `id` int(11) DEFAULT NULL,
      `c1` varchar(85) DEFAULT NULL
    ) /*!50100 TABLESPACE innodb_file_per_table */ ENGINE=InnoDB DEFAULT CHARSET=utf8 

Note the comment about the tablespace of our table -> /*!50100 TABLESPACE innodb_file_per_table */. So our table has its very own .ibd file, as usual. Let’s create a separate tablespace:

mysql> CREATE TABLESPACE `ts1` ADD DATAFILE 'ts1.ibd' Engine=InnoDB;
Query OK, 0 rows affected (0.03 sec)

If you do not specify a path for the ‘ts1.ibd’ file, it will be created in the “datadir” by default. Let’s note the initial size of this file:

[root@centos7_vm datadir]# ls -ltr | grep ts1.ibd
-rw-r-----. 1 root  root     65536 Jul 23 16:12 ts1.ibd
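By the way, the datafile does not have to live in the datadir; an absolute path can also be given (a sketch with a hypothetical directory; 5.7 places restrictions on where general tablespace datafiles may be created, so check the manual):

mysql> CREATE TABLESPACE `ts_ext` ADD DATAFILE '/data/ts/ts_ext.ibd' Engine=InnoDB;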

Now it is time to alter the table:

mysql> alter table t1 tablespace ts1;
Query OK, 0 rows affected (1 min 5.41 sec)
Records: 0  Duplicates: 0  Warnings: 0

The CREATE statement of the table is affected too:

    mysql> show create table t1;
    | Table | Create Table                                                  
    | t1    | CREATE TABLE `t1` (
      `id` int(11) DEFAULT NULL,
      `c1` varchar(85) DEFAULT NULL
    ) /*!50100 TABLESPACE ts1 */ ENGINE=InnoDB DEFAULT CHARSET=utf8 |

NOTE: /*!50100 TABLESPACE ts1 */

Note the size of the ‘ts1.ibd’ file again:

[root@centos7_vm datadir]# ls -ltr | grep ts1.ibd
-rw-r-----. 1 root  root  155189248 Jul 23 16:16 ts1.ibd

155189248 -> ts1.ibd

Let’s try to alter the table to the same tablespace a second time:

mysql> alter table t1 tablespace ts1;
Query OK, 0 rows affected (1 min 5.05 sec)
Records: 0  Duplicates: 0  Warnings: 0

Pay attention to ts1.ibd file size:

[root@centos7_vm datadir]# ls -ltr | grep ts1.ibd
-rw-r-----. 1 root  root  301989888 Jul 23 16:18 ts1.ibd

301989888 -> ts1.ibd

The tablespace file has grown by the size of the same table twice…
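You can also track this growth from SQL rather than from the filesystem (a sketch; INNODB_SYS_TABLESPACES exists in 5.7, though the size columns appeared only in later 5.7 releases):

mysql> SELECT SPACE, NAME, FILE_SIZE, ALLOCATED_SIZE
    -> FROM INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES
    -> WHERE NAME = 'ts1';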

Also check out the related BUG report -> #77814

Testing MySQL 5.7.7-rc with memory and disc full conditions

MySQL 5.7 brings many improvements and changes. The most valuable ones are:

1. “General Tablespace” feature -> From now on you can place tables into separate tablespaces. Previously, we had only the “System TableSpace” and “InnoDB file per table” options.

For further reading, see General Tablespaces; for the statement syntax, see Create TableSpace.

2. Resizing the InnoDB Buffer Pool online -> prior to MySQL 5.7, changing the innodb_buffer_pool_size variable required a restart; from now on we can change the buffer pool size dynamically, without a restart. Read the documentation -> innodb-buffer-pool-online-resize
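For example, under normal conditions an online resize is a single statement (a minimal sketch; the server rounds the value up to a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances):

mysql> SET GLOBAL innodb_buffer_pool_size = 268435456; -- 256M
mysql> SELECT @@GLOBAL.innodb_buffer_pool_size;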

To break these new things, we need to create conditions where memory or disk is full. First, let’s try to resize the InnoDB buffer pool without sufficient memory.

Note: the MySQL installation is debug-enabled.

MySQL was started under GDB as:

(gdb) run --defaults-file=/opt/mysql/my.cnf --basedir=/opt/mysql --datadir=/opt/mysql/datadir --plugin-dir=/opt/mysql/lib/plugin --user=mysql --log-error=/opt/mysql/datadir/centos7-1_vm.err --pid-file=/opt/mysql/datadir/mysqld-new.pid --socket=/opt/mysql/datadir/mysqld-new.sock --port=3306

Then try to resize:

mysql> SET @@GLOBAL.innodb_buffer_pool_size=9556544565345346;
Query OK, 0 rows affected, 2 warnings (0.00 sec)

After a few attempts it aborts with a SIGABRT signal:

Program received signal SIGABRT, Aborted.
[Switching to Thread 0x7fffb77f6700 (LWP 17145)]
0x00007ffff61675d7 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
56    return INLINE_SYSCALL (tgkill, 3, pid, selftid, sig);

(gdb) bt
#0  0x00007ffff61675d7 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1  0x00007ffff6168cc8 in __GI_abort () at abort.c:90
#2  0x0000000001adbe0f in ut_dbg_assertion_failed (expr=0x0, file=0x21956d8 "/root/mysql-5.7.7-rc/storage/innobase/ut/ut0ut.cc", line=906)
at /root/mysql-5.7.7-rc/storage/innobase/ut/ut0dbg.cc:67
#3  0x0000000001ae123e in ib::fatal::~fatal (this=0x7fffb77f49e0, __in_chrg=<optimized out>) at /root/mysql-5.7.7-rc/storage/innobase/ut/ut0ut.cc:906
#4  0x00000000018f18e8 in ut_allocator<unsigned char>::allocate (this=0x7fffb77f4ba0, n_elements=14238670808, hint=0x0, 
    file=0x21cbb10 "/root/mysql-5.7.7-rc/storage/innobase/ha/hash0hash.cc", set_to_zero=false, throw_on_error=false)
    at /root/mysql-5.7.7-rc/storage/innobase/include/ut0new.h:328
#5  0x0000000001bed775 in hash0_create (n=1756528640) at /root/mysql-5.7.7-rc/storage/innobase/ha/hash0hash.cc:284
#6  0x000000000194769d in lock_sys_resize (n_cells=1756528640) at /root/mysql-5.7.7-rc/storage/innobase/lock/lock0lock.cc:478
#7  0x0000000001b3dce1 in buf_pool_resize () at /root/mysql-5.7.7-rc/storage/innobase/buf/buf0buf.cc:2669
#8  0x0000000001b3e0f5 in buf_resize_thread (arg=0x0) at /root/mysql-5.7.7-rc/storage/innobase/buf/buf0buf.cc:2749
#9  0x00007ffff7bc6df5 in start_thread (arg=0x7fffb77f6700) at pthread_create.c:308
#10 0x00007ffff62281ad in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

See the related verified BUG report -> #77564

How about creating a “General Tablespace” when there is no free disk space? Again, MySQL was started with:

[root@centos7_vm ~]# gdb /opt/mysql/bin/mysqld
(gdb) run --defaults-file=/opt/mysql/my.cnf --basedir=/opt/mysql --datadir=/opt/mysql/datadir --plugin-dir=/opt/mysql/lib/plugin --user=mysql --log-error=/opt/mysql/datadir/centos7_vm.err --pid-file=/opt/mysql/datadir/mysqld-new.pid --socket=/opt/mysql/datadir/mysqld-new.sock --port=3306

Try to run create tablespace:

mysql> CREATE TABLESPACE `ts2` ADD DATAFILE 'ts2.ibd' Engine=InnoDB;

In the error log there is a mention of the full disk:

2015-06-30T05:34:41.379790Z 2 [ERROR] InnoDB: Encountered a problem with file ./ts2.ibd
2015-06-30T05:34:41.379865Z 2 [ERROR] InnoDB: Disk is full. Try to clean the disk to free space.
2015-06-30T05:34:41.379910Z 2 [ERROR] InnoDB: Operating system error number 28 in a file operation.
2015-06-30T05:34:41.379937Z 2 [ERROR] InnoDB: Error number 28 means 'No space left on device'.
2015-06-30T05:34:41.379952Z 2 [Note] InnoDB: Some operating system error numbers are described at http://dev.mysql.com/doc/refman/5.7/en/operating-system-error-codes.html
2015-06-30T05:34:41.393247Z 2 [ERROR] InnoDB: Cannot create file './ts2.ibd'

From GDB we see a SIGSEGV signal:

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffddf7e700 (LWP 2787)]
__strnlen_sse2 () at ../sysdeps/x86_64/strnlen.S:34
34      pcmpeqb (%rdi), %xmm2

(gdb) bt
#0  __strnlen_sse2 () at ../sysdeps/x86_64/strnlen.S:34
#1  0x0000000001dc18f2 in process_str_arg (cs=0x2ba3ea0 <my_charset_utf8_general_ci>, to=0x7fffddf7bfdb "02", end=0x7fffddf7c1cf "", width=192, 
    par=0x1 <Address 0x1 out of bounds>, print_type=0) at /root/mysql-5.7.7-rc/strings/my_vsnprintf.c:194
#2  0x0000000001dc32e6 in my_vsnprintf_ex (cs=0x2ba3ea0 <my_charset_utf8_general_ci>, to=0x7fffddf7bfdb "02", n=512, fmt=0x2d32a12 "s' is full", 
    ap=0x7fffddf7c1d0) at /root/mysql-5.7.7-rc/strings/my_vsnprintf.c:607
#3  0x000000000189d0e9 in my_error (nr=135, MyFlags=0) at /root/mysql-5.7.7-rc/mysys/my_error.c:206
#4  0x0000000001622428 in mysql_alter_tablespace (thd=0x7fffcc000e20, ts_info=0x7fffcc022080) at /root/mysql-5.7.7-rc/sql/sql_tablespace.cc:191
#5  0x000000000158e104 in mysql_execute_command (thd=0x7fffcc000e20) at /root/mysql-5.7.7-rc/sql/sql_parse.cc:4432
#6  0x000000000158ff62 in mysql_parse (thd=0x7fffcc000e20, parser_state=0x7fffddf7d5a0) at /root/mysql-5.7.7-rc/sql/sql_parse.cc:5159
#7  0x0000000001585cf1 in dispatch_command (command=COM_QUERY, thd=0x7fffcc000e20, 
packet=0x7fffcc006f71 "CREATE TABLESPACE `ts2` ADD DATAFILE 'ts2.ibd' Engine=InnoDB", packet_length=60) at /root/mysql-5.7.7-rc/sql/sql_parse.cc:1249
#8  0x0000000001584b21 in do_command (thd=0x7fffcc000e20) at /root/mysql-5.7.7-rc/sql/sql_parse.cc:835
#9  0x00000000016adcff in handle_connection (arg=0x3c92060) at /root/mysql-5.7.7-rc/sql/conn_handler/connection_handler_per_thread.cc:298
#10 0x0000000001cd9905 in pfs_spawn_thread (arg=0x3b4dcd0) at /root/mysql-5.7.7-rc/storage/perfschema/pfs.cc:2147
#11 0x00007ffff7bc6df5 in start_thread (arg=0x7fffddf7e700) at pthread_create.c:308
#12 0x00007ffff62281ad in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Related BUG report -> #77556

If you have your own test cases, please share them.

VALIDATE PASSWORD PLUGIN with mysql_secure_installation in 5.7.7-rc

While testing the installation steps of MySQL 5.7.7-rc, you will surely find many more improvements, changes, deprecated functionality, and so on.

An interesting one is activating the VALIDATE PASSWORD PLUGIN via the mysql_secure_installation script, which we use to “secure” MySQL installations.

I will write a separate topic about MySQL 5.7.7-rc installation steps from source, with related BUG reports.

So after first run:

[root@centos7_vm mysql]# bin/mysql_secure_installation --socket=/opt/mysql/datadir/mysqld-new.sock
Securing the MySQL server deployment.
Connecting to MySQL server using password in '/root/.mysql_secret'

Notice that the script now tries to connect to MySQL using a temporary password, which is generated for root@localhost and logged into the hidden .mysql_secret file.
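You can peek at that file yourself; it simply contains the generated temporary password (output omitted):

[root@centos7_vm mysql]# cat /root/.mysql_secret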

VALIDATE PASSWORD PLUGIN can be used to test passwords
and improve security. It checks the strength of password
and allows the users to set only those passwords which are
secure enough. Would you like to setup VALIDATE PASSWORD plugin?

Press y|Y for Yes, any other key for No: Y

There are three levels of password validation policy:

LOW    Length >= 8
MEDIUM Length >= 8, numeric, mixed case, and special characters
STRONG Length >= 8, numeric, mixed case, special characters and dictionary file

Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: 1
Using existing root password.

Estimated strength of the password: 100 

From now on we have the option to activate our password policy on the fly after a fresh installation.
If you try to give a password such as 12345, it will fail the password policy check:

Change the root password? (Press y|Y for Yes, any other key for No) : Y

New password: 

Re-enter new password: 

Estimated strength of the password: 25 
Do you wish to continue with the password provided?(Press y|Y for Yes, any other key for No) : Y
 ... Failed! Error: Your password does not satisfy the current policy requirements

The failure reason: the estimated strength of the password is 25, where it should be 100.
After giving a proper password:

New password: 

Re-enter new password: 

Estimated strength of the password: 100 
Do you wish to continue with the password provided?(Press y|Y for Yes, any other key for No) : Y

The other parts of the output should be familiar:

By default, a MySQL installation has an anonymous user,
allowing anyone to log into MySQL without having to have
a user account created for them. This is intended only for
testing, and to make the installation go a bit smoother.
You should remove them before moving into a production
environment.

Remove anonymous users? (Press y|Y for Yes, any other key for No) : Y
Success.


Normally, root should only be allowed to connect from
'localhost'. This ensures that someone cannot guess at
the root password from the network.

Disallow root login remotely? (Press y|Y for Yes, any other key for No) : Y
Success.

By default, MySQL comes with a database named 'test' that
anyone can access. This is also intended only for testing,
and should be removed before moving into a production
environment.


Remove test database and access to it? (Press y|Y for Yes, any other key for No) : Y
 - Dropping test database...
Success.

 - Removing privileges on test database...
Success.

Reloading the privilege tables will ensure that all changes
made so far will take effect immediately.

Reload privilege tables now? (Press y|Y for Yes, any other key for No) : Y
Success.

All done! 
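Once the script finishes, you can inspect the active policy and even score candidate passwords from SQL (a small sketch; these are the standard validate_password plugin variables and function in 5.7):

mysql> SHOW VARIABLES LIKE 'validate_password%';
mysql> SELECT VALIDATE_PASSWORD_STRENGTH('12345');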

Comprehensive guide to installing PXC on CentOS 7

Recently I wanted to install Percona XtraDB Cluster + HAProxy + KeepAlived on CentOS 7, but could not find an all-in-one guide. So I decided to write down all the steps necessary for getting an HA solution up and running using Open Source projects.

Nowadays high availability is one of the main concerns of big and small companies alike. Minimal downtime per year and a sustainably working infrastructure are goals every company tries to achieve. There are different approaches to this mission. One of them, which large companies take, is to buy expensive software and support from vendors. Small companies cannot go down that road, and this is where the true power of Open Source comes up: you can build your own High Availability solution using free yet powerful Open Source projects. In this article we will show you how to achieve Database Clustering and Load Balancing. We will use:

  1. Percona XtraDB Cluster – a High Availability and Scalability solution for MySQL users. See: What is PXC?

  2. HAproxy – HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications. For further information see the HAproxy official site.

  3. KeepAlived – Keepalived is routing software written in C. The main goal of the project is to provide simple and robust facilities for load balancing and high availability on Linux systems and Linux-based infrastructures. For further information see the KeepAlived official site.

General Architecture

Before diving into the step-by-step guide, let’s see what we want to accomplish by the end of the tutorial: General Architecture Overview

As the diagram shows, there must be at least 3 CentOS 6.5/7 instances for the PXC Galera Cluster setup: in total, 3 CentOS 7 instances, all with static local IPs, and only 1 public IP for KeepAlived. Another advantage of this architecture is that it reduces public IP usage to 1.

Note: for this tutorial we will not use a public IP, as it is a test environment; all CentOS instances will use local IPs.

Prerequisites

We need 3 instances of CentOS 7 with minimal installation. Also, you should give static IPs to all servers. Here are sample IPs that you may give to your CentOS instances for testing this tutorial:

  1. Node1 (1 GB RAM, 1 CPU VM) – 192.168.1.99 – (HAproxy1)
  2. Node2 (1 GB RAM, 1 CPU VM) – 192.168.1.88 – (HAproxy2)
  3. Node3 (1 GB RAM, 1 CPU VM) – 192.168.1.77 – (HAproxy3)

Also keep in mind that we need one more IP for KeepAlived, which will act as the Virtual IP in front of the HAproxy instances.

Installing and Configuring PXC

Installing and Running PXC:

The easiest way to install is through the official yum repo. You should activate the Percona repo on all 3 nodes:

    -- CentOS 7
    yum install http://www.percona.com/downloads/percona-release/redhat/0.1-3/percona-release-0.1-3.noarch.rpm

With the repo activated on all 3 nodes, run:

    [root@node1, 2, 3 ~]# yum install Percona-XtraDB-Cluster-56

All needed packages and dependencies will be resolved automatically, and at this point you should be able to start the newly installed PXC instances:

    -- CentOS 7
    [root@node1, 2, 3 ~]# systemctl start mysql.service
    [root@node1, 2, 3 ~]# mysql_secure_installation

At this point, we have 3 running Percona Galera Cluster instances.

Installing several other needed packages:

We need several other packages from the epel repo, such as rsync for PXC SST (see the documentation). That’s why we must activate the epel repo on all 3 nodes:

    -- CentOS 7
    [root@node1, 2, 3 ~]# yum install socat rsync

Create a binary log folder outside MySQL’s datadir:

    [root@node1, 2, 3 ~]# mkdir /var/lib/mysql-binlog
    [root@node1, 2, 3 ~]# chown mysql:mysql /var/lib/mysql-binlog

    -- SElinux
    [root@centos7-node1 ~]# yum install policycoreutils-python
    [root@centos7-node1 ~]# semanage fcontext -a -t mysqld_db_t /var/lib/mysql-binlog
    [root@centos7-node1 ~]# restorecon -v /var/lib/mysql-binlog
        restorecon reset /var/lib/mysql-binlog context unconfined_u:object_r:var_lib_t:s0->unconfined_u:object_r:mysqld_db_t:s0

server.cnf configuration file for PXC nodes:

Now we must edit the server.cnf file on all 3 nodes (default location: ‘/etc/my.cnf.d/server.cnf’) to reflect our needs. The following content is the same on all 3 nodes:

    [mysqld]

    # General #
    datadir                       = /var/lib/mysql
    socket                        = /var/lib/mysql/mysql.sock
    default_storage_engine        = InnoDB
    lower_case_table_names        = 1


    # MyISAM #
    key_buffer_size                = 8M
    myisam_recover                 = FORCE,BACKUP


    # SAFETY #
    max_allowed_packet             = 16M
    max_connect_errors             = 1000000
    skip_name_resolve
    sysdate_is_now                 = 1
    innodb                         = FORCE
    thread_stack                   = 262144
    back_log                       = 2048
    performance_schema             = OFF
    skip_show_database

    # Binary Logging #
    log_bin                        = /var/lib/mysql-binlog/mysql-bin
    log_bin_index                  = /var/lib/mysql-binlog/mysql-bin
    expire_logs_days               = 14
    sync_binlog                    = 1
    binlog_format                  = row

    # CACHES AND LIMITS #
    tmp_table_size                 = 16M
    max_heap_table_size            = 16M
    query_cache_type               = 0
    query_cache_size               = 0
    query_cache_limit              = 0
    max_connections                = 10000
    thread_cache_size              = 500
    open-files-limit               = 65535
    table_definition_cache         = 7000
    table_open_cache               = 7000

    # InnoDB Related #

    innodb_log_files_in_group      = 2
    innodb_autoinc_lock_mode       = 2
    #innodb_locks_unsafe_for_binlog = 1
    innodb_log_file_size           = 100M
    innodb_file_per_table
    innodb_flush_log_at_trx_commit = 2
    innodb_buffer_pool_size        = 150M

And now we provide the per-node settings for the configuration file:

    **Node1**

    #
    # * Galera-related settings
    #
    [galera]

    wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
    wsrep_cluster_address="gcomm://192.168.1.99,192.168.1.88,192.168.1.77"
    wsrep_cluster_name='TestingCluster'
    wsrep_node_address='192.168.1.99'
    wsrep_node_name='node1'
    wsrep_sst_method=rsync
    wsrep_sst_auth=sstuser:12345
    bind-address=0.0.0.0


    **Node2**

    #
    # * Galera-related settings
    #
    [galera]

    wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
    wsrep_cluster_address="gcomm://192.168.1.99,192.168.1.88,192.168.1.77"
    wsrep_cluster_name='TestingCluster'
    wsrep_node_address='192.168.1.88'
    wsrep_node_name='node2'
    wsrep_sst_method=rsync
    wsrep_sst_auth=sstuser:12345
    bind-address=0.0.0.0


    **Node3**

    #
    # * Galera-related settings
    #
    [galera]

    wsrep_provider=/usr/lib64/galera3/libgalera_smm.so
    wsrep_cluster_address="gcomm://192.168.1.99,192.168.1.88,192.168.1.77"
    wsrep_cluster_name='TestingCluster'
    wsrep_node_address='192.168.1.77'
    wsrep_node_name='node3'
    wsrep_sst_method=rsync
    wsrep_sst_auth=sstuser:12345
    bind-address=0.0.0.0

Save the file on all 3 nodes and exit the editor.

After editing the configuration files, you must create the wsrep_sst_auth user on all 3 nodes; as specified in the server.cnf file, its name is ‘sstuser’:

    [root@node1, 2, 3 ~]# mysql -u root -p
    mysql> create user 'sstuser'@'%' identified by '12345';
    Query OK, 0 rows affected (0.01 sec)

    mysql> grant all on *.* to 'sstuser'@'%';
    Query OK, 0 rows affected (0.00 sec)
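Granting ALL privileges is the quick path. If you prefer least privilege, a commonly cited grant set for XtraBackup-based SST is roughly the following (an assumption on my part; with the rsync SST method used here the credentials are barely exercised, so verify against the Percona documentation for your version):

    mysql> GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'%';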

Set SELinux to permissive mode on all 3 nodes, as SELinux prevents Galera from starting:

    -- CentOS  7
    [root@node1, 2, 3 ~]# setenforce 0

‘/etc/hosts’ file contents for PXC Nodes:

**Node1**

    [root@node1 ~]# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    127.0.0.1   node1.localdomain node1
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

    192.168.1.99 localhost localhost.localdomain node1.localdomain node1
    192.168.1.88 node2
    192.168.1.77 node3




**Node2**

    [root@node2 ~]# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    127.0.0.1   node2.localdomain node2
    192.168.1.88 localhost localhost.localdomain node2.localdomain node2
    192.168.1.99 node1
    192.168.1.77 node3



**Node3**

    [root@node3 ~]# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    127.0.0.1   node3.localdomain node3
    192.168.1.77 localhost localhost.localdomain node3
    192.168.1.99 node1
    192.168.1.88 node2

Iptables settings related to PXC nodes:

-- CentOS 7

**Node1**

firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="4567" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="3306" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="4567" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="4444" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="3306" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="4444" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="4568" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="4568" protocol="tcp" accept' --permanent

firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="9200" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="9200" protocol="tcp" accept' --permanent

firewall-cmd --reload

**Node2**

firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="4567" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="3306" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="4567" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="4444" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="3306" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="4444" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="4568" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="4568" protocol="tcp" accept' --permanent

firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="9200" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.77" port port="9200" protocol="tcp" accept' --permanent

firewall-cmd --reload

**Node3**

firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="4567" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="3306" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="4567" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="4444" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="3306" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="4444" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="4568" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="4568" protocol="tcp" accept' --permanent

firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.88" port port="9200" protocol="tcp" accept' --permanent
firewall-cmd --add-rich-rule='rule family="ipv4" source address="192.168.1.99" port port="9200" protocol="tcp" accept' --permanent
firewall-cmd --reload

Bootstrapping (starting) the Cluster

As we have 3 nodes, one of them must be, so to speak, the “head” node: the first time, we must choose one of the nodes as the starting point. For us this is Node1. Stop MySQL on Node1 and run the following command:

    -- CentOS 7

    [root@node1 ~]# systemctl stop mysql.service
    [root@node1 ~]# systemctl start mysql@bootstrap.service
    Bootstrapping the clusterStarting MySQL.. SUCCESS!

    (There is no bootstrap command for CentOS 7 for MariaDB cluster - already reported as: [MDEV-7752](https://mariadb.atlassian.net/browse/MDEV-7752))

Then you must start the other 2 nodes as usual:

    -- CentOS 7

    [root@node2 ~]# systemctl start mysql.service
    [root@node3 ~]# systemctl start mysql.service

After the success messages, log in to PXC Node1 and run:

    PXC [(none)]> show status like 'wsrep%';

The important part of the output is:

    | wsrep_cluster_conf_id        | 5                                                     |
    | wsrep_cluster_size           | 3                                                     |
    | wsrep_connected              | ON                                                    |
    | wsrep_ready                  | ON                                                    |
    | wsrep_cluster_status         | Primary                                               |

As you can see, the cluster size is 3, which means all 3 nodes are connected and working. You can also see from the output that the cluster is ready to accept connections. Let’s create a database from Node3 and check that it appears on the other nodes:

    [root@node3 ~]# mysql -u root -p
    PXC [(none)]> create database pxc_test;
    Query OK, 1 row affected (0.10 sec)

    [root@node2 ~]# mysql -u root -p
    PXC [(none)]> show databases like 'pxc%';
    +-----------------+
     | Database (pxc%) |
    +-----------------+
    | pxc_test        |
    +-----------------+
    1 row in set (0.00 sec)

    [root@node1 ~]# mysql -u root -p
    PXC [(none)]> show databases like 'pxc%';
    +-----------------+
     | Database (pxc%) |
    +-----------------+
    | pxc_test        |
    +-----------------+
    1 row in set (0.00 sec)

Enabling SELinux:

Disabling SELinux permanently, or running in permissive mode all the time, is dangerous from a security standpoint. As we have set SELinux to permissive mode, it does not prevent any Galera actions; instead, it has logged all related information into the audit.log file. Using this file we can create rules and re-enable SELinux. On all 3 nodes:

    -- CentOS 7
    [root@node1, 2, 3 ~]# yum install policycoreutils-python
    [root@node1, 2, 3 ~]# grep mysql /var/log/audit/audit.log | audit2allow -M galera
    [root@node1, 2, 3 ~]# semodule -i galera.pp

    [root@node1, 2, 3 ~]# setenforce 1

Configuring Clustercheck script for cluster health check

This script is installed with PXC (/usr/bin/clustercheck). Clustercheck is a simple script to make a proxy (i.e., HAProxy) capable of monitoring Percona XtraDB Cluster nodes properly.

Before setting it up, install xinetd on all 3 nodes:

    -- CentOS 7

    [root@node1, 2, 3 ~]# yum install xinetd
    [root@node1, 2, 3 ~]# systemctl start xinetd.service
    [root@node1, 2, 3 ~]# systemctl enable xinetd.service

After installing xinetd and ensuring that it will start automatically on system reboot, we need to configure the mysqlchk file:

    [root@node1, 2, 3 ~]# nano /etc/xinetd.d/mysqlchk

    -- Add following lines to this file

    service mysqlchk
    {
        disable = no
        flags = REUSE
        socket_type = stream
        port = 9200
        wait = no
        user = nobody
        server = /usr/bin/clustercheck
        log_on_failure += USERID
        only_from = 0.0.0.0/0
        per_source = UNLIMITED
    }

Then edit the /etc/services file. Locate the wap-wsp entries in this file, comment them out as follows, and add a mysqlchk entry:

    [root@node1, 2, 3 ~]# nano /etc/services
    mysqlchk        9200/tcp                # SERVICE for Checking PXC Cluster      
    #wap-wsp         9200/tcp                # WAP connectionless session service
    #wap-wsp         9200/udp                # WAP connectionless session service

Now create the clustercheck database user on ALL nodes:

    create user 'clustercheckuser'@'localhost' identified by 'clustercheckpassword!';
    grant process on *.* to 'clustercheckuser'@'localhost';

Restart Xinetd on all Nodes:

    -- CentOS 7

    [root@node1, 2, 3 ~]# systemctl restart xinetd

And the final step for this section is to check the new script:

    [root@node1, 2, 3 ~]# clustercheck 
    HTTP/1.1 200 OK
    Content-Type: text/plain
    Connection: close
    Content-Length: 40

    Percona XtraDB Cluster Node is synced.

As you can see, everything is OK and it works.
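Since the check is exposed through xinetd on port 9200, you can also test it over the network from another node; curl simply issues the same HTTP request that HAProxy will send later (a small sketch, using the node IPs assigned above):

    [root@node2 ~]# curl -i http://192.168.1.99:9200
    -- expect HTTP/1.1 200 OK and "Percona XtraDB Cluster Node is synced."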

Installing and Configuring HAproxy

As we have mentioned, we do not have VMs dedicated to HAproxy. That’s why the HAproxy instances will run on the same machines as the PXC nodes.

Install HAproxy:

    [root@node1, 2, 3 ~]# yum install haproxy
    [root@node1, 2, 3 ~]# systemctl enable haproxy.service

Fix HAproxy logging: by default HAproxy does not log anything; we fix this by adding a haproxy.conf file to the /etc/rsyslog.d directory:

    [root@node1, 2, 3 ~]# cd /etc/rsyslog.d/
    -- Add following lines into file:
    [root@node1, 2, 3 rsyslog.d]# nano haproxy.conf
    $ModLoad imudp
    $UDPServerRun 514
    $template Haproxy,"%msg%n"
    local0.=info -/var/log/haproxy.log;Haproxy
    local0.notice -/var/log/haproxy-status.log;Haproxy
    local0.* ~

    -- SElinux related command:
    [root@node1, 2, 3 ~]# /sbin/restorecon -v -F /etc/rsyslog.d/haproxy.conf
    [root@node1, 2, 3 ~]# systemctl restart rsyslog.service

Edit haproxy.cfg config file:

    **HAproxy1, 2, 3**

    [root@node1, 2, 3 ~]# cd /etc/haproxy/
    [root@node1, 2, 3 ~]# mv haproxy.cfg haproxy.cfg.orig
    [root@node1, 2, 3 ~]# nano haproxy.cfg

    global
        log         127.0.0.1   local0
        log         127.0.0.1   local1 notice
        maxconn     4096
        user        haproxy
        group       haproxy
        nbproc      1
        pidfile     /var/run/haproxy.pid

    defaults
        log         global
        option      tcplog
        option      dontlognull
        retries     3
        maxconn     4096
        option      redispatch
        timeout     connect 50000ms
        timeout     client  50000ms
        timeout     server  50000ms

    listen PXC-writes
        bind 0.0.0.0:3311
        mode tcp
        #    option mysql-check user haproxy
        option httpchk 
        server node1 192.168.1.99:3306 check port 9200
        server node2 192.168.1.88:3306 check port 9200 backup
        server node3 192.168.1.77:3306 check port 9200 backup

    listen PXC-reads
        bind 0.0.0.0:3312
        mode tcp
        balance leastconn
    #    option mysql-check user haproxy
        option httpchk
        server node1 192.168.1.99:3306 check port 9200
        server node2 192.168.1.88:3306 check port 9200
        server node3 192.168.1.77:3306 check port 9200

    # HAProxy web ui
    listen stats 0.0.0.0:9000
        mode http
        stats enable
        stats uri /haproxy
        stats realm HAProxy Statistics
        stats auth haproxy:haproxy
        stats admin if TRUE


    -- SElinux related command:

    [root@node1 ~]# /sbin/restorecon -v -F /etc/haproxy/haproxy.cfg
    [root@node1 ~]# setsebool haproxy_connect_any on
    [root@node1 ~]# systemctl start haproxy.service

Let’s explore the config file a bit more. The port number 9000 is for the HAproxy monitoring WEB UI. Another thing to remember is that when using PXC Galera Cluster with the SST options mysqldump and rsync (the default), nodes lock each other out of updates while DDL and DML statements execute. To avoid such situations and not get stuck with deadlocks, we decided to separate write operations. In other words, write operations (e.g. insert, update, delete, etc.) will go only to Node1. So on the application side you should send write operations to port 3311, as we put in the haproxy.cfg file, and read operations to port 3312. There is an available non-locking SST option, XtraBackup (the famous hot online backup tool for MySQL), but it is the subject of another topic.

And of course we should check our work:

Check for listening ports:

    [root@node1 ~]# netstat -ntpl | grep haproxy
    tcp        0      0 0.0.0.0:9000                0.0.0.0:*                   LISTEN      11902/haproxy       
    tcp        0      0 0.0.0.0:3311                0.0.0.0:*                   LISTEN      11902/haproxy       
    tcp        0      0 0.0.0.0:3312                0.0.0.0:*                   LISTEN      11902/haproxy

    [root@node2 haproxy]# netstat -ntpl | grep haproxy
    tcp        0      0 0.0.0.0:9000                0.0.0.0:*                   LISTEN      12846/haproxy       
    tcp        0      0 0.0.0.0:3311                0.0.0.0:*                   LISTEN      12846/haproxy       
    tcp        0      0 0.0.0.0:3312                0.0.0.0:*                   LISTEN      12846/haproxy

As we have mentioned, port 9000 is for the HAproxy statistics WEB UI. So let’s connect to the IP of one of the HAproxy servers. The connection URL for me is http://192.168.1.88:9000/haproxy (the URI comes from the stats uri line in the config). On the very first connection there will be a pop-up screen asking for a login and password. This information is stored in the /etc/haproxy/haproxy.cfg file, in the line stats auth haproxy:haproxy. So the login is haproxy and the password is haproxy too.

You will see the statistics page with correctly working Cluster Nodes:

HAproxy Web Statistics Page

Now let’s check MySQL connectivity over HAproxy. For this purpose create a sample database user on the PXC cluster nodes. Our cluster is ready, so it is sufficient to run this command on one of the nodes and the others will pick it up automatically:

    create user 'all'@'%' identified by '12345';
    grant all on *.* to 'all'@'%';

Before doing so, open these ports:

    [root@node1, 2, 3 ~]# firewall-cmd --add-port=3311/tcp --permanent
    success
    [root@node1, 2, 3 ~]# firewall-cmd --add-port=3312/tcp --permanent
    success
    [root@node1, 2, 3 ~]# firewall-cmd --reload
    success

Our testing result should be something like:

    -- make connection through HAproxy2(node2) server to port 3311 (our write node)

    sh@sh--work:~$ mysql -u all -p --host=192.168.1.88 --port=3311 -e "select @@version, @@hostname";
    Enter password: 
    +--------------------+---------------+
    | @@version          | @@hostname    |
    +--------------------+---------------+
    | 5.6.24-72.2-56-log | node1         |
    +--------------------+---------------+


    -- Second to port 3312 (one of our read nodes)

    sh@sh--work:~$ mysql -u all -p --host=192.168.1.88 --port=3312 -e "select @@version, @@hostname";
    Enter password: 
    +--------------------+---------------+
    | @@version          | @@hostname    |
    +--------------------+---------------+
    | 5.6.24-72.2-56-log | node1         |
    +--------------------+---------------+
    sh@sh--work:~$ mysql -u all -p --host=192.168.1.88 --port=3312 -e "select @@version, @@hostname";
    Enter password: 
    +--------------------+---------------+
    | @@version          | @@hostname    |
    +--------------------+---------------+
    | 5.6.24-72.2-56-log | node2         |
    +--------------------+---------------+
    sh@sh--work:~$ mysql -u all -p --host=192.168.1.88 --port=3312 -e "select @@version, @@hostname";
    Enter password: 
    +--------------------+---------------+
    | @@version          | @@hostname    |
    +--------------------+---------------+
    | 5.6.24-72.2-56-log | node3         |
    +--------------------+---------------+

Installing and Configuring KeepAlived

The final step in our environment is installing and configuring KeepAlived, which provides the Virtual IP and switch-over between the HAproxy instances. The idea is that when one of the HAproxy servers fails, the other one must do the same work without interrupting the whole architecture. The new IP address for us is 192.168.1.199, which is the Virtual IP for our HAproxy servers. We will connect to MySQL (PXC) through this IP, and KeepAlived will decide to which HAproxy instance it must send the connection. Let’s test:

    -- Pay Attention to host IP address

    sh@sh--work:~$ mysql -u all -p --host=192.168.1.199 --port=3311 -e "select @@version, @@hostname";
    Enter password: 
    +--------------------+---------------+
    | @@version          | @@hostname    |
    +--------------------+---------------+
    | 5.6.24-72.2-56-log | node1         |
    +--------------------+---------------+


    sh@sh--work:~$ mysql -u all -p --host=192.168.1.199 --port=3312 -e "select @@version, @@hostname";
    Enter password: 
    +--------------------+---------------+
    | @@version          | @@hostname    |
    +--------------------+---------------+
    | 5.6.24-72.2-56-log | node2         |
    +--------------------+---------------+

We send the connection to the KeepAlived virtual IP and it is passed on to one of the HAproxy servers. Enough theory; let’s achieve our goal.

Installing/Configuring

    **HAproxy1, 2, 3**

    [root@node1, 2, 3 ~]# yum install keepalived
    [root@node1, 2, 3 ~]# systemctl enable keepalived.service

    [root@node1, 2, 3 ~]# cd /etc/keepalived/
    [root@node1, 2, 3 ~]# mv keepalived.conf keepalived.conf.orig
    [root@node1, 2, 3 ~]# nano keepalived.conf
    -- Add following lines to file        
    vrrp_script chk_haproxy {
        script "killall -0 haproxy" # verify the pid is exist or not
        interval 2                      # check every 2 seconds
        weight 2                        # add 2 points of prio if OK
    }

    vrrp_instance VI_1 {
        interface eth0                  # interface to monitor
        state MASTER
        virtual_router_id 51            # Assign one ID for this route
        priority 101                    # 101 on master, 100 on backup1, 99 on backup2
        authentication {
                 auth_type PASS
                 auth_pass 12345
        }
        virtual_ipaddress {
            192.168.1.199/24            # the virtual IP
        }
        track_script {
            chk_haproxy
        }
    }

    -- Save and exit file editing.

    [root@node1, 2, 3 ~]# /sbin/restorecon -v -F /etc/keepalived/keepalived.conf

Starting KeepAlived

    [root@node1, 2, 3 ~]# systemctl start keepalived.service
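To see which server currently holds the virtual IP, check the interface on each node; the VIP should be listed only on the current MASTER (a sketch; the interface name eth0 matches keepalived.conf above):

    [root@node1 ~]# ip addr show eth0 | grep 192.168.1.199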

Testing MySQL connection with KeepAlived

    sh@sh--work:~$ mysql -u all -p --host=192.168.1.199 --port=3311 -e "select @@version, @@hostname";
    Enter password: 
    +---------------------------+------------+
    | @@version                 | @@hostname |
    +---------------------------+------------+
    | 10.0.15-PXC-wsrep-log     | node1      |
    +---------------------------+------------+


    sh@sh--work:~$ mysql -u all -p --host=192.168.1.199 --port=3312 -e "select @@version, @@hostname";
    Enter password: 
    +---------------------------+------------+
    | @@version                 | @@hostname |
    +---------------------------+------------+
    | 10.0.15-PXC-wsrep-log     | node2      |
    +---------------------------+------------+

Conclusion

In this tutorial you learned how to install and configure PXC on CentOS 7 and how to load balance the cluster using the well-known HAproxy. As a bonus you learned how to install and configure KeepAlived and access the database over a Virtual IP, which is the next level of redundancy in a High Availability approach. If you have any suggestions or spot errors, please do not hesitate to contact me. Thank you for reading.

MySQL Optimizer Tracer usage case with count(*)

What is Optimizer Trace? After reading the topic about Optimizer Trace by [Morgan Tocker][1], I decided to test it. From [Optimizer Trace and EXPLAIN FORMAT=JSON in 5.7][2]:

Optimizer trace is a new diagnostic tool introduced in MySQL 5.6 to show how the optimizer is working internally. It is similar to EXPLAIN, with a few notable differences: it doesn’t just show the intended execution plan, it shows the alternative choices; you enable the optimizer trace, then you run the actual query; it is far more verbose in its output.

To understand the goal of this article, please read the previous one about the related verified optimizer BUG: [Playing with count(*) optimizer work][3]. We have 2 queries: select count(*) from sales; and select count(*) from sales where sales_id > 0;. First, let’s get the explain plan for each query, in JSON format and as regular output:

       -- JSON 
          mysql> explain format=json select count(*) from sales;
          { "query_block": 
          { "select_id": 1, 
          "table": { "table_name": "sales", 
                     "access_type":  "index", 
                     "key": "sales_cust_idx", 
                     "used_key_parts": [ "CUSTOMER_ID" ] 
                    /*  used_key_parts */, 
                    "key_length": "4", 
                    "rows": 2489938, 
                    "filtered": 100,
                     "using_index": true }  
            /* table */ } 
           /* query_block */ } 

        mysql> explain select count(*) from sales\G
        *************************** 1. row *************************** 
        id: 1 
        select_type: SIMPLE 
        table: sales 
        type: index 
        possible_keys: NULL 
        key: sales_cust_idx 
        key_len: 4 
        ref: NULL 
        rows: 2489938 
        Extra: Using index
        1 row in set (0.00 sec)

Second query:

       -- JSON  
       mysql> explain format=json select count(*) from sales where sales_id > 0\G
       *************************** 1. row ***************************
    EXPLAIN: 
    { "query_block": 
    { "select_id": 1, 
    "table": 
    { "table_name": "sales", 
      "access_type": "range",
      "possible_keys": [ "PRIMARY" ], "key": "PRIMARY", 
      "used_key_parts": [ "SALES_ID" ], "key_length": "4", 
      "rows": 1244969, 
      "filtered": 100, 
      "using_index": true, 
     "attached_condition": "(`sales`.`sales`.`SALES_ID` > 0)" } } } 


     mysql> explain select count(*) from sales where sales_id > 0\G
     *************************** 1. row *************************** 
     id: 1 
     select_type: SIMPLE 
     table: sales 
     type: range 
     possible_keys: PRIMARY 
     key: PRIMARY 
     key_len: 4 
     ref: NULL 
     rows: 1244969 
     Extra: Using where; Using index
     1 row in set (0.00 sec)

From the explain plan it is obvious that the first query will use an “Index Scan” and the second a “Range Scan + Index”. The first query will use sales_cust_idx on the customer_id column; the second query will use the primary key on the sales_id column. At first view there is no difference between the queries, but the optimizer estimates half the rows when the sales_id > 0 condition is attached. See the related BUG: [#68814][4]. Now let’s examine the problem with Optimizer Tracer. Before running the query you should enable the optimizer trace:

      SET OPTIMIZER_TRACE="enabled=on",END_MARKERS_IN_JSON=on;
      SET OPTIMIZER_TRACE_MAX_MEM_SIZE=1000000; 
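When you are finished, you can switch tracing off again so that later queries are not traced:

      SET OPTIMIZER_TRACE="enabled=off";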

Then run the first query:

      mysql> select count(*) from sales; 
      +----------+ 
      | count(*) |
      +----------+
      |  2500003 |
      +----------+ 
     1 row in set (0.58 sec) 

Query the OPTIMIZER_TRACE table in information_schema:

     mysql> select query, trace from INFORMATION_SCHEMA.OPTIMIZER_TRACE;
     select count(*) from sales | 
    { "steps": 
   [ { "join_preparation": 
    { "select#": 1, 
    "steps": [ { "expanded_query": "/* select#1 */ select count(0) AS `count(*)` from `sales`" } ] /* steps */ }
    /* join_preparation */ },
    { "join_optimization": 
       { "select#": 1, 
         "steps": [ { "table_dependencies": [ { "table": "`sales`", "row_may_be_null": false, "map_bit": 0, "depends_on_map_bits": [ ] 
    /* depends_on_map_bits */ } ] 
    /* table_dependencies */ }, 
      { "rows_estimation": 
           [ { "table": "`sales`", "table_scan": { "rows": 2489938, "cost": 10347 }
          /* table_scan */ } ] 
          /* rows_estimation */ }, 
           { "considered_execution_plans": [ { "plan_prefix": [ ] 
           /* plan_prefix */,
           "table": "`sales`", 
           "best_access_path": { "considered_access_paths": [ { "access_type": "scan", "rows": 2.49e6, "cost": 508335, "chosen": true } ] 
          /* considered_access_paths */ } 
          /* best_access_path */, 
         "cost_for_plan": 508335,
         "rows_for_plan": 2.49e6,
         "chosen": true } ]
         /* considered_execution_plans */ },
        { "attaching_conditions_to_tables": { "original_condition": null,  "attached_conditions_computation": [ ] 
        /* attached_conditions_computation */,
       "attached_conditions_summary": [ { "table": "`sales`", "attached": null } ] 
        /* attached_conditions_summary */ } 
        /* attaching_conditions_to_tables */ },
       { "refine_plan": [ { "table": "`sales`", "access_type": "index_scan" } ] 
        /* refine_plan */ } ] 
        /* steps */ } 
        /* join_optimization */ },
        { "join_execution": { "select#": 1, "steps": [ ] 
        /* steps */ } 
        /* join_execution */ } ] 
        /* steps */ 

The interesting part for query 1 is -> “cost_for_plan”: 508335, “rows_for_plan”: 2.49e6, “chosen”: true. The cost is 508335; the rows for the plan, 2.49e6 = 2,490,000 rows, roughly equal the explain plan estimation.
Now the second query:

     mysql> select count(*) from sales where sales_id > 0;
     +----------+ 
     | count(*) |
     +----------+ 
     |  2500003 |
     +----------+
     1 row in set (1.18 sec)

Query OPTIMIZER_TRACE again:

     mysql> select query, trace from INFORMATION_SCHEMA.OPTIMIZER_TRACE; 
     select count(*) from sales where sales_id > 0 | 
     { "steps": [ { "join_preparation": 
        { "select#": 1, 
          "steps": [ { "expanded_query": "/* select#1 */ select count(0) AS `count(*)` from `sales` where (`sales`.`SALES_ID` > 0)" } ] 
        /* steps */ } 
        /* join_preparation */ },
       { "join_optimization": 
          { "select#": 1, 
            "steps": [ { "condition_processing": { "condition": "WHERE", "original_condition": "(`sales`.`SALES_ID` > 0)", "steps": [ { "transformation": "equality_propagation", 
           "resulting_condition": "(`sales`.`SALES_ID` > 0)" }, 
          { "transformation": "constant_propagation", "resulting_condition": "(`sales`.`SALES_ID` > 0)" }, 
          { "transformation": "trivial_condition_removal", "resulting_condition": "(`sales`.`SALES_ID` > 0)" } ] 
         /* steps */ } 
         /* condition_processing */ }, 
        { "table_dependencies": [ { "table": "`sales`", "row_may_be_null": false, "map_bit": 0, "depends_on_map_bits": [ ] 
        /* depends_on_map_bits */ } ] 
        /* table_dependencies */ }, 
       { "ref_optimizer_key_uses": [ ] 
        /* ref_optimizer_key_uses */ },
       { "rows_estimation": [ { "table": "`sales`", "range_analysis": { "table_scan": {  "rows": 2489938, "cost": 508337 } 
       /* table_scan */, 
       "potential_range_indices": [ { "index": "PRIMARY", "usable": true, "key_parts": [ "SALES_ID" ] 
      /* key_parts */ }, 
      { "index": "sales_cust_idx", "usable": false, "cause": "not_applicable" } ] 
      /* potential_range_indices */,
     "best_covering_index_scan": { "index": "sales_cust_idx", "cost": 500418, "chosen": true } 
      /* best_covering_index_scan */,
      "setup_range_conditions": [ ] 
      /* setup_range_conditions */, 
      "group_index_range": { "chosen": false, "cause": "not_group_by_or_distinct" } 
      /* group_index_range */, 
      "analyzing_range_alternatives": { "range_scan_alternatives": [ { "index": "PRIMARY", "ranges": [ "0 < SALES_ID" ] 
     /* ranges */, 
     "index_dives_for_eq_ranges": true, 
     "rowid_ordered": true, 
     "using_mrr": false, "index_only": true, "rows": 1244969, "cost": 251364, "chosen": true } ] 
    /* range_scan_alternatives */,
   "analyzing_roworder_intersect": { "usable": false, "cause": "too_few_roworder_scans" }  /* analyzing_roworder_intersect */ } 
    /* analyzing_range_alternatives */,
   "chosen_range_access_summary": { "range_access_plan": { "type": "range_scan", "index": "PRIMARY", "rows": 1244969, "ranges": [ "0 < SALES_ID" ] 
   /* ranges */ } 
   /* range_access_plan */, 
   "rows_for_plan": 1244969, "cost_for_plan": 251364, "chosen": true } 
   /* chosen_range_access_summary */ } 
   /* range_analysis */ } ] 
   /* rows_estimation */ },
    { "considered_execution_plans": [ { "plan_prefix": [ ] /* plan_prefix */, "table": "`sales`", "best_access_path": { "considered_access_paths": [ { "access_type": "range", "rows": 1.24e6, "cost": 500357, "chosen": true } ] 
    /* considered_access_paths */ } 
    /* best_access_path */, 
    "cost_for_plan": 500357, "rows_for_plan": 1.24e6, "chosen": true } ] 
    /* considered_execution_plans */ },
    { "attaching_conditions_to_tables": { "original_condition": "(`sales`.`SALES_ID` > 0)", "attached_conditions_computation": [ ] 
   /* attached_conditions_computation */,
   "attached_conditions_summary": [ { "table": "`sales`", "attached": "(`sales`.`SALES_ID` > 0)" } ] 
   /* attached_conditions_summary */ } 
   /* attaching_conditions_to_tables */ }, 
   { "refine_plan": [ { "table": "`sales`", "access_type": "range" } ] /* refine_plan */ } ] 
   /* steps */ } 
   /* join_optimization */ }, 
  { "join_execution": { "select#": 1, "steps": [ ] /* steps */ } /* join_execution */ } ] /* steps */ 

It is much more complicated with the second query, and due to the lack of documentation for all of the output, I am looking for explanations from experts.
The first thing is that “potential_range_indices” says the index “sales_cust_idx” is not usable:

      "potential_range_indices": 
      [ { "index": "PRIMARY", "usable": true, "key_parts": [ "SALES_ID" ] 
      /* key_parts */ },
      { "index": "sales_cust_idx", "usable": false, "cause": "not_applicable" } ] 

But in “best_covering_index_scan”, the index “sales_cust_idx” is marked as “chosen”: true:

       "best_covering_index_scan": { "index": "sales_cust_idx", "cost": 500418, "chosen": true } 

The second thing is that in “range_scan_alternatives”,

        "analyzing_range_alternatives": 
        { "range_scan_alternatives": [ { "index": "PRIMARY", "ranges": [ "0 < SALES_ID" ]  
        /* ranges */, 
       "index_dives_for_eq_ranges": true, "rowid_ordered": true, "using_mrr": false, "index_only": true, "rows": 1244969, "cost": 251364, "chosen": true } 

and in “chosen_range_access_summary”, “rows_for_plan” is 1244969 and “cost_for_plan” is 251364:

       "chosen_range_access_summary": 
       { "range_access_plan": 
        { "type": "range_scan", "index": "PRIMARY", "rows": 1244969, "ranges": [ "0 < SALES_ID" ] 
         /* ranges */ } 
         /* range_access_plan */, 
        "rows_for_plan": 1244969, "cost_for_plan": 251364, "chosen": true } 

But for the final “best_access_path”, “cost_for_plan” is increased to 500357 and “rows_for_plan” is 1.24e6 = 1,240,000:

       "best_access_path": { "considered_access_paths": [ { "access_type": "range", "rows": 1.24e6, "cost": 500357, "chosen": true } ] 
       /* considered_access_paths */ } 
       /* best_access_path */, 
       "cost_for_plan": 500357, "rows_for_plan": 1.24e6, "chosen": true }

The third thing is that sales_id > 0 is rewritten as 0 < SALES_ID:

      "ranges": [ "0 < SALES_ID" ] 

*** After explanations from the community, this article will be updated ***

[1]: http://www.tocker.ca/
[2]: http://www.tocker.ca/2015/05/25/optimizer-trace-and-explain-formatjson-in-5-7.html
[3]: https://mysql.az/playing-with-count-optimizer-work/
[4]: http://bugs.mysql.com/bug.php?id=68814

MySQL LogRotate script

Have you ever tried to use the logrotate facility in Linux with MySQL? There is no need to script one for this purpose; it is already installed. In the MySQL spec file, it looks for the logrotate.d folder:

    # Ensure that needed directories exists 
    install -d $RBR%{_sysconfdir}/{logrotate.d,init.d}

There is also a dedicated mysql-log-rotate.sh script for installing the logrotate script. The script path is /mysql-5.6.24/support-files/mysql-log-rotate.sh. Again from the spec file:

    # Install logrotate and autostart 
    install -m 644 $MBD/release/support-files/mysql-log-rotate $RBR%{_sysconfdir}/logrotate.d/mysql

After installation there will be a mysql script in /etc/logrotate.d/:

   # The log file name and location can be set in
   # /etc/my.cnf by setting the "log-error" option
   # in either [mysqld] or [mysqld_safe] section as
   # follows:
   #
   # [mysqld]
   # log-error=/var/lib/mysql/mysqld.log
   #
   # In case the root user has a password, then you
   # have to create a /root/.my.cnf configuration file
   # with the following content:
   #
   # [mysqladmin]
   # password = <secret> 
   # user= root
   #
   # where "<secret>" is the password. 
   #
   # ATTENTION: The /root/.my.cnf file should be readable
   # _ONLY_ by root !

   /var/lib/mysql/mysqld.log {
         # create 600 mysql mysql
         notifempty
         daily
         rotate 5
         missingok
         compress
     postrotate
         # run only if mysqld is really running
         if test -x /usr/bin/mysqladmin && \
            /usr/bin/mysqladmin ping &>/dev/null
         then
            /usr/bin/mysqladmin flush-logs
         fi
     endscript
   }

There are 2 simple but critical problems with this script. The first is the name of the error log file. If MySQL is started from the default my.cnf file, the path of the error log file will be /var/log/mysqld.log:

     [mysqld_safe] 
     log-error=/var/log/mysqld.log 
     pid-file=/var/run/mysqld/mysqld.pid

But in the script it is defined as /var/lib/mysql/mysqld.log. Even if you comment out the default settings in the my.cnf file, MySQL by default will create the error log file as datadir + host_name.err = /var/lib/mysql/host_name.err; again, no such error log file is defined in the logrotate script. The second problem is the usage of mysqladmin: the script calls mysqladmin without any password, and we assume there is no MySQL instance without a password, at least in a production environment:

     [root@centos7 ~]# /usr/bin/mysqladmin ping
     /usr/bin/mysqladmin: connect to server at 'localhost' failed
     error: 'Access denied for user 'root'@'localhost' (using password: NO)'

     [root@centos7 ~]# /usr/bin/mysqladmin flush-logs
     /usr/bin/mysqladmin: connect to server at 'localhost' failed
     error: 'Access denied for user 'root'@'localhost' (using password: NO)'

Now we know that after installing MySQL there will be a non-working logrotate script, which will fail for the reasons explained above. The related report is: #73949
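A minimal corrected sketch points at the real error log path and relies on credentials stored in /root/.my.cnf, exactly as the script header suggests (a sketch; paths assume the default my.cnf shown above):

   /var/log/mysqld.log {
         notifempty
         daily
         rotate 5
         missingok
         compress
     postrotate
         # credentials come from the [mysqladmin] section of /root/.my.cnf
         if test -x /usr/bin/mysqladmin && \
            /usr/bin/mysqladmin ping &>/dev/null
         then
            /usr/bin/mysqladmin flush-logs
         fi
     endscript
   }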

Partial table recovery from physical backup

In the previous topic, we covered the “Transportable Tablespace” concept by copying and importing a table’s tablespace to a remote server. See -> Copying Tablespaces to Remote Server. The idea is to copy the tablespace file to a remote server; on the remote server you must create identical database and table names manually, then discard the new table’s tablespace file and import the copied one. To achieve this you must have a running MySQL version >= 5.6, innodb_file_per_table=1, and you must know the “CREATE statement” of the table.

Let’s change our test conditions. Assume that you have a MySQL server and have taken a physical backup of it (you can use Percona XtraBackup or a cold backup, for example). But one wonderful day somebody deletes all the table data (say -> delete from table_name). In fact the table still exists at this moment (.frm and .ibd), so you can easily discard the table’s tablespace and import the tablespace from the backup folder. But what if the table is dropped and you don’t know the CREATE statement of the table? Or even the whole database is dropped? Then our path differs from the previous one (a manual sketch follows below):

1. Create the dropped database manually.
2. Create the dropped table by extracting the table’s CREATE statement from the .frm file in the backup directory. To extract the CREATE statement from the .frm file you can use the mysqlfrm tool from MySQL Utilities.
3. Discard the table’s tablespace (ALTER TABLE t DISCARD TABLESPACE;).
4. Copy the .ibd file from the backup directory to MySQL’s datadir database directory.
5. Import the copied tablespace file (ALTER TABLE t IMPORT TABLESPACE;).

You can also read about this concept in the documentation -> tablespace-copying. I have automated this process by adding table CREATE statement extraction functionality to the MySQL-AutoXtraBackup project as the –partial recovery option. Here is a demo usage video:
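For reference, done by hand the flow looks roughly like this (a sketch; ‘mydb’, ‘t1’, and the backup path are hypothetical; mysqlfrm ships with MySQL Utilities):

    -- 1-2. recreate the database and table (CREATE statement recovered from the backed-up .frm)
    shell> mysqlfrm --diagnostic /backup/mydb/t1.frm
    mysql> CREATE DATABASE mydb;
    mysql> CREATE TABLE mydb.t1 (...);   -- paste the statement printed by mysqlfrm
    -- 3. discard the freshly created, empty tablespace
    mysql> ALTER TABLE mydb.t1 DISCARD TABLESPACE;
    -- 4. copy the .ibd file back from the backup
    shell> cp /backup/mydb/t1.ibd /var/lib/mysql/mydb/
    shell> chown mysql:mysql /var/lib/mysql/mydb/t1.ibd
    -- 5. import it
    mysql> ALTER TABLE mydb.t1 IMPORT TABLESPACE;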

If you test it and find issues, please report them to help improve this open source project.