Running simultaneous queries with Python, for XtraBackup to kill them

This topic is about a simple approach to running multiple SQL statements for testing XtraBackup behaviour: whether it is going to kill them due to a blocked FTWRL (FLUSH TABLES WITH READ LOCK).

According to the DOC -> Backup Locks

Percona Server has implemented this feature to be a lightweight alternative to FLUSH TABLES WITH READ LOCK for both physical and logical backups. Three new statements are now available: LOCK TABLES FOR BACKUP, LOCK BINLOG FOR BACKUP and UNLOCK BINLOG.

So by default, if you are using PS, it is going to use LOCK TABLES FOR BACKUP prior to copying non-InnoDB tables.

Well, how do you force it to use FTWRL instead of Backup Locks?
Congratulations, there is the --no-backup-locks option: if you specify it, FTWRL will be used.

Okay, again reading the DOC -> #killing-the-blocking-queries

So here is the combination of options to kill blocking queries:

--no-backup-locks
--ftwrl-wait-timeout=0 
--ftwrl-wait-query-type=all 
--ftwrl-wait-threshold=1 
--kill-long-queries-timeout=1 
--kill-wait-query-type=all 
--kill-long-query-type=all 

Good to go, but the missing piece is creating the blocking queries themselves.

* First we need some MyISAM tables. I used sysbench to create some tables and then altered them:

        # Altering some of the table engines from innodb to myisam
        for i in range(20, 25):
            sql_alter_engine = "alter table sysbench_test_db.sbtest{} engine=myisam".format(i)
            RunBenchmark.run_sql_statement(basedir=self.basedir, sql_statement=sql_alter_engine)
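
run_sql_statement above belongs to my test harness and is not shown here; purely for illustration, a minimal sketch of such a helper, assuming it simply shells out to the mysql client (the name and arguments below are hypothetical, not the actual implementation):

import shlex
import subprocess

class RunBenchmark:

    @staticmethod
    def run_sql_statement(basedir, sql_statement):
        # Hypothetical: run one SQL statement through the mysql client
        command = '{}/bin/mysql --user=root --socket={}/socket.sock -e "{}"'.format(
            basedir, basedir, sql_statement)
        subprocess.check_call(shlex.split(command))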

* As I am using the MySQL client directly, the best option is a small bash script that runs the SQL with output redirected to /dev/null (I don't need the SQL result, since I will run a benchmark query):

#!/usr/bin/env bash
# Executed as: run_sql_queries.sh BASEDIR_PATH SOCKET 'SQL statement'
BASEDIR=$1
MYSQL_SOCK=$2
SQL=$3
MYSQL_USER=root

${BASEDIR}/bin/mysql --user=${MYSQL_USER} --socket=${MYSQL_SOCK} -e "${SQL}" > /dev/null 2>&1

* Now calling this bash script from Python, passing the basedir of PS, the socket, and the actual SQL to run:

    # Requires: import os; from shlex import split; from subprocess import Popen
    @staticmethod
    def parallel_sleep_queries(basedir, sql, sock):
        dir_path = os.path.dirname(os.path.realpath(__file__))
        bash_command = "{}/run_sql_queries.sh {} {} '{}'".format(dir_path, basedir, sock, sql)
        try:
            # Fire-and-forget: inherit stdio and do not wait for the query to finish
            process = Popen(
                split(bash_command),
                stdin=None,
                stdout=None,
                stderr=None)
        except Exception as e:
            print(e)

* Using Python’s concurrent.futures here for a truly concurrent run (prior to taking the backup):

            with concurrent.futures.ProcessPoolExecutor(max_workers=50) as pool:
                for _ in range(10):
                    for i in range(20, 25):
                        # Note: submit() takes the callable and its arguments;
                        # calling the method inline would hand submit() its
                        # return value (None) instead of the function itself.
                        pool.submit(
                            self.parallel_sleep_queries,
                            basedir=self.basedir,
                            sock="{}/socket.sock".format(self.basedir),
                            sql="select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest{}".format(i))
            self.all_backup()  # Taking backup with the options provided above

So the thing is quite simple here: I created a bash script that calls the mysql CLI, called it from a Python method with Popen, and then ran this method concurrently.
The result is quite handy. The actual backup command (it is generated from a config file):

/home/shahriyar.rzaev/XB_TEST/server_dir/target/percona-xtrabackup-2.4.x-debug/bin/xtrabackup --defaults-file= --user=root \
--password=''  --target-dir=/home/shahriyar.rzaev/XB_TEST/backup_dir/ps_5_7_x_2_4/cycle1/full/2017-12-09_06-59-04 --backup \
--socket=/home/shahriyar.rzaev/XB_TEST/server_dir/PS061217-percona-server-5.7.20-18-linux-x86_64-debug/socket.sock \
--compress=quicklz --compress-chunk-size=65536 --compress-threads=4 --encrypt=AES256 \
--encrypt-key=VVTBwgM4UhwkTTV98fhuj+D1zyWoA89K --encrypt-threads=4 --encrypt-chunk-size=65536 --slave-info \
--no-version-check --core-file --parallel=10 --throttle=40 --check-privileges --ftwrl-wait-timeout=0 \
--ftwrl-wait-query-type=all --ftwrl-wait-threshold=1 --kill-long-queries-timeout=1 --kill-wait-query-type=all \
--kill-long-query-type=all --no-backup-locks \
--keyring-file-data=/home/shahriyar.rzaev/XB_TEST/server_dir/PS061217-percona-server-5.7.20-18-linux-x86_64-debug/mysql-keyring/keyring \
--stream="xbstream" > \
/home/shahriyar.rzaev/XB_TEST/backup_dir/ps_5_7_x_2_4/cycle1/full/2017-12-09_06-59-04/full_backup.stream

The result of running and killing blocking queries:

171209 06:59:15 Executing FLUSH TABLES WITH READ LOCK...
171209 06:59:15 Kill query timeout 1 seconds.
171209 06:59:16 Connecting to MySQL server host: localhost, user: root, password: set, port: not set, socket: /home/shahriyar.rzaev/XB_TEST/server_dir/PS061217-percona-server-5.7.20-18-linux-x86_64-debug/socket.sock
171209 06:59:16 Killing query 308 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest20
171209 06:59:16 Killing query 309 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest21
171209 06:59:16 Killing query 310 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest22
171209 06:59:16 Killing query 311 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest23
171209 06:59:16 Killing query 312 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest24
171209 06:59:16 Killing query 313 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest20
171209 06:59:16 Killing query 314 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest21
171209 06:59:16 Killing query 315 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest22
171209 06:59:16 Killing query 316 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest23
171209 06:59:16 Killing query 317 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest24
171209 06:59:16 Killing query 318 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest20
171209 06:59:16 Killing query 319 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest21
171209 06:59:16 Killing query 320 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest22
171209 06:59:16 Killing query 321 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest23
171209 06:59:16 Killing query 322 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest24
171209 06:59:16 Killing query 323 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest20
171209 06:59:16 Killing query 324 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest21
171209 06:59:16 Killing query 325 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest22
171209 06:59:16 Killing query 326 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest24
171209 06:59:16 Killing query 327 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest23
171209 06:59:16 Killing query 328 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest21
171209 06:59:16 Killing query 329 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest20
171209 06:59:16 Killing query 330 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest23
171209 06:59:16 Killing query 331 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest22
171209 06:59:16 Killing query 332 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest24
171209 06:59:16 Killing query 333 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest20
171209 06:59:16 Killing query 334 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest21
171209 06:59:16 Killing query 335 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest22
171209 06:59:16 Killing query 336 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest23
171209 06:59:16 Killing query 337 (duration 13 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest24
171209 06:59:16 Killing query 338 (duration 12 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest20
171209 06:59:16 Killing query 339 (duration 12 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest21
171209 06:59:16 Killing query 340 (duration 12 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest22
171209 06:59:16 Killing query 341 (duration 12 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest23
171209 06:59:16 Killing query 342 (duration 12 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest24
171209 06:59:16 Killing query 343 (duration 12 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest20
171209 06:59:16 Killing query 344 (duration 12 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest21
171209 06:59:16 Killing query 345 (duration 12 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest22
171209 06:59:16 Killing query 346 (duration 12 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest23
171209 06:59:16 Killing query 347 (duration 12 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest24
171209 06:59:16 Killing query 348 (duration 12 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest20
171209 06:59:16 Killing query 349 (duration 12 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest21
171209 06:59:16 Killing query 350 (duration 12 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest22
171209 06:59:16 Killing query 351 (duration 12 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest23
171209 06:59:16 Killing query 352 (duration 12 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest24
171209 06:59:16 Killing query 353 (duration 12 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest20
171209 06:59:16 Killing query 354 (duration 12 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest21
171209 06:59:16 Killing query 355 (duration 12 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest22
171209 06:59:16 Killing query 357 (duration 12 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest23
171209 06:59:16 Killing query 358 (duration 12 sec): select benchmark(9999999, md5(c)) from sysbench_test_db.sbtest24
171209 06:59:16 Kill query thread stopped
171209 06:59:16 Starting to backup non-InnoDB tables and files

The final thing to note here is how to ship these bash files from setup.py.
You can use something like this in setup.py:

    packages=['general_conf', 'backup_prepare', 'partial_recovery', 'master_backup_script', 'prepare_env_test_mode'],
    package_data={
        'prepare_env_test_mode': ['*.sh']
    },

For full code please see:
prepare_env_test_mode

Thanks 🙂

XtraBackup vs. OOM killer (usage of mprof)

This small article is about nicely depicting memory usage prior to the OOM killer's show.

I have started MySQL with:

/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/bin/mysqld --no-defaults \
--innodb_buffer_pool_size=1G --innodb_log_file_size=1G --innodb_page_size=64K --early-plugin-load=keyring_file.so \
--keyring_file_data=/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/mysql-keyring/keyring \
--log-bin=mysql-bin --log-slave-updates --server-id=1 --gtid-mode=ON --enforce-gtid-consistency --binlog-format=row \
--core-file --basedir=/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64 \
--tmpdir=/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/data \
--datadir=/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/data \
--plugin-load-add=tokudb=ha_tokudb.so --tokudb-check-jemalloc=0 --plugin-load-add=rocksdb=ha_rocksdb.so \
--socket=/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/socket.sock --port=17057 \
--log-error=/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/log/master.err

So basically it is MySQL 5.7.19 with 64K page size.

Trying to take full + incremental backups:

Full is OK:

/home/shahriyar.rzaev/XB_TEST/server_dir/target/percona-xtrabackup-2.4.x-debug/bin/xtrabackup --defaults-file= \
--user=root --password=''  --target-dir=/home/shahriyar.rzaev/XB_TEST/backup_dir/ps_5_7_x_2_4/full/2017-11-14_11-31-20 \
--backup --socket=/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/socket.sock \
--compress=quicklz --compress-chunk-size=65536 --compress-threads=4 --encrypt=AES256 --encrypt-key=VVTBwgM4UhwkTTV98fhuj+D1zyWoA89K \
--encrypt-threads=4 --encrypt-chunk-size=65536 --no-version-check --core-file --parallel=10 --throttle=40 \
--keyring-file-data=/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/mysql-keyring/keyring --stream="xbstream" > \
/home/shahriyar.rzaev/XB_TEST/backup_dir/ps_5_7_x_2_4/full/2017-11-14_11-31-20/full_backup.stream

Incremental Failed:

/home/shahriyar.rzaev/XB_TEST/server_dir/target/percona-xtrabackup-2.4.x-debug/bin/xtrabackup --defaults-file= \
--user=root --password='' --target-dir=/home/shahriyar.rzaev/XB_TEST/backup_dir/ps_5_7_x_2_4/inc/2017-11-14_11-33-51 \
--incremental-basedir=/home/shahriyar.rzaev/XB_TEST/backup_dir/ps_5_7_x_2_4/full/2017-11-14_11-31-20 --backup \
--socket=/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/socket.sock \
--compress=quicklz --compress-chunk-size=65536 --compress-threads=4 --encrypt=AES256 \
--encrypt-key=VVTBwgM4UhwkTTV98fhuj+D1zyWoA89K --encrypt-threads=4 --encrypt-chunk-size=65536 \
--no-version-check --core-file --parallel=10 --throttle=40 \
--keyring-file-data=/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/mysql-keyring/keyring --stream="xbstream" > \
 /home/shahriyar.rzaev/XB_TEST/backup_dir/ps_5_7_x_2_4/inc/2017-11-14_11-33-51/inc_backup.stream

171114 11:35:05 Connecting to MySQL server host: localhost, user: root, password: set, port: not set, socket: /home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/socket.sock
Using server version 5.7.19-17-debug-log
/home/shahriyar.rzaev/XB_TEST/server_dir/target/percona-xtrabackup-2.4.x-debug/bin/xtrabackup version 2.4.7 based on MySQL server 5.7.13 Linux (x86_64) (revision id: )
incremental backup from 70156215 is enabled.
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/data/
xtrabackup: open files limit requested 0, set to 1024
xtrabackup: using the following InnoDB configuration:
xtrabackup:   innodb_data_home_dir = .
xtrabackup:   innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup:   innodb_log_group_home_dir = ./
xtrabackup:   innodb_log_files_in_group = 2
xtrabackup:   innodb_log_file_size = 1073741824
InnoDB: Number of pools: 1
171114 11:35:06 >> log scanned up to (70156224)
xtrabackup: Generating a list of tablespaces
InnoDB: Allocated tablespace ID 2 for mysql/plugin, old maximum was 0
xtrabackup: Starting 10 threads for parallel data files transfer
171114 11:35:07 >> log scanned up to (70156224)
Killed

I had forgotten about an already reported bug here -> #1582456

So according to this report and this code line:

write_filt.cc#L78

The buffer calculation for incremental backup is:

/* allocate buffer for incremental backup (4096 pages) */
buf_size = (cursor->page_size / 4 + 1) * cursor->page_size +
           UNIV_PAGE_SIZE_MAX;

So with a 64K page size, MySQL is going to allocate:

(64*1024/4 + 1) * 64*1024 + 64*1024 == 1073872896 bytes == 1024 MB == roughly 1G.

And if you are going to use --parallel=10 as I did, it is going to be something like 10 * 1G = 10G. Due to this, it is quite a fair action by the OOM killer to terminate my process.
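
A quick sanity check of that arithmetic in Python; the formula follows write_filt.cc above, and UNIV_PAGE_SIZE_MAX is the 64K maximum InnoDB page size in 5.7:

# Per-thread buffer size for an incremental backup, per write_filt.cc
UNIV_PAGE_SIZE_MAX = 64 * 1024  # 64K, the maximum InnoDB page size

def inc_buf_size(page_size, parallel=1):
    per_thread = (page_size // 4 + 1) * page_size + UNIV_PAGE_SIZE_MAX
    return per_thread * parallel

# 64K pages with --parallel=10 -> roughly 10 GB of buffers
print(inc_buf_size(64 * 1024, parallel=10) / 1024.0 ** 3)  # ~10.0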

After that, I was searching for a nice, simple solution to depict the memory usage of an executable and found this Python module:

memory_profiler

To install it:

sudo pip3.5 install memory_profiler
sudo pip3.5 install psutil
sudo pip3.5 install matplotlib

You need tkinter as well:

sudo yum -y install python35u-tkinter

You need xauth as well, for plotting:

sudo yum -y install xauth

After that, connecting to my Vagrant VM with the -X option:

ssh -X vagrant@127.0.0.1 -i /home/shako/REPOS/MySQL-AutoXtraBackup/.vagrant/machines/default/virtualbox/private_key -p 2222

And running the following command:

mprof run append_here_the_command_to_run

It will run the given command and create a file such as mprofile_20171114122250.dat.

Now run the following:

[vagrant@dhcppc3 ~]$ mprof plot mprofile_20171114122250.dat

And it will render the memory usage plot from the .dat file.
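
As a side note, memory_profiler can also report line-by-line usage of your own Python code via its profile decorator; a minimal sketch (this profiles a Python function, unlike mprof run, which tracks a whole external process):

from memory_profiler import profile

@profile
def allocate():
    # Allocate ~100 MB so the line-by-line report has something to show
    data = [bytearray(1024 * 1024) for _ in range(100)]
    return len(data)

if __name__ == '__main__':
    allocate()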

BTW I have reported a bug to mprof as well: #171
Thanks for reading 🙂

How to combine BATS, PyTest and MySQL X Plugin for tests

Hi,
In this post I am going to show some tricks for combining the BATS framework, PyTest, and the MySQL X Plugin into a simple, unified test solution.

Let’s describe the MySQL X Plugin side first; here is the full class:

# Connecting to MySQL and working with a Session
import mysqlx

class MyXPlugin:

    def __init__(self, schema_name, collection_name):
        # Connect to a dedicated MySQL server
        self.session = mysqlx.get_session({
            'host': 'localhost',
            'port': 33060,
            'user': 'bakux',
            'password': 'Baku12345',
            'ssl-mode': mysqlx.SSLMode.DISABLED
        })

        self.schema_name = schema_name
        self.collection_name = collection_name

        # Getting schema object
        self.schema = self.session.get_schema(self.schema_name)
        # Creating collection
        self.schema.create_collection(self.collection_name, reuse=True)
        # Getting collection object
        self.collection_obj = self.schema.get_collection(self.collection_name)


    def insert_into_collection(self):
        # You can also add multiple documents at once
        print "Inserting 3 rows into collection"
        self.collection_obj.add({'_id': '2', 'name': 'Sakila', 'age': 15},
                    {'_id': '3', 'name': 'Jack', 'age': 15},
                    {'_id': '4', 'name': 'Clare', 'age': 37}).execute()

    def remove_from_collection(self):
        # Removing non-existing _id
        self.collection_obj.remove('_id = 1').execute()


    def alter_table_engine(self):
        # Altering table engine to rocksdb; Should raise an error
        try:
            command = "alter table {}.{} engine=rocksdb".format(self.schema_name, self.collection_name)
            sql = self.session.sql(command)
            sql.execute()
        except Exception as e:
            raise mysqlx.errors.OperationalError("Could not alter engine of table here!")
        else:
            return 0

    def alter_table_drop_column(self):
        # Dropping generated column
        print "Altering default collection to drop generated column"
        try:
            command = "alter table {}.{} drop column `_id`".format(self.schema_name, self.collection_name)
            sql = self.session.sql(command)
            sql.execute()
        except Exception as e:
            raise
        else:
            return 0

    def return_table_obj(self):
        # Returning Table object
        table = mysqlx.Table(self.schema, self.collection_name)
        return table

    def create_view_from_collection(self, view_name):
        # Creating view from collection
        print "Trying to create view based on MyRocks collection"
        try:
            command = "create view {}.{} as select * from {}.{}".format(self.schema_name, view_name, self.schema_name, self.collection_name)
            sql = self.session.sql(command)
            sql.execute()
        except Exception as e:
            raise
        else:
            return 0

    def select_from_view(self, view_name):
        # Running select; Should raise an error
        print "Trying to select from view [Should raise an OperationalError]"
        try:
            command = "select * from {}.{}".format(self.schema_name, view_name)
            sql = self.session.sql(command)
            cursor = sql.execute()
            cursor.fetch_all()
        except Exception as e:
            raise mysqlx.errors.OperationalError("The JSON binary value contains invalid data")
        else:
            return 0

    def select_from_table(self):
        # Running select; Should raise an error
        print "Trying to select from view [Should raise an OperationalError]"
        try:
            command = "select * from {}.{}".format(self.schema_name, self.collection_name)
            sql = self.session.sql(command)
            cursor = sql.execute()
            cursor.fetch_all()
        except Exception as e:
            raise mysqlx.errors.OperationalError("The JSON binary value contains invalid data")
        else:
            return 0
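
A quick usage sketch of this class, assuming a running server with the X Plugin listening on localhost:33060 and the credentials above:

# Create the collection, insert documents, and check the count
plugin = MyXPlugin("generated_columns_test", "my_collection")
plugin.insert_into_collection()   # adds 3 documents
plugin.remove_from_collection()   # removes a non-existing _id, count stays 3
print plugin.collection_obj.count()  # expect 3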

And for testing purposes I have added some PyTest tests:


import pytest
from mysqlx.errors import OperationalError

@pytest.mark.usefixtures("return_plugin_obj")
class TestXPlugin:
    """
    Tests for XPlugin + MyRocks
    """

    def test_check_if_collection_exists(self, return_plugin_obj):
        assert return_plugin_obj.collection_obj.exists_in_database() == True

    def test_check_collection_count(self, return_plugin_obj):
        # Inserting data
        return_plugin_obj.insert_into_collection()
        # Removing data
        return_plugin_obj.remove_from_collection()
        # Checking count
        assert return_plugin_obj.collection_obj.count() == 3

    def test_alter_table_engine_raises(self, return_plugin_obj):
        # Should raise error here
        print "Altering default collection engine from InnoDB to MyRocks [Should raise an OperationalError]"
        with pytest.raises(OperationalError) as er:
            return_plugin_obj.alter_table_engine()
        print er

    def test_alter_table_drop_column(self, return_plugin_obj):
        return_value = return_plugin_obj.alter_table_drop_column()
        assert return_value == 0

    def test_alter_table_engine(self, return_plugin_obj):
        print "Altering default collection engine from InnoDB to MyRocks [Should NOT raise an OperationalError]"
        return_value = return_plugin_obj.alter_table_engine()
        assert return_value == 0

    def helper_function(self, return_plugin_obj):
        table_obj = return_plugin_obj.return_table_obj()
        return table_obj

    def test_check_if_table_exists(self, return_plugin_obj):
        assert self.helper_function(return_plugin_obj).exists_in_database() == True

    def test_check_table_count(self, return_plugin_obj):
        assert self.helper_function(return_plugin_obj).count() == 3

    def test_check_table_name(self, return_plugin_obj):
        assert self.helper_function(return_plugin_obj).get_name() == "my_collection"

    def test_check_schema_name(self, return_plugin_obj):
        assert self.helper_function(return_plugin_obj).get_schema().get_name() == "generated_columns_test"

    def test_check_if_table_is_view(self, return_plugin_obj):
        assert self.helper_function(return_plugin_obj).is_view() == False

    def test_create_view_from_collection(self, return_plugin_obj):
        return_value = return_plugin_obj.create_view_from_collection("my_collection_view")
        assert return_value == 0

    def test_select_from_view(self, return_plugin_obj):
        # with pytest.raises(OperationalError) as er:
        #      return_plugin_obj.select_from_view("my_collection_view")
        # print er
        assert return_plugin_obj.select_from_view("my_collection_view") == 0

    def test_select_from_table(self, return_plugin_obj):
        # with pytest.raises(OperationalError) as er:
        #     return_plugin_obj.select_from_table()
        # print er
        assert return_plugin_obj.select_from_table() == 0
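
These tests rely on a return_plugin_obj fixture that is not shown above; a minimal conftest.py for it, following the same module-level approach used in the related MyRocks post below:

from myrocks_mysqlx_plugin.myrocks_mysqlx_plugin import MyXPlugin
import pytest

# One shared plugin object for the whole test module
plugin_obj = MyXPlugin("generated_columns_test", "my_collection")

@pytest.fixture()
def return_plugin_obj():
    return plugin_obj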

Previously I have described a full pytest run, but now I am going to call each test method from a bats file:

#!/usr/bin/env bats

# Created by Shahriyar Rzayev from Percona

DIRNAME=$BATS_TEST_DIRNAME

@test "Running test_check_if_collection_exists" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_if_collection_exists
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_check_collection_count" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_collection_count
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_alter_table_engine_raises" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_alter_table_engine_raises
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_alter_table_drop_column" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_alter_table_drop_column
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_alter_table_engine" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_alter_table_engine
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_check_if_table_exists" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_if_table_exists
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_check_table_count" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_table_count
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_check_table_name" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_table_name
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_check_schema_name" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_schema_name
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_check_if_table_is_view" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_if_table_is_view
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_create_view_from_collection" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_create_view_from_collection
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_select_from_table [Should not raise an OperationalError after MYR-151/152]" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_select_from_table
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_select_from_view [Should not raise an OperationalError asfter MYR-151/152]" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_select_from_view
  echo $output
  [ $status -eq 0 ]
}

The idea behind the bats framework is quite simple and handy: just pick one test and run it inside bats.

The final sample output is something like this, and it is very well structured.

#Running X Plugin tests#
 ✓ Running test_check_if_collection_exists
 ✓ Running test_check_collection_count
 ✓ Running test_alter_table_engine_raises
 ✓ Running test_alter_table_drop_column
 ✓ Running test_alter_table_engine
 ✓ Running test_check_if_table_exists
 ✓ Running test_check_table_count
 ✓ Running test_check_table_name
 ✓ Running test_check_schema_name
 ✓ Running test_check_if_table_is_view
 ✓ Running test_create_view_from_collection
 ✓ Running test_select_from_table
 ✓ Running test_select_from_view

13 tests, 0 failures

For full code please see:

https://github.com/Percona-QA/percona-qa/tree/master/myrocks-tests

Using MySQL Connector/Python X (mysqlx module) MyRocks episode

This post is about how I tried to make a simple Python script using the mysqlx module work with MyRocks.
It is also related to pytest, as I have implemented simple pytest tests and call them from a bash file.

So let’s discuss the problem description:
The base problem is that, by default, when you create a collection using the Python X Plugin, the collection will have one json-type column called `doc` and one column called `_id` generated from that `doc` column.
So basically you cannot alter the table engine to MyRocks, because it will give an error something like:

ERROR 3106 (HY000): 'Specified storage engine' is not supported for generated columns.

Well, it can be solved by dropping the generated `_id` column. But here we encounter another issue: if you have a table with json data, please do NOT alter it to MyRocks, otherwise you will get some weird results, as described here:

https://jira.percona.com/browse/MYR-152

So basically, when I select from this collection, it raises an OperationalError:

mysqlx.errors.OperationalError("The JSON binary value contains invalid data")

Now the sample code portions:

The main worker class code:


import mysqlx

class MyXPlugin:

    def __init__(self, schema_name, collection_name):
        # Connect to a dedicated MySQL server
        self.session = mysqlx.get_session({
            'host': 'localhost',
            'port': 33060,
            'user': 'bakux',
            'password': 'Baku12345',
            'ssl-mode': mysqlx.SSLMode.DISABLED
        })

        self.schema_name = schema_name
        self.collection_name = collection_name

        # Getting schema object
        self.schema = self.session.get_schema(self.schema_name)
        # Creating collection
        self.schema.create_collection(self.collection_name)
        # Getting collection object
        self.collection_obj = self.schema.get_collection(self.collection_name)


    def insert_into_collection(self):
        # You can also add multiple documents at once
        print "Inserting 3 rows into collection"
        self.collection_obj.add({'_id': '2', 'name': 'Sakila', 'age': 15},
                    {'_id': '3', 'name': 'Jack', 'age': 15},
                    {'_id': '4', 'name': 'Clare', 'age': 37}).execute()

    def remove_from_collection(self):
        # Removing non-existing _id
        self.collection_obj.remove('_id = 1').execute()


    def alter_table_engine(self):
        # Altering table engine to rocksdb; Should raise an error
        try:
            command = "alter table {}.{} engine=rocksdb".format(self.schema_name, self.collection_name)
            sql = self.session.sql(command)
            sql.execute()
        except Exception as e:
            raise mysqlx.errors.OperationalError("Could not alter engine of table here!")
        else:
            return 0

    def alter_table_drop_column(self):
        # Dropping generated column
        print "Altering default collection to drop generated column"
        try:
            command = "alter table {}.{} drop column `_id`".format(self.schema_name, self.collection_name)
            sql = self.session.sql(command)
            sql.execute()
        except Exception as e:
            raise
        else:
            return 0

    def return_table_obj(self):
        # Returning Table object
        table = mysqlx.Table(self.schema, self.collection_name)
        return table

    def create_view_from_collection(self, view_name):
        # Creating view from collection
        print "Trying to create view based on MyRocks collection"
        try:
            command = "create view {}.{} as select * from {}.{}".format(self.schema_name, view_name, self.schema_name, self.collection_name)
            sql = self.session.sql(command)
            sql.execute()
        except Exception as e:
            raise
        else:
            return 0

    def select_from_view(self, view_name):
        # Running select; Should raise an error
        print "Trying to select from view [Should raise an OperationalError]"
        try:
            command = "select * from {}.{}".format(self.schema_name, view_name)
            sql = self.session.sql(command)
            sql.execute()
        except Exception as e:
            raise mysqlx.errors.OperationalError("The JSON binary value contains invalid data")
        else:
            return 0

    def select_from_table(self):
        # Running select; Should raise an error
        print "Trying to select from view [Should raise an OperationalError]"
        try:
            command = "select * from {}.{}".format(self.schema_name, self.collection_name)
            sql = self.session.sql(command)
            sql.execute()
        except Exception as e:
            raise mysqlx.errors.OperationalError("The JSON binary value contains invalid data")
        else:
            return 0

The PyTest module:

import pytest
from mysqlx.errors import OperationalError

@pytest.mark.usefixtures("return_plugin_obj")
class TestXPlugin:
    """
    Tests for XPlugin + MyRocks
    """

    def test_check_if_collection_exists(self, return_plugin_obj):
        assert return_plugin_obj.collection_obj.exists_in_database() == True

    def test_check_collection_count(self, return_plugin_obj):
        # Inserting data
        return_plugin_obj.insert_into_collection()
        # Removing data
        return_plugin_obj.remove_from_collection()
        # Checking count
        assert return_plugin_obj.collection_obj.count() == 3

    def test_alter_table_engine_raises(self, return_plugin_obj):
        # Should raise error here
        print "Altering default collection engine from InnoDB to MyRocks [Should raise an OperationalError]"
        with pytest.raises(OperationalError) as er:
            return_plugin_obj.alter_table_engine()
        print er

    def test_alter_table_drop_column(self, return_plugin_obj):
        return_value = return_plugin_obj.alter_table_drop_column()
        assert return_value == 0

    def test_alter_table_engine(self, return_plugin_obj):
        print "Altering default collection engine from InnoDB to MyRocks [Should NOT raise an OperationalError]"
        return_value = return_plugin_obj.alter_table_engine()
        assert return_value == 0

    def helper_function(self, return_plugin_obj):
        table_obj = return_plugin_obj.return_table_obj()
        return table_obj

    def test_check_if_table_exists(self, return_plugin_obj):
        assert self.helper_function(return_plugin_obj).exists_in_database() == True

    def test_check_table_count(self, return_plugin_obj):
        assert self.helper_function(return_plugin_obj).count() == 3

    def test_check_table_name(self, return_plugin_obj):
        assert self.helper_function(return_plugin_obj).get_name() == "my_collection"

    def test_check_schema_name(self, return_plugin_obj):
        assert self.helper_function(return_plugin_obj).get_schema().get_name() == "generated_columns_test"

    def test_check_if_table_is_view(self, return_plugin_obj):
        assert self.helper_function(return_plugin_obj).is_view() == False

    def test_create_view_from_collection(self, return_plugin_obj):
        return_value = return_plugin_obj.create_view_from_collection("my_collection_view")
        assert return_value == 0

    def test_select_from_view(self, return_plugin_obj):
        with pytest.raises(OperationalError) as er:
             return_plugin_obj.select_from_view("my_collection_view")
        print er

    def test_select_from_table(self, return_plugin_obj):
        with pytest.raises(OperationalError) as er:
            return_plugin_obj.select_from_table()
        print er

The PyTest fixture code:

from myrocks_mysqlx_plugin.myrocks_mysqlx_plugin import MyXPlugin
import pytest
# schema_name = "generated_columns_test"
# collection_name = "my_collection"
plugin_obj = MyXPlugin("generated_columns_test", "my_collection")

@pytest.fixture()
def return_plugin_obj():
    return plugin_obj
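
Note that the MyXPlugin instance is created once at module import time, so every test shares the same session and collection; the tests are therefore order-dependent (for example, the table count checks rely on the documents inserted by the earlier collection test).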

The final bash runner test output:

#Running X Plugin tests#
========================================================================== test session starts ==========================================================================
platform linux2 -- Python 2.7.5, pytest-3.2.1, py-1.4.34, pluggy-0.4.0 -- /usr/bin/python
cachedir: ../../.cache
rootdir: /home/shahriyar.rzaev, inifile:
collected 13 items                                                                                                                                                       

../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_if_collection_exists PASSED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_collection_count PASSED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_alter_table_engine_raises PASSED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_alter_table_drop_column PASSED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_alter_table_engine PASSED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_if_table_exists PASSED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_table_count PASSED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_table_name PASSED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_schema_name PASSED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_if_table_is_view PASSED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_create_view_from_collection PASSED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_select_from_view FAILED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_select_from_table PASSED

Also, I reported an interesting thing here:
https://bugs.mysql.com/bug.php?id=87531

Using MySQL Connector/Python X (mysqlx module)

This post is about simple usage of the mysqlx module, i.e. the X Plugin, with the latest Connector/Python DMR.
The version of the Python Connector is 8.0.4.

Installing:

wget https://dev.mysql.com/get/Downloads/Connector-Python/mysql-connector-python-8.0.4-0.1.dmr.el7.x86_64.rpm
sudo yum install mysql-connector-python-8.0.4-0.1.dmr.el7.x86_64.rpm

Sample Python code:

# Connecting to MySQL and working with a Session
import mysqlx

# Connect to a dedicated MySQL server
session = mysqlx.get_session({
    'host': 'localhost',
    'port': 33060,
    'user': 'bakux',
    'password': 'Baku12345',
    'ssl-mode': mysqlx.SSLMode.DISABLED
})

schema = session.get_schema('generated_columns_test')

# Create 'my_collection' in schema
schema.create_collection('my_collection')

# Get 'my_collection' from schema
collection = schema.get_collection('my_collection')

assert(True == collection.exists_in_database())

# You can also add multiple documents at once
collection.add({'_id': '2', 'name': 'Sakila', 'age': 15},
            {'_id': '3', 'name': 'Jack', 'age': 15},
            {'_id': '4', 'name': 'Clare', 'age': 37}).execute()

collection.remove('_id = 1').execute()

assert(3 == collection.count())

The code is quite clean: basically we have created a collection and inserted some data.
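
For reading documents back, the collection object also offers a find() method; a short sketch using the documents inserted above (parameter binding via bind() keeps the condition safe):

# Fetch documents where age == 15 and print their names
result = collection.find('age = :age').bind('age', 15).execute()
for doc in result.fetch_all():
    print(doc["name"])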

From the MySQL client side, the structure of this "table" == collection is:

CREATE TABLE `my_collection` (
  `doc` json DEFAULT NULL,
  `_id` varchar(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$._id'))) STORED NOT NULL,
  PRIMARY KEY (`_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4

The collection has “stored generated column” _id which is extracted from doc column.

select * from my_collection;
+-------------------------------------------+-----+
| doc                                       | _id |
+-------------------------------------------+-----+
| {"_id": "2", "age": 15, "name": "Sakila"} | 2   |
| {"_id": "3", "age": 15, "name": "Jack"}   | 3   |
| {"_id": "4", "age": 37, "name": "Clare"}  | 4   |
+-------------------------------------------+-----+
3 rows in set (0.00 sec)

This default behavior restricted my tests on other storage engines, because it is impossible to alter the engine of this default "collection":

ERROR 3106 (HY000): 'Specified storage engine' is not supported for generated columns.

BTW, while testing I found some bugs and reported them:

https://bugs.mysql.com/bug.php?id=87472
https://bugs.mysql.com/bug.php?id=87475
https://bugs.mysql.com/bug.php?id=87492
https://bugs.mysql.com/bug.php?id=87493

Testing MySQL 8.0 – let me create a ton of undo files

This is a late blog post about 2 recent bug reports 🙂
#85969
#85971

The basic idea came after reading -> http://mysqlserverteam.com/the-mysql-8-0-1-milestone-release-is-available/

So the result of the test ->
After each restart of MySQL, new undo log files are created while the old files are kept.

shahriyar.rzaev@qaserver-06:~/sandboxes/msb_8_0_1/data$ du -hs
6.4G	.
# The count of undo files
shahriyar.rzaev@qaserver-06:~/sandboxes/msb_8_0_1/data$ ls | grep undo | wc -l
539

After a new restart:

# New count
shahriyar.rzaev@qaserver-06:~/sandboxes/msb_8_0_1/data$ ls | grep undo | wc -l
616

So how to reproduce?

1. Started the server with the following settings:

early-plugin-load=keyring_file.so
keyring_file_data=/home/shahriyar.rzaev/sandboxes/msb_8_0_1/tmp/keyring

innodb_redo_log_encrypt=ON
innodb_undo_tablespaces=77
innodb_undo_logs=77
innodb_undo_log_encrypt=ON

2. Then killed it with -9 and got the error:

2017-05-08T08:54:07.174124Z 1 [ERROR] InnoDB: Expected to open 77 undo tablespaces but was able to find only 0 undo tablespaces. Set the innodb_undo_tablespaces parameter to the correct value and retry. Suggested value is 0
2017-05-08T08:54:07.174147Z 1 [ERROR] InnoDB: Plugin initialization aborted with error Generic error
2017-05-08T08:54:07.674794Z 1 [ERROR] Failed to initialize DD Storage Engine
2017-05-08T08:54:07.674922Z 0 [ERROR] Data Dictionary initialization failed.
2017-05-08T08:54:07.674932Z 0 [ERROR] Aborting

3. To overcome this issue I changed the config and restarted MySQL:

innodb_redo_log_encrypt=ON
innodb_undo_tablespaces=0
innodb_undo_logs=77
innodb_undo_log_encrypt=ON

And it started.

4. Then I changed the config back again and restarted MySQL:

innodb_redo_log_encrypt=ON
innodb_undo_tablespaces=77
innodb_undo_logs=77
innodb_undo_log_encrypt=ON

From now on, after each restart it will create new undo files.
See below the full test result:
https://bugs.mysql.com/file.php?id=25383&bug_id=85971

Experimental MySQL UDF code partially written in Python

Hi dear all,
I would like to share our experimental tests about writing a UDF in Python.
The basic idea was to write a UDF for factorial calculation. The interesting part was moving the factorial calculation logic into Python code and then calling this Python file inside the UDF code.

Here is the repo link -> UDFs-written-in-Python

The factorial calculation is here -> factorial.py
UDF code is here -> factorial.cc

The basic idea of the UDF code is quite simple, as it has 3 parts:
1. initialization -> factorial_init() -> all necessary checks reside here
2. main logic -> factorial() -> calling the Python script, reading the data back and showing the result
3. deinitialization -> factorial_deinit()
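
The actual factorial.py lives in the repo linked above; purely as an illustration, a minimal version of such a script might look like this, assuming the UDF invokes it with the number as a command-line argument and reads the printed result:

#!/usr/bin/env python
import sys

def factorial(n):
    # Iterative factorial: 5 -> 120
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

if __name__ == '__main__':
    print(factorial(int(sys.argv[1])))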

Compiled as:

gcc $(/home/sh/REPOS/MYSQL_DEV_ACTIONS/mysql-5.7/mysql-server/5.7.16/bin/mysql_config --cflags) -fPIC -shared -o factorial.so factorial.cc -std=c++11

Copied to plugin_dir:

cp factorial.so /home/sh/REPOS/MYSQL_DEV_ACTIONS/mysql-5.7/mysql-server/5.7.16/lib/plugin/
cp factorial.py /home/sh/REPOS/MYSQL_DEV_ACTIONS/mysql-5.7/mysql-server/5.7.16/lib/plugin/

Activated as:
mysql > create function factorial returns integer soname 'factorial.so';

Result:

root@test1:~# python --version
Python 2.7.12
mysql> select factorial(5);
+--------------+
| factorial(5) |
+--------------+
|          120 |
+--------------+
1 row in set (0.02 sec)