XtraBackup vs. OOM killer (usage of mprof)

This short article is about nicely visualizing memory usage in the period leading up to the moment the OOM killer strikes.

I have started MySQL with:

/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/bin/mysqld --no-defaults \
--innodb_buffer_pool_size=1G --innodb_log_file_size=1G --innodb_page_size=64K --early-plugin-load=keyring_file.so \
--keyring_file_data=/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/mysql-keyring/keyring \
--log-bin=mysql-bin --log-slave-updates --server-id=1 --gtid-mode=ON --enforce-gtid-consistency --binlog-format=row --core-file --basedir=/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64 \
--tmpdir=/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/data \
--datadir=/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/data \
--plugin-load-add=tokudb=ha_tokudb.so --tokudb-check-jemalloc=0 --plugin-load-add=rocksdb=ha_rocksdb.so \
--socket=/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/socket.sock --port=17057 \
--log-error=/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/log/master.err

So basically it is Percona Server 5.7.19 with a 64K page size.

Then I tried to take full + incremental backups.

The full backup is OK:

/home/shahriyar.rzaev/XB_TEST/server_dir/target/percona-xtrabackup-2.4.x-debug/bin/xtrabackup --defaults-file= \
--user=root --password=''  --target-dir=/home/shahriyar.rzaev/XB_TEST/backup_dir/ps_5_7_x_2_4/full/2017-11-14_11-31-20 \
--backup --socket=/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/socket.sock \
--compress=quicklz --compress-chunk-size=65536 --compress-threads=4 --encrypt=AES256 --encrypt-key=VVTBwgM4UhwkTTV98fhuj+D1zyWoA89K \
--encrypt-threads=4 --encrypt-chunk-size=65536 --no-version-check --core-file --parallel=10 --throttle=40 \
--keyring-file-data=/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/mysql-keyring/keyring --stream="xbstream" > \
/home/shahriyar.rzaev/XB_TEST/backup_dir/ps_5_7_x_2_4/full/2017-11-14_11-31-20/full_backup.stream

The incremental backup failed:

/home/shahriyar.rzaev/XB_TEST/server_dir/target/percona-xtrabackup-2.4.x-debug/bin/xtrabackup --defaults-file= \
--user=root --password='' --target-dir=/home/shahriyar.rzaev/XB_TEST/backup_dir/ps_5_7_x_2_4/inc/2017-11-14_11-33-51 \
--incremental-basedir=/home/shahriyar.rzaev/XB_TEST/backup_dir/ps_5_7_x_2_4/full/2017-11-14_11-31-20 --backup \
--socket=/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/socket.sock \
--compress=quicklz --compress-chunk-size=65536 --compress-threads=4 --encrypt=AES256 \
--encrypt-key=VVTBwgM4UhwkTTV98fhuj+D1zyWoA89K --encrypt-threads=4 --encrypt-chunk-size=65536 \
--no-version-check --core-file --parallel=10 --throttle=40 \
--keyring-file-data=/home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/mysql-keyring/keyring --stream="xbstream" > \
 /home/shahriyar.rzaev/XB_TEST/backup_dir/ps_5_7_x_2_4/inc/2017-11-14_11-33-51/inc_backup.stream

171114 11:35:05 Connecting to MySQL server host: localhost, user: root, password: set, port: not set, socket: /home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/socket.sock
Using server version 5.7.19-17-debug-log
/home/shahriyar.rzaev/XB_TEST/server_dir/target/percona-xtrabackup-2.4.x-debug/bin/xtrabackup version 2.4.7 based on MySQL server 5.7.13 Linux (x86_64) (revision id: )
incremental backup from 70156215 is enabled.
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /home/shahriyar.rzaev/XB_TEST/server_dir/PS131117-percona-server-5.7.19-17-linux-x86_64/data/
xtrabackup: open files limit requested 0, set to 1024
xtrabackup: using the following InnoDB configuration:
xtrabackup:   innodb_data_home_dir = .
xtrabackup:   innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup:   innodb_log_group_home_dir = ./
xtrabackup:   innodb_log_files_in_group = 2
xtrabackup:   innodb_log_file_size = 1073741824
InnoDB: Number of pools: 1
171114 11:35:06 >> log scanned up to (70156224)
xtrabackup: Generating a list of tablespaces
InnoDB: Allocated tablespace ID 2 for mysql/plugin, old maximum was 0
xtrabackup: Starting 10 threads for parallel data files transfer
171114 11:35:07 >> log scanned up to (70156224)
Killed

I forgot about the already reported bug here -> #1582456

So according to this report and this code line:

write_filt.cc#L78

The buffer calculation for incremental backup is:

/* allocate buffer for incremental backup (4096 pages) */
buf_size = (cursor->page_size / 4 + 1) * cursor->page_size +
        UNIV_PAGE_SIZE_MAX;

So with a 64K page size it is going to allocate:

(64*1024/4+1)*64*1024+64*1024 == 1073872896 bytes ≈ 1024 MB, i.e. roughly 1G per thread.

And if you are going to use --parallel=10, as I did, it is going to allocate something like 10*1G = 10G. Given that, it was quite a fair action by the OOM killer to terminate my process.
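Just to see how this formula scales, here is a tiny Python sketch (not XtraBackup code, simply redoing the arithmetic above) that computes the per-thread buffer for a few page sizes and multiplies it by --parallel:

# Redoing the incremental backup buffer arithmetic from write_filt.cc
# for several page sizes; UNIV_PAGE_SIZE_MAX is 64K in MySQL 5.7.
UNIV_PAGE_SIZE_MAX = 64 * 1024

def incremental_buf_size(page_size, parallel=1):
    per_thread = (page_size // 4 + 1) * page_size + UNIV_PAGE_SIZE_MAX
    return per_thread * parallel

for page_size in (16 * 1024, 32 * 1024, 64 * 1024):
    total = incremental_buf_size(page_size, parallel=10)
    print("page_size=%6d  parallel=10  ~%.2f GB" % (page_size, total / 1024.0 ** 3))

With the default 16K page size this stays around 0.6G in total, which is why the problem really shows itself with larger page sizes.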

After that, I was searching for a nice, simple way to visualize the memory usage of an executable and found this Python module:

memory_profiler

To install it:

sudo pip3.5 install memory_profiler
sudo pip3.5 install psutil
sudo pip3.5 install matplotlib

You need tkinter as well:

sudo yum -y install python35u-tkinter

You need xauth as well, for plotting:

sudo yum -y install xauth

After that, I connected to my Vagrant VM with the -X option:

ssh -X vagrant@127.0.0.1 -i /home/shako/REPOS/MySQL-AutoXtraBackup/.vagrant/machines/default/virtualbox/private_key -p 2222

And ran the following command:

mprof run append_here_the_command_to_run

It will run the given command and create a data file such as mprofile_20171114122250.dat.

Now run the following:

[vagrant@dhcppc3 ~]$ mprof plot mprofile_20171114122250.dat

And it will create a plot of the memory usage over time from the recorded .dat file.
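By the way, memory_profiler also has a small Python API, in case you want to sample memory usage from inside a script rather than through the mprof CLI. A minimal sketch (the allocate() function here is just a stand-in workload, nothing to do with XtraBackup):

# Sampling memory usage of a Python callable with memory_profiler's API
from memory_profiler import memory_usage

def allocate(n_mb):
    # stand-in workload: grab roughly n_mb megabytes
    data = bytearray(n_mb * 1024 * 1024)
    return len(data)

if __name__ == "__main__":
    # sample RSS every 0.1 s while allocate(100) runs; values are in MiB
    samples = memory_usage((allocate, (100,), {}), interval=0.1)
    print("peak memory: %.1f MiB" % max(samples))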

BTW I have reported a bug to mprof as well: #171
Thanks for reading 🙂

How to combine BATS, PyTest and MySQL X Plugin for tests

Hi,
In this post I am going to show some tricks for combining the BATS framework, PyTest and the MySQL X Plugin (Connector/Python mysqlx module) into a simple, unified test solution.

Let’s describe the MySQL X Plugin side first; here is the full class:

# Connecting to MySQL and working with a Session
import mysqlx

class MyXPlugin:

    def __init__(self, schema_name, collection_name):
        # Connect to a dedicated MySQL server
        self.session = mysqlx.get_session({
            'host': 'localhost',
            'port': 33060,
            'user': 'bakux',
            'password': 'Baku12345',
            'ssl-mode': mysqlx.SSLMode.DISABLED
        })

        self.schema_name = schema_name
        self.collection_name = collection_name

        # Getting schema object
        self.schema = self.session.get_schema(self.schema_name)
        # Creating collection
        self.schema.create_collection(self.collection_name, reuse=True)
        # Getting collection object
        self.collection_obj = self.schema.get_collection(self.collection_name)


    def insert_into_collection(self):
        # You can also add multiple documents at once
        print "Inserting 3 rows into collection"
        self.collection_obj.add({'_id': '2', 'name': 'Sakila', 'age': 15},
                    {'_id': '3', 'name': 'Jack', 'age': 15},
                    {'_id': '4', 'name': 'Clare', 'age': 37}).execute()

    def remove_from_collection(self):
        # Removing non-existing _id
        self.collection_obj.remove('_id = 1').execute()


    def alter_table_engine(self):
        # Altering table engine to rocksdb; Should raise an error
        try:
            command = "alter table {}.{} engine=rocksdb".format(self.schema_name, self.collection_name)
            sql = self.session.sql(command)
            sql.execute()
        except Exception as e:
            raise mysqlx.errors.OperationalError("Could not alter engine of table here!")
        else:
            return 0

    def alter_table_drop_column(self):
        # Dropping generated column
        print "Altering default collection to drop generated column"
        try:
            command = "alter table {}.{} drop column `_id`".format(self.schema_name, self.collection_name)
            sql = self.session.sql(command)
            sql.execute()
        except Exception as e:
            raise
        else:
            return 0

    def return_table_obj(self):
        # Returning Table object
        table = mysqlx.Table(self.schema, self.collection_name)
        return table

    def create_view_from_collection(self, view_name):
        # Creating view from collection
        print "Trying to create view based on MyRocks collection"
        try:
            command = "create view {}.{} as select * from {}.{}".format(self.schema_name, view_name, self.schema_name, self.collection_name)
            sql = self.session.sql(command)
            sql.execute()
        except Exception as e:
            raise
        else:
            return 0

    def select_from_view(self, view_name):
        # Running select; Should raise an error
        print "Trying to select from view [Should raise an OperationalError]"
        try:
            command = "select * from {}.{}".format(self.schema_name, view_name)
            sql = self.session.sql(command)
            cursor = sql.execute()
            cursor.fetch_all()
        except Exception as e:
            raise mysqlx.errors.OperationalError("The JSON binary value contains invalid data")
        else:
            return 0

    def select_from_table(self):
        # Running select; Should raise an error
        print "Trying to select from view [Should raise an OperationalError]"
        try:
            command = "select * from {}.{}".format(self.schema_name, self.collection_name)
            sql = self.session.sql(command)
            cursor = sql.execute()
            cursor.fetch_all()
        except Exception as e:
            raise mysqlx.errors.OperationalError("The JSON binary value contains invalid data")
        else:
            return 0

And for testing purposes I have added some PyTest tests:


import pytest
from mysqlx.errors import OperationalError

@pytest.mark.usefixtures("return_plugin_obj")
class TestXPlugin:
    """
    Tests for XPlugin + MyRocks
    """

    def test_check_if_collection_exists(self, return_plugin_obj):
        assert return_plugin_obj.collection_obj.exists_in_database() == True

    def test_check_collection_count(self, return_plugin_obj):
        # Inserting data
        return_plugin_obj.insert_into_collection()
        # Removing data
        return_plugin_obj.remove_from_collection()
        # Checking count
        assert return_plugin_obj.collection_obj.count() == 3

    def test_alter_table_engine_raises(self, return_plugin_obj):
        # Should raise error here
        print "Altering default collection engine from InnoDB to MyRocks [Should raise an OperationalError]"
        with pytest.raises(OperationalError) as er:
            return_plugin_obj.alter_table_engine()
        print er

    def test_alter_table_drop_column(self, return_plugin_obj):
        return_value = return_plugin_obj.alter_table_drop_column()
        assert return_value == 0

    def test_alter_table_engine(self, return_plugin_obj):
        print "Altering default collection engine from InnoDB to MyRocks [Should NOT raise an OperationalError]"
        return_value = return_plugin_obj.alter_table_engine()
        assert return_value == 0

    def helper_function(self, return_plugin_obj):
        table_obj = return_plugin_obj.return_table_obj()
        return table_obj

    def test_check_if_table_exists(self, return_plugin_obj):
        assert self.helper_function(return_plugin_obj).exists_in_database() == True

    def test_check_table_count(self, return_plugin_obj):
        assert self.helper_function(return_plugin_obj).count() == 3

    def test_check_table_name(self, return_plugin_obj):
        assert self.helper_function(return_plugin_obj).get_name() == "my_collection"

    def test_check_schema_name(self, return_plugin_obj):
        assert self.helper_function(return_plugin_obj).get_schema().get_name() == "generated_columns_test"

    def test_check_if_table_is_view(self, return_plugin_obj):
        assert self.helper_function(return_plugin_obj).is_view() == False

    def test_create_view_from_collection(self, return_plugin_obj):
        return_value = return_plugin_obj.create_view_from_collection("my_collection_view")

    def test_select_from_view(self, return_plugin_obj):
        # with pytest.raises(OperationalError) as er:
        #      return_plugin_obj.select_from_view("my_collection_view")
        # print er
        assert return_plugin_obj.select_from_view("my_collection_view") == 0

    def test_select_from_table(self, return_plugin_obj):
        # with pytest.raises(OperationalError) as er:
        #     return_plugin_obj.select_from_table()
        # print er
        assert return_plugin_obj.select_from_table() == 0
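These tests get the return_plugin_obj fixture from a conftest.py; it is essentially the same fixture code that is also shown in the MyRocks post below, and looks like this:

# conftest.py -- provides the return_plugin_obj fixture used by TestXPlugin
import pytest
from myrocks_mysqlx_plugin.myrocks_mysqlx_plugin import MyXPlugin

# schema and collection names match the assertions in the tests above
plugin_obj = MyXPlugin("generated_columns_test", "my_collection")

@pytest.fixture()
def return_plugin_obj():
    return plugin_obj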

Previously I have described the full pytest run, but now I am going to call each test method from a BATS file:

#!/usr/bin/env bats

# Created by Shahriyar Rzayev from Percona

DIRNAME=$BATS_TEST_DIRNAME

@test "Running test_check_if_collection_exists" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_if_collection_exists
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_check_collection_count" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_collection_count
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_alter_table_engine_raises" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_alter_table_engine_raises
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_alter_table_drop_column" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_alter_table_drop_column
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_alter_table_engine" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_alter_table_engine
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_check_if_table_exists" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_if_table_exists
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_check_table_count" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_table_count
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_check_table_name" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_table_name
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_check_schema_name" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_schema_name
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_check_if_table_is_view" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_if_table_is_view
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_create_view_from_collection" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_create_view_from_collection
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_select_from_table [Should not raise an OperationalError after MYR-151/152]" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_select_from_table
  echo $output
  [ $status -eq 0 ]
}

@test "Running test_select_from_view [Should not raise an OperationalError asfter MYR-151/152]" {
  run python -m pytest -vv ${DIRNAME}/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_select_from_view
  echo $output
  [ $status -eq 0 ]
}

The idea behind the BATS framework is quite simple and handy: just pick one test and run it inside BATS.

The final sample output is something like this, which is very well structured:

#Running X Plugin tests#
 ✓ Running test_check_if_collection_exists
 ✓ Running test_check_collection_count
 ✓ Running test_alter_table_engine_raises
 ✓ Running test_alter_table_drop_column
 ✓ Running test_alter_table_engine
 ✓ Running test_check_if_table_exists
 ✓ Running test_check_table_count
 ✓ Running test_check_table_name
 ✓ Running test_check_schema_name
 ✓ Running test_check_if_table_is_view
 ✓ Running test_create_view_from_collection
 ✓ Running test_select_from_table
 ✓ Running test_select_from_view

13 tests, 0 failures

For full code please see:

https://github.com/Percona-QA/percona-qa/tree/master/myrocks-tests

Using MySQL Connector/Python X (mysqlx module): MyRocks episode

This post is about how I tried to make a simple Python script using the mysqlx module work with MyRocks.
It is also related to pytest, as I have implemented simple pytest tests that are called from a bash file.

So let’s describe the problem:
The base problem is that, by default, when you create a collection using the Python X Plugin, the collection will have one JSON-type column called `doc` and one column called `_id` generated from this `doc` column.
So basically, you cannot alter the table engine to MyRocks, because it will give an error something like:

ERROR 3106 (HY000): 'Specified storage engine' is not supported for generated columns.

Well, it can be solved by dropping the generated `_id` column. But here we encounter another issue: if you have a table with JSON data, please do NOT alter it to MyRocks, otherwise you will get some weird results, as described here:

https://jira.percona.com/browse/MYR-152

So basically, when I select from this collection, it raises an OperationalError:

mysqlx.errors.OperationalError("The JSON binary value contains invalid data")

Now for the sample code portions:

The main worker class code:


import mysqlx

class MyXPlugin:

    def __init__(self, schema_name, collection_name):
        # Connect to a dedicated MySQL server
        self.session = mysqlx.get_session({
            'host': 'localhost',
            'port': 33060,
            'user': 'bakux',
            'password': 'Baku12345',
            'ssl-mode': mysqlx.SSLMode.DISABLED
        })

        self.schema_name = schema_name
        self.collection_name = collection_name

        # Getting schema object
        self.schema = self.session.get_schema(self.schema_name)
        # Creating collection
        self.schema.create_collection(self.collection_name)
        # Getting collection object
        self.collection_obj = self.schema.get_collection(self.collection_name)


    def insert_into_collection(self):
        # You can also add multiple documents at once
        print "Inserting 3 rows into collection"
        self.collection_obj.add({'_id': '2', 'name': 'Sakila', 'age': 15},
                    {'_id': '3', 'name': 'Jack', 'age': 15},
                    {'_id': '4', 'name': 'Clare', 'age': 37}).execute()

    def remove_from_collection(self):
        # Removing non-existing _id
        self.collection_obj.remove('_id = 1').execute()


    def alter_table_engine(self):
        # Altering table engine to rocksdb; Should raise an error
        try:
            command = "alter table {}.{} engine=rocksdb".format(self.schema_name, self.collection_name)
            sql = self.session.sql(command)
            sql.execute()
        except Exception as e:
            raise mysqlx.errors.OperationalError("Could not alter engine of table here!")
        else:
            return 0

    def alter_table_drop_column(self):
        # Dropping generated column
        print "Altering default collection to drop generated column"
        try:
            command = "alter table {}.{} drop column `_id`".format(self.schema_name, self.collection_name)
            sql = self.session.sql(command)
            sql.execute()
        except Exception as e:
            raise
        else:
            return 0

    def return_table_obj(self):
        # Returning Table object
        table = mysqlx.Table(self.schema, self.collection_name)
        return table

    def create_view_from_collection(self, view_name):
        # Creating view from collection
        print "Trying to create view based on MyRocks collection"
        try:
            command = "create view {}.{} as select * from {}.{}".format(self.schema_name, view_name, self.schema_name, self.collection_name)
            sql = self.session.sql(command)
            sql.execute()
        except Exception as e:
            raise
        else:
            return 0

    def select_from_view(self, view_name):
        # Running select; Should raise an error
        print "Trying to select from view [Should raise an OperationalError]"
        try:
            command = "select * from {}.{}".format(self.schema_name, view_name)
            sql = self.session.sql(command)
            sql.execute()
        except Exception as e:
            raise mysqlx.errors.OperationalError("The JSON binary value contains invalid data")
        else:
            return 0

    def select_from_table(self):
        # Running select; Should raise an error
        print "Trying to select from view [Should raise an OperationalError]"
        try:
            command = "select * from {}.{}".format(self.schema_name, self.collection_name)
            sql = self.session.sql(command)
            sql.execute()
        except Exception as e:
            raise mysqlx.errors.OperationalError("The JSON binary value contains invalid data")
        else:
            return 0

The PyTest module:

import pytest
from mysqlx.errors import OperationalError

@pytest.mark.usefixtures("return_plugin_obj")
class TestXPlugin:
    """
    Tests for XPlugin + MyRocks
    """

    def test_check_if_collection_exists(self, return_plugin_obj):
        assert return_plugin_obj.collection_obj.exists_in_database() == True

    def test_check_collection_count(self, return_plugin_obj):
        # Inserting data
        return_plugin_obj.insert_into_collection()
        # Removing data
        return_plugin_obj.remove_from_collection()
        # Checking count
        assert return_plugin_obj.collection_obj.count() == 3

    def test_alter_table_engine_raises(self, return_plugin_obj):
        # Should raise error here
        print "Altering default collection engine from InnoDB to MyRocks [Should raise an OperationalError]"
        with pytest.raises(OperationalError) as er:
            return_plugin_obj.alter_table_engine()
        print er

    def test_alter_table_drop_column(self, return_plugin_obj):
        return_value = return_plugin_obj.alter_table_drop_column()
        assert return_value == 0

    def test_alter_table_engine(self, return_plugin_obj):
        print "Altering default collection engine from InnoDB to MyRocks [Should NOT raise an OperationalError]"
        return_value = return_plugin_obj.alter_table_engine()
        assert return_value == 0

    def helper_function(self, return_plugin_obj):
        table_obj = return_plugin_obj.return_table_obj()
        return table_obj

    def test_check_if_table_exists(self, return_plugin_obj):
        assert self.helper_function(return_plugin_obj).exists_in_database() == True

    def test_check_table_count(self, return_plugin_obj):
        assert self.helper_function(return_plugin_obj).count() == 3

    def test_check_table_name(self, return_plugin_obj):
        assert self.helper_function(return_plugin_obj).get_name() == "my_collection"

    def test_check_schema_name(self, return_plugin_obj):
        assert self.helper_function(return_plugin_obj).get_schema().get_name() == "generated_columns_test"

    def test_check_if_table_is_view(self, return_plugin_obj):
        assert self.helper_function(return_plugin_obj).is_view() == False

    def test_create_view_from_collection(self, return_plugin_obj):
        return_value = return_plugin_obj.create_view_from_collection("my_collection_view")

    def test_select_from_view(self, return_plugin_obj):
        with pytest.raises(OperationalError) as er:
             return_plugin_obj.select_from_view("my_collection_view")
        print er

    def test_select_from_table(self, return_plugin_obj):
        with pytest.raises(OperationalError) as er:
            return_plugin_obj.select_from_table()
        print er

The PyTest fixture code:

from myrocks_mysqlx_plugin.myrocks_mysqlx_plugin import MyXPlugin
import pytest
# schema_name = "generated_columns_test"
# collection_name = "my_collection"
plugin_obj = MyXPlugin("generated_columns_test", "my_collection")

@pytest.fixture()
def return_plugin_obj():
    return plugin_obj

The final bash runner test output:

#Running X Plugin tests#
========================================================================== test session starts ==========================================================================
platform linux2 -- Python 2.7.5, pytest-3.2.1, py-1.4.34, pluggy-0.4.0 -- /usr/bin/python
cachedir: ../../.cache
rootdir: /home/shahriyar.rzaev, inifile:
collected 13 items                                                                                                                                                       

../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_if_collection_exists PASSED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_collection_count PASSED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_alter_table_engine_raises PASSED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_alter_table_drop_column PASSED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_alter_table_engine PASSED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_if_table_exists PASSED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_table_count PASSED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_table_name PASSED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_schema_name PASSED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_check_if_table_is_view PASSED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_create_view_from_collection PASSED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_select_from_view FAILED
../../percona-qa/myrocks-tests/myrocks_mysqlx_plugin_test/test_module01.py::TestXPlugin::test_select_from_table PASSED

I also reported an interesting thing here:
https://bugs.mysql.com/bug.php?id=87531

Using MySQL Connector/Python X (mysqlx module)

This post is about the simple usage of the mysqlx module, i.e. the X Plugin, with the latest Connector/Python DMR.
The version of the Python Connector is 8.0.4.

Installing:

wget https://dev.mysql.com/get/Downloads/Connector-Python/mysql-connector-python-8.0.4-0.1.dmr.el7.x86_64.rpm
sudo yum install mysql-connector-python-8.0.4-0.1.dmr.el7.x86_64.rpm

Sample Python code:

# Connecting to MySQL and working with a Session
import mysqlx

# Connect to a dedicated MySQL server
session = mysqlx.get_session({
    'host': 'localhost',
    'port': 33060,
    'user': 'bakux',
    'password': 'Baku12345',
    'ssl-mode': mysqlx.SSLMode.DISABLED
})

schema = session.get_schema('generated_columns_test')

# Create 'my_collection' in schema
schema.create_collection('my_collection')

# Get 'my_collection' from schema
collection = schema.get_collection('my_collection')

assert(True == collection.exists_in_database())

# You can also add multiple documents at once
collection.add({'_id': '2', 'name': 'Sakila', 'age': 15},
            {'_id': '3', 'name': 'Jack', 'age': 15},
            {'_id': '4', 'name': 'Clare', 'age': 37}).execute()

collection.remove('_id = 1').execute()

assert(3 == collection.count())

The code is quite clean: basically we have created a collection and inserted some data.
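And just to double-check that the documents are really there, you can read them back through the same collection object. A small sketch using find() (the field names come from the documents inserted above):

# Reading documents back from the collection created above
result = collection.find('age = :age').bind('age', 15).execute()
for doc in result.fetch_all():
    print(doc['name'])  # Sakila and Jack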

From the MySQL client side, the structure of this “table” (i.e. the collection) is:

CREATE TABLE `my_collection` (
  `doc` json DEFAULT NULL,
  `_id` varchar(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$._id'))) STORED NOT NULL,
  PRIMARY KEY (`_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4

The collection has a stored generated column `_id`, which is extracted from the `doc` column.

select * from my_collection;
+-------------------------------------------+-----+
| doc                                       | _id |
+-------------------------------------------+-----+
| {"_id": "2", "age": 15, "name": "Sakila"} | 2   |
| {"_id": "3", "age": 15, "name": "Jack"}   | 3   |
| {"_id": "4", "age": 37, "name": "Clare"}  | 4   |
+-------------------------------------------+-----+
3 rows in set (0.00 sec)

This default behavior restricted my tests on other storage engines, because it is impossible to alter the engine of this default “collection”:

ERROR 3106 (HY000): 'Specified storage engine' is not supported for generated columns.

BTW, while testing I found some bugs and reported them:

https://bugs.mysql.com/bug.php?id=87472
https://bugs.mysql.com/bug.php?id=87475
https://bugs.mysql.com/bug.php?id=87492
https://bugs.mysql.com/bug.php?id=87493

Testing MySQL 8.0 – let me create a ton of undo files

This is a late blog post about 2 recent bug reports 🙂
#85969
#85971

The basic idea came after reading -> http://mysqlserverteam.com/the-mysql-8-0-1-milestone-release-is-available/

So here is the result of the test ->
After each restart of MySQL, new undo log files are created while the old ones are kept.

shahriyar.rzaev@qaserver-06:~/sandboxes/msb_8_0_1/data$ du -hs
6.4G	
# The count of undo files
shahriyar.rzaev@qaserver-06:~/sandboxes/msb_8_0_1/data$ ls | grep undo | wc -l
539

After a new restart:

# New count
shahriyar.rzaev@qaserver-06:~/sandboxes/msb_8_0_1/data$ ls | grep undo | wc -l
616
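To make the growth easier to track between restarts, a small Python helper like the one below does the same job as the ls | grep | wc pipeline and also sums up the sizes (the datadir path is my sandbox path, adjust it to yours):

# Count the undo files in the datadir and report their total size
import os

datadir = "/home/shahriyar.rzaev/sandboxes/msb_8_0_1/data"  # adjust to your sandbox

undo_files = [f for f in os.listdir(datadir) if "undo" in f]
total_bytes = sum(os.path.getsize(os.path.join(datadir, f)) for f in undo_files)

print("undo files: %d, total size: %.2f GB" % (len(undo_files), total_bytes / 1024.0 ** 3))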

So how to reproduce?

1. I started the server with the following settings:

early-plugin-load=keyring_file.so
keyring_file_data=/home/shahriyar.rzaev/sandboxes/msb_8_0_1/tmp/keyring

innodb_redo_log_encrypt=ON
innodb_undo_tablespaces=77
innodb_undo_logs=77
innodb_undo_log_encrypt=ON

2. Then I killed it with -9 and on the next start got this error:

2017-05-08T08:54:07.174124Z 1 [ERROR] InnoDB: Expected to open 77 undo tablespaces but was able to find only 0 undo tablespaces. Set the innodb_undo_tablespaces parameter to the correct value and retry. Suggested value is 0
2017-05-08T08:54:07.174147Z 1 [ERROR] InnoDB: Plugin initialization aborted with error Generic error
2017-05-08T08:54:07.674794Z 1 [ERROR] Failed to initialize DD Storage Engine
2017-05-08T08:54:07.674922Z 0 [ERROR] Data Dictionary initialization failed.
2017-05-08T08:54:07.674932Z 0 [ERROR] Aborting

3. To overcome this issue, I changed the config and restarted MySQL:

innodb_redo_log_encrypt=ON
innodb_undo_tablespaces=0
innodb_undo_logs=77
innodb_undo_log_encrypt=ON

And it started.

4. Then I changed the config back again and restarted MySQL:

innodb_redo_log_encrypt=ON
innodb_undo_tablespaces=77
innodb_undo_logs=77
innodb_undo_log_encrypt=ON

From now on, after each restart it will create new undo files.
See the full test result here:
https://bugs.mysql.com/file.php?id=25383&bug_id=85971

Experimental MySQL UDF code partially written in Python

Hi dear all,
I would like to share our experimental tests on writing a UDF with Python.
So the basic idea was to write a UDF for factorial calculation. The interesting part was to move the factorial calculation logic into Python code and then call this Python file from inside the UDF code.

Here is the repo link -> UDFs-written-in-Python

The factorial calculation is here -> factorial.py
UDF code is here -> factorial.cc
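The actual factorial.py lives in the repo linked above; the rough idea is sketched below (how the argument is passed is simplified here: in this sketch the number arrives on the command line and the result is printed so the caller can read it back):

#!/usr/bin/env python
# Rough sketch of the factorial.py idea: read the number, compute the
# factorial, and print the result so the calling UDF can read it back.
import sys

def factorial(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

if __name__ == "__main__":
    print(factorial(int(sys.argv[1])))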

The basic idea of the UDF code is quite simple, as it has 3 parts:
1. initialization -> factorial_init() -> all necessary checks reside here
2. main logic -> factorial() -> calling the Python script, reading the data back and showing the result
3. deinitialization -> factorial_deinit()

Compiled as:

gcc $(/home/sh/REPOS/MYSQL_DEV_ACTIONS/mysql-5.7/mysql-server/5.7.16/bin/mysql_config --cflags) -fPIC -shared -o factorial.so factorial.cc -std=c++11

Copied to plugin_dir:

cp factorial.so /home/sh/REPOS/MYSQL_DEV_ACTIONS/mysql-5.7/mysql-server/5.7.16/lib/plugin/
cp factorial.py /home/sh/REPOS/MYSQL_DEV_ACTIONS/mysql-5.7/mysql-server/5.7.16/lib/plugin/

Activated as:
mysql > create function factorial returns integer soname 'factorial.so';

Result:

root@test1:~# python --version
Python 2.7.12
mysql> select factorial(5);
+--------------+
| factorial(5) |
+--------------+
|          120 |
+--------------+
1 row in set (0.02 sec)

My first patch for the MySQL world: fixing BUG #76852

Hi dear MySQL Community.
I have reported a bunch of bugs and written some articles, but today is special for me 🙂
Because today I sent my first PR for the MySQL source code.

So I logged a bug -> #76852 and have now fixed it after a year.
Okay, I know I am lazy.

Let’s reproduce the bug.
Simulating a full-disk condition is as easy as:

mkdir /filesystems
dd if=/dev/zero of=/filesystems/tmp_fs bs=1024 count=0 seek=$[1024*10]
mkfs.ext4 /filesystems/tmp_fs
mkdir small_mounted_dir
mount -o loop /filesystems/tmp_fs /home/sh/small_mounted_dir/

Then obtain the MySQL source and compile it with DEBUG. If you don’t know how to do that, just refer to -> installing-mysql-from-source-cmake-issues

Now you can use MySQL Sandbox to install 1 master and 2 slaves:

make_sandbox_from_source /home/sh/MySQLPakcages/mysql-5.6.32 replication

Inside the main path there will be a script for enabling GTID:

/home/sh/sandboxes/rsandbox_mysql-5_6_32/enable_gtid

Run it.

Then stop the master using the ./stop script inside the master folder:

/home/sh/sandboxes/rsandbox_mysql-5_6_32/master/stop

Move the binary mysql-bin.* files from the datadir to /home/sh/small_mounted_dir/.
Update my.sandbox.cnf and add the new path for the log-bin variable. Then start MySQL using the ./start script.
You can run sysbench to populate data:

 sysbench --test=/home/sh/REPOS/sysbench/sysbench/tests/db/oltp.lua --oltp-test-mode=complex \
 --num-threads=100 --mysql-db=dbtest --mysql-user=root --mysql-password=msandbox --db-driver=mysql \
 --max-requests=0 --innodb-table-size=1000000 --mysql-socket=/tmp/mysql_sandbox23097.sock prepare
  
sysbench --test=/home/sh/REPOS/sysbench/sysbench/tests/db/insert.lua --oltp-test-mode=complex \
--num-threads=100 --mysql-db=dbtest --mysql-user=root --mysql-password=msandbox --db-driver=mysql  \
--max-requests=0 --innodb-table-size=1000000 --mysql-socket=/tmp/mysql_sandbox23097.sock run

Next you will likely see something like:

2016-11-08 00:56:32 13417 [Warning] Disk is full writing '/home/sh/small_mounted_dir/mysql-bin.000003' (Errcode: 28 - No space left on device). Waiting for someone to free space...
2016-11-08 00:56:32 13417 [Warning] Retry in 60 secs. Message reprinted in 600 secs

If you try to stop MySQL, you will likely hit -> #83705

And if you try to start you will likely hit -> #76852

According to the core dump:

(gdb) bt
#0  0x0000000000000000 in ?? ()
#1  0x0000000000ae6242 in my_printf_warning (format=0x1026f08 "Disk is full writing '%s' (Errcode: %d - %s). Waiting for someone to free space...")
    at /root/mysql-5.6.24/mysys/my_error.c:260
#2  0x0000000000ac9052 in wait_for_free_space (filename=0x1fe7ee0 "/opt/mysql/datadir/mysql-bin.~rec~", errors=0) at /root/mysql-5.6.24/mysys/errors.c:115
#3  0x0000000000af1ac7 in my_write (Filedes=19, Buffer=0x1964d00 "/opt/mysql/datadir/mysql-bin.000003\n", Count=36, MyFlags=52)
    at /root/mysql-5.6.24/mysys/my_write.c:89
#4  0x0000000000acd5ae in inline_mysql_file_write (src_file=0x1027708 "/root/mysql-5.6.24/mysys/mf_iocache.c", src_line=1788, file=19, 
    buffer=0x1964d00 "/opt/mysql/datadir/mysql-bin.000003\n", count=36, flags=52) at /root/mysql-5.6.24/include/mysql/psi/mysql_file.h:1141
#5  0x0000000000ad078c in my_b_flush_io_cache (info=0x183c1a8, need_append_buffer_lock=0) at /root/mysql-5.6.24/mysys/mf_iocache.c:1787
#6  0x0000000000a7132b in MYSQL_BIN_LOG::sync_purge_index_file (this=0x183b400) at /root/mysql-5.6.24/sql/binlog.cc:4420
#7  0x0000000000a6e206 in MYSQL_BIN_LOG::open_binlog (this=0x183b400, log_name=0x190d130 "/opt/mysql/datadir/mysql-bin", new_name=0x0, 
    io_cache_type_arg=WRITE_CACHE, max_size_arg=1073741824, null_created_arg=false, need_lock_index=true, need_sid_lock=true, extra_description_event=0x0)
    at /root/mysql-5.6.24/sql/binlog.cc:3146
#8  0x000000000063ad8e in init_server_components () at /root/mysql-5.6.24/sql/mysqld.cc:5012
#9  0x000000000063b6e7 in mysqld_main (argc=19, argv=0x186de68) at /root/mysql-5.6.24/sql/mysqld.cc:5455
#10 0x000000000062fc74 in main (argc=10, argv=0x7fffffffe3d8) at /root/mysql-5.6.24/sql/main.cc:25

The problem is at line 260 in mysys/my_error.c:

void(*sql_print_warning_hook)(const char *format,...);
void my_printf_warning(const char *format, ...)
{
  va_list args;
  char wbuff[ERRMSGSIZE];
  DBUG_ENTER("my_printf_warning");
  DBUG_PRINT("my", ("Format: %s", format));
  va_start(args,format);
  (void) my_vsnprintf (wbuff, sizeof(wbuff), format, args);
  va_end(args);
  (*sql_print_warning_hook)(wbuff);
  DBUG_VOID_RETURN;
}

It gives a SEGFAULT exactly on (*sql_print_warning_hook)(wbuff), which makes it clear that sql_print_warning_hook is not initialized at that point.

So, as suggested by Sergei Glushchenko, the solution can be achieved either by checking sql_print_warning_hook before calling it inside my_error.c, or by initializing sql_print_warning_hook earlier inside sql/mysqld.cc. The best is to have both.

I decided to initialize sql_print_warning_hook inside init_server_components(), so first I removed this assignment from mysqld_main():

+  /*
    The subsequent calls may take a long time : e.g. innodb log read.
    Thus set the long running service control manager timeout
   */
@@ -5596,7 +5598,7 @@ int mysqld_main(int argc, char **argv)
   */
   error_handler_hook= my_message_sql;
   start_signal_handler();       // Creates pidfile
-  sql_print_warning_hook = sql_print_warning;
+

and added the assignment to init_server_components():


@@ -4663,6 +4663,8 @@ static int init_server_components()
     We need to call each of these following functions to ensure that
     all things are initialized so that unireg_abort() doesn't fail
   */
+  sql_print_warning_hook = sql_print_warning;
+
   mdl_init();
   if (table_def_init() | hostname_cache_init(host_cache_size))
     unireg_abort(1);
@@ -5486,7 +5488,7 @@ int mysqld_main(int argc, char **argv)
 #endif
   }

The second thing is to add a simple check inside mysys/my_error.c:

diff --git a/mysys/my_error.c b/mysys/my_error.c
index b1f57ba..4441228 100644
--- a/mysys/my_error.c
+++ b/mysys/my_error.c
@@ -257,7 +257,14 @@ void my_printf_warning(const char *format, ...)
   va_start(args,format);
   (void) my_vsnprintf (wbuff, sizeof(wbuff), format, args);
   va_end(args);
-  (*sql_print_warning_hook)(wbuff);
+  if(sql_print_warning_hook)
+  {
+    (*sql_print_warning_hook)(wbuff);
+  }
+  else
+  {
+    fprintf(stderr, "%s\n", wbuff);
+  }
   DBUG_VOID_RETURN;
 }

After recompiling and retesting, there is no SEGFAULT anymore.
So it can be considered a fix, except that my IDE added/removed some spaces in the source code 🙂

The PR link -> PR #109