Some time ago I was intrigued by RocksDB, a special-purpose database the Facebook team had released. On closer examination it turned out to be a fork of an earlier Google project: it compresses data on the fly and, while NoSQL at heart, it plugs into MySQL as a storage engine.
Then came the news that MariaDB had merged this engine upstream starting with version 10.2. Goodies like on-the-fly compression and per-row TTL under the hood made me want to try it on something suitable...
On my farm, Zabbix turned out to be a fitting data generator, and it was due to move to new hardware anyway. Out of the box, though, Zabbix knows nothing about RocksDB, so some shamanism and testing were required. If you are interested in the results and conclusions, read on.
Restrictions
The first problem that surfaced during planning is that MyRocks does not support
CONSTRAINT FOREIGN KEY
. It simply can't, and support is not planned. NoSQL, after all. It might seem this alone could sink the whole project, but a close look at the Zabbix data schema shows that the hottest tables (history_uint, history_text, history_log and history_str), which is where data from all sources actually lands, contain no foreign keys. The Zabbix team probably did this deliberately to keep these tables simple, and that plays right into our hands.
It is worth mentioning here that the creators of MyRocks
do not recommend mixing two storage engines within one application, because transactions that span both engines will not be atomic.
But a careful study of the output of
grep -r 'history_uint' zabbix-3.2.5
leads to the conclusion that although Zabbix does wrap value inserts in transactions, inside those transactions it does not touch any other tables (why would it, really?), so we squeeze through.
You also need to change the collation of the tables being moved to RocksDB to latin1_bin or utf8_bin; in general it is better to get rid of the latin1 encoding altogether. This resulted in the following Perl script for converting the dump:
#!/usr/bin/perl
# Convert a mysqldump: switch InnoDB tables that have no foreign keys
# to ROCKSDB and give them a binary collation.
$tablename = '';
$has_constraints = 0;
while (<>) {
    s/CHARACTER SET latin1//;          # drop per-column latin1 charset clauses
    if (/CREATE TABLE `(.*)`/) {
        $tablename = $1;
        $has_constraints = 0;
    }
    if (/CONSTRAINT/) {
        $has_constraints = 1;
    }
    if (/ENGINE=InnoDB/ and $has_constraints == 0) {
        s/ENGINE=InnoDB/ENGINE=ROCKSDB/;
        s/CHARSET=([^ ^;]+)/CHARSET=$1 COLLATE=$1_bin/;   # e.g. utf8 -> utf8_bin
    }
    print $_;
}
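For completeness, this is roughly how the converter could be applied and the result verified; the script and dump file names, as well as the zabbix database name, are placeholders for whatever your setup actually uses:
# push the original schema/dump through the converter and load the result
perl convert_to_rocksdb.pl < zabbix_dump.sql > zabbix_rocksdb.sql
mysql zabbix < zabbix_rocksdb.sql
# check which tables ended up on which storage engine
mysql -e "SELECT ENGINE, COUNT(*) FROM information_schema.TABLES WHERE TABLE_SCHEMA='zabbix' GROUP BY ENGINE;"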
Build mariadb
I built MariaDB from source into .deb packages and installed those. It looks something like this (OS: Debian 8.8):
apt-get update
apt-get install git g++ cmake libbz2-dev libaio-dev bison zlib1g-dev libsnappy-dev \
    build-essential vim cmake perl bison ncurses-dev libssl-dev libncurses5-dev \
    libgflags-dev libreadline6-dev libncurses5-dev libssl-dev liblz4-dev gdb smartmontools
apt-get install dpkg-dev devscripts chrpath dh-apparmor dh-systemd dpatch libboost-dev \
    libcrack2-dev libjemalloc-dev libreadline-gplv2-dev libsystemd-dev libxml2-dev unixodbc-dev
apt-get install libjudy-dev libkrb5-dev libnuma-dev libpam0g-dev libpcre3-dev pkg-config \
    libreadline-gplv2-dev uuid-dev
git clone https://github.com/MariaDB/server.git mariadb-10.2
cd mariadb-10.2
git checkout 10.2
git submodule init
git submodule update
./debian/autobake-deb.sh
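If the build succeeds, the resulting .deb packages should end up next to the source tree (that location, and the exact plugin package name, are assumptions worth checking); a quick sanity check that a RocksDB plugin package was actually produced:
cd ..
ls *.deb | grep -i rocksdb    # expect something like mariadb-plugin-rocksdb-*.deb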
Installation
Not without additional dependencies:
wget http://releases.galeracluster.com/debian/pool/main/g/galera-3/galera-3_25.3.20-1jessie_amd64.deb
dpkg -i galera-3*.deb
apt-get install gawk libdbi-perl socat
dpkg -i mysql-common*.deb mariadb-server*.deb mariadb-plugin*.deb mariadb-client*.deb libm*.deb
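Depending on how the packages were built, the ROCKSDB engine may still need to be enabled in the running server; a minimal sketch, assuming the plugin library is named ha_rocksdb and root can connect locally:
# load the MyRocks plugin and confirm the engine is now available
mysql -u root -e "INSTALL SONAME 'ha_rocksdb';"
mysql -u root -e "SHOW ENGINES;" | grep -i rocksdb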
Build net-snmp
For reasons still unclear to me, the net-snmp shipped with Debian results in a non-working Zabbix build: valgrind complains about memory leaks in places where everything should run perfectly linearly, and as a result the Zabbix server crashes.
The cure is to rebuild net-snmp from source with almost all of the Debian patches applied.
I built from the snapshot net-snmp-code-368636fd94e484a5f4be5c0fcd205f507463412a.zip;
fresher snapshots will probably build just as well.
You will also need the Debian packaging archive with the debian/ directory.
Then something like this:
version=368636fd94e484a5f4be5c0fcd205f507463412a
debian_version=net-snmp_5.7.2.1+dfsg-1.debian.tar.xz
unzip -q net-snmp-code-${version}.zip
cd net-snmp-code-${version}
tar -xvJf ../$debian_version
for i in 03_makefiles.patch 26_kfreebsd.patch 27_kfreebsd_bug625985.patch \
         fix_spelling_error.patch fix_logging_option.patch fix_man_error.patch \
         after_RFC5378 fix_manpage-has-errors_break_line.patch \
         fix_manpage-has-errors-from-man.patch agentx-crash.patch TrapReceiver.patch \
         ifmib.patch CVE-2014-3565.patch; do
    rm debian/patches/$i
    touch debian/patches/$i
done
cp ../rules debian/rules
dpkg-buildpackage -d -b
cd ..
dpkg -i *.deb
The trick with the rules file: in it I replaced --with-mysql with --without-mysql, so that net-snmp is not tied to MySQL at all; then, when experimenting with MariaDB versions, there is no need to rebuild net-snmp. You can skip this step if you prefer.
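For reference, the same tweak could be applied to a stock rules file with something like the one-liner below (the exact flag spelling inside debian/rules is an assumption and worth checking before the build):
# flip the configure flag so net-snmp builds without MySQL support
sed -i 's/--with-mysql/--without-mysql/' debian/rules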
Build zabbix
Zabbix itself has to be built after MariaDB is installed, since it links against the dynamic libraries that ship with it. I did something like this:
zabbixversion="3.2.7"
apt-get install libsnmp-dev libcurl4-openssl-dev python-requests
if [ ! -f zabbix-${zabbixversion}.tar.gz ]; then
    wget https://downloads.sourceforge.net/project/zabbix/ZABBIX%20Latest%20Stable/${zabbixversion}/zabbix-${zabbixversion}.tar.gz
    tar -xvzf zabbix-${zabbixversion}.tar.gz
fi
cd zabbix-${zabbixversion}
groupadd zabbix
useradd -g zabbix zabbix
sed -i 's/mariadbclient/mariadb/' configure
./configure --enable-proxy --enable-server --enable-agent --with-mysql --enable-ipv6 \
    --with-net-snmp --with-libcurl --with-libxml2
make -j5
make install
Profit: Zabbix's appetite for disk space went down, and the table rotation scheme based on "create partition / drop partition" could be abandoned, since the housekeeper now copes on its own (at least on an SSD, heh; I would like to compare against InnoDB on a fresh build, but have not got around to it yet). The data retention period is once again configurable per item, and in case of mass problems the queue now drains several times faster.
What I have not tested (precisely because the housekeeper started coping) is adding the magic snippet COMMENT = 'ttl_duration=864000;ttl_col=clock;', which, as I understand it, means "keep rows no longer than 864000 seconds and purge them at the storage-engine level".
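For illustration, such a comment could be attached to one of the history tables roughly as follows; this is an untested sketch, and the 10-day value and the choice of history_uint are only examples:
# untested sketch: expire history_uint rows older than 864000 seconds (10 days),
# using the clock column as the TTL reference
mysql -e "ALTER TABLE zabbix.history_uint COMMENT = 'ttl_duration=864000;ttl_col=clock;';"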
Yes, while I was testing and bolting all this together, Zabbix managed to release version 3.4. I have not re-checked everything on it, but something tells me it should work.
Useful docs that came in handy while writing this article:
Plus miscellaneous other material that Google turned up for various queries :)
Thanks for your attention. If you have any questions or remarks, you are welcome in the comments.