
read_buffer_size can break data replication

This is a translation of Miguel Angel Nieto's recent article "read_buffer_size can break your replication".

There are some variables that can affect replication and sometimes cause a lot of trouble. In this post, I'm going to talk about the read_buffer_size variable, and how this variable, along with max_allowed_packet, can break your replication.

Suppose we have a master-master replication setup with the following settings:

max_allowed_packet = 32M
read_buffer_size = 100M
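
These values can be checked (and, for the test, changed at runtime) from the client. A minimal sketch, assuming an account with the SUPER privilege; note that `SET GLOBAL` changes are lost on restart and that a new `read_buffer_size` only applies to sessions opened afterwards:

```sql
-- Check the current values on each master
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';
SHOW GLOBAL VARIABLES LIKE 'read_buffer_size';

-- Set the values used in this test (put them in my.cnf as well
-- if they should survive a server restart)
SET GLOBAL max_allowed_packet = 32 * 1024 * 1024;
SET GLOBAL read_buffer_size = 100 * 1024 * 1024;
```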


To break replication, I'm going to load 4 million lines with LOAD DATA INFILE:
MasterA (test) > LOAD DATA INFILE '/tmp/data' INTO TABLE t;
Query OK, 4510080 rows affected (26.89 sec)


After some time, the SHOW SLAVE STATUS command on MasterA will give us the following output:

Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'log event entry exceeded max_allowed_packet; Increase max_allowed_packet on master;the first event 'mysql-bin.000002' at 74416925, the last event read from './mysql-bin.000004' at 171, the last byte read from './mysql-bin.000004' at 190.'

Very strange! We loaded data into MasterA, and now replication to it is broken with an error related to max_allowed_packet. The next step is to check the binary logs on both masters:

MasterA:

masterA> mysqlbinlog data/mysql-bin.000004 | grep block_len
#Begin_load_query: file_id: 1 block_len: 33554432
#Append_block: file_id: 1 block_len: 33554432
#Append_block: file_id: 1 block_len: 33554432
#Append_block: file_id: 1 block_len: 4194304
#Append_block: file_id: 1 block_len: 33554432
#Append_block: file_id: 1 block_len: 10420608


No block is larger than max_allowed_packet (33554432).

MasterB:

masterB> mysqlbinlog data/mysql-bin.000004 | grep block_len
#Begin_load_query: file_id: 1 block_len: 33555308
#Append_block: file_id: 1 block_len: 33555308
#Append_block: file_id: 1 block_len: 33555308
#Append_block: file_id: 1 block_len: 4191676
#Append_block: file_id: 1 block_len: 33555308
#Append_block: file_id: 1 block_len: 10419732


Notice the difference? 33555308 is larger than max_allowed_packet (33554432): MasterB has written blocks that are 876 bytes over the safe limit. When MasterA then tries to read MasterB's binary log, replication breaks because the packets are too large. And no, replicate_same_server_id is not enabled.
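
The failure mode can be modeled with a short Python sketch. This is only an illustration of the behavior described above, not MySQL's actual internals, and it deliberately ignores the 876-byte event-header overhead seen in MasterB's binlog: the point is simply that block sizes are driven by read_buffer_size, so when read_buffer_size exceeds max_allowed_packet, blocks appear that a replica refuses to read.

```python
# A simplified model (not MySQL source code) of how LOAD DATA INFILE is
# split into Begin_load_query / Append_block binlog events. The block size
# is driven by read_buffer_size; blocks larger than max_allowed_packet
# make the replica fail with error 1236.

MIB = 1024 * 1024
MAX_ALLOWED_PACKET = 32 * MIB  # 33554432, as in the example config


def binlog_blocks(file_size: int, read_buffer_size: int) -> list[int]:
    """Model the block sizes a LOAD DATA of file_size bytes produces."""
    blocks = []
    remaining = file_size
    while remaining > 0:
        chunk = min(read_buffer_size, remaining)
        blocks.append(chunk)
        remaining -= chunk
    return blocks


# Hypothetical 150 MB data file.
# read_buffer_size = 100M (> max_allowed_packet): some blocks are too big.
bad = binlog_blocks(150 * MIB, 100 * MIB)
print(any(b > MAX_ALLOWED_PACKET for b in bad))    # True -> replication breaks

# read_buffer_size = 16M: every block fits under the limit.
good = binlog_blocks(150 * MIB, 16 * MIB)
print(all(b <= MAX_ALLOWED_PACKET for b in good))  # True -> safe
```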

What is the connection between read_buffer_size and this bug?

Again, an example explains this better than words. Let's take new values:

max_allowed_packet = 32M
read_buffer_size = 16M


We run LOAD DATA INFILE again, and now the output on both servers will be:

#Begin_load_query: file_id: 1 block_len: 16777216
#Append_block: file_id: 1 block_len: 16777216
#Append_block: file_id: 1 block_len: 16777216
#Append_block: file_id: 1 block_len: 16777216
#Append_block: file_id: 1 block_len: 16777216
#Append_block: file_id: 1 block_len: 16777216
#Append_block: file_id: 1 block_len: 16777216
#Append_block: file_id: 1 block_len: 16777216
#Append_block: file_id: 1 block_len: 14614912


The block size is determined by read_buffer_size, so now no block can ever exceed max_allowed_packet.
So remember: if read_buffer_size is greater than max_allowed_packet, importing data into MySQL can break replication. This bug exists in every version from 5.0.x up to the latest 5.5.25, and the easiest workaround is to not set read_buffer_size higher than max_allowed_packet. Bug #30435 has apparently not been fixed yet.
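
Following the workaround from this post, a safe configuration keeps read_buffer_size at or below max_allowed_packet. For example, in my.cnf (the 16M value here is just an illustration, matching the second test above):

```ini
[mysqld]
# Workaround for bug #30435: keep read_buffer_size <= max_allowed_packet
max_allowed_packet = 32M
read_buffer_size   = 16M
```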

And do not forget that large values of read_buffer_size do not improve performance (this is covered in the original article and its translation).

Source: https://habr.com/ru/post/145904/
