mysql-proxy is almost an ideal solution for those who need cheap sharding without rewriting applications, as well as for hosting providers who find it hard to keep sloppy customers in check :) but who would like to move MySQL out to separate servers, or reduce the load on the DBMS, without unnecessary back-and-forth with customers.
So, the task is to make life easier for the MySQL server. The available solution is master-slave replication. Everything is great when we have programmers who can rewrite the application to use several DBMS servers, but what if there aren't any? This is where mysql-proxy comes to the rescue. Working transparently for the client, mysql-proxy can distribute requests across several slave and master servers.
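As a rough sketch of how this looks in practice (the host names here are placeholders; the listen port 3309 and the backend options match the config shown at the end of this post): mysql-proxy is started in front of the backends, and the application simply connects to the proxy port instead of connecting to mysqld directly:

# start mysql-proxy in front of one master and two read-only slaves
mysql-proxy \
--proxy-address=localhost:3309 \
--proxy-backend-addresses=master1:3306 \
--proxy-read-only-backend-addresses=slave1:3306 \
--proxy-read-only-backend-addresses=slave2:3306

# the client connects exactly as it would to a regular MySQL server, only to the proxy port
mysql -h 127.0.0.1 -P 3309 -u login -p base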
A description of the tests follows.
I will omit the deployment of master-slave replication and the documentation in this topic; I only recommend skipping the guides that advise copying table files by hand — mysqldump has had the --master-data option for a long time.
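For reference, a rough example of taking the initial dump this way (user and file name are illustrative): --master-data records the binlog coordinates as a CHANGE MASTER TO statement inside the dump, so there is no need to copy table files:

mysqldump -u root -p --master-data --all-databases > master_dump.sql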
So, master-slave replication was deployed with one master and three slaves. The task: measure the mysql-proxy overhead and check failover support.
Two scripts were used for testing: the first inserted 1000 rows into the database without using indexes, the second read the table. The query cache was excluded, each script was run several times, and the average values were taken.
Several tests were performed.
Test 1 - insert into the database
a) directly into the database (result taken as 100%) - 100%
b) through mysql-proxy (one master) - 69%
Test 2 - selection from the database
a) directly from the database (result taken as 100%) - 100%
b) through mysql-proxy (one slave only) - 75%
Test 3 - selection from the database
a) directly from the database (result taken as 100%) - 100%
b) through mysql-proxy (three slaves) - 70%
Test 4 - selection from the database
a) directly from the database (result taken as 100%) - 100%
b) through mysql-proxy (two of the three slaves shut down to check failover) - 8%
Graph of the results:

The test scripts were invoked with
ab -n 1000 -c 1 script
and
ab -n 1000 -c 10 script
with a one-minute pause between runs; outliers were discarded, and the average for a direct connection without mysql-proxy was taken as 100%.
Test 1 was performed with the following script:
<?php
// insert 1000 rows, opening a new connection for each insert
for ($i = 1; $i <= 1000; $i++) {
    $link = new mysqli("localhost", "login", "pass", "base");
    $link->query("INSERT INTO `test` SET ...");
    mysqli_close($link);
}
?>
Before running it, the table was truncated.
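For clarity, the truncate was along these lines (assuming the same table and credentials as in the script above):

mysql -u login -p base -e 'TRUNCATE TABLE `test`;'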
Tests 2-4 were performed with the following script:
<?php
// read the whole table, opening a new connection for each pass
for ($i = 1; $i <= 1000; $i++) {
    $link = new mysqli("localhost", "login", "pass", "base");
    $q = $link->query("SELECT * FROM `test`");
    while ($r = $q->fetch_assoc()) {
        print_r($r);
    }
    mysqli_close($link);
}
?>
CONCLUSIONS:
If test 1 raises no questions, and tests 2 and 3 are also understandable (with complex queries and indexes the picture under real load will be completely different), then the results of test 4 leave me somewhat confused.
The mysql-proxy config used in test 4:
cat /etc/default/mysql-proxy
ENABLED="true"
OPTIONS="
--proxy-address=localhost:3309
--proxy-read-only-backend-addresses=workserver1:3306
--proxy-read-only-backend-addresses=downserver1:3306
--proxy-read-only-backend-addresses=downserver2:3306
--proxy-backend-addresses=workserver1:3306
--proxy-skip-profiling
"
Your thoughts?