
Translation - BoxedIce shares its experience of switching from MySQL to MongoDB

A link to this article has already appeared on Habr and attracted a lot of interest. Many readers had trouble with the English original, so I decided to translate it.

Notes on using MongoDB in production


Last July I wrote that we had switched from MySQL to MongoDB.
We run MongoDB in production for the Server Density monitoring service. Eight months have passed since then, and we have run into a few things worth sharing.

These are the kinds of things you only notice once you have some real experience with the database and, interestingly, there are not many of them. Whenever we did hit a problem or a bug, it was quickly fixed by the MongoDB guys, so nothing ever really bit us.

Some statistics


Taken from our MongoDB servers on February 26th:
Collections (tables): 17,810
Indexes: 43,175
Documents (rows): 664,158,090
Currently we have one master and one slave with manual failover (meaning that in the event of a disaster the software is switched from one database to the other by hand - translator's note). The master runs on a server with 72GB of RAM, and the slave sits in another data center. We are running out of disk space, and we are in the final stages of moving to automated "replica pairs" with manual sharding, running on four servers (two masters and two slaves in different data centers).
Launching the new servers was delayed because we had to wait a whole week for all the data to sync properly before we could switch over. This is the usual move from initial vertical scaling (bigger hardware) to horizontal scaling by adding servers as we grow. And although we currently shard by hand, we plan to switch to the automatic sharding that will soon appear in MongoDB.

Namespace limits


Because MongoDB has a limit of 24,000 namespaces per database, we split our customers' data across three databases. The 24,000 figure covers the total number of collections plus indexes in a database.

You can check how many namespaces a database is using from the MongoDB console with the db.system.namespaces.count() command. You can also change the size of the namespace file with the --nssize parameter, but there are pitfalls and restrictions; read the documentation.
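As a minimal sketch from a shell, assuming a database called my_database and the standard mongo/mongod binaries (the 32MB value is only illustrative; the default namespace file is 16MB):

  # count the namespaces (collections + indexes) currently used in one database
  mongo my_database --eval "print(db.system.namespaces.count())"

  # start mongod with a larger namespace file, e.g. 32MB instead of the default 16MB
  mongod --dbpath /data/db --nssize 32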

Fault tolerance


A single MongoDB server offers no real fault tolerance on its own. The MongoDB developers themselves stress this, and in practice it simply means you must run several replicated servers.

If you suffer a power loss or MongoDB shuts down uncleanly, you will have to repair the database. If the database is small, this is a simple procedure run from the console, but it forces MongoDB to walk through every document (translator's note: a document in Mongo terms is roughly the equivalent of a row in SQL) and rewrite it. With a database the size of ours, it takes hours.
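For illustration, the offline repair is a single command run against the data directory while mongod is stopped (the path below is only an example):

  # rebuilds the data files by walking and rewriting every document -
  # on a large database this can take hours
  mongod --dbpath /data/db --repair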

You can use master/slave replication (preferably with the slave in another data center), but if the master dies you will have to switch your software over manually. Or you can use a "replica pair", which decides by itself which server is the master, and if one of the two servers goes down, data integrity is preserved once it comes back up (the original uses the term "consistent").
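As a rough sketch of the two options (hostnames and paths are made up; --pairwith and --arbiter are the historical replica-pair flags from the MongoDB 1.x era):

  # master/slave: failover is manual
  mongod --master --dbpath /data/db                                   # on the master
  mongod --slave --source master.example.com:27017 --dbpath /data/db  # on the slave

  # "replica pair": the two servers elect the master themselves,
  # with an arbiter to break ties
  mongod --pairwith other.example.com:27017 --arbiter arbiter.example.com:27017 --dbpath /data/db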

Large database replication


Our databases are very large, and a full upload of fresh data to a new slave in a separate data center over VPN takes 48 to 72 hours. During that time it is rather nerve-wracking, because the slave is still not operational.

On top of that, you have to make sure your oplog is large enough to hold all the operations since the sync [with the new slave] started. MongoDB flushes data to disk at the start and end of the sync, and in between it keeps all operations in the oplog.

We found that in our case an oplog of no less than 75GB is required. Its size is set with the --oplogSize parameter. Interestingly, these [oplog] files are created before MongoDB starts accepting connections, so you have to wait while the OS builds roughly 37 files of 2GB each.
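For example, a master with a 75GB oplog would be started roughly like this (the size is given in megabytes; the data path is an example):

  # preallocates ~37 local.* files of 2GB each before accepting connections
  mongod --master --oplogSize 75000 --dbpath /data/db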

On our fast 15K RPM SAS drives each such file takes 2 to 7 seconds to create (about 5 minutes in total), but on some of our old test drives it takes up to 30 seconds per file (up to 20 minutes for all of them).

During this time, your database is unavailable.

This is a real problem when you put an existing standalone server into replication mode or start a "replica pair": until the files are created, the server does not work.

There is a way around this: create these files yourself, and then MongoDB will not try to do it itself. Run the commands below (after you press Enter following done, about 80GB of files will be created):
  for i in {0..40}
  do
    echo $i
    head -c 2146435072 /dev/zero > local.$i
  done

Now stop MongoDB, make sure all the old local.* files are deleted, move the new files into the data directory, and then start MongoDB again.
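Sketched as shell commands (the init script name and the /data/db data directory are assumptions; adjust to your setup):

  sudo /etc/init.d/mongodb stop   # stop the running mongod
  rm /data/db/local.*             # remove the old oplog files
  mv local.* /data/db/            # move in the preallocated files
  sudo /etc/init.d/mongodb start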

This will work for --oplogSize=75000. Keep in mind that creating these files hammers the disk I/O and slows everything else down, but it will not take the database down; it stays available. Naturally, nothing stops you from creating the files on another machine and copying them over the network (translator's note: IMHO it would be more useful to renice the file-creation process; correct me in the comments if I am wrong).
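Following the translator's renice idea, one way to soften the impact on a live box would be to run the preallocation loop at low CPU and I/O priority, roughly like this (Linux only; ionice comes from util-linux):

  for i in {0..40}
  do
    echo $i
    ionice -c3 nice -n 19 head -c 2146435072 /dev/zero > local.$i
  done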

Slowdown during first sync


When the first synchronization between master and slave happens (translator's note: this is about the first run of the slave after the data has been loaded into it), we see our application slow down: response times increase.

We have not dug into this in detail, because despite the slowdown the application still works reasonably fast, although it is noticeable that connecting to the database and getting a response takes a little longer.

As a result, the web server processes spend more time handling each request, so CPU load goes up.

I have a theory. Since the slave reads all the data from every collection of every database off the master, the master's cache is constantly churned and becomes effectively useless. But I am not sure. This has not been confirmed by the MongoDB team, since it is not really a problem.

It can become a problem, however, if your server cannot absorb the extra load from the Apache processes (sic!). In short, it is borderline.

Starting the MongoDB Daemon and Logging


We used to run MongoDB inside screen, but now it is enough to pass the --fork parameter at startup and MongoDB runs as a normal daemon.

Do not forget to specify --logpath so you can see the errors that occur. We also use the --quiet parameter, because otherwise too much is written to the log, it grows quickly, and MongoDB has no built-in log rotation.
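Put together, a typical startup line might look something like this (the log path is an example):

  # run as a daemon, append to a log file, and keep the log quiet
  mongod --fork --logpath /var/log/mongodb/mongod.log --logappend --quiet --dbpath /data/db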

OS settings


We ran into the limit on the number of open files, caused by the OS default. It is usually 1024, which is not enough. The MongoDB documentation says a few words about this, and you can raise the limit. On Red Hat this is changed in /etc/security/limits.conf. You also need to enable UsePAM in /etc/ssh/sshd_config so that the new limits apply to you at login.
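For illustration, the relevant bits of the two files might look like this (the mongodb user name and the 65535 limit are assumptions):

  # /etc/security/limits.conf
  mongodb  soft  nofile  65535
  mongodb  hard  nofile  65535

  # /etc/ssh/sshd_config
  UsePAM yes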

We also turned off atime on all database servers, so the file system does not update the file access time every time MongoDB touches a file.
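A sample /etc/fstab entry with atime disabled (device, mount point and file system are only examples):

  /dev/sdb1  /data  ext3  defaults,noatime  0 0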

Locks on index creation


We create our indexes up front, so this takes little time.
However, if you create a new index on an existing collection, the process blocks the database until the index build completes.

This is fixed in the MongoDB 1.3 branch, which introduces background indexing.
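With 1.3+, an index can be built in the background from the mongo shell; a minimal sketch, assuming a database called my_database and a collection called metrics:

  mongo my_database --eval "db.metrics.ensureIndex({ timestamp: 1 }, { background: true })"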

Disk Space Efficiency


The whole point of our server monitoring application is to collect a mass of data that is eventually deleted. We found a sizable difference in disk usage between our master and a freshly raised slave.

Obviously, the slave copies the data and stores it in the most compact way (there are no gaps left by deleted data), so after its first sync the slave takes up less disk space than the master. Still, we have seen a master using almost 900GB of disk whose slave turned out to be only 350GB.

We raised this issue with MongoDB's commercial support team.

Technical support is a pleasure


Even before 10gen (the company that develops MongoDB) received $1.5 million in funding, the support was great. It still is.

We bought their Gold Commercial Support, and it has paid off many times when we ran into problems.
Their ticket system is convenient, and we get incredibly fast answers from the developers themselves. I can also reach them by phone any time of day, seven days a week.

If you do not want to, or cannot, pay for support, the public mailing list is at your service. It is also very good. Being able to resolve problems quickly, day or night, matters to us and we pay for that, but even MongoDB's free support is very fast.

Conclusion - was switching to MongoDB the right step?


Yes. MongoDB has turned out to be a great choice. Administering large databases is simple, the database scales well, and the support is top class.

The only thing I miss, and am looking forward to, is auto-sharding. For now we shard manually, but when MongoDB handles it itself, it will be really cool!

- David Mytton, original
- Snick translation

PS: I will gladly fix translation errors and typos, but please report them in a private message and keep the comments for discussing the substance.

Source: https://habr.com/ru/post/86429/

