
How we built fast and reliable storage for ad view counters

One of the unobtrusive but important functions of the ads on our sites is saving and displaying the number of times they have been viewed. Our sites have been tracking ad views for more than 10 years. The technical implementation has changed several times over that period, and today it is a (micro)service written in Go that uses Redis as a cache and task queue and MongoDB as persistent storage. A few years ago the service learned to work not only with the total number of views of each ad, but also with per-day statistics. But it learned to do all of this really quickly and reliably only recently.


Across our projects, the service handles about 300 thousand read requests and about 9 thousand write requests per minute, 99% of which complete within 5 ms. These are, of course, not astronomical numbers, and this is not launching rockets to Mars - but neither is it as trivial a task as simply storing numbers might seem. It turned out that doing all of this while guaranteeing that no data is lost and that readers see consistent, up-to-date values takes some effort, which we describe below.

Tasks and project overview


Although view counters are not as critical to the business as, say, payment processing or loan applications, they matter first of all to our users. People enjoy tracking the popularity of their ads: some even call support when they notice inaccurate view counts (this happened with one of the previous implementations of the service). In addition, we store and display detailed statistics in users' personal accounts (for example, to help them assess the effectiveness of paid services). All of this forces us to treat the saving of every view event and the display of the most up-to-date values with care.
In general, the functionality and principles of the project look like this:


Recording view counters: the pitfalls


Although the steps described above look quite simple, the challenge is to organize the interaction between the database and the microservice instances so that data is not lost, not duplicated, and not delayed.

Using only one storage (for example, only MongoDB) would solve some of these problems. In fact, that is how the service used to work, until we ran into problems with scaling, stability, and speed.

A naive implementation of moving data between the two storages could lead, for example, to anomalies such as the following:


There can be other errors as well, whose causes also lie in the non-atomic nature of operations that span both databases: for example, a conflict between simultaneously deleting and incrementing the views of the same entity.

Recording view counters: the solution


Our approach to storing and processing data in this project is built on the assumption that at any given moment MongoDB is more likely to fail than Redis. This is, of course, not an absolute rule - at least not for every project - but in our environment we really are used to seeing periodic timeouts on requests to MongoDB, caused by the performance of disk operations, and that was one of the reasons some events were lost.

To avoid many of the problems mentioned above, we use task queues for deferred saving and Lua scripts, which allow data in several Redis structures to be changed atomically. With this in mind, the detailed scheme for saving views looks like this:

  1. When a write request reaches the microservice, it runs the IncrementIfExists Lua script, which increments the counter only if it already exists in the cache. The script immediately returns -1 if there is no data for the viewed entity in Redis; otherwise, it increments the cached view count with HINCRBY, adds the event to the queue for later saving to MongoDB (we call it the pending queue) via LPUSH, and returns the updated number of views.
  2. If IncrementIfExists returns a positive number, this value is returned to the client and the request is complete.

    Otherwise, the microservice reads the view counter from MongoDB, increments it by 1, and sends it to Redis.
  3. Writing to Redis is done through another Lua script, Upsert, which saves the number of views to the cache if it is still empty, or increments it by 1 if someone else managed to fill the cache between steps 1 and 3.
  4. Upsert also adds the view event to the pending queue and returns the updated count, which is then sent to the client (a sketch of both scripts is given after this list).
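
To make this flow more concrete, here is a minimal sketch of what the two scripts and the surrounding Go code could look like, using the go-redis client. The key names, the exact script bodies, and the simplification that a queued event is just the ad ID are our illustrative assumptions, not the production code.

```go
package views

import (
	"context"

	"github.com/go-redis/redis/v8"
)

// incrementIfExists bumps the cached counter only if the entity is already in the
// cache, and pushes the event onto the pending queue in the same atomic step.
var incrementIfExists = redis.NewScript(`
	-- KEYS[1] = hash with cached counters, KEYS[2] = pending queue, ARGV[1] = ad ID
	if redis.call("HEXISTS", KEYS[1], ARGV[1]) == 0 then
		return -1
	end
	local views = redis.call("HINCRBY", KEYS[1], ARGV[1], 1)
	redis.call("LPUSH", KEYS[2], ARGV[1])
	return views
`)

// upsert fills the cache with the value read from MongoDB, unless another request
// has already filled it, in which case the existing counter is simply incremented.
var upsert = redis.NewScript(`
	-- ARGV[1] = ad ID, ARGV[2] = counter read from MongoDB and already incremented by 1
	local views
	if redis.call("HEXISTS", KEYS[1], ARGV[1]) == 1 then
		views = redis.call("HINCRBY", KEYS[1], ARGV[1], 1)
	else
		redis.call("HSET", KEYS[1], ARGV[1], ARGV[2])
		views = tonumber(ARGV[2])
	end
	redis.call("LPUSH", KEYS[2], ARGV[1])
	return views
`)

// IncrementView implements steps 1-4: the fast path touches only the cache, the
// slow path reads the current value from MongoDB via the loadFromMongo callback.
func IncrementView(ctx context.Context, rdb *redis.Client, adID string,
	loadFromMongo func(ctx context.Context, adID string) (int64, error)) (int64, error) {

	keys := []string{"views:cache", "views:pending"}

	cached, err := incrementIfExists.Run(ctx, rdb, keys, adID).Int64()
	if err != nil {
		return 0, err
	}
	if cached >= 0 {
		return cached, nil // step 2: the counter was already cached
	}

	fromMongo, err := loadFromMongo(ctx, adID) // step 2, slow path
	if err != nil {
		return 0, err
	}
	return upsert.Run(ctx, rdb, keys, adID, fromMongo+1).Int64() // steps 3-4
}
```

In production the queued event would likely carry more than the ad ID (for example, a timestamp for the daily statistics), but the structure of the two scripts stays the same.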

Because Lua scripts are executed atomically, we avoid many potential problems that concurrent writes could otherwise cause.

Another important detail is the safe transfer of updates from the pending queue to MongoDB. For this we used the “reliable queue” pattern described in the Redis documentation, which significantly reduces the chance of data loss by keeping a copy of the items being processed in a separate, additional queue until they have been permanently saved to the persistent storage.
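
Below is a minimal sketch of that pattern with the go-redis client, assuming the queue names from the previous sketch: an item is moved into the processing queue atomically, so there is no moment when it exists only in the application's memory.

```go
package views

import (
	"context"
	"time"

	"github.com/go-redis/redis/v8"
)

// popForProcessing blocks until an item appears in the pending queue, then
// atomically moves it to the processing queue and returns it to the caller.
// A redis.Nil error simply means the timeout expired without new items.
func popForProcessing(ctx context.Context, rdb *redis.Client) (string, error) {
	return rdb.BRPopLPush(ctx, "views:pending", "views:processing", 5*time.Second).Result()
}
```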

To make the steps of the whole process easier to follow, we prepared a small visualization. First, look at the normal, successful scenario (the steps are numbered in the upper right corner and described in detail below):

[Diagram: the successful write scenario, steps 1-6]

  1. The microservice receives a write request.
  2. The request handler passes it to the Lua script, which writes the view both to the cache (making it immediately readable) and to the queue for subsequent processing.
  3. A background goroutine (periodically) performs a BRPopLPush operation, which atomically moves an item from one queue to another (we call the second one the "processing queue" - the queue of items currently being processed). The same item is also stored in a buffer in the process memory.
  4. Another write request arrives and is processed, which leaves us with 2 items in the buffer and 2 items in the processing queue.
  5. After a timeout, the background process decides to flush the buffer to MongoDB. All the values from the buffer are written in a single request, which has a positive effect on throughput. Before writing, the process also merges several views of the same ad into one record by summing their values.
    Each of our projects uses 3 instances of the microservice, each with its own buffer, which is flushed to the database every 2 seconds. During this time, about 100 items accumulate in one buffer.
  6. After a successful write, the process removes the items from the processing queue, signaling that processing has completed successfully (a sketch of this flush step is given after the list).
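
Here is a sketch of how that flush step could look, using the official mongo-driver and go-redis clients, the queue names from the earlier sketches, and the same simplification that a queued item is just the ad ID:

```go
package views

import (
	"context"

	"github.com/go-redis/redis/v8"
	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

// flushBuffer merges the buffered events per ad, writes them to MongoDB in one
// bulk request, and only then acknowledges them by removing the corresponding
// items from the processing queue.
func flushBuffer(ctx context.Context, rdb *redis.Client, col *mongo.Collection, buffer []string) error {
	// Merge repeated views of the same ad into a single $inc.
	byAd := make(map[string]int64)
	for _, adID := range buffer {
		byAd[adID]++
	}

	models := make([]mongo.WriteModel, 0, len(byAd))
	for adID, n := range byAd {
		models = append(models, mongo.NewUpdateOneModel().
			SetFilter(bson.M{"_id": adID}).
			SetUpdate(bson.M{"$inc": bson.M{"views": n}}).
			SetUpsert(true))
	}
	if _, err := col.BulkWrite(ctx, models, options.BulkWrite().SetOrdered(false)); err != nil {
		// The items stay in the processing queue and will be retried later.
		return err
	}

	// Acknowledge: the data is persistent now, so drop the items from the processing queue.
	for _, adID := range buffer {
		if err := rdb.LRem(ctx, "views:processing", 1, adID).Err(); err != nil {
			return err
		}
	}
	return nil
}
```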

When all subsystems are healthy, some of these steps may seem redundant. And the attentive reader may also wonder what the gopher sleeping in the lower left corner is doing there.
Everything becomes clear when we consider the scenario in which MongoDB is unavailable:

An example of how the service operates when MongoDB fails:

  1. The first step is identical to the previous scenario: the service receives 2 requests to record views and processes them.
  2. The connection to MongoDB is lost (the process itself, of course, does not know this yet).
    The handler, as before, tries to flush its buffer to the database, but this time without success. It goes back to waiting for the next iteration.
  3. Another background goroutine wakes up and checks the processing queue. It discovers that items were added to it a long time ago; concluding that their processing has failed, it moves them back to the pending queue.
  4. After some time, the connection to MongoDB is restored.
  5. The first background goroutine retries the write operation - this time successfully - and finally removes the items from the processing queue.

This scheme contains several important timeouts and heuristics derived from testing and common sense: for example, items are moved back from the processing queue to the pending queue after 15 minutes of inactivity. In addition, the goroutine responsible for this task takes a lock before running, so that several microservice instances do not try to recover stuck views at the same time (see the sketch below).
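
A simplified sketch of that recovery goroutine is shown below: the lock keeps more than one instance from requeueing at the same time, while the 15-minute staleness check is left out for brevity; the lock and queue names are illustrative.

```go
package views

import (
	"context"
	"time"

	"github.com/go-redis/redis/v8"
)

// requeueStale moves items from the processing queue back to the pending queue
// under a short-lived lock, so that only one instance does the requeueing; the
// check for how long an item has actually been stuck is omitted in this sketch.
func requeueStale(ctx context.Context, rdb *redis.Client) error {
	locked, err := rdb.SetNX(ctx, "views:requeue-lock", "1", time.Minute).Result()
	if err != nil || !locked {
		return err
	}
	defer rdb.Del(ctx, "views:requeue-lock")

	for {
		// RPOPLPUSH moves one item back; redis.Nil means the processing queue is empty.
		if _, err := rdb.RPopLPush(ctx, "views:processing", "views:pending").Result(); err != nil {
			if err == redis.Nil {
				return nil
			}
			return err
		}
	}
}
```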

Strictly speaking, even these measures do not provide theoretically sound guarantees (for example, we ignore scenarios such as the process hanging for 15 minutes) - but in practice the scheme works quite reliably.

This scheme also has at least 2 known vulnerabilities that are important to be aware of:


It may seem that we ended up with more problems than we would have liked. In reality, however, the scenario we originally set out to protect against - a MongoDB failure - really is a much more realistic threat, and the new data processing scheme successfully keeps the service available and prevents losses.

One of the clearest examples of this was the night when, by a ridiculous coincidence, the MongoDB instance of one of the projects was unavailable for the whole night. All that time the view counters accumulated in Redis and rotated from one queue to the other, until they were finally saved to the database once the incident was resolved; most users did not even notice the failure.

Reading view counters


Read requests are much simpler than writes: the microservice first checks the cache in Redis; everything that is not found in the cache is filled in with data from MongoDB and returned to the client.

There is no write-through caching on the read path, to avoid the overhead of protecting against concurrent writes. The cache hit rate is still quite good, since the cache is usually already warmed up by other write requests.
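
A sketch of this read path, assuming the cache layout from the write sketches and a MongoDB collection that stores the total in a views field:

```go
package views

import (
	"context"
	"strconv"

	"github.com/go-redis/redis/v8"
	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
)

// GetViews returns the view counters for a set of ads: whatever is in the Redis
// hash is used as-is, the rest is read from MongoDB and is not written back to
// the cache.
func GetViews(ctx context.Context, rdb *redis.Client, col *mongo.Collection, adIDs []string) (map[string]int64, error) {
	result := make(map[string]int64, len(adIDs))

	cached, err := rdb.HMGet(ctx, "views:cache", adIDs...).Result()
	if err != nil {
		return nil, err
	}

	var missing []string
	for i, v := range cached {
		s, ok := v.(string)
		if !ok { // nil means a cache miss for this ad
			missing = append(missing, adIDs[i])
			continue
		}
		n, _ := strconv.ParseInt(s, 10, 64)
		result[adIDs[i]] = n
	}

	if len(missing) == 0 {
		return result, nil
	}

	cur, err := col.Find(ctx, bson.M{"_id": bson.M{"$in": missing}})
	if err != nil {
		return nil, err
	}
	defer cur.Close(ctx)
	for cur.Next(ctx) {
		var doc struct {
			ID    string `bson:"_id"`
			Views int64  `bson:"views"`
		}
		if err := cur.Decode(&doc); err != nil {
			return nil, err
		}
		result[doc.ID] = doc.Views
	}
	return result, cur.Err()
}
```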

Daily view statistics are read directly from MongoDB, since they are requested less often and are harder to cache. This also means that when the database is unavailable, reading the statistics stops working, but this affects only a small portion of users.

Data storage schema in MongoDB


The MongoDB collection schema for the project is based on recommendations from the database developers themselves, and looks like this:


We do not yet use MongoDB's transactional capabilities to update several collections at once, which means we risk having data written to only one of them. For now, we simply log such cases; there are few of them, and so far this is not nearly as significant a problem as the other scenarios.

Testing


I would not trust my own claims that the described scenarios really work if they were not covered by tests.

Since most of the project's code works closely with Redis and MongoDB, most of its tests are integration tests. The test environment runs on docker-compose, which means it deploys quickly, guarantees reproducibility by resetting and restoring state on every start, and makes it possible to experiment without affecting other people's databases.

There are 3 main areas of testing in this project:

  1. Validation of business logic in typical scenarios, the so-called happy path. These tests answer the question: when all subsystems are healthy, does the service behave according to the functional requirements?
  2. Verification of negative scenarios in which the service is expected to keep working. For example, is it true that the service does not lose data when MongoDB crashes?
    Are we sure that the information remains consistent under periodic timeouts, hangs, and concurrent write operations?
  3. Verification of negative scenarios in which we do not expect the service to keep working, but a minimal level of functionality must still be provided. For example, the service obviously cannot keep storing and serving data when neither Redis nor MongoDB is available - but we want to be sure that in such cases it does not crash, waits for the systems to recover, and then resumes working.

To test the failure scenarios, the service's business logic works with database client interfaces, which in the relevant tests are replaced with implementations that return errors and/or simulate network delays. We also simulate the parallel operation of several service instances using the “environment object” pattern. This is a variant of the well-known “inversion of control” approach, in which functions do not reach for their dependencies themselves but receive them through an environment object passed in as an argument. Among other advantages, this approach lets a single test simulate several independent copies of the service, each with its own pool of database connections, which reproduces the production environment reasonably well. Some tests run each such instance in parallel and make sure they all see the same data and that there are no race conditions.
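
To illustrate, here is a minimal sketch of what such an environment object could look like; the interface shapes and field names are our assumptions, not the project's actual types.

```go
package views

import (
	"context"
	"log"
)

// ViewStorage is the dependency the business logic talks to; production code
// wires in Redis- and MongoDB-backed implementations, while tests substitute
// fakes that return errors or simulate network delays.
type ViewStorage interface {
	IncrementView(ctx context.Context, adID string) (int64, error)
	GetViews(ctx context.Context, adIDs []string) (map[string]int64, error)
}

// Env bundles all external dependencies of one service instance.
type Env struct {
	Cache   ViewStorage // Redis-backed in production
	Persist ViewStorage // MongoDB-backed in production
	Logger  *log.Logger
}

// Handlers receive the environment explicitly instead of reaching for globals.
func HandleIncrement(ctx context.Context, env *Env, adID string) (int64, error) {
	views, err := env.Cache.IncrementView(ctx, adID)
	if err != nil {
		env.Logger.Printf("increment failed for %s: %v", adID, err)
	}
	return views, err
}
```

In a test, two or three such Env values can be constructed side by side, each with its own connection pool or with a faulty fake, which is exactly how the failure-scenario and race-condition tests are built.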

We also ran a rudimentary but still quite useful load test based on siege, which helped us roughly estimate the acceptable load and the response speed of the service.

About performance


For 90% of requests, the processing time is very small and, most importantly, stable. Here is an example of measurements on one of the projects over several days:

[Graph: request processing time on one of the projects over several days]

Interestingly, a write (which is actually a write + read, since it returns the updated value) is slightly faster than a read (but only from the point of view of the client, who does not observe the actual deferred write).
The regular morning increase in latency is a side effect of our analytics team, which collects its own statistics daily from the service's data, creating an “artificial highload” for us.

The maximum processing time, however, is relatively long: the slowest requests involve new and unpopular ads (if an ad has never been viewed and only appears in listings, its data is not in the cache and has to be read from MongoDB), bulk requests for many ads at once (these deserved a separate graph), and occasional network delays:

[Graph: maximum request processing time]

Conclusion


Practice has shown, somewhat counterintuitively, that using Redis as the primary storage for the view-counting service increased its overall stability and improved its speed.

The bulk of the service's load is read requests, 95% of which are served from the cache and therefore very fast. Write requests are deferred, although from the end user's point of view they also complete quickly and become visible to all clients immediately. Overall, almost all clients receive responses in less than 5 ms.

As a result, the current version of the microservice, based on Go, Redis, and MongoDB, works successfully under load and can survive periodic unavailability of one of the data stores. Drawing on previous experience with infrastructure problems, we identified the main failure scenarios and protected against them, so that most users experience no inconvenience. We, in turn, receive far fewer complaints, alerts, and log messages - and are ready for further growth in traffic.

Source: https://habr.com/ru/post/431902/

