
Scorocode Cloud Service Development: Part 1



In this article, I will tell you how we developed the Scorocode cloud service, what problems we ran into, and, most importantly, share our plans for its development.

A short poll at the end of the article lets readers vote for the features planned for the future and thereby influence the service's development strategy.

Background


Back in 2011 I was actively using Parse for quick experiments in mobile and web application development. The usefulness of the service was never in doubt, but a number of shortcomings periodically made me want to find something more convenient.
Over time, having tried several similar services, I came to the following conclusions:

  1. Backend as a service is useful to many developers of client-server systems, since it speeds up work severalfold, especially at the initial stage, when a data structure and server logic are already needed but the resources to develop them are limited.
  2. The core functionality of such a service is structured data storage, access to it via SDKs for different platforms, and the ability to develop server-side logic. Additional functionality such as SMS/push/email messaging and ready-made objects like users and roles could be implemented by developers themselves, but having it built in speeds up work even further and lets you focus on the frontend.
  3. There was no such service with complete documentation in Russian. Yes, there were partial translations and a handful of examples, but they did not give a full picture of a service's capabilities and pitfalls.

This is how the idea arose of building such a service for a Russian-speaking audience, one in which we could implement both our own wishes and those of its users.

I will skip the story of the organizational path from the idea to an investment project in 2015; my colleagues can tell it better. The technical side, however, I will cover in more detail.

Development tools


Having defined the minimum set of features to implement, we moved on to choosing the development tools.

The option of using paid proprietary software was rejected right away as unjustified. The main reason is that over the last 5-6 years the software development industry has changed substantially for the better: tasks that previously could only be solved with heavyweight IT platforms and "monster" toolchains can now be solved quickly and efficiently with modern development tools, programming languages and platforms, most of which are distributed under the MIT license.

So, with the feature list in hand, we started choosing the platform for the core part of the service - the API server. In our view, a single server had to handle at least 10 thousand requests per second, so that clusters of such servers could withstand loads of up to 50 thousand requests per second. This number did not come out of thin air: one of the industrial systems we develop has exactly these load requirements, and we took it as a starting point, with an eye to moving that system's backend to the cloud (incidentally, using that system's requirements also let us calculate the economic benefit of a cloud backend).

As a result, we tested three variants of an API implementation using JSON as the exchange format. Load testing was done with Yandex.Tank. The results:

  1. Node.js + Express.js - 4,000 requests per second
  2. Node.js + Total.js - 1,500 requests per second
  3. Our own server written in Golang - 20,000 requests per second (a sketch of such a bare handler follows below)
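
For illustration, here is a minimal sketch of the kind of bare Go JSON handler that such a benchmark exercises. It is not the Scorocode server itself: the route and payload are made up for the example, and the real API server adds routing, authentication and database access on top of this.

```go
// A bare-bones JSON endpoint of the sort a load test like this hits.
// The route name and payload are illustrative only.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type apiResponse struct {
	OK     bool        `json:"ok"`
	Result interface{} `json:"result,omitempty"`
}

func pingHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	// Encode a trivial payload; the benchmark measures request/response overhead.
	json.NewEncoder(w).Encode(apiResponse{OK: true, Result: "pong"})
}

func main() {
	http.HandleFunc("/api/ping", pingHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```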

I will add that MongoDB was the unanimous choice of DBMS: it is modern, scalable, stands up to heavy loads, and comes with detailed, high-quality documentation and a large number of examples and drivers for popular programming languages.

The choice was made in favor of our own development, and we began to work out the architecture.

Architecture


The main task in designing the service architecture was to build a scalable cluster system. After some experimentation, we arrived at the following configuration:


Many small issues came up during development and testing, but overall the architecture proved viable and the system passed its tests successfully.

Features


As I wrote above, at the initial stage we implemented the basic functionality. The boundaries of that minimum set were defined by what Parse users would need in order to migrate and by the minimum backend features required to build reasonably simple applications.

While implementing this functionality we ran into problems, some serious and some less so. A couple of characteristic examples are below.

Problem 1. BSON parsing speed.


As you know, MongoDB returns data in BSON format, which can be easily parsed and converted to JSON. However, on large volumes BSON parsing takes considerable time: for example, on a result set of 1,000 medium-sized documents, converting BSON to JSON took more than 1.5 seconds. For us that was unacceptable.

We even tried rewriting the parser in the mgo.v2 driver from scratch. It did not help. We concluded that the time could only be reduced either by increasing the clock speed and core count of the server, or by offloading the task to the client.

In the end we decided to return all query results in BSON format and parse them in the SDK on the client side. It still works that way today.
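
As an illustration of the approach, here is a rough sketch of reading query results as raw BSON with the mgo.v2 driver, leaving the BSON-to-JSON conversion to the client SDK. The database, collection and limit are made up for the example; the real service writes the raw bytes into the API response.

```go
// Sketch: fetch documents as raw BSON with mgo.v2 and pass the bytes on
// unparsed, so that BSON-to-JSON conversion happens in the client SDK.
// Database, collection and limit are illustrative.
package main

import (
	"fmt"
	"log"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

func main() {
	session, err := mgo.Dial("localhost")
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	coll := session.DB("app").C("items")

	// Decoding into bson.Raw keeps every document as raw bytes instead of
	// unmarshalling it into maps or structs on the server.
	var docs []bson.Raw
	if err := coll.Find(bson.M{}).Limit(1000).All(&docs); err != nil {
		log.Fatal(err)
	}

	// In the real service these byte slices would go straight into the API
	// response; here we just report their sizes.
	for _, d := range docs {
		fmt.Println(len(d.Data), "bytes")
	}
}
```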

Problem 2. The speed of JavaScript triggers.


Initially we chose Google V8 as the engine for executing server-side scripts, and it handled asynchronous scripts well. But with triggers on data operations there were problems.

The V8 engine itself is very capable, but it starts relatively slowly, in 150-300 milliseconds, while our limit on trigger execution time was 500 milliseconds. Spending half of that budget just starting the engine was unreasonable, and creating a pool of pre-started "workers" would have caused a lot of problems with context switching.

So for triggers we chose the fastest way to execute JavaScript code from Golang: Robert Krimen's otto library. Yes, it has certain limitations, but it fits the task of running triggers perfectly. On top of this library we implemented a call-stack "terminator" that breaks infinite chains of trigger calls (for example, when an insert operation is invoked inside a beforeInsert trigger).
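
To give an idea of how such a guard can work, here is a rough sketch of a 500 ms execution limit built on otto's Interrupt channel, a pattern from the library's documentation. It only shows the time limit; the actual Scorocode "terminator" that breaks recursive trigger chains is more involved.

```go
// Sketch: abort a JavaScript trigger that runs past its time budget, using
// otto's Interrupt channel. Only the time limit is shown here.
package main

import (
	"errors"
	"fmt"
	"time"

	"github.com/robertkrimen/otto"
)

var errHalt = errors.New("trigger exceeded its time limit")

func runTrigger(src string, limit time.Duration) (err error) {
	vm := otto.New()
	vm.Interrupt = make(chan func(), 1)

	// vm.Run panics with whatever the interrupt function panics with,
	// so convert our sentinel panic back into an error.
	defer func() {
		if caught := recover(); caught != nil {
			if caught == errHalt {
				err = errHalt
				return
			}
			panic(caught)
		}
	}()

	// Once the budget expires, ask the interpreter to stop.
	go func() {
		time.Sleep(limit)
		vm.Interrupt <- func() { panic(errHalt) }
	}()

	_, err = vm.Run(src)
	return err
}

func main() {
	// An intentionally infinite loop: it is stopped after about 500 ms.
	fmt.Println(runTrigger(`while (true) {}`, 500*time.Millisecond))
}
```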

One could write endlessly about the problems and tasks that came up during implementation. I hope readers will point out the technical topics they would find interesting, and I will be happy to cover them.

What's next?


We have now planned and started work on new system features. Given the consistently high level of interest in Scorocode, we would like to hear the community's opinion on which of these features are worth implementing. We are ready to answer all your questions in the comments to the article.

Source: https://habr.com/ru/post/307056/

