Modern processors are extremely fast, and yet anyone working at a large enterprise constantly runs into painfully slow software. It is unacceptable when the month-end closing procedure in the accounting department takes more than a day, and when something goes wrong somewhere, the whole calculation has to be restarted.
This article discusses the fundamental reasons why applications are slow and provides indicative figures to support the choice of architecture for future programs.
Consider the typical architecture of a modern application.

The overall speed of the application is determined by two factors:
1) the speed of each individual link of the application;
2) the speed of interaction between the links of the application.
On the first point, only the database can be the weak link, since the application server and the web server can always be scaled out simply by putting another machine next to them.
Now let's examine the second item. Different links of the application run, at a minimum, in different operating-system processes, so for two links to interact the operating system must switch between processes at least twice. This means that even if the application server and the database are on the same computer, this overhead alone prevents us from executing more than about 8,000 SQL queries per second. Network interaction involves even more processes, and the rate drops to roughly 1,000 queries per second. The interaction between the web server, the web client, and the application server is usually far less intensive than between the application server and the database. The link between the user and the web server is characterized by a relatively long round-trip time, especially over satellite channels, so it is also unwise to fire off clouds of AJAX requests (a sin of many early AJAX applications). But back to the database.
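The arithmetic behind these ceilings can be sketched as a back-of-envelope calculation. The per-switch cost used here (~60 µs) is an assumed figure chosen to match the article's numbers, not a measurement:

```java
// Back-of-envelope estimate of the query-rate ceiling imposed by
// process switching. The ~60 us cost per switch is an assumption
// typical of commodity hardware, not a measured value.
public class SwitchOverhead {
    // Two context switches per round trip between application
    // server and database when both run on the same machine.
    static long maxQueriesPerSecond(double switchMicros, int switchesPerQuery) {
        double perQueryMicros = switchMicros * switchesPerQuery;
        return (long) (1_000_000.0 / perQueryMicros);
    }

    public static void main(String[] args) {
        // Local call: 2 switches of ~60 us each -> roughly 8,000 queries/s.
        System.out.println(maxQueriesPerSecond(60, 2));
        // Networked call: more processes plus network latency per hop
        // (assumed ~125 us x 8 hops here) -> about 1,000 queries/s.
        System.out.println(maxQueriesPerSecond(125, 8));
    }
}
```

The point is that the ceiling is set by the operating system's switching cost, not by the speed of the SQL itself.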
In normal operation, a few thousand requests per second are enough. Problems begin with any mass data processing, and the restriction becomes especially critical when object-oriented add-ons are layered on top of the database, including one as popular as the 1C system. The peculiarity of OOP is that it generates swarms of small calls to the database. There are two ways out of this situation:
a) place the application logic inside the database;
b) cache data within the application server.
By choosing option (a), you can increase the speed by roughly two orders of magnitude: up to 100,000 requests per second. The inconvenience is that we are then forced to implement the logic in the database's embedded language. The variant with C# or Java stored procedures does not help, since such procedures run in a virtual machine, which is effectively a separate process and in timing terms is practically indistinguishable from an external application server. And although an embedded programming language is not well suited to implementing application logic, many banks use exactly this approach; in practice this usually means Oracle with its embedded PL/SQL.
Now consider option (b). Sketched roughly, the logic looks like this: load all the data that may be needed into in-memory arrays in large blocks, process it, then upload the results back to the database. Rows load from the database very quickly, about a million per second, and in-memory processing is also fast. The problem is writing the results back: here we are again at about 1,000 rows per second, and non-standard tricks can push this to perhaps 100,000 rows per second. For very large enterprises this method is unsuitable, since the data simply does not fit in RAM. At the same time, it is the most suitable method for optimizing 1C; just remember that even in RAM a linear search is very slow, so you have to build your own indexes.
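A minimal sketch of option (b) in Java. The Row record, its field names, and the load/lookup API are illustrative assumptions, not part of any real 1C or DBMS interface:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Option (b): rows are bulk-loaded into memory once, and lookups go
// through a hand-built hash index instead of a linear scan.
public class InMemoryCache {
    record Row(long id, String payload) {}

    private final List<Row> rows = new ArrayList<>();
    private final Map<Long, Row> indexById = new HashMap<>();

    // Bulk load: one pass over the batch also builds the index,
    // so every later lookup is O(1) instead of O(n).
    void load(List<Row> batch) {
        for (Row r : batch) {
            rows.add(r);
            indexById.put(r.id(), r);
        }
    }

    Row findById(long id) {
        return indexById.get(id); // hash lookup, no scan over rows
    }

    public static void main(String[] args) {
        InMemoryCache cache = new InMemoryCache();
        cache.load(List.of(new Row(1, "invoice"), new Row(2, "payment")));
        System.out.println(cache.findById(2).payload()); // payment
    }
}
```

The design choice is exactly the one the paragraph warns about: without the `indexById` map, every `findById` would be a linear scan, which is slow even in RAM once the row count reaches millions.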
Where does the 100,000-rows-per-second ceiling come from, the one the database will not let us cross? The answer is simple: it is roughly the average speed at which the database builds indexes. Although a modern processor can sort 100 million rows in half a second, Oracle spends about 20 seconds building an index over 1 million entries. This must be kept in mind when designing an application. Even those 100,000 rows per second are achievable only through procedures in the embedded language or bulk loading from a file; with plain SQL commands we get at most 10,000 queries per second. Clustering can raise overall throughput, but reducing response time is very, very hard.
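To make the gap concrete, here is a small Java sketch that sorts a million values in memory and derives the throughput implied by a 20-second index build. The printed timing will vary by machine; only the derived rate is fixed, and the 20-second figure is the article's own observation, not a benchmark run here:

```java
import java.util.Arrays;
import java.util.Random;

// Contrast between raw in-memory sorting speed and the index-build
// rate observed in the DBMS.
public class SortVsIndex {
    // If maintaining an index of `indexedRows` entries takes
    // `buildSeconds`, index maintenance alone caps the load rate.
    static long impliedRowsPerSecond(long indexedRows, double buildSeconds) {
        return (long) (indexedRows / buildSeconds);
    }

    public static void main(String[] args) {
        long[] data = new Random(42).longs(1_000_000).toArray();
        long t0 = System.nanoTime();
        Arrays.sort(data); // typically well under a second for 1M values
        System.out.printf("sorted 1M longs in %.3f s%n",
                (System.nanoTime() - t0) / 1e9);

        // 1M index entries in ~20 s -> ~50,000 rows/s, the same order
        // of magnitude as the 100,000 rows/s ceiling.
        System.out.println(impliedRowsPerSecond(1_000_000, 20.0));
    }
}
```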
Let me remind you that we are talking about mass data processing, such as month-end or year-end closing. In everyday operation the usual speed is sufficient.
Now let's talk about the prospects. A real revolution in computer technology is quietly and imperceptibly taking place: the transition to 64 bits. This is not just the ability to install more than 4 gigabytes of memory; that was possible back on the Pentium II via extended processor modes. Sixty-four bits means the ability to fit any amount of data into the processor's address space. Coupled with the rise of solid-state drives, which by nature resemble RAM more than classic hard disks, this radically changes the face of future data-processing systems. Apparently, in the near future we will see the application server and the database merge. The first steps have been taken: Oracle has acquired Java technology, and Microsoft is already embedding SQL queries into the C# language (LINQ). That is, vendors are trying to push the logic inside their DBMS. The disadvantage of this approach is that logic built on OOP principles fits poorly into the relational data model, and the number of database queries grows by an order of magnitude. However, the large banks that adopt this approach can afford not to switch to new development technologies at all: Sberbank's IT department is comparable in headcount to the rest of its staff. In manufacturing companies there are far fewer programmers, and, oddly enough, new technologies are in greater demand there.
The opposite approach is embedding the database inside the application server. Of the existing technologies, NoSQL databases come closest. With this approach we can use the full power of a modern processor, and application performance rises by thousands, even millions of times. But this approach relies on OOP, whose main requirement is fast access to random areas of the database. The most effective way to provide it is to map the database file into the address space of the application server. Note that such popular programming languages as C#, 1C, and Java have no built-in means of working with memory directly, so additional layers of unsafe code have to be created for them to make this possible.
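As a sketch of what such a layer can look like in Java, `java.nio` offers memory-mapped files without dropping to unsafe code proper. The temp-file name and the fixed 8-byte record layout below are illustrative assumptions:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Maps a data file into the process address space, so reads and writes
// go through the page cache with no explicit read()/write() calls.
public class MappedStore {
    // Writes one long at offset 0 of a mapped temp file and reads it
    // back through the mapping.
    static long roundTrip(long value) {
        try {
            Path file = Files.createTempFile("store", ".dat");
            try (FileChannel ch = FileChannel.open(file,
                    StandardOpenOption.READ, StandardOpenOption.WRITE)) {
                // Map 1 MiB of the file; random access at any offset.
                MappedByteBuffer buf =
                        ch.map(FileChannel.MapMode.READ_WRITE, 0, 1 << 20);
                buf.putLong(0, value);     // write "record 0"
                return buf.getLong(0);     // random-access read
            } finally {
                try { Files.deleteIfExists(file); } catch (IOException ignore) {}
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(123456789L)); // prints 123456789
    }
}
```

This is exactly the "database file in the address space" idea: record access becomes a buffer offset rather than a query.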
Let us estimate the amount of RAM needed. The ubiquity of multimedia gives a somewhat distorted idea of how much memory information actually occupies. It may sound surprising, but one megabyte can hold an entire volume of War and Peace. This is easy to verify by multiplying the number of pages in the book by the number of lines of text per page and the number of characters per line: we get about a million characters. Not every user, unless they touch-type, "drives" several volumes of Tolstoy into a computer each year. Even a cashier continuously scanning goods in a store generates only about 10 megabytes of information per year. For office users we can safely budget one megabyte per person per year. Yes, there are Word files, scanned copies of documents, photos, music, and films, but all these kinds of information sit in the database "as is" and are not processed in any way (apart from full-text indexing), so they do not need to be in RAM.
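The book arithmetic is easy to check directly; the page, line, and character counts below are rough assumed averages, not exact figures for any particular edition:

```java
// Arithmetic behind the "one megabyte holds a volume of War and Peace"
// claim: pages x lines per page x characters per line.
public class MemoryEstimate {
    static long charsInBook(int pages, int linesPerPage, int charsPerLine) {
        return (long) pages * linesPerPage * charsPerLine;
    }

    public static void main(String[] args) {
        // ~500 pages x 40 lines x 50 characters -> about a million
        // characters, i.e. roughly one megabyte of single-byte text.
        System.out.println(charsInBook(500, 40, 50)); // 1000000
    }
}
```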
Findings. OOP, which has proved its effectiveness over 30 years of existence, cannot be used by us in full, because it is poorly compatible with relational databases. This article gives indicative figures with which you can choose the optimal depth of OOP adoption and know the limits of what a future application can achieve.