
It is no secret that software developers regularly face problems of performance, high load, large data volumes, and fault tolerance. Ideally, all of these concerns are addressed when the system is designed. In practice, though, they are often tackled with belated "optimizations" after launch.
Why does this happen? Delivering high performance and reliability is mistakenly regarded by many as "black magic." And no wonder: nearly every book or article on the topic opens with a statement like "you can't simply speed things up."
The book authorities advise us to first define our goals, build load models, calculate hardware requirements, test our assumptions, and busy ourselves with other chaff that has nothing to do with the real business. And for all that, they still give no specific advice. What exactly should I change in my system so that it runs fast? No answer.
At the risk of incurring the wrath of highbrow software theoreticians, I'll say it: there are specific, understandable practices that solve 99% of the problems of speed, reliability, and availability in your software. And I am ready to share these practices with you. Note right away that we are talking primarily about server applications with business logic more complex than the usual CRUD.
So, here we go: 7 practical performance tips that really work.
Do not delve into the database

Your system probably uses some kind of data storage. So do not bother delving into the principles and mechanisms of how it works, or into its many settings.
Are you using a classic relational database like Oracle? If problems arise, there are specially trained DBMS specialists for that; they will tweak the settings where necessary, and everything will be fine. Do you use a NoSQL store like MongoDB? Then there is no need to know anything at all: the 10gen developers have already taken care of everything for you.
There is no need to think for the database developers. You have an ORM or a client library, so use it. How things work on the storage side is nobody's concern. Besides, what if you want to change the database engine one day? You must not tie yourself to the specifics of a particular database.
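A minimal Python sketch of this mindset, using the standard-library sqlite3 module (the table and data are invented for illustration). Instead of letting the database filter rows with a WHERE clause, we fetch everything and filter on the application side; how the storage engine might do it better is not our concern:

```python
import sqlite3

# Invented example table: 10 users, every other one "active".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, active INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(i, i % 2) for i in range(10)])

# Pull every row across the wire, then filter in application code --
# no WHERE clause, no index, no thinking about the storage side.
rows = conn.execute("SELECT id, active FROM users").fetchall()
active_users = [row for row in rows if row[1] == 1]
print(len(active_users))  # -> 5
```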
Single operations instead of batch operations

Suppose your system needs to process a million objects, each of which is stored in the database. Please do not attempt to handle them in batches. You would have to write separate logic to fetch and save a batch of objects, and to solve the error-handling problem. Yes, and the business logic would have to change too.
Instead, just use single operations and process objects one after another. The database will be loaded more evenly, the application logic will stay simple, and processing time will not suffer at all.
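In the spirit of the tip, a minimal sketch with the standard-library sqlite3 module (the table, column, and prices are invented). Each object gets its own SELECT, UPDATE, and commit, with no batching anywhere:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, 10.0) for i in range(1000)])

# One object at a time: a separate SELECT, UPDATE, and commit per row,
# instead of one set-based UPDATE or an executemany batch.
for item_id in range(1000):
    (price,) = conn.execute(
        "SELECT price FROM items WHERE id = ?", (item_id,)).fetchone()
    conn.execute("UPDATE items SET price = ? WHERE id = ?",
                 (price * 1.1, item_id))
    conn.commit()  # one round trip and one transaction per object

total = conn.execute(
    "SELECT COUNT(*) FROM items WHERE price > 10").fetchone()[0]
print(total)  # -> 1000
```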
No caches
Armchair optimizers will advise you to use caches (for page content, business objects, results of complex calculations, and the like). Allegedly, these caches reduce response time and lighten the load on the system. Yes, of course! And how will you solve the problem of stale data in the caches? Their consistency? Their eviction? The extra resources they consume?
My advice: simply do not use caches. The disk subsystems of modern operating systems already know what to cache and when. Add fast SSD drives to that, and you will see that the era of caches has irrevocably passed.
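A tiny Python sketch of the cache-free approach (the function and data are invented stand-ins): every call repeats the full computation, and we trust the OS and the SSD to keep it fast:

```python
def render_report(data):
    # Stand-in for an expensive computation; in a real system this
    # might involve templates, queries, and serialization.
    return sum(x * x for x in data)

# No cache layer: every request re-renders from scratch, doing the
# same work three times over.
data = list(range(1000))
results = [render_report(data) for _ in range(3)]
print(results[0] == results[1] == results[2])  # -> True
```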
Use a single synchronization primitive

Your system, of course, makes heavy use of multithreading to process concurrent requests efficiently. And multithreading, as you know, brings problems with access to shared resources. What to do? Very simple: make extensive use of one simple, time-tested primitive: the synchronized block, in which only one thread is allowed to execute at a time.
You should not use high-level multithreading constructs: all those non-blocking collections, atomic types, agents, and the like. They are overcomplicated and impose a thoroughly inconvenient usage model on you (just look at the compare_and_set method alone, which grossly violates the single-responsibility principle).
If further optimization is required, you can simply remove the synchronization blocks, and the system will run even faster! Of course, there may be some problems with competing threads, but in the end everything will settle down (you have probably heard of this; it is called eventual consistency, quite a fashionable topic right now).
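A minimal Python sketch of the one-lock-for-everything style (the workload is invented): a single global threading.Lock serializes every thread that touches shared state:

```python
import threading

# One global lock guards all shared state -- counters, queues, and
# caches alike; every thread funnels through it.
global_lock = threading.Lock()
counter = 0

def work(n):
    global counter
    for _ in range(n):
        with global_lock:  # all threads serialize here
            counter += 1

threads = [threading.Thread(target=work, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # -> 40000
```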
Use the simplest possible algorithms

Sometimes you have to solve algorithmic problems that your standard library does not cover: distributing objects across clusters, say, or solving some problem on graphs. In such situations, use the simplest algorithm that comes to mind.
Instead of talking about asymptotic complexity and trying to estimate big-O for time and memory, just solve the problem with nested loops. There is no doubt that in 95% of cases the solution will be perfectly good. Besides, modern compilers are excellent at spotting patterns of inefficient operations in code and fixing them transparently for the programmer.
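As a sketch of the approach, here is a brute-force closest-pair search in Python: two nested loops, O(n^2) comparisons, and not a thought spared for sorting or divide-and-conquer (the point data is invented):

```python
# Brute-force closest pair: compare every pair of points.
points = [(0, 0), (5, 5), (1, 1), (9, 2), (1, 2)]

best = None
best_d2 = float("inf")
for i in range(len(points)):
    for j in range(i + 1, len(points)):
        (x1, y1), (x2, y2) = points[i], points[j]
        d2 = (x1 - x2) ** 2 + (y1 - y2) ** 2  # squared distance
        if d2 < best_d2:
            best_d2 = d2
            best = (points[i], points[j])
print(best)  # -> ((1, 1), (1, 2))
```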
Use default settings
Surely your system runs inside some kind of container. This could be a web server, an application server, a virtual machine, or something else. Just trust them! There is no need to dig through multi-page documentation or, worse, forums and user groups, in search of the holy grail of tuning. Containers are developed by clever people, and all the default settings are good enough for you.
Local interaction is no different from remote

Suppose you have an API that is used locally in the system. Is it suitable for networked use, where a client can be located anywhere in the world? Of course it is! Do you need to modify it somehow: coarsen its granularity, handle dropped connections specially, introduce additional error types, support several interface versions? Of course not! Libraries and frameworks make remote interaction completely transparent to both client and server, and the rapid growth in bandwidth and stability of global networks erases their difference from local ones.
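A Python sketch of the idea (all names are invented, and remote_call merely simulates the network hop): the same fine-grained local API is reused verbatim for "remote" use, one round trip per attribute, with no batching or extra error handling added:

```python
def remote_call(fn, *args):
    # Imagine serialization plus a full network round trip here; in
    # this sketch it is just a local call.
    return fn(*args)

class UserService:
    # The original chatty local API, exposed over the wire unchanged.
    def get_name(self, uid):
        return "user%d" % uid
    def get_email(self, uid):
        return "user%d@example.com" % uid

svc = UserService()
# One "remote" round trip per attribute, per user.
profiles = [(remote_call(svc.get_name, u), remote_call(svc.get_email, u))
            for u in range(3)]
print(len(profiles))  # -> 3
```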
I have seen the principles described above successfully applied by developers in a wide variety of organizations and projects. And they always worked! According to the developers themselves, the systems were extremely fast and reliable. Of course, there were occasional problems in using them. But those are particulars caused by the imperfection of the environment the system is forced to run in, and they deserve no serious consideration.
Feel free to take these 7 tips into your arsenal, and you will soon see the result in your system!
P.S. Illustrations courtesy of Flickr users: