
Gifts from M.Video: what's under the hood?


Instead of an introduction


This story began in December 2007. I was a postgraduate student at Bauman Moscow State Technical University and had just taken a job at a small company that was launching a project whose name meant nothing to me at the time: "Processing of M.Video Gift Cards". As was explained to me at a short briefing on my first working day, processing is a system that stores data about gift cards and can perform various operations on them. I was also told that there was almost nothing developed yet, but that building such a system was perfectly straightforward. Accordingly, release to production was tentatively planned in a couple of months. "I see," I replied, and plunged headlong into a creative process from which I have not emerged to this day.

This article is about how important it is to make the right decisions about the technology and architecture of a future product, about how to make them, and about what happens when those decisions are wrong. If in December 2007 I had had the experience I have now, the M.Video gift card processing system would have developed smoothly and steadily. There would not have been so many sleepless nights and crunches, seven days a week, with breakfast, lunch and dinner in front of the monitor. But then there would also not have been such a furious drive while working on the product.

What is processing


So what is a processing system? Before beginning the story of the long and thorny road to building one, it is worth pinning down the terminology.
A processing system (processing) is a system designed to process the information used in performing payment transactions.

It is customary to distinguish two types of processing: bank processing (which works with bank cards: Visa, MasterCard, etc.) and non-bank processing (which works with non-bank cards: gift cards, discount cards, promotional codes, etc.).
As you have probably guessed, this article will focus on non-bank processing.

How it all began, or how technologies were chosen for the future product


This is the most capacious chapter of my story, because the beginning, in my opinion, is the most interesting time when creating a product from scratch. Well, almost from scratch. At the initial stages of product creation, key (I would even say momentous) decisions are made, on which the future of the product directly depends. Chose the wrong platform? The product will drown under growing traffic, or will be so expensive to maintain that it brings the company nothing but losses. Chose the wrong technology? A couple of years later you risk not finding specialists able to work with it (or finding them, but at unreasonable rates). The cost of creation and maintenance (I will come back to this question more than once) is the cornerstone of many IT solutions that would have been much cheaper if the right decisions had been made at the start.

So, the task was set: a team of two people, two months, to develop a processing system for M.Video from scratch. Naturally, the system had to take into account the baggage of legacy constraints that existed at the time (the data format on the cards' magnetic stripes, the peculiarities of the cash register system, and so on). In terms of technology, we had complete creative freedom. "Torrents have everything" - under this slogan we started thinking about what to build our product on. I did not know what the main problems of operating processing systems were, and no useful information could be found on the Internet: the field is quite narrowly specialized, and, as we all know, the Internet mostly writes about trendy things (the law of the market again). So when choosing technologies we had to rely on experience from developing other systems, technical logic and plain familiarity with particular technologies (it should be obvious that there was no time to seriously study new ones). Looking ahead, I will say that I was not mistaken in the choice of technologies (naturally, I do not attribute this to my genius; in many respects I was simply lucky). I was wrong about something else: the architecture of the software solution. But first things first.

Hardware and its architecture


Let's start with the hardware. Often, when designing systems, equipment is selected based either on the chosen software platform (Oracle or IBM, for example, usually specify minimum hardware requirements for their platforms) or on the complexity of the problem to be solved (say, 4K streaming video processing with specified performance characteristics). In our case, neither applied. The selection criteria were the reliability of the system and, however trite, the budget. It immediately became clear that IBM cluster systems with hot-swappable modules and similar goodies were beyond our means, completely and utterly (those who have worked in small companies or startups will understand me). So our attention moved to the low price segment, with the proviso that redundancy would have to be implemented by ourselves. In the end we chose an inexpensive, unremarkable HP server. The home-grown redundancy mechanism at first raised many questions from the customer, but the first year of operation removed them completely. The mechanism was as follows (Fig. 1).


Fig. 1.

Yes, it is that primitive. Two mirrored front-end servers and one database server. But why so, and what is the point of this scheme? "Nobody has built systems like this for ages," a reader versed in modern 24/7/365 architectures will say, and will be right... but only partially. Let's figure it out. As mentioned above, a really cool cluster system was not an option because of its cost.

At this point in the narrative I cannot resist a small digression on the topic. I know of at least two failures of such systems in large banks (for obvious reasons, I will not give names). And the problem was not limited budgets (the banks had spent several million dollars on the resilience of their systems), but the fact that reality is a ruthless thing.

In the first case, a drunk excavator operator cut the optical fiber of the primary and backup communication channels simultaneously. This happened even though the bank's data center architects had, quite correctly, routed the communication lines out of the building in different directions. How could one swing of the bucket break both lines? Very simply. On the opposite side of the street, the lines converged in a single well. The data center architects could not influence that. They did not even know about it.

The second case is not as epic, but also interesting. The bank had a classic Active-Standby cluster. After the primary node failed due to a banal hardware fault (an entirely uncritical situation for any cluster, even the cheapest), the standby node simply did not come up. Investigation revealed the reason: the rack where the standby node was mounted was physically missing equipment - an incomplete set of disk drives. It then turned out that the missing drives had been used as a spare-parts pool to keep the bank's other systems running. Again the question arises: how could that be? Did people really not know that this rack held the standby node of a cluster running one of the bank's most important IT systems? They knew. But by a happy (or unhappy, depending on your point of view) coincidence, there had never before been a need to start the standby node. For years, employees saw this rack powered off. It is even possible that the people who knew the purpose of the equipment in the rack had already left the company. And so, quite naturally in the situation described (infrastructure engineers will understand), the rack went "under the knife" to solve "burning" problems with production equipment.

Summing up this not-so-small but very necessary digression, I want to draw two fundamental conclusions which, in my opinion, must be taken into account when building IT systems of any level:

  1. Absolutely reliable systems do not exist (trite, but for some reason this is often forgotten).
  2. The real (not on-paper) reliability of a system depends only weakly on its cost (this is forgotten even more often).

Let's return to the processing of M.Video gift cards. We were given one input: the M.Video data center had two Internet channels from different providers. How the wires left the building, of course, we did not know, but two channels were not bad for a start. We tried to design the system as two mirrored segments that would "sit" on different communication channels and somehow back each other up. We could not implement this idea in full, because it would have required splitting the processing database into two mirrored segments with high-performance (very high-performance) online replication between them. Looking ahead, I will say that we chose Microsoft SQL Server 2000 as the DBMS. After studying how its replication mechanism works, we concluded that we could not achieve the required performance, and that replication would carry a very large overhead. So the database stayed centralized. The front-end servers, however, were easy to split apart, because they hold no data that must be promptly replicated. We implemented the logic of switching clients between the front-end processing servers in the client module of our system, but more on that later.
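
To show what that client-side switching can look like, here is a minimal sketch in VBScript, the language the client module was eventually written in (see below). The server addresses, timeouts and status handling are my illustrative assumptions, not the production code:

    ' Hypothetical sketch of client-side failover between the two
    ' mirrored front-end servers; addresses and timeouts are invented.
    Function SendRequest(body)
        Dim urls, i, http
        urls = Array("http://fe1.processing.local/req.asp", _
                     "http://fe2.processing.local/req.asp")
        For i = 0 To UBound(urls)
            On Error Resume Next
            Set http = CreateObject("MSXML2.ServerXMLHTTP")
            ' resolve, connect, send, receive timeouts in milliseconds
            http.setTimeouts 5000, 5000, 5000, 5000
            http.open "POST", urls(i), False
            http.send body
            If Err.Number = 0 And http.Status = 200 Then
                SendRequest = http.responseText
                Exit Function
            End If
            On Error GoTo 0   ' clear the error and try the other server
        Next
        Err.Raise vbObjectError + 1, , "No front-end server reachable"
    End Function

The essential design point is that the decision of which front-end to use lives in the client, so the two branches of the scheme stay fully independent of each other.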

Now let's talk about reliability. Since we had two communication channels from different providers, the reliability of the front-end part turned out to be quite high. Only a drunk excavator operator (see above) or an explosion of our rack in the data center could bring it down. The worry was the centralized database server. So we studied all the statistics we could find on the operation of real systems, and wrote out the three most frequent problems in production:

  1. Failure / overload of the communication channel.
  2. Insufficient system performance during peak periods (holidays, Friday evenings and so on).
  3. Software errors / crashes.

Equipment failure was also on the overall list, but did not make the top three. Since all three problems are almost completely addressed by the scheme with mirrored front-end servers, we decided to take the risk and left the database server centralized (with the proviso that it should be as powerful as the budget allowed, to have enough headroom for bursts). The bet paid off. Processing ran on this hardware platform from March 2008 through May 2014 inclusive. During that time there were incidents of all three kinds listed above, but not a single equipment failure. Someone will say we just got lucky. I will not argue. But still.

Finally, I want to mention what I disliked most about the described scheme. Professional system administrators reproached me more than once for using a separate router with a separate IP address in each branch of the scheme. It is unfriendly to the client, they said, to force it to know how to switch between two different addresses (although the switching was handled by our own client module, not by the client software working with the processing). It was suggested to install either a load balancer that would automatically distribute traffic between the front-end servers, or a pair of routers talking the HSRP protocol, so that the client would see a single IP address. My opinion is this. A load balancer is out of the question: creating a centralized link in a distributed system is the maximum evil, which is obvious. HSRP seems to create a distributed link (two routers), but its components depend on each other (the routers constantly communicate and implement the coordination logic defined by the protocol). Such a link will never be more reliable than a link consisting of truly independent elements (two routers that do not interact with each other at all). This can even be shown mathematically. The scheme described here has proven its reliability more than once, outliving many systems of a class far above our gift card processing.
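
The argument, in a deliberately simplified toy model of my own (the probabilities here are assumptions for illustration, not measurements), goes like this:

    % Toy model: each router fails independently with probability p.
    % Two independent routers plus client-side switching lose service
    % only when both are down at once:
    P_{\mathrm{independent}} = p^{2}
    % A coupled (HSRP) pair loses service when both are down, or when
    % the primary dies and the automatic failover itself misfires
    % (split-brain, a stuck state transition, a protocol bug),
    % with some probability q:
    P_{\mathrm{coupled}} \approx p^{2} + p\,q \;>\; P_{\mathrm{independent}} \qquad (q > 0)

However small q is, the coupled link cannot come out ahead; at best it ties, and every extra line of coordination logic pushes q up.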

Web service platform


As I said before, we chose Microsoft SQL Server 2000 as the DBMS, without any bells and whistles such as data replication between several databases. I knew this DBMS well, and we knew many real-world examples of SQL Server in high-load corporate systems, including banking ones. A good argument in favor, I think. For the same reasons, we decided to build the web service on IIS, from the same Microsoft. Nothing unusual so far, right? The more interesting part is this. As the technology for the scripts implementing the logic of the processing service, we took not ASP.NET, which was then starting to gain popularity, but the now long-forgotten ASP Classic. Another decision controversial for its time, and one that largely determined the future success of the system. If you read the various ASP.NET vs ASP Classic comparisons on the Internet, the main arguments in favor of ASP.NET come down to roughly the following:

  1. More modern technology.
  2. Provides more features.
  3. More streamlined web application structure.
  4. Higher performance.

Sounds cool, right? It's true. Everyone likes it, everyone buys it. Microsoft's salespeople and marketers get their bonuses. All is well. Now let's get to the essence.

With point 1 everything is clear: the new is always pleasing. A new jacket, car, girlfriend, ..., Java version. It always causes positive emotions; that is how our psyche works. Nothing can be done about it... and nothing needs to be. You just must not count it as an "advantage" when choosing technologies for future products. That would be like choosing a car not by its technical specifications but by fashion trends. I understand that many people do exactly that, but I believe a developer should always keep a sober mind and approach any question rationally. Unfortunately (for me; maybe someone likes it), I now often see precisely this "fashionable" approach to choosing technologies. And the result? The result is expensive and silly, because the newest technologies are usually the most expensive, while their advantages in a particular project are not always clearly understood by the customers and/or the implementers. And, as you know, everything new is well-forgotten old. But that is a separate big story; let's not dwell on sad things.

Point 2: features. Features are always good. But do you need, for example, to monitor the weather online in Baltimore, Maryland, USA? Hardly, unless you are going there soon or your relatives live there. Do many of your friends need such a feature? With this simplest of examples I want to show the obvious: when someone entices you with new features, you need to clearly understand their usefulness for solving your problem. Having studied the capabilities that ASP.NET has and ASP Classic does not, we decided that for our task ASP.NET held no advantage over ASP Classic.

Point 3: the structure of the web application. Once upon a time, at the dawn of PHP, in one very good book about that language (a birthday present from a friend), I read that all web applications built from server-side scripts embedded directly into static HTML share a serious drawback: their code is structured worse than that of desktop applications written in, say, C++. And I fully agree with that. Where C++ nudges the developer toward a certain structure (header files, preprocessor directives, the main() function), PHP, ASP Classic and similar languages do not push toward any code organization at all. If you wish, you can write as chaotically or as neatly as you like (though neatness you will have to enforce yourself). The matter is left entirely to the programmers. Much water has flowed since that great book came out. Technology has advanced, and ASP.NET is indeed better than ASP Classic at helping developers create applications with structured code.

But back to our sheep. The system had only two developers (with 90% of the code written by one of them - me). It was clear that, given the desire, we would be able to keep the application's source code structured ourselves, and for our particular task that negated ASP.NET's advantage over ASP Classic. And the cherry on the cake: performance. I will not ramble (this is turning out long as it is) and will say right away: yes, the performance of ASP.NET applications in general (I adore those words in such statements) is higher than that of ASP Classic applications, for a fundamental reason: compiled (precompiled, to be exact) code is faster than interpreted code.

But, as always, there are nuances.

When a .NET application is first started (for example, after a server reboot), the cost of loading the framework and the bytecode into memory means the system runs below "cruising" speed for a while. ASP Classic applications are free of this drawback: there is no bytecode, and the interpreter processes the script the same way on every run (unless, of course, script caching is configured). The problem is small, certainly, but it exists, and it is especially noticeable in systems that handle a powerful continuous stream of requests (which is exactly what our processing does). This, plus the fact that the Classic application exceeded its target performance figures by several orders of magnitude, made the choice in favor of ASP Classic easy and unforced. To close the ASP.NET vs ASP Classic confrontation, let me list several advantages of ASP Classic over its .NET counterpart (yes, there are advantages in this direction too, and quite significant ones, in my opinion):

  1. You can update the web application without restarting the web server and, accordingly, without a service interruption. This feature still makes our system look cool against the background of other corporate systems, which often struggle with this.
  2. To modify the system you need nothing but the standard Notepad (the one with the blue icon - everyone knows it, nobody uses it). A competent developer, one who can write code without the tooltips that appear after typing an object name and a dot, could patch the application from a mobile phone. Can you write .NET code on a mobile phone? Can you precompile it on that same phone?
  3. The general "lightness" of the application. The sources weigh nothing and can be uploaded to the server over almost any link, even the slowest. Since 2008 I have uploaded updates to the M.Video processing system over everything imaginable: mobile connections (2G/3G and older), Yota Internet, a torrent-clogged office channel and even a dial-up connection (yes, that happened too - fortunately, only once). A sketch of such a featherweight endpoint follows this list.
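
To make that "lightness" concrete: an ASP Classic endpoint is just a text file living on the IIS server. Here is a deliberately minimal sketch of what a balance-check script could look like; the file name, connection string and Cards table are invented for illustration, not the real processing schema:

    <%@ Language=VBScript %>
    <%
    ' balance.asp - hypothetical minimal ASP Classic endpoint.
    Option Explicit
    Dim conn, cmd, rs, cardNumber
    cardNumber = Request.Form("card")

    Set conn = Server.CreateObject("ADODB.Connection")
    conn.Open "Provider=SQLOLEDB;Data Source=DBSRV;" & _
              "Initial Catalog=Processing;Integrated Security=SSPI;"

    ' A parameterized command, so card numbers never touch the SQL text
    Set cmd = Server.CreateObject("ADODB.Command")
    Set cmd.ActiveConnection = conn
    cmd.CommandText = "SELECT Balance FROM Cards WHERE CardNumber = ?"
    cmd.Parameters.Append _
        cmd.CreateParameter("card", 200, 1, 32, cardNumber) ' adVarChar, input

    Set rs = cmd.Execute
    If Not rs.EOF Then
        Response.Write "OK;" & rs("Balance")
    Else
        Response.Write "ERR;CARD_NOT_FOUND"
    End If
    conn.Close
    %>

Dropping a corrected copy of such a file onto the server takes effect on the very next request, with no recompilation and no restart - advantage 1 from the list above.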

Web service architecture


"Architecture" appears here only in the section title, because at the time the system had no architecture at all. Yes, yes: our system implemented the complete checklist of "what not to do when designing a web application". ASP scripts were smeared across the HTML completely uncontrollably, and the files were named so that deducing a script's purpose from its name was a serious challenge. The reason was my inexperience in developing web applications. Nevertheless, in this form the system worked successfully for its first year, until we got around to combing the architecture into shape.

Client module


Today, the presence of a client module in a system based on a web application does not cause anything but confusion. And this is understandable, because now there is practically no software, which, in principle, is not able to access the web service. But back in 2007 there were more such systems, and M.Video’s ticket office was among them. The only way to interact with cash processing systems was file sharing. And we are talking not only about our system, but also about processing bank cards! Now if someone tells about the transfer of banking information through unencrypted files, it will only cause a smirk. But it's not about that. Cashier M.Video could not independently access the web service we are creating. She could only create a request file and wait for the response file in the data exchange folder. There was a question about the development of a client module, which, in essence, was to become a data transport of the form [file] <---> [service]. And again the classic task is to develop a lightweight and very reliable (cash register of a retail store, after all!) Client module that will perform this rather clumsy task.
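
Mechanically, such a transport is just a polling loop over the exchange folder. A minimal sketch of how that loop can look in VBScript is below; the folder path, file extensions and service URL are my assumptions for illustration, not the real protocol:

    Option Explicit

    ' Hypothetical sketch of the [file] <---> [service] transport loop.
    Const EXCHANGE_DIR = "C:\POS\Exchange"
    Const SERVICE_URL  = "http://processing.example.local/req.asp"

    Dim fso, http, f, requestText
    Set fso = CreateObject("Scripting.FileSystemObject")

    Do While True
        For Each f In fso.GetFolder(EXCHANGE_DIR).Files
            If LCase(fso.GetExtensionName(f.Name)) = "req" Then
                ' Read the request file produced by the cash register software
                requestText = fso.OpenTextFile(f.Path, 1).ReadAll

                ' Forward the request to the processing web service
                Set http = CreateObject("MSXML2.ServerXMLHTTP")
                http.open "POST", SERVICE_URL, False
                http.send requestText

                ' Write the response file the cash register is waiting for
                fso.CreateTextFile(fso.BuildPath(EXCHANGE_DIR, _
                    fso.GetBaseName(f.Name) & ".res"), True).Write http.responseText

                fso.DeleteFile f.Path
            End If
        Next
        WScript.Sleep 200   ' poll the exchange folder several times a second
    Loop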

And here we applied the most interesting solution in the whole product, in my opinion: writing the client in VBScript. The plain kind, executable by the standard Windows interpreter. I am sure you are surprised. I will say more: there was not a single person on the project team who reacted otherwise. The things I heard about this idea while working on the project: "this is not serious!", "are you kidding me?!", "did I mishear, or are you going to write a corporate system module in VBScript?!", "nobody will be able to support this solution", and the like. At the time I even wrote down the most aggressive and the funniest phrases, but, unfortunately, I lost that file. And yet this idea turned out to be not only the most unusual in the whole product, but also one of the most effective. Here is why.

The biggest problem with clients of this kind is updating them remotely and centrally. Say the format of the exchange file changed on the cash register side - a typical situation. This means the clients at all cash desks need to be updated, and M.Video had about a thousand of them in 2008, mind you. Clearly, sending engineers around all the stores amounts to arranging the sunset by hand. What is needed is a mechanism by which the client module, while talking to the web service, realizes it is outdated, downloads the new version and replaces itself with it. The last step is the hardest: on Windows, a running process cannot replace its own files. That can be done only after the application terminates. So one has to build assorted crutches: display a message like "An update must be installed, please restart the computer", then run an updater that refreshes the client files before the next launch. None of this is a problem for the developer. It is a problem for the user. There is nobody "happier" than a cashier who, on the Saturday evening before New Year, with the store packed to capacity, sees that message on the register screen while serving a customer - except perhaps the customer being served. This is a real problem. And to this day (and it is 2017 already, ladies and gentlemen), most systems work exactly this way.

But I stood and stand for quality software, so in our system this problem was solved. That same much-maligned VBScript helped us, namely one of its remarkable properties: a VBS file can be overwritten without interrupting its execution. We implemented a mechanism in which the script, seeing from the web service's response that an upgrade was required, downloaded a new version of itself (again, VBS files weigh nothing), overwrote its own file on disk, launched the new version of itself and, as the final chord, terminated its old self. From the user's point of view it looked like... nothing at all. And that is the greatest value of the idea. The user could not even tell that an update had happened. The cash register, serving its queue, processed transaction N on the previous version of the client module and transaction N + 1 on the new one, with no delays and no other visible signs of what had occurred. The update was managed centrally by uploading a new VBS script to the server and bumping the current version number in the corresponding database table.
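
For the curious, here is a hedged sketch of the trick; the version check, URL and error handling are illustrative, and the real client did more bookkeeping:

    ' MY_VERSION is baked into each released copy of the script.
    Const MY_VERSION = 41

    Sub SelfUpdate(serverVersion)   ' serverVersion comes from the response
        If serverVersion <= MY_VERSION Then Exit Sub

        Dim fso, http
        Set fso = CreateObject("Scripting.FileSystemObject")
        Set http = CreateObject("MSXML2.ServerXMLHTTP")

        ' Download the new version of this very script
        http.open "GET", "http://processing.example.local/client.vbs", False
        http.send

        ' Overwrite our own file on disk. This works because the Windows
        ' Script Host reads the whole script into memory before running it,
        ' so the executing copy is unaffected.
        fso.CreateTextFile(WScript.ScriptFullName, True).Write http.responseText

        ' Launch the new version and terminate the old one
        CreateObject("WScript.Shell").Run _
            "wscript.exe """ & WScript.ScriptFullName & """"
        WScript.Quit
    End Sub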

First steps in production and the first improvements


To be honest, after the system went live in March 2008, I expected a huge flow of incidents in which, it seemed to me, our small team was bound to drown (I remind you there were only two of us). That did not happen. There were incidents, of course, but nothing epic. It quickly became clear that the scripts' structure needed at least some organizing. The first requests to improve the system's web interface arrived. True to tradition, the customer could understand what he really needed only after receiving the first version of the product. Nothing unusual was created at this stage, so I see no reason to go into details. The main thing I want to note is that the system worked in production immediately and very well, despite all the fears around the "revolutionary" nature of some of the decisions.

The evolution of the system and the dangerous holidays


In this chapter I want to describe the path the system has traveled from 2008 to the present, and to talk about one (just one, otherwise this would have to be published as a book) very serious and, from a technical point of view, very interesting incident that happened in 2013.
So, in March 2008, the processing system's database held information about roughly 16 million M.Video gift cards, and system performance reached 30 transactions per second against a target of 16. Now the database holds about 120 million entities (not only cards), and performance reaches 700 transactions per second. The path, as you can see, is long and uphill all the way. I have tried to mark the main turning points along it on a timeline (Fig. 2).


Fig. 2

Growth is great, but we gain the most valuable experience, as a rule, at the moments when something does not go as expected. And when things go so wrong that the processing system of a large federal retailer stops working, the amount of experience gained per unit of time is multiplied by ten at least.

It was Friday, March 8, 2013. A relaxed holiday morning foretold nothing bad. Around noon my mobile phone rang, and the tense voice of the IT director said that "processing is not working". I was, of course, slightly alarmed that the IT director had called personally (incidents like this usually came to me through technical support), but at first I still did not take the information seriously. The reason was that during the system's first years I had received reports with exactly that wording with enviable regularity, and in 99.9% of cases it turned out that a cleaner had knocked the Ethernet cable out of a cash register with a mop, a store's communication channel had gone down, a recently opened store had the processing endpoint addresses misconfigured, and so on. I lazily got out the laptop, connected remotely to my servers, started watching the live transaction flow and saw that the processing... was working! Transactions were going through, and in a very powerful stream. After some time on the phone with technical support, I realized that the processing simply could not cope with the load. To its credit, it did not fall over (as many systems like to do in such situations), but it could not serve all the transactions arriving at it (as we later estimated, it managed to process about 2/3 of the total flow).

The result was a "lottery" for M.Video customers: lucky or unlucky. Some were fine; others got a timeout from the processing. But from the outside, technical support could not diagnose that the processing was somehow still working, because 1/3 of that day's customers is, excuse me, about 80 thousand people. This crowd of "offended" customers generated so many incidents that the technical support phone lines were overloaded within minutes; an avalanche of identical reports rose from stores scattered across the country (which looked especially threatening), and within 15 minutes the entire management vertical of the company, up to the owners, knew about the situation. I promptly rolled out some performance improvements, but I cannot say anything about their effectiveness, because by 2 p.m. the traffic had naturally subsided, and the problem vanished as suddenly as it had appeared. Some nuances of that day's social and technical situation are still not completely clear to me. Below I will share what the debriefing established.

The first factor that provoked the situation so colorfully described above was an M.Video promotion using cards of a small face value: 500 rubles. Promotions with cards of that denomination have not been held in the company for several years now, but in 2011-2013 they were. The point is that such a promotion issues more cards than a promotion with, say, 1,000-ruble cards. Accordingly, when customers spend the issued cards, the load on the processing is higher. The second negative factor was International Women's Day. And this factor is still not completely clear to me. Let's reason.

Anyone involved in maintaining 24/7/365 services knows that the most terrible time is New Year, or rather the period from December 25 to 31. This is understandable and logical: people are stocking up on holiday gifts. According to statistics, New Year is the most popular holiday in Russia; from M.Video's experience I can confirm that with certainty. Every December, M.Video runs a powerful federal gift card promotion, which raises the processing load even further: on top of the already heightened shopper activity comes the activity provoked by the promotion. Nothing could compare to the New Year load of the second half of December. And yet here the load exceeded it, as we later estimated, by a factor of 1.5 - and in March! Granted, it was also a holiday; granted, a huge number of gallant gentlemen came to the stores to buy gifts for their ladies. But why did the surge last so briefly (the peak held from 12 to 14 o'clock on March 8)? Did the male half of humanity not bother to buy gifts in advance, everyone rushing out just in time for the festive dinner? Why was nothing of the kind observed in the middle of the day on February 23 of the same year? Do women especially like receiving their gifts right on the holiday? Or are women simply more responsible and buy their gifts in advance?

I have no unequivocal answer to these questions, so I invite you to speculate on the topic yourselves (write your versions in the comments). I believe the main cause of the processing collapse was the combination of the small face value (and therefore large number) of gift cards in the promotion with the increased consumer activity around the holiday. The socio-psychological side of what happened remains unclear to me to this day. Apart from March 8, 2013, no other incident of comparable scale and consequences has occurred in the system's entire history - largely thanks to a large-scale reworking of the system that touched the seemingly unshakable architectural principles it had rested on until then. The work done was tremendous: essentially, a production system that had traveled a long evolutionary path and accumulated a very decent mass of code was rewritten from scratch. The result of this work was the architectural platform on which the system runs to this day. And I am grateful to fate that in 2013 the need arose to rework the architecture so seriously: it helped avoid many grave problems then and still lets me evolve the system flexibly and quickly. As you can see, every cloud has a silver lining. Holidays are not only dangerous, but sometimes very useful from a technical point of view.

Instead of a conclusion


I really dislike it when long articles end with conclusions distilled from several pages of reading. A tired reader, even if he finds the strength to reach the end, will surely remember nothing of them. This article turned out long because the system's rather long life cannot be retold briefly. It could be, of course, but only by omitting the details - and the devil, as you know, is in the details. So I did not cut the article. As compensation for your patience and determination, I will not draw any conclusions here. I am sure that everyone, while reading, noticed situations familiar to them (from the good side or the bad) and/or interesting. It seems to me that over its life the M.Video processing system has accumulated quite a few illustrative situations whose description may help someone. That, in fact, was the main motivation for writing this article.

And finally, I want to quote one person who is very famous, though not in IT circles. Samuel Colt, inventor of the legendary revolver, said: "It is not the weapon that kills, it is the man." Such a quote may seem strange in an article about IT, but I think it is very much to the point here. Too often I see decision makers in the IT world naively believing that it is the weapon that kills...

Source: https://habr.com/ru/post/343848/

