
At the request of Habr readers, friends, and acquaintances, I am posting more detailed information about our data center:

It all started about two years ago, in an Italian restaurant, over a plate of pasta :)
The idea had been knocking around in our heads for a long time, and that evening it finally found an outlet. A little later we decided to actually start building this thing. Initially the plan was to build the "Irtyshsky" data center; the ambitions were big, but the technical difficulties were enormous, so we decided to save the Irtysh site "for dessert" :) and took on "Slavyansky" instead.

A bit of background: we spent a long time deciding what to call the project. In the end we settled on "M77" as the umbrella name, following the pattern of M9, M10, M5 (the MMTS exchanges) and Stack's M1, while locally we decided to distinguish the sites geographically :) And so it turned out: "Slavyansky" sits on Slavyansky Boulevard, and "Irtyshsky" on Irtyshsky Drive :)

Initially we had at our disposal a separate two-storey building with no fixed purpose. We picked out the rooms and started thinking. At first we wanted to outsource everything and bring in data-center-construction "specialists": we invited one firm, then another, then a third, and eventually realized that everyone building data centers in Russia is of the "old" mindset, people who don't really follow industry trends and often have a poor grasp of the subject. After about 10 months of deliberation and pre-project research we decided to build everything with our own hands (it helps to have a telecom operator, a construction holding and other related companies in your pocket). That is how we ourselves became data center builders :). Later we got acquainted with the excellent German company Knuerr and the head of its Russian office, who offered us the equipment we ended up building around, and from which all the other systems "danced".
I want to note that since construction began on the eve of the crisis, which we sensed coming about 3-4 months in advance, we decided on phased construction and commissioning. The key principle was step-by-step growth: it is more economical, and beyond that, each system had to be designed so that if it fails or is taken offline for maintenance, the operation of the data center as a whole is not affected. Vendors were chosen, systems designed and equipment purchased with this in mind.

90% of the technologies we applied are new not only for Russia (which, frankly, lags behind the rest of the world) but also for Europe and America. We built things that had previously existed only in words or on the projector slides of forward-looking developers, and ran into plenty of subtleties, questions and misunderstandings along the way. The result is the first stage of the Slavyansky data center. Below are more details on each part of it (we took the following document as a basis: SP-3-0092, the TIA-942 standard, revision 7.0, February 2005):

1. Since the site, the building and the grounds were already at our disposal, we immediately satisfied the Western standards (which I'm not overly fond of :)):
- a dedicated, stand-alone building;
- located on the ground floor;
- floor load rating up to 2 t/m2 (monolithic building);
- wide corridors that allow moving bulky equipment;
- high ceilings (~5 m);
- no columns or other structural elements in the machine room;
- all walls coated with special light-colored paints that minimize dust formation;
- we decided at the very start to do without raised floors as unnecessary and laid porcelain stoneware flooring instead;
- we separated the machine room from the administrative and technical areas and created a sealed zone;
- we opted for LED lighting (more light, less energy).
I won't bother writing about trivia like seismic resistance and the rest, it's all clear anyway :)

2. We already had our own distribution substation (RTP) on the premises, with two independent feeds from the city coming from different CHP plants (Category 1 power supply). Accordingly, we ran two cable entries into the building through monolithic concrete channels (just in case), added a diesel generator set and fitted the substation with the appropriate automation (automatic transfer switch, etc.). The original plan called for a dedicated power-entry room, but we later dropped that scheme ("D&G": expensive and pointless) and decided to place the power equipment directly in the hall, at the end of each row. Each power cabinet has two independent feeds from the substation. We wanted to buy the power and automation cabinets from vendors, but ran into a problem: the lead times were far too long, and the resilience of the whole enterprise became questionable, since if some component in a power cabinet fails you will have to wait a very long time for a replacement (as a rule nobody keeps such assemblies and units in stock, and that is an enormous problem if you think about it). And since we happen to have a Rosatom plant that manufactures power units "in our pocket", we did everything ourselves, to our own designs, buying only components from leading vendors, parts that are easy to source and can be replaced quickly with our own resources.
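To illustrate the redundancy idea described above, here is a minimal sketch (the source names and the simple set-based model are my own assumptions for illustration, not the actual automation logic) that checks whether a power cabinet stays energized when any single source fails:

```python
# Toy availability check for a power cabinet fed by two independent city feeds
# (from different CHP plants) plus a diesel generator behind the transfer switch.
SOURCES = {"feed_A", "feed_B", "dgu"}  # hypothetical names, not from the project

def cabinet_powered(failed: set) -> bool:
    """The cabinet keeps running as long as at least one source survives."""
    return len(SOURCES - failed) >= 1

# Any single failure must leave the cabinet powered.
for single_failure in SOURCES:
    assert cabinet_powered({single_failure}), f"cabinet lost on {single_failure} failure"

# Even losing both city feeds is covered by the genset (while fuel lasts).
assert cabinet_powered({"feed_A", "feed_B"})
print("single-failure check passed for all sources")
```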
[photos]

3. Heat removal is a headache for everyone in this field; it is a topic of discussion at forums and conferences, in the lobbies and in integrators' offices :) A great deal of research has already been done on the subject: some propose putting the data center on a barge, others at the North Pole, and so on. In my opinion the most sensible scheme is hot/cold aisles. It sounds like the simplest option, but we went a step further thanks to the engineers at Knuerr, who came up with an ingenious and simple heat-removal design: in their implementation the cabinet and the hot/cold aisle system are combined in a single "box" :) and as a result we have what we have.

To this day most designers offer the standard heat-removal scheme, distributing cold air from under a raised floor or something similar; in the end you get at most 10 kW of heat removed per rack, and that's the ceiling. That option did not suit us, for one simple reason (the "D&G" problem again :)): the data center floor space and its power capacity (we have 4 MW available, by the way) would be used inefficiently. As I wrote above, we met the German company Knuerr and its unique designs. As a result we assembled a heat-removal system rated at 18 kW per cabinet (in my opinion you really don't need more, at least for the next 3 years; after that we can buy additional modules and get 20-22 kW per cabinet).
The cabinets are cooled by water (the internal circuit); a second circuit runs on refrigerant (chillers plus outdoor units). I won't describe the system in detail (it's too long and complicated), but the essence is very simple: the hot/cold aisle scheme is implemented inside sealed cabinets.
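For a sense of scale, here is a rough back-of-the-envelope calculation of the chilled-water flow needed to carry 18 kW away from one cabinet (the 18 kW figure is from this post; the temperature rise across the heat exchanger is an assumed value, not a project number):

```python
# Rough sizing sketch: water flow needed to remove a cabinet's heat load.
# Q = m_dot * c_p * dT  =>  m_dot = Q / (c_p * dT)
HEAT_LOAD_KW = 18.0   # heat removed per cabinet (from the article), kW
CP_WATER = 4.19       # specific heat of water, kJ/(kg*K)
DELTA_T = 6.0         # assumed water temperature rise, K

mass_flow = HEAT_LOAD_KW / (CP_WATER * DELTA_T)   # kg/s (~= litres per second for water)
print(f"required water flow per cabinet: {mass_flow:.2f} L/s "
      f"(~{mass_flow * 3600 / 1000:.1f} m^3/h)")
# -> roughly 0.72 L/s, i.e. about 2.6 m^3/h per fully loaded 18 kW cabinet
```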

The equipment is protected from dust, unauthorized physical access, moisture, and so on.
Since the refrigerant pipes run underneath, the cabinet systems were raised above the floor: metal structures divide the underfloor space into levels, the clean-floor level and the level of the electrical cabinets and the UPS (the UPS is also modular and compact, sized at 22 kW per cabinet, which allows capacity to be increased smoothly and modules to be swapped quickly in case of failure, unlike "large" monolithic systems).
[photos]

All the chiller and pump systems are located underground (we were lucky: the building was originally designed and built with a 3-level underground parking garage :)
[photo]

4. The communication channels are, naturally, optical fiber :)
Our own trunk lines to MMTS-9 and MMTS-10, with total capacities of 160 and 320 Gb/s respectively, give us plenty of room to grow :)
Today the network is built on Cisco Systems equipment, but we are inclined to change vendors: Cisco gear is very expensive, and its performance at high speeds falls somewhat short of what the brochures and presentations claim (this is no secret). We are currently testing equipment from different manufacturers; as soon as we settle on something specific, I'll write it up in a separate post :)
For customers' convenience we also built a 144-fiber entry into the data center building and placed the splice closure in the utility tunnel, so any client can reach us with their own fiber.

5. The cable routes run under the ceiling, carrying both power and fiber. I want to note that the entire internal network infrastructure is built on optics: it's simpler, more convenient and much lighter (copper is still too heavy), and customers have become demanding these days, asking for wide channels that are foolish or simply impossible to deliver over copper :) We used three-dimensional wire-tray systems (Defem, Sweden), very convenient and good-looking :)
[photos]

All this allowed us to put unprecedented capacity onto an area of 600 m2: we built a machine room for 288 cabinets (12,096 U). This heat-removal scheme lets us use cabinet space and data center floor area with maximum efficiency.
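A quick sanity check of those figures (a sketch; the only inputs are the numbers quoted in this post):

```python
# Back-of-the-envelope density figures from the numbers quoted above.
CABINETS = 288
TOTAL_UNITS = 12_096          # rack units across the machine room
FLOOR_AREA_M2 = 600
SITE_POWER_KW = 4_000         # 4 MW total power capacity
COOLING_PER_CABINET_KW = 18   # heat removal per cabinet

print(f"units per cabinet: {TOTAL_UNITS / CABINETS:.0f} U")                    # 42 U
print(f"floor area per cabinet: {FLOOR_AREA_M2 / CABINETS:.2f} m^2")           # ~2.08 m^2
print(f"average power budget per cabinet: {SITE_POWER_KW / CABINETS:.1f} kW")  # ~13.9 kW
# i.e. the cooling capacity (18 kW/cabinet) exceeds the average electrical
# budget (~13.9 kW/cabinet) even with all 288 cabinets installed.
```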
This became the only facility in Russia able to host genuinely power-dense server equipment. We have all seen plenty of data centers with "skeletons" instead of racks: a cabinet holding a single large Cisco box, or a dozen or two servers, with the rest of the space empty, because there is no way to deliver that much power or remove that much heat :) Now that possibility exists. You can implement projects that until today were unprofitable or simply pointless because of their power requirements, and you can optimize your resources. From my experience talking to customers: people who come looking for 10 racks are often very surprised when I fit their equipment into 5-6 racks :)

If anyone has questions, write in, ask away, I'll be happy to answer them all :)

Special respect to dug for explaining how to upload pictures to Habr :)

Source: https://habr.com/ru/post/71167/
