The customer for this project was the Moscow Institute of Physics and Technology (MIPT). It is a great honor to carry out a large and responsible project for the legendary Phystech, but the responsibility is equally serious.
The work was to be carried out in the institute's data center in Dolgoprudny, Moscow Region, the heart of all MIPT information systems. It houses computing facilities used both for scientific and academic work (modeling, calculations) and for administrative purposes (mail and communications, accounting, etc.).

The facility
Over time, the data center's capacity became insufficient. In addition, many faculties and departments ran their own servers, which they maintained themselves. The institute's management decided to consolidate computing power in a modernized and expanded data center, designed to house more powerful (and therefore more energy-intensive) IT equipment. The new data center had to meet modern requirements for power, reliability and fault tolerance.
Our task was to deliver the engineering infrastructure systems, namely:
• power distribution and uninterruptible power supply system;
• air conditioning system;
• structured cabling system (SCS);
• an automated dispatch control system for the entire facility.
The technical solution we proposed was recognized as the best among the competing bids; we won the tender and could get down to business.
General construction preparation
We had to start with the dismantling and disposal of obsolete equipment, which meant almost the entire old air conditioning, power supply and cabling (SCS) systems. Only four air conditioners were kept from the old installation.
On the roof of the building we dismantled two cooling towers that had previously been used for cooling. We then designed, manufactured and installed metal support structures on the roof for the new external cooling units.
Next, we prepared the machine room itself and the room for the distribution node (tanks, pumps, heat exchangers) for the installation of the new equipment.
To protect the equipment from leaks in the event of water supply or heating accidents in the rooms above, we provided a system that collects water in the space above the suspended ceiling, guides it along the slope of the ceiling plane and drains it away.

Uninterruptible power supply and power distribution
The new data center is designed for a total consumption of 180 kW. Power is supplied separately to the computing and the engineering equipment. The computing equipment (16 server cabinets and 2 switching cabinets) accounts for 141 kW.
For the computing equipment we implemented 2N (N + N) redundancy, using two modular APC by Schneider Electric Symmetra PX 160 kW UPSs.
The redundancy level for the engineering equipment (the main consumers here are the chiller circulation pumps and the water circuit pumps) is N + 1. It is powered by a modular APC by Schneider Electric MGE Galaxy 3500 20 kW UPS.
When mains power fails, the UPSs provide at least 15 minutes of autonomy, which is enough to start the backup power source and transfer the load to it.
The entire uninterruptible power supply system is designed so that it can be maintained and upgraded on the fly, without taking the whole complex out of service.
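The 2N scheme above can be sanity-checked with a few lines of arithmetic. This is only an illustrative sketch using the figures quoted in the text (141 kW IT load, two 160 kW Symmetra PX units); the function name is our own.

```python
# Sanity check of the UPS sizing described above (figures from the text).
IT_LOAD_KW = 141      # 16 server cabinets + 2 switching cabinets
UPS_UNIT_KW = 160     # Symmetra PX rating per unit
UPS_UNITS = 2         # 2N (N + N) scheme

def survives_single_ups_failure(load_kw, unit_kw, units):
    """In a 2N scheme the full load must fit on the capacity that
    remains after one complete UPS system fails."""
    remaining_kw = (units - 1) * unit_kw
    return load_kw <= remaining_kw

print(survives_single_ups_failure(IT_LOAD_KW, UPS_UNIT_KW, UPS_UNITS))  # True: 141 kW <= 160 kW
```

With only a single 160 kW unit and no redundancy, the same check would fail, which is why the N + N pair is needed for concurrent maintainability.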


Air conditioning and ventilation
An air temperature of 20-25 °C and a relative humidity of 40-65%: this microclimate must be constantly maintained in the machine room and in the UPS room. It protects the equipment not only from overheating, but also from failures caused by condensation or static discharge.
We decided to build the new air conditioning system with two circuits. Water serves as the heat carrier in the internal circuit, while a 40% ethylene glycol solution circulates in the external circuit. This scheme avoids two problems: freezing of the coolant in the pipes outside the building and the presence of hazardous ethylene glycol inside the machine room.
Let's start with the internal circuit. It consists of two subsystems:
• an inter-row air conditioning system for the server racks,
• an air conditioning system for the UPS room.
The machine room contains two rows of server racks (18 cabinets in total), facing back to back. Between them we built a “hot aisle”, isolated from the surrounding room by doors and panels. Eight APC InRow RC inter-row air conditioners installed in the hot aisle extract heat from it and blow the cooled air into the room outside the aisle. From there the air is fed under static pressure to the front of the racks and driven through them once again.
In the UPS room, two Carrier ducted fan coil units were installed to supply cold air and remove hot air; a third fan coil unit is kept in reserve.
The external circuit of the air conditioning system is served by chillers. Two Uniflair by Schneider Electric chillers (one primary, one standby) with a cooling capacity of 185 kW each were custom-built by the manufacturer for this project and installed on the roof on the specially prepared metal structures.
When the outdoor temperature is +5 °C or below, the chillers switch to free cooling mode: the coolant is cooled by the outside air, which reduces power consumption.
To keep the air conditioners operating at low temperatures, winter start-up kits are provided, along with heating of the drain openings used to discharge condensate outside the building.
Degassing devices are installed in the hydraulic circuit, and shut-off valves are fitted at every point where hydraulic components may need to be disconnected from the network for maintenance and repair.
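The free-cooling switchover can be pictured as a simple threshold rule. This is a hedged illustration, not the chillers' actual control logic; the +5 °C threshold comes from the text, the function name is our own.

```python
# Illustrative sketch of the free-cooling mode selection described above.
FREE_COOLING_THRESHOLD_C = 5.0  # threshold quoted in the text

def select_cooling_mode(outdoor_temp_c):
    """Below the threshold the coolant can be cooled by outside air
    alone (free cooling); above it the compressors must run."""
    if outdoor_temp_c <= FREE_COOLING_THRESHOLD_C:
        return "free_cooling"
    return "compressor"

print(select_cooling_mode(-10.0))  # free_cooling
print(select_cooling_mode(20.0))   # compressor
```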


Structured Cabling System
A new structured cabling system for transmitting digital and analog data, consisting of copper and optical parts, was built for the newly installed cabinets. Its architecture and performance parameters comply with a number of international ANSI standards and with the Russian GOST R 53346-2008.
The copper subsystem is built on Category 6A F/FTP cable. A 24-port patch panel with cable organizers was installed in each server cabinet, and 24 F/FTP cables were laid from each new cabinet to each cross-connect cabinet. The subsystem is based on the Huber+Suhner LiSA Solutions modular cabling system, which had only gone on sale at the end of 2013; this was the first installation of the system in Russia!
The optical subsystem. Each new cabinet is connected to the main optical cross-connects by two preterminated 12-fiber multimode cables, with an optical cassette installed in each cabinet. The fiber optic equipment is also manufactured by Huber+Suhner.


Automated dispatch control system
The system serves the operator's workstation. It makes it possible to monitor the engineering systems and control them remotely in real time, can alert personnel to emergencies (for example, leaks), maintains an archive of process data and can generate reports. The system is built on Delta Controls modular controllers and has a three-tier architecture.
Sensors and actuators form the lower tier of the system. This is where primary data is collected from the sensors (temperature, pressure, flow, electrical parameters) and where the equipment (valves, dampers, relays) is controlled directly.
The middle tier consists of controllers that receive data from the lower tier and pass it on to the upper tier. The controllers also generate control signals for the actuators according to the programmed logic.
The upper tier is responsible for final data processing and interaction with users. Here all data is aggregated and processed, and every event in the system, including user actions, is logged. This tier comprises server hardware and software for polling, storing and visualizing data (SCADA). The user interface displays equipment parameters and controls in an intuitive form. Visualization is implemented with the ORCAview software.
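The three tiers above can be sketched as a minimal data flow: sensors are polled by a controller, which forms control signals and forwards readings to a server that archives them and logs events. This is purely illustrative (not the actual Delta Controls or ORCAview implementation); all class names, the setpoint and the readings are our own assumptions.

```python
# Hypothetical sketch of the three-tier flow: sensor -> controller -> SCADA server.
class Sensor:
    """Lower tier: a primary data source (temperature, pressure, flow, ...)."""
    def __init__(self, name, read_fn):
        self.name, self.read_fn = name, read_fn
    def read(self):
        return self.read_fn()

class Controller:
    """Middle tier: polls sensors and forms control signals per its program."""
    def __init__(self, sensors, setpoint_c=25.0):
        self.sensors, self.setpoint_c = sensors, setpoint_c
    def poll(self):
        readings = {s.name: s.read() for s in self.sensors}
        # control signal for an actuator: engage cooling above the setpoint
        readings["cooling_on"] = readings.get("temp_c", 0.0) > self.setpoint_c
        return readings

class ScadaServer:
    """Upper tier: aggregates data, keeps an archive, logs events."""
    def __init__(self):
        self.archive = []
    def ingest(self, readings):
        self.archive.append(readings)              # archive of process data
        if readings.get("cooling_on"):             # event registration
            print(f"EVENT: cooling engaged at {readings['temp_c']} °C")

ctrl = Controller([Sensor("temp_c", lambda: 27.5)])
scada = ScadaServer()
scada.ingest(ctrl.poll())  # archives the reading and logs the over-setpoint event
```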

Warranty and service
We have committed to five years of warranty and service. Moreover, the contract covers not only the new equipment installed during the project, but also the four pre-existing APC air conditioners.
The warranty service terms include dispatching a service engineer for diagnostics, supplying components and materials, and carrying out repair and restoration work. After-sales service includes preventive maintenance at least twice a year, with the necessary components and consumables included.

Results
In parallel with the design, equipment supply and installation work, detailed documentation was developed and compiled in accordance with GOST requirements. It describes the system in detail, together with the rules and regulations for its operation.
Among the documents prepared was the “Program and Test Procedure”, on the basis of which acceptance tests were carried out. Testing took place in conditions as close as possible to real operation: for example, to test the cooling system in the absence of real servers, a heat gun was brought into the machine room. All tests passed successfully, confirming the performance of every subsystem and full compliance with the requirements.
After handover, the customer filled the data center with server, communications and other necessary equipment and put it into commercial operation. Operation under real conditions has shown that the data center meets all the requirements for power, reliability and fault tolerance.
The energy-saving technologies used in the project ensure high energy efficiency of the data center:
• the isolated “hot” aisle is one of the most efficient server cooling arrangements known in terms of price/performance;
• the Uniflair by Schneider Electric chiller-based cooling system with free cooling saves up to 30% of annual energy consumption.
According to preliminary estimates, the Power Usage Effectiveness (PUE) of the MIPT data center after the modernization is 1.5, which indicates a high level of energy efficiency.
The MIPT data center corresponds to reliability level Tier III under the international TIA-942 standard for data center infrastructure, with an availability of 99.982%.
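Both headline figures follow from simple formulas. The sketch below shows how they are computed; the 210/140 kW pair is a hypothetical example chosen to yield a PUE of 1.5 (it is not the facility's measured draw), while the 99.982% availability figure comes from the text.

```python
# Back-of-the-envelope checks of the figures quoted above.
def pue(total_facility_kw, it_load_kw):
    """PUE = total facility power / IT equipment power (1.0 is the ideal)."""
    return total_facility_kw / it_load_kw

def annual_downtime_hours(availability):
    """Expected downtime per year implied by an availability figure."""
    return (1.0 - availability) * 8760  # hours in a non-leap year

# Hypothetical 210 kW facility draw with 140 kW of IT load gives PUE 1.5,
# i.e. 0.5 W of cooling and power-distribution overhead per watt of IT load.
print(pue(210, 140))                              # 1.5
print(round(annual_downtime_hours(0.99982), 2))   # 1.58 hours/year, typical for Tier III
```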

Softline team