
This article looks at the data centers of one of the most commercially successful startups, a rapidly growing company whose network involves roughly one seventh of the world's population.
In 2004, few would have guessed that an idea by Harvard students to build a modest site for communication within their university would turn into a global social network with an audience of over one billion people. The company's IT infrastructure grew just as rapidly: in 2004 there was only a single rented server, while five years later Facebook had become the most visited site on the Internet, handling more than one trillion requests per month (according to Google DoubleClick) and accounting for about 9 percent of all global Internet traffic, with 300 million downloads daily and a base of more than 550,000 applications running on the Facebook Connect platform.
To support this level of activity, the company today operates three data centers of its own, with a fourth under construction. In addition, space is leased in nine third-party data centers: so far Facebook cannot do without rented floor area. Space is rented in six Silicon Valley data centers in Santa Clara and San Jose (about 4,500 square meters in facilities of CoreSite Realty, Terremark Worldwide (TMRK) and Equinix (EQIX), plus more than 2,000 square meters in a Fortune data center in San Jose), in three data centers in Virginia (a total of about 12,500 square meters in Digital Realty Trust facilities and 15% of a DuPont Fabros Technology data center), as well as in a European data center owned by the Telecity Group.
Facebook's first data center of its own was built in 2010 in the small town of Prineville, Oregon, USA (photo 1). A year later a second, backup data center was built in Forest City, North Carolina (photo 2). It is essentially an upgraded version of the Prineville facility that takes into account the operating experience of the first data center. The first construction phase in Prineville has a capacity of 28 MW (the site is planned to grow in three phases to 78 MW, for which another hangar, this time without office premises, is being built next to the existing one). The second data center, in Forest City, has a more modest expansion target of up to 40 MW. All the data centers are combined into a single cluster that functions as a content delivery network: a user's request is analyzed to determine the shortest network path, and content is delivered from the nearest node, which significantly reduces response time, the number of intermediate hops and transit traffic.
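The article does not describe the actual request-routing logic, so the following is only a minimal illustration of the nearest-node idea: the serving site with the lowest measured round-trip time to the user wins. The site names and latency figures are hypothetical.

```python
# Illustrative nearest-node selection for a content delivery network.
# Site names and RTT measurements are hypothetical, not Facebook's data.
measured_rtt_ms = {
    "prineville": 84.0,    # round-trip time from the user to each site, in ms
    "forest_city": 61.0,
    "lulea": 142.0,
}

def pick_serving_site(rtt_by_site: dict[str, float]) -> str:
    """Return the site with the smallest round-trip time to the user."""
    return min(rtt_by_site, key=rtt_by_site.get)

print(pick_serving_site(measured_rtt_ms))  # -> forest_city
```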

Photo 1. Prineville data center

Photo 2. Data Center in Forest City
The data centers are built away from cities, close to industrial areas. The first thing that draws attention is the energy efficiency: a PUE of 1.07 for the Prineville data center and 1.09 for Forest City. According to Facebook, this result was achieved by reducing losses in power transmission and transformation and by allowing higher operating air temperatures inside the data center (up to +35 °C is permitted at the racks in the cold aisle).
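For reference, PUE (Power Usage Effectiveness) is the ratio of total facility power to the power that actually reaches the IT equipment. A short worked example, treating the 28 MW first-phase capacity as the total facility draw purely for the sake of the arithmetic:

```latex
\mathrm{PUE} = \frac{P_{\text{facility}}}{P_{\text{IT}}},
\qquad
\text{e.g.}\ \frac{28\ \text{MW}}{26.2\ \text{MW}} \approx 1.07
```

In other words, at a PUE of 1.07 only about 7% of the power drawn from the grid goes to anything other than the IT load.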
After analyzing the geography of requests, 70% of which came from Europe, Facebook decided to build another data center outside the US, which it did in 2011. A site was chosen in a cold climate (the average annual temperature around the town of Luleå in Sweden is only 1.3 °C, and the town lies just 100 kilometers from the Arctic Circle), where a building 15 meters high with an area of 27,000 square meters was erected (photo 3). The nearby Lule River and the hydroelectric power station built on it settled the questions of water and power supply for the data center. The data center has communication channels from five independent telecom operators. Even though the site's capacity is 129 MW, the project took only a year to complete.
Another large Facebook data center is known to be under construction in Taiwan.

Photo 3. Data center in Luleå
Features of the implementation
The data centers do not use traditional UPS units: the voltage stabilization function is taken over by the server power supplies. Supplied by the Taiwanese company Delta Electronics and the American Power-One, the power supplies are custom-built and designed for an extended input voltage range. According to the company, the wider operating voltage range eliminates the need for separate stabilizers, while batteries with chargers are placed in racks between the rows. Each battery rack holds 20 batteries connected in 5 chains, producing 48 V DC at the output (photo 4).
The server racks carry two power buses at once: a 48 V DC bus for backup power from the batteries and an AC bus connected directly to the 0.4 kV grid. The equipment installed in the cabinets is connected to both buses simultaneously, so if power fails on the AC bus, the load is fed from the battery rack.
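A plausible reading of this arrangement, assuming standard 12 V battery blocks (the block voltage is not stated in the article): four batteries in series per chain form the 48 V bus, and the five chains are connected in parallel for capacity.

```latex
U_{\text{chain}} = 4 \times 12\ \text{V} = 48\ \text{V},
\qquad
5\ \text{chains} \times 4\ \text{batteries} = 20\ \text{batteries per rack}
```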

Photo 4. Battery cabinet located between server racks
Power cables are laid in trays under the raised floor, while low-current cables run on cable racks above the cabinets in the cold aisles. All lighting in the data center is provided by LED fixtures (photo 5), but there is no lighting in the hot aisles: according to the company, it is not needed, since all maintenance and cable connections are performed exclusively from the front of the racks.

Photo 5. The data center lighting is in the corporate color
The data center does have a traditional cooling system, but it is used only in emergencies. The main air conditioning system is direct free cooling, with outside air passing through several air preparation chambers (diagram 1).

Diagram 1. Schematic diagram of the Facebook data center cooling system
Outside air is first drawn in through intakes on the second tier and enters the preparation chamber, where it is filtered and partially mixed with hot return air, the volume of which is regulated by automatic louvers. The air then passes through so-called cooling panels, which are in fact a humidification chamber with many pipes spraying distilled water through high-pressure nozzles, raising the humidity and lowering the temperature of the passing air. The sprayed water is distilled beforehand so that the fine mist does not act as an electrical conductor.
At the outlet of this section, membrane filters catch large droplets of moisture (photo 6). The air is then drawn in by powerful fans and directed into the machine room, where the server cabinets are arranged according to the isolated hot aisle principle. Supplying the cold air from top to bottom is no accident: cold air is heavier, so a natural draft effect arises that saves on fan power. Hot air from the isolated hot aisles rises; part of it returns to the preparation chamber, where it is again mixed with the cold air, and the rest is expelled by fans. Even here nothing goes to waste: the exhaust air is used to heat the ventilated facade and the roof, and the waste water is collected in a special tank and treated.
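As a rough numerical illustration of the two stages described above, mixing in hot return air and then cooling evaporatively toward the wet-bulb temperature, here is a small sketch. All temperatures, the return-air fraction and the spray effectiveness are assumed values, not Facebook's actual setpoints.

```python
# Rough illustration of the two air-handling stages described above:
# (1) mixing a controlled share of hot return air into the outside air,
# (2) evaporative cooling toward the wet-bulb temperature.
# All figures below are illustrative assumptions.

def mix_air(t_outside_c: float, t_return_c: float, return_fraction: float) -> float:
    """Mass-weighted mixing of outside air with recirculated hot-aisle air."""
    return (1 - return_fraction) * t_outside_c + return_fraction * t_return_c

def evaporative_supply(t_dry_bulb_c: float, t_wet_bulb_c: float,
                       effectiveness: float = 0.9) -> float:
    """Direct evaporative cooling: the supply air approaches the wet-bulb
    temperature; effectiveness < 1 models an imperfect spray stage."""
    return t_dry_bulb_c - effectiveness * (t_dry_bulb_c - t_wet_bulb_c)

# Winter case: 2 C outside air warmed by mixing in 30% of 40 C return air.
# Summer case: 28 C outside air cooled toward a 16 C wet-bulb temperature.
print(f"mixed air: {mix_air(2.0, 40.0, 0.3):.1f} C")                  # 13.4 C
print(f"evaporative supply: {evaporative_supply(28.0, 16.0):.1f} C")  # 17.2 C
```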

Photo 6. Air preparation corridor: after humidification, the air is drawn in by powerful fans and supplied to the machine room.
Interestingly, solar panels and wind turbines are used as alternative power sources on the data center grounds (photo 7). Even without calculating their output, it is clear that they can supply only a small fraction of the consumers rather than the entire data center, which is why traditional diesel generators still serve as the backup source of electricity.

Photo 7. Solar panels are present throughout the data center.
IT staff move around the data centers freely with the help of an electric cart (photo 8): a battery-powered mobile workstation that serves at once as an equipment transporter, a rack and a desk for a laptop.

Photo 8. IT cart: everything you need is at hand
The data centers have areas that physically restrict access to critical equipment: dedicated sections whose entrances are monitored around the clock with strict logging of visits. All employees in these zones wear wristbands that serve as their identifiers.
In the northern Luleå data center, equipment that is delivered and brought inside experiences a large temperature swing, which causes moisture to condense on it. To avoid electronics failures, a special drying chamber is built into the airlock: equipment is placed there immediately after unloading for quick drying, and only then is it installed in the machine room.
Servers and software
As has become traditional for large data centers, the exact number of servers is not disclosed, but it is known that in 2008 the figure was 10,000 servers, in 2009 it was 30,000 and in 2010 it was 60,000. According to various sources, the current number of servers ranges from 120,000 to 180,000.
Facebook, like other large companies, decided to go its own way and develop not just a standard server but an entire platform: the Open Compute Project, launched in April 2011, publishes open specifications covering the server internals as well as the racks and the power system.
Both Intel and AMD processors are used. The motherboard design is spartan; it was developed by the Taiwanese company Quanta Computer. The possibility of using ARM processors has been announced for the future, but for now this remains an experimental technology under development.
The servers use a non-standard 1.5U form factor (photo 9), which allows larger heat sinks and fans (60 mm instead of the 40 mm standard for 1U servers). There is nothing superfluous in the server: all ports and power connectors are on the front panel, the server is 21 inches wide instead of the standard 19, and all fastening is done without screws, using clips.

Photo 9. A Facebook server with the top cover removed, without additional hard drives and with the heat sink removed from one of the processors.
The backup scheme is organized in an interesting way. In the Oregon data center, for example, the data storage systems are housed in a separate building on the site. The building itself, incidentally, has external insulation with electrostatic shielding and protection against mechanical damage and dust. The storage equipment is connected only briefly, at the moment of synchronization, after which it is switched off. Moreover, these systems are powered exclusively from autonomous sources.
A few words about the software platform. Facebook relies on open source software: the site is written in PHP, with MySQL used as the database engine. To speed up the site, the in-house HipHop engine is used, which converts PHP code into C++. A caching mechanism is used on top of the clustered MySQL database.
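The article does not go into detail on how the caching layer works, so what follows is only a generic sketch of the cache-aside read path such a layer typically provides. The in-memory dictionary and the fetch_profile_from_db stub stand in for the real cache and the MySQL database, and get_profile is a hypothetical helper name.

```python
# Minimal cache-aside sketch: check the cache first, fall back to the
# database on a miss, then populate the cache for subsequent requests.
cache: dict[int, dict] = {}

def fetch_profile_from_db(user_id: int) -> dict:
    """Placeholder for a MySQL query (illustrative only)."""
    return {"id": user_id, "name": f"user{user_id}"}

def get_profile(user_id: int) -> dict:
    profile = cache.get(user_id)
    if profile is None:                       # cache miss
        profile = fetch_profile_from_db(user_id)
        cache[user_id] = profile              # warm the cache
    return profile

print(get_profile(42))  # first call: goes to the "database"
print(get_profile(42))  # second call: served from the cache
```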
As a fast-growing and forward-looking startup, Facebook creates interesting solutions, unafraid to experiment and at the same time not trying to save on everything. Its data centers are aesthetically pleasing while remaining functional and technologically advanced. It is remarkable that a company with no prior experience in building data centers managed, even with the help of third-party specialists, to implement its first facility with so many progressive solutions. Perhaps that is the whole secret: introduce only useful innovations aimed at a clear goal.
Author: Konstantin Kovalenko
Journal TsODy.RF, No. 2, February