
Dell prepares for the arrival of ARM processors in servers (part 2)

The Challenges of Modern HPC Technology



HPC (High-Performance Computing) refers to computing power intended primarily for the needs of science and defense and, more recently, for the provision of cloud services and the Web 2.0 applications that live in them. HPC systems are usually built as multi-node server clusters. Dell has long been successfully developing HPC solutions in close cooperation with customers and has experience equipping thousands of turnkey data centers.

Of course, we cannot cover all the challenges facing data center designers, builders, and support staff today, but we will touch on those that are reshaping server architecture and bringing ARM processors to this market. In the previous article we briefly recalled the history of the two processor architectures, x86 and ARM, and compared them from several angles. Today we will try to understand why not only Dell has bet on ARM in its upcoming products, but even Intel has returned to producing such chips.

A lot - it really is a lot!



So what is a server? Underline the correct answer, as they say.

A dusty box without a monitor in a corner, buried under cartons, lights blinking, visited occasionally by a shaggy student in a sweater? Several such boxes, and it is time to somehow sort it all out? A strange cabinet full of flat boxes that occasionally howls like a wounded buffalo? Or, finally, the whole menagerie has moved into a separate room, and a full-time sysadmin has appeared, with whom it is better to stay on good terms?

All of this is called the back office: meeting the everyday IT needs of companies whose core business is not IT. As a rule, a new server's performance lasts three to five years, its power consumption is barely noticeable against the background of kettles and heaters, and the whole operation takes up no more space than the mops, buckets, and shovels.

With proper capacity planning and timely component upgrades, there is usually no dramatic growth in size, energy consumption, or the annual cost of keeping a small or medium-sized organization running quietly and efficiently. Even in large organizations, serious problems rarely arise.

It is quite another matter when IT services are exactly how a company makes its money. The modern giants of the industry operate several sites hosting tens of thousands of servers each, leaving military and scientific computing clusters far behind in raw computing power. If that sounds far-fetched, here are some of the brightest representatives of the IT giants that make up the Internet everyone sees: Amazon, Apple, eBay, Google, Facebook, Microsoft, Mail.ru, Yahoo, Vkontakte.

It is desirable to pack all of this as compactly as possible, because land and buildings cost money. It is also better for staff to walk between racks than to ride bicycles across hectares of floor space. Cabling, heat removal, and power delivery are all simpler and cheaper over a compact area. Each server rack should therefore hold as much production capacity as possible. That capacity is not always computational in the classical sense: today it is often petabytes of disk space, but that is not today's topic. Web applications, and cloud storage in particular, often require large arrays of separate physical servers that are far from always fully loaded with computational tasks.

Saving electricity is highly desirable, because every watt consumed by one server turns into kilowatts across the fleet. In addition, the processor, the disk controller, and the hard disk that consume those watts also dissipate them as heat, in accordance with the law of conservation of energy. That heat must be removed first from where it is generated, then from the case and the cabinet, and finally from the room. This adds a very tangible overhead, because it is done with the same electrical devices: fans and air conditioners. According to analysts, within a couple of years data centers could consume up to 7% of the electricity produced in the world.
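The way per-server watts snowball into facility-level kilowatts can be sketched with some back-of-the-envelope arithmetic. All the numbers below (server count, average draw, cooling overhead) are illustrative assumptions, not figures from the article:

```python
# How one watt per server turns into kilowatts at the facility level.
# All numbers here are illustrative assumptions, not measured data.

SERVERS = 10_000          # servers at a large site, assumed
WATTS_PER_SERVER = 250    # average draw per server, assumed
COOLING_OVERHEAD = 0.5    # extra watt of cooling per IT watt, assumed

it_load_kw = SERVERS * WATTS_PER_SERVER / 1000
facility_kw = it_load_kw * (1 + COOLING_OVERHEAD)

# Shaving even 1 W per server also removes the cooling needed for it
saved_kw = SERVERS * 1 * (1 + COOLING_OVERHEAD) / 1000

print(f"IT load:          {it_load_kw:,.0f} kW")
print(f"With cooling:     {facility_kw:,.0f} kW")
print(f"1 W/server saved: {saved_kw:,.1f} kW at the facility")
```

The point of the sketch is the multiplier: every watt saved inside the chassis is saved again, in part, at the fans and air conditioners.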

Investment efficiency plays a big role, both in purchasing equipment and in maintenance and support. In addition to the well-known term TCO (total cost of ownership), data center design now widely uses other indicators: computing power per watt, total power consumption in standby, and total power consumption under load.

In business terms, once a data center is built, a very large share of its operating cost goes to electricity bills. Any optimization of these costs is welcome, because it translates directly into higher profitability.

The processor is the heart of any computer



It might seem that a difference of 2-5 watts in standby and 10-20 watts at maximum load is not much; that is roughly how ARM chips compare with the x86-based Atom, which is also positioned for low-cost servers. However, it is worth remembering that the new SoCs (systems-on-chip) built around ARM cores already integrate network and SATA controllers, which, when implemented outside the chip, consume additional power.

In addition, concentrating most of the functions needed for a complete system on one die significantly shrinks the size of that system. Fully functional single-chip computers capable of tasks that until recently were the preserve of desktops now come in packages little larger than a flash drive. They are perfectly capable of browsing the web, communicating over the network in text or video, and playing music and movies over the network (have you heard of the Ophelia project?). With careful placement of an array of such babies in a server chassis, a standard rack can concentrate a great many full-fledged independent machines.

Yes, for certain tasks Intel Xeon is irreplaceable, but according to analysts, a constant demand for large computing power is typical of roughly two-thirds of server workloads. The remaining third is best characterized by the word "readiness": the hardware spends most of its time in standby, yet cannot be taken out of the active state either. Load balancing, virtualization, and distributed computing help, but do not solve the problem completely. Simply put, the market needs compact, energy-efficient, cost-effective servers.

So the possibility of replacing a rack of classic Xeon-based blades with the same rack full of miniature ARM-based servers looks very attractive for some business tasks. With a severalfold increase in the number of physical machines, the energy efficiency of such a rack is much higher, total energy consumption is lower, and idle consumption is lower still, also severalfold. The conclusions are somewhat predictable.
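The trade-off sketched above can be made concrete with the "computing power per watt" metric mentioned earlier. Every figure in this sketch (node counts, wattages, the abstract "perf units") is an assumption chosen purely to illustrate the arithmetic, not a measurement of real Xeon or ARM hardware:

```python
# Illustrative comparison of one rack built two ways.
# All figures are assumptions for the sake of the arithmetic.

def rack_profile(nodes, idle_w, load_w, perf_per_node):
    """Total idle/load power and aggregate throughput for a rack."""
    return {
        "idle_kw": nodes * idle_w / 1000,
        "load_kw": nodes * load_w / 1000,
        "perf": nodes * perf_per_node,   # abstract "perf units"
    }

# A few dozen beefy blades vs. hundreds of small microservers
xeon = rack_profile(nodes=64, idle_w=150, load_w=350, perf_per_node=100)
arm = rack_profile(nodes=512, idle_w=5, load_w=20, perf_per_node=15)

for name, r in (("Xeon blades", xeon), ("ARM microservers", arm)):
    print(f"{name}: {r['idle_kw']:.2f} kW idle, "
          f"{r['load_kw']:.2f} kW load, "
          f"{r['perf'] / r['load_kw']:.0f} perf units per kW")
```

With these assumed numbers the microserver rack wins on both perf-per-watt and, especially, idle draw, which is exactly the "readiness" workload profile described above.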

Dell is ready for new challenges



Dell paid close attention to the problems described above five years ago, when it launched the Fortuna project (officially the XS11-VX8), built on VIA Nano processors. At the time these were as economical as they come, consuming 15 watts in standby and up to 30 watts at maximum load. A 42U rack could accommodate up to 256 such servers, each the size of a 3.5-inch hard drive. Dell created a complete ecosystem for these babies, including racks, interconnects, cooling, and power systems.
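The figures given for Fortuna are enough for a quick power budget of one fully populated rack (256 nodes, 15 W idle, 30 W peak, as stated above):

```python
# Back-of-the-envelope power budget for a fully populated Fortuna
# (XS11-VX8) rack, using the figures from the text: up to 256 nodes
# per 42U rack, 15 W idle and 30 W peak per VIA Nano node.

NODES = 256
IDLE_W, PEAK_W = 15, 30

idle_kw = NODES * IDLE_W / 1000
peak_kw = NODES * PEAK_W / 1000

print(f"Rack idle: {idle_kw:.2f} kW, rack peak: {peak_kw:.2f} kW")
```

Even at full load the whole 256-server rack stays under 8 kW, which is the kind of density argument the project was making.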

In May 2012, Dell launched the Copper project, which aims to create an ecosystem for using ARM processors in servers intended both for general needs and for high-performance installations. Developers do not get physical access to the servers, but they can apply to test their applications remotely on equipment located in a Dell data center. Internal tests, moreover, began back in 2010 and were successful enough to start bringing the technology to market. A software developer can test a product on a real ARM server running a Linux-family operating system, so that by the time it reaches the market they have a ready, polished product suitable for sale to a wide range of users.

In October of the same year, with the support of the Apache Software Foundation, Dell launched the joint Zinc project, designed for testing web applications, both those developed for this web server and those ported to it. Here, too, developers can remotely test their programs against the most popular web server, in this case running on ARM processors.

While developers test their software, Dell gets an excellent opportunity to exercise the new servers under various load patterns, test scalability, eliminate bottlenecks, and refine the middleware for the new platform. All of this leads to a complete ecosystem, ready for turnkey customer solutions.

Very soon!



In the next article, we will look at several news items that convincingly show that all market participants are almost ready for ARM-architecture processors to enter the high-performance server segment. Dell, as usual, is at the forefront of high-tech development, and in 2014 we expect news about real products available to order!

Source: https://habr.com/ru/post/228065/
