On June 6, 2016, Intel introduced its new server processor, the Xeon E7-8890 v4, which contains 24 processor cores and therefore as many as 48 hardware threads (see the article "Intel introduced a 24-core processor for servers"). Following this, most players in the server market, with Lenovo, Dell, SGI, HPE and Fujitsu among the first, announced updated servers with support for the new Intel Xeon E7 v4 line. The last three companies, SGI, HPE and Fujitsu, differ from the rest in that they can offer their customers an x86 super-server with supercomputer-scale hardware resources: in essence, a machine of the "mainframe" class.
For example, Silicon Graphics International offers its customers the SGI UV 300 servers, scalable up to 64 Xeon E7-8890 v4 processors in a single machine. Such a server can therefore carry as many as 1536 processor cores, and as a result as many as 3072 threads become available to the operating system. Since most operating systems treat hardware threads as logical processors, this monstrous server presents the OS with 3072 logical processors, plus as much as 64 TB of RAM; see the description on the manufacturer's website: "SGI UV 300 and SGI UV 30EX".
All this is wonderful: such incredibly powerful resources appear not as a cluster (which has its own problems) but as a single super-server. But the question arises: how do we manage all this iron?
Hewlett Packard Enterprise is also trying to keep up with its competitors and offers the HPE Integrity Superdome X super-servers, which accept up to 16 Xeon E7-8890 v4 processors, giving us 384 cores and 768 threads of execution, plus up to 24 TB of RAM; see the description on the manufacturer's website: "HPE Integrity Superdome X".
And the Japanese company Fujitsu can offer the new FUJITSU Server PRIMEQUEST 2800B3, which holds up to 4 motherboards, each of which today carries 2 processors. The result is an 8-processor server with 192 processor cores and 384 available threads, and the server can be equipped with up to 24 TB of RAM; see the description on the manufacturer's website: "FUJITSU Server PRIMEQUEST 2800B3". (By the way, there was an interesting and detailed article here on Habr about the Fujitsu PRIMEQUEST 2000 series: "Fujitsu against all, and the Japanese assassin of RISC servers".)
And according to Intel, the new Xeon E7-8800 v4 line makes it easy to build motherboards with 4 or 8 sockets (as I understand it, all the necessary logic for this is in the processors themselves). Fujitsu plans to manufacture 8-socket motherboards for its servers soon, and then a single Fujitsu server could contain up to 32 Xeon E7-8890 v4 processors, which gives us 768 processor cores and a whole 1536 threads of execution.
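All the headline figures above come from the same simple multiplication: sockets × 24 cores per Xeon E7-8890 v4 × 2 hardware threads per core (Hyper-Threading). A minimal sketch:

```python
# Sketch: deriving the core/thread counts quoted above.
# Per Intel's specifications, each Xeon E7-8890 v4 has 24 cores,
# each with 2 hardware threads (Hyper-Threading).
CORES_PER_CPU = 24
THREADS_PER_CORE = 2

def logical_cpus(sockets: int) -> int:
    """Logical processors the OS would see in an n-socket system."""
    return sockets * CORES_PER_CPU * THREADS_PER_CORE

for name, sockets in [
    ("SGI UV 300", 64),
    ("HPE Integrity Superdome X", 16),
    ("FUJITSU PRIMEQUEST 2800B3", 8),
]:
    print(f"{name}: {sockets * CORES_PER_CPU} cores, "
          f"{logical_cpus(sockets)} logical CPUs")
```

Running it reproduces the 3072, 768 and 384 logical-processor figures quoted for the three servers.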
The most widespread operating system on the x86 server market today is Red Hat Enterprise Linux, but experts say that it currently has a limitation: it can support only up to 288 logical processors in a single system. It turns out that even the weakest of the three servers listed, the FUJITSU Server PRIMEQUEST 2800 with its 384 execution threads (logical processors), cannot be driven by a single operating system :(
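A distribution's supported limit is a certification ceiling layered on top of the Linux kernel's own build-time maximum, `CONFIG_NR_CPUS`. A small sketch for comparing the two on a Linux box (note the `/boot/config-*` file is a distribution convention and is not guaranteed to exist everywhere):

```python
# Sketch: comparing what the running OS exposes with the kernel's
# compiled-in logical-CPU ceiling (Linux-specific).
import os
import pathlib

print("logical CPUs visible to this OS:", os.cpu_count())

config = pathlib.Path(f"/boot/config-{os.uname().release}")
if config.exists():
    for line in config.read_text().splitlines():
        if line.startswith("CONFIG_NR_CPUS="):
            # The hard build-time maximum; vendor support limits
            # (like the 288 figure cited for RHEL) sit below this.
            print(line)
```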
Of course, besides RHEL there are SUSE Linux, Oracle Linux, Oracle Solaris for x86, and FreeBSD. And finally there is even MS Windows Server 2016 (which, as promised, will be super-scalable), while even the seemingly outdated MS Windows Server 2012 supports up to 640 logical processors.
But none of these operating systems can serve, in a single system, the 3072 logical processors that the SGI UV 300 offers us.
And besides, most systems have a limit on the amount of RAM they can address; for example, the capabilities of MS Windows Server 2012 end at 4 TB of RAM :( So what do we do with the remaining 60 TB of RAM that the SGI UV 300 can provide us?
Of course, I understand that iron of this level is probably not intended to run a single operating system. Through virtualization, using hypervisors such as Hyper-V, KVM and VMware ESX, the hardware resources are partitioned, and on such powerful servers we can run dozens of different operating systems (well, just like on mainframes), each of which sees only the specific hardware resources dedicated to it, in amounts determined by the great server administrator himself :)
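For illustration, in a KVM/libvirt setup this carving-up of a big box is expressed in each guest's domain XML: a fixed slice of logical processors and memory is pinned to the guest. A minimal hypothetical fragment (the guest name, CPU range and sizes here are made up for the example):

```xml
<domain type='kvm'>
  <name>guest01</name>
  <!-- give this guest 48 of the host's logical CPUs, pinned to 0-47 -->
  <vcpu placement='static' cpuset='0-47'>48</vcpu>
  <!-- and 1 TiB of the host's RAM... -->
  <memory unit='GiB'>1024</memory>
  <!-- ...kept strictly on NUMA nodes 0-1, near those CPUs -->
  <numatune>
    <memory mode='strict' nodeset='0-1'/>
  </numatune>
  <!-- disks, network devices, etc. omitted -->
</domain>
```

The rest of the machine stays invisible to this guest, which is exactly the mainframe-style partitioning described above.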
But I still cannot fully understand: why do we need these wonderful, super-expensive super-servers at all? Why are they better than a simple set of several ordinary servers installed in a rack, which, using the very same virtualization and hypervisors, can give us much the same ability to run the same set of several operating systems?
Yes, here we could start discussing the resiliency of these mission-critical servers. We could certainly say that, in fact, the more resources the better, and that modern database management systems built on in-memory databases will eat up all the memory a system can provide. But then we need an operating system that can actually expose all of that available hardware.
Otherwise we end up in today's situation: look at the FUJITSU Server PRIMEQUEST 2800, which consists of four 2-socket motherboards combined together. The hardware unites them, and then the hypervisor re-partitions them. So why combine them in the first place? Why not assemble the system from four separate 2-socket servers, even at the mission-critical level, and use virtualization to combine them into a single resource, if someone needs that?
I think that in the development of IT we have once again reached a point where the hardware is significantly ahead of the capabilities of the software. There was a similar problem in desktop computers like the IBM PC in the late 1980s: as much as 1 or even 2 MB of physical RAM could be installed in a machine, while MS-DOS could initially address only 512 KB, and a little later its reach grew to the whole 640 KB of conventional memory (this was because MS-DOS was originally written around 1980 for the 16-bit Intel 8086 processors, whose limited address space imposed this restriction). When this restriction became a real problem, it took Microsoft five or even ten years to solve it, and all that time they papered over the problem with all sorts of patches and half-measures. The problem was finally defeated only in the mid-1990s, when the completely new MS Windows NT came out, an OS with a completely new kernel and design principles (they say the principles behind MS Windows NT were taken from DEC VMS, the celebrated OS for minicomputers).
So here is what I am driving at with all this: to use effectively all these incredibly huge hardware resources, such as the 3072 logical processors and 64 TB of RAM in the SGI UV 300 super-server, a completely new operating system needs to be created, built on completely new principles!
And of course, everything new is well-forgotten old :) The new OS could develop the principles laid down, for example, in operating systems such as Plan 9 or Inferno. Inferno has a frankly unfortunate name, which may be why it has never gained widespread acceptance, although its ideas are revolutionary and could clearly help harness the enormous deposits of iron provided by modern super-servers.
And if there is some truth in my reasoning, it may turn out that in ten or twenty years, unfortunately, Linux-type OSes will end up on the dustbin of history, and the server world will be ruled by new OSes such as a GoogleOS or a VMwareOS :)))
P.S. What do you think about all this? I am very interested in your opinion, both about the new super-servers and about my reasoning on the urgent need to create a new server operating system.