(Please don't judge too harshly; these are just thoughts before bed.)
Once upon a time, the first network protocols had no strict division into layers. Data was "just transmitted" and "just read." Gradually it became clear that inventing a universal combine every time (one incompatible with all the other combines) was expensive and inconvenient.
So protocols were split into layers: physical/data-link, network, transport, application. Then a theoretical seven-layer OSI model was bolted onto this (practically used) TCP/IP model. It never really stuck (quick, name five protocols of the presentation layer).
However, nobody doubts the need to separate the vicissitudes of the physical/data-link layer from the network layer. Protocols change, hardware changes, and IP is still the same...
Roughly the same thing is happening with computers now. At first they were universal combines that could initialize hardware, draw graphics, and work with the server. But that is expensive. The clearest example: you still need a floppy disk to install Windows Server 2003, which is still on sale. In 2010! A floppy! Why? Because the unfortunate OS is forced to think about what controllers it has inside and which interrupts they use. And at the same time it must also provide multitasking, schedule CPU time on a multiprocessor system, schedule disk operations and other complex things. Oh yes, and access control on top of that.
Emulation has always been a laboratory wonder. Look, we can run a PlayStation game! Look, we can run games for the NES, SNES, ZX... Or an IBM/360 emulator, cool! Or DOSBox running UFO...
It all stayed at the level of test tubes, laboratories and, perhaps, gaming systems. And it all came at the cost of a thousandfold slowdown (the price of interpretation)...
Then came virtualization, which differed from emulation by exactly that factor of a thousand. And although the first virtual machines differed little from emulators in functionality (they were merely faster), they already had the important (the most important) property: passing the host's performance through to the guest with only a small overhead. That was the key property. Around it, an infrastructure was attached (or rather, was just beginning to appear).
And so, in effect, we now have something like the OSI (or TCP/IP) model for computers. We have singled out a layer of abstraction that deals with the hardware, data storage, network card initialization, resource allocation, and so on. In other words, a piece has been "cut off" from the bottom of the OS, leaving the OS the high-level tasks.
However, operating systems themselves are still "combines." Virtualization adapts to them, although it would be far more sensible to develop operating systems that can work ONLY under a hypervisor. Such an OS would, of course, lose its universality, but it would most likely be compact, stable (less code, fewer bugs), and convenient for the hypervisor to work with.
Some steps in this direction have already been taken by Xen, which has kernel support for paravirtualization (de facto there is no virtualization left there at all, just interaction between the OS and the hypervisor, in which the OS "outsources" all the low-level work, roughly the way IP leaves the basic work of building frames, sending them, avoiding collisions and so on to Ethernet).
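To make that "outsourcing" a bit more concrete, here is a minimal, purely illustrative C sketch in the spirit of Xen's split-driver model: the guest does not poke a disk controller, it just drops a request into a shared ring and notifies the other side. All the names here (pv_ring, pv_request, guest_submit and so on) are invented for the illustration and are not the real Xen interfaces; the real thing uses shared memory, grant tables and event channels.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical, simplified "shared ring" between a paravirtualized
 * guest (frontend) and the domain doing real I/O (backend).
 * Real Xen rings live in shared memory; here everything is simulated
 * in one process purely for illustration. */

#define RING_SIZE 8

struct pv_request {
    unsigned long sector;   /* which sector the guest wants          */
    int           write;    /* 0 = read, 1 = write                   */
    char          data[16]; /* payload (absurdly small, for demo)    */
};

struct pv_ring {
    struct pv_request req[RING_SIZE];
    unsigned int prod;      /* producer index, advanced by the guest */
    unsigned int cons;      /* consumer index, advanced by backend   */
};

/* Guest side: no controller registers, no interrupt setup --
 * just enqueue a request and "kick" the backend. */
static int guest_submit(struct pv_ring *r, unsigned long sector,
                        int write, const char *data)
{
    if (r->prod - r->cons == RING_SIZE)
        return -1;                          /* ring full */
    struct pv_request *rq = &r->req[r->prod % RING_SIZE];
    rq->sector = sector;
    rq->write  = write;
    strncpy(rq->data, data, sizeof(rq->data) - 1);
    rq->data[sizeof(rq->data) - 1] = '\0';
    r->prod++;
    /* In real life: notify the backend via an event channel
     * (a paravirtual "interrupt"). */
    return 0;
}

/* Backend side (Dom0 in Xen terms): the only place that talks to real hardware. */
static void backend_drain(struct pv_ring *r)
{
    while (r->cons != r->prod) {
        struct pv_request *rq = &r->req[r->cons % RING_SIZE];
        printf("backend: %s sector %lu (%s)\n",
               rq->write ? "write" : "read", rq->sector, rq->data);
        r->cons++;
    }
}

int main(void)
{
    struct pv_ring ring = {0};
    guest_submit(&ring, 42, 1, "hello");
    guest_submit(&ring, 43, 0, "");
    backend_drain(&ring);
    return 0;
}
```

The point is the shape of the interface: a narrow, standardized request format instead of per-device driver code inside every guest.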
... And I must say that, at least in the case of Xen, the ancient Torvalds vs. Tanenbaum dispute is resolved in an unexpected way: the concept of a "microkernel" is replaced by the concept of a "micro-hypervisor" (1500 lines, according to its authors); there is a "pocket virtual machine" of its own (Dom0) for all the non-critical things like disk and network operations; the hypervisor shuttles requests from DomU to Dom0 (roughly the way a microkernel is supposed to); and the hypervisor itself is busy with the "real" things, such as memory management and CPU allocation.
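Here is an equally hedged sketch of the dispatch side of that idea, again with made-up names rather than real Xen hypercalls: the "micro-hypervisor" keeps memory and CPU decisions for itself and merely forwards device requests to Dom0, the way a microkernel forwards messages to its user-space servers.

```c
#include <stdio.h>

/* Illustrative-only model of the "micro-hypervisor" dispatch idea.
 * None of these names are real Xen APIs. */

enum hypercall {
    HC_ALLOC_MEMORY,   /* handled directly: memory management     */
    HC_YIELD_CPU,      /* handled directly: CPU scheduling        */
    HC_DISK_IO,        /* forwarded: the real disk lives in Dom0  */
    HC_NET_IO          /* forwarded: the real NIC lives in Dom0   */
};

static void forward_to_dom0(int domu_id, enum hypercall call)
{
    /* In a real system this would be a message on a shared ring plus
     * an event-channel notification, as in the previous sketch. */
    printf("hypervisor: forwarding call %d from DomU %d to Dom0\n",
           (int)call, domu_id);
}

static void dispatch(int domu_id, enum hypercall call)
{
    switch (call) {
    case HC_ALLOC_MEMORY:
        printf("hypervisor: mapping a page for DomU %d\n", domu_id);
        break;
    case HC_YIELD_CPU:
        printf("hypervisor: scheduling another vCPU instead of DomU %d\n",
               domu_id);
        break;
    case HC_DISK_IO:
    case HC_NET_IO:
        forward_to_dom0(domu_id, call);  /* microkernel-style message passing */
        break;
    }
}

int main(void)
{
    dispatch(1, HC_ALLOC_MEMORY);
    dispatch(1, HC_DISK_IO);
    dispatch(2, HC_YIELD_CPU);
    return 0;
}
```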
Most likely, in the near future we will see exactly such systems appear, designed to rely on a "kind uncle above" that hands out and distributes resources. Most likely, all hypervisor vendors will converge on a single call format (so that a virtual machine from one hypervisor can easily be run on another). XenCloud is already taking some steps here: there is the xva format, which claims to be a "cross-hypervisor" format for virtual machines.
I see the future in the emergence of a kind of computing stack in which every layer is described as an independent entity with standardized interfaces "up" and "down." The main difference between this stack and the existing hardware-abstraction layers of today's operating systems will be that very standardization of interfaces. It will become possible to take the hypervisor from Windows, the intermediate layer from Linux, and the userspace from Solaris. Or the other way around: take Xen, the Linux kernel, and on top of that several different systems quietly coexisting with each other... just as today we can run TCP over both IP and IPv6, and IP runs over more link-layer protocols than you can count...
(drifting off into fantasy) And there will also be tunneling of hypervisors (something like today's IP-over-IP, GRE, etc.): a Xen hypervisor inside which runs a VMware hypervisor, which in turn can launch a hypervisor from its guest, ...