
One day, in the depths of an icy winter, I decided to get someone else to paint my fences: to go over to the dark side and farm in EVE Online under the cover of eternal cosmic night, with world domination as the end goal. Naturally, my faithful electronic servants would have to work 24/7/365 (they need no sleep), and my desktop, roaring like a Boeing 747 turbine, was not up to the job. Looking around my modest dwelling (all would-be conquerors of the world started small), I noticed a server standing quietly in the corner. It was a true marvel in a 2U no-name Chinese chassis: all its fans had been torn out and replaced with passive heatsinks, and in place of a heart beat an ancient Pentium III. The server was silent, and absolutely useless for this task.
Chatting on the forums of hereditary third-generation bot herders, I figured out how to regain my lost happiness and faith in humanity: run the bots in virtual machines, especially since the server already carried a hypervisor, ESXi 5.5. There was one small nuance. A VM needs 3D support and decent performance; we are playing games here, not typing in Word, tap-tap. Serious people, you see. So the challenge was to push a physical video card into a virtual machine, and one happened, to everyone's delight, to be at hand.
A wail of caveats
This is, frankly, a very strange article. It starts with me retelling things I heard second-hand, and it ends up not as the tidy technical article I would have liked, but as a Frankenstein's monster "based on real events". On top of that, it periodically drifts from armchair analysis into a jumble of technical details, and it is somewhat lost in time. So be careful: further on you may stumble over a mammoth skeleton. But there is nothing for it. Uncover the lancet. Executioner, action!
Glass beads
There have already been several articles on Habr about forwarding a video card into a virtual machine:
one ,
two . From them I took rough guidelines and hints on where to look if something went wrong. I could not find the exact motherboard used in those articles, but that did not stop me. What I needed was a CPU supporting Intel Virtualization Technology for Directed I/O (VT-d) / AMD I/O Virtualization (AMD-Vi) and a motherboard whose chipset supported it as well. Unfortunately, for desktop boards such details are rarely listed, and you can only discover them through a long and persistent investigation of chipset specifications. There are even cases where the chipset supports everything needed, but the vendor decided, to save money, not to expose the feature in UEFI. I bought an Asus M5A99X EVO R2.0 with three PCI-E slots, plus three video cards for it: one NVIDIA GeForce 210 and two Radeon HD 6450s. After thoughtfully studying the manuals, I learned that without UEFI support the NVIDIA card cannot be reinitialized inside a guest and turns into a pumpkin until the next reboot. Still, I got 2 cards out of 3 working, and at the time that counted as success. I kept the NVIDIA card in case I ever needed a local display to fix the hypervisor itself. The ATI side was not all smooth either: the cards refused to initialize without a monitor attached. So I made a knight's move: I pulled the VGA cable off the ATI card, disassembled the connector, took wire from an old LPT-port cable, and soldered it through resistors to the right pins to emulate a connected monitor, then plugged the doctored connector back in. The port still works like that, wrapped in blue electrical tape. The pinout:

A VGA cable was used because these are low-profile cards, and on them the third connector (VGA) sits on a detachable bracket. My case is only 2U, so full-height cards will not fit. I am now thinking of changing the case and putting a GeForce GTX 480 in there. Moreover, good news has reached our collective farm: on the newer ATI cores the resistors are no longer needed, the card initializes without them. And NVIDIA, rumor has it, has started shipping UEFI-capable firmware on regular consumer cards, and the top cards now pass through into a VM. To add to my suffering, I switched the hypervisor to Proxmox and got the ATI cards running there by following this
instruction . Note that passthrough in Proxmox cannot be managed from the web interface; everything has to be done over SSH.
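For reference, here is a minimal sketch of what that SSH-side setup amounts to on a Proxmox/KVM host. The exact steps in the instruction linked above may differ, and the PCI address 01:00.0 and VM ID 101 below are placeholders, not values from my setup:

```shell
# Hedged sketch of KVM/Proxmox GPU passthrough; adjust addresses and IDs
# for your own hardware.

# 1. Enable the IOMMU on the kernel command line (Intel shown; AMD boards
#    use amd_iommu=on), then update the bootloader and reboot:
#    /etc/default/grub -> GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
#    update-grub && reboot

# 2. Verify the IOMMU actually came up:
dmesg | grep -e DMAR -e IOMMU

# 3. Load the VFIO modules (list them in /etc/modules to persist):
modprobe vfio vfio_iommu_type1 vfio_pci

# 4. Hand the GPU at 01:00.0 to VM 101 (hostpci syntax per the qm.conf docs):
qm set 101 -hostpci0 01:00.0,pcie=1
```

If step 2 prints nothing, the board or UEFI is not exposing VT-d/AMD-Vi, which is exactly the "vendor saved money" trap described above.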
In battle
In the end I ran a varying number of bots on the ATI cards. I launched them over TeamViewer/VNC, and they worked away in the background until the evening. The FPS sagged, of course, but the bot could not care less: all it did was watch certain regions of the screen, analyze the picture, and click one field or another depending on the result. At its peak my space fleet consisted of 1 fleet commander and 6 space miners. All of this ran 24/7, almost silently, on 2 video cards with a combined cost of about 5,000 rubles. As a result, for about a year I did not pay for the game at all, trading my in-game earnings for game time, and even came out slightly ahead. There was the electricity bill, of course, but even with that it was a good deal, considering that a month's subscription for my 7 bots would have cost about 3,600 rubles, while the ATI cards barely sipped power: the whole hypervisor came to about 1,000 rubles a month, and it ran plenty of other useful things besides. Separately, it is worth noting the shortcomings that get in the way of comfortably using a VM for gaming:
- PCoIP handles the mouse cursor poorly, so playing by hand is uncomfortable.
- High I/O wait on the disks. It is not caused by the gaming VM alone, of course, but an unpleasant aftertaste remains.
- Performance loss from the virtualization layer, which seems to depend on the video card: the more powerful the card, the higher the loss.
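To put a number on that I/O wait complaint, here is a quick sketch (my addition, not from the original setup) that computes the share of CPU time spent waiting on disk I/O straight from /proc/stat, whose aggregate "cpu" line lists user, nice, system, idle and iowait jiffies in that order:

```shell
# Field 6 of the aggregate "cpu" line in /proc/stat is cumulative iowait time.
read -r _ user nice system idle iowait irq softirq _ < /proc/stat
total=$((user + nice + system + idle + iowait + irq + softirq))
# Percentage of CPU time spent in iowait since boot.
echo "iowait since boot: $((100 * iowait / total))%"
```

A persistently high figure here on the hypervisor is the symptom described above; `iostat -x` from the sysstat package then shows which disk is to blame.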
The author, what are you talking about?
In time the idea outlived itself for various reasons: ore came to be worth about as much as the dust under my feet, and I closed up shop. But creative thought went further. Why do I need a powerful computer or laptop at all? I can raise a high-performance virtual machine on the server, forward a physical video card into it, and buy a laptop that does not heat up like a nuclear boiler and with which, at most, you could whittle the proverbial trolleybus out of a loaf of bread. Then put Linux on it and calmly play any game with any requirements, as long as the server hardware holds out. If the game is sensitive to latency, there is always the local network. And editors, office suites and the like become available from any hole on the planet with at least some broadband Internet. But, as it turned out, I am not the only one so clever and resourceful, and such services already exist by the cartload, but... They all gave access only to games, the selection of games was extremely small, and the picture was so soapy you could wash an oily overall with it (soap, soap everywhere). Such services had their advantages, of course. For example, you could play console games from a computer, but almost nobody provided access to a desktop. I found exactly one service of that kind on the Internet, onPlay. It was located in America, had a weak picture, and was on the verge of death, which is presumably why it sold desktop access instead of game subscriptions. By the way, it died a couple of years ago. A moment of silence. XD
Continuing to develop this idea, I was saddened to discover that while any bright student can assemble the hardware for it, the software side is thin, consisting of only a few solutions. Namely:
- PCoIP ( official site / habr / demo ). A fairly well-known thing. For embedding directly into a virtual machine the downsides are: very heavy on the CPU, a bit expensive (a one-year subscription runs about 8,000 rubles), it behaves like a remote control, the software implementation is a hellish superstructure over the operating system, and it needs a fairly wide channel (about 30 Mbit/s per user). On ESX it works in the hypervisor out of the box, but only with the racially correct video cards (the NVIDIA- and VMware-blessed GRID v1, GRID v2 and compute cards) plus extra packages from VMware. If you rework a top GeForce to masquerade as a GRID 1 or GRID 2, it will also turn on; however, the GeForce lacks GRID's dedicated video-compression hardware, so the result will almost certainly be mediocre. Given that, and the fact that I am a penguin herder, I dropped the idea, although from time to time I still consider the software version as a fallback. As a bonus for the big guys: the hardware implementation has ideal USB device forwarding and built-in monitor support, and so does the software one. Even USB tokens work smoothly. An enterprise solution for enterprise money, after all.
- GamingAnywhere ( official site ). A few students challenged the cloud-gaming services with their open-source handiwork. At first glance, not bad: you can even tune the video and audio encoding to match your requirements and hardware. In practice, as usual, it is a pile of crutches and props on top of open-source video codecs. Given, say, a year of nonstop polishing, it could become a good service. Overall it is unbelievably raw; I could not even get the Linux client to compile. XD
- NVIDIA GameStream ( official site ). Here it is, NVIDIA's crown of creation. It is for services like this that we overpay a quarter more for NVIDIA than for comparable ATI chips. NVIDIA also has its own game cloud; technically you can even register and play top titles on a calculator. But again, they do not let you at the desktop, and you do not choose which game appears in the cloud next month, which saddens the progressive community. GameStream implements the same kind of private cloud on your own GeForce. There are unofficial clients for almost every platform, and if you register the desktop process as a game, you can happily "play" the desktop. Compared to the enterprise options the downside is no USB forwarding, but this is clearly my pick anyway. All that remains is to solve the problem of forwarding a GeForce into the VM. This is the solution I currently have in the works.
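As I understand it, the "register the desktop as a game" trick amounts to adding an entry for the shell (or any always-running process) to GeForce Experience's game list, then streaming that entry with an unofficial client such as Moonlight Embedded. A sketch of the client side, where 192.168.1.50 stands in for the GameStream host and "Desktop" is whatever name you gave the entry:

```shell
# Hedged sketch: pairing and streaming with Moonlight Embedded, an
# unofficial GameStream client. 192.168.1.50 is a placeholder host.
moonlight pair 192.168.1.50
# Stream the entry registered as "Desktop" in GeForce Experience:
moonlight stream -app Desktop -width 1920 -height 1080 -fps 60 192.168.1.50
```

On the LAN this gives a hardware-encoded H.264 stream of the full desktop, which is exactly the setup the paragraph above describes.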
Expanding horizons
Now let us take our speculation further and pose the question more broadly: what if one powerful computer served several employees at once? The idea itself is not new; many companies sell software of varying cost that implements it one way or another. Yet properly chosen hardware and a little IT staff time will solve the problem with virtualization entirely for free. For example: throw all the office computers out the window (first removing the video cards and
jumping out after them ), and buy 1 powerful computer with passthrough support and several cheap video cards. To save money, we take Proxmox (KVM) as the hypervisor. Create as many virtual machines as there are video cards. Attach a video card to each VM, plus a keyboard and mouse on dedicated USB ports, and power them on. The workplaces are ready. CPU scheduling lets users work simultaneously on the same computer at different monitors, with different operating systems, keyboards and mice. With USB audio headsets, sound is also wired to the right VM. Such machines are far easier to back up; if the host breaks down, you can simply migrate a VM to any free host and the user carries on as if nothing had happened. You can even move all the users' VM storage to a separate iSCSI disk and abstract away from the hardware entirely. With one careless flick of the administrator's hand we mount a user's C: drive, install and configure the needed software, and push it back; or we clone a disk from a template and a new workplace is ready, with only the machine name left to change. Users with heavy demands on the video system can work this way too: just put a video card that meets their requirements into the physical box. There is a fly in this ointment, of course. When a single physical machine breaks, several employees lose their workplaces at once. It is also better not to power such machines off, since they can only be turned back on through the hypervisor's web panel.
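A sketch of what one such "seat" might look like as a Proxmox VM config (the file lives under /etc/pve/qemu-server/; the VM ID, PCI address and USB port numbers below are placeholders for a specific GPU, keyboard and mouse):

```
# /etc/pve/qemu-server/101.conf - one seat: its own GPU, keyboard and mouse
machine: q35
cores: 2
memory: 4096
ostype: win10
scsi0: local-lvm:vm-101-disk-0,size=64G
# GPU at host PCI address 01:00, passed through as the primary display
hostpci0: 01:00,pcie=1,x-vga=1
# Keyboard and mouse pinned by host USB port (bus-port notation),
# so re-plugging the same physical sockets keeps the mapping
usb0: host=1-1.2
usb1: host=1-1.3
```

Pinning USB devices by port rather than by vendor:product ID is what keeps two identical office keyboards from landing in the wrong VM.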
Clouds, white-maned horses
But what if we move the computers even closer to the IT folks, say into a data center, and leave only monitors in the office? Quite possible. There are several solutions for implementing this idea, in particular VMware Horizon, Citrix XenDesktop and Microsoft RDP. They let you run individual applications or entire operating systems in a browser, and maintenance becomes simpler because the IT department only has to look after a few template images of working machines rather than the entire user fleet. All of these products can use graphics acceleration based on NVIDIA's industrial 3D accelerators. In the office you can install thin clients, or monitors with built-in PCoIP support, which means the office may in principle contain no system units at all: just a router and monitors with keyboards. Teradici also makes cards that turn an ordinary workstation into a remote PCoIP server, or simply offload protocol processing when installed in a XEN or VMware server; a software package can stand in for the hardware board as well.
Enlightenment
So, starting from a simple, unpretentious, I would even say selfish, thought and its implementation, we have arrived at a new level of terminal access, in which the network acts not as a passive image pipe but as a full-fledged bus exchanging data with our peripherals. Users get far more out of the familiar remote desktop than they used to.