
Stas Protasov from Parallels answered questions from the Habr community

Stas Protasov, co-founder of Parallels and head of its development department, answered questions from Habrahabr users.

The occasion for the interview with Parallels was the Linux Foundation's 2012 report, according to which the Russian developer of virtualization tools was among the most active contributors to the Linux kernel, on a par with Google, AMD, Cisco and HP. The company also hired James Bottomley, a member of the board of directors of the Linux Foundation, who now heads container virtualization at Parallels.

Other things have been happening at Parallels as well: the opening of a St. Petersburg office, and the release of a new product, Parallels Automation for Cloud Infrastructure (PACI), a system similar to Amazon EC2 but delivered as a boxed product.


- What platform and programming language is PACI implemented in, and have you run load tests comparing its performance with analogues (Amazon EC2, etc.)? filatov

- PACI is implemented in Java and C++ and is based on the Parallels Automation platform we developed. Of course, we have benchmarked similar systems. Container virtualization is, generally speaking, faster than the Xen hypervisor used by Amazon and has virtually no virtualization overhead. This is especially noticeable in latency-sensitive applications. That said, much depends on how much CPU (in MHz) and memory the provider allocates to you. Another subtlety is that Amazon does not state clearly what is measured and how CPU speed is capped, so the actual speed may vary depending on the processor model your environment lands on.

I will add that Amazon EC2 is, of course, not a direct analogue of PACI, because it is not a “boxed” product that can exist outside of Amazon.

- I would like to hear a little about the team itself, who is on it, and about the development process. Also about the new office in St. Petersburg: why did you decide to open it, and what development work will be concentrated there? How was the core of the developer team formed? Who are these people, and where did they come from? eaa

- As for the team, its backbone is made up of developers who were there at the company's origin. Back in the '90s the team was gathered bit by bit, in a rather simple way: people invited their “sensible” acquaintances, and some came by recommendation. Fortunately, most of those people are still aboard. It is a big, friendly team of genuinely talented specialists, thanks to whom Parallels has achieved what it has.

Another part of the developers, actually the majority, are people of a new generation who came to us from the country's best technical universities. This owes much to Parallels' long history of cooperation with those universities: at MIPT (since 1999), Novosibirsk State University (since 2004) and Moscow State University (since 2006) we have our own training centers that model work at a large company and offer real tasks to solve. We pay scholarships, and at the end of the training we are pleased to invite graduates to join us. Last year we opened a training center at the St. Petersburg Academic University.

And a third group, probably, is world-class engineers hired from outside, such as Michael Toutonghi (one of 22 Technical Fellows at Microsoft, now CTO at Parallels), Mark Zbikowski (one of the creators of Windows NT, winner of multiple international awards over a 25-plus-year career at Microsoft), James Bottomley (member of the board of directors of the Linux Foundation, CTO of container virtualization at Parallels), Alexey Kuznetsov (who wrote 90% of the Linux TCP/IP stack), Richard Wirt (former Vice President, Senior Fellow and general manager of software at Intel), Amir Sharif (former VMware executive, now Parallels vice president for server virtualization) and others.

Regarding the R&D center in St. Petersburg: we opened it recently and plan to grow it to a size no smaller than our centers in Moscow and Novosibirsk (about 250 engineers work in Moscow and about 180 in Novosibirsk). Why there? Because it is the second-largest city in the country and, probably by virtue of its size, the second city in terms of the number of educated people. I am sure that once an IT company is ready to open a regional development center, it would simply be foolish to pass over St. Petersburg.

The people who formed the backbone of the new development center previously worked at TogetherSoft, which Borland later bought; we met them more or less by chance. Borland invested a lot in rapid development tools, so initially we used the St. Petersburg team to create an easy-to-use software environment: full-featured Eclipse plug-ins that simplify “packaging” of developers' applications according to the APS specifications. The team is currently working on various improvements to the Parallels Automation platform, in particular a new APS 2.0 controller and a new, simpler interface for the Plesk panel.

- Can you disclose plans regarding the development of the OpenVZ project? rpisarev

- We are now rebasing onto kernel 3.5 or 3.6. The next step is Red Hat Enterprise Linux 7, which is due in about a year. Our user-space tools will also work with mainline containers. What we have merged into the upstream Linux kernel is still less capable, so supporting our user space on a regular, non-OpenVZ upstream kernel is a good intermediate step.

I should also mention the CRIU project, developed within OpenVZ. It implements checkpointing and live migration in user space. Work on it is very active, and version 1.0 will most likely be out soon. It does not yet support migrating mainline containers.

- I have always been curious: how exactly does migration of a running container to another server work? anarx

- Live migration is based on the checkpoint/restore mechanism, that is, saving and restoring the state of a running container. Very roughly: at a checkpoint, the container is “frozen” and its state (running processes, network connections, open files, various buffers, etc.) is dumped to a file on disk. From this file the container can later be restored and “thawed” in memory, and it keeps running. During migration, the restore happens on another machine, to which the container's file system and this state dump are copied.
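As a toy illustration of the checkpoint/restore idea, here is a minimal Python sketch (purely illustrative; the real mechanism lives in the kernel, and the “state” below is just a dictionary standing in for processes, connections and buffers):

```python
import os
import pickle
import tempfile

# A stand-in for a container's runtime state: in reality this would be
# process trees, network connections, open files, memory buffers, etc.
state = {
    "processes": [101, 102],
    "connections": [("10.0.0.1", 80)],
    "buffers": b"pending-io",
}

# "Checkpoint": freeze the container and dump its state to a file on disk.
dump_path = os.path.join(tempfile.mkdtemp(), "container.dump")
with open(dump_path, "wb") as f:
    pickle.dump(state, f)

# "Restore": on the destination machine, read the dump and thaw the container.
with open(dump_path, "rb") as f:
    restored = pickle.load(f)

assert restored == state  # the container continues exactly where it stopped
```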
The migration process can be described in steps (again, simplified):

  1. Copy the container's RAM.
  2. Copy the container's file system (the container keeps running as usual).
  3. Freeze the container; write its state and changed memory to a dump.
  4. Copy the file-system changes.
  5. Copy the dump.
  6. Restore from the dump on the destination machine.
  7. Kill the frozen container on the source server.

Of course, the migration looks “live” only if the time the container spends frozen is short (no more than a few seconds). Various tricks are used to optimize this. For example, at the beginning of the process (step 1) the container's memory is copied to the other machine iteratively (several times: first all of it, then only the modified pages) in order to “warm up” the destination and minimize the size of the dump. Our kernel also has mechanisms for tracking changes in the container's file system, so that copying the changes (step 4) is as fast as possible (you can read about it here: http://ru-openvz.livejournal.com/4741.html). Naturally, if both physical servers share a disk (SAN/NAS/NFS), the file-copying steps are skipped.

Various checks are performed before migration, for example that there is enough space on the new machine and that its processors understand the same instruction sets as the old one's (such as SSE3). The migration can still “break”, for example if the network goes down mid-process. In that case the process is aborted, and the container is simply “thawed” on the source machine.

Finally, from the point of view of users working with the container over the network, it “freezes” for a few seconds, which looks like ordinary network latency. TCP connections are not dropped; they migrate and keep working after the container is thawed.
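The iterative memory “warm-up” described above can be sketched as a toy simulation (plain Python; the page count, dirty rate and threshold are made-up numbers, not Parallels internals):

```python
import random

def live_migrate(src_pages, dirty_rate=0.1, threshold=4, max_rounds=10):
    """Simulate iterative pre-copy: first copy all pages while the
    container runs, then repeatedly re-copy only the pages dirtied in
    the meantime, until the remaining delta is small enough to freeze
    the container and transfer the rest in one final pass."""
    dst_pages = {}
    dirty = set(src_pages)                  # round 1: every page is "dirty"
    rounds = 0
    while len(dirty) > threshold and rounds < max_rounds:
        for page in dirty:                  # copy while the container runs
            dst_pages[page] = src_pages[page]
        # while we were copying, the workload dirtied some pages again
        dirty = {p for p in src_pages if random.random() < dirty_rate}
        rounds += 1
    # the container is now "frozen": copy the small remaining delta (the dump)
    for page in dirty:
        dst_pages[page] = src_pages[page]
    return dst_pages

src = {i: f"page-{i}" for i in range(256)}
dst = live_migrate(src)
assert dst == src  # the destination ends up with an exact copy
```

The point of the loop is that each round shrinks, on average, the set of pages left to copy, so the final frozen-copy step, the only part the user notices, stays short.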

- Is Apple virtualization still the main focus of the company? danilf

- For those who do not know: the question is about our product Parallels Desktop for Mac, which lets you run Windows applications on a Mac (and in fact more than 50 operating systems).

Parallels Desktop for Mac itself remains very important to us, and so far I see no reason why that might change. We are the world leader in the desktop virtualization market, and the growth of mobile IT opens up an almost untapped field.

As for virtualization as a technology, it is quite mature, and I do not think fundamentally new solutions will appear in this field. It will certainly keep developing, but probably in an evolutionary way.

- What is your company's attitude towards KVM? Do you plan to use this technology? skobkin

- I think KVM is the future mainstream hypervisor-based virtualization system in the Linux kernel. Today it may not be ideal and has some drawbacks compared with competitors, but in the future it will certainly be the best. There will be many products using this technology: most likely Red Hat will have something of its own, and so will other large companies. We will not stand aside either; our products will use KVM. So the answer is yes, it is planned.

- What is the status of porting OpenVZ patches to kernel 3.x? Clearly some things have already landed upstream, but as far as I understand, not everything is ready for inclusion. What problems come up in keeping your patches in sync with fresh kernels, and what is your estimate of the amount of work? It is troubling that the site does not even have experimental repositories with a kernel newer than 2.6.32. tamerlan311

- Our OpenVZ team lead, Kirill Kolyshkin, has answered this question in some detail. With his permission, I will repost it here:

“So far our freshest kernel branch is the one based on RHEL6. We plan to port to RHEL7, but since it will be released no earlier than a year from now, we will make an intermediate branch based, most likely, on 3.5 or 3.6, so that moving to RHEL7 later will be easier.

The lack of experimental repositories for fresh kernels is because lately we have focused on other tasks, namely: 1) merging our functionality into the mainline kernel, mainly NFS for containers, the pieces needed for CRIU, and the memory controller; 2) CRIU itself; 3) polishing new developments: vswap, ploop, etc.”

- Please tell us about the new product, Parallels Cloud Server. How does its kernel differ from the PVC/OpenVZ kernel, apart from the integrated PSBM? If the differences are significant, do you plan to merge them into PVC/OpenVZ/upstream? borisko

- There will be no PVC as such; instead there will be Parallels Cloud Server, which is, in essence, PVC plus a hypervisor. As for the PCS kernel, all of it will be in OpenVZ. You can already see significant public projects such as ploop, an important part of our repository: it is out there, go and take it. There is nothing closed in the PCS kernel, and if something is not publicly available today, it is only because it is not ready yet.

- Within OpenVZ, the only thing I am missing is production Debian kernels... I am not sure this deserves a separate post on Habr... (a question from OpenVZ team lead Kirill Kolyshkin, who posted a link to the Habr thread)

- We traditionally ship builds of our software (kernels and utilities) as rpm packages; packages for Debian are not built, for several reasons, so we advise everyone to use the alien utility to convert rpm to deb.

Recently the OpenStack folks have taken a liking to OpenVZ; in particular, it is used in the RedDwarf project (database as a service). Since OpenStack is usually built on Ubuntu, we are now working with them on building our kernels for Ubuntu/Debian.

- Stas, tell us about CloudLinux. Why did Parallels need its own Linux distribution? How does it differ from other distributions? And why doesn't PVC support CloudLinux? freem4n

- The CloudLinux team has nothing to do with Parallels, neither in people nor financially, nor in any other way, although I do, of course, know some people there. Perhaps we could use CloudLinux, but unfortunately it is not ours.

- And here is another question, about APS applications. The thing is good and necessary, but very often you come across packages that simply fail to install. It also happens that a new revision of a package no longer installs while the old revision works fine. Or, for example, localization is missing from an application. Moreover, these applications are packaged by Parallels. One gets the impression that the packages are either not tested at all or tested only superficially. Are any steps being taken to improve the quality of APS applications? freem4n

- Applications that integrate with external services, such as Microsoft Office 365 or SpamExperts, really cannot be deployed without a login and password for that service. Simple web applications should install without any additional information.

All applications in the APS catalog are tested and certified, including verification of the update procedure. If there is a problem with any application, we would very much like to know about it; the APS website has a form through which we collect feedback.

Localization is indeed not available for all applications and all languages; it appears only when there is demand in the relevant market. In that case the application vendor (or Parallels) usually releases either a new version of the package with the translation or an add-on package containing it.

A comment from rumkin, who is unhappy with the Plesk product: “I work closely with their Plesk; apart from discomfort, utter frustration, and the resulting professional headaches, I feel nothing. Thank you for the opportunity to speak out.”

- Thank you for this comment, which balances our interview about “the wonderful Parallels team.” Since there was no actual question in the message, allow me to simply comment on Plesk.

The Plesk Panel has a long and difficult history, and it is an important part of our hosting automation and cloud services product line. Five years ago many of our users were afraid even to upgrade to a new version, because upgrades often brought them nothing but problems. Over the past three years we have done a lot of work to make it better, and we have made real progress. We regularly measure user satisfaction with Plesk, and I can tell you that over those three years it has grown significantly. This gives us confidence that we are doing the right thing.

By the way, just a few days ago we released Plesk 11, which, from my point of view, is another significant improvement of the panel. I will note that over the past few days we have seen a simply fantastic rate of upgrades to version 11; I admit this is a first in our history. It probably means that people like how it compares with previous versions, and that they have stopped being afraid. You can read about what is new in Plesk 11 here: http://habrahabr.ru/post/146720/.

From your description of the problem (frustration and headaches) it is hard to tell exactly what is wrong. But if you contact us directly (for a start, you can tweet @ParallelsPanel), we will try to help.

Source: https://habr.com/ru/post/147101/

