
Dropping NFS in the cloud

Sorry for the long silence: lots of work, and big updates are on the way. In the meantime, a note about a small but very noticeable change for our customers.

We are dropping NFS as the storage for kernel modules. (Not only modules, in fact, but the relocation of the modules is what customers will actually notice.)

How it was supposed to work

Client virtual machines boot with our kernels (that is, the kernel code is stored outside the virtual machine). While running, those kernels need modules. /lib/modules was mounted over NFS: the kernel itself determines which directory to load modules from, the modules were easy for us to update, and easy for the client to access.
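For reference, the old mount line looked roughly like this (a sketch: the server address is the one removed by the migration script at the end of this post, but the export path and options are assumptions):

  # Old scheme (sketch): /lib/modules served over NFS.
  109.234.152.2:/lib/modules  /lib/modules  nfs  ro  0 0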

As it turned out

First, NFS shares are mounted only after the network is initialized (which is obvious), and after all the other lines in fstab have been mounted. Even better: in the debian/ubuntu family they are mounted asynchronously by default, which produces a race condition with the launch of rc.local.

Bottom line: pre-up scripts on interfaces do not work as expected, and non-standard file systems listed in fstab are not mounted as expected. On top of that, NFS is not the most reliable of services (especially considering bug #538000). In a word, inconvenient.
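To illustrate the failure mode with a hypothetical fstab entry (the device and mount point are made up): any file system whose driver is built as a module needs /lib/modules, which in the old scheme was itself waiting for the network:

  # Hypothetical entry: xfs is usually built as a module, and xfs.ko
  # lives in /lib/modules. When fstab is processed, the NFS share is
  # not mounted yet, so the driver cannot load and this mount fails.
  /dev/xvdb  /data  xfs  defaults  0 0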

How this problem was solved

The modules now live on an ISO image, attached to every virtual machine as a separate disk, /dev/xvdp. The modules are mounted right after the root ('/') is mounted, so everything that follows (pre-up scripts, non-standard file systems, etc.) just works.

The mount line (fstab) looks the same for everyone:

  /dev/xvdp /lib/modules iso9660 ro 0 0
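A quick way to check that the disk is attached and really carries an ISO image (exact output varies by distro):

  blkid /dev/xvdp    # expect TYPE="iso9660" in the output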

By the way, customers are not charged for this disk.

Why read-only?

First, as already mentioned, it is shared by everyone, and we would rather your neighbor not be able to slip a "special" version of iptables onto your machine, for example. Second, the modules (like the kernel itself) are managed by our control system. Soon, well, in the near future, we will add the ability to choose which kernel to use, but the kernel will have to come from our list. The reason is simple: there are plenty of not-very-stable kernels under Xen; debian, for example, suffers from this. Alas. Moreover, some of the bugs do not show up right away, and we really wanted to avoid having to answer questions about the stability of kernels we did not pick. Xen paravirtualization relies on the kernel cooperating in many operations (such as migration), and a kernel that abandons this cooperation in the middle of the process can leave the machine inoperative. So a kernel gets into the cloud only after long and very boring testing across all possible use cases.

Can I load my own modules?

Yes, you can. Even though /lib/modules is mounted read-only, insmod will load a module from any path. The only "but": we update the kernel from time to time, and when that happens the modules we ship are updated along with it, while modules you built yourself have to be rebuilt. Fortunately, updates are always obvious, so nothing breaks "by accident".
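For example, a module built in your home directory loads straight from there (mymodule.ko is a hypothetical name; note that insmod, unlike modprobe, does not resolve dependencies):

  insmod /home/user/mymodule.ko    # load from an arbitrary path
  lsmod | grep mymodule            # confirm it is loaded
  uname -r                         # after a kernel update, rebuild against this version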

Why iso9660?

This is the file system used on CDs (and in ISO images), and read-only is its natural mode. If we had used ext3, there would be a temptation to remount it read-write (which must not be done, and which would produce a pile of errors). The second reason is that OS installers ignore iso9660 disks during installation (nobody will try to create a partition table on one).
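The temptation disappears because the kernel has no write support for iso9660 at all, so an attempt to remount simply fails (harmless to try; the exact error message varies):

  mount -o remount,rw /lib/modules
  # fails: iso9660 is a read-only file system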

How to move from NFS to ISO

Simple script:

  sed -i '/109.234.152.2/d' /etc/fstab
  echo "/dev/xvdp /lib/modules iso9660 ro 0 0" >> /etc/fstab
  umount /lib/modules
  mount /lib/modules


No reboot required.
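To verify the result (a sketch): /lib/modules should now come from /dev/xvdp and contain a tree for the running kernel:

  mount | grep '/lib/modules'      # should show /dev/xvdp ... type iso9660 (ro...)
  ls /lib/modules/"$(uname -r)"    # module tree for the running kernel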

On new machines the modules are mounted the right way from the start.

Of course, you do not have to migrate: we will keep the old NFS configuration running for as long as necessary (until the last machine using the old module scheme is gone). Those who have switched to the new scheme can safely remove all traces of NFS; it is no longer needed.

Source: https://habr.com/ru/post/117853/

