This guide describes a step-by-step procedure for installing and configuring LXD. It covers the following topics:
- Installing and launching a container.
- Network configuration.
- Configuring static IP addresses for containers.
- Configuring NAT and iptables.
- Creating and restoring backups.
- Differences from Docker.
LXD is a container hypervisor based on LXC [1]. The main difference from LXC is that LXD introduces the concept of container images and builds its infrastructure around containers and images.
Simply put, LXD is Docker for full operating systems. The principle is the same: an OS image can be downloaded from a repository and deployed on the host as containers, and one image can be "cloned" across multiple virtual machines.
Differences from Docker:
- If you leave an LXD container (for example, by typing exit), it continues its work, whereas a Docker container stops when its main process exits.
- The container's filesystem is directly accessible on the host at /var/lib/lxd/containers/<name>/rootfs.
LXD currently works best on Ubuntu 16.04 LTS. It can run on other systems, but there may be difficulties or things may not work as they should. For example, on CentOS 7 containers run only in privileged mode, there are no ready-made lxd packages, and you have to compile them manually.
In the latest Ubuntu release, lxd is already included by default. If it is not installed, you can install it like this:
aptitude install lxd
Update the system and install the necessary packages for work:
aptitude update
aptitude upgrade
aptitude install lxd zfs zfsutils-linux
LXD must be initialized before you start using containers.
Before initialization, decide which storage backend will be used. The storage backend is where all containers and images are stored. There are two main types: ZFS and Dir.
ZFS is backed by a file mounted as a loop device, so you need to monitor the storage size and grow it when space runs low. ZFS makes sense if you have a private remote image repository to which you periodically send container snapshots as backups, and from which you later download them to install new versions or restore containers.
I decided to use Dir on the production server; I test ZFS on my local computer. I will make backups with ordinary scripts: pack them into a tar archive and send them to Amazon S3.
Once you have decided which storage backend to use, begin initialization with the command:
lxd init
The utility will ask questions that you will need to answer. The first question the utility asks: what type of storage to use?
Name of the storage backend to use (dir or zfs): dir
If your answer is dir, the utility goes straight to network configuration. If your answer is zfs, it asks the following questions:
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 10
"Size in GB of the new loop device" is the size of the ZFS storage. All images and containers will be stored in this storage, so if you intend to store many images or containers, then you need to increase this number.
Then the utility asks whether to expose LXD over the network. The answer is "no". If you want to create a public or private image repository, answer "yes".
Would you like LXD to be available over the network (yes/no)? no
After configuring the storage type, the utility will ask: “Would you like to configure the LXD bridge?”. The answer is "yes."
Do you want to configure the LXD bridge (yes/no)? yes
The network configuration interface will start. Answer the questions like this:
Would you like to setup a network bridge for LXD containers now? Yes
Bridge interface name: lxdbr0
Do you want to setup an IPv4 subnet? Yes
IPv4 address: 10.200.0.1
IPv4 CIDR mask: 16
First DHCP address: 10.200.100.1
Last DHCP address: 10.200.255.254
Max number of DHCP clients: 25399
Do you want to NAT the IPv4 traffic? Yes
Do you want to setup an IPv6 subnet? No
The network will use a bridge on the lxdbr0 interface.
The network mask is 10.200.0.0/16.
The host IP address is 10.200.0.1.
DHCP will automatically assign container IPs from 10.200.100.1 to 10.200.255.254, but you can assign addresses manually starting from 10.200.0.2.
IPv6 for containers can be left disabled.
You can rerun the LXD bridge configuration utility with the command:
dpkg-reconfigure -p medium lxd
Open the file:
nano /etc/default/lxd-bridge
Uncomment the LXD_CONFILE line and set it to:
LXD_CONFILE="/etc/lxd-dnsmasq.conf"
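This edit can also be scripted. The sketch below applies the same change with sed to a copy of the file (the /tmp path and the sample contents are placeholders for illustration, not the real /etc/default/lxd-bridge):

```shell
# Work on a sample copy of /etc/default/lxd-bridge (path is hypothetical)
cfg=/tmp/lxd-bridge.sample
printf '%s\n' 'USE_LXD_BRIDGE="true"' '#LXD_CONFILE=""' > "$cfg"

# Uncomment LXD_CONFILE and point it at the static-IP config file
sed -i 's|^#\?LXD_CONFILE=.*|LXD_CONFILE="/etc/lxd-dnsmasq.conf"|' "$cfg"

grep '^LXD_CONFILE=' "$cfg"   # LXD_CONFILE="/etc/lxd-dnsmasq.conf"
```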
Create a static IP address configuration file:
nano /etc/lxd-dnsmasq.conf
Register the static IP address for the test container:
dhcp-host=test,10.200.1.1
You can add static IP addresses for other containers to this file in the same way.
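If you manage several containers, the entries can be generated from a simple name-to-IP list. A sketch (the container names, addresses, and /tmp output path below are illustrative examples):

```shell
# Generate dhcp-host entries for several containers from a name/IP list
# (output path and entries are illustrative)
conf=/tmp/lxd-dnsmasq.conf.sample
: > "$conf"
while read -r name ip; do
    printf 'dhcp-host=%s,%s\n' "$name" "$ip" >> "$conf"
done <<'EOF'
test 10.200.1.1
web 10.200.1.2
db 10.200.1.3
EOF
cat "$conf"
```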
After each change of the /etc/lxd-dnsmasq.conf file, you will need to reload lxd-bridge with the command:
service lxd-bridge restart
If this does not help, then you need to stop the containers with the wrong IPs, delete the dnsmasq.lxdbr0.leases file, and then reload the lxd-bridge:
lxc stop test
rm /var/lib/lxd-bridge/dnsmasq.lxdbr0.leases
service lxd-bridge restart
To make NAT work, execute the commands:
echo 1 > /proc/sys/net/ipv4/ip_forward
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
Edit the GRUB configuration file:
nano /etc/default/grub
Change the GRUB_CMDLINE_LINUX line to:
GRUB_CMDLINE_LINUX="swapaccount=1 quiet"
Without swapaccount=1, lxd prints a warning at startup that the cgroup swap account will not work, so I decided to enable the option. quiet is just a quiet system boot (optional). After editing, run update-grub so the change takes effect on the next boot.
Enable the lxd service at boot:
systemctl enable lxd
Reboot Ubuntu:
init 6
Add a repository (optional; the images remote is already configured by default):
lxc remote add images images.linuxcontainers.org:8443
Download image:
lxc image copy images:centos/6/amd64 local: --alias=centos-image
centos-image is an alias to make it easier to refer to the image.
Run the image:
lxc launch local:centos-image test
test is the name of the future container.
You can run images in two commands:
lxc init local:centos-image test
lxc start test
The first command will create a container, and the second will launch it. The first command is useful if you just want to create a container, but not run it.
View the status of running containers:
lxc list
The command should show the following information:
(Ubuntu)[root@ubuntu /]# lxc list
+------+---------+-------------------+------+------------+-----------+
| NAME | STATE   | IPV4              | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+-------------------+------+------------+-----------+
| test | RUNNING | 10.200.1.1 (eth0) |      | PERSISTENT | 0         |
+------+---------+-------------------+------+------------+-----------+
Note that LXD assigned the container the static IP that you configured in /etc/lxd-dnsmasq.conf.
The following commands mount the host folder /data/test/folder into the test container at /folder:
mkdir -p /data/test/folder
chown 100000:100000 /data/test/folder
lxc config device add test disk_name disk path=/folder source=/data/test/folder
Mounting a folder does not change the contents of /var/lib/lxd/containers/test; the device is mounted via a separate /var/lib/lxd/devices/test folder. Therefore backups and container images will not include mounted folders and files, and updating the container from a backup or image will not affect the contents of the mounted folders.
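The chown 100000:100000 above reflects how unprivileged containers map user IDs: by default LXD shifts container UIDs by a base offset (typically 100000, taken from /etc/subuid), so UID 0 inside the container corresponds to UID 100000 on the host. A sketch of the arithmetic, assuming the default offset:

```shell
# Translate a container UID to the corresponding host UID for an
# unprivileged container (assumes the default base offset of 100000)
base=100000

host_uid() {
    echo $(( base + $1 ))
}

host_uid 0     # container root maps to host UID 100000
host_uid 1000  # a regular container user maps to host UID 101000
```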
You can view configuration information through the command:
lxc config show test
Log in to the running test container:
lxc exec test -- /bin/bash
Check the connection:
ifconfig
Output:
[root@test ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:16:3E:23:21:3F
          inet addr:10.200.1.1  Bcast:10.200.255.255  Mask:255.255.0.0
          inet6 addr: fe80::216:3eff:fe23:213f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:15078 errors:0 dropped:0 overruns:0 frame:0
          TX packets:15320 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:28090645 (26.7 MiB)  TX bytes:841975 (822.2 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
Check NAT:
ping ya.ru
Output:
[root@test ~]# ping ya.ru
PING ya.ru (93.158.134.3) 56(84) bytes of data.
64 bytes from www.yandex.ru (93.158.134.3): icmp_seq=1 ttl=50 time=105 ms
64 bytes from www.yandex.ru (93.158.134.3): icmp_seq=2 ttl=50 time=106 ms
64 bytes from www.yandex.ru (93.158.134.3): icmp_seq=3 ttl=50 time=105 ms
64 bytes from www.yandex.ru (93.158.134.3): icmp_seq=4 ttl=50 time=105 ms
64 bytes from www.yandex.ru (93.158.134.3): icmp_seq=5 ttl=50 time=104 ms
64 bytes from www.yandex.ru (93.158.134.3): icmp_seq=6 ttl=50 time=106 ms
^C
--- ya.ru ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 6671ms
rtt min/avg/max/mdev = 104.942/105.845/106.664/0.568 ms
Install the base packages:
yum install mc nano openssh-server epel-release wget -y
yum update -y
chkconfig sshd on
service sshd start
Set the root password:
passwd
Disconnect from the container:
exit
Copy the host's ssh key to the container:
ssh-copy-id root@10.200.1.1
If Ubuntu complains that it cannot find a key, first generate an ssh key and then copy it with ssh-copy-id. If the key was copied successfully, skip this step (key generation).
ssh-keygen
Now you can enter the container via ssh without a password (via certificates):
ssh root@10.200.1.1
Often you need to connect via ssh to the container directly, bypassing the host (so that you do not have to log in to the host each time just to get into the container).
To do this, execute the command:
iptables -t nat -A PREROUTING -p tcp --dport 22001 -j DNAT --to-destination 10.200.1.1:22
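With several containers it is convenient to generate such rules rather than type them. The helper below only prints the iptables commands so they can be reviewed before running; the ports and IPs are examples:

```shell
# Print (not execute) a DNAT rule that forwards a host port to a
# container's ssh port 22; arguments: container IP, external port
dnat_rule() {
    ip=$1
    port=$2
    echo "iptables -t nat -A PREROUTING -p tcp --dport $port -j DNAT --to-destination $ip:22"
}

dnat_rule 10.200.1.1 22001
dnat_rule 10.200.1.2 22002
```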
In Ubuntu, iptables rules are lost by default after a host reboot. To solve this problem, create the file:
nano /etc/network/if-up.d/00-iptables
Write file contents:
#!/bin/sh
iptables-restore < /etc/default/iptables
#ip6tables-restore < /etc/default/iptables6
Set execute permissions:
chmod +x /etc/network/if-up.d/00-iptables
Save the current rules:
iptables-save > /etc/default/iptables
Reboot and try to connect to the container via ssh:
ssh root@<ip> -p 22001
If you restore iptables at boot, LXD will add its own rules on top, and iptables will contain duplicate entries. In addition, on most servers you need to block incoming connections and open only the necessary ports.
A ready-made /etc/default/iptables listing that solves both tasks at once is shown below:
# Generated by iptables-save v1.6.0 on Fri Aug 19 16:21:18 2016
*mangle
:PREROUTING ACCEPT [129:9861]
:INPUT ACCEPT [129:9861]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [102:11316]
:POSTROUTING ACCEPT [102:11316]
COMMIT
# Completed on Fri Aug 19 16:21:18 2016
# Generated by iptables-save v1.6.0 on Fri Aug 19 16:21:18 2016
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
# ssh test
-A PREROUTING -p tcp -m tcp --dport 22001 -j DNAT --to-destination 10.200.1.1:22
COMMIT
# Completed on Fri Aug 19 16:21:18 2016
# Generated by iptables-save v1.6.0 on Fri Aug 19 16:21:18 2016
*filter
:INPUT ACCEPT [128:9533]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [102:11316]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
# http ssh
-A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
#
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
# Completed on Fri Aug 19 16:21:18 2016
This method creates container backups as LXD images, ready for import. Ideally, you would take snapshots and send them to a private LXD repository, but sometimes that is not possible: for example, a small company may not be able to afford another server. In that case a simple tar + Amazon S3 solution will do.
Download ready-made scripts for creating and restoring backups:
wget https://github.com/vistoyn/lxd_backup/raw/1.1/scripts/lxc-backup -O "/usr/local/bin/lxc-backup"
wget https://github.com/vistoyn/lxd_backup/raw/1.1/scripts/lxc-restore -O "/usr/local/bin/lxc-restore"
Set the execution flag for scripts:
chmod +x /usr/local/bin/lxc-restore
chmod +x /usr/local/bin/lxc-backup
Before creating or restoring a backup, stop the running container. In principle you can back up a running container, but some data may be lost (depending on the programs installed in the container).
This command will create a backup of the test container, compress it into an archive, and save it to disk in the /backup/lxc/test folder:
lxc stop test
lxc-backup test
Restore backup from snapshot:
lxc-restore test /backup/lxc/test/snap-test-2016-08-19.tar.bz2
For ZFS, add ".zfs" after the container name.
Creating backup:
lxc stop test
lxc-backup test.zfs
Restore backup from snapshot:
lxc stop test
lxc-restore test.zfs /backup/lxc/test/snap-test.zfs-2016-08-19.tar.bz2
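Conceptually, the Dir-backend backup amounts to archiving the container's directory under a date-stamped name, which is roughly what the lxc-backup script automates (the real script also handles metadata and the ZFS case). A minimal sketch on a throwaway directory that stands in for /var/lib/lxd/containers/test; all paths below are illustrative:

```shell
# Minimal sketch of a tar-based, date-stamped backup.
# /tmp/demo-rootfs stands in for a container directory; paths are examples.
name=test
src=/tmp/demo-rootfs
dst=/tmp/backup/lxc/$name
mkdir -p "$src" "$dst"
echo "hello" > "$src/file.txt"

snap="$dst/snap-$name-$(date +%Y-%m-%d).tar.bz2"
tar -cjf "$snap" -C "$(dirname "$src")" "$(basename "$src")"

ls "$dst"   # snap-test-YYYY-MM-DD.tar.bz2
```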
On a new host, you sometimes need to create a container from a backup. To do this, you must first import the image, and then run it as a container.
Import backup command as LXD image:
lxc image import /backup/lxc/test/snap-test-2016-08-19.tar.bz2 --alias my-new-image
The command to run the image as a container:
lxc launch my-new-image test2
This article does not address many other LXD-related topics; see the official LXD documentation for further reading.
Source: https://habr.com/ru/post/308400/