
“Wings, paws and tails” of our Linux hosting, part 1: how we automated infrastructure deployment

- “Hey, bird, fly with us, there is so much tasty food there!”
- “This much?” [spreading its wings wide]
- “Uh-huh!” [nodding and bringing its wings together]

One of the main problems facing shared hosting providers is user isolation. It would, of course, be simpler and more reliable to create a container for each user, but this wastes resources and greatly reduces the density of sites that can be packed onto one machine (assuming a density that is comfortable for the client, not the “herring in a barrel” approach still found on ultra-cheap hosting, where even a static page of a client’s site loads noticeably slowly because the web server is overloaded). Moreover, it is not uncommon for one client to consume too many resources, accidentally or intentionally, to the detriment of everyone else.

To solve these problems without wasting server resources, we decided to use CloudLinux, which has already been mentioned on Habr. It is a RHEL-compatible distribution with a kernel based on OpenVZ. Its components (CageFS and LVE), together with the modified kernel, make it possible to limit users’ resources (CPU, memory, disk) without creating containers.

CageFS is a virtual file system that implements container-style isolation (operating-system-level virtualization). It creates the file structure of the user’s home directory and restricts the user’s namespace, isolating it from the other users of the system.
The default home directory template lives in /usr/share/cagefs-skeleton. In our case we use our own home directory structure; the path to its template (skeleton) is set in /etc/cagefs/cagefs.base.home.dirs.
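For illustration only (the layout below is hypothetical; the exact pattern syntax for this file is described in the CloudLinux documentation), inspecting a custom base-home-directory setup and re-applying it after changes could look like this:

# inspect the base directories under which user homes live
# (hypothetical example layout)
cat /etc/cagefs/cagefs.base.home.dirs
# /home
# /var/www

# after editing this file or the skeleton, rebuild and remount CageFS
cagefsctl --update
cagefsctl --remount-all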
LVE (Lightweight Virtual Environment) is a cgroups-based resource limiting technology. It allows you to limit resources for a specific user finely, at the kernel level, such as:

- CPU usage
- memory (physical and virtual)
- disk I/O
- number of processes
- number of entry processes (concurrent connections)
Moreover, the restrictions are applied based on the owner of the process: no matter how many times a user logs into the server over SSH, the resource limits are shared across all of their processes.
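As a sketch (the exact lvectl flag names and defaults vary between CloudLinux versions), per-user limits are keyed by the LVE ID, which matches the user’s UID:

# show the current limits for all LVEs
lvectl list

# cap the user with UID 2000 at 25% of a CPU core and 512 MB of RAM
lvectl set 2000 --speed=25% --pmem=512M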

When the resources of the active servers are fully exhausted, we provision a new one and create users there. Of course, all of this can be done by hand (especially since the monitoring system warns about resource exhaustion in the current infrastructure). But we know that the most expensive thing in IT is the specialist’s time, so it is better to “lose a day, but then fly for five minutes” and automate the routine operations.
To create and prepare new servers, we use Fabric and Puppet.
Fabric automates connecting to groups of servers and running commands remotely, while Puppet provides centralized configuration management.

I will walk through deploying onto a clean CentOS 6.7 x64 install, which we will turn into CloudLinux. I run the scripts remotely from my own machine (CentOS) against a machine with IP 10.0.0.146. Creating clean machines with a given OS is templated on our side and requires no effort at all; IP configuration is also performed automatically, with free addresses taken from a special accounting system that we call “Inventory”.

I install Fabric and Puppet on my workstation:

yum install gcc python-devel python-pip puppet puppet-server
pip install fabric
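A quick sanity check that both tools landed on the PATH (the versions printed depend on what your repositories ship):

fab --version      # Fabric 1.x
puppet --version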

The initial system preparation script:
install_web.sh
#!/bin/sh

if ! grep --quiet "^SELINUX=disabled" /etc/selinux/config; then
    echo "SELINUX=disabled" > /etc/selinux/config
    echo "disabled SELINUX, please reboot server and run script again!"
    exit
fi

if ! grep --quiet "var" /etc/fstab; then
    echo "please create separate mount point /var and run script again!"
    exit
fi

# fix fstab, activate quotas
perl -p -i -e "s/\/var\s+ext4\s+defaults\s+0 0/\/var\t\t\text4\tdefaults,noatime,nosuid,usrjquota=aquota.user,jqfmt=vfsv0\t1 2/" /etc/fstab
mount -vo remount /var
quotacheck -vumaf
quotaon -avu

set -e

# install cloudlinux
if ! echo $(uname -r) | grep --quiet "lve"; then
    wget http://repo.cloudlinux.com/cloudlinux/sources/cln/cldeploy
    sh -x cldeploy -i
    # if using non-IP-based activation, run instead: sh cldeploy -k <activation key>
    yum -y '--disablerepo=*' --enablerepo=cloudlinux* update
    yum -y '--disablerepo=*' --enablerepo=cloudlinux* install mod_hostinglimits cagefs lve-wrappers
    yum install libgcc.i686 glibc.i686 -y
    rm -rf cldeploy
    echo "installed CloudLinux, please reboot server and run script again!"
    exit
fi

# activate cloudlinux cagefs
cagefsctl --init
cagefsctl --set-min-uid 2000
/usr/sbin/cagefsctl --enable-all


On its first run the script disables SELinux and asks for a reboot; on the next run it remounts /var with quotas enabled, connects the CloudLinux repositories, and installs the necessary CloudLinux components, including the kernel:

 ssh 10.0.0.146 < install_web.sh 

After the reboot, running the script once more initializes the home directory template (cagefs-skeleton), sets the minimum UID from which CageFS user accounts start to 2000, and enables CageFS for all users.
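Both checks below use stock cagefsctl options and confirm that the caged file system is actually on:

cagefsctl --cagefs-status   # should print "Enabled"
cagefsctl --list-enabled    # users currently placed inside CageFS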

To install and configure the rest of the software, I use Fabric and Puppet.
First, add the new server (IP 10.0.0.146, hostname web.domain.com) to the Puppet configuration by creating a web.pp manifest that describes it:

node /^web.domain.com/ {
    include base_web
}
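Before deploying, the new manifest can be syntax-checked locally with stock Puppet (the path follows the repository layout used by the fabfile below):

puppet parser validate puppet/manifests/web.pp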

The base_web class pulls in the entire configuration of our server. Detailed guides on configuring a system with Puppet, including example classes, are widely available on the Internet, including the excellent Puppet CookBook.
Fabric configuration script:
fabfile.py
#!/usr/bin/python
from fabric.api import env, sudo, roles, settings
from fabric.contrib.project import rsync_project

env.roledefs['production'] = ['10.0.0.146']
env.roledefs['development'] = []

def shell_env():
    env.port = 22
    env.deploy_path = './puppet'

def deploy_puppet():
    shell_env()
    with settings(warn_only=True):
        result = sudo('rpm -q rsync')
        if not result.succeeded:
            sudo('yum install -y rsync')
    rsync_project(
        remote_dir=env.deploy_path,
        local_dir='./',
        exclude=['.svn', '*.pyc', 'fabfile.py', 'install_*.sh'],
        extra_opts='--delete-after'
    )
    with settings(warn_only=True):
        result = sudo('rpm -q epel-release')
        if not result.succeeded:
            sudo('rpm -Uvh http://mirror.yandex.ru/epel/6/x86_64/epel-release-6-8.noarch.rpm')
        result = sudo('rpm -q puppet')
        if not result.succeeded:
            sudo('yum install puppet -y')
    sudo('puppet apply --modulepath {0}/modules/ {0}/manifests/site.pp'.format(env.deploy_path))

@roles('production')
def deploy_p():
    deploy_puppet()

@roles('development')
def deploy_d():
    deploy_puppet()

def deploy():
    deploy_puppet()
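With the role definitions above, the same fabfile serves both environments; Fabric 1.x resolves the host lists from env.roledefs:

fab --list       # show the tasks defined in fabfile.py
fab deploy_d     # the same deploy against the (currently empty) development role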


Finally, I launch the Fabric production task, which installs the Puppet client and the additional software, and the system is ready:

 fab deploy_p 

With the help of Puppet, the same settings that we use for all servers of this type are uploaded to the new server.

The entire configuration (pushed to the servers via Puppet) is stored centrally in our internal SVN repository, which is very convenient when several people work on configuration and support. Suppose you need to change the default sysctl configuration: go to puppet/modules/sysctl/files/default/sysctl.conf and edit it to your liking.

The manifest puppet/modules/sysctl/manifests/init.pp will look like this:
init.pp
class sysctl::params {
    $template = "default"
}

class sysctl (
    $template = $sysctl::params::template
) inherits sysctl::params {

    file { "/etc/sysctl.conf":
        ensure => present,
        owner  => root,
        group  => root,
        mode   => 0640,
        source => [ "puppet:///modules/sysctl/$template/sysctl.conf" ],
        notify => Exec["sysctl_reload"];
    }

    exec { "sysctl_reload":
        command => $kernel ? {
            FreeBSD => "/etc/rc.d/sysctl reload",
            Linux   => "/sbin/sysctl -q -p /etc/sysctl.conf",
        },
        refreshonly => true
    }
}
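Before rolling the change out everywhere, it can be dry-run on a single server; --noop is a standard Puppet flag, and the paths match the fabfile above:

# preview what puppet would change, without applying anything
puppet apply --noop --modulepath puppet/modules/ puppet/manifests/site.pp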


After that, we run Fabric again, which performs a puppet apply on the servers we need:

 fab deploy_p 

So, thanks to the full automation of our Linux hosting, our admins do not waste time on routine tasks but instead solve non-standard problems. I can tell you this as one of those admins. :)

Source: https://habr.com/ru/post/273733/

