
Building a hosting platform based on Proxmox + HP ProLiant

Good day, everyone.

I was approached with the task of moving about 100 sites from three VPSes onto their own dedicated server, including a news site with a MySQL database of about 20 GB; the total volume of the (mostly small) hosted files is around 500 GB.
The server itself was installed in the provider's rack without my involvement, and two IP addresses were handed over: one for the server's management panel and one for the hosting itself.

Server configuration:

Proc 1: 2267 MHz
Execution technology: 6/6 cores; 12 maximum threads
Memory technology: 64-bit capable
Processor 1 Internal L1 Cache: 192 KB
Processor 1 Internal L2 Cache: 1536 KB
Processor 1 Internal L3 Cache: 12288 KB
Proc 2: 2267 MHz
Execution technology: 6/6 cores; 12 maximum threads
Memory technology: 64-bit capable
Processor 2 Internal L1 Cache: 192 KB
Processor 2 Internal L2 Cache: 1536 KB
Processor 2 Internal L3 Cache: 12288 KB
HDD:
1 PLEXTOR PX-256M
2 WDC WD1000DHTZ

RAM:
PROC 1 DIMM 4B: 16384 MB 1333 MHz
PROC 1 DIMM 6C: 16384 MB 1333 MHz
PROC 2 DIMM 4B: 16384 MB 1333 MHz
PROC 2 DIMM 6C: 16384 MB 1333 MHz


What was wanted from the server:
1. Windows Server 2012 for some purposes related to 1C.
2. The primary hosting (high load).
3. A test machine.

During setup I convinced the owners to purchase one more IP address for the test machine.

Two ISPmanager licenses were purchased, one for the main hosting and one for the test hosting.

As for the choice of *nix OS, there were no real options to weigh: I insisted on CentOS 6.

The server's admin panel was a pleasant surprise: everything is convenient and understandable.
So, given the tasks, I decided to build the virtualization on Proxmox, for which I downloaded the installation disk image.
In the admin panel I attached that image from my machine as a CD in the virtual media section. I was surprised that the image was not uploaded to the server but was pulled straight from my machine as the installer ran. The whole installation took about 1.5 hours; I have a 60 Mbit channel, but there were also people at work watching video online and downloading media files at the same time.
Worth noting: Proxmox was installed on the SSD, with the disk partitioning left entirely to the installer, everything at its defaults. That is, the WD drive remained pristine.
As a result, the Proxmox admin interface is available on the hosting IP address at https://<the hosting IP>:8006/.
I have to say I was pressed for time, so I did not dig into the "CT" (container) machines: I tried deploying the CentOS template a couple of times, but it only ever showed a black screen. I should mention that before this I had brought up CT templates on my local server; IMHO their advantage is access to all of the server's resources.

Next, I uploaded the CentOS ISO image for the virtual machines. There was a minor inconvenience in Proxmox: you cannot fetch an image by URL, only upload one from your local disk. So I had to download it to my machine first and then upload it to the server.

With the WD drive (/dev/sdb), which was to become the main storage, I proceeded as follows in the Proxmox console, following an article on the subject:

aptitude update && aptitude upgrade
pvcreate /dev/sdb
vgcreate ws /dev/sdb
lvcreate -n data -L980G ws        # 980 GB, leaving a reserve of about 10 GB in the volume group
mkfs.ext4 -L data /dev/ws/data    # this takes a long time on a volume this size, so it is worth
                                  # doing "aptitude install screen" first and running it detached:
                                  # screen -dmS createfs mkfs.ext4 -L data /dev/ws/data
echo "/dev/ws/data /var/lib/ws ext4 defaults 0 1" >> /etc/fstab
mount -a


Thus I got additional storage for the virtual machines' disks.
The swap files, however, I decided to place on the SSD; more on that below.

Now about permissions. Since the day-to-day user will not be root, we create one and grant it access rights to the storage and the machines. To do this I went to the Datacenter, the Groups tab, and created the group "users", then added the user to that group on the Users tab.
Note that if you log in as root, you have to pick the OS authentication realm "pam" in the Proxmox login form; I created my user directly in the Proxmox user database, so for that account the realm "pve" has to be selected at login.
One more gripe aimed at the Proxmox developers: it would be nice to have password generation built into the admin panel; with more than one user it gets inconvenient.
Go to the Datacenter, the Permissions tab, and give the "users" group (that is, the owners) admin rights to the ws storage and the right to create virtual machines. That is enough for now.
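For reference, roughly the same thing can be done from the host's command line with pveum; a minimal sketch, assuming the user is called "owner" (the name and the chosen roles are examples, not necessarily what was used here):

pveum groupadd users -comment "Hosting owners"
pveum useradd owner@pve -group users
pveum passwd owner@pve
# admin rights on the VMs and on the ws storage for the whole group
pveum aclmod /vms -group users -role PVEVMAdmin
pveum aclmod /storage/ws -group users -role PVEDatastoreAdmin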

To begin with I decided to create the test hosting machine. I created a VM "tests" (machine 100) with the following characteristics:
4 GB RAM; the default kvm64 processor; a CD drive with the CentOS image; an HDD as a VMware-format (vmdk) disk image with write-back caching, 120 GB on the ws storage; an Intel network card on bridge vmbr0 (a roughly equivalent CLI call is sketched just below). I will talk about the networking a bit further down, when we get to the hosting for the production sites.
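For the record, roughly the same VM could be created from the host's shell with qm; this is only a sketch (I actually used the web GUI, the ISO file name is a placeholder, and option syntax differs a little between Proxmox versions):

qm create 100 --name tests --memory 4096 --cpu kvm64 \
    --net0 e1000,bridge=vmbr0 \
    --cdrom local:iso/CentOS-6-x86_64-minimal.iso \
    --ide0 ws:120,format=vmdk,cache=writeback
qm start 100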
We install CentOS, hook up the repositories, update, install mc and atop, set up resolv.conf and so on. At that point I stopped the virtual machine.
Since installing the OS took me about an hour, I decided to optimize this step for the next n machines. So I went to /var/lib/ws/images/100 in the console and copied the vmdk disk image to /home.
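Something along these lines (the exact file name depends on how Proxmox named the disk, so treat it as an example):

cd /var/lib/ws/images/100
cp vm-100-disk-1.vmdk /home/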
Then I set the IP address on the virtual machine's eth0 interface (for now through the Proxmox console):

cd /etc/sysconfig/network-scripts
cat ifcfg-eth0
DEVICE=eth0
HWADDR=<the MAC address that Proxmox assigned>
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
IPADDR=<the address issued by the provider>
NETMASK=<the netmask issued by the provider>
GATEWAY=<the gateway issued by the provider>
DNS1=8.8.8.8
DNS2=8.8.4.4

service network restart


Next, the installer from the ISPmanager site (the installation section) rolls out the necessary services. Some time passed, the machine came up and started working. I did not touch anything in the configs and left everything at the defaults.

A couple of reservations about the Proxmox options for a virtual machine:
1. "Start at boot" has to be set to "Yes" or "No" by hand. The default is "No", so be careful.
2. "CPU units" is a rather obscure setting that nevertheless affects VM performance. I went about it the following way:

vzcpucheck      # run on the Proxmox host
Current CPU utilization: 4000
Power of the node: 906755

An OpenVZ VM with the default 1000 CPU units is getting 1000 divided by 906755 and multiplied by 100 = 0.1% of the CPU time. So if I want to give 5 percent of guaranteed CPU time to my VPS, I would enter CPU units = 45337.


I should say that I did exactly that for every machine I created: CPU units = 45000.

I mulled over what to do with the highly loaded hosting for quite a while and settled on the following:
create the VMs on the "one server, one service" principle. In addition, since the external hosting IP sits on the Proxmox host itself, we will use iptables to forward connections to the machines that need them. We also create an "internal" network, for example 192.168.12.0/24.
For this I decided to bring up a dummy interface on the Proxmox host:

 modprobe dummy 

Next, in the Proxmox admin panel (Datacenter - Network), create vmbr1 (a bridge) on top of the dummy interface
and give the Proxmox host the IP address 192.168.12.1 on it. The preparation is done; a sketch of the resulting config is below.
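To make this survive a reboot, the dummy module can be added to /etc/modules, and the bridge ends up described in /etc/network/interfaces roughly like this (a sketch of what the GUI writes, not a verbatim copy of my config):

echo "dummy" >> /etc/modules

# fragment of /etc/network/interfaces on the Proxmox host
auto vmbr1
iface vmbr1 inet static
        address 192.168.12.1
        netmask 255.255.255.0
        bridge_ports dummy0
        bridge_stp off
        bridge_fd 0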
Go.
Machine 1: mysql (VM 101). It was found out experimentally that a database of this volume (20 GB) lives comfortably in 27 GB of RAM. The processor: 4 sockets with 3 cores each. The hard disk is attached the same way as on the first machine: after Proxmox creates the VM, I copy over the HDD image of that first CentOS from /home (if you remember, the network and hostname are not configured there, but everything is updated and ready to go). The network card is on bridge vmbr1.
Now configure the network interface. Since eth0 stayed behind with the test machine (the copied image remembers the old MAC address, so the new NIC shows up as eth1), we need to create an ifcfg-eth1 file in /etc/sysconfig/network-scripts with the following content:
cd /etc/sysconfig/network-scripts
cat ifcfg-eth1
DEVICE=eth1
HWADDR=<the MAC address that Proxmox assigned>
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
IPADDR=192.168.12.10
NETMASK=255.255.255.0
GATEWAY=192.168.12.1
DNS1=8.8.8.8
DNS2=8.8.4.4

service network restart


We add the Percona repository and install Percona Server itself.
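On CentOS 6 that boils down to something like the following; the repository RPM URL and the 5.5 package names are written from memory and may well have changed, so treat this as a sketch:

rpm -Uvh http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
yum install Percona-Server-server-55 Percona-Server-client-55
chkconfig mysql on
service mysql start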
The MySQL configuration file is given below (it partly moved over from the old server and was partly tuned):
cat my.cnf
[mysqld]
user=mysql
skip-external-locking
low-priority-updates
port=3306
wait_timeout = 120
#innodb_force_recovery = 5
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
key_buffer = 256M
key_buffer_size = 256M
max_allowed_packet = 4M
thread_stack = 2048K
thread_cache_size = 8086
thread_concurrency = 8
query_cache_limit = 1G
query_cache_size = 1G
#myisam-recover = BACKUP
max_connections = 1400
table_definition_cache = 8000
join_buffer_size = 4M
tmp_table_size = 768M
max_heap_table_size = 768M
max_tmp_tables = 500
character-set-server = utf8
expire_logs_days = 2
innodb_data_home_dir = /var/lib/mysql
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /var/lib/mysql
innodb_file_per_table = 1
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_open_files = 1200
innodb_buffer_pool_size = 22G
innodb_buffer_pool_instances = 22
innodb_additional_mem_pool_size = 512M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 64M
innodb_log_buffer_size = 4M
innodb_lock_wait_timeout = 50
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
innodb_doublewrite = 0
innodb_support_xa = 0
innodb_checksums = 0
innodb_io_capacity = 120
max-connect-errors = 10000
back_log = 500
binlog_cache_size = 1M
sync_binlog = 0
key_cache_division_limit = 70

[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

[myisamchk]
key_buffer_size = 128M
sort_buffer_size = 128M
read_buffer = 64M
write_buffer = 64M
key_cache_division_limit = 70

As for the specific values, that is a topic for a separate article which may appear later, so I am presenting the fruit of my raids on MySQL tuning as is, without commentary.


As for setting up nginx as a proxy in front of Apache, there are plenty of articles on the topic. In addition, the same virtual machine also hosts Exim + Dovecot with their data in MySQL.
The nginx virtual machine (VM 102). The disk image is again copied in after the VM is created. CPU: 3 sockets with 3 cores; 4 GB RAM. The network card is on bridge vmbr1.
Here is the nginx config:
cat nginx.conf
user nginx;
worker_processes 6;
# an experiment with one worker per core and CPU affinity, left commented out
#worker_processes auto;
#worker_cpu_affinity 010000 100000 001000 000100 000010 000001;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 2048;
}

http {
    include /etc/nginx/mime.types;
    default_type text/html;
    access_log /var/log/nginx/access.log;
    sendfile on;
    keepalive_timeout 20;
    tcp_nodelay on;
    gzip on;
    gzip_comp_level 2;
    gzip_proxied any;
    gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
    client_max_body_size 64m;
    include /etc/nginx/conf.d/*;
}

cd /etc/nginx/conf.d
cat default
server {
    listen *:80;   ## listen for ipv4; this line is default and implied
    access_log /var/log/nginx/access.log;

    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }

    server_name nginx.xxx.ru;

    location / {
        proxy_pass http://192.168.12.20:80/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_connect_timeout 60;
        proxy_send_timeout 60;
        proxy_read_timeout 90;
    }
}

192.168.12.20 is the address of the Apache machine with ISPmanager;
192.168.12.30 is the address of the nginx (and Exim) machine, i.e. the one facing the outside network, which takes the traffic on the http/https/mail ports.


Create the Apache machine (VM 103). CPU: 4 sockets with 2 cores each, 16 GB RAM, a first 120 GB HDD for the prepared image, a second 500 GB HDD for the sites, and a network card on bridge vmbr1. We pour in the image of the already installed CentOS and configure it:

cd /etc/sysconfig/network-scripts
cat ifcfg-eth1
DEVICE=eth1
HWADDR=<the MAC address that Proxmox assigned>
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
IPADDR=192.168.12.20
NETMASK=255.255.255.0
GATEWAY=192.168.12.1
DNS1=8.8.8.8
DNS2=8.8.4.4

service network restart


How do we install ISPmanager on a host that sits inside the local network? Nothing smarter came to mind than to temporarily borrow the other IP address available to us, the one belonging to the tests machine: shut the tests VM down, disable its autostart, and temporarily move the Apache VM's network interface onto the vmbr0 bridge. After the reboot, from the Proxmox console inside the apache machine (VM 103):
cd /etc/sysconfig/network-scripts
cat ifcfg-eth1
DEVICE=eth1
HWADDR=<the MAC address that Proxmox assigned>
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
IPADDR=<the test machine's external IP>
NETMASK=<the netmask issued by the provider>
GATEWAY=<the gateway issued by the provider>
DNS1=8.8.8.8
DNS2=8.8.4.4

service network restart


We attach the 500 GB HDD at /var/www:

fdisk /dev/sdb
<< create a single partition spanning the disk >>
mkfs.ext4 -L data /dev/sdb1
echo "/dev/sdb1 /var/www ext4 defaults 0 1" >> /etc/fstab
mount -a


Next, the installer from the ISPmanager site (the installation section) rolls out the necessary services.
We switch off Postfix and Courier (our mail, after all, does not live here) and other unneeded services to taste. We also switch off MySQL and tell ISPmanager that the MySQL server lives on a different machine; this is done from ISPmanager in the MySQL section.
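For the panel and the sites to reach that remote MySQL, the database user naturally has to be allowed to connect from the web machine's address; a minimal sketch, run on the MySQL VM (the user name and password are placeholders):

mysql -u root -p -e "GRANT ALL PRIVILEGES ON *.* TO 'panel'@'192.168.12.20' IDENTIFIED BY 'secret'; FLUSH PRIVILEGES;"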
Then we put the settings back the way they were (apache back on vmbr1 with 192.168.12.20; tests set to autostart at boot; the hosting IP back on the Proxmox host).
On the Proxmox host (192.168.12.1; <real IP>) I add forwarding of port 1500 (the ISPmanager admin panel) to apache (192.168.12.20), SSH forwarding to the VMs, and port forwarding for nginx, all in rc.local:
cat /etc/rc.local
/sbin/iptables -F
/sbin/iptables -X
/sbin/iptables -t nat -A PREROUTING -p tcp -d <real IP> --dport 22003 -j DNAT --to-destination 192.168.12.10:22003
/sbin/iptables -t nat -A PREROUTING -p tcp -d <real IP> --dport 22002 -j DNAT --to-destination 192.168.12.20:22002
/sbin/iptables -t nat -A PREROUTING -p tcp -d <real IP> --dport 22004 -j DNAT --to-destination 192.168.12.30:22004
/sbin/iptables -t nat -A PREROUTING -p tcp -d <real IP> --dport 1500 -j DNAT --to-destination 192.168.12.20:1500
#nginx
/sbin/iptables -t nat -A PREROUTING -p tcp -d <real IP> --dport 80 -j DNAT --to-destination 192.168.12.30:80
/sbin/iptables -t nat -A PREROUTING -p tcp -d <real IP> --dport 445 -j DNAT --to-destination 192.168.12.30:445
/sbin/iptables -t nat -A PREROUTING -p tcp -d <real IP> --dport 443 -j DNAT --to-destination 192.168.12.30:443
/sbin/iptables -t nat -A PREROUTING -p tcp -d <real IP> --dport 25 -j DNAT --to-destination 192.168.12.30:25
/sbin/iptables -t nat -A PREROUTING -p tcp -d <real IP> --dport 110 -j DNAT --to-destination 192.168.12.30:110


As for starting ISPmanager, which requires the IP address the license is bound to to be present on the local machine, I again went through a dummy interface: I give it the required IP address and then do ifconfig dummy down. At first I did not bring it down, but the sites are written so that they fetch images for display via the site's own name, i.e. http://sitename/images/.../....jpg.
So if the interface is left up, the following happens: nslookup <site name> returns the <external IP>, and since that address is now sitting on the local dummy interface, the requests are answered locally instead of going out the usual way through nginx, and the pictures are simply not served. That is why the interface gets brought down as soon as the panel is up.
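Inside the Apache VM the sequence looks roughly like this (a sketch of the trick described above; the dummy0 name and the /32 netmask are assumptions):

modprobe dummy
ifconfig dummy0 <the IP the license is bound to> netmask 255.255.255.255 up
# ... install / activate ISPmanager ...
ifconfig dummy0 down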


I promised to tell you about the swap partition on the SSD. It is all quite simple: create a disk image for the VM on the default storage, the one where Proxmox itself was installed, set the required image size, and then attach the new partition inside the guest as follows:

mkswap /dev/sd<new disk>
cat /proc/swaps
swapoff -v /dev/sda<old swap partition>    # turn off the old swap
swapon /dev/sd<new disk>
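To keep the new swap after a reboot of the guest, it also needs a line in the guest's fstab (it shows up in the full fstab listed at the end); for example:

echo "/dev/sdc swap swap defaults 0 0" >> /etc/fstab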


Finally, here is the fstab of the Apache machine (it has the most virtual disks attached).

cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Mar 11 15:57:46 2014
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/VolGroup-lv_root  /         ext4    defaults        1 1
UUID=43977fc1-b315-4c84-8d4e-147f4063a60e /boot ext4 defaults   1 2
/dev/mapper/VolGroup-lv_home  /home     ext4    defaults        1 2
/dev/sdc                      swap      swap    defaults        0 0
tmpfs                         /dev/shm  tmpfs   defaults        0 0
devpts                        /dev/pts  devpts  gid=5,mode=620  0 0
sysfs                         /sys      sysfs   defaults        0 0
proc                          /proc     proc    defaults        0 0
/dev/sdb1                     /var/www  ext4    noatime,nodiratime,noacl,data=writeback,commit=15  1 2


Notice the mount options on the last line of the fstab: noatime,nodiratime,noacl,data=writeback,commit=15 sped up the file system noticeably. Before that I spent a long time digging through all kinds of forums and assembled that line from what I dug up; a hodgepodge, so to speak.

In principle, one way or another everything works and has been humming along for two weeks now, so the trial period has passed.
As for backups, I suggested going a very simple way: take a WD My Book Live (3 TB or so), enable FTP on it, and push the virtual machine snapshots that Proxmox happily creates over to it.
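In practice that can look roughly like this (a sketch, untested against the actual NAS; the dump path, the NAS address, the credentials and the archive name are all placeholders):

# dump VM 101 with a live snapshot and lzo compression
vzdump 101 --mode snapshot --compress lzo --dumpdir /var/lib/ws/dump
# push the resulting archive to the NAS over FTP
curl -T /var/lib/ws/dump/vzdump-qemu-101-2014_04_01-03_00_00.vma.lzo \
     --user backup:secret ftp://192.168.12.200/backups/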
As for forwarding passive FTP, I have not gotten around to it yet. The point is that ports 20 and 21 need to be forwarded to the apache machine, plus a certain range of ports for the passive connections, but I cannot get this forwarding to work: FTP from the outside simply will not connect to my server. It is not urgent, so I will sort it out as inspiration strikes; I would appreciate a hint. The FTP server is proftpd. The idea is to pin a fixed pool of ports for passive mode in its config and forward that pool inside from the Proxmox host; I just have not figured out how yet (a sketch of that idea follows).
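If anyone wants to try, the plan probably boils down to something like this; I have not tested it here, and the port range and addresses are examples:

# fragment of /etc/proftpd.conf on the apache VM
MasqueradeAddress <the real hosting IP>
PassivePorts 49152 49200

# on the Proxmox host: forward the control port and the passive range,
# and load the FTP connection-tracking helper
/sbin/iptables -t nat -A PREROUTING -p tcp -d <real IP> --dport 21 -j DNAT --to-destination 192.168.12.20:21
/sbin/iptables -t nat -A PREROUTING -p tcp -d <real IP> --dport 49152:49200 -j DNAT --to-destination 192.168.12.20
modprobe nf_conntrack_ftp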

Thanks for your attention!
P.S. I am told I will not be able to edit this article afterwards; I proofread it for about 30 minutes and everything seems clean. My apologies for any intentional or unintentional colloquialisms, mistakes or typos.

Source: https://habr.com/ru/post/218237/

