
Moving a XenForo Forum to a Modern Platform



Why it was necessary


The platform for our product's community has long been based on the XenForo forum engine. Until recently, the forum ran on a VPS with CentOS 6.8 and the vendor-supplied Apache 2.2.15, MySQL 5.1 and PHP 5.6.

With the upcoming release of XenForo 2.0, which raises the requirements for the underlying components, and a general desire to speed up the forum on a modern stack, it was decided to move to a VPS with nginx, the latest version of PHP, and a database running on Percona Server 5.7.
The instructions below do not claim to be the ideal solution with a perfect configuration; treat them as a general plan for running XenForo on nginx. They are aimed primarily at XenForo administrators who are not deeply versed in the intricacies of Linux administration and would like a common, basic set of instructions.

VPS preparation


CentOS 7.3 was chosen as the operating system simply because rpm-based distributions are more familiar to this administrator than deb-based ones :)

The VPS has 25 GB of disk space and 4 GB of RAM, and the command:

# cat /proc/cpuinfo | grep processor | wc -l 

shows 8, i.e. eight available CPU cores.
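
If coreutils is installed (it is by default on CentOS 7), the same number can be obtained with a shorter command; this is just a convenience, not part of the original instructions:

 # nproc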

First, remove all packages you do not need, such as Samba and httpd. Then install all available updates from the official repository:

 # yum update 

Next, connect the necessary third-party repositories and install the required components. Start with the Percona database server: add its repository and install the packages:

 # yum install http://www.percona.com/downloads/percona-release/redhat/0.1-4/percona-release-0.1-4.noarch.rpm
 # yum install Percona-Server-server-57

One subtlety here: during installation a temporary admin password is generated, which you need to look up with the command:

 # grep 'temporary password' /var/log/mysqld.log 

We will need it to further configure Percona Server with the command:

 # /usr/bin/mysql_secure_installation 

After that you will have a permanent password for your database server.
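
At this point it is convenient to create the database and the user that XenForo will connect with. A minimal sketch; the database name xenforo, the user xf_user and the password are placeholders, not values from the original setup:

 # mysql -u root -p
 mysql> CREATE DATABASE xenforo CHARACTER SET utf8;
 mysql> CREATE USER 'xf_user'@'localhost' IDENTIFIED BY 'strong_password_here';
 mysql> GRANT ALL PRIVILEGES ON xenforo.* TO 'xf_user'@'localhost';
 mysql> FLUSH PRIVILEGES;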

Next, connect the EPEL repository and install the nginx package itself:

 # yum install epel-release
 # yum install nginx

After that, set up the IUS repository and install the latest version of PHP with all the necessary components:

 # cd /tmp
 # curl 'https://setup.ius.io/' -o setup-ius.sh
 # bash setup-ius.sh
 # yum install php71u-fpm-nginx php71u-cli php71u-mysqlnd php71u-pecl-memcached php71u-opcache php71u-gd memcached

Enable all the necessary services with:

 # systemctl enable nginx
 # systemctl enable memcached
 # systemctl enable mysql
 # systemctl enable php-fpm

so that they all start automatically after a server reboot. This completes the preparation; now we move on to the hardest part: configuring everything so that our XenForo forum runs optimally.
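
As a quick sanity check (not part of the original instructions), you can confirm that the units are set to start on boot:

 # systemctl list-unit-files | grep -E 'nginx|php-fpm|memcached|mysql'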

Setting up services


In this section, nothing should be taken as the ultimate truth. Experienced administrators can surely suggest better tweaks. For less experienced ones, these are general recommendations that can be used as-is, as a real working configuration, or as a template for their own individual setup.

So, first, in the /etc/php.ini file set cgi.fix_pathinfo = 0. Then, in /etc/php-fpm.d/www.conf, comment out the line listen = 127.0.0.1:9000 and uncomment listen = /run/php-fpm/www.sock. Additionally, enable listen.acl_users = nginx. The result should look something like this:

 ;listen = 127.0.0.1:9000
 listen = /run/php-fpm/www.sock
 listen.acl_users = nginx

In the /etc/nginx/conf.d/php-fpm.conf file we also switch to working through the socket:

 #server 127.0.0.1:9000;
 server unix:/run/php-fpm/www.sock;

Restart php-fpm:

 # systemctl restart php-fpm 
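
To make sure PHP-FPM is actually listening on the socket, a quick check (the socket path matches the listen directive above):

 # ls -l /run/php-fpm/www.sock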

For security reasons, the memcached service is bound only to the address 127.0.0.1:

 # cat /etc/sysconfig/memcached
 PORT="11211"
 USER="memcached"
 MAXCONN="2048"
 CACHESIZE="1024"
 OPTIONS="-l 127.0.0.1"

Run it with:

 # systemctl start memcached 
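
To confirm that memcached is listening only on 127.0.0.1:11211, you can check with ss (part of iproute, available on CentOS 7):

 # ss -ltn | grep 11211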

make sure that memcached accepts connections on port 11211, and then configure XenForo to use it as the cache backend in accordance with the official XenForo documentation. There is one subtlety, though: instead of the line

 $config['cache']['backend'] = 'Memcached'; 

the line that actually worked for me was:

 $config['cache']['backend']='Libmemcached'; 
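
For context, a minimal cache section in XenForo's library/config.php might look like the sketch below; the cache_id_prefix and backend options are illustrative values, so check them against the official XenForo documentation rather than copying them blindly:

 $config['cache']['enabled'] = true;
 $config['cache']['frontend'] = 'Core';
 $config['cache']['frontendOptions']['cache_id_prefix'] = 'xf_';
 // the backend that worked in this setup (see above)
 $config['cache']['backend'] = 'Libmemcached';
 $config['cache']['backendOptions'] = array(
     'compression' => false,
     'servers' => array(
         array(
             'host' => '127.0.0.1',
             'port' => 11211
         )
     )
 );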

Percona Server can be tuned using Percona's configuration wizard or the well-known mysqltuner.pl script. Everything is at your discretion and depends on the resources of your hardware.

Just keep in mind that the configuration file is located at /etc/percona-server.conf.d/mysqld.cnf.
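
As a rough starting point for a 4 GB VPS, a few common InnoDB settings in that file could look like this; the exact values are assumptions on my part and should be adjusted with the wizard or mysqltuner.pl mentioned above:

 [mysqld]
 # roughly a quarter of RAM on a combined web/database host
 innodb_buffer_pool_size = 1G
 innodb_log_file_size = 256M
 # relax flush-per-commit durability slightly for a forum workload
 innodb_flush_log_at_trx_commit = 2
 max_connections = 200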

The most difficult part of this story is the nginx configuration. There is nothing special in the basic settings; just set worker_processes correctly (the number of processors, as determined by cat /proc/cpuinfo | grep processor | wc -l) and worker_connections (worker_processes * 1024):

 user nginx;
 worker_processes 8;
 error_log /var/log/nginx/error.log;
 pid /run/nginx.pid;

 include /usr/share/nginx/modules/*.conf;

 events {
     worker_connections 8192;
     use epoll;
     multi_accept on;
 }

Next is the http block. There are no particular subtleties here either, except for one very important point. We use FastCGI caching, and that requires additional settings: two configuration fragments in different parts of nginx.conf. First, here is what it looks like in the http block:

 http {
     access_log off;
     server_tokens off;
     charset utf-8;
     reset_timedout_connection on;
     send_timeout 15;
     client_max_body_size 5m;
     client_header_buffer_size 1k;
     client_header_timeout 15;
     client_body_timeout 30;
     large_client_header_buffers 2 1k;
     open_file_cache max=2000 inactive=20s;
     open_file_cache_min_uses 5;
     open_file_cache_valid 30s;
     open_file_cache_errors off;
     output_buffers 1 32k;
     postpone_output 1460;
     sendfile on;
     tcp_nopush on;
     tcp_nodelay on;
     keepalive_timeout 65;
     keepalive_requests 100000;
     types_hash_max_size 2048;
     include /etc/nginx/mime.types;
     default_type application/octet-stream;

     ### FastCGI Cache ################
     map $http_cookie $nocachecookie {
         default 0;
         ~xf_fbUid 1;
         ~xf_user 1;
         ~xf_logged_in 1;
     }
     map $request_uri $nocacheuri {
         default 0;
         ~^/register 1;
         ~^/login 1;
         ~^/validate-field 1;
         ~^/captcha 1;
         ~^/lost-password 1;
         ~^/two-step 1;
     }
     fastcgi_cache_path /tmp/nginx_fastcgi_cache levels=1:2 keys_zone=fastcgicache:200m inactive=30m;
     fastcgi_cache_key $scheme$request_method$host$request_uri;
     fastcgi_cache_lock on;
     fastcgi_cache_use_stale error timeout invalid_header updating http_500;
     fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
     ### FastCGI Cache ################

We will come back to the second part of enabling FastCGI caching in another block; for now, let's look at the next block, server:

 server {
     listen 80 reuseport;
     server_name domain.com;
     return 301 https://domain.com$request_uri;
 }

 server {
     listen 443 ssl reuseport http2;
     server_name domain.com;
     root /var/www/html;

     ssl_certificate "/etc/nginx/ssls/ssl-bundle.crt";
     ssl_certificate_key "/etc/nginx/ssls/domain_com.key";
     ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
     ssl_prefer_server_ciphers on;
     ssl_dhparam /etc/ssl/certs/dhparam.pem;
     ssl_ciphers "EECDH:+AES256:-3DES:RSA+AES:RSA+3DES:!NULL:!RC4";
     ssl_session_timeout 1d;
     ssl_session_cache shared:SSL:10m;
     ssl_session_tickets off;
     ssl_stapling on;
     ssl_stapling_verify on;
     resolver 8.8.8.8 8.8.4.4 77.88.8.8 valid=300s;
     resolver_timeout 5s;

     add_header X-Frame-Options DENY;
     add_header X-Content-Type-Options nosniff;
     add_header Strict-Transport-Security max-age=31536000;

It is important to configure the SSL certificate correctly. In our case a certificate from Comodo is used; instructions for installing it can be found on their website. To generate /etc/ssl/certs/dhparam.pem we use the command:

 # openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048 

The resulting SSL configuration can then be checked with an online SSL test.
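
A quick check of the certificate chain can also be done locally with openssl (domain.com is the same placeholder used in the config above):

 # openssl s_client -connect domain.com:443 -servername domain.com < /dev/null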

And finally, the last important nginx config blocks:

     location / {
         index index.php index.html;
         try_files $uri /index.php?$uri&$args;
     }

     location ~ /(internal_data|library) {
         internal;
     }

     location ~ /wp-content/ { return 444; }
     location ~ /wp-includes/ { return 444; }

     # define error page
     error_page 404 = @notfound;

     # error page location redirect 301
     location @notfound {
         return 301 /;
     }

     error_page 500 502 503 504 /50x.html;
     location = /50x.html { }

     location ~ \.php$ {
         fastcgi_max_temp_file_size 1M;
         fastcgi_cache_use_stale updating;
         fastcgi_pass_header Set-Cookie;
         fastcgi_pass_header Cookie;
         fastcgi_pass unix:/run/php-fpm/www.sock;
         fastcgi_index index.php;
         fastcgi_intercept_errors on;
         fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
         fastcgi_buffer_size 128k;
         fastcgi_buffers 256 16k;
         fastcgi_busy_buffers_size 256k;
         fastcgi_temp_file_write_size 256k;
         fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
         proxy_buffer_size 8k;
         include fastcgi_params;

         ### fastcgi_cache ###
         fastcgi_cache fastcgicache;
         fastcgi_cache_bypass $nocachecookie $nocacheuri;
         fastcgi_no_cache $nocachecookie $nocacheuri;
         fastcgi_cache_valid 200 202 302 404 403 5m;
         fastcgi_cache_valid 301 1h;
         fastcgi_cache_valid any 1m;
         add_header X-Cache $upstream_cache_status;
         ### fastcgi_cache end ###
     }

     gzip on;
     gzip_http_version 1.1;
     gzip_vary on;
     gzip_min_length 1100;
     gzip_buffers 64 8k;
     gzip_comp_level 6;
     gzip_proxied any;
     gzip_types image/png image/gif image/svg+xml image/jpeg image/jpg text/xml text/javascript text/plain text/css application/json application/javascript application/x-javascript application/vnd.ms-fontobject;
     gzip_disable "MSIE [1-6]\.(?!.*SV1)";

     location ~* \.(ico|css|js|gif|jpeg|jpg|png|woff|ttf|svg)$ {
         add_header "Access-Control-Allow-Origin" "*";
         root /var/www/html;
         expires 30d;
         add_header Pragma public;
         add_header Cache-Control "public";
     }
 }
 }

The location parameters here are very important for the correct operation of friendly URLs and PHP scripts and for closing access to the important internal directories internal_data and library. In addition, gzip compression and caching of static media files are enabled here, along with the second part of the FastCGI caching setup.
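
Since the php location adds an X-Cache header with $upstream_cache_status, it is easy to verify that FastCGI caching actually works: the first request to a cacheable page should report MISS and a repeated request HIT (domain.com is again a placeholder):

 # curl -sI https://domain.com/ | grep -i x-cache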

The actual transfer of the forum content consisted of moving the database dump and a tar.gz archive of the document root to the new server and deploying them there.
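
For reference, a minimal sketch of such a transfer; the database name, file names and paths are examples, not the exact ones used in the migration:

 # on the old server
 # mysqldump -u root -p xenforo_db | gzip > /tmp/xenforo_db.sql.gz
 # tar czf /tmp/forum_root.tar.gz -C /var/www/html .

 # on the new server, after copying both files over
 # tar xzf /tmp/forum_root.tar.gz -C /var/www/html
 # gunzip < /tmp/xenforo_db.sql.gz | mysql -u root -p xenforo_db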

Additional information about caching in nginx


Initially, I tried nginx microcaching. First, create a directory to store the cache:

 # mkdir /var/cache/nginx2 

Then create the /etc/nginx/conf.d/microcache.conf file with the following contents:

 fastcgi_cache_path /var/cache/nginx2 levels=1:2 keys_zone=microcache:5m max_size=1000m;

 map $http_cookie $cache_uid {
     default nil; # hommage to Lisp
     ~SESS[[:alnum:]]+=(?<session_id>[[:alnum:]]+) $session_id;
 }

 map $request_method $no_cache {
     default 1;
     HEAD 0;
     GET 0;
 }

and in the nginx config, the php location was changed like this:

 location ~ \.php$ {
     fastcgi_cache microcache;
     fastcgi_cache_key $server_name|$request_uri;
     fastcgi_cache_valid 404 30m;
     fastcgi_cache_valid 200 10s;

In principle, everything worked perfectly and the forum became very fast, except for one problem: sessions of registered, logged-in users started behaving strangely. Users would suddenly discover they had been logged out and had to log in again.

It turned out that the problem lies deep in the XenForo engine and is solved by installing the Logged In Cookie add-on and editing the XenForo templates helper_login_form and login_bar_form, replacing the line:

 <label class="rememberPassword"><input type="checkbox" name="remember" value="1" id="ctrl_pageLogin_remember" tabindex="3" /> {xen:phrase stay_logged_in}</label> 

with the line:

 <input type="hidden" name="remember" checked="checked" value="1" /> 

But I only learned all this later, when I had already set up the FastCGI caching described above, with which everything now works fine. So I believe the session problem would also be solved for nginx microcaching, but I did not verify it. You can try that caching option yourself.
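
If you do want to experiment with microcaching anyway, one untested idea (my assumption, not something from the original setup) is to reuse the $nocachecookie map from the http block above to bypass the microcache for logged-in users:

 location ~ \.php$ {
     fastcgi_cache microcache;
     fastcgi_cache_key $server_name|$request_uri;
     # untested sketch: skip the microcache for logged-in users
     fastcgi_cache_bypass $nocachecookie;
     fastcgi_no_cache $nocachecookie;
     fastcgi_cache_valid 404 30m;
     fastcgi_cache_valid 200 10s;
     # the remaining fastcgi_* directives stay as in the main php location
 }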

Conclusion


After testing the forum with Google PageSpeed and applying the corresponding additional optimizations, the substantial acceleration of the forum was impossible to miss. The forum now scores 86 points out of 100; previously, on Apache, it scored 78. There is still work to do on code optimization, especially for the mobile version.

In addition, I compared the old forum on Apache and the new one on nginx by load-testing a PHP script: 1000 requests in total with 300 concurrent connections. The results, as they say, speak for themselves:

Apache:


# ab -n 1000 -c 300 https://talk6.plesk.com/admin.php
This is ApacheBench, Version 2.3 <$ Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, www.zeustech.net
Licensed to The Apache Software Foundation, www.apache.org

Benchmarking talk6.plesk.com (be patient)
Completed 100 requests
SSL handshake failed (5).
SSL handshake failed (5).
Completed 200 requests
SSL handshake failed (5).
SSL handshake failed (5).
SSL handshake failed (5).
SSL handshake failed (5).
SSL handshake failed (5).
SSL handshake failed (5).
SSL handshake failed (5).
SSL handshake failed (5).
SSL handshake failed (5).
SSL handshake failed (5).
SSL handshake failed (5).
SSL handshake failed (5).
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software: Apache/2.2.15
Server Hostname: talk6.plesk.com
Server Port: 443
SSL/TLS Protocol: TLSv1/SSLv3, ECDHE-RSA-AES256-GCM-SHA384,2048,256

Document Path: /admin.php
Document Length: 3438 bytes

Concurrency Level: 300
Time taken for tests: 9.056 seconds
Complete requests: 1000
Failed requests: 44
(Connect: 0, Receive: 0, Length: 44, Exceptions: 0)
Write errors: 0
Total transferred: 3734136 bytes
HTML transferred: 3286728 bytes
Requests per second: 110.43 [#/sec] (mean)
Time per request: 2716.714 [ms] (mean)
Time per request: 9.056 [ms] (mean, across all concurrent requests)
Transfer rate: 402.69 [Kbytes/sec] received

Connection Times (ms)
              min  mean [±sd] median   max
Connect:        0  1987 1940.1   1223  8748
Processing:    59   257  800.3     76  4254
Waiting:        0    79   31.4     72   211
Total:        234  2244 1926.3   1472  8811

Percentage of the requests served within a certain time (ms)
50% 1472
66% 2019
75% 2683
80% 3068
90% 4278
95% 8313
98% 8625
99% 8787
100% 8811 (longest request)

nginx:


# ab -n 1000 -c 300 https://talk.plesk.com/admin.php
This is ApacheBench, Version 2.3 <$ Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, www.zeustech.net
Licensed to The Apache Software Foundation, www.apache.org

Benchmarking talk.plesk.com (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software: nginx
Server Hostname: talk.plesk.com
Server Port: 443
SSL/TLS Protocol: TLSv1/SSLv3, ECDHE-RSA-AES128-GCM-SHA256,2048,128

Document Path: /admin.php
Document Length: 3437 bytes

Concurrency Level: 300
Time taken for tests: 5.585 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 3932790 bytes
HTML transferred: 3474807 bytes
Requests per second: 179.05 [#/sec] (mean)
Time per request: 1675.541 [ms] (mean)
Time per request: 5.585 [ms] (mean, across all concurrent requests)
Transfer rate: 687.65 [Kbytes/sec] received

Connection Times (ms)
              min  mean [±sd] median   max
Connect:      182  1089  298.9   1185  1450
Processing:    55   261  279.5    159  1092
Waiting:       55   243  267.6    139   943
Total:        253  1350   81.5   1323  1510

Percentage of the requests served within a certain time (ms)
50% 1323
66% 1347
75% 1422
80% 1451
90% 1467
95% 1477
98% 1486
99% 1498
100% 1510 (longest request)

Monitoring the VPS resources at the forum's peak load also shows that resource consumption is very low. The forum interface was recently completely redesigned in accordance with the new corporate style, and combined with its snappy responsiveness this has become an additional advantage in attracting new members to our community.

P.S. I would be very grateful to nginx experts for pointing out mistakes and for tips on further optimizing the configuration.

Source: https://habr.com/ru/post/326636/

