I recently heard that Nginx serving PHP over FastCGI is faster than Apache with mod_php, and I have seen people both support and dispute this claim. Let's run a small test and see how these setups actually perform.
For the test I wrote a small "Hello world" script. Why nothing more complex? Simple: the PHP interpreter itself should perform about the same either way, so a heavier script would only hide the difference between the servers. Why not a completely blank page, then? Because I wanted data actually flowing in both directions; the goal is to measure the speed of the web server as a whole, not just PHP.
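For reference, here is a rough sketch of the kind of script and test run involved. The file name, document root, request count and concurrency are my assumptions; a concurrency of 10 is inferred from the ratio between the two "Time per request" figures, and 10,000 requests matches the transfer totals below.

# create the test script (path and exact output are assumptions)
cat > /var/www/html/hello.php <<'PHP'
<?php echo 'Hello world!'; ?>
PHP

# run ApacheBench against it; -n and -c are guesses consistent with the results
ab -n 10000 -c 10 http://127.0.0.1/hello.php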
Basic tests give us the following results:
Apache with mod_php
Total transferred: 3470000 bytes
HTML transferred: 120000 bytes
Requests per second: 2395.73 [#/sec] (mean)
Time per request: 4.174 [ms] (mean)
Time per request: 0.417 [ms] (mean, across all concurrent requests)
Transfer rate: 811.67 [Kbytes/sec] received
Nginx with PHP-FPM
Total transferred: 1590000 bytes
HTML transferred: 120000 bytes
Requests per second: 5166.39 [#/sec] (mean)
Time per request: 1.936 [ms] (mean)
Time per request: 0.194 [ms] (mean, across all concurrent requests)
Transfer rate: 801.82 [Kbytes/sec] received
Apache managed about 2,400 requests per second, while Nginx handled around 5,200; I had never seen a number that high before. To find the reason for the difference, I ran the same load against Apache under strace with -c -f: -c collects a per-syscall summary of time, calls and errors, and -f follows child processes.
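The exact invocation isn't shown in the post; something along these lines would produce a summary like the one below, attaching to the Apache parent process and following its children (the process name apache2 is an assumption; it may be httpd on some distributions):

strace -c -f -p "$(pgrep -o -x apache2)"
# run the ab test in another terminal, then stop strace with Ctrl+C to print the summary

Here are the first 10 lines of the result: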
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 33.65    0.042063           4     10003           getcwd
 16.10    0.020127           2     10001           writev
 16.00    0.019994           2     10001           shutdown
 10.54    0.013179           0     51836     40118 open
  9.01    0.011263           1     20008           semop
  5.22    0.006522           0     54507     10002 read
  2.53    0.003158           0     10024           write
  1.91    0.002386           0     88260     66483 close
  1.57    0.001959         245         8           clone
  1.16    0.001455           0     54832       384 stat64
getcwd? But why? Then I remembered that I had AllowOverride (.htaccess support) enabled, which makes Apache probe for .htaccess files on every request (hence the failed open() calls above). I ran the same tests again with it disabled.
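Disabling it means setting the directive to None in the relevant configuration block; a sketch of the change (paths and service name are assumptions):

# in the vhost's <Directory> section, change
#   AllowOverride All
# to
#   AllowOverride None
# then check the configuration and reload Apache:
sudo apachectl -t && sudo systemctl reload apache2

Apache with mod_php, AllowOverride disabled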
Total transferred: 3470000 bytes
HTML transferred: 120000 bytes
Requests per second: 5352.41 [#/sec] (mean)
Time per request: 1.868 [ms] (mean)
Time per request: 0.187 [ms] (mean, across all concurrent requests)
Transfer rate: 1813.40 [Kbytes/sec] received
At 5,352 requests per second, Apache now pulled ahead of Nginx. But what happens when the amount of transferred data grows? I generated roughly 100 KB of content and tried again.
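The post doesn't say how that content was produced. Dividing the HTML transferred by the request count gives about 104,857 bytes per response, so something like the following (file name and payload are assumptions) would reproduce the shape of the test:

cat > /var/www/html/big.php <<'PHP'
<?php echo str_repeat('a', 104857); ?>
PHP
ab -n 10000 -c 10 http://127.0.0.1/big.php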
Apache
Total transferred: 1051720000 bytes
HTML transferred: 1048570000 bytes
Requests per second: 2470.24 [#/sec] (mean)
Time per request: 4.048 [ms] (mean)
Time per request: 0.405 [ms] (mean, across all concurrent requests)
Transfer rate: 253710.79 [Kbytes/sec] received
Nginx
Total transferred: 1050040000 bytes
HTML transferred: 1048570000 bytes
Requests per second: 2111.08 [#/sec] (mean)
Time per request: 4.737 [ms] (mean)
Time per request: 0.474 [ms] (mean, across all concurrent requests)
Transfer rate: 216476.53 [Kbytes/sec] received
This time the difference was more noticeable, and in Apache's favor. With mod_php the PHP interpreter is embedded directly in the Apache processes, so there is no FastCGI round trip to PHP-FPM, and that pays off as the responses grow. If your server does nothing but run PHP, this looks like the best option in terms of performance.
If, on the other hand, you also serve static assets such as CSS, JavaScript and images, Nginx is the better fit: it will handle those faster, even though PHP itself won't get any quicker. It also holds up better against DDoS traffic, although a CDN remains the better answer to that problem.
Below are graphs to compare performance:
