Intermediate Caching (Opcode Caching)

Opcode caching is one of the easiest and most effective ways to increase PHP performance. It eliminates a large amount of repeated work that normally happens on every request: instead of compiling a PHP file to intermediate code each time it is executed, the compiled opcodes are stored in memory and reused.
There are many libraries for this kind of caching, for example APC, XCache, eAccelerator, and Zend Platform.
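Whichever library is used, code should not blindly assume the extension is present. A minimal runtime guard (the function name and fallback branch here are illustrative, not part of any of the libraries above) might look like this:

```php
// Returns true when the APC extension and its user-land API are available.
// Both checks are made because some builds expose the apc_* functions
// through a compatibility layer rather than the extension itself.
function opcode_cache_available()
{
    return extension_loaded('apc') && function_exists('apc_fetch');
}

if (opcode_cache_available()) {
    // safe to call apc_fetch() / apc_store() / apc_compile_file() here
} else {
    // fall back to running without a cache
}
```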
Caching intermediate code files

When the code base is large and the site has high traffic, we most likely do not want to pay the compilation cost the first time each PHP file is requested. In that case it makes sense to run a script before deploying the code to the server, so that the intermediate code is generated up front. For example, such a script can be implemented as follows:
/**
 * Compile files for APC.
 * Recursively walks each directory and compiles
 * every *.php file via apc_compile_file().
 *
 * @param string $dir start directory
 * @return void
 */
function compile_files($dir)
{
    $dirs = glob($dir . DIRECTORY_SEPARATOR . '*', GLOB_ONLYDIR);
    if (is_array($dirs))
    {
        foreach ($dirs as $v)
        {
            compile_files($v);
        }
    }
    $files = glob($dir . DIRECTORY_SEPARATOR . '*.php');
    if (is_array($files))
    {
        foreach ($files as $v)
        {
            apc_compile_file($v);
        }
    }
}

compile_files('/path/to/dir');
Variable caching

Most caching libraries also allow you to cache variable values. This is very useful for configuration values, or for data that is expensive to compute (or fetch) and does not change. (If the data only stays constant for a while, the same mechanism can serve as the basis of a cache with expiry. — translator's note)
if (!$config = apc_fetch('config'))
{
    require('/path/to/includes/config.php');
    apc_store('config', $config);
}
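The translator's note above mentions caching with expiry. APC supports this directly through the optional third argument to apc_store (a TTL in seconds), and the idea itself can be sketched in plain PHP. The class below is an illustrative in-process stand-in for what the TTL argument does, not part of APC:

```php
// Minimal TTL ("expiry") cache: values become stale after $ttl seconds.
// A stand-in showing what apc_store($key, $value, $ttl) does for you.
class TtlCache
{
    private $data = array();

    public function set($key, $value, $ttl)
    {
        $this->data[$key] = array(
            'value'   => $value,
            'expires' => time() + $ttl,
        );
    }

    public function get($key)
    {
        if (!isset($this->data[$key])) {
            return false; // never cached
        }
        if (time() >= $this->data[$key]['expires']) {
            unset($this->data[$key]); // stale entry: drop it and miss
            return false;
        }
        return $this->data[$key]['value'];
    }
}
```

With APC itself the equivalent is simply `apc_store('config', $config, 300);` to keep the value for five minutes.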
A practical example, measured with the ab utility against a Zend Framework application: the result of parsing an XML configuration file is stored in the cache. Skipping the parsing step makes access to the configuration parameters extremely fast.
Code:
if (!$conf = apc_fetch('pbs_config'))
{
    $conf = new Zend_Config_Xml(PB_PATH_CONF . '/base.xml', 'production');
    apc_store('pbs_config', $conf);
}
Command to test: ab -t30 -c5 http://www.example.com/
Result without caching
Concurrency Level: 5
Time taken for tests: 30.33144 seconds
Complete requests: 684
Failed requests: 0
Write errors: 0
Result with caching
Concurrency Level: 5
Time taken for tests: 30.12173 seconds
Complete requests: 709
Failed requests: 0
Write errors: 0
As you can see, caching the parsed configuration file gained us roughly 3-4% in throughput (709 requests versus 684 in 30 seconds). There are many other places that can be optimized in the same way; finding them will further increase the number of requests the server can handle.
File Caching Results

In some cases the server repeatedly processes requests that produce the same content. Such content can be cached, in whole or in part. The examples below are based on the PEAR::Cache_Lite package.
Full output caching

Caching the full output is quite hard to do on most sites, because data is constantly being updated from a large number of sources. That is true, but the data usually does not need to be fresh to the second: even a 5-10 minute delay on a heavily loaded site can noticeably increase throughput.
The example below saves a snapshot of the page for future requests, which lets it serve a large number of users cheaply. I do not recommend this as a long-term solution, but if you need something quick it works; sooner or later you will run into the disadvantages of this method.
The Bootstrap Cache Example:
require('/path/to/pear/Cache/Lite/Output.php');

$options = array(
    'cacheDir' => '/tmp/',
    'lifeTime' => 10
);
$cache = new Cache_Lite_Output($options);
if (!($cache->start($_SERVER['REQUEST_URI'])))
{
    require('/path/to/bootstrap.php');
    $cache->end();
}
An example based on .htaccess:

.htaccess

php_value auto_prepend_file /path/to/cache_start.php
php_value auto_append_file /path/to/cache_end.php

cache_start.php

require('Cache/Lite/Output.php');

$options = array(
    'cacheDir' => '/tmp/',
    'lifeTime' => 10
);
$cache = new Cache_Lite_Output($options);
if (($cache->start($_SERVER['REQUEST_URI'])))
    exit;

cache_end.php

$cache->end();
Cache_Lite does most of the hard work, such as file locking and deciding how to key the saved content on various parameters (in this example the REQUEST_URI is used). You may also want to include the values of $_POST, $_COOKIE, and $_SESSION in the cache key.
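When the cached output depends on more than the URI, the extra inputs can be folded into a single cache ID. A simple helper (the function name is hypothetical, not part of Cache_Lite) could look like this:

```php
// Build a single cache ID from every input that affects the output.
// Hashing the serialized inputs keeps the ID short and filesystem-safe
// regardless of what is in $_POST or $_COOKIE.
function build_cache_id($uri, array $post = array(), array $cookie = array())
{
    return md5(serialize(array($uri, $post, $cookie)));
}

// e.g. $cache->start(build_cache_id($_SERVER['REQUEST_URI'], $_POST, $_COOKIE));
```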
Partial caching

Partial caching is the typical optimization path. Most sites have parts that rarely change or do not have to be generated in real time; that is exactly where partial caching applies, and it yields a visible increase in performance.
String value caching

require('Cache/Lite.php');

$options = array(
    'cacheDir' => '/tmp/',
    'lifeTime' => 3600 // 1 hour
);
$cache = new Cache_Lite($options);
if (!($categories = $cache->get('categories')))
{
    $rs = mysql_query('SELECT category_id, category_name FROM category');
    $categories = '';
    // build the string to cache (format is up to you)
    while ($row = mysql_fetch_assoc($rs))
    {
        $categories .= $row['category_id'] . ': ' . $row['category_name'] . "\n";
    }
    $cache->save($categories, 'categories');
}
echo $categories;
Although this example is oversimplified, it shows the flexibility of caching a single value. You can also store array values and access them later.
Array Caching

require('Cache/Lite.php');

$options = array(
    'cacheDir' => '/tmp/',
    'lifeTime' => 3600, // 1 hour
    'automaticSerialization' => true
);
$cache = new Cache_Lite($options);
if (!($categories = $cache->get('categories')))
{
    $rs = mysql_query('SELECT category_id, category_name FROM category');
    $categories = array();
    while ($row = mysql_fetch_assoc($rs))
    {
        $categories[] = $row;
    }
    $cache->save($categories, 'categories'); // Cache_Lite uses save(), not store()
}
var_dump($categories);
As you can see, various types of data can be stored in the cache. However, I would not recommend file-based caching for the results of database queries.
In-memory caching

There are many ways to cache in memory: memcached, in-memory database tables, a RAM disk, and others.
Memcached

From the memcached site: memcached is a high-performance, distributed caching system that increases the speed of dynamic web applications by reducing the load on the database.
In other words, you can store data on one server that other servers will access. It is also independent of your web server (unlike intermediate-code caching), since memcached runs as a daemon; in most cases it is used to cache the results of database queries.
An example of working with Memcache:

$post_id = (int) $_GET['post_id'];
$memcached = new Memcache;
$memcached->connect('hostname', 11211);
if (!$row = $memcached->get('post_id_' . $post_id))
{
    // yes this is safe, we type casted it already ;)
    $rs = mysql_query('SELECT * FROM post WHERE post_id = ' . $post_id);
    if ($rs && mysql_num_rows($rs) > 0)
    {
        $row = mysql_fetch_assoc($rs);
        // cache compressed for 1 hour
        $memcached->set('post_id_' . $post_id, $row, MEMCACHE_COMPRESSED, time() + 3600);
    }
}
var_dump($row);
This is a fairly simple example of working with memcached: we keep a single item in memory so that future requests can access it cheaply. I recommend this approach for the data you refer to most often.
An example of configuring sessions to use Memcache:

session.save_handler = memcache
session.save_path = "tcp://hostname:11211"

As you can see, session support is pretty simple. If you have several memcached servers, save_path must contain a comma-separated list of them.
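For example, with two memcached hosts (the host names below are placeholders) the comma-separated form looks like this:

```ini
; php.ini: sessions backed by two memcached instances
session.save_handler = memcache
session.save_path    = "tcp://cache1.example.com:11211, tcp://cache2.example.com:11211"
```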
Memory Tables in the Database

In-memory database tables can also be used to store session data. In MySQL you can create a table of this type and then write your own session handler on top of it. This is one way to increase session performance.
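A custom handler is registered with session_set_save_handler(). The sketch below uses an in-process array as a stand-in for the MEMORY table (in a real handler each callback would run a query against it instead), so the function names and table layout are illustrative only:

```php
// Stand-in storage; in production this would be a MySQL table such as:
//   CREATE TABLE session_data (
//       id CHAR(32) PRIMARY KEY, data VARCHAR(4096), ts INT
//   ) ENGINE=MEMORY;
// (MEMORY tables do not support TEXT/BLOB columns, hence VARCHAR.)
$GLOBALS['session_store'] = array();

function sess_open($path, $name) { return true; }
function sess_close()            { return true; }

function sess_read($id)
{
    // real handler: SELECT data FROM session_data WHERE id = ?
    return isset($GLOBALS['session_store'][$id])
        ? $GLOBALS['session_store'][$id] : '';
}

function sess_write($id, $data)
{
    // real handler: REPLACE INTO session_data (id, data, ts) VALUES (?, ?, ?)
    $GLOBALS['session_store'][$id] = $data;
    return true;
}

function sess_destroy($id)
{
    unset($GLOBALS['session_store'][$id]);
    return true;
}

function sess_gc($max) { return true; } // real handler: DELETE old rows by ts

session_set_save_handler('sess_open', 'sess_close', 'sess_read',
                         'sess_write', 'sess_destroy', 'sess_gc');
```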
RAM disk

While using RAM as a disk is not a distributed approach, it can easily be adapted to increase the performance of a site. Remember that everything on such a disk disappears when the server is rebooted.
Creating a RAM disk:

mount -t tmpfs tmpfs /path/to/site/tmp
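Once such a mount exists, the earlier Cache_Lite examples can use it transparently just by pointing cacheDir at the mount point (the path below is a placeholder):

```php
// Cache files written under a tmpfs mount never touch the physical disk.
$options = array(
    'cacheDir' => '/path/to/site/tmp/', // the tmpfs mount created above
    'lifeTime' => 600,
);
// $cache = new Cache_Lite($options);  // used exactly as in earlier examples
```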
I would try to avoid this approach, because I believe the risk outweighs the benefit once a large number of servers is involved; in that case memcached is the better solution.
I hope the above was informative. It does not cover the full potential of caching, such as caching in distributed databases or using Squid; I will cover those in future articles.