Good day!

In this article we will describe how we work with the cache at plus1.wapstart.ru, what problems we have run into, and how we solved some special cases.
Let's start with terminology.

By "cache" in this article I mean some kind of fast storage that can be used, among other things, for caching. The storage should expose a standardized interface.

A server / storage is any application that can store data and give access to it via the interface described below; memcached is one example of such an application.
We use the onPHP framework. It has the abstract class CachePeer, from which all cache implementations inherit. The interface of any implementation boils down to the following methods:
abstract public function get($key);
abstract public function delete($key);
abstract public function increment($key, $value);
abstract public function decrement($key, $value);
abstract protected function store(
    $action, $key, $value, $expires = Cache::EXPIRES_MEDIUM
);
abstract public function append($key, $data);
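To make the shape of this interface concrete, here is a toy in-memory peer sketched against it (an illustration only, not onPHP code; the real in-process analogue in onPHP is the application-memory implementation mentioned below, and `set()` here stands in for the protected `store()`):

```php
<?php
// Toy in-memory cache peer, illustrating the CachePeer-style interface.
class ArrayPeer
{
    private array $data = [];

    public function get($key)
    {
        return $this->data[$key] ?? null;
    }

    public function delete($key): void
    {
        unset($this->data[$key]);
    }

    public function increment($key, $value)
    {
        return $this->data[$key] = ($this->data[$key] ?? 0) + $value;
    }

    public function decrement($key, $value)
    {
        return $this->data[$key] = ($this->data[$key] ?? 0) - $value;
    }

    public function append($key, $data): void
    {
        $this->data[$key] = ($this->data[$key] ?? '') . $data;
    }

    // Stands in for the protected store() of the real interface.
    public function set($key, $value): void
    {
        $this->data[$key] = $value;
    }
}
```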
In our world there are the following implementations of CachePeer:

This diagram shows both the storage implementations (see the bridge pattern) and the various decorators that solve particular problems.
onPHP has support for working with Redis; for Memcached there are as many as two implementations: one over sockets and one using the Memcache extension (http://php.net/Memcache); and we can work with SharedMemory. If none of these is available in the installation, we fall back to the application's own memory.
Despite all the variety of supported technologies, I do not know of a single onPHP project that uses anything other than Memcached.
Memcached rules this world. :)
We have two Memcache implementations for the following reasons:
- when the socket-based Memcached class was written, the Memcache extension did not exist yet. Yes, we know about the name conflict with http://php.net/Memcached; this has already been fixed in master.
- the socket-based implementation is available to you even when you have no access to the PHP settings and cannot install the necessary extensions.
Almost everywhere we use PeclMemcached, which connects to the server through the Memcache extension. Other things being equal, it works faster, and it also supports pconnect.
With the alternative library (MemcacheD) things somehow did not work out. I tried to write an implementation for it, but at that time (about two years ago) it was not very stable.
About decorators:
When the ratio of projects to caching systems becomes greater than one, WatermarkedPeer should be used. Its essence comes down to the getActualWatermark() method. For example, the get() implementation becomes:

public function get($key)
{
    return $this->peer->get($this->getActualWatermark().$key);
}
This avoids key conflicts: data from different projects / classes / etc. will be recorded under different keys. In every other respect this cache is a standard decorator over any storage implementation.
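A toy illustration of the watermarking idea (hypothetical helpers, not onPHP's WatermarkedPeer, which derives its watermark from configuration): two projects share one storage, yet their identical logical keys never collide, because every read and write goes through a per-project prefix.

```php
<?php
// Hypothetical helpers demonstrating key watermarking over shared storage.
function watermarkedSet(array &$storage, string $watermark, string $key, $value): void
{
    // The physical key is the watermark plus the logical key.
    $storage[$watermark . $key] = $value;
}

function watermarkedGet(array $storage, string $watermark, string $key)
{
    return $storage[$watermark . $key] ?? null;
}

$storage = [];
watermarkedSet($storage, 'projectA::', 'user:1', 'Alice');
watermarkedSet($storage, 'projectB::', 'user:1', 'Bob');
// Each project still sees its own value despite the identical logical key.
```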
If you need to spread data across multiple caching systems, you can either use the memcache cluster support from the PHP distribution or take one of our aggregate caches. We have several:
- AggregateCache chooses the server for the current key by seeding mt_srand with a value derived from the key and then taking mt_rand. It sounds weird, but in fact everything is very simple.
- SimpleAggregateCache is simple as a brick: the server is determined by the remainder of dividing the numeric representation of the key by the number of servers.
- CyclicAggregateCache is the implementation we saw at last.fm, and it is very elegant. Take a circle and put the servers' "mount points" on it, the number of points for each server being proportional to its weight. When a request for a key arrives, the key is also mapped to a point on the circle, and it is handled by the server whose point is closest to the key's point.
The advantage of this approach is that when a server is added to the pool, only a fraction of the values is invalidated, not all of them. Likewise, when a server is removed from the pool, only the values stored on it are lost, while the remaining servers take over its load more or less evenly. You can read more about the idea of the algorithm here or here.
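The circle-and-mount-points scheme can be sketched as a minimal consistent-hashing ring (an illustration in the spirit of CyclicAggregateCache, not its actual code; function names and the use of crc32 as the hash are assumptions):

```php
<?php
// Build the ring: each server gets several "mount points" on the circle,
// the count being proportional to its weight.
function buildRing(array $serverWeights, int $pointsPerWeight = 16): array
{
    $ring = [];
    foreach ($serverWeights as $server => $weight) {
        for ($i = 0; $i < $weight * $pointsPerWeight; $i++) {
            $ring[crc32("$server#$i")] = $server; // one mount point
        }
    }
    ksort($ring); // order the points around the circle
    return $ring;
}

// Map the key onto the circle and walk clockwise to the nearest point.
function locate(array $ring, string $key): string
{
    $hash = crc32($key);
    foreach ($ring as $point => $server) {
        if ($point >= $hash) {
            return $server; // first mount point clockwise from the key
        }
    }
    return reset($ring); // wrapped all the way around the circle
}
```

Removing a server deletes only its mount points, so only the keys that landed on those points get remapped; everything else stays where it was.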
This is where the more or less standard part ends: this set of implementations is enough for most applications to build a normal caching system.
Then the special cases begin.
- DebugCachePeer - its name is self-documenting to such an extent that I see no point in describing it.
- ReadOnlyPeer - there are caches from which you can only read but never write. For example, they may be filled from some other place, or even implemented differently, like our swordfish. For such storages it makes sense to use ReadOnlyPeer, since it ensures on the application side that the data will only be read, never written or updated.
- CascadeCache - suppose you have a fast, lightly loaded local cache, for example one reachable over a socket, and some remote cache that is also pre-filled. If your application is allowed to use slightly stale data, you can use CascadeCache: reads go to the local cache, and if the data is missing there, it is requested from the remote cache.
For "negative" results (null) one of two strategies can be used: they are either stored in the local cache or ignored.
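The read path just described can be sketched roughly like this (a simplified illustration, not onPHP's CascadeCache; the peer objects and their get/set methods are assumptions):

```php
<?php
// Sketch of a cascade read: try the fast local cache first, fall back
// to the remote one on a miss, and optionally remember negative (null)
// results locally so the remote cache is not asked again.
function cascadeGet($local, $remote, string $key, bool $cacheNegative)
{
    $value = $local->get($key);
    if ($value !== null) {
        return $value;                 // local hit, cheapest path
    }
    $value = $remote->get($key);       // local miss: ask the remote cache
    if ($value !== null || $cacheNegative) {
        $local->set($key, $value);     // warm the local cache
    }
    return $value;
}
```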
- MultiCachePeer - you have a pre-filled cache, and you want it to be able to fill a dozen "local" caches on ten servers.
In other words, we want to write data in one place and read it, generally, from another, while keeping one and the same configuration on every application server for ease of deployment. To do this you can use MultiCachePeer with something like this config:
MultiCachePeer::create(
    PeclMemcached::create('localhost', 11211),
    array(
        PeclMemcached::create('meinherzbrennt', 11211),
        PeclMemcached::create('links234', 11211),
        PeclMemcached::create('sonne', 11211),
        PeclMemcached::create('ichwill', 11211),
        PeclMemcached::create('feuerfrei', 11211),
        PeclMemcached::create('mutter', 11211),
        PeclMemcached::create('spieluhr', 11211)
    )
);
- SequentialCache - imagine that you have a storage that sometimes crashes or is simply unavailable: it may occasionally be taken down for maintenance, restarted, and so on, while the application always wants its data. To cover this situation you can use SequentialCache, approximately with the following config (the 'backup' fallback peer here is illustrative):

$cache = new SequentialCache(
    PeclMemcached::create('master', 11211, 0.1),
    array(
        PeclMemcached::create('backup', 11211, 0.1),
    )
);
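The idea behind SequentialCache can be sketched as follows (an illustration, not onPHP's code; the peer objects and the use of exceptions to signal an unreachable server are assumptions):

```php
<?php
// Sketch of sequential failover: try each peer in order and return the
// first successful answer, so a restarted or unavailable master is
// covered by its backups.
function sequentialGet(array $peers, string $key)
{
    foreach ($peers as $peer) {
        try {
            return $peer->get($key);   // first reachable peer wins
        } catch (RuntimeException $e) {
            continue;                  // peer is down: try the next one
        }
    }
    throw new RuntimeException("no cache peer could serve '$key'");
}
```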
Since almost all implementations use the decorator pattern, they can be combined quite successfully.
For example, the following construction is acceptable:
$swordfish = ReadOnlyPeer::create(
    new SequentialCache(
        PeclMemcached::create('localhost', 9898, 0.1),
        array(
            PeclMemcached::create('backup', 9898, 0.1),
        )
    )
);
Or even this:
$swordfish = CascadeCache::create(
    PeclMemcached::create('unix:///var/run/memcached_sock/memcached.sock', 0),
    ReadOnlyPeer::create(
        new SequentialCache(
            PeclMemcached::create('localhost', 9898, 0.1),
            array(
                PeclMemcached::create('backup', 9898, 0.1),
            )
        )
    ),
    CascadeCache::NEGATIVE_CACHE_OFF
);
In this case the data will first be looked up in the local memcached reachable over a unix socket; if it is not there, localhost:9898 will be queried, and if that is unavailable, backup:9898. At the same time, the application knows that the caches on port 9898 can only be read from, not written to.
This is not all there is to onPHP caches: you can build completely different configurations to cover your own tasks. CachePeer from onPHP is awesome.
P.S. A while ago we talked here about a series of articles on onPHP; this post is the start. In the future we will touch on other topics related to the framework and its use at plus1.wapstart.ru.
P.P.S. Taking this opportunity, I would like to mention that we are looking for people:
hantim.ru/jobs/11163-veduschiy-qa-menedzher-rukovoditel-otdela-testirovaniya
hantim.ru/jobs/11111-veduschiy-php-razrabotchik-team-leader