
Fighting PHP Cache Fragmentation

I hope everyone is already sold on the need for caching: caching rendered output on their sites, caching intermediate results of database work, or simply caching script opcodes so they execute faster.
So what do the tool makers offer for this job?

The clear leader here (in my personal opinion) is memcached.
It is distributed, and it has its own memory pool (ordinary cachers live in the web server's shared memory, where you rarely get more than 32-512 MB), holding up to 4 GB.
(4 GB is the limit for a single memcached process, due to 32-bit addressing; if you have 32 GB of RAM, just run a dozen daemons on one machine and another dozen on a second one.)
But memcached has its downsides:
1. It is slow: network round-trips (even over localhost) are not fast.
2. It does NOT cache opcode.

Still, don't forget its main advantage: it can cache a LOT of data...

So we have established that memcached alone won't feed us.
We also need to install an opcode cacher.
Again, in my opinion, the choice comes down to:
1. APC
2. eAccelerator
3. XCache

ALL of them handle the data cache well, two or three times faster than memcached.
With memcached only the store is cheap (asynchronous?); but try pulling data back out: while the packet travels there and back, a millisecond is gone for sure, maybe all of five.

Plus of XCache: it optimizes scripts very well.
Plus of eAccelerator: it has lock/unlock operations, which let you prevent the same block from being regenerated by several users simultaneously.
(I had one block, the "most popular news of the week", being cached in 6 threads at once. The database simply went down because of cross-locking. Before switching to eAccelerator I had to remove that block entirely; locking via the filesystem I consider unreliable.)
Plus of APC: it works faster than eAccelerator, but offers nothing "special" beyond that.
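The lock/unlock trick can be sketched like this. This is a minimal Python simulation of the idea only (the real thing would call eAccelerator's lock/unlock from PHP); the dicts, the named-lock table, and get_or_build are illustrative inventions, not any real API:

```python
import threading

cache = {}                 # simulated shared data cache: name -> value
locks = {}                 # simulated per-name locks (eAccelerator-style)
locks_guard = threading.Lock()

def get_or_build(name, builder):
    """Return the cached value; only ONE thread runs the expensive builder."""
    if name in cache:
        return cache[name]                  # fast path: already cached
    with locks_guard:                       # create/fetch the per-name lock atomically
        lock = locks.setdefault(name, threading.Lock())
    with lock:                              # "lock($name)" equivalent
        if name not in cache:               # re-check: another thread may have built it
            cache[name] = builder()         # expensive DB query runs exactly once
        return cache[name]                  # "unlock($name)" happens on exit
```

With six concurrent requests for the same block, the builder fires once instead of six times, which is exactly the dog-pile the author describes.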

On my local machine (Windows), XCache does the opcode optimization and eAccelerator does the data caching (because of lock/unlock).
On the server... well, on the server the system refuses to run them side by side.
It throws itself on the floor, kicking and screaming: either XCache or APC, no polygamy here!
So it runs eAccelerator alone.
The data cache currently lives in memcached on two servers.
On top of that, every cache read/write operation is duplicated in both the local cache and memcached.
That is, a store operation goes like this:
$ttl += rand(0, $ttl / 10); // VERY helpful: caches created at the same moment then EXPIRE at different times
cache_storelocal($name, $value, $ttl / 2); // put into eAccelerator for half the lifetime
cache_storeglobal($name, $value, $ttl); // put into memcached for the full term

And when we fetch from the cache, we first look in the local one, then in the global one, and on a global hit we copy the value back into the local cache.
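The store-with-jitter and the two-tier read path can be modeled in a few lines. A hedged Python sketch standing in for the PHP helpers: the two dicts merely simulate eAccelerator and memcached, and cache_store/cache_get mirror the cache_storelocal/cache_storeglobal calls above:

```python
import random
import time

local_cache = {}    # stands in for eAccelerator (per-machine, fast): name -> (value, expiry)
global_cache = {}   # stands in for memcached (shared, slower): name -> (value, expiry)

def cache_store(name, value, ttl):
    ttl += random.randint(0, ttl // 10)          # jitter: same-moment caches expire at different times
    now = time.time()
    local_cache[name] = (value, now + ttl / 2)   # local tier gets half the lifetime
    global_cache[name] = (value, now + ttl)      # global tier gets the full term

def cache_get(name):
    now = time.time()
    entry = local_cache.get(name)
    if entry and entry[1] > now:                 # 1. try the local tier first
        return entry[0]
    entry = global_cache.get(name)
    if entry and entry[1] > now:                 # 2. fall back to the global tier
        local_cache[name] = entry                # 3. copy the hit back into the local tier
        return entry[0]
    return None                                  # miss in both tiers: caller must rebuild
```

Most reads are then served from process-local memory, and memcached only absorbs the misses after local entries expire.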
Ah, right, I almost forgot the actual topic...
In general, we were living well, and then the projects simply started falling apart.
You get up in the morning: sites are hanging (segmentation fault). service httpd restart
An hour passes: sites start to crawl (tens or hundreds of times slower, CPU usage around 7%). service httpd restart
We sat on this problem for three days.
And the problem turned out to be simple: fragmentation of the cache's memory (projects that did not use the local cache worked fine)...
And here is what that fragmentation looked like: with 64 MB allocated, the cacher HONESTLY reported that its data weighed... gigabytes (usually 1.6 GB).
You go look at the variable handles, and some 60-byte array weighs 8 megabytes.

To start with, we decided to dodge the problem by caching only large data (images and texts) locally; so far the server has been up for about 50 hours without a restart...

But when I got to a "secondary" server (it handles parallel AJAX requests), I saw a sorry picture: 100% fragmentation. Every stored item was tiny, but there were an awful lot of them.
So what to do? How do we kill the fragmentation?

Personally, I have come up with only one solution:
1. Allow "ourselves" to take more memory.
2. Sequentially read all the data out of the local cache (it is 64 MB, of which 24 MB is the file cache, so about 40 MB of data, most of it garbage).
3. Flush the cache.
4. Write the data back.
The whole thing takes about a second and a half, and runs from cron once an hour.
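For what it's worth, the hourly dump-flush-restore routine can be sketched as follows. Again a pure Python simulation under stated assumptions: the dict stands in for the eAccelerator data segment, and step 1 (extra memory headroom) is implicit in holding the snapshot while the cache is rebuilt:

```python
import time

data_cache = {}  # stands in for the eAccelerator data cache: name -> (value, expiry)

def defragment(cache):
    """Dump-flush-restore: rewriting every live entry into a freshly
    cleared cache packs the allocations contiguously again."""
    now = time.time()
    # 2. sequentially read out all still-live entries (expired garbage is dropped)
    snapshot = {name: (value, expiry)
                for name, (value, expiry) in cache.items()
                if expiry > now}
    # 3. flush the cache (in the real cacher this frees the whole arena)
    cache.clear()
    # 4. write the data back in one contiguous pass
    cache.update(snapshot)
    return len(snapshot)
```

A side benefit of the dump pass is that expired-but-unreclaimed entries get discarded instead of being copied back.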
But to me this solution feels like a crutch.
Any other ideas?

PS: I reject in advance the suggestion to simply stop caching variables locally.
Roll it out and check for yourself!

Source: https://habr.com/ru/post/23206/


All Articles