Good day to all! After reading the article "An online flower shop, or how we screwed up on Valentine's Day", I decided to share my experience of optimizing a site on Bitrix. For some reason it was that article that gave me the decisive push to write this up. I want to believe that my story will save someone precious time (thanks to my "finish it no matter what" trait, I spent two days off to reach the goal; I didn't want to leave the client without a working site over the weekend), and I hope that more experienced colleagues will point out my mistakes.
On Friday I was handed a Bitrix site with an auto parts catalog and a 3.2 GB database. The problem: the site either did not return a page at all, or made you wait so long that you forgot why you had come. What I tried and what I managed to achieve in the end, I will tell under the cut.
So, more specifically, the parameters of the old hosting:
- VDS;
- 8 GB of RAM (4 GB on the new hosting);
- 40 GB SSD;
- Bitrix environment 5.* (7.0 on the new hosting);
- PHP 5.6 (7.0 on the new hosting);
- MySQL 5.5.*;
- file-based Bitrix caching;
- agents run on hits.
Usually I take the following steps to optimize a Bitrix site (VDS), but this time they did not give any tangible results:
- moving agent execution from hits to cron (more; a sketch follows this list);
- setting up memcached (more);
- this time, a migration to new hosting with updated components (PHP, MySQL, etc.) was added to the list.
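For the first item, the usual recipe is to stop executing agents on hits and run them from cron instead. A minimal sketch, assuming the standard BitrixEnv layout (the paths and the 5-minute interval are common conventions, not something taken from this particular site):

// /bitrix/php_interface/cron_events.php: run agents and mail events from cron
$_SERVER["DOCUMENT_ROOT"] = "/home/bitrix/www"; // adjust to your document root
define("NO_KEEP_STATISTIC", true);
define("NOT_CHECK_PERMISSIONS", true);
define("BX_CRONTAB_SUPPORT", true); // agents are served by cron, not by hits
define("BX_CRONTAB", true);
require($_SERVER["DOCUMENT_ROOT"]."/bitrix/modules/main/include/prolog_before.php");
CAgent::CheckAgents(); // execute all pending agents
CEvent::CheckEvents(); // flush the queued mail events

The crontab entry then looks like this:

*/5 * * * * php -f /home/bitrix/www/bitrix/php_interface/cron_events.php >/dev/null 2>&1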
When I decided to deploy a local copy, I was greatly surprised by the 3.2 GB database, and especially by the b_sale_fuser table (2.4 GB), which stores visitors' baskets. As it turned out, it contained data going back to 2014. Looking inside this table, I noticed several things:
- 80% of the data was from the last month alone (17+ million records in total);
- records were created at intervals of a few seconds; the standard mechanism for cleaning up abandoned baskets simply could not keep up;
- the table has three indexes, which means that every change to its data also updates the indexes, and that costs extra resources.


At this stage, I assumed that the problem lay in calls to CSaleBasket::GetBasketUserID() without its optional parameter. The nuance is that the bSkipFUserInit parameter controls whether a record is created in this table even when the visitor has not yet put anything in the basket. My guess was confirmed when I found a call to the ill-fated method, without the necessary parameter, in one of the result_modifier.php files. After fixing that and clearing the table of stale data (about 3 hours, because MySQL kept falling over, and the data also had to be removed from the related tables; all of this was done with standard Bitrix methods, which I later regretted, more on that in the conclusions), the number of records dropped from 19+ million to 400+ thousand, which had a beneficial effect on the local copy, but the result still did not satisfy me. Pages were now served in 20-30 seconds instead of several minutes.
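To illustrate the difference (a minimal sketch; the actual code surrounding the call in that result_modifier.php is not shown here):

// Without the parameter, a b_sale_fuser record is created for every
// visitor, even one who has nothing in the basket yet:
$fUserId = CSaleBasket::GetBasketUserID();

// With $bSkipFUserInit = true, no record is created for an empty basket:
$fUserId = CSaleBasket::GetBasketUserID(true);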
Then it was decided to look for slow queries. Since we use BitrixEnv, editing the MySQL config looks like this:
nano /etc/mysql/bx/bvxat.cnf
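In that config (or at runtime, without a restart) you can enable the slow query log; a typical snippet, with an illustrative threshold:

-- log every query that runs longer than 5 seconds
SET GLOBAL slow_query_log = 1;
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
SET GLOBAL long_query_time = 5;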
The search turned up two queries that ran for 300+ seconds (see below). One of them displayed 4 random products from the entire catalog; I decided to comment out that component until better times. But the second one could not simply be excluded, since it builds the main menu (see below).
Sql query (slow log excerpt): Tcp port: 3306, Unix socket: /var/lib/mysqld/mysqld.sock; columns: Time, Id, Command, Argument
At first it did not bother me that on the production server this query ran for 300+ seconds while on the local machine it took 20+; I figured the reason was insufficient load on the local site. That is, the production site had about 20 visitors per minute, while on the local copy only I was making requests. So I decided to use the JMeter utility (see below).
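In non-GUI mode, running a JMeter test plan looks like this (the .jmx plan itself is built in the GUI; file names here are placeholders):

jmeter -n -t load-test.jmx -l results.jtl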

After running this test with 20 requests, I opened the site in the browser and immediately received the following error: Incorrect key file for table '/tmp/*'. As it turned out, for each SQL query MySQL was creating temporary tables on disk in the temporary folder, and there was not enough space left there. Since I am not strong in MySQL internals, I went with the question to all-knowing Google (have you had even one day without resorting to search?!), which explained the following: if the result set contains fields of type TEXT/BLOB, the database creates its temporary tables on disk. And the great assistant, as always, was right! The b_iblock_section table has a couple of such fields, namely DESCRIPTION and SEARCHABLE_CONTENT.
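By the way, it is easy to check how often this happens: MySQL keeps counters for implicitly created temporary tables (standard status variables):

-- total implicit temp tables vs. those that spilled to disk
SHOW GLOBAL STATUS LIKE 'Created_tmp_tables';
SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';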

By removing these fields from the query and rewriting it (see below), we managed to speed it up several times! As a result, instead of 20+ seconds on the local machine, the query began returning its result in 1.5 seconds. However, it was too early to rejoice, since this query is built in the Bitrix system file /bitrix/modules/iblock/classes/mysql/iblocksection.php. Unfortunately, I found nothing better than to patch it directly, although I am aware that the first Bitrix kernel update may erase my edit. But by that point I had been struggling with this site for three days straight and Sunday evening was running out, so I left it at that...
It became (an explicit field list, without the TEXT columns DESCRIPTION and SEARCHABLE_CONTENT):
BS.ID, BS.TIMESTAMP_X, BS.MODIFIED_BY, BS.DATE_CREATE, BS.CREATED_BY, BS.IBLOCK_ID, BS.IBLOCK_SECTION_ID, BS.ACTIVE, BS.GLOBAL_ACTIVE, BS.SORT, BS.NAME, BS.PICTURE, BS.LEFT_MARGIN, BS.RIGHT_MARGIN, BS.DEPTH_LEVEL, BS.CODE, BS.XML_ID, BS.TMP_ID, BS.DETAIL_PICTURE, BS.SOCNET_GROUP_ID, ...
However, it was still too early to rejoice. When I deployed the edits to the production site, the result got better, but remained far from desired (300 s -> 100+ s). Having spent some time in bewilderment and some soul-searching, I decided to test the assumption about the different MySQL versions on the production server and the local machine. You might think the matter was in the database settings themselves, but I had cut off that possibility at the beginning, when I applied the same settings as on the production machine. All that remained was to upgrade from 5.5.* to 5.6.35 on the server (the latest MySQL version available for that machine). I pinned great hopes on this step, since my ideas and assumptions about what else could be wrong had dried up. Besides, I was sorry for the weekend spent finding and solving the problems, and along with the weekend my nerves were running out... But how glad I was when, after updating the database, everything worked as it should: the numbers in the query logs were identical to those on the local machine, and the site simply began to fly. The joy was enough for two: me and my girlfriend, who realized I would spend the rest of the weekend with her and not behind a monitor screen.
The conclusions I drew for myself:
- testing and troubleshooting on the local machine should be done under conditions close to production; unfortunately, I realized this only hours in, after refreshing the site page with single requests;
- sometimes it is simpler to just update the components you use; in my multi-day quest that is exactly what helped, though it is a pity I only thought of it at the end of the epic;
- with hindsight, it would have been better to analyze the CMS system files and compose a few SQL queries that would clear the ill-fated b_sale_fuser table and its related data (a sketch follows this list), instead of sitting and waiting while the system methods deleted the records one pass at a time;
- it is better to invest time in studying the tools you work with; in my case, I will go read a book on MySQL, so that new problems do not look like an inexplicable magic trick to me.
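What such a direct cleanup might look like (a sketch only: the cutoff date is illustrative, the b_sale_fuser/b_sale_basket column names are those of the standard sale module, and on a real site you would also check the other related tables and make a backup first):

-- remove basket rows of old anonymous fusers that never became orders
DELETE b
FROM b_sale_basket b
JOIN b_sale_fuser f ON f.ID = b.FUSER_ID
WHERE f.USER_ID IS NULL AND f.DATE_UPDATE < '2017-01-01' AND b.ORDER_ID IS NULL;

-- then remove the fuser records themselves
DELETE FROM b_sale_fuser
WHERE USER_ID IS NULL AND DATE_UPDATE < '2017-01-01';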
I thank everyone who took the time. It will be great if you leave constructive criticism or advice.
PS At the end of the second day of torment I remembered the story "The Old Man and the Sea" and thought that my efforts, too, would go unrewarded, but everything worked out.
PPS To speed up the deletion of data from the b_sale_fuser table and its related tables, I could have dropped their indexes beforehand and added them back after the cleanup.
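A sketch of that trick (the index name here is a placeholder; the real names can be taken from SHOW INDEX FROM b_sale_fuser):

ALTER TABLE b_sale_fuser DROP INDEX IX_FUSER_DATE_UPDATE;  -- hypothetical name
-- ... run the bulk DELETEs: there are no secondary indexes to maintain ...
ALTER TABLE b_sale_fuser ADD INDEX IX_FUSER_DATE_UPDATE (DATE_UPDATE);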