
How professional interest stole my weekend

Good day to all! After reading the article "An online flower shop, or how we screwed up on Valentine's Day", I decided to share my experience of optimizing a site on Bitrix. For some reason it was precisely that article that gave me the decisive push to write this up. I want to believe my story will save someone precious time (thanks to my "finish everything" streak, I spent two days off reaching the goal; I didn't want to leave the client without a working site over the weekend), and I hope more experienced colleagues will point out my mistakes.

On Friday I was handed a Bitrix site with an auto parts catalog and a database 3.2 GB in size. The problem: the site either did not return a page at all, or the wait was so long you forgot why you had come. What I tried and what I managed to achieve in the end, I will tell under the cut.

So, more specifically, the parameters of the old hosting:


Usually I take the following steps to optimize a Bitrix site (VDS), but this time they produced no tangible results:

When I decided to deploy a local copy, I was greatly surprised by the 3.2 GB site database, and especially by the b_sale_fuser table (2.4 GB), which stores visitors' baskets. As it turned out, it contained data going back to 2014. Looking inside this table, I noticed several things:






At this stage I assumed that the problem lay in calling the CSaleBasket::GetBasketUserID method without its additional bSkipFUserInit parameter. The nuance is that bSkipFUserInit controls whether a record is created in this table even when the visitor has not yet put anything in the basket. My guess was confirmed when I found a call to the ill-fated method without the necessary parameter in one of the result_modifier.php files. I corrected that and cleared the table of stale data (this took about 3 hours, because MySQL kept falling over and the data also had to be removed from the related tables; all of it was done with standard Bitrix methods, which I later regretted and will cover in more detail in the conclusions). After the cleanup, the number of records dropped from 19+ million to 400+ thousand, which had a beneficial effect on the local copy, but the result still did not satisfy me: pages were now served in 20-30 seconds instead of several minutes.
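For context, the same cleanup done in raw SQL (rather than through the Bitrix API) could be sketched roughly like this. This is a hypothetical sketch: the cutoff date is mine, I assume only anonymous fusers (USER_ID IS NULL) are stale, and on a live shop you would also want to check for related orders before deleting anything:

```sql
-- Remove basket rows belonging to stale anonymous "fusers"
-- (hypothetical cutoff date; verify against orders first on a real shop)
DELETE b FROM b_sale_basket b
  INNER JOIN b_sale_fuser f ON f.ID = b.FUSER_ID
  WHERE f.USER_ID IS NULL AND f.DATE_UPDATE < '2017-01-01';

-- Then remove the stale fuser records themselves
DELETE FROM b_sale_fuser
  WHERE USER_ID IS NULL AND DATE_UPDATE < '2017-01-01';
```

Going straight to SQL bypasses Bitrix's event handlers and per-row ORM overhead, which is exactly why the API route took hours (see the conclusions).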

Then I decided to look for slow queries. Since we use bitrixenv, editing the MySQL config goes like this:

 nano /etc/mysql/bx/bvxat.cnf   # add the two lines below to the config
 log_slow_queries = /var/log/mysql/mysql-slow.log
 long_query_time = 1
 service mysqld restart

The search turned up two queries that ran for 300+ seconds (see below). One of them selected 4 random goods from the entire catalog; I decided to comment out the call to that component until better times. But the second one could not simply be excluded, since it builds the main menu.

SQL query

 Tcp port: 3306  Unix socket: /var/lib/mysqld/mysqld.sock
 Time  Id  Command  Argument
 # Time: 180318 18:30:07
 # User@Host: bitrix[bitrix] @ localhost []
 # Thread_id: 96  Schema: testdb  QC_hit: No
 # Query_time: 301.414008  Lock_time: 0.000324  Rows_sent: 13  Rows_examined: 260456
 use testdb;
 SET timestamp=1521387007;
 SELECT DISTINCT
     BS.*, B.LIST_PAGE_URL, B.SECTION_PAGE_URL, B.IBLOCK_TYPE_ID,
     B.CODE as IBLOCK_CODE, B.XML_ID as IBLOCK_EXTERNAL_ID, BS.XML_ID as EXTERNAL_ID,
     DATE_FORMAT(BS.TIMESTAMP_X, '%d.%m.%Y %H:%i:%s') as TIMESTAMP_X,
     DATE_FORMAT(BS.DATE_CREATE, '%d.%m.%Y %H:%i:%s') as DATE_CREATE,
     COUNT(DISTINCT BE.ID) as ELEMENT_CNT
 FROM b_iblock_section BS
     INNER JOIN b_iblock B ON BS.IBLOCK_ID = B.ID
     INNER JOIN b_iblock_section BSTEMP ON BSTEMP.IBLOCK_ID = BS.IBLOCK_ID
     LEFT JOIN b_iblock_section_element BSE ON BSE.IBLOCK_SECTION_ID = BSTEMP.ID
     LEFT JOIN b_iblock_element BE ON (
         BSE.IBLOCK_ELEMENT_ID = BE.ID
         AND ((BE.WF_STATUS_ID = 1 AND BE.WF_PARENT_ELEMENT_ID IS NULL)
              AND BE.IBLOCK_ID = BS.IBLOCK_ID)
         AND BE.ACTIVE = 'Y'
         AND (BE.ACTIVE_TO >= now() OR BE.ACTIVE_TO IS NULL)
         AND (BE.ACTIVE_FROM <= now() OR BE.ACTIVE_FROM IS NULL))
 WHERE 1=1
     AND BSTEMP.IBLOCK_ID = BS.IBLOCK_ID
     AND BSTEMP.LEFT_MARGIN >= BS.LEFT_MARGIN
     AND BSTEMP.RIGHT_MARGIN <= BS.RIGHT_MARGIN
     AND BSTEMP.GLOBAL_ACTIVE = 'Y'
     AND ((((BS.ACTIVE = 'Y'))))
     AND ((((BS.GLOBAL_ACTIVE = 'Y'))))
     AND ((((BS.IBLOCK_ID = '9'))))
     AND ((((BS.DEPTH_LEVEL <= '1'))))
     AND ((((B.ID = '9'))))
     AND ((
         B.ID IN (
             SELECT IBLOCK_ID FROM b_iblock_group IBG
             WHERE IBG.GROUP_ID IN (2) AND IBG.PERMISSION >= 'R'
                 AND (IBG.PERMISSION = 'X' OR B.ACTIVE = 'Y'))
         OR (B.RIGHTS_MODE = 'E' AND EXISTS (
             SELECT SR.SECTION_ID FROM b_iblock_section_right SR
             INNER JOIN b_iblock_right IBR ON IBR.ID = SR.RIGHT_ID
             INNER JOIN b_user_access UA ON UA.ACCESS_CODE = IBR.GROUP_CODE AND UA.USER_ID = 0
             WHERE SR.SECTION_ID = BS.ID AND IBR.OP_SREAD = 'Y'))
     ))
 GROUP BY BS.ID, B.ID
 ORDER BY BS.LEFT_MARGIN asc;


At first it did not bother me that on the production server this query ran for 300+ seconds while on the local machine it took 20+; I thought the reason was simply the difference in load. That is, the production site had about 20 visitors per minute, while on the local copy only I was making requests. To simulate the load, I decided to use the JMeter utility (see below).



After running a test of 20 requests, I opened the site in the browser and immediately received the following error: Incorrect key file for table '/tmp/*'. As it turned out, MySQL was creating temporary tables on disk in the temporary folder for each SQL query, and there was not enough space there. Since I am not strong on the principles of how MySQL works, I went with the question to all-knowing Google (have you managed even one day without resorting to search?!), which explained the following:

if the result set contains fields of type TEXT/BLOB, then MySQL will create its temporary tables on disk

And the great assistant, as always, was right! The b_iblock_section table has a couple of such fields, namely DESCRIPTION and SEARCHABLE_CONTENT.
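A quick way to confirm that temporary tables are spilling to disk is MySQL's standard status counters (a sketch; the counter names are standard MySQL, the interpretation is a rule of thumb):

```sql
-- Compare on-disk vs in-memory temporary table counts; a large share of
-- Created_tmp_disk_tables suggests TEXT/BLOB columns (or oversized results)
-- are forcing temporaries onto disk
SHOW GLOBAL STATUS LIKE 'Created_tmp%tables';

-- Running the suspect query under EXPLAIN will also show
-- "Using temporary" in the Extra column
```

Watching these counters before and after a query rewrite makes it easy to verify the fix actually kept the temporary tables in memory.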



By removing these fields from the query and rewriting it (see below), I managed to speed it up several times: instead of 20+ seconds on the local machine, the query began to return its result in about 1.5 seconds. However, it was too early to rejoice, since this query is built by the Bitrix core file /bitrix/modules/iblock/classes/mysql/iblocksection.php. Unfortunately, I found nothing better than to patch that file directly, although I am aware that the very first Bitrix kernel update may erase my edit. But by that point I had been struggling with the site for 3 days straight and Sunday evening was slipping away, so I left it at that...

It was
 BS.*, ... 


It became
 BS.ID, BS.TIMESTAMP_X, BS.MODIFIED_BY, BS.DATE_CREATE, BS.CREATED_BY,
 BS.IBLOCK_ID, BS.IBLOCK_SECTION_ID, BS.ACTIVE, BS.GLOBAL_ACTIVE, BS.SORT,
 BS.NAME, BS.PICTURE, BS.LEFT_MARGIN, BS.RIGHT_MARGIN, BS.DEPTH_LEVEL,
 BS.CODE, BS.XML_ID, BS.TMP_ID, BS.DETAIL_PICTURE, BS.SOCNET_GROUP_ID, ...


However, once again it was too early to rejoice. When I deployed the edits to the production site, the result improved, but remained far from the desired (300 sec --> 100+ sec). After spending some time in bewilderment, I decided to test the assumption that the MySQL versions on the production server and on the local machine differed. You might think the problem was in the database settings themselves, but I had ruled that out at the very start by applying the same settings as on the production machine. All that remained was to upgrade MySQL on the server from 5.5.* to 5.6.35 (the latest version available on that machine). I pinned great hopes on this step, since my ideas about what else could be the cause had dried up, and I was already sorry for the weekend spent hunting down the problem; along with the weekend, my nerves were running out... And how glad I was when, after the database update, everything worked as it should: the numbers in the query log became identical to those on the local machine, and the site simply started to fly. The joy was enough for two: me and my girlfriend, who realized that I would spend the rest of the weekend with her and not behind a monitor.

What conclusions I drew for myself:


I thank everyone who took the time to read this. It would be great if you left constructive criticism or advice.

P.S. At the end of the second day of torment I remembered the story "The Old Man and the Sea" and thought that my efforts would likewise go unrewarded, but in the end everything worked out.

P.P.S. To speed up deleting data from the b_sale_fuser table and the related tables, I could have dropped their indexes first and re-added them after the cleanup.
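In raw SQL that idea looks roughly like this (a sketch; the index name here is hypothetical, so take the real names from SHOW CREATE TABLE before dropping anything):

```sql
-- See which secondary indexes actually exist
SHOW CREATE TABLE b_sale_fuser;

-- Drop the secondary index (hypothetical name), run the bulk delete,
-- then rebuild the index in one pass instead of maintaining it per deleted row
ALTER TABLE b_sale_fuser DROP INDEX IX_FUSER_DATE_UPDATE;
-- ... bulk DELETE here ...
ALTER TABLE b_sale_fuser ADD INDEX IX_FUSER_DATE_UPDATE (DATE_UPDATE);
```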

Source: https://habr.com/ru/post/351676/

