
Client optimization and development stages

Usually the user doesn't care what approaches we use in development, how the server is configured, or which client and server frameworks we chose. What the user does care about is whether the site is useful, convenient, and fast. Our task is not to inconvenience the user but to please him, and thereby persuade him to buy our mega-product or look at our wonderful banners. This article is about how to build fast sites.

How do you evaluate a site's performance?


To get at least some kind of formalized, trustworthy assessment I used only two services: the YSlow plug-in for Firebug and the website webo.in (I am sure there are many more services of this kind).
The score YSlow gives for the quality of content delivery from server to client is a single summary indicator, simple and clear: "Performance Grade A" means everything is good, "Performance Grade F" means everything is bad. It is built from 13 sub-parameters, each even simpler, described in English at developer.yahoo.com.
The webo.in service produces two scores, based on how closely the developers follow the advice of the service's owner and on the amount of information the site serves, plus a list of recommendations for further optimization. The site also hosts a large number of articles on the topic.

For a complete picture I would advise using both methods of assessment. But keep in mind that the user's impression and an automatically generated score can differ greatly.

How does a user tell which site is faster and which is slower?


We visit websites for the sake of information, and the sooner we get it in readable form, the higher our opinion of the site. Thus one of the developer's primary tasks is to minimize the time it takes to deliver the finished information and to eliminate everything that could postpone that moment.

In fact, I have just described the goal of the first stage of development (the server part is included in this first stage =). But before describing it in detail, I would like to dwell on one point that I consider important.

Methodology

On the way to the optimal result there are a number of problems, mostly organizational ones; the technical side of the question is always easy to formalize, which means it can be solved.

Let us single out, from the whole production process "from the creative brief and the technical specification to the start of operation", the actual development process "from the database to the fully loaded page". For server-side development more than one approach, method, or pattern has been devised: take MVC, for example. Client-side development consists of HTML for structure (Structure), CSS for appearance (Presentation), and JavaScript for behavior (Behavior). In total there are six parts: Model, View, Controller, Structure, Presentation, Behavior, two of which coincide: the server's View is the client's Structure.

M => C => [V == S] => P => B

I believe all five parts of this process must be considered as a whole. Each part is designed for the parts that follow it, and errors in its design and implementation lead to lost time in the subsequent parts and to slow or incorrect operation of the final result.
To minimize the likelihood of errors, each part of the process must perform only its own tasks. For example, the document structure should not contain attributes that carry out the functions of appearance or behavior, such as style, onclick, or onmouseover; all CSS and JavaScript should be moved out into external files.
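For example, a hypothetical "buy" link can keep its behavior entirely in an external JS file (the element id and the openCart function here are invented for illustration):

  // in an external behavior file, instead of onclick="..." in the HTML:
  window.onload = function () {
    document.getElementById('buy-link').onclick = function () {
      openCart(); // hypothetical function from the behavior layer
      return false; // the link itself keeps working without JavaScript
    };
  };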

Division of labor

The following specializations can be distinguished to implement such a scheme:
- M => C - database designer, developer of system modules
- C => [V == S] - CMS development, automation of template work
- [V == S] => P - markup and layout
- [V == S] => B => P - JavaScript (I do not know what to call this specialization)
That makes about three people (I am not counting admins, designers, managers, and directors =)
So,

Stage 1: Delivering the information and the design


At this stage developers should do everything possible not to slow down the loading of the page.

Ways to speed up the delivery of information:
- reduce the number of HTTP requests;
- compress HTML and CSS;
- spread the static components of a page across several domains;
- reduce the number of DOM elements;
- place CSS in the HEAD of the page;
- configure HTTP headers (Expires, ETag, Set-Cookie);
- reduce the number of DNS requests.

How to spoil the first impression of the site:
- download and/or execute JavaScript while the information is still loading;
- use iframes on the page.

I arranged the items in order of decreasing importance. All of them are quite self-explanatory, so I want to dwell on only one point here (compression and caching are covered below).

Placing CSS in the HEAD of the page: there should be only one external CSS file per page.

Failure to meet this requirement delays the display of the page, since CSS files are loaded sequentially and each HTTP request takes extra time. How much it matters depends on the server's response time and the speed of the client's Internet connection.

The requirement, in essence, comes down to consolidating all CSS files into one. There are two ways to implement this: manual and automatic. The manual method works only in simple projects, or in projects whose pages are all of one type. In all other cases the human factor comes into play: people simply will not repeat this chore after every change. And mindlessly merging all files in general can produce dozens of kilobytes of CSS rules, of which only a small fraction is used on any individual page.

The idea (but not the implementation!) for an automatic solution can be found here. The essence is in the request syntax: for two files /styles/a.css and /styles/b.css a single request of the form /styles/a.css;b.css can be formed. If such a file already exists on the server, nginx can serve it directly; otherwise the request must be forwarded on to the backend, which creates the file so the next access finds it.
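To make the idea concrete, here is a minimal sketch of what such a backend collector might do. The author's collector is a PHP script; this Node-style JavaScript version only illustrates the logic, and the directory path is hypothetical:

  // Build /styles/a.css;b.css on first request so nginx can serve it afterwards.
  var fs = require('fs');
  var path = require('path');
  var STYLES_DIR = '/home/project/styles'; // hypothetical location

  function buildCombinedCss(requestPath) {
    // '/styles/a.css;b.css' -> ['a.css', 'b.css']
    var fileNames = path.basename(requestPath).split(';');
    // Real code must validate the names to prevent path traversal.
    var combined = fileNames.map(function (name) {
      return fs.readFileSync(path.join(STYLES_DIR, name), 'utf8');
    }).join('\n');
    // Write the result next to the sources: the next request is served by nginx alone.
    fs.writeFileSync(path.join(STYLES_DIR, fileNames.join(';')), combined);
    return combined;
  }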

With this approach you can physically divide the CSS rules into general ones (needed on all pages) and private ones (needed only on "non-standard" pages), and combine them when necessary.
The result of the first stage is delivered and styled HTML with no JavaScript (at the stage of delivering the main content it only gets in the way). The time from the start to the end of loading such a page is practically the same whether JS is turned on or off. That is the gain in loading speed!

If the "reduce the number of DOM elements" requirement is met more or less closely, a side effect may be the ability to build the mobile version of the site and the print version from the same HTML code. All that remains is "only" to implement the CSS rule sets for media="handheld" and media="print".

Stage 2: Caching Design Files, Compressing HTML, CSS, and JS


At this stage developers should ensure that the other pages of the site load quickly (should the visitor decide to go there). This stage should run in parallel with the first.

To handle caching and compression I recommend using nginx alone, leaving Apache to generate dynamic pages and to build the CSS and JS files. Below is the nginx configuration I prepared "based on" our already working system:

Main domain of the project (www.site.org)

  server {
    listen 80;
    server_name www.site.org site.org;
    gzip on;
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain application/xml;
    location / {
      proxy_pass http://backend;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    location /favicon.ico {
      expires max;
      root /home/project/img/;
    }
    location /s.gif {
      expires max;
      root /home/project/img/;
    }
  }
Here all requests to www.site.org are forwarded on to Apache (http://backend is defined by an upstream directive). The only exceptions are favicon.ico and the transparent s.gif: they live in the same directory as all the other interface images and are available to the browser as www.site.org/favicon.ico and www.site.org/s.gif. The exception for s.gif is made only to reduce the size of the HTML code.
Apache's responses with Content-Type text/html, text/plain, or application/xml are compressed on the fly. Compression should be disabled when the project has few resources allocated to it.
All HTML pages are considered dynamic, so they are not cached.

Domain for interface images (img.site.net)

  server {
    listen 80;
    server_name img.site.net;
    expires max;
    add_header Cache-Control public;
    location / {
      root /home/project/img/;
    }
    location ~ ^/\d+\.\d+\/.* {
      root /home/project/img/;
      rewrite (\d+\.\d+)/(.*) /$2 break;
    }
  }
All interface images are cached forever. Example: img.nnow.ru/interface/is.png
To make it possible to reset the browser cache, the following rule is introduced: if an HTTP request begins with "slash, number, dot, number, slash", the images are still taken from the root. Example: img.nnow.ru/2.0/interface/is.png
This is very convenient when we need to add another icon to a CSS sprite image.

Domain for CSS and JS files (static.site.net)

  server {
    listen 80;
    server_name static.site.net;
    expires max;
    gzip_static on;
    location / {
      return 404;
    }
    location /jas/ {
      # javascript-and-stylesheets
      proxy_set_header Host $host;
      if (!-f $request_filename) {
        proxy_pass http://backend;
        break;
      }
      root /home/project/static/;
    }
  }
The principle of the server-side mechanism for building the CSS and JS files was described above. The only thing worth adding here is the gzip_static directive, which comes with the ngx_http_gzip_static_module module. This module lets nginx serve a pre-compressed file with the same name and a ".gz" suffix instead of the regular file. It is not built by default; you have to enable it at configure time with the --with-http_gzip_static_module parameter.
The file collector must additionally be able to write files of the form /jas/a.css;b.css.gz.
CSS and JS files are cached forever. A cache reset can be implemented by adding a "file version" to the file name: /jas/ver2.0.css;a.css;b.css
NB! We implemented the collector in such a way that the a.css and b.css files are pulled into the final result by the PHP include function, i.e. they are actually executable PHP files. This makes it possible to get rid of CSS hacks and of User-Agent sniffing in JS: when /jas/ver2.0.css;Firefox3.css;a.css;b.css is requested, Firefox3.css stores the browser's name and version in a PHP variable, and the subsequent parts of the combined file can read that variable and output different content for different browsers. For example: "s.img.nnow.ru/jas/ver,1.0.js;Firefox3.js;habr,index.js" versus "s.img.nnow.ru/jas/ver,1.1.js;IE6.js;habr,index.js".
The "interface image version" in img.nnow.ru/3.0/interface/logo_ni.gif changes in the same way (the corresponding variable is set in the CSS version file).
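A rough sketch of this trick, with the PHP parts transcribed into JavaScript for illustration (every name here is invented):

  // Firefox3.css, executed by the collector, does the equivalent of:
  var context = { browserName: 'Firefox', browserVersion: 3 };

  // ...and a later part of the combined file can branch on that context
  // instead of resorting to CSS hacks:
  function opacityRule(ctx) {
    if (ctx.browserName === 'IE' && ctx.browserVersion < 8) {
      return '.box { filter: alpha(opacity=50); }'; // old-IE fallback
    }
    return '.box { opacity: 0.5; }';
  }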

Stage 3: Life After the Page Loads


The purpose of this stage is to attach mouse events to various DOM elements and to implement the other Web 2.0 goodies, i.e. to bring the page to life.

They say that sometimes "a gram of appearance matters more than a kilogram of substance", and that is exactly about JS. After all, it is in JavaScript that you can implement mechanisms that simplify the user's actions, and all sorts of visual effects that emphasize the design, convenience, and usefulness of the site (in effect, all the work the developers did in the previous stages).

At this point we should have an HTML page on which all links and forms are required to work without JavaScript. The server interfaces for Ajax requests should be ready, and the page structure should be such that similar pieces of HTML never force you to write similar, but not identical, pieces of JS code. Most likely, page templates should exist that show what the page will look like after a given user action.
In essence, an effective exchange of information between all the developers has to be organized. How to arrange such a dialogue in a team of dozens of people I do not know; but for a team of three or four it is quite realistic (albeit difficult).

In order not to reduce the speed at which the content and design are delivered, the JS files (ideally, of course, a single JS file) must be included just before the closing body tag. Life begins after the HTML code has loaded, and the task is to perform the following actions (a sketch follows the list):
  1. find the DOM elements that need "bringing to life" (hereafter, components);
  2. determine what each component is;
  3. arrange for the necessary JavaScript code to be connected;
  4. respect the order in which the files are connected;
  5. never load the same file twice.
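Here is a minimal sketch of such a loader, assuming components are marked with a data-component attribute and served through the static.site.net/jas/ scheme from stage 2 (both conventions are assumptions, and querySelectorAll is used for brevity):

  function loadComponents() {
    // 1. Find the DOM elements that need bringing to life.
    var nodes = document.querySelectorAll('[data-component]');
    var names = [];
    var seen = {};
    for (var i = 0; i < nodes.length; i++) {
      // 2. Determine which component each element is.
      var name = nodes[i].getAttribute('data-component');
      if (!seen[name]) { // 5. never request the same file twice
        seen[name] = true;
        names.push(name);
      }
    }
    if (names.length === 0) return;
    // 3-4. One combined, ordered request for the JS and one for the CSS;
    // the server-side collector preserves the order of the parts.
    var script = document.createElement('script');
    script.src = 'http://static.site.net/jas/' + names.join('.js;') + '.js';
    document.body.appendChild(script);
    var link = document.createElement('link');
    link.rel = 'stylesheet';
    link.href = 'http://static.site.net/jas/' + names.join('.css;') + '.css';
    document.getElementsByTagName('head')[0].appendChild(link);
  }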
If you have read this far ;) then you had best continue with the page on modularity in JavaScript and dynamic loading at www.jsx.ru, which is where I borrowed this algorithm. I would rather not advertise my own solution: in carrying out this task we ran an experiment whose final outcome is not yet 100% clear (so far everything seems to work fine =)

Let me expand on points 3 and 4.
Finding the required DOM elements should give us a list of JS component names. Component names must correspond unambiguously to the names of the files on the server that contain their code. We may also need to load some additional CSS rules for the found components, for visual effects that were not needed during the first stage of loading the content and design.
The list of component names can be combined into a single request to the server. As a result, once the content has loaded, files of the form "static.site.net/jas/componentName1.css;componentName2.css" and "static.site.net/jas/componentName1.js;componentName2.js" should be loaded.

This approach has two drawbacks: (1) over time a great many files can accumulate in the /jas/ folder, which in theory can increase the time needed to access them on the server; (2) sometimes there are so many components on a page that the length of the requested combined file name exceeds the file system's limit (255 characters on Ext3, for example); in that case a single request will have to be split into several successive ones.

YSlow: Performance Grade: A (100)


At first that is what I wanted to call this article, but it grew into something more than just "site performance evaluation". By now I am convinced that only pages without advertising banners and counters, or the default nginx or Apache "page not found" pages, can earn such a score.

The impossibility of achieving a perfect score gives us no reason to neglect client optimization. After all, one step left or right off the optimal path can mean immediate execution:
- home page of my blog on liveinternet.ru: Performance Grade: F (30)
- Habr main page: Performance Grade: F (38)
- home page of my blog on Ya.ru: Performance Grade: F (42)
- post on the official Google blog: Performance Grade: F (56)
- main page of my blog on LiveJournal: Performance Grade: D (66)
These are not just numbers: they are the probability that someday someone else will implement his idea, promote his project, and take away your audience, no matter how popular Habr and LiveJournal are.

Thank you for your attention.

Source: https://habr.com/ru/post/38299/

