YSlow evaluates the quality of content delivery from server to client and reports a single summary score, simple and clear: "Performance Grade A" means everything is good, "Performance Grade F" means everything is bad. The score is computed from 13 sub-parameters, each even simpler, described in English at developer.yahoo.com.
The webo.in service returns two scores, based on an analysis of how well the site follows the recommendations of the service's author and on the amount of data the site sends, together with a list of recommendations for further optimization. The site also hosts a large number of articles on the topic.
Methodology
On the way to the optimal result there are a number of obstacles, mostly organizational ones; the technical side of the question is always easy to formalize, and therefore solvable.
Let us single out, from the whole production process "from the creative concept and the technical specification to launch," the development process proper: "from the database to the fully loaded page." For server-side development more than one approach, methodology, or pattern has been devised: take MVC, for example. Client-side development consists of HTML for structure (Structure), CSS for appearance (Presentation), and JavaScript for behavior (Behavior). In total there are six parts: Model, View, Controller, Structure, Presentation, Behavior, two of which are the same thing: the server-side View is the client-side Structure.

M => C => [V == S] => P => B
I believe that all five parts of this process must be considered as a whole. Each part is designed for the parts that follow it, and errors in its design and implementation lead to time lost in the subsequent parts and to a slow or incorrectly working final result.
To minimize the likelihood of errors, each part of the process must perform only its own tasks. For example, the document structure should not contain attributes that implement appearance or behavior: style, onclick, onmouseover; all CSS and JavaScript should be moved out into external files.

Division of labor
The following specializations can be distinguished for implementing such a scheme:
- M => C: database designer, developer of system modules
- C => [V == S]: CMS development, automation of template handling
- [V == S] => P: HTML markup and layout
- [V == S] => B => P: JavaScript (I do not know a good name for this specialization)
That is roughly three people (I am not counting admins, designers, managers, and directors =).
The result of the first stage is delivered, fully styled HTML. There is no JavaScript yet (at the stage of delivering the primary content it only gets in the way). The time from the start to the end of loading such a page will be virtually the same with JS enabled and disabled. That is where the gain in loading speed comes from!

Placing CSS in the HEAD of the page: there should be only one external CSS file per page
Failure to meet this requirement delays the rendering of the page, since CSS files are loaded sequentially and each HTTP request costs additional time. How important the requirement is depends on the server's response time and the speed of the client's Internet connection.
The requirement essentially reduces to consolidating all CSS files into one. There are two ways to implement this: manually and automatically. The manual method works only in simple projects, or in projects where all pages are of the same type. In every other case the human factor comes into play: people simply will not go through this chore after every change. Mindlessly merging all files into one can also produce dozens of kilobytes of CSS rules, of which only a small fraction is used on any individual page.
The idea (but not the implementation!) for an automatic solution can be found here. The essence of the idea is in the request syntax: for two files /styles/a.css and /styles/b.css a single request of the form /styles/a.css;b.css can be formed. If such a file already exists on the server, nginx can serve it directly; otherwise the request is forwarded on to the backend, which creates the file so that the next request for it is served statically.
With this approach the CSS rules can be physically divided into general ones (needed on all pages) and specific ones (needed only on "non-standard" pages) and combined on demand, as in the sketch below.
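The original collector is not shown in the article, so here is a minimal sketch of what such a backend script might look like, assuming PHP (which the text mentions later) and a /home/project/static/styles/ layout matching the nginx configs below; the combine.php entry point and all names in it are my assumptions, not the author's code:

```php
<?php
// combine.php - hypothetical combiner. The backend invokes it when nginx
// does not find a pre-built file such as /styles/a.css;b.css on disk.
// The built file is written next to the sources, so nginx serves it
// directly on the next request and this script runs once per combination.

$sourceDir = '/home/project/static/styles/';
$requested = basename($_SERVER['REQUEST_URI']);  // e.g. "a.css;b.css"

$combined = '';
foreach (explode(';', $requested) as $part) {
    $file = $sourceDir . basename($part);        // basename() blocks ../ tricks
    if (is_file($file)) {
        $combined .= file_get_contents($file) . "\n";
    }
}

// Cache the result under the combined name for subsequent requests.
file_put_contents($sourceDir . $requested, $combined);

header('Content-Type: text/css');  // CSS assumed; a real version would
echo $combined;                    // pick the type from the extension
```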
Here all requests to www.site.com are forwarded on to Apache (http://backend is defined by an upstream directive). The only exceptions are favicon.ico and the transparent s.gif: they sit in the same directory as all the other interface images and are available to the browser as www.site.com/favicon.ico and www.site.com/s.gif. The exception for s.gif exists only to reduce the size of the HTML code.

```nginx
server {
    listen       80;
    server_name  www.site.org site.org;
    gzip         on;
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types   text/plain application/xml;
    location / {
        proxy_pass http://backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    location /favicon.ico {
        expires max;
        root /home/project/img/;
    }
    location /s.gif {
        expires max;
        root /home/project/img/;
    }
}
```
All interface images are cached forever. Example: img.nnow.ru/interface/is.png

```nginx
server {
    listen       80;
    server_name  img.site.net;
    expires      max;
    add_header   Cache-Control public;
    location / {
        root /home/project/img/;
    }
    # Version-prefixed paths like /3.0/interface/logo.gif map to the same files.
    location ~ ^/\d+\.\d+/.* {
        root /home/project/img/;
        rewrite (\d+\.\d+)/(.*) /$2 break;
    }
}
```
The principle behind the server-side mechanism for building CSS and JS was described above. The only thing worth mentioning separately is the gzip_static directive. It comes with the ngx_http_gzip_static_module module, which allows nginx to serve a pre-compressed file with the same name and a ".gz" suffix instead of the regular file. The module is not built by default; it has to be enabled at configure time with the --with-http_gzip_static_module parameter.

```nginx
server {
    listen       80;
    server_name  static.site.net;
    expires      max;
    gzip_static  on;
    location / {
        return 404;
    }
    location /jas/ {
        # javascript-and-stylesheets
        proxy_set_header Host $host;
        if (!-f $request_filename) {
            proxy_pass http://backend;
            break;
        }
        root /home/project/static/;
    }
}
```
NB! We implemented the collector in such a way that the a.css and b.css files are pulled into the final result with the PHP include function, i.e. they are actually executable PHP files. This makes it possible to get rid of CSS hacks and of User-Agent sniffing in JS:
When /jas/ver2.0.css;Firefox3.css;a.css;b.css is requested, Firefox3.css stores the browser's name and version in a PHP variable, and the subsequent parts of the compound file can read this variable and output different content for different browsers. For example: "s.img.nnow.ru/jas/ver,1.0.js;Firefox3.js;habr,index.js" and "s.img.nnow.ru/jas/ver,1.1.js;IE6.js;habr,index.js".
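The part files themselves are not shown in the article; a hedged sketch of the idea, with an invented $ua variable and made-up rules, might look like this:

```php
<?php
// Hypothetical contents of Firefox3.css (in fact an executable PHP file):
// the combiner include()s it first, so it records the target browser.
$ua = array('name' => 'Firefox', 'version' => 3.0);

// Hypothetical contents of a.css, include()d afterwards: it reads $ua
// and emits different rules for different browsers instead of CSS hacks.
header('Content-Type: text/css');
if ($ua['name'] === 'Firefox') {
    echo ".box { -moz-border-radius: 4px; }\n";
} else {
    echo ".box { border-radius: 4px; }\n";
}
```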
In the same way the "interface image version" for img.nnow.ru/3.0/interface/logo_ni.gif is changed (the corresponding variable is set in the CSS version file).
If you have read this far ;) then you would do best to continue with the "Modularity in JavaScript, dynamic loading" page on www.jsx.ru: that is where I borrowed this algorithm from. I would rather not advertise my own solution; in fact, carrying out this task was an experiment whose final outcome is not yet 100% clear (so far everything seems to work fine =).

The task is to perform the following actions:
- find the DOM elements that need to be "brought to life" (hereafter: components);
- determine what each component is;
- ensure the necessary JavaScript code is connected;
- respect the order in which the files are connected;
- do not allow any file to be downloaded more than once.
I will cover points 3 and 4.
Searching for the required DOM elements should give us a list of JS component names. Component names must correspond unambiguously to the names of the files on the server that contain their code. We may also need to load some additional CSS rules for the found components, for visual effects that are not needed at the initial content-loading stage.
The list of component names can be combined into a single request to the server. As a result, after the content has loaded, files of the form static.site.net/jas/componentName1.css;componentName2.css and static.site.net/jas/componentName1.js;componentName2.js are loaded; a client-side sketch follows below.
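The article defers the client side to the jsx.ru write-up, but a minimal sketch of the idea might look as follows. The js- class-name convention, the loadComponents name, and the loaded registry are my assumptions for illustration, not the author's code (the registry also covers point 5, avoiding duplicate downloads):

```javascript
// Hypothetical loader sketch: find components marked with class="js-...",
// collect their names, and request one combined JS and one CSS file for all.
var loaded = {}; // guards against downloading the same file twice

function loadComponents() {
    var nodes = document.getElementsByTagName('*');
    var names = [];
    for (var i = 0; i < nodes.length; i++) {
        var m = /(?:^|\s)js-(\w+)/.exec(nodes[i].className);
        if (m && !loaded[m[1]]) {
            loaded[m[1]] = true;
            names.push(m[1]);
        }
    }
    if (!names.length) return;

    // One combined request instead of one request per component.
    var script = document.createElement('script');
    script.src = 'http://static.site.net/jas/' + names.join('.js;') + '.js';
    document.getElementsByTagName('head')[0].appendChild(script);

    var link = document.createElement('link');
    link.rel = 'stylesheet';
    link.href = 'http://static.site.net/jas/' + names.join('.css;') + '.css';
    document.getElementsByTagName('head')[0].appendChild(link);
}
```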
This approach has two drawbacks: (1) over time a lot of files can accumulate in the /jas/ folder, which in theory can increase the time needed to access them on the server; (2) sometimes there are so many components on a page that the length of the requested combined file name exceeds what the file system allows (for example, 255 characters on ext3); in that case a single request has to be split into several consecutive ones.
Home page of my blog on liveinternet.ru: Performance Grade: F (30)
Habr main page: Performance Grade: F (38)
Home page of my blog on Ya.ru: Performance Grade: F (42)
Post in the official Google blog: Performance Grade: F (56)
Main page of my blog on LiveJournal: Performance Grade: D (66)

These are not just numbers: this is the probability that someday someone else will implement the same idea, promote their project, and draw away your audience, no matter how popular Habr or LiveJournal is.
Source: https://habr.com/ru/post/38299/