Are you already using progressive booting? What about tree shaking and code splitting in React and Angular? Have you set up Brotli or Zopfli compression, OCSP stapling and HPACK compression? And how are things going with resource hints, client hints and CSS containment? Not to mention IPv6, HTTP/2 and service workers.
There was a time when performance was often improved retroactively: work on it was postponed until the end of the project, and it came down to minification, concatenation, asset optimization and, with luck, a few tweaks on the server side. The situation has changed significantly since then.
Improving performance has ceased to be a purely technical matter. It is now embedded in the overall workflow, and architectural decisions are evaluated in terms of their impact on speed. Performance has to be constantly monitored, measured and refined, but the growing complexity of the web poses new problems for developers and makes metrics hard to track, because they depend heavily on devices, browsers, protocols, network types and latency (CDNs, ISPs, caches, proxies, firewalls, load balancers and servers all affect performance).
But if you were to write down all the factors to keep in mind while improving performance — from the start of a project to the launch of the site — what would such a list look like? Below you will find a (hopefully impartial and objective) front-end performance checklist for 2017: an overview of the problems you need to solve to keep response times low and the site running smoothly.
Micro-optimizations help keep performance under control, but it is critical to have clearly defined goals in mind: they must be measurable and must influence every decision made along the way. There are several different models, and the ones discussed below are quite opinionated, so make sure to set your own priorities from the start.
According to one psychological study, if you want users to perceive your site as the fastest, you need to be at least 20% faster. Full page load time matters less than metrics such as start rendering time, time to first meaningful paint (the time it takes to display the primary content of the page) and time to interactive (the time after which the page — above all a single-page application — is ready for the user to interact with it).
Measure start rendering time (with WebPagetest) and time to first meaningful paint (with Lighthouse) on a Moto G, a mid-range Samsung smartphone and an average device like a Nexus 4 (preferably in an open device lab), on regular 3G, 4G and Wi-Fi connections.
Lighthouse, a new performance auditing tool developed by Google
Analyze the results to understand where your users stand. Then you can simulate the 90th percentile for testing. Collect the data in a spreadsheet, shave off 20% and set yourself specific goals (a performance budget). Now you have concrete numbers to measure against when testing. If you keep the budget in mind and question even the smallest script for the sake of a faster time to interactive, you are on the right track.
Share the checklist with your colleagues. Make sure every team member is familiar with it to avoid misunderstandings. Every decision has performance implications, and the project will benefit greatly from front-end developers being involved in decisions about concept, UX and design. Map design decisions against the performance budget and the priorities defined in the checklist.
The RAIL performance model gives you the right targets: do your best to respond to user input within 100 ms of the initial interaction. To make this possible, the page has to yield control back to the main thread at least every 50 ms. For high-pressure points like animation, it is best to do nothing else where you can, and the absolute minimum where you can't.
In addition, each frame of an animation should complete in less than 16 ms to achieve 60 frames per second (1 s ÷ 60 = 16.6 ms), and preferably in under 10 ms. Since the browser needs time to paint the new frame, your code should finish executing before the 16.6 ms mark. Be optimistic and use the idle time wisely. Obviously, these targets apply to runtime performance rather than loading performance.
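To make the "yield back to the main thread" idea concrete, here is a minimal sketch — the buildTaskQueue() helper is hypothetical — of chopping long-running work into small slices so the browser can paint and react to input in between:

```html
<!-- A minimal sketch: buildTaskQueue() is a hypothetical helper returning an
     array of small functions. Work runs in ~10 ms slices, then yields so the
     browser can paint the next frame and handle input. -->
<script>
  var tasks = buildTaskQueue();

  function processChunk() {
    var start = performance.now();
    while (tasks.length > 0 && performance.now() - start < 10) {
      tasks.shift()(); // run one small work item
    }
    if (tasks.length > 0) {
      setTimeout(processChunk, 0); // yield back to the main thread
    }
  }

  setTimeout(processChunk, 0);
</script>
```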
Although it can be very hard to achieve, your main goal should be a start rendering time under 1 second and a SpeedIndex below 1000 (on a fast connection). The time to first meaningful paint should not exceed 1250 ms. For mobile devices on 3G, a start rendering time of up to 3 seconds is considered acceptable. Being slightly above that is not a disaster, but try to bring the value down.
Don't pay too much attention to what is supposedly cool these days. Stick to your build environment, whether it is Grunt, Gulp, Webpack, PostCSS or some combination of tools. As long as you get results quickly enough and have no trouble maintaining the build process, you are doing fine.
Make progressive enhancement the guiding principle of your front-end architecture and deployment — it is a safe bet. Design and build the core experience first, then enhance it with capabilities supported by more advanced browsers, creating resilient, fault-tolerant experiences. If your site runs fast on a slow machine with a lousy screen, an outdated browser and a suboptimal network, it will only run faster on a fast machine with a good browser and a decent network.
Choose a framework that supports server-side rendering. Be sure to measure load times on mobile devices in both server- and client-rendered modes before settling on a framework (changing horses midstream later will be painful and will lead to performance problems). If you decide to use a JS framework, make sure your choice is informed and deliberate. Different frameworks have a different impact on performance and require different optimization strategies, so you need to clearly understand the trade-offs of the tool you choose. When building a web application, look into the PRPL pattern and the application shell architecture.
PRPL stands for Push critical resources, Render initial route, Pre-cache remaining routes and Lazy-load remaining routes on demand: push the critical resources, render the initial route, pre-cache the remaining routes and lazy-load the rest when they are requested.
The application shell is the minimal HTML, CSS and JavaScript required to power the user interface.
Depending on the priorities and strategy of your organization, you may want to use Google AMP or Facebook Instant Articles. You can achieve good performance without them, but AMP does provide a solid, fast framework with a free content delivery network (CDN), and Instant Articles improves the performance of your content on Facebook. You can also build progressive web AMPs.
Depending on how much dynamic data you have, consider moving some of the content out to a static site generator, pushing it to a CDN and serving a static version from there, thus avoiding database requests. You could even choose a CDN-based static hosting platform, enriching the pages with interactive components as enhancements (JAMstack).
Have you noticed that a CDN can serve (and offload) dynamic content as well? There is no need to limit it to static assets. Check whether your CDN compresses and converts content, supports smart delivery over HTTP/2 and ESI, which assembles static and dynamic parts of pages on the CDN side (that is, on the edge servers closest to the user), and handles other tasks as well.
First of all, you need to understand what you are dealing with. Take an inventory of all your assets (JavaScript, images, fonts, third-party scripts and "expensive" on-page modules such as carousels, complex infographics and multimedia content) and break them down into groups.
Set up a spreadsheet. Define the core functionality for legacy browsers (that is, full availability of the main content), the enhanced functionality for modern browsers (that is, the richer, full experience) and the extras (resources that are not strictly necessary and can be lazy-loaded: web fonts, additional styles, carousel scripts, video players, social network buttons, large images). Read more about this in the article "Improving Smashing Magazine's Performance".
Use the "cutting the mustard" technique to serve the core functionality to legacy browsers and the enhanced experience to modern ones. Be strict when loading your resources: load the core functionality immediately, enhancements on the DOMContentLoaded event and extras on the load event.
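As an illustration only — the file names and the capability check below are hypothetical — a strict loading order along these lines could look as follows:

```html
<!-- Hypothetical file names; the capability check is a classic "cutting the mustard" test. -->
<script src="/js/core.js"></script> <!-- core functionality: loaded immediately -->
<script>
  function loadScript(src) {
    var s = document.createElement('script');
    s.src = src;
    s.async = true;
    document.head.appendChild(s);
  }

  document.addEventListener('DOMContentLoaded', function () {
    // Enhancements for capable browsers only.
    if ('querySelector' in document && 'addEventListener' in window) {
      loadScript('/js/enhancements.js');
    }
  });

  window.addEventListener('load', function () {
    // Extras: carousels, social buttons, video players and other non-essentials.
    loadScript('/js/extras.js');
  });
</script>
```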
Note that this approach infers device capability from the browser version, which is no longer a safe assumption today. For example, cheap Android phones in developing countries mostly run Chrome and will pass the check despite their limited memory and modest CPUs. There is no real alternative to this technique at the moment, but be aware that it has become less reliable lately.
In some cases an application needs time to initialize before it can render the page. Instead of loading indicators, show skeleton screens. Look for modules and techniques that reduce the initial rendering time (for example, tree shaking and code splitting), because most performance problems stem from the initial parsing needed to bootstrap the application. Also consider an ahead-of-time (AOT) compiler to offload some of the client-side rendering to the server, which gets usable results on screen faster. Finally, consider Optimize-js for faster initial loading: it wraps eagerly invoked functions (although this may no longer be necessary).
Progressive booting means rendering the page on the server to shorten the time to first meaningful paint, while also shipping the minimal JavaScript needed to keep the time to interactive close to the time to first meaningful paint.
Which is better — client-side or server-side rendering? In both cases the goal should be progressive booting: server-side rendering gets you a faster first meaningful paint, but you also need some minimal JavaScript so that the time to interactive stays close to the time to first meaningful paint. Secondary parts of the application can then be loaded on demand or as time allows. Unfortunately, frameworks usually have no explicit notion of priorities to expose to developers, so progressive booting is not easy to implement with most libraries and frameworks. If you have the time and resources, though, this strategy can improve performance substantially.
Check that expires, cache-control, max-age and other HTTP cache headers are configured correctly. In general, resources should be cacheable either for a very short time (if they are likely to change) or indefinitely (if they are static) — when needed, you can simply change their version in the URL. To avoid revalidation of fingerprinted resources, use Cache-Control: immutable for them where possible (as of December 2016, it is supported only by Firefox on https:// connections). You can read the manual on HTTP cache headers, the "Best Caching Techniques" article, and another HTTP caching primer.
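As a small illustration of the "cache indefinitely and change the version in the URL" idea — the file names and header value below are assumptions, not taken from the article:

```html
<!-- Hypothetical fingerprinted file names. The server would answer these requests
     with something along the lines of:
       Cache-Control: public, max-age=31536000, immutable
     A new deployment changes the hash in the URL instead of invalidating the cache. -->
<link rel="stylesheet" href="/assets/styles.3f2a1c.css">
<script src="/assets/app.9b8e72.js" defer></script>
```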
When a user requests a page, the browser fetches the HTML and builds the DOM, then fetches the CSS and builds the CSSOM, and then generates the render tree by matching the DOM and the CSSOM. If any JavaScript needs to be processed, the browser won't start rendering the page until it has been handled. As developers, we should explicitly tell the browser not to wait and to start rendering right away. For scripts, this is done with the defer and async HTML attributes. In practice, though, it is better to use defer instead of async (for the sake of users of IE up to and including version 9, where scripts may otherwise break). It is also recommended to limit the impact of third-party libraries and scripts, especially social network buttons and embedded <iframe> elements. Alternatively, you can use static social network buttons (for example, SSBG) and static links to interactive maps.
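For reference, here is what the two attributes look like in markup (file names are placeholders); defer waits for the HTML parser and preserves execution order, while async executes the script as soon as it arrives:

```html
<!-- Placeholder file names. -->
<script src="/js/app.js" defer></script>       <!-- waits for the parser, keeps execution order -->
<script src="/js/analytics.js" async></script> <!-- independent script, runs as soon as it arrives -->
```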
Use responsive images with srcset, sizes and the <picture> element whenever possible. You can also take advantage of the WebP format by serving WebP images with the <picture> element and a JPEG fallback (sample code), or by using content negotiation (via Accept headers). Sketch supports WebP natively, and there is a plugin for exporting WebP images from Photoshop. Other options are available as well.
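A sketch of this approach, with placeholder image paths: WebP is served where supported, a JPEG fallback everywhere else, and srcset/sizes let the browser pick an appropriately sized file:

```html
<!-- Placeholder image paths. -->
<picture>
  <source type="image/webp"
          srcset="/img/hero-400.webp 400w, /img/hero-800.webp 800w">
  <img src="/img/hero-800.jpg"
       srcset="/img/hero-400.jpg 400w, /img/hero-800.jpg 800w"
       sizes="(max-width: 600px) 100vw, 800px"
       alt="Hero illustration">
</picture>
```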
Responsive Image Breakpoints Generator automates image creation and markup
You can also use client hints, which are starting to gain browser support. Not enough resources to produce the complex markup for responsive images? Use the Responsive Image Breakpoints Generator or a service like Cloudinary to automate image optimization.
In many cases, using just srcset and sizes will already yield significant results. At Smashing Magazine we append -opt to image names — for example, brotli-compression-opt.png; whenever an image has that suffix, everyone on the team knows it has been optimized.
When working on a landing page where a particular image must load lightning fast, make sure you use progressive JPEGs compressed with mozJPEG (which improves start rendering time by manipulating scan levels), Pingo for PNG, Lossy GIF for GIF and SVGOMG for SVG. Blur out unimportant parts of an image (for example, with a Gaussian blur filter) to reduce file size; you can even reduce the color palette or convert the image to grayscale. For background images, exporting from Photoshop at 0 to 10% quality can be perfectly sufficient.
Still not fast enough? Well, you can squeeze out a bit more by applying various techniques to background images (1, 2, 3, 4).
Chances are that the web fonts you use include glyphs and extra characters that will never be used. To reduce file size, you can ask the type foundry to provide a reduced character set, or subset the fonts yourself if you use open-source fonts (for example, keeping only the Latin alphabet and a few accented glyphs). WOFF2 support is a great thing, and for browsers that don't support it you can fall back to WOFF and OTF. Also pick one of the strategies described in "Comprehensive Guide to Font Loading Strategies", and use the service worker cache to cache web fonts reliably.
Need a quick fix? Study this material to load fonts in a specific order.
The Comprehensive Guide to Font Loading Strategies article describes dozens of ways to improve the delivery of web fonts.
If you cannot serve fonts from your own server and have to rely on third-party hosts, use Web Font Loader. FOUT is better than FOIT: start rendering text in the fallback font immediately and load the fonts asynchronously — you can use loadCSS for that. You may even be able to get away with fonts installed in the operating system.
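As a rough sketch of the FOUT approach — the font name is hypothetical, and the native CSS Font Loading API stands in here for Web Font Loader or loadCSS — text renders in the fallback font right away, and a class switches to the web font once it is available:

```html
<!-- "Example Serif" is a hypothetical web font declared elsewhere via @font-face. -->
<style>
  body { font-family: Georgia, serif; } /* fallback font, rendered immediately */
  .fonts-loaded body { font-family: "Example Serif", Georgia, serif; }
</style>
<script>
  if ('fonts' in document) {
    document.fonts.load('1em "Example Serif"').then(function () {
      // Switch to the web font only once it is actually available.
      document.documentElement.className += ' fonts-loaded';
    });
  }
  // Web Font Loader or FontFaceObserver can provide the same behavior in older browsers.
</script>
```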
To get the browser to start rendering the page as soon as possible, it has become common practice to collect all of the CSS required to render the first visible portion of the page ("critical CSS", or "above-the-fold CSS") and inline it in the <head> of the page. This saves round trips to the server. Because of the limited size of the packets exchanged during the slow-start phase, your budget for critical CSS is around 14 KB; if you exceed it, the browser will need an extra round trip to fetch the styles. CriticalCSS and Critical will help you stay within budget; you may have to run them for every template. Where possible, consider the conditional inlining approach used by the Filament Group.
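A minimal sketch of the pattern — the critical rules and the /css/full.css path are placeholders: critical CSS is inlined, and the full stylesheet is loaded without blocking rendering via the Filament Group's preload/loadCSS technique:

```html
<!-- The critical rules and the /css/full.css path are placeholders. -->
<head>
  <style>
    /* Critical, above-the-fold rules extracted with Critical/CriticalCSS (budget: ~14 KB) */
    body { margin: 0; font-family: Georgia, serif; }
  </style>
  <!-- Full stylesheet, loaded without blocking rendering (Filament Group pattern): -->
  <link rel="preload" href="/css/full.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/full.css"></noscript>
</head>
```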
With HTTP/2, critical CSS can be stored in a separate file and delivered via server push without bloating the HTML. The catch is that server push is not consistently supported and has caching issues (see slide 114 of the presentation), so the effect may even turn out to be negative: the network buffers can bloat, preventing genuine frames in the document from being delivered. Because of TCP slow start, server push is much more effective on warm connections. You may need to build a cache-aware HTTP/2 push mechanism, but keep in mind that the new cache digest specification should remove the need to build such "cache-aware" servers manually.
Tree shaking is a way of cleaning up the build process by including only the code that is actually used in production. You can use Webpack 2 to eliminate unused exports, and UnCSS or Helium to strip unused styles from CSS. It may also be useful to read up on how to write efficient CSS selectors and how to avoid bloated and expensive styles.
Code splitting is another Webpack feature: the code base is split into "chunks" that are loaded on demand. Once you define split points in the code, Webpack takes care of the dependencies and the generated files. Code splitting keeps the initial download small and loads further portions of code as the application requests them.
Note that Rollup shows significantly better results than Browserify exports. While we're at it, you might want to try Rollupify, which converts ECMAScript 2015 modules into one large CommonJS module — because small modules can have a surprisingly high performance cost depending on your choice of bundler and module system.
With CSS containment we can isolate expensive components: for example, scope the browser's style, layout and paint work for off-canvas navigation or for third-party widgets. Make sure there is no lag when scrolling the page or when elements are animated, and that you consistently hit 60 frames per second. If that is not possible, at least keep the frame rate consistent somewhere between 15 and 60. Use the CSS will-change property to inform the browser which elements and properties will change.
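A short sketch (class names are hypothetical) of what containment and will-change might look like in a stylesheet:

```html
<!-- Hypothetical class names. -->
<style>
  .third-party-widget {
    contain: layout paint;   /* isolate this subtree's layout and paint work */
  }
  .off-canvas-nav.is-animating {
    will-change: transform;  /* hint the browser only while the element actually animates */
  }
</style>
```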
Also measure rendering performance at runtime (for example, in DevTools). To get started, check out the free Udacity course on browser-rendering optimization. There is also an article on GPU-accelerated animation worth reading.
Use skeleton screens and lazy-load all expensive components such as fonts, JavaScript, carousels, videos and iframes. Use resource hints to save time on:
dns-prefetch (performs a DNS lookup in the background),
preconnect (asks the browser to start the connection handshake — DNS, TCP, TLS — in the background),
prefetch (asks the browser to request a resource),
prerender (asks the browser to render the specified page in the background),
preload (fetches a resource ahead of time without executing it).
Note: in practice, depending on browser support, you will prefer preconnect to dns-prefetch, and you will use prefetch and prerender with care — the latter only if you are very confident about where the user will go next (for example, further down a sales funnel).
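For reference, the hints above expressed in markup (all URLs are placeholders):

```html
<!-- All URLs are placeholders. -->
<link rel="dns-prefetch" href="//cdn.example.com">
<link rel="preconnect" href="https://fonts.example.com" crossorigin>
<link rel="prefetch" href="/js/next-step.js">
<link rel="prerender" href="https://www.example.com/checkout">
<link rel="preload" href="/fonts/body.woff2" as="font" type="font/woff2" crossorigin>
```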
As Google moves towards a more secure web and Chrome eventually starts treating all HTTP pages as "insecure", you will need to decide whether to stay on HTTP/1.1 or set up an HTTP/2 environment. HTTP/2 is already very well supported; it isn't going anywhere, and in most cases it is the better choice. The investment is significant, but sooner or later it will have to be made. On top of that, you can get a major performance boost from service workers and server push (at least in the long run).
The other side of the coin: you will have to migrate to HTTPS, and depending on how large your HTTP/1.1 user base is (users on legacy operating systems or legacy browsers), you may have to ship different builds, which requires a new build process. Keep in mind that setting up both the migration and the new build process can be tricky and time-consuming. From here on, the text assumes that you are either switching to HTTP/2 or have already done so.
Once again: serving assets over HTTP/2 requires a serious overhaul of your usual workflow. You will need to find a balance between bundling modules and loading many small modules in parallel.
On the one hand, you might want to avoid concatenating assets altogether, breaking the entire interface into many small modules, compressing them during the build, referencing them via the "scout" approach and loading them in parallel. A change to one file then won't require the whole style sheet or JavaScript bundle to be re-downloaded.
On the other hand, bundling still makes sense, because shipping many small .js files to the browser comes with problems. First, compression suffers: compressing a large package benefits from dictionary reuse, whereas individual small packages do not. Second, browsers are not yet optimized for such workflows. For example, Chrome triggers inter-process communications (IPCs) linear to the number of resources, so several hundred resources will incur real runtime costs in the browser.
For best results with HTTP/2, consider loading CSS progressively.
You can also try loading CSS progressively. Obviously, HTTP/1.1 users lose out with this approach, so generating and serving different builds to different browsers may become part of your deployment process, which complicates it somewhat. HTTP/2 connection coalescing could also help — it allows you to keep domain sharding while still benefiting from HTTP/2 — but it is hard to achieve in practice.
So what should you do? If you are on HTTP/2, sending around ten packages seems like a reasonable compromise (and it isn't too bad for legacy browsers either). Experiment and measure to find the sweet spot for your site.
All browser implementations of HTTP/2 run over TLS, so you will probably want to avoid security warnings and broken elements on the page. Check that the security headers are set correctly, close known vulnerabilities and verify your certificate.
Haven't migrated to HTTPS yet? Study the HTTPS-Only Standard carefully. Make sure that all external plugins and tracking scripts are loaded over HTTPS, that cross-site scripting is not possible, and that the HTTP Strict Transport Security and Content Security Policy headers are configured correctly.
Different servers and CDNs support HTTP/2 to different degrees. Use the Is TLS Fast Yet? tool to check, or look up how your servers perform and which features you can expect to be supported.
Is TLS Fast Yet? lets you check the capabilities of your servers and CDNs when moving to HTTP/2
Last year Google introduced Brotli, a new open-source, lossless data format that is now widely supported in Chrome, Firefox and Opera. In practice, Brotli is more efficient than gzip and Deflate. Depending on the settings, compression can be slower, but the compression ratio is higher.
Decompression, however, is fast. Since the format was developed by Google, it is no surprise that browsers accept it only for sites served over HTTPS — and there are technical reasons for that as well. Today Brotli does not come pre-installed on most servers, and it is not trivial to set up. (Translator's note: it is enough to build the corresponding module from Google or Cloudflare and load it dynamically; nginx then only needs to re-read its config.) However, you can enable Brotli even on CDNs that do not support it yet (with a service worker).
Alternatively, you can look at the Zopfli compression algorithm, which encodes data in the Deflate, gzip and zlib formats. Any resource that you would normally compress with gzip benefits from Zopfli's improved Deflate encoding, because the files will be 3-8% smaller than with zlib's maximum compression. The catch is that compression takes about 80 times longer, so Zopfli is recommended for resources that change rarely — files that are compressed once and downloaded many times.
Consider enabling OCSP stapling — it speeds up TLS handshakes. The Online Certificate Status Protocol (OCSP) was created as an alternative to the Certificate Revocation List (CRL); both are used to check whether an SSL certificate has been revoked. With stapling, the server attaches a cached OCSP response to the certificate, so the browser does not have to query the certificate authority itself, which shortens the handshake.
Have you enabled IPv6 yet? IPv4 addresses are running out and mobile networks are adopting IPv6 rapidly (the US has roughly reached the 50% adoption threshold; in Russia it is about 29.66% — translator's note), so it makes sense to update your DNS for IPv6 to stay future-proof. Just make sure dual-stack support is provided across the network, so that IPv6 and IPv4 can run side by side; after all, IPv6 is not backwards-compatible. Studies also show that sites become up to 15% faster over IPv6 thanks to Neighbor Discovery (NDP) and route optimization.
If you are on HTTP/2, check that your servers implement HPACK compression of HTTP response headers to reduce unnecessary overhead. Because HTTP/2 servers are relatively new, they may not fully support the specification, and HPACK is a typical example. H2spec is a great (if very technically detailed) tool to check this.
No optimization over the network can be faster than a locally stored cache on the user's machine. If your site runs over HTTPS, follow the Pragmatist's Guide to Service Workers and cache static assets in a service worker cache, storing offline fallbacks (or even offline pages) and retrieving them from the user's machine rather than going to the network. Also check the Offline Cookbook and the free Udacity course "Offline Web Applications". Browser support is getting there, and the fallback is the network anyway.
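A minimal registration sketch (the /sw.js path is a placeholder; the actual caching strategies from the Offline Cookbook would live inside that file):

```html
<!-- /sw.js is a placeholder; the caching logic itself lives in that file. -->
<script>
  if ('serviceWorker' in navigator) {
    // Registering after "load" keeps the registration from competing with the initial render.
    window.addEventListener('load', function () {
      navigator.serviceWorker.register('/sw.js')
        .then(function (registration) {
          console.log('Service worker registered, scope:', registration.scope);
        })
        .catch(function (error) {
          console.log('Service worker registration failed:', error);
        });
    });
  }
</script>
```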
If you have recently migrated from HTTP to HTTPS, make sure to monitor both active and passive mixed-content warnings, for example with report-uri.io. You can also use Mixed Content Scan to scan your HTTPS-enabled site for mixed content.
Testing in Chrome and Firefox is not enough. Look at how your site behaves in proxy browsers and legacy browsers. UC Browser and Opera Mini, for instance, have a significant market share in Asia (up to 35%). Measure the average connection speed in the countries that matter to you to avoid big surprises later on. Test with network throttling and emulate high-DPI devices. BrowserStack is great, but test on real devices as well.
A private WebPagetest instance is always beneficial for quick, unlimited tests. Set up continuous monitoring of your performance budget with automatic alerts, and set your own user timing marks to measure and monitor business-specific metrics. Consider SpeedCurve and/or New Relic to track changes over time and get insights that WebPagetest cannot provide. Also look at SpeedTracker, Lighthouse and Calibre.
This list is quite comprehensive, and completing all of these optimizations may take a while. So what would you do if you had only an hour to make significant improvements? Let's boil it down to a few low-hanging fruits. Before you start — and once you finish — measure the results, including start rendering time and the Speed Index, on 3G and cable connections.
Your goal is a start rendering time under 1 second on cable and under 3 seconds on 3G, with a SpeedIndex below 1000. Optimize for start rendering time and time to interactive.
Prepare critical CSS for your main templates and inline it in the <head> of the page (the budget is about 14 KB).
Defer and lazy-load as many scripts as possible — both your own and third-party ones — especially social network buttons, video players and expensive JavaScript.
Add resource hints to speed things up with dns-lookup, preconnect, prefetch, preload and prerender.
Subset web fonts and load them asynchronously (or consider switching to system fonts instead).
Optimize images, and consider using WebP for critical pages (such as landing pages).
Enable Brotli or Zopfli compression on the server; if that is not possible, at least make sure GZIP compression is enabled.
If HTTP/2 is available, enable HPACK compression and start monitoring mixed-content warnings; if you are running over TLS, also enable OCSP stapling.
Cache as many assets as possible — fonts, styles, JavaScript and images — in a service worker cache.
Some of these optimizations may be beyond the scope of your work or budget, or simply overkill for the legacy code you have to deal with. Nothing wrong with that! Use this checklist as a general guide and build your own list of issues that apply to your context. Most importantly, test and measure your own projects to identify problems before you optimize.
Source: https://habr.com/ru/post/320558/