
Accelerating your webpack build

As your application evolves and grows, its build time increases: from a few minutes for a rebuild in development mode to tens of minutes for a cold production build. This is completely unacceptable. We developers do not like switching contexts while waiting for the bundle to be ready, and we want feedback from the application as early as possible, ideally by the time we switch from the IDE to the browser.


How to achieve this? What can we do to optimize build time?


This article gives an overview of tools for speeding up builds in the webpack ecosystem, shares experience of using them, and offers tips.


Optimizing the bundle size and the runtime performance of the application itself is not covered in this article.


The project referenced throughout the text, and on which the build-speed measurements were taken, is a relatively small application built on the JS + Flow + React + Redux stack using webpack, Babel, PostCSS, Sass, and so on, consisting of roughly 30 thousand lines of code and 1500 modules. Dependency versions are current as of April 2019.


The measurements were made on a computer with Windows 10, Node.js 8, a 4-core processor, 8 GB of memory, and an SSD.




Caching


Caching allows you to save the results of computations for later reuse. The first build may be a little slower than usual because of the caching overhead, but subsequent ones will be much faster thanks to reusing the compilation results of unchanged modules.


By default, webpack in watch mode caches intermediate build results in memory so that it does not rebuild the entire project on every change. For a regular build (not in watch mode), this setting is meaningless. You can also try enabling the resolver cache to simplify webpack's search for modules and see whether it has a noticeable effect on your project.


Webpack has no persistent cache (one saved to disk or other storage), although it is promised for version 5. In the meantime, we can use the following tools:


- Caching in the TerserWebpackPlugin settings


Disabled by default. Even on its own it has a noticeable positive effect, 60.7 s → 39 s (-36%), and it combines perfectly with the other caching tools.


Enabling and using it is very simple:


```js
optimization: {
  minimizer: [
    new TerserJsPlugin({
      terserOptions: { ... },
      cache: true
    })
  ]
}
```

- cache-loader


cache-loader can be placed into any loader chain to cache the results of the loaders that run before it.


By default, it stores the cache in the .cache-loader folder in the project root. Using the cacheDirectory option in the loader settings, you can override the path.


Example of use:


```js
{
  test: /\.js$/,
  use: [
    {
      loader: 'cache-loader',
      options: {
        cacheDirectory: path.resolve(
          __dirname,
          'node_modules/.cache/cache-loader'
        ),
      },
    },
    'babel-loader'
  ]
}
```

A safe and reliable solution. It works without problems with almost any loader: for scripts (babel-loader, ts-loader), styles (scss-, less-, postcss-, css-loader), images and fonts (image-webpack-loader, react-svg-loader, file-loader), and so on.


- HardSourceWebpackPlugin


A more heavyweight and "smart" solution that caches at the level of the entire build process rather than of individual loader chains. In the basic use case it is enough to add the plugin to the webpack configuration; the default settings should be sufficient for correct operation. It suits those who want to squeeze out maximum performance and are not afraid of running into difficulties.


```js
plugins: [
  ...,
  new HardSourceWebpackPlugin()
]
```

The documentation contains examples of advanced configuration and tips for solving possible problems. Before adopting the plugin permanently, it is worth testing its behavior thoroughly in various situations and build modes.


- Caching in the babel-loader settings. Disabled by default. The effect is a few percent worse than that of cache-loader.
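Assuming babel-loader 8, enabling it is a one-option change (with true, the cache goes to babel-loader's default cache directory):

```js
{
  test: /\.jsx?$/,
  exclude: /node_modules/,
  use: {
    loader: 'babel-loader',
    options: {
      // true enables the cache at babel-loader's default location
      cacheDirectory: true
    }
  }
}
```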


- Caching in the eslint-loader settings. Disabled by default. If you use this loader, the cache helps avoid wasting time on linting unchanged files during rebuilds.
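Here too it is a single option; a sketch, assuming eslint-loader already sits in your rules:

```js
{
  test: /\.jsx?$/,
  exclude: /node_modules/,
  use: {
    loader: 'eslint-loader',
    options: {
      // persist lint results between runs; unchanged files are skipped
      cache: true
    }
  }
}
```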




When using cache-loader or HardSourceWebpackPlugin, you need to disable the built-in caching mechanisms of other plugins and loaders (except TerserWebpackPlugin), since they are no longer useful for repeated and incremental builds and will even slow down cold ones. The same applies to cache-loader itself if you already use HardSourceWebpackPlugin.




When setting up caching, the following questions may arise:


Where should the cache be stored?


Caches are usually stored in the node_modules/.cache/<_>/ directory. Most tools use this path by default and allow you to override it if you want to keep the cache somewhere else.


When and how should the cache be reset?


It is very important to reset the cache whenever the build configuration changes in a way that affects the output. Using a stale cache in such cases is harmful and can lead to errors of obscure origin.




If you do not use cache-loader or HardSourceWebpackPlugin, which allow you to redefine the list of sources used to form the build fingerprint, npm scripts that clear the cache on adding, updating, or removing dependencies will help a bit:


 "prunecaches": "rimraf ./node_modules/.cache/", "postinstall": "npm run prunecaches", "postuninstall": "npm run prunecaches" 

nodemon, configured to clear the cache and restart webpack-dev-server when it detects changes in the configuration files, also helps:


 "start": "cross-env NODE_ENV=development nodemon --exec \"webpack-dev-server --config webpack.config.dev.js\"" 

nodemon.json


 { "watch": [ "webpack.config.dev.js", "babel.config.js", "more configs...", ], "events": { "restart": "yarn prunecaches" } } 

Do I need to save the cache in the project repository?


Since the cache is essentially a build artifact, there is no need to commit it to the repository. The cache's location inside the node_modules folder, which is usually listed in .gitignore, helps with this.


It is worth noting that if you had a caching system that could reliably determine cache validity under any conditions, including a change of OS or Node.js version, the cache could be reused across developers' machines or in CI, significantly reducing the time of the very first build and of builds after switching branches.


In which build modes is the cache worth using, and in which is it not?


There is no definite answer: it all depends on how intensively you use the dev and prod modes during development and how often you switch between them. In general, nothing prevents you from enabling caching everywhere, but remember that it usually makes the first build slower. In CI you probably always want a clean build, in which case caching can be disabled via an appropriate environment variable.
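For example, caching could be tied to an environment variable; the CI variable used here is an assumption (most CI services set it, but check yours):

```js
const TerserJsPlugin = require('terser-webpack-plugin');

// Assumption: the CI service sets the CI environment variable.
const useCache = !process.env.CI;

module.exports = {
  // ...
  optimization: {
    minimizer: [
      new TerserJsPlugin({
        cache: useCache,
        parallel: true
      })
    ]
  }
};
```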







Parallelization


Using parallelization, you can get a performance boost by engaging all available processor cores. The resulting effect is specific to each machine.


By the way, here is a simple Node.js snippet for getting the number of available processor cores (it can come in handy when configuring the tools listed below):


```js
const os = require('os');

const cores = os.cpus().length;
```

- Parallelization in TerserWebpackPlugin settings


Disabled by default. Like the plugin's own caching, it is easy to enable and noticeably speeds up the build.


```js
optimization: {
  minimizer: [
    new TerserJsPlugin({
      terserOptions: { ... },
      parallel: true
    })
  ]
}
```

- thread-loader


thread-loader can be placed into a chain of heavy loaders, after which the loaders that follow it in the chain will run in a pool of Node.js subprocesses ("workers").


It has a set of options for fine-tuning the worker pool, although the default values look quite reasonable. Special attention should be paid to poolTimeout and workers.
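A sketch of such a pool configuration (the specific numbers are illustrative, not recommendations):

```js
const os = require('os');

// ...inside module.rules:
{
  test: /\.js$/,
  use: [
    {
      loader: 'thread-loader',
      options: {
        // leave one core for the main webpack process
        workers: os.cpus().length - 1,
        // keep workers alive in watch mode; use a finite
        // timeout (ms) for one-off builds instead
        poolTimeout: Infinity
      }
    },
    'babel-loader'
  ]
}
```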


It can be used together with cache-loader as follows (order matters): ['cache-loader', 'thread-loader', 'babel-loader']. If warm-up is enabled for thread-loader, it is worth double-checking the stability of repeated builds that use the cache: webpack may hang and fail to exit after successfully finishing a build. In that case, simply turn off the warm-up.


If you run into a build hang after adding thread-loader to the Sass compilation chain, this advice may help.


- HappyPack


A plugin that intercepts loader calls and distributes their work across multiple processes. At the moment it is in maintenance mode (that is, no further development is planned), and its creator recommends thread-loader as a replacement. So if your project keeps up with the times, it is better to refrain from using HappyPack, although it is certainly worth trying it and comparing the results with thread-loader.


HappyPack has clear configuration documentation, and the configuration itself is rather unusual: it proposes moving the loader configurations into the plugin's constructor call and replacing the loader chains with HappyPack's own happypack loader. Such a non-standard approach can be inconvenient when assembling a custom webpack configuration out of pieces.
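A minimal sketch of this configuration style (the id 'js' is an arbitrary name linking the rule to the plugin instance):

```js
const HappyPack = require('happypack');

module.exports = {
  module: {
    rules: [
      {
        test: /\.jsx?$/,
        exclude: /node_modules/,
        // the loader chain is replaced with HappyPack's own loader
        use: 'happypack/loader?id=js'
      }
    ]
  },
  plugins: [
    new HappyPack({
      id: 'js',
      // the loader configuration moves into the plugin constructor
      loaders: ['babel-loader']
    })
  ]
};
```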


HappyPack supports a limited list of loaders; the main and most widely used ones are on the list, but the operation of others is not guaranteed because of possible API incompatibility. More information can be found in the project's issues.


Avoiding unnecessary computation


Any work takes time. To spend less of it, avoid work that brings little benefit, that can be postponed until later, or that is not needed at all in the given situation.


- Apply loaders to the minimum possible number of modules


The test, exclude, and include properties set the conditions for including a module in processing by a loader. The point is to avoid transforming modules that do not need the transformation.


A popular example is excluding node_modules from Babel transpilation:


```js
rules: [
  {
    test: /\.jsx?$/,
    exclude: /node_modules/,
    loader: 'babel-loader'
  }
]
```

Another example: plain CSS files do not need to be processed by the preprocessor:


```js
rules: [
  {
    test: /\.scss$/,
    use: ['style-loader', 'css-loader', 'sass-loader']
  },
  {
    test: /\.css$/,
    use: ['style-loader', 'css-loader']
  }
]
```

- Do not enable bundle size optimization in dev mode


On a powerful developer machine with a stable Internet connection, a locally served application usually starts quickly even if it weighs several megabytes. Optimizing the bundle during the build can cost far more precious time than it saves on loading.


The advice concerns JS (Terser, Uglify, etc.), CSS (cssnano, optimize-css-assets-webpack-plugin), SVG and images (SVGO, Imagemin, image-webpack-loader), HTML (html-minifier, an option in html-webpack-plugin), and others.


- Do not include polyfills and transformations in dev mode


If you use babel-preset-env, postcss-preset-env, or Autoprefixer, add a separate Browserslist configuration for dev mode that includes only the browsers you actually use during development. Most likely these are the latest versions of Chrome or Firefox, which support modern standards perfectly well without polyfills and transformations. This avoids unnecessary work.


Example .browserslistrc:


```
[production]
your supported browsers go here...

[development]
last 2 Chrome versions
last 2 Firefox versions
last 1 Safari version
```

- Revise the use of source maps


Generating the most accurate and complete source maps takes considerable time (about 30% of the prod build time on our project with the devtool: 'source-map' option). Consider whether you need source maps in the prod build (locally and in CI). It may be worth generating them only when necessary, for example based on an environment variable or a commit tag.


In dev mode, a fairly lightweight variant will do in most cases: 'cheap-eval-source-map' or 'cheap-module-eval-source-map'. For details, see the webpack documentation.
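One way to wire this up, sketched with a hypothetical SOURCE_MAPS environment variable (the variable name is an assumption, not a webpack convention):

```js
const isDev = process.env.NODE_ENV === 'development';

module.exports = {
  // cheap variants skip column mappings and are much faster to generate;
  // in prod, produce full maps only when explicitly requested
  devtool: isDev
    ? 'cheap-module-eval-source-map'
    : (process.env.SOURCE_MAPS ? 'source-map' : false)
};
```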


- Tune mangle and compress in Terser


According to the Terser documentation (the same applies to Uglify), most of the minification time is spent in the mangle and compress phases. By fine-tuning them, you can speed up the build at the cost of a slight increase in the bundle size. There is an example in the vue-cli source code and another one from an engineer at Slack. In our project, tuning Terser in the spirit of the first example reduces build time by about 7% in exchange for a 2.5% increase in bundle size. Whether the game is worth the candle is for you to decide.
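A sketch of such tuning (these particular compress flags are illustrative, not the settings from vue-cli or Slack; verify the size/time trade-off on your own bundle):

```js
optimization: {
  minimizer: [
    new TerserJsPlugin({
      terserOptions: {
        // mangling is cheap and gives most of the size win
        mangle: true,
        compress: {
          // disable a few of the more expensive transform passes
          arrows: false,
          booleans: false,
          sequences: false
        }
      },
      parallel: true,
      cache: true
    })
  ]
}
```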


- Exclude external dependencies from parsing


Using module.noParse and resolve.alias, you can redirect imports of library modules to their already compiled versions and simply insert them into the bundle without wasting time on parsing. In dev mode this should noticeably increase build speed, including incremental builds.


The algorithm is roughly the following:


(1) Make a list of modules that need to be skipped during parsing.


Ideally, these are all runtime dependencies that end up in the bundle (or at least the most massive ones, such as react-dom or lodash), and not only direct (first-level) dependencies but also transitive ones (dependencies of dependencies). You will have to maintain this list yourself from then on.


(2) For the selected modules, write down the paths to their compiled versions.


Instead of the skipped dependencies, you need to give the bundler an alternative, and that alternative must not depend on the environment: no calls to module.exports, require, process, import, and so on. Precompiled (not necessarily minified) single-file modules, which usually live in the dist folder inside a dependency's sources, suit this role. To find them, you will have to dig through node_modules. For example, for axios the path to the compiled module looks like this: node_modules/axios/dist/axios.js.


(3) In the webpack configuration, use the resolve.alias option to replace imports by dependency name with direct imports of the files whose paths were written down in the previous step.


For example:


```js
{
  resolve: {
    alias: {
      axios: path.resolve(
        __dirname,
        'node_modules/axios/dist/axios.min.js'
      ),
      ...
    }
  }
}
```

There is a big drawback: if your code or the code of your dependencies refers not to the standard entry point (the index file, the main field in package.json) but to a specific file inside the dependency's sources, or if the dependency is distributed as an ES module, or if something interferes with the resolution process (for example, babel-plugin-transform-imports), the whole idea may fail. The bundle will build, but the application will be broken.


(4) In the webpack configuration, use the module.noParse option with regular expressions to skip parsing of the precompiled modules requested at the paths from step 2.


For example:


```js
{
  module: {
    noParse: [
      new RegExp('node_modules/axios/dist/axios.min.js'),
      ...
    ]
  }
}
```

Bottom line: on paper the method looks promising, but a non-trivial setup full of pitfalls at a minimum raises the cost of adoption, and at worst reduces the benefit to nothing.


An alternative option with a similar principle of operation is the externals option. In this case, you will have to embed the links to the external scripts into the HTML file yourself, and with the dependency versions matching package.json at that.


- Extract rarely changing code into a separate bundle and compile it only once


Surely you have heard of DllPlugin. With it, you can split actively changing code (your application) and rarely changing code (for example, dependencies) into separate builds. Once built, the dependency bundle (that same DLL) is simply connected to the application build, saving time.


It looks like this in general terms:


  1. A separate webpack configuration is created for building the DLL; the necessary modules are connected to it as entry points.
  2. A build is run with this configuration. DllPlugin generates the DLL bundle and a manifest file with the mappings of module names and paths.
  3. DllReferencePlugin is added to the configuration of the main build, and the manifest is passed to it.
  4. During the build, imports of dependencies moved into the DLL are mapped via the manifest onto the already compiled modules.
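The steps above can be sketched as a separate DLL configuration (module names and paths here are illustrative):

```js
// webpack.dll.config.js: builds the rarely changing vendor bundle
const path = require('path');
const webpack = require('webpack');

module.exports = {
  entry: {
    // illustrative list; put your own heavy dependencies here
    vendor: ['react', 'react-dom', 'redux']
  },
  output: {
    path: path.resolve(__dirname, 'dll'),
    filename: '[name].dll.js',
    library: '[name]'
  },
  plugins: [
    new webpack.DllPlugin({
      name: '[name]',
      // the manifest maps module names to ids inside the DLL
      path: path.resolve(__dirname, 'dll/[name].manifest.json')
    })
  ]
};
```

In the main configuration, the manifest is then consumed with new webpack.DllReferencePlugin({ context: __dirname, manifest: require('./dll/vendor.manifest.json') }), and the dll/vendor.dll.js script has to be included in the page before the application bundle.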

A little more detail can be found in the article at the link.


Once you start using this approach, you quickly discover a number of drawbacks: the DLL has to be rebuilt by hand every time the dependencies change, the DLL bundle has to be connected to the page's HTML yourself, and the configuration requires quite a bit of boilerplate.



You can get rid of the boilerplate and solve the first problem (and the second one, if you use html-webpack-plugin v3; it does not work with version 4) with AutoDllPlugin. However, it still does not support the entryOnly option for the DllPlugin it uses under the hood, and the author of the plugin himself doubts the advisability of using his brainchild in light of the soon-to-arrive webpack 5.
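A basic sketch of its usage (the entry list is illustrative):

```js
const AutoDllPlugin = require('autodll-webpack-plugin');

// in the main webpack config:
plugins: [
  new AutoDllPlugin({
    // let html-webpack-plugin (v3) add the <script> tag for the DLL
    inject: true,
    filename: '[name].dll.js',
    entry: {
      vendor: ['react', 'react-dom', 'redux']
    }
  })
]
```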


Miscellaneous


Update your software and dependencies regularly: fresh versions of Node.js, npm/yarn, webpack, Babel, loaders, and plugins often bring performance improvements. Check changelogs and issues to find out whether an update is worth it.


If you use PostCSS with postcss-preset-env, periodically review the stage option. For example, giving up stage-3, which we needed only for Custom Properties, in favor of stage-4 sped up our build by 13%.


If you compile Sass (node-sass, sass-loader), try Dart Sass (an implementation of Sass in Dart compiled to JS) and fast-sass-loader, and measure the results on your project. Note that dart-sass may turn out to be slower than node-sass, since it runs as JS while node-sass wraps the native libsass.


Dart Sass can be connected through sass-loader as well. To speed up its synchronous compilation, the Sass documentation recommends the fibers package.


Do not compute hashed CSS class names in dev mode. Readable template-based names are cheaper to generate and far more convenient for debugging; leave the short hashes for production.


Example:


```js
{
  loader: 'css-loader',
  options: {
    modules: true,
    localIdentName: isDev
      ? '[path][name][local]'
      : '[hash:base64:5]'
  }
}
```



If nothing else has helped, try webpack's own PrefetchPlugin, which can start building the specified modules ahead of time. Opinions on its effect differ in the webpack issues, so measure the result on your own project. How do you apply it?


  1. Generate build statistics with the CLI option --json (for example, webpack --json > stats.json). Note that the statistics of dev and prod builds may differ.
  2. Upload the statistics to webpack's Analyse tool and open the Hints section.
  3. Find the entries titled "Long module build chains": these are the candidates for optimization with PrefetchPlugin.
  4. Apply PrefetchPlugin to the modules from those chains. There is an example of its use on StackOverflow.




If you know build speed-up techniques specific to other stacks (TypeScript, Angular, etc.), share them in the comments!



Source: https://habr.com/ru/post/451146/

