More and more single-page applications are appearing; even landing pages are being built with React. A truly complex web application is already hard to imagine built any other way. One of the main problems of the modern frontend is building such projects, and bundlers help deal with it.
Ivan Sosnin, a frontend developer at Kontur, explains how to configure webpack 2 and 3 to get a significant increase in static build speed. The article will be useful to those who already work with webpack or are looking in its direction.
It is worth starting with the remark that webpack 4 came out recently. There, in general, everything is super fast out of the box, and the process of splitting code into chunks has changed.
But dragging into production libraries that were updated yesterday is not my way.
Webpack is a module bundler. It collects various modules with their dependencies into one or several files (bundles). Webpack has a modular architecture, which means it can be configured flexibly: the build is configured with plugins, and code transformations are performed by loaders.
If you want more of the basics, you can read Rakhim Davletkaliev's article about webpack 1. It is a bit outdated, but the ideas and examples in it are covered in detail.
The price of all this flexibility is a complicated configuration.
Setting up earlier versions of webpack was a creative process that could last forever. The situation improved somewhat with the release of the second version and the appearance of coherent documentation. But there are still many settings that do not lie on the surface, because a lot of open-source solutions are embedded in the build process.
Other bundlers exist as well, for example one that brings `require()` to the browser but is far less capable than webpack (it can only handle JS).

We have quite a lot of client code in the project: ~2,000 js/jsx files (~300,000 lines) and ~800 scss files (~50,000 lines). All this beauty has to be built somehow, and we use webpack 3 for that. Obviously, the build will not get faster as the code base grows, so we need to look for ways to optimize build speed. There are already quite a few articles and discussions on this topic, but they usually touch on only one part of the build (caching, prebuilt vendor libraries, etc.). Here I have gathered the various optimization directions together with concrete examples.
Results varied from project to project. For example, in one neighboring project the build time dropped from 3.5 minutes to 30 seconds. The statistics for my project are below.
| | Before all changes | After changes |
|---|---|---|
| Cold build for production | 14 minutes | 3 minutes |
| Rebuild for production | 12 minutes | 2 minutes |
| Cold build for development | 17 minutes | 3 minutes |
| Rebuild for development | 5 minutes | 30 seconds |
The build process is the same in all cases: first the dependencies are installed, then 4 applications are built, each with several entry points.
Here, a "cold build" means that all project caches are cleared (except the local yarn cache, more on that later) and there is no node_modules folder. A rebuild simply means running the build again.
I measured production and development builds separately because they have different configs; for example, the development build has no Uglify.
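Such a split can be sketched as a single config that toggles plugins by environment. This is only an illustration, not the author's actual config; the plugin choice here is an assumption:

```javascript
// A minimal sketch of separating production and development builds
// (webpack 2/3 style; the plugin list is illustrative, not the article's config)
const webpack = require("webpack");

const isProduction = process.env.NODE_ENV === "production";

module.exports = {
  // ...entry, output, module rules...
  plugins: [
    // Minify only in production; the development build skips Uglify entirely
    ...(isProduction ? [new webpack.optimize.UglifyJsPlugin()] : []),
  ],
};
```

An alternative is to keep two fully separate config files and pick one via `--config`; both approaches produce the same effect.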
Next, I will show what change in speed each item achieved. The numbers may well be inconsistent, because the webpack configuration could differ between measurements; pay more attention to the order of magnitude of the changes.
I use yarn. It was released at just the right time and solved many of the problems the native npm client had back then.
In October 2016, when yarn was released, npm was at version 3.10.9, and version 5.0.0 was still about six months away. Some of the problems yarn solved:
- the mechanism for pinning all dependencies was a crutch: there was only `npm shrinkwrap`, which created a lockfile;
- npm was heavily dependent on network stability. This problem was solved by the shrinkpack tool, which archived all current dependencies and replaced the paths in the lockfile with local ones. That worked, but all those thousands of archives had to be dragged into the repository, and conflicts in binaries came up at every turn. You can learn more about this topic from an interesting talk from WSD 2016 in Yekaterinburg;
- reinstalling packages when nothing had changed still took a while;
- subjectively, I really liked the interactive package upgrade mode, `yarn upgrade-interactive`, although analogues exist for npm.
There are now alternatives to yarn: npm has learned to cache, and there is also pnpm, which builds node_modules entirely out of hardlinks.
Installation speed comparison on ~1,300 packages:
| | npm 5 | yarn 0.24.6 | pnpm |
|---|---|---|---|
| Install with local cache | ~1 minute | ~3 minutes | ~1 minute |
| Reinstall | ~20 seconds | ~1 second | ~1 second |
| Lockfile | ✓ | ✓ | ✓ |
You can read a more entertaining comparison on Hacker Noon.
And if you still do not pin your dependencies in package.json, it is time to start.
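As a sketch, pinning means exact versions instead of ranges in package.json; the package names and versions below are made up for illustration:

```json
{
  "dependencies": {
    "react": "15.6.2",
    "redux": "3.7.2"
  }
}
```

With a caret range like `"react": "^15.6.2"`, a fresh install may pull in a newer minor version; an exact version plus a lockfile keeps installs reproducible across machines.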
This works, of course, only if you transform your code and use Babel. The webpack config will look something like this:
    test: /\.jsx?$/,
    use: [{
      loader: 'babel-loader',
      options: {
        cacheDirectory: true
      }
    }]
By default, the cache is put in node_modules/.cache/babel-loader, but you can specify a different directory.
Difference: 667 seconds ⟶ 614 seconds (8%)
A plugin for webpack that caches built modules. There is a big issue about caching in webpack, and the idea behind this plugin originated there; the link leads straight to a post by the plugin's author.
It is added to the webpack config like this:
plugins: [ new HardSourceWebpackPlugin() ]
By default, the cache is put in node_modules/.cache/hard-source, but you can specify a different directory.
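If you want the cache outside node_modules, the plugin accepts a cacheDirectory option; the path below is an assumption for illustration:

```javascript
// Sketch: hard-source-webpack-plugin with a custom cache location
// (the cacheDirectory path here is illustrative, not the article's)
const path = require("path");
const HardSourceWebpackPlugin = require("hard-source-webpack-plugin");

module.exports = {
  // ...
  plugins: [
    new HardSourceWebpackPlugin({
      // [confighash] keeps separate caches for different webpack configs
      cacheDirectory: path.join(__dirname, ".cache", "hard-source", "[confighash]"),
    }),
  ],
};
```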
In my case, simply adding this plugin without any configuration gave a boost from 200 seconds to 50 (when the cache is warm).
Noted problems: when using webpack-dev-server and postcss, some manual tweaking is required.
Difference: 275 sec. ⟶ 53 sec. (80%)
UglifyJS is a tool for minifying JS code. Webpack adds it to the plugins automatically if you build the bundle with the -p flag.
There are two reasons to use webpack-parallel-uglify-plugin: it runs minification in parallel and it caches the results.
If you use this plugin, you will have to add it to the production build manually and stop using the -p flag. Example config:
    plugins: [
      new ParallelUglifyPlugin({
        cacheDir: path.join(dir.root, "node_modules", ".cache", "parallel-uglify"),
        uglifyJS: {/* uglifyjs options */}
      })
    ]
Difference: 627 seconds ⟶ 391 seconds (38%)
This item follows from the previous ones: you should not delete node_modules before deploying client code, since that is where the caches live.
Many of you probably have library projects or frameworks that are used throughout the project. There is no need to rebuild such libraries every time. A couple of plugins built into webpack will help here: DllPlugin and DllReferencePlugin.
First you need to move the DLL build into a separate config. This is an ordinary webpack config with DllPlugin added:
    // webpack.vendor-dll.config.js
    new webpack.DllPlugin({
      name: 'vendor',
      path: 'prebuild/' + environment + '/vendor-manifest.json',
    })
The environment variable here is process.env.NODE_ENV, because I want to keep the development and production DLL builds separate.
To set process.env.NODE_ENV, have a look at the cross-env package. An npm script might then look like this:

    "deploy:app1": "cross-env NODE_ENV=production webpack --progress --config ./path/to/app1/webpack.config.js"
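Putting the pieces together, the separate DLL config might look like the sketch below; the vendor entry list and paths are assumptions for illustration, not the article's actual setup:

```javascript
// webpack.vendor-dll.config.js: a sketch of a complete DLL build config
// (the vendor entry list and output paths are illustrative)
const path = require("path");
const webpack = require("webpack");

const environment = process.env.NODE_ENV || "development";

module.exports = {
  entry: {
    // Libraries that rarely change and can be prebuilt once
    vendor: ["react", "react-dom", "redux"],
  },
  output: {
    path: path.join(__dirname, "prebuild", environment),
    filename: "dll.[name].js",
    library: "[name]", // must match the DllPlugin `name` below
  },
  plugins: [
    new webpack.DllPlugin({
      name: "[name]",
      path: path.join(__dirname, "prebuild", environment, "[name]-manifest.json"),
    }),
  ],
};
```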
After the build you will have two files: vendor-manifest.json and something like dll.vendor.js. They need to be committed to the repository, at least the production versions.
In your main config you need to add DllReferencePlugin:
    // webpack.config.js
    new webpack.DllReferencePlugin({
      manifest: require('./prebuild/' + NODE_ENV + '/vendor-manifest.json'),
    })
You may also want the DLL committed to the repository to end up next to your bundles. CopyWebpackPlugin will help here:
    new CopyWebpackPlugin([
      {
        context: path.join(__dirname, 'prebuild', NODE_ENV),
        from: '*',
      },
    ], {
      ignore: [
        'webpack-vendor-assets.json',
        'vendor-manifest.json',
      ],
    })
Difference: 233 seconds ⟶ 213 seconds (9%)
Starting with version 0.15, css-loader began to slow builds down dramatically. Judging by the comments, some builds became more than 50 times slower. In my case there was a difference, but not that big.
The list of features that cannot be used with the lower version:
But CSS Modules and scoping can still be used. Full documentation for version 0.14.5.
Difference: 213 seconds ⟶ 185 seconds (13%)
This webpack plugin can extract code shared by the specified modules into a separate chunk. That is, if you have two bundles, 1.bundle.js and 2.bundle.js, and both use, say, React and Redux, those libraries will end up in a separate chunk rather than in each bundle.
An example with results can be found in the webpack repository, and the plugin's operation is described in more detail in a topic on StackOverflow.
Config example:
    plugins: [
      new webpack.optimize.CommonsChunkPlugin({
        names: ["common", "manifest"],
        minChunks: Infinity
      })
    ]
At the same time, one of my entry points (not minified) slimmed down from 4.4 MB to 4.2 MB (5%).
Difference: less than 10 seconds
Experiment! Study your bundle! And experiment again!
This webpack setting lets you skip parsing certain libraries or files. Webpack will simply include such a module in the bundle without transforming it.
That is, if a library is shipped to npm already minified and contains no require calls or other code that needs compiling, there is no point in running it through webpack: parsing it would be useless and possibly slow, so it can go straight into the bundle.
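The setting described here matches webpack's module.noParse option; assuming that is the one meant, a sketch (the file names are illustrative):

```javascript
// Sketch: skip parsing prebuilt, dependency-free bundles
// (module.noParse is assumed to be the setting the article refers to;
// the matched file names are illustrative)
module.exports = {
  // ...
  module: {
    // Files matching this pattern are emitted into the bundle untouched,
    // without being scanned for require/import statements
    noParse: /jquery\.min\.js|lodash\.min\.js/,
  },
};
```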
Difference: less than 10 seconds
This webpack setting allows modules and chunks to be cached. It is enabled by default in `--watch` mode.
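Assuming the setting in question is webpack's top-level cache option, enabling it explicitly looks like this:

```javascript
// Sketch: explicitly enabling webpack's in-memory module/chunk cache
// (on by default in --watch mode; this only matters for other setups)
module.exports = {
  // ...
  cache: true,
};
```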
Difference: less than 10 seconds
This plugin is not so much about build speed as about bundle size. Some libraries pull in a ton of junk when imported (for example, localizations for a heap of languages). ContextReplacementPlugin can fix this:
    plugins: [
      new webpack.ContextReplacementPlugin(/moment[\/\\]locale$/, /ru/)
    ]
At the same time, one of my entry points (not minified) slimmed down from 4.2 MB to 3.8 MB (10%), with CommonsChunkPlugin disabled.
Difference: less than 10 seconds
Up to this point, we have been speeding up the builds of individual entry points in various ways. But if you have many of them, or many different webpack configs, they can be built in parallel. There is a wonderful wrapper to which you can pass an array of webpack configs, and they will be built in parallel.
I got about this config:
    const app1Config = require("./App1/webpack.config");
    const app2Config = require("./App2/webpack.config");
    const app3Config = require("./App3/webpack.config");
    const app4Config = require("./App4/webpack.config");

    module.exports = [
      app1Config,
      app2Config,
      app3Config,
      app4Config
    ];
As a result, the whole build takes about as long as building the fattest project alone.
Noted problems:
Difference: 178 sec. ⟶ 119 sec. (33%)
This is a package that runs all code transformations in parallel.
To configure it, you move the settings of your main loader into the HappyPack plugin's settings:
const HappyPack = require("happypack"); // ... plugins: [ new HappyPack({ loaders: ["babel-loader"] }) ]
And use the happypack loader in its place:
    module: {
      rules: [
        {
          test: /\.jsx?$/,
          use: "happypack/loader"
        }
      ]
    }
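If you parallelize several loaders, each HappyPack instance can be given an id and share a thread pool. The pool size and the 'js' id below are assumptions for illustration:

```javascript
// Sketch: HappyPack with an explicit id and a shared thread pool
// (the pool size and the 'js' id are illustrative)
const HappyPack = require("happypack");

// One pool can be shared between multiple HappyPack instances
const threadPool = HappyPack.ThreadPool({ size: 4 });

module.exports = {
  module: {
    rules: [
      // The id in the query string selects which HappyPack instance handles the rule
      { test: /\.jsx?$/, use: "happypack/loader?id=js" },
    ],
  },
  plugins: [
    new HappyPack({
      id: "js",
      threadPool,
      loaders: ["babel-loader?cacheDirectory=true"],
    }),
  ],
};
```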
I got it working on my Windows machine, but I did not see a significant speedup, so I do not use HappyPack. Apparently I am not the only one: issue, issue.
Difference on Windows: less than 10 seconds
Source: https://habr.com/ru/post/351080/