
Hello, dear readers! Very soon we will have a new book about React and Redux (the original was published by O'Reilly in May 2017). To convey the scale of the disaster, that is, the range of problems that can arise when building web applications with these technologies, we offer an abridged translation of an article by Samuel Mendenhall (from November 15), which discusses the subtleties of working with React, Redux and TypeScript, and how to fix and preempt performance problems in such applications.
Introduction

Last year our team rewrote one of the company's internal applications from Angular to React. Many of us had dealt with React before; the team included both newcomers and experienced developers. Even so, we all learned a great deal on this project, especially while working through the pain points that came up during development: fighting inefficiency, studying the successful approaches of our colleagues, and finding out by trial and error what worked best for us. Here is what we did.
We used TypeScript

One of the most successful decisions made on this project was to use TypeScript, or, I would say more broadly, typed JavaScript. We had to choose between TypeScript and Flow, and although we had nothing against Flow, we settled on TypeScript because it fit better into our workflow. TypeScript turned out to be a real find, and the whole team became more confident when working on the code base. Refactoring a huge code base where calls go 3-4 levels deep and come from different parts of the application is a nerve-wracking business. With TypeScript, as soon as you type all your functions, that uncertainty almost completely disappears. Of course, you can still write incorrect or incomplete TypeScript that causes errors, but if you adhere strictly to proper typing, some classes of errors (say, passing the wrong set of arguments) all but disappear.
If you are unsure about TypeScript, or simply want to seriously reduce the level of risk in your application: take TypeScript and use it.
By the way, we also worked with TypeStyle (typestyle.github.io), which we were very pleased with.
Avoid large JavaScript applications that lack strict code styling and standards and do not use some type-checking tool such as Flow or TypeScript. Other tools can help here as well, for example Scala.js.

It is better to keep in mind that the longer a JavaScript project grows without typing, the harder it becomes to refactor. Type checking never eliminates risk entirely, but it reduces it significantly.
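As a minimal sketch of the class of errors typing removes (the names here are invented for illustration):

    interface IProduct {
        name: string;
        version: string;
    }

    function formatProduct(product: IProduct): string {
        return product.name + "@" + product.version;
    }

    // formatProduct("widget", "1.2.0");  // compile error: wrong set of arguments
    formatProduct({ name: "widget", version: "1.2.0" });  // OK, checked at compile time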
Track errors

Another invaluable decision the whole team made was to use Sentry: sentry.io/welcome. I am sure there are other excellent error-tracking tools out there, but we started with Sentry, and it has served us just fine. Sentry gives you eyes; in the early days we were moving through production environments practically blind. At first we relied on testers and users, assuming we would learn about errors in the product from them, and users will always find the bugs that escaped the testers. This is where Sentry helped. If you mark up your releases correctly, you can concentrate on a specific release or a specific group of users and actually get ahead of bugs. We caught many errors before they reached production by analyzing code quality in Sentry, where it is very easy to spot unexpected data problems and other unhandled situations.
Try not to go into production until you have implemented automatic error reporting.

It is better to work with Sentry or some other error-reporting tool.
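For illustration, here is roughly what the wiring looks like with today's Sentry browser SDK. The DSN and release values are placeholders, and the project described in the article would have used the raven-js client of that era, so treat this as a sketch:

    import * as Sentry from "@sentry/browser";

    Sentry.init({
        dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",  // placeholder DSN
        release: "my-app@1.2.3",      // marking up releases lets you filter errors per release
        environment: "production",
    });

    declare function doSomethingRisky(): void;  // hypothetical application code

    // Unhandled exceptions are reported automatically; handled ones can be sent explicitly:
    try {
        doSomethingRisky();
    } catch (err) {
        Sentry.captureException(err);
    }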
Optimize the build process

Yes, find time for this. Say your local dev build takes 20 seconds. Now suppose you have 10 developers on the project, each recompiling the code every 12 minutes, that is, 40 times a day. That is 800 seconds a day spent waiting, per developer. Counting working days and an annual month of vacation, each developer wastes about 50 hours a year, and the whole team about 500 man-hours. This is no trifle when you can easily reduce the build time and recover the time lost to waiting and context switching.
Our rebuilds take no more than 2-5 seconds thanks to the Webpack DLL and other development-time optimizations. We also use code splitting and hot module replacement, so that only the modules that were modified are reloaded. We even have a stripped-down build configuration, so when working on certain parts of the application we compile the whole thing only once at the very start. Webpack lets you do many tricks like this.
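A minimal sketch of the DLL idea (file names and the vendor list are illustrative): the rarely-changing vendor code is compiled once into a separate bundle, and day-to-day rebuilds only reference it.

    // webpack.dll.config.js: built once, rarely rebuilt
    const path = require("path");
    const webpack = require("webpack");

    module.exports = {
        entry: { vendor: ["react", "react-dom", "redux", "react-redux"] },
        output: {
            path: path.resolve(__dirname, "dll"),
            filename: "[name].dll.js",
            library: "[name]_lib",
        },
        plugins: [
            new webpack.DllPlugin({
                path: path.resolve(__dirname, "dll", "[name].manifest.json"),
                name: "[name]_lib",
            }),
        ],
    };

    // In the main webpack.config.js, the app build references the prebuilt bundle:
    //   new webpack.DllReferencePlugin({
    //       context: __dirname,
    //       manifest: require("./dll/vendor.manifest.json"),
    //   })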
AirBnB wrote a wonderful writeup on how they optimized their build process: github.com/webpack/webpack/issues/5718. We use many of the optimization methods mentioned there, though not all.
Try not to be limited to the standard webpack build; it is worth carrying out reasonably deep optimization.

It is better to customize the webpack build for the specifics of your particular application. For example, if you use TypeScript, you may want awesome-typescript-loader; if not, a tool like happypack can help.
Use modern JavaScript constructs, but consider what they entail. For example, async/await makes it very convenient to write clean asynchronous code, but do not forget that if you await a Promise.all and any one of its promises rejects, the whole call throws. Build your redux actions with this in mind, otherwise a small API failure can keep whole sections of your application from loading.
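A sketch of the failure mode and one way to soften it (the fetch functions and types are hypothetical):

    interface IUser { name: string; }
    interface IOrder { total: number; }

    declare function fetchUser(id: string): Promise<IUser>;
    declare function fetchOrders(id: string): Promise<IOrder[]>;

    async function loadPage(id: string) {
        // If either promise rejects, the whole await throws and nothing loads:
        // const [user, orders] = await Promise.all([fetchUser(id), fetchOrders(id)]);

        // Catching per promise lets one API failure degrade one widget
        // instead of taking down the whole page:
        const [user, orders] = await Promise.all([
            fetchUser(id).catch((): IUser | undefined => undefined),
            fetchOrders(id).catch((): IOrder[] => []),
        ]);
        return { user, orders };
    }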
Another very nice construct is the object spread operator, but note that it produces a new object each time, violating reference equality and thus defeating the natural use of PureComponent.
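A sketch of the trap (component names are invented): spreading in render creates a new object on every pass, so PureComponent's shallow comparison always sees a changed prop.

    import * as React from "react";

    const baseStyle = { color: "red" };

    class Row extends React.PureComponent<{ style: React.CSSProperties }> {
        render() { return <div style={this.props.style} />; }
    }

    class List extends React.Component {
        render() {
            // New object on every render: Row re-renders even though nothing changed.
            return <Row style={{ ...baseStyle, width: 100 }} />;
        }
    }

    // Better: merge once, so the reference stays stable between renders.
    const rowStyle = { ...baseStyle, width: 100 };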
Try not to use ES6/ES7 constructs when the performance of the web application suffers from them. For example, do you really need that anonymous inline function in your onClick? If you are not passing any additional arguments, most likely you do not.

It is better to understand the effects of the different constructs and use them deliberately.
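For example (a sketch with invented names): an inline arrow function allocates a new function on every render and, as above, breaks shallow prop comparison in child components.

    import * as React from "react";

    class Item extends React.PureComponent<{ onSelect: () => void }> {
        render() { return <button onClick={this.props.onSelect}>Select</button>; }
    }

    class Parent extends React.Component {
        // Bound once per instance: the same reference on every render.
        private handleSelect = () => { /* handle the click */ };

        render() {
            // return <Item onSelect={() => this.handleSelect()} />;  // new function each render
            return <Item onSelect={this.handleSelect} />;
        }
    }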
Do you need Babel?

At one of the first stages of rewriting from good old JavaScript to TypeScript, we still had Babel in the pipeline. At some point we asked ourselves: "What is Babel even doing in this mix?" Babel is an invaluable library that solves the tasks it is designed for very well, but we use TypeScript, which transpiles the code for us; we simply did not need Babel. Removing it sped up the build process and got rid of one level of complexity.
Try not to use libraries and loaders you do not need. When was the last time you went through your package.json or webpack configuration and checked for unused loaders and libraries?

It is better to review your build toolchain and the libraries you load periodically; you may well be able to get rid of something.
Consider what dependent libraries you have

Updating dependencies always carries some risk; you can reduce it with functional tests, TypeScript and a sound build process. Not updating can carry even more risk. Take React 16, where key changes occurred: recent versions of React 15 could issue warnings that some dependencies did not yet comply with the new PropTypes standard and would break in the next release. Those warnings looked like this:
Warning: Accessing PropTypes via the main React package is deprecated. Use the prop-types package from npm instead.
So if you had never updated the dependent libraries that resolve such warnings, you would not have been able to upgrade to React 16.
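For reference, resolving that particular warning in your own code is a one-line change:

    // Deprecated: React 15.5 warns about this, React 16 removes it.
    // const propTypes = { name: React.PropTypes.string };

    // Instead, take PropTypes from the standalone package:
    import PropTypes from "prop-types";
    const propTypes = { name: PropTypes.string };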
Managing dependent libraries is a double-edged sword. Pinning dependencies reduces risk at first, but over time you risk missing fixes and (potentially) optimizations: some dependencies may drift away from current conventions, and maintainers may not backport important fixes to old versions. The other edge of the sword: pinned dependencies get in the way when libraries release updates frequently. We managed to find a balance between pinning and frequent updates. There is a happy medium: let a major release stabilize, then find time during the hardening stage of your application to update the dependencies.
Try not to pin dependencies and never update them; but do not jump on every major release the moment it ships, either.

It is better to work out a cadence for reviewing dependency releases, judge what is worth updating now, and perform those updates during the hardening stage of the application.
Know the weaknesses of your stack

For example, we use redux-actions and react-redux, and there is a drawback: the types of action arguments are not checked between actions and reducers. We ran into real problems with this: we would update an action but forget to update the reducer, and the mismatch was not caught during type checking. One way around this is to write a single interface containing all the arguments and use it on both sides. Then, when we update that shared interface, we can be sure all the types are properly checked.
Do not do this:
interface IActionProductName { productName: string; }
interface IActionProductVersion { productVersion: string; }

const requestUpdateProductVersion = createAction(
    types.REQUEST_UPDATE_PRODUCT_VERSION,
    (productName: string, productVersion: string) => ({ productName, productVersion }),
    null
);
const receiveUpdateProductVersion = createAction(
    types.RECEIVE_UPDATE_PRODUCT_VERSION,
    (productName: string, productVersion: string) => ({ productName, productVersion }),
    isXhrError
);

[types.RECEIVE_UPDATE_PRODUCT_VERSION]: (
    state: ICaseDetailsState,
    action: ActionMeta<IActionProductName & IActionProductVersion, {}>
): ICaseDetailsState => {
    // ...
In large applications this approach looks simpler, cleaner and more compact, but it relies on ANDing separate interfaces between the action and the reducer. Strictly speaking, there is no true type checking between action and reducer in either variant, but without a single interface for all the arguments there is a real risk of refactoring errors.
Better do this:
interface IActionUpdateProductNameVersion {
    productName: string;
    productVersion: string;
}

const requestUpdateProductVersion = createAction(
    types.REQUEST_UPDATE_PRODUCT_VERSION,
    (productName: string, productVersion: string) => ({ productName, productVersion }),
    null
);
const receiveUpdateProductVersion = createAction(
    types.RECEIVE_UPDATE_PRODUCT_VERSION,
    (productName: string, productVersion: string) => ({ productName, productVersion }),
    isXhrError
);

[types.RECEIVE_UPDATE_PRODUCT_VERSION]: (
    state: ICaseDetailsState,
    action: ActionMeta<interfaces.IActionUpdateProductNameVersion, {}>
): ICaseDetailsState => {
    // ...
When the shared interfaces.IActionUpdateProductNameVersion is used, any change to that interface is picked up by both the action and the reducer.
Profile the application in the browser

React will not tell you when it has performance problems, and they can in fact be hard to identify without looking at JavaScript profiling data.
I would say that many React/JavaScript performance problems fall into one of the following three categories.
First: is a component updating when it should not? Next: is updating the component costing more than simply rendering it? Answering the first question is easy; the second, not so much. For the first, you can use something like github.com/MalucoMarinero/react-wastage-monitor. It is simple: it logs to the console whenever a component updates while its properties remain strictly equal. For that particular purpose it is good. We optimized with its help and then turned it off, since excluding node_modules did not work perfectly and the tool is imprecise when properties are functions, and so on. Every tool is good for what it is intended for.
The second category of JavaScript optimizations comes out of profiling. Are there areas of the code that are slower than you expect? Are there memory leaks?
Google has excellent references on this:
developers.google.com/web/tools/chrome-devtools/evaluate-performance/reference and
developers.google.com/web/tools/chrome-devtools/memory-problems

The third category is eliminating unnecessary calls and updates. This optimization differs from the first one: there we asked whether the component should update; here we ask whether the call should be made at all. For example, it is easy to make this mistake: accidentally sending multiple identical backend calls from the same component.
Try not to do this:
componentWillReceiveProps(nextProps: IProps) {
    if (this.props.id !== nextProps.id) {
        this.props.dispatch(fetchFromBackend(nextProps.id));
    }
}

export function fetchFromBackend(id: string) {
    return async (dispatch, getState: () => IStateReduced) => {
        // ...
    };
}
Better do this:
componentWillReceiveProps(nextProps: IProps) {
    if (this.props.id !== nextProps.id && !nextProps.isFetchingFromBackend) {
        this.props.dispatch(fetchFromBackend(nextProps.id));
    }
}

and in the action:

export function fetchFromBackend(id: string) {
    return async (dispatch, getState: () => IStateReduced) => {
        if (getState().isFetchingFromBackend) return;
        // ...
    };
}
This example is a bit contrived, but the logic in it is sound. The problem arises when componentWillReceiveProps fires repeatedly in your component and you never check whether the backend call is actually needed: then it executes unconditionally.
Things get more complicated when there are many different clicks and changing arguments to deal with. What if you are displaying a sales order, and the component needs to re-render with a new order, but before that happens the user clicks somewhere else and requests yet another order? Such asynchronous calls do not always complete in a deterministic order. Moreover, what if the first asynchronous call finishes after the second because of some unexpected delay on the backend? Then the user sees the wrong order. The code example above does not even touch this problem; all it does is prevent multiple calls while one is still in flight. Ultimately, to cope with this hypothetical situation, we would need to create keyed-access objects in the reducer, like this:
objectCache: { [id: string]: object };
isFetchingCache: { [id: string]: boolean };
where the component always holds a reference to the last id clicked, and isFetchingCache is checked against that last id.
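A sketch of that shape in reducer state (the names are hypothetical):

    interface IOrderState {
        objectCache: { [id: string]: object };
        isFetchingCache: { [id: string]: boolean };
        lastRequestedId: string;  // the component renders the order this id points to
    }

    // A late response for a stale id fills its own cache slot but cannot
    // overwrite what the user is currently looking at.
    function receiveOrder(state: IOrderState, id: string, payload: object): IOrderState {
        return {
            ...state,
            objectCache: { ...state.objectCache, [id]: payload },
            isFetchingCache: { ...state.isFetchingCache, [id]: false },
        };
    }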
Note that the above is far from all the performance problems you may face with React and JavaScript. Once, for instance, we ran into a performance drop when calling reducers. It turned out that a very large, deeply nested object from an API response had made its way into redux, and deep-cloning it was ruining performance. We caught the problem by profiling the JavaScript in Chrome: when the clone function rose to the top of the profile, we quickly figured out what was wrong.
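The fix, in sketch form (the state shape is invented): copy only the path you actually change instead of deep-cloning the whole response.

    interface IBigState {
        order: { status: string; items: object[] };
        customer: object;  // large untouched branch
    }

    // Expensive: a deep clone touches every node of a large nested object
    // and shows up at the top of the CPU profile.
    // const next = JSON.parse(JSON.stringify(state));

    // Cheap: shallow-copy only the changed path; untouched branches
    // (customer, items) are shared by reference.
    function updateStatus(state: IBigState, status: string): IBigState {
        return {
            ...state,
            order: { ...state.order, status },
        };
    }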