
JavaScript optimization: time-tested experience

Foreword


I have long wanted to write this. The thoughts were there, the desire was there, but not the time... Now the time has been found, so hello, Habr.
Here I have gathered the ideas that have helped me, and still help me, in developing web applications. For convenience, I divided them into groups:
  1. Memory
  2. Operations optimization
  3. Highlighting critical areas
  4. Loops and object properties
  5. A little about the DOM
  6. DocumentFragment as an intermediate buffer
  7. About conversions to objects
  8. Breaking up the code
  9. Drag and Drop Events
  10. Other tips

We will not be talking about any libraries here. I will try to pass on knowledge about the mechanisms of the language itself, not about their implementations in libraries.

Memory

Although this should not normally worry a client-side programmer, let's not forget that memory is not infinite and may some day run out, for example when several heavy programs are running at once: an office suite, an image editor, the compilation of a large program, and so on. Trivial as this example sounds, it really did happen to me: with 1.3 GB of RAM in use (a debugger plus about 30 tabs), the browser was not the only culprit, but it played its part, and the slowdown began once the OS started swapping memory pages out to the paging file.
To reduce memory consumption, I suggest several ways:

1) Reducing the number of local variables.
What does that mean, you ask? Let me explain: in my practice I have seen beginners write code like this:

(function init(){
    // two loops, each with its own counter and its own accumulator
    for(var i=0,n=1;i<10;i++)
        n+=n;
    alert(n);
    for(var i=0,m=1;i<10;i++)
        m*=m;
    alert(m);
})();

Perhaps you do not immediately see the catch, so here it is: why create new variables when we already have old ones that are no longer storing anything useful? In this example the proper fix is to replace every m with n, which saves memory (a corrected sketch follows below).
This technique pays off most in recursive functions, because each call of such a function triggers the creation and, note, the destruction of its local variables once the function completes, which also costs CPU time and memory.
For a visual analogy, think of lockers: you have 6 lockers but only ever need to fill three of them; why do you need the other three if you then have to open all 6 and close all 6?
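
For clarity, a minimal sketch of the corrected version described above, reusing i and n instead of declaring m:

(function init(){
    // reuse the same counter and the same accumulator for both computations
    for(var i=0,n=1;i<10;i++)
        n+=n;
    alert(n);
    for(i=0,n=1;i<10;i++)
        n*=n;
    alert(n);
})();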

2) Reducing the number of closures.
Closures cause significant memory consumption (about 3 MB per 1000 objects in Chromium; newer versions may differ), so use them as little as possible. I use them in two cases:
  1. It is necessary to hide data inside some interface, not to give access from the outside;
  2. During recursion, when you need to record results into one shared object (for example, while traversing the HTML, pushing every node that has a custom dragndrop property into an array), and selecting by selectors is not an option.

Both cases cover particular, one-off situations, which means that only single instances of such interfaces get created.
An example of the first case:
(function init(){
    var INTERNAL_NUMBER=0; // hidden from the outside world
    return {
        get: function get(){ return INTERNAL_NUMBER; },   // read-only access
        set: function set(value){                         // validated write access
            if(typeof value==="number") INTERNAL_NUMBER=value;
            return INTERNAL_NUMBER;
        }
    };
})();


This is how I create SINGLETON interfaces.
If the first case is, I think, already clear, the second one needs explanation. It actually relates to the previous point: we replace a variable that could have been passed as a function argument with a closure, thereby reducing the number of local variables inside that function, while still observing the principle of minimal closures: this closure belongs only to this one recursive function (though that is up to you), and since the same function is reused throughout the recursion, no new closures are created.
Example:

(function init(){
    var found=[]; // shared accumulator, captured by the closure
    (function traverse(html){
        for(var i=html.firstChild;i;i=i.nextSibling)
            arguments.callee(i); // recurse into every child node
        if(typeof html.dragndrop==="object")
            found.push(html);
    })(document.body);
    return found;
})();


As you can see from the example, the recursive function keeps 2 local variables (html, i) instead of three (html, i, found). In practice the speed gain is negligible (at least from closing over a single variable), but it gives an idea of the memory gain.
And please do not blame me for using nextSibling rather than nextElementSibling; everything here is meant only to illustrate a closure inside a recursive function.
ATTENTION: never create closures inside a loop; this causes excessive memory consumption. The exception is when the script logic requires unconditional data hiding (though with a debugger, couldn't I get at it anyway?). An example of improper use of closures:

function addEvents2(divs) {
    for(var i=0; i<divs.length; i++) {
        divs[i].innerHTML = i;
        divs[i].onclick = function(x) {
            return function() { alert(x); };
        }(i); // a brand new closure for every element
    }
}


Yes, yes. This is the very example from Ilya Kantor's article about closures. For explaining the concept it is fine, but for practical use it is completely wrong: several functions get created, each with its own closure. Fine if there are only a few of them. But if there are thousands... The best way out in this case is to create just one function WITHOUT a closure and use the this value:
function addEvents2(divs) {
    var f=function f(){ alert(this.$i); }; // one shared handler, no closures
    for(var i=0; i<divs.length; i++) {
        divs[i].innerHTML = i;
        divs[i].$i=i;                      // the index lives on the element itself
        divs[i].onclick = f;
    }
}

And best of all is to hang the handler on the parent element (credit: egorinsk), as sketched below.
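
A minimal sketch of that delegation idea (not from the original article; the function name addEvents3 and the parent argument are mine, and $i is the same ad-hoc marker as above):

function addEvents3(parent, divs) {
    for(var i=0; i<divs.length; i++) {
        divs[i].innerHTML = i;
        divs[i].$i = i;                        // mark each child, as in the previous example
    }
    // a single delegated handler on the parent instead of one per child
    parent.onclick = function(evnt) {
        var e = evnt || window.event,
            target = e.target || e.srcElement;
        if (typeof target.$i !== "undefined") alert(target.$i);
    };
}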

Operations optimization

I once wrote about this, but I repeat once again: for each type of operation, among all possible options there is one of the fastest, which is preferable to use.
Suppose we have a variable v whose contents depend on the context, and a variable k with the same meaning.
Operation | Code | Comment
Cast to boolean | !!v | Probably everyone knows this one
Cast to a number | v - 0 | Just subtract zero
Cast to a fractional number | v - 0.0 | Small, but still a win
Cast to string | v + "" | Add an empty string
Object creation | {} | Indeed faster than via the new operator; a bonus is the ability to specify properties right away
Array creation | [] | An array is also an object, so this kind of creation is faster as well
Comparison | v === k | Comparison without type coercion (if the script logic allows it)
Increment / decrement, assignment combined with an arithmetic operation | v += 1; v /= 5; | It may seem strange, but this way is faster, and in all browsers
Division / multiplication by numbers that are powers of two (credit: Dzuba) | v << 2 | The operation is replaced by a bitwise shift; the win also applies to other languages

Using these operations is recommended, but the script logic does not always allow you to put them in one place or another, so be vigilant.
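
A quick illustration of several rows of the table in code (the variable values are arbitrary):

var v = "12.75", k = 12;
var b = !!v;           // cast to boolean
var n = v - 0;         // cast to a number by subtracting zero
var s = k + "";        // cast to string by adding an empty string
var o = {};            // object literal instead of new Object()
var a = [];            // array literal instead of new Array()
if (n === k) { }       // strict comparison, no type coercion
k += 1;                // compound assignment
var quarter  = k >> 2; // divide by 4 via a bitwise shift (integers only)
var fourfold = k << 2; // multiply by 4 via a bitwise shift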

Highlighting critical areas

My advice is to estimate in advance how critical a given section (a loop, for example) will be for its scope: as the number of iterations grows, will its execution time grow linearly, or will the dependence be a power function with an exponent greater than one? Then, after optimizing the individual operations in that section, optimize the section as a whole, for instance by introducing an extra variable that stores an intermediate result used more than once.
Simply put, the critical section is the code the processor will spend the longest, and most noticeably, working on for the page. For example, 100k operations during interpolation in an applied-mathematics application.
In the subsections, I will give an example and indicate the essence.

Loops and object properties

The for and while loops are roughly equal in speed. Their difference shows most strongly in IE, where the for loop is several times faster, so I recommend using it. NEVER use the for-in loop for arrays: you will lose performance dramatically, because it not only works slower by walking the property table, it also wastes time enumerating extra properties that have nothing to do with the numeric indices (inherited or custom properties attached to the array). A small illustration follows below.
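
A small illustration of the difference, using a deliberately decorated array:

var arr = [10, 20, 30];
arr.custom = "extra";                   // a stray non-index property

for (var key in arr)                    // slow: walks the property table and also visits "custom"
    alert(key);                         // "0", "1", "2", "custom"

for (var i = 0; i < arr.length; i++)    // fast: only the numeric indices
    alert(arr[i]);                      // 10, 20, 30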
You probably already know the technique of traversing an array backwards, which genuinely reduces the traversal time. It is not always applicable, but it is valid in most cases. Let's look at an example to understand where the "acceleration" comes from:

var arr=[];
arr.length=100500;
for(var i=0;i<arr.length;i++) …; // forward pass


Here we need the concept of an object access: getting or setting a property of the current object, or reading the value of a variable.
Thus, in the example above, one iteration of the loop (not counting the work inside the loop body) makes 4 object accesses: reading i, reading arr, reading arr.length, and incrementing i. In the backward variant these 4 accesses are replaced by a single one:

for(var i=arr.length;i--;) …; // backward pass


Note that the post-decrement operator immediately returns the value, so there is no need to read i a second time.
As for loop logic, try to exit the loop as early as possible. For example, suppose you are testing that every object in an array is truthy (by the problem statement, all of them should be): break as soon as you hit the first falsy element. A sketch follows below.
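
For example, a minimal sketch of such an early exit (the function name allTrue is mine):

function allTrue(arr) {
    var ok = true;
    for (var i = arr.length; i--;) {
        if (!arr[i]) {   // the first falsy element settles the question
            ok = false;
            break;       // exit the loop immediately, no point checking the rest
        }
    }
    return ok;
}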

A little about the DOM

A lyrical digression: before writing this article, I checked whether there were already similar ones here. It turned out that there were. I went to read one and, OH GOD, what did I see there:

document.getElementById('elem').propertyOne = 'value of first property';
document.getElementById('elem').propertyTwo = 'value of second property';
document.getElementById('elem').propertyThree = 'value of third property';


And now the point: NEVER REPEAT WORK THAT HAS ALREADY BEEN DONE, especially if it involves the DOM!
The repeated work here is looking the element up by ID. The larger the document, the slower that search is, and in this case the search is performed three times. The correct solution:

var item=document.getElementById('elem');
item.propertyOne = 'value of first property';
item.propertyTwo = 'value of second property';
item.propertyThree = 'value of third property';


First, it is faster; second, the code gets shorter. The same pattern can be applied to loops:

for(var i=arr.length,c;i--;){
    c=arr[i]; // cache the current element instead of indexing repeatedly
    …
}


---

Continuing the theme of loops: how do you quickly remove all the children of a node, given that references to those objects exist? Right, like this:

var z, node=document.body;
while(z=node.lastChild) node.removeChild(z);


In practice, this code will be a critical section within its function because of the large number of calls into DOM objects.
It is worth noting that DOM interfaces are tens of times slower than native JavaScript objects. So when it comes to accessing DOM properties, minimize the number of such accesses, especially inside loops. In the previous example there are only 2 DOM accesses per iteration: the lastChild property and the removeChild function; accesses to z and node do not count, since they are ordinary variables. But perhaps I am mistaken.
Performance is also lost if, as in the previous case, DOM event handlers are attached all over the document. Try to keep their number small and their code as simple as possible; in some cases their code can itself become a critical section.

DocumentFragment as an intermediate buffer

If you need to insert several sub-elements into an element one by one, do not rush to do it directly: after all, each insertion generates a DOM event. To get around this, there is DocumentFragment, an intermediate buffer that lets you collect the elements and insert them into the right place in ONE go, which noticeably improves performance. To see it for yourself, build a 200-by-100 table with and without a DocumentFragment. For this experiment, special thanks to Ilya Kantor.
All the per-element operations (assigning classes, ids, attributes) are also better done while the nodes are still inside the DocumentFragment. That way no events are generated inside the live document, so no extra load is caused. A sketch follows below.
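
A minimal sketch of the buffering approach (the 20-row size and the class name are arbitrary): rows are built and configured inside the fragment, and the live document is touched only once:

var table = document.createElement("table"),
    tbody = document.createElement("tbody"),
    fragment = document.createDocumentFragment();

for (var i = 0; i < 20; i++) {
    var row = document.createElement("tr"),
        cell = document.createElement("td");
    cell.className = "cell";                         // classes/attributes set while still in the buffer
    cell.appendChild(document.createTextNode("row " + i));
    row.appendChild(cell);
    fragment.appendChild(row);                       // collected in the fragment, not in the document
}

tbody.appendChild(fragment);                         // ONE insertion of all the rows
table.appendChild(tbody);
document.body.appendChild(table);                    // and one insertion into the live document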

About conversions to objects

On large volumes of JSON-like data, eval works VERY slowly; but since undefined values and functions are not part of JSON, an object containing them can only be built with eval.
For real JSON, use JSON.parse. Its drawback is that it demands full compliance with the JSON specification: double quotes around keys and no comments (although that is how it should be). A small comparison follows below.
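
A small comparison, with made-up data:

var strict = '{"name": "test", "count": 5}';
var obj1 = JSON.parse(strict);                 // fast, but requires double-quoted keys and no functions

var loose = '({count: 5, run: function(){ return this.count; }})';
var obj2 = eval(loose);                        // slow, but accepts functions and unquoted keys

alert(obj1.count + obj2.run());                // 10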

Breaking up the code

To understand the text below you need to know about the JS call stack (credit: spmbt): roughly speaking, it is the stack holding all currently executing functions in the order they were called. The first function can land on the stack in several ways: 1) a call from the global execution context; 2) a function launched by a timer/timeout; 3) the execution of an event handler. When a function finishes its work, it is removed from the stack. It is also removed if an error occurred inside it and was not handled.

Did you know that the document is repainted (for example, after you change some element's styles) only once the call stack is empty? Now you know for sure. JavaScript has no way to write something like wait(2000); and then continue in the same function without sacrificing performance. So the code is split into separate functions, each controlling its own elements. The advice is: when breaking the code into functions this way, try to spread the load evenly across them. Otherwise it can happen that one function changes almost all the styles of most elements in the document, another grinds through a heavy mathematical task, and the rest do almost nothing; as a result, the page "hangs".
Evenness is achieved by scheduling the functions on an interval. As far as visual effects go, the longer the interval between function runs, the more time there is to repaint the document and the lower the CPU load; but if the interval is too long, a "jerking" effect becomes noticeable. The optimal interval is about 20 ms. The minimum interval is 4 ms, except in Opera (1 ms) and IE (15 ms); even if you set the interval to 0, the actual call will still happen only after the minimum interval. A sketch of such a breakdown follows below.
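
A sketch of that kind of breakdown (the names processInChunks and processItem are mine): the work is cut into portions so the call stack empties between them and the browser gets a chance to repaint:

function processInChunks(items, processItem) {
    var i = 0, CHUNK = 100;                    // how many items per portion; tune for your case
    (function portion() {
        var limit = Math.min(i + CHUNK, items.length);
        for (; i < limit; i++)
            processItem(items[i]);
        if (i < items.length)
            setTimeout(portion, 20);           // let the stack empty so the page can repaint
    })();
}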

Drag and Drop Events

Such events are critical because they fire EVERY time a drag is detected. To reduce the load, it is better to defer the real work to a timeout by means of a closure (credit: egorinsk):

(function init(){
    // the heavy work runs on a timeout, so the mousemove listener itself stays cheap
    var MODE_MOUSE_MOVE=true;
    var move_listener=function move_listener(evnt){
            if(!MODE_MOUSE_MOVE) return;
            MODE_MOUSE_MOVE=false;            // (added) block re-scheduling until move_handler has run
            ev=evnt||window.event;            // for IE8
            timeout=setTimeout(move_handler,10);
        },
        ev=null,
        move_handler=function move_handler(){
            timeout=0;
            MODE_MOUSE_MOVE=true;
            if(typeof document.$onmousemove=="function")
                document.$onmousemove(ev);
        },
        timeout=0;
    document.onmousemove=move_listener;
})();


If you assign a function to document.$onmousemove, it will be executed each time a drag is detected, but its cost is reduced because it runs through a timer. The timeout id is stored in case you ever need to be able to cancel that timer.
The same applies to the window resize event.

Other tips

Try to use wrapper functions: they make the code modular and isolate the local variables of one module from those of another. This reduces the chance of reusing an already declared variable that already has a role assigned to it.
That is essentially everything. One more thing: optimize not only the code but also its readability. In unminified source code, give functions and their parameters meaningful names and prefix them with their type (s for string, n for number, and so on). I do this, and the code is still understandable even a year later. A small sketch follows below.
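
A tiny sketch combining both habits, a wrapper function that isolates its locals plus type-prefixed names; the module itself is invented for illustration:

var counterModule = (function () {             // wrapper: the locals below are invisible outside
    var sTitle = "clicks",                     // s - string
        nCount = 0;                            // n - number
    return {
        increment: function () { return ++nCount; },
        label:     function () { return sTitle + ": " + nCount; }
    };
})();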

Afterword


I am glad you read this article. I hope you found the presentation clear and accessible. It may be a shortcoming that there are no examples with popular libraries, but I believe you need to be able to use JS without libraries, especially if you are a client-side developer.

Maybe most of these tips are already old news to you, but I think you still picked up something useful here. Thanks.

UPD 1. Taking into account the advice given in the comments, I am correcting all the flaws that were found.

Source: https://habr.com/ru/post/137318/

