
Examining JS speed and the page rendering algorithm

Testing the speed of JS execution or page rendering is a thankless task. A test reflects reality only when it is run under conditions as close to real ones as possible and when the things being compared are functionally identical. Ask what is faster, a truck or a sports car, and everyone will immediately answer: the sports car. And what if the race is across a field, towing a trailer of manure? In each case the winner is whichever is best suited to the specific task.

This article contains some hypotheses and some facts. There will be no fanboy speeches and no calls to switch browsers.

So, our guinea pigs: Firefox, Opera, and Chrome.

I did not test IE9, because I only have it installed in a virtual machine, which is fraught with a speed penalty and a noticeable spread in the measurements.


JS engines: optimized and optimized, yet not fully optimized


The race for JS engine speed has done a good thing: it made browsers more responsive, gave coders more control over how pages are displayed, and made animation pleasing to the eye. These are not all the advantages; the list goes on. We learned about the wonders of optimization from the wonderful benchmarks such as SunSpider, Dromaeo, the V8 Benchmark Suite and others. Now the question arises: what do all these benchmarks actually measure? JS execution speed? Maybe. And maybe not.

What does it cost us to build a DOM?


Let's take a simple, spherical-horse-in-a-vacuum example: a script creates a DOM node in a loop and appends it to the document.
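
A minimal sketch of what such a test loop might look like (the markup, class name, and container id here are illustrative assumptions, not the original benchmark's code):

```javascript
// Illustrative only: create N nodes in a loop and append each one to the document.
function runTest(iterations) {
  var container = document.getElementById('test-area'); // hypothetical container element
  var start = Date.now();

  for (var i = 0; i < iterations; i++) {
    var node = document.createElement('div');
    node.className = 'clone';
    node.textContent = 'node #' + i;
    container.appendChild(node);        // hand the node over to the browser's DOM
  }

  return Date.now() - start;            // elapsed time in milliseconds
}
```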

I'll jump ahead a little. Knowing that JavaScript engines can exist separately from the browser, I assume that the execution of our test script can be divided into three stages:

  1. Execution of the JS function to create a node (for simplicity, let's call it “JS”)
  2. Creating a DOM object in the browser (DOM)
  3. Rendering of the node (Rendering)


Sequential execution


Obviously, the simplest way to implement the display of elements is to execute the commands sequentially: first we run the JS function, which gives us an instance of a DOM node, which we then send off to be rendered.

[diagram: sequential execution (JS, then DOM, then Rendering)]

This is not the worst way to do it. Its main drawback is that the JS engine cannot be "cut off" from the other functional parts of the browser. However, there is an easy way around this.

Performing tasks one by one


Now let's pull a little trick and separate our layers with execution queues. The JS engine simply queues up a set of commands and goes to sleep, having passed the task on to the factory of DOM objects. The factory executes all the necessary commands, producing a set of objects ready for rendering. We put everything into the rendering queue and hand the baton to the rendering system. Voila!

[diagram: queued execution, each stage fills a queue for the next stage]

In this architecture it is easy to "cut off" one part of the browser from another: all communication happens through task queues.
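
As a rough illustration of the idea (a toy model, not how any real browser is actually implemented), the queued scheme can be modeled with plain arrays used as queues:

```javascript
// Toy model of the queued architecture: each stage drains its queue and feeds the next one.
var domQueue = [];      // commands waiting for the "DOM factory"
var renderQueue = [];   // nodes waiting to be painted

function jsStage(iterations) {
  for (var i = 0; i < iterations; i++) {
    domQueue.push({ tag: 'div', text: 'node #' + i });  // JS only enqueues work, then "sleeps"
  }
}

function domStage() {
  while (domQueue.length) {
    var cmd = domQueue.shift();
    var node = document.createElement(cmd.tag);
    node.textContent = cmd.text;
    renderQueue.push(node);                             // ready for rendering
  }
}

function renderStage(container) {
  while (renderQueue.length) {
    container.appendChild(renderQueue.shift());         // painting happens in one final pass
  }
}
```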

Full parallelism


In the previous variant the stages are still performed one after another. What if we make all three systems work in parallel? The DOM factory watches the job queue and, as soon as a command appears, immediately builds a DOM node and sends it to the drawing queue.

[diagram: fully parallel execution of the three stages]

Scary? Yes, scarily interesting!

Testing


Let's now measure the JS execution speed under each of the architectures described above. Even before testing it is clear that in the first case we would be measuring the speed of all three stages, while in the second and third we would measure only the speed of filling the queues. And that would not reflect reality!

To put the three approaches on an equal footing, I wrote a simple benchmark (RapidShare / Yandex / pasteBin). It randomly generates clones of several template elements. If a clone is the last one in its row, it is assigned a width that makes it occupy all the remaining free space. The elements are floated, so until they are actually laid out their position and width are unknown. In effect, I pushed everything as close as possible to sequential execution of the stages.
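
The original benchmark links are not reproduced here, but my reading of the description is that the "load" (the "brake" mentioned below) is the size read that forces layout. The sketch below is an assumption about the approach, not the author's actual code:

```javascript
// Rough sketch of the idea: floated clones with random widths; the last clone in a row
// is stretched to fill the remaining space, which requires knowing the laid-out geometry.
function addClone(container, isLastInRow) {
  var clone = document.createElement('div');
  clone.style.cssFloat = 'left';
  clone.style.width = (50 + Math.round(Math.random() * 100)) + 'px'; // random width
  container.appendChild(clone);

  if (isLastInRow) {
    // Reading offsetLeft/offsetWidth forces the browser to lay out the floats
    // before the script can continue: this is the assumed "brake".
    var used = clone.offsetLeft + clone.offsetWidth;
    var free = container.clientWidth - used;
    if (free > 0) {
      clone.style.width = (clone.offsetWidth + free) + 'px';
    }
  }
}
```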

Testing really surprised me.

To start, look at the test results without load.

[chart: test execution time without the load]

Time, ms      500     750    1000 iterations
Firefox        19      30      45
Opera           6      10      14
Chrome          5       9      14

Fox lags behind Opera and Chrome by a factor of three. Chrome is slightly faster than Opera, but as the number of iterations grows the gap narrows.

And now let's turn on the "brake".

[chart: test execution time with the load enabled]

Time, ms      500     750     1000 iterations
Firefox       950    2350     4800
Opera         610    1250     2100
Chrome       5700   20500   ~55000

I could not believe my eyes. At a thousand iterations Chrome falls into deep thought and asks you to stop tormenting its vulnerable soul and to kill the script. A real spanner in the works!

Opera won the contest outright; Fox came second.

Some analysis


If we take the ratio of the results with the load to the results without it, a rather interesting picture emerges. Firefox loses performance by a factor of 50 to 100, and the slowdown grows almost linearly with the number of iterations. This picture and behaviour fit the first scheme, sequential execution of the stages, very well. Visually, Fox does not render the page until the loop finishes.

With Opera, turning on the load degrades speed by a factor of 100 to 150. Opera draws the page as the script runs, which looks very much like the scheme with all three stages running in parallel.

Chrome degrades by a factor of 1,000 to 4,000 and does not render the page until the loop finishes. This looks very much like the second scheme, where tasks are performed one by one through queues.

An amusing picture emerges.

Conclusion


My article is partly theoretical in nature, and the architectures presented in it have not actually been confirmed.

When writing scripts that analyze element sizes, be careful and vigilant. Differences in rendering architecture can lead to catastrophic slowdowns.
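
One practical way to soften this (a general technique, not something from the original benchmark) is to separate DOM writes from size reads, so the browser is not forced to re-lay out the page on every iteration:

```javascript
// Mitigation sketch: build all nodes off-document, append them in one batch,
// then read sizes in a second pass, so layout is forced at most once instead of N times.
function buildAndMeasure(container, count) {
  var fragment = document.createDocumentFragment();
  var nodes = [];

  for (var i = 0; i < count; i++) {          // write phase: no layout reads here
    var node = document.createElement('div');
    node.className = 'clone';
    fragment.appendChild(node);
    nodes.push(node);
  }
  container.appendChild(fragment);

  var widths = [];
  for (var j = 0; j < nodes.length; j++) {   // read phase: layout happens once
    widths.push(nodes[j].offsetWidth);
  }
  return widths;
}
```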

UPD. Added a link to the benchmark on Yandex.

Many people ask which program I use to create the diagrams: Corel Photopaint X5, by hand.

UPD2. Added a link to pasteBin. Thanks, friends!

Source: https://habr.com/ru/post/108705/

