
Since January of this year, Bill McCloskey and David Anderson have been working to make Firefox multiprocess, with help from Tom Schuster (evilpie), Felipe Gomez, and Mark Hammond. Now the moment has come when they
would like to hear the community's opinion of the work done.
Firefox has always used a single-process architecture. Interest in this kind of parallelization was spurred by the release of Chrome, which uses one process for the interface and separate processes for rendering web page content. (Internet Explorer 8, however, began using multiple processes six months before Chrome.) Soon several other browsers followed Chrome's example, and Mozilla started the
Electrolysis project to adapt the Gecko engine to use multiple processes.
What is driving Mozilla to switch to a similar model for its browser? First of all, performance and responsiveness. The main goal is to reduce jank: the stuttering that shows up during everyday operations such as loading a particularly large page, typing in a web form, or scrolling a page overloaded with elements.
Responsiveness today is somewhat more important than raw performance. Part of this work was done within the
Snappy project, whose main tasks were:
- Moving long operations to a separate thread so that the main thread remains responsive.
- Implementing asynchronous I/O so that the main process is not blocked by disk operations.
- Breaking long operations into parts, returning to the event loop in between; incremental garbage collection is an example of this.
The simplest of these tasks have already been completed; the hardest ones remain.
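The third technique above, splitting a long operation into chunks with a return to the event loop in between, can be sketched in miniature. This is a Python toy model, not Gecko code; all names here are illustrative:

```python
from collections import deque

def long_operation(items, chunk_size=3):
    """Process items in small chunks, yielding control after each chunk
    so the event loop can run other work in between."""
    total = 0
    for start in range(0, len(items), chunk_size):
        for x in items[start:start + chunk_size]:
            total += x
        yield  # hand control back to the event loop between chunks
    return total  # delivered via StopIteration.value

def run_event_loop(tasks):
    """A toy event loop: round-robin over generator-based tasks."""
    queue = deque(tasks)
    results = []
    while queue:
        task = queue.popleft()
        try:
            next(task)
            queue.append(task)  # not finished: requeue behind other work
        except StopIteration as done:
            results.append(done.value)
    return results

results = run_event_loop([long_operation(list(range(10)))])
print(results)  # [45]
```

Because the generator yields between chunks, other queued tasks (in the browser's case, input handling and painting) get a turn before the long operation continues.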
Another motivation is security. Today, after finding an unpatched vulnerability in Firefox, an attacker can execute arbitrary code on users' machines. Many techniques are used to address security problems, but the most effective is to run code in a sandbox.
However, sandboxing the current single-process Firefox architecture would not be effective: a sandbox merely prevents a process from performing actions it should not perform, and the way Firefox is currently organized (especially with many add-ons installed) requires broad access to the network and file system. In multiprocess Firefox, each web content process will run in a tightly restricted sandbox, which, the developers hope, will reduce the number of exploitable vulnerabilities in the browser. Access to the file system will be controlled by the main process.
In addition, the developers have tried to improve the stability of Firefox, even though
Firefox remains the most stable browser in the world. Instead of the whole browser crashing, only the process responsible for a specific tab or element will crash.
You can already try the result for yourself. Simply download a nightly build of the browser and set the browser.tabs.remote preference to true. The developers strongly recommend creating a new profile. about:memory already displays memory consumption for individual processes.
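Firefox preferences can also be set persistently in a profile's user.js file. A minimal sketch, assuming a Nightly profile; browser.tabs.remote is the preference named above, the rest is standard prefs syntax:

```javascript
// user.js in the Nightly profile directory; read at browser startup.
// Enables out-of-process tabs (Electrolysis) in this build.
user_pref("browser.tabs.remote", true);
```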
This is how a multiprocess Firefox window looks. The underlined tab title reflects the fact that its content is handled in a separate process.

And this is how a separate tab crashes.

The first question most people ask concerns RAM consumption: users are convinced that more processes mean more memory. The developers promise a number of optimizations and the introduction of certain caches shared between several processes. If data has already been written to such a cache by one process, another process can check for its presence and use it instead of creating a fresh copy in its own memory area. This model makes it possible to improve security while preserving most of the speed.
With the MemBench benchmark, after opening 50 tabs memory consumption increased by only 10 megabytes (from 974 MB to 984 MB) compared to the usual single-process version. Over time, this difference is expected to shrink further.
At the moment it remains unknown when multiprocess Firefox will reach the release stage: the developers still face too much work. The details of the architecture are described in the
publication by Bill McCloskey.