
At Google I/O we were introduced to Polymer 1.0, a new release of the library that brings a number of features and changes. Perhaps it is worth starting with Shady DOM.
Why do we need another DOM?
Encapsulation is the foundation of web components.
The goal of web components is to give the developer a simple interface for displaying complex elements whose implementation is hidden.
Browsers themselves use encapsulation all the time. The <select> and <video> elements, for example, are rendered with an internal, inaccessible DOM that only the browser knows about.
Many libraries try to imitate this behavior. Take a jQuery plugin that turns an element of your choice into a slider: as a rule, the plugin generates a pile of DOM around the element to give it typical slider behavior and capabilities. The approach works well in practice, but all of the DOM generated for the slider's needs is exposed right on the page. That is nowhere near as elegant as <select> or <video>.
Shadow DOM aims to solve this problem. Browsers that support shadow DOM can display complex elements by hiding the implementation (DOM, CSS, JS).
Simple markup is good!
Let's imagine an x-fade element whose whole purpose is to fade an image in nicely once it has loaded:
<x-fade> <img src="cool.png"> </x-fade>
And let's say we implemented a plugin for it:
$('x-fade').makeFade();
The page author will be quite happy, since this gives him the behavior he needs.
In fact, this is all we need from web components: simple markup that produces the required behavior. But the plugin-based approach has a number of drawbacks, which shadow DOM solves.
DOM pollution
Suppose that after calling makeFade, we end up with the following DOM:
<x-fade> <div> <img src="cool.png"> </div> <canvas></canvas> </x-fade>
The x-fade plugin needs some extra DOM to do its job, and the elements it added are fully exposed, which leads to the following problems:
- Implementation details are disclosed.
- Selectors that traverse the document tree will include the <canvas> and the <div>.
- Since the page author never expected these elements to appear, they may inherit unwanted styles.
- The <img> element, on the contrary, can lose its styles, since it is no longer part of its original tree.
- Can the author still add a new element? Change or delete one? What should he do now that the elements are no longer where they started?
Tree scoping
Tree scoping lets us hide part of the DOM tree from the main document.
If we implement x-fade using shadow DOM, then after the makeFade call our tree will look like this:
<x-fade> <img src="cool.png"> </x-fade>
That is, exactly the same as before initialization.
What the browser renders differs from what the code sees: to the developer, this is still an element with a single <img> inside.
This capability alone solves all of the problems listed above:
- Implementation details are hidden.
- Selectors traversing the document will not include the <canvas> and the <div>.
- The new elements will not inherit styles from the page.
- The <img> will not lose its styles, since it has not moved anywhere.
- Developers can easily add a new image or replace the current one.
Shadow DOM Encapsulation
If we look at the full picture of what we got, we will see the following:
<x-fade>
  <img src="cool.png">
  #shadow-root
    <div>
      <content select="img"></content>
    </div>
    <canvas></canvas>
</x-fade>
Yep, there are our <canvas> and <div>. Note also the new <content> element: it marks the insertion point where the shadow DOM is composed with the so-called light DOM, the part of the tree that we pass to the element from outside.
At render time these two trees are merged, and the result looks exactly like what the jQuery plugin produced (this composed view exists only for the browser; we never see it):
<x-fade> <div> <img src="cool.png"> </div> <canvas></canvas> </x-fade>
Shadow DOM is so cool, so why do we need some shady DOM?!

Shadow DOM hides part of the DOM tree from the rest of the document. The standard traversal properties (childNodes, children, firstChild and so on) will not include the hidden elements in their results.
Writing a polyfill for this behavior is VERY hard. We have to achieve the same composed rendering of the DOM tree while hiding it from the application code. That means overriding every available method of working with elements so that it returns the tailored, logical view.
We implemented such a polyfill, but at a price:
- A lot of code.
- Overriding the methods slows down every operation on elements.
- Structures like NodeList are not under our control.
- Accessors (for example, window.document and window.document.body) cannot be overridden.
- The polyfill returns proxied objects, which can be confusing.
Because of these drawbacks, many projects simply cannot afford it, and in Safari the performance is terrible.
Shady DOM
A sort of Frankenstein's monster that Google is doing its best to talk up. A pity, but there was no other way out.
Roughly speaking, Shady DOM gives us a shadow-DOM-compatible model of tree scoping. The actual DOM we end up with is exactly the same as with the jQuery plugin:
<x-fade> <div> <img src="cool.png"> </div> <canvas></canvas> </x-fade>
In other words, gentlemen, we are back to all the flaws we supposedly overcame: an exposed implementation, style problems, and the rest.
The one thing Google could salvage is how the tree is represented in code. But for that we MUST use the new API for working with the DOM; only then can we work with the elements as if nothing had happened and see the tree like this:
<x-fade> <img src="cool.png"> </x-fade>
In fact, from inside the element everything looks quite decent:
var arrayOfNodes = Polymer.dom(this).children;
This way we can work with both the local (shadow) DOM and the light DOM.
Shadow DOM has not been cut out of Polymer entirely. Shady DOM is compatible with shadow DOM, which lets us write in a single style; if you want, you can let Polymer decide where it can use shadow DOM natively and where to fall back to Shady DOM.
Conclusions
- Web components need encapsulation, yes...
- Shadow DOM implements that encapsulation, but only Google supports it natively.
- Polyfilling shadow DOM is difficult and slow in the long run.
- Shady DOM gives us a super-fast analogue of the shadow DOM polyfill, though with a decent pile of flaws of its own and a new API to learn.
- Shady DOM broadens the range of applications we can build on this model.
- All these inconveniences prove that every platform must support shadow DOM natively.
In fact, I am very pleased with Polymer itself. As was said at the conference, React components work only in React and Angular components only with Angular, while components written with Polymer work everywhere. They sit at a level between the web platform and the frameworks: you can use them with any framework, or write an application out of components alone.
I have had the experience of crossing Backbone with React components, and it is not as nice as it may seem. But Polymer components plus Backbone is downright delicious.