
ECMAScript 6

The limits of my language mean the limits of my world.
- Ludwig Wittgenstein

For the past few months, I have been writing only ECMAScript 6 code, relying on transpilation [1] to the currently supported versions of JavaScript.

ECMAScript 6, hereinafter ES6 and formerly ES.next, is the latest version of the specification. As of August 2014, no new features are being discussed, but details and edge cases are still being ironed out. The standard is expected to be finalized and published in mid-2015.

Adopting ES6 has simultaneously made me more productive (my code is more concise) and eliminated an entire class of bugs by removing common JavaScript pitfalls.

Moreover, it reinforced my belief in an evolutionary approach to language and software design, as opposed to clean-slate reinvention.
This should sound familiar if you have used CoffeeScript, which focuses on the good parts of JavaScript and hides the bad ones. ES6 absorbed so many of CoffeeScript's innovations that some have even questioned the latter's continued development.



Instead of a thorough analysis of every new feature, I will talk about the most interesting ones. To convince developers to switch, new languages and frameworks should (1) have a compelling compatibility story and (2) offer a large enough carrot.

# Module Syntax


ES6 introduces syntax for defining modules and declaring dependencies. I emphasize the word syntax because ES6 says nothing about the actual implementation of how modules are resolved or loaded.

This further improves interoperability between the different environments in which JavaScript can run.

Consider as an example the simple task of writing a reusable CRC32 function in JavaScript.

Until now, there has been no standard way to solve this problem. The common approach is to declare a function:

```javascript
function crc32() {
  // …
}
```

With the caveat, of course, that it introduces a single fixed global name that other parts of the code must refer to. And from the perspective of code that uses the crc32 function, there is no way to declare the dependency. Once the function has been declared, it simply exists for as long as the program runs.

Node.js took the path of introducing the require function and the module.exports and exports objects. Despite succeeding in creating a thriving module ecosystem, interoperability remained somewhat limited.
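As a sketch of that convention, a module file and its consumer might look like this (the checksum body here is a simplified placeholder for illustration, not a real CRC-32 implementation):

```javascript
// crc32.js — exporting a single function the CommonJS (Node.js) way.
// The body is a toy checksum, NOT a real CRC-32.
function crc32(str) {
  let sum = 0;
  for (let i = 0; i < str.length; i++) {
    sum = (sum + str.charCodeAt(i)) >>> 0; // keep it an unsigned 32-bit value
  }
  return sum;
}

module.exports = crc32;

// consumer.js — the dependency is declared explicitly:
//   const crc32 = require('./crc32');
//   console.log(crc32('hello'));
```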

A typical scenario illustrating these shortcomings is bundling modules for the browser with tools such as browserify or webpack. These tools essentially cheat: they treat require() as syntax rather than a regular function call, deliberately giving up its inherent dynamism.

The following example cannot be statically analyzed, so if you try to bundle this code for the browser, it will break:

```javascript
require(woot() + '_module.js');
```

In other words, the bundler cannot know ahead of time what woot() returns.

ES6 introduces the right set of constraints, covering most existing use cases and drawing inspiration from the informal module systems that already exist, like jQuery's $.

The syntax takes some getting used to, and the most common pattern for declaring a dependency is surprisingly impractical.

The following code:

```javascript
import crc32 from 'crc32';
```

works for

```javascript
export default function crc32() {}
```

but not for

```javascript
export function crc32() {}
```

the latter is considered a named export and requires braces in the import statement:

```javascript
import { crc32 } from 'crc32';
```

In other words, the simplest (and arguably most desirable) way of defining a module requires the extra keyword default. Or, in its absence, braces when importing.

# Destructuring


One of the most common patterns in modern JavaScript code is the use of options objects.

This practice is widely used in new browser APIs, such as WHATWG fetch (a modern replacement for XMLHttpRequest):

```javascript
fetch('/users', {
  method: 'POST',
  headers: {
    Accept: 'application/json',
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({ first: 'Guillermo', last: 'Rauch' })
});
```

The widespread adoption of this pattern effectively keeps the JavaScript ecosystem from falling into the boolean trap.

If the API took positional arguments instead of an options object, calling fetch would become an exercise in memorizing argument order and inserting null in the right places.

```javascript
// How the call might have looked with positional arguments:
fetch('/users', 'POST', null, null, {
  Accept: 'application/json',
  'Content-Type': 'application/json'
}, null, JSON.stringify({ first: 'Guillermo', last: 'Rauch' }));
```

On the implementation side, however, things do not look as pretty. Looking at the function declaration, its signature no longer describes its inputs:

```javascript
function fetch(url, opts) {
  // …
}
```

Usually this is followed by manually setting default values for local variables:

```javascript
opts = opts || {};
var body = opts.body || '';
var headers = opts.headers || {};
var method = opts.method || 'GET';
```

And unfortunately, despite its prevalence, the || idiom actually introduces hard-to-detect bugs. In this case, for example, we fail to account for the possibility that opts.body is 0, so robust code would more likely look like this:

```javascript
var body = opts.body === undefined ? '' : opts.body;
```
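The difference between the two idioms can be demonstrated directly (bodyWithOr and bodyWithCheck are illustrative names, not part of any API):

```javascript
// `||` silently replaces falsy-but-valid values (0, '', false),
// while an explicit undefined check preserves them.
function bodyWithOr(opts) {
  return opts.body || 'default';
}

function bodyWithCheck(opts) {
  return opts.body === undefined ? 'default' : opts.body;
}

bodyWithOr({ body: 0 });    // 'default' — the caller's 0 is lost
bodyWithCheck({ body: 0 }); // 0 — preserved
bodyWithCheck({});          // 'default' — only true absence triggers the default
```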

Thanks to destructuring, we can declare the parameters explicitly, assign default values correctly, and bind them in the local scope, all in one step:

```javascript
function fetch(url, { body = '', method = 'GET', headers = {} }) {
  console.log(method); // no need to reach into an opts object
}
```

In fact, a default value can be applied to the entire options object as well:

```javascript
function fetch(url, { method = 'GET' } = {}) {
  // calling fetch(url) with no second argument defaults it to {},
  // so this prints "GET":
  console.log(method);
}
```

You can also destructure in an assignment statement:

```javascript
var { method, body } = opts;
```
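Defaults and renaming work in assignment position too; a small self-contained sketch (the opts values are made up for illustration):

```javascript
const opts = { method: 'POST', body: '{"ok":true}' };

// Pull out properties, applying a default where one is missing:
const { method, body, headers = {} } = opts;
// method → 'POST', body → '{"ok":true}', headers → {} (default applied)

// Rename while destructuring:
const { method: verb } = opts;
// verb → 'POST'
```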

It reminds me of the expressiveness that the with statement provided, but without the magic or the negative consequences.

# New conventions


Some parts of the language have been completely replaced by better alternatives , which will quickly become the new standard for how you write JavaScript.

I will talk about some of them.

# let / const instead of var


Instead of writing var x = y, you will now most likely write let x = y. let declares variables with block scope:

```javascript
if (foo) {
  let x = 5;
  setTimeout(function () {
    // here x is `5`
  }, 500);
}
// here x is not defined (accessing it throws a ReferenceError)
```

This is especially useful in for or while loops:

```javascript
for (let i = 0; i < 10; i++) {}
// `i` is not accessible here
```
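Block scoping also fixes the classic closures-in-a-loop pitfall: with let, each iteration gets a fresh binding, as this small sketch shows:

```javascript
// Each iteration of the loop gets its own `i`, so the closures
// created inside capture distinct values.
const fns = [];
for (let i = 0; i < 3; i++) {
  fns.push(() => i);
}

const results = fns.map(f => f()); // [0, 1, 2]
// With `var i`, every closure would have seen the final value 3.
```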

Use const when you want the same block-scoping semantics as let, plus protection from reassignment.
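One subtlety worth a quick sketch: const protects the binding, not the value it holds, so an object declared with const can still be mutated.

```javascript
// const prevents reassigning the binding, not mutating the value.
const config = { retries: 3 };

config.retries = 5;          // allowed: the object is still mutable
// config = { retries: 0 };  // TypeError: Assignment to constant variable
```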

# Template strings instead of concatenation


Given the lack of sprintf or similar utilities in the JavaScript standard library, building strings has always been more painful than it should be.

Template strings make embedding expressions in strings trivial, and they support multiple lines as well. Just replace ' with `:

```javascript
let str = `Hello, ${first}.
It is now the year ${new Date().getFullYear()}.`;
```
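A fully self-contained version of the same idea (values hardcoded so the output is deterministic):

```javascript
const first = 'Guillermo';
const year = 2015;

// Expressions are interpolated and the literal spans two lines —
// the newline is part of the resulting string.
const message = `Hello, ${first}.
It is now the year ${year}.`;
```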

# classes instead of prototypes


Defining a class used to be cumbersome and required deep knowledge of the language's internals. Even though understanding those internals is genuinely useful, the entry barrier for beginners was unreasonably high.

class offers syntactic sugar for defining the constructor function, prototype methods, and getters/setters. It also provides prototypal inheritance with built-in syntax (no extra libraries or modules needed):

```javascript
class A extends B {
  constructor() {}
  method() {}
  get prop() {}
  set prop(value) {}
}
```

I was initially surprised to learn that classes are not hoisted (explained here). You should therefore think of them as var A = function () {} as opposed to function A () {}.
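This behavior is easy to verify: touching the class binding before its declaration throws. A minimal sketch:

```javascript
// Classes live in the temporal dead zone until their declaration runs,
// so referencing one early throws a ReferenceError — unlike a
// `function A() {}` declaration, which would be hoisted.
let caught = false;
try {
  new A(); // A is declared below in this scope, but not yet initialized
} catch (e) {
  caught = e instanceof ReferenceError;
}

class A {}
// caught === true; after this point `new A()` works normally
```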

# () => instead of function


Not only is (x, y) => {} shorter to write than function (x, y) {}, but this inside the body will most likely refer to what you expect.

The so-called “fat arrow” functions are lexically bound. Consider a method inside a class that starts two timers:

```javascript
class Person {
  constructor(name) {
    this.name = name;
  }

  timers() {
    setTimeout(function () {
      console.log(this.name);
    }, 100);

    setTimeout(() => {
      console.log(this.name);
    }, 100);
  }
}
```

To the dismay of newcomers, the first timer (using function) logs undefined, while the second one correctly logs the name.
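The same pitfall can be shown synchronously, which makes it easier to test; greetAll is a made-up method name for illustration:

```javascript
class Person {
  constructor(name) {
    this.name = name;
  }

  greetAll(names) {
    // The arrow function inherits `this` from greetAll;
    // a plain `function (n) { ... }` callback would see its own `this`.
    return names.map(n => `${this.name} greets ${n}`);
  }
}

const p = new Person('Guillermo');
p.greetAll(['Ada', 'Grace']);
// ['Guillermo greets Ada', 'Guillermo greets Grace']
```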

# Excellent async I/O support


Asynchronous code execution has been with us for almost the entire history of the language: setTimeout was introduced around the time JavaScript 1.0 was released.

And yet the language itself has arguably never really supported asynchrony. The return value of a function call scheduled to “run in the future” is usually undefined, or, in the case of setTimeout, a Number.

The introduction of Promise fills a very large gap in interoperability and composition.

For one thing, APIs become more predictable. As an exercise, consider the new fetch API: what does it return behind the signature we just described? You guessed it, a Promise.

If you have used Node.js in the past, you know there is an informal convention that callbacks follow the signature:

```javascript
function (err, result) {}
```

Also informal is the expectation that callbacks are invoked exactly once, and that null (not undefined or false) is passed when there is no error. Except that this is not always the case.
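Promises compose well with this convention: a small hand-rolled promisify helper (an illustrative sketch; a helper along these lines later landed in Node.js core as util.promisify) can wrap any (err, result)-style API:

```javascript
// Wrap a Node-style callback function in a function returning a Promise.
function promisify(fn) {
  return (...args) =>
    new Promise((resolve, reject) => {
      fn(...args, (err, result) => (err ? reject(err) : resolve(result)));
    });
}

// A toy callback-style function, for illustration:
function double(x, cb) {
  cb(null, x * 2);
}

const doubleAsync = promisify(double);
doubleAsync(21).then(value => console.log(value)); // logs 42
```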

# Forward to the future


ES6 is gaining considerable momentum in the ecosystem. Chrome and io.js have already shipped some ES6 functionality, and much has been written about it.

It is worth noting, though, that this popularity is largely due to the availability of transpilers rather than native support. Excellent tools have emerged for transforming and emulating ES6, and browsers have added support for debugging the generated code (via source maps).

The design of the language and its planned functionality is evolving ahead of the implementations. As mentioned above, Promise is especially interesting as a building block that offers a solution to callback hell once and for all.

The proposed ES7 standard takes this further by introducing the ability to await a Promise inside async functions:

```javascript
async function uploadAvatar() {
  let user = await getUser();
  user.avatar = await getAvatar();
  return await user.save();
}
```

Although this part of the specification is still being discussed, the same tooling that compiles ES6 to ES5 has already implemented it.

There is still a lot of work ahead to make adopting the new syntax and APIs even smoother for those who are just getting started.

But one thing is for sure: we must accept this future.

Footnotes:
1. ^ Throughout this article I use the word “transpilation” for compiling JavaScript source code into other JavaScript source code, although the exact meaning of the term is technically debatable.

Source: https://habr.com/ru/post/252323/

