At the time this article was written, JavaScript still had no official module system, and everyone emulated modules as best they could.
Modules, or similar structures, are an integral part of any grown-up programming language; there is simply no way around them. Modules let you limit scope, reuse parts of an application, keep the application structured, separate the signal from the noise, and generally make the code easier to read.
JavaScript lives in its own peculiar environment: there are no official modules in the language, all the files sit on a remote server, and the application runs in a single thread. You constantly have to solve strange loading problems and cleverly pack modules into one file to speed up load time. Sometimes you also have to fight double standards and adapt modules of a different format.
The thing is, nobody used to expect that huge projects would be written in JavaScript rather than mere "DOM patching", so nobody thought about modules, or about the future. And then the future suddenly arrived! Everything else seems to be in place, but modules in JavaScript are, to put it mildly, late. So developers have to improvise and invent module-system emulators.
I think many of you have read Addy Osmani's excellent article Writing Modular JavaScript With AMD, CommonJS & ES Harmony (one of the chapters of his book Learning JavaScript Design Patterns), which talks about the "modern" JavaScript modules, or the rather old article JavaScript Module Pattern: In-Depth (2010) about the "old" modules.
I am not going to translate those articles or make a hodgepodge of them. In this article I want to talk about my own modular path: how I made my way from the "old" modules to the "new" ones, what I use now, and why.
This article consists of 3 parts: my module path, the theory, and common types of modules.
tl;dr I have come a long way from no modules at all, through AMD and browserify, to LMD, which satisfies all my needs and makes life easier. For the future, I am betting on ECMAScript 6 Modules.
Module path
Stage 1: Without Modules
Back when there was little JavaScript code, I managed fine without modules; I simply did not need them. Introducing a module system would have turned my 50 lines of code into 150, and I could quickly patch the DOM without modules. I got by with namespaces and did not use a build step, and minifiers were not mature yet anyway.
Module
MyNs.MyModule = function () {};
MyNs.MyModule.prototype = { /* ... */ };
Assembly
<script src="myNs.js"/> <script src="myNs/myModule.js"/>
My application took another half-step forward when I started concatenating my files with cat:
$ cat js/*.js > build.js
Stage 2: Preprocessing
Progress does not stand still: my 50 lines of code gradually grew into 1500, and I started using third-party libraries and their plugins. The application I was writing could now be called a Rich Internet Application. Splitting it into modules and partially isolating them solved my problems of the time. For the build I started using preprocessors: there were many modules, they had dependencies, and I had no desire to resolve dependencies by hand, so preprocessing was indispensable back then. I still used namespaces, although they meant a lot of fiddling:
if (typeof MyNamespace === 'undefined') { var MyNamespace = {}; }
and extra typing:
new MyNamespace.MyConstructor(MyNamespace.MY_CONST);
and the minifiers of the day compressed such code poorly:
new a.MyConstructor(a.MY_CONST);
My modules moved a bit further forward when I applied total isolation and threw out the namespaces, replacing them with scope. I started writing modules like this:
include('deps/dep1.js');

var MyModule = (function () {
    var MyModule = function () {};
    MyModule.prototype = { /* ... */ };
    return MyModule;
}());
And this build:

(function () {
    include('myModule.js');
}());
And the same kind of preprocessing:
$ includify builds/build.js index.js
Each module has a local scope, and the whole bundle is wrapped in one more IIFE. This protects the modules from each other and the whole application from the global scope.
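To make the scheme concrete, here is a sketch (with made-up dep1/MyModule contents) of what the preprocessed build.js could look like once the include() calls are expanded: each module keeps its own scope, and the outer IIFE shields the whole bundle from the global object.

```javascript
// What the assembled build.js looks like after preprocessing:
// every module keeps its own scope inside its own IIFE,
// and one outer IIFE protects the bundle from the global scope.
var result = (function () {
    // contents of deps/dep1.js, inlined by the preprocessor
    var dep1 = (function () {
        return 1;
    }());

    // contents of myModule.js, inlined by the preprocessor
    var MyModule = (function () {
        var MyModule = function () {};
        MyModule.prototype = {
            sum: function () { return dep1 + 1; }
        };
        return MyModule;
    }());

    return new MyModule().sum();
}());

console.log(result); // 2
```

Nothing leaks past the outer IIFE; only what a module explicitly returns is visible to its neighbors.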
Stage 3: AMD

One day, while reading Reddit, I came across an article on AMD and RequireJS.
A small digression: the idea of AMD was actually borrowed from YUI Modules and polished. Using and declaring modules no longer required writing out extra characters, and configuration became simpler too.
It was
YUI().use('dep1', function (Y) { Y.dep1.sayHello(); });
It became
require(['dep1'], function (dep1) { dep1.sayHello(); });
Having got acquainted with AMD, I realized that until then I had been doing everything wrong. Just two functions, require() and define(), and all my problems were solved! Modules began loading their own dependencies, and sane exports and imports appeared. A module was now divided into 3 easily understood parts: imports, exports, and the module body. It became easy to see which resources a module needs and which ones it exports. The code became structured and cleaner!
Module
define('myModule', ['dep1', 'dep2'], function (dep1, dep2) {
    var MyModule = function () {};
    MyModule.prototype = { /* ... */ };
    return MyModule;
});
Assembly
$ node r.js index.js bundle.js
But not everything is so simple ...
Stage 4: Disappointment at AMD
What I showed above is an ideal module and an ideal build. Real projects are different: it happens that a module has a great many dependencies, and then it turns into something like this:
require([
    'deps/dep1', 'deps/dep2', 'deps/dep3', 'deps/dep4',
    'deps/dep5', 'deps/dep6', 'deps/dep7'
], function (dep1, dep2, dep3, dep4, dep5, dep6, dep7) {
    return function () {
        return dep1 + dep2;
    };
});
Such a module can be used, but it involves a lot of fuss. To get around this problem, you can rewrite the module in the Simplified CommonJS style. You can even drop the define() wrapper entirely, write honest CommonJS modules, and then build them with r.js:

define(function (require, module, exports) {
    var dep1 = require('dep1'),
        dep2 = require('dep2'),
        dep3 = require('dep3'),
        dep4 = require('dep4'),
        dep5 = require('dep5'),
        dep6 = require('dep6'),
        dep7 = require('dep7');

    return function () {
        return dep1 + dep2;
    };
});
The Simplified CommonJS format is not "native" to RequireJS; its developer simply had to support it. If you start writing such modules, RequireJS will look for their dependencies using regular expressions.

And some it cannot find:
require("myModule//");
require("my module");
require("my" + "Module");
var require = r;
r("myModule");
This code is valid, yet not a single module will be found here. Of course, the example is abstract and some names are contrived, but I have often run into dynamic construction of module names, for example with templates or with factories of some kind.
RequireJS, of course, has a solution for this: register each such module in the config file:
({
    "paths": {
        "myModule": "modules/myModule.js"
    }
})
It also happens that there are many such modules (templates, say), and you do not want to register a new module in the config every time, so the code starts to accumulate all kinds of magic like dynamic config generation. And refusing "dynamic modules" altogether would be silly given the available options.
I began writing honest CommonJS modules and building them with r.js even in development. Dropping AMD also made it possible to use these modules under Node.js without any magic. I came to see that the tool suited me in principle, but only with crutches and extra polishing.
The dynamic module loading that RequireJS offered was something I did not need. I wanted the code in development and in production to be as similar as possible, so asynchronous module loading in development did not suit me, which is why I bundled my modules into 1 file.
Part of the project was loaded at startup (1 request); the remaining parts were loaded on demand, not as a heap of small requests but as one big one (several modules built into 1 file). This saved time and traffic and reduced the risk of network errors.
It also happens that you need several builds: for example, an application with the Russian locale for a testing environment, or an application optimized for IE with English for a corporate network, or an iPad-optimized application for Ukraine with ads disabled. Anarchy and copy-paste reigned ...
In the RequireJS philosophy I also disliked that require() is a universal factory for producing any kind of resource. require() is an abstraction over plugins as well as over already loaded modules. If a plugin was not connected for some reason, require() first loads it, somewhat implicitly, and then uses it to load the resource.

require(['async!i18n/data', 'pewpew.js', 'text!templates/index.html'], function (data, pewpew, template) {
    /* ... */
});

In projects where resources are uniform, or where there are few of them, this may be fine.
Stage 5: Search for a module
I realized that I could no longer live like this ... but now I knew what I needed:
1 The module must be CommonJS
It is frequently the case that you need to run the same module both under Node.js and in the browser DOM. Most often these are modules unrelated to the environment (file system / DOM) or parts abstracted from it: templates (the most common case), time-handling functions, formatting functions, localization, validators ...
When you write AMD and need to reuse something, you have 2 options: rewrite the AMD module in CJS, or use node-require. People usually choose the second, since nothing has to be changed. BUT then a modular porridge appears: a strange abstraction on top of the module loading system Node.js already has. I really disliked AMD modules in Node.js.
Besides compatibility with Node.js, CJS is free of the define() wrapper and the indentation it forces on the function body. Its require and exports are clearer and closer to ES6 Modules than the define() way. Compare for yourself:
ES6 Modules
import "dep1" as dep1;
import "dep2" as dep2;

export var name = function () {
    return dep1 + dep2;
};
CommonJS / Modules
var dep1 = require("dep1"),
    dep2 = require("dep2");

exports.name = function () {
    return dep1 + dep2;
};
AMD
require(['dep1', 'dep2'], function (dep1, dep2) {
    return {
        name: function () {
            return dep1 + dep2;
        }
    };
});
And if it turns out that I have to go back to AMD, it will not hurt at all: one line in the config file makes r.js wrap my CJS modules.
2 Module Builder
Everything gets built these days: even if you do not write CoffeeScript, you still somehow check, assemble, and compress your scripts.
To adapt a CJS module you need a wrapper, and the builder can generate it for me. The builder could also check things for me: do all the modules exist, did I misspell a module name, did I declare all the plugins?
As the result of a build I want to get 1 file that contains both my modules and the scripts they need to run.
Splitting the application into "my scripts" and "not mine" "for the benefit of caching" (loading the loader code and my code separately) made no sense to me: I mostly write single-page web applications, and a cache these days can go stale within minutes. Building all-in-one also removes compatibility issues with the "module loader" when upgrading.
3 Flexible configuration system: dependencies, inheritance, mixins
As I already wrote, my applications have many builds for different devices, browsers, environments and locales. I really wanted an unobtrusive configuration system without unnecessary copy-paste and typing.
For example, there is a prod config; the dev config inherits from it and replaces some modules. There are also ru and en configs, which we can mix in: prod+en, dev+ru. Now, instead of a "common" part plus copy-paste (prod-ru, prod-en, dev-ru, dev-en), we have only 4 "dry" config files: prod, dev, ru, en.
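The mixing itself is nothing magical. Here is a sketch of the idea (an illustrative merge function, not LMD's actual algorithm): later configs override individual modules of earlier ones, so prod+ru is obtained without duplicating the prod config.

```javascript
// Mix several configs left to right: scalar options are replaced,
// module maps are merged entry by entry (illustrative sketch only).
function mix() {
    var result = { modules: {} };
    Array.prototype.slice.call(arguments).forEach(function (config) {
        Object.keys(config).forEach(function (key) {
            if (key === 'modules') {
                // merge module maps instead of replacing them wholesale
                Object.keys(config.modules).forEach(function (name) {
                    result.modules[name] = config.modules[name];
                });
            } else {
                result[key] = config[key];
            }
        });
    });
    return result;
}

var prod = { optimize: true, modules: { main: 'index.js', locale: 'locale.en.json' } };
var ru = { modules: { locale: 'locale.ru.json' } };

var prodRu = mix(prod, ru);
console.log(prodRu.modules.locale); // locale.ru.json
console.log(prodRu.optimize);       // true
```

Four "dry" configs plus a mixing step cover all combinations that would otherwise require four copy-pasted files.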
4 CLI
A CLI is the interface to a robot that does half the work for you. If it is overloaded or demands very long flags, it becomes a strain, forces a Makefile into existence, and wastes the very time the robot was supposed to save.
Any frequently repeated action should be as simple as possible: sensible defaults, the same argument names across subcommands. In short, the developer should have to remember and type the bare minimum.
Compare
$ tool make -f path/to/build_name.js -o path/to/build.js
and
$ tool make build_name
And when you type out that long command in a console without autocomplete for the umpteenth time, you begin to hate the tool. True, option 1 is perhaps more explicit than option 2, but it looks very much like a tool written by a graphomaniac.
Stage 6: browserify
Browserify is a tool that allows you to run any Node.js modules in a browser.
Just
browserify main.js > bundle.js
and it runs.
Having worked with browserify for a while, I understood its true use case: adapting the Node.js environment to run in a browser. Browserify is perfect for that purpose, but not for the realities in which web applications are created: unadapted third-party modules, large parts of the application loaded dynamically. I had to do a lot of conjuring in the console to make everything work.
Stage 7: LMD

I really did not want to, but I had to start working on LMD, a tool that would make my life easier; I could no longer bend existing tools to my goals.
The result was a tool that handles building the script part of my projects.
Here are a few features that formed the basis of the LMD:
1 Build from config
Since having a config is inevitable, why not build on it?! The behavior of LMD is fully defined by the config: it contains the modules, the plugins, and the output path of the resulting file. Configs can inherit from and be mixed with other configs.
A config looks like this:
{
    "name": "My Config",
    "root": "../js",
    "output": "../build.lmd.js",
    "modules": {
        "main": "index.js"
    },
    "optimize": true,
    "ie": false,
    "promise": true
}
If you have hundreds of modules, you do not need to register each one in the config! It is enough to write a "rewrite rule" for modules of the same kind:
{
    "modules": {
        "main": "index.js",
        "<%= file %>Template": "templates/*.html"
    }
}
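Conceptually, such a rewrite rule is just a name template applied to every file matched by the mask. Here is a hypothetical sketch of the expansion (LMD does this internally at build time; the function below is illustrative, not its API):

```javascript
// Expand a rule like "<%= file %>Template": "templates/*.html"
// into a concrete module-name -> file map.
function expandRule(namePattern, matchedFiles) {
    var map = {};
    matchedFiles.forEach(function (path) {
        // "templates/index.html" -> "index"
        var file = path.replace(/^.*\//, '').replace(/\.[^.]+$/, '');
        map[namePattern.replace('<%= file %>', file)] = path;
    });
    return map;
}

var templates = expandRule('<%= file %>Template', [
    'templates/index.html',
    'templates/user.html'
]);
console.log(templates.indexTemplate); // templates/index.html
console.log(templates.userTemplate);  // templates/user.html
```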
And in extreme cases, you can write a config in the form of a CJS module and generate everything on the fly.
2 Abstract FS: No binding to the file system
Binding to the file system is natural in one sense, and an HTTP server can map the file system one-to-one. But it is worth remembering that the browser has no file system: the HTTP server delivers resources, and it is the code that decides that the text at a given URL is a module. Resources can be moved around or put on a CDN under arbitrary names.
Introducing an abstract file system makes it possible to build abstractions over modules. For example, you have a locale module behind which either locale.ru.json or locale.en.json may hide; since these modules have the same interface, we can transparently swap one file for the other.
You are free to name your modules however you like and to require them without thinking about relative paths. If you have many modules and forgot which file hides behind a given module, just use lmd info:
$ lmd info build_name | grep module_name

info: module_name ✘ plain ✘ ✘ ✘
info: module_name <- /Users/azproduction/project/lib/module_name.js
3 An unoverloaded require() plus plugins
I did not like that require is a factory, so its behavior was slightly redefined. Plain require() now loads modules from the abstract file system and nothing else, while require.*() does the same job using plugin *. For example, require.js() loads an arbitrary JavaScript file, by analogy with $.loadScript.
Plugins must be explicitly registered in the config; however, LMD will help you remember to enable a plugin if you write "correct code".
For example, in this code LMD will remind you about 3 plugins: css, parallel and promise:
require.css(['/pewpew.css', '/ololo.css']).then(function () {
    /* ... */
});
But in this one, only the js plugin:
var js = require.js;
js('http://site.com/file.js').then(function () {
    /* ... */
});
You can enable and disable plugins using config inheritance and mixes.
4 Adaptation of modules
It happens that a project contains files that are hard to call modules, yet they need to be used like any other module. LMD can easily adapt any file, turning it into a CJS module at build time. For text files (templates) and JSON files you need neither plugins (compare the text plugin for RequireJS) nor adapters. Unlike RequireJS, LMD turns such files into honest modules rather than adapting them with a shim.
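Such build-time adaptation is simple at heart: the file's text becomes the exported value of a generated CJS module. A minimal sketch of the idea (assumed, simplified behavior, not LMD's actual code):

```javascript
// Wrap a plain text file (e.g. a template) into an honest CJS module:
// the file's contents become a string literal assigned to module.exports.
function adaptTextFile(source) {
    return 'module.exports = ' + JSON.stringify(source) + ';';
}

var wrapped = adaptTextFile('<h1><%= title %></h1>');
console.log(wrapped); // module.exports = "<h1><%= title %></h1>";
```

After that, require('indexTemplate') hands the consumer the template text like any other module, with no runtime plugin involved.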
Today LMD has a heap of plugins with usage examples, plus a built-in analytics system for the build. And, of course, LMD makes my life easier. A further story about LMD is beyond the scope of this article; next time I will write an article with an example of an LMD project.
Future?

Yes, of course, it is ES6 Modules. Their format resembles module formats from many other languages and matches the expectations of JavaScript newcomers. They have all the necessary attributes of a module: import, export, a module wrapper (for when several files need to be concatenated). They translate well into CJS and AMD. However, in the form they currently have in the draft, they are hard to use in real projects.
Imports are static, so you need a module bundler to speed up application startup. Importing an external module is blocking:
<script>
import {get, Deferred} from "http://yandex.st/jquery/3.0/jquery.min.js";
get('/').then(console.log.bind(console));
</script>
This is almost the same as:
<script src="http://yandex.st/jquery/3.0/jquery.min.js"/>
<script>
var get = $.get,
    Deferred = $.Deferred;
get('/').then(console.log.bind(console));
</script>
The blocking, in turn, can be removed with <script async/>.
Dynamic module loading exists, but it is not perfect yet:
Loader.load('http://json.org/modules/json2.js', function (JSON) {
    alert(JSON.stringify([0, {a: true}]));
});
I hope the module loader will be able to load a bundle of several modules. Then it will be enough for me.
The standard is still being actively discussed, and what I showed you today may look different tomorrow (though that is unlikely). Today's module and import/export syntax resembles what you are used to seeing in other languages. That is good: a great many developers use JavaScript, and it pains them to see wild hacks like AMD. One of the current directions of ECMAScript development is to turn the language into a kind of assembler that other languages compile to, and modules are an integral part of that direction.
Conclusions
Today we can say that JavaScript has no established module system; only module-system emulators are available, but you can already use ES6 Modules syntax and compile your modules to CJS and AMD. JavaScript lives in its own peculiar environment with many constraints (network slowness, traffic, lags) that rule out many of the usual kinds of import. The problems of bundling and asynchronous loading are solved one way or another in the popular module emulators, but how the ES6 authors will solve them is an open question.
Module theory
If you have endured my modular path this far, I think you will be interested in my small module classification.
I classified the existing JavaScript "modules" and their infrastructure by a number of features. Let us look at the classification of modules first, and then at the individual module systems.
- Dependency resolution
- Manual control
- Dependencies are registered in the config
- Dependencies are written in the module itself.
- Dependencies are written in the module and in the config
- Access to dependencies
- Arbitrary
- Dynamic
- Declarative
- Export from module
- Chaotic export
- Strong name unmanaged export
- Self-export with a strong name
- Managed export with arbitrary name
- Honest import / export
- Module assembly
- No build
- File concatenation by mask
- Preprocessing
- Static dependency analysis
- Build from config
- Module initialization and interpretation
- Initialized and interpreted at startup.
- Initialized at startup, interpreted on demand.
- Initialized and interpreted on demand.
- Loading external dependencies
- Unmanaged module loading
- Managed module loading
- Module isolation
- Modules are not isolated
- Isolated modules
- Totally isolated modules
Dependency resolution
How the build tool or the developer determines which dependencies must be connected/initialized for a given module to work normally. Dependencies, in turn, can have dependencies of their own.
Dependency resolution. Manual control
Dependency management rests on the developer's shoulders: the developer works out analytically which dependencies need to be connected.
<script src="deps/dep1.js"/> <script src="deps/dep2.js"/> <script src="moduleName.js"/>
And accordingly, in main.js:

var moduleName = function () {
    return dep1 + dep2;
};

Pros:
- No third-party libraries are needed
- When there are few modules and they are all yours, this is fine

Cons:
- With many modules, such code cannot be maintained
- Many files mean many requests to the server

Suitable for quick-and-dirty hacking.
Dependency resolution. Dependencies are registered in the config
Dependencies are listed in an external config and can be inherited. Using this config, a build tool loads/connects the module's dependencies. The config can be written either for a specific module or for the whole project.
This kind of config is used in LMD:
{
    "modules": {
        "main": "moduleName.js",
        "<%= file %>": "deps/*.js"
    }
}
And accordingly, in main.js:

var dep1 = require('dep1'),
    dep2 = require('dep2');

module.exports = function () {
    return dep1 + dep2;
};

Pros:
- Modules are not tied to the file system (any file can be given any name)
- You can change a module's contents without changing its name

Cons:
- You have to write such a config
- You need an additional tool/library

Dependency resolution. Dependencies are written in the module itself
The file itself declares its dependencies: the paths to the files and the names they will be available under at run time. The module effectively declares whatever resources it needs, and the loader provides them; until the dependencies (and their dependencies) are loaded, the module does not start. This method is used by AMD (RequireJS):

require(['deps/dep1', 'deps/dep2'], function (dep1, dep2) {
    return function () {
        return dep1 + dep2;
    };
});
If a module has many dependencies, this syntax usually degrades to the CommonJS-style define, or to assorted perversions.

Perversions:

require([
    'deps/dep1', 'deps/dep2', 'deps/dep3', 'deps/dep4',
    'deps/dep5', 'deps/dep6', 'deps/dep7'
], function (dep1, dep2, dep3, dep4, dep5, dep6, dep7) {
    return function () {
        return dep1 + dep2;
    };
});
Degradation to CommonJS-style define:

define(function (require, module, exports) {
    var dep1 = require('dep1'),
        dep2 = require('dep2'),
        dep3 = require('dep3'),
        dep4 = require('dep4'),
        dep5 = require('dep5'),
        dep6 = require('dep6'),
        dep7 = require('dep7');

    return function () {
        return dep1 + dep2;
    };
});
With this kind of degradation, RequireJS looks for dependencies using regular expressions, which is about 95% reliable. An honest approach (an AST or other clever processing) consumes too many resources (code size and processing time) and still does not cover every need. There are also cases when you have to write a config anyway: for example, to adapt some old module that cannot call define, or when an "honest module" is initialized dynamically:

require('templates/' + type)

In that case the regular expression cannot find it. Dynamic initialization is rare and is mainly used for dynamic template loading, but it happens.
Pros:
- Almost all dependencies are described in the module file itself
- Modules are loaded asynchronously
- No need to write a config

Cons:
- Sometimes you have to write a config anyway
- You need an additional tool/library

Dependency resolution. Dependencies are written in the module and in the config
Dependencies are declared both in the file itself and in a special config. The config is used by a package manager to resolve dependencies, for example npm and package.json:

{
    "dependencies": {
        "express": "3.x",
        "colors": "*"
    }
}
In main.js, accordingly, the developer only lists the dependencies and their versions; the package manager loads the modules and their dependencies. There is basically no alternative here: the manager knows nothing about the module itself, and package.json is its only interaction interface. In turn, each module can pull in its own parts directly from the file system:

require('pewpew.js')
If you use this approach in the browser, the pros and cons are as follows.

Pros:
- All dependencies are described in the file
- Version control of external dependencies is possible
- Such a module can be used both on the server and on the client

Cons:
- You need an additional tool/library to build, for example browserify

Access to dependencies
Determines how a module uses its dependencies internally, how it gets access to the modules it requires.

Access to dependencies. Arbitrary

All modules are exposed in the global scope or in a namespace. Any module can, without any restrictions, access any part of the application anywhere and in any way.

var dep1 = 1;
var dep2 = 2;

alert(dep1 + dep2);
Pros:
- If the modules are few and small, this is fine

Cons:
- With many modules, such code cannot be maintained
- A module's dependencies cannot be determined at a glance (you have to hunt for global variable or namespace names)

Access to dependencies. Dynamic
Access to a module can only be obtained through the "loader", require(), or by declaring the module's dependencies through define(). This method is used in most popular libraries: the require function is passed into the module's closure, and through it the module reaches other modules. The function may also be available globally.

var dep1 = require('./deps/dep1'),
    dep2 = require('./deps/dep2');

alert(dep1 + dep2);
And accordingly, the define() way:

require(['./deps/dep1', './deps/dep2'], function (dep1, dep2) {
    alert(dep1 + dep2);
});
Pros:
- Dependencies are easy to understand and find
- Access to dependencies is mediated: you can lazily initialize a module, compute dependencies at run time, and so on
- Almost the whole dependency graph can be determined statically

Cons:
- The code is a little verbose, but that is a fair price for maintainability
- An additional library is needed

Access to dependencies. Declarative
Modules are declared when the code is written and are not loaded dynamically. A static analyzer can unambiguously determine the set of modules the application needs. Almost all import constructs work this way:

import * from "dep1";
import * from "dep2";
Static AMD define() can also be attributed to this method of accessing dependencies:

define('module', ['./deps/dep1', './deps/dep2'], function (dep1, dep2) {
    /* ... */
});
Static imports allow bundlers to collect dependencies, and allow ES6 Modules translators to convert the code into ES3-compatible code.
Pros:
- Static analysis is possible (full or partial)
- ES6 Modules can be translated

Cons:
- In its pure form it is rarely applicable

Export from module
Most often, modules provide resources that other modules can use: data, utilities (date and number formatting, i18n, etc.). The export method determines how a module says "I provide such-and-such resources".

Export. Chaotic export

The module exports anything, at any time, anywhere.

var a = 10,
    b = '';

for (var i = 0; i < a; i++) {
    b += i;
}

var dep1 = b;
Cons:
- The global scope gets polluted
- Hell and nightmare; such code cannot be maintained in principle

Export. Strong-name unmanaged export
If we slightly modify the previous method by adding an IIFE, we get this one. The module knows in advance where it will live and what it will be called.

var dep1 = (function () {
    var a = 10,
        b = '';
    for (var i = 0; i < a; i++) {
        b += i;
    }
    return b;
})();
Or a slightly different variant:

(function (exports) {
    var a = 10,
        b = '';
    for (var i = 0; i < a; i++) {
        b += i;
    }
    exports.dep1 = b;
})(exports);
Or a named AMD define:

define('dep1', [], function () {
    var a = 10,
        b = '';
    for (var i = 0; i < a; i++) {
        b += i;
    }
    return b;
});
Pros:
- It is simple
- No special tools are needed to build and use such modules (except for AMD)
- Only what is necessary is exported

Cons:
- The module knows where it is exported to and what its name will be

Export. Self-export with a strong name
This method is based on a special "module registration" function, ready(), which the module must call when it is ready. It takes 2 arguments: the module name and the resources it provides.

(function () {
    var a = 10,
        b = '';
    for (var i = 0; i < a; i++) {
        b += i;
    }
    ready('dep1', b);
})();
To load such a module's dependencies there is a function load(), similar to require():

load('dep1', 'dep2', function (dep1, dep2) {
    ready('dep3', function () {
        return dep1 + dep2;
    });
});

load('dep3', do.stuff);
Pros:
- The module is exported asynchronously and can postpone its export
- The module does not know where it will live

Cons:
- The module exports itself (the module subordinates the module that uses it)
- The module knows its own name and can change it dynamically
- The module can register several modules
- A special library is needed

Export. Managed export with an arbitrary name
The module knows neither its name nor where it will live; the consumer of the module decides what it will be called in the consumer's context. This is the CommonJS module:

var a = 10,
    b = '';

for (var i = 0; i < a; i++) {
    b += i;
}

module.exports = b;
or anonymous AMD:

define([], function () {
    var a = 10,
        b = '';
    for (var i = 0; i < a; i++) {
        b += i;
    }
    return b;
});
We can use any name when importing the module:

var dep1 = require('deps/dep1');
Pros:
- The module knows neither where it lives nor what it will be called when used
- To rename a module, you only need to rename the file

Cons:
- You need a library to build and use such modules

Export. Honest import/export
Every second programming language declares modules this way. The ECMAScript 6 Modules specification appeared long enough ago that sooner or later this syntax will arrive in JavaScript. We declare a module:

module "deps" {
    var a = 10,
        b = '';
    for (var i = 0; i < a; i++) {
        b += i;
    }
    export var dep1 = b;
    export var dep2 = b + 1;
}
You can also declare a module without binding it to a name: module {}.
You can use the default names and write less:

import * from "deps";
console.log(dep1);
You can avoid name conflicts using a kind of "namespace":

import "crypto" as ns;
console.log(ns.dep1);
You can import part of a module:

import {dep1} from "deps";
console.log(dep1);
Pros:
- Imports familiar from many other languages: recognizable and clear
- This is ECMAScript 6

Cons:
- You need to translate ES6 modules into ES3-compatible code, for example using modules from TypeScript

Module assembly
Today almost everything gets built. Even if you use neither CoffeeScript nor AMD, you still assemble your project one way or another: concatenate files, compress them.

Assembly of modules. No build

Everything in the HTML:

<script src="deps/dep1.js"/>
<script src="deps/dep2.js"/>
<script src="moduleName.js"/>
Pros:
- It is simple

Cons:
- As the number of modules grows, the application stops being maintainable and starts to slow down because of the growing number of requests
- HTML markup gets mixed with module declarations
- A new build means a new .html

Assembly of modules. File concatenation by mask
We build:

$ cat **/*.js > build.js

We use:

<script src="build.js"/>
Pros:
- It is quite simple
- Only 1 file is loaded

Cons:
- Each kind of build needs its own new script
- Files may be concatenated in arbitrary order on different OSes and file systems

Assembly of modules. Preprocessing
This approach searches the files for special "tags" like `include('path/name.js')`, `// include path/name.js` and similar:

```js
include('deps/dep1.js');
include('deps/dep2.js');

var moduleName = function () {
    return dep1 + dep2;
};
```

A special utility expands all of this into the following:

```js
var dep1 = 1;
var dep2 = 2;

var moduleName = function () {
    return dep1 + dep2;
};
```
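Such an include preprocessor fits in a few lines. This toy version (the names are made up) expands markers from an in-memory map rather than the file system, and expands includes recursively:

```javascript
// Naive include() preprocessor: replaces include('name'); markers
// with the referenced source string. A sketch only - real tools read
// from disk and guard against include cycles.
function preprocess(source, files) {
    return source.replace(/include\('([^']+)'\);?/g, function (_, name) {
        // expand includes inside included files as well
        return preprocess(files[name], files);
    });
}

var files = {
    'deps/dep1.js': 'var dep1 = 1;',
    'deps/dep2.js': 'var dep2 = 2;'
};

var built = preprocess(
    "include('deps/dep1.js'); include('deps/dep2.js'); var moduleName = function () { return dep1 + dep2; };",
    files
);
// built === "var dep1 = 1; var dep2 = 2; var moduleName = function () { return dep1 + dep2; };"
```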

It is simple: one pass of a single utility assembles the whole file.

The `include` marker is plain text inside the module, so the dependency is pasted straight into the including file's scope: variables and directives such as `"use strict"` leak across the boundary. For example:

```js
(function () {
    "use strict";
    var i = 3;
    include('dep1'); // dep1 contains: var i = 4, dep = 01234;
}());
```

After inclusion, the counter `i` is silently overwritten, and the octal literal `01234` throws a SyntaxError in strict mode ;-)

Module assembly. Static analysis
Static analysis of the module's contents, searching for dependencies. This method is used by r.js (the RequireJS module collector) and browserify (a CommonJS module adapter that brings Node.js infrastructure to the browser). They run an AST parser, look for define / require calls and thus find the dependencies and, unlike include, place those dependencies outside the module. For example, a module like this:

```js
require(['dep1', 'dep2'], function (dep1, dep2) {
    return function () {
        return dep1 + dep2;
    };
});
```

after being run through r.js becomes:

```js
define('dep1', [], function () {
    return 1;
});

define('dep2', [], function () {
    return 2;
});

require(['dep1', 'dep2'], function (dep1, dep2) {
    return function () {
        return dep1 + dep2;
    };
});
```
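The idea of the scan can be illustrated with a toy dependency finder. Real collectors walk a full AST; a regex like this only handles literal dependency arrays:

```javascript
// Toy dependency scan: find require([...]) calls and collect the
// module names listed in the literal array.
function findDependencies(source) {
    var deps = [];
    var re = /require\(\s*\[([^\]]*)\]/g; // matches require([...]
    var match;
    while ((match = re.exec(source)) !== null) {
        match[1].split(',').forEach(function (dep) {
            deps.push(dep.replace(/['"\s]/g, '')); // strip quotes/spaces
        });
    }
    return deps;
}

var deps = findDependencies(
    "require(['dep1', 'dep2'], function (dep1, dep2) {});"
);
// deps is ['dep1', 'dep2']
```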
browserify does much the same for CommonJS modules.

Dependencies are declared right in the module, and a single utility assembles the project in one pass.

A build tool is required, and modules that are attached dynamically (under computed names) cannot be found by the static analyzer.

Module assembly. Build from config

A completely different approach is to describe all of the project's modules in a config file. This is the way of LMD:
```json
{
    "root": "../js",
    "modules": {
        "main": "main.js",
        "dep1": "deps/dep1.js",
        "dep2": "deps/dep2.js"
    }
}
```
The option is interesting, of course, but why write the same thing twice, in the module and in the config file?!

This is easily explained. LMD knows nothing about the file system; the config is in effect an abstract file system. It lets you not think about relative paths: when moving or renaming a module you do not have to run around the whole project changing paths. With an abstract FS you also get cheap Dependency Injection for localization, per-environment configs, and other optimizations. It also happens that modules are attached dynamically and a static analyzer physically cannot find them, so the only way is to make an entry for the module in the config.

Obviously, registering every module in the config by hand would be a step backwards, so LMD can attach entire directories with subdirectories using globbing and a kind of rewrite rule. This config is identical to the previous one:

```json
{
    "root": "../js",
    "modules": {
        "<%= file %>": "**/*.js"
    }
}
```
You specify which files are needed, then write a template that says how they should be named as LMD modules. To compute the names, LMD uses the template engine from lodash, so you can write cleverer constructs:

```json
{
    "root": "../js",
    "modules": {
        "<%= file %><%= dir[0][0].toUpperCase() %><%= dir[0].slice(1, -1) %>": "{controllers,models,views}/*.js"
    }
}
```
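To make the glob rule concrete, here is a toy expansion of the `<%= file %>` template over a list of already-matched paths. The `expandRule` helper is hypothetical, not LMD's actual code:

```javascript
// Expand a naming template like "<%= file %>" into a module map,
// given the paths a glob pattern matched (sketch; only the 'file'
// and 'dir[0]' template variables are supported here).
function expandRule(template, paths) {
    var modules = {};
    paths.forEach(function (path) {
        var parts = path.split('/');
        var file = parts.pop().replace(/\.js$/, ''); // file name, no extension
        var dir = parts;                             // remaining directories
        var name = template
            .replace(/<%=\s*file\s*%>/g, file)
            .replace(/<%=\s*dir\[0\]\s*%>/g, dir[0] || '');
        modules[name] = path;
    });
    return modules;
}

var modules = expandRule('<%= file %>', ['deps/dep1.js', 'main.js']);
// modules is { dep1: 'deps/dep1.js', main: 'main.js' }
```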
The results of this method are as follows:

It is clear: the entire project tree is described in one file.
It is reliable: analyzer errors are excluded.
You get an abstract file system.
You need to write a config.
You need a collector.

Module initialization and interpretation
This is quite an important point that lets you reduce the lag when an application starts and a lot of code is executed. When code reaches the page, it is initialized (a function was written and registered under some name); during initialization the code is parsed, validated, and turned into an AST for further interpretation and possible JIT compilation. When a function is called, its code is interpreted.

Here the function is neither initialized nor interpreted; only a JavaScript string is initialized:

```js
'function a() { return Math.PI; }';
```
The function is initialized:

```js
function a() {
    return Math.PI;
}
```
The function is initialized and interpreted:

```js
function a() {
    return Math.PI;
}
a();
```
Each function declaration and each call takes some time, especially on mobile devices, so it would be good to reduce this time.

Initialized and interpreted at startup
The module is delivered as-is and runs when the program starts, even if we do not need it right now. As you can see, the module contains loops that can slow down startup:

```js
var dep1 = (function () {
    var a = 10, b = '';
    for (var i = 0; i < a; i++) {
        b += i;
    }
    return b;
})();
```
No additional tools are needed.
If there is little code, the initialization time is insignificant.
As the amount of code grows, startup latency starts to show.

Initialized at startup, interpreted on demand
A method, quite popular now, used both by AMD and by Node.js modules:

```js
define('dep1', [], function () {
    var a = 10, b = '';
    for (var i = 0; i < a; i++) {
        b += i;
    }
    return b;
});
```

This module is initialized at startup, but its body is executed on demand; the result (`return b;`) is cached, so subsequent calls are not re-interpreted.
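The mechanics of "initialize at startup, interpret on demand" fit in a few lines. A toy registry, not AMD's or Node's real implementation (`requireModule` is a made-up name to avoid shadowing the host `require`):

```javascript
var registry = {};
var cache = {};

// Registration only stores the factory; the body is not executed yet.
function define(name, factory) {
    registry[name] = factory;
}

// The first call interprets the body; later calls hit the cache.
function requireModule(name) {
    if (!(name in cache)) {
        cache[name] = registry[name]();
    }
    return cache[name];
}

define('dep1', function () {
    var a = 10, b = '';
    for (var i = 0; i < a; i++) {
        b += i;
    }
    return b;
});

var first = requireModule('dep1');  // body interpreted here
var second = requireModule('dep1'); // served from cache
// first === second === '0123456789'
```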
The module's appearance barely changes.
Startup latency is significantly reduced for large amounts of code.
An additional library is needed.

Initialized and interpreted on demand
A small modification of the previous method that also postpones initialization of the code. It is mainly used to optimize code loading on mobile devices. This optimization can be done both for RequireJS and for LMD. A piece of an LMD build (not the config):

```js
{
    'dep1': '(function(){var a=10,b="";for(var i=0;i<a;i++){b+=i;}return b;})'
}
```
When some module requires the resources of module `dep1`, LMD initializes and interprets this code, roughly like this:

```js
var resources = new Function('return ' + modules['dep1'])()(require, module, exports);
```

Initializing code via `new Function` can be a little slower than honest initialization, but applied wisely this optimization gains time at startup. Unlike `eval()`, code produced via `new Function` can be optimized by the JIT compiler.
The operation is transparent to the developer.
An additional library is needed.

Loading external dependencies
As I said, JavaScript @ DOM has its own atmosphere, so the usual methods of loading modules do not work here. The modules live remotely, and loading them synchronously is unrealistic: a desktop application can link a library synchronously "at the speed of light", but in JavaScript @ DOM that would block the EventLoop. We cannot download everything at once, so we have to invent things and suffer :)

Unmanaged module loader
By an unmanaged module I simply mean any code that does not require additional processing. Such a loader, for example jQuery's `getScript(file)`, works roughly like this:

```js
var script = document.createElement('script');
script.src = file;
script.onload = done;
document.head.appendChild(script);
```
If you load several modules at once this way, they execute in the order they finish loading, but sometimes you need to run modules in the order they were listed. The LAB.js library, for example, uses XHR to download script code in parallel and then executes it sequentially; XHR, in turn, brings its own limitations.

```js
$LAB
    .script("framework.js").wait()
    .script("plugin.framework.js");
```
Other loaders, such as YepNope and script.js, do roughly the same.

It is a cheap solution.
There may be XHR restrictions or extra boilerplate to write.

Managed module loader
Any adult modular system comes with its own loader that can load any modules along with their dependencies. That is what `require()` and `define()` from RequireJS do. RequireJS's `require()` loads the necessary dependencies, and the dependencies of those dependencies, and executes the module code in the required order:

```js
require(['dep1', 'dep2'], function (dep1, dep2) {
    console.log(dep1 + dep2);
});
```
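The ordering guarantee boils down to resolving a module's dependencies before running its factory. A synchronous toy version of the idea (real loaders do this asynchronously over the network):

```javascript
// Each definition lists its dependencies and a factory function.
var defs = {
    dep1: { deps: [], factory: function () { return 1; } },
    dep2: { deps: ['dep1'], factory: function (d1) { return d1 + 1; } }
};
var instantiated = {};

function resolve(name) {
    if (!(name in instantiated)) {
        var def = defs[name];
        var args = def.deps.map(resolve); // dependencies come first
        instantiated[name] = def.factory.apply(null, args);
    }
    return instantiated[name];
}

var result = resolve('dep2'); // dep1 is instantiated before dep2 runs
// result === 2
```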
LMD, for example, has the notion of a bundle: several modules assembled into one file. Once a bundle is downloaded, all of its modules become available to any module:

```js
_e4fg43a({
    'dep1': function () {
        return 1;
    },
    'dep2': 2,
    'string': 'Hello, <%= name %>!'
});
```

```js
require.bundle('name').then(function () {
    // all modules of the bundle are now available via require()
});
```
It manages both module loading and module initialization.
It is almost transparent for the developer.
It requires additional tools and configuration.

Module isolation
The security, or isolation, of modules is needed for developers rather than as protection against those who would break their work. Direct, chaotic access to module internals can, when misused, "rot the code". On the other hand, if your JavaScript leaves no traces in the global scope, it is harder for someone studying your code to understand it and "break" something, though that is more a matter of time.

Non-isolated modules
The module, or some of its parts, is available globally; any developer can take it and use it from anywhere:

```js
var dep1 = (function () {
    var a = 10, b = '';
    for (var i = 0; i < a; i++) {
        b += i;
    }
    return b;
})();
```

Again, it is simple.
No tools are needed.
You have to think about namespaces.
There is no separation of concerns: the module both does its own job and fetches its own dependencies.

Isolated modules
The module is not available globally, but it can be obtained by knowing its name: `require('pewpew')`. Hiding, as I said, is not the goal of a modular system but a consequence. In AMD there are two functions through which you can reach a module: `require()` and `define()`. Knowing the module's code name is enough to get its resources:

```js
define('dep3', ['dep1', 'dep2'], function (dep1, dep2) {
    return function () {
        return dep1 + dep2;
    };
});
```
Modules are isolated from other modules, and you cannot spoil anything.
Access to another module is declared explicitly.
Special libraries are needed to work with such modules.

Totally isolated modules
The purpose of such modules is to make it impossible to reach the module from the outside. I think many have already seen such "modules", for example:

```js
$(function () {
    var dep1 = (function () {
        var a = 10, b = '';
        for (var i = 0; i < a; i++) {
            b += i;
        }
        return b;
    })();

    $('button').click(function () {
        console.log(dep1);
    });
});
```
This is in fact a totally isolated module: its insides cannot be reached from the outside. But it is an example of a single module; if every module is wrapped in its own closure, they will not be able to interact. To isolate several modules together, they can be placed in a common scope, or some shared resources can be thrown into their scope, through which such modules can communicate with each other. It is enough to wrap the modules in an IIFE:

```js
(function () {
    var dep1 = 1;
    var dep2 = 2;

    var moduleName = function () {
        return dep1 + dep2;
    };
})();
```
This build method is used, for example, by jQuery. LMD and browserify also totally isolate modules from the environment, but unlike the all-in-one-closure build, their modules are isolated both from each other and from the controlling part of the build. They are assembled into a structure like this:

```js
(function (main, modules) {
    function lmd_require() { /* ... */ }
    // ...
})(main, modules);
```
In the simple case, total isolation is easy to achieve.
For other cases, additional tools are needed.

Comparative table of popular module emulators in JavaScript
|  | AMD, YUI | ES6 | CJS / LMD | IIFE |
| --- | --- | --- | --- | --- |
| Dependency resolution | In module + config | In the module | In config | Manual |
| Access to dependencies | Dynamic | Declarative | Dynamic | Arbitrary |
| Export | With arbitrary name | Honest import / export | With arbitrary name | Chaotic / uncontrolled |
| Module assembly | Static analysis | Not needed / concatenation | Build from config | Concatenation |
| Module interpretation | On demand | Native solution | On demand | At startup |
| Module isolation | Isolated | Isolated | Totally isolated | Not isolated |
Common Module Formats
And finally, some background information on the existing module "emulators" in JavaScript.

No module
```js
var moduleName = function () {
    return dep1 + dep2;
};
```
Namespace
```js
MyNs.moduleName = function () {
    return MyNs.dep1 + MyNs.dep2;
};
```
IIFE return
```js
var moduleName = (function (dep1, dep2) {
    return function () {
        return dep1 + dep2;
    };
}(dep1, dep2));
```
IIFE exports
```js
(function (exports, dep1, dep2) {
    exports.moduleName = function () {
        return dep1 + dep2;
    };
}(window, dep1, dep2));
```
AMD
YUI modules are semantically similar to AMD, so I will not show them.

```js
define(["dep1", "dep2"], function (dep1, dep2) {
    return function () {
        return dep1 + dep2;
    };
});
```
AMD wrapper for CommonJS
```js
define(function (require, exports, module) {
    var dep1 = require('dep1'),
        dep2 = require('dep2');

    module.exports = function () {
        return dep1 + dep2;
    };
});
```
CommonJS
```js
var dep1 = require('dep1'),
    dep2 = require('dep2');

module.exports = function () {
    return dep1 + dep2;
};
```
UMD
It is clear by now that there are at least three module formats that need supporting. It is one thing when you write your own project and can use anything; it is quite another for open-source projects, which would do well to support all formats. All these modules are just different wrappers that essentially do the same thing: take resources and provide resources. Not long ago the UMD: Universal Module Definition project appeared, which "standardized" a universal wrapper for all formats:

```js
(function (root, factory) {
    if (typeof exports === 'object') {
        // CommonJS
        module.exports = factory(require('dep1'), require('dep2'));
    } else if (typeof define === 'function' && define.amd) {
        // AMD
        define(['dep1', 'dep2'], factory);
    } else {
        // Browser globals
        root.moduleName = factory(root.dep1, root.dep2);
    }
}(this, function (dep1, dep2) {
    return function () {
        return dep1 + dep2;
    };
}));
```
Using a wrapper like this during development is somewhat strange, of course, but for "exporting" a finished library it fits best.

Further reading
- JavaScript Module Pattern: In-Depth
- Creating YUI Modules
- Writing Modular JavaScript With AMD, CommonJS & ES Harmony
- Why AMD?
- AMD is Not the Answer
- Why not AMD?
- Proposal ES6 Modules
- Playing with ECMAScript.Harmony Modules using Traceur
- Author In ES6, Transpile To ES5 As A Build-step: A Workflow For Grunt