Comrade engineers, I would like to report on our progress in training software engineers at the Faculty of IWT of the Kiev Polytechnic Institute, and to share some code examples that were written for the course but should, I hope, be interesting from a practical point of view as well. The idea of bringing JavaScript and Node.js into the curriculum had been maturing in my mind for several years. For the basics of programming I still prefer C, so that people can feel the machine and learn to control themselves and their code. But for applications where the level of abstraction C offers is no longer illustrative enough, multi-paradigm and flexible JavaScript has taken root. With the powerful yet simple Node.js API you can write conceptual code right during a class session. Besides, knowledge of JavaScript is certain to be useful in practice to any engineer working in IT.

Some of the code written by students of the course has already made its way into serious open source projects, and this is excellent practice that anyone can repeat: we are gradually publishing the lab assignments on GitHub and will continue to do so, supplying them with methodological instructions and not worrying that students will copy from forks, because all of this is needed first of all by the students themselves. These materials were used to teach about 300 students of the polytechnic university during the 2015-2016 academic year. I will lay the examples out in detail once again at the summer school, which runs from August 9 to August 26, 2016, and whose schedule can be found here. So, on to the most illustrative code examples.
Writing something like live Google Spreadsheets can inspire even the weakest students. Let it be a small 6x5 spreadsheet, but several people can enter data into it, and what they enter is synchronized over the network in real time. It is vivid, impressive, and applicable in practice. Besides, this task hides additional topics: using WebSocket and EventEmitter, writing your own simple implementation of the latter, and extending EventEmitter with the ability to subscribe to all events at once. The main objective of this work, designed for one lecture and two practical sessions, is to master the event-driven, reactive model of computation instead of iterating over data in a loop, and to broadcast events over the network.
You don't have to start from scratch: the base server code is 41 lines, the client is 53 lines, and both are published along with the assignment on GitHub: https://github.com/HowProgrammingWorks/EventDrivenProgramming
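For instance, the heart of the assignment, an EventEmitter with on and emit plus a subscription to all events at once, fits into a couple of dozen lines. Here is a minimal sketch (illustrative, not the code from the repository; the '*' wildcard name is my own convention):

    'use strict';

    // A minimal EventEmitter: on/emit plus a '*' wildcard that receives
    // every event (illustrative, not the repository code)
    function EventEmitter() {
      this.listeners = {}; // event name -> array of handlers
    }

    EventEmitter.prototype.on = function(name, handler) {
      const handlers = this.listeners[name] || (this.listeners[name] = []);
      handlers.push(handler);
    };

    EventEmitter.prototype.emit = function(name, data) {
      (this.listeners[name] || []).forEach(handler => handler(data));
      (this.listeners['*'] || []).forEach(handler => handler(name, data));
    };

    // Usage: broadcast a cell change to everything that listens
    const ee = new EventEmitter();
    ee.on('cell', cell => console.log('cell changed:', cell));
    ee.on('*', (name, data) => console.log('event', name, data));
    ee.emit('cell', { row: 2, col: 3, value: 'hello' });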
How do you get students excited about distributed computing? I think you just need to give them something simple that they can implement in one or two class sessions. No one in their right mind, of course, would write large computational tasks in Node.js, but it is concise and illustrative. JavaScript has a single-threaded execution model, yet we can visibly parallelize computations by translating them into an asynchronous paradigm and distributing them across several processes. To do this, it is proposed to move from loops to iterators. The loop variables and state then disappear, which means the original data set can be cut into pieces and handed to different processes. This suits tasks where the order in which the elements of a data set are processed does not matter, and there are plenty of such tasks. In addition, iterators can be redefined, intercepted, or written from scratch. We can implement interprocess communication and network exchange while hiding it behind the iterator abstraction. The application code does not change; only the implementation of the iterator cuts the task into parts, remembers the indices of the parts, and then, when the results come back (even in a different order), glues them together in the correct sequence.
Here, too, you don't have to start from scratch: there are code templates on GitHub at https://github.com/HowProgrammingWorks/InterProcessCommunication containing two options for exchanging data between processes: via IPC (built into the operating system and wrapped by the Node.js API) and via TCP sockets. The latter can be used not only for parallelization within a single server, but also for building a cluster out of several multicore servers.
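As a rough illustration of the idea (not the repository code; worker.js and the squaring computation are just placeholders), a master process can fork workers, send each a chunk with its index over IPC, and glue the results back together in the original order. Both files are shown in one listing:

    'use strict';

    // master.js - cut the data into chunks, send each chunk with its index
    // to a forked process over IPC, then reassemble the results by index
    const cp = require('child_process');

    const data = [1, 2, 3, 4, 5, 6, 7, 8];
    const workersCount = 2;
    const chunkSize = Math.ceil(data.length / workersCount);
    const results = new Array(workersCount);
    let done = 0;

    for (let i = 0; i < workersCount; i++) {
      const worker = cp.fork('./worker.js');
      const chunk = data.slice(i * chunkSize, (i + 1) * chunkSize);
      worker.send({ index: i, chunk: chunk }); // the task travels over IPC
      worker.on('message', msg => {
        results[msg.index] = msg.result; // order is restored by index
        worker.kill();
        if (++done === workersCount) {
          console.log([].concat.apply([], results)); // glued back together
        }
      });
    }

    // worker.js - receives a chunk, processes it, sends the result back
    process.on('message', msg => {
      const result = msg.chunk.map(x => x * x); // stand-in computation
      process.send({ index: msg.index, result: result });
    });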
This task can be extended indefinitely. If this is not enough for a student and he has the ability and the desire to dig deeper, he can write a resource manager for a computing cluster that keeps track of the state of each computing process and distributes computing tasks to free resources, or according to their measured performance. You can collect performance statistics for different tasks and optimize load balancing. You can run the computations on the server and send requests from many clients, or, on the contrary, have many workers and send them requests from a single master. You can even build a request broker: then we have one TCP server with two types of clients, customers and executors. Customers send in tasks, and the broker (the server) distributes them among the executors, then collects the parts of the solution and sends the assembled results back to the customers.
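A compressed sketch of such a broker on top of the net module might look like this (the newline-delimited JSON protocol, the message fields role, id, task and result, and the port are my assumptions, not the course solution; an executor is expected to echo the task id back with its result):

    'use strict';

    // broker.js - one TCP server, two kinds of clients: customers send in
    // tasks, executors register and solve them
    const net = require('net');

    const freeWorkers = [];  // sockets of idle executors
    const pendingTasks = []; // tasks waiting for a free executor
    const customers = {};    // task id -> customer socket
    let nextId = 0;

    const dispatch = () => {
      while (freeWorkers.length && pendingTasks.length) {
        const worker = freeWorkers.shift();
        worker.write(JSON.stringify(pendingTasks.shift()) + '\n');
      }
    };

    const server = net.createServer(socket => {
      socket.setEncoding('utf8');
      let buffer = '';
      socket.on('data', chunk => {
        buffer += chunk;
        let index;
        while ((index = buffer.indexOf('\n')) > -1) {
          const message = JSON.parse(buffer.slice(0, index));
          buffer = buffer.slice(index + 1);
          if (message.role === 'worker') { // an executor registers
            freeWorkers.push(socket);
          } else if (message.role === 'customer') { // a customer sends a task
            const id = nextId++;
            customers[id] = socket;
            pendingTasks.push({ id: id, task: message.task });
          } else if (message.result !== undefined) { // an executor returns a result
            customers[message.id].write(JSON.stringify(message) + '\n');
            delete customers[message.id];
            freeWorkers.push(socket); // the executor is free again
          }
          dispatch();
        }
      });
    });

    server.listen(3000);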
Another important topic is isolating application code in sandboxes. Here Node.js gives us access to the V8 virtual machine API, which lets us create code execution contexts dynamically. Sandboxes have their own global context and have absolutely no access to the main global context of the application unless we deliberately pass references to specific objects into them. Thanks to this, we can demonstrate the principles of inversion of control and dependency injection for modules, not just for classes. We can take a reference to an object or a function from one sandbox and inject it into another. We can wrap any API before putting it into a sandbox, adding logging, timing measurements, distributed computing, security checks, or anything else to its behavior. Sandboxes are one of the most powerful features of Node, but unfortunately they are still rarely used in application code.
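A minimal sketch of this with the vm module (the wrapped console is just an illustration of injecting a decorated API into the sandbox):

    'use strict';

    // Running code in an isolated context with the vm module
    const vm = require('vm');

    const sandbox = {
      console: {
        log: message => console.log('sandbox says:', message) // wrapped API
      }
    };
    vm.createContext(sandbox); // the sandbox gets its own global context

    const code = 'console.log(typeof process + " " + typeof require); x = 5;';
    vm.runInContext(code, sandbox);
    // prints "sandbox says: undefined undefined" - no process, no require inside

    console.log(sandbox.x); // 5 - the variable appeared in the sandbox global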
Generally speaking, out-of-the-box dependency management in Node is a story of its own: everything is done through DL (dependency lookup), implemented by the notorious require. This is a way for modules to load other modules into their own context themselves, specifying the full path to a file or the name of a module in the npm repository. At the same time, loaded modules get full access to the global context of the application and can change anything in it, for example remove setTimeout, override Array.prototype.forEach(), or replace require with their own function. JavaScript libraries in general often modify the base classes of the language, and this leads to code conflicts. There is no cure for this other than running the conflicting code in sandboxes, isolating it, and injecting dependencies from the main application by creating references to them in the global context of the sandboxes.
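To make the problem tangible, here is a tiny illustration (the file name evil-module.js is hypothetical):

    // evil-module.js - nothing stops a loaded module from patching built-ins
    // shared by the whole application
    Array.prototype.forEach = function(fn) {
      for (let i = 0; i < this.length - 1; i++) fn(this[i], i, this); // skips the last element
    };

    // main.js
    require('./evil-module');
    [1, 2, 3].forEach(x => console.log(x)); // prints 1 and 2 - the bug shows up far from its cause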
It would be better to move to an external, declarative description of dependencies, similar to what we have in package.json. But require in Node duplicates, in an imperative style, the declarative description already contained in package.json. In large projects it often happens that two different components of an application depend on different versions of the same library (this is terrible, of course, but that's how it is), and only one of them can be described in package.json. You could, of course, package all these components as separate npm modules and run your own npm server, or publish them to a private repository. But it is much more elegant to solve both problems at once with sandboxes and dependency injection: the main module reads a dependency description file and loads the required versions of the dependencies into the right sandboxes, while also protecting their contexts.
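A sketch of this idea, assuming a hypothetical deps.json that says which library version each component file needs; the loader reads each component and runs it in its own sandbox with the declared dependencies injected (the file names and lodash paths are made up for illustration):

    'use strict';

    // loader.js - read a declarative dependency description and run each
    // component in its own sandbox with the declared dependency versions injected
    const fs = require('fs');
    const vm = require('vm');

    // deps.json might look like:
    // { "reports.js": { "lodash": "./node_modules/lodash-3" },
    //   "billing.js": { "lodash": "./node_modules/lodash-4" } }
    const deps = JSON.parse(fs.readFileSync('./deps.json', 'utf8'));

    for (const file in deps) {
      const sandbox = { module: { exports: {} }, console: console };
      for (const name in deps[file]) {
        sandbox[name] = require(deps[file][name]); // inject instead of require
      }
      vm.createContext(sandbox);
      const source = fs.readFileSync('./' + file, 'utf8');
      vm.runInContext(source, sandbox, { filename: file });
      console.log(file, 'exports:', Object.keys(sandbox.module.exports));
    }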
Link to the repository: https://github.com/HowProgrammingWorks/InterProcessCommunication; it contains three folders.
Next time I will show even more code samples from our labs, for example: a web chat over WebSockets, practical applications of metaprogramming, an extensible HTTP server for the good old web, asynchronous composition of functions, raising the level of abstraction in code, development of specialized protocols, serializers and servers on top of TCP and UDP, directory synchronization over the network, building DSLs (for example, a query language for in-memory data structures), and so on. Thank you for your attention.
Source: https://habr.com/ru/post/307332/