Many JavaScript frameworks offer their own opinion on how your code should look. And it is not just about style; it is about the way the scripts themselves are written. This stems from the almost absolute permissiveness of JavaScript, for that is exactly what a multi-paradigm language with C-like syntax, prototypal inheritance, dynamic typing, and an implementation varying from browser to browser is. So when it comes to test-driven JavaScript, I understand it not merely as a particular programming style, but as a set of technical principles for a particular kind of framework that makes JS applications testable.
In this article I will argue with myself about what testable JavaScript code is and how much it will cost me to start writing it.
Warning: long post ahead. From here on I will assume the reader is somewhat familiar with
QUnit and/or
Jasmine. Ten minutes of skimming the basics is enough for this article. As usual, the philosophy and generalization take far more time and energy. In truth, I had already answered for myself all the questions posed in the annotation to this article; I only had to go back and remember what unit tests
really are. The fact is that I had never dealt with front-end unit tests before, so I was skeptical of the technology. I will try to lay out the main difficulties below. I also ask you not to take this article as a manifesto for or against unit testing client-side code.
Besides, we must remember that testing frameworks differ from one another. I have tried to abstract away the main differences, but no doubt Jasmine, for example, gives more freedom in writing testable code than QUnit. Still, the difficulties described in this article arise in both.
And so here we go.
Anonymous functions
The first question was: "OK, but how do I test anonymous functions? What do I do if my code is solid anonymous callbacks and other lambda magic?" The official documentation explains clearly enough what to do with asynchronous calls, but none of its examples were anonymous.
var callBack = function(x){ ... }
someAsyncFunction(someOptions, callBack);
and
someAsyncFunction(someOptions, function(x){ ... });
Those are two rather different things, you will agree. But JavaScript applications are full of such constructs. The traditional jQuery idiom
$(document).ready(function(){ ... });
has the goal of keeping the application from running until the DOM is ready. Or the common practice of wrapping an application in an anonymous function so as not to clutter the global namespace:
(function(){ ... })()
It is not at all clear how to test these functions when there is no identifier to reach them by. Indeed, their whole point is that they are inaccessible. It turns out we have to provide access.
function Application () { ... }
$(document).ready(Application);
And now the application can be tested in QUnit like this:
test('Application constructor is loaded', function () {
    ok(window.Application !== undefined, 'Application is defined');
});
But there is one problem. What if I do not want to clutter the global namespace? Unfortunately, I will have to give up at least one name. At a minimum, there is a practice known as
Backdoor Manipulation. Something similar can be implemented in JS through the global window object: create a property on it that stores a reference to the application, for example:
(function(){
    function Application(){ ... }
    window.TestBackdoor = Application; // the backdoor (property name is illustrative)
})();
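The mechanics of such a backdoor can be sketched in isolation; here a plain object stands in for the browser's window so the sketch runs anywhere, and all names are illustrative:

```javascript
// A stand-in for the browser's window object.
var windowLike = {};

(function () {
    // Application stays out of the enclosing scope...
    function Application() {
        return "running";
    }
    // ...except for the single reference we leak on purpose, for tests.
    windowLike.TestBackdoor = Application;
})();

console.log(windowLike.TestBackdoor()); // "running"
```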
Like this. But we have introduced an extra construct, and it still does not save us from extra names on the window object. Sad. At this point my inner debater asks a reasonable question: why are you testing the application as a whole anyway; where are the
unit tests? Fair enough, I am testing the entire application. But that is what JS applications most often are: they are (almost) always a single whole. Nobody separates the application logic from the part that processes the data and wires those modules together in various ways, unless it really is a giant application that demands exactly such an approach. Not every application needs things like RequireJS.
Argument #1: The application should be divided into separate pieces (modules) and each tested separately, because that is the whole point of unit tests. Each unit can get its own namespace (by implementing the whole unit inside a separate function), and the problem is solved.
Counter-argument #1: JavaScript is not an OOP language in the classical sense where you can simply test class methods. An OOP implementation that makes individual objects testable in JavaScript is fraught with excessive code. (A claim for which I will not cite rigorous proof; just remember that there are no public and private keywords in JS. Moreover, it is enough to recall what the beloved classes and their inheritance in CoffeeScript turn into when translated to JavaScript.)
But since test-driven JavaScript exists, there must be a style of organizing the application's object model so that it can be tested...
Inheritance
In general, OOP, prototypal inheritance, and so on in JavaScript are fertile ground for heated debate, which has no place in this article. But before exploring inheritance in a test-driven context, I recommend reading
Douglas Crockford and Nicholas Zakas on optimizing JavaScript applications, to keep up with the positive and negative aspects of the various ways of organizing inheritance in JavaScript.
Prototypal inheritance is probably one of the parts that pleasantly surprised me, because it caused no extra trouble as long as I followed canonical JavaScript prototypal inheritance. Indeed, if we have access to an object, then we immediately have access to its methods and can test them inside and out. For example, I add to an object's prototype a function computing the absolute value of a number:
myObject.prototype.abs = function(x){return x>0 ? x : -x;}
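To make that concrete, here is a minimal sketch (the constructor name is illustrative) showing how directly such a prototype method can be exercised once you hold a reference to the constructor:

```javascript
// Hypothetical constructor; the method lives on the prototype,
// so any instance (and therefore any test) can call it directly.
function MyObject() {}

MyObject.prototype.abs = function (x) {
    return x > 0 ? x : -x;
};

var obj = new MyObject();
console.log(obj.abs(-7)); // 7
console.log(obj.abs(4));  // 4
```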
All this simplicity comes at the cost of one detail that many consider a significant drawback of JavaScript: an object has no private and public properties. They are all accessible. Thus we again arrive at the fact that only functions and methods that can be reached can be tested. That holds right up until we start imitating public and private properties of an object, like this:
function myClass (options){
    var privateMethod = function(){ ... }
    var that = {
        publicMethod : function(){ ... }
    }
    return that;
}
Access to publicMethod's closure is feasible, since the returned object contains it as a property. The problem is that there is no way to reach privateMethod from the outside, even though it lives in the scope of myClass: within any closure of myClass you can refer to it, but from any other scope there is no access to the function at all. Which brings us back to point 1.
This is especially significant when we write applications built on a traditional OOP template: several objects with a variety of methods, internal logic, and so on, and no factory method churning out many instances and their derived instances. In short...
Argument #2: If you use traditional JavaScript prototypal inheritance, the application's entire object model is automatically available to tests without any extra effort.
Counter-argument #2: There are a thousand and one ways (many of them quite popular) to implement inheritance in JavaScript that deprive the developer of access to the methods of a class instance.
The problem with inheritance and object-oriented programming in test-driven JavaScript is that some methods affect the state of the instance as a whole. How do we test that, on a certain event, the instance changed some internal property that does not depend on other functions, for example, that on mouseover it changes its color? Provided, of course, that we trust the frameworks with which we implement that rendering. But a great deal of what we do through various frameworks changes state and nothing but state.
Unfortunately, the ideals of unit testing are at odds with reality. Nobody (well, almost nobody) first develops a model, then writes the JavaScript, and only then builds a GUI on top; most often it happens the other way around: the programmer receives the application interface and hears "make it work." I am not saying the first case is impossible or excessive; on the contrary, it is the best way. But alas, the front end is what gets treated as the development stage whose job is to
display the work of the back end.
Returning to the topic of system state, I arrive at the main problem. Suppose we trust jQuery and friends absolutely, and are confident that a function receiving a string of the form "#000000" will paint the figure that color and do so without errors. Suppose the frameworks that work with the DOM and handle the rest of the rendering are not as important to test as the functions where the application's data is processed. But how do we separate one from the other? How do we write pure functions worth testing, without letting side effects creep in?
var a = 10;
function f(x) { a = x; }
This function is not pure, alas. It receives data and changes the state of the system (the value of the variable a).
Pure functions
Pure functions are good. It is good when functions do not change the state of the system; when functions receive data, do something with it, return data, and do nothing else.
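Side by side, the difference looks like this (all names are illustrative):

```javascript
// Impure: calling it mutates state outside the function.
var total = 0;
function addToTotal(x) {
    total = total + x;
}

// Pure: the result depends only on the arguments, and nothing else changes.
function add(x, y) {
    return x + y;
}

addToTotal(5);
console.log(total);     // 5 -- outer state has changed
console.log(add(2, 3)); // 5 -- no outer state involved
```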
But in JavaScript there are two nuances.
First: in JavaScript, functions are objects, and the system as a whole is objects; that is, system state is constructed by functions, and system state is the state of functions. There is no clear separation between the concepts of object and method. Everything ultimately merges into a zen of objects, or of functions, if you prefer. In other words, with functions we construct objects, with functions we compute data, and we pass that data on to functions that mutate the original objects.
(A picture of Xzibit captioned "Yo dawg" belongs here.) Second: isn't the whole essence of client-side JavaScript a side effect? That is, the display, the dynamics. When was the last time you computed some brutal Riemann integral just to return the result to the user in the browser console? I would be very surprised if anyone has even once had to solve such tasks on the client side, with no display involved. And if an application has no substantial logic, only rendering, then in essence there is nothing to test.
Here is a fairly textbook example:
Given an input, we must validate its contents against a regular expression on the keydown event. If the entered string is valid, we paint the input green; otherwise, red.
$("#myPrettyInput").on("keydown", function(){
    if(___){
        ...
    }else{
        ...
    }
});
To separate the data processing (validation) from the side effects (changing the CSS), we can do the following:
$("#myPrettyInput").on("keydown", function(){
    var val = $(this).val();
    var regex = ...; // the validation pattern
    if(validate(val, regex)){
        ...
    }else{
        ...
    }
});
On the one hand, what is wrong with that? Worried about the extra function we wrote? Indeed, in the first case the if would most likely contain some ugly expression like
$(this).val().match(regex).length
whereas now we have quite a neat function. Never mind that the body of the function contains the same expression; the main thing is that it can be tested! On such a simplified example everything looks justified and decent. It seems we achieved testability by extracting just one function, but...
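And indeed, once validate is a standalone function, it can be exercised with no DOM at all. A sketch of one possible body (the regular expression is an arbitrary example, lowercase letters only):

```javascript
// One possible validate: true when the string matches the pattern.
function validate(str, regex) {
    return str.match(regex) !== null;
}

var lettersOnly = /^[a-z]+$/;
console.log(validate("hello", lettersOnly)); // true
console.log(validate("h3llo", lettersOnly)); // false
```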
Argument #3: We got testable code, and at any moment we can check whether our validation works by writing as many automated tests as we like. As a bonus, we got rid of giant expressions in the event handler.
Counter-argument #3: We introduced another name. And what if the validation is not so simple, but involves an Ajax call? For example, checking whether a username is already taken. Then separating side effects from strict data processing takes more than hoisting a function out of an event handler.
Counter example:
var validate = function(str, regex){
    if(_____regexp){
        $.ajax({
            ...
            success : function(data){
                ...
            }
        });
    }
}
Unfortunately, there is no getting around giving the callback a name.
var validate = function(str, regex){
    function ajax_validate(data){
        ...
    }
    ...
}
Of course, in the previous listing the ajax_validate function is pure (it returns the validation result, which is always either true or false), but you still cannot test it, because it is local to validate. Turn it into a closure exposed by validate? A clumsy solution: validate then turns into a class and stops being pure. Should we then turn all the strict checks that return true or false into closures of a separate module occupied exclusively with all the application's validations? In terms of testability this is quite a satisfactory solution, and in terms of modularity too. But I hope you have noticed how much the amount of work has grown, while I have added essentially nothing to the application. I ask myself: where is the golden mean?
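Such a dedicated validation module might look like this; all names and the shape of the server response are assumptions for the sake of illustration:

```javascript
// A module gathering the application's pure checks in one place;
// only the checks are exposed, everything else stays private.
var validators = (function () {
    function syntax(str, regex) {
        return regex.test(str);
    }
    // Assumes the server answers with an object like { free: true }.
    function ajaxValidate(data) {
        return data.free === true;
    }
    return { syntax: syntax, ajaxValidate: ajaxValidate };
})();

console.log(validators.syntax("user_1", /^\w+$/));     // true
console.log(validators.ajaxValidate({ free: false })); // false
```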
It seems logical to answer that data handlers should go in one pile and event handlers in another. So I give up on testing the part that paints the input the desired color and concentrate on making testable the part of the program that
computes which color to paint the input. Again with the proviso that the application is not too large, complex, or intricate. If it really is, then yes, it is better to introduce an extra module and spend the time, in exchange for testability and ease of later use. And if the application requires no special tricks, I would do this:
$("#myPrettyInput").on("keydown", function(){
    var val = $(this).val();
    if(validate(val, regex)){
        var ajaxValidationResult = false;
        $.ajax({
            ...
            success : function(data){
                if(ajax_validate(data)){
                    ajaxValidationResult = true;
                }
            }
        });
        ...
    }
});
For me this is quite an acceptable ratio of extra work to the benefits of testability. But I cannot call it the golden mean. What solutions do I have? In truth, no universal ones. JavaScript has plenty of built-in possibilities for making code as testable as you wish. This is where the functional nature of JavaScript comes in handy: the ability to pass functions into functions and to return functions too. It is convenient in the sense that the application logic can be written in a fairly free declarative style (the part unrelated to unit tests), while the parts of the program responsible for data processing remain testable. I often use the following scheme:
function App(){
    function Vehicle(){
        ...
    }
    ...
}
This way it is possible to reach the closures of any class; at the end I can choose the means of access, either a backdoor or simply returning the desired object. However, such an imitation of public/private methods does not really fulfill the idea of public/private methods; it serves only as a demarcation line between functions that are or are not reachable from other scopes, which is a rather ugly superstructure built for testability rather than for the public/private separation of classical OOP. It makes more sense to use it to separate pure functions from functions with side effects. Also remember that this way of organizing an application is far from the most performant, and suits large, highly interconnected applications with several modules better than applications that use many instances of classes.
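A runnable sketch of that scheme, where the outer function simply returns the inner "classes" a test needs (all names are illustrative):

```javascript
function App() {
    // An inner "class", invisible from the outside by default.
    function Vehicle(name) {
        this.name = name;
    }
    Vehicle.prototype.describe = function () {
        return "vehicle: " + this.name;
    };

    // Expose exactly what the tests (or other modules) need.
    return { Vehicle: Vehicle };
}

var api = App();
var car = new api.Vehicle("car");
console.log(car.describe()); // "vehicle: car"
```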
Returning to the separation of data processing and display: some libraries (mostly functional and/or declarative ones) enforce the separation of pure functions from side effects. Others, unfortunately, do not particularly support the idea. Still, in the time I have been acquainted with JavaScript, I have become convinced that nothing is impossible. The question is entirely different: at what cost?
For example, in
this article I test a fairly simple (toy) application, and because I use bacon.js, a declarative library whose whole point is to save the developer from the hell of nested callbacks hanging off event listeners, I managed to bring the application to a testable state without undue losses.
Summary: unfortunately, I cannot say anything more intelligible than the rather banal "every tool has its place." I have used QUnit and Jasmine only a few times to test client code for more or less large applications. But those were cases where, before writing a single line of code, I already anticipated exactly how I could separate side effects from data processing in that particular application, so that writing testable code would not turn it into unintelligible stuffing.
Finally, I would like to say that the very idea of unit testing fits well with the modular nature of NodeJS. Squeezing test-driven development on the NodeJS platform into this article as well would be excessive, though the topic is definitely exciting. So if anyone is interested, I will write about it in the next article.