I will try to describe, in general terms, how we test interfaces at TCS.

Troubled past
It used to be simple: a task arrived, the task was done, a tester checked it manually, and the task shipped to users. But then everything got more complicated: there were more and more tasks, more developers joined, and testing, at times, ground to a halt.
Charming present
Our team has changed a lot: a small web development department has grown many times over. The process itself has changed too: our interfaces are now covered by tests both inside (the code) and out. And yes, we do code review, we develop tasks in branches, and we carefully maintain documentation in the wiki and with a JSDoc generator.
Code testing
Obviously, wherever there is data processing or any kind of calculation, there should be unit tests. To put it more bluntly: where there is code, there must be tests.
There are various approaches to development through testing: TDD, BDD, and so on. We won't go into how they differ from one another; instead, let's look at our own testing process in more detail.
Grunt is responsible for building static assets and running tests. Our stack is Grunt + Karma + PhantomJS + Jasmine + Sinon + CoffeeScript. Yes, you read that right: CoffeeScript.
We once had a heated discussion about CS being beautiful, fashionable, and a great accelerator of development, but for a number of reasons we abandoned the bad idea of writing all our code in CS. But! We do write our tests in CS, for one main reason: a wall of callbacks is much nicer to write and read in CS than in JS. The code comes out more compact and pleasant.
Jasmine was chosen for its simplicity, Sinon for emulating API requests, Karma because it is simply a great test runner, and PhantomJS for running the automated tests from TeamCity.
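As a minimal sketch of the pattern — in plain JavaScript rather than our CoffeeScript specs, with a hand-rolled stub standing in for Sinon's fakes, and with hypothetical names (`formatBalance`, `fetchBalance`):

```javascript
// Hypothetical unit under test: formats a balance fetched from the API.
function formatBalance(amountInKopecks) {
  // Convert kopecks to roubles with two decimal places.
  return (amountInKopecks / 100).toFixed(2) + ' RUB';
}

// A hand-rolled stub illustrating what Sinon's fakes give us:
// it records calls and returns a canned response instead of hitting the API.
function makeApiStub(cannedResponse) {
  const stub = function () {
    stub.callCount += 1;
    return cannedResponse;
  };
  stub.callCount = 0;
  return stub;
}

// "Test case": the data-processing logic is exercised without a real backend.
const fetchBalance = makeApiStub(150050); // balance in kopecks
const formatted = formatBalance(fetchBalance());

console.log(formatted);              // "1500.50 RUB"
console.log(fetchBalance.callCount); // 1
```

The point is only that the processing logic never touches the network, so the test is fast and deterministic; in our real specs Sinon provides the stubs and Jasmine the assertions.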
Let me say right away: we did not go overboard and cover absolutely everything with unit tests — only the shared components and the places where data is processed. Everyone may say this is bad and that all code should be covered with tests, but we saw no need for it; covering DOM work with tests is possible, but pointless and time-consuming.
We have a TeamCity server which, following our instructions, automatically starts a build and runs the tests for every branch submitted to code review. If something goes wrong, the developer finds out about it, and the broken code does not reach master.
All our tests are divided into modules; a module is a set of test cases plus a launch config. This approach lets us run the necessary tests separately or, using a common configuration file, run everything at once.
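A per-module launch config might look roughly like this — a sketch of a standard `karma.conf.js`, with illustrative file paths:

```javascript
// karma.conf.js for one module — a sketch; the paths are hypothetical.
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],        // spec/assertion framework
    browsers: ['PhantomJS'],        // headless runs on the CI agent
    files: [
      'src/common/**/*.js',         // components under test
      'test/common/**/*.coffee'     // CoffeeScript test cases
    ],
    preprocessors: {
      '**/*.coffee': ['coffee']     // compile specs on the fly
    },
    singleRun: true                 // exit after one pass (CI mode)
  });
};
```

A common config that globs over all modules' files gives the "run everything at once" mode.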
There are certain moments when you do want to cover DOM work with unit tests, and CS, with its wonderful multi-line block strings, helps us here. You simply write the necessary HTML in the test case (or in a separate file) and hook it up where it is needed.
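For illustration — in plain JavaScript, where a template literal plays the role of CoffeeScript's `"""` block string; the markup itself is invented:

```javascript
// An HTML fixture kept right next to the test case; in CoffeeScript this
// would be a """ ... """ block string, here a template literal plays that role.
const fixture = `
  <div class="account">
    <span class="account__balance">1500.50</span>
  </div>
`;

// In a Karma run this would be appended to the page, e.g.:
//   document.body.insertAdjacentHTML('beforeend', fixture);
// and the component under test initialised against it.
console.log(fixture.includes('account__balance')); // true
```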
GUI testing
As I wrote earlier, the developers do not cover DOM work with unit tests, considering it a pointless undertaking. For that, TCS Bank has a testing department, which handles testing the visual part of the interface.
There are two types of testing: manual and automated.
With the first, everything is clear: click until the mouse breaks and the keys pop out of the keyboard. With the second, it is a little harder...
For interface testing we have designated not just browsers but specific browser versions; there is a pile of test cases written for them, plus test data that must be used when testing a particular browser in its various versions. Naturally, all of this is quite hard to check manually, and we wanted to spare people the routine, boring, tedious work. In short, it is practically impossible to get by without test automation today, and the ever-expanding range of commercial and open-source tools and solutions should make even the skeptics look in that direction more and more often.
Our automated tests use Selenium WebDriver. On top of a set of popular, proven solutions we built our own framework for testing the front end. It lets us write clean, transparent tests, eliminates code duplication, and imposes a rigid structure on how tests are designed and constructed, which keeps the final tests flexible and easy to maintain.
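The framework itself is internal, but the page-object idea behind "clean, transparent tests" can be sketched like this (plain JavaScript; the selectors and the stub driver are invented for illustration — a real test would receive a selenium-webdriver session pointed at the grid):

```javascript
// Page object: all selectors and DOM plumbing live here, so test cases
// read as intent ("log in as X") with no duplicated lookup code.
class LoginPage {
  constructor(driver) {
    this.driver = driver; // any object exposing findElement(cssSelector)
  }
  login(user, password) {
    this.driver.findElement('#login').sendKeys(user);
    this.driver.findElement('#password').sendKeys(password);
    this.driver.findElement('#submit').click();
  }
}

// Stub driver so the sketch is self-contained; in reality this would be
// a selenium-webdriver instance talking to a node in the Selenium Grid.
function makeStubDriver(log) {
  return {
    findElement: (selector) => ({
      sendKeys: (text) => log.push(`${selector} <- ${text}`),
      click: () => log.push(`${selector} clicked`)
    })
  };
}

const log = [];
new LoginPage(makeStubDriver(log)).login('ivan', 'secret');
console.log(log);
// [ '#login <- ivan', '#password <- secret', '#submit clicked' ]
```

Changing a selector then touches one class, not every test that logs in — which is most of what "eliminating duplication" buys.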
The testing itself runs in a deployed, distributed Selenium Grid, where machines with a given OS and a set of browser versions await their moment of fame (which, in fact, comes quickly). The tests are launched from TeamCity: some automatically, via a build trigger — for example, the smoke tests that run after every deployment to the test environment — and some manually, on demand, such as the heavier suites from the regression set that reveal newly introduced bugs. Speaking of bugs: the automated tests cover more than surface-level GUI testing of the portal — most of them are complex and also exercise the database and web-service levels. So when a test fails, the tester gets not just a screenshot saying "it broke here, dig further" but a sensible report with information on, say, the data received and the data left in the database.
In addition to the test environment, we also have smoke tests for production; they are fewer in number and cover only the critically important functionality, in case of unforeseen failures.
I would appreciate comments and questions on the subject — they will help shape the following articles, where we will describe in more detail how everything is arranged.