
Spherical testing in a vacuum: how it is, how it should be, and how it will be

Testing occupies a special place in the work of each of us. It is a very important, difficult, not the most pleasant, and often underestimated part of our work. So I, as a practicing developer and technical manager of a small startup, was glad to have the opportunity to talk with an expert in this field and ask him my burning questions. Why don't programmers use TDD? How do you solve the problems of unit testing code that depends on a database? How can we get rid of the "human factor" and finally automate user interface testing?



As part of the preparations for Joker 2016, a post about legacy code was published, which sparked a heated discussion of testing in Java. We decided to continue that discussion in an interview with Nikolai Alimenkov.
Nikolai is an expert in Java development with 12 years of experience. In addition to his main job, he is a co-founder of and trainer at the XP Injection training center, and an active participant and speaker at international conferences. He has helped organize the Selenium Camp, JEEConf, XP Days Ukraine, and IT Brunch conferences. We talked about what could be improved in how our teams test "here and now", and about the technological changes we should prepare for in the future.

- Nikolai, my first question is about self-testing code that uses asserts inside itself. What is your attitude to this practice?

- It seems to me that this idea was flawed from the start. It was assumed that such checks could replace tests and make the code self-verifying, but, unfortunately, it did not work out. The reason is very simple: the idea relies on the developer thinking about the implementation and about possible side effects at the same time, while writing the code. But developers are not that good at switching context. Unit tests, by contrast, force you to focus on testing first and only then return to the development focus.

In the end, asserts were replaced not by tests but by runtime checks: for example, if something is null, an exception should be thrown. Assertions are not mandatory - you can disable them, and then the check is simply not performed. Today there are many other approaches that do this better. For example, the annotation-based approach, where we can put a NotNull annotation on a method's input parameters or on a variable, and handlers will perform the check and throw an exception. There are also dedicated validation frameworks that work quite well.
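
To illustrate the difference (a minimal sketch, not from the interview; the class and method names are made up): a plain assert disappears unless the JVM is started with -ea, while an explicit runtime check is always enforced.

```java
import java.util.Objects;

public class UserService {

    // Plain assert: silently skipped unless the JVM runs with -ea (assertions enabled).
    public void deactivate(String userId) {
        assert userId != null : "userId must not be null";
        // ... deactivate the user ...
    }

    // Runtime check: always enforced, regardless of JVM flags.
    public void activate(String userId) {
        Objects.requireNonNull(userId, "userId must not be null");
        // ... activate the user ...
    }
}
```

An annotation-based approach (for example, a NotNull annotation checked by a validation framework or by the IDE) moves the same guarantee out of the method body.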

But asserts, it seems to me, died as soon as they appeared. I have seen a lot of code in a great many companies, and I have not seen a single company where they were used seriously - only isolated cases.

- In theory, everyone understands that test-driven development is great and good. But in practice, not all code is covered by unit tests. Why do you think this happens, aside from developer laziness?

- It is not about developer laziness here. In my opinion, there are two reasons.

The first is that people do not know how to do it. To develop with TDD, you need preparation, and even that is not enough: you also have to understand the tools, how to use them, and what advantage they give. A person who takes a course, studies TDD on their own, or sits down to work with someone competent who already practices TDD sees so many advantages that afterwards it becomes obvious it would be silly not to work this way.

And the second reason is that many developers, especially those with high self-esteem, switch into "architect mode". That is when a person glances at the task with one eye and says: "OK, that's it, I see it. Here I will have a factory, there will be such-and-such a pattern, and here such-and-such." And he immediately throws these thoughts into the code. Then the time comes to integrate everything he "designed" with the rest of the code, and it becomes clear that it does not integrate. Or someone looks at the code during code review, and it turns out that all the methods are gigantic, nothing is clear, and the ifs are nested five levels deep. Surely everyone has seen examples where "Hello, world!" is expressed through so many design patterns that you can no longer tell that what is in front of you is "Hello, world!".

When you work with TDD, you have written a test, and now your task is simply to make it pass. Your task is not to produce a super cool design. The task of making a super cool design arises after the code works. Then you look at it and say: "So, I wrote a simple solution. Can I make it more beautiful? More elegant? Reusable?" And if not - well, fine. It works, so it works, let's move on. So those are the reasons: "architect mode", the inability to write tests, and the inability to work with the right tools.
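
The rhythm described above - write a failing test, make it pass with the simplest thing that works, then consider refactoring - might look roughly like this minimal sketch (hypothetical DiscountCalculator; JUnit 5 assumed):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// The test is written first and drives the shape of the API.
class DiscountCalculatorTest {

    @Test
    void tenPercentDiscountIsAppliedToTheOrderTotal() {
        DiscountCalculator calculator = new DiscountCalculator();

        double discounted = calculator.apply(200.0, 10);

        assertEquals(180.0, discounted, 0.001);
    }
}

// The simplest implementation that makes the test pass; refactor later if it is worth it.
class DiscountCalculator {
    double apply(double total, int percent) {
        return total - total * percent / 100.0;
    }
}
```

Because the test exists before the class, the constructor and method signature are dictated by how convenient they are to call from the test.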

- Well, all the same, it seems to me that it is not only a matter of being unable to write tests, but also of being unable to write testable code - of the difficulties involved in writing code that can be tested.

- And you don't have to write it specially. If you work with TDD, the task of writing testable code never comes up, because you have no chance of writing untestable code. That's the whole trick!

- Hm! Yeah! A simple idea - you can't argue with that!

- When you first sketch out in a test how the code should look so that the test is convenient, you inevitably end up with testable code that can be tested cleanly, integrates well, and has a nice API. But if you decide to add tests after the fact, then, of course, testability becomes a very real concern.

If you first wrote some code and then approach it and say, "Well, now I'll write my first unit test for it," you often hit a wall. Here is a classic example: you made a method with three boolean parameters, and you pass them like foo(true, false, true). Then you look at it yourself and say: "Aah, what is this all about? I can't understand a thing!" Or, for example, in order to call one method you need so much setup that you forget why you were writing the test in the first place. That is exactly what happens when you write tests after the fact.
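
A small sketch of the readability problem with bare boolean flags and one common remedy (the exporter, flags, and enums are hypothetical, not from the interview):

```java
// The original shape: three bare boolean flags whose meaning is invisible at the call site.
class ReportExporter {
    void export(boolean compress, boolean includeHeader, boolean encrypt) {
        // ... write the report ...
    }
}
// A call like new ReportExporter().export(true, false, true) tells the reader nothing.

// One common remedy: replace each flag with an expressive enum.
enum Compression { ENABLED, DISABLED }
enum Header { INCLUDE, OMIT }
enum Encryption { ENABLED, DISABLED }

class ReadableReportExporter {
    void export(Compression compression, Header header, Encryption encryption) {
        // ... write the report ...
    }
}

class ExportDemo {
    public static void main(String[] args) {
        // The same call now reads as a sentence.
        new ReadableReportExporter().export(Compression.ENABLED, Header.OMIT, Encryption.ENABLED);
    }
}
```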

And if you work with TDD, it goes like this: you write, you write, you test, and finally you look and say: "Oh, here is a beautiful API! This is exactly how it should look, the way it is supposed to be!" And the whole API was generated quickly. Because, fortunately - if we are talking about Java development - a lot of things are generated by the IDE from the test rather than written by hand. As a result, a person who works with TDD works a little faster thanks to this factor. He does not hand-write a single method signature, a single constructor, or a single field. It is all generated, and extremely fast. All that a person practicing TDD writes by hand is the method implementations. The IDE takes over the creation of classes, constructors, getters, setters, and method declarations; it helps a lot and saves a huge amount of time.

- But still, tests sometimes force you to interfere with the implementation of the code. For example, I want to make something private, but I have to open it up.

- No, you do not have to open it up. Here we come back to the fact that we do not know how to do it right. When you want to test something inside, and it is hidden behind private, it only means that the class you are testing has too many responsibilities. In a good design you would change things a little and extract the aspect of the code you want to test into a separate class whose job is to do exactly that. For an experienced developer this is a hint that the design has become too complex. You can, of course, make one class and cram everything into it: it will print, and save to the database, and transform into JSON, and compute some algorithms. In the end you get an over-complicated solution. And when you work as a team, such code is very hard to modify, because everyone keeps climbing into it and changing it. So this is the hint: if you look at the code and say, "How can I test this? I need to open it up," that is an immediate alarm bell that the design should be changed.
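
A minimal sketch of the refactoring being described (hypothetical classes; the idea is to move the hidden responsibility into its own small, publicly testable unit):

```java
// Before: ReportService hides JSON formatting in a private method, so a test would
// have to "open it up". After the extraction below, the responsibility lives in its
// own class with a public API that can be tested directly.
class JsonFormatter {
    String format(String key, String value) {
        return "{\"" + key + "\":\"" + value + "\"}";
    }
}

class ReportService {
    private final JsonFormatter formatter = new JsonFormatter();

    String buildReport(String reportName) {
        // ReportService only delegates; it no longer needs to expose internals for tests.
        return formatter.format("report", reportName);
    }
}
```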

- How interesting. Okay. Well then, let's talk about tests that need to use a database. The need to constantly have a database available, configure connections, and run database-dependent unit tests turns into hell. To be honest, we disable them in our build scripts. What should we do? In some places we tried to write mocks of JDBC data sources - it was very difficult.

- Honestly, I am surprised that so much time has passed and this question still comes up. It means I have not yet reached everyone I could have reached with this message.

I started talking about this issue a long time ago, and I have a talk called "TDD for databases" that I have given at several big conferences. There is a video and a presentation in which I demonstrate, in live-coding mode, how to do it: how the database is connected and how it is then used in the tests.

First, do not mock the JDBC API. Because if we mock the JDBC API, what are we testing? Not that our database integration works correctly, but that we have sent something that looks like a more or less correctly formulated SQL query. But in SQL you can easily rewrite a query so that it looks different while remaining essentially the same. For example, swap the parts of an AND: you can write WHERE "user" = 'Vasya' AND "role" = 'admin', or the other way around. It turns out that if we build mocks around such queries, then after a change that alters nothing functionally, we are no longer testing anything. That is why integration tests should be written against the database: tests that bring up a real context, bring up a real database, and work with it.
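
A hedged illustration of why such mocks are brittle (hypothetical DAO; Mockito assumed): the test below pins the literal SQL text, so reordering the AND conditions - a functionally identical query - breaks it.

```java
import static org.mockito.ArgumentMatchers.anyString;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class UserDao {
    private final Connection connection;

    UserDao(Connection connection) {
        this.connection = connection;
    }

    boolean isAdmin(String user) throws SQLException {
        PreparedStatement statement = connection.prepareStatement(
                "SELECT 1 FROM users WHERE name = ? AND role = 'admin'");
        statement.setString(1, user);
        return statement.executeQuery().next();
    }
}

class UserDaoMockTest {
    void verifiesOnlyTheSqlText() throws SQLException {
        Connection connection = mock(Connection.class);
        PreparedStatement statement = mock(PreparedStatement.class);
        ResultSet resultSet = mock(ResultSet.class);
        when(connection.prepareStatement(anyString())).thenReturn(statement);
        when(statement.executeQuery()).thenReturn(resultSet);
        when(resultSet.next()).thenReturn(true);

        new UserDao(connection).isAdmin("Vasya");

        // This assertion is coupled to the exact SQL string, not to actual behaviour.
        verify(connection).prepareStatement(
                "SELECT 1 FROM users WHERE name = ? AND role = 'admin'");
    }
}
```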

- But working with a real database is very slow!

- This is where in-memory databases come to the rescue. There is good old HSQLDB, and there is H2, which is, let's say, its modern descendant. Not only can H2 run as an in-memory database, it can also work in compatibility modes for the syntax of different databases. That is, you can say: "H2, start up and work in MySQL syntax. H2, start up and work in Oracle syntax." They are not 100% compatible, but they nonetheless solve most of the problems.
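
A minimal sketch of what this looks like in practice (the database name is arbitrary; MODE=MySQL is one of H2's compatibility modes):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

class InMemoryTestDatabase {
    // Opens an in-memory H2 database that emulates MySQL syntax.
    // DB_CLOSE_DELAY=-1 keeps the database alive between connections within the JVM.
    static Connection open() throws SQLException {
        return DriverManager.getConnection(
                "jdbc:h2:mem:testdb;MODE=MySQL;DB_CLOSE_DELAY=-1", "sa", "");
    }
}
```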

Plus, plenty of articles have already been written on how to bring up a real database quickly - really quickly - with the help of a RAM disk. That is, instead of mapping it onto the disk subsystem, you create a RAM disk and map the database into memory. After all, to write unit tests we do not need to load a 10-gigabyte dump from production, right? We are not going to run any performance scenarios there. We write normal unit tests that should check our logic against a small number of records.

And finally, another helper here is DbUnit. DbUnit makes it very easy to manage test data sets: you can keep them in XML, in JSON, in whatever is convenient. In my opinion, XML works best for these tasks, because that way we get structured data. And in this case we can easily have data sets focused on specific tests. For example, if we need to check that search works, we insert five to ten records that demonstrate the variety of what we are searching for, and we focus only on those records. And we insert only the columns we need.
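
A hedged sketch of such a focused data set with DbUnit (table, column, and class names are hypothetical; the XML is embedded as a string only to keep the example self-contained):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

import org.dbunit.database.DatabaseConnection;
import org.dbunit.database.IDatabaseConnection;
import org.dbunit.dataset.IDataSet;
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
import org.dbunit.operation.DatabaseOperation;

class SearchTestData {

    // Only the rows and columns the search test actually needs.
    private static final String DATASET =
            "<dataset>"
          + "  <users id=\"1\" name=\"Vasya\" role=\"admin\"/>"
          + "  <users id=\"2\" name=\"Petya\" role=\"user\"/>"
          + "</dataset>";

    static void insert(java.sql.Connection jdbcConnection) throws Exception {
        IDataSet dataSet = new FlatXmlDataSetBuilder().build(
                new ByteArrayInputStream(DATASET.getBytes(StandardCharsets.UTF_8)));
        IDatabaseConnection dbUnitConnection = new DatabaseConnection(jdbcConnection);
        // CLEAN_INSERT clears the listed tables and inserts exactly the rows above.
        DatabaseOperation.CLEAN_INSERT.execute(dbUnitConnection, dataSet);
    }
}
```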

If we are talking about a situation where the data is interconnected, then tricks such as turning constraints off on the fly come into play. For example, we open a Connection and say: "Please disable all constraint checks." It depends on the database: in some it can only be done globally, in others within a single connection. Disabling the constraints allows us, if we are testing, say, a search for users, not to be distracted by all the additional data that has to accompany those users, and not to insert it.
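
With H2, for instance, this is a single SQL command (a minimal sketch, assuming the in-memory database from above):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

class ConstraintSwitch {

    // Turns foreign-key checks off for this H2 database so a test can insert
    // only the rows it cares about, without all the related records.
    static void disableReferentialIntegrity(Connection connection) throws SQLException {
        try (Statement statement = connection.createStatement()) {
            statement.execute("SET REFERENTIAL_INTEGRITY FALSE");
        }
    }

    static void enableReferentialIntegrity(Connection connection) throws SQLException {
        try (Statement statement = connection.createStatement()) {
            statement.execute("SET REFERENTIAL_INTEGRITY TRUE");
        }
    }
}
```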

Another trick is running a test inside a transaction. The test inserts data in a transaction, makes the query it needs to retrieve that data in the same transaction, and at the end we roll everything back. Obviously, not every test can be written this way. It works especially well for tests that read data. For tests that insert data it is not always a good fit, because there it is precisely the interesting part to see how the constraints fired, whether the right exception was thrown, whether we caught and wrapped it, and so on.
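
A sketch of that rollback pattern with plain JDBC (hypothetical table; with Spring, a @Transactional test achieves the same rollback-by-default behaviour):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

class TransactionalReadTest {

    void findsUserInsertedInsideTheSameTransaction(Connection connection) throws Exception {
        connection.setAutoCommit(false);
        try (Statement insert = connection.createStatement()) {
            insert.executeUpdate("INSERT INTO users (id, name) VALUES (1, 'Vasya')");

            try (PreparedStatement query = connection.prepareStatement(
                    "SELECT name FROM users WHERE id = ?")) {
                query.setInt(1, 1);
                try (ResultSet rs = query.executeQuery()) {
                    if (!rs.next() || !"Vasya".equals(rs.getString("name"))) {
                        throw new AssertionError("expected to read back the inserted user");
                    }
                }
            }
        } finally {
            connection.rollback(); // nothing from this test remains in the database
        }
    }
}
```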

What I have listed is not an exhaustive set of solutions. It is what exists, what everyone knows, what has been known for a long time. More and more new solutions keep appearing that extend this functionality, make it more convenient, and make database testing comfortable.

- For a long time I had the impression that automating user interface testing is such a complicated and unreliable thing that it is better to trust people working from a more or less loosely formulated test script than to write automated scripts, and I had pretty much written off UI test automation.

- Well, that is a mistake, because in recent years a huge number of tools and approaches have appeared. If we talk about the web interface, then yes, once upon a time we danced around the browser and invented all sorts of hacks to "poke" at the page itself through JavaScript. But now everything has become much simpler, because real standards are emerging. If we are talking about tools for a web application, it is WebDriver, support for which is now moving into the browsers themselves: the browsers embed their own WebDriver implementation, which lets you control the browser remotely. So we fully control the browser from the tests and do everything with it exactly like an ordinary user would. We can get any information out of the browser: all the logs, the internal processing, the communication with the outside world.
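
For reference, a minimal Selenium WebDriver test in Java might look like this (the URL and locators are placeholders, not from the interview):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

class SearchSmokeTest {
    public static void main(String[] args) {
        // Drives a real Chrome instance the same way a user would.
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.com/");
            driver.findElement(By.name("q")).sendKeys("selenium webdriver");
            driver.findElement(By.name("q")).submit();
            System.out.println("Page title after search: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}
```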

Mobile application testing has expanded in much the same way. There are more and more options for working both with real devices and with emulators, which also have their own frameworks whose APIs make it fairly easy to write tests.

As for real devices, one direction that is actively developing now is robotized testing. Mini-robots are printed on 3D printers; you place the phone inside, and they are programmable: there is a microcontroller that can be programmed by sending it commands. The robot has a stylus with a rubber tip, like a mechanical finger. The device is fixed in a certain position, and then it all works quite simply.

- How does it read what is on the screen?

- What is on the screen is read through a connection to the phone. You can read what is on the screen and, depending on that, perform actions. Clearly it is not suitable for everything, but interaction with some applications is very easy to automate this way. The approach is only just being actively developed; there is still no production-ready solution that could replace testers in this area, but it is becoming more and more popular.

- Fantastic! But our team is far from this, of course.

- Then here is another option: crowdsourced testing keeps evolving. You can submit your application to a large platform with lots of specialists - Chinese, Indians, our compatriots - who are paid for the hours they spend on the work. You can scale your testing as much as you like in what we might call a semi-automatic mode.

- Do these people get test scripts?

- They get test scripts, they get a description of how the application should work, and they test it. This is an interesting option that many people do not know about: they suffer from not having a team of testers at hand, even though they could hand this work over to such a platform.

Plus, another very actively developing area today is visual testing (for example, with Applitools). To explain it in very simple terms: a certain script is executed against the application using WebDriver, if we are talking about the web, and screenshots are taken. Then screenshot-comparison algorithms are applied to identify changes. For example, there was a screenshot taken yesterday; we looked at it and said: "Everything is fine here, this will be our baseline." Now we take today's screenshot. There are algorithms for analyzing these screenshots that let you track exactly where changes have occurred and group them.
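
A deliberately naive sketch of the underlying idea (real tools such as Applitools use far smarter comparison and grouping; the baseline file is hypothetical): capture a screenshot through WebDriver and count the pixels that differ from a stored baseline.

```java
import java.awt.image.BufferedImage;
import java.io.File;

import javax.imageio.ImageIO;

import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

class VisualCheck {

    // Returns how many pixels of today's screenshot differ from the baseline image.
    static long pixelsDifferentFromBaseline(WebDriver driver, File baselineFile) throws Exception {
        File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        BufferedImage current = ImageIO.read(shot);
        BufferedImage baseline = ImageIO.read(baselineFile);

        long differing = 0;
        int width = Math.min(current.getWidth(), baseline.getWidth());
        int height = Math.min(current.getHeight(), baseline.getHeight());
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                if (current.getRGB(x, y) != baseline.getRGB(x, y)) {
                    differing++;
                }
            }
        }
        return differing;
    }
}
```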

- And, say, we changed the font and the picture changed. What then, does the test fail now?

- No! These tools can handle that. They can group such changes. That is, they say: "Aha, here we see that the font has changed." They look at a few more screenshots and say: "This is change type number one: the font has changed." And you have to confirm it. You say: "This is correct, we really did change the font in the application."


Then you confirm it: you say, "Confirmed, this change is expected." And that becomes the new baseline against which the following runs are compared.

The same approach also helps with cross-browser testing: the same scenario is run in Firefox, Chrome, IE, Safari, and you keep a baseline for each browser and see where the rendering diverges.

If we look ahead, the WebDriver API is on its way to becoming a W3C standard, and the browser vendors themselves are implementing it, so this API and the tooling around it will only keep getting better.

- Nikolai, thank you for the interesting conversation!

- Thank you! Good luck!

Source: https://habr.com/ru/post/309502/

