
Approach to testing code in real life. Part two

I think almost everyone has come across this opinion: writing tests is difficult, all the examples are for the simplest cases, and in real life they do not work. My own impression in recent years is that writing tests is very simple, even trivial. Here I continue what I started in the first part.

We continue to test queries

I don't know about you, but writing complex queries against the database, whether SQL or HQL, has always made me a little apprehensive. Sometimes you write a perfectly competent query that absolutely should do what you need, ship the code to production, and everything seems fine, but after a while it turns out that there is some combination of data for which something is calculated completely wrong, or some records go missing, or something else of the sort. This applies in particular to queries with many outer joins and grouping by fields that may or may not be present. In short: complex queries. Our project had several of them, because, to make life easier for the Flex programmers, they exposed complex, branched data as a simple flat table. Our answer to all this complexity? Tests, of course! At the same time, I am a living person, and writing 20 tests full of repetitive (or slightly varying) code like this holds no interest for me:
public void test() {
    Application app1 = ApplicationMother.makeApplication();
    Version version11 = VersionMother.makeVersion(app1);
    save(app1, version11);
    DeploymentMother.makeDeployment(app1);
    DeploymentMother.makeDeployment(app1);
    // ... plus a few more similar setup lines ...

    List<ResultRow> result = myRepository.executeComplicatedQuery();

    assertEquals(app1.getId(), result.get(0).getAppId());
    assertEquals(version11.getId(), result.get(0).getVersionId());
    assertEquals(2, result.get(0).getDeploymentCount());
    // ... and similar assertions for the remaining rows ...
}
// ... and about 19 more tests of the same kind
To avoid this torment, remember that writing tests is also programming, no less interesting than writing the product itself, and all the approaches we apply to the latter apply just as well to the former. In particular, refactoring is our friend. Anything that even remotely resembles repeated code we ruthlessly extract into helper methods and helper classes. As a result, we get:
public void testDeletedDeploymentsDontCount() {
    TestData data = app("app 1")
            .withNonDeletedVersion().deployedTimes(2)
            .withDeletedVersions(1).deployedTimes(2)
        .app("app 2")
            .withNonDeletedVersion().deployedTimes(1);

    queryAndAssert(
        data.makeExpectedRow(1, 1, 2),
        data.makeExpectedRow(1, 2, 0),
        data.makeExpectedRow(2, 1, 0)
    );
    // TestData.makeExpectedRow takes: the application number, the version number,
    // and the expected number of deployments.
}
Better already, right? The code has become much more readable. When you use method chaining (I believe this is called the Expression Builder pattern), you can always add extra methods whose call creates whole structures at once, for example:
TestData data = times(5).deployedAppAndVersion()
                .times(2).deletedDeployedApp();
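The article does not show the helper code itself, so here is a minimal, hypothetical sketch of how such an Expression Builder style TestData helper could be structured. Everything below is an assumption: the real project code almost certainly persisted entities through the Object Mothers, and the static entry point app("...") used in the test above would live in a separate, statically imported helper (a Java class cannot declare a static and an instance method with the same signature, so the instance method is named andApp here).

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the fluent test-data builder; internals are assumptions.
public class TestData {

    // What we expect the complicated query to return for one row.
    public static class ExpectedRow {
        public final int appNumber, versionNumber, deploymentCount;
        ExpectedRow(int app, int version, int deployments) {
            this.appNumber = app; this.versionNumber = version; this.deploymentCount = deployments;
        }
    }

    private static class VersionData {
        final boolean deleted;
        int deployments;
        VersionData(boolean deleted) { this.deleted = deleted; }
    }

    private static class AppData {
        final String name;
        final List<VersionData> versions = new ArrayList<VersionData>();
        AppData(String name) { this.name = name; }
    }

    private final List<AppData> apps = new ArrayList<AppData>();

    // Entry point: TestData data = app("app 1")...
    public static TestData app(String name) {
        return new TestData().andApp(name);
    }

    // Continues the chain with the next application.
    public TestData andApp(String name) {
        apps.add(new AppData(name));
        return this;
    }

    public TestData withNonDeletedVersion() {
        currentApp().versions.add(new VersionData(false));
        return this;
    }

    public TestData withDeletedVersions(int count) {
        for (int i = 0; i < count; i++) currentApp().versions.add(new VersionData(true));
        return this;
    }

    // Records deployments against the most recently added version.
    public TestData deployedTimes(int times) {
        List<VersionData> versions = currentApp().versions;
        versions.get(versions.size() - 1).deployments += times;
        return this;
    }

    // In a real helper, data would also be saved here so the query has rows to work on.
    public ExpectedRow makeExpectedRow(int appNumber, int versionNumber, int deploymentCount) {
        return new ExpectedRow(appNumber, versionNumber, deploymentCount);
    }

    private AppData currentApp() { return apps.get(apps.size() - 1); }
}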
Yes, I had to write a certain amount of supporting code for this, but, firstly, it was genuinely interesting, and secondly, the result is beautiful. That is, I enjoyed myself, learned some new tricks, and it paid off, because now I can add any number of tests for any silly combination of the basic data at any time, and it will take me two minutes at most. If you are interested, here is an approximate breakdown of the time spent. And I am afraid even to imagine how much time it would have taken if every error had instead come back as a bug report, and I had had to reopen that query, remember what kind of monster I had programmed and why, fix it, ship the code again, and so on. I hope it is clear that this approach is applicable not only to testing queries. So let us move on:

We test interaction with third-party services

In our case, the third-party service is the virtualization infrastructure, i.e. a service that starts, shuts down, and monitors virtual machines. The difficulties here are endless. They start with the fact that nobody really knows what it does inside ("there is a neonka inside it", aha), and end with the fact that its API was designed, to put it mildly, through the wrong end, back in the last century: for example, instead of telling the infrastructure "launch 3 virtual machines of this kind, and ping me when you finish", you have to issue a create command and then poll for status over and over, and so on. Remember also that every programmer runs all the tests on their own machine very often, and the team is geographically scattered, so working against the real infrastructure all the time is unrealistic.

So we arrive at splitting all our tests into several groups. People usually speak of unit, integration, and acceptance tests, but the integration group always confuses me a little: are the query tests described in the previous section unit or integration tests? So our group number one contains every test that can be run on a laptop, and group number two contains the tests that must be run inside the corporate network and that have external dependencies (the virtualization infrastructure, for example). There is also a group number three, but more on that later.

Let us ask ourselves: what, exactly, do we need to test, and in which cases? We do not test libraries and third-party services as such, so we will assume the infrastructure correctly performs whatever we tell it to do. Our project has a separate module that wraps the HTTP API in Java code. We cover this module with a number of simple tests of its own logic, of which there is not much, plus literally a couple of tests in the second group, which run only inside the corporate network and make sure that this connector can still connect, start and kill a virtual machine, and get its status. It is far more interesting and useful to test the logic of our own application, and this is where stubs come to our rescue. Fowler, as always, is our everything.
// VirtualInfrastructureManager is the interface behind which the real infrastructure client hides.
public class VirtualInfrastructureManagerStub implements VirtualInfrastructureManager {

    private List<VirtualMachineState> vms = new ArrayList<VirtualMachineState>();

    public VirtualMachineState createVm(VirtualMachineDescription vmDescription) {
        VirtualMachineState vm = makeVm(vmDescription);
        vms.add(vm);
        return vm;
    }

    public List<VirtualMachineState> pollForStatuses() {
        return vms;
    }

    // ... the rest of the interface methods, omitted here ...
}
Now, with a bit of Spring magic, in our test context we substitute this class wherever a virtual infrastructure is required. In all our tests we then check not whether a virtual machine is really created, but how our application behaves when it is.
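The article does not show how the substitution is wired up; here is a hedged sketch using a Java test configuration with a primary bean (the class and bean names are assumptions, and the project may well have used a test XML context instead):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

// Hypothetical test configuration: wherever the application asks Spring for a
// VirtualInfrastructureManager, it receives the stub instead of the real client.
@Configuration
public class TestVirtualInfrastructureConfig {

    @Bean
    @Primary
    public VirtualInfrastructureManager virtualInfrastructureManager() {
        return new VirtualInfrastructureManagerStub();
    }
}

The tests themselves then look like this: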
public void testDeploymentStarted() {
    Deployment deployment = DeploymentMother.makeNewDeployment();
    deploymentManager.startDeployment(deployment);
    assertDeploymentGoesThroughStatuses(NEW, STARTING, CONFIGURING, RUNNING);
}
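The assertDeploymentGoesThroughStatuses helper is not shown in the article. One hedged way to implement it is to poll the deployment until every expected status has been observed in order, failing on a timeout. The getStatus call, the DeploymentStatus enum, the timings, and the extra Deployment parameter are all assumptions (the article's helper takes no argument, so it presumably tracks the deployment under test in a field):

// Hypothetical implementation: polls the deployment and checks that it passes
// through the expected statuses in order, within a timeout.
private void assertGoesThroughStatuses(Deployment deployment, DeploymentStatus... expected)
        throws InterruptedException {
    long deadline = System.currentTimeMillis() + 30000;
    int next = 0; // index of the next status we expect to observe
    while (next < expected.length && System.currentTimeMillis() < deadline) {
        if (deploymentManager.getStatus(deployment) == expected[next]) {  // assumed accessor
            next++;
        }
        Thread.sleep(100); // a real helper would also guard against missing very fast transitions
    }
    // assumes org.junit.Assert.assertEquals is statically imported
    assertEquals("Deployment did not pass through all expected statuses", expected.length, next);
}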
To make things even more interesting, you can let a Chaos Monkey loose in our stub.
public class VirtualInfrastructureManagerStubWithChaosMonkey implements VirtualInfrastructureManager {

    private List<VirtualMachineState> vms = new ArrayList<VirtualMachineState>();
    private ChaosMonkey monkey;

    public VirtualMachineState createVm(VirtualMachineDescription vmDescription) {
        monkey.rollTheDice();
        VirtualMachineState vm = makeVm(vmDescription);
        vms.add(vm);
        return vm;
    }

    ...

    private class ChaosMonkey {
        public void rollTheDice() {
            if (iAmEvil()) {
                throw new VirtualInfrastructureException("Ha-ha!");
            }
        }
    }
}
Imagine how many interesting situations you can create if the monkey can be given an error probability. Or if it periodically "kills" our virtual machines at random?
public void testRunningDeploymentRecovers() {
    Deployment deployment = startDeploymentAndAssertRunning();
    ((VirtualInfrastructureManagerStubWithChaosMonkey) virtualInfrastructureManager)
            .getChaosMonkey().killVms(1);
    assertDeploymentGoesThroughStatuses(BROKEN, RECOVERING, RUNNING);
}
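A minimal sketch of what a configurable monkey could look like, with an error probability and the killVms method used above. All of this is assumed: the stub would expose it via getChaosMonkey() and share its list of virtual machine states with it.

import java.util.List;
import java.util.Random;

// Hypothetical chaos monkey: fails calls with a given probability and can
// make already created virtual machines disappear on demand.
public class ChaosMonkey {

    private final Random random = new Random();
    private final double errorProbability;
    private final List<VirtualMachineState> vms;

    public ChaosMonkey(double errorProbability, List<VirtualMachineState> vms) {
        this.errorProbability = errorProbability;
        this.vms = vms;
    }

    // Called by the stub before every operation.
    public void rollTheDice() {
        if (random.nextDouble() < errorProbability) {
            throw new VirtualInfrastructureException("Ha-ha!");
        }
    }

    // Simulates the infrastructure losing 'count' random virtual machines:
    // they simply vanish from the statuses the stub reports.
    public void killVms(int count) {
        for (int i = 0; i < count && !vms.isEmpty(); i++) {
            vms.remove(random.nextInt(vms.size()));
        }
    }
}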
And we get all this fun right on our own machines, without connecting to the corporate VPN and without waiting five minutes for each virtual machine to start.

Short digression: Continuous Integration

I'll be brief: if your team has more than one person, you absolutely need a continuous integration server. You need it as much as you need a code repository. What is the point of writing all these clever and beautiful tests if nobody runs them and catches the errors? On this project we had three different build processes. The first started automatically every time someone did a git push and ran the tests of the first group. There was a rule on the team: if you pushed, you do not go home until the first build is green (about 20 minutes). In some offices a plush cockroach is placed on the desk of whoever broke the build, in others the culprit has to wear an oversized yellow jersey saying "Kick me, I broke the build"; with us it was simply "you break it, you fix it".

If the first build succeeded, the second one was launched, running the tests of the second group, the ones that worked against the real infrastructure. That process was, unfortunately, quite long. If it finished successfully, the third build kicked in: it deployed the freshly built code to the test server and ran literally a couple of tests to make sure the upgrade had succeeded. It was at this stage that problems with changes to the database structure were caught. After all three builds we had a server that was guaranteed to run working code, and we could show it to clients.
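The article does not say how the build server told the first group of tests apart from the second. One hedged way to make the split visible to the build, sketched here purely as an illustration, is JUnit categories; the interface and class names are made up:

import org.junit.Test;
import org.junit.experimental.categories.Category;

// Marker interface for tests that need the corporate network / real infrastructure.
interface RequiresRealInfrastructure {}

public class VirtualInfrastructureConnectorTest {

    // Group two: only runs in the second build, inside the corporate network.
    @Test
    @Category(RequiresRealInfrastructure.class)
    public void testCanStartAndKillRealVm() {
        // ... talk to the real virtualization infrastructure ...
    }

    // Group one: no external dependencies, runs on every laptop and on every push.
    @Test
    public void testConnectorParsesStatusResponse() {
        // ... pure logic, no network ...
    }
}

The build tool (Maven Surefire supports including and excluding categories, for instance) or the CI job configuration can then run the annotated tests only in the second build.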

We test the application entirely

All this is wonderful, of course, but to sleep soundly we need to be sure that our application works as a whole, not just its individual pieces. Since each individual module and their combinations are well covered with tests, we do not need to re-check every aspect of the behaviour and do the same work twice. Here it is useful to go back to the requirements for your product. For example, one of the user stories said: "As a user, I want to be able to run an application that requires MySQL working in master-slave replication mode." We have tests that check uploading an application, and tests that guarantee the correctness of the information we send to each virtual machine in all cases. But the code that actually configures all the necessary software on a virtual machine is, firstly, written by other people and, secondly, in a different language. What do we do in this case? We follow the story description literally. It says "my application requires master-slave"? Fine. We write a primitive web application consisting of two JSP pages. The first page opens a JDBC connection to the master and writes a record to the database; the second opens a JDBC connection to the slave and reads it back. Creating this test application takes 10 minutes. Our test uploads this application into our product, asks it to start it in the required mode, and then simply makes an HTTP request to the first page, then to the second. If the expected text appears on the second page, everything is in order. If you approach the list of requirements creatively, very few such full tests are needed. But imagine what a huge help they were when we were told to urgently add support for a couple of other operating systems for the virtual machines!
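For illustration, a hedged sketch of the "write through the master page, read from the slave page" check, assuming the test simply issues HTTP requests to the two JSP pages of the helper application. The URLs, parameter names, and class name are all made up:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Hypothetical end-to-end check: write a marker via the master page, then
// verify it becomes visible via the slave page.
public class MasterSlaveReplicationCheck {

    public static boolean replicationWorks(String baseUrl, String marker) throws Exception {
        // First page inserts 'marker' into the master database.
        fetch(baseUrl + "/write.jsp?value=" + marker);
        // Second page reads from the slave; the marker must have replicated.
        String slavePage = fetch(baseUrl + "/read.jsp");
        return slavePage.contains(marker);
    }

    private static String fetch(String url) throws Exception {
        HttpURLConnection connection = (HttpURLConnection) new URL(url).openConnection();
        StringBuilder body = new StringBuilder();
        try (BufferedReader reader =
                 new BufferedReader(new InputStreamReader(connection.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                body.append(line).append('\n');
            }
        }
        return body.toString();
    }
}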

Testing the web interface

This is a completely separate topic, very interesting, with pitfalls of its own. I will not go into it now, because on the project described here we did not do it: the interface was written in Flex by another group, with cockroaches of their own. I protested, intrigued, did what I could, but nothing changed. I last tested web interfaces myself about a year and a half ago, on a completely different project, and by now I would find it hard to give good examples of web tests from memory. I can only say that Selenium is your very big friend. You may ask: how, then, did we test our current project end to end? Specifically for this we had to write a set of REST web services, and all the tests called them. Our big victory, I believe, is that we made the Flex interface talk to these same services, so we could be fairly sure that the product worked, and if something did not work, it was an interface problem. By the way, since the interface always lagged behind us in functionality, we wrote a command-line client that accessed the web services, and potential customers liked this client so much that at one point management considered throwing out the web interface altogether. But that is a completely different story.

Conclusion

I did not set out to write a testing textbook or to describe every kind of test, nor did I try to prove yet again that writing tests is a must. I just wanted to show that writing tests is not difficult at all and can be fun (not to mention useful). A few conclusions and observations:


Source: https://habr.com/ru/post/122043/

