
Yandex's experience: how to build your own report for automated tests

I want to share my experience of creating good reports for automated tests and, at the same time, invite you to the first Yandex event dedicated specifically to testing.

First, a few words about the event. On November 30 in St. Petersburg we will hold Test Environment, our first event specifically for testers. There we will talk about how we test, what we have done to automate it, how we work with errors, data and graphs, and much more. Participation is free, but there are only 100 seats, so make sure you register in time.

For us, Test Environment is first of all a platform for communication. We want not only to talk about ourselves, but also to hear from the participants about how they work, to exchange knowledge and to answer questions. We think there will be plenty of common topics, and to get you thinking about them right now, we are starting a series of publications about testing at Yandex.
Several talks at Test Environment will be devoted to test automation, including mine. So, let me begin.
image
There are unit tests, and there are high-level tests. When their number starts to grow, analyzing the results of test runs becomes a problem. Tell me honestly: who among you has not thought of writing your own report?

One with detailed logs, screenshots, request/response dumps and other additional information (which, by the way, makes it much easier to find the specific causes of an error). I am sure some of you have even succeeded. The problem is that it is difficult to make one universal report for all types of tests, while making a separate report for a specific task takes a long time. Unless, of course, you happen to use JUnit and Maven. In that case, a simple report for a specific type of test can be built in a few hours. Let's see why we need a test report other than the standard xUnit one at all.

High-level tests differ from unit tests in several ways:

  1. They cover much more functionality, which makes it harder to localize a problem. For example, a test that goes through the web interface exercises the API, which in turn exercises the database, which in turn... well, you get the idea.
  2. Such tests interact with the system through intermediaries: a browser, an HTTP server, a proxy, third-party systems, each with logic of its own.
  3. There are usually quite a lot of such tests, so you often have to introduce additional categorization: components, functional areas, criticality.

All these factors significantly slow down problem localization. For example, here is what the error "Cannot click on element 'Search Button'" in a web-interface test may actually mean:


If we add to the results of this test a screenshot, the page source, a network log and a space-weather report for the datacenter area, it will be much easier to point to the specific problem, which means we will spend less time on it. This is where the need for a custom report with additional information arises.

Once upon a time, there was a test


As a test subject for our experiments, we take a completely ordinary test:

 public class ScreenShotDifferTest {

     private final long DEVIATION = 20L;

     private WebDriver driver = new FirefoxDriver();

     public ScreenShooter screenShooter = new ScreenShooter();

     @Test
     public void originPageShouldBeSameAsModifiedPage() throws Exception {
         BufferedImage originScreenShot = screenShooter.takeScreenShot("http://www.yandex.ru", driver);
         BufferedImage modifiedScreenShot = screenShooter.takeScreenShot("http://beta.yandex.ru", driver);
         long diffPixels = screenShooter.diff(originScreenShot, modifiedScreenShot);
         assertThat(diffPixels, lessThan(DEVIATION));
     }

     @After
     public void closeDriver() {
         driver.quit();
     }
 }

Let's walk through the code:


In this form, the test can do without a fancy report, since it always compares the same pair of pages. But it becomes much more useful once you add standard JUnit parameterization to it:

  @RunWith(Parameterized.class)
  public class ScreenShotDifferTest {

      ...

      private String originPageUrl;

      private String modifiedPageUrl;

      public ScreenShotDifferTest(String title, String originPageUrl, String modifiedPageUrl) {
          this.modifiedPageUrl = modifiedPageUrl;
          this.originPageUrl = originPageUrl;
      }

      @Parameterized.Parameters(name = "{0}")
      public static Collection<Object[]> readUrlPairs() {
          return Arrays.asList(
                  new Object[]{"Yandex Main Page", "http://www.yandex.ru/", "http://beta.yandex.ru/"},
                  new Object[]{"Yandex.Market Main Page", "http://market.yandex.ru/", "http://beta.market.yandex.ru/"}
          );
      }

      ...
  }

It is better to pull the data from a repository that the person using the tests has access to, but for clarity the approach above works fine.
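
A repository-backed variant can be sketched with plain file reading. The `UrlPairReader` class and its comma-separated `title,originUrl,modifiedUrl` layout are assumptions made up for illustration here, not part of the original project:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Hypothetical loader: reads "title, originUrl, modifiedUrl" lines into
// the Object[] rows expected by the @Parameterized.Parameters method.
public class UrlPairReader {

    public static List<Object[]> read(Path csv) throws IOException {
        List<Object[]> pairs = new ArrayList<>();
        for (String line : Files.readAllLines(csv)) {
            line = line.trim();
            // skip blank lines and comments
            if (line.isEmpty() || line.startsWith("#")) continue;
            String[] parts = line.split("\\s*,\\s*");
            pairs.add(new Object[]{parts[0], parts[1], parts[2]});
        }
        return pairs;
    }
}
```

With this in place, the parameterization method would simply return `UrlPairReader.read(...)` instead of a hard-coded list.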

Now imagine that we have not 2 parameter sets, but 20, or better yet 200. The standard test report will look like this:

image

What conclusion can be drawn from the test report?

image

Let's think together about what data we would need in order to quickly make decisions about errors:

  1. Screenshots of the original page and the candidate.
  2. A diff of the screenshots (for example, all differing pixels can be marked in red).
  3. The page source of the original and the candidate.

With such data at hand, drawing conclusions about problems becomes much easier, and therefore cheaper.
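
The diff from point 2 can be sketched in a few lines on top of the JDK's BufferedImage. The `PixelDiff` class and its method names are invented for illustration, and for simplicity the sketch assumes both screenshots have the same dimensions:

```java
import java.awt.image.BufferedImage;

// Sketch of a pixel-by-pixel comparison: counts differing pixels and
// paints them red on the output image; identical pixels are copied as-is.
// Assumes origin and modified have equal width and height.
public class PixelDiff {

    public static final int RED = 0xFFFF0000; // opaque red in ARGB

    public static long diff(BufferedImage origin, BufferedImage modified, BufferedImage out) {
        long diffPixels = 0;
        for (int x = 0; x < origin.getWidth(); x++) {
            for (int y = 0; y < origin.getHeight(); y++) {
                int originPixel = origin.getRGB(x, y);
                if (originPixel == modified.getRGB(x, y)) {
                    out.setRGB(x, y, originPixel);
                } else {
                    out.setRGB(x, y, RED);
                    diffPixels++;
                }
            }
        }
        return diffPixels;
    }
}
```

The returned count is what the test compares against the allowed DEVIATION, and the marked image is what goes into the report.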

Report implementation


In order to build an advanced test report, we need to go through three stages:

  1. Model. It will contain all the information that needs to be displayed in the report.
  2. Adapter. It collects all the necessary information from the test into the model.
  3. Report generation. From the collected data we generate a report based on templates.

So, in order.

Model


To solve this problem we will describe the model with an XSD schema and then generate Java classes from it with JAXB. Fortunately, our model contains little data and is easily described by a schema.

 <?xml version="1.0" encoding="UTF-8"?>
 <xsd:schema attributeFormDefault="unqualified" elementFormDefault="unqualified"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema"
             xmlns:ns="urn:report.examples.qatools.yandex.ru"
             targetNamespace="urn:report.examples.qatools.yandex.ru" version="2.1">

     <xsd:element name="testCaseResult" type="ns:TestCaseResult"/>

     <!-- result of a single test case -->
     <xsd:complexType name="TestCaseResult">
         <xsd:sequence>
             <xsd:element name="description" type="xsd:string"/> <!-- test description -->
             <xsd:element name="origin" type="ns:ScreenShotData" nillable="false"/> <!-- screenshot of the original page -->
             <xsd:element name="modified" type="ns:ScreenShotData" nillable="false"/> <!-- screenshot of the modified (candidate) page -->
             <xsd:element name="diff" type="ns:DiffData" nillable="false"/> <!-- screenshot diff -->
             <xsd:element name="message" type="xsd:string"/> <!-- error message, if any -->
         </xsd:sequence>
         <xsd:attribute name="uid" type="xsd:string"/> <!-- unique id of the test case -->
         <xsd:attribute name="title" type="xsd:string"/> <!-- human-readable title -->
         <xsd:attribute name="status" type="ns:Status"/> <!-- test status -->
     </xsd:complexType>

     <xsd:complexType name="ScreenShotData">
         <xsd:sequence>
             <xsd:element name="pageUrl" type="xsd:string"/> <!-- URL of the page that was shot -->
             <xsd:element name="fileName" type="xsd:string"/> <!-- screenshot file name -->
         </xsd:sequence>
     </xsd:complexType>

     <xsd:complexType name="DiffData">
         <xsd:sequence>
             <xsd:element name="pixels" type="xsd:long" default="0"/> <!-- number of differing pixels -->
             <xsd:element name="fileName" type="xsd:string"/> <!-- diff image file name -->
         </xsd:sequence>
     </xsd:complexType>

     <xsd:simpleType name="Status">
         <xsd:restriction base="xsd:string">
             <xsd:enumeration value="OK"/>
             <xsd:enumeration value="FAIL"/>
             <xsd:enumeration value="ERROR"/>
         </xsd:restriction>
     </xsd:simpleType>
 </xsd:schema>

The schema is ready! Now it only remains to generate classes from it. For this we use the powerful maven-jaxb2-plugin. The advantage of this plugin is that the classes are regenerated on every compilation. Thus, you can be 100% sure that the generated code matches the schema, and save yourself from errors like "oops, I forgot to regenerate...". The result of the plugin's work is the generated classes (be careful, they are huge):
TestCaseResult
 /**
  * Java class for the TestCaseResult complex type.
  *
  * The following schema fragment specifies the expected content contained
  * within this class:
  *
  * <complexType name="TestCaseResult">
  *   <complexContent>
  *     <restriction base="{http://www.w3.org/2001/XMLSchema}anyType">
  *       <sequence>
  *         <element name="message" type="{http://www.w3.org/2001/XMLSchema}string"/>
  *         <element name="description" type="{http://www.w3.org/2001/XMLSchema}string"/>
  *         <element name="origin" type="{urn:report.examples.qatools.yandex.ru}ScreenShotData"/>
  *         <element name="modified" type="{urn:report.examples.qatools.yandex.ru}ScreenShotData"/>
  *         <element name="diff" type="{urn:report.examples.qatools.yandex.ru}DiffData"/>
  *       </sequence>
  *       <attribute name="uid" type="{http://www.w3.org/2001/XMLSchema}string" />
  *       <attribute name="title" type="{http://www.w3.org/2001/XMLSchema}string" />
  *       <attribute name="status" type="{urn:report.examples.qatools.yandex.ru}Status" />
  *     </restriction>
  *   </complexContent>
  * </complexType>
  */
 @XmlAccessorType(XmlAccessType.FIELD)
 @XmlType(name = "TestCaseResult", propOrder = {"message", "description", "origin", "modified", "diff"})
 public class TestCaseResult {

     @XmlElement(required = true)
     protected String message;

     @XmlElement(required = true)
     protected String description;

     @XmlElement(required = true)
     protected ScreenShotData origin;

     @XmlElement(required = true)
     protected ScreenShotData modified;

     @XmlElement(required = true)
     protected DiffData diff;

     @XmlAttribute(name = "uid")
     protected String uid;

     @XmlAttribute(name = "title")
     protected String title;

     @XmlAttribute(name = "status")
     protected Status status;

     // Generated getters and setters (generated javadoc trimmed for brevity)

     public String getMessage() { return message; }

     public void setMessage(String value) { this.message = value; }

     public String getDescription() { return description; }

     public void setDescription(String value) { this.description = value; }

     public ScreenShotData getOrigin() { return origin; }

     public void setOrigin(ScreenShotData value) { this.origin = value; }

     public ScreenShotData getModified() { return modified; }

     public void setModified(ScreenShotData value) { this.modified = value; }

     public DiffData getDiff() { return diff; }

     public void setDiff(DiffData value) { this.diff = value; }

     public String getUid() { return uid; }

     public void setUid(String value) { this.uid = value; }

     public String getTitle() { return title; }

     public void setTitle(String value) { this.title = value; }

     public Status getStatus() { return status; }

     public void setStatus(Status value) { this.status = value; }
 }

The classes are ready too. Now you can easily serialize objects into XML files:

 TestCaseResult testCaseResult = ...
 JAXB.marshal(testCaseResult, file);


And read the objects back from an XML file:

 TestCaseResult testCaseResult = JAXB.unmarshal(file, TestCaseResult.class);


Adapter


Let me remind you that we need the adapter to fill the model with data from the test during its execution. To implement it, we will use the JUnit Rules mechanism, or more precisely, the TestWatcher rule:
 public abstract class TestWatcher implements org.junit.rules.TestRule {

     // called before the test starts
     protected void starting(org.junit.runner.Description description) {...}

     // called when the test finishes successfully
     protected void succeeded(org.junit.runner.Description description) {...}

     // called when the test is skipped,
     // e.g. because of a failed assumeThat()
     protected void skipped(org.junit.internal.AssumptionViolatedException e,
                            org.junit.runner.Description description) {...}

     // called when the test fails with an assertion error or exception
     protected void failed(java.lang.Throwable e, org.junit.runner.Description description) {...}

     // called when the test finishes, regardless of the outcome
     protected void finished(org.junit.runner.Description description) {...}
 }

Let's consider each method in turn and think about where we can collect the necessary data.

In addition to all of the above, our rule must be able to take and save screenshots, which is done in the following methods:


All files are placed in the target/site/custom directory, since that is the default location for reports.
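
Saving an artifact into that directory might look like the following sketch. The `ReportFiles` helper is invented for illustration; only the target/site/custom location and the {uid}-prefixed file naming come from the article:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Illustrative helper: writes report artifacts as {uid}-<suffix> files
// into target/site/custom, the directory the report plugin later scans.
public class ReportFiles {

    public static final Path REPORT_DIR = Paths.get("target", "site", "custom");

    public static Path save(String uid, String suffix, byte[] content) throws IOException {
        Files.createDirectories(REPORT_DIR); // create target/site/custom if missing
        Path file = REPORT_DIR.resolve(uid + "-" + suffix);
        return Files.write(file, content);
    }
}
```

A call like `ReportFiles.save(uid, "origin.png", pngBytes)` would then produce exactly the {uid}-origin.png file referenced from the testcase XML.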

After plugging in ScreenShotDifferRule, our test remains almost unchanged:
 @RunWith(Parameterized.class)
 public class ScreenShotDifferTest {

     private String originPageUrl;

     private String modifiedPageUrl;

     ...

     @Rule
     public ScreenShotDifferRule screenShotDiffer = new ScreenShotDifferRule(driver);

     public ScreenShotDifferTest(String title, String originPageUrl, String modifiedPageUrl) {
         this.modifiedPageUrl = modifiedPageUrl;
         this.originPageUrl = originPageUrl;
     }

     ...

     @Test
     public void originShouldBeSameAsModified() throws Exception {
         BufferedImage originScreenShot = screenShotDiffer.takeOriginScreenShot(originPageUrl);
         BufferedImage modifiedScreenShot = screenShotDiffer.takeModifiedScreenShot(modifiedPageUrl);
         long diffPixels = screenShotDiffer.diff(originScreenShot, modifiedScreenShot);
         assertThat(diffPixels, lessThan((long) 20));
     }

     ...
 }


Now, with the simple ScreenShotDifferRule in place, after each test we will get structured data of the following form:

1. {uid}-testcase.xml
 <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
 <testCaseResult status="OK"
                 title="originShouldBeSameAsModified[0](ru.yandex.qatools.examples.report.ScreenShotDifferTest)"
                 uid="ru.yandex.qatools.examples.report.ScreenShotDifferTest.originShouldBeSameAsModified[0]">
     <origin>
         <pageUrl>http://www.yandex.ru/</pageUrl>
         <fileName>{uid}-origin.png</fileName>
     </origin>
     <modified>
         <pageUrl>http://www.yandex.ru/</pageUrl>
         <fileName>{uid}-modified.png</fileName>
     </modified>
     <diff>
         <pixels>0</pixels>
         <fileName>{uid}-diff.png</fileName>
     </diff>
 </testCaseResult>


2. {uid}-origin.png
image

3. {uid}-diff.png
image

Report generation


We need to implement a Maven Report Plugin that will collect all the {uid}-testcase.xml files and generate an HTML page from them. To do this, we add a TestSuiteResult object to our model that aggregates all the TestCaseResult objects. I will not dig deep into writing plugins for Maven, as that is a topic for a separate article. Instead, let's look at a ready-made plugin that solves our problem.

So, we have the ScreenShotDifferReport plugin. The heart of the plugin is the public void exec() method. In our case it should:
  1. Find all the files with test-run data.
     File[] testCasesFiles = listOfFiles(reportDirectory, ".*-testcase\\.xml"); 
  2. Read them and convert them into objects.
     List<TestCaseResult> testCases = convert(testCasesFiles, new Converter<File, TestCaseResult>() { public TestCaseResult convert(File file) { return JAXB.unmarshal(file, TestCaseResult.class); } }); 
  3. Generate index.html from the data. As a template engine you can use FreeMarker with this template.
     String source = processTemplate(TEMPLATE_NAME, testCases); 
  4. Add information about this report to the aggregating Maven report.
     Sink sink = new Sink(); sink.setHtml(source); sink.close(); 
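
The first step above, file discovery, can be sketched with a plain FilenameFilter. `TestCaseFiles` is a hypothetical helper written for illustration, not the actual plugin code:

```java
import java.io.File;
import java.util.regex.Pattern;

// Sketch of the file-discovery step: list files in the report directory
// whose names match a regex such as ".*-testcase\\.xml".
public class TestCaseFiles {

    public static File[] listOfFiles(File dir, String regex) {
        final Pattern pattern = Pattern.compile(regex);
        File[] found = dir.listFiles((d, name) -> pattern.matcher(name).matches());
        return found == null ? new File[0] : found; // missing dir -> empty result
    }
}
```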

To build the finished report, run the mvn clean install command. For simplicity, you can download the project github.com/yandex-qatools/tests-report-example and run the command there. Afterwards you will find the project report in the tests-report-example module under the target/site/ directory.

Check result


Now we need to install the whole project: run mvn clean install in the project root. After it completes, we get artifacts ready for use. We then connect our newly created plugin to the autotest project alongside the standard Surefire report plugin:

 <plugin>
     <groupId>org.apache.maven.plugins</groupId>
     <artifactId>maven-site-plugin</artifactId>
     <version>3.2</version>
     <configuration>
         <reportPlugins>
             <plugin>
                 <groupId>ru.yandex.qatools.examples</groupId>
                 <artifactId>custom-report-plugin</artifactId>
                 <version>${project.version}</version>
             </plugin>
             <plugin>
                 <groupId>org.apache.maven.plugins</groupId>
                 <artifactId>maven-surefire-report-plugin</artifactId>
                 <version>2.14.1</version>
             </plugin>
         </reportPlugins>
     </configuration>
 </plugin>

And run the command mvn clean site.

Voila! After the tests pass, the site phase runs and generates two reports: the Surefire report and our custom report.

"Why build two reports?" you may ask. The thing is, the JUnit Rules mechanism is not perfect. If an exception is thrown in the test constructor or in the parameterization method, the rule object is never created, so no data is collected for the report, which in turn means the test does not appear in it. The data-collection process could be improved with a RunListener or a custom Runner, but that feels like redundant logic: all the information about broken tests is already in the Surefire report.

Summary


So, we have learned how to build simple reports using extensions of the JUnit and Maven frameworks.

Pros

  1. For free, we get all the features of the JUnit framework for running and organizing tests (parallel runs, parameterization, categories).
  2. Data and presentation are clearly separated. You can write the adapter in another language (Python, for example) but use the same plugin to generate the presentation, or use different plugins on the same data.
  3. We get report delivery to a repository (ssh, https, ftp, webdav, etc.) for free via the Maven Wagon Plugin.
  4. We can generate a "partial report". This is achieved by separating test execution from report building: one thread runs the tests (which produce data), while a second periodically rebuilds the report.

Cons

  1. It requires solid knowledge of the technologies involved (XSD, JAXB, JUnit Rules, Maven Reporting Plugin). If something goes wrong, you risk losing a lot of time.
  2. It is quite difficult to test the entire pipeline of building a complex report (from the schema to the HTML).

Recommendations

  1. Developing such systems is time-consuming. The first one took about 50 liters of coffee, two bags of cookies and 793 clicks on the Build button, counting the technology research and all the rakes we stepped on. Now creating a report for a specific task takes about two days. Estimate the time such a report will save you: it should be more than that.
  2. The greatest effect is achieved when the whole team takes part in reviewing such reports.


In this article I covered the use of the following technologies:
1. JUnit and JUnit Rules for implementing the adapter.
2. JAXB for serializing/deserializing the model to XML.
3. Maven Reporting Plugins for generating the report from the collected data.

The source code of the example is available on GitHub. A CI job that builds the report is also available.

Source: https://habr.com/ru/post/200364/

