This time we talked about automated testing with Alan Page, who had a hand in shipping Windows 95, Internet Explorer, and MS Office. Alan is a great specialist and a great conversationalist. In this interview he discusses non-trivial aspects of the testing process in plain, accessible language. We focused on where to draw the boundary between development and testing, problems with legacy code, evaluating the quality of tests, and the difference between testing large projects and small ones.

But first, a few facts from his biography. Alan Page has worked in test automation for nearly 25 years. He is the lead author of How We Test Software at Microsoft, a contributor to Beautiful Testing, and the author of a number of essays and notes on test automation collected as "The A Word". He maintains a blog on development and testing. Alan joined Microsoft on the Windows 95 team and later worked on early versions of Internet Explorer and on Office Lync. For two years he headed Microsoft's Test Excellence Center. In January 2017 he left Microsoft for Unity as Director of Quality.
At the moment, Alan is one of the keynote speakers at our upcoming December conference, Heisenbug 2017 Moscow, so we hurried to ask him about the main signs that it is time to move toward automating everything and everyone.
- Alan, how do you assess a team's readiness to move to automated testing? How can you tell whether the team has the expertise, and if not, how should you look for new people and structure their training?

Alan Page: As far as expertise is concerned, being able to work with frameworks such as Appium or Selenium is important, but not paramount. The goal of my team is to accelerate reaching a level of quality acceptable to the customer, so I look for things that make the team more effective at testing and at hitting that quality target. That certainly includes expertise in writing automated test cases, but it is far more important to be able to write meaningful, full-scale tests and to understand the testing tools that improve the efficiency of the whole team.
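To make "writing automated test cases" concrete, here is a minimal sketch of what such a check can look like in Python, in the style pytest discovers and runs. The function `normalize_username` is a made-up stand-in for any unit of product code:

```python
# A minimal, hypothetical example of a small automated test in Python.
# `normalize_username` stands in for real product code under test;
# pytest would discover and run test_normalize_username automatically.

def normalize_username(raw: str) -> str:
    """Trim whitespace and lowercase a username (example product code)."""
    return raw.strip().lower()

def test_normalize_username():
    # Each assert documents one expected behavior of the unit under test.
    assert normalize_username("  Alice ") == "alice"
    assert normalize_username("BOB") == "bob"
```

The point of Alan's answer is that a test like this is the easy part; knowing which behaviors are worth pinning down is where the expertise lies.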
- About technology: what are the must-have tools, IDEs, and plug-ins? What do you recommend reading about them? Who writes well and accurately about testing?

Alan Page: I probably won't say anything surprising here. Selenium is not going anywhere. Personally, I am impressed with Python and pytest (a note from the interviewer: I agree with him here), although somewhere in my heart I am still a C programmer. As for IDEs: I wrote in Notepad++ in different languages for years and switched to Sublime Text only six months ago, so I am hardly the one to give advice here.
As for blogs: you should subscribe to Richard Bradshaw (Twitter: @friendlytester); he is full of ideas on automation, and there is a lot of interesting material to borrow from him. His blog: thefriendlytester.co.uk.
- Many projects suffer from problems with legacy code: some parts of the codebase become difficult to maintain and virtually impossible to change. How does automated testing change on projects with legacy code (especially when there is a lot of it)? What are the benefits of covering legacy code with tests?

Alan Page: There are a couple of factors, of course. If the legacy code does not change, is quite old, and bugs in it are definitely not going to be fixed, I would not spend any time on automated tests for it.

In all other cases, tests will probably still be needed. Michael Feathers' book Working Effectively with Legacy Code describes such situations in detail and can help any development team get a handle on the problem, with ideas and strategies for making code easy to change and refactor and for writing the tests needed along the way.
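One of the core techniques Feathers describes is the characterization test: before changing legacy code, you pin down its current behavior, whatever that behavior is, so refactoring can be verified against it. A minimal sketch (the `legacy_discount` function is a hypothetical stand-in for real legacy code):

```python
# Sketch of a characterization test (a technique from Feathers' book):
# we assert what the legacy code *currently does*, not what it "should" do,
# so later refactoring can be checked against today's behavior.
# `legacy_discount` is a made-up stand-in for poorly understood old code.

def legacy_discount(total: float, is_member: bool) -> float:
    # Imagine this is old code we dare not rewrite blindly.
    if is_member and total > 100:
        return round(total * 0.9, 2)
    if total > 500:
        return round(total * 0.95, 2)
    return total

def test_characterize_legacy_discount():
    # These expected values were obtained by *running* the legacy code and
    # recording its output; they document current behavior, bugs included.
    assert legacy_discount(200.0, True) == 180.0
    assert legacy_discount(600.0, False) == 570.0
    assert legacy_discount(50.0, True) == 50.0
```

Once such tests are in place, the code can be restructured with some confidence that user-visible behavior has not drifted.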
- At what stage of a project should automated testing appear, and at what point should you start thinking about it?

Alan Page: I start thinking about automation as soon as I see a design document or a diagram on a whiteboard. Sometimes even earlier: as soon as conversations begin about what exactly we are going to build, I immediately ask myself, "How will we check and test this?" We sit down and discuss it with the team, and I sketch a mind map. By the time something appears that can physically be "touched", everything is already thought through in my head, so the process starts without a hitch.
- How does an automated testing system scale as the project grows? What is the qualitative difference between testing small and large projects?

Alan Page: Good question, and a great reason to turn to the test automation pyramid. At the base is a group of small tests (translator's note: this level is sometimes also called the "lower" level, or unit tests). That suite grows roughly linearly with the scale of the project. As the project's logic grows more complex and you move up to the middle level, the integration (or medium) tests can potentially start to grow. I try to limit dependencies where I can, but at this level of testing the growth of the project itself is usually not a significant problem.
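"Limiting dependencies" at the middle level often means replacing a real external service with a test double, so the test stays fast and deterministic as the project grows. A hedged sketch using Python's standard `unittest.mock`; the `OrderService` and its payment gateway are invented for illustration:

```python
# Sketch: a middle-level test that limits dependencies by stubbing the
# external payment gateway with unittest.mock. All names here
# (OrderService, gateway) are hypothetical.
from unittest.mock import Mock

class OrderService:
    def __init__(self, gateway):
        self.gateway = gateway  # external dependency, injected

    def place_order(self, amount):
        # Logic under test: charge the gateway, then report status.
        result = self.gateway.charge(amount)
        return "confirmed" if result else "failed"

def test_place_order_with_stubbed_gateway():
    gateway = Mock()
    gateway.charge.return_value = True  # no real network call is made
    service = OrderService(gateway)
    assert service.place_order(42) == "confirmed"
    gateway.charge.assert_called_once_with(42)
```

Because the gateway is injected rather than hard-wired, the same service can be exercised against the real dependency in a larger integration run.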
End-to-end tests (translator's note: also called top-level or "large" tests; the term usually refers to testing the system through its external interfaces) almost always require additional attention on large projects. You need tests that drive and analyze the whole system, and that is completely nontrivial, both in terms of automation and in terms of the tester's understanding of the system. Since a large project is often a system within systems, it is possible, but far from simple, to write a high-quality test suite that covers a substantial portion of user behavior.
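On a real project an end-to-end test would drive the deployed system through its external interfaces (with Selenium, an HTTP client, and so on). As a self-contained illustration of the idea, here is a toy "system within systems" exercised only through its public entry point; all component names are invented:

```python
# Toy end-to-end sketch: the test talks only to the outermost interface
# (Storefront.buy) and checks user-visible behavior, never internals.
# Inventory, Billing, and Storefront are hypothetical stand-ins for
# subsystems a production e2e test would reach over the network.

class Inventory:
    def __init__(self):
        self.stock = {"book": 2}
    def reserve(self, item):
        if self.stock.get(item, 0) > 0:
            self.stock[item] -= 1
            return True
        return False

class Billing:
    def __init__(self):
        self.charges = []
    def charge(self, item, price):
        self.charges.append((item, price))
        return True

class Storefront:
    def __init__(self, inventory, billing):
        self.inventory, self.billing = inventory, billing
    def buy(self, item, price):
        if not self.inventory.reserve(item):
            return "out of stock"
        self.billing.charge(item, price)
        return "order placed"

def test_user_can_buy_until_stock_runs_out():
    shop = Storefront(Inventory(), Billing())
    assert shop.buy("book", 10.0) == "order placed"
    assert shop.buy("book", 10.0) == "order placed"
    assert shop.buy("book", 10.0) == "out of stock"
```

The difficulty Alan describes is that in a large system each of these components is itself a moving system with its own state, deployment, and failure modes.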
- By what signs can one tell that something is wrong with the testing process?

Alan Page: The usual culprit here is "strange" or unreliable tests, and how the team handles false positives plays a big role. If they can quickly dig in and understand why a test behaves the way it does, there is no cause for concern. But when there is no such understanding, or worse, when such tests are simply ignored, you should suspect that something is clearly wrong.
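One common way to make the "strange test" question concrete is to rerun a failing test several times and classify it: a consistent failure points at a product bug, an intermittent one at a flaky test or environment. Tools such as the pytest-rerunfailures plugin automate this idea; the helper below is a hand-rolled sketch, not a real library API:

```python
# Sketch: classify a test as "passing", "failing", or "flaky" by rerunning it.
# classify_test and sometimes_fails are illustrative, not a real tool's API.

def classify_test(test_fn, runs=5):
    results = []
    for _ in range(runs):
        try:
            test_fn()
            results.append(True)
        except AssertionError:
            results.append(False)
    if all(results):
        return "passing"
    if not any(results):
        return "failing"   # deterministic failure: likely a product bug
    return "flaky"         # intermittent: investigate the test or environment

# Simulated flaky test: fails on every even-numbered invocation.
_calls = {"n": 0}
def sometimes_fails():
    _calls["n"] += 1
    assert _calls["n"] % 2 == 1

print(classify_test(sometimes_fails))  # flaky
print(classify_test(lambda: None))     # passing
```

The classification is only a starting point: Alan's criterion is whether the team then follows up and understands *why* the flaky test behaves that way.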
You should also pay attention (this may be related to the previous point) when the team does not run tests regularly, whether that is unit tests before check-in (translator's note: roughly, before a commit), integration tests before merging branches, or end-to-end tests before a release. If the team does not rely on its automated tests when making the business decision of whether to invest in even more extensive testing, I begin to suspect that their automated testing system is worse than I thought.
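The cadence Alan describes, with larger suites gating later stages, can be written down explicitly. A minimal sketch; the gate names and suite contents are invented, and real projects usually encode this in CI configuration rather than code:

```python
# Sketch: each pipeline gate runs progressively larger test suites.
# Gate names and suites are hypothetical examples of the cadence described
# in the interview (check-in -> merge -> release).

SUITES_BY_GATE = {
    "pre-commit":  ["unit"],
    "pre-merge":   ["unit", "integration"],
    "pre-release": ["unit", "integration", "end-to-end"],
}

def suites_to_run(gate):
    """Return the test suites that must pass before the given gate."""
    return SUITES_BY_GATE[gate]

print(suites_to_run("pre-merge"))  # ['unit', 'integration']
```

The useful property is that every later gate is a superset of the earlier ones, so a change never reaches release with less scrutiny than it had at merge time.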
- What are the key trends in automated testing, and where is the line between testing and development?

Alan Page: I think that today this boundary is more blurred than ever. Testers on my team routinely fix bugs in production code, and their developer colleagues write most of the automated tests (the small and medium tests I mentioned a little earlier). I regularly see testers brainstorming test ideas shoulder to shoulder with developers, testing in pairs, and debugging together. To be truly effective, the modern tester, as the test and quality specialist on a team building new functionality, should live on this blurred border, and I think quality will only improve as the border keeps blurring.
If the topic of testing is as close to your heart as it is to ours, you will most likely be interested in these talks at our December conference, Heisenbug 2017 Moscow:
This interview was prepared with the participation of Sergey Paramonov (varagian).