
The Future of Testing

With the joint efforts of the Testers Club members, we translated a series of notes by James Whittaker entitled "The Future of Testing". The series was originally published at the end of 2008, and in it James made a number of predictions about what the work of testers will look like 10-20 years in the future. His predictions are largely based on ideas that developed and continue to grow at Microsoft, where James was working at the time.

In translation, we collected all the notes in the series into one article, consisting of eight parts:
  1. Testsourcing
  2. Virtualization
  3. Information
  4. Moving testing earlier
  5. Visualization
  6. Testing culture
  7. Testers in the role of designers
  8. Testing after release
However, all eight parts did not fit into one Habratopic, so the first four are below the cut, and a second post with the remaining four parts will appear a little later.

So, here before you is the future of testing.

1. "Testing"


Outsourcing. This is a familiar term, and in 2008 a significant part of the testing field developed along this path. However, it was not always so, and things do not necessarily have to stay the same in the future. In this first part I will talk about how, in my opinion, testing will be carried out in the future, and how outsourcing may be replaced by a fundamentally different business model for software testing.

At the very beginning, very little testing was outsourced. Programs were tested by internal employees working in the same company in which those programs were written. Developers and testers (often the same people performing both roles) worked side by side to write a program, test it, and ship it.

In the days of internal testing, the role of vendors was limited to providing tools that helped companies do their own testing. However, that role soon changed, as demand arose for more than just tools. Instead of providing tools for internal testing, vendors emerged that were ready to perform the testing itself. We call this "outsourcing," and it is still the main scheme software companies use for testing: handing the testing work over to a contractor.

Thus, the first two generations of testing are as follows:
Generation | Model | Supplier role
№1 | Internal testing | Provide tools
№2 | Outsourcing | Provide testing services (this includes the use of tools)
The next logical step in the evolution of testing is the provision of testers by suppliers, and we are now witnessing the beginning of the crowdsourcing era. The emergence of the company uTest marks the start of this era, and it will be very interesting to watch how events unfold. Will crowdsourcers be able to demonstrate higher efficiency than outsourcers and capture this market in the future? Obviously, this will be decided by market economics and by the ability of the "crowd" to perform certain kinds of work, but in my personal opinion the odds are on the side of the "crowd". Of course, these are not mutually exclusive options but an evolutionary process: the old model will gradually give way to the newer one. This will be a case of Darwinian natural selection taking place over a surprisingly short span of several years. The fittest will survive, and the timing will be determined by economics and the quality of the work. Crowdsourcing has an advantage in this fight, including the unimaginably huge number of tests and test environments that can be brought to bear, thanks to the size of the "crowd" and the diversity of experience of its members.

This gives us the third generation:
№3 | Crowdsourcing | Provide testers (this includes testing and the use of tools)
And what next? Is there an aggressive gene hiding deep in the DNA of our discipline that will make crowdsourcing evolve into something even better? I think so, although it may take years and a few technological leaps. I will coin a new term now, solely to give this new concept a name: testsourcing.
№4 | Testsourcing | Provide testing artifacts (this includes testers, testing, and tools)
However, testsourcing cannot be imagined without one key technological leap that has yet to occur. That leap is virtualization, and the second part of this series is devoted to it.

2. Virtualization


In order for testsourcing to emerge, two key technological barriers must be overcome: the reuse of test artifacts and the availability of user environments. Let me explain what I mean:

Reuse: Reuse of software artifacts has been available since the 1990s, thanks to the popularization of object-oriented programming and the technologies derived from it. Most programs being developed today are assembled from already existing libraries combined into a single whole. Unfortunately, this has not yet happened in testing. The situation where I can write a test and simply hand it to another tester for reuse is very rare in practice. Tests depend too much on the test platform on which they were developed, they are tied to a specific application under test, they require tools that other testers may not have, and they depend on specific frameworks, libraries, and network settings (and the list goes on) that cannot easily be reproduced by those who would like to reuse them.
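As a rough illustration of the reuse problem (a minimal sketch of my own, not from the article; all names are hypothetical), compare a test hard-wired to its author's machine with one that receives its environment as a parameter. Only the second can plausibly be handed to another tester or packaged into a shareable virtual machine:

```python
"""Minimal sketch (my own illustration): a hard-wired test versus a parameterized one."""

import os
import tempfile


class LocalEnvironment:
    """Hypothetical stand-in for a shareable (eventually virtualized) test environment."""

    def __init__(self):
        # For the sketch this is just a temp directory, so it runs on any machine;
        # in the future described above it could be a packaged virtual machine image.
        self.workdir = tempfile.mkdtemp(prefix="testenv_")

    def sample_data(self, name):
        path = os.path.join(self.workdir, name)
        with open(path, "w") as handle:
            handle.write("<case id='42'/>")
        return path


def test_export_hardwired():
    # Hard to reuse: this path exists only on the original author's machine.
    assert os.path.exists(r"D:\test_data\case42.xml")


def test_export(env):
    # Easier to reuse: whoever runs the test supplies their own environment.
    assert os.path.exists(env.sample_data("case42.xml"))


if __name__ == "__main__":
    test_export(LocalEnvironment())
    print("parameterized test passed")
```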

Environments: The number of user environments required for thorough testing is staggering. Suppose I have developed an application intended for use on various mobile phones. Where do I get all these phones to test my application on? How do I configure them so that I get a representative sample of all the settings that exist among the real users of these phones? And the same can be said of any other type of application. If I develop a web application, how do I account for all the possible operating systems, browsers, browser settings, installed plug-ins, registry settings, security settings, machine-specific settings, and the other applications that may conflict with mine?
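To get a feel for the combinatorial scale, here is a small sketch (my own illustration, with era-appropriate example values rather than anything from the article) that enumerates just four of the variables mentioned above; even this toy matrix already produces over a hundred distinct configurations:

```python
from itertools import product

# Illustrative values only, roughly contemporary with the article (2008).
operating_systems = ["Windows XP", "Windows Vista", "Mac OS X 10.5", "Ubuntu 8.04"]
browsers = ["IE7", "IE8", "Firefox 3", "Safari 3", "Chrome 1"]
plugins = ["none", "Flash", "Flash + Java"]
security = ["default settings", "hardened settings"]

# Every combination is, in principle, a distinct user environment to cover.
environments = list(product(operating_systems, browsers, plugins, security))
print(len(environments))  # 4 * 5 * 3 * 2 = 120 configurations from just four variables
```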

The answer to both of these needs may be virtualization, which is rapidly becoming cheaper, faster, and more powerful, and is gradually expanding its range of applications from the test lab to the deployment of IT infrastructure.

Virtualization has enormous potential that the crowd of crowdsourcers can exploit. Specialized test kits, test frameworks, and test tools can be turned into virtual machines with one click and then used by anyone, anywhere. Just as modern developers can reuse code created by their colleagues and predecessors, testers from the crowd will soon be able to reuse test suites and test tools. And just as the reuse of software components expands the range of applications a developer can create, it will expand the range of applications testers can test. Virtualization makes it easy to reuse complex, hard-to-build test infrastructures.

In addition, virtualization gives testers access to user environments. With one click a user can turn their computer into a virtual machine and hand it to testers or make it publicly available in the cloud. We can already store all the existing video material in the world so that anyone can view it from anywhere; why don't we do the same with user environments? Virtualization technologies are ready for this (for personal computers) or almost ready (for mobile and specialized devices). We just need to learn to apply them to testing problems.

In the end, there should be a huge pool of reusable test infrastructures and user environments available to any tester anywhere in the world. This will give the crowdsourcing crowd a powerful tool: they will be in a better position technologically than specialized outsourcers, and if we also take into account the crowdsourcers' advantage in numbers (in theory at least, and most likely in practice too), it becomes clear that everything favors the development of this new paradigm.

The market is also on the side of a crowdsourcing model equipped with virtualization tools. User environments will acquire market value as testers from the "crowd" seek to obtain them in order to secure a competitive advantage. This will encourage users to press the cherished button that virtualizes their environment and grants access to it (of course, this model has legal aspects, but they are solvable). And since environments with potential problems will be valued more highly than stable ones, this will be a pleasant moment for users who have trouble with drivers or applications: the virtual machines they create will be worth more, which will compensate them. On the other hand, it will encourage testers to share their test suites and make them as reusable as possible. All this will help saturate the market with test artifacts, and the key to it is virtualization.

And how will this rich virtualized future affect individual testers? I think that within two years, or perhaps five, millions of user environments will be virtualized, saved, replicated, and made publicly available (you may assume it will take longer if you are a skeptic). I imagine open libraries of such environments that testers can use for free, and private libraries available only to subscribers. Tests and test suites will be available in the same way, with the fee for their use depending on their completeness and applicability.

Perhaps the time will come when very few human testers remain; they will be needed only for testing niche or specialized products (or products of extreme complexity such as operating systems). For the vast majority of products it will be enough to hire one test designer who chooses a subset of the huge number of available tests and test environments and runs them all in parallel, as in the sketch below: millions of person-years of testing compressed into a matter of hours thanks to automation and virtualization. This is the world of testsourcing.
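A minimal sketch of what such a test designer's workflow might look like (my own illustration with hypothetical names; a thread pool stands in for a farm of virtual machines, and the dispatch function is a placeholder):

```python
from concurrent.futures import ThreadPoolExecutor


def run_test(test_name, environment):
    """Placeholder: dispatch one reusable test into one virtualized environment.

    In the scenario described above this would boot a stored VM image and run the
    packaged test inside it; here it simply pretends that everything passes."""
    return (test_name, environment, "PASS")


selected_tests = ["login", "checkout", "export"]                  # the designer's chosen subset
selected_environments = ["WinXP + IE7", "Vista + Firefox 3", "Ubuntu + Firefox 3"]

jobs = [(test, env) for test in selected_tests for env in selected_environments]

# A thread pool stands in for many virtual machines running in parallel.
with ThreadPoolExecutor(max_workers=len(jobs)) as pool:
    results = list(pool.map(lambda job: run_test(*job), jobs))

for test, env, verdict in results:
    print(f"{test:10s} on {env:20s}: {verdict}")
```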

This is the end of testing as we know it, but it is the beginning of a new era bringing new and interesting challenges to the community of testers. Everything we now know about testing will of course still apply in this new world, but it will be used in a completely different manner.

And this is an entirely feasible future that requires nothing beyond virtualization technologies that either already exist or are just now appearing on the horizon. It also means a change in the role of testers: they will act as designers (when tests need to be performed) or developers (when reusable test artifacts need to be created or maintained). There will be no more heroes of the last line of defense; testers will become full-fledged citizens of this virtualized future.

3. Information


So we come to my third prediction, which concerns information and how testers will use it to improve their testing in the future.

What information do we use when testing programs? Specs? User manuals? Previous (or competing) versions? Source code? Network protocol analysis? Process monitoring? Does this information help, and how easy is it to use?

Information is the basis of everything we do as testers. The more information we have about what the program should do and how it does it, the better we can test it. I find it unacceptable that testers receive too little information, and that the information that does arrive is not prepared with testers' convenience in mind. I am pleased to note that this situation is changing (rather quickly), and that in the near future we will undoubtedly receive the right information at the right time.

I found a great model for how to present information for testing in video games. In games we have come very close to perfection in the ways information is provided and used. The more information about the game, the players, the obstacles, and the environment, the better you play and the better results you can achieve. In video games this information is shown on a special panel called the HUD, or heads-up display. All information about the player's weapons, health, and abilities is visible and available for instant use. Information about the player's current location, in the form of a mini-map, and about opponents is also available. (My son played Pokémon, where he had access to the Pokédex with information about every type of Pokémon in the game... I would love to have a Bug-é-dex containing information about all the bugs I might encounter.) The idea is very simple: the more information you can get and use, the higher your chances of success in the game.

I would really like to do the same for testers: give them more information to increase the success of their work. But most of the world's testing is stuck in a black box without a good information infrastructure. Where is our mini-map showing what we are testing now and how it relates to the whole system? Wouldn't it be great if I could hover over a UI element and see the source code or the list of properties implemented in that element (and which I can test)? If I test an API, why can't I see the list of parameter combinations that my colleagues and I have already checked, along the lines of the sketch below? I need this information promptly, in a concise and easy-to-read form that helps me test, instead of hunting for it on a site made in SharePoint or in a database full of unrelated project documents. That only distracts me. I want to see it right in front of me!
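For the API example, even a small amount of shared bookkeeping would answer the question "what have we already checked?". A minimal sketch (my own illustration, assuming the team records each tried combination somewhere shared; the API and its parameters are hypothetical):

```python
from itertools import product


class ParameterCoverage:
    """Keeps track of which parameter combinations of an API call have been exercised."""

    def __init__(self, **domains):
        self.domains = domains          # parameter name -> list of possible values
        self.tried = set()

    def record(self, **combo):
        """Record that somebody on the team has already tested this combination."""
        self.tried.add(tuple(sorted(combo.items())))

    def remaining(self):
        """Yield the combinations that nobody has checked yet."""
        names = sorted(self.domains)
        for values in product(*(self.domains[name] for name in names)):
            combo = tuple(zip(names, values))
            if combo not in self.tried:
                yield dict(combo)


# Hypothetical API under test: two parameters, six combinations in total.
coverage = ParameterCoverage(mode=["read", "write"], size=[0, 1, 65536])
coverage.record(mode="read", size=0)            # a colleague already ran this one
print(len(list(coverage.remaining())))          # 5 combinations still unchecked
```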

My Microsoft colleague Joe Alan Muharsky called this collection of information that I so badly want the THUD, a HUD for testers; its purpose is to present the information a tester needs for finding bugs and checking functionality in an easily digestible format. Think of the THUD as a wrapper around the program under test that provides the information and tools that are useful and applicable in the current context. Occasionally there are already systems that serve as a THUD and even contain the right information. In the future, testers simply will not be able to imagine testing without such a dashboard, just as no player could do without one while traveling through unpredictable and dangerous worlds.

If this sounds like cheating, then so be it. Players who use cheats have a huge advantage over players who do not. Having access to the source code, the protocols, and the various components of the application, we are, perhaps, really "cheating". But with such trickery we can gain a significant advantage in the hunt for bugs over ordinary testers who catch bugs through a black box. And that is exactly what we want: to be in a position to find errors in our products faster and more efficiently than anyone else. I sincerely approve of this kind of cheating, but for now we still cannot take advantage of the information that would let us cheat.

But in the future we will. That future will be very different from the information-starved present in which we now have to work.

4. Moving testing earlier


In testing there is a gap that erodes quality, productivity, and the overall controllability of the entire development life cycle: the interval between the moment a defect is introduced into the system and the moment it is detected. The longer this interval, the longer the defect stays in the system. Obviously this is bad, and the reasoning is familiar: the longer a defect stays in the system, the more expensive it is to fix. This state of affairs should become a thing of the past.

In the future, we must bridge this gap completely.

This requires fundamental changes in the way we test. Today a developer can introduce a defect into the system quite casually: the development environment does little to prevent it, and only a few attempts are made to find the error before the code is even compiled. We carelessly create bugs and allow them to live until the late stages of the development process, and then pin our hopes on the heroes of the last line of defense to save us.

We testers have a whole arsenal of methods for finding defects and analyzing programs. What we need to do in the future is learn to apply these techniques at earlier stages of the development process, much earlier than we do now. I foresee two main ideas that will help us get there. The first is not to wait for compiled code to appear, but to apply tests to earlier development artifacts. The second is to compile and build as early as possible so that we can test the program as soon as possible.

Let's look at them in order, starting with testing early development artifacts. On the last line of defense we apply various defect-hunting strategies to executable program code through its external (public) interfaces. We take a compiled program or a set of libraries, hook them up to our test environment, and hammer them with various inputs and data until we find enough bugs to have at least some confidence that the quality is high enough. But why wait until the binaries are ready? Why can't we apply these testing methods to architectural artifacts? To requirements and user stories? To specifications and designs? How did it happen that all the technologies, techniques, and knowledge gathered over the past half century apply only to the executable artifact? Why can't architecture be tested in the same way? Why can't we apply what we know to designs and user stories? The answer: there is no compelling reason why we cannot. I already see many progressive groups at Microsoft using early-testing methods, and in the future, I hope, we will figure out how to do this collectively. Testing will begin not when something is being tested, as it is now, but when there is something that needs testing. This is a subtle but important distinction.
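One modest way to apply a testing technique to an early artifact (a minimal sketch; the technique and word list are my own example of requirements review, not something from the article): scan requirement or user-story text for words that commonly hide ambiguity, long before any code exists:

```python
import re

# Words that frequently signal ambiguous or untestable requirements (illustrative list).
AMBIGUOUS_WORDS = {"fast", "easy", "user-friendly", "appropriate", "etc", "some", "several"}


def review_requirement(text):
    """Return the ambiguous words found in a requirement or user story."""
    words = re.findall(r"[a-zA-Z-]+", text.lower())
    return [word for word in words if word in AMBIGUOUS_WORDS]


story = "As a user I want the search to be fast and easy to use on several devices."
print(review_requirement(story))    # ['fast', 'easy', 'several']
```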

"Early compilation" is the second part, but its implementation faces a technological barrier that will take a leap to overcome. Today we write a program component by component, and we cannot assemble the whole system until every part of it is ready. This means testing has to wait until all components have reached a certain level of completeness. Bugs may live in the program for days or weeks before testing even starts looking for them. Can we replace unfinished components with virtual ones? Or with stubs that mimic a component's behavior from the point of view of an external observer? Can we create general-purpose "chameleon" components that change their behavior to match the system into which they are (temporarily) embedded? I believe we can, because... we have to. Virtual components and chameleon components would let testers apply their bug-hunting art the moment a bug is created. Errors would have little chance of living longer than their first breath.
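A minimal sketch of the stub idea in plain Python (the "chameleon component" below is my own illustrative framing of the text above, and all names are hypothetical): the payment component does not exist yet, but a configurable stand-in lets tests of the surrounding order logic run immediately. Python's standard unittest.mock module packages the same idea.

```python
class ChameleonComponent:
    """Mimics an unfinished component from an external observer's point of view."""

    def __init__(self, **canned_results):
        self._canned = canned_results   # method name -> value to return (or exception to raise)

    def __getattr__(self, name):
        def method(*args, **kwargs):
            result = self._canned.get(name)
            if isinstance(result, Exception):
                raise result
            return result
        return method


def process_order(payment_gateway, amount):
    """System under test: depends on a payment component that is not written yet."""
    receipt = payment_gateway.charge(amount)
    return "confirmed" if receipt else "declined"


# The real gateway does not exist yet, but the surrounding logic is already testable.
happy_gateway = ChameleonComponent(charge={"id": 1, "amount": 10})
failing_gateway = ChameleonComponent(charge=None)

assert process_order(happy_gateway, 10) == "confirmed"
assert process_order(failing_gateway, 10) == "declined"
print("both stub-based tests passed")
```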

Testing is too important to wait for the end of the development cycle. Yes, iterative development and agile methodologies let us create testable code earlier (albeit with less, incomplete functionality), but we still detect many bugs after release. What we are doing now is not enough. The future must shift the focus of testing to early development artifacts and let us assemble a workable, testable environment long before the system can be fully built.

To be continued...

Source: https://habr.com/ru/post/83035/

