
How our testing is organized, and why QA participates in formulating tasks for our developers

Good day!

My name is Eugene, I'm a Test Manager at Acronis Cloud, and I want to tell you how it all works.

In general, QA is almost like the KGB: we are not always visible, but we are everywhere. We participate in processes starting from the earliest stages, when technical requirements are still being discussed and revised and features are being roughly prototyped. QA does not have a formal vote, but it does point out dangerous spots to the dev lead and the program manager based on its experience. And, as a rule, this input affects the requirements for the feature.


Process step by step


First stage: the designer who drew the feature in the interface, the developer, the PM, and QA sit in the same room and discuss how it should work. From the very beginning we argue from the position of "what happens if...": we think about what will happen to the product in real use and what pitfalls can surface along the way. A product person rarely evaluates a feature from the point of view of stability; their job is to think about how it will help the end user, while ours is to think about how it can do harm. For example, at one point we wanted to add a couple of settings and, to provide access to them, another user role just below admin. We, as representatives of ordinary users, opposed it, because it complicated the interface and the understanding of what was going on. Instead, we decided to implement the feature differently: unload the GUI, but add these parameters as hidden entities in the console.

The second stage: QA looks at the technical design and points out risky areas (as a rule, this concerns systemic things, such as what is better done differently in the current architecture, or done with the new architecture we are moving to in mind).

When it is ready, the "selling of the feature" begins: the developer gathers the designer, the PM, and a QA representative. The designer checks that it is made as intended, the PM looks at the functionality, and QA confirms that the feature can be tested in this form.

Next, the feature returns from the developer to QA after implementation and receives a quality level. Of course, the developer has already tested it himself, as best he can. If the feature arrives at QA raw, we assign it a low level, and it immediately, without further consideration, goes back to development with a list of open bugs.

If the feature is "sold" successfully and fulfills its function, work begins. The first step is the final test plan. In general, we start writing the test plan as soon as the feature requirements have been agreed on and fixed. Autotests can be written immediately or added later with medium and low priority. It happens that at first a feature is covered by autotests only in critical places, and then it is gradually included in the robots' run plan more fully. Naturally, not all features are candidates for automation. For example, in the Enterprise segment there are often lots of one-off small things that are literally needed by a couple of customer companies. They are most often checked manually, as are minor features in consumer products. But everything responsible for the direct functionality of the product is covered by autotests almost always completely, though not always in one pass. For writing tests we have our own Python framework.
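The framework itself is internal, but to give a feel for the priority tiers, here is a minimal sketch in plain pytest; all the names here (the fake service, the priority marker) are hypothetical and only illustrate the idea of critical-path checks landing first and lower tiers being added later:

import pytest

class FakeBackupService:
    """A stand-in for the product API, so the sketch is self-contained."""
    def __init__(self):
        self._archives = {}
    def create_backup(self, name, data):
        self._archives[name] = bytes(data)
    def restore(self, name):
        return self._archives[name]

@pytest.fixture
def service():
    return FakeBackupService()

# Critical-path check: written as soon as the requirements are fixed.
@pytest.mark.priority("critical")  # custom marker; would be registered in pytest.ini
def test_backup_roundtrip(service):
    service.create_backup("daily", b"payload")
    assert service.restore("daily") == b"payload"

# Medium-priority check: added to the run plan later.
@pytest.mark.priority("medium")
def test_backup_overwrite(service):
    service.create_backup("daily", b"v1")
    service.create_backup("daily", b"v2")
    assert service.restore("daily") == b"v2"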

Next come manual and automated test runs against the plan. The result of this stage is an assessment by quality level. For a feature to be included in a release, it needs to get a "4" or "5" on a five-point scale. With a five (only quibbles and suggestions for improvement remain) it passes without question; with a four (a couple of not very significant major bugs) it is included in the release only by decision of the product manager. In general: 1 means the feature does not work at all; 2 means it works, but most of its functionality fails; 3 means a significant part works, but there are very unpleasant critical bugs; 4 means it works almost completely, but there are some minor complaints; 5 means the feature works perfectly, and there are no bugs on it at all, or only very minor ones. A couple of times a year we include necessary functionality with a score just below four, but we always mark it as a beta for the end client.
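The rubric itself lives in people's heads and in the tracker, but as a sketch (my own encoding for this article, not a real internal tool) the release rule could be written down like this:

from enum import IntEnum

class QualityLevel(IntEnum):
    BROKEN = 1          # does not work at all
    MOSTLY_BROKEN = 2   # works, but most functionality fails
    UNSTABLE = 3        # much works, but critical bugs remain
    MINOR_ISSUES = 4    # works almost completely, minor complaints
    CLEAN = 5           # works perfectly, no or only trivial bugs

def release_decision(level, pm_approved=False):
    if level == QualityLevel.CLEAN:
        return "include in release"
    if level == QualityLevel.MINOR_ISSUES:
        # a four goes in only by explicit product-manager decision
        return "include in release" if pm_approved else "awaiting PM decision"
    return "back to development"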

If a bug affects core functionality, it is marked critical in severity; if it also fires often, it gets a very high priority in urgency as well.

Bugs found in manual testing land in Jira by hand. Autotest bugs are filed automatically, and our framework checks whether such a bug already exists and whether it makes sense to reopen it. The severity and priority of a bug are assigned by a QA specialist manually.
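For flavor, here is roughly what such deduplication might look like with the open-source jira client for Python; the server URL, the project key, and the fingerprint-by-summary rule are my assumptions for this sketch, not our actual framework:

from jira import JIRA

tracker = JIRA(server="https://tracker.example.com", basic_auth=("bot", "token"))

def file_autotest_bug(summary, description):
    # Look for an existing open bug with the same fingerprint first.
    existing = tracker.search_issues(
        f'project = QA AND summary ~ "{summary}" AND status != Closed'
    )
    if existing:
        existing[0].update(fields={"description": description})  # refresh details
        return existing[0].key
    issue = tracker.create_issue(
        project="QA",
        summary=summary,
        description=description,
        issuetype={"name": "Bug"},
        # severity/priority are deliberately left for a QA specialist to set
    )
    return issue.key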

What happens when the developer does not agree with QA's assessment of a bug? Then we all sit down and sort it out. I must say that about three years ago this problem really existed, but since then we have made QA a separate unit and written down quite a few attributes and properties of a bug that leave no room for double interpretation. Also, all of our development is in Russia, and most of the people are in Moscow. All of QA sits in the same office or nearby, so there are no problems with clarification and interaction: you can walk over and discuss everything promptly. It really helps.

First we check builds on local stands. If everything is OK, we roll the build out to a preproduction environment deployed on the production infrastructure, where the last known build is already running. This way we check the update once more in conditions as close as possible to actual production.
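Schematically (the Stand class and the build ids here are made up for illustration), the point is that preproduction exercises the upgrade path rather than a clean install:

class Stand:
    def __init__(self, name, installed_build=None):
        self.name = name
        self.installed_build = installed_build
    def deploy(self, build):
        print(f"[{self.name}] clean deploy of {build}")
        self.installed_build = build
    def upgrade_to(self, build):
        print(f"[{self.name}] upgrade {self.installed_build} -> {build}")
        self.installed_build = build

def promote(candidate, last_known):
    local = Stand("local")
    local.deploy(candidate)        # step 1: fresh install on a local stand
    preprod = Stand("preprod", installed_build=last_known)
    preprod.upgrade_to(candidate)  # step 2: the update path, as in production

promote("build-1042", "build-1039")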

After that, we roll the build out to the beta server. We have a portal where you can play around with the new version (as a rule, our proven and most active partners have access to it and give rather extensive feedback). By the way, if you want to receive an invitation to this server, you can write to my colleague, and she will arrange everything (diana.kruglova@acronis.com).

People


The requirements for QA are almost the same as for developers, but with the caveat that you will mostly be writing autotests. Plus, we select people who understand the basics of UI/UX (and retrain them if necessary), because a large share of features now sits at the junction with the interface.

Our team consists of technically competent specialists, necessarily smart and with well-developed logic. The time of testers as monkeys stupidly repeating test steps has long passed. Instead of monkeys, we have autotest modules, which themselves deploy infrastructure from about 30 typical environments, bring it to the desired state, install the beta, and run it through the test program, recording a log and taking screenshots along the way.
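To show the shape of this (the environment names and steps below are illustrative, and the real matrix is closer to 30 environments), such a runner boils down to iterating over a matrix of typical configurations:

import itertools

OPERATING_SYSTEMS = ["win10", "win2019", "ubuntu22"]
STORAGE_BACKENDS = ["local_disk", "nas", "cloud"]

def run_suite(os_name, storage):
    # 1) deploy the infrastructure, 2) bring it to the desired state,
    # 3) install the beta build, 4) run tests, logging and screenshotting
    print(f"{os_name} + {storage}: provision -> install beta -> run tests")

for os_name, storage in itertools.product(OPERATING_SYSTEMS, STORAGE_BACKENDS):
    run_suite(os_name, storage)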

Although, of course, we still have a lot of manual labor.

Typically, working time is distributed as follows: about 30% goes to communicating with developers and clarifying technical requirements, and the rest splits roughly in half between manual work and writing autotests in our framework. Naturally, there are people who work more with their hands, and there are those who almost always write code.

Speaking about the development of testing as a profession, I can say that automation engineers often want to try themselves as developers. Why? Because there is still a stereotype that in development you build your own product, while in testing you serve someone else's.

Our path differs a little from this standard one: the point is that tasks in automation are often more interesting than development tasks. Most development work in stable, multi-year projects is maintenance. With us, as it happened, development has been moving quite rapidly in the last few years: we were essentially building up rocket science for testing. I had previously worked at Parallels, and over five years we developed a system there that automated everything, from virtual machines to the hardware where the software is installed and run, with bugs filed and fixes verified automatically. For us, I think, the next couple of years will be just as stormy.

That is why our best specialists often grow into product managers. Since the qualification involves thinking a few steps ahead, plus communication, plus knowledge of the product as a whole, plus the desire to improve the product and an understanding of what should be improved first, you get an almost ready PM after 2-3 years of work in QA.

Recursion


Autotests are tested by the one who wrote them. Otherwise, we would need QA for our QA.
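In practice this mostly means the framework's own helpers get ordinary unit tests from their author; a tiny sketch (parse_build_id is a made-up helper, not one of our real ones):

def parse_build_id(artifact_name):
    """Extract the numeric build id from names like 'acronis-cloud-1042.tgz'."""
    return int(artifact_name.removesuffix(".tgz").rsplit("-", 1)[-1])

def test_parse_build_id():
    assert parse_build_id("acronis-cloud-1042.tgz") == 1042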

Old small bugs


In almost every tracker of a multi-year project, a group of non-urgent, trivial, or even strange rare bugs with the lowest priority accumulates and is dragged along like a tail from year to year. About once a year we run a re-evaluation procedure over them and decide whether to keep dragging the tail. Most often, we do not: we close them by an effort of will and "cut off" the tail.
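Finding that tail is essentially one tracker query; for instance, with the jira client from the earlier sketch it might look like this (the project key, priority names, and one-year cutoff are assumptions):

from jira import JIRA

tracker = JIRA(server="https://tracker.example.com", basic_auth=("bot", "token"))
STALE_TAIL_JQL = (
    "project = QA AND priority in (Trivial, Minor) "
    "AND status = Open AND updated <= -365d "
    "ORDER BY created ASC"
)
stale = tracker.search_issues(STALE_TAIL_JQL, maxResults=500)  # the yearly tail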

"External" bugs


After releases, bugs come to us from support or from the team that monitors reviews in social networks (those are more often suspicions than ready-made symptoms). Sometimes completely magical things reach the third line. For example, a client (Taiwan, source: English-speaking support) installed the product on Windows 8.1 Pro, created a "protected area on disk" with it, and then rebooted his PC 750 times. After that, his screen began to flicker. At the user's urgent request, this scenario was tested several times on different machines.

Or here's a story from Hong Kong:

SCENARIO:
1) Create a backup of a disk containing an older OS (like Windows 2000 or DOS) alongside Windows 7.
2) Boot the same system using ABR 11 bootable media with Universal Restore.
3) Start a recovery from the backup created above.
4) Select the disk / partitions for recovery

ACTUAL RESULT:
Universal Restore is not offered during disk / partition recovery.

EXPECTED RESULT:
Universal Restore option should be available and should recover Windows 7 properly. The older OS might not be recovered.

Environment: RAID controller (LSI 9260-8i)

ORIGINAL SETUP:
4x 640GB in RAID 5 level, Partition ->
C: (1st partition, FAT32),
D: (2nd partition, Windows 7 system, NTFS),
E: (3rd partition for data storage, NTFS),
F: (4th partition for data storage, NTFS),
N: (5th partition for data storage, NTFS)

The story ended with us figuring out what caused the client's OS boot failures and, of course, booting it successfully. There were no errors in the product.

Release dates


In general, specific teams are assigned to each product family. When a new product is formed, we recruit people for it, and sometimes appoint one of the "veterans" as their lead. If a product is small, at first it is tested on the basis of the parent product and its infrastructure, and later it is separated out.

Something like that. You can ask me about the process, and my colleague, who handled our test automation, will write separately about how to organize all this correctly from the software point of view.

Source: https://habr.com/ru/post/278941/

