
The specifics of manual testing at ALM Works and Odnoklassniki



If programming, as the well-known joke claims, is the process of introducing errors into a project's code base, then there must be superheroes who spare neither nerves nor eyes so that the number of bugs and flaws does not get out of hand. These people live among us, and, believe me, they take every harsh word spoken about yet another bug in this or that program to heart: it means their work is not finished, that their crusade against problems goes on.


One of the established myths about testing is the idea that testing software is a job for interns. Another myth is the idea that testing amounts to nothing more than catching bugs, and that the success of the catch is measured by the number of problems found (this is said especially often about manual testing, as opposed to autotest runs). Both myths are remarkably tenacious, not just among IT people in general but even among developers, which is rather surprising, since developers of all people know the inner workings of building really good software. Still, we live with what we live with; all that remains is to change the situation as best we can.


For the sake of this topic we invited two testing experts to talk: Nikita Makarov, who works on testing at Odnoklassniki, and Yulia Atlygina, who is responsible for the same area at ALM Works.




Yulia Atlygina has been in testing for more than nine years, the last of which she has spent at ALM Works developing plugins for Atlassian JIRA and Confluence. She combines the tester role with the roles of Product Owner and SAFe consultant. The classic ending of Yulia's brief biography lately: "If you have questions about JIRA, feel free to ask!"

- Yulia, let me ask right away: what is the measure of a good tester?


A good tester should be, in a sense, a perfectionist, and should worry not so much about the product itself as about the user's experience in it. After all, blocks in the UI that are misaligned by one pixel are a problem, and so is incomprehensible behavior, say when a numeric field accepts a space typed by mistake before the number, or even nothing but a space.


Another matter is that tickets about a pixel that has drifted out of place are usually not the most important ones. If we are talking about benefit to the user, then we need to rank tickets by that benefit and fix, as far as possible, the most unpleasant ones first. Of course, this brings unpleasant decisions about what we will fix (or, more precisely, will try to find time to fix) before the current release ships, and what we will not.


- Where did the idea come from that testing, especially manual, is a job for trainees? How important are experience and knowledge in the tester's work?


Experience, and even, I would say, a certain intuition, means a great deal in a tester's work. If you put dozens of inexperienced trainees on manual testing, they will surely find several hundred flaws, but very few of them will be important: the trainees will lack the experience, and in places the knowledge, to understand where the most important bugs hide and which of their findings are worth reporting. Here, as nowhere else, a mentor is needed, someone who will explain, teach, and act as the first filter while they gain their initial experience.


- What distinguishes a good tester from a bad one? Surely not the number of bugs filed in the tracker?


A tester's work is difficult to evaluate with quantitative metrics. Twenty filed tickets about one-pixel shifts versus one hard-to-find bug deep inside the program that corrupts data under certain circumstances: most likely the second is more important for the user (and for the product being tested), but that does not mean the "little things" in the UI, which are especially visible to users, should not be found, reported, and fixed. My personal, "mental" criterion of a tester's work is how many of the bugs filed by that tester were actually fixed in the current release.


Testing, of course, comes in different kinds. Where we have precise test specifications (down to which values to enter in which fields and what answer the program should show at the output), automated tests serve us best, but they will never fully replace human participation in testing. When we have a test plan, it sometimes contains fairly general guidelines, say, "create a user with a login made of non-ASCII characters," and each person carries out such an instruction a little differently, which yields new test variations every time. Besides, only manual testing catches annoying layout flaws, behavior that is incomprehensible or illogical from the user's point of view, and other things that are poorly formalized and hard to automate.
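To make the contrast concrete, here is a minimal sketch of the kind of "exact values in, exact answer out" check that lends itself to automation. The calculator, its rules, and the numbers are hypothetical illustrations, not anything from ALM Works or the interview; the point is only that a precise specification can be handed to a machine, while "try a non-ASCII login" cannot.

```java
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical specification turned into an automated check:
// exact values go in, one exact answer is expected at the output.
class DiscountCalculatorTest {

    // Minimal illustrative implementation under test; invented for this sketch.
    static int total(int amount, int discountPercent) {
        return amount - amount * discountPercent / 100;
    }

    @ParameterizedTest
    @CsvSource({
            "1000, 0, 1000",    // no discount
            "5000, 5, 4750",    // 5% off 5000
            "10000, 10, 9000"   // 10% off 10000
    })
    void totalMatchesTheSpecifiedAnswer(int amount, int discountPercent, int expectedTotal) {
        assertEquals(expectedTotal, total(amount, discountPercent));
    }
}
```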


It happens that manual testing is neglected in favor of automated tests, but experience suggests there are many situations where you should not do this. It is especially true when the program keeps changing and being reworked, including its UI, while the tests were written to check the internal logic and are simply not designed to find flaws in the new UI.


- Suppose we are creating a new project from scratch. We have found developers and designers, and now we are looking for testing specialists. How many of them do we need?


The question is not as simple as it seems. The short answer, of course, is "the more the better!" In reality the answer depends heavily on the project, the requirements for it, and the development process itself. In some cases even two testers per developer will not be enough; in others that ratio would be far too much.


- But still, isn't a tester's work boring?


In my opinion, a tester must be a perfectionist, and any flaw in the program, even the smallest, should upset him to the depths of his soul. And this is not only the notorious one-pixel layout errors but, in some cases, the layout of an entire form. Say a form initially had two input fields, accompanying text, and an OK button, and from the users' point of view it was convenient and clear. If later, as the program's functionality grew, a dozen more UI elements were added to that same form and turned out to be interconnected in a not-so-simple way, then almost certainly no one except the tester will say that the form has become inconvenient to use: the developers are confident that the form itself works correctly. It does work correctly, but it is inconvenient.


Therefore, willy-nilly, the tester has to understand how the whole system works and know each of its parts reasonably well. For me, one criterion of the testers' work is that people come to them, rather than to the developers, with questions like "if we change this part of the system, how do you think it will affect the system as a whole?"


In our company we have found our own solution: we run training for the "small fry," as I call them, and grow young but already more experienced testing specialists out of interns. Among other things, this training lets the "fry" (and us) understand who, by temperament, is well suited to a tester's work, and for whom such work (and the whole way of thinking behind it) will bring no joy.


Nikita Makarov has worked in outsourcing and product companies. He has done test automation for Linux-based embedded operating systems, integrated VPN solutions for business, and hardware-software systems. Since January 2012 he has headed the test automation group on the Odnoklassniki project.

- Nikita, to begin with: you do not ship software to a buyer as a "box," you run an online service. Does this situation make catching flaws any easier? What is the difference between the two from a testing point of view?


First of all, the difference is in how much control we have over the way the product we develop and test is actually used.


A client buys boxed software and then installs, configures, and uses it in whatever way is convenient and seems correct to him. That may not match the idea the developers had while creating it, and then customer dissatisfaction cannot be avoided. To prevent this, we sit down in advance, invent every usage scenario, even the most improbable, and test the software against all of that variety. The plus of this approach is that we catch a significantly larger number of flaws, even ones that would never surface in "normal" use. The minus is that such development takes significantly more effort, which shows up in the cost and duration of the work. And if a bug surfaces only after the client has installed the software, diagnosing and catching the problem can drag on for a long time.


With software for our own online service, the situation is different: we know the operating conditions perfectly well (moreover, we control them, from the physical servers and network infrastructure to the versions of the software in use), and we can run the tests we need at any time, even during "combat" operation. On top of that, we can launch, say, a new version of a module for only a small percentage of the service's visitors and compare how the two versions behave under that load, which is a more than convenient option.


An example: we can never afford to push frankly raw code to production, because a negative impression of it can lead to an outflow of users from the service; that is such a bad outcome that nobody will risk it. At the same time, the service runs under very heavy loads, so heavy that it is practically impossible to assemble a test bench capable of reproducing them. How do you test in such a situation? Fortunately, we can afford to deploy the new code on a separate server group and send not all visitors to it but only a small fraction: if we see that the changes are bad, we simply move people back to the old version; otherwise we gradually raise the percentage receiving the new version up to one hundred.
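A rough sketch of how such percentage-based routing between an old and a new version of a module can be arranged: a user is deterministically assigned to a bucket, and the rollout percentage is raised or dropped back to zero. The class, the hashing choice, and the thresholds here are illustrative assumptions, not Odnoklassniki's actual infrastructure.

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// Minimal sketch of canary-style traffic splitting between two code versions.
public class CanaryRouter {

    private volatile int newVersionPercent; // 0..100, raised gradually or dropped back on rollback

    public CanaryRouter(int initialPercent) {
        this.newVersionPercent = initialPercent;
    }

    public void setNewVersionPercent(int percent) {
        this.newVersionPercent = percent;
    }

    /** Deterministically assigns a user to the old or the new server group. */
    public boolean goesToNewVersion(String userId) {
        CRC32 crc = new CRC32();
        crc.update(userId.getBytes(StandardCharsets.UTF_8));
        long bucket = crc.getValue() % 100; // stable bucket 0..99 for this user
        return bucket < newVersionPercent;
    }

    public static void main(String[] args) {
        CanaryRouter router = new CanaryRouter(5); // start with 5% of visitors
        System.out.println("user42 -> new version? " + router.goesToNewVersion("user42"));
        router.setNewVersionPercent(0);            // roll back if the change misbehaves
        System.out.println("after rollback -> " + router.goesToNewVersion("user42"));
    }
}
```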


- What do you test manually on the project, and what do you hand over to automation? Is manual testing even needed for such a dynamic project? What kinds of errors are caught manually?


We want (and try) to verify automatically that if a user on our site does things the right way, he gets correct results that make sense to him. In other words, we try to run automatically everything in our alpha tests that can be checked automatically.
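As a hedged illustration of what such a "the user does the right thing and sees the right result" check might look like when driven through a browser, here is a small Selenium WebDriver sketch. The page URL, element names, credentials, and expected text are invented for the example and are not Odnoklassniki's real markup or test code.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

// Hypothetical happy-path check: log in with valid data, expect a personal greeting.
public class LoginSmokeTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            driver.get("https://example.test/login");              // placeholder URL
            driver.findElement(By.name("login")).sendKeys("demo_user");
            driver.findElement(By.name("password")).sendKeys("correct-password");
            driver.findElement(By.cssSelector("button[type='submit']")).click();

            String greeting = driver.findElement(By.id("greeting")).getText();
            if (!greeting.contains("demo_user")) {
                throw new AssertionError("Expected the user's name after login, got: " + greeting);
            }
        } finally {
            driver.quit();
        }
    }
}
```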


Manually, on the contrary, we catch what automation cannot: in particular, we deal with flaws that users have complained about, or step by step collect information about why a change in one subsystem of the service could have affected some other subsystem.


- Should catching UI and layout flaws be part of the testers' work, or should designers and layout specialists deal with that, leaving testers to handle functionality in the technical sense?


We try to test the UI as well, and at first everyone does it together, designers and testers alike. After that the testers run their own pass separately, because between the design and the user sits the layout, and even the most beautiful and comfortable design can run into the harsh reality in which that same thought-out, verified design looks or works slightly differently on different devices; and devices, as we know, exist on today's market in great variety. Moreover, user convenience matters to us, and we spend time studying and figuring out how to make things more convenient.


- How do you evaluate a tester's quality? Surely not by the number of bugs filed per unit of time?


Here I have to make a small digression. Our entire big project is internally divided into fairly independent teams, each with its own release schedule, its own approach to development, and its own testing priorities: in some places it is important to plan all actions in depth from the very start, in others to work with the results of alpha testing and draw conclusions from them.
The composition of the teams is usually stable, so that the vision of the product stays in people's heads and the team members understand what they are able to do "well."


So the quality of a tester's work is assessed by how well he does everything needed for the team's releases to ship on schedule.


- A question of planning and, probably, money: instead of a dozen experienced testers, wouldn't it make sense to put, say, fifty trainees on the same amount of work, hoping they will win by numbers rather than skill, plus one or two experienced professionals to sort through what they find? It is "crude and blunt," but perhaps more efficient in the long run?


I think not. Quite the contrary: "throwing meat at the problem" makes at least some sense only if the problem is simple and can in principle be attacked that way, without engaging the head. Our system is too complicated, and you need to know too much about it, for that option to work for us. Even among experienced testers we have just hired, very few people can immediately grasp the variety of the project's systems, so we usually allow about six months just to understand how everything is organized; and, I repeat, that is for an experienced worker who does not need to learn the profession from scratch.


Yes, we gladly take interns, and we teach them until we see their readiness (and they feel the strength) to become full-fledged testers.


- Testers find far more flaws than ever get fixed. Surely this shapes some personal attitude toward the pace of fixes, or does it not? Which attitude is closer to testers: "the bug is found, the job is done," or "as long as there are bugs, you cannot relax"?


Of course, in a living, developing project new flaws will always appear, and testers will never sit idle. But at the same time, every bug found ultimately becomes the reason for the next improvement, which means that in the end we make the world a little bit better.




And that brings this interview to a close. Finally, I would like to note that Nikita and Yulia can be met at the Heisenbug 2017 Moscow conference, which will be held on December 8-9. Yulia will give a talk about the tester's tools, and Nikita will cover white-box testing. The conference will have discussion zones, so you will be able to walk up and ask the speakers your questions.



Source: https://habr.com/ru/post/342954/
