I noticed that there are almost no write-ups about uTest.com, a service I find quite interesting. I have been working with it for six months now: I have tested about a dozen releases, once received an award as the best tester on a project, took part in the Bug Battle competition, and regularly talk with other members and staff on the forum.
In this article I will share my thoughts on what a customer who turns to uTest can expect from this kind of testing and what benefits they can gain from it. If the topic proves interesting, I will later write about what a performer gets by joining this community, what kinds of earnings are available, and how best to start a freelance testing career at uTest. If you have questions on the topic that for some reason cannot be asked in the comments, write to me by email.
What is crowdsourcing?
The term was introduced in June 2006 by Jeff Howe and Mark Robinson of Wired in the article The Rise of Crowdsourcing. According to their definition, crowdsourcing is the practice of a company or organization handing over functions previously performed by employees or contractors to a community of people in the form of an open call. The work may be collaborative, but as a rule it is still done by individuals. The key requirements are the use of an open call and a large pool of potential contributors.
Specifics of this kind of testing
Quite logically, this type of testing is not suitable for every project; for example, it is hardly worth the risk of handing over something genuinely original and without analogues for testing. Yes, on a number of projects you have to sign an NDA, but as far as I know, the personal information testers provide is not verified, so it is entirely possible to avoid liability by entering false data. Moreover, the source of an information leak would be hard to identify, so most likely nobody would even be sued.
Another exception is multi-component projects or those that require complex and expensive equipment. I doubt anyone keeps a Sun or IBM server at home, and running such tests at the office is rather problematic. Complex projects call for a good understanding of how individual components work and interact with each other, solid integration testing skills, and a habit of carefully analyzing the root causes of problems. While in-house testers can talk to developers and use custom-built debugging tools and emulators, experts hired for a single test cycle have none of these advantages.
Judging by how crowdsourced testing is organized, for the majority of community members it is a side job, which conversations on the internal forum confirm. Accordingly, you should not expect testers to focus entirely on your project or to test it deeply and thoroughly.
Let me give an example from my own practice. One afternoon I received a notification that a project had launched in which the localization of several dynamic web pages had to be checked. Since I was at work, I only managed to start testing in the evening. By that time one of my Russian-speaking colleagues had already found about 15 localization defects and finished testing. Since the simplest defects had already been found, I had to make an effort, but I managed to find about 10 more complex and less obvious problems. The next day I looked at the defect list for this project and saw that 5 more defects had been found on the same pages after me.
Thus, on the one hand, the quality of testing improves, since everyone works independently and tries to find as many defects as possible to maximize their earnings. On the other hand, you cannot be sure of the quality of testing, because a tester may accomplish his own task (earn a certain amount with reasonable effort) without accomplishing yours: assessing the real quality of the product.
Another bottleneck is test coverage. Under this scheme, testers are interested in finding the maximum number of defects. When testing is done in-house or by a hired team, you can specify the depth of testing for each functional area and the criteria for suspending testing; with uTest I did not encounter such constraints. Testing here resembles ore mining: an area with a high density of defects is located and worked, while exploring the others is simply unprofitable. One might expect that once the "vein" is exhausted testers will move on to areas less saturated with defects, but the time and budget constraints of the project have to be taken into account. In addition, where "your own" tester would analyze a malfunction, track down its root cause, and file a single defect, the crowd will most likely file a separate defect for each individual symptom, so this becomes a matter of trusting the professionalism of someone you do not know.
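To make the contrast concrete, here is a minimal sketch, purely my own illustration (uTest offers nothing of the kind, and the area names and numbers are made up), of the per-area depth and suspension criteria an in-house team can enforce:

```python
# A hypothetical per-area coverage plan: minimum number of test cases, and how
# many consecutive cases without a new defect allow the area to be suspended.
COVERAGE_PLAN = {
    "checkout":     {"min_cases": 40, "quiet_window": 10},
    "localization": {"min_cases": 25, "quiet_window": 5},
    "reports":      {"min_cases": 15, "quiet_window": 5},
}

def may_suspend(area: str, cases_run: int, cases_since_last_defect: int) -> bool:
    """An area may be suspended only after the planned depth is reached and the
    defect flow has dried up -- a rule a pay-per-bug crowd has little incentive
    to follow, since it is more profitable to keep mining the richest 'vein'."""
    plan = COVERAGE_PLAN[area]
    return (cases_run >= plan["min_cases"]
            and cases_since_last_defect >= plan["quiet_window"])
```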
There is another snag: to join this community, testers do not have to confirm their qualifications. So, when handing a release over for testing, be prepared that anyone may work on it, including people with little idea of what testing is or with poor English. Several times I came across defects that clearly contradicted what was stated in the release requirements; some participants behave rather foolishly during "live" load tests and interfere more than they help, sometimes openly ignoring the requests of the product manager who is trying to keep order. Of course, there are closed projects open only to well-proven testers with a high rating (the ratio of correctly filed and accepted defects to the total number reported) or to those who have distinguished themselves in the quarterly testing competitions, but such projects will obviously cost the customer more.
Finally, one of the more unpleasant problems, whose solution is currently being discussed on the internal forum: the correctness of the declared criticality and type of a defect. By inflating the criticality level and picking a more "expensive" type, performers may try to increase their earnings. As one tester wrote on the forum, on some projects up to 35% of the budget was lost to overstated figures. Project managers and customer representatives currently fight this, but it adds to their workload and sometimes still goes unnoticed. In the new version of the platform, released about a month ago, these parameters affect the payment for an accepted defect to a lesser extent, but this does not completely rule out such manipulation.
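To show why this matters to the budget, here is a back-of-the-envelope sketch; the base rate and severity multipliers are invented for illustration and have nothing to do with uTest's actual payment scheme:

```python
# Purely hypothetical numbers, NOT uTest's real payment scheme. The point is
# only to show how overstating criticality inflates the payout for the same
# set of defects.
BASE_RATE = 5.0                      # hypothetical payout for an accepted defect, $
SEVERITY_MULTIPLIER = {"low": 1.0, "medium": 1.5, "high": 2.5, "critical": 4.0}

def payout(severity: str) -> float:
    return BASE_RATE * SEVERITY_MULTIPLIER[severity]

# Twenty genuinely minor defects, reported honestly vs. reported as "high":
honest   = 20 * payout("low")        # 100.0
inflated = 20 * payout("high")       # 250.0
print(f"overspend: ${inflated - honest:.0f} "
      f"({inflated / honest - 1:.0%} above the honest budget)")
```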
Customer benefits
The previous section may suggest that crowdsourcing is inefficient, but that is not the case. The main thing is to weigh all the pros and cons in advance and find the right approach. Besides, uTest employees are interested in long-term cooperation, so they will help develop a strategy that best fits the customer's needs in the first place.
By turning to the community you get at your disposal a whole army of testers, ready to work around the clock and owning almost every device and system available to the average user.
Checking the localization of a project no longer presents the slightest difficulty, because you get access to native speakers of practically any language who will evaluate the correctness of the translation with regard to the cultural peculiarities of the specific country you plan to enter.
Quality control can be performed in the shortest possible time, because these testers are ready to work at night and seven days a week (in fact, that is when they usually work), just to get ahead of their colleagues in filing defects. Given that every time zone is represented, the customer gets truly continuous testing: hand a release over on Friday evening and you will have a report on the work done by Monday morning, with an almost arbitrary number of man-hours behind it.
It is no secret that after working in one company for a long time, testers, like other specialists, get used to certain methodologies and approaches, and product testing ends up somewhat one-sided. By attracting specialists from around the world, the company can see its project through the prism of perhaps every existing school and methodology and synthesize a more reliable assessment of quality. It also gives a fresh pair of eyes that will notice the problems your own specialists have grown used to and no longer document.
The customer receives end-user feedback before entering the market, which can significantly reduce the cost of rework after a commercial launch. Moreover, unlike ordinary users, who may not know what they really want, what will be convenient for them, or how to explain their needs, the reviews are written by professionals who do this every day and know the implementation details of existing systems. It may well happen that a competitor's tester works on your product and helps make the new product free of the shortcomings of an existing one!
Crowdsourcing is very convenient for companies developing products for mobile devices. Since many such devices have unique characteristics, such as screen size and resolution, memory size, processor frequency, and platform quirks, testing on every device declared as supported can be a real headache. And what if you need to make sure a multi-component application works well on the networks of different mobile operators? It may simply be impossible without crowdsourcing. You do not have to resort to it for every build, but it certainly will not hurt to do so before a release.
The remaining benefits for the customer will depend on the specifics of their business, but I would like to mention that uTest's clients include such well-known brands as Google, Microsoft, ICQ, and Babylon. A more complete list can be found on their website:
www.utest.com/customers