Testing is a crucial stage in mobile application development.
The cost of a mistake in a mobile release is high. An update reaches Google Play within a few hours, but the App Store takes a few weeks, and there is no telling how long users will take to update. Bugs provoke a storm of negativity: users leave low ratings and scathing reviews, and new users, seeing this, do not install the application.
Mobile testing is a complex process: dozens of screen resolutions, hardware differences, several OS versions, different types of Internet connection, sudden connection drops.
That is why our testing department has eight people (0.5 testers per programmer), and a dedicated test lead oversees its processes and growth.
Under the cut, I'll tell you how we test mobile applications.

Testing Requirements
Testing begins before development. The design department hands the testers a navigation scheme and screen mockups; the project manager adds the requirements that are invisible in the design. If the design is provided by the customer, our designers check the mockups before they go to the testing department.

The tester analyzes the requirements for completeness and consistency. On every project the initial requirements contain conflicting information, and we resolve those conflicts before development starts. On every project the requirements are also incomplete: mockups of secondary screens are missing, input field constraints and error states are unspecified, buttons lead nowhere. Things that are invisible on mockups are easy to miss: animations, caching of images and screen contents, behavior in non-standard situations.
Gaps in the requirements are discussed with the project manager, developers, and designers. After 2-3 iterations, the whole team understands the project much better, recalls forgotten functionality, and settles decisions on controversial points.
At this stage we mostly work in Basecamp.
When the requirements are complete and consistent, the tester compiles smoke and functional test cases covering them. The tests are divided into those common to all platforms and platform-specific ones. We use Sitechco to store and run the tests.

For example, 1856 tests were written at this stage for the Trava project.
The first step of testing is complete. The project goes into development.
Build server
All our projects are built on a TeamCity build server.

When the project manager ticks the "for testing" checkbox, the testers get an email about the new build. Its number is displayed on a monitor in the testers' office. Builds released within the last 24 hours are shown in red; they need to be tested more actively than the white ones.

Without the "magic monitor" (which, by the way, runs on Android), testers often worked on stale builds, and a new build with bugs would reach the customer. Now a glance at the monitor before running test cases resolves the confusion.
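The monitor app itself is tied to our setup, but the idea is simple; here is a minimal sketch that polls TeamCity's REST API for the latest builds (the host, credentials, and build type ID are hypothetical):

# Fetch the five most recent builds of a build configuration (hypothetical IDs/credentials)
curl -s -u tester:secret \
  "http://teamcity.example.com/httpAuth/app/rest/builds?locator=buildType:MobileApp_Release,count:5"
# The XML response includes build numbers and start dates; builds started
# within the last 24 hours are the ones the monitor highlights in red.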
Builds are tested in one of two modes: quick or full.
Quick testing
Quick testing is carried out after a development iteration is finished, if the build is not going to release.
First, smoke tests are run to see whether it makes sense to test the build at all.
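Our smoke suite lives in Sitechco, but the most basic launch check can be sketched over adb (the activity name is hypothetical; the package name is taken from the monkey command below):

adb install -r build.apk                              # put the fresh build on the device
adb shell am start -n ru.stream.droid/.MainActivity   # launch the main screen (hypothetical activity)
adb logcat -d | grep "FATAL EXCEPTION"                # any hit here means the build failed the smoke check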
Then all completed tasks and fixed bugs for the iteration are pulled from Jira, and the result is meticulously checked against each task's description. If a task involved new interface elements, it goes to the designers to be verified against the mockups.
Incorrectly completed tasks are reopened. Bugs are filed in Jira. Logs from the smartphone must be attached to non-UI bugs; UI bugs get screenshots annotated with what is wrong.
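Both kinds of attachments can be pulled straight from the device over adb; a sketch with hypothetical file names:

adb logcat -d > bug-1234.log                  # dump the current log buffer for a non-UI bug
adb shell screencap -p /sdcard/bug-1234.png   # take a screenshot on the device for a UI bug
adb pull /sdcard/bug-1234.png .               # copy it to the workstation for annotation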
After that, the functional tests for this iteration are run. If a bug not covered by the test cases is found, a new test case is created.
For Android applications we also run monkey tests:

adb shell monkey -p ru.stream.droid --throttle 50 --pct-syskeys 0 --pct-appswitch 0 -v 5000
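For reference, the flags break down as follows (standard monkey options):

# -p ru.stream.droid    send events only to this application's package
# --throttle 50         pause 50 ms between generated events
# --pct-syskeys 0       no system-key events (Home, Back, volume and so on)
# --pct-appswitch 0     no forced activity launches
# -v                    verbose output
# 5000                  total number of pseudo-random events to inject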
At the end of testing, the "testing bugs passed" checkbox is ticked on the build server (yes, the checkbox's name is not quite right :).
If no blocker, critical, or major bugs were found during testing, the "can be shown to the customer" checkbox is ticked as well. No build is sent to the customer without the testing department's approval. (By agreement with the customer, builds with major bugs are sometimes sent anyway.)
The severity of a bug is determined according to a table.

Upon completion of testing, the PM receives a detailed report by email.

Full testing
Full testing is carried out before a release. It includes quick testing, regression testing, monkey testing on 100 devices, and update testing.
Regression testing means running ALL test cases for the project: not only those for the last iteration, but those for all previous iterations, plus the general test cases based on the requirements. Depending on the project, it takes from one to three days per device.
A very important step is update testing. Almost all applications store data locally (even if it is only a login cookie), and it is important to make sure that all user data survives the update. The tester installs the build from the market, creates saved data (a login, playlists, entries in a finance tracker), updates the application to the test build, and checks that everything is still in place. Then he runs a smoke test. The process is repeated on 2-3 devices.
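The same check can be reproduced from the command line; a sketch, where old.apk stands for the market build and new.apk for the test build (hypothetical file names):

adb install old.apk      # install the build currently in the market
# ...log in, create playlists and other user data in the app...
adb install -r new.apk   # update in place; -r reinstalls while keeping the app's data
# re-run the smoke test and confirm the user data survived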
Developers often forget about data migration from old versions, and update testing has caught many critical bugs: crashes, loss of users' purchase data. It has saved more than one application from angry reviews and audience loss.
We run monkey tests on the release candidate on 10 iOS and 80 Android devices using the Appthwack service.
At the end of full testing, in addition to the email, a detailed report is compiled by hand.

A build goes to release only when 100% of the test cases pass.
Testing external services
Testing integration with Google Analytics, Flurry, or a customer's own analytics is not easy. It has happened that a release shipped with broken Google Analytics and nobody noticed.
Therefore, a test account for each external service is now created as a mandatory step and checked during full testing. In addition, every dispatch of statistics is written to the logs, which the testers inspect. At release, the test account is swapped for the production one.
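With Google Analytics, for example, outgoing hits can be verified in the device log; a sketch assuming the Google Play Services SDK, which logs under the GAv4 tag:

adb shell setprop log.tag.GAv4 DEBUG   # enable verbose Analytics logging on the device
adb logcat -v time -s GAv4             # watch the hits to confirm statistics are actually sent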
Time tracking
Testers log their time in a separate Jira project. Separate tasks are created for compiling test cases, test runs, and writing project reports, and the time spent is recorded with Jira's standard tools.
UPD: tell us how testing is organized where you work, at least how many testers you have per developer.
Subscribe to our blog. Every Thursday we publish useful articles about mobile development, marketing, and the business of a mobile studio.