"Junior performance testers" do not exist; there are only people who are beginning to do performance testing. (c) Scott Barber (aka The Perf Guy)
In software testing there is a "generally accessible" area, functional testing, where entry is open to everyone, and there are a number of areas with a fairly high entry threshold; performance testing is among them.
This type of testing requires a good command of your weapons; you cannot take it on with bare hands. First, you need the weapon itself: performance testing requires the ability to use specialized tools. Second, you need to study your opponent carefully: you need a good understanding of the protocols the program under test uses to interact with the outside world, as well as its internal physical and logical architecture. And, of course, you need to master the techniques: to know what kind of load to apply to the application under test, how to apply it, and what to look at in order to identify performance problems.
On February 18, a new online training course begins: "Performance Testing", six lessons authored and taught by Alexey Barantsev.
In this training we will learn to handle these weapons:
- learn about tools designed to generate load and monitor various performance characteristics,
- learn how to use these tools to generate loads of various types,
- examine typical architectural patterns of applications and the associated sources of potential performance problems,
- consider ways to identify performance problems based on the analysis of monitoring results.
However, this is only the first stage. In performance testing, not only is the entry threshold high; the second step is also quite difficult to climb.
In the second part of the training, designed for those who have already mastered the simple techniques of performance testing, we will look more deeply at the nine basic principles of performance testing highlighted by Scott Barber:
- Context - the external project context in which performance testing is performed,
- Criteria - what counts as a successful result from the point of view of users, the business, the project, and the system,
- Test planning and design - what tests are needed, how they can be done, and how much time and resources they will take,
- Setup - preparing the test bench, as well as the tools for load generation and monitoring,
- Test implementation - developing the tests according to the plan drawn up earlier,
- Execution - running the tests, monitoring, and collecting data on the system's performance characteristics,
- Analysis of results - assessing the quality and reliability of the collected data and identifying performance problems,
- Consolidation of results - processing the data for a more convenient presentation,
- Iterative approach - repeating testing at different stages or with different variations.
Detailed course program:
First lesson
1. Introduction to performance testing - why we conduct it and what errors we can detect.
2. Three basic components: creating a load, collecting metrics, analyzing results.
3. Overview of load generation tools. A closer look at the JMeter tool.
4. Creating a simple load generator - recording user actions and playing back the recorded script in several threads.
5. Debugging a script - how to understand what is really happening there.
6. Collecting the main metrics - response time, throughput, number of failures.
7. Analyzing the collected results and preparing a report.
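The core of the first lesson (generating load in several threads and collecting the basic metrics) can be sketched in a few lines of Python. This is not JMeter itself, just a minimal illustration of the same idea under assumptions of my own: a local stub server stands in for the system under test, several threads replay a request against it, and response time, throughput, and failure count are computed at the end.

```python
import threading
import time
import statistics
from http.server import HTTPServer, BaseHTTPRequestHandler
from urllib.request import urlopen

# Minimal stub service standing in for the system under test (an assumption
# for this sketch; in the course the target is a real application).
class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

response_times = []   # collected metric: per-request response time
failures = 0          # collected metric: number of failed requests
lock = threading.Lock()

def worker(requests_per_thread):
    """One virtual user: replay the same request in a loop."""
    global failures
    for _ in range(requests_per_thread):
        start = time.perf_counter()
        try:
            urlopen(url).read()
            elapsed = time.perf_counter() - start
            with lock:
                response_times.append(elapsed)
        except OSError:
            with lock:
                failures += 1

# 5 threads x 10 requests = 50 requests of load.
threads = [threading.Thread(target=worker, args=(10,)) for _ in range(5)]
t0 = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
total = time.perf_counter() - t0
server.shutdown()

print(f"requests: {len(response_times)}, failures: {failures}")
print(f"avg response time: {statistics.mean(response_times) * 1000:.1f} ms")
print(f"throughput: {len(response_times) / total:.1f} req/s")
```

Real tools add a great deal on top of this (ramp-up, distributed generation, rich reporting), but the three basic components from the lesson are all here: creating a load, collecting metrics, analyzing results.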
Second lesson
1. Once again, the main thing - why we conduct performance testing and what errors we can detect.
2. System performance requirements. Requirements analysis and determination of testing objectives.
3. What a load model is and how it relates to the goals of testing.
4. Translating goals into practice - designing test scenarios and arranging them according to the load model.
5. Test data, parameterization and correlation of test scenarios.
6. Additional script elements - delays between requests, response verification.
7. Additional metrics - collecting performance data on the operating system, the application server, and the DBMS.
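The script elements from points 5 and 6 above (parameterization, response verification, and a think-time delay between requests) can be sketched as follows. The `send` transport, the `fake_send` stub, and the account pool are hypothetical stand-ins invented for this illustration, not part of any real protocol client:

```python
import random
import time

# Hypothetical test-data pool for parameterization: each iteration logs in
# with a different account instead of replaying one hard-coded value.
accounts = ["alice", "bob", "carol"]

def scripted_step(send):
    """One scenario iteration: parameterized request, response check, think time."""
    user = random.choice(accounts)          # parameterization
    response = send(f"/login?user={user}")  # `send` is a stand-in transport
    assert "welcome" in response, "response verification failed"
    time.sleep(random.uniform(0.05, 0.1))   # think-time delay between requests
    return user

# A stub transport standing in for the real client of the system under test.
def fake_send(path):
    return "welcome " + path.split("=")[1]

print(scripted_step(fake_send))  # prints one of the account names
```

Without such elements a recorded script tends to hammer the system with identical, unnaturally dense requests; parameterization and think time are what make the load resemble the intended load model.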
Third lesson
1. And once again about the main thing - why we conduct performance testing and what errors we can detect.
2. A little about the architecture of distributed systems and the sources of performance problems.
3. False-positive and false-negative results, possible causes of their occurrence.
4. Typical load models and the kinds of problems they are designed to detect.
5. Complex test scenarios, various protocols of interaction with the system under test.
6. Distributed testing, generating loads from multiple sources, collecting metrics in a distributed environment.
7. Interpreting the results - what pitfalls lie here and how to avoid them.
Course Format
An online training consisting of two stages of varying complexity, with weekly online classes and practical homework.
Each stage consists of three classes.
Online classes will be held every Thursday from 15:00 to 16:30.
Each course participant will receive feedback from the trainer on their homework. The most common mistakes will be analyzed in the online classes.
Between classes, you can ask the trainer questions in the forum at any time.
If you miss a class, you will receive its recording and the homework assignment.
Terms of participation
The first-stage classes will be held on February 18, February 25, and March 4.
The second-stage classes will be held on April 8, 15, and 22.
The cost of participation is 2500 rubles for one stage, or 4000 rubles for both.
Registration Terms