This is a translation of the original article, supplemented with reflections and additions drawn from the author's own experience.
What is it all about
As a testing engineer, you have probably heard of such types of testing as smoke, sanity, re-testing, and regression testing. Quite possibly you use many of them on a daily basis.
In this article I would like to clarify the difference between these types of testing and try to draw boundaries (albeit conditional ones) where one type of testing ends and another begins.
For beginners in testing (and even for experienced testers), separating these concepts can be difficult. Indeed, how do you tell where smoke testing ends and sanity testing begins? How far do we need to limit the checks of a part of the system's functionality for them to still count as smoke testing? Is entering a login and password into the sign-in form a smoke test, or does the mere fact that the form appears on the page already pass the test?
Strictly speaking, you will still be able to test even if you cannot say exactly what the difference is. You may not even think about which kind of testing you are busy with at the moment. But to grow professionally, you need to know what you are doing, why you are doing it, and whether you are doing it correctly.
A quick primer
Below are brief definitions of the types of testing we are comparing today:
- Smoke tests: run every time we receive a new build (version) of a project (system) for testing, while it is still considered relatively unstable. We need to make sure that the critical functions of the AUT (Application Under Test) work as expected. The idea of this type of testing is to reveal serious problems as early as possible and reject the build (return it for rework) at an early stage, so as not to dive into long and complex tests and waste time on obviously defective software.
- Sanity testing: performed every time we receive a relatively stable software build, in order to examine its behaviour in more detail. In other words, it validates that important parts of the system's functionality work according to the requirements at a lower level.
Both of these types of testing aim to avoid wasting time and effort: to quickly reveal software flaws and their criticality, and to decide whether the build deserves to move on to a deeper, more thorough testing phase or not.
- Re-test: carried out when a feature or piece of functionality already had defects, and those defects have recently been fixed.
- Regression tests: what actually takes the lion's share of time and the reason testing gets automated. Regression testing of the AUT is carried out when we need to make sure that new (added) application features or fixed defects did not affect the current, already existing functionality that worked (and was tested) before.
For a better understanding, a comparison table of these concepts and their scope is presented below:
| Smoke | Sanity | Regression | Re-test |
|---|---|---|---|
| Performed to verify that the critical parts of the AUT work as expected. | Aims to establish that certain parts of the AUT still work as expected after minor changes or bug fixes. | Confirms that recent changes to the code or the application as a whole did not adversely affect the existing functionality / feature set. | Re-checks and confirms that previously failed test cases pass after the defects have been fixed. |
| The goal is to check the "stability" of the system as a whole in order to give the green light to more thorough testing. | The goal is to check the overall state of the system in detail before moving on to more thorough testing. | The goal is to make sure that recent changes in the code have no side effects on established, working functionality. | Re-test verifies that the defect is fixed. |
| Re-checking defects is not a goal of smoke testing. | Re-checking defects is not a goal of sanity testing. | Re-checking defects is not a goal of regression testing. | Confirming that a defect has been fixed is exactly what re-testing does. |
| Smoke testing is performed before regression. | Sanity testing is performed after smoke tests and before regression. | Carried out based on project requirements and available resources (often covered by automated tests); regression can run in parallel with re-tests. | Re-testing is performed before sanity testing; its priority is also higher than that of regression checks, so it should be run before them. |
| Can be done automatically or manually. | More often performed manually. | The best candidate for automation, since running it manually can be extremely expensive in terms of resources and time. | Does not lend itself to automation. |
| Is a subset of regression testing. | Is a subset of acceptance testing. | Carried out whenever an existing project is modified or changed. | Performed on the fixed build, using the same data and the same environment, but with a different set of input data. |
| Test cases are part of the regression test cases, but cover only the most critical functionality. | Sanity testing can be performed without test cases, but knowledge of the system under test is mandatory. | Regression test cases can be derived from functional requirements or specifications and user manuals, and are carried out regardless of what the developers have fixed. | The same test case that originally revealed the defect is used. |
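To show how these categories can coexist in a single automated suite, here is a minimal pytest sketch. It is only an illustration under assumptions: the base URL, the endpoint paths, and the marker names are invented for the example and are not taken from any real project.

```python
import pytest
import requests

BASE_URL = "http://localhost:8000"  # hypothetical address of the service under test

# Markers let us pick the right subset at the right moment, e.g.:
#   pytest -m smoke       fast gate on a fresh, possibly unstable build
#   pytest -m sanity      deeper check of key features on a stable build
#   pytest -m regression  the full, mostly automated suite
# (register the markers in pytest.ini to avoid "unknown marker" warnings)

@pytest.mark.smoke
def test_service_answers_at_all():
    # Critical path only: the service responds and does not "smoke".
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code < 400

@pytest.mark.sanity
def test_user_endpoint_returns_expected_fields():
    # A key feature behaves according to the requirements at a lower level.
    response = requests.get(f"{BASE_URL}/users/1", timeout=5)
    assert response.status_code == 200
    assert "email" in response.json()

@pytest.mark.regression
def test_password_recovery_still_works():
    # Existing behaviour is unchanged after new features or bug fixes.
    response = requests.get(f"{BASE_URL}/password-recovery", timeout=5)
    assert response.status_code == 200
```

A re-test in this scheme is simply re-running the exact test that failed before the fix, for example `pytest -k test_user_endpoint_returns_expected_fields`.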
Well, in essence?
Let me give an example of how these concepts are delineated on my current project.
Example: we have a web service with a user interface and a RESTful API. As testers, we know:
- It has 10 entry points (endpoints) which, for simplicity, in our case are hosted at the same IP address.
- Each of them accepts a GET request and returns some data in JSON format.
From this we can make a series of statements about which types of tests to use at which point in time:
- By executing one simple GET request against one of these endpoints and receiving a response in JSON format, we have already passed smoke testing. If one of the endpoints also returns data from the database while the first one does not, we need to perform one more request to make sure that the application handles database queries correctly. At that point the smoke test is over. In other words, we sent a request and the service answered without "smoking": it did not return a 4xx or 5xx error, or something unintelligible instead of JSON. For the UI, it is enough to simply open the page in the browser once to check that it works in the same way.
- Sanity testing in this case consists of executing a request against all 10 API endpoints, comparing the received JSON with the expected one, and checking that the required data is present in it (a code sketch of both checks is given after this list).
- Regression tests consist of the smoke, sanity, and UI checks run together as one suite. The objective: to verify that adding an 11th endpoint did not break, for example, password recovery.
- Re-testing in this example is a spot check that, for example, an endpoint that was broken in the previous build works as intended in the next one.
At the same time, if this API also accepts POST requests, then obviously those requests should go into a separate set of sanity tests. By analogy, for the UI we would check all pages of the application.
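As a rough sketch of the smoke and sanity checks described above: the base URL, the endpoint paths, and the shape of the expected JSON below are made up for illustration, and a real project would substitute its own.

```python
import requests

BASE_URL = "http://192.0.2.10"  # the single IP hosting all endpoints (example address)
ENDPOINTS = [f"/api/v1/resource{i}" for i in range(1, 11)]  # the 10 GET endpoints


def smoke_check():
    """One simple GET: the service answers, returns valid JSON, no 4xx/5xx."""
    response = requests.get(BASE_URL + ENDPOINTS[0], timeout=5)
    assert response.status_code < 400, f"service 'smoked': HTTP {response.status_code}"
    response.json()  # raises ValueError if the body is not valid JSON


def sanity_check(expected_by_endpoint):
    """Hit all 10 endpoints and compare the received JSON with the expected data."""
    for endpoint in ENDPOINTS:
        response = requests.get(BASE_URL + endpoint, timeout=5)
        assert response.status_code == 200, f"{endpoint}: HTTP {response.status_code}"
        payload = response.json()
        for key, expected_value in expected_by_endpoint[endpoint].items():
            assert payload.get(key) == expected_value, (
                f"{endpoint}: unexpected value for '{key}'"
            )
```

The regression run would then execute both of these together with the UI checks, while a re-test would re-run only the check for the endpoint that was previously broken.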
To sum up
I hope that after reading this article you will have more clarity about which type of testing you use at which stage, and what the difference between these types of testing is. As mentioned at the beginning, the boundaries between these concepts are quite conditional and remain at the discretion of your project.
UPD:
The term sanity testing (also rendered as "consistency testing") is often called "sanitary testing". I think this is due to the phonetics of the English word sanity, which sounds similar to "sanitary". Google Translate clarifies the matter, and both variants can be found on the Internet. Wherever "sanitary testing" appears in connection with this article, please read it as sanity (consistency) testing.
Thanks to astenix for the tip.