Hello!

We continue our article introducing testing in Python, which we have prepared for you as part of our "Python Developer" course.

Testing in Django and Flask Web Frameworks
If you are writing tests for web applications built with one of the popular frameworks, such as Django or Flask, you should keep in mind some important differences in how such tests are written and run.
How Are They Different from Other Applications?

Think about the code you want to test in a web application. All the routes, views, and models require many imports and a lot of knowledge about the framework being used.
This is similar to the car-testing example from the first part of the tutorial: before you can run simple tests, such as checking that the headlights work, you need to turn on the car's onboard computer.
Django and Flask simplify this task by providing a unittest-based test framework. You can continue writing tests in the usual way, but run them a little differently.
How to Use the Django Test Runner
The Django startapp template creates a tests.py file in your application directory. If it does not already exist, create it with the following content:
from django.test import TestCase

class MyTestCase(TestCase):
    pass  # your test methods go here
The main difference from the previous examples is that you inherit from django.test.TestCase rather than unittest.TestCase. These classes have the same API, but the Django TestCase class sets everything up for testing.
To execute the test suite, use manage.py test instead of unittest on the command line:

$ python manage.py test
If you need several test files, replace tests.py with a tests folder, put an empty file called __init__.py inside it, and create your test_*.py files there. Django will discover and execute them.
More information is available on the Django documentation site.
How to Use unittest and Flask
To work with Flask, the application must be imported and switched into testing mode. You can then create a test client and use it to send requests to any of the routes in your application.
The test client is instantiated in the setUp method of your test case. In the following example, my_app is the name of the application. Do not worry if you do not know what setUp does; we will get acquainted with it in the "More Advanced Test Scripts" section.
The code in the test file will look like this:
import my_app
import unittest

class MyTestCase(unittest.TestCase):

    def setUp(self):
        my_app.app.testing = True
        self.app = my_app.app.test_client()

    def test_home(self):
        result = self.app.get('/')
Then the test cases can be run using python -m unittest discover.
More information is available on the Flask documentation website.
More Advanced Test Scripts

Before you start creating tests for your application, remember the three main steps of any test:
- Create the input parameters;
- Execute the code and capture the output;
- Compare the output with the expected result.
This can be more complicated than creating a static value, such as a string or a number, for the input. Sometimes your application requires an instance of a class or a context. What do you do in that case?
The data that you create as input is known as a fixture. Creating fixtures and reusing them is common practice.
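As a minimal sketch of a reusable fixture (the `make_user` factory and `User` class below are illustrative names, not from the article), a fixture can simply be a helper function shared by several tests:

```python
import unittest

class User:
    """Illustrative domain object used as test input."""
    def __init__(self, name, age):
        self.name = name
        self.age = age

def make_user(name="alice", age=30):
    """Fixture factory: builds a known-good User that tests can reuse."""
    return User(name, age)

class TestUser(unittest.TestCase):
    def test_default_name(self):
        # Each test gets a fresh fixture from the same factory
        user = make_user()
        self.assertEqual(user.name, "alice")

    def test_custom_age(self):
        user = make_user(age=42)
        self.assertEqual(user.age, 42)
```

Keeping fixture creation in one place means a change to the User constructor only has to be fixed in the factory, not in every test.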
Running the same test several times with different values, expecting the same kind of result each time, is called parameterization.
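unittest supports a basic form of parameterization through subTest. A small sketch, using the built-in sum() rather than the tutorial's my_sum module; subTest reports each failing case separately instead of stopping at the first one:

```python
import unittest

class TestSumParameterized(unittest.TestCase):
    def test_sum_many_inputs(self):
        # Each tuple is (input list, expected result)
        cases = [
            ([1, 2, 3], 6),
            ([0, 0, 0], 0),
            ([-1, 1], 0),
        ]
        for data, expected in cases:
            # subTest labels each run with its parameters in the report
            with self.subTest(data=data):
                self.assertEqual(sum(data), expected)
```

Third-party runners such as pytest offer richer parameterization, but subTest is available in the standard library since Python 3.4.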
Handling Expected Errors

Earlier, when we compiled a list of scenarios for testing sum(), a question came up: what happens when you provide it with a bad value, such as a single integer or a string?

In that case, you expect sum() to raise an error, and when an error is raised, the test fails.
There is a special way to handle expected errors. You can use .assertRaises() as a context manager and perform the test steps inside the with block:
import unittest

from fractions import Fraction
from my_sum import sum

class TestSum(unittest.TestCase):

    def test_list_int(self):
        """Test that it can sum a list of integers"""
        data = [1, 2, 3]
        result = sum(data)
        self.assertEqual(result, 6)

    def test_list_fraction(self):
        """Test that it can sum a list of fractions"""
        data = [Fraction(1, 4), Fraction(1, 4), Fraction(2, 5)]
        result = sum(data)
        self.assertEqual(result, 1)

    def test_bad_type(self):
        data = "banana"
        with self.assertRaises(TypeError):
            result = sum(data)

if __name__ == '__main__':
    unittest.main()
This test case will pass only if sum(data) raises a TypeError. You can replace TypeError with any other exception type.
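For instance, the same pattern works with ValueError. Here is a small self-contained example using the built-in int(), which reliably raises ValueError for a non-numeric string:

```python
import unittest

class TestBadLiteral(unittest.TestCase):
    def test_non_numeric_string(self):
        # int() raises ValueError when the string is not a valid number,
        # so the assertRaises block passes
        with self.assertRaises(ValueError):
            int("banana")
```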
Isolating Behavior in Your Application

In the last part of the tutorial we talked about side effects. They complicate unit testing, since each test run may produce a different result, or, worse, one test may change the state of the entire application and cause another test to fail!

There are some simple techniques for testing parts of an application that have many side effects:

- Refactoring the code to follow the Single Responsibility Principle;
- Mocking out method and function calls to remove side effects;
- Using integration tests instead of unit tests for that piece of the application.
If you're not familiar with mocking, check out the great examples in Python CLI Testing.
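As a minimal stdlib-only sketch of the mocking technique (the `price_in` function and its exchange-rate client are hypothetical names invented for this example), unittest.mock lets a test replace a side-effecting collaborator with a controllable stand-in:

```python
import unittest
from unittest import mock

def price_in(client, currency, amount_usd):
    """Code under test: converts a USD amount using a remote exchange rate.

    In production, client.fetch_rate would perform a network call
    (a side effect); in the test below it is replaced by a mock.
    """
    return amount_usd * client.fetch_rate(currency)

class TestPriceIn(unittest.TestCase):
    def test_conversion_uses_mocked_rate(self):
        # The mock stands in for the real client, so no network is touched
        client = mock.Mock()
        client.fetch_rate.return_value = 0.5
        self.assertEqual(price_in("EUR" == "EUR" and client or client, "EUR", 10), 5.0)
        # The mock also records how it was called, which we can verify
        client.fetch_rate.assert_called_once_with("EUR")
```

Passing the client in as a parameter (dependency injection) is what makes the side effect easy to swap out; mock.patch can achieve the same for code that imports its collaborators directly.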
Writing Integration Tests

So far, we have focused mostly on unit tests. Unit testing is a great way to build predictable and stable code. But in the end, your application has to work when it runs!
Integration testing checks that multiple components of the application work together correctly. Such testing may require you to act like a consumer or user of the application by:
- Calling the HTTP REST API;
- Calling the Python API;
- Calling a web service;
- Running a command-line process.
All these types of integration tests can be written in the same way as unit tests, following the Input, Execute, Assert pattern. The most significant difference is that integration tests check more components at once and therefore have more side effects than unit tests. They also tend to require more fixtures, such as a database, a network socket, or a configuration file.
It is therefore recommended to separate unit tests from integration tests. Creating the fixtures that integration tests need, such as a test database, and running the test cases themselves take much longer than unit tests, so you may want to run integration tests only before pushing to production instead of on every commit.
The simplest way to separate unit and integration tests is to spread them across different folders:
project/
│
├── my_app/
│   └── __init__.py
│
└── tests/
    │
    ├── unit/
    │   ├── __init__.py
    │   └── test_sum.py
    │
    └── integration/
        ├── __init__.py
        └── test_integration.py
You can run a specific group of tests in different ways. To specify the source directory, add the -s flag to unittest discover with the path containing the tests:

$ python -m unittest discover -s tests/integration

unittest will run all the tests in the tests/integration directory.
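The same flag selects the unit-test group. A quick sketch that recreates the layout above and runs just the unit tests (the placeholder test file stands in for your real ones):

```shell
# Recreate the folder layout from the tree above
mkdir -p tests/unit tests/integration
touch tests/unit/__init__.py tests/integration/__init__.py

# A placeholder unit test so discovery has something to run
cat > tests/unit/test_sum.py <<'EOF'
import unittest

class TestSum(unittest.TestCase):
    def test_sum(self):
        self.assertEqual(sum([1, 2, 3]), 6)
EOF

# Run only the unit tests; point -s at tests/integration before a release
python -m unittest discover -s tests/unit
```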
Testing Data-Driven Applications

Many integration tests require backend data, for example a database with specific values. Imagine you need a test to verify that the application works correctly with more than 100 clients in the database, or that the order page displays correctly even when all the product names are in Japanese.
These types of integration tests will depend on different test fixtures to make sure they are repeatable and predictable.
Test data should be stored in a fixtures folder inside the integration tests directory to make clear that it is test data. Then, in your tests, you can load the data and run the test.
Here is an example of a data structure consisting of JSON files:
project/
│
├── my_app/
│   └── __init__.py
│
└── tests/
    │
    ├── unit/
    │   ├── __init__.py
    │   └── test_sum.py
    │
    └── integration/
        │
        ├── fixtures/
        │   ├── test_basic.json
        │   └── test_complex.json
        │
        ├── __init__.py
        └── test_integration.py
Within your test case, you can use the .setUp() method to load the test data from a fixture file at a known path and run several tests against that data. Remember that you can keep multiple test cases in one Python file; unittest will discover and execute them all. You can have one test case for each set of test data:
import unittest
import json

class TestBasic(unittest.TestCase):
    def setUp(self):
        # Load the basic test data from a known fixture path
        # (illustrative; adapt the path and usage to your application)
        with open('fixtures/test_basic.json') as f:
            self.data = json.load(f)
If your application depends on data from a remote location, such as a remote API, make sure your tests are repeatable. Development can be slowed down by tests that fail whenever the API is down or there are connectivity problems. In cases like these, it is better to store remote fixtures locally so they can be recalled and fed to the application.
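A stdlib-only sketch of that idea (the `count_active_users` function, the URL, and the client interface are all hypothetical names for illustration): the remote API's response is stored locally as a fixture, and a mock feeds it to the code under test instead of making a real request:

```python
import json
import unittest
from unittest import mock

def count_active_users(client):
    """Hypothetical code under test: client.get_json would normally
    wrap a real HTTP call to the remote API."""
    payload = client.get_json("https://api.example.com/users")
    return sum(1 for user in payload["users"] if user["active"])

# A locally stored fixture standing in for the remote API's response;
# in practice this would live in a fixtures/*.json file
LOCAL_FIXTURE = json.loads(
    '{"users": [{"name": "a", "active": true}, {"name": "b", "active": false}]}'
)

class TestCountActiveUsers(unittest.TestCase):
    def test_counts_only_active(self):
        # The mock returns the stored fixture, so the test is repeatable
        # even when the real API is down
        client = mock.Mock()
        client.get_json.return_value = LOCAL_FIXTURE
        self.assertEqual(count_active_users(client), 1)
```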
The requests library has a complementary package called responses that lets you create response fixtures and save them in your test folders. Find out more on their GitHub page.
The next part will cover testing in multiple environments and test automation.
THE END
Comments and questions are welcome, as always: here, or at the open day with Stas.