The examples in this book are written using Python 3.6 and pytest 3.2. pytest 3.2 supports Python 2.6, 2.7 and Python 3.3+.
The source code for the Tasks project, as well as for all of the tests shown in this book, is available via a link on the book's web page at pragprog.com. You do not need to download the source code to understand the test code; the test code is presented in a convenient form in the examples. But to follow along with the project, or to adapt the test examples to test your own project (you are encouraged to!), you should go to the book's web page and download the source. The book's web page also has a link for reporting errata and a discussion forum.
So far in this book, I've talked about various non-test files that affect pytest mainly in passing, with the exception of conftest.py, which I discussed in some detail in chapter 5, Plugins, on page 95. In this chapter, we will look at configuration files that affect pytest, discuss how pytest changes its behavior based on them, and make some changes to the Tasks project configuration files.
Before I tell you how to change pytest's default behavior, let's go over all of the non-test files in pytest and, in particular, which ones you should care about.
You should know the following:

• pytest.ini: the primary pytest configuration file, which lets you change pytest's default behavior.
• conftest.py: a local plugin that provides hook functions and fixtures for the directory where the conftest.py file is located and for all of its subdirectories. conftest.py files are described in Chapter 5, "Plugins," on page 95.
• __init__.py: when placed in every test subdirectory, this file allows you to have identical test file names in several test directories. We will look at an example of what goes wrong without __init__.py files in the test directories in "Avoiding File Name Collisions" on page 120.

If you use tox, you will be interested in:

• tox.ini: this file is similar to pytest.ini, but for tox. However, you can place your pytest configuration here instead of having both a tox.ini file and a pytest.ini file, saving yourself a configuration file. Tox is covered in Chapter 7, "Using pytest with Other Tools," on page 125.

If you want to distribute a Python package (for example, Tasks), this file will be of interest:

• setup.py: you can add a few lines to setup.py so that python setup.py test runs all of your pytest tests. If you are distributing a package, you may already have a setup.cfg file, and you can use it to store your pytest configuration. You will see how this is done in Appendix 4, "Packaging and Distributing Python Projects," on page 175.

Regardless of which file you put your pytest configuration into, the format will be basically the same.
For pytest.ini:

ch6/format/pytest.ini
[pytest]
addopts = -rsxX -l --tb=short --strict
xfail_strict = true
... more options ...
For tox.ini:

ch6/format/tox.ini
... tox specific stuff ...
[pytest]
addopts = -rsxX -l --tb=short --strict
xfail_strict = true
... more options ...
For setup.cfg:

ch6/format/setup.cfg
... packaging specific stuff ...
[tool:pytest]
addopts = -rsxX -l --tb=short --strict
xfail_strict = true
... more options ...
The only difference is that the section header for setup.cfg is [tool:pytest] instead of [pytest].
You can get a list of all valid parameters for pytest.ini from pytest --help:
$ pytest --help
...
[pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg file found:

  markers (linelist)                 markers for test functions
  empty_parameter_set_mark (string)  default marker for empty parametersets
  norecursedirs (args)               directory patterns to avoid for recursion
  testpaths (args)                   directories to search for tests when no files or directories are given in the command line.
  console_output_style (string)      console output: classic or with additional progress information (classic|progress).
  usefixtures (args)                 list of default fixtures to be used with this project
  python_files (args)                glob-style file patterns for Python test module discovery
  python_classes (args)              prefixes or glob names for Python test class discovery
  python_functions (args)            prefixes or glob names for Python test function and method discovery
  xfail_strict (bool)                default for the strict parameter of xfail markers when not given explicitly (default: False)
  junit_suite_name (string)          Test suite name for JUnit report
  junit_logging (string)             Write captured log messages to JUnit report: one of no|system-out|system-err
  doctest_optionflags (args)         option flags for doctests
  doctest_encoding (string)          encoding used for doctest files
  cache_dir (string)                 cache directory path.
  filterwarnings (linelist)          Each line specifies a pattern for warnings.filterwarnings. Processed after -W and --pythonwarnings.
  log_print (bool)                   default value for --no-print-logs
  log_level (string)                 default value for --log-level
  log_format (string)                default value for --log-format
  log_date_format (string)           default value for --log-date-format
  log_cli (bool)                     enable log display during test run (also known as "live logging").
  log_cli_level (string)             default value for --log-cli-level
  log_cli_format (string)            default value for --log-cli-format
  log_cli_date_format (string)       default value for --log-cli-date-format
  log_file (string)                  default value for --log-file
  log_file_level (string)            default value for --log-file-level
  log_file_format (string)           default value for --log-file-format
  log_file_date_format (string)      default value for --log-file-date-format
  addopts (args)                     extra command line options
  minversion (string)                minimally required pytest version
  xvfb_width (string)                Width of the Xvfb display
  xvfb_height (string)               Height of the Xvfb display
  xvfb_colordepth (string)           Color depth of the Xvfb display
  xvfb_args (args)                   Additional arguments for Xvfb
  xvfb_xauth (bool)                  Generate an Xauthority token for Xvfb. Needs xauth.
  ...
You will see all of these settings in this chapter, with the exception of doctest_optionflags, which is discussed in Chapter 7, "Using pytest with Other Tools," on page 125.
The previous list of settings is not fixed. Plugins (and conftest.py files) can add their own ini-file options, and added options will also appear in the pytest --help output.
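As a minimal sketch of how such an option gets added (the option name api_base_url and its default are made up for illustration, not part of pytest or the Tasks project):

```python
# conftest.py -- a hypothetical example of how a conftest.py file or
# plugin registers its own ini-file option via the pytest_addoption hook.

def pytest_addoption(parser):
    # parser.addini() registers a new [pytest] ini option; registered
    # options then show up in the ini-options section of "pytest --help".
    parser.addini(
        'api_base_url',
        help='base URL of the API under test',
        default='http://localhost:8000',
    )
```

At runtime, code in the plugin would read the value back with config.getini('api_base_url').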
Now let's take a look at some of the configuration changes that we can make using the built-in INI file settings available in core pytest.
You have already used some of pytest's command-line options, such as -v/--verbose for verbose output and -l/--showlocals to view local variables along with the stack trace for failed tests. You may find that you always use certain options for a project and would rather not type them every time. If you set addopts in pytest.ini to the options you want, you no longer have to enter them. Here is a set that I like:
[pytest]
addopts = -rsxX -l --tb=short --strict
The -rsxX flag tells pytest to report the reasons for all tests that were skipped, xfailed, or xpassed. The -l flag tells pytest to display local variables along with the stack trace for every failure. --tb=short removes most of the stack trace but keeps the file and line number. --strict disallows markers that are not registered in the configuration file. You will see how to register markers in the next section.
Custom markers, as described in "Marking Test Functions" on page 31, are great for letting you select a subset of tests to run with a specific marker. However, it is all too easy to misspell a marker and end up with some tests marked @pytest.mark.smoke and some marked @pytest.mark.somke. By default, this is not an error; pytest just thinks you created two markers. This can be fixed by registering the markers in pytest.ini, like this:
[pytest]
...
markers =
    smoke: Run the smoke test test functions
    get: Run the test functions that test tasks.get()
...
By registering these markers, you can now also see them, along with their descriptions, using pytest --markers:
$ cd /path/to/code/ch6/b/tasks_proj/tests
$ pytest --markers
@pytest.mark.smoke: Run the smoke test test functions
@pytest.mark.get: Run the test functions that test tasks.get()
@pytest.mark.skip(reason=None): skip the ...
...
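For context, a test function carrying one of these registered markers looks like this (the test body here is a made-up sketch, not an actual Tasks project test):

```python
import pytest

@pytest.mark.smoke
def test_smoke_example():
    # Selected by "pytest -m smoke". With --strict, a typo such as
    # @pytest.mark.somke becomes a collection error instead of a
    # silently created second marker.
    assert 2 + 2 == 4
```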
If markers are not registered, they do not show up in the --markers list. Once registered, they appear in the list, and if you use --strict, any misspelled or unregistered markers are reported as errors. The only difference between ch6/a/tasks_proj and ch6/b/tasks_proj is the content of the pytest.ini file: in ch6/a it is empty. Let's try running the tests without registering any markers:
$ cd /path/to/code/ch6/a/tasks_proj/tests
$ pytest --strict --tb=line
============================= test session starts =============================
collected 45 items / 2 errors
=================================== ERRORS ====================================
______________________ ERROR collecting func/test_add.py ______________________
'smoke' not a registered marker
________________ ERROR collecting func/test_api_exceptions.py _________________
'smoke' not a registered marker
!!!!!!!!!!!!!!!!!!! Interrupted: 2 errors during collection !!!!!!!!!!!!!!!!!!!
=========================== 2 error in 1.10 seconds ===========================
If you use markers in pytest.ini to register your markers, you might as well also add --strict to your addopts. You'll thank me later. Let's go ahead and add a pytest.ini file to the Tasks project:
ch6/b/tasks_proj/tests/pytest.ini
[pytest]
addopts = -rsxX -l --tb=short --strict
markers =
    smoke: Run the smoke test test functions
    get: Run the test functions that test tasks.get()
This combines the flags I prefer:

• -rsxX to report which tests skipped, xfailed, or xpassed,
• --tb=short for shorter tracebacks on failures,
• --strict to allow only declared markers.

This should allow us to run the tests, including the smoke tests:
$ cd /path/to/code/ch6/b/tasks_proj/tests
$ pytest --strict -m smoke
===================== test session starts ======================
collected 57 items

func/test_add.py .
func/test_api_exceptions.py ..

===================== 54 tests deselected ======================
=========== 3 passed, 54 deselected in 0.06 seconds ============
The minversion setting lets you specify the minimum version of pytest required for the tests. For example, I decided to use approx() when testing floating-point numbers for "close enough" equality. But this function wasn't introduced into pytest until version 3.0. To avoid confusion, I add the following to projects that use approx():
[pytest]
minversion = 3.0
Thus, if someone tries to run tests using an older version of pytest, an error message will appear.
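For reference, a test using approx() might look like this (a minimal sketch with plain arithmetic, not a Tasks project test):

```python
import pytest

def test_sum_is_close():
    # 0.1 + 0.2 is not exactly 0.3 in binary floating point...
    assert 0.1 + 0.2 != 0.3
    # ...but approx() treats values within a small relative tolerance as equal
    assert 0.1 + 0.2 == pytest.approx(0.3)
```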
Did you know that one definition of "recurse" is to swear at your own code twice? No? Well, it actually means traversing subdirectories. pytest discovers tests by recursively examining a set of directories. But there are some directories you would rather pytest didn't look into.
The default value for norecursedirs is '.* build dist CVS _darcs {arch} *.egg'. Having '.*' in there is a good reason to name your virtual environment '.venv', because every directory starting with a dot will be skipped.
In the case of the Tasks project, it doesn't hurt to also list src, because having pytest search it for test files would be a waste of time.
[pytest]
norecursedirs = .* venv src *.egg dist build
When overriding a setting that already has a useful default, like this one, it helps to know the default values and keep the ones you still need, as I did in the previous code with *.egg, dist, and build. norecursedirs is in some sense a counterpart to testpaths, so let's look at that next.
While norecursedirs tells pytest where not to look, testpaths tells pytest where to look. testpaths is a list of directories, relative to the root directory, in which to search for tests. It is used only if no directory, file, or nodeid is given as an argument.
Suppose that for the Tasks project we put pytest.ini in the tasks_proj directory instead of tests:
\code\tasks_proj> tree /f
.
│   pytest.ini
│
├───src
│   └───tasks
│           api.py
│           ...
│
└───tests
    │   conftest.py
    │
    ├───func
    │       test_add.py
    │       ...
    │
    └───unit
            test_task.py
            __init__.py
            ...
Then it may make sense to put tests in testpaths:

[pytest]
testpaths = tests
Now, if you run pytest from the tasks_proj directory, pytest will only search tasks_proj/tests. The catch is that while developing and debugging tests, I often work from inside the tests directory, where I can easily run a subdirectory or a single file without typing the whole path. So this setting is of little help to me for interactive testing.
However, it is great for test suites launched from a continuous integration server or from tox. In those cases, you know the root directory is fixed, and you can list directories relative to that fixed root. These are also the cases where you really want to trim the test run time, so cutting out the search for tests is a win.
At first glance it may seem silly to use both testpaths and norecursedirs. However, as you have just seen, testpaths is of little help for interactive testing from different parts of the file system. In those cases, norecursedirs can help. Also, if you have directories under your test paths that contain no tests, you can use norecursedirs to avoid them. But really, what would be the point of putting extra directories without tests under tests?
pytest finds the tests to run based on a set of test discovery rules. The standard discovery rules are:

• Start with one or more directories. You can specify file or directory names on the command line. If you don't specify anything, the current directory is used.
• Search the directories and all of their subdirectories for test modules.
• A test module is a file whose name matches test_*.py or *_test.py.
• In the test modules, look for functions that start with test_.
• Look for classes that start with Test and do not have an __init__() method, and in those classes look for methods that start with test_.

These are the standard discovery rules; however, you can change them.
The usual discovery rule for classes is that pytest considers a class a potential test class if its name starts with Test. The class also must not have an __init__() method. But what if we want to name our test classes <something>Test or <something>Suite? This is where python_classes comes in:
[pytest]
python_classes = *Test Test* *Suite
This allows us to name classes like this:

class DeleteSuite():
    def test_delete_1(self):
        ...

    def test_delete_2(self):
        ...

    ...
Like python_classes, python_files modifies the default test discovery rule, which is to look for files that start with test_ or end with _test.
Suppose you have a custom test framework in which you named all your test files check_<something>.py
. It seems reasonable. Instead of renaming all your files, just add a line to pytest.ini
as follows:
[pytest]
python_files = test_* *_test check_*
Very simple. Now you can migrate to the standard naming convention gradually if you wish, or simply keep check_*.
python_functions acts like the previous two settings, but for test function and method names. The default is test_*. To also add check_* functions, you guessed it, do this:
[pytest]
python_functions = test_* check_*
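With python_files and python_functions configured as above, a legacy-style test file like this hypothetical check_math.py (both the file name and its content are made up for illustration) would be collected and run by pytest:

```python
# check_math.py -- hypothetical legacy-style test file; with the
# python_files and python_functions settings above, pytest collects
# this module and runs this function as a test.
def check_addition():
    assert 1 + 1 == 2
```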
Now the pytest naming conventions don't seem so restrictive, do they? If you don't like the default naming convention, you can just change it. Nevertheless, I urge you to have a compelling reason for such a decision. Migrating hundreds of test files is definitely a compelling reason.
Setting xfail_strict = true causes tests marked with @pytest.mark.xfail that nevertheless pass to be reported as failures. I think this setting should always be on. For more information about the xfail marker, see "Marking Tests as Expecting to Fail" on page 37.
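A minimal sketch of the situation xfail_strict catches (the test body is made up for illustration):

```python
import pytest

@pytest.mark.xfail(reason='demonstration: expected to fail but passes')
def test_unexpectedly_passing():
    # With xfail_strict = false (the default), this test is reported as
    # XPASS. With xfail_strict = true, it is reported as FAILED,
    # prompting you to remove the stale xfail marker.
    assert True
```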
The usefulness of putting an __init__.py file in every test subdirectory of a project confused me for a long time. However, the difference between having them and not having them is simple: if you have __init__.py files in all of your test subdirectories, you can have the same test file name in several directories; if you don't, you can't.
Here is an example. Directories a and b both contain a file called test_foo.py. It doesn't matter what these files contain, but for this example they look like this:
ch6/dups/a/test_foo.py
def test_a():
    pass

ch6/dups/b/test_foo.py
def test_b():
    pass
With this directory structure:
dups
├── a
│   └── test_foo.py
└── b
    └── test_foo.py
The two files don't even have the same content, yet the tests break. You can run each directory separately, but running pytest from the dups directory does not work:
$ cd /path/to/code/ch6/dups
$ pytest a
============================= test session starts =============================
collected 1 item

a\test_foo.py .

========================== 1 passed in 0.05 seconds ===========================
$ pytest b
============================= test session starts =============================
collected 1 item

b\test_foo.py .

========================== 1 passed in 0.05 seconds ===========================
$ pytest
============================= test session starts =============================
collected 1 item / 1 errors
=================================== ERRORS ====================================
_______________________ ERROR collecting b/test_foo.py ________________________
import file mismatch:
imported module 'test_foo' has this __file__ attribute:
  /path/to/code/ch6/dups/a/test_foo.py
which is not the same as the test file we want to collect:
  /path/to/code/ch6/dups/b/test_foo.py
HINT: remove __pycache__ / .pyc files and/or use a unique basename for your
test file modules
!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!
=========================== 1 error in 0.34 seconds ===========================
Confusing, isn't it? The error message hardly explains what went wrong.
To fix this, simply add an empty __init__.py file to each subdirectory. Here, the dups_fixed directory has the same duplicated file names, but with __init__.py files added:
dups_fixed/
├── a
│   ├── __init__.py
│   └── test_foo.py
└── b
    ├── __init__.py
    └── test_foo.py
Now let's try again from the top-level directory, dups_fixed:
$ cd /path/to/code/ch6/dups_fixed
$ pytest
============================= test session starts =============================
collected 2 items

a\test_foo.py .
b\test_foo.py .

========================== 2 passed in 0.15 seconds ===========================
Much better. Of course, you can convince yourself that you will never have duplicate file names, so it doesn't matter. But projects grow and test directories grow; do you really want to wait until it bites you before dealing with it? I say just put those files in there, make it a habit, and never worry about it again.
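One way to make the habit cheap is a one-liner (a shell sketch assuming a POSIX shell; the directory names are made up for the demo):

```shell
# hypothetical layout: create two test subdirectories for the demo
mkdir -p tests/func tests/unit
# drop an empty __init__.py into every directory under tests/
find tests -type d -exec touch {}/__init__.py \;
```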
In Chapter 5, "Plugins," on page 95, you created a plugin called pytest-nice that includes a --nice command-line option. Let's extend it with a pytest.ini option called nice.
1. Add a line to the pytest_addoption hook function in pytest_nice.py:
   parser.addini('nice', type='bool', help='Turn failures into opportunities.')
2. The code that calls getoption() will also have to call getini('nice'). Make these changes.
3. Manually test this by adding nice to a pytest.ini file.
4. Add a test to verify that the nice setting from pytest.ini works correctly.

While pytest is extremely powerful by itself, especially with plugins, it also integrates well with other software development and software testing tools. In the next chapter, we will look at using pytest in combination with other powerful testing tools.
Source: https://habr.com/ru/post/448796/