
Python Testing with pytest. Chapter 6: Configuration



In this chapter, we will look at configuration files that affect pytest, discuss how pytest changes its behavior based on them, and make some changes to the Tasks project configuration files.



The examples in this book are written using Python 3.6 and pytest 3.2. pytest 3.2 supports Python 2.6, 2.7 and Python 3.3+.


The source code for the Tasks project, as well as for all of the tests shown in this book, is available via the link on the book's web page at pragprog.com. You don't need to download the source code to understand the test code; the test code is presented in a readable form in the examples. But to follow along with the Tasks project, or to adapt the test examples to test your own project (your hands are untied!), go to the book's web page and download the project. The book's web page also has a link for reporting errata and a discussion forum.




Configuration


So far in this book, I have mentioned the various non-test files that affect pytest only in passing, with the exception of conftest.py, which I discussed in some detail in Chapter 5, Plugins, on page 95. In this chapter, we will look at the configuration files that affect pytest, discuss how pytest changes its behavior based on them, and make some changes to the configuration files of the Tasks project.


Understanding pytest configuration files


Before I tell you how you can change pytest's default behavior, let's run down all of the non-test files in pytest and, in particular, who should care about them.


You should know about these files:

• pytest.ini: the primary pytest configuration file, which lets you change pytest's default behavior.
• conftest.py: a local plugin that can hold hook functions and fixtures for the directory it lives in and all of its subdirectories.
• __init__.py: when placed in every test subdirectory, this file lets you have identical test file names in multiple test directories (more on that at the end of this chapter).

If you use tox, you will be interested in:

• tox.ini: similar to pytest.ini, but for tox; it can also hold your pytest configuration.

If you want to distribute a Python package (for example, Tasks), this file will be of interest:

• setup.cfg: also an ini-format file, affecting the behavior of setup.py; it too can hold your pytest configuration.


Regardless of which file you put your pytest configuration into, the format will be basically the same.


For pytest.ini :


ch6/format/pytest.ini

[pytest]
addopts = -rsxX -l --tb=short --strict
xfail_strict = true
... more options ...

For tox.ini :


ch6/format/tox.ini

... tox specific stuff ...

[pytest]
addopts = -rsxX -l --tb=short --strict
xfail_strict = true
... more options ...

For setup.cfg :


ch6/format/setup.cfg

... packaging specific stuff ...

[tool:pytest]
addopts = -rsxX -l --tb=short --strict
xfail_strict = true
... more options ...

The only difference is that the section header for setup.cfg is [tool:pytest] instead of [pytest] .


List the Valid ini-file Options with pytest --help


You can get a list of all valid parameters for pytest.ini from pytest --help :


$ pytest --help
...
[pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg file found:

  markers (linelist)           markers for test functions
  empty_parameter_set_mark (string) default marker for empty parametersets
  norecursedirs (args)         directory patterns to avoid for recursion
  testpaths (args)             directories to search for tests when no files or
                               directories are given in the command line.
  console_output_style (string) console output: classic or with additional
                               progress information (classic|progress).
  usefixtures (args)           list of default fixtures to be used with this project
  python_files (args)          glob-style file patterns for Python test module discovery
  python_classes (args)        prefixes or glob names for Python test class discovery
  python_functions (args)      prefixes or glob names for Python test function and
                               method discovery
  xfail_strict (bool)          default for the strict parameter of xfail markers
                               when not given explicitly (default: False)
  junit_suite_name (string)    Test suite name for JUnit report
  junit_logging (string)       Write captured log messages to JUnit report: one of
                               no|system-out|system-err
  doctest_optionflags (args)   option flags for doctests
  doctest_encoding (string)    encoding used for doctest files
  cache_dir (string)           cache directory path.
  filterwarnings (linelist)    Each line specifies a pattern for
                               warnings.filterwarnings. Processed after -W and
                               --pythonwarnings.
  log_print (bool)             default value for --no-print-logs
  log_level (string)           default value for --log-level
  log_format (string)          default value for --log-format
  log_date_format (string)     default value for --log-date-format
  log_cli (bool)               enable log display during test run (also known as
                               "live logging").
  log_cli_level (string)       default value for --log-cli-level
  log_cli_format (string)      default value for --log-cli-format
  log_cli_date_format (string) default value for --log-cli-date-format
  log_file (string)            default value for --log-file
  log_file_level (string)      default value for --log-file-level
  log_file_format (string)     default value for --log-file-format
  log_file_date_format (string) default value for --log-file-date-format
  addopts (args)               extra command line options
  minversion (string)          minimally required pytest version
  xvfb_width (string)          Width of the Xvfb display
  xvfb_height (string)         Height of the Xvfb display
  xvfb_colordepth (string)     Color depth of the Xvfb display
  xvfb_args (args)             Additional arguments for Xvfb
  xvfb_xauth (bool)            Generate an Xauthority token for Xvfb. Needs xauth.
...

You will see all of these settings in this chapter, with the exception of doctest_optionflags, which is covered in Chapter 7, "Using pytest with Other Tools", on page 125.


Plugins can add ini file options


The previous list of settings is not fixed. Plugins (and conftest.py files) can add their own ini file options. The added options will also show up in the pytest --help output.
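For example, a conftest.py file or a plugin can register its own ini option with parser.addini() and read it back with config.getini(). Here is a minimal sketch; the option name api_url is made up purely for illustration and is not part of pytest or the Tasks project:

 # conftest.py -- sketch of adding a custom ini option ('api_url' is hypothetical)

 def pytest_addoption(parser):
     parser.addini('api_url',
                   help='base URL of the service under test',
                   default='http://localhost:5000')


 def pytest_configure(config):
     # the value comes from pytest.ini/tox.ini/setup.cfg, or the default above
     print('api_url =', config.getini('api_url'))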
Now let's take a look at some of the configuration changes we can make with the ini file settings built into core pytest.


Changing the default command-line options


You have already used some command-line options for pytest, such as -v/--verbose for verbose output and -l/--showlocals to view local variables with the stack trace for failed tests. You may find that you always use some of these options for a project. If you set addopts in pytest.ini to the options you want, you no longer have to type them in every time. Here is a set that I like:


 [pytest] addopts = -rsxX -l --tb=short --strict 

The -rsxX option tells pytest to report the reasons for all tests that were skipped, xfailed, or xpassed. The -l flag tells pytest to display local variables along with the stack trace for every failure. --tb=short removes most of the stack trace but keeps the file and line number. --strict raises an error for any marker that is not registered in the configuration file. You will see how to register markers in the next section.


Registering markers to avoid marker typos


Custom markers, as described in "Marking Test Functions" on page 31, are great for letting you mark a subset of tests to run with a specific marker. However, it is too easy to misspell a marker and end up with some tests marked @pytest.mark.smoke and some marked @pytest.mark.somke. By default, this is not an error; pytest just thinks you created two markers. This can be fixed, however, by registering the markers in pytest.ini, like this:


[pytest]
...
markers =
  smoke: Run the smoke test test functions
  get: Run the test functions that test tasks.get()
...
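For reference, applying a registered marker is just a matter of decorating a test function. Here is a tiny placeholder sketch (not taken from the Tasks project):

 import pytest


 @pytest.mark.smoke
 def test_quick_sanity_check():
     # any fast, high-value check can belong to the smoke subset;
     # the body here is only a placeholder
     assert 1 + 1 == 2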

Once the markers are registered, you can also list them, along with their descriptions, using pytest --markers:


$ cd /path/to/code/ch6/b/tasks_proj/tests
$ pytest --markers
@pytest.mark.smoke: Run the smoke test test functions

@pytest.mark.get: Run the test functions that test tasks.get()

@pytest.mark.skip(reason=None): skip the ...

...

If markers are not registered, they will not show up in the --markers list. With them registered, they do show up in the list, and if you use --strict, any misspelled or unregistered markers appear as an error. The only difference between ch6/a/tasks_proj and ch6/b/tasks_proj is the content of the pytest.ini file: in ch6/a it is empty. Let's try running the tests without registering any markers:


$ cd /path/to/code/ch6/a/tasks_proj/tests
$ pytest --strict --tb=line
============================= test session starts =============================
collected 45 items / 2 errors

=================================== ERRORS ====================================
______________________ ERROR collecting func/test_add.py ______________________
'smoke' not a registered marker
________________ ERROR collecting func/test_api_exceptions.py _________________
'smoke' not a registered marker
!!!!!!!!!!!!!!!!!!! Interrupted: 2 errors during collection !!!!!!!!!!!!!!!!!!!
=========================== 2 error in 1.10 seconds ===========================

If you use markers in pytest.ini to register your markers, you might as well add --strict to your addopts while you're at it. You'll thank me later. Let's go ahead and add a pytest.ini file to the Tasks project:


ch6/b/tasks_proj/tests/pytest.ini

[pytest]
addopts = -rsxX -l --tb=short --strict
markers =
  smoke: Run the smoke test test functions
  get: Run the test functions that test tasks.get()

This combines the flags I prefer (-rsxX, -l, --tb=short, --strict) with the markers registered for the project. It should let us run the tests, including the smoke tests:


$ cd /path/to/code/ch6/b/tasks_proj/tests
$ pytest --strict -m smoke
===================== test session starts ======================
collected 57 items

func/test_add.py .
func/test_api_exceptions.py ..

===================== 54 tests deselected ======================
=========== 3 passed, 54 deselected in 0.06 seconds ============

Require Pytest Minimum Version


The minversion setting lets you specify the minimum version of pytest expected for the tests. For example, I decided to use approx() when testing floating-point numbers for "close enough" equality in tests. But this feature wasn't added to pytest until version 3.0. To avoid confusion, I add the following to projects that use approx():


[pytest]
minversion = 3.0

This way, if someone tries to run the tests using an older version of pytest, an error message appears.
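For reference, approx() is used like this; the test below is an illustrative sketch, not part of the Tasks project:

 import pytest


 def test_float_sum():
     # 0.1 + 0.2 is not exactly 0.3 in binary floating point,
     # but approx() treats it as close enough
     assert 0.1 + 0.2 == pytest.approx(0.3)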


Stop pytest from looking in the wrong places


Did you know that one of the definitions of "recurse" is to curse twice? No, it isn't, but what it means here is to traverse subdirectories. pytest's test discovery recursively traverses a lot of directories by default. But there are some directories you simply don't want pytest looking into.


The default setting for norecursedirs is '.* build dist CVS _darcs {arch} *.egg'. Having '.*' in there is a good reason to name your virtual environment directory '.venv', because every directory starting with a dot will be skipped.


In the case of the Tasks project, it doesn't hurt to also list src, because having pytest look for test files there would just be a waste of time.


[pytest]
norecursedirs = .* venv src *.egg dist build

When overriding a setting that already has a useful default, like this one, it's important to know what the default values are and to keep the ones you still need, as I did in the previous code with *.egg, dist, and build.
norecursedirs is kind of a corollary to testpaths, so let's look at that next.


Specifying test directory locations


Whereas norecursedirs tells pytest where not to look, testpaths tells pytest where to look. testpaths is a list of directories, relative to the root directory, in which to search for tests. It is used only if no directory, file, or nodeid is given as an argument.


Suppose that for the Tasks project we put pytest.ini in the tasks_proj directory instead of tests:


\code\tasks_proj> tree /f
.
│   pytest.ini
│
├───src
│   └───tasks
│           api.py
│           ...
│
└───tests
    │   conftest.py
    │
    ├───func
    │       test_add.py
    │       ...
    │
    └───unit
            test_task.py
            __init__.py
            ...

Then it might make sense to list tests in testpaths:


[pytest]
testpaths = tests

Now, as long as you start pytest from the tasks_proj directory, pytest will only look in tasks_proj/tests. My problem with this is that during test development and debugging I often bounce around the test directory, so I can easily run a subdirectory or a single file without typing out the whole path. Therefore, this setting doesn't help me much with interactive testing.


However, it is great for test suites launched from a continuous integration server or from tox. In those cases, you know that the root directory is fixed, and you can list directories relative to that fixed root. These are also the cases where you really want to squeeze your test run times, so shaving a bit off test discovery is great.


At first glance it may seem silly to use both testpaths and norecursedirs. However, as you have seen, testpaths doesn't help much with interactive testing from different parts of the file system; in those cases, norecursedirs can help. Also, if you have directories under tests that contain no tests, you can use norecursedirs to avoid them. But really, what would be the point of putting extra directories without tests under tests?


Changing test discovery rules


pytest finds the tests to run based on certain test discovery rules. The standard test discovery rules are:


• Start with one or more directories. You can specify file or directory names on the command line. If you don't specify anything, the current directory is used.
• Recursively search those directories and all of their subdirectories for test modules.
• A test module is a file whose name matches test_*.py or *_test.py.
• In test modules, look for functions that start with test_.
• Look for classes that start with Test and don't have an __init__() method; in those classes, look for methods that start with test_.


These are the standard discovery rules; however, you can change them.
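As a quick illustration of these defaults, everything in this hypothetical test_example.py would be collected except the helper function:

 # test_example.py -- hypothetical module matching the default discovery rules

 def test_function_is_collected():
     assert True


 class TestThing:                       # collected: starts with Test, no __init__()
     def test_method_is_collected(self):
         assert True


 def helper_not_collected():            # ignored: name doesn't start with test
     return 42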


python_classes


The usual test discovery rule for classes is to consider a class a potential test class if it starts with Test* (and doesn't have an __init__() method). But what if we want to name our test classes <something>Test or <something>Suite? That's where python_classes comes in:


[pytest]
python_classes = *Test Test* *Suite

This allows us to name test classes like this:


class DeleteSuite():
    def test_delete_1(self):
        ...

    def test_delete_2(self):
        ...

    # ...

python_files


Like python_classes, python_files modifies the default test discovery rule, which looks for files that start with test_ or end with _test.
Suppose you have a custom test framework in which all of your test files are named check_<something>.py. Seems reasonable. Instead of renaming all of your files, just add a line to pytest.ini like this:


[pytest]
python_files = test_* *_test check_*

Easy enough. Now you can gradually migrate your naming convention if you wish, or simply leave it as check_*.
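So a hypothetical file such as check_tasks.py, containing ordinary test_ functions, would now be collected alongside the test_* modules:

 # check_tasks.py -- hypothetical file name; collected because of
 # python_files = test_* *_test check_*

 def test_collected_from_check_file():
     assert True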


python_functions


python_functions acts like the previous two settings, but for test function and method names. The default is test_*. To add check_* as well, you guessed it, do this:


[pytest]
python_functions = test_* check_*

Now the pytest naming conventions don't seem so restrictive, do they? If you don't like the default naming convention, just change it. However, I encourage you to have a better reason for doing so than simply not liking it. Migrating hundreds of test files is definitely a good reason.
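Combined with the python_files setting from the previous section, a hypothetical check_* function in a check_* file is now discovered as well:

 # check_conversion.py -- hypothetical; both the file name and the function name
 # rely on the check_* patterns added in pytest.ini

 def check_celsius_to_fahrenheit():
     assert (100 * 9 / 5) + 32 == 212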


Disallowing XPASS


Setting xfail_strict = true causes tests marked with @pytest.mark.xfail that don't actually fail (XPASS) to be reported as failures. I think this should always be set. For more information about the xfail marker, see "Marking Tests as Expecting to Fail" on page 37.
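As a quick sketch (not from the Tasks project), with xfail_strict = true the following test would be reported as a failure the moment the documented behavior changes and the test starts passing:

 import pytest


 @pytest.mark.xfail(reason='known rounding surprise, documented but not fixed')
 def test_known_rounding_surprise():
     # round(2.675, 2) gives 2.67 because of binary floating point, so this
     # assertion fails today and the test is reported as xfail; if it ever
     # passes, xfail_strict = true reports it as FAILED instead of XPASS
     assert round(2.675, 2) == 2.68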


Avoiding file name collisions


The utility of having __init__.py files in every test subdirectory of a project confused me for a long time. However, the difference between having them and not having them is simple: if you have __init__.py files in all of your test subdirectories, you can use the same test file name in several directories; if you don't, you can't.


Here is an example. Directories a and b both contain a file named test_foo.py. It doesn't matter what these files contain, but for this example they look like this:


ch6/dups/a/test_foo.py

 def test_a():
     pass


ch6/dups/b/test_foo.py

 def test_b():
     pass

With this directory structure:


dups
├── a
│   └── test_foo.py
└── b
    └── test_foo.py

The files don't even have the same content, yet things still go wrong. Running them individually works fine, but running pytest from the dups directory does not:


$ cd /path/to/code/ch6/dups
$ pytest a
============================= test session starts =============================
collected 1 item

a\test_foo.py .

========================== 1 passed in 0.05 seconds ===========================
$ pytest b
============================= test session starts =============================
collected 1 item

b\test_foo.py .

========================== 1 passed in 0.05 seconds ===========================
$ pytest
============================= test session starts =============================
collected 1 item / 1 errors

=================================== ERRORS ====================================
_______________________ ERROR collecting b/test_foo.py ________________________
import file mismatch:
imported module 'test_foo' has this __file__ attribute:
  /path/to/code/ch6/dups/a/test_foo.py
which is not the same as the test file we want to collect:
  /path/to/code/ch6/dups/b/test_foo.py
HINT: remove __pycache__ / .pyc files and/or use a unique basename for your
test file modules
!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!
=========================== 1 error in 0.34 seconds ===========================

That error message doesn't make it very clear what actually went wrong.


To fix this, simply add empty __init__.py files to the subdirectories. Here, the dups_fixed directory has the same duplicate file names, but with __init__.py files added:


dups_fixed/
├── a
│   ├── __init__.py
│   └── test_foo.py
└── b
    ├── __init__.py
    └── test_foo.py

Now let's try again from the top level in dups_fixed :


$ cd /path/to/code/ch6/dups_fixed/
$ pytest
============================= test session starts =============================
collected 2 items

a\test_foo.py .
b\test_foo.py .

========================== 2 passed in 0.15 seconds ===========================

Much better.


Of course, you can tell yourself that you will never have duplicate file names, so it doesn't matter. That's fine. But projects grow and test directories grow, and do you really want to wait until it happens to you before dealing with it? I say just put those files in there, make it a habit, and never think about it again.


Exercises


In Chapter 5, Plugins, on page 95, you created a plugin called pytest-nice that includes a --nice command-line option. Let's extend it with a pytest.ini option called nice.


  1. Add the following line to the pytest_addoption() hook function in pytest_nice.py: parser.addini('nice', type='bool', help='Turn failures into opportunities.')
  2. The places in the plugin that use getoption() will also have to call getini('nice'). Make those changes.
  3. Try it out manually by adding nice to a pytest.ini file.
  4. Don't forget the plugin tests. Add a test to make sure the nice setting from pytest.ini works correctly.
  5. Add the tests to the plugin tests directory. You'll need to look up some extra Pytester functionality.

What's next


While pytest is extremely powerful in itself — especially with plugins — it also integrates well with other software development and software testing tools. In the next chapter, we will look at using pytest in combination with other powerful testing tools.





Source: https://habr.com/ru/post/448796/

