
Python Testing with pytest. Builtin Fixtures, Chapter 4



The built-in fixtures that come with pytest can help you do some pretty useful things in your tests easily and elegantly. For example, in addition to handling temporary files, pytest includes built-in fixtures for accessing command-line options, communicating between test sessions, validating output streams, modifying environment variables, and interrogating warnings.



The source code for the Tasks project, as well as for all of the tests shown in this book, is available via the link on the book's web page at pragprog.com. You don't need to download the source code to understand the test code; the test code is presented in a convenient form in the examples. But to follow along with the project tasks, or to adapt the test examples to test your own project (your hands are untied!), go to the book's web page and download the materials. The same web page also has a link for reporting errata and a discussion forum.




In the previous chapter, you looked at what fixtures are, how to write them, and how to use them for test data, as well as for setup and teardown code.


You also used conftest.py to share fixtures between tests in multiple test files. By the end of Chapter 3, pytest Fixtures, on page 49, the Tasks project had these fixtures: tasks_db_session, tasks_just_a_few, tasks_mult_per_owner, tasks_db, db_with_3_tasks, and db_with_multi_per_owner, available to be used by any test that needs them.


Reusing common fixtures is such a good idea that the pytest developers included some frequently needed fixtures in pytest itself. You have already seen how tmpdir and tmpdir_factory are used by the Tasks project in Changing Scope for Tasks Project Fixtures, on page 59. You will look at them in more detail in this chapter.


The built-in fixtures that come with pytest can help you do some pretty useful things in your tests easily and elegantly. For example, in addition to handling temporary files, pytest includes built-in fixtures for accessing command-line options, communicating between test sessions, validating output streams, modifying environment variables, and interrogating warnings. The built-in fixtures are extensions of pytest's core functionality. Let's now look at a few of the most commonly used built-in fixtures one at a time.


Using tmpdir and tmpdir_factory


If you are testing something that reads, writes, or modifies files, you can use tmpdir to create files or directories used by one test, and you can use tmpdir_factory when you want to set up a directory for several tests.


The tmpdir fixture has function scope, and the tmpdir_factory fixture has session scope. Any individual test that needs a temporary directory or file for just that one test can use tmpdir. The same is true for a fixture that sets up a directory or file that should be recreated for each test function.


Here is a simple example of using tmpdir :


ch4/test_tmpdir.py

def test_tmpdir(tmpdir):
    # tmpdir already has a path name associated with it;
    # join() extends the path to include a filename;
    # the file is created when it's written to
    a_file = tmpdir.join('something.txt')

    # you can create directories
    a_sub_dir = tmpdir.mkdir('anything')

    # you can create files in directories (created when written)
    another_file = a_sub_dir.join('something_else.txt')

    # this write creates 'something.txt'
    a_file.write('contents may settle during shipping')

    # this write creates 'anything/something_else.txt'
    another_file.write('something different')

    # you can read the files as well
    assert a_file.read() == 'contents may settle during shipping'
    assert another_file.read() == 'something different'

The value returned from tmpdir is an object of type py.path.local, which offers everything we need for temporary directories and files. However, there is one gotcha. Because the tmpdir fixture is defined with function scope, you can't use tmpdir to create folders or files that should stay in place longer than one test function. For fixtures with a scope other than function (class, module, session), tmpdir_factory is available.


The tmpdir_factory fixture is a lot like tmpdir, but it has a different interface. As described in Specifying Fixture Scope, on page 56, function scope fixtures run once per test function, module scope fixtures run once per module, class scope fixtures run once per class, and session scope fixtures run once per session. Therefore, resources created in session scope fixtures have a lifetime of the entire session. To show how similar tmpdir and tmpdir_factory are, I'll modify the tmpdir example to use tmpdir_factory instead:


ch4/test_tmpdir.py

def test_tmpdir_factory(tmpdir_factory):
    # you should start with making a directory;
    # a_dir acts like the object returned from the tmpdir fixture
    a_dir = tmpdir_factory.mktemp('mydir')

    # base_temp will be the parent dir of 'mydir';
    # you don't have to use getbasetemp(),
    # it's used here just to show that it's available
    base_temp = tmpdir_factory.getbasetemp()
    print('base:', base_temp)

    # the rest of this test looks the same as the 'test_tmpdir()'
    # example except I'm using a_dir instead of tmpdir
    a_file = a_dir.join('something.txt')
    a_sub_dir = a_dir.mkdir('anything')
    another_file = a_sub_dir.join('something_else.txt')

    a_file.write('contents may settle during shipping')
    another_file.write('something different')

    assert a_file.read() == 'contents may settle during shipping'
    assert another_file.read() == 'something different'

The first line uses mktemp('mydir') to create a directory and saves it in a_dir. For the rest of the function, you can use a_dir just like the tmpdir object returned by the tmpdir fixture.


In the second line of the tmpdir_factory example, the getbasetemp() function returns the base directory used for this session. The print statement is in the example so you can see the directory on your system. Let's see where it is:


$ cd /path/to/code/ch4
$ pytest -q -s test_tmpdir.py::test_tmpdir_factory
base: /private/var/folders/53/zv4j_zc506x2xq25l31qxvxm0000gn/T/pytest-of-okken/pytest-732
.
1 passed in 0.04 seconds

This base directory is system- and user-dependent, and the pytest-NUM part changes with each session, with NUM incremented each time. The base directory is not removed immediately at the end of a session; pytest cleans it up later, leaving only the most recent few temporary base directories on the system, which is handy if you want to inspect the files after a test run.


You can also specify your own base directory if you need to with pytest --basetemp=mydir.


Using temporary directories for other scopes


We get session scope temporary directories and files from the tmpdir_factory fixture, and function scope directories and files from the tmpdir fixture. But what about other scopes? What if we need a temporary directory that is module or class scoped? To do that, we create another fixture with the scope we want and have it use tmpdir_factory.


For example, suppose we have a module full of tests, and many of them need to be able to read some data from a json file. We could put a module scope fixture either in the module itself, or in a conftest.py file, that sets up the data file like this:


ch4/authors/conftest.py

"""Demonstrate tmpdir_factory."""

import json
import pytest


@pytest.fixture(scope='module')
def author_file_json(tmpdir_factory):
    """Write some authors to a data file."""
    python_author_data = {
        'Ned': {'City': 'Boston'},
        'Brian': {'City': 'Portland'},
        'Luciano': {'City': 'Sau Paulo'}
    }

    file = tmpdir_factory.mktemp('data').join('author_file.json')
    print('file:{}'.format(str(file)))

    with file.open('w') as f:
        json.dump(python_author_data, f)
    return file

The author_file_json() fixture creates a temporary directory called data and creates a file called author_file.json within the data directory. It then writes the python_author_data dictionary as json. Because this is a module scope fixture, the json file will only be created once per module that has a test using it:


ch4/authors/test_authors.py

"""Some tests that use a data file."""

import json


def test_brian_in_portland(author_file_json):
    """A test that uses a data file."""
    with author_file_json.open() as f:
        authors = json.load(f)
    assert authors['Brian']['City'] == 'Portland'


def test_all_have_cities(author_file_json):
    """Same file is used for both tests."""
    with author_file_json.open() as f:
        authors = json.load(f)
    for a in authors:
        assert len(authors[a]['City']) > 0

Both tests use the same json file. If one test data file works for several tests, there's no reason to recreate it for each of them.


Using pytestconfig


With the built-in pytestconfig fixture, you can control how pytest runs through command-line arguments and options, configuration files, plugins, and the directory from which you launched pytest. The pytestconfig fixture is a shortcut to request.config, and it is sometimes referred to in the pytest documentation as "the pytest config object".


To see how pytestconfig works, you'll look at how to add a custom command-line option and read the option value from within a test. You can read the value of command-line options directly from pytestconfig, but to add the option and have pytest parse it, you need to add a hook function. Hook functions, which I cover in more detail in Chapter 5, Plugins, on page 95, are another way to control how pytest behaves and are used frequently in plugins. However, adding a custom command-line option and reading it from pytestconfig is common enough that I want to cover it here.


We will use the pytest hook pytest_addoption to add a couple of options to the options already available on the pytest command line:


ch4/pytestconfig/conftest.py

def pytest_addoption(parser):
    parser.addoption("--myopt", action="store_true",
                     help="some boolean option")
    parser.addoption("--foo", action="store", default="bar",
                     help="foo: bar or baz")

Adding command-line options via pytest_addoption should be done in plugins or in the conftest.py file at the top of your project directory structure. You shouldn't do it in a test subdirectory.


The previous code added the --myopt and --foo <value> options, and the pytest help output now includes them:


$ cd /path/to/code/ch4/pytestconfig
$ pytest --help
usage: pytest [options] [file_or_dir] [file_or_dir] [...]
...
custom options:
  --myopt               some boolean option
  --foo=FOO             foo: bar or baz
...

Now we can access these options from the test:


ch4/pytestconfig/test_config.py

import pytest


def test_option(pytestconfig):
    print('"foo" set to:', pytestconfig.getoption('foo'))
    print('"myopt" set to:', pytestconfig.getoption('myopt'))

Let's see how it works:


$ pytest -s -q test_config.py::test_option
"foo" set to: bar
"myopt" set to: False
.
1 passed in 0.01 seconds

$ pytest -s -q --myopt test_config.py::test_option
"foo" set to: bar
"myopt" set to: True
.
1 passed in 0.01 seconds

$ pytest -s -q --myopt --foo baz test_config.py::test_option
"foo" set to: baz
"myopt" set to: True
.
1 passed in 0.01 seconds

Since pytestconfig is a fixture, it can also be obtained from other fixtures. You can make fixtures for option names if you want, for example:


ch4/pytestconfig/test_config.py

@pytest.fixture()
def foo(pytestconfig):
    return pytestconfig.option.foo


@pytest.fixture()
def myopt(pytestconfig):
    return pytestconfig.option.myopt


def test_fixtures_for_options(foo, myopt):
    print('"foo" set to:', foo)
    print('"myopt" set to:', myopt)

You can also access built-in options, not just the options you add yourself, as well as information about how pytest was started (the directory, the arguments, and so on).


Here is an example of a few of those values and options:


def test_pytestconfig(pytestconfig):
    print('args            :', pytestconfig.args)
    print('inifile         :', pytestconfig.inifile)
    print('invocation_dir  :', pytestconfig.invocation_dir)
    print('rootdir         :', pytestconfig.rootdir)
    print('-k EXPRESSION   :', pytestconfig.getoption('keyword'))
    print('-v, --verbose   :', pytestconfig.getoption('verbose'))
    print('-q, --quiet     :', pytestconfig.getoption('quiet'))
    print('-l, --showlocals:', pytestconfig.getoption('showlocals'))
    print('--tb=style      :', pytestconfig.getoption('tbstyle'))

We will return to pytestconfig when I demonstrate the ini-files in Chapter 6, “Configuration,” on page 113.


Using cache


Usually, we testers like each test to be as independent as possible from other tests. We want to make sure order dependencies don't creep in. We want to be able to run or rerun any test in any order and get the same result. We also want test sessions to be repeatable and not change behavior based on previous test sessions.


However, sometimes passing information from one test session to the next can be quite useful. When we do want to pass information to future test sessions, we can do it with the built-in cache fixture.


The cache fixture is all about storing information from one test session and retrieving it in the next. A great example of using the powers of cache for good is the built-in functionality of --last-failed and --failed-first. Let's take a look at how the data for these flags is stored using cache.


Here's the help text for the --last-failed and --failed-first options, as well as a couple of cache options:


$ pytest --help
...
  --lf, --last-failed   rerun only the tests that failed at the last run (or
                        all if none failed)
  --ff, --failed-first  run all tests but run the last failures first. This
                        may re-order tests and thus lead to repeated fixture
                        setup/teardown
  --cache-show          show cache contents, don't perform collection or tests
  --cache-clear         remove all cache contents at start of test run.
...

To see them in action, we will use these two tests:


ch4/cache/test_pass_fail.py

def test_this_passes():
    assert 1 == 1


def test_this_fails():
    assert 1 == 2

Let's run them using --verbose to see the function names, and --tb=no to hide the stack trace:


$ cd /path/to/code/ch4/cache
$ pytest --verbose --tb=no test_pass_fail.py
==================== test session starts ====================
collected 2 items

test_pass_fail.py::test_this_passes PASSED
test_pass_fail.py::test_this_fails FAILED

============ 1 failed, 1 passed in 0.05 seconds =============

If you run them again with the --ff or --failed-first flag, the tests that failed previously will be run first, followed by the rest of the session:


$ pytest --verbose --tb=no --ff test_pass_fail.py
==================== test session starts ====================
run-last-failure: rerun last 1 failures first
collected 2 items

test_pass_fail.py::test_this_fails FAILED
test_pass_fail.py::test_this_passes PASSED

============ 1 failed, 1 passed in 0.04 seconds =============

Or you can use --lf or --last-failed to simply run tests that failed last time:


$ pytest --verbose --tb=no --lf test_pass_fail.py
==================== test session starts ====================
run-last-failure: rerun last 1 failures
collected 2 items

test_pass_fail.py::test_this_fails FAILED

==================== 1 tests deselected =====================
========== 1 failed, 1 deselected in 0.05 seconds ===========

Before we look at how the failure data is stored and how you can use the same mechanism yourself, let's look at another example that makes the value of --lf and --ff even more obvious.


Here is a parameterized test with one failure:


ch4/cache/test_few_failures.py

"""Demonstrate -lf and -ff with failing tests."""

import pytest
from pytest import approx


testdata = [
    # x, y, expected
    (1.01, 2.01, 3.02),
    (1e25, 1e23, 1.1e25),
    (1.23, 3.21, 4.44),
    (0.1, 0.2, 0.3),
    (1e25, 1e24, 1.1e25)
]


@pytest.mark.parametrize("x,y,expected", testdata)
def test_a(x, y, expected):
    """Demo approx()."""
    sum_ = x + y
    assert sum_ == approx(expected)

And the output:


$ cd /path/to/code/ch4/cache
$ pytest -q test_few_failures.py
.F...
====================== FAILURES ======================
_________________________ test_a[1e+25-1e+23-1.1e+25] _________________________

x = 1e+25, y = 1e+23, expected = 1.1e+25

    @pytest.mark.parametrize("x,y,expected", testdata)
    def test_a(x, y, expected):
        """Demo approx()."""
        sum_ = x + y
>       assert sum_ == approx(expected)
E       assert 1.01e+25 == 1.1e+25 ± 1.1e+19
E        +  where 1.1e+25 ± 1.1e+19 = approx(1.1e+25)

test_few_failures.py:17: AssertionError
1 failed, 4 passed in 0.06 seconds

Maybe you can spot the problem right away. But let's pretend the test is longer and more complicated, and the cause isn't obvious. Let's run the test again to see the failure once more. The test case can be specified on the command line:


$ pytest -q "test_few_failures.py::test_a[1e+25-1e+23-1.1e+25]"

If you don't want to copy/paste, or if there are several failing cases you'd like to rerun, --lf is much easier. And if you're really debugging a test failure, another flag that can make things easier is --showlocals, or -l for short:


$ pytest -q --lf -l test_few_failures.py
F
====================== FAILURES ======================
_________________________ test_a[1e+25-1e+23-1.1e+25] _________________________

x = 1e+25, y = 1e+23, expected = 1.1e+25

    @pytest.mark.parametrize("x,y,expected", testdata)
    def test_a(x, y, expected):
        """Demo approx()."""
        sum_ = x + y
>       assert sum_ == approx(expected)
E       assert 1.01e+25 == 1.1e+25 ± 1.1e+19
E        +  where 1.1e+25 ± 1.1e+19 = approx(1.1e+25)

expected   = 1.1e+25
sum_       = 1.01e+25
x          = 1e+25
y          = 1e+23

test_few_failures.py:17: AssertionError
================= 4 tests deselected =================
1 failed, 4 deselected in 0.05 seconds

The reason for the failure should be more obvious.


How does pytest remember which tests failed last time? There's no great trick to it: pytest stores the test failure information from the last test session, and you can look at the stored information with --cache-show:


$ pytest --cache-show
===================== test session starts ======================
------------------------- cache values -------------------------
cache/lastfailed contains:
  {'test_few_failures.py::test_a[1e+25-1e+23-1.1e+25]': True}

================= no tests ran in 0.00 seconds =================

Or you can look in the cache directory:


$ cat .cache/v/cache/lastfailed
{
  "test_few_failures.py::test_a[1e+25-1e+23-1.1e+25]": true
}

The --cache-clear option allows you to clear the cache before a session.


The cache can be used for more than just --lf and --ff. Let's write a fixture that records how long tests take, saves the times, and on the next run reports an error for any test that takes more than twice as long as it did last time.


The interface for the cache fixture is simply:


cache.get(key, default)
cache.set(key, value)

By convention, key names start with the name of your application or plugin, followed by a /, and continue with further / characters to separate sections of the key name. The value you store can be anything that is convertible to json, since that's how it's represented in the .cache directory.
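For instance, here is a minimal sketch of a fixture built on that interface; the key name myapp/run_count and the fixture itself are made up for illustration and are not part of the book's Tasks project:

import pytest


@pytest.fixture()
def run_count(cache):
    """Report how many previous sessions have used this fixture."""
    count = cache.get('myapp/run_count', 0)   # value saved by the previous session, or 0
    cache.set('myapp/run_count', count + 1)   # value the next session will see
    return count


def test_run_count(run_count):
    print('previous sessions that used this fixture:', run_count)

Run it a couple of times with -s and the printed number goes up by one per session, because the value survives between runs in the .cache directory.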


Here is our fixture used to record test times:


ch4/cache/test_slower.py

import datetime
import pytest


@pytest.fixture(autouse=True)
def check_duration(request, cache):
    key = 'duration/' + request.node.nodeid.replace(':', '_')
    # nodeid's can have colons; keys become filenames within .cache,
    # so replace colons with something filename-safe
    start_time = datetime.datetime.now()
    yield
    stop_time = datetime.datetime.now()
    this_duration = (stop_time - start_time).total_seconds()
    last_duration = cache.get(key, None)
    cache.set(key, this_duration)
    if last_duration is not None:
        errorstring = "test duration over 2x last duration"
        assert this_duration <= last_duration * 2, errorstring

Since the fixture is autouse, it doesn't need to be referenced from the test. The request object is used to grab the nodeid for use in the key. The nodeid is a unique identifier that works even with parametrized tests. We prepend the key with 'duration/' to be good cache citizens. The code above yield runs before the test function; the code after yield runs after the test function.


Now we need some tests that take different amounts of time:


ch4/cache/test_slower.py

import time
import random
import pytest


@pytest.mark.parametrize('i', range(5))
def test_slow_stuff(i):
    time.sleep(random.random())

Since you probably don't want to write a bunch of tests for this, I used random and parametrization to easily generate some tests that sleep for a random amount of time, all less than a second. Let's see it run a couple of times:


$ cd /path/to/code/ch4/cache
$ pytest -q --cache-clear test_slower.py
.....
5 passed in 2.10 seconds

$ pytest -q --tb=line test_slower.py
...E..E
=================================== ERRORS ====================================
___________________ ERROR at teardown of test_slow_stuff[1] ___________________
E   AssertionError: test duration over 2x last duration
    assert 0.35702 <= (0.148009 * 2)
___________________ ERROR at teardown of test_slow_stuff[4] ___________________
E   AssertionError: test duration over 2x last duration
    assert 0.888051 <= (0.324019 * 2)
5 passed, 2 error in 3.17 seconds

Well, that was fun. Let's see what's in the cache:


$ pytest -q --cache-show
-------------------------------- cache values ---------------------------------
cache\lastfailed contains:
  {'test_slower.py::test_slow_stuff[2]': True,
   'test_slower.py::test_slow_stuff[4]': True}
cache\nodeids contains:
  ['test_slower.py::test_slow_stuff[0]',
   'test_slower.py::test_slow_stuff[1]',
   'test_slower.py::test_slow_stuff[2]',
   'test_slower.py::test_slow_stuff[3]',
   'test_slower.py::test_slow_stuff[4]']
cache\stepwise contains:
  []
duration\test_slower.py__test_slow_stuff[0] contains:
  0.958055
duration\test_slower.py__test_slow_stuff[1] contains:
  0.214012
duration\test_slower.py__test_slow_stuff[2] contains:
  0.19001
duration\test_slower.py__test_slow_stuff[3] contains:
  0.725041
duration\test_slower.py__test_slow_stuff[4] contains:
  0.836048

no tests ran in 0.03 seconds

You can easily tell the duration data apart from the failure data thanks to the prefix on the duration cache names. It's interesting, though, that the lastfailed functionality manages with a single cache entry, while our duration data takes up one cache entry per test. Let's follow the lastfailed example and put all of our data into one entry.


We are reading and writing to the cache for every test. We could split the fixture into a function scope fixture that measures durations and a session scope fixture that reads and writes the cache. However, if we do this, we can't use the cache fixture, because it has function scope. Fortunately, a quick peek at the implementation on GitHub reveals that the cache fixture simply returns request.config.cache, which is available in any scope.


Here is one of the possible reorganizations of the same functionality:


ch4/cache/test_slower_2.py

from collections import namedtuple
import datetime
import pytest


Duration = namedtuple('Duration', ['current', 'last'])


@pytest.fixture(scope='session')
def duration_cache(request):
    key = 'duration/testdurations'
    d = Duration({}, request.config.cache.get(key, {}))
    yield d
    request.config.cache.set(key, d.current)


@pytest.fixture(autouse=True)
def check_duration(request, duration_cache):
    d = duration_cache
    nodeid = request.node.nodeid
    start_time = datetime.datetime.now()
    yield
    duration = (datetime.datetime.now() - start_time).total_seconds()
    d.current[nodeid] = duration
    if d.last.get(nodeid, None) is not None:
        errorstring = "test duration over 2x last duration"
        assert duration <= (d.last[nodeid] * 2), errorstring

The duration_cache fixture has session scope. Before any tests run, it reads the previous entry, or an empty dictionary if there is no previously cached data. In the preceding code, we saved both the retrieved dictionary and an empty one in a namedtuple called Duration, with accessors current and last. We then passed that namedtuple to check_duration(), which is function scoped and runs for every test function. As the tests run, the same namedtuple is passed to each test, and the duration of the current test is stored in the d.current dictionary. At the end of the test session, the collected current dictionary is saved in the cache.


After running it a couple of times, let's look at the saved cache:


$ pytest -q --cache-clear test_slower_2.py
.....
5 passed in 2.80 seconds

$ pytest -q --tb=no test_slower_2.py
...EE..
7 passed, 2 error in 3.21 seconds

$ pytest -q --cache-show
-------------------------------- cache values ---------------------------------
cache\lastfailed contains:
  {'test_slower_2.py::test_slow_stuff[2]': True,
   'test_slower_2.py::test_slow_stuff[3]': True}
duration\testdurations contains:
  {'test_slower_2.py::test_slow_stuff[0]': 0.483028,
   'test_slower_2.py::test_slow_stuff[1]': 0.198011,
   'test_slower_2.py::test_slow_stuff[2]': 0.426024,
   'test_slower_2.py::test_slow_stuff[3]': 0.762044,
   'test_slower_2.py::test_slow_stuff[4]': 0.056003,
   'test_slower_2.py::test_slow_stuff[5]': 0.18401,
   'test_slower_2.py::test_slow_stuff[6]': 0.943054}

no tests ran in 0.02 seconds

Now all of the duration data is stored in a single cache entry.


Using capsys


The capsys built-in fixture provides two bits of functionality: it allows you to retrieve stdout and stderr from some code, and it disables output capture temporarily. Let's look at retrieving stdout and stderr first.


Suppose you have a function that prints a greeting to stdout:


ch4/cap/test_capsys.py

def greeting(name):
    print('Hi, {}'.format(name))

You can't test it by checking the return value; you have to test stdout somehow. You can check the output by using capsys:


ch4/cap/test_capsys.py

def test_greeting(capsys):
    greeting('Earthling')
    out, err = capsys.readouterr()
    assert out == 'Hi, Earthling\n'
    assert err == ''

    greeting('Brian')
    greeting('Nerd')
    out, err = capsys.readouterr()
    assert out == 'Hi, Brian\nHi, Nerd\n'
    assert err == ''

The captured stdout and stderr are retrieved with capsys.readouterr(). The return value is whatever has been captured since the beginning of the function, or since the last time readouterr() was called.


That example only used stdout. Here's an example using stderr:


import sys


def yikes(problem):
    print('YIKES! {}'.format(problem), file=sys.stderr)


def test_yikes(capsys):
    yikes('Out of coffee!')
    out, err = capsys.readouterr()
    assert out == ''
    assert 'Out of coffee!' in err

pytest usually captures the output from your tests and from the code under test, including print statements. The captured output is shown for failing tests only, at the bottom of the failure report. The -s flag turns capturing off, and output goes to stdout while the tests are running. Usually this is exactly what you want, since it's the output from the failing tests you need in order to debug them. Sometimes, however, you may want some output to get through no matter what, regardless of the capture settings. capsys can do that too: capsys.disabled() temporarily lets output bypass the capture mechanism.


Here is an example:


ch4/cap/test_capsys.py

def test_capsys_disabled(capsys):
    with capsys.disabled():
        print('\nalways print this')          # never captured
    print('normal print, usually captured')   # captured unless -s is used

When we run it, 'always print this' is displayed even though output capture is on:


$ cd /path/to/code/ch4/cap
$ pytest -q test_capsys.py::test_capsys_disabled

The 'always print this' output shows up because it is printed inside the capsys.disabled() block. The second print statement — 'normal print, usually captured' — is a normal print, so its output is captured and displayed only if the test fails, or if you run pytest with -s or --capture=no to turn capturing off.


Using monkeypatch


A "monkey patch" is a dynamic modification of a class or module at runtime. During testing, "monkey patching" is a convenient way to take over part of the runtime environment of the code under test and replace either input dependencies or output dependencies with objects or functions that are more convenient for testing. The monkeypatch built-in fixture lets you do this in the context of a single test. When the test ends, regardless of whether it passed or failed, the original unpatched state is restored and everything changed by the patch is undone. This all sounds rather abstract until we look at some examples. After a quick look at the API, we'll see how monkeypatch is used in test code.


The monkeypatch fixture provides the following functions:

setattr(target, name, value=<notset>, raising=True): set an attribute
delattr(target, name=<notset>, raising=True): delete an attribute
setitem(dic, name, value): set a dictionary entry
delitem(dic, name, raising=True): delete a dictionary entry
setenv(name, value, prepend=None): set an environment variable
delenv(name, raising=True): delete an environment variable
syspath_prepend(path): prepend path to sys.path
chdir(path): change the current working directory


The raising parameter tells pytest whether or not to raise an exception if the item doesn't already exist. The prepend parameter of setenv() can be a character; if it is given, the value of the environment variable becomes value + prepend + <old value>.
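Here's a small sketch of those two parameters; PATH is a real environment variable, while NOT_SET_ANYWHERE is a made-up name used only for illustration:

import os


def test_env_params(monkeypatch):
    # prepend=os.pathsep turns the call into value + prepend + <old value>
    monkeypatch.setenv('PATH', '/my/extra/bin', prepend=os.pathsep)
    assert os.environ['PATH'].startswith('/my/extra/bin' + os.pathsep)

    # raising=False means it's not an error if the variable doesn't exist
    monkeypatch.delenv('NOT_SET_ANYWHERE', raising=False)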


To see monkeypatch in use, let's look at code that deals with a dot configuration file. Mimicking the behavior of many programs that keep preferences in a dot file in the user's home directory, our cheese preferences module reads and writes ~/.cheese.json:


ch4/monkey/cheese.py

import os
import json


def read_cheese_preferences():
    full_path = os.path.expanduser('~/.cheese.json')
    with open(full_path, 'r') as f:
        prefs = json.load(f)
    return prefs


def write_cheese_preferences(prefs):
    full_path = os.path.expanduser('~/.cheese.json')
    with open(full_path, 'w') as f:
        json.dump(prefs, f, indent=4)


def write_default_cheese_preferences():
    write_cheese_preferences(_default_prefs)


_default_prefs = {
    'slicing': ['manchego', 'sharp cheddar'],
    'spreadable': ['Saint Andre', 'camembert', 'bucheron',
                   'goat', 'humbolt fog', 'cambozola'],
    'salads': ['crumbled feta']
}

Let's look at how we could test write_default_cheese_preferences(). It takes no parameters and doesn't return anything, but it does have a side effect we can check: it writes a file to the current user's home directory.


One way to test it is to simply let it run and check the side effect. Assuming read_cheese_preferences() can be trusted, we can use it to check that write_default_cheese_preferences() wrote the right content:


ch4/monkey/test_cheese.py

def test_def_prefs_full():
    cheese.write_default_cheese_preferences()
    expected = cheese._default_prefs
    actual = cheese.read_cheese_preferences()
    assert expected == actual

It works, as long as the user running the test has a writable home directory. Unfortunately, it also overwrites the real cheese preferences file of whoever runs the test. Not good.


If HOME is set, os.path.expanduser() replaces ~ with whatever is in the HOME environment variable. So one fix is to patch HOME for the duration of the test and point it at a temporary directory:


ch4/monkey/test_cheese.py

def test_def_prefs_change_home(tmpdir, monkeypatch):
    monkeypatch.setenv('HOME', tmpdir.mkdir('home'))
    cheese.write_default_cheese_preferences()
    expected = cheese._default_prefs
    actual = cheese.read_cheese_preferences()
    assert expected == actual

This works, as long as the operating system and Python version actually use the HOME environment variable. A quick look at the documentation for expanduser() raises doubts: "On Windows, HOME and USERPROFILE will be used if set, otherwise a combination of…." Wow. That means it may not work on Windows. Let's try something else.


Instead of patching the HOME environment variable, let's patch expanduser itself:


ch4/monkey/test_cheese.py

def test_def_prefs_change_expanduser(tmpdir, monkeypatch):
    fake_home_dir = tmpdir.mkdir('home')
    monkeypatch.setattr(cheese.os.path, 'expanduser',
                        (lambda x: x.replace('~', str(fake_home_dir))))
    cheese.write_default_cheese_preferences()
    expected = cheese._default_prefs
    actual = cheese.read_cheese_preferences()
    assert expected == actual

During the test, anything in the cheese module that calls os.path.expanduser() gets our lambda instead, which replaces ~ with the fake home directory (you could also use re.sub for the replacement). So far we've used setenv() and setattr() to do the patching. Now let's look at setitem().


Suppose we also want to be sure the code works with preferences other than the defaults. Rather than permanently changing the module-level defaults, we can patch individual entries of _default_prefs for the duration of the test and check that write_default_cheese_preferences() writes the modified values:


ch4/monkey/test_cheese.py

def test_def_prefs_change_defaults(tmpdir, monkeypatch):
    # write the file once
    fake_home_dir = tmpdir.mkdir('home')
    monkeypatch.setattr(cheese.os.path, 'expanduser',
                        (lambda x: x.replace('~', str(fake_home_dir))))
    cheese.write_default_cheese_preferences()
    defaults_before = copy.deepcopy(cheese._default_prefs)

    # change the defaults
    monkeypatch.setitem(cheese._default_prefs, 'slicing', ['provolone'])
    monkeypatch.setitem(cheese._default_prefs, 'spreadable', ['brie'])
    monkeypatch.setitem(cheese._default_prefs, 'salads', ['pepper jack'])
    defaults_modified = cheese._default_prefs

    # write it again with modified defaults
    cheese.write_default_cheese_preferences()

    # read, and check
    actual = cheese.read_cheese_preferences()
    assert defaults_modified == actual
    assert defaults_modified != defaults_before

Because _default_prefs is a module-level dictionary, monkeypatch.setitem() lets us change individual items just for the duration of the test; they are restored once the test ends.
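As a side note, here's a tiny sketch (with a made-up colors dictionary, not part of the cheese module) showing that a setitem() patch really does disappear when the test ends:

colors = {'sky': 'blue'}


def test_patched(monkeypatch):
    monkeypatch.setitem(colors, 'sky', 'green')
    assert colors['sky'] == 'green'


def test_unpatched():
    # the patch from the previous test has already been undone
    assert colors['sky'] == 'blue'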


We've now used setenv(), setattr(), and setitem(). The del forms — delenv(), delattr(), and delitem() — are quite similar: they remove an environment variable, an attribute, or a dictionary entry instead of setting one, and the raising parameter again controls whether a missing item is treated as an error. A small sketch of the del forms follows; the remaining two monkeypatch functions, syspath_prepend() and chdir(), are covered after that.
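In this sketch the Config class and the settings dictionary are invented purely for illustration:

settings = {'debug': True, 'verbose': False}


class Config:
    timeout = 30


def test_del_forms(monkeypatch):
    # delitem() removes a dictionary entry for the duration of the test
    monkeypatch.delitem(settings, 'debug')
    assert 'debug' not in settings

    # delattr() removes an attribute; it comes back after the test
    monkeypatch.delattr(Config, 'timeout')
    assert not hasattr(Config, 'timeout')

    # with raising=False, deleting something that isn't there is not an error
    monkeypatch.delitem(settings, 'no_such_key', raising=False)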


syspath_prepend(path) prepends path to sys.path, which puts your new path ahead of other directories for module imports. One use of this is to replace a system-wide module or package with a stub version: prepend the directory containing your stub with monkeypatch.syspath_prepend(), and the code under test will find the stub version first.
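Here's a hedged sketch of that idea; the module name heavy_module is made up, and the test writes a tiny stand-in module into a temporary directory before prepending that directory to sys.path:

def test_finds_stub_first(monkeypatch, tmpdir):
    stub_dir = tmpdir.mkdir('stubs')
    stub_dir.join('heavy_module.py').write('def compute():\n    return 42\n')
    monkeypatch.syspath_prepend(str(stub_dir))

    import heavy_module          # resolves to the stub written above
    assert heavy_module.compute() == 42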


chdir(path) changes the current working directory during the test. This is useful for testing command-line scripts and other utilities that depend on the current working directory. You can set up a temporary directory with whatever contents make sense for your script and then use monkeypatch.chdir(the_tmpdir).
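A short sketch of chdir(); the data.txt file name is made up, and the point is that code under test can use a plain relative path:

import os


def test_reads_from_cwd(monkeypatch, tmpdir):
    tmpdir.join('data.txt').write('hello')
    monkeypatch.chdir(tmpdir)

    # code that relies on the current working directory now sees the temp dir
    assert os.path.samefile(os.getcwd(), str(tmpdir))
    with open('data.txt') as f:
        assert f.read() == 'hello'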


The monkeypatch fixture functions can also be used together with unittest.mock to temporarily replace attributes with mock objects. You'll look at that in Chapter 7, Using pytest with Other Tools, on page 125.


Using doctest_namespace


The doctest module is part of the standard Python library and lets you put little code examples inside docstrings and check that they work as written. You can have pytest find and run doctest tests in your Python code by using the --doctest-modules flag. With the doctest_namespace built-in fixture, you can build autouse fixtures that add symbols to the namespace pytest uses while running doctest tests. This lets the docstrings stay more readable.


doctest_namespace is commonly used to add module imports to the namespace, especially where Python convention is to shorten a module or package name. For instance, numpy is usually imported with import numpy as np.
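For example, a conftest.py along these lines (assuming numpy is installed; this snippet is not part of the book's example project) would let every collected doctest use np without an explicit import:

import numpy
import pytest


@pytest.fixture(autouse=True)
def add_np(doctest_namespace):
    doctest_namespace['np'] = numpy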


Let's play with an example. Say we have a module named unnecessary_math.py with multiply() and divide() functions that we really want to make sure everyone understands clearly, so we put some usage examples in both the file docstring and the docstrings of the functions:


ch4/dt/1/unnecessary_math.py

"""
This module defines multiply(a, b) and divide(a, b).

>>> import unnecessary_math as um

Here's how you use multiply:

>>> um.multiply(4, 3)
12
>>> um.multiply('a', 3)
'aaa'

Here's how you use divide:

>>> um.divide(10, 5)
2.0
"""


def multiply(a, b):
    """
    Returns a multiplied by b.

    >>> um.multiply(4, 3)
    12
    >>> um.multiply('a', 3)
    'aaa'
    """
    return a * b


def divide(a, b):
    """
    Returns a divided by b.

    >>> um.divide(10, 5)
    2.0
    """
    return a / b

Since the name unnecessary_math is long, we decide to use um instead by putting import unnecessary_math as um in the top docstring. The code in the function docstrings doesn't repeat the import, but it continues the um convention. The problem is that pytest treats each docstring containing code as a separate doctest. The import in the top docstring lets the first test pass, but the function docstrings fail:


$ cd /path/to/code/ch4/dt/1
$ pytest -v --doctest-modules --tb=short unnecessary_math.py
============================= test session starts =============================
collected 3 items

unnecessary_math.py::unnecessary_math PASSED
unnecessary_math.py::unnecessary_math.divide FAILED
unnecessary_math.py::unnecessary_math.multiply FAILED

================================== FAILURES ===================================
______________________ [doctest] unnecessary_math.divide ______________________
034
035     Returns a divided by b.
036
037     >>> um.divide(10, 5)
UNEXPECTED EXCEPTION: NameError("name 'um' is not defined",)
Traceback (most recent call last):
  ...
  File "<doctest unnecessary_math.divide[0]>", line 1, in <module>
NameError: name 'um' is not defined
...
_____________________ [doctest] unnecessary_math.multiply _____________________
022
023     Returns a multiplied by b.
024
025     >>> um.multiply(4, 3)
UNEXPECTED EXCEPTION: NameError("name 'um' is not defined",)
Traceback (most recent call last):
  ...
  File "<doctest unnecessary_math.multiply[0]>", line 1, in <module>
NameError: name 'um' is not defined

/path/to/code/ch4/dt/1/unnecessary_math.py:23: UnexpectedException
================ 2 failed, 1 passed in 0.03 seconds =================

One way to fix it is to put the import statement in each docstring:


ch4/dt/2/unnecessary_math.py

"""
This module defines multiply(a, b) and divide(a, b).

>>> import unnecessary_math as um

Here's how you use multiply:

>>> um.multiply(4, 3)
12
>>> um.multiply('a', 3)
'aaa'

Here's how you use divide:

>>> um.divide(10, 5)
2.0
"""


def multiply(a, b):
    """
    Returns a multiplied by b.

    >>> import unnecessary_math as um
    >>> um.multiply(4, 3)
    12
    >>> um.multiply('a', 3)
    'aaa'
    """
    return a * b


def divide(a, b):
    """
    Returns a divided by b.

    >>> import unnecessary_math as um
    >>> um.divide(10, 5)
    2.0
    """
    return a / b

This solves the problem, and all of the doctests pass:


$ cd /path/to/code/ch4/dt/2
$ pytest -v --doctest-modules --tb=short unnecessary_math.py
============================= test session starts =============================
collected 3 items

unnecessary_math.py::unnecessary_math PASSED                             [ 33%]
unnecessary_math.py::unnecessary_math.divide PASSED                      [ 66%]
unnecessary_math.py::unnecessary_math.multiply PASSED                    [100%]

===================== 3 passed in 0.03 seconds ======================

However, repeating the import in every docstring clutters the examples and defeats part of the purpose of doctests: clean, readable documentation.


Instead, we can use the doctest_namespace fixture in an autouse fixture placed in a conftest.py file, like this:


ch4/dt/3/conftest.py

import pytest
import unnecessary_math


@pytest.fixture(autouse=True)
def add_um(doctest_namespace):
    doctest_namespace['um'] = unnecessary_math

This tells pytest to add the name um to the doctest_namespace, with the value of the imported unnecessary_math module. With this in place in the conftest.py file, any doctests found within the scope of this conftest.py can use um directly.


A fuller discussion of using doctest together with pytest is in Chapter 7, Using pytest with Other Tools, on page 125.


Using recwarn


The recwarn built-in fixture is used to examine warnings generated by the code under test. In Python, you can add warnings that work a lot like assertions but are used for things that don't need to stop execution. For example, suppose we want to stop supporting a function that we wish people weren't using anymore. We can put a deprecation warning in the code:


ch4/test_warnings.py

import warnings
import pytest


def lame_function():
    warnings.warn("Please stop using this", DeprecationWarning)
    # rest of function

We can make sure the warning is getting issued correctly with a test:


ch4/test_warnings.py

def test_lame_function(recwarn):
    lame_function()
    assert len(recwarn) == 1
    w = recwarn.pop()
    assert w.category == DeprecationWarning
    assert str(w.message) == 'Please stop using this'

The recwarn value acts like a list of warnings, and each warning in the list has category, message, filename, and lineno attributes, as shown in the code.


The warnings are collected from the beginning of the test. If that's inconvenient because the part of the test where you care about warnings is near the end, you can use recwarn.clear() to clear out the list before the chunk of the test where you do want to collect warnings.
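A small sketch of that idea; the noisy_setup() helper is made up for illustration:

import warnings


def noisy_setup():
    warnings.warn("setup chatter we don't care about", UserWarning)


def test_warning_after_clear(recwarn):
    noisy_setup()
    recwarn.clear()                     # drop everything collected so far
    warnings.warn("the one we care about", UserWarning)
    assert len(recwarn) == 1
    assert str(recwarn.pop().message) == "the one we care about"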


In addition to recwarn, pytest can check for warnings with pytest.warns():


ch4/test_warnings.py

def test_lame_function_2():
    with pytest.warns(None) as warning_list:
        lame_function()

    assert len(warning_list) == 1
    w = warning_list.pop()
    assert w.category == DeprecationWarning
    assert str(w.message) == 'Please stop using this'

The pytest.warns() context manager provides an elegant way to demarcate exactly which portion of the code you're checking for warnings. The recwarn fixture and the pytest.warns() context manager provide similar functionality, so which one to use is mostly a matter of taste.
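pytest.warns() is also commonly given the expected warning category directly instead of None; here's a brief sketch, assuming it lives in the same file as lame_function() above (where pytest is already imported):

def test_lame_function_3():
    with pytest.warns(DeprecationWarning):
        lame_function()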


Exercises


  1. In ch4/cache/test_slower.py there is an autouse fixture called check_duration(). Copy it into ch3/tasks_proj/tests/conftest.py.
  2. Run the tests from Chapter 3.
  3. For tests that are really fast, twice the previous time is still really fast. Instead of 2x, change the fixture to check against 0.1 seconds plus 2x the last duration.
  4. Run pytest with the modified fixture. Do the results seem reasonable?

What's next


In this chapter, you looked at several of pytest's built-in fixtures. Next, you'll take a closer look at plugins. The nuances of writing large plugins could fill a book of their own; however, small custom plugins are a regular part of the pytest ecosystem.





Source: https://habr.com/ru/post/448792/

