
When the question of testing my code came up, I did not hesitate to use boost::test. To broaden my horizons I tried the Google Test framework. Besides the various goodies it offers, unlike boost::test the project is under active development. I would like to share the knowledge I acquired. Anyone interested, read on.
Key Concepts
The key concept in the Google Test framework is the assertion.
An assertion is an expression whose evaluation can result in a success, a nonfatal failure, or a fatal failure. A fatal failure aborts the current test; in the other cases the test continues.
A test is a set of assertions. Tests, in turn, are grouped into test cases. If a group of objects is hard to set up separately for each test, you can use a fixture. Together, the test cases make up a test program.
Assertions
Assertions that produce fatal failures start with ASSERT_, nonfatal ones with EXPECT_. Keep in mind that a fatal failure causes an immediate return from the function in which the failed assertion occurred. If that assertion is followed by code that frees memory or performs some other cleanup, you can end up with a memory leak.
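A minimal sketch of this pitfall (the test name and the allocation are invented purely for illustration): if the fatal assertion fails, the cleanup below it never runs.

TEST(LeakCase, LeakyTest)
{
    int *p = new int(42);
    ASSERT_EQ(*p, 0);  // fatal failure: the test function returns right here...
    delete p;          // ...so this line is never reached and the int leaks
}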
The following assertions are available (the nonfatal versions start with EXPECT_ instead of ASSERT_):
Simplest logical
- ASSERT_TRUE(condition);
- ASSERT_FALSE(condition);
Comparison
- ASSERT_EQ(expected, actual); - ==
- ASSERT_NE(val1, val2); - !=
- ASSERT_LT(val1, val2); - <
- ASSERT_LE(val1, val2); - <=
- ASSERT_GT(val1, val2); - >
- ASSERT_GE(val1, val2); - >=
String comparison
- ASSERT_STREQ(expected_str, actual_str);
- ASSERT_STRNE(str1, str2);
- ASSERT_STRCASEEQ(expected_str, actual_str); - case-insensitive
- ASSERT_STRCASENE(str1, str2); - case-insensitive
Check for exceptions
- ASSERT_THROW(statement, exception_type);
- ASSERT_ANY_THROW(statement);
- ASSERT_NO_THROW(statement);
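For example, the exception assertions above might be used like this (ThrowIfNegative() is an invented helper that throws std::invalid_argument for negative arguments):

#include <stdexcept>

void ThrowIfNegative(int x)
{
    if (x < 0)
        throw std::invalid_argument("negative value");
}

TEST(ExceptionCase, Throws)
{
    ASSERT_THROW(ThrowIfNegative(-1), std::invalid_argument);  // expects this exact type
    ASSERT_ANY_THROW(ThrowIfNegative(-1));                     // any exception will do
    ASSERT_NO_THROW(ThrowIfNegative(1));                       // must not throw at all
}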
Predicate verification
- ASSERT_PREDN(pred, val1, val2, ..., valN); - N <= 5
- ASSERT_PRED_FORMATN(pred_format, val1, val2, ..., valN); - works like the previous one, but lets you control the failure output
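A short sketch of a binary predicate check (MutuallyPrime() is an invented example; on failure Google Test prints the predicate call together with its argument values):

// Returns true if a and b have no common divisor other than 1.
bool MutuallyPrime(int a, int b)
{
    while (b != 0) {  // Euclid's algorithm for the greatest common divisor
        int t = a % b;
        a = b;
        b = t;
    }
    return a == 1;
}

TEST(PredicateCase, Coprime)
{
    ASSERT_PRED2(MutuallyPrime, 3, 8);  // passes: gcd(3, 8) == 1
    EXPECT_PRED2(MutuallyPrime, 4, 6);  // fails and reports MutuallyPrime(4, 6)
}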
Floating point comparison
- ASSERT_FLOAT_EQ(expected, actual); - approximate float comparison
- ASSERT_DOUBLE_EQ(expected, actual); - approximate double comparison
- ASSERT_NEAR(val1, val2, abs_error); - the difference between val1 and val2 does not exceed abs_error
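A small sketch with arbitrary values:

TEST(FloatCase, Comparisons)
{
    ASSERT_FLOAT_EQ(0.3f, 0.1f + 0.2f);  // "almost equal" comparison for float
    ASSERT_DOUBLE_EQ(0.3, 0.1 + 0.2);    // the same idea for double
    ASSERT_NEAR(3.14159, 3.14, 0.01);    // |val1 - val2| must not exceed 0.01
}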
Explicit success or failure
- SUCCEED();
- FAIL();
- ADD_FAILURE();
- ADD_FAILURE_AT("file_path", line_number);
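A sketch of explicit failures (the loop and values are invented; ADD_FAILURE() is nonfatal, so the test keeps running, while FAIL() would return from it immediately):

TEST(ExplicitCase, Failures)
{
    for (int i = 0; i < 3; ++i) {
        if (i == 2)
            ADD_FAILURE() << "unexpected value " << i;  // nonfatal failure
    }
    ADD_FAILURE_AT(__FILE__, __LINE__);  // the same, but with an explicit location
}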
You can also write your own function that returns an AssertionResult:
::testing::AssertionResult IsTrue(bool foo)
{
    if (foo)
        return ::testing::AssertionSuccess();
    else
        return ::testing::AssertionFailure() << foo << " is not true";
}

TEST(MyFunCase, TestIsTrue)
{
    EXPECT_TRUE(IsTrue(false));
}
You can check types at compile time using the ::testing::StaticAssertTypeEq<T1, T2>() function. Compilation fails with an error if types T1 and T2 do not match.
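For example, inside a template you can verify at compile time that an instantiation uses the expected type (Container here is an invented class):

template <typename T>
class Container
{
public:
    void CheckElementType()
    {
        // compilation fails unless T is exactly int;
        // note that the check fires only when this member function is instantiated
        ::testing::StaticAssertTypeEq<int, T>();
    }
};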
When an assertion fails, the values involved in it are printed. In addition, you can attach your own message:
ASSERT_EQ(1, 0) << "1 is not equal 0";
You can use wide characters (wchar_t) both in messages and in the string assertions; in that case the output is produced in UTF-8.
Tests
To define a test, use the TEST macro. It defines a void function in which assertions can be used. As noted earlier, a fatal failure causes an immediate return from that function.
TEST(test_case_name, test_name)
{
    ASSERT_EQ(1, 0) << "1 is not equal 0";
}
TEST takes two parameters that uniquely identify the test: the name of the test case and the name of the test. Within one test case, test names must be unique. If a name starts with DISABLED_, the test (or the whole test case) is marked as temporarily disabled and will not be run.
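For instance (the names are invented):

// The DISABLED_ prefix on the test name skips this single test.
TEST(MyCase, DISABLED_BrokenTest)
{
    ASSERT_TRUE(false);
}

// The prefix on the test case name disables every test in the case.
TEST(DISABLED_ObsoleteCase, SomeTest)
{
    ASSERT_TRUE(true);
}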
Assertions can be used not only directly inside a test but also in any function called from it. There is only one limitation: assertions that generate fatal failures cannot be used in non-void functions.
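A sketch of that limitation (the helpers are invented): a void helper may contain fatal assertions, while a non-void helper has to make do with the nonfatal EXPECT_* ones. Note that a fatal failure inside a helper returns only from the helper itself, not from the calling test.

// OK: fatal assertions are allowed in a void function.
void CheckPositive(int x)
{
    ASSERT_GT(x, 0);
}

// A fatal assertion would not compile here, so use a nonfatal one.
int CheckedSquare(int x)
{
    EXPECT_GE(x, 0);
    return x * x;
}

TEST(HelperCase, UseHelpers)
{
    CheckPositive(5);
    EXPECT_EQ(CheckedSquare(3), 9);
}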
Fixtures
It happens that the objects involved in testing are hard to set up separately for each test. You can describe the setup once and have it executed automatically for every test. This is what fixtures are for.
A fixture is a class derived from ::testing::Test that declares all the objects needed for testing; they are initialized in the constructor or in the SetUp() function, and the TearDown() function releases the resources. Tests that use a fixture must be declared with the TEST_F macro, whose first parameter is the name of the fixture rather than the name of a test case.
For each test, a new fixture object is created and configured with SetUp(), the test is run, resources are released with TearDown(), and the fixture object is destroyed. Thus every test gets its own copy of the fixture, not "corrupted" by the previous test.
#include <gtest/gtest.h>
#include <iostream>

class Foo
{
public:
    Foo() : i(0) { std::cout << "CONSTRUCTED" << std::endl; }
    ~Foo() { std::cout << "DESTRUCTED" << std::endl; }

    int i;
};

class TestFoo : public ::testing::Test
{
protected:
    void SetUp() { foo = new Foo; foo->i = 5; }
    void TearDown() { delete foo; }

    Foo *foo;
};

TEST_F(TestFoo, test1)
{
    ASSERT_EQ(foo->i, 5);
    foo->i = 10;
}

TEST_F(TestFoo, test2)
{
    ASSERT_EQ(foo->i, 5);
}

int main(int argc, char *argv[])
{
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}
In some cases creating the test objects is very expensive, and the tests do not modify them. In that case you do not have to build a new fixture for every test; instead you can use a shared fixture with suite-wide setup and teardown. State kept in static members of the fixture class is shared by all tests of the suite, and the static functions SetUpTestCase() and TearDownTestCase() are used to set up such objects and free their resources. The test suite calls SetUpTestCase() before its first test and TearDownTestCase() after its last one.
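A sketch of a shared fixture (ExpensiveResource is an invented stand-in for a costly object):

class ExpensiveResource
{
public:
    int Query() const { return 42; }
};

class SharedTest : public ::testing::Test
{
protected:
    // called once before the first test of the suite
    static void SetUpTestCase()
    {
        resource = new ExpensiveResource;
    }

    // called once after the last test of the suite
    static void TearDownTestCase()
    {
        delete resource;
        resource = NULL;
    }

    static ExpensiveResource *resource;  // shared by all tests of the suite
};

ExpensiveResource *SharedTest::resource = NULL;

TEST_F(SharedTest, First)  { ASSERT_EQ(resource->Query(), 42); }
TEST_F(SharedTest, Second) { ASSERT_EQ(resource->Query(), 42); }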
If you need SetUp() and TearDown() for the whole test program rather than for a single test case, derive a class from ::testing::Environment, override its SetUp() and TearDown(), and register it with AddGlobalTestEnvironment().
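A sketch of a global environment (the logging is purely illustrative):

#include <gtest/gtest.h>
#include <iostream>

class GlobalEnvironment : public ::testing::Environment
{
public:
    // runs once before all the tests of the program
    virtual void SetUp()
    {
        std::cout << "global SetUp" << std::endl;
    }

    // runs once after the last test has finished
    virtual void TearDown()
    {
        std::cout << "global TearDown" << std::endl;
    }
};

int main(int argc, char *argv[])
{
    ::testing::InitGoogleTest(&argc, argv);
    // Google Test takes ownership of the registered environment object
    ::testing::AddGlobalTestEnvironment(new GlobalEnvironment);
    return RUN_ALL_TESTS();
}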
Running tests
Having declared all the necessary tests, we can run them with the RUN_ALL_TESTS() function. It may be called only once. The test program should return the result of RUN_ALL_TESTS(), because some automated testing tools judge the outcome of the run by the program's return code.
Flags
The ::testing::InitGoogleTest(&argc, argv) function called before RUN_ALL_TESTS() turns your test program into more than just an executable that prints test results. It becomes a full-fledged application that accepts command-line parameters changing its behavior. As usual, the -h and --help options give you the list of all supported options. I will list a few of them (for the complete list, see the documentation).
- ./test --gtest_filter=TestCaseName.*-TestCaseName.SomeTest - run all tests of TestCaseName except SomeTest
- ./test --gtest_repeat=1000 --gtest_break_on_failure - run the test program 1000 times and stop at the first failure
- ./test --gtest_output="xml:out.xml" - in addition to the output on stdout, create out.xml, an XML report with the results of the test program
- ./test --gtest_shuffle - run the tests in random order
If you always use certain parameters, you can set the corresponding environment variable and run the executable without them. For example, setting the GTEST_ALSO_RUN_DISABLED_TESTS variable to a non-zero value is equivalent to passing the --gtest_also_run_disabled_tests flag.
Instead of a conclusion
In this post I briefly went over the main features of the Google Test framework. For more information, refer to the documentation. There you can learn about ASSERT_DEATH and the other death tests for code that is expected to crash, about additional logging, parameterized tests, output customization, testing private class members, and more.
UPD: Following a fair remark from nikel, brief information about the use of flags has been added.
UPD 2: Fixed the markup after the changes on Habr (the native source tag).