Six months ago my project had roughly 0% unit test coverage. There were a few simple classes; writing unit tests for them was easy but largely useless, because the important algorithms lived in complex classes. And the classes that were complex in terms of behavior were hard to unit-test, since they were tied to other complex classes and to configuration classes. It was impossible even to create an object of such a complex class, let alone cover it with unit tests.
Some time ago I read "Writing Testable Code" on the Google Testing Blog.
The key idea of the article is that C++ code suitable for unit testing is not written the same way as ordinary C++ code.
Before that, I had the impression that the unit testing framework was what mattered most for writing unit tests. It turned out otherwise. The role of the framework is secondary; first of all you need to write code that is suitable for unit testing. The author uses the term "testable code" for this, or, as seems more accurate to me, "unit-testable code". After that everything is quite simple. For testable code you can write the UT right away, which gives you Test Driven Development (TDD), or you can write them later, the code still allows it. I write the tests together with the code, then check the coverage report for forgotten and missed spots in the code and add the missing tests.
In the article, the author gives several principles. I will note and comment on the ones most important from my point of view.
#1. Mixing object graph construction with application logic:
An absolutely important principle. In practice, any complex class usually creates several objects of other classes inside itself, for example in the constructor or while processing its configuration.
The usual approach is to call new directly in the class code. From the unit testing point of view this is completely wrong: build classes this way and you end up with exactly that lump of classes stuck together that cannot be tested.
The correct approach from the UT point of view: if a class needs to create objects, it should receive a pointer or a reference to a class factory interface as input.
Example:
// Factory interface
class input_handler_factory_i {
public:
    virtual ~input_handler_factory_i() {}
};

// Production factory
class input_handler_factory : public input_handler_factory_i {
};

// The class receives the factory instead of calling new itself
class input_handler {
public:
    input_handler(std::shared_ptr<input_handler_factory_i>);
};

// Test factory used in unit tests
class test_input_handler_factory : public input_handler_factory_i {
};
I usually return std::shared_ptr from the factory methods. That way the unit test can keep the created test objects and check their state directly. One more thing: in the factory I not only create objects, I can also perform deferred initialization of the objects there.
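A minimal sketch of how this plays out in a test, assuming a hypothetical handler_i interface, a create_handler() factory method, and a test double with an inspectable flag (none of these names are from the original):

#include <memory>

class handler_i {
public:
    virtual ~handler_i() {}
    virtual void process() = 0;
};

class input_handler_factory_i {
public:
    virtual ~input_handler_factory_i() {}
    // Returning std::shared_ptr lets the test keep its own copy
    // of the created object and inspect it afterwards.
    virtual std::shared_ptr<handler_i> create_handler() = 0;
};

// Test double that records whether it was used.
class test_handler : public handler_i {
public:
    bool processed = false;
    void process() override { processed = true; }
};

// Test factory: remembers the last object it created.
class test_input_handler_factory : public input_handler_factory_i {
public:
    std::shared_ptr<test_handler> last_created;

    std::shared_ptr<handler_i> create_handler() override {
        last_created = std::make_shared<test_handler>();
        return last_created;  // the test holds last_created and checks processed
    }
};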
#2. Ask for things, Don't look for things (aka Dependency Injection / Law of Demeter):
Objects with which the class interacts should be provided to it directly.
For example, instead of passing the class a reference to an application object whose constructor in turn receives a reference to a meta::class_repository object, pass the meta::class_repository reference to the class constructor directly.
With this approach, in unit tests it is enough to create a meta::class_repository object rather than an object of the whole application class.
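A sketch of the difference; the event_mapper class and the application layout here are made up for illustration, only meta::class_repository comes from the text:

namespace meta { class class_repository {}; }

class application {
public:
    meta::class_repository repository;
    // ...plus many other heavyweight dependencies
};

// Looking for things: testing this class requires building a whole application.
class event_mapper_bad {
public:
    explicit event_mapper_bad(application& app) : repository_(app.repository) {}
private:
    meta::class_repository& repository_;
};

// Asking for things: the test only needs to create a meta::class_repository.
class event_mapper {
public:
    explicit event_mapper(meta::class_repository& repository)
        : repository_(repository) {}
private:
    meta::class_repository& repository_;
};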
#6. Static methods (or living in a procedural world):
Here the author makes an important point:
The key to testing is the presence of seams.
Interfaces are important. No interfaces, no way to test.
An example. I needed to write unit tests for a failover service. In its work it is tied to the library class zookeeper::config_service, and there were no "seams" for zookeeper::config_service. I asked the zookeeper::config_service developer to add a zookeeper::config_service_i interface and make zookeeper::config_service inherit from zookeeper::config_service_i.
If it had not been possible to add the interface that simply, I would have used a proxy object and an interface for the proxy object.
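Roughly what that seam looks like; the get_value() method is a made-up placeholder, since the real zookeeper::config_service API is not shown here:

#include <string>

namespace zookeeper {

// The added interface: the "seam".
class config_service_i {
public:
    virtual ~config_service_i() {}
    virtual std::string get_value(const std::string& key) = 0;  // placeholder method
};

// The real library class now implements the interface.
class config_service : public config_service_i {
public:
    std::string get_value(const std::string& key) override;
};

}  // namespace zookeeper

// In unit tests the failover service receives a stub instead.
class stub_config_service : public zookeeper::config_service_i {
public:
    std::string get_value(const std::string&) override { return "test_value"; }
};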
#7. Favor composition over inheritance
Inheritance glues classes together and makes unit testing an individual class difficult, so where possible it is better to do without inheritance.
However, sometimes inheritance is indispensable. For example:
class amqp_service : public AMQP::service_interface {
public:
    uint32_t on_message(AMQP::session::ptr,
                        const AMQP::basic_deliver&,
                        const AMQP::content_header&,
                        dtl::buffer&,
                        AMQP::async_ack::ptr) override;
};
This is a case where the on_message method has to be defined in a derived class and there is no way around inheriting from AMQP::service_interface. In such cases I do not put complex algorithms into amqp_service::on_message(). Inside the amqp_service::on_message() call I immediately call input_handlers::add_message(). Thus, the AMQP message processing logic moves into input_handlers, which is already written correctly from the unit testing point of view and which I can test in full.
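A sketch of that delegation, assuming a hypothetical input_handlers member named handlers_ and an add_message() signature mirroring on_message(), neither of which is shown in the original:

class amqp_service : public AMQP::service_interface {
public:
    explicit amqp_service(input_handlers& handlers) : handlers_(handlers) {}

    uint32_t on_message(AMQP::session::ptr session,
                        const AMQP::basic_deliver& deliver,
                        const AMQP::content_header& header,
                        dtl::buffer& body,
                        AMQP::async_ack::ptr ack) override
    {
        // No processing logic here: forward everything to the unit-testable class.
        return handlers_.add_message(session, deliver, header, body, ack);
    }

private:
    input_handlers& handlers_;  // injected; owned elsewhere
};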
#9. Mixing Service Objects with Value Objects
An important idea. Service-object classes are complex, and their objects are created in factories; value objects, by contrast, are simple, and it is fine to create them directly. The two kinds should not be mixed in one class.
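A minimal illustration of the split; message_id (a value object) and message_router with its factory (a service object) are invented names:

#include <cstdint>
#include <memory>
#include <string>

// Value object: trivial to construct, no dependencies; fine to create directly.
class message_id {
public:
    explicit message_id(uint64_t value) : value_(value) {}
    uint64_t value() const { return value_; }
private:
    uint64_t value_;
};

// Service object: has behavior and dependencies, so it lives behind an
// interface and is produced by a factory, never by a raw new in business code.
class message_router_i {
public:
    virtual ~message_router_i() {}
    virtual void route(const message_id& id, const std::string& payload) = 0;
};

class message_router_factory_i {
public:
    virtual ~message_router_factory_i() {}
    virtual std::shared_ptr<message_router_i> create_router() = 0;
};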
As for effort: developing code and unit tests simultaneously significantly increases development time. How much depends on the depth of testing:
1) If you only cover the main scenarios.
2) If you additionally cover the "dark corners" that are visible only in the coverage report and that a tester usually would simply not check, and hence not spend time on.
3) If you add unit tests for negative, rare, or complex scenarios. For example, a UT that checks changing the number of workers in the configuration on the fly, with an empty and with a non-empty processing queue.
4) If the code was not testable, but the task was to extend it with new features and unit tests, which would first require refactoring.
I will not give precise estimates, but my impression is this: if unit testing covers not just the main scenario but also points 2 and 3, development time grows by about 100% compared to development without unit tests. If the code is not testable and a feature with unit tests has to be added to it, then the refactoring needed to turn it into testable code raises the effort by about 200%.
One more nuance about effort. If the developer approaches writing UT thoroughly and does everything in points 1, 2 and 3, while the team lead assumes that unit tests essentially mean point 1, then questions about why development takes so long are to be expected.
There is also the question of the performance of such testable code. I once heard the opinion that inheriting from interfaces and using virtual functions hurts performance, and that such code is therefore not worth writing. As it happened, one of my tasks was to increase AMQP message processing performance fivefold, to 25,000 records per second. After completing the task I profiled the working program on Linux. At the top of the profile were pthread_mutex_lock and pthread_mutex_unlock, coming from the class allocators. The overhead of virtual function calls simply had no noticeable effect. My conclusion on performance: the use of interfaces had no measurable impact on it.
In conclusion, here are the test coverage figures for some files of my project after switching to development with unit tests. The files failover_service.cpp, input_handlers.cpp, and input_handler.cpp were developed following "Writing Testable Code" and have a high degree of unit test coverage.
Test: data_provider_coverage          Date: 2016-06-28 16:38:35
Lines:     1410 / 10010   14.1 %
Functions:  371 /  1654   22.4 %

Filename                   Line coverage         Function coverage
amqp_service.cpp            8.0 %    28 / 350    25.6 %   10 / 39
config_service.cpp          1.5 %     7 / 460     6.3 %    4 / 63
event_controller.cpp        0.3 %     1 / 380     3.6 %    2 / 55
failover_service.cpp       81.8 %   323 / 395    66.7 %   34 / 51
file_service.cpp           31.5 %    40 / 127    52.6 %   10 / 19
http_service.cpp            0.7 %     1 / 152    10.5 %    2 / 19
input_handler.cpp          73.0 %   292 / 400    95.7 %   22 / 23
input_handler_common.cpp   16.4 %    12 / 73     20.8 %    5 / 24
input_handler_worker.cpp    0.3 %     1 / 391     5.9 %    2 / 34
input_handlers.cpp         98.6 %   217 / 220   100.0 %   26 / 26
input_message.cpp          86.6 %   110 / 127    90.3 %   28 / 31
schedule_service.cpp        0.2 %     3 / 1473    1.6 %    2 / 125
telnet_service.cpp          0.4 %     1 / 280     7.7 %    2 / 26
Addition
This is how I build the coverage report:
# coverage
COV_DIR=./tmp.coverage
mkdir -p $COV_DIR
mkdir -p ./coverage.report
find $COV_DIR -mindepth 1 -maxdepth 1 -exec rm -fr {} \;
find . -name "*.gcda" -exec cp "{}" $COV_DIR/ \;
find . -name "*.gcno" -exec cp "{}" $COV_DIR/ \;
lcov --directory $COV_DIR --base-directory ./ --capture --output-file $COV_DIR/coverage.info
lcov --remove $COV_DIR/coverage.info "/usr*" -o $COV_DIR/coverage.info
lcov --remove $COV_DIR/coverage.info "*gtest*" -o $COV_DIR/coverage.info
lcov --remove $COV_DIR/coverage.info "**unittest*" -o $COV_DIR/coverage.info
genhtml -o coverage.report -t "my_project_coverage" --num-spaces 4 $COV_DIR/coverage.info
gnome-open coverage.report/src/index.html
Addition No. 2
For unit testing algorithms that must perform some action on a schedule, for example every minute or every hour, I pass a time-retrieval function to the algorithm as one of its parameters:
#include <ctime>
#include <functional>

using time_function_t = std::function<time_t(time_t*)>;

class service {
public:
    service(time_function_t = &time);  // defaults to the real time()
};
And in the unit test a different time function is used. For example, here is a time function that lets the test jump to the next minute by executing ++minute_passed:
#include <atomic>

std::atomic_int minute_passed{0};
time_t start_ts = time(nullptr);

time_function_t time_function = [&](time_t*) {
    auto current_ts = time(nullptr);
    auto diff_ts = current_ts - start_ts;
    // Each increment of minute_passed shifts the reported time forward by 60 s.
    return start_ts + 60 * minute_passed + diff_ts;
};

service test_srv(time_function);
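Hypothetical usage inside a test; check_schedule() is an invented stand-in for whatever method of service reacts to the current time:

// Simulate one minute passing: the next call to time_function returns a
// timestamp 60 seconds ahead, so per-minute logic fires without real waiting.
++minute_passed;
test_srv.check_schedule();  // assumed trigger; substitute the real entry point
// ...then assert on the observable effects of the per-minute action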
Source: https://habr.com/ru/post/304492/