
Good day. A week ago, for the third time, I used BenchmarkDotNet, a library for creating and running .NET benchmarks. The library turned out to be quite convenient, but it has received almost no coverage on Habr, which I will now correct.
By a benchmark I mean measuring the execution time of a method (or methods). To start, imagine writing a benchmark by hand. We create a test method, select the Release build, create a “measuring” method, collect garbage in it, start a Stopwatch at the beginning and stop it at the end, run a warm-up, then run the test method. If the test method runs faster than one Stopwatch “tick”, we run it many times (say, a million) and divide the total time by a million to get the result (not forgetting to subtract the time of an “idle” loop of a million iterations from the total time).
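To make the amount of bookkeeping concrete, here is a rough sketch of such a hand-rolled benchmark (my own simplified illustration, not the library's code; a real harness also has to stop the compiler from optimizing the loops and unused results away):

using System;
using System.Diagnostics;
using System.Linq;

public static class ManualBenchmark
{
    public static void Main()
    {
        const int n = 1000000;

        // warm-up: let the JIT compile everything before measuring
        for (int i = 0; i < 1000; i++) { TestMethod(); }

        // collect garbage so a GC pause does not distort the measurement
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        // time an "idle" loop so the loop overhead can be subtracted later
        var idle = Stopwatch.StartNew();
        for (int i = 0; i < n; i++) { }
        idle.Stop();

        // time the test method itself
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++) { TestMethod(); }
        sw.Stop();

        double nsPerCall = (sw.Elapsed - idle.Elapsed).TotalMilliseconds * 1000000.0 / n;
        Console.WriteLine("{0:0.00} ns per call", nsPerCall);
    }

    private static int TestMethod()
    {
        return Enumerable.Range(1, 100).Sum();
    }
}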
As you can see, there are already a lot of details, and while you can still live with those, things get really bad once you need measurements across different architectures (x86/x64) and different compilers (one of the library's authors, Andrey Akinshin (DreamWalker), writes in detail about creating benchmarks and about micro-optimization subtleties). As you might guess, BenchmarkDotNet takes care of these details for you.
Installation
A NuGet package with no dependencies; at the time of writing, the current version is v0.9.1.
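Installation is the standard NuGet routine, for example from the Package Manager Console:

PM> Install-Package BenchmarkDotNet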
Simplest example
First of all, I gave the library a quick sanity check.
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public class TheEasiestBenchmark
{
    [Benchmark(Description = "Summ100")]
    public int Test100()
    {
        return Enumerable.Range(1, 100).Sum();
    }

    [Benchmark(Description = "Summ200")]
    public int Test200()
    {
        return Enumerable.Range(1, 200).Sum();
    }
}

[TestClass]
public class UnitTest1
{
    [TestMethod]
    public void TestMethod1()
    {
        BenchmarkRunner.Run<TheEasiestBenchmark>();
    }
}
As you can see, for a simple start it is enough to put the [Benchmark(Description = "TestName")] attribute on the methods under test and run the code from a console application or a unit test. The requirements for a method are small: it must be public (otherwise there will be no measurements) and take no arguments (otherwise we get an exception). After the benchmark completes, a detailed report appears in the console, with a summary table at the end.
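The console variant is just as short; a minimal sketch of an entry point (the Program class here is my own illustration, namespaces as in the v0.9-era API):

using BenchmarkDotNet.Running;

internal class Program
{
    private static void Main()
    {
        BenchmarkRunner.Run<TheEasiestBenchmark>();
    }
}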
Method | Median | StdDev |
---|---|---|
Summ100 | 1.0282 us | 0.1071 us |
Summ200 | 1.9573 us | 0.0648 us |
By default the table shows the method name, the median, and the standard deviation. If you do not set the Description property in the [Benchmark] attribute, the method name is displayed in the Method column. By the way, the rows of the table are sorted by the values of the Description property (or the method names). It is also worth noting that an uncaught exception in a method stops the measurement (of that particular method).
To measure the performance of a method that takes arguments, you can create an additional “measuring” method:
private double SomeBusinessLogic(int arg) { ... }

[Benchmark(Description = "SomeBusinessLogic")]
public double MeasurementMethod()
{
    // return the result so the call cannot be optimized away
    return SomeBusinessLogic(42);
}
Benchmark Settings
Benchmarks are configured with the Config attribute. The possibilities are considerable: environment / platform / JIT settings, the number of launches, output settings, loggers, analyzers... Configuration examples can be found on the library's page on github.
The simplest option is to put the Config attribute on the class containing the benchmark methods and pass a settings string to its constructor. For example, if you want to see the maximum run time in the summary table, use the following code:
[Config("columns=Max")] public class TheEasiestBenchmark { [Benchmark(Description = "Summ100")] public int Test100() { return Enumerable.Range(1, 100).Sum(); } }
Method | Median | StdDev | Max |
---|---|---|---|
Summ100 | 1.0069 us | 0.0124 us | 1.0441 us |
Another option is to create a class derived from ManualConfig and pass its type to the constructor of the Config attribute.
[Config(typeof(HabrExampleConfig))]
public class TheEasiestBenchmark
{
    private class HabrExampleConfig : ManualConfig
    {
        public HabrExampleConfig()
        {
            Add(StatisticColumn.Max);
        }
    }

    // ...benchmark methods as before
}
Method | Median | StdDev | Max |
---|---|---|---|
Summ100 | 1.0114 us | 0.0041 us | 1.0201 us |
On the one hand there is more code; on the other hand, autocompletion works while you write the class: it is easier to configure and harder to make a mistake.
A little bit about the settings
There are a lot of settings, and they are grouped by type.
The first type of setting is the Job. As the documentation says, it configures the environment: the target platform (x64/x86), the JIT, the runtime. In addition, if you are not satisfied with the benchmark's running time (the library tries to pick an optimal trade-off between accuracy and run time), you can adjust the number of warm-up and target launches, or simply specify the desired run time. You also need to be careful with the environment settings: if the class lives in a project targeting .NET 4.6 and the config is set to .NET 4.5, you get an error at launch (which is, in general, logical).
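For example, a config that pins the platform and cuts down the number of launches might look roughly like this (property and namespace names as in the v0.9 API used throughout this article; later versions changed them):

using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Jobs;

internal class QuickX64Config : ManualConfig
{
    public QuickX64Config()
    {
        // x64, a single warm-up and a single target iteration:
        // fast to run, but noticeably less accurate
        Add(new Job
        {
            Platform = Platform.X64,
            WarmupCount = 1,
            TargetCount = 1
        });
    }
}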
The next type of setting, already familiar to us: Columns. These configure the information displayed in the summary table. The full list of available columns is in the Columns -> default section of the documentation. The main ones are PropertyColumn.* (for example, PropertyColumn.Runtime) and StatisticColumn.* (for example, StatisticColumn.Median).
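As a sketch, a config that combines both kinds of columns (the columns themselves are the ones named above; namespaces are from the v0.9-era API):

using BenchmarkDotNet.Columns;
using BenchmarkDotNet.Configs;

internal class ExtraColumnsConfig : ManualConfig
{
    public ExtraColumnsConfig()
    {
        Add(PropertyColumn.Runtime);  // a property of the job
        Add(StatisticColumn.Median);  // a statistic over the runs
        Add(StatisticColumn.Max);
    }
}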
Another settings item: Exporters. These specify which additional result files to generate. Possible outputs: html, txt, csv, R plots, markdown markup for SO and github. So, to get R plots and a csv document, we add Add(RPlotExporter.Default, CsvExporter.Default); in the config constructor.
A class with all these settings might look like this:
internal class HabrExampleConfig : ManualConfig
{
    public HabrExampleConfig()
    {
        Add(new Job { IterationTime = 1, WarmupCount = 1, TargetCount = 1 });
        Add(StatisticColumn.Max);
        Add(RPlotExporter.Default, CsvExporter.Default);
    }
}

[Config(typeof(HabrExampleConfig))]
public class TheEasiestBenchmark { ... }
Another configuration method, creating your own configuration attribute, gives almost the same result.
[MyConfigSource]
public class TheEasiestBenchmark
{
    private class MyConfigSourceAttribute : Attribute, IConfigSource
    {
        public IConfig Config { get; private set; }

        public MyConfigSourceAttribute()
        {
            Config = ManualConfig.CreateEmpty()
                .With(StatisticColumn.Max)
                .With(new Job { Platform = Platform.X64 })
                .With(RPlotExporter.Default);
        }
    }

    [Benchmark(Description = "Summ100")]
    public int Test100()
    {
        return Enumerable.Range(1, 100).Sum();
    }
}
Note that all three configuration methods only add to the basic configuration. So the three basic columns Method / Median / StdDev will always be printed to the console.
If you want to limit the output (and the generation of result files), you can use the UnionRule property.
[Config(typeof(HabrExampleConfig))]
public class TheEasiestBenchmark
{
    private class HabrExampleConfig : ManualConfig
    {
        public HabrExampleConfig()
        {
            Add(PropertyColumn.Method, StatisticColumn.Max);
            UnionRule = ConfigUnionRule.AlwaysUseLocal;
        }
    }

    // ...benchmark methods as before
}
Method | Max |
---|---|
Summ100 | 1.0308 us |
This approach is useful for those who want to run benchmarks as part of a CI process, where the additionally generated result files are likely to be redundant.
Additional features
Parameterized Tests
If you want to experimentally check the complexity of an algorithm, or just get an idea of a method's speed with different arguments, you can use the Params attribute (a sketch of the complexity-checking variant follows the results table below).
For example, we can measure the speed of counting occurrences of the character 'a' in different strings:
[Params("habrahabr", "geektimes", "toster", "megamozg")] public string arg; [Benchmark(Description = "Test")] public int CountLetterAIncludings() { int res = 0; for (int i = 0; i < arg.Length; i++) { if (arg[i] == 'a'){res++;} } return res; }
Method | Median | StdDev | arg |
---|---|---|---|
Test | 112.4087 ns | 1.1556 ns | geektimes |
Test | 113.0916 ns | 1.4137 ns | habrahabr |
Test | 104.3207 ns | 4.2854 ns | megamozg |
Test | 80.3665 ns | 0.4564 ns | toster |
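And here is the complexity-checking scenario promised above: numeric parameters let you watch how the time grows with the input size. The benchmark below is my own illustration; [Setup] marks the pre-run hook in library versions of that time (newer versions call it [GlobalSetup]):

using System;
using BenchmarkDotNet.Attributes;

public class SortComplexityBenchmark
{
    [Params(1000, 10000, 100000)]
    public int N;

    private int[] data;

    [Setup]
    public void SetupData()
    {
        // a fixed seed keeps runs comparable across parameter values
        var rnd = new Random(42);
        data = new int[N];
        for (int i = 0; i < N; i++) { data[i] = rnd.Next(); }
    }

    [Benchmark(Description = "ArraySort")]
    public int[] SortCopy()
    {
        // sort a copy so every invocation starts from unsorted data
        var copy = (int[])data.Clone();
        Array.Sort(copy);
        return copy;
    }
}

Comparing the medians for N = 1000 / 10000 / 100000 should show growth close to O(N log N), which is exactly the kind of sanity check the Params attribute makes cheap.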
Relative execution time
Suppose we want to know not only the absolute running times of the test methods but also relative ones. To do this, we pick a method whose time will be considered the reference and set Baseline = true in its Benchmark attribute.
[Benchmark(Description = "Summ100")] public int Test100() { return Enumerable.Range(1, 100).Sum(); } [Benchmark(Description = "Summ200", Baseline = true)] public int Test200() { return Enumerable.Range(1, 200).Sum(); }
Method | Median | StdDev | Scaled |
---|---|---|---|
Summ100 | 1.0113 us | 0.0055 us | 0.52 |
Summ200 | 1.9516 us | 0.0120 us | 1.00 |
Processing results
If you want or need to do something custom with the statistics, or you want to write your own Exporter, the Summary class is at your service. Run the benchmark in a unit test:
Summary result = BenchmarkRunner.Run<TheEasiestBenchmark>();
and use all the information about each benchmark, completely free of charge and without SMS.
result.Benchmarks[index] contains information about the Job and the parameters; result.Reports[index] stores data about each run's time and its type (warm-up / target).
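A minimal sketch of walking over these results (only the two indexable collections described above are assumed; the exact members of the report type differ between library versions):

Summary result = BenchmarkRunner.Run<TheEasiestBenchmark>();

// assumption: Benchmarks and Reports are parallel indexable collections,
// as the text above describes
for (int i = 0; i < result.Benchmarks.Length; i++)
{
    var benchmark = result.Benchmarks[i]; // job and parameter information
    var report = result.Reports[i];       // run times and run types

    Console.WriteLine(benchmark.Job);     // which environment this was measured in
    // from here, aggregate the run times however your statistics require
}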
In addition, as I wrote above, the library can save results in html, csv, and txt formats, and also supports markdown markup and PNG plots generated with R. All the benchmark results in this article were copied from the generated html files.
Summing up, BenchmarkDotNet takes the routine work of writing benchmarks on itself and provides decent result-formatting capabilities with minimal effort. So if you want to quickly measure a method's speed, get accurate results for methods with short execution times, or get a beautiful chart for management - you already know what to do. :)