
Go Benchmarks



Benchmarks are tests for performance. It is quite useful to have them in a project so you can compare their results from commit to commit. Go has a very good toolkit for writing and running benchmarks. In this article, I will show how to use the standard testing package to write them.

How to write a benchmark


It's really easy. Here is an example of the simplest possible benchmark:
package main

import (
    "fmt"
    "testing"
)

func BenchmarkSample(b *testing.B) {
    for i := 0; i < b.N; i++ {
        if x := fmt.Sprintf("%d", 42); x != "42" {
            b.Fatalf("Unexpected string: %s", x)
        }
    }
}

Save this code to a file named bench_test.go and run go test -bench=. bench_test.go.
You will see something like:
testing: warning: no tests to run
PASS
BenchmarkSample   10000000   206 ns/op
ok   command-line-arguments   2.274s

We see here that one iteration of the benchmark took 206 nanoseconds. It was really easy. But there are a couple more interesting things about benchmarks in Go.
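A note on b.N: the testing framework runs the benchmark function with increasing values of b.N until the benchmark runs long enough (one second by default) to get a stable measurement, which is where the 10000000 iterations above come from. If you want longer runs and more stable numbers, you can raise this limit with the -benchtime flag, for example:
go test -bench=. -benchtime=3s bench_test.go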

What can you test with benchmarks?


By default, go test -bench=. measures only the speed of your code, but you can add the -benchmem flag, which also measures memory consumption and the number of memory allocations. It will look like this:
PASS
BenchmarkSample   10000000   208 ns/op   32 B/op   2 allocs/op

Here we see the number of bytes allocated and the number of allocations per iteration. Useful information, in my opinion. You can also enable these columns for an individual benchmark by calling the b.ReportAllocs() method.
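For example, here is a variant of the benchmark above (the name BenchmarkSampleAllocs is mine) that always reports allocation statistics, even when -benchmem is not passed:
func BenchmarkSampleAllocs(b *testing.B) {
    b.ReportAllocs() // report B/op and allocs/op for this benchmark regardless of -benchmem
    for i := 0; i < b.N; i++ {
        if x := fmt.Sprintf("%d", 42); x != "42" {
            b.Fatalf("Unexpected string: %s", x)
        }
    }
}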
But that's not all: you can also tell the benchmark how many bytes each iteration processes using the b.SetBytes(n int64) method, and the throughput will be reported. For example:
func BenchmarkSample(b *testing.B) {
    b.SetBytes(2)
    for i := 0; i < b.N; i++ {
        if x := fmt.Sprintf("%d", 42); x != "42" {
            b.Fatalf("Unexpected string: %s", x)
        }
    }
}

Now the output will be:
PASS
BenchmarkSample   5000000   324 ns/op   6.17 MB/s   32 B/op   2 allocs/op
ok   command-line-arguments   1.999s

You can see a throughput column, which is 6.17 MB/s in my case: 2 bytes per iteration at 324 ns/op works out to roughly 6.17 MB/s.

Initial conditions for benchmarks


What if you need to do something before each iteration of the benchmark? Of course, you do not want to include the time of this operation in the benchmark results. I wrote a very simple Set data structure for testing:
type Set struct {
    set map[interface{}]struct{}
    mu  sync.Mutex
}

func (s *Set) Add(x interface{}) {
    s.mu.Lock()
    s.set[x] = struct{}{}
    s.mu.Unlock()
}

func (s *Set) Delete(x interface{}) {
    s.mu.Lock()
    delete(s.set, x)
    s.mu.Unlock()
}

and a benchmark for the Delete method:
func BenchmarkSetDelete(b *testing.B) {
    var testSet []string
    for i := 0; i < 1024; i++ {
        testSet = append(testSet, strconv.Itoa(i))
    }
    for i := 0; i < b.N; i++ {
        set := Set{set: make(map[interface{}]struct{})}
        for _, elem := range testSet {
            set.Add(elem)
        }
        for _, elem := range testSet {
            set.Delete(elem)
        }
    }
}

This code has two problems:

1. the time and memory spent building testSet before the loop are included in the benchmark results;
2. the time and memory spent creating the set and filling it with Add on every iteration are included as well, even though we only want to measure Delete.

For such cases, we have the b.ResetTimer(), b.StopTimer() and b.StartTimer() methods. Here is how they are used in the previous benchmark:
func BenchmarkSetDelete(b *testing.B) {
    var testSet []string
    for i := 0; i < 1024; i++ {
        testSet = append(testSet, strconv.Itoa(i))
    }
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        b.StopTimer()
        set := Set{set: make(map[interface{}]struct{})}
        for _, elem := range testSet {
            set.Add(elem)
        }
        b.StartTimer()
        for _, elem := range testSet {
            set.Delete(elem)
        }
    }
}

Now the initial setup is not taken into account in the results, and we will see only the cost of calling the Delete method.
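By the way, the -bench flag takes a regular expression, so while working on a single benchmark you do not have to run the whole suite; for example, this runs only BenchmarkSetDelete (assuming it lives in the files passed to go test):
go test -bench=SetDelete -benchmem bench_test.go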

Benchmark comparison


Of course, benchmarks are of little use if you cannot compare their results after changing the code. Here is some sample code that serializes a structure to JSON, along with a benchmark for it:
type testStruct struct {
    X int
    Y string
}

func (t *testStruct) ToJSON() ([]byte, error) {
    return json.Marshal(t)
}

func BenchmarkToJSON(b *testing.B) {
    tmp := &testStruct{X: 1, Y: "string"}
    js, err := tmp.ToJSON()
    if err != nil {
        b.Fatal(err)
    }
    b.SetBytes(int64(len(js)))
    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        if _, err := tmp.ToJSON(); err != nil {
            b.Fatal(err)
        }
    }
}

Suppose this code has already been committed to git, and now I want to try a cool trick and measure the performance gain (or drop). I slightly change the ToJSON method:
func (t *testStruct) ToJSON() ([]byte, error) {
    return []byte(`{"X": ` + strconv.Itoa(t.X) + `, "Y": "` + t.Y + `"}`), nil
}
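Since this version no longer goes through encoding/json for serialization, it is worth making sure it still produces equivalent output before trusting the benchmark numbers; the naive string concatenation, for instance, does not escape special characters in Y. A minimal sanity test might look like this (the test name and values are my own, and it still needs the encoding/json import for decoding):
func TestToJSON(t *testing.T) {
    in := &testStruct{X: 1, Y: "string"}
    js, err := in.ToJSON()
    if err != nil {
        t.Fatal(err)
    }
    // Decode the hand-built JSON and check that it round-trips to the same struct.
    var out testStruct
    if err := json.Unmarshal(js, &out); err != nil {
        t.Fatal(err)
    }
    if out != *in {
        t.Fatalf("got %+v, want %+v", out, *in)
    }
}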

It's time to run the benchmarks, this time saving their output to files:
go test -bench=. -benchmem bench_test.go > new.txt
git stash
go test -bench=. -benchmem bench_test.go > old.txt

We can compare these results using the benchcmp utility. You can install it by running go get golang.org/x/tools/cmd/benchcmp. Here are the comparison results:
# benchcmp old.txt new.txt
benchmark         old ns/op    new ns/op    delta
BenchmarkToJSON   1579         495          -68.65%

benchmark         old MB/s     new MB/s     speedup
BenchmarkToJSON   12.66        46.41        3.67x

benchmark         old allocs   new allocs   delta
BenchmarkToJSON   2            2            +0.00%

benchmark         old bytes    new bytes    delta
BenchmarkToJSON   184          48           -73.91%

It is very useful to have such tables of changes; besides, they can add weight to your pull requests in open-source projects.

Recording profiles


You can also record CPU and memory profiles during benchmarking:
go test -bench=. -benchmem -cpuprofile=cpu.out -memprofile=mem.out bench_test.go

You can read about profile analysis in a great post on the official Go blog.
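For example, to take a quick look at the CPU profile recorded above, you can open it with pprof and ask for the hottest functions (with older Go versions you may also need to pass the compiled test binary, which go test keeps in the current directory when profiling is enabled):
go tool pprof cpu.out
(pprof) top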

Conclusion


Benchmarks are a great tool for a programmer, and Go makes it very easy to write benchmarks and analyze their results. New benchmarks help you find performance bottlenecks, suspicious code (efficient code is usually simpler and easier to read), or places where the wrong tool is used for the task.

Existing benchmarks let you be more confident about your changes, and their results can be an argument in your favor during review. Writing benchmarks brings great benefits to both the programmer and the program, and I advise you to write more of them. It's fun!

Source: https://habr.com/ru/post/268585/

