This is a translation of a post by Russ Cox, one of the core developers of the Go language, in which, in the traditional New Year format, he makes resolutions and lays out plans to fulfill them.
It is resolution season, and I thought it would make sense to say a little about what I hope to work on this year as far as Go is concerned.
My goal every year is to help Go developers. I want to be sure that what the Go team does has a positive effect on all Go developers, because there are many ways to get this wrong: you can spend too much time cleaning up or optimizing code that does not need it; respond only to the most frequent or most recent complaints and requests; or focus too heavily on short-term improvements. That is why it is so important to step back and make sure we are doing what will bring the most benefit to the Go community.
In this article I will describe a few of the main tasks I plan to focus on this year. This is my personal list, not one for the entire Go team.
First, I want feedback on everything written here. Second, I want to show that I really do consider the problems described below important. I think people too often read a lack of activity from the Go team as a sign that everything is fine, when in reality we are simply busy with other, more important tasks.
We keep running into a problem with moving types from one package to another during large-scale codebase refactorings. We tried to solve it last year with general aliases, but it did not work out: we explained the change too poorly, and the change itself landed too late for the code to be ready for the Go 1.8 release. Learning from that experience, I gave a talk and wrote an article about the underlying problem, which led to a productive discussion on the Go issue tracker about possible solutions. It now seems that introducing the more limited type aliases is the right next step. I hope they will appear in Go 1.9. #18130
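As a rough sketch of how a type alias helps with such a move (assuming the alias syntax discussed in #18130; the package names and import path here are hypothetical):

// Package oldpkg once defined Widget; the definition now lives in newpkg.
package oldpkg

import "example.com/newpkg" // hypothetical import path

// Widget is an alias, not a new type: oldpkg.Widget and newpkg.Widget are
// the same type, so existing callers keep compiling during a gradual move.
type Widget = newpkg.Widget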
In February 2010 I wrote Go's support for downloading published packages (goinstall, which became go get). A lot has happened since then. In particular, the ecosystems of other languages have significantly raised expectations for package management systems, and the open-source community has largely settled on semantic versioning, which gives a basic vocabulary for talking about the compatibility of different versions. Go needs improvements here, and a group of people is already working on a solution. I want to make sure these ideas are properly integrated into the standard Go toolchain. I want package management to be another of Go's strengths.
Building with the go command has a number of shortcomings that it is time to fix. Below are three typical examples I intend to devote my work to.
Builds can be very slow, because the go tool does not aggressively cache build results. Many people do not realize that go install saves its results while go build does not, so they run go build over and over and the builds are predictably slow. The same applies to repeated go test runs without go test -i when dependencies have changed. Wherever possible, all builds should be incremental. #4719
Test results should also be cached: if a test's inputs have not changed, there is usually no need to rerun the test. This will greatly reduce the cost of running "all the tests" when little or nothing has changed. #11193
Working outside GOPATH should be supported almost as well as working inside it. In particular, it should be possible to git clone a repository, cd into it, and run go commands and have everything work fine. Package management only makes this more important: you should be able to work with different versions of a package (say, v1 and v2) without having to maintain separate GOPATHs for them. #17271
I think my talk and my article on codebase refactoring benefited from concrete examples taken from real projects. We have also come to the conclusion that additions to vet must address problems that occur in real programs. I would like this kind of analysis of actual practice to become the standard way we discuss and evaluate changes to Go.
Right now there is no generally accepted, representative corpus of code for this kind of analysis: everyone has to assemble their own first, and that is too much work. I would like to build a single, self-contained Git repository containing our official base corpus for analysis, one that the community can check against as well. A possible starting point is the top 100 Go repositories on GitHub by stars, by forks, or by both.
The Go distribution ships with a powerful tool, go vet, which flags common mistakes. The bar for these checks is high, so its messages are worth listening to. But the key thing is not to forget to run vet, and it would be even better not to have to remember at all. I think we could run vet in parallel with the final compilation and linking of the test binary that happens during go test, without any slowdown. If we can do that, and if we restrict the checks involved to a subset that is 100% accurate, we could make passing vet a precondition for running the test at all. Then developers would not need to remember to run go vet: they would run go test, and vet would occasionally report something important and spare them a pointless debugging session. #18084 #18085
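As a small illustration of the kind of mistake vet already catches, here is a Printf call whose format verb does not match its argument (a minimal, made-up example; vet's exact message wording varies between releases):

package main

import "fmt"

func main() {
	// vet flags this call: the %d verb expects an integer,
	// but the argument is a string.
	fmt.Printf("hello %d\n", "world")
}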
Part of the accepted practice for error reporting in Go is that functions include the relevant available context, including the operation that was attempted (the function name and its arguments). For example, this program:
err := os.Remove("/tmp/nonexist")
fmt.Println(err)
displays
remove /tmp/nonexist: no such file or directory
Not all Go code is as careful as os.Remove. A lot of code simply does

if err != nil { return err }

all the way up the call stack, discarding useful context that would have been worth reporting (like the remove /tmp/nonexist: prefix above). I would like to understand whether our expectations about including context are mistaken, and whether we can do anything to make it easier to write code that returns more informative errors.
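A minimal sketch, using only fmt.Errorf and hypothetical names, of what keeping that context looks like at a call site:

package main

import (
	"fmt"
	"io/ioutil"
)

// readSettings is a hypothetical helper: instead of returning err bare,
// it annotates the error with the operation and its argument.
func readSettings(path string) ([]byte, error) {
	data, err := ioutil.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("read settings %s: %v", path, err)
	}
	return data, nil
}

func main() {
	_, err := readSettings("/tmp/nonexist")
	fmt.Println(err)
}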
There are also various discussions in the community about interfaces for stripping that context back off to get at the underlying error. I want to understand when that is justified and whether we should work out some kind of official recommendation.
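One pattern that shows up in those discussions is an interface for recovering the underlying error. This sketch assumes a causer interface like the one used by some community error packages; it is not part of the standard library:

// causer is a hypothetical interface: an error that wraps another error
// exposes the wrapped error via Cause.
type causer interface {
	Cause() error
}

// rootCause follows the chain of wrapped errors as far as it goes.
func rootCause(err error) error {
	for err != nil {
		c, ok := err.(causer)
		if !ok {
			break
		}
		err = c.Cause()
	}
	return err
}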
In Go 1.7 we added the new context package, which holds request-scoped information (for example, timeouts, cancellation state, and authorization data). An individual context is immutable (like a string or an integer value): you can only create a new, derived context and pass it explicitly down the call stack or, less commonly, back up. The context is now carried through APIs such as database/sql and net/http, mainly so that they can stop processing a request when the caller no longer needs the result. Timeout information is a natural fit for a context, but database options, for example, are not, because they are unlikely to apply equally well to every database operation performed while handling a request. What about a time source, or a logger? Can they be stored in a context? I will try to work out and write down the criteria for what belongs in a context and what does not.
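As a small illustration of the cancellation use described above, here is a sketch that attaches a timeout to an outgoing HTTP request (the URL is a placeholder):

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Derive a context that is automatically canceled after one second.
	ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
	defer cancel()

	req, err := http.NewRequest("GET", "https://example.com/", nil)
	if err != nil {
		fmt.Println(err)
		return
	}
	// Attach the context; net/http abandons the request if ctx expires.
	resp, err := http.DefaultClient.Do(req.WithContext(ctx))
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}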
Unlike other languages, Go's memory model is deliberately modest and makes few promises to users; in fact, the document suggests not reading too much into it. At the same time it demands more of the implementation than other languages do: in particular, a race on an integer value is not license for arbitrary behavior of your program. There are also outright gaps: for example, the sync/atomic package is not mentioned at all. I think the developers of the main compilers and runtimes agree that these atomics should behave like seqcst atomics in C++ or volatiles in Java, but we still need to fit that carefully into the memory model, and into a long, long blog post. #5045 #7948 #9442
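For reference, a minimal sketch of the kind of sync/atomic publish/observe pattern whose guarantees the memory model does not yet spell out; the intent described above is that the store/load pair behaves like a sequentially consistent handoff:

package main

import (
	"fmt"
	"sync/atomic"
)

func main() {
	var msg string
	var ready int32

	go func() {
		msg = "hello"
		// The atomic store is intended to publish the earlier write to msg.
		atomic.StoreInt32(&ready, 1)
	}()

	// Spin until the writer publishes; a real program would use a channel
	// or a sync primitive, this only illustrates the atomics in question.
	for atomic.LoadInt32(&ready) == 0 {
	}
	fmt.Println(msg)
}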
The race detector is one of Go's most loved features, but not having races at all would be even better. I would really like a reasonable way to integrate reference immutability into Go, so that programmers can make clear, checked assertions about what can and cannot be written, ruling out certain races at compile time. Go already has one immutable type, string; it would be nice to retroactively define string as a named type (or alias) for an immutable []byte. I do not think this can be done this year, but I want to understand the possible solutions. Javari, Midori, Pony, and Rust have already identified interesting approaches, and there is a body of research on the topic.
In the long run, if we can statically rule out the possibility of races, we can eliminate the need for most of the memory model. That may be an impossible dream, but again, I would like to understand the potential solutions better.
The most heated debates between Go developers and programmers in other languages are about whether Go should support generics (and how long ago that should have happened). As far as I know, the Go team has never said that Go does not need generics. What we have said is that there are more important problems to address first. For example, I believe that better support for package management would have a much larger positive impact on most Go developers than adding generics would. But at the same time, we recognize that for some programs the lack of parametric polymorphism is a serious obstacle.
Personally, I would like to be able to write general-purpose functions that operate on channels, for example:
// Join merges the messages from a list of input channels into a single output channel.
func Join(inputs ...<-chan T) <-chan T

// Dup duplicates the messages from channel c onto both c1 and c2.
func Dup(c <-chan T) (c1, c2 <-chan T)
I would also like to be able to write higher-level data-processing abstractions in Go, along the lines of FlumeJava or LINQ, so that type mismatches are caught at compile time rather than at run time. There are also plenty of data structures and algorithms one could write with generics, but I find these higher-level tools more interesting.
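In the same hypothetical notation as Join and Dup above (T and U stand for type parameters; this is not valid Go today), the start of such a pipeline API might look like:

// Map applies f to each value received from in and sends the results on the returned channel.
func Map(in <-chan T, f func(T) U) <-chan U

// Filter forwards only the values from in for which keep returns true.
func Filter(in <-chan T, keep func(T) bool) <-chan T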
For several years we have been trying to find the right way to add generics to Go. The last few proposals concerned a design that would provide both general parametric polymorphism (like chan T) and a unification of string and []byte. If the latter is instead addressed by parameterization over immutability, as described in the previous section, that may simplify the requirements on a generics design.
When I first thought about generics for Go in 2008, the main examples to learn from were C#, Java, Haskell, and ML, but none of the approaches in those languages seemed like a perfect fit for Go. Today there are newer designs to learn from as well, for example in Dart, Midori, Rust, and Swift.
It has been a few years since we last took up this topic seriously and surveyed the options. It is probably time to look around again, especially in light of the mutability/immutability ideas above and the designs in newer languages. I do not think generics will appear in Go this year, but I would like to get a better handle on the question.
Source: https://habr.com/ru/post/320724/