
An API framework in Go

While exploring Go, I decided to build an application framework that would be convenient for me to work with in the future. The result is, in my opinion, a good boilerplate, which I decided to share, along with a discussion of the issues that came up while creating it.






In principle, the design of the Go language hints that it is not meant for large-scale applications (I am referring to the lack of generics and the rather weak error-handling mechanism). But we all know that applications usually do not shrink over time; more often, quite the opposite. So it is better to start right away with a framework onto which new features can be added without hurting the maintainability of the code.



I tried to keep the amount of code in the article small; instead, I added links to specific lines of code on GitHub, in the hope that this makes it easier to see the whole picture.



First, I sketched out a plan of what the application should contain. Since I will cover each item separately in this article, the main points of that list also serve as a table of contents.





Package manager



After reading the descriptions of various implementations, I chose govendor, and so far I have been pleased with the choice. The reason is simple: it installs dependencies inside the application directory and stores information about packages and their versions.



Information about packages and their versions is stored in a single file, vendor.json. This approach has a downside, too: if you add a package together with its dependencies, the file will contain information both about the package and about its dependencies. The file grows quickly, and it becomes hard to tell which dependencies are direct and which are transitive.



In PHP's Composer or in npm, the direct dependencies are described in one file, while both direct and transitive dependencies and their versions are automatically recorded in a lock file. That approach is more convenient, in my opinion. But for now, govendor's implementation was enough for me.



Framework



I do not need much from a framework: a convenient router and request validation. I found all of that in the popular Gin, so I settled on it.



Dependency Injection



With DI, I had to suffer a little. At first I chose Dig, and initially everything was fine: you describe the services, Dig builds the dependency graph — convenient. But then it turned out that services cannot be overridden, for example for testing. So in the end I settled on the simple service container sarulabs/di.



I had to fork it, because out of the box it allows adding services but forbids overriding them. When writing automated tests, I find it more convenient to initialize the container the same way as in the application and then override some of the services with stubs. The fork adds a method that overrides a service definition.



But in the end, both with Dig and with the service container, I had to move the tests into a separate package. Otherwise tests could be run per package (go test model/service) but not for the whole application at once (go test ./...), because of the cyclic dependencies that arise.



Responses in JSON or XML format according to the request headers



I did not find this in Gin, so I simply added a method to the base controller that forms the response depending on the request header.



```go
func (c BaseController) response(context *gin.Context, obj interface{}, code int) {
	switch context.GetHeader("Accept") {
	case "application/xml":
		context.XML(code, obj)
	default:
		context.JSON(code, obj)
	}
}
```


ORM



With the ORM, I did not agonize over the choice for long, although there was plenty to choose from. I liked GORM's feature set, and it was one of the most popular options at the time. It supports the most widely used DBMSes — at least PostgreSQL and MySQL are definitely there. It also has methods for managing the database schema that can be used when writing migrations.



Migrations



For migrations, I settled on the gorm-goose package. I install it as a separate global package and run migrations with it. At first, this implementation confused me, since the database connection has to be described in a separate db/dbconf.yml file. But it turned out that the connection string in it can be written so that the value is taken from an environment variable.



```yaml
development:
    driver: postgres
    open: $DB_URL
```


And that is quite convenient. At least with docker-compose, I did not have to duplicate the connection string.



Gorm-goose also supports rollbacks of migrations, which I find very useful.



Basic CRUD repository



I prefer to put everything that accesses resources into a separate repository layer. In my opinion, this keeps the business-logic code cleaner: it only knows that it works with data it gets from the repository, and what happens inside the repository does not concern it. The repository may work with a relational database, a key-value store, a disk, or even the API of another service — the business-logic code stays the same in all of these cases.



The CRUD repository implements the following interface.



```go
type CrudRepositoryInterface interface {
	BaseRepositoryInterface
	GetModel() entity.InterfaceEntity
	Find(id uint) (entity.InterfaceEntity, error)
	List(parameters ListParametersInterface) (entity.InterfaceEntity, error)
	Create(item entity.InterfaceEntity) entity.InterfaceEntity
	Update(item entity.InterfaceEntity) entity.InterfaceEntity
	Delete(id uint) error
}
```


That is, it implements the CRUD operations Create(), Find(), List(), Update(), Delete(), plus the GetModel() method.



About GetModel(). There is a base repository, CrudRepository, which implements the basic CRUD operations. Repositories that embed it only need to indicate which model they work with; to do that, their GetModel() method must return a GORM model. I then had to process the result of GetModel() with reflection in the CRUD methods.



For example



```go
func (c CrudRepository) Find(id uint) (entity.InterfaceEntity, error) {
	item := reflect.New(reflect.TypeOf(c.GetModel()).Elem()).Interface()
	err := c.db.First(item, id).Error
	return item, err
}
```


That is, in effect, I had to give up static typing in favor of dynamic typing here. At moments like this, the lack of generics in the language is felt especially keenly.



To let repositories that work with specific models implement their own list-filtering rules in the List() method, I first built a kind of homemade late binding, so that the method responsible for constructing the query was called from List() and could be overridden in a concrete repository. It is hard to abandon the thinking patterns formed while working with other languages. But after taking a fresh look and evaluating the "elegance" of that path, I reworked it into an approach closer to Go: CrudRepository simply declares a query builder through an interface, which List() then uses.



```go
listQueryBuilder ListQueryBuilderInterface
```


It turns out rather amusing: the language's lack of late binding, which at first seems like a drawback, pushes you toward a clearer separation of the code.



Basic CRUD service



There is nothing interesting here, since the boilerplate contains no business logic: the CRUD method calls are simply proxied to the repository.



Business logic should be implemented in the services layer.



Basic CRUD controller



The controller implements the CRUD methods. They process the parameters from the request, pass control to the corresponding service method, and form the response to the client based on the service's reply.



With the controller, I had the same story as with list filtering in the repository. In the end, I reworked the homemade late-binding implementation and added a hydrator, which builds a structure with list-filtering parameters from the query parameters.



The hydrator that ships with the CRUD controller handles only the pagination parameters. Specific controllers that embed the CRUD controller can override the hydrator.



Validation of requests



Validation is performed by means of Gin. For example, when adding a record (the Create() method), it is enough to tag the fields of the entity structure:



```go
Name string `binding:"required"`
```


The framework's ShouldBindJSON() method takes care of checking the request parameters against the requirements described in the tags.



Configs and environment variables



I really liked Viper, especially in conjunction with Cobra.



Config reading is described in main.go. Baseline parameters that contain no secrets are described in the base.env file. You can override them in a .env file, which is added to .gitignore; secret values for the environment go into .env.



Environment variables have a higher priority.



Console commands



To describe the console commands, I chose Cobra. It works well together with Viper: we can describe a command flag



```go
serverCmd.PersistentFlags().StringVar(&serverPort, "port", defaultServerPort, "Server port")
```


and associate an environment variable with the value of that flag:



```go
viper.BindPFlag("SERVER_PORT", serverCmd.PersistentFlags().Lookup("port"))
```


In essence, the entire application built on this boilerplate is a console application; the web server is started by one of the console commands, server.



```shell
gin -i run server
```


Logging



For logging, I chose the logrus package, because it has everything I usually need: logging levels, configurable output destinations, and hooks — for example, for sending logs to an alerting system.



Integrating the logger with an alerting system



I chose Sentry, since everything turned out quite simple thanks to the ready-made logrus integration, logrus_sentry. In the config I added parameters for the Sentry URL (SENTRY_DSN) and the Sentry send timeout (SENTRY_TIMEOUT). It turned out that the default timeout is short — 300 ms, if I am not mistaken — and many messages were not being delivered.



Setting up an alert for errors



Panic handling is done separately for the web server and for the console commands.



Unit Tests with Service Override via DI



As noted above, I had to move the unit tests into a separate package. Since the chosen service-container library did not allow overriding services, the fork adds a method that overrides service definitions. Thanks to this, a unit test can use the same service descriptions as the application:



```go
dic.InitBuilder()
```


and then override the definitions of only some of the services:



```go
dic.Builder.Set(di.Def{
	Name: dic.UserRepository,
	Build: func(ctn di.Container) (interface{}, error) {
		return NewUserRepositoryMock(), nil
	},
})
```


Then you can build the container and use the necessary services in the test:



```go
dic.Container = dic.Builder.Build()
userService := dic.Container.Get(dic.UserService).(service.UserServiceInterface)
```


Thus, we test the userService, which will use the provided stub instead of the real repository.



Tests and code coverage

I was completely satisfied with the standard go test tool.



You can run a single test file:



```shell
go test test/unit/user_service_test.go -v
```


You can run all the tests at once:



```shell
go test ./... -v
```


You can build a coverage profile and calculate the coverage percentage:



```shell
go test ./... -v -coverpkg=./... -coverprofile=coverage.out
```


And view the coverage map in the browser:



```shell
go tool cover -html=coverage.out
```


Swagger



For Gin, there is the gin-swagger project, which can be used both to generate a Swagger specification and to serve documentation based on it. But, as it turned out, to generate the specification for particular operations, you have to put comments on particular controller functions. That was inconvenient for me, since I did not want to duplicate the CRUD operation code in every controller — instead, I simply embed the CRUD controller into concrete controllers, as described above. Nor did I want to create stub functions just for this.



So I ended up generating the specification with goswagger, because it allows operations to be described without tying them to specific functions.



```shell
swagger generate spec -o doc/swagger.yml
```


By the way, with goswagger one could even go the other way and generate the web-server code from a Swagger specification. But with that approach I ran into difficulties using the ORM, and eventually gave the idea up.



The documentation is served with gin-swagger; for that, the previously generated specification file is passed to it.



Docker compose



The boilerplate includes descriptions of two containers: one for the code and one for the database. When the code container starts, it waits for the database container to start up completely, and on each start it applies migrations if necessary. The database connection parameters for migrations are described, as mentioned above, in dbconf.yml, where an environment variable can be used to pass the connection settings.



Thanks for your attention. Along the way, I had to adapt to the peculiarities of the language. I would be interested to hear from colleagues who have spent more time with Go — surely some things could have been done more elegantly, so I will be glad to receive constructive criticism. Link to the boilerplate: https://github.com/zubroide/go-api-boilerplate




Source: https://habr.com/ru/post/455302/
