From the author. For the fifth December in a row, members of the Go community are sharing their experience on the GopherAcademy blog in a special pre-Christmas series of posts. This year I decided to contribute an article based on the first part of the microservices workshop I run with Igor Dolzhikov. We have already covered a small part of this guide on Habr earlier.
If you have ever tried Go, you know that writing services in Go is very simple: a few lines of code are enough to start an HTTP service. But what do we need to add if we want to prepare such an application for production? Let's walk through this using the example of a service that is ready to be deployed to Kubernetes.
All the steps from this article are available in a single tag, or you can follow the examples commit by commit.
So, we have a very simple application:
```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/home", func(w http.ResponseWriter, _ *http.Request) {
		fmt.Fprint(w, "Hello! Your request was processed.")
	})

	http.ListenAndServe(":8000", nil)
}
```
If we want to try running it, `go run main.go` will be enough. With curl, we can check how the service works: `curl -i http://127.0.0.1:8000/home`. But when we launch this application, we see no information in the terminal about its state.
First of all, let's add logging so that we can understand what is happening with the service and so that errors and other important events get recorded. In this example, we will use the simplest logger from the standard Go library, but for a real production service you may be interested in more advanced solutions, such as glog or logrus.
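To give an idea of what that might look like, here is a minimal logrus sketch; it is not part of the service we build below, and the field names are purely illustrative:

```go
package main

import (
	log "github.com/sirupsen/logrus"
)

func main() {
	// JSON output plays well with log aggregation systems.
	log.SetFormatter(&log.JSONFormatter{})

	// Structured fields make it easier to filter logs later;
	// the field names here are arbitrary examples.
	log.WithFields(log.Fields{
		"service": "advent",
		"event":   "starting",
	}).Info("Starting the service...")
}
```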
We may be interested in three situations: when the service starts, when it is ready to process requests, and when `http.ListenAndServe` returns an error. The result is something like this:
```go
func main() {
	log.Print("Starting the service...")

	http.HandleFunc("/home", func(w http.ResponseWriter, _ *http.Request) {
		fmt.Fprint(w, "Hello! Your request was processed.")
	})

	log.Print("The service is ready to listen and serve.")
	log.Fatal(http.ListenAndServe(":8000", nil))
}
```
Already better!
For this application, we will most likely want a router to simplify handling different URIs, HTTP methods, and other rules. The standard Go library has no router, so let's try gorilla/mux, which is fully compatible with the standard `net/http` library.
If your service needs a noticeable number of routing rules, it makes sense to move everything related to routing into a separate package. Let's move the initialization and configuration of the routing rules, as well as the handler functions, into a handlers package (you can see the complete changes here).
Add a `Router` function, which returns a configured router, and a `home` function, which handles the `/home` path. I prefer to split such functions into separate files:
```go
package handlers

import (
	"github.com/gorilla/mux"
)

// Router registers the necessary routes and returns an instance of a router.
func Router() *mux.Router {
	r := mux.NewRouter()
	r.HandleFunc("/home", home).Methods("GET")
	return r
}
```
```go
package handlers

import (
	"fmt"
	"net/http"
)

// home is a simple HTTP handler function which writes a response.
func home(w http.ResponseWriter, _ *http.Request) {
	fmt.Fprint(w, "Hello! Your request was processed.")
}
```
In addition, we need a few small changes in the `main.go` file:
```go
package main

import (
	"log"
	"net/http"

	"github.com/rumyantseva/advent-2017/handlers"
)

// How to try it: go run main.go
func main() {
	log.Print("Starting the service...")

	router := handlers.Router()

	log.Print("The service is ready to listen and serve.")
	log.Fatal(http.ListenAndServe(":8000", router))
}
```
It's time to add a few tests. For this, you can use the standard `httptest` package. For the `Router` function, you can write something like this:
```go
package handlers

import (
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestRouter(t *testing.T) {
	r := Router()
	ts := httptest.NewServer(r)
	defer ts.Close()

	res, err := http.Get(ts.URL + "/home")
	if err != nil {
		t.Fatal(err)
	}
	if res.StatusCode != http.StatusOK {
		t.Errorf("Status code for /home is wrong. Have: %d, want: %d.", res.StatusCode, http.StatusOK)
	}

	res, err = http.Post(ts.URL+"/home", "text/plain", nil)
	if err != nil {
		t.Fatal(err)
	}
	if res.StatusCode != http.StatusMethodNotAllowed {
		t.Errorf("Status code for /home is wrong. Have: %d, want: %d.", res.StatusCode, http.StatusMethodNotAllowed)
	}

	res, err = http.Get(ts.URL + "/not-exists")
	if err != nil {
		t.Fatal(err)
	}
	if res.StatusCode != http.StatusNotFound {
		t.Errorf("Status code for /not-exists is wrong. Have: %d, want: %d.", res.StatusCode, http.StatusNotFound)
	}
}
```
Here we check that a `GET` request to `/home` returns `200`, that a `POST` to the same path returns `405`, and finally that a non-existent path returns `404`. Strictly speaking, this test may be somewhat redundant, since the router's behavior is already covered by the tests inside `gorilla/mux`, so you could check even fewer cases here.
For the `home` function, it makes sense to check not only the status code, but also the response body:
```go
package handlers

import (
	"io/ioutil"
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestHome(t *testing.T) {
	w := httptest.NewRecorder()
	home(w, nil)

	resp := w.Result()
	if have, want := resp.StatusCode, http.StatusOK; have != want {
		t.Errorf("Status code is wrong. Have: %d, want: %d.", have, want)
	}

	greeting, err := ioutil.ReadAll(resp.Body)
	resp.Body.Close()
	if err != nil {
		t.Fatal(err)
	}
	if have, want := string(greeting), "Hello! Your request was processed."; have != want {
		t.Errorf("The greeting is wrong. Have: %s, want: %s.", have, want)
	}
}
```
Run `go test` and make sure the tests pass:
```
$ go test -v ./...
?       github.com/rumyantseva/advent-2017  [no test files]
=== RUN   TestRouter
--- PASS: TestRouter (0.00s)
=== RUN   TestHome
--- PASS: TestHome (0.00s)
PASS
ok      github.com/rumyantseva/advent-2017/handlers 0.018s
```
The next important step is the ability to configure the service. Right now it always listens on port `8000`, and being able to configure this value can be useful. The Twelve-Factor App manifesto, a very interesting approach to writing services, recommends storing configuration in the environment. So let's set the port through an environment variable:
```go
package main

import (
	"log"
	"net/http"
	"os"

	"github.com/rumyantseva/advent-2017/handlers"
)

// How to try it: PORT=8000 go run main.go
func main() {
	log.Print("Starting the service...")

	port := os.Getenv("PORT")
	if port == "" {
		log.Fatal("Port is not set.")
	}

	r := handlers.Router()

	log.Print("The service is ready to listen and serve.")
	log.Fatal(http.ListenAndServe(":"+port, r))
}
```
In this example, if the port is not set, the application will immediately terminate with an error. It makes no sense to try to continue working if the configuration is specified incorrectly.
A few days ago, an article about the `make` utility was published on the GopherAcademy blog; it can be very handy if you have to deal with repetitive actions. Let's see how we can use it in our project. Right now we have two repetitive actions: running the tests, and compiling and running the service. Let's add these actions to a Makefile. Instead of a plain `go run`, we will now use `go build` and then run the compiled binary; this option is better if we are preparing the application for production:
```makefile
APP?=advent
PORT?=8000

clean:
	rm -f ${APP}

build: clean
	go build -o ${APP}

run: build
	PORT=${PORT} ./${APP}

test:
	go test -v -race ./...
```
In this example, we moved the name of the binary into a separate `APP` variable so as not to repeat it several times. In addition, if we want to run the application in the manner described, we must first remove the old binary (if one exists); that is why `clean` is called first when you run `make build`.
The next practice we add to the service is versioning. Sometimes it is useful to know exactly which build and even which commit is running in production, and when the binary was built.
To store this information, let's add a new package, `version`:
```go
package version

var (
	// BuildTime is a time label of the moment when the binary was built
	BuildTime = "unset"
	// Commit is a last commit hash at the moment when the binary was built
	Commit = "unset"
	// Release is a semantic version of current build
	Release = "unset"
)
```
We can log these variables when the application starts:
```go
...

func main() {
	log.Printf(
		"Starting the service...\ncommit: %s, build time: %s, release: %s",
		version.Commit, version.BuildTime, version.Release,
	)
	...
}
```
We can also add them to the `home` handler (don't forget to fix the tests!):
```go
package handlers

import (
	"encoding/json"
	"log"
	"net/http"

	"github.com/rumyantseva/advent-2017/version"
)

// home is a simple HTTP handler function which writes a response.
func home(w http.ResponseWriter, _ *http.Request) {
	info := struct {
		BuildTime string `json:"buildTime"`
		Commit    string `json:"commit"`
		Release   string `json:"release"`
	}{
		version.BuildTime, version.Commit, version.Release,
	}

	body, err := json.Marshal(info)
	if err != nil {
		log.Printf("Could not encode info data: %v", err)
		http.Error(w, http.StatusText(http.StatusServiceUnavailable), http.StatusServiceUnavailable)
		return
	}

	w.Header().Set("Content-Type", "application/json")
	w.Write(body)
}
```
We will use the linker to set the `BuildTime`, `Commit`, and `Release` variables at compile time. Add new variables to the `Makefile`:
Makefile
```makefile
RELEASE?=0.0.1
COMMIT?=$(shell git rev-parse --short HEAD)
BUILD_TIME?=$(shell date -u '+%Y-%m-%d_%H:%M:%S')
```
Here, `COMMIT` and `BUILD_TIME` are defined via the specified shell commands, and for `RELEASE` we can use, for example, semantic versioning or simply incrementing build numbers.
Now let's rewrite the `build` target so that it uses the values of these variables:
Makefile
```makefile
build: clean
	go build \
		-ldflags "-s -w -X ${PROJECT}/version.Release=${RELEASE} \
		-X ${PROJECT}/version.Commit=${COMMIT} -X ${PROJECT}/version.BuildTime=${BUILD_TIME}" \
		-o ${APP}
```
We also added a `PROJECT` variable at the top of the `Makefile` so as not to repeat the same string several times:
Makefile
```makefile
PROJECT?=github.com/rumyantseva/advent-2017
```
All changes made in this step can be found here. Try `make run` to check how it works.
There is one thing I don't like about our code: the `handlers` package depends on the `version` package. Changing this is easy: we just need to make the `home` function configurable:
handlers/home.go
```go
// home returns a simple HTTP handler function which writes a response.
func home(buildTime, commit, release string) http.HandlerFunc {
	return func(w http.ResponseWriter, _ *http.Request) {
		...
	}
}
```
And, again, don't forget to fix the tests and make all the other necessary changes.
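As a rough sketch of what the updated test might look like (the exact version lives in the repository; the values passed to `home` here are arbitrary), we can call the returned handler and decode the JSON body:

```go
package handlers

import (
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"testing"
)

func TestHome(t *testing.T) {
	w := httptest.NewRecorder()
	// The handler is now built from explicit values instead of the version package.
	home("2017-12-10_11:29:59", "abc1234", "0.0.1")(w, nil)

	resp := w.Result()
	if have, want := resp.StatusCode, http.StatusOK; have != want {
		t.Errorf("Status code is wrong. Have: %d, want: %d.", have, want)
	}

	var info struct {
		BuildTime string `json:"buildTime"`
		Commit    string `json:"commit"`
		Release   string `json:"release"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
		t.Fatal(err)
	}
	resp.Body.Close()

	if have, want := info.Release, "0.0.1"; have != want {
		t.Errorf("The release is wrong. Have: %s, want: %s.", have, want)
	}
}
```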
When a service is launched in Kubernetes, you usually need to add two helpers: liveness and readiness probes. The purpose of the liveness probe is to report that the service is alive; if the liveness probe fails, the service will be restarted. The purpose of the readiness probe is to report that the application is ready to receive traffic; if the readiness probe fails, the container will be removed from the service load balancers.
For the liveness probe, it is enough to write a simple handler that always returns `200`:
```go
// healthz is a liveness probe.
func healthz(w http.ResponseWriter, _ *http.Request) {
	w.WriteHeader(http.StatusOK)
}
```
For readiness probes, a similar solution is often enough, but sometimes the application needs to wait for some event (for example, the database becoming ready) before it can start serving traffic:
```go
// readyz is a readiness probe.
func readyz(isReady *atomic.Value) http.HandlerFunc {
	return func(w http.ResponseWriter, _ *http.Request) {
		if isReady == nil || !isReady.Load().(bool) {
			http.Error(w, http.StatusText(http.StatusServiceUnavailable), http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	}
}
```
In this example, we return `200` only if the `isReady` variable is set and holds `true`.
Let's see how this can be used:
```go
func Router(buildTime, commit, release string) *mux.Router {
	isReady := &atomic.Value{}
	isReady.Store(false)
	go func() {
		log.Printf("Readyz probe is negative by default...")
		time.Sleep(10 * time.Second)
		isReady.Store(true)
		log.Printf("Readyz probe is positive.")
	}()

	r := mux.NewRouter()
	r.HandleFunc("/home", home(buildTime, commit, release)).Methods("GET")
	r.HandleFunc("/healthz", healthz)
	r.HandleFunc("/readyz", readyz(isReady))
	return r
}
```
Here we say that the application is ready to handle traffic 10 seconds after launch. Of course, in real life it makes no sense to simply wait 10 seconds, but you might want to warm up a cache here or wait for a real dependency, as in the sketch below.
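For example, a readiness check gated on a database could look roughly like this; it is only a sketch, the `db` handle is a hypothetical `*sql.DB`, and the polling interval is arbitrary:

```go
package handlers

import (
	"database/sql"
	"log"
	"sync/atomic"
	"time"
)

// waitForDB is a sketch of a readiness check that polls a database
// instead of sleeping for a fixed time. It would be started from Router
// as `go waitForDB(db, isReady)`.
func waitForDB(db *sql.DB, isReady *atomic.Value) {
	log.Print("Readyz probe is negative by default...")
	// Keep polling the database until it answers, then mark the service ready.
	for db.Ping() != nil {
		time.Sleep(2 * time.Second)
	}
	isReady.Store(true)
	log.Print("Readyz probe is positive.")
}
```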
As always, full changes can be found on GitHub .
Note. If the application receives too much traffic, it may start responding erratically. For example, the liveness probe may fail because of timeouts, and the container will be restarted. For this reason, some engineers prefer not to use liveness probes at all. Personally, I think it is better to scale resources when you notice that more and more requests are coming to the service. For example, you can try automatic pod scaling via HPA; a minimal sketch of such a manifest follows.
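As a hedged illustration (it is not part of the repository, requires cluster metrics to be available, and the replica bounds and CPU threshold are arbitrary), an HPA manifest for such a deployment could look roughly like this:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ .ServiceName }}
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: {{ .ServiceName }}
  minReplicas: 3
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```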
When a service needs to be stopped, it is good practice not to drop connections, requests, and other operations immediately, but to finish them correctly. Go has supported graceful shutdown for `http.Server` since version 1.8. Let's see how it can be used:
```go
func main() {
	...
	r := handlers.Router(version.BuildTime, version.Commit, version.Release)

	interrupt := make(chan os.Signal, 1)
	signal.Notify(interrupt, os.Interrupt, syscall.SIGTERM)

	srv := &http.Server{
		Addr:    ":" + port,
		Handler: r,
	}
	go func() {
		log.Fatal(srv.ListenAndServe())
	}()
	log.Print("The service is ready to listen and serve.")

	killSignal := <-interrupt
	switch killSignal {
	case os.Interrupt:
		log.Print("Got SIGINT...")
	case syscall.SIGTERM:
		log.Print("Got SIGTERM...")
	}

	log.Print("The service is shutting down...")
	srv.Shutdown(context.Background())
	log.Print("Done")
}
```
In this example, we intercept the system signals `SIGINT` and `SIGTERM` and, if one of them is caught, shut the service down cleanly.
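One possible refinement, sketched below on the assumption that five seconds is an acceptable bound (it would also need the `context` and `time` imports), is to pass a context with a timeout to `Shutdown`, so that a shutdown stuck on slow requests cannot block forever:

```go
	log.Print("The service is shutting down...")

	// Give in-flight requests up to 5 seconds to finish; the value is arbitrary.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("Could not shut down the server gracefully: %v", err)
	}
	log.Print("Done")
```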
Note. When I wrote this code, I also tried to intercept `SIGKILL`. I had seen this approach several times in different libraries and was sure it worked. But, as Sandor Szücs pointed out, intercepting `SIGKILL` is impossible: in the case of `SIGKILL`, the application is stopped immediately.
Our application is almost ready to run in Kubernetes; it's time to containerize it.
The simplest `Dockerfile` we will need may look like this:
Dockerfile
```dockerfile
FROM scratch

ENV PORT 8000
EXPOSE $PORT

COPY advent /
CMD ["/advent"]
```
We create the smallest possible container, copy the binary into it, and run it (and we did not forget to forward the `PORT` variable).
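If you would rather compile inside Docker instead of cross-compiling on the host, a multi-stage build is an alternative. Below is a rough sketch (assuming Docker 17.05+ and a Go 1.9 base image; the paths are illustrative), not the Dockerfile used in this article:

```dockerfile
# Build stage: compile a static binary inside the official Go image.
FROM golang:1.9 AS builder
WORKDIR /go/src/github.com/rumyantseva/advent-2017
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o advent

# Final stage: copy only the binary into an empty image.
FROM scratch
ENV PORT 8000
EXPOSE $PORT
COPY --from=builder /go/src/github.com/rumyantseva/advent-2017/advent /advent
CMD ["/advent"]
```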
Now let's change the `Makefile` a bit: add the image build and the container launch there. Here we introduce two new variables, `GOOS` and `GOARCH`, which we will use for cross-compilation within the `build` target.
```makefile
...
GOOS?=linux
GOARCH?=amd64
...

build: clean
	CGO_ENABLED=0 GOOS=${GOOS} GOARCH=${GOARCH} go build \
		-ldflags "-s -w -X ${PROJECT}/version.Release=${RELEASE} \
		-X ${PROJECT}/version.Commit=${COMMIT} -X ${PROJECT}/version.BuildTime=${BUILD_TIME}" \
		-o ${APP}

container: build
	docker build -t $(APP):$(RELEASE) .

run: container
	docker stop $(APP):$(RELEASE) || true && docker rm $(APP):$(RELEASE) || true
	docker run --name ${APP} -p ${PORT}:${PORT} --rm \
		-e "PORT=${PORT}" \
		$(APP):$(RELEASE)
...
```
So, we added a `container` target to build the image and adjusted the `run` target so that it now starts the container instead of the binary. All changes are available here.
Now you can try `make run` to check the whole process.
There is one external dependency in our project: `github.com/gorilla/mux`. And that means that an application which is really production-ready needs dependency management. If we use the dep utility, all we need to do is call the `dep init` command:
```
$ dep init
  Using ^1.6.0 as constraint for direct dep github.com/gorilla/mux
  Locking in v1.6.0 (7f08801) for direct dep github.com/gorilla/mux
  Locking in v1.1 (1ea2538) for transitive dep github.com/gorilla/context
```
As a result, the `Gopkg.toml` and `Gopkg.lock` files and the `vendor` directory containing all the dependencies were created. Personally, I prefer to commit `vendor` to git, especially for important projects.
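For reference, the generated `Gopkg.toml` ends up looking roughly like the sketch below; the exact file is produced by dep itself:

```toml
[[constraint]]
  name = "github.com/gorilla/mux"
  version = "1.6.0"
```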
And finally, the last step: running the application in Kubernetes. The easiest way to try Kubernetes out is to install and configure minikube locally.
Kubernetes pulls images from a Docker registry. In our case, a public registry, Docker Hub, is enough. We will need one more variable and one more command in the `Makefile`:
```makefile
CONTAINER_IMAGE?=docker.io/webdeva/${APP}
...

container: build
	docker build -t $(CONTAINER_IMAGE):$(RELEASE) .
...

push: container
	docker push $(CONTAINER_IMAGE):$(RELEASE)
```
Here, the `CONTAINER_IMAGE` variable defines the registry repository to which we will push and from which we will pull container images. As you can see, in this example the user name (`webdeva`) is part of the registry path. If you do not have an account on hub.docker.com yet, it's time to create one and then log in with `docker login`. After that you can push images to the registry.
Let's try `make push`:
```
$ make push
...
docker build -t docker.io/webdeva/advent:0.0.1 .
Sending build context to Docker daemon  5.25MB
...
Successfully built d3cc8f4121fe
Successfully tagged webdeva/advent:0.0.1
docker push docker.io/webdeva/advent:0.0.1
The push refers to a repository [docker.io/webdeva/advent]
ee1f0f98199f: Pushed
0.0.1: digest: sha256:fb3a25b19946787e291f32f45931ffd95a933100c7e55ab975e523a02810b04c size: 528
```
It works! The image can now be found in the registry.
Let's define the necessary configurations (manifests) for Kubernetes. These are static files in YAML or JSON format, so to substitute "variables" we will have to resort to the `sed` utility. In this example, we will look at three types of resources: deployment, service, and ingress.
Note. The helm project solves the problem of managing releases of Kubernetes configurations in general and of creating flexible configurations in particular. So if plain `sed` is not enough, it makes sense to get to know Helm.
Consider the configuration for deployment:
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ .ServiceName }}
  labels:
    app: {{ .ServiceName }}
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 50%
      maxSurge: 1
  template:
    metadata:
      labels:
        app: {{ .ServiceName }}
    spec:
      containers:
      - name: {{ .ServiceName }}
        image: docker.io/webdeva/{{ .ServiceName }}:{{ .Release }}
        imagePullPolicy: Always
        ports:
        - containerPort: 8000
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8000
        readinessProbe:
          httpGet:
            path: /readyz
            port: 8000
        resources:
          limits:
            cpu: 10m
            memory: 30Mi
          requests:
            cpu: 10m
            memory: 30Mi
      terminationGracePeriodSeconds: 30
```
Kubernetes configuration deserves a separate article, but, as you can see, among other things it defines the registry and the container image, as well as the rules for the liveness and readiness probes.
A typical service configuration looks simpler:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .ServiceName }}
  labels:
    app: {{ .ServiceName }}
spec:
  ports:
  - port: 80
    targetPort: 8000
    protocol: TCP
    name: http
  selector:
    app: {{ .ServiceName }}
```
And finally, the ingress. Here we define the configuration for the ingress controller, which will help, for example, to access the service from outside Kubernetes. Suppose we want requests to the domain `advent.test` (which, of course, does not really exist) to be routed to the service:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/rewrite-target: /
  labels:
    app: {{ .ServiceName }}
  name: {{ .ServiceName }}
spec:
  backend:
    serviceName: {{ .ServiceName }}
    servicePort: 80
  rules:
  - host: advent.test
    http:
      paths:
      - path: /
        backend:
          serviceName: {{ .ServiceName }}
          servicePort: 80
```
To check how the configuration works, install `minikube` following its official documentation. In addition, we need the kubectl utility to apply the configurations and test the service. To start `minikube`, enable ingress, and prepare `kubectl`, run the following commands:
```
minikube start
minikube addons enable ingress
kubectl config use-context minikube
```
Now add a separate target to the `Makefile` to install the service into `minikube`:
Makefile
```makefile
minikube: push
	for t in $(shell find ./kubernetes/advent -type f -name "*.yaml"); do \
		cat $$t | \
			gsed -E "s/\{\{(\s*)\.Release(\s*)\}\}/$(RELEASE)/g" | \
			gsed -E "s/\{\{(\s*)\.ServiceName(\s*)\}\}/$(APP)/g"; \
		echo ---; \
	done > tmp.yaml
	kubectl apply -f tmp.yaml
```
These commands "compile" all *.yaml
configurations into one file, replace the "variables" Release
and ServiceName
real values (I use gsed
instead of the usual sed
) and run kubectl apply
to install the application in Kubernetes.
Check how the configurations were applied:
```
$ kubectl get deployment
NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
advent    3         3         3            3           1d

$ kubectl get service
NAME      CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
advent    10.109.133.147   <none>        80/TCP    1d

$ kubectl get ingress
NAME      HOSTS         ADDRESS        PORTS     AGE
advent    advent.test   192.168.64.2   80        1d
```
Now let's try to send a request to the service through this domain. First of all, we need to add the `advent.test` domain to the local `/etc/hosts` file (on Windows, `%SystemRoot%\System32\drivers\etc\hosts`):
echo "$(minikube ip) advent.test" | sudo tee -a /etc/hosts
And now we can check that the service works:
```
curl -i http://advent.test/home
HTTP/1.1 200 OK
Server: nginx/1.13.6
Date: Sun, 10 Dec 2017 20:40:37 GMT
Content-Type: application/json
Content-Length: 72
Connection: keep-alive
Vary: Accept-Encoding

{"buildTime":"2017-12-10_11:29:59","commit":"020a181","release":"0.0.5"}
```
Hurray, it works!
All the steps described can be found here; two options are available: commit by commit, or all steps in one directory. If you have questions, you can create an issue, ping me on Twitter (@webdeva), or just leave a comment here.
If you are wondering what a real, more flexible production-ready service might look like, take a look at the takama/k8sapp project, a Go application template that satisfies the Kubernetes requirements.
P.S. I would like to thank Natalie Pistunovich, Paul Brousseau, Sandor Szücs, Maxim Filatov, and other members of the community for their reviews and comments.
Source: https://habr.com/ru/post/345332/