
Building a microservice architecture with Golang and gRPC, part 2 (Docker)

It's time to dive into containers.


First of all, we'll use the latest Alpine Linux image. Alpine Linux is a lightweight Linux distribution designed and optimized for running web applications in Docker. In other words, it ships with just enough dependencies and functionality to run most applications. This means the image weighs in at around 8 MB!

Compare that to, say ... an Ubuntu virtual machine image of around 1 GB, and you can see why Docker images are a natural fit for microservices and cloud computing.

So, now that you hopefully see the value in containerization, we can start "Dockerising" our first service. Let's create a Dockerfile: $ touch consignment-service/Dockerfile.


First part
Original EwanValentine repository
Original article

In the Dockerfile, add the following:

FROM alpine:latest

RUN mkdir /app
WORKDIR /app
ADD consignment-service /app/consignment-service

CMD ["./consignment-service"]

Here we create a new directory to host our application, add our compiled binary to the Docker container, and run it.

Now let's update the build entry of our Makefile to create a Docker image.

build:
	...
	GOOS=linux GOARCH=amd64 go build
	docker build -t shippy-service-consignment .

We added two more steps, and I'd like to explain them in a bit more detail. First, we build our Go binary. Notice the two environment variables set before $ go build: GOOS and GOARCH let you cross-compile your binary for another operating system. Since I'm developing on a MacBook, I can't compile a go executable natively and then run it in a Docker container, which uses Linux. The binary would be completely meaningless inside your Docker container and would throw an error.

The second step I added is the docker build process. Docker reads your Dockerfile and creates an image named shippy-service-consignment; the dot indicates the build-context path, so here we just want the build process to look in the current directory.

I'm going to add a new entry to our Makefile:

run:
	docker run -p 50051:50051 shippy-service-consignment

Here we launch our Docker image, exposing port 50051. Since Docker runs on a separate network layer, you need to forward the port. For example, if you want to serve this service on port 8080 of the host, change the -p argument to 8080:50051. You can also run the container in the background by including the -d flag, for example docker run -d -p 50051:50051 shippy-service-consignment.

Run $ make run , then in a separate terminal panel again $ go run main.go and check that it still works.

When you run $ docker build, you bundle your code and runtime environment into an image. Docker images are portable snapshots of your environment and its dependencies. You can share Docker images by publishing them to Docker Hub, which is a bit like npm or a yum repository, but for Docker images. When you define a FROM in your Dockerfile, you tell Docker to pull that image from Docker Hub to use as your base. You can then extend and override parts of that base image as you like. We won't be publishing our Docker images, but feel free to browse Docker Hub and note that just about any piece of software has already been packaged into containers. Some really wonderful things have been Dockerised.

Each instruction in the Dockerfile is cached when it is first built. This eliminates the need to rebuild the entire runtime environment each time you make a change. Docker is smart enough to figure out which layers have changed and which need to be rebuilt. This makes the build process incredibly fast.
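To take advantage of that cache, order instructions from least to most frequently changed. As a sketch (not one of this tutorial's files; the multi-stage Dockerfile later in the article does something similar), a Go service can copy its module files and download dependencies before copying the source:

```dockerfile
FROM golang:alpine as builder

WORKDIR /app/shippy-service-consignment

# These layers only change when go.mod/go.sum change, so the (slow)
# dependency download below is served from cache on most builds.
COPY go.mod go.sum ./
RUN go mod download

# The source changes often, but only the layers from here on
# need to be rebuilt when it does.
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o shippy-service-consignment
```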

Enough about containers! Let's go back to our code.

When creating a gRPC service, there is a lot of boilerplate for setting up connections, and you have to hard-code the address of a service into the client, or into another service, so that it can connect. This is tricky because when you run services in the cloud, they may not share the same host, and the address or IP may change when a service is redeployed.

This is where service discovery comes into play. A discovery service keeps a directory of all your services and their locations. Each service registers itself at runtime and deregisters itself on shutdown. Each service is assigned a name or identifier, so even if its IP address or host changes, as long as the service name stays the same, you don't need to update calls to that service from other services.

There are many approaches to this problem, but, as with most things in programming, if someone has already tackled it, there is no point in reinventing the wheel. @Chuhnk (Asim Aslam), the creator of Go-micro, solves these problems with fantastic clarity and ease of use. He single-handedly produces fantastic software. Please consider supporting him if you like what you see!

Go-micro


Go-micro is a powerful microservice framework written in Go, for use mostly with Go. However, you can use Sidecar to interact with other languages.

Go-micro has useful features for building microservices in Go. But we will start with perhaps the most common problem it solves: service discovery.

We will need to make several updates to our service in order to work with go-micro. Go-micro integrates as a Protoc plugin, in this case replacing the standard gRPC plugin that we currently use. So let's start by replacing this in our Makefile.

Be sure to install the go-micro dependencies:

 go get -u github.com/micro/protobuf/{proto,protoc-gen-go} 

Update our Makefile to use the go-micro plugin instead of the gRPC plugin:

build:
	protoc -I. --go_out=plugins=micro:. \
		proto/consignment/consignment.proto
	GOOS=linux GOARCH=amd64 go build
	docker build -t shippy-service-consignment .

run:
	docker run -p 50051:50051 shippy-service-consignment

Now we need to update shippy-service-consignment/main.go to use go-micro. This abstracts away much of our previous gRPC code. It handles service registration for us and speeds up writing the service.

shippy-service-consignment/main.go

// shippy-service-consignment/main.go
package main

import (
	"context"
	"fmt"
	"log"

	// Import the generated protobuf code
	pb "github.com/EwanValentine/shippy/consignment-service/proto/consignment"
	"github.com/micro/go-micro"
)

// repository - the datastore interface
type repository interface {
	Create(*pb.Consignment) (*pb.Consignment, error)
	GetAll() []*pb.Consignment
}

// Repository - simulates a datastore,
// we'll replace it with a real one later
type Repository struct {
	consignments []*pb.Consignment
}

func (repo *Repository) Create(consignment *pb.Consignment) (*pb.Consignment, error) {
	updated := append(repo.consignments, consignment)
	repo.consignments = updated
	return consignment, nil
}

func (repo *Repository) GetAll() []*pb.Consignment {
	return repo.consignments
}

// service implements all of the methods defined in
// our protobuf definition. Check the generated code
// for the exact method signatures.
type service struct {
	repo repository
}

// CreateConsignment - we created just one method on our service,
// a create method, which takes a context and a request,
// handled by the gRPC server.
func (s *service) CreateConsignment(ctx context.Context, req *pb.Consignment, res *pb.Response) error {
	// Save our consignment
	consignment, err := s.repo.Create(req)
	if err != nil {
		return err
	}

	// Return matching the `Response` message we created in our
	// protobuf definition.
	res.Created = true
	res.Consignment = consignment
	return nil
}

// GetConsignments - returns all consignments currently stored
func (s *service) GetConsignments(ctx context.Context, req *pb.GetRequest, res *pb.Response) error {
	consignments := s.repo.GetAll()
	res.Consignments = consignments
	return nil
}

func main() {
	repo := &Repository{}

	// Create a new service with go-micro
	srv := micro.NewService(
		// This name must match the package name given in your protobuf definition
		micro.Name("shippy.service.consignment"),
	)

	// Init will parse the command line flags.
	srv.Init()

	// Register our handler
	pb.RegisterShippingServiceHandler(srv.Server(), &service{repo})

	// Run the server
	log.Println("Starting consignment service")
	if err := srv.Run(); err != nil {
		fmt.Println(err)
	}
}


The main change here is the way we create our gRPC server, which has been neatly abstracted behind micro.NewService(), which also handles registering our service. And finally, srv.Run(), which handles the connection itself. As before, we register our implementation, but this time via a slightly different method.

The second biggest change is to the service methods themselves: the arguments and return types have changed slightly to take both the request and the response struct as arguments, and to return only an error. Inside our methods, we set the response, which go-micro then handles for us.

Finally, we no longer hard-code the port. Go-micro should be configured with environment variables or command-line arguments. To set the address, use MICRO_SERVER_ADDRESS=:50051. By default, Micro uses mdns (multicast DNS) as its service discovery broker for local use. You wouldn't normally use mdns for service discovery in a production environment, but here we want to avoid having to run something like Consul or etcd locally for testing. More on this later.

Let's update our Makefile to reflect this.

build:
	protoc -I. --go_out=plugins=micro:. \
		proto/consignment/consignment.proto
	GOOS=linux GOARCH=amd64 go build
	docker build -t shippy-service-consignment .

run:
	docker run -p 50051:50051 \
		-e MICRO_SERVER_ADDRESS=:50051 \
		shippy-service-consignment

-e is the environment-variable flag; it lets you pass environment variables into your Docker container. You need one flag per variable, for example -e ENV=staging -e DB_HOST=localhost, etc.

Now, if you run $ make run, you will have a Dockerised service with service discovery. So let's update our CLI tool to use it.

consignment-cli
// consignment-cli/cli.go
package main

import (
	"context"
	"encoding/json"
	"io/ioutil"
	"log"
	"os"

	pb "github.com/EwanValentine/shippy-service-consignment/proto/consignment"
	micro "github.com/micro/go-micro"
)

const (
	defaultFilename = "consignment.json"
)

func parseFile(file string) (*pb.Consignment, error) {
	var consignment *pb.Consignment
	data, err := ioutil.ReadFile(file)
	if err != nil {
		return nil, err
	}
	if err := json.Unmarshal(data, &consignment); err != nil {
		return nil, err
	}
	return consignment, nil
}

func main() {
	service := micro.NewService(micro.Name("shippy.cli.consignment"))
	service.Init()

	client := pb.NewShippingServiceClient("shippy.service.consignment", service.Client())

	// Contact the server and print out its response.
	file := defaultFilename
	if len(os.Args) > 1 {
		file = os.Args[1]
	}

	consignment, err := parseFile(file)
	if err != nil {
		log.Fatalf("Could not parse file: %v", err)
	}

	r, err := client.CreateConsignment(context.Background(), consignment)
	if err != nil {
		log.Fatalf("Could not create consignment: %v", err)
	}
	log.Printf("Created: %t", r.Created)

	getAll, err := client.GetConsignments(context.Background(), &pb.GetRequest{})
	if err != nil {
		log.Fatalf("Could not list consignments: %v", err)
	}
	for _, v := range getAll.Consignments {
		log.Println(v)
	}
}


Here we've imported the go-micro libraries for creating clients, and replaced the existing connection code with go-micro client code, which uses service resolution instead of connecting directly to an address.

However, if you run this, it won't work. That's because our service now runs in a Docker container, which has its own mdns, separate from the host mdns we are currently using. The easiest fix is to make sure both the service and the client live in dockerland, so that they run on the same host and use the same network layer. So let's create consignment-cli/Makefile with a few entries.

build:
	GOOS=linux GOARCH=amd64 go build
	docker build -t shippy-cli-consignment .

run:
	docker run shippy-cli-consignment

As before, we want to compile our binary file for Linux. When we run our docker image, we want to pass an environment variable to tell the go-micro command to use mdns.

Now let's create a Dockerfile for our CLI tool:

FROM alpine:latest

RUN mkdir -p /app
WORKDIR /app

ADD consignment.json /app/consignment.json
ADD consignment-cli /app/consignment-cli

CMD ["./consignment-cli"]

This is very similar to our service's Dockerfile, except that it also pulls in our JSON data file.

Now, when you run $ make run in consignment-cli, you should see Created: true, just like before.

Now it seems like a good time to take a look at the new Docker feature: multi-stage builds. This allows us to use multiple Docker images in a single Dockerfile.

This is especially useful in our case, since we can use one image to create our binary file with all the right dependencies. And then use the second image to launch it. Let's try this, I will leave detailed comments along with the code:
consignment-service/Dockerfile

# consignment-service/Dockerfile

# We use the official golang image, which contains all the
# correct build tools and libraries. Notice `as builder`,
# this gives this stage a name, so we can refer back to it later.
FROM golang:alpine as builder

RUN apk --no-cache add git

# Set our workdir to our current service in the gopath
WORKDIR /app/shippy-service-consignment

# Copy the current code into our workdir
COPY . .

RUN go mod download

# Build the binary, with a few flags which allow
# us to run it in an Alpine container.
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o shippy-service-consignment

# Here we use a second FROM statement, which tells Docker
# to start a new build stage with a fresh base image.
FROM alpine:latest

# Security-related package, good to have
RUN apk --no-cache add ca-certificates

# Same as before, create a directory for our app.
RUN mkdir /app
WORKDIR /app

# Here, instead of copying the binary from our host machine,
# we pull the binary from the container named `builder` in the
# previous stage. This reaches into our previous image, finds
# the binary we built, and brings it into this container. Amazing!
COPY --from=builder /app/shippy-service-consignment/shippy-service-consignment .

# Run the binary as per usual! This time the container
# contains the binary and nothing else needed at run time.
CMD ["./shippy-service-consignment"]


Now I'll go through the other Dockerfiles and apply this new approach. Oh, and don't forget to remove $ go build from your Makefiles!

Vessel service


Let's create our second service. We already have a service (shippy-service-consignment) that handles matching a consignment of containers to a vessel best suited for that load. To match a consignment, we must send its weight and container count to our new vessel service, which will then find a vessel capable of handling that load.

Create a new directory in your project root: $ mkdir shippy-service-vessel. Now create a subdirectory for our new protobuf service definition, $ mkdir -p shippy-service-vessel/proto/vessel, then create the protobuf file itself: $ touch shippy-service-vessel/proto/vessel/vessel.proto.

Since the protobuf definition is really the core of our software design, let's start with it.

vessel/vessel.proto

// shippy-service-vessel/proto/vessel/vessel.proto
syntax = "proto3";

package vessel;

service VesselService {
  rpc FindAvailable(Specification) returns (Response) {}
}

message Vessel {
  string id = 1;
  int32 capacity = 2;
  int32 max_weight = 3;
  string name = 4;
  bool available = 5;
  string owner_id = 6;
}

message Specification {
  int32 capacity = 1;
  int32 max_weight = 2;
}

message Response {
  Vessel vessel = 1;
  repeated Vessel vessels = 2;
}


As you can see, this is very similar to our first service. We create a service with a single RPC method called FindAvailable, which takes a Specification type and returns a Response type. The Response type returns either a single Vessel or multiple vessels, via the repeated field.

Now we need to create a Makefile to handle our build logic and our startup script: $ touch shippy-service-vessel/Makefile. Open the file and add the following:

# shippy-service-vessel/Makefile
build:
	protoc -I. --go_out=plugins=micro:. \
		proto/vessel/vessel.proto
	docker build -t shippy-service-vessel .

run:
	docker run -p 50052:50051 -e MICRO_SERVER_ADDRESS=:50051 shippy-service-vessel

This is almost identical to the first Makefile we created for our consignment service, but notice the service names and ports have changed a bit. We can't run two Docker containers on the same port, so we use Docker's port forwarding to map this service's container port 50051 to port 50052 on the host network.

Now we need a Dockerfile using our new multi-step format:

# shippy-service-vessel/Dockerfile
FROM golang:alpine as builder

RUN apk --no-cache add git

WORKDIR /app/shippy-service-vessel

COPY . .

RUN go mod download
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o shippy-service-vessel

FROM alpine:latest

RUN apk --no-cache add ca-certificates

RUN mkdir /app
WORKDIR /app

COPY --from=builder /app/shippy-service-vessel .

CMD ["./shippy-service-vessel"]

Finally, we can write our implementation:

shippy-service-vessel/main.go

// shippy-service-vessel/main.go
package main

import (
	"context"
	"errors"
	"fmt"

	pb "github.com/EwanValentine/shippy/vessel-service/proto/vessel"
	"github.com/micro/go-micro"
)

type Repository interface {
	FindAvailable(*pb.Specification) (*pb.Vessel, error)
}

type VesselRepository struct {
	vessels []*pb.Vessel
}

// FindAvailable - checks a specification against our list of vessels;
// if the spec's capacity and max weight are below a vessel's capacity
// and max weight, that vessel is returned.
func (repo *VesselRepository) FindAvailable(spec *pb.Specification) (*pb.Vessel, error) {
	for _, vessel := range repo.vessels {
		if spec.Capacity <= vessel.Capacity && spec.MaxWeight <= vessel.MaxWeight {
			return vessel, nil
		}
	}
	// No suitable vessel found
	return nil, errors.New("no vessel found by that spec")
}

// Our gRPC service handler
type service struct {
	repo Repository
}

func (s *service) FindAvailable(ctx context.Context, req *pb.Specification, res *pb.Response) error {
	// Find the next available vessel
	vessel, err := s.repo.FindAvailable(req)
	if err != nil {
		return err
	}

	// Set the vessel as part of the response message type
	res.Vessel = vessel
	return nil
}

func main() {
	vessels := []*pb.Vessel{
		&pb.Vessel{Id: "vessel001", Name: "Boaty McBoatface", MaxWeight: 200000, Capacity: 500},
	}
	repo := &VesselRepository{vessels}

	srv := micro.NewService(
		micro.Name("shippy.service.vessel"),
	)

	srv.Init()

	// Register our implementation with go-micro
	pb.RegisterVesselServiceHandler(srv.Server(), &service{repo})

	if err := srv.Run(); err != nil {
		fmt.Println(err)
	}
}


Now we get to the interesting part. When we create a consignment, we need our consignment service to contact the vessel service, find an available vessel, and update the VesselId on the created consignment:

shippy-service-consignment/main.go

// shippy-service-consignment/main.go
package main

import (
	"context"
	"fmt"
	"log"
	"sync"

	pb "github.com/EwanValentine/shippy-service-consignment/proto/consignment"
	vesselProto "github.com/EwanValentine/shippy-service-vessel/proto/vessel"
	"github.com/micro/go-micro"
)

type repository interface {
	Create(*pb.Consignment) (*pb.Consignment, error)
	GetAll() []*pb.Consignment
}

// Repository - simulates a datastore,
// we'll replace it with a real one later
type Repository struct {
	mu           sync.RWMutex
	consignments []*pb.Consignment
}

// Create - stores a new consignment
func (repo *Repository) Create(consignment *pb.Consignment) (*pb.Consignment, error) {
	repo.mu.Lock()
	updated := append(repo.consignments, consignment)
	repo.consignments = updated
	repo.mu.Unlock()
	return consignment, nil
}

// GetAll - returns all consignments currently stored
func (repo *Repository) GetAll() []*pb.Consignment {
	return repo.consignments
}

// service implements all of the methods defined in
// our protobuf definition, plus a client for the vessel service.
type service struct {
	repo         repository
	vesselClient vesselProto.VesselServiceClient
}

// CreateConsignment - takes a context and a request,
// handled by the gRPC server.
func (s *service) CreateConsignment(ctx context.Context, req *pb.Consignment, res *pb.Response) error {
	// Call the vessel service with the consignment weight
	// and the number of containers as the capacity value
	vesselResponse, err := s.vesselClient.FindAvailable(context.Background(), &vesselProto.Specification{
		MaxWeight: req.Weight,
		Capacity:  int32(len(req.Containers)),
	})
	if err != nil {
		return err
	}
	log.Printf("Found vessel: %s\n", vesselResponse.Vessel.Name)

	// Set the VesselId as the vessel we got back from the vessel service
	req.VesselId = vesselResponse.Vessel.Id

	// Save our consignment
	consignment, err := s.repo.Create(req)
	if err != nil {
		return err
	}

	res.Created = true
	res.Consignment = consignment
	return nil
}

// GetConsignments - returns all consignments
func (s *service) GetConsignments(ctx context.Context, req *pb.GetRequest, res *pb.Response) error {
	consignments := s.repo.GetAll()
	res.Consignments = consignments
	return nil
}

func main() {
	// Create the datastore
	repo := &Repository{}

	// Create the micro service
	srv := micro.NewService(
		micro.Name("shippy.service.consignment"),
	)

	srv.Init()

	// Create a client for the vessel service
	vesselClient := vesselProto.NewVesselServiceClient("shippy.service.vessel", srv.Client())

	// Register our handler with the gRPC server
	pb.RegisterShippingServiceHandler(srv.Server(), &service{repo, vesselClient})

	// Run the server
	if err := srv.Run(); err != nil {
		fmt.Println(err)
	}
}


Here we've created a client instance for our vessel service, which lets us use the service name, shippy.service.vessel, to call the vessel service as a client and interact with its methods. In this case, just one method (FindAvailable). We send the consignment weight, along with the number of containers we want to ship, as the specification to the vessel service, which returns a vessel matching that specification.

Update the consignment-cli/consignment.json file: delete the hard-coded vessel_id, because we want to confirm that our vessel service finds one for us, and add a few more containers and increase the weight. For example:

{
  "description": "This is a test consignment",
  "weight": 55000,
  "containers": [
    { "customer_id": "cust001", "user_id": "user001", "origin": "Manchester, United Kingdom" },
    { "customer_id": "cust002", "user_id": "user001", "origin": "Glasgow, United Kingdom" },
    { "customer_id": "cust003", "user_id": "user001", "origin": "Shanghai, China" }
  ]
}

Now run $ make build && make run in consignment-cli. You should see a response with a list of created consignments, and each consignment should now have a vessel_id set.

So, we have two interconnected microservices and a command-line interface!
In the next part of this series, we will look at saving some of this data using MongoDB. We will also add a third service and use docker-compose to locally manage our growing container ecosystem.


Source: https://habr.com/ru/post/455812/

