
NGINX and gRPC are real friends now

A few days ago a new version of Nginx was released: 1.13.10. The main feature of this release is native support for HTTP/2 proxying and, as a result, gRPC.

Now that the world is flooded with microservices and heterogeneous technology stacks, probably everyone knows what gRPC is. If not, think of it as Protocol Buffers (which gRPC uses, among other things, for serialization) or Apache Thrift on steroids. The technology lets you organize the interaction of many services with each other in a very efficient manner.

gRPC's high performance comes from several things: HTTP/2 multiplexing and data compression. In addition, the framework encourages programmers to develop their services in a non-blocking style (à la NIO), using libraries such as Netty under the hood.

Image taken from https://www.slideshare.net/borisovalex/enabling-googley-microservices-with-grpc-at-jdkio-2017

Another important gRPC feature is native backpressure support. It is implemented through the deadline abstraction: the client's timeout is propagated through the entire chain of services. If a call cannot fit into the specified deadline (timeout), the whole chain of calls is cancelled. This protects the system from a chain reaction. Alexander Borisov from Google covers this in more detail; his talk is available on YouTube.
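In gRPC-Java this is exposed on stubs via `withDeadlineAfter(...)`, but the propagation idea itself can be sketched in plain Java without any gRPC dependency. The class and service names below are made up for illustration: each hop shares one absolute deadline and refuses to start work once it has passed.

```java
import java.util.concurrent.TimeUnit;

// Sketch of deadline propagation: every hop in a call chain shares one
// absolute deadline and checks it before starting its own work.
public class DeadlineDemo {

    // A deadline is an absolute point in time, not a per-hop timeout.
    static final class Deadline {
        private final long expiresAtNanos;

        Deadline(long timeoutMillis) {
            this.expiresAtNanos = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
        }

        long remainingMillis() {
            return TimeUnit.NANOSECONDS.toMillis(expiresAtNanos - System.nanoTime());
        }

        boolean expired() {
            return remainingMillis() <= 0;
        }
    }

    // A simulated service call: refuses to start if the shared budget is gone.
    static String callService(String name, long workMillis, Deadline deadline) {
        if (deadline.expired()) {
            throw new IllegalStateException(name + ": deadline exceeded, not even starting");
        }
        try {
            Thread.sleep(workMillis); // simulated work
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return name + " done, " + deadline.remainingMillis() + "ms left";
    }

    public static void main(String[] args) {
        Deadline deadline = new Deadline(200); // client-side timeout for the whole chain
        System.out.println(callService("gateway", 80, deadline));  // fits into the budget
        System.out.println(callService("movies", 150, deadline));  // overruns it while working
        try {
            callService("ratings", 50, deadline); // never started: deadline already passed
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The point is that the third call fails immediately instead of piling more work onto a request the client has already given up on.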

Back to our topic: Nginx and gRPC. At first glance these may seem like two incompatible technologies: Nginx is used as the entry point to a system, while gRPC is a tool for the interaction of microservices within the system. However, this is not always the case.

Consider a company that develops an API. This company may also have mobile apps that consume that same API. The apps usually cannot talk directly to microservices that are not reachable from the public network. Therefore some Gateway is required that receives requests from the outside and proxies them to the internal microservices.

Several classes of systems can play the Gateway role. First, it can be a full-fledged application in any programming language. The benefit of this approach is greater flexibility; the downside is often reduced performance. In addition, when writing your own Gateway it is easy to introduce bugs that can affect the security of the system.

Another way to implement a Gateway is to use a ready-made solution from the Reverse Proxy class. This could be the familiar Nginx, but there are modern alternatives too: Envoy, Træfik, Caddy. The advantages of a Proxy are probably clear to everyone: it is fast and reliable, you get traffic balancing out of the box, you get SSL termination out of the box, and any Proxy usually implements a very flexible routing system that lets you route traffic to different applications via different URLs.

So, we have realized that sometimes we need to expose gRPC outside the system, apparently via some kind of Reverse Proxy. But here is the catch: we do not want anything on the project besides good old Nginx, and older versions have no way to proxy HTTP/2. The solution is to upgrade to 1.13.10! The Nginx folks have finally shipped native support for HTTP/2 proxying, and with it gRPC.


Out of the box you get a package of benefits: TLS termination, traffic balancing across nodes, powerful routing, and a number of other Nginx features you already know.

All you need to do to start proxying gRPC traffic is to tweak the config (and possibly rebuild Nginx with a couple of new modules, if you build the Proxy yourself). A HelloWorld config looks like this:

```nginx
server {
    listen 80 http2;

    charset utf-8;
    access_log logs/access.log;

    location / {
        grpc_pass grpc://movie:6565;
    }
}
```
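The same config extends naturally to the TLS termination and load balancing mentioned above. A sketch under assumptions (the upstream name, backend hosts, and certificate paths below are made up for illustration):

```nginx
# Balance gRPC traffic across several backend nodes (hypothetical hosts).
upstream movie_backends {
    server movie1:6565;
    server movie2:6565;
}

server {
    # Terminate TLS at Nginx; clients speak HTTP/2 over TLS.
    listen 443 ssl http2;

    ssl_certificate     /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;

    location / {
        # grpcs:// would be used instead if the backends themselves spoke TLS.
        grpc_pass grpc://movie_backends;
    }
}
```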

I am a simple person myself: seeing is believing. So I put together a demo with a Server that returns a list of the best movies (a fixed set of strings) and a Client that reads them. Client and server talk to each other through Nginx.

The movies are served like this:

```java
@Override
public void getRating(Moviesrating.GetRatingRequest request,
                      StreamObserver<Moviesrating.GetRatingResponse> responseObserver) {
    log.info("getRating(): request={}", request);

    List<String> bestMovies = Arrays.asList(
            "The Shawshank Redemption",
            "The Godfather",
            "The Dark Knight",
            "Interstellar"
    );

    responseObserver.onNext(Moviesrating.GetRatingResponse.newBuilder()
            .addAllMovie(bestMovies)
            .build());
    responseObserver.onCompleted();
}
```
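The generated Moviesrating classes come from a Protobuf definition in the demo repo. Judging from the Java code above (an empty request, `addAllMovie(...)` on the response), the service definition is presumably along these lines; the service name, field name, and field number are guesses:

```protobuf
syntax = "proto3";

service MoviesRating {
  rpc GetRating (GetRatingRequest) returns (GetRatingResponse);
}

message GetRatingRequest {}

message GetRatingResponse {
  // addAllMovie(...) on the Java side maps to this repeated field.
  repeated string movie = 1;
}
```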

And read like this:

```java
@GetMapping("/top")
Mono<List<Movie>> top() {
    log.info("top()");

    ListenableFuture<Moviesrating.GetRatingResponse> ratingFuture =
            moviesRatingStub.getRating(Moviesrating.GetRatingRequest.newBuilder().build());

    CompletableFuture<List<Movie>> completable = new CompletableFuture<List<Movie>>() {
        @Override
        public boolean cancel(boolean mayInterruptIfRunning) {
            boolean result = ratingFuture.cancel(mayInterruptIfRunning);
            super.cancel(mayInterruptIfRunning);
            return result;
        }
    };

    ratingFuture.addListener(() -> {
        try {
            completable.complete(ratingFuture.get().getMovieList().stream()
                    .map(Movie::new)
                    .collect(Collectors.toList()));
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
    }, executor);

    return Mono.fromFuture(completable);
}
```

Everything works; the Nginx folks are not lying, you can trust them. And if you do not believe it, check for yourself: https://github.com/Hixon10/grpc-nginx

Source: https://habr.com/ru/post/351994/