
Yet another tutorial: launch dotnet core docker application on Linux



One cloudy summer day, after attending Avito's section at RIT2017, it suddenly dawned on me that the hype around Docker had not died down in a couple of years and it was finally time to master it. Dotnet core + C# was chosen as the test subject for packaging, since I had long been curious what it is like to develop in C# for Linux.

A warning to the reader: the article is aimed at complete beginners with docker / dotnet core and was written mostly as a note to self. I was inspired by the first 3 parts of the Docker Get Started Guide and a blog post in English. If you are comfortable with English, you can read those right away; this will be very similar. If, after all of the above, you have not yet decided to stop reading, then read on.

Prerequisites

So, we will need Linux itself (in my case it was Ubuntu 16.04 under VirtualBox on Windows 10), dotnet core, docker, and also docker-compose, to make it more convenient to bring up several containers at once.
No particular installation problems should arise. At least, I did not run into any.

Choosing a development environment
Formally, you can write code in a plain text editor and debug through logs or console output, but since I am somewhat spoiled, I still wanted proper debugging and, preferably, proper refactoring as well.

Of the options available under Linux, I tried Visual Studio Code and JetBrains Rider.

Visual Studio Code
What can I say: it works. Debugging is possible and the syntax is highlighted, but everything is very bare-bones; it leaves the impression of a notepad with debugging options.

Rider
Essentially IntelliJ IDEA crossed with ReSharper: simple and clear if you have worked with any JetBrains IDE before. Until recently, debugging did not work under Linux, but it was brought back in the latest EAP build. All in all, for me the choice in favor of Rider was unequivocal. Thanks to JetBrains for their cool products.

Create a project

For educational purposes, we will shun the Create Project buttons of the various IDEs and do everything by hand through the console.

1. Go to the directory of our future project.
2. Out of curiosity, see which templates are available:

dotnet new -all 

3. Create a WebApi project

 dotnet new webapi 

4. Restore dependencies

 dotnet restore 

5. Run our application

 dotnet run 

6. Open http://localhost:5000/api/values and enjoy C# code running on Linux
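To check the endpoint from a second terminal instead of a browser, a quick curl works (the /api/values route comes from the stock webapi template):

```shell
# query the default ValuesController while `dotnet run` is active in another terminal
curl http://localhost:5000/api/values
# the stock webapi template responds with: ["value1","value2"]
```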

Preparing an application for dockerization

Go to Program.cs and add to the host setup:

 .UseUrls("http://*:5000") // listen on port 5000 on all network interfaces 

In the end, you should get something like

 public static void Main(string[] args)
 {
     var host = new WebHostBuilder()
         .UseKestrel()
         .UseContentRoot(Directory.GetCurrentDirectory())
         .UseUrls("http://*:5000") // listen on port 5000 on all network interfaces
         .UseStartup<Startup>()
         .Build();

     host.Run();
 }

This is necessary so that we can turn to the application inside the container.
By default, Kestrel , on which our application runs, listens on http://localhost:5000 . The problem is that localhost is a loopback interface, and when the application runs in a container it is reachable only from inside the container.

Accordingly, having dockerized a dotnet core application with the default URL setting, you can then spend a long time wondering why port forwarding does not work and re-reading your Dockerfile in search of errors.

And you can add some functionality to the application.
Passing parameters to the application

When I run the container, I would like to be able to pass parameters to the application.
A quick googling showed that, unless we want something exotic like accessing a configuration service from inside the container, we can pass parameters through environment variables or by replacing the config file.

Well then, we will pass them through environment variables.

Let's go to Startup.cs , find public Startup(IHostingEnvironment env) and see that the AddEnvironmentVariables() method is already called on our ConfigurationBuilder .

That is actually it: now you can inject parameters from environment variables anywhere through DI.
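For a quick local check before any docker is involved, such a variable can be set right in the shell that will launch the app (MyTestParam is just an example name, used again later in this article):

```shell
# export a parameter; AddEnvironmentVariables() will pick it up at app startup
export MyTestParam="Hello from the environment"
# confirm the shell side of the contract before launching `dotnet run`
printenv MyTestParam
```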

Instance ID

When an instance starts, we will generate a new Guid and put it into the IoC container to hand out to anyone who needs it. This is useful, for example, for analyzing logs from several service instances running in parallel.

Everything is also pretty trivial - in the ConfigurationBuilder call:

 .AddInMemoryCollection(new Dictionary<string, string>
 {
     {"InstanseId", Guid.NewGuid().ToString()}
 })

After these two steps, public Startup(IHostingEnvironment env) will look something like this:

 public Startup(IHostingEnvironment env)
 {
     var builder = new ConfigurationBuilder()
         .SetBasePath(env.ContentRootPath)
         .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
         .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
         .AddInMemoryCollection(new Dictionary<string, string>
         {
             {"InstanseId", Guid.NewGuid().ToString()}
         })
         .AddEnvironmentVariables();
     Configuration = builder.Build();
 }

A little about DI

It did not seem intuitive to me at all. I did not dig deep, but I will nonetheless give a small example below of how to pass the instance id that we set at startup, and something from the environment variables (say, the MyTestParam variable), into a controller.

The first step is to create a settings class; its property names must match the names of the configuration parameters that we want to inject.

 public class ValuesControllerSettings
 {
     public string MyTestParam { get; set; }
     public string InstanseId { get; set; }
 }

Next, go to Startup.cs and make changes to ConfigureServices(IServiceCollection services)

 // This method gets called by the runtime. Use this method to add services to the container.
 public void ConfigureServices(IServiceCollection services)
 {
     // bind the Configuration to ValuesControllerSettings
     services.Configure<ValuesControllerSettings>(Configuration);

     // Add framework services.
     services.AddMvc();
 }

And the last step: go to our experimental, auto-generated ValuesController and write the injection through the constructor.

 private readonly ValuesControllerSettings _settings;

 public ValuesController(IOptions<ValuesControllerSettings> settings)
 {
     _settings = settings.Value;
 }

not forgetting to add using Microsoft.Extensions.Options; . For a test, override the response of any Get method you like to return the parameters received by the controller; run, check → profit.

We collect and run a docker image

1. First of all, we will get the binaries of our application for publication. To do this, open the terminal, go to the project directory and call:

 dotnet publish 

More details about the command can be found in the dotnet publish documentation.

Running this command without additional arguments from the project directory will place the dlls for publication in ./bin/Debug/[framework]/publish
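A quick look inside the folder confirms what will be shipped (the netcoreapp1.1 path matches this article's SDK; newer SDKs use a different framework folder name):

```shell
# list the publish output: the app dll, its dependencies and *.runtimeconfig.json
ls ./bin/Debug/netcoreapp1.1/publish
```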

2. Now let's pack this folder into our docker image.

To do this, create a file named Dockerfile in the project root and write something like this there:

 # base image with the dotnet runtime
 FROM microsoft/dotnet:runtime
 # working directory inside the container
 WORKDIR /testapp
 # copy the published binaries into the container (the path is relative to the Dockerfile location)
 COPY /bin/Debug/netcoreapp1.1/publish /testapp
 # expose port 5000, which Kestrel listens on
 EXPOSE 5000
 # command to run when the container starts (substitute your project's dll name)
 CMD ["dotnet", "<your-app>.dll"]

3. Once the Dockerfile is written, run:

 docker build -t my-cool-service:1.0 . 

Where my-cool-service is the image name, and 1.0 is the tag indicating the version of our application.

4. Now we’ll check that the image of our service is in the repository:

 docker images 

5. And finally, run our image:

 docker run -p 5000:5000 my-cool-service:1.0 

6. Open http://localhost:5000/api/values and enjoy C# code running on Linux in docker
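The earlier pieces can now be combined: `-e` passes the environment variable from the configuration section, `-d` detaches, and `--name` (a name chosen here purely for illustration) makes cleanup easier:

```shell
# run detached, forward port 5000 and pass a parameter into the app's environment
docker run -d -p 5000:5000 -e MyTestParam=FromDocker --name my-cool-container my-cool-service:1.0
# check the endpoint (the response will reflect MyTestParam if you exposed it in a Get method)
curl http://localhost:5000/api/values
# stop and remove the container when done
docker stop my-cool-container
docker rm my-cool-container
```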

Useful commands for working with docker
View images in the local repository
 docker images 

View running containers
 docker ps 

Run container in detached mode
 docker run -d <image_name> 

Get container information
 docker inspect <container_name_or_id> 

Stop container
 docker stop <container_name_or_id> 

Delete all containers and all images
 # Delete all containers
 docker rm $(docker ps -a -q)
 # Delete all images
 docker rmi $(docker images -q)
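The bulk-delete commands rely on shell command substitution: `$(docker ps -a -q)` expands to the list of container IDs, which then becomes the argument list of `docker rm`. A docker-free sketch of the same mechanism:

```shell
# the inner command's stdout becomes the outer command's arguments,
# exactly like $(docker ps -a -q) feeding docker rm
ids=$(printf '%s\n' id1 id2 id3)   # stand-in for `docker ps -a -q`
echo rm $ids                       # unquoted expansion splits IDs into separate arguments
```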


Little docker-compose last

docker-compose is useful for running groups of related containers. As an example, take the development of a new microservice: suppose we are writing service3, which needs to talk to the already existing service1 and service2. For development purposes, service1 and service2 can be conveniently and quickly brought up from the repository via docker-compose .

Let's write a simple docker-compose.yml that will bring up a container with our application and a container with nginx (I do not know why we would need nginx locally during development, but it will do as an example) and configure the latter as a reverse proxy for our application.

 # docker-compose file format version
 version: '3.3'
 services:
   # our application
   service1:
     container_name: service1_container
     # the image we built earlier
     image: my-cool-service:1.0
     # environment variables passed into the container
     environment:
       - MyTestParam=DBForService1
   # nginx as a reverse proxy
   reverse-proxy:
     container_name: reverse-proxy
     image: nginx
     # forward host port 777 to nginx's port 80
     ports:
       - "777:80"
     # mount our nginx config into the container
     volumes:
       - ./test_nginx.conf:/etc/nginx/conf.d/default.conf

At startup, docker-compose creates a local network between the services described in the docker-compose file and assigns each service a hostname matching its service name. This lets the services address each other conveniently. Let's use this property and write a simple configuration file for nginx:

 upstream myapp1 {
     # service1 resolves by service name on the network created by docker-compose
     server service1:5000;
 }

 server {
     listen 80;

     location / {
         proxy_pass http://myapp1;
     }
 }
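After the stack is brought up with `docker-compose up` below, the compose DNS can be verified from inside a container; `getent` is present in the debian-based nginx image (an assumption about that image's contents):

```shell
# resolve the service1 hostname from inside the reverse-proxy container
docker exec reverse-proxy getent hosts service1
```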

Call:

 docker-compose up 

from the directory with docker-compose.yml and get nginx as a reverse proxy for our application. Imagine that in place of nginx there is something genuinely useful to you, for example a database when running tests.
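Putting it together, a sketch of the full check cycle (port 777 and the service names come from the compose file above):

```shell
# bring up both containers in the background
docker-compose up -d
# hit the application through nginx on the forwarded port
curl http://localhost:777/api/values
# tear the stack down, removing the containers and the network
docker-compose down
```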

Conclusion

We created a dotnet core application on Linux, learned how to build and run a docker image for it, and also learned a little about docker-compose .

I hope that this article will help someone to save some time on the way to master docker and / or dotnet core.

A request to the readers: if anyone has experience running dotnet core in production on Linux (not necessarily in docker, although docker is especially interesting), please share your impressions in the comments. It would be especially interesting to hear about real problems and how they were solved.

Source: https://habr.com/ru/post/332582/

