All in all, 50 cups of coffee were enough.
In addition to the rules of thumb outlined above, we are publishing a short note about the points that deserve close attention so that nothing breaks in production or along the way. The note was written hot on the heels of releasing the mobile service that had fully migrated to .NET Core (it all started here). We managed to pull this off almost unnoticed by the customer, practically without stopping the main development process.
Below you will find a ready-made action plan, a very dense checklist, and this picture for the mood:
When the code is being fundamentally rewritten, the service needs as much time as possible to settle in, so that there is time to fix all the flaws in the test environment.
Why is it important not to start this too early? Because you have to maintain two code branches, one on the new .NET and one on the old: at any moment an urgent task may land or a demo of new features may be needed, and then the changes have to go into the old stable branch. To worry about this as little as possible, it is better to keep the transition period short.
By the way, while working with the code we quickly came to the conclusion that it is more convenient to keep two local copies of the repository. It is simpler and more convenient than switching between two massive branches.
The .NET Core implementation of the WCF client is still far from ideal. Even though the old sore spots have more or less been fixed, newer versions still force you to resort to workarounds (1, 2).
For the record: on .NET Core 2.0, the stable working version of WCF is 4.4.2 from the MyGet repository. It, for example, has no problems with premature timeouts.
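To show where those timeouts live, here is a minimal sketch of creating a WCF client on .NET Core with the timeouts set explicitly; the service contract, URL and values are placeholders for illustration, not the actual service from this project.

using System;
using System.ServiceModel;

// Hypothetical service contract, used only for illustration.
[ServiceContract]
public interface IMobileBackend
{
    [OperationContract]
    string GetStatus();
}

public static class BackendClientFactory
{
    public static IMobileBackend Create()
    {
        // Explicit timeouts so that long calls are not cut off by the defaults.
        var binding = new BasicHttpBinding
        {
            OpenTimeout = TimeSpan.FromSeconds(30),
            SendTimeout = TimeSpan.FromMinutes(2),
            ReceiveTimeout = TimeSpan.FromMinutes(2)
        };

        var endpoint = new EndpointAddress("http://backend.example.local/MobileBackend.svc"); // placeholder
        var factory = new ChannelFactory<IMobileBackend>(binding, endpoint);
        return factory.CreateChannel();
    }
}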
At the start of the migration we were on .NET Core 2.0; meanwhile, Microsoft is betting on .NET Core 2.1. If you want to admire the Redmond team's success in optimizing the platform, read about the progress the Bing search engine made after upgrading to the new version (spoiler: latency dropped by 34%!).
We also upgraded to .NET Core 2.1 and WCF 4.5.3, and did not forget to point the Dockerfile at the fresh microsoft/dotnet base image: 2.1-aspnetcore-runtime. What a surprise it was to see an image size of 0.5 GB instead of 1.4 GB (and that is a Windows image, mind you).
We have two environments at our disposal. We left the demo environment with the old version as a reference; the new service was rolled out to the test environment to be broken in by developers and testers.
There was some confusion because developers usually work with the test environment while testers mostly work with the demo one. Whenever the old service needed a refresh, the situation was exactly the opposite of the usual one, so a discussion and a cheat sheet on where to look for what proved useful.
To run a .NET Core service in IIS, you need to install the ASP.NET Core Module that ships with the runtime.
Switch the AppPool's .NET CLR version to No Managed Code.
In the solution's standard web.config, it is important not to forget to set the desired requestTimeout and to disable the WebDAV module if the API has DELETE methods (see the config sketch after this list).
Next, there are two options for publishing the service to IIS:
Either one lets you stop the worker process and unlock the files being executed; otherwise you will get an error saying the files are locked and cannot be overwritten.
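As a reference point for the web.config items above, here is a minimal sketch based on the stock ASP.NET Core 2.1 template; the process path, dll name and timeout value are illustrative:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <!-- Remove WebDAV so DELETE requests reach the application. -->
    <modules>
      <remove name="WebDAVModule" />
    </modules>
    <handlers>
      <remove name="WebDAV" />
      <add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModule" resourceType="Unspecified" />
    </handlers>
    <!-- requestTimeout covers long-running calls proxied to the Kestrel process. -->
    <aspNetCore processPath="dotnet"
                arguments=".\MobileService.Api.dll"
                requestTimeout="00:10:00"
                stdoutLogEnabled="false" />
  </system.webServer>
</configuration>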
We dropped NLog in favor of Serilog for logging and lost automatic log compression along the way: Serilog simply does not have that feature. Standard Windows tools can save you here: enable NTFS compression in the properties of the log directory.
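For reference, a minimal sketch of a rolling file logger with Serilog (assuming the Serilog.Sinks.File sink; the path and retention settings are illustrative, and compression is left to NTFS):

using Serilog;

public static class Logging
{
    public static ILogger CreateLogger()
    {
        // One file per day; Serilog rotates and prunes old files but does not compress them,
        // so NTFS compression is enabled on the target directory instead.
        return new LoggerConfiguration()
            .MinimumLevel.Information()
            .WriteTo.File(
                path: @"C:\Logs\MobileService\log-.txt",   // placeholder path
                rollingInterval: RollingInterval.Day,
                retainedFileCountLimit: 31)
            .CreateLogger();
    }
}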
Here is a very condensed checklist of the most fragile places:
if (env.IsDevelopment())
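The environment check above is one of those fragile places: behavior differs between dev and production. A minimal sketch of how it is typically wired up in an ASP.NET Core 2.1 Configure method (the error-handling route here is illustrative, not the project's actual one):

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            // Detailed exception pages only in the development/test environments.
            app.UseDeveloperExceptionPage();
        }
        else
        {
            // Placeholder error route; the real production handler goes here.
            app.UseExceptionHandler("/error");
        }

        app.UseMvc();
    }
}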
Our mobile application uses a code generator that builds the API client from the swagger.json description, so it was important to keep the difference from the original description minimal. The latest version of Swashbuckle.AspNetCore significantly changed both the interface and the generated swagger.json, so we had to roll back to the old Swashbuckle.AspNetCore 1.2.0 and add a couple of filters.
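Which filters you need depends on what exactly diverges in your swagger.json; as a sketch, this is roughly how Swashbuckle.AspNetCore 1.2.0 is registered with custom document and operation filters (the filter classes and titles below are placeholders, not the ones from this project):

using Microsoft.Extensions.DependencyInjection;
using Swashbuckle.AspNetCore.Swagger;
using Swashbuckle.AspNetCore.SwaggerGen;

// Placeholder filters: in a real project they nudge the generated description
// back towards what the mobile client's code generator expects.
public class CompatibilityDocumentFilter : IDocumentFilter
{
    public void Apply(SwaggerDocument swaggerDoc, DocumentFilterContext context)
    {
        // e.g. adjust basePath or definition names here
    }
}

public class CompatibilityOperationFilter : IOperationFilter
{
    public void Apply(Operation operation, OperationFilterContext context)
    {
        // e.g. restore the operationId format expected by the generator
    }
}

public static class SwaggerRegistration
{
    public static IServiceCollection AddAppSwagger(this IServiceCollection services)
    {
        return services.AddSwaggerGen(c =>
        {
            c.SwaggerDoc("v1", new Info { Title = "Mobile API", Version = "v1" });
            c.DocumentFilter<CompatibilityDocumentFilter>();
            c.OperationFilter<CompatibilityOperationFilter>();
        });
    }
}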
In our case, the production environment consists of two nodes: an active one and a passive one.
To switch to the new service unnoticed, we duplicated the pool and the site on each node and wrote a script that switches the binding between the old and the new site.
That way, in case of an emergency, we could quickly switch back to the old version.
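The switching script itself is not part of this note; as a sketch of the idea, here is how a binding can be moved from one IIS site to another with the Microsoft.Web.Administration API (site names and the binding are placeholders, and an HTTPS binding would additionally need its certificate wired up):

using System.Linq;
using Microsoft.Web.Administration;

public static class BindingSwitcher
{
    // Moves the public binding from the currently live site to the other one.
    public static void Switch(string fromSite, string toSite,
                              string bindingInformation = "*:80:api.example.local", // placeholder
                              string protocol = "http")
    {
        using (var iis = new ServerManager())
        {
            var source = iis.Sites[fromSite];
            var target = iis.Sites[toSite];

            var binding = source.Bindings.FirstOrDefault(
                b => b.BindingInformation == bindingInformation && b.Protocol == protocol);
            if (binding != null)
            {
                source.Bindings.Remove(binding);
            }

            target.Bindings.Add(bindingInformation, protocol);
            iis.CommitChanges();
        }
    }
}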
Then, after deploying to production, we spent a week making sure the service was viable and gave the green light for the mobile application release. Life on the project safely returned to its usual course.
Our service is now fully ready to be packed into a Docker container for delivery to a cluster; we are ready to deploy both to Kubernetes and to Service Fabric.
Preparations for presenting the new infrastructure to the customer are now in full swing. We will tell you about our achievements in the next installment, stay tuned ;)
Source: https://habr.com/ru/post/422441/