Microsoft did a great job creating a new web development platform. ASP.NET Core resembles the old ASP.NET MVC in little more than the MVC architecture itself. The difficulties and familiar pain points of the old platform are gone: built-in DI and lightweight view components have appeared, HTTP modules and handlers have given way to middleware, and so on. Together with cross-platform support and good performance, all this makes the platform a very attractive choice. In this article I will describe how I solved a specific task: logging lengthy requests to third-party APIs, to make incident analysis more convenient.
Instead of an intro ...
Diploma projects and theses usually state a problem somewhere near the beginning. But what I describe below is probably not even a problem, rather a desire to make the application more usable and the world around it a little bit nicer. This is one of those moments in my life where I ask myself: can I do better? And then I try to find a way. Sometimes I succeed; sometimes I conclude that doing better is either impossible, or would take so much effort that it would lose all meaning. In the end, this is not something that will turn the world around or solve poverty on the planet, so it is hardly worth endless time and energy. But either way, I gain experience from it that is bound to come in handy later. In this article I describe something that, it seems to me, I did manage to do better.
Task statement
To give some context: the company where I work develops high-load websites. The applications I have worked on so far are, for the most part, an authorization layer in front of the customer's internal APIs. There is little logic in them, and it is split in roughly equal proportions between the frontend and the backend. Most of the key business logic lives in those APIs, and the sites visible to the end customer are merely their concierges.
Sometimes those third-party APIs respond to user requests very slowly; sometimes this is temporary, sometimes permanent. Since the applications are high-load by nature, we cannot let user requests hang for a long time, otherwise we risk an ever-growing number of open sockets, a rapidly filling queue of unanswered requests on the server, and a crowd of clients wondering why their favorite site takes so long to open. We cannot do anything about the API itself. So, since very old times, we have enforced a limit on the load balancer of 5 seconds per request. First of all this constrains the application architecture, pushing it toward even more asynchronous interaction, and as a result it solves the problem above. The site itself opens quickly, and loading indicators spin on the page until they eventually show the user some result. Whether that result is what the user expected to see, or an error, no longer matters much here; that would be a completely different story ...
An attentive reader will note: if the restriction applies to all requests, then it also applies to AJAX requests. Fair enough. I know of no way to distinguish an AJAX request from a regular page navigation that works 100% of the time. Therefore, long AJAX requests are implemented according to the following principle: the client makes a request to the server, the server creates a Task, associates a GUID with it, and returns that GUID to the client; the client then receives the result by that GUID once the response arrives from the API. At this point we have almost reached the essence of my contrived problem.
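The GUID-and-polling scheme described above can be sketched roughly like this. All names here (TaskStore, JobController, CallSlowApiAsync) are hypothetical and illustrative, not from the actual project:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// Keeps long-running tasks keyed by GUID so the client can poll for results.
public class TaskStore
{
    private readonly ConcurrentDictionary<Guid, Task<string>> _tasks =
        new ConcurrentDictionary<Guid, Task<string>>();

    public Guid Start(Func<Task<string>> job)
    {
        var id = Guid.NewGuid();
        _tasks[id] = job(); // start the long API call, do NOT await it here
        return id;
    }

    // Returns the result if the task has finished, otherwise null.
    public async Task<string> TryGetResultAsync(Guid id)
    {
        Task<string> task;
        if (_tasks.TryGetValue(id, out task) && task.IsCompleted)
        {
            _tasks.TryRemove(id, out task);
            return await task;
        }
        return null;
    }
}

public class JobController : Controller
{
    private readonly TaskStore _store;
    public JobController(TaskStore store) => _store = store;

    [HttpPost("job")]
    public IActionResult StartJob() =>
        Ok(_store.Start(CallSlowApiAsync)); // returns the GUID immediately

    [HttpGet("job/{id}")]
    public async Task<IActionResult> Poll(Guid id)
    {
        var result = await _store.TryGetResultAsync(id);
        return result != null ? (IActionResult)Ok(result) : Accepted();
    }

    private static Task<string> CallSlowApiAsync() => Task.FromResult("done"); // stub
}
```

The controller action completes well within the balancer's 5-second budget; only the polling endpoint is hit repeatedly until the API answers.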
All requests to these APIs and their responses must be logged and stored for later debriefing, and the logs should carry as much useful information as possible: the action/controller called, IP, URI, user login, and so on. When I first used NLog to log requests and responses in my ASP.NET Core application, I was, in principle, not surprised to see something like this in ElasticSearch:
2017-09-23 23:15:53.4287|0|AspNetCoreApp.Services.LongRespondingService|TRACE|/| DoJob method is starting (AspNetCoreApp.Services.LongRespondingService.DoJob:0)| url:http:///
At the same time, the NLog configuration looked perfectly correct, with the following set as the layout:
${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|${aspnet-mvc-controller}/${aspnet-mvc-action}| ${message} (${callsite}:${callsite-linenumber})| url:${aspnet-request-url}
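For context, a minimal nlog.config wiring this layout to a file target might look like the following. The target name and file path here are illustrative; the `${aspnet-*}` renderers come from the NLog.Web.AspNetCore package:

```xml
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <extensions>
    <!-- brings in the ${aspnet-*} layout renderers -->
    <add assembly="NLog.Web.AspNetCore"/>
  </extensions>
  <targets>
    <target name="file" xsi:type="File" fileName="app.log"
            layout="${longdate}|${event-properties:item=EventId.Id}|${logger}|${uppercase:${level}}|${aspnet-mvc-controller}/${aspnet-mvc-action}| ${message} (${callsite}:${callsite-linenumber})| url:${aspnet-request-url}"/>
  </targets>
  <rules>
    <logger name="*" minlevel="Trace" writeTo="file"/>
  </rules>
</nlog>
```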
The problem here is that the response from the API arrives after the client's request has already completed (the request only returns the GUID). It was at this point that I started pondering possible ways to solve the problem ...
Possible solutions to the problem
Of course, any problem can be attacked in several ways. Simply ignoring it is also a technique, and sometimes not one to be neglected. But let's talk about real solutions and their consequences.
Pass the task ID to the API call method.
Perhaps this is the first thing that comes to mind: we generate the GUID, log it before leaving the action, and pass it on to the API service.
The problems of this approach are obvious: we must pass this ID to absolutely every method that accesses a third-party API. And if we ever want to reuse that code somewhere else, where such functionality is not required, the extra parameter will get badly in the way.
The situation gets worse if we want to process, simplify, or aggregate the data before handing it to the client. That is business logic, and it belongs in a separate component. So this component, which should stay as independent as possible, would also have to work with the identifier! That is not how things are done; let's consider other ways.
Save task ID in CallContext
If we think about what tools the framework offers for this purpose, the first thing that comes to mind is CallContext with its LogicalSetData/LogicalGetData methods. Using them, you can save the task ID in the CallContext, and .NET (to be precise, .NET 4.5 and later) makes sure that new threads and tasks automatically see the same data. Internally the framework implements this with a pattern somewhat resembling Memento, which every way of starting a new thread or task goes through.
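A minimal sketch of this on the full .NET Framework (CallContext lives in System.Runtime.Remoting.Messaging and is not available in .NET Core):

```csharp
using System;
using System.Runtime.Remoting.Messaging;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // Stored in the logical call context of the current flow of execution.
        CallContext.LogicalSetData("RequestId", Guid.NewGuid());

        await Task.Run(() =>
        {
            // The value is visible here even though we are on another thread:
            // the logical call context flows with the ExecutionContext.
            var id = (Guid)CallContext.LogicalGetData("RequestId");
            Console.WriteLine($"Logged with request id {id}");
        });
    }
}
```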
Now that we know how to save the ID and later retrieve it inside our task, we can include this identifier in every logged message. You can do this directly at the logging call site, or, to avoid cluttering the data access layer, you can use your logger's extension points; NLog, for example, has Layout Renderers.
In extreme cases, you can also write your own logger. In ASP.NET Core, all logging goes through the interfaces in the Microsoft.Extensions.Logging namespace, which are injected into classes via DI. So we only need to implement two interfaces: ILogger and ILoggerProvider. I think this option may be useful if your logger does not support extensions.
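A bare-bones sketch of such a logger, which prepends a task ID to every message. How the ID is obtained (GetCurrentTaskId) is left as an illustrative stub; in practice it would read the CallContext/AsyncLocal value discussed above:

```csharp
using System;
using Microsoft.Extensions.Logging;

public class TaskIdLoggerProvider : ILoggerProvider
{
    public ILogger CreateLogger(string categoryName) => new TaskIdLogger(categoryName);
    public void Dispose() { }
}

public class TaskIdLogger : ILogger
{
    private readonly string _category;
    public TaskIdLogger(string category) => _category = category;

    public IDisposable BeginScope<TState>(TState state) => null;
    public bool IsEnabled(LogLevel logLevel) => true;

    public void Log<TState>(LogLevel logLevel, EventId eventId, TState state,
                            Exception exception, Func<TState, string, string> formatter)
    {
        // Prepend the ambient task id so every line can be correlated later.
        Console.WriteLine(
            $"[{GetCurrentTaskId()}] {logLevel}: {_category}: {formatter(state, exception)}");
    }

    private static Guid GetCurrentTaskId() => Guid.Empty; // hypothetical stub
}
```

The provider is registered with `loggerFactory.AddProvider(new TaskIdLoggerProvider())` (or via DI), after which every injected `ILogger<T>` goes through it.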
And to make everything work as it should, I recommend reading this article by Stephen Cleary. It has no .NET Core angle (in 2013 .NET Core simply did not exist yet), but you can definitely glean something useful from it.
The disadvantage of this approach is that only the task's messages carry the identifier, so to get the full picture you have to find the HTTP request with the same ID. As for performance, even if there is some overhead, it will be incommensurably small compared to everything else.
But what if we figure out why it does not work?
As I said before, messages like this appear in our log file:
2017-09-23 23:15:53.4287|0|AspNetCoreApp.Services.LongRespondingService|TRACE|/| DoJob method is starting (AspNetCoreApp.Services.LongRespondingService.DoJob:0)| url:http:///
That is, everything related to HttpContext is simply null. It seems understandable: the request has finished executing, so NLog can no longer obtain the data; there is simply no HttpContext left to reference.
Finally, I decided to look at how NLog actually gets a reference to the HttpContext outside a controller. Since SynchronizationContext and HttpContext.Current are gone in .NET Core (yes, Stephen Cleary again), there had to be some other way to do it.

Digging through the NLog sources, I found a certain IHttpContextAccessor. The thirst to understand what was going on made me climb into GitHub once again and see what this magic thing with a single property was. It turned out to be just an abstraction over AsyncLocal, which is essentially the new version of the logical call context (the very same LogicalSetData/LogicalGetData). By the way, in the .NET Framework this was not always the case.
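The replacement is easy to demonstrate: a value stored in an AsyncLocal<T> flows into child tasks with the ExecutionContext, exactly as LogicalSetData used to, and this is precisely the mechanism HttpContextAccessor relies on:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    // AsyncLocal<T> values travel with the async flow of control.
    private static readonly AsyncLocal<string> _requestId = new AsyncLocal<string>();

    static async Task Main()
    {
        _requestId.Value = "req-42";

        await Task.Run(() =>
        {
            // Visible here: the value was captured by the ExecutionContext
            // when the task was started. Prints "req-42".
            Console.WriteLine(_requestId.Value);
        });
    }
}
```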
After that I asked myself: why, then, does it not work? We start the Task in the standard way, there is no unmanaged code here ... Running the debugger to see what happens to the HttpContext at the moment the logging method is called, I saw that the HttpContext is there, but all of its properties have been reset to null, except for Request.Scheme, which at that moment equals "http". That is why my log contained the strange "http:///" instead of a URL.
So it turns out that to improve performance, ASP.NET Core resets and reuses HttpContext instances somewhere inside itself. Apparently such subtleties, taken together, are what give it a significant edge over the old ASP.NET MVC.
What could I do about it? Just save the current state of the HttpContext! I wrote a simple service with a single CloneCurrentContext method and registered it in the DI container.
HttpContextPreserver

public class HttpContextPreserver : IHttpContextPreserver
{
    private readonly IHttpContextAccessor _httpContextAccessor;
    private readonly ILogger _logger;

    public HttpContextPreserver(IHttpContextAccessor httpContextAccessor, ILogger<HttpContextPreserver> logger)
    {
        _httpContextAccessor = httpContextAccessor;
        _logger = logger;
    }

    public void CloneCurrentContext()
    {
        var httpContext = _httpContextAccessor.HttpContext;

        // Copy only the request data we want to see in the logs.
        var feature = httpContext.Features.Get<IHttpRequestFeature>();
        feature = new HttpRequestFeature()
        {
            Scheme = feature.Scheme,
            Body = feature.Body,
            Headers = new HeaderDictionary(feature.Headers.ToDictionary(kvp => kvp.Key, kvp => kvp.Value)),
            Method = feature.Method,
            Path = feature.Path,
            PathBase = feature.PathBase,
            Protocol = feature.Protocol,
            QueryString = feature.QueryString,
            RawTarget = feature.RawTarget
        };

        var itemsFeature = httpContext.Features.Get<IItemsFeature>();
        itemsFeature = new ItemsFeature()
        {
            Items = itemsFeature?.Items.ToDictionary(kvp => kvp.Key, kvp => kvp.Value)
        };

        var routingFeature = httpContext.Features.Get<IRoutingFeature>();
        routingFeature = new RoutingFeature()
        {
            RouteData = routingFeature.RouteData
        };

        var connectionFeature = httpContext.Features.Get<IHttpConnectionFeature>();
        connectionFeature = new HttpConnectionFeature()
        {
            ConnectionId = connectionFeature?.ConnectionId,
            LocalIpAddress = connectionFeature?.LocalIpAddress,
            RemoteIpAddress = connectionFeature?.RemoteIpAddress,
        };

        var collection = new FeatureCollection();
        collection.Set(feature);
        collection.Set(itemsFeature);
        collection.Set(connectionFeature);
        collection.Set(routingFeature);

        // Replace the ambient context with the detached copy, so background
        // tasks keep seeing valid data after the original request ends.
        var newContext = new DefaultHttpContext(collection);
        _httpContextAccessor.HttpContext = newContext;
    }
}

public interface IHttpContextPreserver
{
    void CloneCurrentContext();
}
I did not use a deep cloner, as that would drag heavy reflection into the project, and I only need this in one place. Instead, I simply create a new HttpContext based on the existing one and copy into it only what is useful to see in the log when analyzing incidents (action/controller, URL, IP, etc.). Not all information is copied.
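Wiring the service up might look roughly like this (the controller code is illustrative; CallSlowApiAndLogAsync is a hypothetical stand-in for the real API call):

```csharp
// In Startup.ConfigureServices: register the accessor and the preserver.
public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
    services.AddScoped<IHttpContextPreserver, HttpContextPreserver>();
    services.AddMvc();
}

// In a controller: snapshot the context BEFORE handing work to a background task.
public IActionResult StartJob([FromServices] IHttpContextPreserver preserver)
{
    var id = Guid.NewGuid();
    preserver.CloneCurrentContext();            // detach a copy of the HttpContext
    Task.Run(() => CallSlowApiAndLogAsync(id)); // the logger now sees the clone
    return Ok(id);                              // client polls by this GUID
}
```

Because the accessor's HttpContext rides on an AsyncLocal, the cloned context flows into the background task and stays intact even after ASP.NET Core resets the original one.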
Now, running the application, I saw roughly the following happy lines:
2017-10-08 20:29:25.3015|0|AspNetCoreApp.Services.LongRespondingService|TRACE|Home/Test| DoJob method is starting (AspNetCoreApp.Services.LongRespondingService.DoJob:0)| url:http://localhost/Test/1
2017-10-08 20:29:34.3322|0|AspNetCoreApp.Services.LongRespondingService|TRACE|Home/Test| DoJob method is ending (AspNetCoreApp.Services.LongRespondingService+<DoJob>d__3.MoveNext:0)| url:http://localhost/Test/1
And that was a small victory for me, which I am sharing here with you. What do you think?
To make sure I had not fooled myself with a false positive, I wrote a small functional load test, which can be found in my repository on GitHub along with the service. With 5000 simultaneous tasks, the test passes.

By the way, thanks to the ASP.NET Core architecture, such tests are easy and natural to write: you just start a full-fledged server inside the test and access it through a real socket:
Server initialization by URL

protected TestFixture(bool fixHttpContext, string solutionRelativeTargetProjectParentDir)
{
    var startupAssembly = typeof(TStartup).Assembly;
    var contentRoot = GetProjectPath(solutionRelativeTargetProjectParentDir, startupAssembly);
    Console.WriteLine($"Content root: {contentRoot}");

    var builder = new WebHostBuilder()
        .UseKestrel()
        .UseContentRoot(contentRoot)
        .ConfigureServices(InitializeServices)
        .UseEnvironment(fixHttpContext ? "Good" : "Bad")
        .UseStartup(typeof(TStartup))
        .UseUrls(BaseAddress);

    _host = builder.Build();
    _host.Start();
}
Yet another reason to use ASP.NET Core in your projects.
Results
I really liked ASP.NET Core in every aspect of its implementation. It combines great flexibility with the lightness of the whole platform. Thanks to abstractions over absolutely all of the functionality, you can do whatever you need and tailor the entire platform to yourself, your team, and your development practices. Cross-platform support is not yet polished, but Microsoft is committed to it, and someday (perhaps not soon) they should get there.
Github link