
Using Visual Studio Application Insights - Test Engineer's Experience

Many thanks to Igor Shcheglovitov, Senior Test Engineer at Kaspersky Lab, for preparing this article and sharing his practical experience. The rest of our Azure articles can be found under the azureweek tag, and our testing articles under the mstesting tag.

Application Insights (hereinafter simply AI) is a mechanism for collecting and analyzing application telemetry: various performance counters, user events (logs), and so on. It currently supports not only ASP.NET applications but also other platforms, including Java, iOS, and JavaScript.



Once the AI package is connected and configured, it starts collecting certain metrics by default; for web applications this includes information about requests to the web server as well as various server counters (for example, request duration and CPU load). In addition, AI has an extensive API that lets you save custom business counters and logs.
Our team develops Azure cloud services. Logs are stored in Azure Storage, and performance counters are read and analyzed automatically by a separate monitoring server. For logging we use Serilog. Unlike many other loggers, it supports structured logs out of the box, preserving individual properties of each event.
For many popular loggers, such as Serilog, log4net, and NLog, there are add-ons that redirect logs to AI. If you use one of these loggers and also have an active Azure subscription, you can try the AI features in free mode.
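To illustrate what structured logging means, here is a minimal sketch using the standard Serilog API (no sink is configured here; in this article the AI sink is added below, so this snippet only shows how properties are captured):

```csharp
using Serilog;

class StructuredLoggingDemo
{
    static void Main()
    {
        // A bare logger with no sinks, purely to demonstrate the call shape.
        Log.Logger = new LoggerConfiguration().CreateLogger();

        // {OrderId} and {Elapsed} are captured as separate, named properties
        // of the log event, not just substituted into the message string.
        Log.Information("Processed order {OrderId} in {Elapsed} ms", 42, 17.5);

        Log.CloseAndFlush();
    }
}
```

It is these named properties that later become individually indexed, filterable fields in AI.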

Below is an example of setting Serilog to save logs in AI:
Via NuGet, install the Serilog.Sinks.ApplicationInsights package:


(this package automatically pulls in the Application Insights API package it depends on)

Next, when configuring the Serilog logger, specify the ApplicationInsights sink, for example like this:

Log.Logger = new LoggerConfiguration().WriteTo.ApplicationInsights(string.Empty).CreateLogger(); 

In the preview version of the Azure portal, create a new Application Insights container



After creation, copy the instrumentation key from the “Basic” tab



Now, before the application starts, the following should be executed in code:
 TelemetryConfiguration.Active.InstrumentationKey = "{-}"; 

If you use NLog for logging, then to redirect logs to AI you only need to install the “Application Insights NLog Target” package


After that, the section automatically appears in the application config:
 <nlog>
   <extensions>
     <add assembly="Microsoft.ApplicationInsights.NLogTarget"/>
   </extensions>
   <targets>
     <target type="ApplicationInsightsTarget" name="aiTarget"/>
   </targets>
   <rules>
     <logger name="*" minlevel="Trace" writeTo="aiTarget"/>
   </rules>
 </nlog>

The main problem with using NLog here is that it only allows writing plain strings, without custom properties. Besides using loggers, custom events can be sent to AI via the API; here is an example:

 var telemetry = new TelemetryClient();
 ...
 try
 {
     var eventTelemetry = new EventTelemetry("log");
     eventTelemetry.Properties["CustomProperty"] = "test";
     telemetry.Track(eventTelemetry);
 }
 catch (Exception ex)
 {
     var properties = new Dictionary<string, string>
         { { "CustomProperty", "test" } };
     telemetry.TrackException(ex, properties);
 }

Now, after launching the application, the logs will be redirected to AI.

Unlike log forwarders, explicit use of the AI API allows you to write telemetry to different AI containers. This is convenient when you have a large application consisting of many components: services, a web interface, and so on. In this case, when sending telemetry to AI, you can explicitly set the InstrumentationKey property on TelemetryClient instances.

For example, suppose you have created three different containers for your application: one for performance analysis, another for logs, and a third for monitoring user activity. Regarding the latter, the AI API provides a special TrackPageView method for analyzing user activity in UI applications. You can record which pages (tabs) a particular user visited (remember that you can save events to AI directly from JavaScript code) and draw conclusions based on that data.
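A minimal sketch of this routing might look as follows (the instrumentation keys below are placeholders, not real ones):

```csharp
using Microsoft.ApplicationInsights;

class TelemetryRouting
{
    static void Main()
    {
        // Hypothetical keys: each TelemetryClient writes to its own AI container.
        var perfTelemetry = new TelemetryClient { InstrumentationKey = "<perf-container-key>" };
        var logTelemetry  = new TelemetryClient { InstrumentationKey = "<logs-container-key>" };
        var userTelemetry = new TelemetryClient { InstrumentationKey = "<activity-container-key>" };

        perfTelemetry.TrackMetric("QueueLength", 42);
        logTelemetry.TrackTrace("Service started");

        // TrackPageView records which page (tab) the user visited.
        userTelemetry.TrackPageView("SettingsTab");
    }
}
```

With placeholder keys nothing is actually delivered; the point is only that the container is chosen per client instance, not globally.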

We should also add that you can configure export of AI telemetry to Azure Blobs for further analysis. A convenient search interface lets you find the desired user events through the Azure portal. The screenshot shows how the properties of one of my saved logs are displayed.



Moreover, each logged property is indexed separately, and you can filter logs by a specific value of a property through the filter menu:



This is very convenient because it allows you to track the appearance of new events, for example, new kinds of exceptions.

The log search interface itself is very simple; besides the standard filters, you can write various queries against your logs (for more information, see the documentation).

In addition to logs, if you have a web application (for example, MVC Web API), then by installing the Application Insights Web NuGet package (without any additional settings) AI will receive information on all web requests.



In addition, AI will build dependencies related to requests. For example, if a database call was made during an HTTP request, you will see this link in the AI logs, along with information about the related request (URL, return code, and so on).

Additionally, various server performance counters (CPU, memory, and so on) are available on the “Servers” tab. By default there are not many of them, but you can add the counters you need via ApplicationInsights.config.
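For example, an extra counter can be declared in the PerformanceCollectorModule section of ApplicationInsights.config roughly like this (the specific counter path and ReportAs name here are illustrative):

```xml
<Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.PerformanceCollectorModule, Microsoft.AI.PerfCounterCollector">
  <Counters>
    <Add PerformanceCounter="\Process(??APP_WIN32_PROC??)\Handle Count"
         ReportAs="Process handle count"/>
  </Counters>
</Add>
```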



Using the alert rules functionality, you can create a rule that sends a notification to a specified e-mail address when it fires (for example, when a CPU threshold is exceeded).

Our team develops asynchronous services. We do not use transactions in the strict sense of the word. To ensure data integrity during long-running business processes, where the state of database objects changes more than once per API call, we use queues and Service Bus (for example: an API method validates data, changes the object's state, sends a command message to the queue, and returns the result to the caller; the queue handler picks up the message, performs some action, puts a new message in the queue, and so on).

The number of log records created by one such business process can reach dozens. To tie them together, we use a special custom conversation header, which is forwarded in all commands and messages throughout the execution of the whole process. This header can be set by the caller and is logged as a custom property. Thus, by specifying the conversationId value in the search window, you can get the whole chain of logs for a business process:



In addition to operations, this approach is very useful in integration testing.
I develop integration tests for our cloud services. Automated test scripts live in MTM and run through a special build. The tests are mostly client-server. The cause of a failure is far from always clear from the error message of a failed test; often server logs are needed for diagnostics. Searching for and reading the logs separately for each test is inconvenient and takes a lot of time. This is where MSTest custom diagnostic adapters came to my rescue.

In short, a diagnostic adapter in this context is an assembly containing a special class that inherits from Microsoft.VisualStudio.TestTools.Execution.DataCollector. If the test agent is configured to use this diagnostic adapter, then during test execution the corresponding events are raised, whose handling logic is defined in the OnTestCaseStart and OnTestCaseEnd methods. In these methods you have access to the test context, including its name, state, and so on. Here is an example implementation of a diagnostic adapter:
 [DataCollectorConfigurationEditor("configurationeditor://CompanyName/LogConfigEditor/4.0")]
 [DataCollectorTypeUri("datacollector://CompanyName/LogCollector/4.0")]
 [DataCollectorFriendlyName("log collector.")]
 public class LogCollecter : DataCollector
 {
     public override void Initialize(
         XmlElement configurationElement,
         DataCollectionEvents events,
         DataCollectionSink sink,
         DataCollectionLogger logger,
         DataCollectionEnvironmentContext environmentContext)
     {
         _stopwatch = new Stopwatch();
         _dataEvents = events; // The test events
         _dataLogger = logger; // The error and warning log
         _dataSink = sink;
         Configuration.Init(configurationElement);
         events.TestCaseStart += OnTestCaseStart;
         events.TestCaseEnd += OnTestCaseEnd;
     }

     private void OnTestCaseStart(object sender, TestCaseEventArgs e)
     {
         _stopwatch.Start();
     }

     private void OnTestCaseEnd(object sender, TestCaseEndEventArgs e)
     {
         Guard.CheckNotNull(e, "TestCaseFailedEventArgs is null");
         var testCaseEndDate = DateTime.Now;
         if (_stopwatch.IsRunning)
         {
             _stopwatch.Stop();
         }
         if (e.TestOutcome == TestOutcome.Failed)
         {
             // Attach the collected logs to the results of the failed test.
             _dataSink.SendFileAsync(e.Context, Configuration.ZipFilePath, true);
         }
     }

     private Stopwatch _stopwatch;
     private DataCollectionEvents _dataEvents;
     private DataCollectionLogger _dataLogger;
     private DataCollectionSink _dataSink;
 }

After implementing the diagnostic adapter, build it and place the resulting assembly on all servers with installed test agents (including the machine where the Lab Management settings are configured), into the Microsoft Visual Studio 12.0\Common7\IDE\PrivateAssemblies\DataCollectors folder.

Next, run MTM and tick the adapter.



For more information about diagnostic adapters, their configuration forms, and settings, see msdn.microsoft.com/en-us/library/dd286727.aspx.

In each test class, the TestInitialize method was extended with initialization code for a conversationId, which was subsequently put into headers when creating WCF and HTTP clients. This conversationId was picked up by the diagnostic adapter, and when an error occurred, the conversationId appeared in the test results. Knowing the conversationId, you can very easily see in AI all the server events associated with a specific test.
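A minimal sketch of this idea (the header name X-Conversation-Id and the factory method are assumptions for illustration; the article does not name the team's actual header):

```csharp
using System;
using System.Linq;
using System.Net.Http;

static class ConversationClientFactory
{
    // Hypothetical header name; substitute whatever custom header your services expect.
    public const string ConversationHeader = "X-Conversation-Id";

    public static HttpClient Create(string conversationId)
    {
        var client = new HttpClient();
        // Every request sent through this client will carry the conversation id.
        client.DefaultRequestHeaders.Add(ConversationHeader, conversationId);
        return client;
    }
}

class Demo
{
    static void Main()
    {
        // In [TestInitialize]: generate one id per test and reuse it for all clients,
        // so that an AI search for this value shows all server events of the test.
        string conversationId = Guid.NewGuid().ToString();

        using var client = ConversationClientFactory.Create(conversationId);
        Console.WriteLine(
            client.DefaultRequestHeaders.GetValues(ConversationClientFactory.ConversationHeader).Single());
    }
}
```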

Application Insights is a very powerful tool: it lets you easily wire the collection of various diagnostic telemetry into your application and simplifies its further visual analysis through the user-friendly portal interface. If you have an Azure subscription, you can use AI in two modes: Free and Premium. The main limitations of the Free mode are the maximum number of user events (5 million per month) and the fact that collected data is stored for 7 days, after which it is deleted. The Premium version removes these restrictions and is free to use for 30 days.



Source: https://habr.com/ru/post/263999/

