My development experience is mostly built around various network services under Windows and Linux. I usually strive for maximum cross-platform support, up to binary compatibility. And, of course, a number of stable logging solutions have accumulated along the way.
This topic is written as a continuation of this article and will be useful primarily to novice programmers.
Here I am going to cover the following questions:
- The internals of the logger and examples of software that uses it.
- Volume, levels, and detail of log messages.
- General configuration rules during development, in production, and during incident investigation.
So, I'll start with my additions to the previous article.
Like the author, I use NLog and, of course, make extensive use of its features. Naturally, once the same features are implemented in any other logger, the practices described below can be applied to it as well.
By the way, log4net continues to evolve.
Actually, there are only two additions I would make to the “features of a respectable logger”:
- Watching and reloading the configuration file is no longer just a useful feature but a must-have.
- Minimal overhead when logging is turned off.
Under the hood of NLog
Let me immediately explain the usefulness of the second feature.
Often, while developing code, you need to see the value of some variable during execution. Usually a debugger is used and the program is stopped at the place of interest. For me, that is a clear sign that Trace output will be useful at this place. Combined with unit tests, we immediately get the history of that variable over time and a protocol to compare against runs under other conditions. As a result, I hardly ever use the debugger.
Obviously, in production use, detailed logging can hurt both execution speed and parallelism even when it is turned off.
NLog uses the following techniques:
- The logging level is cached in a volatile variable, which requires no locking or thread synchronization.
- The check for whether logging is needed is a trivial method. Under certain conditions it can be inlined during JIT compilation, which turns the check into a simple conditional skip over a piece of code and avoids copying and passing the arguments of the logging call.
- Extensive overloads of the logging methods minimize type conversions and protect against needless boxing before the need for logging has been established.
- Deferred message generation via a delegate lets you express complex message construction clearly and concisely (see the sketch below).
The source code of the class can be found here.
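From the caller's side these techniques boil down to cheap level checks and deferred message construction. A minimal sketch, assuming a regular NLog Logger; the Worker class and the DumpState helper are hypothetical:

using NLog;

class Worker
{
    private static readonly Logger Log = LogManager.GetCurrentClassLogger();

    public void Process(int[] data)
    {
        // Cheap check: reads the cached level flag, no locks involved.
        if (Log.IsTraceEnabled)
            Log.Trace("Processing {0} items", data.Length);

        // Deferred generation: the delegate runs only if Trace is enabled,
        // so the potentially expensive DumpState call is skipped otherwise.
        Log.Trace(() => DumpState(data));
    }

    private static string DumpState(int[] data)
    {
        return string.Join(",", data); // stands in for expensive formatting
    }
}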
Fine! With NLog you can be sure that detailed messages can be turned off with minimal impact on performance. But this is not a reason to devote half of your code to logging.
What and how to log
Follow these rules:
- Each log statement should be placed on its own line.
- A message should be as short and informative as possible.
- Only rare or fatal messages may be localized.
- It is desirable to take the arguments from the method's local variables or from fields of the class the method belongs to.
- Type conversions should be avoided.
In general, when writing to the log, always keep in mind how many potentially unnecessary computations will be performed when the log is disabled.
A simple example (a fragment of a class):
private static readonly Logger Log = LogManager.GetCurrentClassLogger();

public string Request(string cmd, string getParams)
{
    Uri uri = new Uri(_baseUri, cmd + "?" + getParams);
    Log.Debug("Request for uri:`{0}'", uri);
    HttpWebRequest webReq = (HttpWebRequest)WebRequest.Create(uri);
    webReq.Method = "GET";
    webReq.Timeout = _to;

    try
    {
        string page;
        using (WebResponse resp = webReq.GetResponse())
        using (Stream respS = resp.GetResponseStream())
        using (StreamReader sr = new StreamReader(respS))
            page = sr.ReadToEnd();
        Log.Trace("Response page:`{0}'", page);
        return page;
    }
    catch (Exception err)
    {
        Log.Warn("Request for uri:`{0}' exception: {1}", uri, err.Message);
        throw;
    }
}
All of the logging arguments were already needed by the logic. The Debug message records the arguments with which we entered the function. In the error handler we duplicate the input parameters in case the Debug level is disabled; if necessary, this gives enough information to write a unit test. We do not print the exception stack trace, since a higher-level handler can still do that.
In general, in the local error handler it is useful to detail the context that led to the exception and the specific properties of the exception. In the example above it would be useful to print the Status field in the case of a WebException.
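A minimal sketch of how that might look, as a fragment that could replace the catch block of the example above (the names are those used there):

catch (WebException webErr)
{
    // WebExceptionStatus distinguishes timeouts, DNS failures, protocol errors, etc.
    Log.Warn("Request for uri:`{0}' WebException status: {1}: {2}",
        uri, webErr.Status, webErr.Message);
    throw;
}
catch (Exception err)
{
    Log.Warn("Request for uri:`{0}' exception: {1}", uri, err.Message);
    throw;
}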
Guaranteeing the safety of the log
Despite some NLog features for automatic flushing, there is no guarantee that the log will be written out completely when the process terminates.
Interestingly, trying to finish the write by handling the AppDomain.ProcessExit event is not entirely correct. The configuration can route the log in many different ways, including over the network, while the handler of this event runs in a restricted environment: in .NET it is given no more than 2 seconds, and in Mono the ThreadPool is already stopped. It is therefore worth taking care of finishing the log in a friendlier environment.
The first thing to do is handle the AppDomain.UnhandledException event. Its handler should log the complete error information and call LogManager.Flush(). The handler runs on the same thread that threw the exception, and as soon as it returns, the application is torn down.
private static readonly Logger Log = LogManager.GetCurrentClassLogger();

public static void Main(string[] args)
{
    AppDomain.CurrentDomain.UnhandledException += OnUnhandledException;
    // ...
    LogManager.Flush();
}

static void OnUnhandledException(object sender, UnhandledExceptionEventArgs e)
{
    Log.Fatal("Unhandled exception: {0}", e.ExceptionObject);
    LogManager.Flush();
}
In addition, you should call LogManager.Flush() wherever the process may potentially terminate, in particular at the end of every non-background thread.
If your application is a Windows service or an ASP.NET application, you should also handle the corresponding start and stop events.
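For the Windows service case, a minimal sketch of what that might look like; the MyService class and its worker details are hypothetical:

using System.ServiceProcess;
using NLog;

public class MyService : ServiceBase
{
    private static readonly Logger Log = LogManager.GetCurrentClassLogger();

    protected override void OnStart(string[] args)
    {
        Log.Info("Service starting");
        // ... start worker threads ...
    }

    protected override void OnStop()
    {
        Log.Info("Service stopping");
        // ... stop worker threads ...
        // Make sure buffered/asynchronous log messages reach their targets
        // before the service control manager unloads the process.
        LogManager.Flush();
    }
}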
How much to log
This is a serious problem for the developer: you always want more information, but the code starts to look terrible. I am guided by the following considerations.
Logging is essentially a form of commenting, and Trace-level logging largely replaces comments.
The Trace and Debug levels are read by developers; everything above them is read by technical support and administrators. Therefore, from the Info level up, messages should clearly answer the questions "What happened?", "Why?" and, if possible, "How to fix it?". This is especially true for errors in configuration files.
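As an illustration of such an actionable message, here is a minimal sketch; the ConfigLoader class, the file names, and the wording are hypothetical:

using System.IO;
using NLog;

static class ConfigLoader
{
    private static readonly Logger Log = LogManager.GetCurrentClassLogger();

    public static void CheckConfig(string configPath)
    {
        if (!File.Exists(configPath))
        {
            // One message answering "what happened", "why it matters"
            // and "how to fix it", aimed at the administrator.
            Log.Error("Config file '{0}' not found; built-in defaults will be used. " +
                      "Create it from service.conf.example and restart the service.",
                      configPath);
        }
    }
}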
The qualitative meaning of the logging levels has already been analyzed in the previous article; here I consider only the quantitative side:
- Trace - output everything, for the cases when Debug is not enough to localize the error. It is useful to mark calls to blocking and asynchronous operations here.
- Debug - log the moments when "large" operations are invoked: thread start/stop, a user request, and so on.
- Info - one-off operations that repeat rarely and irregularly (loading a config, loading a plugin, running a backup).
- Warning - unexpected call parameters, a strange request format, falling back to default values when the supplied ones are invalid. In general, anything that may indicate non-standard use.
- Error - a reason for the developers' attention. Here the context around the specific place of the error is what matters.
- Fatal - self-explanatory. Output everything we can reach, since the application will not run any further.
Production deployment
Suppose development has reached the deployment stage.
I will put aside the questions of log rotation, file size, and history depth; all of that is very specific to each project and is tuned to how the service actually operates.
I will focus only on the semantic organization of the files. They should be divided into three groups. It may be necessary to split the logs of individual modules into separate files, but below I will still talk about one file per group.
- The Info group, with the corresponding level for all sources. This is information for the administrator: when the application started, whether the configs were read correctly, whether the required services are available, and so on. Its key property: the file grows only when the application is started or restarted; during normal operation it should not grow. This makes automated external monitoring of a successful launch simple: it is enough to check that the keywords Error and Fatal are absent from the file, and the check always takes a predictably short time.
- The Warning group. This is also information for the administrator. During normal operation this file must be absent or empty, so monitoring its state immediately points to malfunctions. By flexibly tuning the filters for different sources you can arrive at a fairly accurate criterion for when the service needs attention.
- The Observation group. As a rule, some problem modules surface during deployment. Information from them, at Debug detail, goes here.
If the rollout is successful, only the first two groups remain.
Crash Investigation
When a running service shows signs of an error, do not rush to restart it. We may have been "lucky" enough to catch an error caused by incorrect thread synchronization, and there is no telling how long we would have to wait for it to recur next time.
First of all, connect the previously prepared configs for the Observation group; this is exactly what a respectable logger should allow you to do (see the sketch below). Once we have confirmation that the new configuration has been applied successfully, we try to provoke the failure again, preferably several times. This gives a chance of reproducing it under "laboratory" conditions. The rest is the programmers' job; in the meantime the service can be restarted.
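Such a pre-prepared Observation config fragment might look roughly like this; the module name MyApp.Billing is hypothetical, and the fragment is meant to be merged into the production configuration shown below:

<nlog autoReload="true">
    <targets>
        <target name="fileObserve" type="AsyncWrapper" queueLimit="5000" overflowAction="Block">
            <target type="File" fileName="${basedir}/logs/observe.log" />
        </target>
    </targets>
    <rules>
        <!-- only the problem module, at Debug detail -->
        <logger name="MyApp.Billing.*" minlevel="Debug" writeTo="fileObserve" />
    </rules>
</nlog>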
Log output should preferably be made asynchronous.
An example of production settings:
<nlog autoReload = "true" > <br/>
<targets > <br/>
<target name = "fileInfo" type = "AsyncWrapper" queueLimit = "5000" overflowAction = "Block" > <br/>
<target type = "File" fileName = "${basedir}/logs/info.log" /> <br/>
</target > <br/>
<target name = "fileWarn" type = "AsyncWrapper" queueLimit = "5000" overflowAction = "Block" > <br/>
<target type = "File" fileName = "${basedir}/logs/warn.log" /> <br/>
</target > <br/>
</targets > <br/>
<br/>
<rules > <br/>
<logger name = "*" minlevel = "Info" writeTo = "fileInfo" /> <br/>
<logger name = "*" minlevel = "Warn" writeTo = "fileWarn" /> <br/>
</rules > <br/>
</nlog >
When configuring filters, keep in mind that logging levels are relative to each subsystem. For example, a module that writes an Info initialization message may be created for every connected user. Naturally, its output to the Info group should be restricted to the Warn level (see the sketch below).
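A minimal sketch of such a restriction; the per-user module name MyApp.UserSession is hypothetical. A "blackhole" rule with final="true", placed before the general rules, swallows everything below Warn from that module, while Warn and above still reach both files:

<rules>
    <!-- swallow Trace..Info from the chatty per-user module -->
    <logger name="MyApp.UserSession.*" maxlevel="Info" final="true" />
    <logger name="*" minlevel="Info" writeTo="fileInfo" />
    <logger name="*" minlevel="Warn" writeTo="fileWarn" />
</rules>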
What should not be done with the logger
A logger should be as simple and reliable as a hammer, and its scope within a particular project should be clearly outlined. Unfortunately, this is often hard for developers to stick to. Design patterns are mostly a good thing, but not in this case. I have often seen proposals to extract a generic logging interface (example) or to implement a project-level wrapper in order to postpone the "NLog vs log4net" choice until later.
Whatever the reason, remember that, first of all, such conveniences completely kill the compiler's ability to optimize these calls.
You should not expose logger output directly to the user, even through filters. The problem is that this information depends on the internal structure of the program, and you are unlikely to try to preserve the logger output during refactoring. Most likely it is simply worth thinking it over and separating the functionality; perhaps the project just needs another level of logging. In that case a wrapper over the logger is entirely appropriate.
What else is missing in NLog?
- Additional overloads of the logging methods that avoid generating lambda closure classes. I would like a call of the form Log.Trace<TArg1, TArg2>(Func<TArg1, TArg2, string> messageCreator, TArg1 arg1, TArg2 arg2), but at the moment the shortest human-readable option, Log.Trace(() => MessageCreate(arg1, arg2)), implies hidden closure-class generation (see the sketch after this list).
- Binary compatibility. It would help to quickly test services on different platforms. NLog contains many conditional compilation directives that depend on the platform, i.e. Mono binaries may behave unexpectedly under .NET. Predictability would be very desirable here, even in a limited configuration.
- Conditional extensibility. Clearly, binary compatibility means sacrificing some functionality, but a convenient extension mechanism already exists; it only remains to filter extensions depending on the platform. Together with the previous item this would also enable simple deployment by copying a directory of IL binaries.
- Logging of internal messages in the general context. A list of the loggers created in the system would also be useful. Unfortunately, I am not sure recursion can be avoided here, for example when a file target starts writing its own output errors to itself.
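As an illustration of the first wish, a minimal sketch of how such an overload could be approximated today with an extension method; LoggerExtensions is hypothetical and not part of NLog:

using System;
using NLog;

public static class LoggerExtensions
{
    // The formatting delegate and its arguments are passed separately, so the
    // call site can pass a static method and no closure class is generated.
    // The message is built only if Trace is actually enabled.
    public static void Trace<TArg1, TArg2>(this Logger log,
        Func<TArg1, TArg2, string> messageCreator, TArg1 arg1, TArg2 arg2)
    {
        if (log.IsTraceEnabled)
            log.Trace(messageCreator(arg1, arg2));
    }
}

A call would then look like Log.Trace(MessageCreate, arg1, arg2), where MessageCreate is a static formatting method.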
NLog, Log4Net, Enterprise Library, SmartInspect ...
In the many comparisons of these loggers with one another, one important detail is missing: it is important to compare not only the limitations and capabilities, but also how quickly you can add your own wishes.
That is why, for now, I will stay friends with NLog.
Which is what I wish you as well.