
Extending log4net: Concurrent Logging

The log4net library is well known and needs no introduction. It has almost everything you need out of the box. However, there are specific situations in which its functionality has to be extended. Here is one: some library logs via log4net to some logger (or several, it does not matter), and that library is used in N services. How do you make all of them log to a single file? Below, an explanation of why FileAppender + MinimalLock is not suitable, and an implementation of an Appender that solves the problem. Separately, I want to note that logging to a database, over the network, etc. is not considered in this article, because that is a much more serious task which (especially for large systems) requires a lot of thought and design.



First, a note about the test environment, since my arguments rely on its results. The machine has an i7 processor (4 cores, 8 with hyper-threading). The log4net.Extensions.Console application writes to the log from 8 threads, with a random Sleep after each write; it makes 10,000 log entries. The BulkStarter application launches 10 instances of log4net.Extensions.Console and waits for them to complete. Thus 10 processes, with 8 threads each, write to a single file. All timings in this test were obtained with Stopwatch (a high-resolution timer, accurate to clock cycles).

FileAppender + MinimalLock


For writing logs to a file, log4net provides FileAppender. It offers an exclusive lock (used by default) and the so-called MinimalLock. With ExclusiveLock, log4net opens the file for writing at initialization time, leaving other processes read access only. MinimalLock opens and closes the file for every record written. Accordingly, ExclusiveLock does not suit us at all (the file is unavailable to every process except the first one that captured the lock), while MinimalLock has a serious flaw of its own: opening and closing the file on every write is very expensive, as the test results below show.
FileAppender configuration example with MinimalLock:

<appender name="FileAppender" type="log4net.Appender.FileAppender">
  <file value="D:\1\test-minlock.log"/>
  <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
  <layout type="log4net.Layout.PatternLayout">
    <conversionPattern value="%message%n"/>
  </layout>
</appender>

ConcurrentMinimalLock


In search of a worthy replacement, I came across a 2008 CodeProject article, Logging into single file from multi-process. It describes a ConcurrentMinimalLock implementation in which messages are buffered and the file is opened in a loop (until the lock is finally acquired). All seems well, but the time spent inside the logging call is 1:04 out of a 1:20 total run time, i.e. still long. And that is despite this logging being partially asynchronous (if the application is killed hard, everything still queued is lost).

ConcurrentFileAppender


In my research, I was very surprised that nobody wants to use plain mutexes (Mutex), even though cross-process synchronization is their main purpose. So I decided to write my own Appender that synchronizes processes using a mutex. To create an appender, you inherit from AppenderSkeleton and override, at a minimum, Append and ActivateOptions.
The main idea: messages are added to a queue synchronized with a Monitor, and a separate thread writes them to the file, using a named mutex (named after the full file path) to synchronize file access between processes.
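The queue side of this idea lives in the appender's Append override. Its body is not shown in the article, so the following is my sketch of how it might look (the names _loggingEvents, _nonEmptyQueue and QueueSyncRoot match the listings further below):

```csharp
// Called by log4net for every logging call; must return quickly.
protected override void Append(LoggingEvent loggingEvent)
{
    // Capture volatile data (thread name, etc.) now, since the event
    // will only be rendered later, on the background thread.
    loggingEvent.Fix = FixFlags.All;

    lock (QueueSyncRoot)
    {
        _loggingEvents.Enqueue(loggingEvent);
        _nonEmptyQueue = true; // signal the background thread
    }
}
```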

ActivateOptions


At initialization we open a FileStream for writing, specifying FileShare.ReadWrite, so other processes can read and write the file in parallel. Unfortunately, there is no way to make the file writable only by ConcurrentFileAppender instances and read-only for everyone else, so support staff can still (accidentally, of course) edit a live log file. Keeping the file open means we do not waste time on opening it for every write, as the previous implementations do.
 public override void ActivateOptions()
 {
     Dispose(true); // flush
     base.ActivateOptions();

     // Ensure the target directory exists
     var path = SystemInfo.ConvertToFullPath(Path);
     var dir = System.IO.Path.GetDirectoryName(path);
     if (!Directory.Exists(dir))
         Directory.CreateDirectory(dir);

     _loggingEvents = new Queue<LoggingEvent>(QueueInitialCapacity);

     FileStream stream = null;
     Mutex mutex = null;
     try
     {
         stream = File.Open(path, FileMode.Append, FileAccess.Write, FileShare.ReadWrite);
         mutex = CreateMutex(path);

         _thread = new Thread(ThreadProc)
         {
             Name = "ConcurrentFileAppenderListener",
             IsBackground = true // logging thread should not prevent an application exit
         };
         _thread.Start();

         _stream = stream;
         stream = null;
         _mutex = mutex;
         mutex = null;

         if (FlushOnProcessExit)
             AppDomain.CurrentDomain.ProcessExit += delegate { OnProcessExit(path); };
     }
     finally
     {
         if (stream != null)
             stream.Dispose();
         if (mutex != null)
             mutex.Close();
     }
 }
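The CreateMutex helper called above is not shown in the article. Since backslashes are not allowed in mutex names (apart from the Global\ / Local\ namespace prefix), the file path has to be sanitized; a possible sketch (my reconstruction, not the article's code):

```csharp
private static Mutex CreateMutex(string fullFilePath)
{
    // Replace characters that are illegal in mutex names and normalize
    // case, so every process derives the same name from the same path.
    var name = fullFilePath.Replace('\\', '_').Replace(':', '_').ToLowerInvariant();

    // "Global\" makes the mutex visible across terminal sessions,
    // so services and interactive processes share the same lock.
    return new Mutex(false, @"Global\" + name);
}
```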


ThreadProc


The method simply loops, checking whether the queue is non-empty. If it is, we copy its contents, clear the queue, and flush the copy to disk.
 private void ThreadProc()
 {
     var culture = CultureInfo.InvariantCulture;
     while (true)
     {
         if (!_nonEmptyQueue) // cheap check via a bool flag, without taking the lock
         {
             Thread.Sleep(QueueEmptyCheckTimeoutInMilliseconds);
             continue;
         }
         lock (QueueSyncRoot)
         {
             _currentProcessingEvents = _loggingEvents.ToArray();
             _loggingEvents.Clear();
             _nonEmptyQueue = false;
         }
         Flush(_mutex, _stream, culture, _currentProcessingEvents);
         _currentProcessingEvents = null;
     }
 }


Flush


The messages accumulated in the queue are first rendered into a buffer, and then that buffer is written to disk in one go, to avoid many consecutive Write calls. Another important point is to Seek to the end of the file before writing, since the stream caches the current file position while other processes may have appended to the file in the meantime.
 private void Flush(Mutex mutex, FileStream stream, CultureInfo culture, ICollection<LoggingEvent> events)
 {
     if (events.Count <= 0)
         return;

     byte[] buffer;
     using (var sw = new StringWriter(culture))
     {
         foreach (var e in events)
             RenderLoggingEvent(sw, e);
         buffer = Encoding.GetBytes(sw.ToString());
     }

     try
     {
         mutex.WaitOne();
         stream.Seek(0, SeekOrigin.End);
         stream.Write(buffer, 0, buffer.Length);
         stream.Flush();
     }
     finally
     {
         mutex.ReleaseMutex();
     }
 }
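OnProcessExit, wired up in ActivateOptions, is also not shown in the article; its job is to flush whatever is still queued when the process exits, since the background thread may be killed before emptying the queue. A sketch under the same assumptions as above:

```csharp
private void OnProcessExit(string path)
{
    // Drain the queue one last time under the Monitor lock...
    LoggingEvent[] remaining;
    lock (QueueSyncRoot)
    {
        remaining = _loggingEvents.ToArray();
        _loggingEvents.Clear();
        _nonEmptyQueue = false;
    }
    // ...and write it out under the cross-process mutex.
    Flush(_mutex, _stream, CultureInfo.InvariantCulture, remaining);
}
```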


Final implementation


In the end, the base class ConcurrentAppenderSkeleton was written:
 public abstract class ConcurrentAppenderSkeleton : AppenderSkeleton, IDisposable
 {
     /// <summary>
     /// Gets a name for a <see cref="Mutex"/>.
     /// </summary>
     protected abstract string UniqueMutexName { get; }

     /// <summary>
     /// Performs initialization of an appender. Should throw an exception if it cannot be initialized.
     /// </summary>
     protected abstract void ActivateOptionsInternal();

     /// <summary>
     /// Prepares data for flushing.
     /// </summary>
     /// <remarks>
     /// <para>This method executes in a thread-safe context (under a <see cref="Monitor"/> lock).</para>
     /// </remarks>
     /// <param name="events">Logging events.</param>
     /// <returns>The prepared data.</returns>
     protected abstract object PrepareData(LoggingEvent[] events);

     /// <summary>
     /// Flushes prepared data.
     /// </summary>
     /// <remarks>
     /// <para>This method executes in a thread-safe context (under the mutex).</para>
     /// </remarks>
     /// <param name="data">The prepared data to flush.</param>
     protected abstract void Flush(object data);
 }

And its descendants: ConcurrentFileAppender and ConcurrentForwardingAppender (by analogy with BufferingForwardingAppender). The latter lets you flush to a set of other appenders. It is important that these other appenders do not take exclusive locks (i.e., you cannot use FileAppender + ExclusiveLock); they are synchronized with the mutex instead.
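For comparison with the MinimalLock configuration above, a ConcurrentForwardingAppender could be wired up roughly like this (the type and assembly names here are my assumption; adjust them to the actual build):

```xml
<appender name="ConcurrentForwardingAppender"
          type="log4net.Extensions.ConcurrentForwardingAppender, log4net.Extensions">
  <!-- The referenced appender must not take an exclusive lock -->
  <appender-ref ref="FileAppender" />
</appender>
```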

For test purposes, I added a setting controlling where to take the mutex: either before dequeuing messages (so formatting and writing both happen under the lock), or only around the actual file write. In the first case the queues grow longer (the lock is held longer, since it covers formatting and writing), but I/O costs drop slightly (10 writes of 10 KB are faster than 100 writes of 1 KB). In the second case the queue grows very little and the time spent under the lock is minimal (only the I/O itself), which scales better.

Tests


Let me remind you that the test logs 500 MB from 10 processes with 8 threads each. The table below shows the results for the various setups. ConcurrentFileAppender is abbreviated as File; ConcurrentForwardingAppender writing to FileAppender + MinimalLock as Fwd + Min; and the exotic variant, ConcurrentForwardingAppender writing to FileAppender with ConcurrentMinimalLock, as Fwd + Con. The queue polling sleep is 10 ms. Taking Mutex.WaitOne around the write only is labeled IO; around formatting and writing together, Whole.

Time (s) \ Test    Minlock   Concurrent minlock   File IO   File Whole   Fwd+Min IO   Fwd+Min Whole   Fwd+Con IO   Fwd+Con Whole
Total time         116.6     79.7                 11.27     11.6         11.96        11.61           11.7         11.61
logger.Debug       100.36    64.68                0.08      0.07         0.07         0.07            0.08         0.07
Avg queue length   -         -                    9         40           18           516             91           725

As the table shows, when the mutex is held only for the duration of the I/O, the message queue stays shorter, which somewhat improves reliability. If you need RollingFileAppender functionality, it makes sense to use ConcurrentForwardingAppender together with ConcurrentMinimalLock, although it would be better to write your own Lock implementation that avoids the double buffering.

The two main advantages of the Concurrent*Appenders are multi-process support and virtually no slowdown of the host application while logging is enabled.

PS


While preparing the article, I noticed an error in how the total time and the logger.Debug call time were counted. With the other appenders things turned out even worse, and with mine even better :) But I did not update the original results table, sorry, because rerunning everything takes very long. (The error: in log4net.Extensions.Console, access to the Stopwatch instance was unsynchronized, and the value also had to be divided by 8.)


Source: https://habr.com/ru/post/113663/

