
Service Logs for Users

For a long time I was bothered by the fact that a shared-hosting user never really knows what is going on with their account: did anyone log in via FTP, did the cron job run, was there SSH access, where did the mail go and did it arrive at all. With most hosters (including us), all the user could do was ask technical support and wait for a specialist with the appropriate rights and qualifications to pull the relevant log entries. A bonus problem: it is not that easy to run a single command and see the log records related to one user, which creates difficulties for the system administrator as well.

What seemed like a simple task began to spring surprises from the very start.


First, it turned out that not all programs can work with syslog. For example, proftpd can write its main log to syslog, but it cannot send its file-transfer log there. The documentation offers a crutch for this situation (which, by the way, does not work): a separate daemon listening on a FIFO, to which proftpd can write any of its logs. The resulting log entries looked strange, though, and given the number of crutches piling up I abandoned the idea of writing a filter launched from syslogd.
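For reference, the FIFO-relay daemon the proftpd documentation describes boils down to a few lines. Here is a minimal sketch of the idea in Perl; the FIFO path, program name and syslog facility are my assumptions, not the configuration from the article:

```perl
#!/usr/bin/perl
# Minimal sketch of the FIFO-relay daemon from the proftpd docs:
# read log lines from a named pipe and forward them to syslog.
use strict;
use warnings;
use Sys::Syslog qw(openlog syslog);

my $fifo = '/var/run/proftpd-xfer.fifo';   # hypothetical path

openlog('proftpd-xfer', 'ndelay,pid', 'local0');

while (1) {
    # open() blocks here until proftpd opens the FIFO for writing
    open my $fh, '<', $fifo or die "cannot open $fifo: $!";
    while (my $line = <$fh>) {
        chomp $line;
        syslog('info', '%s', $line);
    }
    close $fh;   # the writer went away; reopen and keep listening
}
```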
Second, it turned out that for some programs I had to build a tracking system (one that buffers log lines and writes them out only once a line identifying the user appears), with non-trivial parsing of the journal. The poorest log belongs to the OpenSSH server. To find out which key a user logged in with, you have to enable debug mode; attachment of a terminal is recorded only at the highest debug level (which I refused to use); tunnels are not marked in any special way. The log records not the key itself but its fingerprint (tell me, how many readers can recognize their ssh key by its fingerprint at a glance?), so I had to write a crutch that logs the beginning and end of the matching key from authorized_keys to make it recognizable. Established connections carry no identifier, so you have to track them by process pid; after a successful login, subsequent lines are not marked with the user name, so again you must remember which pid serves which user. You also have to watch every possible way a process can terminate, to avoid leaks or misattribution; I even wrote a garbage collector, allowing for the chance that I might accidentally lose some log lines.

The proftpd log has a similar hard-to-follow connection problem. The easiest, strangely enough, was sendmail. It stamps every message with a queue ID, which it writes to the journal, and wherever appropriate it reports which system user it was dealing with. That said, sendmail's log format is a bit of a brain-twister.
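To make the pid-keyed tracking concrete, here is a minimal sketch of the idea (not the code from the article, which is linked at the end): remember which pid serves which user when sshd logs a successful login, and attribute later lines from that pid to the same user. The regexes are simplified assumptions about the stock OpenSSH log format:

```perl
#!/usr/bin/perl
# Sketch of pid-keyed session tracking for sshd log lines.
use strict;
use warnings;

my %session;   # pid => user

while (my $line = <STDIN>) {
    # e.g. "sshd[4242]: Accepted publickey for alice from 10.0.0.5 port 51234 ssh2: ..."
    if ($line =~ /sshd\[(\d+)\]: Accepted \S+ for (\S+) from/) {
        $session{$1} = $2;
        emit($2, $line);
    }
    # later lines carry only the pid, so look the user up
    elsif ($line =~ /sshd\[(\d+)\]:/ && exists $session{$1}) {
        emit($session{$1}, $line);
        # drop the mapping on known session-end messages, otherwise
        # the hash leaks (hence the need for a garbage collector)
        delete $session{$1}
            if $line =~ /Disconnected from|Connection closed|session closed/;
    }
}

sub emit {
    my ($user, $line) = @_;
    print "[$user] $line";   # stand-in for appending to the per-user log
}
```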

Implementing the splitter itself, once the peculiarities were understood and the method of reading the logs was chosen, also turned out to be entertaining.

Following UNIX ideology, I decided to use existing tools. I traditionally wrote the splitter in Perl; its job was to follow the logs. The choice fell on the tail utility with the -f flag for each log of interest, with subsequent parsing of what was read. But what about rotation of the logs being read? And then, in my old age, my eyes were opened to the -F flag. For simplicity I decided to spawn one process per log... and died dispatching those processes: when an external signal arrives, it has to be forwarded to everyone, each child needs its own handlers, and system crashes and reboots have to be handled too.

Then I remembered that tail can follow several files at once. Working out which line belongs to which file was not trivial: tail prints the name of the file it is currently reading from, but you have to build a tracking system that remembers which header was printed last. After an afternoon of torment it turned out that the system implementation of tail in FreeBSD has a bug: when working with a terminal, it prints the file-name header after the lines from that file rather than before. In the end I simply wrote my own continuous file reader in Perl, with tracking of file renames. So much for bicycles versus standard solutions: sometimes the bicycle wins on every count.
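The rename tracking comes down to comparing inodes: as long as stat() on the path returns the inode we already have open, keep reading; once it differs, the file has been rotated and must be reopened. A minimal sketch of that loop, assuming a single file and one-second polling (again, an illustration rather than the code from the article):

```perl
#!/usr/bin/perl
# Sketch of a tail -F equivalent: follow a log file across renames
# by comparing the inode of the open handle with the inode that is
# currently behind the path.
use strict;
use warnings;

my $path = shift or die "usage: $0 logfile\n";

open my $fh, '<', $path or die "open $path: $!";
my $inode = (stat $fh)[1];

while (1) {
    while (defined(my $line = <$fh>)) {
        print $line;              # stand-in for parsing and splitting
    }
    seek $fh, 0, 1;               # clear EOF so reading can resume later

    my $now = (stat $path)[1];    # inode currently behind the path
    if (!defined $now || $now != $inode) {
        # the file was renamed or removed: reopen once it reappears
        if (open my $new, '<', $path) {
            close $fh;
            ($fh, $inode) = ($new, (stat $new)[1]);
        }
    }
    sleep 1;
}
```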

How did I rotate the user logs? The same way everything else is rotated: the system utility takes care of it. When the splitter pulls out a line destined for a user's log, it checks whether that file is already among its open handles; if not, it opens and remembers it. After renaming and rotating the user logs, the system log-rotation service sends the classic SIGHUP to our splitter. The handler for this signal simply clears the list of open handles, closing them along the way; when new data arrives, the files are reopened.
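That handle cache plus SIGHUP handler fits in a few lines. A sketch under the same caveats (the per-user path and helper name are hypothetical):

```perl
#!/usr/bin/perl
# Sketch of the per-user handle cache with SIGHUP-driven reopening.
use strict;
use warnings;

my %handles;   # user => open filehandle for that user's log

# logrotate (or its local equivalent) sends SIGHUP after renaming the
# logs; dropping the cached handles makes the next write reopen them
$SIG{HUP} = sub {
    close $_ for values %handles;
    %handles = ();
};

sub append_for_user {
    my ($user, $line) = @_;
    unless ($handles{$user}) {
        my $file = "/home/$user/logs/services.log";   # hypothetical path
        open $handles{$user}, '>>', $file or return;
        select((select($handles{$user}), $| = 1)[0]); # autoflush writes
    }
    print { $handles{$user} } $line;
}
```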

The result was impressive: a lot of interesting things turned up in users' behavior. And speaking of the nice touches that decorate any good utility: besides placing the logs in users' home directories, I write a copy of every record to a single location for the system administrators. Now one glance is enough to assess the current situation of any user for the service of interest. The logs in that extra location are simply deleted during rotation, since a current snapshot needs no archive.

To sum up: when you write a program, think about whom you are writing its logs for. And when you settle on a particular tool, ask yourself: are you sure this is not just an ideological choice?

P.S. My deep gratitude goes to my readers on Juick for their help in developing the system. Without you, there was every chance I would not have coped with the task.

UPDATE: github.com/schors/peruserlog

Source: https://habr.com/ru/post/174327/

