
Using bash + logger

Everyday Linux administration tasks call for automation. As a rule, this automation boils down to writing bash scripts and either scheduling them via cron or running them by hand, depending on the task. This article collects common practices for logging bash scripts. The target audience is Linux system administrators.

The easiest and most obvious way to save the output of a script is to simply redirect it to a file.

exemple.sh > exemple.sh.log 

In practice you also need to capture STDERR, because error messages are written to that stream:

 exemple.sh > exemple.sh.log 2>&1 

As Angel2S2 correctly noted, there is a simpler variant:
 exemple.sh &> exemple.sh.log



This is considered normal practice and usually satisfies most system administrators. But when there are many servers (tens or hundreds) and logging is centralized, it is more convenient to use logger. As a reminder, logger is a utility that sends messages to syslog.
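
For reference, a couple of typical logger invocations (the tag and message text here are purely illustrative):

 # send a message with the default priority (user.notice)
 logger "backup finished"
 # send a message with an explicit facility.priority and a tag
 logger -p user.err -t exemple.sh "backup failed"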

 exemple.sh 2>&1 | logger 

The advantage of this approach is that you can redirect output this way from any program, not just from bash scripts.

The disadvantage is that error messages cannot be separated from ordinary informational messages: in this example both streams are merged into one, and everything goes into syslog with the default priority user.notice.

For some tasks, the following option is acceptable:

 exemple.sh 2>&1 >/dev/null | logger -p user.error -t exemple.sh 

In this variant only STDERR is logged (STDOUT is redirected to /dev/null) with the error priority, and the messages are tagged with the name of the script. Both of these simplify troubleshooting.
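
On systems with systemd-journald, the tag also makes these records easy to find later; a minimal sketch (assuming the tag from the example above; exact log file paths vary by distribution):

 # filter the journal by the syslog tag
 journalctl -t exemple.sh
 # or, with a classic file-based syslog
 grep exemple.sh /var/log/syslog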

The disadvantage of this approach is that we lose STDOUT. To solve this, you can redirect the output streams directly inside the script.

 #!/bin/bash
 # STDOUT goes to syslog with priority local0.notice, tagged with the script name
 exec > >(logger -p local0.notice -t `basename "$0"`)
 # STDERR goes to syslog with priority local0.error
 exec 2> >(logger -p local0.error -t `basename "$0"`)
 echo "error" >&2
 echo "notice"

This example shows how STDOUT and STDERR can be redirected independently of each other. Unfortunately, it introduces an unexpected problem: messages from the two streams may appear in syslog in a different order than they were written. This happens because the two streams are handled by two independent processes. Even so, the method is often used for logging scripts run by cron.
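
For example, a crontab entry for such a script might look like this (the path and schedule are invented for illustration; all output reaches syslog through the exec redirects inside the script):

 # run every five minutes, no output redirection needed in the crontab itself
 */5 * * * * /usr/local/bin/exemple.sh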

But sometimes a script needs to be run manually, and in that case it is not always convenient to follow its output in syslog.

 #!/bin/bash
 #
 #                    .--------.
 #                    | STDOUT |
 #                    '--------'
 #                        ^
 #                        |
 # .--------.         .-----.        .--------.
 # | STDOUT |-------->| tee |------->| logger |
 # '--------'         '-----'        '--------'
 #
 # STDOUT: copy to logger and keep printing to the console
 exec > >(tee >(logger -p local0.notice -t `basename "$0"`))
 # STDERR: copy to logger and keep printing to the console (as STDERR)
 exec 2> >(tee >&2 >(logger -p local0.error -t `basename "$0"`))
 #exec 2> >(logger -p local0.error -t `basename "$0"` -s)
 echo "error" >&2
 echo "notice"

In this example tee copies the “incoming” stream both to logger and back to STDOUT (its default behavior); in the second exec the copy goes to STDERR instead. This gives us logging to syslog plus console output at run time, which is handy for manually launched scripts.
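
A manual run of the last script might look roughly like this (the relative order of the two lines is not guaranteed, for the reasons described above); the same messages also land in syslog with priorities local0.notice and local0.error:

 $ ./exemple.sh
 notice
 error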

Conclusions:

Logging to syslog is easy.
Logging to syslog is useful, especially when logging is centralized. If output streams are redirected in parallel, keep in mind that message order may not be preserved.

Related Links:

Redirecting bash script output to syslog
Logging From Launchd

Source: https://habr.com/ru/post/281601/

