
Safe and secure Linux (our response to Chamberlain)

After reading recent articles on Habr about the security of Linux systems, I wanted to share my point of view on the subject.
The article is aimed mainly at novice administrators, so it contains things that will be obvious to an experienced specialist. Valuable additions and comments are welcome.
I will give almost no excerpts from config files, for three reasons:
  1. It would bloat the article excessively.
  2. Man pages and Google have not been canceled.
  3. Point 2 is very useful for a specialist's development.


So, how can you improve the security and reliability of a Linux-based server (or workstation)?


I will divide this task into 3 parts:
  1. (Pro)active security: hardening the settings of the system and its daemons. This also includes setting up a firewall.
  2. Passive security: monitoring the system's security and reliability.
  3. Backups

Hardening system settings

  1. Disable unused services / daemons. Carefully review the list of processes (for example, with the command ps -ef | less) and identify the ones you do not need. Make sure the system itself does not need them, then disable them (see the sketch below).
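    A minimal sketch of what this can look like; the service name (cups) is purely illustrative, check what each service actually does before disabling it:
    # See what is scheduled to start at boot (Red Hat-style SysV init)
    chkconfig --list | grep ':on'
    # Stop the service now and remove it from autostart
    service cups stop
    chkconfig cups off
    # Debian-style equivalent: update-rc.d cups disable
    # On systemd-based distributions: systemctl disable --now cups.service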
  2. If possible, change the standard ports and interfaces on which the remaining services listen, and configure additional security restrictions by means of the services themselves (that is, by editing their configuration files / adding keys to their launch parameters).
    I will illustrate with sshd. Here you should do the following: change the standard port from 22 to any free one, for example 6622; deny access under the root login; explicitly list the user names allowed to connect; make sshd listen only on a specific address. As an option, allow access by key only, but I am not a big fan of that.
    UPD Keys are unanimously recognized as a great way to access the server, so use them (but do not completely disable the password).
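    As a sketch, the above translates into roughly these lines in /etc/ssh/sshd_config (the port, address and user names are examples, not recommendations):
    Port 6622                     # moved off the standard port 22
    ListenAddress 192.0.2.10      # listen only on one specific address
    PermitRootLogin no            # deny direct root logins
    AllowUsers alice bob          # explicit whitelist of accounts
    PubkeyAuthentication yes      # prefer key-based access...
    PasswordAuthentication yes    # ...but keep passwords as a fallback
    After editing, restart sshd while keeping your current session open, in case of a typo.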
  3. All services that are used only by the system administrator or by a limited circle of people should be reachable only through a VPN or ssh connection.
  4. Wherever possible, switch to encrypted connections (pop3 → pop3s, http → https, etc.)
  5. Iptables
    Setting up a firewall is a very interesting topic that deserves a separate article of its own.
    The basic principles are:
    The firewall should work on the whitelist principle, that is, everything that is not explicitly allowed is denied.
    This can be achieved in two ways:
    Method number 1
    Set the default firewall policies with the commands iptables -P INPUT DROP, iptables -P OUTPUT DROP and iptables -P FORWARD DROP. After these commands are executed, all incoming, outgoing and transit traffic for which no permitting rules exist will be blocked, so run them with caution.
    Before you do anything with iptables, I strongly recommend studying the excellent tutorial by Oskar Andreasson.
    While reading it, pay attention to the LOG target (action), which writes various data about packets entering the system to the log. It helps a lot when writing a firewall for the first time. I can suggest roughly the following order of firewall configuration:
    a) Write all the necessary rules
    b) Add a logging rule to the end of each chain (it will catch every packet you did not account for in the previous rules and write information about it to the syslog)
    c) Enable the firewall
    d) Find the unaccounted-for connections in the syslog and add rules for them to the firewall.
    Step (d) is repeated until new records stop appearing in the syslog.
    Obviously, you should not mindlessly add everything that shows up in the syslog, because traces of unauthorized access attempts will be visible there too.
    Note that the default DROP policy should be enabled only after step (d) has been completed successfully: this ensures that the existing services keep working while you debug the firewall.
    Also note one point that is not entirely obvious to beginners.
    Here is an excerpt from what a standard firewall template should contain:
    iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
    iptables -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

    These rules allow inbound and outbound packets that belong to already established sessions.
    Naturally, this works only for stateful protocols, in particular for TCP.
    Without these rules, you would have to write two rules for each application: one for the INPUT chain and one for OUTPUT.
    With them, a single rule allowing the initial connection to be established is enough.
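    Putting the pieces of method #1 together, a skeleton might look like this (the ssh port matches the example above; the loopback rules are almost always needed under a DROP policy):

    # Allow loopback traffic unconditionally
    iptables -A INPUT  -i lo -j ACCEPT
    iptables -A OUTPUT -o lo -j ACCEPT
    # Allow packets belonging to already established sessions
    iptables -A INPUT  -m state --state RELATED,ESTABLISHED -j ACCEPT
    iptables -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
    # Allow new inbound ssh connections (one rule, thanks to the state rules)
    iptables -A INPUT -p tcp --dport 6622 -m state --state NEW -j ACCEPT
    # Step (b): log everything that was not matched above
    iptables -A INPUT  -j LOG --log-prefix "INPUT drop: "
    iptables -A OUTPUT -j LOG --log-prefix "OUTPUT drop: "
    # Only after the syslog goes quiet (step (d)) switch the default policies
    iptables -P INPUT DROP
    iptables -P OUTPUT DROP
    iptables -P FORWARD DROP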

    Method number 2
    At the end of each chain, add a rule that sends all remaining packets to REJECT.
    That is, in the standard case you need to add three rules: iptables -A INPUT -j REJECT, iptables -A OUTPUT -j REJECT and iptables -A FORWARD -j REJECT.
    As a result, all packets that are not explicitly allowed by the preceding rules will be rejected with a port-unreachable message.
    Everything else is done the same way as in method #1, including adding the rules with --state RELATED,ESTABLISHED.
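    Under method #2 the default policies stay at ACCEPT; the same skeleton as above simply ends with explicit rejection rules instead:

    # ...all the ACCEPT rules from the method #1 skeleton go here first...
    # Reject everything that was not explicitly allowed above
    iptables -A INPUT   -j REJECT
    iptables -A OUTPUT  -j REJECT
    iptables -A FORWARD -j REJECT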

    Method #2 is the more ideologically correct one, and I advise you to use it, but the choice is yours.
  6. SELinux
    A very interesting and promising thing. It restricts the access rights of processes everywhere: every action a process is allowed to perform must be spelled out in the policy associated with it.
    There is another similar system, AppArmor (as far as I know, it is used in Ubuntu Linux).
    Honestly, I have not yet worked closely with either system (so I cannot give specific recommendations). I partially used SELinux in one project, but due to the time pressure I was under then, I had to postpone that task for a while. I will definitely return to setting up SELinux soon, because I really liked it.
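    For those who want to poke around on their own, a few basic commands (this is only the surface, writing policies is a separate topic):
    getenforce      # current mode: Enforcing, Permissive or Disabled
    setenforce 0    # temporarily switch to Permissive (log violations, do not block)
    setenforce 1    # back to Enforcing
    # Denied actions end up in the audit log:
    grep denied /var/log/audit/audit.log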

Monitoring

Break-ins and accidents, as everyone knows, are easier to prevent than to clean up after.
This is why a well-tuned monitoring system, one that alerts the administrator at the first signs of a problem, is so valuable.
  1. Nagios
    There are several large monitoring systems (Nagios, Zabbix, Cacti, Munin).
    I tried them all and eventually settled on Nagios: a system that is very convenient and flexible to configure.
    It has a huge number of monitoring plugins, and if no suitable plugin exists, you can write your own in any language you like (I use bash and python for this, for example).
    At a minimum, you should set up monitoring of CPU load, memory, swap, free disk space and load average. It is also highly desirable to monitor the availability of critical services (for example, apache, mysql, nginx, tomcat, etc.).
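    Since writing your own plugins was mentioned: a Nagios plugin is simply a program that prints one status line and exits with code 0 (OK), 1 (WARNING) or 2 (CRITICAL). A bash sketch that checks free space on / (the thresholds are examples):

    #!/bin/bash
    # Hypothetical check_root_space plugin: how full is / ?
    WARN=80
    CRIT=90
    USED=$(df -P / | awk 'NR==2 {gsub(/%/,""); print $5}')
    if [ "$USED" -ge "$CRIT" ]; then
        echo "CRITICAL - / is ${USED}% full"; exit 2
    elif [ "$USED" -ge "$WARN" ]; then
        echo "WARNING - / is ${USED}% full"; exit 1
    else
        echo "OK - / is ${USED}% full"; exit 0
    fi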
  2. MRTG / RRD
    The dynamics of changes in various indicators are conveniently viewed on graphs generated by MRTG.
    MRTG lets you see, in a convenient form, how various system parameters depend on time.
    For example, you can see how CPU and memory usage change with the time of day. Graphs can be drawn for almost any indicator, from the processor temperature to the number of queries to the database. MRTG is an extremely useful instrument for analyzing the state of the system.
    MRTG has several limitations and shortcomings that can be worked around using the RRDtool utility by the same author.
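    To give a flavor of RRDtool, a minimal round-robin database for one gauge value (the names, step and retention are illustrative):
    # Create a database expecting one value every 300 seconds, one week of history
    rrdtool create load.rrd --step 300 DS:load:GAUGE:600:0:U RRA:AVERAGE:0.5:1:2016
    # Feed it the current load average (e.g. from cron every 5 minutes)
    rrdtool update load.rrd N:$(cut -d' ' -f1 /proc/loadavg)
    # Draw a graph of the last 24 hours
    rrdtool graph load.png --start -86400 DEF:l=load.rrd:load:AVERAGE LINE1:l#0000FF:"load average"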
  3. Smartd + smartmontools
    Lets you monitor the state of hard drives and detect suspicious readings at an early stage. Can be integrated into Nagios.
    For example, a non-zero value of the Reallocated_Sector_Ct attribute means that bad sectors have appeared on the disk, and appeared some time ago, because SMART learns about bads only after the factory remap table is full (I may be slightly mistaken here, I have not refreshed this theory in a long time; it is possible that on modern hard drives bads are immediately visible through SMART).
    smartd can and should be configured so that all suspicious readings are immediately sent to the administrator by email.
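    A sketch of what this can look like in /etc/smartd.conf (the address is an example):
    # Monitor all detected disks with the default set of checks (-a)
    # and mail the administrator about anything suspicious (-m)
    DEVICESCAN -a -m admin@example.com
    # Or per disk, adding a short (daily) and long (weekly) self-test schedule:
    # /dev/sda -a -m admin@example.com -s (S/../.././02|L/../../6/03)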
  4. Log analyzers
    Obviously, watching the system logs for errors by hand is a thankless task.
    Many log analyzers have long existed for this purpose.
    I advise installing several of them at once and choosing the one you like best.
    You can start with logwatch.
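    For example, logwatch can be run from cron to mail a daily digest (the address is illustrative):
    # Detailed digest of yesterday's logs, mailed to the administrator
    logwatch --detail High --range yesterday --mailto admin@example.com
    # Or inspect a single service right in the terminal
    logwatch --service sshd --range today --output stdout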
  5. Remote syslog
    What do you think is the first goal of an attacker who has penetrated a system?
    The goal is to hide his presence.
    To do this, he will certainly remove all traces of his actions from the system logs (and also try to replace some system utilities, but more on that below).
    To protect against the consequences of log deletion, you need to arrange for the logs to be written to a remote server as well. It is desirable that ssh access to this server be impossible (you can go there via IP KVM yourself) or at least very difficult, otherwise nothing will prevent the attacker from deleting the untouched copy of the logs stored there. On the log server itself, it is most convenient to store the logs in some DBMS.
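    With rsyslog, duplicating logs to a remote server is a small change (legacy syntax; logserver is a placeholder name):
    # On the client, in /etc/rsyslog.conf:
    # send a copy of everything to the log server (@ = UDP, @@ = TCP)
    *.*    @@logserver:514
    # On the log server, enable the TCP input:
    $ModLoad imtcp
    $InputTCPServerRun 514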
  6. HIDS and NIDS
    A HIDS monitors the state of the system (logs, the integrity of system utilities, etc.) and informs the administrator if it suspects a security breach. It helps when the attacker from the previous item replaces some system utility in order to hide his presence or to secure his access to the system. For example, the utilities ps, who, w and last can be replaced so that the administrator cannot see who is logged in to the system at the moment, and iptables and sshd can be replaced so that they let the attacker in freely.
    A HIDS certainly cannot prevent such actions, but it can notify the administrator about them.
    One example of a HIDS is OSSEC.
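    The idea behind the integrity-checking part of a HIDS fits in two lines (a real system like OSSEC does far more, of course; the paths are examples, and the baseline must be stored off the machine):
    # Take a baseline of checksums once
    sha256sum /bin/ps /usr/bin/who /usr/bin/w /usr/bin/last > baseline.sha256
    # Later, verify that none of the binaries have changed
    sha256sum -c baseline.sha256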

    NIDS. If a HIDS monitors the internal state of the system, then a NIDS analyzes suspicious network activity (port scanning, password guessing, various attacks against system services, etc.). The best-known NIDS is Snort.


Backups

  1. As you know, system administrators are divided into those who do not make backups, those who already make them, and those who only think that they do.
    This saying is filled with deep meaning.
    Backups may be needed in many situations, for example after hardware problems, after the system has been hacked, or after incorrect actions of an administrator or a developer.
    Backups can be roughly divided into file system backups and database backups.
    It is very desirable to have a separate server for storing backups, ideally located away from all the other equipment. Away means at a distance of 5 kilometers, no less.

    A lot of software has been written for backups, but I prefer to use scripts written by myself: that way I get full control over the process.
    For file system backups, the easiest tool is rsync with the appropriate options. In most cases an incremental backup is the more convenient choice.
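    A sketch of an incremental rsync backup that hard-links unchanged files to the previous snapshot (the paths and host name are placeholders):
    TODAY=$(date +%F)
    # Each run produces a full-looking snapshot, but files that did not
    # change are hard-linked to the previous copy and take no extra space
    rsync -a --delete --link-dest=/backup/server1/last \
        server1:/etc/ /backup/server1/$TODAY/
    ln -sfn /backup/server1/$TODAY /backup/server1/last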
    For database backups, it is best to use the utilities provided by the database vendors, though in the case of mysql I deviated slightly from this rule.
    The standard backup utility, mysqldump, does not behave very well on large databases: while the dump is being taken it locks the tables against writes (for purely InnoDB databases this can be avoided with the --single-transaction option). This matters a lot with large databases, 10-20 GB and up, where the lock can last 10-20 minutes. In addition, restoring a large database from such a dump takes many hours.
    In such a situation, it is more reasonable to use a master → slave setup.
    A mysql slave is configured for the database that needs to be backed up, and from then on backups are taken not from the main database but from its slave. In the event of an accident, you may not even need to restore the database right away: simply redirect all queries to the slave server, which saves a lot of time. For taking backups from the slave and for its initial preparation, it is convenient to use the innobackupex utility from Percona: among other things, it lets you make a backup suitable for later bringing up a slave without stopping the main database (without locking it for writing).
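    For illustration, taking such a backup with innobackupex might look roughly like this (the credentials, paths and timestamp are placeholders):
    # Copy the data files while mysql keeps running;
    # --slave-info records the replication position for rebuilding a slave
    innobackupex --slave-info --user=backup --password=secret /backup/mysql/
    # Make the copied files consistent (replays the InnoDB log)
    innobackupex --apply-log /backup/mysql/2011-05-20_03-00-00/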
    One more very important point: backups must be checked for correctness periodically, otherwise you risk discovering empty or broken archives at the most inopportune moment.


It is very important to keep documentation (for example, in a wiki) on all the servers you maintain and not to forget to update it with every change to the server settings. Very often a system administrator cannot recall the details of setting up a system he has not worked with for a long time.

That's all. Once again, please share any valuable additions in the comments to the article. I would very much like the article to reach the front page so that as many specialists as possible notice it.

P.S. Some edits were made to the article following the discussions in the comments. Thanks to everyone.

Source: https://habr.com/ru/post/120700/

