
Malware for GNU/Linux and the fight against it

I read a topic on Habr: "Trojan.Winlock has begun spreading through LiveJournal". In principle, there is nothing fundamentally new in it, and of course the comments are, as always, full of messages like "There's no such thing on Linux/Mac/FreeBSD/Plan9, and Windows users have only themselves to blame", which start small flame wars. Here I want to start a new flame war of my own: to share some thoughts, find out who thinks what, learn as much as possible about the existence of malware on GNU/Linux, and think about what to do about it.

Problems


Software for GNU/Linux, like software for any other OS, contains vulnerabilities, and nothing can be done about that. Not long ago I came across news of a vulnerability found in some player or codec library (I don't remember exactly, and it doesn't matter). The vulnerability allowed arbitrary code execution when processing a specially crafted file. For that matter, you could just as well take Flash as the example; it doesn't really matter.

Suppose we have a terribly vulnerable player: when it opens a specially crafted file, arbitrary code gets executed. What can such code do? If there are no vulnerabilities in the kernel (or in other important components) that allow privilege escalation, it can only do damage within the home directory. But it is the home directory that holds the most valuable things (yes, I remember about backups). That is, it can corrupt or delete the user's files, turn the user's computer into a botnet node, and exfiltrate valuable information (passwords, documents, and so on). It will not be able to damage the rest of the system, harm other users, or build a serious LinLock (one that cannot simply be removed).

Can the code persist in the system? Even if /home is mounted with the noexec option, an attacker can still use scripts. Nothing prevents malicious code from creating a file like ~/.config/long/path/hard/to/find/zlovred.py and adding it to autorun via .profile. It can also register itself in ~/.config/autostart, and probably in other places too; I'm sure more can be found.
That is, the code can do damage to the home directory once, and it can also register itself in autostart and do damage regularly, for example to flash drives. Speaking of flash drives: suppose the vulnerability is not in the player but in a library that, among other things, is used by some thumbnailer to generate previews of video files. The user inserts a flash drive, opens it in Nautilus, Nautilus launches a helper process to create the previews... and the malware is in the system. You don't even need to click anything: plug in a flash drive, get a virus.
If I have missed or overlooked something, please point it out in the comments. And even if none of the above works out, what would stop someone from ruining the user's happy life through a vulnerability in Flash when the user visits a hostile website?

Yes, Linux is not that widespread, there is a zoo of distributions, and a virus writer would have to account for a bunch of nuances, but where there's a will, there's a way. Right now Linux is not very popular; what happens when it gains popularity? Will botnet owners give up on growing their botnets just because writing malware for Linux is harder? I don't think so.

What to do?


Antivirus for Linux? No way!

All programs should run with minimal privileges. The privileges of the user the program runs as are excessive. Yes, this was invented long before me; I just want to lay out my thoughts on how it could work.

What does a player need (what should it be allowed to do)? All it has to do is read files, write to ~/.config/player and, at most, open URLs. Everything else is unnecessary, or rather "not allo-o-owed" (in Polunin's voice). Flash needs even less: just the network and something like ~/.config/adobe/flash (or wherever it keeps its data). A few more directories may be needed, for example for temporary files, but the set is clearly limited.

So what to do? Mandatory access control already exists: SELinux and AppArmor. It just seems to me that they could use some refinement. Take AppArmor, for example (I'm only superficially familiar with SELinux... maybe everything is already fine there): for every application you want to confine, you have to write a special config and put it in /etc/apparmor.d/ (a sketch of such a config is shown right after the list below). This approach seems insufficiently flexible to me (as far as I know, SELinux is not especially flexible either). What is missing is the ability to create such profiles "on the fly", without superuser rights. Namely:
  1. An interface for an application to create a security profile for itself
  2. An interface for running an application with a specified profile
  3. The ability to grant an application the privilege of editing other applications' profiles on the fly, i.e. changing profiles that have already been applied
  4. Profile templates, plus changes to the executable file format and possibly to packages
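
For reference, here is roughly what one of today's hand-written profiles in /etc/apparmor.d/ could look like for the hypothetical player from the example above. The binary path, abstractions and rules are purely illustrative, and the profile would still have to be written and loaded by root:

```
# /etc/apparmor.d/usr.bin.player  (illustrative)
#include <tunables/global>

/usr/bin/player {
  #include <abstractions/base>
  #include <abstractions/audio>

  # the user's files may only be read
  owner @{HOME}/** r,

  # the only place the player may write
  owner @{HOME}/.config/player/** rw,

  # everything not listed here is denied
}
```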


For example, that same hole-ridden player starts up. First of all, the player process applies a restrictive profile to itself through a special API. That is, the developer of the player has to explicitly add the corresponding code with the list of rights the application needs, something like "all files may be read, the application's own config may be written, everything else is forbidden". In other words, this mechanism is for conscientious developers who know their application may contain bugs and want to protect the system from the actions of their own program. Alternatively, the corresponding code can be added by a distribution. Yes, the code will have to be edited, but the changes will be small. With this mechanism rights can only be reduced, never increased (as is already the case in AppArmor); in addition, it should allow the profile to be applied only once, so that if the application is compromised afterwards, nothing can be changed. Such a security profile can be applied immediately after launch, or somewhat later: for example, before applying it the application may need to read or write some files that are no longer needed later in its work.
When to apply the profile is up to the developer.
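
Today's libapparmor already lets a process switch itself into a profile, though the profile must have been written and loaded beforehand; what it does not offer is composing the profile from inside the application, which is exactly what point 1 proposes. A minimal sketch of the existing call (the profile name "vulnerable-player" is made up and assumed to be loaded already):

```c
/* build: cc player.c -lapparmor */
#include <stdio.h>
#include <sys/apparmor.h>

int main(void)
{
    /* Switch this process into an already-loaded AppArmor profile.
       Unlike the API proposed above, the profile cannot be composed
       here at run time; it must already exist in the kernel. */
    if (aa_change_profile("vulnerable-player") != 0) {
        perror("aa_change_profile");
        return 1;
    }

    /* From here on the process is confined: it can only do what the
       "vulnerable-player" profile allows. */
    /* ... open and play media files ... */
    return 0;
}
```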

Not all applications can be handled this way; for example, there will be problems with closed-source applications. In addition, a situation may arise where different profiles need to be applied depending on the launch context. That is what mechanism number two is for. Using it, one application can launch another application with a specified profile. In my opinion, the most fitting example is a browser and its plugins: Flash, Java applets, Silverlight and the rest receive security profiles at startup that restrict their rights. Let Flash be riddled with holes, let ActionScript have an API for file system access; it still won't be able to do anything.
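
A rough sketch of mechanism number two using what AppArmor already provides: the parent (say, the browser) asks the kernel to switch the child into a named profile at the next exec. The profile name "flash-plugin" and the helper binary "flash-plugin-host" are invented for the example, and the profile would have to be loaded in advance; the same effect is available from the shell with the aa-exec utility (aa-exec -p <profile> -- <command>).

```c
/* build: cc launch_plugin.c -lapparmor */
#include <stdio.h>
#include <unistd.h>
#include <sys/apparmor.h>

int main(void)
{
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Ask AppArmor to apply the "flash-plugin" profile when the
           child calls exec; the plugin then starts already confined. */
        if (aa_change_onexec("flash-plugin") != 0) {
            perror("aa_change_onexec");
            _exit(1);
        }
        execlp("flash-plugin-host", "flash-plugin-host", (char *)NULL);
        perror("execlp");
        _exit(1);
    }
    /* Parent: the browser carries on and talks to the confined
       plugin over IPC. */
    return 0;
}
```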

So far so good: most applications can be confined this way. But some will pose difficulties. What do we do with applications that legitimately need to read or write arbitrary files?
A browser needs to be able to save downloaded files. You could limit it to a dedicated ~/Downloads directory, but that would be security at the expense of convenience. And an editor, by its nature, needs to be able to read and write any file. This is where point number three helps. The file open and save dialogs need to be moved into separate processes, and in, say, /etc/apparmor/trusted_programs you could declare that /usr/bin/gtk-open-dialog and /usr/bin/gtk-save-dialog are allowed to change the profiles of other, already running applications on the fly (for example, via /proc/[pid]/aa_profile). Naturally, they may edit the profiles only of applications belonging to the same user that the special programs themselves (the open and save dialogs) run as.
The browser starts, and a profile is applied to it that restricts anything and everything. When the browser needs to save a downloaded file, it launches gtk-save-dialog (naturally, KDE will need its own equivalent). The user explicitly chooses the file name to save to. gtk-save-dialog adds the corresponding exception to the profile of the browser process and returns the file name to the browser. Thus the application can read and write only those files the user has explicitly authorized. The application can even be forbidden to read the directories themselves, so that it cannot obtain a list of files (although getting a list of files is safe enough, I think). The same can be done with office suites (and many other programs). Let the word processor allow all the macros and other unsafe things it wants: it will be unable to damage anything except the office documents themselves, and only those open at that moment. For the user everything stays as it was: the same programs, the same dialogs, nothing new, no inconvenience. For the programmer it can also work without changes: the same call to the function that shows the dialog box. Only the widget libraries (GTK, Qt, ...) will need to be tweaked.
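
To make point three more concrete, here is a sketch of what the privileged save dialog might do after the user picks a file name. Everything in it is hypothetical: the /proc/[pid]/aa_profile file and the rule syntax written into it are the interface proposed above, not something the kernel offers today, and the PID and path are just examples.

```c
#include <stdio.h>
#include <sys/types.h>

/* Hypothetical: append an "allow read/write" rule for a single file to
   the profile of an already running process, the way the proposed
   trusted gtk-save-dialog would do it. */
static int allow_file_for_pid(pid_t pid, const char *path)
{
    char proc_path[64];
    snprintf(proc_path, sizeof(proc_path), "/proc/%d/aa_profile", (int)pid);

    /* Opening this file for writing would itself require the
       "trusted program" privilege described above. */
    FILE *f = fopen(proc_path, "a");
    if (f == NULL)
        return -1;

    /* Invented rule syntax, loosely modelled on AppArmor path rules. */
    fprintf(f, "owner %s rw,\n", path);
    fclose(f);
    return 0;
}

int main(void)
{
    /* Example: the browser with PID 1234 may now write the file the
       user explicitly chose in the save dialog. */
    if (allow_file_for_pid(1234, "/home/user/Downloads/report.pdf") != 0)
        perror("allow_file_for_pid");
    return 0;
}
```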

That leaves point four. Why is it needed? It is needed to safely launch applications from unverified sources. When GNU/Linux becomes more popular and there are lots of applications for it, that is when the fourth point will matter. As a rule, applications on GNU/Linux come from trusted sources, i.e. repositories, but there are situations when you need to install an application from an unverified source. The fourth point is the following: the system ships standard security profile templates, and the executable file carries the name/id of one of them. When the application is started, the corresponding profile is applied; if the application carries no profile name/id, or if it requires potentially dangerous rights, a warning is shown to the user. Thus the user will be able to download and run a great many applications from unverified sources. System applications, of course, do not belong here, since they need many potentially dangerous privileges; but all the small stuff, such as games, little utilities, screensavers and so on, could be launched safely. And if an application requests no special rights at all, the user need not be warned at all ("Downloaded from www.zlovredi.ru. Are you sure you want to run an executable file from an unverified source?"); it would simply run.
Why templates and profile names/ids rather than full security profiles embedded directly in executables? Because then a parser would be needed to check the profile and decide whether it is safe, and it would have to run at every launch; besides, this rules out an attack on the kernel through a security profile from an unchecked source (who knows what might be written in there... and the kernel gets a DoS). All of this, as far as I can tell, resembles the way applications for mobile devices are handled.
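
A sketch of how a launcher acting on point four might decide what to do. Reading the profile id from the executable is stubbed out, since no such convention exists yet, and the template names are invented:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical: read the sandbox template id embedded in the executable,
   e.g. from a dedicated ELF note section. Stubbed for illustration. */
static const char *read_profile_id(const char *exe_path)
{
    (void)exe_path;
    return "game-basic";   /* invented template name */
}

/* Templates that grant nothing dangerous and need no confirmation. */
static int template_is_harmless(const char *id)
{
    static const char *harmless[] = { "game-basic", "screensaver", "small-utility" };
    for (size_t i = 0; i < sizeof(harmless) / sizeof(harmless[0]); i++)
        if (strcmp(id, harmless[i]) == 0)
            return 1;
    return 0;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s /path/to/executable\n", argv[0]);
        return 2;
    }
    const char *id = read_profile_id(argv[1]);
    if (id == NULL || !template_is_harmless(id)) {
        /* No template, or a potentially dangerous one: warn the user. */
        printf("'%s' comes from an unverified source and asks for potentially "
               "dangerous rights. Run it anyway? [y/N]\n", argv[1]);
    } else {
        /* Harmless template: apply it silently and launch. */
        printf("Applying template '%s' and launching '%s'.\n", id, argv[1]);
        /* ... e.g. apply the template profile and exec the program ... */
    }
    return 0;
}
```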

Why did I write this?


I have simply laid out my thoughts, and I am interested in discussing the topic of security in GNU/Linux, which I propose to do in the comments. Of course, I have not described anything fundamentally new, but I have never come across some of the ideas described here (for example, moving the open and save dialogs into separate processes), so maybe it will actually turn out to be a useful idea. Why did I not rush to write kernel patches instead of posting a topic on Habr? Unfortunately, my knowledge of C, and of the kernel even more so, leaves much to be desired. Besides, the idea is to discuss these proposals first and then perhaps write about them on Ubuntu Brainstorm (I did write a couple of lines there on this subject, but they were largely ignored, maybe because my English is so-so, or maybe because nobody needs it; if I find the link I will add it here) or a similar resource.

P.S. I am not sure I picked the appropriate blog for this topic. If it does not belong here, I will move it.

[update 1]
Small addition 1:
What I am describing here is an addition to the existing AppArmor. The API for creating a profile from within the application itself is not a replacement for the AppArmor profiles that currently live in text files, but an add-on.
It seems to me it can be useful in some cases:
- One application will be able to work with different profiles depending on how it is used, i.e. restrict its own rights in different ways.
- Rights can be cut not immediately after launch, but a little later.
For example, first a thoroughly reviewed and polished section of code that needs more rights runs; after that the application no longer needs those rights and they are cut off.
That is, this is a supplement that adds flexibility, not a replacement.

Small addition 2:
Rights can only ever be reduced. Only under point 3 can something previously forbidden be allowed again.
And only a privileged program can allow it, and only for another process that is already running.
All of this is described in detail above; read it again.

Small addition 3:
As with classic AppArmor, this system of rights will have lower priority than the classic permission system. That is, if something is forbidden to the user himself, then it is forbidden to all his applications too, regardless of their profiles.

[update 2]
It turns out something similar is being developed for Mac OS X.
techjournal.318.com/security/a-brief-introduction-to-mac-os-x-sandbox-technology
developer.apple.com/library/ios/#DOCUMENTATION/Security/Conceptual/Security_Overview/Security_Services/Security_Services.html
Thanks to int80h for the links.
By the way, it includes the ability to restrict rights from within the application itself through a special API, just as I described, and exactly the thing commenters say "nobody needs".
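
For reference, on Mac OS X the call looks roughly like this: a process restricts itself to one of the predefined sandbox profiles (this is the API from the links above; a minimal sketch, not a complete program):

```c
/* Mac OS X only */
#include <stdio.h>
#include <sandbox.h>

int main(void)
{
    char *err = NULL;

    /* Apply a predefined sandbox profile to this process: after this
       call it may still compute, but may no longer write any files. */
    if (sandbox_init(kSBXProfileNoWrite, SANDBOX_NAMED, &err) != 0) {
        fprintf(stderr, "sandbox_init: %s\n", err);
        sandbox_free_error(err);
        return 1;
    }

    /* ... untrusted work happens here, confined by the sandbox ... */
    return 0;
}
```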

I wonder: is there something similar for Windows?

Source: https://habr.com/ru/post/113143/

