I want to talk about what seems to be a simple but important part of any OS: the rules of file placement, or, more simply, about order and convenience. I will address UNIX-like operating systems, specifically the whole variety of distributions based on the Linux kernel. So, a programmer has created a program and wants to distribute it; now there is the task of the user installing it. For this there seem to be standards (FHS, LSB), but the variety of software installation tools simply discourages the programmer, who, to the best of his abilities, implements his own installation means, from a simple README to something more complex. Yes, today most distributions have a package management system that, within the bounds of the distribution, solves software installation issues and to some extent makes the programmer's life easier. And when it became necessary to manage large groups of different distributions (racks in data centers, VMs), various configuration management systems appeared (CFEngine, Bcfg2, Puppet, Chef, etc.). Everything seems fine, yet the feeling that all this variety is a bit of a crutch has not left me for 14 years (since the moment I started poking at Red Hat Linux 5.2)... So what's the matter?
The roots
Let me try to discard everything that is used today to install and remove programs in a typical Linux distribution, and figure out what to do with a program more complex than hello world, removing:
configuration management systems;
package manager;
all installation scripts without exception, including those that perform preliminary actions such as creating directories for libraries, resources, and program documentation, as well as build scripts, etc.;
and even the README / INSTALL file.
What actions need to be performed now to install, and later remove, the program along with all its components? Suppose I managed to figure out where the parts of the program should go, created the necessary directories, copied the files, updated the environment, and the program now runs successfully. But time has passed, I have long forgotten where and how the program's components were placed, I only remember the program's name, and I need to delete it along with all the components and files it created. And here I am at a dead end: what to do? Arm myself with strace and ldd? Search for all directories with a matching name to remove the documentation? All that just to remove one program and its components? And there is no guarantee that everything will be found and deleted. So, to put it bluntly, the current situation with installing and removing programs in Linux is a well-disguised mess.
And what to do with it?
There is a simple install program that copies files and sets the necessary attributes on them; however, the results of its work have to be deleted with plain rm. To launch a program and terminate it there are execve() and _exit(); to open a file and close it there are open()/creat() and close(). When a program is started and stopped it is identified by a process identifier (PID), and an open file by a file descriptor. So why not have, for installing and removing a program, something similar: a deployment identifier for the program, call it a Deploy ID (DID) / Install ID (IID), which in practice could be an ordinary UUID, together with a handful of functions operating on this identifier, something like _appinstall() / _appupdate() / _appuninstall() and so on, and a corresponding command in the manner of ps, say app? And why not place the program with its components, let's say, not in
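To make the idea more concrete, here is a minimal sketch of what such an interface might look like. Everything in it is hypothetical: the _appinstall()/_appupdate()/_appuninstall() names come from the proposal above, while the structure fields, parameters, and return conventions are my own assumptions for illustration, not an existing API.

```c
/* Hypothetical API sketch: nothing here exists in any real libc or in POSIX.
 * A Deploy ID (DID) is simply a UUID identifying one deployed copy of a program,
 * much like a PID identifies a running process or an fd identifies an open file. */
#include <stddef.h>
#include <uuid/uuid.h>   /* uuid_t from libuuid (assumed available) */

typedef struct {
    uuid_t      did;          /* the deployment identifier itself            */
    const char *name;         /* program name, e.g. "foo"                    */
    const char *version;      /* program version, e.g. "1.2.3"               */
    const char *payload_dir;  /* staging directory with the files to deploy  */
} app_deploy_t;

/* Install the payload under a single DID-keyed location and record the DID. */
int _appinstall(app_deploy_t *app);

/* Replace an already deployed copy, identified by its DID, with a new payload. */
int _appupdate(const uuid_t did, const char *payload_dir);

/* Remove every file and directory that was deployed under this DID. */
int _appuninstall(const uuid_t did);

/* Enumerate deployed DIDs, analogous to listing PIDs with ps (the "app" command). */
int _applist(uuid_t *dids, size_t max, size_t *count);
```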
Yes, it would be yet another package manager; convenient GUIs can appear on top of it, and configuration management systems will not go anywhere. But! All of this could go into POSIX, the placement of a program and its components becomes strictly defined, and removal stays easy even if the identifier database is destroyed. Moreover, by walking the file hierarchy it would be possible to restore all the identifiers and recreate the database, just as updatedb rebuilds the index for locate. It becomes simple enough to install several versions or instances of a single program and switch between them. Going further, it would be reasonable to bring programs' configuration files to a common denominator, unifying their format with, say, XML or JSON, and providing functions in the manner of _appconfigdeploy() that validate a configuration against a predefined schema, again standardized, say, in POSIX.
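As an illustration of the recovery argument, here is a rough sketch of how such a database could be rebuilt simply by scanning the file hierarchy, in the spirit of updatedb. The /app root and the "one DID-named directory per deployed copy" layout are assumptions made purely for this example, not something the proposal fixes.

```c
/* Rough sketch: rebuild the list of deployment IDs by scanning the hierarchy,
 * the way updatedb rebuilds the index used by locate. The layout /app/<DID>/
 * (one directory per deployed copy) is assumed only for this example. */
#include <dirent.h>
#include <stdio.h>
#include <uuid/uuid.h>   /* uuid_parse() from libuuid */

int main(void)
{
    const char *root = "/app";          /* assumed location of DID directories */
    DIR *dir = opendir(root);
    if (!dir) {
        perror("opendir");
        return 1;
    }

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        uuid_t did;
        /* Any entry whose name parses as a UUID is taken to be a deployment. */
        if (uuid_parse(entry->d_name, did) == 0)
            printf("found deployment %s at %s/%s\n",
                   entry->d_name, root, entry->d_name);
    }

    closedir(dir);
    return 0;
}
```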
Give up, or test the waters?
There are, in fact, distributions that have moved toward unifying the placement of their components and configuration files, such as GoboLinux (which no longer seems to be developed), and in my opinion the closest thing is the nix package manager and the NixOS distribution built on it. But I think we need a large-scale initiative and discussion, of the same kind as there was for the FHS. If, of course, I am right at all.
So that's the story.
P.S. I once raised a discussion on this topic back in 2006, on the gentoo.ru forum.
UPD: Before downvoting right away, ask yourself a question: would it suit a programmer if, say, the way a process is forked differed between Debian and Red Hat? I think not (remember, it is the same platform). So why is crutch-ism, in the form of different rules and approaches to installing programs, considered the norm?
UPD: As the comments on LOR show, the inability to part with stereotypes is, alas, peculiar not only to hardened Windows users, so let me add some explanation. The scheme follows simple logic: one id for one copy of a package deployed into the system, without destroying the already established and standardized basic file hierarchy (unlike NixOS, where the changes are excessively total: for example, /usr contains only /usr/bin/env -> /nix/store/id-coreutils-version/bin/env, even though the nix package manager itself lets you keep your own store under $HOME on any distribution without affecting the main system at all).
The obtained id acts as an identifier for all parts of the program, adding only one level to the file hierarchy; this gives modest but convenient isolation of programs' components from each other (including several versions of the same program), and the same id makes it easier to control access rights to the program and its components.
Of course, there are sacrifices, the main one being the human-readable path (hrp); in fact, all of the misunderstanding stems from it: people see KISS in the hrp and are perplexed as to why the path to a program and its components should become more complicated. But there is nothing complicated here; look at the same fstab: why did LABEL and UUID come into use for mounting? Because people wanted convenience. So I too tried to get my conveniences, namely:
better control over a program, regardless of the presence or absence of repositories;
the ability to run several versions of a program without fussing with prefixes and slots;
the ability to access parts of a program through the program name (just as we place a call from a smartphone's dialer without typing the initialization and dialing commands for the GSM module every time), without having to juggle tools, write aliases, or edit environment variables.