
To date, version control systems fall into two categories: client-server and distributed. Yet despite the fundamental difference between them, we still rely on a central server to synchronize work between team members.
And what happens if one day that central server burns down?
Let's discuss it.
Client-server architecture
This category includes systems such as CVS and SVN. To work with them, you must have a server running special versioning software, and every time you fetch source-code updates or commit changes, you contact this central server. If it fails, you lose the entire change history, which, alas, cannot be restored. Backups, of course, mitigate this: you lose only the part created after the last backup was taken.
In contrast to this model, advocates of distributed version control systems promote them by pointing out that no central server is needed and that this model is therefore better protected against a central-server crash.
Distributed architecture
But have you ever seen a team using Git or Mercurial work without a central server? Honestly, I never have. Still, the DVCS supporters do have a point. When you fetch the code from the central repository, you receive not only the latest version but also the change history: the whole repository comes to you. You, in turn, can act as a central server for another team member. But if that team member pushes his changes to your repository, you will then need to push them on to the central server so that all team members have the latest code. So what do we end up with? You cannot do without a central server, but if it collapses, you can restore the history and everything else from any clone, and backups no longer play such an important role.
Equally distributed architecture
Now imagine a system in which you would not need to maintain a server for synchronization, yet all team members would always have the latest version of the code. I picture this system as follows: every team member keeps the full source code, and when one member commits changes, they are transmitted to, say, two colleagues, who in turn pass them to two more, and so on, until the entire team has the current version. In terms of fault tolerance, this system looks nearly perfect: there is no central server, no risk of data loss when it crashes, and no time spent recovering lost data after a server crash. But the idea also has drawbacks, namely that if a team consists of only one or two people, the probability of data loss is very high. One solution to this problem is, after all, to use a server. Another is to distribute the program's repository not only among the members of the team developing the product, but also among others who have this VCS installed. Of course, for those unrelated to the development of the product, the code must be encrypted before delivery.
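To get a feel for how quickly such fan-out replication reaches everyone, here is a minimal Python sketch. It is purely illustrative: the `propagate` function and the fan-out of two are my assumptions, not an existing implementation, and real propagation would happen over a network rather than in a loop.

```python
def propagate(members, origin, fanout=2):
    """Simulate fan-out replication: the originator sends a change to
    `fanout` peers, each of whom forwards it to `fanout` more unreached
    peers, until every member has it. Returns the number of rounds."""
    frontier = [origin]                           # members forwarding this round
    pending = [m for m in members if m != origin]  # members without the change yet
    rounds = 0
    while pending:
        next_frontier = []
        for node in frontier:
            for _ in range(fanout):
                if not pending:
                    break
                peer = pending.pop(0)             # peer receives the change
                next_frontier.append(peer)        # and will forward it next round
        frontier = next_frontier
        rounds += 1
    return rounds

team = ["dev%d" % i for i in range(15)]
print(propagate(team, "dev0"))
```

With a fan-out of two, the number of members holding the change roughly doubles each round, so even a fifteen-person team is fully synchronized in three rounds.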
So, on the technical side, for the system to work, a service must be installed in the operating system that accepts changes from colleagues, applies them to the local repository, and forwards the data on to other team members. When a developer needs to update, he receives the code from this local repository.
The mechanism for committing changes is then as follows: the developer runs the commit command, the changes land first in the local repository, and the service picks them up from there and distributes them across the team.
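The commit-and-distribute mechanism above can be sketched in a few lines of Python. This is a hypothetical in-process model, not a real service: the class name `SyncService`, the `commit`/`receive` methods, and the deduplication by change id are all my assumptions about how such a daemon might be structured.

```python
class SyncService:
    """Sketch of the background service described above: it accepts
    incoming changes, applies them to the local repository, and
    forwards each new change to a couple of peers."""

    def __init__(self, name, fanout=2):
        self.name = name
        self.fanout = fanout
        self.repo = []    # local repository: ordered list of changes
        self.seen = set()  # change ids already applied (stops forwarding loops)
        self.peers = []    # services of the other team members

    def commit(self, change_id, payload):
        # A developer's commit lands in the local repository first;
        # the service then pushes it out to the team.
        self.receive(change_id, payload)

    def receive(self, change_id, payload):
        if change_id in self.seen:
            return  # already have this change; do not re-forward it
        self.seen.add(change_id)
        self.repo.append((change_id, payload))
        for peer in self.peers[: self.fanout]:
            peer.receive(change_id, payload)


# Wire up a three-person team and commit a change on one machine.
a, b, c = SyncService("a"), SyncService("b"), SyncService("c")
a.peers = [b, c]
b.peers = [c, a]
c.peers = [a, b]
a.commit("r1", "initial code")
print(all(("r1", "initial code") in s.repo for s in (a, b, c)))
```

The `seen` set is what keeps the gossip from cycling forever: once a service has applied a change, it silently ignores further copies of it instead of forwarding them again.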
Well, the idea seems simple enough, and there seems to be nothing more to add.
So why am I writing this article? Because I would like to build such a system, and I simply want to share my ideas with the community. And perhaps there are those who would gladly become its grateful users.
Thanks for your attention.