
The idea of parallel computation is one of those that is literally "in the air" - and naturally so, because any job is easier to do together. Parallel computing appeared long before the first computer, but the idea truly blossomed in the computer era, when tasks emerged that required enormous computing power, along with devices that "the whole world" was ready to supply. These days mark exactly 15 years since my first acquaintance with distributed computing projects, which is a good reason to write about their history and their present.
Brief theoretical foreword
A bit of theory for those who have not looked into distributed computing before. A project of this kind distributes the computational load across client computers: the more of them, the better. A control center is also required; its functions are as follows:
- Distributing "raw" work units to clients and receiving the processed results back;
- Tracking lost and incorrectly computed work units;
- Interpreting the returned results in light of the common goal;
- Compiling and visualizing statistics.
So, the program installed on the client receives a piece of the task, processes it, and sends the result back to the center. In the earliest projects the blocks were sent manually, by e-mail; later the transport was fully automated over an Internet connection (which in the late '90s did not sound as commonplace as it does now). Of all the components of a computer, the program essentially uses only the processor, "mopping up" its idle capacity. The client application runs at low priority and does not interfere with anything else; still, constant one hundred percent CPU utilization has its downsides, first of all increased power consumption and heat output.
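To make the cycle concrete, here is a minimal sketch of such a client loop in Python. Everything in it is hypothetical: the server URL, the /work and /result endpoints, and the placeholder computation stand in for whatever protocol and workload a real project uses; the sketch only illustrates the fetch, compute, report loop described above.

```python
import json
import time
import urllib.request

SERVER = "https://example.org/api"   # hypothetical control-center URL


def fetch_work_unit():
    """Ask the control center for the next "raw" block to process."""
    with urllib.request.urlopen(f"{SERVER}/work") as resp:
        return json.load(resp)        # e.g. {"id": 42, "payload": [...]}


def process(unit):
    """CPU-bound work on the block; the real math depends on the project."""
    result = sum(x * x for x in unit["payload"])   # placeholder computation
    return {"id": unit["id"], "result": result}


def submit_result(result):
    """Send the processed block back for verification and bookkeeping."""
    req = urllib.request.Request(
        f"{SERVER}/result",
        data=json.dumps(result).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)


if __name__ == "__main__":
    # A real client would also lower its own priority (e.g. os.nice(19)
    # on Unix) so the computation never competes with interactive work.
    while True:
        unit = fetch_work_unit()
        submit_result(process(unit))
        time.sleep(1)                 # be polite to the server between blocks
```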
Despite the apparent simplicity, building a distributed computing system before off-the-shelf solutions existed was a non-trivial task - at a minimum, you had to write clients for several operating systems plus a management server, and make it all work together. Some projects never overcame these "childhood diseases" and did not reach their goals. Others, however, were quite successful - one of them kept me occupied for almost five years.
The distributed.net era
So, early 1998. One of my colleagues at the time, an enthusiastic and competitive person, told us about an unprecedented marvel: a project that united computers from all over the world into a single computing network. The idea somehow appealed to everyone at once, including the technical management, and the process got going. Back then we had about a hundred workstations and a dozen servers, and nearly all of them were pressed into service.

The project we joined was called Bovine RC5. As the name implies, its idea was a "brute force" attack (a straightforward enumeration of candidates) on the RC5 encryption algorithm. The first key was 40-bit; it was cracked in a little over three hours. The 48-bit key took 13 days, the 56-bit one 265 days. By the time we joined, the project was in its 64-bit phase, which lasted almost five years.
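To show what "straightforward enumeration" means in practice, here is a toy sketch, not the real distributed.net client: it walks through one assigned block of the key space and tests every candidate key against a known plaintext. The toy_decrypt function is an explicit stand-in; a real client would run the actual RC5 cipher here.

```python
def toy_decrypt(key: int, ciphertext: bytes) -> bytes:
    """Stand-in for RC5: XOR every byte with the low byte of the key.

    A real client would run the actual RC5 round function here.
    """
    return bytes(b ^ (key & 0xFF) for b in ciphertext)


def search_block(start_key: int, block_size: int,
                 ciphertext: bytes, known_plaintext: bytes):
    """Try every key in [start_key, start_key + block_size).

    Returns the matching key, or None if the block holds no match -
    which is exactly what a client reports back to the control center.
    """
    for key in range(start_key, start_key + block_size):
        if toy_decrypt(key, ciphertext) == known_plaintext:
            return key
    return None


if __name__ == "__main__":
    plaintext = b"known plaintext"
    ciphertext = toy_decrypt(42, plaintext)              # XOR is its own inverse
    print(search_block(0, 256, ciphertext, plaintext))   # -> 42
```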
Bovine RC5 quickly gained popularity. The project organizers, the distributed.net community, correctly identified the main driving force of the process: the participants' competitive excitement. The buzz took on a global scale: team competed with team, country with country. "Catch up and overtake" practically became the meaning of life for hundreds of thousands of people, and for Russians, as usual, it also became something of a national idea. Every morning started for us with a look at the team and global statistics, and the RC5 client was installed on any computer that fell into our hands. It got to the point where we ran the "cows" on other people's servers that we had remote access to - until the first conflict over it.
The distributed.net client interface has not changed much over the life of the project
After the 64-bit phase was completed, interest in the project began to fade, primarily because the next, 72-bit phase promised to be very long. The premonition did not deceive us: it has been running for more than 10 years, and in that time only a little over 2.5% of the key space has been checked. Most likely things will never get to an 80-bit key, even though the computing power of participants' machines has grown many times over the course of the project. Say what you like, an expected stage duration of around 400 years is genuinely daunting.
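The 400-year figure follows from a naive linear extrapolation of the numbers above, assuming the current rate stays constant (which, of course, it will not as hardware improves):

```python
years_elapsed = 10       # how long the 72-bit phase has been running
fraction_done = 0.025    # ~2.5% of the key space checked so far

# Naive linear extrapolation: total duration = elapsed time / fraction done
print(years_elapsed / fraction_done)   # -> 400.0 years
```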
We count rulers and look for aliens
Bovine RC5 belongs more to the realm of sporting events than to solving real computational problems, all the more so since RSA, which launched the challenge in the first place, later distanced itself from it. distributed.net does, however, run a project of more value to science: the search for optimal Golomb rulers, though it too creaks along more and more slowly with each added unit of ruler length.

Naturally, the distributed.net community is far from the only organizer of volunteer distributed computing projects. On the contrary, there are at least hundreds of active projects in the world, some with a rich history of their own: the search for Mersenne primes has been running since 1996, and in 1999 the SETI@home project began, which analyzes radio telescope data to find out whether there is life in the universe. All in all, as already mentioned, the options are practically innumerable: you can help search for cures for grave diseases, improve the Large Hadron Collider, study the three-dimensional structure of proteins, or tackle any number of mathematical problems and hypotheses... You are given a huge choice of projects to join, and one thing can be said with certainty: a load will be found for your PC's processor, to the delight of both. Just do not forget to keep an eye on the temperature.
The BOINC client stands out from all the others by actually having a polished design
An important event in the life of the "distributed" community was the appearance in 2005 of the BOINC (Berkeley Open Infrastructure for Network Computing) platform from the University of California, Berkeley - open source, as is their custom. BOINC is a ready-made framework (standard server components plus a client) for network computing projects, which greatly simplifies launching them, although it does not eliminate the mental labor entirely, since a number of server modules still have to be adapted to the specific task. The client, though, is essentially ready out of the box, well tuned and good-looking. Moreover, it lets you participate in several BOINC-compatible projects at once. The result is a collection of heterogeneous but technologically unified tasks, which benefits both the individual projects and the idea as a whole.
I would like to finish on a lyrical note again. Perhaps distributed computing is not the best use of your processor cycles; then again, if you think about it, are the other uses any better? And there is probably no easier way to feel like part of a team. My "career" in this field ended in 2004, almost nine years ago. And here is the surprise: while writing this post I checked my team's statistics, and imagine that - it is still alive and still holds first place in our region. Russia has not yet run out of enthusiastic people!
I invite everyone who has ever taken part in distributed computing projects to chime in and add to my story - perhaps I have missed something?