I came across the Habr topic “Programming parallel processes - is it too difficult?” and realized that there are people on Habr who are interested in this subject. I could not resist expressing my opinion on the matter.
Briefly, the point is this: processor manufacturers (Intel in particular) have stopped raising clock frequencies and are instead increasing the number of cores per processor. Moreover, multi-core processors are now mass-produced not only for servers but also for desktops. Meanwhile, the overwhelming majority of desktop programs are single-threaded and will not run any faster on a multi-core processor. If you run several such programs at once and they all compete for CPU time, then together they will run faster. But, in my opinion, that is not what a desktop user expects from a new processor. Hence the concern that users may see no reason to upgrade their dual-core desktop to, say, an eight-core one.
Intel is therefore keen for software vendors to write multi-threaded programs for the desktop.
However, it is not that simple, especially for software vendors.
There are two problems. First, at the current level of programming-language development, writing multi-threaded programs is many times more expensive than writing single-threaded ones (more expensive because it is many times harder). Second, such programs are of much lower quality because of the complexity introduced by the non-determinism inherent in multithreading.
Edward A. Lee, in his article “The Problem with Threads”, writes: “I argue that in most cases the reliability and predictability of systems cannot be achieved by using threads.”
I do not think Intel will achieve much simply by calling on programmers to improve their skills (which would raise their hiring cost and, in turn, worsen the future quality of desktop software).
However, it is not hard to trace the cause of both problems. Every high-level language I know of, including Java, C#, and C++, forces a high-level programmer who wants to write multi-threaded programs to explicitly manipulate low-level concepts such as threads, mutexes, monitors, and message passing.
In my opinion, this is the root of all today's troubles with multi-threaded programming: high-level programmers are forced to manipulate low-level concepts that belong to the operating system rather than to the end-user program. Until this problem is solved, multithreading will never become mainstream on the desktop, no matter how much Intel would like it to.
This cause is similar to the one that created the memory-management problem, which was fixed by the garbage collector. C++ programmers, high-level programmers, were forced to manage memory constantly and by hand, explicitly manipulating low-level concepts such as allocation and deallocation. Once the virtual machine took this work away from the programmer and handed it to the garbage collector, the problem of working with memory disappeared.
Therefore, a new multithreading technology should do for thread management what the garbage collector did for memory management: take over all the work of manipulating threads and let high-level programmers write multi-threaded programs purely in terms of classes, objects, and the dependencies between those classes and objects. In other words, the new technology should automatically parallelize suitable high-level programs. Incidentally, I believe that information about dependencies between objects is sufficient not only for garbage collection but also for such automatic parallelization.
I also think the idea would be relatively easy to prove by writing the framework as a library for an existing language. Naturally, such a library cannot guarantee the thread safety of the entire system, just as a garbage collector written as a library (unlike one built into a virtual machine) cannot guarantee the type safety of the entire system. But, first, this would be enough to prove the viability of the concept, and second, new program code could be used alongside old, previously written code.
There are many ways to put such a framework into practice. One that I can see is to base it on the principle of dependency injection, in the form of an Inversion of Control container, i.e., obliging the programmer to declare dependencies between objects or classes in a prescribed way. Later on, the virtual machine or the compiler could forbid one object from accessing another if the dependency between them has not been declared, or could automatically assign a default dependency type between them.
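A minimal sketch of such declared dependencies (toy classes of my own, not an existing container): because the object never constructs its own collaborators, the wiring code, i.e. the container, is the one place where every edge of the object graph is visible, and it could in principle hand those edges to an automatic parallelizer.

```java
// Toy sketch of constructor-based dependency injection (hypothetical classes).
// Service never writes 'new Repository()' itself, so the Service -> Repository
// edge exists only in the container, where tools can see and exploit it.
class Repository {
    String load() { return "data"; }
}

class Service {
    private final Repository repo;                   // the declared dependency
    Service(Repository repo) { this.repo = repo; }   // injected, never created inside
    String handle() { return repo.load().toUpperCase(); }
}

class Container {
    // All wiring happens here, in one place: the full object graph is explicit.
    Service service() { return new Service(new Repository()); }
}
```

A compiler or virtual machine that sees this declaration could reject an undeclared access from `Service` to some other object, exactly as suggested above.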
Further down the road, I think we will also have to give up, at least partially, the use of the stack to drive control flow (replacing it, for example, with a graph). The stack is well suited to executing a program in a single thread, but it is an atavism that prevents a high degree of parallelism from emerging in multi-threaded programs.