
1, 1, 2, 3, 5, 8, or how I overcame the Fibonacci dependency


Fibonacci numbers are the elements of the numerical sequence 1, 1, 2, 3, 5, 8, 13, 21, 34, ..., in which each successive number equals the sum of the two previous ones. We can see Fibonacci numbers in many natural objects: in the proportions of the body, or in the Fibonacci spiral traced by a mollusk shell.
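For reference, here is the naive recursive definition that nearly every tutorial starts from (a minimal C++ sketch of the recurrence above, nothing more):

```cpp
// Naive recursive Fibonacci: each number is the sum of the two previous ones.
// fib(1) = 1, fib(2) = 1, fib(3) = 2, fib(4) = 3, fib(5) = 5, ...
unsigned long long fib(unsigned n) {
    if (n < 2) return n;             // base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2);  // the recurrence from the definition
}
```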



Lately, these same Fibonacci numbers have been giving me no peace! Whatever materials on parallel programming I pick up, I run into these numbers everywhere. One gets the feeling that all of parallel programming revolves exclusively around the problem of computing Fibonacci numbers.



The calculation of Fibonacci numbers is covered in a multitude of printed and electronic articles. Even the Wikipedia article on Parallel computing contains an example of their calculation.


What example do the Cilk developers like to give? The calculation of Fibonacci numbers, of course. Fibonacci numbers in the Cilk article "Parallelism for the Masses". Fibonacci numbers in the description of Cilkview. Fibonacci is mentioned in the "Cilk Reference Manual". Simply put, everywhere.
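For the curious, the example in question looks roughly like this (a sketch in Cilk Plus notation; the `cilk_spawn`/`cilk_sync` keywords come from the Cilk documentation, the minimal framing around them is mine):

```cpp
#include <cilk/cilk.h>

// The canonical Cilk demo: spawn the first recursive call as a separate
// strand, compute the second call in the current strand, then join.
int fib(int n) {
    if (n < 2) return n;
    int x = cilk_spawn fib(n - 1);  // may run in parallel with the next line
    int y = fib(n - 2);
    cilk_sync;                      // wait for the spawned call to finish
    return x + y;
}
```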







Fibonacci numbers are also used to demonstrate the T-System automatic dynamic parallelization tool, developed within the SKIF supercomputing program of the Union State of Russia and Belarus: T-System with Open Architecture, the T-Language Tutorial.



Around the Fibonacci numbers, holy wars unfold!



The list goes on and on. Some kind of Fibomania. :)



Actually, such frequent mention of Fibonacci is understandable: it is a simple and illustrative example demonstrating the principles of parallelization, but it is cited far too often. There are, of course, other examples that demonstrate the parallelization of algorithms, but most of them are also solutions to some mathematical problem. That is a bad thing, and I will dwell on this point in more detail.



Oddly enough, Fibonacci numbers and similar mathematical examples have acted as a brake on the popularization of parallel programming. All these articles with examples of parallel sorting and mathematical calculations suggest that parallel programming is something remote, the province of mathematicians solving their own specific problems.



Examples with Fibonacci numbers, instead of demonstrating how easily and effectively a program can be parallelized, leave the application programmer with the feeling that all this has nothing to do with his programs. He thinks not in terms of mathematical algorithms but in terms of working with the GUI, in terms of "files" and "here I need to clear an array". Perhaps he does need to speed up his software package, but he sees no connection to parallelism, because he does not find in his project the kinds of algorithms for parallelization that articles and books write about.



Multi-core systems give the developer many ways to improve the efficiency of his programs, but the literature often looks at this from the extreme position of parallelizing and reworking computational algorithms. Yet there are many other levels at which parallelization can be applied, and we should not forget to tell the developer about them and provide relevant examples. I can give one such example from my own practice right now.



One of the stages in the development of our PVS-Studio tool was making use of the capabilities of multi-core systems. Static code analysis of large projects can take hours, and processing speed is an important characteristic of such tools.



We started discussing how to parallelize our system and immediately went in the wrong direction without even noticing it. The reason was thinking within the framework set by informational resources on parallelism, which focus on technologies and methods for parallelizing various algorithms. Our first thoughts went to choosing OpenMP or another technology, to how to parallelize the traversal of the syntax tree, and to other nonsense of that kind. Yet the solution lay on the surface; it was elegant and easy to implement.



It is a good thing that parallelizing the static analysis algorithms turned out to be a difficult task, because in the course of our deliberations we rose to a higher level of abstraction. There is no reason for us to process a single source file faster; it is processed quickly enough as it is. The problem is in handling many files. So let us process those files in parallel! We simply run several analyzers in parallel (creating several processes) and collect the information they produce. No need for OpenMP, no need to think about synchronization, no need to look for bottlenecks and measure the effectiveness of the parallelization.
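To illustrate, here is a minimal sketch of such a scheme (POSIX fork/exec; the `analyzer` executable name and the job limit are illustrative assumptions, not the actual PVS-Studio code):

```cpp
#include <sys/wait.h>
#include <unistd.h>
#include <cstddef>
#include <string>
#include <vector>

int main(int argc, char* argv[]) {
    std::vector<std::string> files(argv + 1, argv + argc);  // source files to analyze
    const std::size_t max_jobs = 4;  // e.g. the number of processor cores
    std::size_t running = 0;

    for (const std::string& file : files) {
        if (running == max_jobs) {   // throttle: wait for one child to finish
            wait(nullptr);
            --running;
        }
        pid_t pid = fork();
        if (pid == 0) {              // child: turn into one analyzer process
            execlp("analyzer", "analyzer", file.c_str(), (char*)nullptr);
            _exit(127);              // reached only if exec failed
        }
        if (pid > 0) ++running;
    }
    while (running > 0) {            // collect the remaining children
        wait(nullptr);
        --running;
    }
    return 0;
}
```

Each child here is an independent operating-system process with its own address space, so there is no shared state and nothing to synchronize; the results can simply be merged from the analyzers' output afterwards.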



The solution described was implemented and works fine. Does it seem obvious? Of course. And I will not pretend it took us long to arrive at it. But in other tasks things may not be so obvious. It is very easy to get carried away searching for inefficient sections in a program, parallelizing them, and then hunting for the errors introduced along the way, and just as easy to forget to look at the task from a higher level. Examples like Fibonacci only encourage such forgetfulness. Programming parallel systems is a much more multifaceted subject, but this versatility is often undeservedly passed over in favor of one specific technology or parallelization technique.



I tend to think that before restructuring the algorithms, one should look for opportunities for "simple parallelism". In some cases it is as simple as in the example above. The same approach of processing files separately can be applied in an image transcoding package. In other systems such units for parallel processing may not exist ready-made, but you can try to isolate them into separate entities. The main thing is to remember to look at the task from the top.

Source: https://habr.com/ru/post/78745/


