
GPU to the rescue?

When Intel added a graphics core to its processors last year, the move was received with approving understanding but, as they say, without fanfare. After all, if the memory controller had already moved into the CPU, why not transfer the rest of the north bridge's functionality there as well? So integrated graphics settled in everywhere, from the Intel Atom to the mobile Core i7. Those who wanted it used it; those who didn't chose a computer with a discrete accelerator. By and large, nothing really changed. And the fact that the graphics component quietly helped the CPU with video decoding seemed perfectly normal; people got used to it.

Meanwhile, Intel's colleagues in the business liked the idea, and soon their processors will also be equipped with graphics cores. But it does not stop there: the plan is to use the newcomer across the widest possible range of tasks, and even the official name of the chips will change from CPU to APU. According to AMD, the number of x86 cores cannot be increased indefinitely, because bottlenecks arise that reduce the payoff from multiplying cores. The GPU, on the other hand, sits right next door and, precisely because it is so different, does not get in the way of the ordinary cores; so why not put it to work?

Anyway, take a look at this slide.
[slide image]

According to it, so-called heterogeneous multi-core processors are on the way, and this is in strict accordance with Moore's law. The advantages of the approach are many, but the drawbacks are serious too. Perhaps the most awkward one is that hybrid processors fit poorly into the usual programming models, and getting a weary programmer to switch is not so easy. Nevertheless, effort and money are already being invested in developer support, the marketing machine has been switched on, and work is under way.
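To make the "different programming model" complaint concrete, here is a minimal sketch of GPU offload in the vendor-neutral OpenCL style that both Intel and AMD were betting on at the time. Everything in it (the vec_add kernel, the array size, the choice of default device) is my own illustration, not something taken from the slide:

```c
/* A minimal sketch of offloading a vector addition to an OpenCL device.
 * On an APU the "default" device may well be the integrated GPU.
 * Error checks are omitted for brevity; this is an illustration, not
 * production code. */
#include <stdio.h>
#include <CL/cl.h>

static const char *kernel_src =
    "__kernel void vec_add(__global const float *a,\n"
    "                      __global const float *b,\n"
    "                      __global float *c) {\n"
    "    size_t i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void) {
    enum { N = 1024 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    /* The kernel is compiled at run time -- one of the habits to learn. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(prog, "vec_add", NULL);

    /* Data is copied to the device explicitly and read back explicitly. */
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof(a), a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof(b), b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(c), NULL, NULL);

    clSetKernelArg(kernel, 0, sizeof(cl_mem), &da);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &db);
    clSetKernelArg(kernel, 2, sizeof(cl_mem), &dc);

    size_t global = N;
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(queue, dc, CL_TRUE, 0, sizeof(c), c, 0, NULL, NULL);

    printf("c[10] = %f (expected 30.0)\n", c[10]);

    clReleaseMemObject(da); clReleaseMemObject(db); clReleaseMemObject(dc);
    clReleaseKernel(kernel); clReleaseProgram(prog);
    clReleaseCommandQueue(queue); clReleaseContext(ctx);
    return 0;
}
```

Compared with an ordinary loop over an array, there is a separate kernel language, run-time compilation, explicit buffers and explicit copies. None of it is rocket science, but it is exactly the kind of habit change the slide is worried about.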
I don't have a magic crystal ball at hand that would reveal Intel's far-reaching plans. But if you apply a little logic and look at the architecture of the upcoming Sandy Bridge processors, you can assume that the graphics component is unlikely to disappear from the processor and, therefore, will get faster from model to model. Progress is already evident: in Sandy Bridge, Turbo Boost technology has reached the integrated graphics, so under load its clock can rise from the standard 650-850 MHz all the way to 1350 MHz. Games like that. And if so, it would be logical to eventually start using that graphics power to accelerate all sorts of computation.

I have already written more than once about how graphics chips "help" today. Let's just say that not everything is smooth there (see, for example, this post on ISN about the effectiveness of the GPU in video conversion). But let's assume that tomorrow's integrated GPUs, while still not brilliant at tasks that are unusual for them, learn accuracy without losing speed. Then the question arises: won't enlisting them in the common cause cost us even more pain than multi-core processors did?

After all, nothing on a computer happens by itself, and putting such presentation-friendly technologies into practice is impossible without painstaking work by software developers. So how do things stand today? Do you personally like the existing examples of successful GPU use in computation? Are developers eager to cultivate the CPU-GPU friendship on an industrial scale? How effective will this tandem really be? If it is effective, how quickly can one relearn a new programming style? And will there be a standard approach, or will every professional look for (and find) their own way?

There are more questions than answers, but that is hardly surprising: for now the CPU and GPU are only sizing each other up, wondering what benefit can be extracted from their on-die neighbor's differences. But perhaps you have already thought about this alliance and can make your own prediction?

Source: https://habr.com/ru/post/108727/
