At the DICE summit for game developers, Tim Sweeney of Epic Games presented his calculations of how powerful graphics cards would have to become to deliver the maximum quality the human eye can perceive (video of the talk, 30 minutes; slides).
Sweeney turned to the math for a reason: the view has recently become popular that the current generation of game consoles supposedly has "sufficient" performance, and that the next generation may therefore be the last. According to Sweeney, this is out of the question. He calculates that to render the effects noticeable at the resolution of the human eye, 8000x4000, GPU performance must grow roughly 2000-fold, to about 5000 teraflops.
Tim Sweeney, the founder of Epic Games and author of the Unreal Engine, commands no less respect in the game industry than John Carmack.
Sweeney began his talk by recalling the famous video in which a little girl flips through a glossy magazine, tries to poke at it with her finger, and eventually concludes that it is simply a broken iPad.
He compares the characteristics of modern computer graphics systems with those of human vision: 120 million monochrome photoreceptors, 5 million color photoreceptors, and lossy compression in the optic nerve.

The anatomical limit of human visual perception is roughly 30-megapixel frames at 72 FPS. Today's hardware can deliver only 2560x1600 pixels on flat displays (a 30° viewing angle) and 8000x4000 pixels in panoramic viewing systems (90°). To reach that level, a GPU would have to execute 20-40 billion shader operations per second, about 50 times more than now. It might seem that just two more GPU generations would suffice, but in reality it is not so simple: approaching photorealism requires ever more complex mathematical modeling of the visual scene - better approximations of lighting, shadows, reflections in water, material properties, the structure of each object, and so on.
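As a rough sanity check of those throughput numbers, here is a back-of-envelope sketch in Python (the shader-operations-per-pixel figure is my assumption, not something from the talk):

```python
# Back-of-envelope check of the 20-40 billion shader ops/s figure.
pixels_per_frame = 8000 * 4000            # ~32 megapixels, near the eye's ~30 MP limit
frames_per_second = 72                    # upper bound of perceivable frame rate
pixels_per_second = pixels_per_frame * frames_per_second   # ~2.3 billion pixels/s

shader_ops_per_pixel = 10                 # assumed: ~10-20 shading operations per pixel
shader_ops_per_second = pixels_per_second * shader_ops_per_pixel

print(f"{pixels_per_second / 1e9:.1f} billion pixels/s")          # ~2.3
print(f"{shader_ops_per_second / 1e9:.0f} billion shader ops/s")  # ~23, inside the 20-40 billion range
```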

At the same resolution you can run games of very different computational complexity: first-level approximation (Doom, 1993), second level (Unreal, 1998), third level (Samaritan, 2011), and so on. Even today, 99% of the shader work stays behind the scenes, yet computational resources still have to be spent on it.
For example, Doom needed about 10 megaflops, Unreal about 1 gigaflops, and Samaritan 2.5 teraflops. Each further level of approximation - adding photorealistic skin, fog, smoke, and so on - increases the required GPU performance exponentially. And there are computational problems for which we cannot yet reach an acceptable approximation to reality even with the combined computing power of every computer in the world: simulating a character's personality, thoughts, movement, speech, and so on.
The bottom line: true photorealism needs something in the region of 5000 teraflops. For reference, the Xbox 360 delivers only 0.25 teraflops. So hardware developers still have plenty of work to do, and until we get close to 5000 teraflops we will keep seeing significant progress in computer graphics.
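To make the jump between approximation levels concrete, here is the simple arithmetic behind those figures (a sketch; the FLOPS values are Sweeney's, the grouping and labels are mine):

```python
# FLOPS required at each level of approximation, per Sweeney's figures.
levels = {
    "Doom (1993)":      10e6,    # ~10 megaflops
    "Unreal (1998)":     1e9,    # ~1 gigaflops
    "Samaritan (2011)":  2.5e12, # ~2.5 teraflops
    "Photorealism":      5e15,   # ~5000 teraflops (the target)
}

names = list(levels)
for prev, cur in zip(names, names[1:]):
    print(f"{prev} -> {cur}: x{levels[cur] / levels[prev]:,.0f}")
# Doom (1993) -> Unreal (1998): x100
# Unreal (1998) -> Samaritan (2011): x2,500
# Samaritan (2011) -> Photorealism: x2,000

xbox_360 = 0.25e12  # Xbox 360, ~0.25 teraflops
print(f"Xbox 360 -> Photorealism: x{levels['Photorealism'] / xbox_360:,.0f}")  # x20,000
```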
Hardware developers have already approached the atomic scale of transistors, but they still have ways to keep up with Moore's law - for example, three-dimensional chips, where processor layers are stacked vertically thanks to the latest achievements in 3D printing, says Sweeney. Perhaps the long-promised quantum computers will also arrive. As for the physical limits on the computational capability of circuits, the Bekenstein bound is still very far away - about 10^27 times above current processor performance - so Moore's law can safely hold for roughly another 180 years.
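The 180-year figure follows from simple doubling arithmetic (a sketch assuming the classic two-year doubling period of Moore's law):

```python
import math

# If the Bekenstein bound leaves ~10^27 of headroom over today's processors,
# how long can performance keep doubling?
headroom = 1e27
doubling_period_years = 2   # assumed Moore's-law doubling period

doublings = math.log2(headroom)              # ~89.7 doublings
years = doublings * doubling_period_years    # ~179 years
print(f"{doublings:.1f} doublings, about {years:.0f} years")
```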