
Pascal guide: making sense of NVIDIA's 2016 graphics cards

2016 is almost over, but its contribution to the gaming industry will stay with us for a long time. First, video cards from the red camp received an unexpectedly successful update in the mid-price range, and second, NVIDIA once again proved that it holds 70% of the market for a reason. The Maxwells were good, the GTX 970 was rightly considered one of the best cards for the money, but Pascal is another matter entirely.


The new generation of hardware in the form of the GTX 1080 and 1070 literally buried last year's flagship systems and the second-hand flagship market, while the "younger" GTX 1060 and 1050 secured success in the more affordable segments. Owners of the GTX 980 Ti and assorted Titans are crying crocodile tears: their über-guns that cost many thousands of rubles lost 50% of their value and 100% of their bragging rights overnight. NVIDIA itself claims that the 1080 is faster than last year's Titan X, the 1070 easily "beats" the 980 Ti, and the relatively budget 1060 will hurt owners of all other cards.

Where the legs of this performance grow from, what to do with it all on the eve of the holidays and sudden financial windfalls, and what exactly might please you: all that you can find out in this long and slightly boring article.

You can love Nvidia or ... not love it, but only a person from an alternate universe would deny that it is currently the leader in the graphics card field. Since AMD's Vega has not yet been announced, we have not seen flagship RX cards on Polaris, and the R9 Fury with its 4 GB of experimental memory can hardly be called a promising card (VR and 4K want a bit more than it has) - we have what we have. While the 1080 Ti and the hypothetical RX 490, RX Fury and RX 580 are just rumors and expectations, you and I have time to sort out the current NVIDIA lineup and see what the company has achieved in recent years.

The naming mess and the origins of Pascal


NVIDIA regularly gives us reasons not to love it. The story of the GTX 970 and its "3.5 GB of memory", the "NVIDIA, Fuck you!" from Linus Torvalds, the outright mess in the desktop graphics lineups, the refusal to work with the free and much more widespread FreeSync in favor of its own proprietary alternative ... In general, reasons enough. One of the most annoying, for me personally, is what happened to the past two generations of video cards. Roughly speaking, "modern" graphics processors date back to DX10 support. And if you go looking for the "grandfather" of today's 10 series, the beginning of the modern architecture lies around the 400 series of video accelerators and the Fermi architecture. It was there that the idea of a "block" design built from so-called "CUDA cores" (in NVIDIA terminology) finally took shape.


Fermi


If the 8000, 9000 and 200 series cards were the first steps in mastering the very concept of a "modern architecture" with unified shader processors (like AMD, yes), then the 400 series was already as close as possible to what we see in, say, a 1070. Yes, Fermi still carried a small legacy crutch from previous generations: the shader unit ran at twice the frequency of the core responsible for geometry calculations, but the overall picture of a GTX 480 is not that different from any 780: SM multiprocessors are grouped into clusters, the clusters communicate with the memory modules through a shared cache, and the results are output by a rasterization unit common to the cluster:


Block diagram of the GF100 processor used in the GTX 480.

The 500 series was the same Fermi, slightly improved "inside" and with fewer defects, so the top solutions received 512 CUDA cores instead of the previous generation's 480. Visually, the block diagrams look like twins:


The GF110 is the heart of the GTX 580.

Frequencies were raised here and there and the chip design was slightly tweaked, but there was no revolution. The same 40 nm process and 1.5 GB of video memory on a 384-bit bus.

Kepler


With the arrival of the Kepler architecture a lot changed. One could say that it was this generation that set NVIDIA video cards on the development path that led to the current models. Not only did the GPU architecture change, but so did the very kitchen of hardware development inside NVIDIA itself. If Fermi was focused on finding a solution that would deliver high performance, Kepler bet on energy efficiency, sensible use of resources, high frequencies, and making it easy to optimize a game engine for the architecture's capabilities.



Major changes were made to the GPU design: it was based not on the "flagship" GF100 / GF110 but on the "budget" GF104 / GF114, which powered one of the most popular cards of its time - the GTX 460.


The overall processor architecture became simpler through the use of just two large blocks, each with four unified shader multiprocessor modules. The layout of the new flagships looked like this:


GK104, installed in the GTX 680.

As you can see, each compute unit gained significant weight relative to the previous architecture and was named SMX. Compare its block structure with the one shown above in the Fermi section.


The SMX multiprocessor of the GK104 graphics processor

The 600 series had no video cards based on the full-fledged processor with six blocks of compute modules; the flagship was the GTX 680 with the GK104, and cooler than it was only the "two-headed" 690, which simply carried two processors with all the necessary wiring and memory. A year later the flagship GTX 680 turned, with minor changes, into the GTX 770, and the culmination of the Kepler architecture's evolution were the cards based on the GK110 die: the GTX Titan and Titan Z, the 780 Ti and the plain 780. Inside - the same 28 nm, and the one qualitative improvement that did NOT make it into consumer cards based on the GK110 was performance in double-precision operations.

Maxwell


The first video card on the Maxwell architecture was ... the NVIDIA GTX 750 Ti. A little later came its cut-down versions in the form of the GTX 750 and 745 (the latter supplied only as an OEM solution), and at the time of their appearance the junior cards really shook up the budget video accelerator market. The new architecture was test-driven on the GM107 chip: a tiny slice of the future flagships with their huge radiators and frightening prices. It looked like this:


Yes, just one compute unit, but how much more complex it is than its predecessor - compare for yourself:


Instead of the large SMX block that served as the basic "building brick" of the GPU, new, more compact SMM blocks are used. Kepler's basic compute blocks were good, but they suffered from poor utilization - a banal hunger for instructions: the system could not dispatch instructions to its large number of execution units. The Pentium 4 had roughly the same problem: power sat idle, and a branch-prediction mistake was very costly. In Maxwell, each compute module was divided into four parts, each given its own instruction buffer and its own scheduler of warps - groups of threads performing the same operation. As a result, efficiency grew, the graphics processors themselves became more flexible than their predecessors, and most importantly, at the cost of little blood and a fairly simple die, a new architecture was worked out. History develops in a spiral, heh.

Mobile solutions gained the most from the innovations: the die area grew by a quarter, while the number of multiprocessor execution units almost doubled. As luck would have it, it was the 700 and 800 series that made the biggest mess in the naming. The 700 series alone contained video cards on the Kepler, Maxwell and even Fermi architectures! That is why the desktop Maxwells, to distance themselves from the jumble of previous generations, received a separate 900 series, from which the GTX 9xxM mobile cards later budded off.

Pascal - the logical development of the Maxwell architecture


What was laid down in Kepler and continued in the Maxwell generation remained in Pascal: the first consumer video cards are based on the not-so-large GP104 chip, which consists of four graphics processing clusters. The full-size, six-cluster GP102 went into an expensive semi-professional GPU under the TITAN X brand. However, even the "cut-down" 1080 burns so bright that past generations feel ill.

Performance improvement


The foundation


Maxwell became the foundation of the new architecture; the diagrams of comparable processors (GM204 and GP104) look almost identical, the main difference being the number of multiprocessors packed into the clusters. Kepler (the 700 generation) had two large SMX multiprocessors, which in Maxwell were each split into four parts with the necessary wiring added (changing the name to SMM). In Pascal, two more were added to the existing eight in a block, making 10 of them, and the abbreviation was reshuffled once again: single multiprocessors are now called SM once more.


Otherwise - complete visual resemblance. In truth, though, the changes inside are even bigger.

Progress engine


There are a lot of changes inside the multiprocessor block. To spare you the very boring details of what was reworked, how it was optimized, and how it used to be, I will describe the changes quite briefly; some of them induce yawning as it is.

First of all, Pascal fixed the part responsible for the geometric side of the picture. This matters for multi-monitor configurations and for working with VR helmets: with proper support from the game engine (and that support will appear quickly with NVIDIA's efforts), a video card can calculate the geometry once and produce several geometry projections, one per screen. This significantly reduces the load in VR, not only in triangle work (where the gain is a clean 2x), but in the pixel pipeline as well.
A hypothetical 980 Ti would compute the geometry twice (once per eye), then texture and post-process each of the two images, shading a total of about 4.2 million pixels, of which only about 70% are actually used; the rest are clipped or fall into areas that are simply never displayed for either eye.

A 1080 processes the geometry once, and pixels that do not end up in the final image are simply never calculated.
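Using the numbers above, the savings are easy to estimate with back-of-the-envelope arithmetic (the 4.2 million and 70% figures come from this article; everything else is plain arithmetic):

```python
# Rough estimate of the VR pixel-workload savings described above.
# Figures from the article: ~4.2 million pixels shaded in total,
# of which ~70% actually reach the display. The 2x geometry factor
# comes from computing geometry once instead of once per eye.

total_shaded = 4_200_000       # pixels shaded by the old-style pipeline
visible_fraction = 0.70        # share of shaded pixels that are displayed

useful = int(total_shaded * visible_fraction)
wasted = total_shaded - useful

geometry_passes_old = 2        # once per eye
geometry_passes_new = 1        # single-pass multi-projection

print(f"useful pixels: {useful:,}")
print(f"wasted pixels: {wasted:,} ({wasted / total_shaded:.0%})")
print(f"geometry work: {geometry_passes_old / geometry_passes_new:.0f}x reduction")
```

Roughly 1.26 million shaded pixels per stereo frame did no useful work at all; Pascal skips them instead.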

On the pixel side things are, in fact, even cooler. Memory bandwidth can only be increased on two fronts, raising the frequency and widening the data per clock, and both cost money, while the GPU's hunger for memory grows more pronounced year by year due to rising resolutions and the development of VR. So "free" methods of increasing effective bandwidth become attractive: if you cannot widen the bus or raise the frequency, you need to compress the data. Previous generations already had hardware compression, but Pascal took it to a new level. Again, we can skip the boring mathematics and take a ready-made example from NVIDIA. On the left is Maxwell, on the right Pascal; pixels whose color component was compressed without quality loss are shaded pink.


Instead of transferring literal 8x8 tiles of pixels, memory holds an "average" color plus a matrix of deviations from it; such data takes from 1/2 to 1/8 of the original volume. In real-world tasks, the load on the memory subsystem fell by 10 to 30%, depending on the number of gradients and the uniformity of fills in complex scenes on the screen.
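The idea itself fits in a few lines. Here is a toy single-channel sketch (NVIDIA's actual compression scheme is proprietary, so the delta width and the fallback rule here are invented purely for illustration):

```python
# Toy delta color compression for one 8x8 tile of 8-bit values
# (single channel for simplicity). Real hardware handles several
# channels and several delta widths; this only shows the principle.

def compress_tile(tile, delta_bits=4):
    """Return (anchor, deltas) if every delta fits, else None (store raw)."""
    anchor = tile[0]
    lo, hi = -(1 << (delta_bits - 1)), (1 << (delta_bits - 1)) - 1
    deltas = [p - anchor for p in tile]
    if all(lo <= d <= hi for d in deltas):
        return anchor, deltas
    return None  # incompressible tile: keep the raw 8 bits per pixel

def decompress_tile(anchor, deltas):
    return [anchor + d for d in deltas]

# A smooth gradient compresses losslessly: 4-bit deltas instead of
# 8-bit pixels, roughly the 1/2 ratio mentioned above.
flat = [100 + (i % 8) for i in range(64)]
packed = compress_tile(flat)
assert packed is not None
assert decompress_tile(*packed) == flat  # lossless round trip

# A noisy tile does not fit the narrow deltas and stays raw.
noisy = [(i * 97) % 256 for i in range(64)]
assert compress_tile(noisy) is None
```

The hardware makes the same trade per tile: compress when the content allows, otherwise fall back to raw, which is why the gain depends on how uniform the scene is.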


This was not enough for the engineers, and the flagship card (GTX 1080) got memory with increased bandwidth: GDDR5X transfers twice as many data bits (not instructions) per clock and delivers more than 10 Gbit/s per pin at peak. Transferring data at such crazy speed required a completely new memory layout topology on the board, and overall the efficiency of memory access rose by 60-70% compared with the previous generation's flagships.
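The headline figure works out with simple arithmetic (the 256-bit bus is the GTX 1080's published spec; the 10 Gbit/s per pin comes from the paragraph above):

```python
# Peak memory bandwidth of a GDDR5X card: per-pin data rate x bus width.

data_rate_gbps = 10          # Gbit/s per pin (GDDR5X, from the text above)
bus_width_bits = 256         # GTX 1080 memory bus width

bandwidth_gbps = data_rate_gbps * bus_width_bits   # Gbit/s total
bandwidth_gb_s = bandwidth_gbps / 8                # bytes, not bits

print(f"{bandwidth_gb_s:.0f} GB/s")  # 320 GB/s, matching the GTX 1080 spec
```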

Reduced latency and idle time


Video cards have long been used not only for graphics but for adjacent computations as well. Physics is often tied to animation frames and parallelizes remarkably well, which means it runs much more efficiently on the GPU. But lately the biggest generator of problems has been the VR industry. Many game engines, development methodologies and a bunch of other graphics technologies simply were not designed for VR; the case where the camera moves or the user's head is repositioned mid-render was simply never handled. If you leave things as they are, the desynchronization of the video stream and your movements will cause bouts of motion sickness and break immersion in the game world, which means the "wrong" frames simply have to be thrown away after rendering and drawn again from scratch. And that is a new delay in getting the picture onto the display, which does nothing good for performance.

Pascal took this problem into account and introduced dynamic load balancing and the ability to interrupt asynchronously: execution units can now either preempt the current task (saving intermediate results to cache) to handle a more urgent one, or simply drop an unfinished frame and start a new one, significantly reducing latency in image formation. The main beneficiaries, of course, are VR and games, but the technology helps general-purpose computation too: particle collision simulation gained 10-20% in performance.
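Schematically, preemption looks like this (a toy scheduling sketch using Python generators, nothing like the real hardware scheduler):

```python
# Toy illustration of preemptive scheduling: a long frame-render job
# is paused for an urgent task (say, a VR reprojection) and resumed,
# instead of making the urgent task wait for the whole frame.

def render_frame(n_chunks):
    for i in range(n_chunks):
        yield f"render chunk {i}"

def urgent_task():
    yield "urgent: reproject for new head pose"

log = []
frame = render_frame(4)

# Run two chunks of the frame...
log.append(next(frame))
log.append(next(frame))

# ...an urgent task arrives; with preemption it runs immediately
# instead of waiting for the remaining chunks...
log.extend(urgent_task())

# ...then the frame resumes exactly where it left off.
log.extend(frame)

print(log)
```

Without the interrupt, the urgent item would sit in the queue until the whole frame finished, which is precisely the latency Pascal attacks.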

Boost 3.0


NVIDIA video cards have shipped with automatic overclocking for a long time, since the Kepler-based 700 generation. Maxwell refined the feature, but it remained, to put it mildly, timid: yes, the card ran a little faster while the thermal budget allowed, and an extra 20-30 MHz on the core and 50-100 on the memory were wired in at the factory, but not much more. It worked like this:


Even when the GPU had thermal headroom, performance did not grow. With Pascal, the engineers shook up this dusty swamp. Boost 3.0 works on three fronts: temperature analysis, clock frequency increases, and voltage increases on the chip. All the juice is now squeezed out of the GPU: the standard NVIDIA drivers do not do this, but vendor software lets you build a profiling curve in one click that accounts for the quality of your particular card.

One of the first in this field was EVGA: its Precision XOC utility has an NVIDIA-certified scanner that sequentially walks the whole range of temperatures, frequencies and voltages, achieving maximum performance in all modes.

Add to this the new process node, high-speed memory, all sorts of optimizations and the reduced thermal package of the chips, and the result is simply indecent. From its 1500 "base" MHz a GTX 1060 can squeeze out more than 2000 MHz, if you get a good sample and the vendor did not skimp on cooling.
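The scanner's job can be sketched as a search for the highest stable frequency at each voltage step (the stability model and all numbers here are invented for illustration; real utilities run actual GPU stress workloads):

```python
# Illustrative sketch of a frequency/voltage scan in the spirit of
# Boost 3.0 tuning tools. `is_stable` stands in for running a real
# stress test on the card; the coefficients are made up.

def is_stable(freq_mhz, voltage_mv):
    # Toy model: each extra mV above 1000 buys ~0.8 MHz over 1500 MHz.
    return freq_mhz <= 1500 + (voltage_mv - 1000) * 0.8

def scan(voltages, freq_start=1500, freq_step=25, freq_max=2200):
    """For each voltage point, find the highest stable frequency."""
    curve = {}
    for v in voltages:
        best = freq_start
        f = freq_start
        while f <= freq_max and is_stable(f, v):
            best = f
            f += freq_step
        curve[v] = best
    return curve

curve = scan([1000, 1050, 1100])
print(curve)  # voltage (mV) -> highest stable frequency (MHz)
```

The result is exactly the kind of per-sample frequency/voltage curve the vendor utilities build, instead of one conservative factory offset for every card.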

Improving the picture quality and perception of the game world


Performance was increased on all fronts, but there are a number of areas where there had been no qualitative change for years, above all the displayed image itself. This is not about graphical effects, which are the game developers' business, but about what we actually see on the monitor and how the game looks to the end user.

Fast vertical sync


The most important feature of Pascal is the triple buffer for frame output, which simultaneously provides ultra-low rendering latency and vertical synchronization. One buffer stores the displayed image, another holds the last completed frame, and the third is being drawn into. Goodbye horizontal tearing lines, hello high performance. There are none of the delays that classic V-Sync imposes (since nothing constrains the video card and it always renders at the highest possible frame rate), yet only fully formed frames are sent to the monitor. I think that after the new year I will write a separate big post about V-Sync, G-Sync, FreeSync and this new Fast Sync algorithm from NVIDIA; there are too many details.
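The buffer rotation is easy to model (a schematic sketch of the "show the newest complete frame" behavior, not actual driver logic):

```python
# Schematic model of triple-buffered "fast sync": the GPU renders as
# fast as it can into a back buffer; at each display refresh the most
# recently *completed* frame is scanned out. Unfinished frames are
# never shown (no tearing) and the GPU is never stalled (no V-Sync
# back-pressure); surplus frames are simply dropped.

def fast_sync(rendered_per_refresh, refreshes):
    """rendered_per_refresh: frames the GPU completes between refreshes."""
    shown = []
    last_complete = 0                       # id of the newest finished frame
    for _ in range(refreshes):
        last_complete += rendered_per_refresh  # GPU kept rendering, uncapped
        shown.append(last_complete)            # display takes the newest frame
    return shown

# GPU three times faster than the display: every third frame is shown,
# the other two are discarded instead of blocking the render queue.
print(fast_sync(3, 4))  # [3, 6, 9, 12]
```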

Decent screenshots


No, the screenshots we have now are simply a disgrace. Almost all games use a heap of technologies to make the picture in motion amazing and breathtaking, and screenshots have become a real nightmare: instead of a stunningly realistic picture built from animation and special effects that exploit the peculiarities of human vision, you get an awkward, utterly lifeless still.

NVIDIA's new Ansel technology solves the screenshot problem. Yes, its implementation requires game developers to integrate special code, but the actual work is minimal and the payoff huge. Ansel can pause the game, hand camera control over to you, and then - room for creativity. You can simply grab a frame without the GUI and from your favorite angle.


You can render the existing scene at ultra-high resolution, shoot 360-degree panoramas, stitch them into a plane or leave them in three-dimensional form for viewing in a VR helmet. You can take a photo with 16 bits per channel, save it as a kind of RAW file, and then play with exposure, white balance and other settings so that screenshots become attractive again. Expect tons of cool content from game fans in a year or two.

Audio processing on the video card


The new NVIDIA Gameworks libraries add many features for developers. They are mainly aimed at VR, at accelerating various computations, and at improving picture quality, but one of the features is the most interesting and worth mentioning. VRWorks Audio takes sound to a fundamentally new level: instead of computing it with banal averaged formulas based on distance and obstacle thickness, it performs a full trace of the sound signal, with all the reflections from the environment, reverberation, and absorption in various materials. NVIDIA has a good video example of how this technology works:


Best watched with headphones

In purely theoretical terms, nothing prevents running such a simulation on Maxwell, but the optimizations for asynchronous instruction execution and the new interrupt system built into Pascal make these calculations possible without much impact on the frame rate.
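To see why traced audio differs from the "banal formula", compare a bare distance falloff with one traced path that loses energy at each bounce (all coefficients are invented for illustration; VRWorks Audio itself traces many paths against the real scene geometry):

```python
# Toy contrast between distance-formula audio and path-traced audio.
# Everything here is made up; it only shows why tracing reflections
# gives a different (and richer) result than a single distance term.

def simple_audio(distance):
    """Banal inverse-distance attenuation, no environment at all."""
    return 1.0 / (1.0 + distance)

def traced_audio(segments, absorptions):
    """Follow one sound path: sum distance over its segments and lose
    energy at every bounce according to the surface's absorption."""
    level = 1.0
    total_distance = 0.0
    for seg, absorb in zip(segments, absorptions):
        total_distance += seg
        level *= (1.0 - absorb)        # energy lost at this bounce
    return level / (1.0 + total_distance)

direct = simple_audio(5.0)
# Same listener, but the sound arrives via two bounces off soft walls
# (absorption 0.4 each; the first segment leaves the source, no bounce):
reflected = traced_audio([3.0, 2.0, 2.0], [0.0, 0.4, 0.4])
print(f"direct-formula level: {direct:.3f}")
print(f"traced-path level:    {reflected:.3f}")
```

A real implementation sums many such paths per listener per frame, which is exactly the workload that benefits from Pascal's asynchronous execution.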

Pascal, summed up


In truth there are even more changes, and many go so deep into the architecture that each deserves a huge article of its own. The key innovations are the improved design of the chips themselves, low-level optimization of geometry processing, asynchronous work with full interrupt handling, a pile of features tuned for high resolutions and VR, and, of course, insane frequencies that past generations of video cards could not dream of. Two years ago the 780 Ti barely crossed the 1 GHz line; today a 1080 in some cases runs at two. And the credit goes not only to the process shrink from 28 nm to 16 or 14 nm: many things are optimized at the lowest level, from the design of the transistors to their layout and wiring inside the chip itself.

A card for every case


The 10-series NVIDIA lineup turned out genuinely balanced, and it covers all gaming use cases quite tightly, from "play some strategy and Diablo" to "I want top games in 4K". The game tests were chosen by one simple method: cover as wide a range of scenarios as possible with as few titles as possible. BF1 is a great example of good optimization and lets us compare DX11 and DX12 performance under identical conditions. DOOM was chosen for the same reason, except that it compares OpenGL and Vulkan. The third "Witcher" plays the role of the so-so-optimized game whose maximum graphics settings bring any flagship to its knees simply by virtue of spaghetti code. It uses classic DX11, time-tested, well worked out in drivers, and familiar to game developers. Overwatch stands in for all the "tournament" games with well-optimized code; it is mainly interesting for how high the average FPS climbs in a game that is not very demanding graphically and is honed to run on the "average" configs found around the world.

A couple of general comments right away: Vulkan is very hungry for video memory; for it, this characteristic is one of the main indicators, and you will see this thesis reflected in the benchmarks. DX12 behaves much better on AMD cards than on NVIDIA: where the "greens" on average show an FPS drop on the new APIs, the "reds" on the contrary gain.

Junior Division


GTX 1050


The youngest NVIDIA card (without the Ti letters) is not as interesting as its hot-rodded Ti sister. Its destiny is as a gaming solution for MOBAs, strategy games, tournament shooters and other games where detail and picture quality interest few people, and a stable frame rate for minimal money is just what the doctor ordered.


None of the charts show the core frequency, because it varies per sample: the 1050 without auxiliary power cannot overclock, while its sister with a 6-pin connector will easily hit a nominal 1.9 GHz. The charts show the most popular options for power and length; you can always find a card with a different board design or different cooling that does not fit the stated "standards".

DOOM 2016 (1080p, ULTRA): OpenGL - 68 FPS, Vulkan - 55 FPS;
The Witcher 3: Wild Hunt (1080p, MAX, HairWorks Off): DX11 - 38 FPS;
Battlefield 1 (1080p, ULTRA): DX11 - 49 FPS, DX12 - 40 FPS;
Overwatch (1080p, ULTRA): DX11 - 93 FPS;

The GTX 1050 carries a GP107 graphics processor, inherited from the card above it with some functional blocks trimmed. 2 GB of video memory will not let you roam, but for esports disciplines and playing some tanks it is fine, especially since the price of the junior card starts from 9.5 thousand rubles. No auxiliary power is required: the 75 watts coming from the motherboard through the PCI-Express slot are enough. True, this price segment also contains the AMD Radeon RX 460, which with the same 2 GB of memory is cheaper and nearly as fast, and for about the same money you can get an RX 460 in the 4 GB version. Not that the extra memory helps it much, but it is some reserve for the future. The choice of vendor is not so important: take what is in stock and does not drain your pocket of the extra thousand rubles better spent on the cherished letters Ti.

GTX 1050 Ti


Around 10 thousand for the plain 1050 is not bad, but for the charged (or full-fledged, call it what you like) version they ask not much more (on average 1-1.5 thousand extra), and its internals are much more interesting. By the way, the whole 1050 series is produced not from cut-down "big" chips unfit for the 1060, but as a completely independent product. It uses a smaller process node (14 nm) and a different fab (Samsung grows the crystals), and there are extremely interesting specimens with auxiliary power: the thermal package and base consumption are still the same 75 W, but the overclocking potential and the ability to go beyond the permitted limits are completely different.


If you are still playing at FullHD (1920x1080), do not plan to upgrade, and the rest of your hardware is 3-5 years old, this is a great way to boost performance in games with little blood. Focus on the ASUS and MSI solutions with the additional 6-pin power connector; the Gigabyte options are not bad either, but the price is less pleasing.

DOOM 2016 (1080p, ULTRA): OpenGL - 83 FPS, Vulkan - 78 FPS;
The Witcher 3: Wild Hunt (1080p, MAX, HairWorks Off): DX11 - 44 FPS;
Battlefield 1 (1080p, ULTRA): DX11 - 58 FPS, DX12 - 50 FPS;
Overwatch (1080p, ULTRA): DX11 - 104 FPS.

Middle division


Video cards of the x60 line have long been considered the best choice for those who do not want to spend a lot of money yet want to play at high graphics settings in everything that comes out over the next couple of years. The tradition started in the days of the GTX 260, which came in two versions (a simpler one with 192 stream processors and a fatter one with 216 "stones"), continued through the 400, 500 and 700 generations, and now NVIDIA has once again landed on an almost perfect combination of price and quality. Two versions of the "middleweight" are again available: the GTX 1060 with 3 and with 6 GB of video memory, which differ not only in the amount of available RAM but also in performance.

GTX 1060 3GB


The queen of esports. Reasonable price, amazing FullHD performance (esports rarely uses higher resolutions: results matter more than eye candy), and a reasonable amount of memory (3 GB, mind you, is what the flagship GTX 780 Ti had two years ago, at an indecent price). In performance terms, the junior 1060 easily outruns last year's GTX 970 with its memorable 3.5 GB of memory, and comfortably pulls past the superflagship 780 Ti of the year before.


DOOM 2016 (1080p, ULTRA): OpenGL - 117 FPS, Vulkan - 87 FPS;
The Witcher 3: Wild Hunt (1080p, MAX, HairWorks Off): DX11 - 70 FPS;
Battlefield 1 (1080p, ULTRA): DX11 - 92 FPS, DX12 - 85 FPS;
Overwatch (1080p, ULTRA): DX11 - 93 FPS.

In terms of price and output there is an absolute favorite: the version from MSI. Good frequencies, a quiet cooling system, and sane dimensions. And they ask next to nothing for it, around 15 thousand rubles.

GTX 1060 6GB


The six-gigabyte version is a budget ticket into VR and high resolutions. It will not starve for memory, is a little faster in all tests, and will confidently beat the GTX 980 wherever last year's card is squeezed by its 4 GB of video memory.


DOOM 2016 (1080p, ULTRA): OpenGL - 117 FPS, Vulkan - 121 FPS;
The Witcher 3: Wild Hunt (1080p, MAX, HairWorks Off): DX11 - 73 FPS;
Battlefield 1 (1080p, ULTRA): DX11 - 94 FPS, DX12 - 90 FPS;
Overwatch (1080p, ULTRA): DX11 - 166 FPS.

Once more I want to point out how the cards behave under the Vulkan API. The 1050 with 2 GB of memory: an FPS drop. The 1050 Ti with 4 GB: almost level. The 1060 3 GB: a drop. The 1060 6 GB: a gain. The trend, I think, is clear: Vulkan wants 4+ GB of video memory.
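The trend is visible right in the DOOM numbers quoted in this article:

```python
# Vulkan-vs-OpenGL deltas in DOOM, taken from the benchmark figures
# quoted above; they line up with the VRAM sizes.

doom = {
    # card: (vram_gb, opengl_fps, vulkan_fps)
    "GTX 1050":     (2, 68, 55),
    "GTX 1050 Ti":  (4, 83, 78),
    "GTX 1060 3GB": (3, 117, 87),
    "GTX 1060 6GB": (6, 117, 121),
}

for card, (vram, ogl, vk) in doom.items():
    delta = (vk - ogl) / ogl
    print(f"{card}: {vram} GB, Vulkan vs OpenGL {delta:+.0%}")
```

Only the 6 GB card ends up in the plus; the 3 GB 1060 loses the most despite being the fastest of the four under OpenGL.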

The trouble is that neither 1060 is a small video card. The thermal package seems reasonable and the PCB really is small, but many vendors decided simply to unify the cooling system across the 1080, 1070 and 1060. Some cards are 2 slots thick but 28+ centimeters long; others are shorter but thicker (2.5 slots). Choose carefully.

Unfortunately, the extra 3 GB of video memory and the unlocked compute unit will cost you about 5-6 thousand rubles on top of the 3 GB version's price. Here the most interesting options for price and quality are from Palit. ASUS released its monstrous 28-centimeter cooling system, which it slaps onto the 1080, the 1070 and the 1060 alike, and this card hardly deserves it; versions without factory overclock cost almost as much but deliver less. The relatively compact MSI, meanwhile, asks more than the competition for roughly the same level of quality and factory overclock.

Major League


Playing with all the money in 2016 is difficult. Yes, the 1080 is incredibly cool, but perfectionists and hardware geeks know that NVIDIA is keeping quiet about the existence of a super-flagship 1080 Ti, which should be incredibly cool indeed. The first specifications are already leaking onto the net, and it is clear that the greens are waiting for a move from the red-and-whites: some über-gun that could be instantly answered with a new king of 3D graphics, the great and mighty GTX 1080 Ti. In the meantime, we have what we have.

GTX 1070


Last year's adventures of the mega-popular GTX 970 and its not-quite-honest 4 gigabytes of memory were picked apart and chewed over all across the Internet. That did not stop it from becoming the most popular gaming graphics card in the world. On the eve of the calendar flipping over, it still holds first place in the Steam Hardware & Software Survey. Understandably so: the combination of price and performance was simply perfect. And if you missed last year's upgrade and the 1060 does not seem cool enough, the GTX 1070 is your choice.

Resolutions of 2560x1440 and 3840x2160 the card digests with a bang. The Boost 3.0 system will try to throw more fuel on the fire when GPU load rises (that is, in the heaviest scenes, when FPS sags under the onslaught of special effects), overclocking the GPU to a mind-blowing 2100+ MHz. The memory easily gains 15-18% of effective frequency over the factory figures. A monstrous thing.


Note: all tests were run at 2.5K (2560x1440):

DOOM 2016 (1440p, ULTRA): OpenGL - 91 FPS, Vulkan - 78 FPS;
The Witcher 3: Wild Hunt (1440p, MAX, HairWorks Off): DX11 - 73 FPS;
Battlefield 1 (1440p, ULTRA): DX11 - 91 FPS, DX12 - 83 FPS;
Overwatch (1440p, ULTRA): DX11 - 142 FPS.

Clearly, neither this card nor the 1080 can crank everything to ultra in 4K and never dip below 60 frames per second, but you can play at nominal "high" settings by disabling or slightly dialing back the hungriest features at full resolution; and in real-world performance the card easily outguns even last year's 980 Ti, which cost nearly twice as much. The most interesting option comes from Gigabyte: they managed to cram a full-fledged 1070 into an ITX-size package, thanks to the modest thermal package and energy-efficient design. Card prices start at 29-30 thousand rubles for the tasty options.

GTX 1080


Yes, the flagship has no Ti letters. Yes, it does not use the largest GPU NVIDIA has. Yes, there is no fancy HBM2 memory, and the card does not look like the Death Star or, at a pinch, an Imperial Star Destroyer. And yes, it is the coolest gaming graphics card there is right now. It single-handedly runs DOOM at 5k3k resolution at 60 frames per second on ultra settings. Every new game bows to it, and for the next year or two it will have no problems: it will take that long for the new technologies baked into Pascal to become commonplace, and for game engines to learn to load the available resources efficiently. Yes, in a couple of years we will say, "Look at the GTX 1260, a couple of years ago you needed a flagship to play at those settings," but for now the best of the best video cards is available before the new year at a very reasonable price.


Note: all tests were run at 4K (3840x2160):

DOOM 2016 (2160p, ULTRA): OpenGL - 54 FPS, Vulkan - 78 FPS;
The Witcher 3: Wild Hunt (2160p, MAX, HairWorks Off): DX11 - 55 FPS;
Battlefield 1 (2160p, ULTRA): DX11 - 65 FPS, DX12 - 59 FPS;
Overwatch (2160p, ULTRA): DX11 - 93 FPS.

All that remains is to decide: do you need it, or can you save and take the 1070? There is no great difference between playing on "ultra" or "high" settings, since modern engines paint an excellent picture at high resolution even at medium settings: after all, we are not soapy consoles unable to provide enough horsepower for honest 4K and a stable 60 frames per second.

If we discard the cheapest options, the best combination of price and quality will be Palit's GameRock version (about 43-45 thousand rubles): yes, the cooling system is "fat", 2.5 slots, but the card is shorter than its competitors, and pairs of 1080s are rarely installed anyway. SLI is slowly dying, and even the life-giving injection of high-speed bridges is not really helping it. The ASUS ROG option is not bad if you have many expansion cards installed and do not want extra expansion slots covered: their card is exactly 2 slots thick, but it demands 29 centimeters of free space from the back wall to the hard drive cage. I wonder whether Gigabyte will manage to release this monster in ITX format?

Results


The new NVIDIA graphics cards have simply buried the used-hardware market. Only the GTX 970 survives, which can be grabbed for 10-12 thousand rubles. Potential buyers of a used 7970 or R9 280 often have nowhere to fit one and cannot feed its power draw, and many options on the secondary market are simply unpromising: as a cheap upgrade to last a couple of years they make no sense, since the newer technologies are not supported. The beauty of the new generation of video cards is that even games not optimized for them run much livelier than on the veterans of past years' GPU charts, and it is hard to imagine what will happen next year, when game engines harness the full power of the new technologies.

GTX 1050 and 1050Ti


Alas, I cannot recommend buying the cheapest Pascal. The RX 460 usually sells for a thousand rubles or two cheaper, and if your budget is so tight that you are buying a video card "to last", then the Radeon is objectively the more interesting investment. On the other hand, the 1050 is slightly faster, and if prices in your city for these two cards are about the same, take it.

The 1050Ti, in turn, is an excellent option for those who value story and gameplay over frills and realistic nose hair. It does not have the bottleneck of 2 GB of video memory, and it will not "die out" in a year. If you can put the extra money toward it, do so. The Witcher on high settings, GTA V, DOOM, BF1: no problem. Yes, you will have to give up a number of enhancements such as ultra-long shadows, complex tessellation, or "expensive" model self-shadowing with limited ray tracing, but in the heat of battle you will forget about these beauties after 10 minutes of play, and a stable 50-60 frames per second gives far more immersion than nervous jumps from 25 to 40 FPS with everything set to "maximum".
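The point about stability is worth a number or two. Perceived smoothness tracks frame time and its variation, not just the average FPS figure. A small illustrative sketch (the FPS samples here are hypothetical, chosen only to match the "stable 50-60" versus "jumps from 25 to 40" scenarios described above):

```python
# Why a stable 50-60 FPS feels smoother than swings between
# 25 and 40 FPS: compare average frame time and its spread.

from statistics import mean, pstdev

def frame_times_ms(fps_samples):
    """Convert a list of instantaneous FPS readings to frame times (ms)."""
    return [1000.0 / fps for fps in fps_samples]

# Hypothetical per-second FPS readings for the two scenarios
stable = frame_times_ms([55, 58, 52, 56, 54, 57])   # sensible "high" settings
jumpy  = frame_times_ms([25, 40, 27, 38, 26, 39])   # everything on "maximum"

for name, times in (("stable", stable), ("jumpy", jumpy)):
    print(f"{name}: avg {mean(times):.1f} ms, spread {pstdev(times):.1f} ms")
```

The "jumpy" run is not only slower on average; its frame-to-frame spread is several times larger, and it is exactly that spread which reads as stutter.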

If you have some Radeon 7850, GTX 760, or older (cards with 2 GB of video memory or less), you can safely upgrade.

GTX 1060


The younger 1060 will please those for whom frame rates above 100 FPS matter more than graphical bells and whistles. At the same time, it lets you comfortably play every released title in FullHD at high or maximum settings with a stable 60 frames per second, and its price stands well apart from everything above it. The older 1060 with 6 gigabytes of memory is an uncompromising solution for FullHD with a year or two of performance headroom, an introduction to VR, and an acceptable candidate for playing at high resolutions on medium settings.

Swapping a GTX 970 for a GTX 1060 makes no sense; it will hold out another year. But the tired 960, 770, 780, R9 280X, and more ancient units can safely be upgraded to the 1060.

Top segment: GTX 1070 and 1080


The 1070 is unlikely to become as popular as the GTX 970 (after all, for most users the hardware upgrade cycle is once every two years), but in price-to-performance terms it is certainly a worthy continuation of the 70 line. It simply chews through games at the mainstream 1080p, easily copes with 2560x1440, withstands the ordeal of unoptimized 21:9 titles, and is quite capable of driving 4K, though not at maximum settings.


Yes, SLI can look like that.

We say "bye-bye" to every 780 Ti, R9 390X, and other last-year 980s, especially if we want to play at high resolution. And yes, this is the best option for those who like to build an infernal box in Mini-ITX format and scare the guests with 4K gaming on a 60-70-inch TV running off a computer the size of a coffee maker.

The GTX 1080 easily replaces any bundle of previous video cards, except perhaps a pair of 980Tis, Fury Xs, or some kind of furious Titan. True, the power consumption of such monstrous configs bears no comparison with a single 1080, and there are complaints about how well such paired setups actually run.

That is all from me. My New Year's gift to you is this guide; what to treat yourself to, you choose yourself :) Happy New Year!

Source: https://habr.com/ru/post/373023/
