PC Buyer's Guide: Video Card Selection

Hi, Geektimes! We continue our series of articles on choosing PC hardware. In previous installments we covered motherboards, processors and chipsets, cooling systems, hard drives, and power supplies. Today we talk about the component that puts your favorite website on the screen: the video card.



Broadly speaking, all video cards fall into two large camps: professional and consumer. Professional video cards are needed not by professional gamers but by people who work with 3D graphics: animators, modelers, and visual effects artists, and they are typically used to create videos, films, and the stunning scenery in them. As a rule, such video cards are prohibitively expensive.

Consumer video cards (which are what we will mostly talk about) come in two types: integrated and discrete. They differ primarily in performance and price: integrated solutions are, for the most part, meant to display an office suite and a browser and struggle with serious 3D graphics, while a discrete video card is a separate device built specifically for games and "heavy" applications.

Integrated graphics


Originally, integrated video adapters were part of the so-called north bridge: a large chip on the motherboard that acted as a hub between the CPU, RAM, video adapter, and the south bridge, which was responsible for I/O devices: hard drives, USB ports, network, sound, and the PCI-X and PCI-Express expansion slots. The gradual migration of the north bridge's functions (and now part of the south bridge's) directly into the CPU brought the graphics core along with it.

In Intel processors, you may encounter the following integrated graphics solutions:

Relatively current models
Sixth generation, 2011-2012
Intel HD Graphics
Intel HD Graphics 2000
Intel HD Graphics 3000

Seventh generation, 2012
Intel HD Graphics 2500
Intel HD Graphics 4000

Seventh generation, 2013
Intel HD Graphics 4200
Intel HD Graphics 4400
Intel HD Graphics 4600
Iris Graphics 5000
Iris Graphics 5100
Iris Pro Graphics 5200

Eighth generation, 2014+
Intel HD Graphics 5300
Intel HD Graphics 5500
Intel HD Graphics 6000
Iris Graphics 6100
Iris Pro Graphics 6200

Everything below the Intel HD 4400 is a fairly old solution: such chips have no trouble outputting 2D graphics or decoding video, but you can safely forget about games. Anything higher does not shine in 3D applications either, but its performance is already at a much more pleasant level.

None of these video chips is really designed for demanding games: casual browser titles and 3D games from a decade ago will run quite well at medium settings in Full HD, but do not count on more.

AMD (which acquired the video chip maker ATI, known for its Radeon line of video cards) also took the path of integrating the GPU and CPU, but in a slightly different way. AMD created the so-called APU, the Accelerated Processing Unit, by analogy with the CPU (Central Processing Unit). The line of hybrid processors is called Fusion and uses FM-series sockets (we have already covered sockets and chipsets).

The integrated graphics in AMD's A-series processors is faster than Intel's, and modern games run reasonably well, but do not forget that an integrated video chip has neither dedicated high-speed memory nor a serious heatsink of its own: its thermal envelope is limited both by the heat-spreader area and by whatever cooling the processor has. If you can afford an extra 5-7 thousand rubles for powerful cooling, that money is better spent on a discrete graphics card than on trying to squeeze extra frames per second out of AMD Fusion.

We'll talk more about hybrid graphics and expanding its capabilities some other time, and now let's move on to the most interesting part: discrete video cards.

NVIDIA vs ATI-AMD


The war is as old as Intel vs AMD. Arguing about which is better is a pointless flame war.


Each manufacturer has both more and less successful products, and they should be judged solely by how well the price/performance ratio matches what you actually want.

What's inside the video card


Before explaining what exactly we pay that kind of money for, we need to understand the internal structure of a graphics adapter. We will not consider legacy architectures with separate vertex and pixel shader processors; we will dwell only on the key points.

Almost all modern 3D graphics (with rare exceptions) consists of sets of triangles, which in 3D modeling are called polygons. Three-dimensional models are assembled from these polygons, two-dimensional textures are stretched over them, and after the textures are applied, special effects are layered on to form the final picture. The transformation of triangles into the finished image is described well in the articles linked below.

Inside the video card, producing each frame is split into two large stages: the geometry stage (which mainly loads the computational units) and the rendering stage (where most of the memory traffic happens and the dedicated hardware accelerators do their work).
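
To make these two stages a little more concrete, here is a minimal software-rasterizer sketch in C++. It is illustrative only (the names `project`, `rasterize`, and `draw_frame` are assumptions of this sketch, not a real GPU API): the geometry stage projects each triangle's vertices into screen coordinates, and the raster stage walks the covered pixels and writes them to a frame buffer.

```cpp
// Minimal software-rendering sketch of the two stages (illustrative only:
// no model/view matrices, z-buffer, texturing, or lighting).
#include <algorithm>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };
struct Triangle { Vec3 v[3]; };            // one "polygon" of a 3D model

const int W = 1920, H = 1080;
std::vector<uint32_t> framebuffer(W * H, 0);

// Geometry stage: perspective divide from camera space into pixel coordinates.
Vec3 project(const Vec3& p) {
    const float inv_z = 1.0f / p.z;
    return { (p.x * inv_z + 1.0f) * 0.5f * W,
             (1.0f - p.y * inv_z) * 0.5f * H, p.z };
}

// Signed-area ("edge") function used for the pixel coverage test.
float edge(const Vec3& a, const Vec3& b, float px, float py) {
    return (b.x - a.x) * (py - a.y) - (b.y - a.y) * (px - a.x);
}

// Raster stage: walk the triangle's bounding box and fill the covered pixels.
void rasterize(const Triangle& t, uint32_t color) {
    const int x0 = std::max(0.0f, std::min({ t.v[0].x, t.v[1].x, t.v[2].x }));
    const int x1 = std::min(W - 1.0f, std::max({ t.v[0].x, t.v[1].x, t.v[2].x }));
    const int y0 = std::max(0.0f, std::min({ t.v[0].y, t.v[1].y, t.v[2].y }));
    const int y1 = std::min(H - 1.0f, std::max({ t.v[0].y, t.v[1].y, t.v[2].y }));
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x) {
            const float w0 = edge(t.v[1], t.v[2], x + 0.5f, y + 0.5f);
            const float w1 = edge(t.v[2], t.v[0], x + 0.5f, y + 0.5f);
            const float w2 = edge(t.v[0], t.v[1], x + 0.5f, y + 0.5f);
            if ((w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                (w0 <= 0 && w1 <= 0 && w2 <= 0))    // pixel inside the triangle
                framebuffer[y * W + x] = color;     // texturing/effects would go here
        }
}

void draw_frame(const std::vector<Triangle>& model) {
    for (const Triangle& tri : model)              // geometry first, then raster
        rasterize({ project(tri.v[0]), project(tri.v[1]), project(tri.v[2]) },
                  0xFFFFFFFFu);
}
```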

An extremely detailed series on how 3D graphics works in general was written by haqreu on Habr:

Article 1: the Bresenham line algorithm
Article 2: triangle rasterization and back-face culling
Article 3: hidden-surface removal with a z-buffer
Article 4: the necessary geometry: a festival of matrices
4a: building perspective distortion
4b: moving the camera and what follows from it
4c: a new rasterizer and correction of perspective distortion
Article 5: writing shaders for our library
Article 6: a bit more than just a shader: rendering shadows

Even skimming those posts will give you an idea of how much computation a video card performs for just a single frame, and ideally it has to deliver more than thirty such pictures every second.
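
As a rough back-of-the-envelope illustration (the overdraw factor and operations per pixel below are assumptions for the sake of the example, not measurements), here is what even a modest shading workload adds up to at Full HD and 30 frames per second:

```cpp
// Back-of-the-envelope: how much pixel work a Full HD frame implies.
// The overdraw factor and ops-per-pixel are assumed, illustrative numbers.
#include <cstdio>

int main() {
    const double width = 1920, height = 1080;  // Full HD
    const double fps = 30;                     // target frame rate
    const double overdraw = 2.5;               // each screen pixel shaded ~2-3 times (assumption)
    const double ops_per_pixel = 500;          // arithmetic ops per shaded pixel (assumption)

    const double shaded_pixels_per_s = width * height * fps * overdraw;
    const double ops_per_s = shaded_pixels_per_s * ops_per_pixel;

    std::printf("Shaded pixels per second: ~%.0f million\n", shaded_pixels_per_s / 1e6);
    std::printf("Shader arithmetic alone:  ~%.0f GFLOPS (geometry, texturing, blending extra)\n",
                ops_per_s / 1e9);
    return 0;
}
```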

So here are the video card parameters that interest us:

" Kernel code name: a little responsible for what, but useful when comparing video cards within the series. Some manufacturers simply rename last year's video cards, hang on them a new name with smaller numbers or programmatically limit their performance, and you're done: the old 9800GT turns into NVIDIA GTX450. Fortunately, recently such “feints with ears” have almost stopped.

" Technical process: measured in nanometers, the smaller it is - the newer the GPU and the lower its heat, other things being equal.

» Number of stream processors: what used to be separate "vertex" and "pixel" shader processors is now unified. The video card driver decides on the fly how many units handle vertex processing and how many handle pixel processing. The more of them there are, the more complex a scene the video adapter can digest without stalling the rest of the computing units. In other words: more stream processors means more effects in the frame without a loss of performance.

" Core frequency: in fact, determines the speed of the stream processors and some other modules. More is better.

"The number of texture units: the number of units responsible for the imposition of two-dimensional textures on three-dimensional models. The more texture units, the clearer high-resolution textures can be used in the game settings.

» Number of raster operation units (ROPs): determines the card's ability to assemble the finished picture into a frame that is then sent to the monitor. The more of them, the fewer problems with very high output resolutions.

» Memory type and frequency: the faster the memory, the narrower the bottleneck between the GPU's computing units and the data stored in that memory.

" Memory bus: the second indicator characterizing the speed of data exchange between the processor of a video card and the microcircuits on which the necessary blocks of information are stored. Multiplying the memory bus by the real frequency of its operation will give us bandwidth: a few years ago, for example, the Radeon video cards had a memory bus of 256 or 128 bits, but a very high frequency of the GDDR5 memory. NVIDIA video cards, by contrast, used a more complex architecture: they transferred 192, 256, 384 or 480 bits at a time, but they worked with cheaper and less fast GDDR3 memory.

These days such parameters matter mostly in the budget segment, where you still occasionally find both cheap memory and a heavily cut-down (as narrow as 64-bit) bus. Flagship products have moved to wide buses and fast memory and compete mainly on internal arithmetic throughput and driver optimization.

" Memory: not strange, not very much wags on performance, if you play on one monitor with a resolution of 1920x1080 pixels. The difference between identical or comparable video cards with 2, 3, 4 gigabytes in this case, you almost do not notice. A larger volume is needed if you are using a multi-monitor configuration or are an enthusiast in the PC world: that is, you work and play on monitors with a resolution of 2560x1440 and 3840x2160 pixels.

" Connection interface and supported DirectX / OpenGL versions: this year almost all video cards work with support for backward-compatible PCI-E 2.0 / 3.0, and 4.0 is at the stage of standardization. With OpenGL and DirectX, everything is also quite simple: finding a new video card without the support of the required versions is difficult, and modern accelerators with promising DirectX 12 will accelerate before a significant number of games begin to widely use the capabilities of these libraries. You can not bathe on this topic.

2 cores, 2 gigabytes, gaming graphics card


Does everyone remember that silly slogan? It was blaring from everywhere, and a poster with it hung at every metro station. So what is a gaming video card? In fact, every discrete card (except the very cheapest ones, bought only because the computer has no integrated graphics at all) is, one way or another, a gaming card.

In NVIDIA's lineup, the minimally decent "gaming" options are the cards whose index ends in 40 or higher: 540, 640, 740, 840, and so on. NVIDIA itself also reserves the GTX prefix for "gaming" cards, while GT denotes "multimedia" models: games will launch on them, but do not expect either graphical masterpieces or impressive performance.

ATI-AMD has a similar segment of video adapters: those starting with R7 (in the new lineup) or carrying an index of at least x5xx in the old one: a 7530, for example.

Anything with lower numbers, such as the NVIDIA GT820 or Radeon R5 230, competes mostly with the graphics built into the CPU: it will manage to push a presentation to a projector, play a film on a big screen, or run Dota at minimum settings, but no more.

In terms of the characteristics described above, a video card can be called a gaming card if its computing power can display the graphics of modern games at medium settings at no less than 20-30 frames per second.
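
In practice that threshold is a time budget: 30 FPS means the card has at most about 33 ms to produce each frame. A minimal sketch of how a game loop might check this (`render_frame` is a hypothetical stand-in for the actual rendering work, not part of any real engine):

```cpp
// A 30 FPS target is really a ~33 ms per-frame budget.
// Sketch only: render_frame() is a hypothetical stand-in for the game's rendering work.
#include <chrono>
#include <cstdio>

void render_frame() { /* geometry + rasterization + effects would happen here */ }

int main() {
    std::printf("30 FPS -> %.1f ms per frame, 60 FPS -> %.1f ms per frame\n",
                1000.0 / 30, 1000.0 / 60);

    using clock = std::chrono::steady_clock;
    for (int frame = 0; frame < 300; ++frame) {
        const auto start = clock::now();
        render_frame();
        const double ms =
            std::chrono::duration<double, std::milli>(clock::now() - start).count();
        if (ms > 1000.0 / 30)   // frame took longer than the 30 FPS budget allows
            std::printf("frame %d: %.1f ms, below 30 FPS\n", frame, ms);
    }
    return 0;
}
```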

Sufficient performance


The main question (besides budget) to settle before choosing a discrete video card is what "sufficient performance" means for you. If you play competitive shooters (like CS:GO), the overall power of the video subsystem is not critical: such games are played at minimum detail (to make targets easier to spot), and the main factor is your monitor's resolution. Relatively inexpensive cards are enough for "office" panels at 1280x1024, 1366x768, or 1440x900. Full HD will require a more powerful card, but it still will not hit the wallet hard.

If you expect to enjoy genuinely impressive graphics (and not the kind of thing consoles show us under the guise of next-gen), you will need the really big guns: flagship and near-flagship models.


But do not forget that the graphics depend not only on the video card but also on the game's creators: the most beautiful burger in the gaming industry, from Battlefield Hardline, sits right next to flat, low-detail sprites.

Resolution and memory



The influence of monitor resolution on graphics performance is barely noticeable until we move from classic resolutions to what progress has given us: Retina displays, 4K TVs, and such amenities as rigs of 3-6 monitors for flight and racing simulators.

Experts at THG compared a flagship AMD 7970 graphics card fitted with doubled memory chips, for a total of 6 GB, against the same card with the regular 3 GB. Testing was done on an Eyefinity-6 setup (using the ATI driver, six monitors were combined into one display with a resolution of 11,520 by 1080 pixels).


The tests showed that it is precisely in such a demanding environment that the extra gigabytes pay off: in some games the gain is significant (up to one and a half times), in others not so much. On the other hand, a single card at such resolutions cannot deliver an acceptable frame rate anyway, so chasing "extra" gigabytes for gaming is pointless: you are more likely to need a second video card to reach a normal frame rate than to run out of memory on the 3 GB version.



What to choose?


If money is very tight and your computer died yesterday (you must be reading this from a punch-card calculator), then the best choice for gaming would be an AMD A-series processor with integrated graphics.

The AMD A6-5400K comes with an HD 7540D video accelerator; it delivers roughly 25-30 frames per second in games like Dota 2 or League of Legends at medium settings in 1080p. In World of Warcraft you will have to play at minimum settings.

If your funds are a little less constrained, accelerators from the Radeon R7 series come to the rescue: the 240, 250, and 250X.

Depending on the version, their performance is roughly one and a half to two times higher than that of the graphics built into the AMD A6.

A slightly more expensive option is an NVIDIA GTX 750 / GTX 750 Ti or AMD R7 260X / R7 265. These cards guarantee a trouble-free life in all kinds of MOBA games, deliver a good picture in modern racing games and shooters, and keep online games comfortable. Medium settings at Full HD: no questions asked.



Mid-range video cards cost somewhat more and offer broadly comparable performance among themselves.

You can pick any relatively recent card from NVIDIA (760, 960) or AMD (R9 270, 270X, 280, 280X): here the question is mostly the thickness of your wallet.



Climbing higher, to NVIDIA's x70 series or AMD's 290 series, makes sense in only one case: you have a monitor with a resolution of 2560x1440 or higher. That said, even cards like the NVIDIA 770/970 will show excellent results there at medium settings.



A word about SLI and CrossFire


You will, of course, get a tangible increase in performance: putting in two GTX 970s instead of one 980 gives roughly one and a half times the performance figures, and roughly the same increase in the computer's cost. The problem is that in a couple of years you will want to upgrade, and you will have to sell not one but two old pieces of hardware, losing both to the exchange rate and to the fact that each has dropped to a third of its price. Add to this the extra power supply capacity you will also have to pay for... All in all, the financial picture is not the most pleasant.

The second problem is drivers and games: not every game runs noticeably faster on two cards in tandem than on a single, more powerful one.

Different manufacturers


Although only two companies develop discrete video accelerators, NVIDIA and AMD, many vendors produce and modify the reference design: the most popular are ASUS, MSI, Gigabyte, and EVGA; slightly less popular but still common are Sapphire, Zotac, and XFX. The remaining manufacturers usually belong to the low-cost segment and constantly try to save on something (mostly on the power circuitry); choose their versions of a card only if you simply cannot stretch to the extra 5-10% of its price. In any case, warranty service here is excellent, and if something happens to the video card it will be replaced or repaired without any problems.

Mostly, manufacturers redesign the power circuitry, raise operating frequencies, devise exotic cooling systems, and compete on warranty length. You can read up on the differences by searching for a specific model: an ASUS ROG MATRIX-GTX780TI, for example, can post numbers 3-5% higher than a comparable card from Gigabyte or MSI. In general, rely on store reviews, write-ups, and benchmark results; there is no panacea or guaranteed rule that "Z is better than Y".

General tips


Video cards become obsolete very, very quickly. If you are buying a computer to last several years, do not chase the flagship of the freshest generation: it costs unreasonable money and often loses up to 50% of its price within a year, even on store shelves. Look instead at the flagships and near-flagships of the previous generation: their performance is not much different from this year's models, and the price can be a pleasant surprise.


Titan Z - meaningless and merciless.

Choose a video card to match your hardware and needs. Top-segment models are expensive; moreover, while cards at $100 and $200 can differ in performance by a factor of two or three, the difference between $200 and $300 may be a modest 10%. For example, if you play Dota-like games or drive around in World of Tanks, you will not see the difference between an NVIDIA 760 and an NVIDIA 980, especially if your processor is an old dual-core Intel Core i3. :)

It makes sense to upgrade a video card only if its performance genuinely no longer satisfies you: the "synthetic" gain between adjacent generations (NVIDIA's 4xx to 5xx, 5xx to 6xx, and so on) is usually around 10-15%, and the real-world gain is about half that. There is little point in swapping an NVIDIA 560 for a 660 or 760, but replacing it with a 960 (for some reason the marketers skipped the 800 series) is quite a sensible move: the performance gain is noticeable, power consumption drops a little, and you get more memory.

Previous publications from the PC Buyer's Guide cycle:
» PC Buyer's Guide: Choosing a Power Supply
» PC Buyer's Guide: Cooling
» PC Buyer's Guide 2015: Motherboards, Chipsets, and Sockets
» Twist-twirl, I want to confuse. Understanding the HDD lines
