
Direct3D vs. OpenGL: a history of rivalry

To this day you can find arguments on the Internet about which graphics API is better: Direct3D or OpenGL? Despite their religious nature, such flame wars do produce useful results in the form of quite good historical overviews of the development of hardware-accelerated graphics.


The goal of this post is to translate one of these excursions into history, written by Jason L. McKesson in answer to the question "Why do game developers prefer Windows?" The text hardly answers the question as asked, but it describes the development and rivalry of the two most popular graphics APIs very colorfully and in considerable detail, so I have preserved the author's formatting in the translation. The text was written in mid-2011 and covers the period from shortly before the appearance of Direct3D up to the time of writing. The author of the original is an experienced game developer, an active participant on StackOverflow, and the creator of an extensive textbook on modern 3D graphics programming. So let's give the floor to Jason.

Foreword


Before we begin, I would like to say that I know more about OpenGL than about Direct3D. I have never written a single line of D3D code in my life, but I have written tutorials on OpenGL. But what I want to talk about is not a matter of bias; it is a matter of history.

The birth of conflict


Once upon a time, somewhere in the early '90s, Microsoft looked around. They saw that the SNES and the Sega Genesis were very cool: you could play lots of action games and all that. And they saw DOS. DOS developers wrote their games the way console developers did: close to the metal. However, unlike consoles, where the developer knew what hardware the user would have, DOS developers had to write for a wide variety of configurations. And that is much harder than it sounds.

But Microsoft had a bigger problem: Windows. You see, Windows wanted to own the hardware, unlike DOS, which let developers do whatever they pleased. Owning the hardware is necessary for cooperation between applications. But that cooperation is exactly what game developers hate, because it eats up precious resources they could be using for all sorts of cool things.

To promote game development on Windows, Microsoft needed a uniform API that would be low-level, run on Windows without a performance penalty, and be compatible with a wide range of hardware: a single API for graphics, sound, and input devices.

This is how DirectX was born.

A few months later, 3D accelerators appeared, and Microsoft was in trouble. DirectDraw, the graphics component of DirectX, dealt only with 2D graphics: it allocated graphics memory and performed fast bit-blits between different sections of memory.

So Microsoft bought some third-party middleware and turned it into Direct3D version 3. Absolutely everyone panned it. And with good reason: reading D3D v3 code looked like deciphering the writing of a vanished ancient civilization.

Good old John Carmack at id Software took one look at this mess, said "Fuck it...", and decided to write against a different API: OpenGL.

However, another thread in this tangled story is that Microsoft was already working with SGI on an OpenGL implementation for Windows. The idea was to court developers of typical GL workstation applications: CAD, modeling systems, and the like. Games were the last thing on their minds. This was primarily a Windows NT affair, but Microsoft decided to add OpenGL to Windows 95 as well.

To lure workstation software developers over to Windows, Microsoft decided to bribe them with access to these newfangled 3D accelerators. They implemented the installable client driver protocol: a graphics card vendor could override Microsoft's software OpenGL with its own hardware implementation. Code would automatically use the hardware OpenGL if one was available.

However, in those days consumer video cards had no OpenGL support. That did not stop Carmack from porting Quake to OpenGL on an SGI workstation. The GLQuake readme has the following to say:
In theory, glquake will run on any OpenGL implementation that supports the texture objects extension. But unless you run it on very powerful hardware that accelerates everything it needs, it will be unplayably slow. If the game has to go through any software emulation path, its performance will most likely be well under one frame per second.

At this time (March 1997), the only fully OpenGL-compliant hardware capable of running glquake at an acceptable level is the VERY expensive Intergraph Realizm video card. 3dlabs has improved its performance significantly, but with the existing drivers it is still not good enough to play. Some of the 3dlabs drivers for GLINT and Permedia boards also crash NT when exiting full-screen mode, so I do not recommend running glquake on 3dlabs hardware.

3dfx provides an opengl32.dll that implements everything glquake needs, but it is not a complete OpenGL implementation. Other OpenGL applications will probably not work with it, so consider it essentially a "glquake driver".

This was the birth of the miniGL drivers. Eventually they evolved into full OpenGL implementations, as hardware became powerful enough to support that functionality in silicon. nVidia was the first to offer a full OpenGL implementation. The other vendors kept dragging their feet, which was one of the reasons developers moved to Direct3D, which was supported on a wider range of hardware. In the end only nVidia and ATI (now AMD) remained, and both had good OpenGL implementations.

The dawn of OpenGL


So the contenders are set: Direct3D vs. OpenGL. It really is an amazing story, considering how bad D3D v3 was.

The OpenGL Architecture Review Board (ARB) is the organization responsible for maintaining and developing OpenGL. They issue numerous extensions, maintain the extension registry, and create new versions of the API. The ARB is a committee made up of many of the players in the computer graphics industry, along with some OS makers. Apple and Microsoft have at various times also been members.

3Dfx steps onto the stage with the Voodoo2. It is the first video card capable of multitexturing, something OpenGL did not previously offer. While 3Dfx was strongly opposed to OpenGL, nVidia, the maker of the next multitexturing chip (the TNT1), was crazy about it. So the ARB issued the GL_ARB_multitexture extension, which provided access to multitexturing.
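
To give a flavor of what that looked like in practice, here is a minimal sketch of classic fixed-function multitexturing through that extension. It assumes the ARB entry points have already been fetched from the driver (on Windows, via wglGetProcAddress) and that base_tex and detail_tex are existing texture objects; the triangle itself is just illustrative.

```c
#include <GL/gl.h>
#include <GL/glext.h>   /* GL_TEXTURE0_ARB etc.; entry points assumed loaded */

/* Sketch: blending two textures in a single pass with GL_ARB_multitexture. */
void draw_multitextured_triangle(GLuint base_tex, GLuint detail_tex)
{
    glActiveTextureARB(GL_TEXTURE0_ARB);      /* first texture unit  */
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, base_tex);

    glActiveTextureARB(GL_TEXTURE1_ARB);      /* second texture unit */
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, detail_tex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    glBegin(GL_TRIANGLES);
        /* each vertex carries one texture coordinate per unit */
        glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 0.0f);
        glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.0f, 0.0f);
        glVertex3f(-1.0f, -1.0f, 0.0f);

        glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 1.0f, 0.0f);
        glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 1.0f, 0.0f);
        glVertex3f(1.0f, -1.0f, 0.0f);

        glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.5f, 1.0f);
        glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.5f, 1.0f);
        glVertex3f(0.0f, 1.0f, 0.0f);
    glEnd();
}
```

Before this extension, getting the same effect meant rendering the geometry twice and blending the passes in the framebuffer.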

Meanwhile, Direct3D v5 comes out. Now D3D has finally become an actual API rather than some kind of nonsense. The problem? No multitexturing.

Oops.

But this hurt far less than it might have, because almost nobody used multitexturing yet. The lack of it barely cost any performance, and in many cases the difference was unnoticeable compared with multipass rendering. And of course, game developers love having their games run reliably on older hardware, which had no multitexture support, so many games shipped without it.

D3D breathed a sigh of relief.

Time passed, and nVidia rolled out the GeForce 256 (the very first GeForce, not to be confused with the much later GeForce GT 250), effectively ending the competition in the graphics card market for the next two years. The main competitive advantage of this card was hardware vertex transform and lighting (T&L). But that's not all: nVidia loved OpenGL so much that their T&L engine effectively was OpenGL. Almost literally! As I understand it, some of their registers took the numerical values of GLenum enumerators directly as input.

Direct3D v6 comes out. Multitexturing at last... but no hardware T&L. OpenGL had always had a T&L pipeline, even though before the GeForce 256 it was implemented in software. So it was quite easy for nVidia to turn the software implementation into a hardware solution. D3D would not get hardware T&L until version 7.

The dawn of the shader era, and OpenGL in the dark


Then came the GeForce 3. At the same time, many interesting things happened.

Microsoft decided they were not going to be late this time. So instead of watching what nVidia did and copying it after the fact, they took the astonishing step of going to nVidia and talking to them. And they fell in love with each other, and together they had a little game console.

A messy divorce followed later, but that is a whole other story.

For the PC market, this meant that the GeForce 3 came out at the same time as D3D v8, and it is not hard to see how the GeForce 3 shaped D3D v8's shaders. Shader Model 1.0 pixel shaders were tailored very tightly to nVidia's hardware. No attempt whatsoever was made to abstract away from it. Shader Model 1.0 was simply whatever the GeForce 3 did.

When ATI burst into the video card performance race with the Radeon 8500, a problem appeared. The 8500's pixel pipeline turned out to be more powerful than nVidia's. So Microsoft issued Shader Model 1.1, which was basically whatever the 8500 did.

That may sound like a defeat for D3D, but success and failure are relative. The truly epic failure was waiting for OpenGL.

nVidia loved OpenGL, so after the GeForce 3 shipped they released a whole pile of OpenGL extensions. Proprietary extensions that worked only on nVidia hardware. Naturally, when the 8500 appeared, it could not use any of them.

So on D3D 8 you could at least run your SM 1.0 shaders. Sure, you had to write new shaders to exploit all the coolness of the 8500, but at least your code worked.

To get shaders of any kind on the Radeon 8500 under OpenGL, ATI had to develop its own set of OpenGL extensions. Proprietary extensions that worked only on ATI hardware. As a result, for developers to be able to claim that their engine supported shaders at all, they had to write one code path for nVidia and another for ATI.

You might ask, "Where was the ARB, the committee that was supposed to keep OpenGL on course?" They were where many committees end up: sitting around being stupid.

Note that I mentioned ARB_multitexture above because that extension figures deeply into this whole situation. To an outside observer, the ARB seemed to want to avoid the idea of shaders altogether. They decided that if they crammed enough configurability into the fixed-function pipeline, it would match the capabilities of a programmable shader pipeline.

So the ARB released extension after extension. Every extension with "texture_env" in its name was another attempt to patch up this aging design. Look at the extension registry: eight such extensions were released, and many of them were later promoted into core OpenGL functionality.

Microsoft was part of the ARB at the time, and only left around the release of D3D 9, so it is conceivable that Microsoft sabotaged OpenGL in some way. Personally, I doubt that theory, for two reasons. First, they would have had to win over other committee members, since each member has only one vote. Second, and more importantly, the committee did not need Microsoft's help to bungle everything, as we will see shortly.

Eventually the ARB, most likely under pressure from ATI and nVidia (both active members), finally woke up and standardized assembly-style shaders.

Want an even dumber story?

Hardware T&L. Something OpenGL had from the very beginning. To get the best possible hardware T&L performance, you need to store your vertex data on the GPU. After all, the GPU is the main consumer of vertex data.

In D3D v7, Microsoft introduced the concept of vertex buffers: chunks of GPU memory that you allocate and fill with vertex data.

Want to know when the equivalent functionality appeared in OpenGL? Sure, nVidia, as OpenGL's biggest fan, had released its own extension for storing vertex arrays on the GPU back when the GeForce 256 launched. But when did the ARB introduce such functionality?

Two years later. And only after it had approved vertex and fragment (pixel, in D3D terms) shaders. That is how long the ARB took to develop a cross-platform solution for storing vertex data in GPU memory. The very thing hardware T&L needs to reach peak performance.
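
For reference, here is a minimal sketch of the mechanism the ARB eventually standardized (ARB_vertex_buffer_object, core since OpenGL 1.5): create a buffer object, upload the vertex data into it, and then draw from GPU memory instead of client memory. The triangle data is purely illustrative, and the GL 1.5 entry points are assumed to be available.

```c
#include <GL/gl.h>   /* buffer-object entry points assumed available (GL 1.5+) */

/* Sketch: storing vertex data in GPU memory via a vertex buffer object
   and drawing from it through the fixed-function vertex array path. */
void draw_from_vbo(void)
{
    static const GLfloat vertices[] = {
        -1.0f, -1.0f, 0.0f,
         1.0f, -1.0f, 0.0f,
         0.0f,  1.0f, 0.0f,
    };

    GLuint vbo;
    glGenBuffers(1, &vbo);                            /* create the buffer     */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);               /* bind as vertex store  */
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices),   /* copy data to the GPU  */
                 vertices, GL_STATIC_DRAW);

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (const void *)0); /* offset into the VBO   */
    glDrawArrays(GL_TRIANGLES, 0, 3);

    glDeleteBuffers(1, &vbo);                         /* cleanup for the demo  */
}
```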

One language to kill them all


So OpenGL remained in a broken state for some time. No cross-platform shaders, no cross-vendor GPU vertex storage, while D3D users enjoyed both. Could it get any worse?

You could say that it could. Meet 3D Labs.

Who are they, you ask? A defunct company that I consider the true killer of OpenGL. Sure, the ARB's general ineptitude left OpenGL vulnerable at a time when it should have been tearing D3D to shreds. But in my opinion 3D Labs is probably the single biggest reason OpenGL is in its current market position. What did they do to cause that?

They developed a shader language for OpenGL.

3D Labs was a dying company. Their expensive GPUs were being squeezed out of the workstation market by nVidia's ever-increasing pressure. And unlike nVidia, 3D Labs had no presence in the consumer market; if nVidia won, 3D Labs died.

Which is exactly what happened.

In a bid to stay afloat in a world that did not want their products, 3D Labs showed up at a Game Developers Conference with a presentation of what they called "OpenGL 2.0". It was a from-scratch rewrite of the OpenGL API. And that made sense, because back then the OpenGL API was full of cruft (which, incidentally, is still there to this day). Just look at how esoteric texture loading and binding are.

Part of their proposal was a shading language. Yes, that very one. However, unlike the cross-platform extensions available at the time, their shading language was "high-level" (C counts as high-level for a shading language).

Meanwhile, Microsoft was working on its own shading language. Which they, engaging the full power of their collective imagination, called... the High Level Shader Language (HLSL). But their approach to the language was fundamentally different.

The biggest problem with 3D Labs' language was that it was built into the driver. Microsoft's language, by contrast, was entirely defined by Microsoft. They released a compiler that generated assembly for SM 2.0 (or later) shaders, which you could then feed to D3D. In the D3D v9 days, HLSL never touched D3D directly. It was a nice but entirely optional abstraction. A developer always had the option of taking the compiler's output and hand-tweaking it for maximum performance.

The 3D Labs language had nothing of the sort. You hand the driver C-like source, and it produces a shader. That is all. No assembly shader, no intermediate form to feed into anything else. Just an OpenGL object representing the shader.

For OpenGL users, this meant being at the mercy of driver writers who were only just learning how to compile assembly-like languages. Bugs ran rampant in the compilers for the newborn OpenGL Shading Language (GLSL). Worse, if you managed to get a shader to compile correctly across several platforms (no small feat in itself), it was still subject to the optimizers of the day, which were not as optimal as they could have been.

That was a major downside of GLSL, but not the only one. Far from the only one.

In D3D, as in the old OpenGL assembly languages, you could mix and match vertex and fragment shaders however you liked. Any vertex shader could be used with any compatible fragment shader, as long as they communicated through the same interface. A certain amount of mismatch was even allowed: a vertex shader could, for example, write an output that the fragment shader never read.

GLSL had none of that. The vertex and fragment shaders were fused together into what 3D Labs called a "program object". So to use several vertex and fragment shaders in different combinations, you had to build several program objects. And that caused the second-biggest problem.

You see, 3D Labs thought they were being clever. They based GLSL's compilation model on C/C++: you compile each .c file into an object file, then link several object files together into a program. That is how GLSL compiles: first you compile a vertex or fragment shader into a shader object, then you attach those shader objects to a program object and link them together to finally form the program.

In theory, this allowed for cool things like "library" shaders containing code that the main shader calls into. In practice, it meant shaders got compiled twice: once at the compile stage and again at link time. nVidia's compiler in particular was notorious for this. It generated no intermediate object code at all; it compiled, threw the result away, and compiled again at link time.

So to attach one vertex shader to two different fragment shaders, you had to do a lot more compiling than in D3D. Especially considering that all of this compilation happens at run time, rather than offline before the program ever executes.
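
To make the difference concrete, here is a minimal sketch of that compile-and-link model. The helper functions and the shader source strings (vs_src, fs_a_src, fs_b_src) are hypothetical; the point is that reusing one vertex shader with two fragment shaders still means two program objects and two link steps, all performed at run time.

```c
#include <GL/glew.h>   /* assumes a loader such as GLEW provides the GL 2.0 entry points */

extern const char *vs_src, *fs_a_src, *fs_b_src;   /* hypothetical GLSL sources */

/* Compile a single shader stage into a shader object. */
static GLuint compile_shader(GLenum type, const char *src)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, NULL);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    /* real code would fetch the info log here if ok == GL_FALSE */
    return shader;
}

/* Attach shader objects to a program object and link them. */
static GLuint link_program(GLuint vs, GLuint fs)
{
    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);    /* compilers of the era often recompiled everything here */
    return prog;
}

void build_programs(void)
{
    GLuint vs   = compile_shader(GL_VERTEX_SHADER,   vs_src);
    GLuint fs_a = compile_shader(GL_FRAGMENT_SHADER, fs_a_src);
    GLuint fs_b = compile_shader(GL_FRAGMENT_SHADER, fs_b_src);

    /* One vertex shader combined with two fragment shaders requires
       two program objects, hence two separate link steps. */
    GLuint prog_a = link_program(vs, fs_a);
    GLuint prog_b = link_program(vs, fs_b);

    glUseProgram(prog_a);   /* render with combination A...          */
    glUseProgram(prog_b);   /* ...then switch to combination B, etc. */
}
```

In D3D 9, by contrast, HLSL compilation happened offline, and the resulting shaders could be combined freely at run time without any relinking.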

GLSL had other problems too. It would perhaps be wrong to lay all the blame on 3D Labs, since in the end it was the ARB that approved the shading language and adopted it into OpenGL (and nothing else from 3D Labs' proposal). But the original idea was still 3D Labs'.

And now the saddest part: 3D Labs were right (mostly). GLSL is not a vector language the way HLSL was at the time. That is because 3D Labs' hardware was scalar (like modern nVidia hardware), and they were entirely right about the direction many hardware makers would later take.

They were also right about choosing a compile-and-link model for a "high-level" language. Even D3D eventually came around to it.

The problem is that 3D Labs were right at the wrong time. In trying to reach the future too early, in trying to be future-proof, they cast aside the present. It sounds a lot like how OpenGL always had T&L functionality. Except that OpenGL's T&L pipeline was useful even before hardware T&L existed, whereas GLSL was a liability before the rest of the world caught up with it.

GLSL is a good language now. But back then? It was terrible. And OpenGL suffered for it.

On the road to apotheosis


I maintain that 3D Labs dealt OpenGL the fatal blow, but it was the ARB itself that hammered the last nail into the coffin.

You may have heard this story. By the time of OpenGL 2.1, OpenGL was in serious trouble. It was dragging along a huge load of legacy compatibility. The API was no longer easy to use. There were five ways to do any one thing, and no way to tell which was fastest. You could "learn" OpenGL from simple tutorials, but they never taught you the OpenGL that delivered real graphical power and performance.

The ARB decided to make another attempt at reinventing OpenGL. It was similar to 3D Labs' "OpenGL 2.0", but better, because the ARB was behind it. They called it "Longs Peak".

What is so bad about spending a little time improving the API? The bad part was that Microsoft had put itself in a rather shaky position. This was the time of the transition to Vista.

In Vista, Microsoft decided to make long-overdue changes to the graphics driver model. They forced drivers to go through the OS for graphics memory virtualization and other things.

One can argue at length about the merits of that approach, and whether it was even feasible at all, but the fact remains: Microsoft made D3D 10 Vista-and-up only. Even on hardware that supported D3D 10, you could not run a D3D 10 application without Vista.

You may remember that Vista... let's say it did not go over very well. So here we had a sluggish OS, a new API that ran only on that OS, and a new generation of hardware that needed that API and OS to do anything more than simply outperform the previous generation.

However, developers could have accessed D3D 10-class functionality through OpenGL. That is, they could have, had the ARB not been busy working on Longs Peak.

The ARB spent a good year and a half to two years working on improving the API. By the time OpenGL 3.0 came out, the Vista transition was over, Windows 7 was on the horizon, and game developers no longer cared about D3D 10-class features. After all, D3D 10 hardware ran D3D 9 applications just fine. And with the rise of PC-to-console porting (or of PC developers moving to the console market), developers needed D3D 10's feature set less and less.

Had developers been able to access that functionality even on Windows XP, OpenGL development might have received a much-needed shot in the arm. But the ARB missed that opportunity. And do you want to know the worst part?

Despite spending two precious years trying, the ARB failed to reinvent the API from scratch. So they fell back to the status quo, adding only a mechanism for marking functionality as deprecated.

As a result, the ARB not only missed its key window of opportunity, it also failed to finish the very work that caused it to miss that window. Pretty much epic fail all around.

Such is the story of the rivalry between OpenGL and Direct3D. A story of missed opportunities, towering folly, willful blindness, and plain absurdity.

Source: https://habr.com/ru/post/397309/

