Before we begin, I will say this: I know far more about OpenGL than about Direct3D. I have never written a single line of D3D code in my life, and I have written OpenGL tutorials. So what I am about to tell you is not a question of bias. It is simply a matter of history.
The origin of the conflict
Once upon a time, in the early 90s, Microsoft looked around. They saw the wonderful Super Nintendo and Sega Genesis, with their wealth of great games. And they saw DOS. Developers wrote for DOS the way they wrote for consoles: straight to the hardware. But unlike consoles, where the developer knew exactly what hardware the user had, DOS developers had to target a wide variety of hardware configurations. And that is much harder than it sounds.
Microsoft had an even bigger problem at the time: Windows. Windows wanted to own the hardware, unlike DOS, which let developers do pretty much whatever they wanted. Owning the hardware was necessary in order to coordinate interaction between applications. Game developers hated that coordination, because it consumed precious resources they could have been using for their great games.
To attract game developers to Windows, Microsoft needed a single API that was low-level, ran on Windows without being slowed down by it, and, most importantly, abstracted the hardware away from the developer. A single API for graphics, sound, and user input.
And so was born DirectX.
3D accelerators arrived a few months later, and Microsoft ran into several problems at once. You see, DirectDraw, the graphics component of DirectX, dealt only with 2D graphics: allocating graphics memory and doing bit-blits between the allocated sections of memory.
So Microsoft bought a piece of middleware and fashioned it into Direct3D version 3. It was reviled by absolutely everyone. And not without reason: one glance at the code was enough to recoil in horror.
Old John Carmack at Id Software took one look at that garbage, said "To hell with this!", and decided to write against another API instead: OpenGL.
Another part of this tangle of problems was that Microsoft was already busy working with SGI on an OpenGL implementation for Windows. The idea was simple: court the developers of typical GL workstation applications, such as CAD, modeling, and the like. Games were the last thing on Microsoft's mind at the time. This was primarily meant for Windows NT, but Microsoft decided to ship the implementation in Win95 as well.
To draw professional software developers toward Windows, Microsoft tried to bribe them with access to the new features of 3D graphics accelerators. They created the Installable Client Driver protocol: a graphics card maker could override Microsoft's software OpenGL implementation with a hardware-accelerated one. Code would automatically use the hardware implementation if one was available.
The rise of OpenGL
And so the battle lines were drawn: Direct3D versus OpenGL. It is a genuinely remarkable story, considering how awful D3D v3 was.
The OpenGL Architecture Review Board (ARB) was the organization responsible for maintaining the OpenGL standard. They issued extensions, maintained the extension registry, and created new versions of the API. The board included many of the graphics industry's major players, as well as some OS makers. Apple and Microsoft were, at various times, both members.
Then came 3Dfx with the Voodoo2. It was the first hardware capable of multitexturing, something OpenGL could not do before. Although 3Dfx had no interest in standard OpenGL, NVIDIA, maker of the next multitexturing graphics chip (the TNT1), was keen on it. So the ARB had to issue an extension: GL_ARB_multitexture, which gave access to multitexturing.
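To give a rough idea of what that extension exposed, here is a minimal, hypothetical sketch in old immediate-mode style; it assumes the GL_ARB_multitexture entry points have been loaded and that base_texture and lightmap_texture are existing texture IDs:

```c
/* Combine a base texture with a lightmap in a single pass.
 * The default MODULATE texture environment multiplies the two. */
glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, base_texture);

glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, lightmap_texture);

glBegin(GL_TRIANGLES);
    /* Each vertex now carries one texture coordinate per unit. */
    glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 0.0f);
    glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.0f, 0.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    /* ... remaining vertices ... */
glEnd();
```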
Around the same time, Direct3D v5 was released. D3D had finally become an actual API, rather than something a cat vomited up. The problem? No multitexturing.
Oops.
By and large, this didn't matter as much as it should have, because people didn't use multitexturing all that much. At least not directly. Multitexturing hurt performance, and in many cases it simply wasn't worth using instead of multi-pass rendering. And of course, game developers wanted their games to run on older hardware that had no multitexturing at all, so many games shipped without it.
D3D got away with it.
Time passed, and NVIDIA released the GeForce 256 (not the GeForce GT-250; the very first GeForce), effectively ending the arms race in graphics accelerators for the next two years. Its main selling point was the ability to do vertex transform and lighting (T&L) in hardware. But that's not all: NVIDIA loved OpenGL so much that their T&L engine effectively was OpenGL. Almost literally: as I understand it, some of its registers directly accepted OpenGL enumerators as values.
Direct3D v6 came out. Multitexturing at last, but... no hardware T&L. OpenGL had always had a T&L pipeline, even though before the 256 it was implemented in software. So it was not very hard for NVIDIA to turn the software implementation into a hardware one. D3D didn't get hardware T&L until version 7.
The dawn of shaders, the twilight of OpenGL
Then the GeForce 3 came out, and a whole lot of things happened at once.
Microsoft had decided that this time they were not going to be late to the party. So instead of watching what NVIDIA did and copying it after the fact, they went to NVIDIA and talked to them. And then they fell in love, and out of that union a little game console was born.
Then there was a painful divorce. But that's another story.
For the PC, this meant that the GeForce 3 came out simultaneously with D3D v8. And it is easy to see how much the GeForce 3 shaped the shaders in v8. Shader Model 1.0's pixel shaders were extremely tightly bound to NVIDIA's hardware. There was no attempt to abstract away from NVIDIA; SM 1.0 was essentially whatever the GeForce 3 did.
When ATI jumped into the high-performance graphics card race with the Radeon 8500, a problem emerged. The 8500's pixel pipeline was more powerful than NVIDIA's. So Microsoft issued Shader Model 1.1, which in essence was "whatever the 8500 does."
That may sound like a failure on D3D's part. But failure and success are relative. The real failure was unfolding in the OpenGL camp.
NVIDIA loved OpenGL, so when the GeForce 3 came out, they released a set of OpenGL extensions. Proprietary extensions that only worked on NVIDIA hardware. Naturally, when the 8500 appeared, it couldn't use any of them.
You see, on D3D v8 you could at least run your SM 1.0 shaders on ATI hardware. Sure, you had to write new shaders to take advantage of the 8500's goodies, but at least your code worked.
To get shaders of any kind on the 8500 in OpenGL, ATI had to write their own set of OpenGL extensions. Proprietary extensions that only worked on ATI hardware. So now you needed two code paths, one for NVIDIA and one for ATI, just to have shaders at all.
Now you might ask: "What was the OpenGL ARB doing, the body whose job it was to keep OpenGL current?" The same thing most committees end up doing: being stupid.
You see, I mentioned ARB_multitexture above because it too plays into this story. To an outside observer, the ARB seemed to be trying to avoid adding shaders at all. They figured that if you made the fixed-function pipeline configurable enough, it could match the capabilities of a shader pipeline.
So the ARB released extension after extension. Every extension with "texture_env" in its name was another attempt to patch that aging design. Check the registry: between ARB and EXT extensions there are eight of them. Many were later folded into core OpenGL.
Microsoft was a member of the ARB at this time; they left around when D3D 9 shipped. So it is entirely possible, in principle, that they were somehow sabotaging OpenGL's development. I personally doubt that theory, for two reasons. First, they would have needed help from the other members, since each member only gets one vote. And second, more importantly, the ARB didn't need Microsoft's help to end up in such a mess. We'll see more evidence of that shortly.
Eventually the ARB, most likely under pressure from ATI and NVIDIA (both very active members), woke up enough to adopt assembly-style shaders.
Want to see more nonsense?
Hardware T&L. Which OpenGL had first. Here's the interesting part: to squeeze maximum performance out of hardware T&L, you need to store your vertex data on the GPU. After all, the GPU is what actually uses it.
In D3D v7, Microsoft introduced the concept of vertex buffers: dedicated regions of GPU memory for storing vertex data.
Want to know when OpenGL got its equivalent? Oh, NVIDIA, being a fan of all things OpenGL (so long as they stayed proprietary NVIDIA extensions), released their own vertex array extension right when the GeForce 256 shipped. But when did the ARB decide to provide that functionality officially? Two years later. That was after they had approved vertex and fragment shaders (pixel shaders, in D3D parlance). That's how long it took the ARB to come up with a cross-platform solution for storing data in GPU memory. Which, again, is exactly what you need to get the most out of hardware T&L.
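For a sense of what that eventually-standardized functionality looks like in use, here is a minimal sketch using the ARB_vertex_buffer_object entry points (later promoted to core buffer objects); the triangle data is just a placeholder:

```c
/* Upload vertex data into GPU-resident memory via ARB_vertex_buffer_object
 * (entry points assumed to be loaded through an extension loader). */
static const GLfloat vertices[] = {
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
     0.0f,  1.0f, 0.0f,
};

GLuint vbo;
glGenBuffersARB(1, &vbo);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(vertices), vertices,
                GL_STATIC_DRAW_ARB);

/* With a buffer bound, the classic vertex-array pointer becomes an
 * offset into GPU memory instead of a client-side pointer. */
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);
glDrawArrays(GL_TRIANGLES, 0, 3);
```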
One language to destroy everything
So, OpenGL development was fragmented. No unified shaders, no unified GPU storage, while D3D users were already enjoying both. Could it get any worse?
Well... you could say that. Meet 3D Labs.
Who are they, you ask? A now-defunct company that I consider the true killers of OpenGL. Sure, the ARB's general lethargy left OpenGL vulnerable at a time when it should have been beating D3D on every front. But 3D Labs is perhaps the single biggest reason for OpenGL's current market position. What could they possibly have done?
They designed the OpenGL Shading Language.
You see, 3D Labs was a dying company. Their expensive accelerators were being made irrelevant as NVIDIA ramped up the pressure on the desktop market. And unlike NVIDIA, they had no presence in the mainstream market; if NVIDIA won, they were gone. Which is exactly what happened.
So, in an attempt to stay afloat in a world that no longer wanted their products, 3D Labs showed up at a Game Developers Conference with a presentation of what they called "OpenGL 2.0". It was to be a complete, ground-up rewrite of the OpenGL API. And that made sense; the OpenGL API had plenty of rough spots by then (note: it still does). Just look at how texture loading and binding work; it's practically black magic.
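To give a flavor of that, here is a minimal sketch of the classic GL 1.x texture setup (width, height, and pixels stand in for already-loaded image data):

```c
/* The classic texture creation dance: generate a name, bind it to edit it,
 * set state, then upload. Missing any step fails silently. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);   /* bind-to-edit: global state, not a parameter */

/* The default MIN_FILTER expects mipmaps; without them the texture is
 * "incomplete" and fixed-function GL silently disables texturing. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,
             width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

glEnable(GL_TEXTURE_2D);             /* and you still have to enable texturing */
```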
Part of that proposal was a shading language. Yes, that one. Though unlike the cross-platform ARB extensions of the time, their shading language was "high-level" (C counts as high-level for a shader language. Really.).
Now, Microsoft was working on its own high-level shading language at the time. Which they, in their usual imaginative fashion, called the "High Level Shader Language" (HLSL). But their approach to the language was fundamentally different.
The biggest problem with 3D Labs's shading language was that it was built in. HLSL, you see, was a language Microsoft defined. They released a compiler for it that produced Shader Model 2.0 (or later) assembly code, which you then fed to D3D. In the D3D v9 days, D3D never consumed HLSL directly. It was a nice abstraction, but a purely optional one. A developer could always set the compiler aside and hand-tune the assembly output for maximum performance.
The 3D Labs language had none of that. You fed the driver your C-like source, and it handed you back a shader. End of story. Not an assembly shader, not something you could feed into anything else. An actual OpenGL object representing the shader.
This meant OpenGL users were at the mercy of driver developers who were only just getting the hang of compiling assembly-like languages. Compiler bugs in the newly minted OpenGL Shading Language (GLSL) were legion. Worse, even if you managed to get a shader to compile correctly across multiple platforms (no small feat in itself), you were still subject to the optimizers of the day. Which were not as optimal as they could have been.
That was GLSL's biggest problem, but not its only one. Far from it.
In D3D, as in OpenGL's old assembly languages, you could mix and match vertex and fragment (pixel) shaders. Any vertex shader could be used with any compatible fragment shader, so long as they communicated through the same interface. There was even some tolerance for mismatches: a vertex shader could write outputs that the fragment shader simply never read, and so on.
GLSL had none of that. Vertex and fragment shaders were fused together into a single abstraction that 3D Labs called a "program object." If you wanted to mix and match vertex and fragment programs, you had to build multiple such program objects. Which caused the second problem.
You see, 3D Labs thought they were being very clever. They based GLSL's compilation model on C/C++. You take a .c or .cpp file and compile it into an object file; then you take one or more object files and link them into a program. That's how GLSL compiles: you compile a shader (vertex or fragment) into a shader object, then attach those shader objects to a program object and link them together to get your program.
While this allowed for potentially cool ideas like shader "libraries" containing extra code shared by the main shaders, in practice it meant that shaders were compiled twice: once at the compile stage and once at the link stage. No intermediate object code was ever produced; the shader was compiled, the result was thrown away, and the compilation was done all over again at link time.
So if you wanted to link one vertex shader against two different fragment shaders, you had to compile a lot more code than you would in D3D. Especially since all of this C-like compilation happened at program launch, not offline during development.
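As a rough sketch of that compile-then-link model using the GL 2.0 entry points (error checking omitted, and it assumes a loader has resolved the functions; the source strings come from the caller):

```c
/* A minimal sketch of GLSL's compile-then-link model (GL 2.0 API). */
GLuint build_program(const char *vs_src, const char *fs_src)
{
    /* Each shader source is compiled into its own shader object. */
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vs_src, NULL);
    glCompileShader(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(fs);

    /* The shader objects are then attached to a program object and linked;
     * the driver is free to redo the real compilation work here. */
    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);

    /* The shader objects are no longer needed once the program is linked. */
    glDeleteShader(vs);
    glDeleteShader(fs);
    return prog;
}
```

Reusing the same vertex shader with a second fragment shader means going through all of this again: another program object, another full compile and link.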
GLSL had other problems too. And it's probably unfair to lay all the blame on 3D Labs, since the ARB did eventually approve and incorporate the language (though nothing else from their "OpenGL 2.0" proposal made it in). But the idea was theirs.
And now the really sad part: 3D Labs was, by and large, right. GLSL is not a vector shading language the way HLSL always was. That was because 3D Labs's hardware was scalar hardware (much like modern NVIDIA hardware), but in the end they were right about where accelerator design was headed.
They were also right to go with a compile-online model for a "high-level" language. D3D eventually switched to that as well.
The problem was that 3D Labs was right at the wrong time. And in trying to summon the future too early, in trying to anticipate it, they cast aside the present. It sounds a lot like how OpenGL always had the possibility of T&L. Except that OpenGL's T&L pipeline was useful even before hardware T&L arrived, whereas GLSL was nothing but a liability before the world was ready for it.
GLSL is a good language now. But for its time, it was terrible. And OpenGL suffered for it.
The apotheosis approaches
While I maintain that 3D Labs struck the fatal blow, it was the ARB itself that drove the final nail into OpenGL's coffin.
You've probably heard this story. By the time of OpenGL 2.1, OpenGL had a problem. It was carrying a lot of legacy cruft. The API was no longer easy to use. There were five ways to do anything, and nobody knew which one was fastest. You could "learn" OpenGL from simple tutorials, but nothing taught you which parts of the API would give you real performance.
So the ARB decided to attempt yet another reinvention of OpenGL. It was like 3D Labs's "OpenGL 2.0", only better, because the ARB was behind it. The effort was called "Longs Peak".
What's so bad about trying to improve an old API? The bad part is the timing: Microsoft was vulnerable right then. This was around the launch of Vista.
With Vista, Microsoft decided to introduce some long-overdue changes to display drivers. Drivers were now forced to go through the OS for video memory virtualization and various other things.
One can debate whether that was necessary, but the fact remains: Microsoft made D3D 10 Vista-only (and for subsequent versions of Windows). Even if your hardware was capable of everything D3D 10 offered, you couldn't run D3D 10 applications without running Vista.
You probably also remember that Vista... well, let's just say it didn't work out so well. So you had a sluggish OS, a new API that ran only on that OS, and a new generation of accelerators that needed that API and OS in order to outdo the previous generation.
However, developers could have accessed D3D 10-class features through OpenGL. Well, they could have, if the ARB hadn't been so busy working on Longs Peak.
The ARB spent, by and large, a year or two of work on making the API better. By the time OpenGL 3.0 actually came out, Vista's moment had already passed, Win7 was on the horizon, and most developers no longer cared about D3D 10-class features anyway. After all, the hardware D3D 10 was aimed at ran D3D 9 just fine. And with the rise of PC-to-console ports (or of PC developers moving to console development), D3D 10-class features went largely unwanted.
If developers had gotten access to those features earlier, through OpenGL on WinXP machines, OpenGL development would have received a much-needed shot in the arm. But the ARB missed that opportunity. And do you know the worst part?
Despite spending two precious years trying to rebuild the API from scratch... they still failed, and simply reverted to the old one.
So the ARB not only missed a crucial window of opportunity, it didn't even finish the task that caused it to miss that window. A complete failure, all around.
And that is the story of the struggle between OpenGL and Direct3D. A story of missed opportunities, tremendous folly, blindness, and plain recklessness.