There was a dispute about standards, and in particular about web standards and my beloved w3c.
For anyone who somehow doesn't know (and, as I discover with horror, quite a few don't), this consortium is responsible for the standards around HTML, around XML, and more besides.
The essence of the problem is this: not all browsers display sites and other web goodies correctly, or display them at all, and something has to be done about it. There can be several reasons for this.
One of them (the most popular on the net, though not fatal, for the reasons laid out below) is that the code is written with the left foot and is simply not valid. In the case of HTML (still the majority, for now) this is not fatal: at the time, the standard didn't much care what to do if the code was... well, not hopelessly broken, just slightly wrong. So browsers handle that moment each in their own way, ignoring the error or trying to correct it (sometimes successfully: the code <i><b>text</i></b>, where the tags are interleaved instead of nested, will still show you bold italics in IE). But in no case does the browser report it to the user (or to the developer while the code is being written). So now we have a huge number of incorrectly written sites. That's one.
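You can watch this silent error correction happen. Here is a minimal sketch you can paste into any browser's developer console; the exact repaired markup may differ slightly from browser to browser:

```javascript
// Feed the HTML parser improperly nested tags and look at what it actually built.
var container = document.createElement("div");
container.innerHTML = "<i><b>text</i></b>";   // tags interleaved instead of nested

// The browser has quietly rewritten the markup into something well-formed,
// typically "<i><b>text</b></i>" -- and nobody was told the input was wrong.
console.log(container.innerHTML);
```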
If you look at XML in the same light, you can see that the guys at w3c do learn from their mistakes. A conforming XML parser will never show you invalid code; it will tell you exactly which character it didn't like (and some... no, they won't try to fix it, they will simply suggest a solution). All because the specification takes this into account. In the end, I have never seen a single invalid XSLT site (true, I don't see many of them at all). And partly for this reason w3c now recommends XHTML, where that particular rake has been taken away.
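For contrast, a minimal sketch of the XML side, again for a browser console (assuming a browser that provides DOMParser; the wording of the error differs between engines):

```javascript
// Hand the same interleaved tags to an XML parser: instead of silently fixing
// the input, it refuses it and reports where things went wrong.
var xmlSource = "<root><i><b>text</i></b></root>";
var doc = new DOMParser().parseFromString(xmlSource, "application/xml");

// On failure the resulting document contains a <parsererror> element
// describing the offending line and column.
var error = doc.getElementsByTagName("parsererror")[0];
console.log(error ? error.textContent : "well-formed");
```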
But all of that concerns the server side. On the client there are problems too. Not all browsers support the specifications of the same HTML + CSS in full (especially CSS3): apparently they simply can't keep up with the new versions w3c keeps producing. And in their persistent desire to keep up (because if a company declares that only its product supports these "blue bows with polka dots", people will come to it and bring it their money), they forget to close old flaws and seemingly minor defects (the peculiarity of minor defects being that they always surface at the most inopportune moment). Even now (here I pick up a stone and carry it toward Melkosoft's garden) some companies don't consider fixing them all that important: there are, they say, more pressing tasks and problems. I hardly need to mention that web developers try to use new features as soon as the specifications come out, far faster than browsers get updated. So in HTML land there is confusion and vacillation (just open any good site about layout and you will see plenty of footnotes: this is supposed to work, but doesn't work here; that won't show up there). In XML, once again, everything stands firm: the standard is screwed together so cunningly that it's all or nothing (well, at least compared to HTML).
That was all a lyrical introduction (which, I think, should nevertheless help in understanding the idea). The essence of the dispute:
standards are good, but if the validator checked compatibility with specific browsers, it would be better and more useful | vs | only standards and no exceptions (the author's position)
That is, it was proposed to make a kind of subset of the standard that would turn a blind eye to errors in the code that don't affect rendering (perhaps the browser will correct them), and at the same time would warn which parts will not work (including parts of perfectly correct code). Not even a subset, really, but some kind of inter-set: partly "below" the standard, partly "above" it.
There is a sensible idea in this, in that such a validator would, by its very definition, flag uses of newer parts of the specification that are not yet implemented in a given browser. That is, it would keep an eye on the technology running ahead of the browsers. Then a developer (who has, say, IE7) would know for certain that his creation will look the same in IE5. At the moment this is handled by the developer installing several browsers and testing in all of them at once.
On the other hand, such a validator would cut off pieces of code that may well be necessary, and are by no means superfluous in the classical specification, simply because a specific browser does not understand them and will complain about them. That is, classically valid code would be declared incorrect by a browser-specific validator. This is fundamentally wrong.
Further, a few abstract musings on the subject.
First, who is going to do all this? There are more than a dozen browsers, and tracking them all (if w3c were to do it) would be sheer mockery. The firms themselves won't do it either: they still have holes in their own implementations of the specification to patch and patch.
Second, it will lead to the erosion of standards. Even now, with a single standard, there are sites written, for example, "for IE only". The appearance of such validators would only aggravate the situation, because it would legitimize that division. And the standards of these validators would drift apart, since different allowances would be made here and there. This would do nothing for the development of the common idea.
Third, an example from life. There was an idea to make HTML dynamic, and so JavaScript was invented (no, it is not Java). After which it developed. But it developed like an amoeba, in different directions in different browsers. As a result, a simple script is easy to write, but dig a little deeper and the same properties turn out to live in different objects, and you have to figure out on the spot, by the presence of certain objects, which browser you are in, and run code written specially for it. That is, the same algorithm has to be written two (or even three) times for different browsers, so as not to lock users out. Which is exactly what you would expect from such an ideology.
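A minimal sketch of what this looks like in practice: the classic cross-browser dance for attaching an event handler (the method names are the real historical APIs, W3C's versus Internet Explorer's; the helper name is mine):

```javascript
// The same logic, written twice: sniff which event API exists and branch.
function addClickHandler(element, handler) {
  if (element.addEventListener) {            // W3C DOM way (Firefox, Opera, ...)
    element.addEventListener("click", handler, false);
  } else if (element.attachEvent) {          // Internet Explorer's own API
    element.attachEvent("onclick", handler);
  } else {                                   // last resort: the DOM0 property
    element.onclick = handler;
  }
}

addClickHandler(document.body, function () {
  alert("clicked");
});
```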
Failure to comply with standards leads to the death of the standard.