A very long time ago, at the beginning of 1989, Ronald Reagan was still president, although only for the final 19½ days of his term. And before 1989 was over, Taylor Swift was born, and Andrei Sakharov and Samuel Beckett died.
In the long run, the most significant event of 1989 will probably turn out to be that Tim Berners-Lee laid the foundations of the HTTP protocol and named the result the "World Wide Web." (One remarkable property of this name is that the abbreviation "WWW" has twice as many syllables and takes longer to pronounce than the full name.)
Tim's HTTP protocol ran on 10-megabit networks over coaxial cable, and his computer was a NeXT Cube with a 25 MHz clock. Twenty-six years later, the CPU in my laptop is hundreds of times faster and has thousands of times more RAM than Tim's machine had, but the HTTP protocol is still the same.
A few days ago the IESG (Internet Engineering Steering Group) issued a last call for comments on the new HTTP/2.0 protocol (https://tools.ietf.org/id/draft-ietf-httpbis-http2) before approving it as an official standard.
[A couple of months later, after minimal changes, the standard was approved. - translator's note]
Expectations differ
Some expect a major update to the world's most common protocol to be a technical masterpiece, a canonical example for future students of protocol design. Some expect that a protocol developed in the midst of Snowden's revelations will improve privacy. Others cynically expect the opposite. In general, there is an expectation of some "speedup." Many may also assume "greenness," that is, a smaller environmental footprint. And some of us are jaded enough to see "2.0" and mutter "uh-oh, second-system syndrome."
Cheat sheet with the answers: no, no, probably not, no, no, and yes.
If that sounds disappointing, it is because it is.
HTTP/2.0 is not a technical masterpiece. It breaks the separation of previously isolated protocol layers, it is overcomplicated, and it is full of inconsistencies, bad compromises, missed opportunities, and so on. In my (hypothetical) course on protocol design, students would fail if they submitted such a protocol. HTTP/2.0 will not improve your privacy either. Wrapping HTTP/2.0 in SSL/TLS may or may not improve privacy, just as wrapping HTTP/1.1 or any other protocol in SSL/TLS may or may not. HTTP/2.0 by itself does nothing to improve privacy. That is extremely ironic, given that the major burden HTTP carries is cookies, which are such a serious problem that the EU has a legal requirement to warn users about them. HTTP/2.0 could have gotten rid of cookies by replacing them with a fully client-controlled session identifier. That would have given users a clear way to decide when they want to be tracked and when they do not, a significant improvement in privacy. It would also have saved bandwidth. But the proposed standard does not do this.
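As a rough sketch of the idea (nothing like this appears in the actual HTTP/2.0 draft), a client-controlled session identifier could be as simple as an opaque token that the user agent mints, stores, and attaches only when the user chooses to be recognized. The "Session-ID" header name and the URL below are hypothetical illustrations.

    import secrets
    import urllib.request

    # The client, not the server, mints and stores an opaque session identifier.
    session_id = secrets.token_urlsafe(16)

    req = urllib.request.Request("https://example.org/")
    # The user agent decides whether to attach it; omitting it ends the
    # tracking relationship, and no server-set cookies are involved.
    req.add_header("Session-ID", session_id)

    # urllib.request.urlopen(req) would then send the request as usual.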
The good news is that HTTP/2.0 will most likely not lower your privacy either. It does add a few opportunities for server-side tracking, but there are already so many ways to do that with cookies, JavaScript, Flash, and so on, that it probably makes no difference.
You may notice pages loading faster with HTTP/2.0, but most likely only if the content provider has a huge network of servers. Some computers, including your own, will have to spend more resources, especially on large objects such as music, TV shows, and movies. Nobody has demonstrated an HTTP/2.0 implementation that comes anywhere close to modern data-transfer speeds. Faster? Not really.
That also answers the question about environmental impact: HTTP/2.0 requires more computing resources than HTTP/1.1 and will therefore increase CO2 emissions, accelerating climate change. You would think that a protocol intended for tens of millions of computers would take environmental considerations into account, but, surprisingly at least to me, I found no evidence that the IETF cares about environmental issues at all.
And yes, the second-system syndrome is strong.
Given such a mediocre result, you are probably wondering why HTTP/2.0 is being considered as a standard at all.
The answer is simple: politics
Google came up with the SPDY protocol, and since it has its own browser, it could experiment as it pleased, optimizing the protocol for its own specific needs. SPDY was a good prototype that clearly showed there was room for improvement in a new version of HTTP. Kudos to Google for that. But SPDY also became something of a walled garden for other people and, more importantly, for other companies, and that is where the politics surfaced.
The IETF, obviously sensing its own irrelevance, quickly "discovered" that the HTTP/1.1 protocol needed an update and tasked a working group with preparing it on an unrealistically short schedule. That ruled out any basis for the new HTTP/2.0 protocol other than SPDY. With the most glaring warts of SPDY thrown out, and any other attempts at improvement rejected with verdicts like "out of scope," "too late," or "no consensus," the IETF can now demonstrate its relevance and declare victory, having sacrificed practically every principle it holds dear in exchange for the privilege of rubber-stamping Google's initiative.
But politics does not end there.
The reason HTTP/2.0 will not improve privacy is that the big corporate sponsors have built their business model on the absence of privacy. They are very upset that the NSA spies on practically everyone in the entire world, but they do not want to do anything that would keep them from doing the same. The proponents of HTTP/2.0 are trying to use it as a lever to force SSL everywhere, even though for many HTTP applications encryption is unnecessary, undesirable, or may even be illegal.
Think, for example, of the emergency-services site of your country, region, or city.
Local governments have no desire to spend resources on SSL/TLS connections to every smartphone in their area when something explodes, rivers flood, or people are poisoned. The largest news sites likewise prefer to be able to report the news rather than hide the fact that they are reporting it, especially when something serious has happened. (Has the IETF already forgotten the exponential traffic graphs of the CNN sites 14 years ago? [translator's note: the author is referring to the attacks of September 11, 2001 in the USA])
The so-called "multimedia business" [meaning porn - translator's note], which accounts for almost 30 percent of all traffic on the net, likewise has no desire to spend resources on pointless encryption. And there are categories of people who are legally denied confidential communication: children, prisoners, financial traders, CIA analysts, and so on. Yet despite all this, HTTP/2.0 will be SSL/TLS-only, at least in all the major browsers, in order to impose a particular policy. Ironically, the same browsers treat self-signed certificates as a mortal danger, despite the fact that such certificates make it easy to achieve secrecy. (Secrecy means that only you and the other party can decipher the communication. Privacy is secrecy with an identified or authenticated counterparty.)
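To make the distinction concrete, here is a minimal Python sketch of a TLS connection that simply accepts whatever certificate the server presents, self-signed or not: the traffic is still encrypted, so secrecy is achieved, but the peer is never authenticated, so there is no privacy in the sense defined above. The host name is a placeholder.

    import socket
    import ssl

    # Accept any certificate, including self-signed ones: no authentication.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    # The connection is still encrypted end to end (secrecy), even though
    # we have no proof of who the other party is (no privacy).
    with socket.create_connection(("example.org", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.org") as tls:
            print(tls.version(), tls.cipher())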
History has clearly shown that if you want to change the world for the better, you should present good tools for changing the world for the better, not a policy for changing it for the better. I advise everyone who has a voice in this matter to turn their thumbs down on HTTP/2.0: it is not a good protocol, and it is not even good politics.
About the author
Poul-Henning Kamp (phk@FreeBSD.org) is one of the principal developers of the FreeBSD operating system, which he has worked on from the very beginning. He is widely "unknown" for his MD5-based password scrambling algorithm, which protects the passwords on Cisco and Juniper routers, as well as on Linux and FreeBSD systems. Some may have noticed that he wrote a memory manager, a file system, and a disk-encryption method that actually work. Kamp lives in Denmark with his wife, his son, his daughter, a dozen FreeBSD computers, and one of the most accurate NTP clocks in the world. He makes a living as an independent expert, doing all sorts of work involving computers and networks.
[He is also the author of Varnish, an HTTP accelerator and caching proxy. - translator's note]