Written by Mark Nottingham, member of the Internet Architecture Board and co-chair of the IETF HTTP and QUIC working groups.

When the Internet took off in the 1990s, most traffic used just a few protocols: IPv4 routed packets, TCP turned those packets into connections, SSL (later TLS) encrypted those connections, DNS named the hosts to connect to, and HTTP was often the application protocol that used them all.
Over the years these core protocols have changed remarkably little: HTTP has gained new headers and methods, TLS has gone through a few minor revisions, TCP has adapted its congestion control, and DNS has gained features such as DNSSEC. On the wire, the protocols themselves have looked much the same for a very long time (the exception being IPv6, which already gets plenty of attention in the network operator community).
As a result, network operators, vendors, and government agencies that want to understand (and sometimes control) the Internet have built many practices on top of how these protocols behave, whether to debug networks, improve quality of service, or comply with the law.
Now the core Internet protocols are about to undergo significant change. While the new versions should be broadly compatible with the Internet as it exists today (otherwise they would not get deployed), the changes can be disruptive for anyone who relies on undocumented properties of the protocols, or who assumed they would never change.
Why we need to change the Internet
A number of factors are driving these changes.
First, the limits of the core protocols have become apparent, especially where performance is concerned. Structural problems in the application and transport protocols keep the network from being used efficiently, and end users feel this directly, particularly as latency.
That is a strong reason to evolve or replace these protocols, because there is ample evidence that even small performance improvements have a large impact.
Second, it has become progressively harder to evolve Internet protocols at any layer, largely because of the unintended uses by networks described above. For example, HTTP proxies that try to compress responses make it harder to deploy new compression techniques, and TCP optimizations in middleboxes make it harder to deploy improvements to TCP itself.
Finally, encryption is used more and more widely on the Internet, a shift first spurred by Edward Snowden's revelations in 2013. That is really a separate discussion, but what matters here is that encryption is one of the best tools available for ensuring that protocols can keep evolving.
Let's look at what has already changed and what is coming, how these changes are likely to affect networks, and how networks in turn affect protocol design.
HTTP/2
HTTP/2 (based on Google's SPDY) was the first notable change, standardized in 2015. It multiplexes multiple requests onto a single TCP connection without blocking them on one another, which removes the need for clients to queue requests. It is now widely deployed and supported by all major browsers and web servers.
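To make that concrete, here is a minimal Go sketch (not from the original article) that issues several requests concurrently. Go's standard HTTP client negotiates HTTP/2 via ALPN for https URLs when the server supports it, so these requests share one TCP connection as separate streams rather than queueing. The URLs are placeholders.

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
)

func main() {
	// Go's default client negotiates HTTP/2 (via ALPN) for https:// URLs when
	// the server supports it, so these concurrent requests are multiplexed as
	// streams over a single TCP connection instead of a per-request queue.
	urls := []string{
		"https://www.example.com/a", // placeholder URLs
		"https://www.example.com/b",
		"https://www.example.com/c",
	}

	var wg sync.WaitGroup
	for _, u := range urls {
		wg.Add(1)
		go func(u string) {
			defer wg.Done()
			resp, err := http.Get(u)
			if err != nil {
				fmt.Println(u, "error:", err)
				return
			}
			defer resp.Body.Close()
			// resp.Proto reports "HTTP/2.0" when the request went over HTTP/2.
			fmt.Println(u, resp.Proto, resp.Status)
		}(u)
	}
	wg.Wait()
}
```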
From a network standpoint, HTTP/2 makes a few notable changes. First, it is a binary protocol, so any device that assumes it is dealing with HTTP/1.1 will break.
That breakage was one of the main reasons for another important change in HTTP/2: it effectively requires encryption. This makes it harder for outsiders to interfere, whether by mistaking it for HTTP/1.1 or through subtler meddling such as stripping headers or blocking new protocol extensions; both happen in practice and have caused significant support problems for some of the engineers working on the protocol.
When HTTP/2 does use encryption, it requires TLS 1.2 or later and blacklists cipher suites that have been judged insecure, with the effect of allowing only ephemeral key exchange. See the TLS 1.3 section for the potential consequences.
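As a rough illustration of what those requirements look like in practice, here is a hedged Go sketch of a server configured the way HTTP/2 expects: TLS 1.2 as the floor, only ECDHE (ephemeral-key) cipher suites, and "h2" advertised via ALPN. The certificate and key file names are placeholders.

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
)

func main() {
	// HTTP/2 requires TLS 1.2 or later and forbids a long list of weak cipher
	// suites; limiting the server to ECDHE suites means every connection uses
	// ephemeral keys.
	srv := &http.Server{
		Addr: ":443",
		TLSConfig: &tls.Config{
			MinVersion: tls.VersionTLS12,
			CipherSuites: []uint16{
				tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
				tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
			},
			NextProtos: []string{"h2", "http/1.1"}, // advertise HTTP/2 via ALPN
		},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("served over " + r.Proto + "\n"))
		}),
	}
	// cert.pem and key.pem are placeholder file names.
	log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
}
```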
Finally, HTTP/2 allows requests for more than one host to be coalesced onto the same connection, improving performance by reducing the number of connections (and therefore the number of congestion-control contexts) needed to load a page.
For example, you could open a connection to www.example.com but also use it to make requests for images.example.com.
Future protocol extensions may allow additional hosts to be added to a connection even if they were not listed in the TLS certificate originally used for it. As a result, you can no longer assume that traffic on a connection is limited to the destination it was initially set up for.
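The sketch below is a simplified Go model of the kind of check a client might make before coalescing: the other host resolves to a shared address, and the certificate on the existing connection also covers it. Real clients apply more policy than this; the host names are the same illustrative ones as above, and the two-condition rule is an assumption for the sake of the example.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net"
)

// canCoalesce sketches a simplified coalescing check: reuse the existing
// connection to originHost for otherHost only if the two names share an
// address and the presented certificate is also valid for otherHost.
func canCoalesce(conn *tls.Conn, originHost, otherHost string) bool {
	originAddrs, err1 := net.LookupHost(originHost)
	otherAddrs, err2 := net.LookupHost(otherHost)
	if err1 != nil || err2 != nil {
		return false
	}
	shared := false
	for _, a := range originAddrs {
		for _, b := range otherAddrs {
			if a == b {
				shared = true
			}
		}
	}
	if !shared {
		return false
	}
	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		return false
	}
	return certs[0].VerifyHostname(otherHost) == nil
}

func main() {
	conn, err := tls.Dial("tcp", "www.example.com:443", nil)
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()
	fmt.Println(canCoalesce(conn, "www.example.com", "images.example.com"))
}
```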
Despite these changes, it is worth noting that HTTP/2 does not appear to suffer from significant interoperability problems or interference from networks.
TLS 1.3
TLS 1.3 is now going through the last stages of standardization and is already supported in some implementations.
Do not be fooled by the incremental version number; this is effectively a new version of TLS, with a heavily revamped handshake that allows application data to flow from the start (often called 0-RTT). The new design relies on ephemeral key exchange, ruling out static keys.
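To make the version bump concrete, here is a minimal Go client sketch, assuming a Go toolchain whose crypto/tls supports TLS 1.3; it refuses to negotiate anything older, so every handshake uses the ephemeral key exchange described above. The host name is a placeholder.

```go
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Require TLS 1.3. In TLS 1.3 every handshake uses an ephemeral
	// (Diffie-Hellman) key exchange, so there is no static key that could
	// later decrypt a recorded session.
	cfg := &tls.Config{
		MinVersion: tls.VersionTLS13,
	}
	conn, err := tls.Dial("tcp", "www.example.com:443", cfg) // placeholder host
	if err != nil {
		fmt.Println("handshake failed:", err)
		return
	}
	defer conn.Close()
	state := conn.ConnectionState()
	fmt.Printf("negotiated version: %x, cipher suite: %x\n",
		state.Version, state.CipherSuite)
}
```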
That has worried some network operators and vendors, in particular those who need visibility into what happens inside these connections.
Take, for example, the datacentre of a bank that is required by regulation to have such visibility. By tapping traffic on the network and decrypting it with static keys, these operators can log legitimate traffic and detect malicious activity, whether outside intrusion or data leaking from the inside.
TLS 1.3 deliberately does not support this interception technique, since it is exactly the kind of attack that ephemeral keys protect against. However, because regulators expect these operators both to use modern encryption and to monitor their networks, this puts them in an awkward position.
There has been much debate about whether the regulations really require static keys, whether alternative approaches could be just as effective, and whether weakening the security of the whole Internet is justified for the benefit of a relatively small number of networks. Indeed, TLS 1.3 traffic can still be decrypted; you just need access to the ephemeral keys, and they are only valid for a limited time.
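For instance, here is a hedged Go sketch of one way such access can be arranged: the endpoint itself exports its per-session secrets in the NSS key log format, which tools such as Wireshark understand. Only sessions from a cooperating endpoint can be decrypted this way, which is exactly the point of removing static keys from the protocol. The file name and host are placeholders.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"os"
)

func main() {
	// The endpoint writes its ephemeral session secrets to a key log file;
	// anyone holding that file (and a capture of the traffic) can decrypt
	// those sessions, and only those sessions.
	keyLog, err := os.Create("tls-keys.log") // illustrative file name
	if err != nil {
		panic(err)
	}
	defer keyLog.Close()

	cfg := &tls.Config{
		MinVersion:   tls.VersionTLS13,
		KeyLogWriter: keyLog, // session secrets are appended here
	}
	conn, err := tls.Dial("tcp", "www.example.com:443", cfg) // placeholder host
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	conn.Close()
	fmt.Println("session secrets written to tls-keys.log")
}
```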
For now it seems unlikely that TLS 1.3 will change to accommodate these networks, but there is talk of creating a separate protocol that would allow a third party to inspect traffic (or even do more) in such situations. Whether that idea gains the community's support remains to be seen.
QUIC
During the work on HTTP/2 it became clear that TCP suffers from similar inefficiencies. Because TCP delivers packets in order, the loss of one packet can keep later packets sitting in the buffer, undelivered to the application. For a multiplexed protocol this can mean a significant loss of performance.
QUIC is an attempt to address this by effectively rebuilding TCP's semantics (along with parts of HTTP/2's stream model) on top of UDP. Like HTTP/2, it began as a Google effort and has now moved to the IETF, with an initial goal of an HTTP-over-UDP protocol and a standard targeted for late 2018. Because Google has already deployed QUIC in Chrome and on its own sites, it already accounts for more than 7% of Internet traffic.
Beyond moving such a significant amount of traffic from TCP to UDP (with everything that implies for how networks are tuned), both Google QUIC (gQUIC) and IETF QUIC (iQUIC) require encryption to operate; there is no such thing as unencrypted QUIC.
iQUIC uses TLS 1.3 to establish session keys and then encrypts each packet with them. And because it runs over UDP, much of the session information and metadata that TCP exposes in the clear is encrypted in QUIC.
In fact, the current iQUIC "short header", used for all packets after the handshake, exposes only a packet number, an optional connection identifier, and a status byte needed for processes such as rotating the encryption keys and the packet type (and even that may end up encrypted).
Everything else is encrypted, including the ACKs, which raises the bar considerably for traffic-analysis attacks.
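The small Go sketch below is a rough model of the fields the article lists as visible to the network; the exact wire layout has changed between drafts, so the struct and the values in it are purely illustrative of how little an on-path observer can see.

```go
package main

import "fmt"

// ShortHeader loosely models the fields the draft-era iQUIC short header
// exposes; everything after them, including ACK frames, sits inside the
// encrypted payload. This is an illustration, not a wire-accurate format.
type ShortHeader struct {
	Flags        byte   // "status byte": key phase, packet-type bits
	ConnectionID []byte // optional connection identifier
	PacketNumber uint64 // (partially) exposed packet number
}

func main() {
	h := ShortHeader{
		Flags:        0x30,
		ConnectionID: []byte{0xde, 0xad, 0xbe, 0xef},
		PacketNumber: 42,
	}
	fmt.Printf("visible to the network: flags=%#x connID=%x pn=%d\n",
		h.Flags, h.ConnectionID, h.PacketNumber)
	// Stream data, ACKs, and flow-control frames are all encrypted and
	// opaque to passive observers.
}
```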
It also means that passively estimating RTT and packet loss by simply watching a connection is no longer possible; there is not enough information. This loss of observability has caused serious concern among some in the network operator community, who say that such passive measurements are critical to debugging and understanding their networks.
One proposal to address this is the "spin bit": a bit in the header that flips once per round trip, so that an observer can estimate the RTT. Because it is decoupled from application state, it should not reveal anything about the endpoints beyond a rough estimate of where they sit on the network.
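Here is a toy Go sketch of how a passive observer could use such a bit, under the assumption that the only inputs are packet arrival times and the spin-bit value; the trace below is synthetic, not captured traffic.

```go
package main

import (
	"fmt"
	"time"
)

// spinObservation is one packet as seen by a passive on-path observer: just
// its arrival time and the value of the spin bit.
type spinObservation struct {
	seen time.Time
	spin bool
}

// estimateRTTs returns the time between successive spin-bit transitions in
// one direction of a connection; each transition marks roughly one round trip.
func estimateRTTs(obs []spinObservation) []time.Duration {
	var rtts []time.Duration
	var lastFlip time.Time
	for i := 1; i < len(obs); i++ {
		if obs[i].spin != obs[i-1].spin { // the bit flipped: one round trip passed
			if !lastFlip.IsZero() {
				rtts = append(rtts, obs[i].seen.Sub(lastFlip))
			}
			lastFlip = obs[i].seen
		}
	}
	return rtts
}

func main() {
	start := time.Now()
	// Synthetic trace: the spin bit flips about every 40 ms.
	obs := []spinObservation{
		{start, false},
		{start.Add(10 * time.Millisecond), false},
		{start.Add(40 * time.Millisecond), true},
		{start.Add(60 * time.Millisecond), true},
		{start.Add(80 * time.Millisecond), false},
		{start.Add(120 * time.Millisecond), true},
	}
	for _, rtt := range estimateRTTs(obs) {
		fmt.Println("estimated RTT:", rtt)
	}
}
```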
DOH
The newest change on the horizon is DOH: DNS over HTTP. A significant amount of research has shown that networks commonly use DNS as a means of imposing policy, whether on behalf of the network operator or of higher-level authorities.
Circumventing this kind of control with encryption has been discussed for a while, but it has one drawback (at least from some points of view): such traffic can be distinguished from other traffic and treated differently, for example by using the port number to block access.
DOH addresses this by piggybacking DNS traffic onto an existing HTTP connection, removing any such discriminator. A network that wants to block access to a particular DNS resolver would have to block access to the website as well.
For example, if Google were to deploy its public DNS service over DOH on www.google.com and users configured their browsers to use it, a network that wanted (or was required) to block it would effectively have to block all of Google, because that is where Google hosts its other services too.
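The hedged Go sketch below shows the general shape of a DOH exchange as described in the drafts: an ordinary wire-format DNS query carried over HTTPS, indistinguishable on the network from other HTTPS traffic to that host. The resolver URL doh.example.net/dns-query is a placeholder, and the query is built with the golang.org/x/net/dns/dnsmessage package.

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"net/http"

	"golang.org/x/net/dns/dnsmessage"
)

func main() {
	// Build a plain DNS query (A record for example.com) in wire format.
	query := dnsmessage.Message{
		Header: dnsmessage.Header{RecursionDesired: true},
		Questions: []dnsmessage.Question{{
			Name:  dnsmessage.MustNewName("example.com."),
			Type:  dnsmessage.TypeA,
			Class: dnsmessage.ClassINET,
		}},
	}
	packed, err := query.Pack()
	if err != nil {
		panic(err)
	}

	// POST it to a DOH resolver over an ordinary HTTPS connection. To the
	// network this is just HTTPS traffic to that host; the resolver URL is a
	// placeholder.
	resp, err := http.Post("https://doh.example.net/dns-query",
		"application/dns-message", bytes.NewReader(packed))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)

	var answer dnsmessage.Message
	if err := answer.Unpack(body); err != nil {
		fmt.Println("not a DNS response:", err)
		return
	}
	fmt.Println("answers:", answer.Answers)
}
```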
Work on DOH has only just begun, but there is already a lot of interest in it and some early implementations. How the networks (and governments) that use DNS to impose policy will react remains to be seen.
Ossification and grease
Returning to the motivations for developing new protocols: protocol designers increasingly run into problems caused by networks making assumptions about the traffic they carry.
For example, late in the development of TLS 1.3 there were numerous problems with middleboxes that assumed it was an older version of the protocol. gQUIC does not work on some networks that throttle UDP traffic because they consider it harmful or low priority.
When a protocol cannot evolve because deployments have "frozen" its extension points, we say it has ossified. TCP itself is an example of severe ossification: so many middleboxes do so many different things to TCP, from blocking packets with unrecognized options to "optimizing" congestion control.
Preventing ossification is necessary to ensure that protocols can evolve to meet the Internet's future needs; otherwise we face a "tragedy of the commons", where the actions of individual networks, however well-intentioned, affect the health of the Internet as a whole.
There are several ways to prevent ossification. If the data in packets is encrypted, no one but the key holders can access it, which prevents interference. If an extension point is unencrypted but is commonly used in ways that would visibly break applications (HTTP headers, for example), it is also less likely to be interfered with.
Where protocol designers cannot use encryption and an extension point is used infrequently, artificially exercising that extension point can help; we call this greasing.
For example, QUIC encourages endpoints to use a range of decoy values in version negotiation, to keep implementations from assuming the version number never changes (a pattern frequently seen in TLS implementations, where it has caused significant problems).
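As a small Go sketch of the idea: advertise reserved "grease" values alongside the versions you really support, so a peer that wrongly hard-codes the list fails early instead of quietly ossifying the extension point. The 0x?a?a?a?a pattern is the one QUIC reserves for exercising version negotiation; the single "real" version value here is a placeholder.

```go
package main

import (
	"fmt"
	"math/rand"
)

// greaseVersion returns a reserved grease version number. QUIC reserves
// versions matching the pattern 0x?a?a?a?a for exercising version
// negotiation; peers must ignore values they do not recognize.
func greaseVersion() uint32 {
	nib := func() uint32 { return uint32(rand.Intn(16)) } // random high nibble per byte
	var v uint32
	for i := 0; i < 4; i++ {
		v = v<<8 | nib()<<4 | 0x0a // each byte ends in 0xa -> pattern 0x?a?a?a?a
	}
	return v
}

func main() {
	supported := []uint32{0x00000001} // placeholder for the version(s) actually implemented
	offered := append([]uint32{greaseVersion()}, supported...)
	for _, v := range offered {
		fmt.Printf("offering version 0x%08x\n", v)
	}
}
```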
Network and user
Beyond the desire to avoid ossification, the new protocols also reflect how the relationship between networks and their users has evolved. For a long time the network was assumed to be a benevolent, or at least neutral, party. That is no longer the case, thanks not only to pervasive surveillance but also to attacks like Firesheep.
As a result, tension is growing between the needs of Internet users as a whole and the interests of networks that want access to some of the data flowing over them. Networks that want to impose policy on their users, such as enterprise networks, are particularly affected.
In some cases they can achieve their goals by installing software (or a certificate-authority certificate, or a browser extension) on users' machines. But that is not so easy when the operator does not control the machine: employees increasingly work on their own devices, and IoT devices rarely have suitable control interfaces.
As a result, much of the protocol discussion in the IETF touches on the tension between the needs of enterprise and other "nested" networks and the good of the Internet as a whole.
Join the work
For the Internet to work well over the long run, it needs to provide value to end users, avoid ossification, and allow networks to operate. The changes now under way aim to meet all three goals, but we need more input from network operators.
If these changes affect your network (or if they don't), please say so in the comments to this article, or better yet, get involved in the IETF by attending a meeting, joining a mailing list, or providing feedback on a draft.
Thanks to Martin Thomson and Brian Trammell for reviewing this article.