
The QUIC protocol: moving the Web from TCP to UDP

The QUIC protocol (the name stands for Quick UDP Internet Connections) is a completely new way of transmitting information on the Internet, built on top of UDP instead of the previously accepted TCP. Some people jokingly call it TCP/2. The switch to UDP is the most interesting and powerful feature of the protocol, and several of its other features follow from it.



Today's Web is built on the TCP protocol, which was chosen for its reliability and guaranteed packet delivery. Opening a TCP connection requires the so-called three-way handshake, which means extra send/receive round trips for every new connection and, therefore, higher latency.
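For reference, here is a minimal sketch of what that handshake looks like on the wire (a simplified illustration only, ignoring application data and TLS):

# The TCP three-way handshake: three packets have to cross the network
# before the client can send the first byte of application data.
tcp_handshake = [
    ("client -> server", "SYN"),
    ("server -> client", "SYN-ACK"),
    ("client -> server", "ACK"),
]
# Each step costs roughly one network traversal, so every new connection
# pays about a full round trip before any request is sent.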



[image: the TCP three-way handshake]


If you want to establish a secure TLS connection, you will have to send even more packets.


[image: TLS connection setup]


Some innovations, such as TCP Fast Open, improve certain aspects of the situation, but this technology is not yet widely deployed.



UDP, on the other hand, is built around the idea of "send a packet and forget about it." A message sent over UDP will probably reach the recipient, but with no guarantee of success. The clear advantage is a shorter connection setup time; the equally clear disadvantage is that neither delivery nor the order in which packets arrive is guaranteed. This means that to ensure reliability, some mechanism that guarantees packet delivery has to be built on top of UDP.
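A minimal sketch of UDP's fire-and-forget behaviour (illustrative only; the address and port are placeholders):

import socket

# "Send a packet and forget about it": sendto() returns as soon as the
# datagram is handed to the OS; there is no acknowledgement, so neither
# delivery nor ordering is guaranteed.
HOST, PORT = "127.0.0.1", 9999   # placeholder address

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(b"hello", (HOST, PORT))
sock.close()

# Any reliability (acknowledgements, retransmission, ordering) has to be
# built on top of this, which is exactly what QUIC does.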



And here comes QUIC from Google.



The QUIC protocol can open a connection and negotiate all the TLS (HTTPS) parameters in 1 or 2 packets, depending on whether the connection is being opened to a new server or to one we have already talked to.



[image: QUIC connection setup]


This dramatically speeds up opening the connection and starting to load data.



Why do you need QUIC?



The QUIC development team's plans look very ambitious: the protocol aims to combine the speed of UDP with the reliability of TCP.



This is what Wikipedia writes about it:



Improving TCP is a long-term goal for Google, and QUIC is designed to be nearly equivalent to an independent TCP connection, but with much reduced latency and SPDY-like multiplexing support. If QUIC proves effective, these capabilities could be included in later versions of the TCP and TLS protocols, whose development takes considerably more time.


There is an important point in this quote: if QUIC proves its effectiveness, there is a chance that the ideas tested in it will become part of the next version of TCP.



The TCP protocol is quite strictly standardized. Its implementations live in the Windows and Linux kernels, in every mobile OS, and in many simpler devices. Improving TCP is not an easy task, because all of these implementations would have to support the changes.



UDP, by contrast, is a relatively simple protocol. It is much faster to develop a new protocol on top of UDP in order to test theoretical ideas: behaviour in congested networks, handling of streams blocked by a lost packet, and so on. Once these questions are settled, work can begin on moving the best parts of QUIC into the next version of TCP.



Where is QUIC today?



If you look at the layers that make up a current HTTPS connection, you will see that QUIC replaces the entire TLS stack and part of HTTP/2.



Indeed, the QUIC protocol implements its own crypto layer and does not use TLS 1.2.



[image: protocol layers: QUIC replaces TLS and part of HTTP/2]



A small HTTP/2 API layer sits on top of QUIC and is used to communicate with remote servers. It is smaller than a full HTTP/2 implementation, since multiplexing and connection setup are already handled by QUIC; all that is left to implement is the HTTP protocol itself.



Head-of-line blocking



The SPDY and HTTP/2 protocols use a single TCP connection to the server instead of opening multiple separate connections. This one connection is used for independent requests and for fetching individual resources.



[image: a single multiplexed connection in SPDY/HTTP/2]



Since all data exchange now happens over a single TCP connection, we automatically inherit one drawback: head-of-line blocking. TCP requires packets to arrive (more precisely, to be processed) in the correct order. If a packet is lost on the way to or from the server, it has to be retransmitted. The TCP connection has to wait (it is blocked), and only after the lost packet has been received again does processing of the queued packets continue; this is the only way to preserve the required processing order.



[image: head-of-line blocking in TCP]



The QUIC protocol solves this problem at the root by abandoning TCP in favour of UDP, which does not require received packets to be processed in order. And although packet loss is, of course, still possible, it only affects the processing of the resources (the individual HTML/CSS/JS files) to which the lost packet belongs.



[image: packet loss affecting only a single stream in QUIC]



QUIC very elegantly combines the best parts of SPDY/HTTP2 (multiplexing) with a non-blocking transport protocol.
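A toy model of the difference (purely illustrative; the packet layout and the lost packet are made up for the example):

packets = [("html", 1), ("css", 1), ("js", 1), ("css", 2), ("js", 2)]
lost = ("css", 1)   # pretend this packet is dropped and must be retransmitted

# TCP-style: one ordered byte stream. Everything queued behind the lost
# packet has to wait until it arrives again.
delivered_over_tcp = []
for packet in packets:
    if packet == lost:
        break                      # the whole connection stalls here
    delivered_over_tcp.append(packet)

# QUIC-style: each resource is an independent stream, so only the stream
# that actually lost a packet has to wait.
delivered_over_quic = [p for p in packets
                       if p[0] != lost[0] or p[1] < lost[1]]

print(delivered_over_tcp)    # [('html', 1)]
print(delivered_over_quic)   # [('html', 1), ('js', 1), ('js', 2)]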



Why reducing the number of packets sent matters so much



If you have a fast Internet connection, the transit delay between your computer and a remote server is on the order of 10-50 ms: every packet you send reaches the server after roughly that amount of time. At this order of magnitude, the advantages of QUIC may not be obvious. But consider exchanging data with a server on another continent, or using a mobile network, and the delays grow to around 100-150 ms.



[image: network latency to a distant server]



As a result, on a mobile device talking to a far-away server, the difference between the 4 packets needed for TCP + TLS and the single QUIC packet can amount to about 300 ms, a delay that is already noticeable to the naked eye.
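The arithmetic behind that number, using the article's simplified model in which each handshake packet costs one network traversal (the 100 ms figure is an assumed delay for a distant server on a mobile network):

# Back-of-the-envelope estimate of connection setup cost.
ONE_WAY_LATENCY_MS = 100           # assumed mobile / intercontinental delay

def setup_cost_ms(handshake_packets: int) -> int:
    """Time spent just moving handshake packets across the network."""
    return handshake_packets * ONE_WAY_LATENCY_MS

tcp_plus_tls = setup_cost_ms(4)    # TCP handshake + TLS negotiation
quic_new     = setup_cost_ms(2)    # QUIC to a server we have never seen
quic_known   = setup_cost_ms(1)    # QUIC to an already known server

print(tcp_plus_tls - quic_known)   # 300 ms, the figure quoted above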



Forward error correction


Another elegant feature of QUIC is Forward Error Correction (FEC). Every packet sent carries a certain amount of data from other packets, which makes it possible to reconstruct a lost packet from the data in its neighbours, without having to retransmit it and wait for its contents. This is essentially RAID 5 at the network level.



The disadvantage of this approach is easy to see: every packet becomes slightly larger. The current implementation sets this overhead at 10%: by making each transmitted packet 10% larger, we gain the ability to recover lost data without retransmission, as long as no more than one packet in ten is lost.
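To make the idea concrete, here is a minimal XOR-parity sketch in the spirit of FEC and RAID 5. It illustrates the principle, not the exact scheme QUIC used; the group size of ten packets mirrors the ~10% overhead mentioned above:

from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# A group of ten equal-sized data packets plus one XOR parity packet:
# roughly 10% overhead, and any single lost packet can be rebuilt.
group = [f"packet-{i}".encode().ljust(16, b"\0") for i in range(10)]
parity = reduce(xor_bytes, group)            # sent alongside the data

received = group[:3] + group[4:]             # suppose packet 3 is lost
recovered = reduce(xor_bytes, received + [parity])
assert recovered == group[3]                 # rebuilt without retransmission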



This redundancy trades network bandwidth for lower latency, which looks like a sensible deal: connection speeds and bandwidth keep growing, while the fact that sending data to the other side of the planet takes a hundred milliseconds is unlikely to change without a fundamental breakthrough in physics.



Session resumption and parallel downloads



Another interesting consequence of using UDP is that you are no longer tied to the server's IP address. A TCP connection is identified by four parameters: the server and client IP addresses and the server and client ports. On Linux you can see these parameters for every established connection with the netstat command:



$ netstat -anlp | grep ':443'
...
tcp6   0   0   2a03:a800:a1:1952::f:443   2604:a580:2:1::7:57940   TIME_WAIT   -
tcp    0   0   31.193.180.217:443         81.82.98.95:59355        TIME_WAIT   -
...


If any of these four parameters changes, a new TCP connection has to be opened. That is why it is so hard to maintain a stable connection on mobile devices when switching between WiFi and 3G/LTE.



[image: switching between WiFi and mobile networks]



Because QUIC runs over UDP, it is no longer bound to this set of parameters. Instead, QUIC introduces a connection identifier, called the Connection UUID. You can switch from WiFi to LTE while keeping the same Connection UUID, thus avoiding the cost of re-establishing the connection. Mosh works in a similar way, keeping a shell session alive while the IP address changes.
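A rough sketch of why this matters on the server side (hypothetical data structures and connection ID, not actual QUIC implementation code; the 4-tuple values are taken from the netstat output above):

# TCP demultiplexes by the 4-tuple, so a new client address means a
# brand-new connection and the old state is lost.
tcp_connections = {
    ("31.193.180.217", 443, "81.82.98.95", 59355): "session state",
}

# QUIC demultiplexes by a connection identifier carried in every packet,
# so the client's IP address and port can change freely.
quic_connections = {
    "3f2a9c64d1e84b07": "session state",     # hypothetical connection ID
}

def lookup_quic(connection_id: str, client_addr: tuple):
    # client_addr (the new WiFi or LTE address) plays no role in the lookup
    return quic_connections.get(connection_id)

print(lookup_quic("3f2a9c64d1e84b07", ("10.0.0.7", 40112)))   # "session state"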



This approach also opens the door to fetching content from multiple sources. If the Connection UUID survives the move from WiFi to a mobile network, then, in theory, both networks could be used at the same time to download data in parallel. More communication channels mean more bandwidth.



QUIC Practical Implementations



Chrome has had experimental QUIC support since 2014. If you want to try QUIC, you can enable it in Chrome and use the Google services that support it. This is a big advantage for Google: it controls both the browser and its own web properties. By enabling QUIC in the world's most popular browser (Chrome) and on high-traffic sites (youtube.com, google.com), Google can gather large, clear usage statistics that reveal the significant practical problems of running QUIC.



There is a Chrome extension that indicates, with an icon, whether a server supports the HTTP/2 and QUIC protocols.



You can also see open QUIC connections right now by opening the chrome://net-internals/#quic tab (note the Connection UUID column in the table, mentioned earlier).



[image: QUIC connections in chrome://net-internals]



You can go even further and look at all open connections and all the packets they transmit: chrome://net-internals/#events&q=type:QUIC_SESSION%20is:active.



[image: QUIC session events in chrome://net-internals]



How does all this work with firewalls?



If you are a system administrator or a network engineer, you probably twitched a little when you read that QUIC uses UDP instead of TCP, and you have your reasons. Perhaps, as in our company, the rules for accessing the web server look something like this:



[image: firewall rules for the web server: ports 80 and 443, TCP only]



The most important thing here, of course, is the protocol column, which clearly says "TCP". Thousands of web servers around the world use similar rules, and they are sensible: ports 80 and 443, TCP only, and nothing else should be allowed on a production web server. No UDP.



Well, if we want to use QUIC, we will have to allow UDP connections to port 443. On large enterprise networks this can be a problem. As Google's statistics show, UDP is blocked in some places:



[image: Google statistics on UDP blocking]


These figures come from a recent study in Sweden.





A point worth noting: the advantage of encryption by default is that Deep Packet Inspection tools cannot decrypt or modify the traffic; they see an opaque binary stream and (hopefully) simply let it through.



Using QUIC on the server side



On the server side, QUIC is currently supported by the Caddy web server (since version 0.9). Both the client and server implementations of QUIC are still experimental, so be careful about using QUIC in production. Since almost no clients have QUIC enabled by default, it is probably safe to enable it on your server and experiment with your own browser. (Update: as of version 52, QUIC is enabled by default in Chrome.)



QUIC performance



In 2015, Google published some QUIC performance measurements.



As expected, QUIC outshines classic TCP on poor connections, giving a half-second gain on the www.google.com start page for the slowest 1% of connections. The gain is even more noticeable on video services like YouTube: with QUIC, users complained 30% less about delays caused by rebuffering while watching video.


The YouTube statistics are especially interesting. If improvements of this scale are really achievable, we will see very rapid adoption of QUIC, at least among video services like Vimeo, as well as in the adult video market.



Conclusions



Personally, I find the QUIC protocol absolutely fascinating. The enormous amount of work its developers have put in has not been wasted: the mere fact that the largest sites on the Internet already support QUIC today is impressive in itself. I can't wait for the final QUIC specification and its subsequent implementation in all browsers and web servers.



Comment on the article from Jim Roskind, one of the QUIC developers


I spent many years researching, designing and developing the implementation of the QUIC protocol, and I would like to add a few thoughts to the article. The article rightly notes that QUIC will probably be unreachable for some users because of strict corporate policies around UDP; this is the reason we measured an average protocol availability of about 93%.



If we look back a little, not so long ago corporate networks often blocked even outgoing traffic to port 80, on the grounds that "it will reduce the time employees spend browsing to the detriment of their work." Later, as we know, the benefits of web access (including for work purposes) forced most corporations to revise their rules and allow ordinary employees to reach the Internet from their desks. I expect something similar with QUIC: once it becomes clear that with the new protocol connections are faster and tasks finish sooner, it will make its way into the enterprise.



I expect QUIC to replace TCP on a massive scale, quite apart from the fact that it will contribute a number of ideas to the next version of TCP. The point is that TCP is implemented in operating system kernels and in hardware, so adopting a new version can take 5-15 years, whereas QUIC can be implemented on top of universally available and supported UDP in a single product or service within a few weeks or months.





Source: https://habr.com/ru/post/315172/


