
A few words about optimizing data transfer latency

Low latency is an important factor in reliable operation and high network performance. Real-time communication, streaming, and transactional applications depend heavily on it: an increase in delay of just a few milliseconds can cause image and voice distortion, application freezes, and financial losses.

Providers try to monitor network bandwidth and latency fluctuations, but widening the channel often has no effect on latency at all. In this article we discuss the main causes of latency and ways to deal with it.


Photo by Thomas Williams, CC

Latency and its effect on call quality


In packet-switched networks, the relationship between latency and throughput is ambiguous and hard to pin down. Here the total waiting time is made up of several components: processing delay at each node, queuing delay in router buffers, transmission (serialization) delay, and propagation delay along the medium.
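These components can be put into numbers with a back-of-the-envelope calculation. The link rate, distance, and queuing figures below are illustrative assumptions, not measurements:

```python
# A sketch of how the classic latency components add up for a single packet.
# All link parameters here are hypothetical.

PACKET_SIZE_BITS = 1500 * 8   # one full Ethernet frame
LINK_RATE_BPS = 100e6         # assumed 100 Mbit/s access link
DISTANCE_M = 1_000_000        # assumed 1000 km path
PROPAGATION_SPEED = 2e8       # ~2/3 of c in optical fiber, m/s

serialization = PACKET_SIZE_BITS / LINK_RATE_BPS   # time to push bits onto the link
propagation = DISTANCE_M / PROPAGATION_SPEED       # time for the signal to travel
queuing = 0.002                                    # assumed 2 ms in router buffers
processing = 0.0001                                # assumed 0.1 ms of per-hop processing

total_ms = (serialization + propagation + queuing + processing) * 1000
print(f"one-way latency: {total_ms:.2f} ms")
```

Note that buying a faster link only shrinks the serialization term; propagation and queuing, which dominate here, are untouched, which is why "widening" the channel often does not reduce latency.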

Traffic management


Ashton, Metzler & Associates defines traffic management as the ability of a network to handle different types of traffic with different priorities.

This approach is used in networks with limited bandwidth that run important, delay-sensitive applications. Management can mean restricting traffic for secondary services, such as email, and reserving part of the channel for critical business applications.

Engineers recommend a number of practices for controlling traffic and communication quality on an organization's network.


According to Viavi Solutions experts, the most effective way to manage traffic is hierarchical quality of service (H-QoS), a combination of network policies, filtering, and bandwidth control. H-QoS introduces no slowdown as long as all network elements provide ultra-low latency and high performance. Its main advantage is reduced latency without any need to increase channel capacity.
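The simplest building block behind such traffic management is strict-priority scheduling: latency-sensitive packets always leave the queue before bulk traffic. A minimal sketch (not a real H-QoS implementation; the traffic classes are hypothetical):

```python
import heapq

# Hypothetical traffic classes: lower number = higher priority.
PRIORITY = {"voip": 0, "erp": 1, "email": 2, "backup": 3}

class PriorityScheduler:
    """Strict-priority queue: always dequeues the highest-priority packet."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker preserves FIFO order within one class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._heap, (PRIORITY[traffic_class], self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

sched = PriorityScheduler()
sched.enqueue("email", "mail-1")
sched.enqueue("voip", "rtp-1")
sched.enqueue("backup", "blk-1")
sched.enqueue("voip", "rtp-2")
order = [sched.dequeue() for _ in range(4)]
print(order)  # voice packets leave first, bulk backup traffic last
```

A production shaper would add per-class rate limits so low-priority classes cannot be starved, which is exactly the kind of policy hierarchy H-QoS layers on top.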

Using NIDs


Network interface devices (NIDs) make it possible to monitor and optimize traffic at low cost. Typically, such devices are installed on the subscriber's premises, at cell towers, and at other handoff points between operator networks.

NIDs give the provider visibility into all network components. If such a device supports H-QoS, the provider can not only monitor network operation but also apply individual settings for each connected user.

Caching


A relatively small increase in bandwidth alone will not fix poorly performing network applications. Caching helps speed up content delivery and optimize network load. It can be viewed as a technique of storing resources closer to the user for acceleration: the network feels faster, as if it had been upgraded.

Organizations usually cache at several levels. Proxy caching deserves particular mention: when a user requests data, the request may be served from a local proxy cache. The more requests the cache can answer, the more channel capacity is freed up.

Proxy caches are a kind of shared cache: they serve a large number of users and are very effective at reducing latency and network traffic. One useful application of proxy caching is giving several remote employees access to a shared set of interactive web applications.
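The behaviour of a shared proxy cache can be sketched with a small LRU cache. `fetch_origin` here is a hypothetical stand-in for the real upstream request; every hit it avoids is a request that never crosses the channel:

```python
from collections import OrderedDict

class ProxyCache:
    """Minimal shared cache sketch with least-recently-used eviction."""

    def __init__(self, capacity=3):
        self.capacity = capacity
        self._store = OrderedDict()
        self.hits = self.misses = 0

    def get(self, url, fetch_origin):
        if url in self._store:
            self.hits += 1
            self._store.move_to_end(url)        # mark as recently used
            return self._store[url]
        self.misses += 1
        body = fetch_origin(url)                # only misses reach the network
        self._store[url] = body
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)     # evict least recently used entry
        return body

cache = ProxyCache()
origin = lambda url: f"<body of {url}>"        # hypothetical upstream fetch
for url in ["/a", "/b", "/a", "/a", "/c"]:
    cache.get(url, origin)
print(cache.hits, "hits,", cache.misses, "misses")
```

Real proxy caches additionally honor HTTP freshness headers (`Cache-Control`, `ETag`) before deciding a stored copy may be reused.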

Data compression


The main task of data compression is to reduce the size of the files transmitted over the network. To some extent, compression resembles caching and can produce an acceleration effect comparable to an increase in channel bandwidth. One of the most common methods is the Lempel-Ziv-Welch (LZW) algorithm, used, for example, in the GIF format and the UNIX compress utility.
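A minimal, textbook version of the LZW compressor looks like this (an illustrative sketch, not a production codec, and without the bit-packing a real format adds):

```python
def lzw_compress(data: bytes) -> list[int]:
    """Textbook LZW: emit dictionary codes for the longest known prefixes."""
    dictionary = {bytes([i]): i for i in range(256)}  # start with all single bytes
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                                # keep extending the match
        else:
            out.append(dictionary[w])             # emit code for longest match
            dictionary[wc] = len(dictionary)      # learn the new sequence
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

codes = lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT")
print(len(codes), "codes for 24 input bytes")
```

The dictionary is rebuilt identically on the receiving side, so only the codes travel over the network; the more repetitive the traffic, the longer the matches and the better the ratio.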

However, in some situations data compression causes problems. For example, compression scales poorly in terms of RAM and CPU usage. Compression is also rarely beneficial when traffic is encrypted: most encryption algorithms produce output with few repeated sequences, so such data cannot be compressed well by standard algorithms.
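This interaction between encryption and compression is easy to demonstrate. Random bytes, which are statistically similar to ciphertext, barely compress, while repetitive text shrinks dramatically:

```python
import os
import zlib

repetitive = b"latency " * 1000   # 8000 bytes of highly repetitive text
random_like = os.urandom(8000)    # random bytes: a stand-in for ciphertext

small = len(zlib.compress(repetitive))    # long repeats are found and factored out
large = len(zlib.compress(random_like))   # no repeats: output is not smaller

print(small, "bytes vs", large, "bytes from the same 8000-byte inputs")
```

This is why compress-then-encrypt is the only order that works: once data is encrypted, the redundancy a compressor needs is gone.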

For network applications to work effectively, bandwidth and latency problems must be solved together. Data compression addresses only the first, so it is important to apply it in conjunction with traffic management techniques.

One-way data compression


There is an alternative approach to data compression: web content optimization systems located at one end of the transmission channel. Such systems combine web page optimization, various compression standards, image optimization, delta coding, and caching, achieving 2-8x compression depending on the content.
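Delta coding, one of the techniques listed above, sends only the bytes of a page that changed relative to the copy the client already holds. A minimal sketch, with made-up page contents and a toy opcode format (real systems use binary formats such as VCDIFF):

```python
import difflib

old_page = "<html><body>Price: 100 USD</body></html>"
new_page = "<html><body>Price: 105 USD</body></html>"

# Build the delta: ranges to copy from the old page, plus literal changed bytes.
delta = []
for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(None, old_page, new_page).get_opcodes():
    if tag == "equal":
        delta.append(("copy", i1, i2))            # client reuses bytes it already has
    else:
        delta.append(("data", new_page[j1:j2]))   # only changed bytes go on the wire

def apply_delta(old, ops):
    """Rebuild the new page from the old copy plus the delta."""
    return "".join(old[op[1]:op[2]] if op[0] == "copy" else op[1] for op in ops)

rebuilt = apply_delta(old_page, delta)
sent = sum(len(op[1]) for op in delta if op[0] == "data")
print(rebuilt == new_page, "- sent", sent, "changed bytes instead of", len(new_page))
```

Since only one end of the channel runs the optimizer, the client-side "apply" step is typically plain browser caching plus standard content encodings, which is what keeps these systems cheap to deploy.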

These tools have some advantages over two-way solutions and proxy caching. They are much cheaper to install and manage than two-sided systems. In addition, they can detect the connection speed and browser type and optimize not only static but also dynamic content for a specific user.

The disadvantage of one-way compression is that it can only optimize the operation of individual applications and sites.

Today, engineers are constantly researching ways to improve network performance and efficiency. The IEEE 802.1Qau working group is developing congestion management techniques to eliminate packet loss during port overload, and an Internet Engineering Task Force team is creating a link-layer protocol that provides shortest-path connectivity over Ethernet.

Work is also under way to improve how data is scheduled for transmission so that unused portions of the connection can be distributed among different traffic classes.

Maintaining high-quality connectivity is an important task for modern organizations: it allows them to provide customers with better services and make the most of network resources.

If you are interested in optimizing the transfer, storage, and processing of data, take a look at other articles on our blog.

Source: https://habr.com/ru/post/312038/

