
Latency Is a Stumbling Block for the Internet of Things



The history of the Internet, computers and gadgets is inextricably linked with shrinking reaction times. Loading websites, launching programs, processing video: all of this has grown faster year after year. Yet a person only begins to perceive a response as instantaneous once the delay falls below a certain threshold. In the Internet of Things, delays that are individually short but numerous can easily add up to whole seconds.

That would be unacceptable to users and would destroy the very idea of a networked environment at its root.

Not so long ago, a 10-second page load was considered quite acceptable. But a study conducted in 2009 showed that about 40% of users close a browser tab if a site takes more than three seconds to load. Now, six years later, our expectations of web speed have become even higher.
But this trend has a natural limit: our own physiology. Many processes that complete in under one second feel quite comfortable to us. Not because we cannot register the delay (we certainly can), but because a wait of that length usually does not interrupt our train of thought. We notice such delays, but they do not deprive us of a sense of control over the situation, the way an objectively or subjectively excessive wait for a device or program does. A delay below 0.1 seconds we no longer notice at all, and we perceive that level of responsiveness as instantaneous.

Constantly rising expectations of responsiveness mean that no serious product is now developed without strict control over latency, both in the interface and in the functions it performs. Many large network projects manage to achieve an effectively instantaneous response to user actions: Spotify, Twitter, a number of instant messengers and so on. But in the field of distributed network systems there is still room to grow. At the moment it is hardly technologically possible to provide an instant response in e-commerce or geographically dispersed systems. Their users can only put up with it, dutifully waiting for order confirmations in online stores and for search results in corporate databases.



However, we are now at the beginning of a new stage of technological development: the Internet of Things, built on machine-to-machine (M2M) communications. And here we will definitely not tolerate long delays. When you press a light switch, you expect the light to come on or go out instantly. Have you noticed, by the way, that energy-saving bulbs have a tiny delay when you flip the switch? The delay is quite small, but it is there, and you do not expect it from something as simple as a lamp. So it causes a latent tension, an irritation. In other words, when we interact with the material world, with tools, objects and instruments, we expect an instant reaction to our actions. Yet as the Internet of Things spreads, providing that instant response to the user will become harder and harder.

For example, the switch will no longer control the bulb directly. When you open the door of your "smart" home, a sensor will send information about the event to a control device, which will forward it to a remote server, and only then will the server issue the command to turn on the lamp. Or some other scheme will be implemented, with several intermediate participants equipped with chips and network interfaces. Many more examples of such complication of the simplest operations could be given. Moreover, the Internet of Things is no longer a single distributed system but a multitude of systems, made up of a multitude of devices, interacting within and between systems in many different ways. Infinity cubed. And all of it should feel instantaneous to the user, because instant response is one of his basic expectations of the real world.
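The chain just described can be sketched as a simple latency budget: the end-to-end delay is the sum of the per-hop delays. All hop names and millisecond figures below are invented for illustration, not measurements of any real product.

```python
# Hypothetical per-hop latencies (milliseconds) for the "smart switch" chain:
# switch -> home hub -> cloud server -> hub -> lamp. Figures are assumptions.
HOPS_MS = {
    "switch -> hub (radio)": 15,
    "hub -> cloud (WAN)": 60,
    "cloud processing": 20,
    "cloud -> hub (WAN)": 60,
    "hub -> lamp (radio)": 15,
}

def total_latency_ms(hops: dict) -> int:
    """End-to-end delay is simply the sum of the per-hop delays."""
    return sum(hops.values())

total = total_latency_ms(HOPS_MS)
print(f"Total: {total} ms")  # 170 ms, already above the ~100 ms 'instant' threshold
```

Even with modest figures at every hop, the total lands well above the 0.1-second threshold at which a response stops feeling instantaneous.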



The main obstacle to instant response in the Internet of Things is the sheer scale of the system. A huge number of diverse devices will generate an enormous amount of information. Performance will move to the top of the priority list when components are developed, even components that do not, a priori, require an instant reaction themselves. The data generated by network participants will need to be processed non-stop, with strict priority management and latency control. Most likely, batch processing models will be applied on a massive scale.
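One minimal sketch of the "strict priority management" mentioned above: latency-critical events are served before bulk telemetry regardless of arrival order. The event names and priority values are invented for illustration.

```python
import heapq

class EventQueue:
    """Priority-driven event processing: lowest priority number is served first."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker keeps FIFO order within one priority level

    def push(self, priority: int, event: str) -> None:
        heapq.heappush(self._heap, (priority, self._seq, event))
        self._seq += 1

    def pop(self) -> str:
        return heapq.heappop(self._heap)[2]

q = EventQueue()
q.push(5, "temperature telemetry")    # bulk data, can wait
q.push(0, "light switch pressed")     # latency-critical
q.push(2, "door sensor opened")

order = [q.pop() for _ in range(3)]
print(order)  # the switch press jumps the queue
```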

The next point: since the systems and subsystems that make up the Internet of Things will be, by definition, highly distributed, the data from numerous scattered sources will have to be correlated before a given control command can be issued. Here it is hard to overestimate the contribution of the network infrastructure to the total delay. Most likely, data transmission and processing will have to move geographically closer to the user, which in turn will make the system very sensitive to the efficiency of the network architecture and its load.
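The correlation problem can be illustrated with a time-window grouping: readings from different sensors are treated as one physical event only if their timestamps fall close together. Sensor names, timestamps and the window size are all assumptions for the sketch.

```python
WINDOW_S = 2.0  # readings closer than this are treated as one physical event

events = [
    ("door_sensor", "opened", 100.0),
    ("motion_sensor", "motion", 100.8),  # same event, arrives slightly later
    ("door_sensor", "opened", 450.0),    # unrelated, much later
]

def correlate(events, window=WINDOW_S):
    """Group (source, kind, timestamp) readings whose times fall within `window`."""
    groups = []
    for source, kind, ts in sorted(events, key=lambda e: e[2]):
        if groups and ts - groups[-1][-1][2] < window:
            groups[-1].append((source, kind, ts))
        else:
            groups.append([(source, kind, ts)])
    return groups

groups = correlate(events)
print(len(groups))  # 2: the first two readings correlate, the third stands alone
```

Note that the window itself adds latency: the system must wait long enough to be sure no related reading is still in flight.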

Finally, complex, large-scale Internet of Things services will include systems from numerous manufacturers and suppliers. Needless to say, full compatibility and impeccable adherence to standards will be essential for keeping delays low. In a number of tasks, far more than user comfort will depend on this. Take an automatic car parking system: receiving and processing information from sensors about the surrounding space must happen fast enough to avoid a collision. Yet billing systems will certainly be involved as well, not to mention the omnipresent advertising. So in this situation we are talking about the interaction of systems from three different companies: the parking operator, the bank and the advertiser. Each system will have its own limitations and peculiarities, and nothing will work without effective data sharing and cross-correlation.



Fortunately, the analysis and widespread use of big data can help solve these problems. It will become possible to monitor and centrally manage the interaction between devices, servers, data centers and network segments over high-speed protocols. But all the tools needed for this have yet to be created. Management, trend detection and the identification of weak spots will also require numerous metrics, extensive data analysis and visualization.
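A core metric for such monitoring is the latency percentile: averages hide the tail delays that users actually feel. A minimal nearest-rank percentile over invented sample values:

```python
def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (p in 0..100)."""
    s = sorted(samples)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

# Invented request latencies in milliseconds, with one slow outlier.
latencies_ms = [12, 15, 11, 14, 13, 250, 12, 16, 13, 14]

p50 = percentile(latencies_ms, 50)
p99 = percentile(latencies_ms, 99)
avg = sum(latencies_ms) / len(latencies_ms)
print(p50, p99, avg)  # median 13 ms, but p99 is 250 ms and the average 37 ms
```

The gap between the median and the tail is exactly the "weak spot" such analysis is meant to expose.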

To ease the interaction between systems belonging to different organizations, platforms that aggregate data and supply it to distributed services could be used. The adoption of such technologies will probably transform our understanding of IT monitoring. We will move from a model centered on a device, service or application to an information-centric model. The emphasis will shift from monitoring each component to collecting reliable, high-quality data from the right sources to feed the analytical mechanisms. Those mechanisms, in turn, will play a crucial role in the operation of the Internet of Things, generating control commands in real time, to say nothing of analyzing delays after the fact in order to find weaknesses and optimize.



The first steps in this direction have already been taken, for example in Amazon's DNS routing service. It implements a mechanism for collecting latency data across the company's cloud regions, which is then analyzed to route each user request appropriately. But managing connection latency is only one component of overall service performance. Much attention will also have to be paid to data processing on servers, the time it takes to transmit information over the network, and delays in receiving answers from partner services.
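The idea behind such latency-based routing can be sketched in a few lines: given latency measurements from the client to each region, send the request to the closest one. The region names and millisecond figures here are illustrative assumptions, not Amazon's actual data or API.

```python
# Hypothetical measured latencies from one client to each cloud region.
measured_ms = {
    "us-east-1": 95,
    "eu-west-1": 34,
    "ap-northeast-1": 180,
}

def best_region(measurements: dict) -> str:
    """Route the request to the region with the smallest measured delay."""
    return min(measurements, key=measurements.get)

print(best_region(measured_ms))  # eu-west-1
```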

In general, there is hope that the very idea of the Internet of Things will survive its first contact with users and will not be rejected because of long reaction times. After all, the experience of daily Internet use has taught us that not every service or site responds instantly. This "credit of trust" should buy enough time to bring the delays of the Internet of Things down to an acceptable level. Technological development will aim at speeding up applications and constantly lowering the threshold of acceptable delay. That will require new high-performance tools combined with extensive analytics and optimization algorithms. All of this, perhaps, will be the key to the survival and development of the Internet of Things.

Source: https://habr.com/ru/post/259211/

