
P2P - The Next Stage of Information Systems Development



Let's set aside the bans in various countries and stop thinking of P2P as merely a mechanism for circumventing blocking.

I offer you an alternative view of P2P: what problems of the present and the future can this network architecture solve?

What is real P2P?


Let's introduce a concept: true P2P.
True P2P is a peer-to-peer network in which absolutely all nodes perform the same functions, or can automatically change their set of functions depending on conditions.

Changing functions means taking over functions that cannot run on certain peer nodes because of their limitations:
1) Nodes behind NAT
2) Mobile devices

Both classes of devices either cannot accept direct connections from the network (NAT), or can but should not (mobile devices), because keeping a huge number of connections open sharply increases power consumption.

To work around this problem, techniques such as TCP relaying are used: since most P2P systems use UDP with a huge number of simultaneous connections, a node can be elected that receives requests from the network over UDP and delivers them to the end device over a single TCP connection. Let me remind you that such a mechanism existed in Skype long ago: before the Microsoft acquisition these functions worked, and later the "supernode" concept in Skype was abandoned and the supernodes were replaced by Microsoft servers.
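
To make the mechanism concrete, here is a minimal sketch of such a relay node in Python. All addresses, ports, and the framing scheme are invented for illustration; a real relay would also need authentication, keep-alives, and multiplexing of many devices:

    import socket
    import struct

    # Hypothetical ports: one for P2P traffic from the network (UDP),
    # one for the single TCP connection kept open by the NATed or
    # mobile end device.
    UDP_LISTEN = ("0.0.0.0", 6881)
    TCP_LISTEN = ("0.0.0.0", 6882)

    def run_relay():
        # The end device connects out to the relay over TCP (outbound
        # connections traverse NAT), so we accept that connection first.
        tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        tcp_srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        tcp_srv.bind(TCP_LISTEN)
        tcp_srv.listen(1)
        device, _ = tcp_srv.accept()

        # Receive datagrams from the P2P network over UDP and forward
        # each one down the single TCP connection, length-prefixed so
        # the device can cut the byte stream back into messages.
        udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        udp.bind(UDP_LISTEN)
        while True:
            data, _peer = udp.recvfrom(65535)
            device.sendall(struct.pack("!H", len(data)) + data)

    if __name__ == "__main__":
        run_relay()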

It is very important not to confuse P2P and mesh networks. P2P is peer-to-peer interaction at layer 3 and above of the OSI model; mesh is at layer 3 and below.

What problems does a P2P network solve, and which technologies will it displace as P2P is widely adopted?



Caching

Today some ISPs, and almost all mobile operators, cache traffic. This saves resources and uplink capacity by not pushing the same traffic through the backbone over and over.

But why do you need caching at all if, with P2P, content that has already entered the operator's network will most likely be served from inside that network on the next request?
And no new infrastructure has to be built for this.

CDN

A content delivery network is mainly used to deliver "heavy" content (music, video, games on Steam) in order to reduce the load on the origin server and shorten response times: a CDN server is installed in each country and/or region and performs load balancing.

These servers must be maintained, man-hours are spent configuring them, and they cannot dynamically increase their throughput. Suppose Giwi.get, a service for watching legal content online, has always been popular in Nizhny Novgorod, and the CDN server in the region can serve movies and shows to at most 100,000 users at once. Then new content (a series) appears on the service, and according to forecasts based on research this series should not interest people from this region.

But for whatever reason it did interest them, and everyone decided to watch it. Naturally the CDN cannot cope; at best the requests spill over to the next CDN, and there is no guarantee that one is ready for such a load either.

Shortage of communication channels

Last-mile providers are ready to offer 1 Gbit/s channels, and even the intra-city network can carry such a load, but here is the catch: the backbone channel out of the city is not built for it, and expanding that channel costs millions (insert a currency of your choice).

Naturally, P2P services solve this problem too: it is enough for at least one source of the content to exist in the city (downloaded earlier over the backbone), and everyone gets access to it at the full speed of the local (intra-city) network.
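
The peer-selection logic behind this is simple. A minimal Python sketch, with the peer addresses and the local subnet invented for illustration: given several candidate sources of the same content, try intra-city peers first, so the backbone is touched only as a last resort.

    import ipaddress

    # Hypothetical peers advertising the same piece of content.
    PEERS = ["10.20.1.17", "10.20.44.3", "198.51.100.9", "203.0.113.25"]

    # Address space of the local (intra-city) network, for illustration.
    LOCAL_NET = ipaddress.ip_network("10.20.0.0/16")

    def rank_peers(peers):
        # Peers inside the city sort first; downloading from them
        # never touches the expensive backbone channel.
        return sorted(peers,
                      key=lambda p: ipaddress.ip_address(p) not in LOCAL_NET)

    print(rank_peers(PEERS))
    # ['10.20.1.17', '10.20.44.3', '198.51.100.9', '203.0.113.25']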

Strengthening Internet Distribution

In today's world the uplink is everything. Cities have traffic exchange points, but a provider would rather buy another couple of gigabits of backbone capacity than widen its channels to an exchange point or peer with neighboring providers.

Reducing uplink load

With P2P it is quite logical that wide internal channels become more important to the provider than external ones: why pay for an expensive uplink if the required content can very likely be found in a neighboring provider's network?

By the way, providers will be happy too: even now a provider sells tariffs whose sum far exceeds its uplink capacity.
In other words, if all users started using 100% of their tariff at once, the provider's uplink would be exhausted very quickly.
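
The arithmetic is easy to check; the numbers below are invented for illustration:

    subscribers = 10_000      # users on a 100 Mbit/s tariff
    tariff_mbps = 100
    uplink_gbps = 40          # the provider's external channel

    demand_gbps = subscribers * tariff_mbps / 1000
    print(demand_gbps / uplink_gbps)   # 25.0: the uplink is oversold 25x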

Obviously, P2P lets the provider claim it gives you access to the network at, say, 1 Tbit/s. Since content on the network is very rarely unique, a provider that peers with its neighboring providers in the city will most likely be able to serve the content at the full tariff speed.

No extra servers on the network

Today a provider's network usually hosts servers such as Google CDN (/YouTube), Yandex CDN/peering, DPI, plus other region-specific CDN/caching servers.

Obviously, all these CDN servers and extra peering (with services, not with other providers) can be eliminated, and DPI will not be needed in this situation either, because there will no longer be sudden load spikes during the busy hours. Why?

CHNN - Forget this abbreviation

CHNN (the Russian abbreviation for "hour of greatest load", i.e. the busy hour) traditionally falls in the morning and in the evening, and several distinct peaks are always visible depending on people's schedules:

Evening busy-hour peaks:
1) Schoolchildren coming home from school
2) Students coming home from universities
3) Workers on a 5/2 schedule coming home

These peaks can be seen on any equipment that analyzes channel load.

P2P solves this problem too: content that interests schoolchildren is likely to interest students and workers as well, so by the time they come home it already exists inside the provider's network, and the busy-hour peak never reaches the backbone.

Far future


We are sending our vehicles to the Moon and to Mars, and the ISS has had Internet access for a long time.

It is obvious that in the future technology will allow flights into deep space and a long-term human presence on other planets.

These outposts should also be connected to a common network. But take the classical client-server system: if the servers are on Earth and the clients are, say, on Mars, the ping will kill any interaction.
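
A quick back-of-the-envelope calculation shows why. Even at the speed of light, a single request-response round trip to Mars takes minutes:

    C = 299_792_458        # speed of light, m/s
    AU = 149_597_870_700   # astronomical unit, m

    # The Earth-Mars distance varies between roughly 0.38 and 2.67 AU.
    for au in (0.38, 2.67):
        rtt_minutes = 2 * au * AU / C / 60
        print(f"{au} AU: round trip ~ {rtt_minutes:.1f} minutes")
    # 0.38 AU: round trip ~ 6.3 minutes
    # 2.67 AU: round trip ~ 44.4 minutes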

And if we assume that a colony on another planet will grow, then just as on Earth its people will use the Internet, and clearly they will need the same tools as us:
1) Messengers
2) Social networks
And that is only the minimum set of services for sharing information.

It is logical that content generated on Mars will be interesting and popular on Mars rather than on Earth. So what is a social network to do?
Install separate servers that run autonomously and synchronize with Earth from time to time?

P2P networks solve this problem: a content source on Mars has its subscribers, Earth has its own, yet the social network stays one and the same. And if a Martian resident has a subscriber on Earth, no problem: as long as there is a channel, the content will reach the other planet.

Importantly, there will be no desynchronization of the kind that happens in traditional networks, and no extra servers need to be installed or configured there: the P2P system itself takes care of keeping the content up to date.

Channel breaks


Let us return to our thought experiment: humans live on Mars, humans live on Earth, and they all exchange content, but at some point a catastrophe strikes and the link between the planets disappears.

With traditional client-server systems, we can end up with a completely broken social network or other service.
Remember that every service has an authorization center. Who will handle authorization when the channel is down?
Meanwhile, Martian teenagers still want to post photos of their Martian food to MarsaGram.

When the channel breaks, a P2P network simply splits off an offline segment that keeps existing fully autonomously, without any outside interaction.
And as soon as the connection reappears, all services synchronize automatically.
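
One way to picture such synchronization is a replica on each planet that accumulates posts offline and merges state on reconnect. Below is a minimal Python sketch built on a grow-only set, one of the simplest conflict-free replicated data types; the post format and all names are invented:

    class Replica:
        # Each planet keeps its own copy of the feed and keeps
        # working while the interplanetary channel is down.
        def __init__(self):
            self.posts = set()

        def publish(self, author, text):
            self.posts.add((author, text))

        def merge(self, other):
            # Set union is commutative, associative and idempotent,
            # so replicas converge however long the partition lasted.
            self.posts |= other.posts

    mars, earth = Replica(), Replica()
    mars.publish("martian_teen", "my martian lunch")      # link is down
    earth.publish("earthling", "sunset over the ocean")

    mars.merge(earth)    # the channel comes back
    earth.merge(mars)
    assert mars.posts == earth.posts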

But Mars is far away; even on Earth there can be problems with broken communication channels.

Recall the recent high-profile Google and Facebook projects to cover new territories with Internet access.
Some parts of our planet are still not connected to the network: a connection may be too expensive or simply not economically viable.

If you build your own network (an intranet) in such a region and then connect it to the global one over a very narrow satellite channel, a P2P solution lets you use all functions from day one, just as in globally connected networks, and later, as described above, lets you pull all the necessary content through the narrow channel.

Network survival


If we rely on centralized infrastructure, we have a very concrete number of points of failure. Yes, there are backups and backup data centers, but we must understand that if the main DC is knocked out by a disaster, access to the content will slow down severalfold, if it does not stop altogether.

Recall the Mars scenario: all hardware arrives on Mars from Earth, and one fine day the Uandex or LCQ server breaks down (the RAID controller burns out, or something else fails), and all Martians are once again left without MarsaGram or, worse, cannot exchange even simple messages with one another. A new server or its components will not arrive from Earth any time soon.

With a P2P solution, the failure of one network participant does not affect the operation of the network.

I cannot imagine a future in which our systems remain client-server: that would breed a huge number of unnecessary crutches in the infrastructure, complicate support, add points of failure, block scaling exactly when it is needed, and demand enormous effort if we want our client-server solutions to work beyond our planet.

So the future is definitely P2P, and how P2P has already changed the world can be seen right now:
Skype - a small company that did not spend money on servers managed to grow into a giant
BitTorrent - open-source projects can distribute files without loading their own servers

These are only two prominent representatives of the information revolution; many other programs that will change the world are on the way.

Source: https://habr.com/ru/post/239225/

