This article addresses the depressing state of data availability on the Internet, the abuse of censorship, and total surveillance. Who is to blame: the authorities or the corporations? And what can be done? Build your own social networks, participate in anonymization networks, build mesh networks and store-and-forward solutions. We will demonstrate the NNCP utilities for creating such store-and-forward friend-to-friend networks.
Who is to blame?
In recent years there has been much anxious discussion of draft laws that could put the very existence of our part of the Internet in doubt. These bills can be interpreted so broadly that anything using encryption becomes illegal. And who needs a network where it is impossible to transmit data non-publicly, to talk privately?
Representatives of the authorities say that there is no talk of a ban, only of control: as long as they can read everything, there is no problem. But we know that there is no such thing as encryption that is "transparent" to some parties yet reliable against all others. Such "control" means that every word of ours must pass through an intermediary; direct communication between two interlocutors becomes unacceptable. A centralized intermediary is also dangerous: it invites abuse of censorship and the cutting off of access to whole layers of information. And it produces a huge stream of private information about people (every click is recorded), that is, global surveillance with all the resulting problems.
All these problems are usually framed in opposition to the authorities: it is they, we are told, who may take away from us, ordinary people, such a wonder of the world as the Network of networks. Is everything really so bad, and is it really the authorities who "dislike" our technologies?
Everything is not so bad: everything is much worse. Restricted availability of information, global total surveillance and censorship became a de facto reality long ago, without any bills being passed. Only the instigators of all this were, and remain, corporations such as Google, Facebook, Microsoft and Apple.
It is no secret that Web technologies are extremely complex, cumbersome and labour-intensive to develop: try writing a Web browser from scratch, with all of CSS, JavaScript and the DOM. The actively developed browsers can be counted on the fingers of one hand, and the people developing them work for these corporations. Naturally, all development moves exclusively towards the corporations' needs.
What was the Web from a technological point of view? The user has a special program (the good old warm tube Web browser) which, using the standardized HTTP protocol, tells a server which document or resource it wants, receives an HTML document (perhaps with images), and displays it. This is a distributed document storage network with a single protocol. We receive ready-made documents over the network, which can be saved to disk and read without any further connections to servers.
And what is the Web of the corporations? The user has a special program (still called a Web browser) which, over a transport protocol (still HTTP, though it no longer has anything to do with hypertext and could be replaced by any other file transfer protocol), downloads a program written in JavaScript (or nowadays perhaps WebAssembly, which is ordinary binary executable code, much like an .exe file), runs it in a virtual machine; and this program, using its own protocol (its own rules of interaction with the server and its own message format), communicates with the server to fetch data and draw it on the screen. You are unlikely to succeed in saving the displayed data as a document. Automating the retrieval of documents also fails, because every single site is a separate program with its own communication protocol and message format (its own JSON request structure, for example). The Web is now a distributed network of applications downloaded onto users' computers.
Of course, all these programs are closed: their code is at best obfuscated and not suitable for people to read or edit. Previously we installed, once, a program implementing one given protocol and supporting at least one standardized document format. Now, with every site, we download yet another different program.
What is a closed proprietary program? It is when you do not control your own computer, when you do not know what a program is going to do on it. You do not tell your machine what to do; the program tells you what you are allowed to do. All of this applies to any proprietary program, of course, not only to automatically downloaded JS code. But there is a significant difference between installing Microsoft Windows with some Microsoft Word on your computer and running JS code: the former you install once, and if you notice nothing dangerous or disturbing in their behaviour, you simply trust them and stop worrying. In the Web world, however, every visit to a site can bring you a new version of the program, and no modern browser will even tell you about it. If yesterday you did not notice the site sending your private data to the server, it may start doing so five minutes after you return. Without special plug-ins and ritual dances with a tambourine, nothing will warn you that you are now running a different version of the downloaded program. The whole ecosystem is geared towards the unquestioning download of proprietary software. To make users' computers execute whatever the site owners wish, it is literally enough for them to change a few files on their servers, and the new version of the program will automagically run everywhere.
Perhaps the problem is exaggerated: after all, this is not an ordinary .exe with access to a huge amount of computer resources, but a program running, in theory, in an isolated virtual machine? Unfortunately, the size and complexity of the code base of modern browsers is so huge that even merely auditing its security is very expensive, not to mention that this code base changes so quickly that any analysis will be out of date by the time it is finished.
Complexity is the main enemy of any security system. Complex protocols like TLS have shown that even when hundreds of millions of people use, and many develop, OpenSSL, a free open source program, fatal critical errors can still occur. Moreover, we have all seen that attacks like Rowhammer can be carried out from the browser, and there are successful browser-based attacks on the processor cache aimed at recovering AES keys. A virtual machine changing so rapidly and of such complexity cannot be secure by definition; even full virtualization in some Xen or KVM does not help against certain attacks. And what is the point of building a well-isolated environment when the corporations' business is the opposite: to collect as much data as possible?
Now let us try disabling JavaScript in the browser and visiting a variety of modern sites. Some resources will not work at all, but on the remaining 99% of sites we will see that a huge amount of advertising has gone somewhere. We will also see far fewer requests leaking our private data to third-party sites and servers: surveillance is significantly reduced, at the very least by the absence of contact with numerous third parties.
All this is done, as officially reported, for the sake of advertising: targeted advertising, its improvement, all for our benefit. Only it is "improved" solely by spying on us. The well-known security specialist and cryptographer Bruce Schneier has repeatedly stressed that surveillance is the business model of the Internet. All these corporations live by tracking users (that is, by collecting data about them) and selling the information obtained.
Someone may object: what kind of surveillance is this? If I walk into a shop, everyone sees my coat and my face: I give out that information myself. Indeed, I myself send my IP address, TCP ports and my browser's User-Agent, and I cannot help sending them: this is how the Web works. But if the seller starts asking my name and where I am from, and follows on my heels, that is a request for information not needed to complete the purchase: that is already surveillance. Corporation sites do exactly this, destroying access to information via standardized protocols and document formats (which say so little about us): once they force us to use their software, they are free to spy as they please.
Ask the majority of people: who among them has really been hit by Roskomnadzor's blocking and lost access to some information? Apart from loud short-lived blocks like that of GitHub, the most people will name is the loss of RuTracker. However, as with The Pirate Bay, it should be understood that this is no longer a whim of the authorities and not politics, but the power of corporations like Hollywood and its kin. Their money and influence on the governments of countries carry considerable weight. The authorities themselves have no reason to close RuTracker or The Pirate Bay, since these are basically cheap (in terms of infrastructure) entertainment that distracts people from politics (which is where the potential danger to power lies).
But the loss of tons of information because sites have stopped working by simple HTTP + HTML means, forcing people to use their software and to be constantly online (if a person is not online, how can information about him be gathered?), has, in my opinion, affected and continues to affect far more people. Disconnect a person from the Internet and he can do nothing at all: he cannot even read his mail, look at his photos or recall a meeting, because all of it is left in the clouds.
Information "trapped" in a social network such as VKontakte is inaccessible to indexing by third-party robots and often unavailable to unauthorized visitors. Only if I allow the download of closed proprietary programs, and register, handing over the identifier of my personal "beacon" (a cell phone number), am I permitted to see a couple of paragraphs of text about some ordinary musical concert. A wild number of Web developers have simply forgotten how to make sites any other way: without tracking each user and installing their software on his machine, they will not show a single bit of payload information. Because corporations teach people only this unethical, user-hostile way of developing. To see a single message in Google Groups you need to download almost two megabytes of JS programs: no comment needed.
Thus total surveillance, inaccessibility of information and centralized censorship have already arrived, and all of it is cultivated by the developers themselves, by ordinary people. The social networks are formally "clean": nobody forces us to pour everything into them. Indeed, people are extremely easy to manipulate, and it is extremely easy to keep silent about what they are losing while showing only the positive sides of one's approach. People begin to understand the value of things only when they lose them.
There are many people who use practically nothing but VKontakte and YouTube: each of their actions is already monitored, all their correspondence (they have no email, only an account in VK or Telegram) is read, and all incoming information is trivially censorable (how many times has Facebook been caught manipulating people by censoring data?). Most of this is already happening; nobody forced them, and there is still a choice. Providers already offer such people special, cheaper tariff plans with access only to a handful of services. When the mass of these people becomes critical, only such tariff plans will remain: what is the point of a provider maintaining infrastructure with access to the entire Internet when half a dozen corporate networks satisfy 99.99% of users? Prices for full-fledged tariffs will rise (if they remain at all), and that by itself will become a barrier to Internet access.
Do these people care that a CA certificate may be forcibly injected into their TLS connections, as happened in Kazakhstan, in order to monitor, listen in and arrange censorship? Hardly, when the same people do not care that other parties (corporate services) install their private software and their protocols on them anyway. By posting information only on social networks, these same people support centralization and the hegemony of one corporation over all data. They have long been digging the Internet's grave with their own hands, while trying to pin all the blame on the usual scapegoat.
Corporations are welcomed with open arms, while the authorities, who in this respect do far less catastrophic things, get all the fault-finding.
What to do?
And what about the minority of people who really do need the Internet, who need the ability, roughly speaking, to send arbitrary data from one arbitrary computer to another?
If you do not like the services of corporations or social networks, then no one forces you to use them, and you can always make your own analog (with blackjack and hookers, optionally). Every home has a powerful computer and a fast network: all the technical means for it exist. Engines for social networks like
Diaspora or
GNU Social have long existed.
If you do not like the way they serve data, erecting a huge entry threshold (their own protocol and format), then at least do it yourself in a proper and satisfactory manner. This applies to developers.
If a resource lacks the hard disks or channel capacity to meet all needs, do not forget about cooperation: offer the possibility of mirroring. Instead, unfortunately, many move to CDNs such as Cloudflare, which often blocks visits from the Tor network, forcing people to undergo degrading deanonymization procedures.
If you do not like that a growing number of providers give no static IP addresses, or no full-fledged address at all, only an internal address behind NAT, then placing resources inside overlay networks like
Tor hidden service (.onion) or
I2P (.i2p) may be the only way for anyone to connect to you from outside. Do not forget to participate in such networks yourself and to donate the often idle resources of your computers. Develop and support not only low-latency networks, a priori subject to a range of attacks by their very nature, but also networks like
Freenet and
GNUnet, so that it is not, as always, a scramble only after the thunder strikes.
If corporate censorship really reaches the point where arbitrary computers can no longer exchange encrypted traffic with one another, where the Internet closes down and only whitelisted remote access to a dozen services remains, then you can build your own network.
Laying fiber or cables can hardly be considered: it costs serious money (let us even forget that it is not allowed). But mesh networks over wireless channels can be built in Spartan home conditions. The network need not be completely isolated: it is enough that at least someone in it has access to the working Internet and acts as a gateway. There are many projects to choose from.
But do not forget about the corporations' desire to ban firmware changes on WiFi routers, and that a huge number of WiFi modules do not work without binary blobs. Such vendor lock-in makes tight control over traffic entirely possible, just as in modern proprietary operating systems it is impossible to install programs not cryptographically signed by the corporation (which approved the launch of this software). Making a WiFi chip independently at home is very expensive, and it is possible that the chips suitable for building a mesh network will simply vanish from the market. This concerns not only WiFi but any other wireless solution with a thick communication channel. Amateur radio stations can be built at home, but the capacity of their channels is deplorable, and you cannot simply set such a station up at home: permissions are needed.
Creating a mesh network is possible in theory, but in practice it takes a large number of sufficiently geographically distributed people to build something of impressive size and practical, not merely academic, benefit. There are different opinions, but my personal experience shows that people are not particularly eager to cooperate, so one should not hope that a mesh network could be created even in Moscow. And many people really are needed, because WiFi (and other available capacious radio solutions) works over relatively short distances.
In addition, mesh networks and their protocols are designed exclusively for real-time connections: for opening sites interactively or using a remote terminal. If connectivity in the network is lost, it is equivalent to a cable break, and until it is restored the other network segment is unavailable, rendering low-latency real-time programs unusable. Good channel redundancy is also required, which is expensive and resource-intensive.
A more fundamental decision is to forget about real-time services and remember that life is quite possible without them. Is it so critical that your message arrives not instantly, but in a few minutes or hours? Email remains the most reliable and widespread method of communication, and it guarantees no delivery time frame: delays of tens of minutes are routine.
To read most sites, real time is not needed in principle: they can be downloaded, for example, with GNU Wget and read offline. There is even a standardized format for Web archives:
WARC (Web ARChive),
used among others by the Internet Archive for preserving the Web. Freenet takes a similar approach: content is published into the network as complete documents, which participants store and pass around. Note, however, that the
Internet Archive can preserve only the Web that remains reachable by simple standardized means: sites that are in fact downloadable programs generally cannot be archived at all, and sites hidden behind CDNs are also problematic.
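As a minimal sketch of such offline reading (the URL and archive name below are placeholders, and network failures are tolerated on purpose), GNU Wget can both mirror a site and record everything it fetched into a WARC archive:

```shell
# Hedged sketch: mirror a site and write a WARC archive of the traffic.
# example.com is a placeholder; network access is required, so errors
# are swallowed here rather than treated as fatal.
WARC_NAME="example.com-$(date +%Y%m%d)"
wget --mirror --page-requisites \
     --timeout=10 --tries=1 \
     --warc-file="$WARC_NAME" \
     https://example.com/ || true
# Wget produces $WARC_NAME.warc.gz alongside the ordinary mirror.
```

The resulting .warc.gz can be read offline later, copied to anyone, and needs no real-time connectivity at all.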
If we give up real-time online communication, then instead of mesh networks we can build store-and-forward (save and pass on) networks. The best-known example is
FidoNet: nodes connect to each other only occasionally (originally over telephone modems), exchange the accumulated mail and files, and pass transit traffic further along. Such networks tolerate long delays and broken connectivity by design: a node that is offline simply receives its mail later.
Node-to-Node CoPy
Why not simply take the ready-made
FidoNet or
UUCP (Unix-to-Unix CoPy)?
A fair question!
Firstly, these are technologies of the last century: they have no built-in cryptography at all, neither encryption nor authentication of nodes (everything travels in plaintext and can be forged in transit). Today that is unacceptable.
Secondly, FidoNet comes from the DOS world, whereas UUCP is a Unix-world technology. On modern Unix-like systems FidoNet software feels like a foreign body. Setting up either FidoNet or UUCP is also far from trivial.
Thirdly, they are geared primarily towards email exchange. UUCP can also transfer files and run remote commands, but again without any cryptographic protection. In both FidoNet and UUCP any transit node sees the mail passing through it in the clear: the principle should not be "I do not read it because I promised", but "I cannot read it even if I want to".
That is why the NNCP utilities were written: free software (GNU GPLv3+). NNCP is a set of utilities for building secure store-and-forward friend-to-friend networks: transferring files, requesting files, and exchanging mail between nodes.
Each participant (node) of such a network first generates a configuration file with key pairs; all NNCP utilities are named nncp-*. The configuration is created by
nncp-cfgnew:
alice% nncp-cfgnew | tee alice.yaml
self:
  id: ZY3VTECZP3T5W6MTD627H472RELRHNBTFEWQCPEGAIRLTHFDZARQ
  exchpub: F73FW5FKURRA6V5LOWXABWMHLSRPUO5YW42L2I2K7EDH7SWRDAWQ
  exchprv: 3URFZQXMZQD6IMCSAZXFI4YFTSYZMKQKGIVJIY7MGHV3WKZXMQ7Q
  signpub: D67UXCU3FJOZG7KVX5P23TEAMT5XUUUME24G7DSDCKRAKSBCGIVQ
  signprv: TEXUCVA4T6PGWS73TKRLKF5GILPTPIU4OHCMEXJQYEUCYLZVR7KB7P2LRKNSUXMTPVK36X5NZSAGJ632KKGCNODPRZBRFIQFJARDEKY
  noiseprv: 7AHI3X5KI7BE3J74BW4BSLFW5ZDEPASPTDLRI6XRTYSHEFZPGVAQ
  noisepub: 56NKDPWRQ26XT5VZKCJBI5PZQBLMH4FAMYAYE5ZHQCQFCKTQ5NKA
neigh:
  self:
    id: ZY3VTECZP3T5W6MTD627H472RELRHNBTFEWQCPEGAIRLTHFDZARQ
    exchpub: F73FW5FKURRA6V5LOWXABWMHLSRPUO5YW42L2I2K7EDH7SWRDAWQ
    signpub: D67UXCU3FJOZG7KVX5P23TEAMT5XUUUME24G7DSDCKRAKSBCGIVQ
    noisepub: 56NKDPWRQ26XT5VZKCJBI5PZQBLMH4FAMYAYE5ZHQCQFCKTQ5NKA
    sendmail:
    - /usr/sbin/sendmail
spool: /var/spool/nncp/alice
log: /var/spool/nncp/alice/log
NNCP creates exclusively friend-to-friend (F2F) networks, where every member knows the neighbours he communicates with. If Alice needs to contact Bob, they must first exchange public keys and write them into their configuration files. Nodes exchange so-called
encrypted packets - a kind of OpenPGP analogue. Each packet is explicitly addressed to a given participant. This is not peer-to-peer (P2P), where anyone can connect to anyone and send anything: that opens the possibility of
Sybil attacks , where an attacker's nodes can disable the entire network or at least monitor the activity of its participants.
The simplest
configuration file contains the following fields:
- self.id - our node ID
- self.exchpub / self.exchprv and self.signpub / self.signprv - keys used to create encrypted packets
- self.noisepub / self.noiseprv - optional keys used when communicating nodes over a TCP connection
- neigh - contains information about all known members of the network, the "neighbours". It always contains the self record: your own public data, which can safely be handed out to people, since it holds only the public parts of the keys
- spool - path to the spool directory where outgoing encrypted packets and raw incoming packets are located
- log - path to the log in which all performed actions are saved (sent file / letter, received, etc.)
Bob generates his own file, exchanges keys with Alice, and adds hers to his configuration file (Alice does the same):
bob% cat bob.yaml
self:
  id: FG5U7XHVJ342GRR6LN4ZG6SMAU7RROBL6CSU5US42GQ75HEWM7AQ
  exchpub: GQ5UPYX44T7YK5EJX7R73652J5J7UPOKCGLKYNLJTI4EBSNX4M2Q
  exchprv: HXDO6IG275S7JNXFDRGX6ZSHHBBN4I7DQ3UGLOZKDY7LIBU65LPA
  signpub: 654X6MKHHSVOK3KAQJBR6MG5U22JFLTPP4SXWDPCL6TLRANRJWQA
  signprv: TT2F5TIWJIQYCXUBC2F2A5KKND5LDGIHDQ3P2P3HTZUNDVAH7QUPO6L7GFDTZKXFNVAIEQY7GDO2NNESVZXX6JL3BXRF7JVYQGYU3IA
  noiseprv: NKMWTKQVUMS3M45R3XHGCZIWOWH2FOZF6SJJMZ3M7YYQZBYPMG7A
  noisepub: M5V35L5HOFXH5FCRRV24ZDGBVVHMAT3S63AGPULND4FR2GIPPFJA
neigh:
  self:
    id: FG5U7XHVJ342GRR6LN4ZG6SMAU7RROBL6CSU5US42GQ75HEWM7AQ
    exchpub: GQ5UPYX44T7YK5EJX7R73652J5J7UPOKCGLKYNLJTI4EBSNX4M2Q
    signpub: 654X6MKHHSVOK3KAQJBR6MG5U22JFLTPP4SXWDPCL6TLRANRJWQA
    noisepub: M5V35L5HOFXH5FCRRV24ZDGBVVHMAT3S63AGPULND4FR2GIPPFJA
    sendmail:
    - /usr/sbin/sendmail
  alice:
    id: ZY3VTECZP3T5W6MTD627H472RELRHNBTFEWQCPEGAIRLTHFDZARQ
    exchpub: F73FW5FKURRA6V5LOWXABWMHLSRPUO5YW42L2I2K7EDH7SWRDAWQ
    signpub: D67UXCU3FJOZG7KVX5P23TEAMT5XUUUME24G7DSDCKRAKSBCGIVQ
    noisepub: 56NKDPWRQ26XT5VZKCJBI5PZQBLMH4FAMYAYE5ZHQCQFCKTQ5NKA
spool: /var/spool/nncp/bob
log: /var/spool/nncp/bob/log
Next, Bob wants to send the file to Alice:
bob% nncp-file -cfg bob.yaml ifmaps.tar.xz alice:
2017-06-11T15:33:20Z File ifmaps.tar.xz (350 KiB) transfer to alice:ifmaps.tar.xz: sent
and also back up his file system, building a pipeline in the Unix way:
bob% export NNCPCFG=/path/to/bob.yaml
bob% zfs send zroot@backup | xz -0 | nncp-file - alice:bobnode-$(date "+%Y%m%d").zfs.xz
2017-06-11T15:44:20Z File - (1.1 GiB) transfer to alice:bobnode-20170611.zfs.xz: sent
Then he can look at what his
spool directory is cluttered with:
bob% nncp-stat
self
alice
        nice: 196 | Rx: 0 B, 0 pkts | Tx: 1.1 GiB, 2 pkts
Each packet, besides information about the sender and recipient, also carries a so-called nice level: a single-byte number, the priority. Almost every action can be accompanied by a limit on the maximum allowed nice level. This is an analogue of the grade in UUCP. Packets with higher priority (lower nice value) are processed first, so that email messages get through even while a DVD-sized film is being transferred in the background. By default files are sent with priority 196, which is what we see.
Rx is received packets that have not yet been processed, and
Tx is packets to transmit.
The packets are encrypted, their integrity is verifiably checked, and they are authenticated: it is reliably known whom they came from. In almost all commands you can specify a minimum required packet size: packets will automatically be padded with garbage up to that size, so the real payload size is hidden, encrypted away from an outside observer.
When transferring files you can give the -chunked option, specifying the size of the pieces into which the file should be split. A very BitTorrent-like scheme is used: the file is cut into chunks, plus a meta-file with information about each chunk, so that the file can be reassembled with guaranteed integrity. This is useful when you need to hide the size of huge files (as with a corpse: dragging it whole is problematic, but cutting it into six parts is much easier). It is also useful when large amounts of data must be moved on drives of obviously smaller size: the data then travels in several iterations.
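To make the arithmetic concrete (the -chunked value's unit below is an assumption; consult the nncp-file documentation for the exact semantics), splitting a 650 MiB file into 100 MiB chunks would go roughly like this:

```shell
# Hypothetical invocation (the unit of the -chunked argument is assumed):
#   nncp-file -chunked 102400 pulp_fiction.avi alice:
# The spool then holds ceil(650 / 100) chunk packets plus one .meta packet:
FILE_MIB=650
CHUNK_MIB=100
NCHUNKS=$(( (FILE_MIB + CHUNK_MIB - 1) / CHUNK_MIB ))
echo "$NCHUNKS chunks + 1 meta"
```

Each chunk is an ordinary encrypted packet, so an observer of the drive sees seven anonymous-looking pieces rather than one conspicuously huge file.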
Now all this must somehow be delivered to Alice. One way is via a removable drive: by plain copying of files through the file system.
Bob takes a USB flash drive, creates a file system on it, runs:
bob% nncp-xfer -mkdir /mnt/media
2017-06-11T18:23:28Z Packet transfer, sent to node alice (1.1 GiB)
2017-06-11T18:23:28Z Packet transfer, sent to node alice (350 KiB)
and gets a set of directories there with all outgoing packets for every node known to him. Which nodes to "serve" can be limited with the -node option. The -mkdir option is needed only for the first run: if directories for the corresponding nodes already exist on the drive, they are processed; otherwise the nodes are simply skipped. This is convenient: if a flash drive "walks" only between certain members of the network, only packets for them will land on the drive, with no need to specify -node every time.
Instead of a USB drive there may be a temporary directory from which an ISO image is made for burning to CD. It could be some public FTP / NFS / SMB server mounted at /mnt/media. Such a NAS can live at work or anywhere else, as long as the two communicating participants can occasionally connect to it. It can be a portable
PirateBox that collects and distributes NNCP packets along its way. Or it can be a
USB dead drop to which, from time to time, completely different and unknown people connect: if unknown nodes appear in the target directory, we ignore them, learning only the fact of their existence and the number of transferred packets.
All these drives and storages contain only encrypted packets. One can see from whom and to whom they travel, how many, of what size and priority. But no more. Without private keys you cannot even learn the type of a packet (mail, file or transit). NNCP does not attempt to be anonymous.
Using
nncp-xfer requires no private keys: only knowledge of the neighbours, their identifiers, is needed. Thus you can go on the road with a spool directory and a minimal configuration file (without private keys) prepared by the
nncp-cfgmin command, without fear of compromising the keys. The nncp-toss call, which does require private keys, can be made at any other convenient time. However, if you still fear for the safety of the private keys, or even of the configuration file where all your neighbours are listed, you can encrypt it with the
nncp-cfgenc utility. The encryption key is a passphrase entered from the keyboard: the encrypted file contains a salt, and the passphrase is strengthened by the CPU- and memory-hard
Balloon algorithm, so with a good passphrase you need not worry much about compromise (except for the soldering iron).
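The idea behind nncp-cfgmin can be sketched with plain grep on a toy file: the "minimal" configuration is simply the same file with every private-key line dropped (the YAML below is a made-up miniature for illustration, not a real NNCP config):

```shell
# Toy illustration of what a travelling "minimal" config contains:
# the public parts only, nothing worth stealing.
cat > /tmp/toy.yaml <<EOF
self:
  id: AAAA
  exchpub: BBBB
  exchprv: SECRET1
  signpub: CCCC
  signprv: SECRET2
EOF
grep -v 'prv:' /tmp/toy.yaml > /tmp/toy.min.yaml
cat /tmp/toy.min.yaml
```

The real nncp-cfgmin of course does this properly; the point is only that the copy taken on the road holds no private keys at all.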
Alice just needs to execute the command to copy the files intended for her to her spool directory:
alice% nncp-xfer /mnt/media
2017-06-11T18:41:29Z Packet transfer, received from node bob (1.1 GiB)
2017-06-11T18:41:29Z Packet transfer, received from node bob (350 KiB)
alice% nncp-stat
self
bob
        nice: 196 | Rx: 1.1 GiB, 2 pkts | Tx: 0 B, 0 pkts
We see that raw (Rx) packets have landed in her spool directory. However, it is not all that simple. Though we have a network of friends (F2F): trust, but verify. You must explicitly allow a given node to send you files. For this, Alice adds to the bob section of her configuration file an indication of where to put the files received from him:
bob:
  id: FG5U7XHVJ342GRR6LN4ZG6SMAU7RROBL6CSU5US42GQ75HEWM7AQ
  exchpub: GQ5UPYX44T7YK5EJX7R73652J5J7UPOKCGLKYNLJTI4EBSNX4M2Q
  signpub: 654X6MKHHSVOK3KAQJBR6MG5U22JFLTPP4SXWDPCL6TLRANRJWQA
  noisepub: M5V35L5HOFXH5FCRRV24ZDGBVVHMAT3S63AGPULND4FR2GIPPFJA
  incoming: /home/alice/bob/incoming
After that, you need to start processing incoming encrypted packets with the
nncp-toss command (an analogue of a FidoNet tosser):
alice% nncp-toss
2017-06-11T18:49:21Z Got file ifmaps.tar.xz (350 KiB) from bob
2017-06-11T18:50:34Z Got file bobnode-20170611.zfs.xz (1.1 GiB) from bob
This command has a -cycle option that lets it hang in the background, regularly checking and processing the spool directory.
Besides sending files, there is the possibility of requesting them. For this, a freq (file request) record must be explicitly set in the configuration file for each node, naming the directory from which files may be requested:
bob:
  id: FG5U7XHVJ342GRR6LN4ZG6SMAU7RROBL6CSU5US42GQ75HEWM7AQ
  exchpub: GQ5UPYX44T7YK5EJX7R73652J5J7UPOKCGLKYNLJTI4EBSNX4M2Q
  signpub: 654X6MKHHSVOK3KAQJBR6MG5U22JFLTPP4SXWDPCL6TLRANRJWQA
  noisepub: M5V35L5HOFXH5FCRRV24ZDGBVVHMAT3S63AGPULND4FR2GIPPFJA
  incoming: /home/alice/nncp/bob/incoming
  freq: /home/alice/nncp/bob/pub
Now Bob can make a file request:
bob% nncp-freq alice:pulp_fiction.avi PulpFiction.avi
2017-06-11T18:55:32Z File request from alice:pulp_fiction.avi to pulp_fiction.avi: sent
bob% nncp-xfer -node alice /mnt/media
and Alice, after processing incoming messages, will automatically send the requested file to him:
alice% nncp-toss
2017-06-11T18:59:14Z File /home/alice/nncp/bob/pub/pulp_fiction.avi (650 MiB) transfer to bob:PulpFiction.avi: sent
2017-06-11T18:59:14Z Got file request pulp_fiction.avi to bob
There is no built-in functionality for publishing a list of files, but users can always agree on a convention, for example saving the output of ls -lR into an ls-lR file in the root of that directory.
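Such a convention might be automated with a tiny script run after every change to the shared directory (the path below is a demo stand-in, not taken from the configs above):

```shell
# Regenerate the ls-lR listing in the freq root so that peers can see
# what is available for request. PUB is a demonstration path.
PUB=/tmp/nncp-pub-demo
mkdir -p "$PUB"
touch "$PUB/pulp_fiction.avi"
ls -lR "$PUB" > "$PUB/ls-lR.new" && mv "$PUB/ls-lR.new" "$PUB/ls-lR"
```

A peer would then first request the ls-lR file itself, read it offline, and send requests for the files it lists on the next exchange.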
Now imagine that Alice and Bob both know Eve (a good Eve, not the evil eavesdropper of cryptography, since ours is a network of friends), but Alice has no direct contact with her. They live in different cities, and only Bob occasionally shuttles between them. NNCP supports transit packets, similar to Tor's onion encryption: Alice can create an encrypted packet for Eve and put it inside another encrypted packet for Bob, indicating that it must be forwarded to Eve. The length of the chain is unlimited, and an intermediate participant knows only the previous and the next link of the chain, not the real sender and recipient.
A transit path is set by the via record in the node section. For example, Alice declares that Eve is reachable through Bob:
bob:
  id: FG5U7XHVJ342GRR6LN4ZG6SMAU7RROBL6CSU5US42GQ75HEWM7AQ
  exchpub: GQ5UPYX44T7YK5EJX7R73652J5J7UPOKCGLKYNLJTI4EBSNX4M2Q
  signpub: 654X6MKHHSVOK3KAQJBR6MG5U22JFLTPP4SXWDPCL6TLRANRJWQA
  noisepub: M5V35L5HOFXH5FCRRV24ZDGBVVHMAT3S63AGPULND4FR2GIPPFJA
eve:
  id: URVEPJR5XMJBDHBDXFL3KCQTY3AT54SHE3KYUYPL263JBZ4XZK2A
  exchpub: QI7L34EUPXQNE6WLY5NDHENWADORKRMD5EWHZUVHQNE52CTCIEXQ
  signpub: KIRJIZMT3PZB5PNYUXJQXZYKLNG6FTXEJTKXXCKN3JCWGJNP7PTQ
  noisepub: RHNYP4J3AWLIFHG4XE7ETADT4UGHS47MWSAOBQCIQIBXM745FB6A
  via: [bob]
How exactly Eve keeps in touch with Bob, Alice does not know and does not need to know: one way or another the messages will reach her, and Bob sees only the fact of transit traffic passing through. The outgoing packet for Eve is created at Bob's node automatically by nncp-toss when it processes the packet from Alice.
If we are talking about serious security, then you need air-gapped computers, not connected to data networks, ideally having, say, only a CD-ROM / RW drive. "In front of" such a machine stands a computer that receives the flash drives, or the traffic from other nodes: there you can check that the drives contain nothing malicious, and even if that front machine is connected to the Internet or another network, OS vulnerabilities cannot compromise the air-gapped computer. The front node receives only transit packets, from which the outgoing packets for the air-gapped computer are produced and carried over on CD-ROM.
If you add a section to the configuration file:
notify:
  file:
    from: nncp@bobnode
    to: bob+file@example.com
  freq:
    from: nncp@bobnode
    to: bob+freq@example.com
then notifications of the transferred files will be sent to bob+file@example.com, and notifications about the requested files will be sent to bob+freq@example.com.
NNCP can be easily integrated with a mail server for transparent mail transfer. Why is ordinary SMTP not suitable, given that it is also store-and-forward? Because you cannot use flash drives with it (not without workarounds), because SMTP traffic, like email messages themselves, is not very compact (binary data travels as Base64), and because there are fairly strict limits on how long correspondence may wait for delivery. If you live in Uganda and a courier with a flash drive reaches you once a week, SMTP will not work here, while NNCP will simply hand over a pack of letters for every resident of the village and take the replies back to the city.
To send mail, use the nncp-mail command. In fact, this is exactly the same file transfer, except that on the target machine sendmail is called instead of saving the message to disk, and the message itself is compressed. Configuring Postfix takes literally a few lines:
- in master.cf you describe the nncp transport (how to invoke the nncp-mail command)
- for a given domain/user you specify that this nncp transport must be used
- you declare that the given domain or user is relayed (relay)
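As a rough sketch (paths, the domain name and the exact pipe arguments are illustrative assumptions; consult the NNCP and Postfix documentation for the precise invocation), the three steps might look like this. In master.cf:

  nncp      unix  -       n       n       -       -       pipe
    flags=F user=nncp argv=/usr/local/bin/nncp-mail -quiet $nexthop

In main.cf:

  transport_maps = hash:/etc/postfix/transport
  relay_domains = $mydestination, bobnode.example.com

And in /etc/postfix/transport, routing the domain through that transport to the NNCP node bob:

  bobnode.example.com    nncp:bob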
On the target machine, you need to set the path to the mail sending command for the given node (if it is not specified, mail will not be delivered):
bob:
  id: FG5U7XHVJ342GRR6LN4ZG6SMAU7RROBL6CSU5US42GQ75HEWM7AQ
  exchpub: GQ5UPYX44T7YK5EJX7R73652J5J7UPOKCGLKYNLJTI4EBSNX4M2Q
  signpub: 654X6MKHHSVOK3KAQJBR6MG5U22JFLTPP4SXWDPCL6TLRANRJWQA
  noisepub: M5V35L5HOFXH5FCRRV24ZDGBVVHMAT3S63AGPULND4FR2GIPPFJA
  sendmail: [/usr/sbin/sendmail, "-v"]
Such a transparent approach lets you get rid of POP3/IMAP4 servers entirely, and of email clients saving letters as drafts, when you need to pass mail between your mail server and your laptop. The mail server always faces the Internet, while the laptop does not care how often it connects: NNCP will accept mail and hand it to its local sendmail, which delivers to the local mailbox. Sending works the same way: messages arriving at the laptop's local mail server are stored in the NNCP spool directories and pushed to the server at the first opportunity (from the mail client's point of view, the mail was sent successfully right away). And the mail traffic is compressed, too.
Using portable drives is not always convenient, especially now, while the Internet still works. NNCP can just as well transfer packets over TCP connections. For this there is the nncp-daemon daemon and the nncp-call command that calls it.
For exchanging encrypted packets one could have used the rsync protocol, possibly over an OpenSSH connection, but that would be something of a crutch, plus yet another link and yet another set of keys for node authentication. NNCP instead uses its own synchronization protocol SP (sync protocol), running on top of a Noise-IK encrypted and authenticated communication channel. Although NNCP packets are themselves encrypted, it is still unwise to expose them publicly in communication channels, hence the additional layer above them. Noise provides perfect forward secrecy (PFS: compromising Noise private keys does not allow previously intercepted traffic to be read) and two-way authentication: strangers cannot connect to you and send, or try to receive, anything.
The SP protocol tries to be as efficient as possible in half-duplex mode: as much data as possible is transferred in one direction without expecting anything in return. This is critical for satellite links, where protocols that wait for acknowledgements, or send a request for each next piece of data, degrade completely in performance. Full-duplex mode is fully supported too, utilizing the channel in both directions.
SP also tries to be economical in the number of packets sent: already during the Noise-IK handshake the lists of packets available for download are transmitted, and download requests are sent in batches.
SP is designed for error-free channels: the transport does not have to be TCP, almost anything will do, as long as it does not corrupt data. Packets with broken integrity are not processed, and successful reception of a packet is reported to the other side only after its integrity has been verified.
The protocol can resume the download of any packet from an arbitrary position. When a connection breaks, the most noticeable loss is the TCP + Noise handshake, after which transfer continues from the same place. As soon as a new packet for sending appears on a node, the other side learns about it almost immediately (announcements go out once a second), allowing the current download to be interrupted and a higher-priority packet to be queued.
% nncp-daemon -nice 128 -bind [::]:5400
This command starts the daemon listening on all addresses on TCP port 5400, but accepting only packets with a nice level no higher than 128. If the likely addresses of a given node's daemon are known, they can be entered in the configuration file:
bob:
  id: FG5U7XHVJ342GRR6LN4ZG6SMAU7RROBL6CSU5US42GQ75HEWM7AQ
  ...
  addrs:
    lan: "[fe80::be5f:f4ff:fedd:2752%igb0]:5400"
    pub: "bob.example.com:5400"
and then call the node:
% nncp-call bob
% nncp-call bob:lan
% nncp-call bob:pub
% nncp-call bob forced.bob.example.com:1234
The first command tries all addresses from addrs in turn; the second and third explicitly say to use the lan and pub entries; the last one uses the explicitly given address regardless of the configuration file.
Both the daemon and nncp-call can be given a minimum required nice level: for example, let only e-mail messages through the Internet channel (assuming they have higher priority), while heavy files travel on portable drives or through a separately running daemon that listens only on the fast local network. nncp-call also takes -rx and -tx options, forcing it only to receive or only to send packets.
Unfortunately, the SP protocol has no way for the parties to agree that the connection can be closed. It is currently handled by a timeout: if neither node has anything to send or receive, the connection is dropped after a specified time. But that time can be made very long (hours) with the -onlinedeadline option, giving a long-lived connection over which notifications about new packets arrive immediately. This saves on expensive protocol handshakes and, in the case of mail, means very fast notification of incoming messages, without constantly polling POP3 and tearing connections up and down.
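For example, keeping a connection to Bob's LAN address alive for an hour (the -onlinedeadline value is in seconds; the address entry assumes the addrs configuration shown above):

% nncp-call bob:lan -onlinedeadline 3600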
TCP connections can be useful even without the Internet. For example, with a bit of shell scripting a laptop can be configured to raise an ad-hoc WiFi network, listen with the daemon on an IPv6 link-local address, and constantly try to connect to another known address. Riding the subway escalator, two such laptops can relatively quickly see each other on the common ad-hoc network and rapidly "shoot" NNCP packets at each other.
Keeping nncp-call running by hand is inconvenient, so there is the nncp-caller daemon: it establishes TCP connections to remote nodes itself, with the schedule and parameters for calling each node given in the configuration file as cron expressions. For example:
bob:
  id: FG5U7XHVJ342GRR6LN4ZG6SMAU7RROBL6CSU5US42GQ75HEWM7AQ
  ...
  calls:
    - cron: "*/10 9-21 * * MON-FRI"
      nice: 128
      addr: pub
      xx: rx
    - cron: "*/1 21-23,0-9 * * MON-FRI"
      onlinedeadline: 3600
      addr: lan
    - cron: "*/1 * * * SAT,SUN"
      onlinedeadline: 3600
      addr: lan
This says: on weekdays during the day (from 9 to 21) the public address is called every ten minutes, only reception is performed (xx: rx) and only packets with a nice level up to 128 are let through; on weekday nights the LAN address (an IPv6 link-local one, reachable only from the home network) is tried every minute, holding a long-lived connection (an onlinedeadline of an hour); on weekends the LAN address is called the same way around the clock.
None of this is tied to the Internet as such: these are ordinary TCP connections, and the entries in addrs are simply host:port pairs that may point anywhere.
That, for now, is all about NNCP and its SP protocol. The project deliberately follows the KISS principle: small Unix-way store-and-forward utilities from which friend-to-friend networks can be assembled.