Hello everyone! With the growth of blocking, including unjustified blocking of sites by the state, we offer a description of an idea, along with a prototype configuration, for a site protected from blocking by a specific path and domain name. Further blocking-protection ideas will be covered in other posts. If the topic interests you, read on.
https://github.com/http-abs/http-abs/
The principle of protection is that each user receives a unique pair, consisting of an individual (sub)domain and a path prefix, for viewing the site. Let's call this pair an agent identifier.
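As a minimal sketch of what an agent identifier might look like, the following Python snippet generates such a pair. The parameter values mirror the prototype's hard-coded defaults described later in the article, but the names and structure here are assumptions, not the prototype's actual code.

```python
import secrets

# Illustrative defaults (the prototype hard-codes similar values):
SUBDOMAINS = ["a", "b", "c"]   # fixed set of subdomains
DOMAIN = "example.com"
PREFIX_LEN = 32                # path prefix: 32 base-16 digits

def new_agent_id():
    """Pick a subdomain and generate a random hex path prefix."""
    sub = secrets.choice(SUBDOMAINS)
    prefix = secrets.token_hex(PREFIX_LEN // 2)  # 32 hex digits
    return sub, prefix

sub, prefix = new_agent_id()
print(f"https://{sub}.{DOMAIN}/{prefix}/some/article")
```

Each user thus browses the site under a URL namespace that is, with high probability, unique to them.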
If, for some reason, you can only manage a limited set of subdomains, then a user will, of course, share their subdomain with some other users.
Of course, given the above limitation, life becomes harder for those users unlucky enough to share a subdomain with the blocking operator. But not much harder.
This holds subject to the previous note.
Sharing links is easy. The agent identifier allocated to a user for viewing the site is stored in their cookie, and the user simply shares the link from the address bar.
The unique pair serves to isolate the blocking operator. With luck, after several rounds of blocking, the operator can even be identified.
Nothing at all is required of the user. They visit the site in the usual way, read material, and share links. The only thing that changes for them is the look of the address bar.
Possibly. The backend does not have to account for the path prefix or the subdomains included in the agent identifier: the handler truncates them at the first stage of processing the incoming URL. However, making the process completely transparent may still take some work.
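The truncation step described above can be sketched as follows. The real prototype does this in Lua inside nginx; the regex here is an assumption based on the prefix format described later (32 base-16 digits), not the prototype's actual rule.

```python
import re

# A leading path segment of exactly 32 lowercase hex digits is treated
# as an agent-id prefix and stripped before the URL reaches the backend.
HEX_PREFIX = re.compile(r"^/[0-9a-f]{32}(/.*)?$")

def clean_url(path):
    """Strip a leading 32-hex-digit agent-id prefix, yielding the clean path."""
    m = HEX_PREFIX.match(path)
    if m:
        return m.group(1) or "/"
    return path

print(clean_url("/0123456789abcdef0123456789abcdef/news/42"))  # -> /news/42
```

Paths without a recognizable prefix pass through unchanged, so the backend sees only clean URLs either way.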
Of course, blocking orders can still be issued. But it will be much more difficult to justify blocking an endless set of links to material formed according to an intricate rule, especially since the rule may change to an even more intricate one.
Such a development cannot be ruled out, but I want to consider those cases separately in another article. Perhaps we will come up with something together in the comments.
To make the process as transparent as possible, the code is concentrated in the nginx frontend server. This makes it possible to protect a wide variety of application servers with few or no restrictions.
Since request processing is far from trivial, the additional package libnginx-mod-http-lua is used, which brings the Lua language into nginx request processing.
Ideally, processing should work so that the backend (upstream, the application server) is entirely unaware that it has been placed behind the protection. It receives requests at URLs from which all elements of the agent identifier have been removed (let's call these clean URLs). To avoid rewriting the returned pages, visiting a clean URL with the agent-identifier cookie set triggers a redirect to the individual URL.
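The routing decision just described can be modeled in a few lines. This is a toy sketch, assuming the 32-hex-digit prefix format; the subdomain half of the agent identifier is omitted for brevity, and the real prototype implements this logic in Lua inside nginx.

```python
import re

PREFIX = re.compile(r"^/([0-9a-f]{32})(/.*)?$")

def route(path, agent_cookie):
    """Return ('proxy', clean_path) or ('redirect', individual_url)."""
    m = PREFIX.match(path)
    if m:
        # Individual URL: strip the agent-id prefix and proxy the clean
        # path upstream, so the backend never sees the identifier.
        return ("proxy", m.group(2) or "/")
    if agent_cookie:
        # Clean URL with the cookie set: redirect into the user's
        # individual namespace instead of rewriting returned pages.
        return ("redirect", "/" + agent_cookie + path)
    # No cookie yet: the real frontend would allocate an agent id here.
    return ("proxy", path)
```

The redirect-on-clean-URL rule is what keeps the backend's pages untouched: links inside them may be clean, but the browser is bounced back into the individual namespace on every navigation.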
The frontend stores no state anywhere except the agent-identifier cookie.
Not a single line of JavaScript runs in the browser; only plain HTTP is used.
At the moment, only a proof of concept is implemented, enough to observe the algorithm in real operation. Many details of turning it into a product remain unsolved: modularity, selection and validation of parameters, and so on.
For subdomains, a scheme with a fixed set of subdomains was chosen, suitable for use together with the hosts file, without installing an additional DNS server.
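With a fixed subdomain set, local testing needs no DNS server at all: one hosts-file line per machine is enough. For example, assuming the prototype's default domain and subdomains:

```
127.0.0.1  example.com a.example.com b.example.com c.example.com
```

A wildcard DNS record would serve the same purpose in production, but the fixed set keeps the prototype self-contained.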
The path-prefix format is predefined and consists of 32 base-16 digits.
Startup parameters are set directly in the code:
- The set of subdomains (a, b, c) is stored in a variable and can be extended.
- The domain is set to example.com.
- The backend is expected at http://127.0.0.1:8000.
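For orientation, here is a pure-nginx approximation of how those parameters might be wired together. This is an illustrative sketch only: the actual prototype drives the cookie and redirect logic from Lua via libnginx-mod-http-lua, so see the linked repository for the real code.

```nginx
server {
    listen 80;
    # Fixed set of subdomains under the protected domain:
    server_name a.example.com b.example.com c.example.com;

    location / {
        # Strip a leading 32-hex-digit agent-id prefix so that the
        # backend only ever receives clean URLs.
        rewrite "^/[0-9a-f]{32}(/.+)$" "$1" break;
        proxy_pass http://127.0.0.1:8000;
    }
}
```

The cookie check and the redirect from clean URLs to individual ones cannot be expressed this simply in plain nginx directives, which is exactly why the prototype reaches for Lua.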
The growing threat of sudden blocking makes it worth preparing in advance: even the owners of entirely law-abiding websites should think about protecting them. Such protection is quite feasible, requires no effort at all from the user, and can be implemented with a fairly modest amount of effort by the site administrator.
Source: https://habr.com/ru/post/344536/