
Cascading squids

Back when unlimited internet was overpriced (64 kbit/s for 1000 RUR), my friends and I built a cluster of proxies to increase our total throughput. Time passed and prices changed: an unlimited 1 Mbit/s plan (with nightly speed doubling) now costs the same 1000 rubles. Despite this, we still use the cluster from time to time, so I decided to share the method of building such a thing, in case anyone finds it interesting.
For the experiments we need:

We assume that we have two machines: 192.168.1.1 (which also has a second uplink, to a second provider: 192.168.2.1) and 192.168.1.2.
After a little googling I found a note from almost five years ago, "Squid 2 chans". Two points were taken from it: how, in fact, to organize a cluster, and how to proxy on different interfaces.

First of all, let's create additional squid services.
First, we'll tweak the server config a bit (/etc/squid/squid.conf). Since I don't need a cache, I disabled it by setting the cache_dir parameter as follows:
cache_dir null /var/spool/squid

Since the cache is disabled, we also turn off cache store logging:
cache_store_log none

Open access to the proxy for everyone (you need to insert before the entry 'http_access deny purge'):
acl all src 0.0.0.0/0.0.0.0
http_access allow all
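The article restricts access with iptables further below; if you would rather keep the restriction inside Squid itself, a sketch of the alternative (the 192.168.1.0/24 network is an assumption taken from this article's addressing):

```
acl lan src 192.168.1.0/24
http_access allow lan
http_access deny all
```

With this in place the firewall rules become a second line of defense rather than the only one.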

Let's allow access via the cache manager (cache_object) protocol from localhost only (I don't remember which of these lines are in the default config, so it doesn't hurt to check what you already have before inserting):
acl manager proto cache_object # this is the line most likely already in the config
http_access allow manager localhost
http_access deny manager

Set the listening port to 8080 (the default is 3128):
http_port 8080

To restrict access to the proxy I prefer to use iptables (all packets dropped by default) rather than authorization, which is why the proxy in the config is open to all:
iptables -N proxy                               # new chain for proxy access control
iptables -A proxy -j REJECT                     # default rule: reject everything
iptables -I proxy -s 192.168.1.2 -j ACCEPT      # inserted above REJECT: allow the 2nd machine
iptables -I INPUT -p tcp --dport 8080 -j proxy  # send incoming proxy traffic to the chain

Now let's add the parent proxies (I'll say right away: with the configuration below it will not work. But the error will be found and corrected in the course of the story :)):
cache_peer 127.0.0.1 parent 8081 0 no-query no-digest round-robin weight=4
cache_peer 127.0.0.1 parent 8082 0 no-query no-digest round-robin weight=1
cache_peer 192.168.1.2 parent 8080 0 no-query no-digest round-robin weight=4

In total: three parent proxies, which will not be sent ICP queries (no-query) and will not be asked for cache digests (no-digest); the parents are rotated in a cycle (round-robin). The weights differ because my second channel is weaker.
Forbid Squid from going to the network directly (not through the parents):
never_direct allow all
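Note that this forces every request, even to hosts on the LAN, through the parents. If that is undesirable, Squid also honors an always_direct override, which takes precedence over never_direct; a sketch (the localnet ACL and the 192.168.1.0/24 range are assumptions, not part of the article):

```
acl localnet dst 192.168.1.0/24
always_direct allow localnet
never_direct allow all
```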

Create two copies of the config file: /etc/squid/squid_2.conf and /etc/squid/squid_3.conf. Remove from them the lines beginning with cache_peer and never_direct, and change http_port to 8081 and 8082 respectively. Also change the log paths (from here on, changes are shown only for squid_2; for squid_3 they are analogous):
access_log /var/log/squid_2/access.log
cache_log /var/log/squid_2/cache.log

It also doesn't hurt to create this directory and change its owner so that Squid can write its logs:
mkdir /var/log/squid_2
chown proxy:proxy /var/log/squid_2

Specify the location of the pid file:
pid_filename /var/run/squid_2.pid
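The copy-and-edit steps above can be scripted. A sketch, assuming a Bourne shell and GNU sed; the helper name gen_child_conf is my own, not from the article:

```shell
#!/bin/sh
# Generate a child squid config from the master config:
# drop cache_peer/never_direct, retarget the port, add per-instance paths.
# $1 = instance number, $2 = master config, $3 = output dir, $4 = log base dir
gen_child_conf() {
  n=$1; master=$2; outdir=$3; logdir=$4
  port=$((8079 + n))   # squid_2 -> 8081, squid_3 -> 8082
  sed -e '/^cache_peer/d' -e '/^never_direct/d' \
      -e "s/^http_port .*/http_port $port/" \
      "$master" > "$outdir/squid_$n.conf"
  cat >> "$outdir/squid_$n.conf" <<EOF
access_log $logdir/squid_$n/access.log
cache_log $logdir/squid_$n/cache.log
pid_filename /var/run/squid_$n.pid
EOF
  mkdir -p "$logdir/squid_$n"
}

# On the real box, as in the article (then chown proxy:proxy the log dirs):
#   gen_child_conf 2 /etc/squid/squid.conf /etc/squid /var/log
#   gen_child_conf 3 /etc/squid/squid.conf /etc/squid /var/log
```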

I did not change the cache location: all three proxies happily share a single directory. Still, running squid -z (cache initialization) before the first start does not hurt.

Now let's get down to creating the init scripts. In /etc/init.d/squid, comment out the code responsible for cache initialization (the script is not designed for a null cache):
# if [ -d "$cdr" -a ! -d "$cdr/00" ]
# then
#     log_warning_msg "Creating squid spool directory structure"
#     $DAEMON -z
# fi

Copy /etc/init.d/squid => /etc/init.d/squid_2 .
Modify squid_2 :
NAME=squid_2
...
SQUID_ARGS="-D -sYC -f /etc/squid/squid_2.conf"

Also, for peace of mind, you can change the messages the script prints:
log_daemon_msg "Starting Squid HTTP proxy" "squid_2"
...
log_daemon_msg "Stopping Squid HTTP proxy" "squid_2"
...
log_daemon_msg "Restarting Squid HTTP proxy" "squid_2"

For the next trick to work, the following lines must be present at the beginning of the init script:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6

Register the newly created service to start with the system:
update-rc.d squid_2 defaults

We launch child proxies:
invoke-rc.d squid_2 start
invoke-rc.d squid_3 start

Here it is, the moment of truth: we launch the central proxy
invoke-rc.d squid start
and…
FATAL: ERROR: cache_peer 127.0.0.1 specified twice
…and it cruelly shoots us down.
After a little thought (couldn't be bothered to google), replace 127.0.0.1 with localhost in one of the entries.
Start it again, and there it is: happiness.
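After that fix, the parent section of /etc/squid/squid.conf reads:

```
cache_peer localhost parent 8081 0 no-query no-digest round-robin weight=4
cache_peer 127.0.0.1 parent 8082 0 no-query no-digest round-robin weight=1
cache_peer 192.168.1.2 parent 8080 0 no-query no-digest round-robin weight=4
```

Squid rejects duplicate peer hostnames, so localhost and 127.0.0.1, while pointing to the same place, count as two distinct peers.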
Squid on the second machine can be configured by copying /etc/squid/squid.conf, removing the cache_peer and never_direct lines, and adjusting the access rights for the cache manager (cache_object) protocol.
For testing, point your favorite browser at the proxy and open any page. Look at /var/log/squid/access.log; you should see lines like these:
1214499645.364 15335 89.189.176.111 TCP_MISS/206 214755 GET ru.download.nvidia.com/Windows/177.41/177.41_geforce_winxp_64bit_international_whql.exe - ROUNDROBIN_PARENT/192.168.1.2 application/octet-stream
1214499646.148 11138 89.189.176.111 TCP_MISS/206 534572 GET ru.download.nvidia.com/Windows/177.41/177.41_geforce_winvista_64bit_international_whql.exe - ROUNDROBIN_PARENT/127.0.0.1 application/octet-stream
1214499650.695 3564 89.189.176.111 TCP_MISS/206 370947 GET ru.download.nvidia.com/Windows/177.41/177.41_geforce_winvista_64bit_international_whql.exe - ROUNDROBIN_PARENT/localhost application/octet-stream
1214499658.899 52092 89.189.176.111 TCP_MISS/206 1115575 GET ru.download.nvidia.com/Windows/177.41/177.41_geforce_winvista_64bit_international_whql.exe - ROUNDROBIN_PARENT/192.168.1.2 application/octet-stream
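To see how requests are actually distributed across the parents, the hierarchy field (second to last in Squid's default log format, e.g. ROUNDROBIN_PARENT/192.168.1.2) can be tallied. A sketch; the helper name parent_stats is my own:

```shell
#!/bin/sh
# Count how many requests each parent served, from a squid access log.
# The field before the last one is "HIERARCHY/parent"; keep the parent part.
parent_stats() {
  awk '{ split($(NF-1), a, "/"); print a[2] }' "$1" | sort | uniq -c | sort -rn
}
# usage: parent_stats /var/log/squid/access.log
```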

Next on the plan: SqStat. Download the archive, unpack it into any www folder, rename config.inc.php.defaults to config.inc.php, and edit:
$squidhost[0] = "localhost";
$squidport[0] = 8080;
$cachemgr_passwd[0] = "";
$resolveip[0] = false; # set to true for beauty :)
$hosts_file[0] = "hosts.txt";
$group_by[0] = "host";

$squidhost[] = "localhost";
$squidport[] = 8081;
$cachemgr_passwd[] = "";
$resolveip[] = false;
$hosts_file[] = "hosts.txt";
$group_by[] = "host";

$squidhost[] = "localhost";
$squidport[] = 8082;
$cachemgr_passwd[] = "";
$resolveip[] = false;
$hosts_file[] = "hosts.txt";
$group_by[] = "host";

$squidhost[] = "192.168.1.2";
$squidport[] = 8080;
$cachemgr_passwd[] = "";
$resolveip[] = true;
$hosts_file[] = "hosts.txt";
$group_by[] = "host";

Open sqstat.php in the browser and watch the activity.
If anyone is interested, I can post a slightly modified version that additionally displays the total speed/volume per grouping parameter: http://narod.ru/disk/26137171001/sqstat.class.php.bz2.html . There is only one file in it, sqstat.class.php, so unpack the archive into the folder with the original SqStat.

When idle, the three squids consume 2.3% of memory, i.e. 14.72 MB.
PS # squid -v
Squid Cache: Version 2.6.STABLE5

UPD: I hope it is clear that downloading in a single stream through such a proxy gives no speed increase. To gain speed, start the download with n threads, where n is the number of proxies in the cluster (assuming equal weights).
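To illustrate (the downloader, proxy address, and URL here are assumptions, not part of the article's setup): a segmented downloader such as aria2c can open several connections through the central proxy, and the round-robin parents then serve one segment each:

```shell
#!/bin/sh
# Build a 3-stream download command through the central squid.
PROXY=http://192.168.1.1:8080       # the central proxy from this article
URL=http://example.com/big.iso      # placeholder URL
# -x3: up to 3 connections per server, one per parent (equal weights assumed)
CMD="aria2c -x3 --all-proxy=$PROXY $URL"
echo "$CMD"
# run it for real with: eval "$CMD"
```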

Source: https://habr.com/ru/post/28063/

