
Configuring Squid, or how not to buy a paid solution


Hello!

Organizations often use various kinds of proxies: a proxy as a component of a software gateway, or the standalone classic combination of squid + a log analyzer, and so on.

We tried to roll out solutions from Ideco and IKS and eventually settled on squid. Below is the history of that path and technical details on setting up the good old squid.


Perhaps I'll start by admitting that it is, of course, strange to see an article about configuring squid on habr in 2018. Nevertheless, even today paid products can fall short on some points compared to the open source software that, in one form or another, underlies a paid product with a beautiful interface.

It all started when management made it clear that we could afford to buy an Internet billing solution.

The requirements were as follows: integration with Windows AD, full management of users from AD, a speed shaper, filtering by content type and by site list, and the ability to give the entire network access to local company resources.

The company's network has over 550 computers; most of them need access only to internal resources.

Everything was deployed in a virtual environment on a Hyper-V Core virtualization server (a wrong choice, I will explain why at the end of the article).

A little about the choice of contestants. I remember UserGate from the time I started working in IT; a Windows application is a relic of the past, so by default it did not qualify.

Internet Control Server (IKS) made it to testing. Out of 10 attempts it booted correctly only 2 times; noting its remarkable instability, we moved on. By the way, I cannot fail to mention the developers' humor, those in the know will understand! The product is evolving and perhaps the problems are already gone, but our problem had to be solved back then.

Ideco I liked: an excellent solution, but its functionality is not just Internet billing, it is a full gateway with all the bells and whistles, which was too much for us. Nevertheless, it passed full testing, and there were 2 insurmountable obstacles:

1. It is impossible to give the entire network or all domain users access to specific resources without each such user counting against the users you have to license.

1.1. A considerable price follows from point 1: our company has many computers that need to reach internal web services but do not need Internet access. We did not plan to buy licenses just for the use of internal resources, nor did we plan to set up a zoo of servers distributing the Internet.

2. The computer's IP address is rigidly bound to the username that first authenticated on the proxy, so when an employee changes you have to remove the binding manually in the admin panel, which of course does not meet the requirement to manage everything through AD.

By the way, the Ideco gateway is available in a free version for up to 40 users, without AD integration. IDECO SELECTA has also appeared; either I missed its release, or it came out after all our tests.

After all these stages it was decided to do everything on squid ourselves, adjusted to our technical requirements. What came of it, read on.

Let's start with the fact that there are no correct and complete manuals on the net; there are fragments, but the existing instructions are broken by new releases of squid.

We use ubuntu server, so the following information applies to that OS and may differ significantly on other OSes.

Everything on the command line must be done under sudo; I will not prefix every command below with sudo.

Setting up the OS ubuntu server 16.04:

 apt-get update
 apt-get upgrade
 apt-get install mc g++ libecap3-dev libdb-dev libldap2-dev libpam0g-dev libldb-dev libsasl2-dev libkrb5-dev gcc libssl-dev krb5-user libpam-krb5 libkrb5-3 libsasl2-modules-gssapi-mit linux-virtual-lts-xenial linux-tools-virtual-lts-xenial linux-cloud-tools-virtual-lts-xenial linux-image-virtual linux-tools-virtual linux-cloud-tools-virtual squid3 

Since we use Hyper-V virtualization, the corresponding guest packages are installed as well.

We download squid from the site; in this post we cover version 3.5.26, for other versions it will likely not apply verbatim. UPD: 3.5.28 configured in docker, flying fine.

 wget http://www.squid-cache.org/Versions/v3/3.5/squid-3.5.26.tar.gz 

We unpack it into home or any other directory.

 tar xzf squid-3.5.26.tar.gz
 cd /home/squid-3.5.26/ 

 chmod +x configure 

We specify which options we need; you can remove unnecessary ones or add something. Some will find the list excessive. It was taken from the distribution build of squid, with additional options added.

 ./configure '--enable-ssl' '--with-openssl=/usr/lib/ssl/' '--disable-ipv6' '--enable-ssl-crtd' '--build=x86_64-linux-gnu' '--prefix=/usr' '--includedir=${prefix}/include' '--mandir=${prefix}/share/man' '--infodir=${prefix}/share/info' '--sysconfdir=/etc' '--localstatedir=/var' '--libexecdir=${prefix}/lib/squid3' '--srcdir=.' '--disable-maintainer-mode' '--disable-dependency-tracking' '--disable-silent-rules' 'BUILDCXXFLAGS=-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' '--datadir=/usr/share/squid' '--sysconfdir=/etc/squid' '--libexecdir=/usr/lib/squid' '--mandir=/usr/share/man' '--enable-inline' '--disable-arch-native' '--enable-async-io=8' '--enable-storeio=ufs,aufs,diskd,rock' '--enable-removal-policies=lru,heap' '--enable-delay-pools' '--enable-cache-digests' '--enable-icap-client' '--enable-follow-x-forwarded-for' '--enable-auth-basic=DB,fake,getpwnam,LDAP,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB' '--enable-auth-digest=file,LDAP' '--enable-auth-negotiate=kerberos,wrapper' '--enable-auth-ntlm=fake,smb_lm' '--enable-external-acl-helpers=file_userip,kerberos_ldap_group,LDAP_group,session,SQL_session,unix_group,wbinfo_group' '--enable-url-rewrite-helpers=fake' '--enable-eui' '--enable-esi' '--enable-icmp' '--enable-zph-qos' '--enable-ecap' '--disable-translation' '--with-swapdir=/var/spool/squid' '--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid.pid' '--with-filedescriptors=65536' '--with-large-files' '--with-default-user=proxy' '--enable-build-info=Ubuntu linux' '--enable-linux-netfilter' 'build_alias=x86_64-linux-gnu' 'CFLAGS=-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wall' 'LDFLAGS=-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' 'CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2' 'CXXFLAGS=-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security'
 make
 make install 

--with-openssl=/usr/lib/ssl/ - the path to openssl; the default path on ubuntu server is given here.
--disable-ipv6 - turn off ipv6; the reasons are explained below.
--enable-ssl-crtd - needed for generating ssl certificates for bump.

There may be missing dependencies; install them as they come up.

By default everything is installed into /etc/squid/

Create a folder inside / etc / squid for ssl certificates:

 mkdir -p /etc/squid/ssl/private 

Create a certificate:
Go to the directory

 cd /etc/squid/ssl/private 

Create a key

 openssl genrsa -aes256 -out private.pem 2048 

Create a certificate (note that -newkey below generates a fresh key into private.pem, replacing the one from the previous step)

 openssl req -x509 -sha256 -nodes -days 3650 -newkey rsa:2048 -keyout private.pem -out public.pem 

Convert certificate to browser-friendly format

 openssl x509 -outform der -in public.pem -out squid3domainlocal.der 
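The req command above asks for the certificate fields interactively. As a hedged, reproducible sketch, the same step can be done non-interactively with -subj; the CN squid3.domain.local is this article's example name and must match your proxy's name:

```shell
# Non-interactive sketch of the certificate step above; the CN
# (squid3.domain.local) is this article's example proxy name.
openssl req -x509 -sha256 -nodes -days 3650 -newkey rsa:2048 \
  -subj "/CN=squid3.domain.local" \
  -keyout private.pem -out public.pem

# Quick sanity check: the subject browsers will see, and the expiry date.
openssl x509 -noout -subject -enddate -in public.pem
```

The check is worth a second of your time: a CN that does not match the proxy name is exactly the kind of mistake that surfaces much later as browser warnings.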

Create a certificate database:

 /usr/lib/squid/ssl_crtd -c -s /etc/squid/ssl/ssl_db/ 

Assign access:

 chown root:proxy -R /etc/squid/ssl
 chmod 640 -R /etc/squid/ssl/private
 chmod 660 -R /etc/squid/ssl/ssl_db 

Note that the proxy server's name and the name specified when creating the certificate must be the same, in the format squid3.domain.local.

Add the resulting squid3domainlocal.der to trusted certification authorities via group policy or manually. In the browser, specify the proxy not by ip but by the full computer name, for example squid3.domain.local.

Create a regular user in the domain, let it be squid3.

To pass authentication through kerberos, we need a keytab for the squid3 user with the principal HTTP/squid3.DOMAIN.LOCAL@DOMAIN.LOCAL. With a standard domain join via net ads a keytab /etc/krb5.keytab is created, but its principal is host, not HTTP, which makes it impossible to authenticate users through a web browser. If you put our keytab at /etc/krb5.keytab and then join the machine itself to the domain, the keytab will simply be supplemented with new principals. But note that you do not need to install the samba package or join the machine to the domain; a keytab generated for the user is enough.

Next, go to the domain controller and execute a simple command:

 ktpass -princ HTTP/squid3.DOMAIN.LOCAL@DOMAIN.LOCAL -mapuser squid3@DOMAIN.LOCAL -crypto AES128-SHA1 -pass XXXXXXXXXXXXXX -ptype KRB5_NT_PRINCIPAL -out c:\krb5.keytab 

We transfer the resulting file to the proxy server and put it in a convenient place; I chose /etc/krb5.keytab.

If you also want authorization for a web site, statistics, or the company's internal portal, create a group and include the proxy and www-data users in it.

Create a group:

 groupadd allowreadkeytab 

Add the required users to the group:

 adduser proxy allowreadkeytab
 adduser www-data allowreadkeytab 

Assign owners to krb5.keytab

 chown root:allowreadkeytab /etc/krb5.keytab 

If no additional services need access, we do not create a group; we simply set the owner and rights:

 chown root:proxy /etc/krb5.keytab 

Assign access:

 chmod 640 /etc/krb5.keytab 

We get:

 -rw-r----- 1 root allowreadkeytab /etc/krb5.keytab 

Or

 -rw-r----- 1 root proxy /etc/krb5.keytab 

Read and write for root, read only for allowreadkeytab and no access for others.

Configuring krb5.conf

 mcedit /etc/krb5.conf 

krb5.conf
  [libdefaults]
        krb4_config = /etc/krb.conf
        krb4_realms = /etc/krb.realms
        kdc_timesync = 1
        ccache_type = 4
        forwardable = true
        proxiable = true
        default_tgs_enctypes = aes256-cts-hmac-sha1-96 rc4-hmac des-cbc-crc des-cbc-md5
        default_tkt_enctypes = aes256-cts-hmac-sha1-96 rc4-hmac des-cbc-crc des-cbc-md5
        permitted_enctypes = aes256-cts-hmac-sha1-96 rc4-hmac des-cbc-crc des-cbc-md5
        default_keytab_name = FILE:/etc/krb5.keytab
        v4_instance_resolve = false
        v4_name_convert = {
                host = { rcmd = host ftp = ftp }
                plain = { something = something-else }
        }
        fcc-mit-ticketflags = true

  [realms]
        DOMAIN.LOCAL = {
                kdc = DC1.DOMAIN.LOCAL
                kdc = DC2.DOMAIN.LOCAL
                admin_server = DC1.DOMAIN.LOCAL
                admin_server = DC2.DOMAIN.LOCAL
                default_domain = DOMAIN.LOCAL
        }

  [domain_realm]
        .domain.local = DOMAIN.LOCAL
        domain.local = DOMAIN.LOCAL

  [login]
        krb4_convert = true
        krb4_get_tickets = false


We save.

Note that the squid.conf below will not contain all acls and all the rules; they are configured only as examples. A full configuration of acls, site access lists and so on would be too voluminous. Treat the following configuration as something to adapt to your needs.

Go to the squid configuration:

 mcedit /etc/squid/squid.conf 

 acl SSL_ports port 443
 acl Safe_ports port 80
 acl Safe_ports port 21
 acl purge method PURGE
 acl CONNECT method CONNECT
 http_access allow purge localhost
 http_access deny purge
 http_access deny CONNECT !SSL_ports 

An important point here: there are sites and applications that establish a connection directly from the computer without performing user authentication, and as a result the connection is blocked. To work around this, access is granted from a specific ip to a specific site.

!!! Important note !!! This rule must be placed above the rules with basic, ntlm, kerberos, etc. authentication.

 acl authip src "/etc/squid/pools/ip.txt"
 acl domainautip dstdomain "/etc/squid/exceptions/domain.txt"
 http_access allow authip domainautip
 http_reply_access allow authip domainautip 

We define acl:

→ Documentation

Acl to determine the type of content:

 acl application_mime rep_mime_type application/octet-stream
 acl video_mime rep_mime_type "/etc/squid/ban/mime_type_video.txt"

mime_type_video.txt
video/mpeg
video/mp4
video/ogg
video/quicktime
video/webm
video/x-ms-wmv
video/x-flv
video/3gpp
video/3gpp2
video/avi
video/msvideo
video/x-msvideo
video/x-dv
video/dl
video/x-dl
video/vnd.rn-realvideo

You can also filter some content by url, for this we create acl:

acl blockextention urlpath_regex -i "/etc/squid/ban/blockextention.txt"

blockextention.txt
\.snapshot$
\.windows$
\.mac$
\.zfs$
\.action$
\.apk$
\.app$
\.bat$
\.bin$
\.cmd$
\.com$
\.command$
\.cpl$
\.csh$
\.exe$
\.gadget$
\.inf1$
\.ins$
\.inx$
\.ipa$
\.isu$
\.job$
\.ksh$
\.msc$
\.msi$
\.msp$
\.mst$
\.osx$
\.out$
\.paf$
\.reg$
\.rgs$
\.run$
\.sct$
\.sh$
\.shb$
\.shs$
\.u3p$
\.vb$
\.vbe$
\.vbs$
\.vbscript$
\.workflow$
\.ws$
\.wsf$
\.inf$
\.cpp$
\.msu$
\.pif$
\.7z$
\.ace$
\.arj$
\.cab$
\.cbr$
\.deb$
\.gz$
\.gzip$
\.jar$
\.one$
\.pak$
\.ppt$
\.rpm$
\.sib$
\.sis$
\.sisx$
\.sit$
\.sitx$
\.spl$
\.tar$
\.tar-gz$
\.tgz$
\.xar$
\.zipx$
\.asf$
\.asm$
\.c$
\.cfm$
\.cgi$
\.class$
\.cs$
\.dot$
\.dtd$
\.fla$
\.ged$
\.gv$
\.h$
\.icl$
\.java$
\.jse$
\.kml$
\.lua$
\.m$
\.mb$
\.mdf$
\.mod$
\.obj$
\.pkg$
\.pl$
\.po$
\.pot$
\.ps1$
\.pub$
\.py$
\.rss$
\.sln$
\.so$
\.sql$
\.ts$
\.vc4$
\.vcproj$
\.vcxproj$
\.wsc$
\.xcodeproj$
\.xsd$
\.torrent$
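The urlpath_regex patterns above are ordinary case-insensitive regular expressions matched against the URL path. A quick, hedged way to sanity-check a pattern outside squid (grep -i -E approximates squid's -i matching; the sample paths are fabricated):

```shell
# Hypothetical sample paths checked against one pattern from the list:
# only the two .exe paths should match, regardless of case.
printf '%s\n' '/files/setup.exe' '/docs/readme.txt' '/files/SETUP.EXE' \
  | grep -i -E '\.exe$'
```

This catches escaping mistakes in the pattern file before they silently stop matching in squid.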

There is also a curious acl, allowerrorsert: since by default we do not allow access to sites with broken certificates, I use allowerrorsert as the list of allowed sites with "crooked" ssl. More on this a little below.

 acl banksites dstdomain "/etc/squid/allow/bank.txt"
 acl allofficesites dstdomain "/etc/squid/allow/alloffice.txt"
 acl manual dstdomain "/etc/squid/ban/manual.txt"
 acl allowerrorsert dstdomain "/etc/squid/exceptions/allowerrorsert.txt" 

It is also possible to control access to sites with ssl rules, but in my opinion it is more efficient to manage it via http_access. Here is an example acl for use in ssl rules:

 acl sslproxy ssl::server_name "/etc/squid/ban/proxy.txt" 

Below we will return to this type of acl and their application.

The following allows you to see POST requests and mime headers in extended form in the log:

 strip_query_terms off
 log_mime_hdrs on 

Authentication and authorization of a user against an Active Directory group via kerberos:

 auth_param negotiate program /usr/lib/squid/negotiate_kerberos_auth -s HTTP/squid3.domain.local@DOMAIN.LOCAL
 auth_param negotiate children 20 startup=10 idle=10
 auth_param negotiate keep_alive on 

It is worth stopping here for more detail: children is the maximum number of helper processes that can be started, startup is the number launched right away, and idle is the number of spare helpers squid tries to keep available; when the queue exceeds what the spare helpers can absorb, another helper process is started.

A small digression on how authorization works:

There is a peculiarity here. Some sites pull in a wagonload of resources and pictures from other sites, collect piles of statistics and so on, and every request passes through authorization. This can build a long queue to the authorization helper. You could simply raise children and raise idle, but only at first glance: a single user can generate several tens of thousands of requests, which means a long queue anyway, and when a large queue appears the CPU load goes off the scale. In our conditions, with a large number of PCs and a small share of users with full Internet access, chrome installed on one PC created a surprising number of direct connections: 500 thousand requests to clients1.google.com per day. The result was peaks in the queues.

Details of the solution are at the end of the article, where some technical aspects of solving the problems encountered during the debugging process will be described.

Search for a user in a group:

 external_acl_type domainusers ttl=300 negative_ttl=60 ipv4 %LOGIN /usr/lib/squid3/ext_kerberos_ldap_group_acl -a -T d09fd0bed0bbd18cd0b7d0bed0b2d0b0d182d0b5d0bbd0b820d0b4d0bed0bcd0b5d0bdd0b0 -D DOMAIN.LOCAL
 external_acl_type allow-all ttl=300 negative_ttl=60 ipv4 %LOGIN /usr/lib/squid3/ext_kerberos_ldap_group_acl -a -g internet-allow-all -D DOMAIN.LOCAL 

The two lines above perform one function: they load the helper that looks a user up in a group. You can run it yourself on the command line: /usr/lib/squid3/ext_kerberos_ldap_group_acl -a -g internet-allow-all -D DOMAIN.LOCAL, press enter and type a user name; if the user is found in the specified group the answer is OK, otherwise ERR. Note that the internet-allow-all group was created in AD.

If you noticed, the two lines differ: one contains an incomprehensible set of letters and digits, in the other everything is clear. The first line refers to the Domain Users group; not wanting to deal with Cyrillic in the squid config and in the helper's operation, I encoded it this way, the only group in AD tied to this service whose name is written in Cyrillic. The switch also changes: from g, which means group, to T.

I promised to tell why I disabled ipv6. It was a long story: a user could not log in because I had not specified ipv4 in the external_acl_type line. We do not use ipv6, and very few people use it on local networks, so it was decided to disable it altogether to avoid such problems. It has no effect on Internet surfing either.

Speed ​​limit groups:

 external_acl_type disable-speed ttl=300 negative_ttl=60 ipv4 %LOGIN /usr/lib/squid3/ext_kerberos_ldap_group_acl -a -g internet-deny-speed -D DOMAIN.LOCAL
 external_acl_type allow-speed ttl=300 negative_ttl=60 ipv4 %LOGIN /usr/lib/squid3/ext_kerberos_ldap_group_acl -a -g internet-allow-speed -D DOMAIN.LOCAL 

internet-allow-speed and internet-deny-speed are groups created in AD.

Since groups and users come from an external helper, we need to define acls in squid syntax for use in http_access, etc.

 acl domainusers external domainusers
 acl allow-all external allow-all
 acl allow-speed external allow-speed
 acl disable-speed external disable-speed 

Next come the allow and deny rules. As usual, the rules work as a chain: whatever is higher takes precedence.

 http_access allow localhost
 http_access deny manual
 http_reply_access deny application_mime
 http_access allow allow-all
 http_reply_access allow allow-all
 http_access allow domainusers banksites
 http_access deny domainusers 

Here bump begins. In the http_port line we specify the port and the ssl-bump option, then enable certificate generation, then the certificate cache size, then the certificate itself (the one added as a trusted certification authority on domain computers), then the key.

The scheme works like this: the client goes to google.com and establishes an ssl connection with the proxy, and the proxy in turn establishes ssl with the site; the proxy keeps one ssl session with the site and a separate one with the client, acting as an intermediary.

This is the scheme with a full bump of the connection. You can also decrypt only one of the sides rather than the whole exchange, but I have not found a use for that, so we do not use it. Besides, only this scheme lets you see all traffic in the open as http.

 http_port 3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=16MB cert=/etc/squid/ssl/private/public.pem key=/etc/squid/ssl/private/private.pem 

Settings assistant that generates ssl certificates for sites:

 sslcrtd_program /usr/lib/squid/ssl_crtd -s /etc/squid/ssl/ssl_db -M 16MB
 sslcrtd_children 20 startup=10 idle=10
 visible_hostname squid3.domain.local 

We create acls for the bump steps; there are only 3. At step1 squid looks at the openly available information, the part accessible to everyone.

At step2 it establishes the connection to the site, at step3 the connection to the client.

 acl step1 at_step SslBump1
 acl step2 at_step SslBump2
 acl step3 at_step SslBump3 

We specify the acls that will be treated as exceptions when working with ssl_bump:

 acl sslbanksites ssl::server_name "/etc/squid/exceptions/bank.txt"
 acl allowsplice ssl::server_name "/etc/squid/exceptions/allowsplice.txt" 

bank.txt and allowsplice.txt contain domain names.

This rule allows accepting certificates with errors, i.e. expired, self-signed, issued to another host, etc. We created the acl for this rule above.

 sslproxy_cert_error allow allowerrorsert 

splice - skip all subsequent actions, i.e. do not bump; pass the connection through as is.
peek - look at the available info without a full bump.
terminate - close the connection; we do not use it, we filter via http_access.
bump - gets into the connection, makes https visible as http.

 ssl_bump splice allowsplice
 ssl_bump splice sslbanksites
 ssl_bump peek step1 all
 ssl_bump bump step2 all
 ssl_bump bump step3 all 

We close access to everyone else.

 http_access deny all
 icp_access deny all
 htcp_access deny all 

Other settings

 cache deny all
 error_directory /etc/squid/errors/
 forwarded_for off 

We cut the speed; specify how many delay pools we use:

 delay_pools 3 

VIP users, favorite sites without speed limits

 delay_class 1 1
 delay_access 1 allow allow-speed
 delay_access 1 allow banksites
 delay_parameters 1 -1/-1
 delay_access 1 deny all 

After hours the Internet is throttled (the 10000/10000 restore rate below is about 10 KB/sec):

 delay_class 2 2
 delay_access 2 allow !workhours
 delay_parameters 2 -1/-1 10000/10000
 delay_access 2 deny all 

Download restriction: the first 10MB of a download use the entire channel without restrictions, above that only the restore rate of 32000 bytes/sec (about 31 KB/s):

 delay_class 3 2
 delay_access 3 allow disable-speed
 delay_parameters 3 -1/-1 32000/10485760
 delay_access 3 deny all 
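For reference, how I read the delay_parameters syntax per the squid documentation: each pair is restore/maximum in bytes, -1 means unlimited, and for a class 2 pool the first pair is the aggregate bucket and the second the per-host bucket. The pool above, annotated (a sketch of the same lines, comments are mine):

```
# class 2 pool: first pair = aggregate, second pair = per-host bucket
delay_class 3 2
delay_access 3 allow disable-speed
# aggregate unlimited; per host: bucket of 10485760 bytes (10MB) is
# downloaded at full speed, then refills at 32000 bytes/s
delay_parameters 3 -1/-1 32000/10485760
delay_access 3 deny all
```

So "the first 10MB are fast" is really "until the bucket is drained"; a host that keeps downloading stays at the restore rate until it pauses long enough for the bucket to refill.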

In the log format the letter a is changed to capital A, here: %6tr %>A. This makes it possible to see the computer's name in the logs instead of its ip address, which of course is more convenient.

 logformat squid %ts.%03tu %6tr %>A %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt 

A few words about the problems and peculiarities that came up.

The proxy server sits in a separate dmz, and the firewall restricts access to and from the dmz. Since squid constantly polls dns and kerberos, predominantly over udp, it immediately exceeded the allowed number of connections from one ip to the AD server, which is in another dmz, and the connections were dropped. The problem was not obvious: the authorization helper failed and the client got an authentication prompt.

The error looks like this:

support_krb5.cc(64): pid=36139: 2017/10/24 08:53:51| kerberos_ldap_group: ERROR: Error while initializing credentials from keytab: Cannot contact any KDC for realm 'DOMAIN.LOCAL'

We solved the problem by raising bind on the proxy server; the number of requests dropped significantly. We could also have lifted the restrictions on the firewall, which in fact was done, but bind is still a good idea that significantly reduces the number of connections.

There was 1 more error:

support_sasl.cc(276): pid=8872: 2017/10/24 06:26:31| kerberos_ldap_group: ERROR: ldap_sasl_interactive_bind_s error: Local error
support_ldap.cc(957): pid=8872: 2017/10/24 06:26:31| kerberos_ldap_group: ERROR: Error while binding to server with SASL/GSSAPI: Local error

In bind you also need to replicate the reverse zone.

UPD - The Most Interesting

There was a problem with high cpu and io load: the cpu was mostly eaten by negotiate_kerberos, io by ext_kerberos_ldap_group_acl (clearly negotiate_kerberos was invoking ext_kerberos_ldap_group_acl). The load was not constant: twice a day, for 30 minutes.

Changing the ratio of children to idle did not give the desired result. Debugging produced a clear picture: in any configuration, during the peak period the maximum number of authentication processes was running. We analyzed access.log and saw that at the moment of peak load there were a lot of ssl connections, which suggested that the problem lay not in authorization but in ssl_bump. For the experiment ssl_bump was turned off, and the load disappeared completely for the whole day. In general, during the day squid and its helpers gave no cause for complaint, but at certain moments a huge number of connections arrived. Dry numbers: from one computer, in a unit of time (5-15 min), 10,000 requests came for an ssl connection falling under the bump rule. Another day the same thing happened from another computer, to *.whatsapp.net.

Ultimately, ssl_bump is enabled and works without complaint. Peaks appear when there are many requests to a host that is unreachable and times out. The queue was mostly reduced by excluding clients1.google.com and clients2.google.com from the proxy.

Whether to give access to clients1.google.com and clients2.google.com, disable the update task, or exclude these hosts from the proxy is up to you.
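If you handle it inside squid rather than in client settings, one hedged option is to at least splice those hosts so they are never bumped; the host names are the ones from this article, and like the other exceptions the rule must sit above the bumping rules (to skip authentication too, the hosts would also need an allow rule above the authenticating ones, like the ip.txt exception earlier):

```
# Sketch: pass these hosts through without bumping
# (or simply add them to allowsplice.txt).
acl google_clients ssl::server_name clients1.google.com clients2.google.com
ssl_bump splice google_clients
```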

Regarding hyper-v: in general everything works stably and uptime usually exceeds two months, but then a day comes when, completely out of the blue, with no errors in the logs and no load, the virtual machines hang or reboot, and the subsequent boot does not reach a working state. You have to do a reset, after which the machine boots normally. Two ubuntu server 16.04 virtual machines run on that server, and both exhibit one and the same problem, a few days apart, then again at least 2 months of uptime. To get away from this we are moving squid into docker; I will write the next article about setting up squid in docker. In general it differs little, apart from a whole heap of dependencies.

Bind setting:

  nano /etc/bind/named.conf.options 

We edit and paste:

 zone "domain.local" {
     type slave;
     masters { 192.168.XX.XX; 192.168.XX.XX; };
     file "bak.domain.local";
     forwarders {};
 };

 zone "XX.168.192.in-addr.arpa" {
     type slave;
     masters { 192.168.XX.XX; 192.168.XX.XX; };
     file "XX.168.192.in-addr.arpa.zone";
 };

Log Analyzer:

Squidanalyzer

→ Site
→ Instructions: one and two

For it to work, you need to install apache2:

 apt-get install apache2 

I will not describe how to set it up; the links explain it quite clearly and accessibly. I will note only one thing: until the first report is generated, nothing appears at the web address, you will get an error.

As soon as the first report is generated, you get the cherished report page.
It should be noted that the report page can be styled for your company: change logos, captions, background, etc. Part of the changes go into the main config:

/etc/squidanalyzer/squidanalyzer.conf

And part into the perl module that serves as the template for /usr/bin/squid-analyzer:

/usr/local/share/perl/5.22.1/SquidAnalyzer.pm

The article was written intermittently, periodically supplemented and corrected, I hope it will be useful.

Below is a listing of the cleaned config. Use it as a sample, not for copy-paste: copying it will not give you a working setup, you need to create the files listed in the acls, fill them in, etc.

During debugging, awk helped a lot: a command that prints one column and groups its values (substitute the number of the field you need, here for example the client column):

  cat /var/log/squid/access.log | awk '{print $3}' | cut -d: -f1 | sort | uniq -c | sort -n 

You can add grep.
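To see what the pipeline does without a real access.log, here is a toy run on fabricated lines (the third field plays the role of the client column):

```shell
# Fabricated log lines; the pipeline counts occurrences per client column
# and sorts the busiest client to the bottom.
printf '%s\n' \
  '1.1 10 host-a TCP_MISS/200' \
  '1.2 12 host-b TCP_MISS/200' \
  '1.3 15 host-a TCP_MISS/200' \
  | awk '{print $3}' | sort | uniq -c | sort -n
```

The output shows host-b once and host-a twice, so the noisiest client ends up on the last line.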

To convert the date and time format in the squid log, you can use the following perl script:

 #!/usr/bin/perl -p
 s/^\d+\.\d+/localtime $&/e 

Save it to a file, say time. Then copy or save the desired piece of access.log and run:

 perl time filename.log > time-filename.log 

and so on.

squid.conf
 acl SSL_ports port 443
 acl SSL_ports port 80
 acl Safe_ports port 88
 acl Safe_ports port 443
 acl purge method PURGE
 acl CONNECT method CONNECT
 acl blockip src "/etc/squid/ban/blockip.txt"
 http_access deny blockip
 http_reply_access deny blockip
 acl allnet src 192.168.XX.0/18
 acl allnet src 192.168.0.0/24
 acl javaapletclient src "/etc/squid/pools/javaaplet.txt"
 acl javaapletdomain dstdomain "/etc/squid/exceptions/javaaplet.txt"
 acl microsoftcrt url_regex -i "/etc/squid/exceptions/microsoftCRT.txt"
 http_access allow javaapletclient javaapletdomain
 http_access allow allnet microsoftCRT
 http_reply_access allow allnet microsoftCRT
 http_access deny allnet manual
 http_access allow purge localhost
 http_access deny purge
 http_access deny CONNECT !SSL_ports
 acl application_mime rep_mime_type "/etc/squid/ban/mime_type_application.txt"
 acl audio_mime rep_mime_type "/etc/squid/ban/mime_type_audio.txt"
 acl video_mime rep_mime_type "/etc/squid/ban/mime_type_video.txt"
 acl blockextention urlpath_regex -i "/etc/squid/ban/blockextention.txt"
 acl blockextention2 urlpath_regex -i "/etc/squid/ban/blockextention2.txt"
 acl allowextention urlpath_regex -i "/etc/squid/exceptions/allowextention.txt"
 acl others src 192.168.XX.0/20 192.168.XX.0/18 192.168.XX.0/24
 acl localnet dst 192.168.0.0/24
 acl workhours time 7:00-18:59
 strip_query_terms off
 log_mime_hdrs on
 acl manual_reg url_regex -i "/etc/squid/ban/manual_url.txt"
 acl banner_reg url_regex -i "/etc/squid/ban/adv/urls"
 acl dating_reg url_regex -i "/etc/squid/ban/dating/urls"
 acl redirector_reg url_regex -i "/etc/squid/ban/redirector/urls"
 acl porno_reg url_regex -i "/etc/squid/ban/porn/urls"
 acl shopping_reg url_regex -i "/etc/squid/ban/shopping/urls"
 acl socialnet_reg url_regex -i "/etc/squid/ban/socialnet/urls"
 acl spyware_reg url_regex -i "/etc/squid/ban/spyware/urls"
 acl allowerrorsert dstdomain "/etc/squid/exceptions/allowerrorsert.txt"
 auth_param negotiate program /usr/lib/squid/negotiate_kerberos_auth -s HTTP/squid3.DOMAIN.local@DOMAIN.LOCAL
 auth_param negotiate children 50 startup=15 idle=15
 auth_param negotiate keep_alive on
 external_acl_type domainusers ttl=300 negative_ttl=60 ipv4 %LOGIN /usr/lib/squid3/ext_kerberos_ldap_group_acl -a -T d09fd0bed0bbd18cd0b7d0bed0b2d0b0d182d0b5d0bbd0b820d0b4d0bed0bcd0b5d0bdd0b0 -D DOMAIN.LOCAL
 external_acl_type allow-all ttl=300 negative_ttl=60 ipv4 %LOGIN /usr/lib/squid3/ext_kerberos_ldap_group_acl -a -g internet-allow-all -D DOMAIN.LOCAL
 external_acl_type allow-speed ttl=300 negative_ttl=60 ipv4 %LOGIN /usr/lib/squid3/ext_kerberos_ldap_group_acl -a -g internet-allow-speed -D DOMAIN.LOCAL
 external_acl_type standart ttl=300 negative_ttl=60 ipv4 %LOGIN /usr/lib/squid3/ext_kerberos_ldap_group_acl -a -g internet-standart -D DOMAIN.LOCAL
 external_acl_type bankusers ttl=300 negative_ttl=60 ipv4 %LOGIN /usr/lib/squid3/ext_kerberos_ldap_group_acl -a -g internet-bank -D DOMAIN.LOCAL
 external_acl_type disable-speed ttl=300 negative_ttl=60 ipv4 %LOGIN /usr/lib/squid3/ext_kerberos_ldap_group_acl -a -g internet-deny-speed -D DOMAIN.LOCAL
 external_acl_type allowformat ttl=300 negative_ttl=60 ipv4 %LOGIN /usr/lib/squid3/ext_kerberos_ldap_group_acl -a -g internet-allowFormat -D DOMAIN.LOCAL
 external_acl_type denyformat ttl=300 negative_ttl=60 ipv4 %LOGIN /usr/lib/squid3/ext_kerberos_ldap_group_acl -a -g internet-denyFormat -D DOMAIN.LOCAL
 acl domainusers external domainusers
 acl allow-all external allow-all
 acl allow-speed external allow-speed
 acl standart external standart
 acl bankusers external bankusers
 acl disable-speed external disable-speed
 acl allowformat external allowformat
 acl denyformat external denyformat
 http_access deny blockextention denyformat
 http_access deny blockextention2 allowformat
 http_access deny localnet others
 http_access deny spyware
 http_access deny spyware_reg
 http_access deny porno
 http_access deny porno_reg
 http_access deny ra
 http_access deny proxy
 http_access deny other
 http_access deny banner
 http_access deny banner_reg
 http_access deny dating
 http_access deny dating_reg
 http_access deny redirector
 http_access deny redirector_reg
 http_access deny standart audiovideo
 http_access deny standart shopping
 http_access deny standart shopping_reg
 http_access deny standart socialnet
 http_reply_access deny denyformat application_mime
 http_reply_access allow allowformat application_mime
 http_access deny manual
 http_reply_access allow all
 http_access allow localhost
 http_access allow allow-all
 http_access allow standart
 http_access allow bankusers banksites
 http_access allow domainusers allofficesites
 http_access deny domainusers !allofficesites
 http_port 3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=100MB cert=/etc/squid/ssl/private/public.pem key=/etc/squid/ssl/private/private.pem
 sslcrtd_program /usr/lib/squid/ssl_crtd -s /etc/squid/ssl/ssl_db -M 100MB
 visible_hostname squid3.DOMAIN.local
 sslcrtd_children 70 startup=5 idle=10
 acl step1 at_step SslBump1
 acl step2 at_step SslBump2
 acl step3 at_step SslBump3
 acl sslbanksites ssl::server_name "/etc/squid/exceptions/bank.txt"
 acl allowsplice ssl::server_name "/etc/squid/exceptions/allowsplice.txt"
 sslproxy_cert_error allow allowerrorsert
 ssl_bump splice allowsplice
 ssl_bump splice sslbanksites
 ssl_bump peek step1 all
 ssl_bump bump step2 all
 ssl_bump bump step3 all
 http_access deny all
 icp_access deny all
 htcp_access deny all
 cache deny all
 cache_mgr support@DOMAIN.COM
 negative_ttl 10 seconds
 hosts_file /etc/hosts
 error_directory /etc/squid/errors/
 forwarded_for off
 delay_pools 3
 delay_class 1 1
 delay_access 1 allow allow-speed
 delay_access 1 allow allofficesites
 delay_access 1 allow allowspeeddomain
 delay_parameters 1 -1/-1
 delay_access 1 deny all
 delay_class 2 2
 delay_access 2 allow !allow-speed
 delay_access 2 allow !allowspeeddomain
 delay_access 2 allow !workhours
 delay_parameters 2 -1/-1 625000/625000
 delay_access 2 deny all
 delay_class 3 2
 delay_access 3 allow disable-speed
 delay_parameters 3 -1/-1 320000/10485760
 delay_access 3 deny all
 deny_info ERR_ACCESS_DENIED_BANNERS banner banner_reg
 deny_info ERR_ACCESS_DENIED_DATING dating dating_reg
 deny_info ERR_ACCESS_DENIED_REDIRECTOR redirector redirector_reg
 deny_info ERR_ACCESS_DENIED_PORNO porno porno_reg
 deny_info ERR_ACCESS_DENIED_SOCIALNET socialnet socialnet_reg
 deny_info ERR_ACCESS_DENIED_SPYWARE spyware spyware_reg
 deny_info ERR_ACCESS_DENIED_MANUAL manual manual_reg
 deny_info ERR_ACCESS_DENIED_AUDIOVIDEO audiovideo
 deny_info ERR_ACCESS_DENIED_BLOKEXTENTION blockextention
 deny_info ERR_ACCESS_DENIED_BLOKEXTENTION2 blockextention2
 logformat squid %ts.%03tu %6tr %>A %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt 

Source: https://habr.com/ru/post/347212/

