
DNS server on the local interface, or a simple path to love between programmers and administrators

I have a heterogeneous network of about 40 Linux servers and several living, breathing PHP programmers. For a long time I asked them to consolidate the addresses of the databases, web servers, memcached, redis and sphinx into a single unified configuration file. But today a brilliant idea struck me. It was prompted by the inability to quickly take a server out of service and back in, or hand it over to technical support for an upgrade. Of course, when there are few servers you can wait a couple of hours, drink tea and chat with your beloved, but with a large number of machines this makes work very difficult.

The idea is this: delegate to each of the local machines a zone describing all internal resources, with a record lifetime of only a few seconds. On any change (new hardware added to a cluster, old machines decommissioned, addressing within existing clusters reshuffled, and so on) it is enough to edit the zone file on the dns-master server and let it propagate to the slaves, that is, to our working machines. To me personally the idea still seems ingenious in its simplicity.
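The application side of this scheme fits in a few lines: instead of hardcoding an IP, the client resolves the service name on every (re)connect, so a zone edit takes effect within the record's TTL. A minimal sketch; the `resolve_service` helper is my own naming, and the demo resolves `localhost` so it runs on any machine, while in production the argument would be a name from the zone such as `mysqlcluster00.service.noc.corp.ru`:

```python
import socket

def resolve_service(name):
    """Resolve a service name to its current list of IPv4 addresses.

    With a record TTL of a few seconds, calling this on every
    (re)connect picks up zone changes almost immediately.
    """
    _, _, addrs = socket.gethostbyname_ex(name)
    return addrs

# In production this would be e.g. "mysqlcluster00.service.noc.corp.ru";
# here we resolve "localhost" so the sketch works without the DNS setup
# described in this article.
addrs = resolve_service("localhost")
print(addrs)
```

The point is that the lookup happens at connect time, not at config-load time: the programmers' configs can keep the same service names forever.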

The technical implementation is extremely simple. I will give a few lines of configs; I think administrators with a couple of dozen servers will see what's what, but I will describe it in detail on request.

We come up with a beautiful name for our zone and delegate it to the server that will become our new dns-master.
 [root@isp4 manualzone]# cat corp.ru | grep service.noc
 service.noc          NS    admin.service.noc
 admin.service.noc    IN A  AA.BB.CC.74

On the new server:
named.conf:

 acl "trust" { 127.0.0.1; AA.BB.CC.0/24; };

 options {
     check-names master ignore;
     allow-transfer { any; };
     allow-notify { any; };
     allow-query { any; };
     directory "/var/lib/named";
     dump-file "/var/log/named_dump.db";
     statistics-file "/var/log/named.stats";
     listen-on { AA.BB.CC.74; 127.0.0.1; };
     query-source address * port 53;
     transfer-source * port 53;
     notify-source * port 53;
     notify yes;
     disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";
 };


Well, of course, our zone file, in which we describe all the services that interest us. Do not forget to adjust the lifetime of records in accordance with their needs:
 SER34:/var/lib/named/master # cat service.noc.corp.ru
 $TTL 5
 @   5   IN SOA admin.service.noc.corp.ru. root.service.noc.corp.ru. (
             2012082004 ; serial
             5s         ; refresh
             15s        ; retry
             7d         ; expire
             5s         ; minimum ttl
         )
     IN A  AA.BB.CC.0
     IN NS admin.service.noc.corp.ru.
     IN NS W02.service.noc.corp.ru.
 ADMIN   IN A AA.BB.CC.74
 W02     IN A AA.BB.CC.22
 ;WEB
 WEBNODE IN A AA.BB.CC.0
 WEBNODE IN A AA.BB.CC.1
 WEBNODE IN A AA.BB.CC.2
 WEBNODE IN A AA.BB.CC.3
 WEBNODE IN A AA.BB.CC.4
 WEBNODE IN A AA.BB.CC.5
 WEBNODE IN A AA.BB.CC.6
 WEBNODE IN A AA.BB.CC.7
 ;MYSQL REPLICATE CLUSTER
 MYSQLCLUSTER00 IN A AA.BB.CC.50
 MYSQLCLUSTER00 IN A AA.BB.CC.51
 MYSQLCLUSTER00 IN A AA.BB.CC.52
 MYSQLCLUSTER00 IN A AA.BB.CC.53
 ;MYSQL MASTER ADMIN
 MYSQLCLUSTER_ADMIN IN A AA.BB.CC.70
 ;MYSQL MASTER INTERACTIVE
 MYSQLCLUSTER_INTERACTIVE IN A AA.BB.CC.54
 ;MYSQL MASTER_ONE
 MYSQLCLUSTER_ONE IN A AA.BB.CC.55
 ;SPHINX CLUSTER01
 SPHINX_CLUSTER00 IN A AA.BB.CC.56
 SPHINX_CLUSTER01 IN A AA.BB.CC.57
 SPHINX_CLUSTER02 IN A AA.BB.CC.58
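Since every node of a cluster is just another A record under the same owner name, the round-robin pools can be extracted from the zone file mechanically. A minimal sketch (my own throwaway parser, not part of the setup, and it only understands the simple `NAME IN A ADDR` lines used above; `AA.BB.CC` stands in for the real prefix):

```python
from collections import defaultdict

# A fragment of the zone file shown above.
ZONE = """\
ADMIN IN A AA.BB.CC.74
W02 IN A AA.BB.CC.22
;WEB
WEBNODE IN A AA.BB.CC.0
WEBNODE IN A AA.BB.CC.1
WEBNODE IN A AA.BB.CC.2
WEBNODE IN A AA.BB.CC.3
WEBNODE IN A AA.BB.CC.4
WEBNODE IN A AA.BB.CC.5
WEBNODE IN A AA.BB.CC.6
WEBNODE IN A AA.BB.CC.7
;MYSQL REPLICATE CLUSTER
MYSQLCLUSTER00 IN A AA.BB.CC.50
MYSQLCLUSTER00 IN A AA.BB.CC.51
MYSQLCLUSTER00 IN A AA.BB.CC.52
MYSQLCLUSTER00 IN A AA.BB.CC.53
"""

def a_records(zone_text):
    """Group A records by owner name, skipping ';' comment lines."""
    records = defaultdict(list)
    for line in zone_text.splitlines():
        line = line.strip()
        if not line or line.startswith(";"):
            continue
        parts = line.split()
        if len(parts) == 4 and parts[1] == "IN" and parts[2] == "A":
            records[parts[0]].append(parts[3])
    return records

recs = a_records(ZONE)
print(len(recs["WEBNODE"]))  # prints 8: the web pool has eight nodes
```

A name with several A records is exactly what gives the built-in round-robin balancing below.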


Now we move on to our slaves: on each one we run named on the local interface (after all, no one besides the server itself needs it) with a minimal config.
named.conf
 options {
     allow-update { AA.BB.CC.74; };
     directory "/var/lib/named";
     dump-file "/var/log/named_dump.db";
     statistics-file "/var/log/named.stats";
     listen-on port 53 { 127.0.0.1; };
     allow-query { 127.0.0.1; };
     notify no;
     disable-empty-zone "1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.IP6.ARPA";
 };

 zone "service.noc.corp.ru" {
     type slave;
     file "slave/service.noc.corp.ru";
     masters { AA.BB.CC.74; };
 };


After that, write in /etc/resolv.conf
 SRV03:/var/lib/named/slave # cat /etc/resolv.conf
 search noc.corp.ru
 nameserver 127.0.0.1


And of course the check:

 SRV03:/var/lib/named/slave # nslookup
 > webnode.service
 Server:         127.0.0.1
 Address:        127.0.0.1#53

 Name:    webnode.service.noc.corp.ru
 Address: AA.BB.CC.22
 Name:    webnode.service.noc.corp.ru
 Address: AA.BB.CC.23
 Name:    webnode.service.noc.corp.ru
 Address: AA.BB.CC.24
 Name:    webnode.service.noc.corp.ru
 Address: AA.BB.CC.25
 Name:    webnode.service.noc.corp.ru
 Address: AA.BB.CC.26
 Name:    webnode.service.noc.corp.ru
 Address: AA.BB.CC.27
 Name:    webnode.service.noc.corp.ru
 Address: AA.BB.CC.20
 Name:    webnode.service.noc.corp.ru
 Address: AA.BB.CC.21
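Note that the answer set starts from a different record than the zone file lists them in: BIND can rotate a multi-record RRset between queries (the cyclic `rrset-order` behavior), so clients that naively take the first address end up spread across the pool. A sketch of that rotation; this is my simulation of the observable effect, not BIND's code, and the address list is just the eight webnode addresses from the output above:

```python
def cyclic_answers(addrs, query_no):
    """Return the RRset rotated by query number, mimicking a cyclic
    rrset-order: each successive query sees the set shifted by one."""
    i = query_no % len(addrs)
    return addrs[i:] + addrs[:i]

pool = ["AA.BB.CC.%d" % i for i in range(20, 28)]  # the 8 webnode addresses

# A client that always takes the first answer still visits every node.
first_picks = [cyclic_answers(pool, q)[0] for q in range(8)]
print(first_picks)
```

Eight successive queries hand out eight different first addresses, which is all the load balancing this scheme relies on.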


And we do this on every node of our small but well-kept server park, and enjoy life without wondering when the dear programmers will find 5 minutes to run through the numerous configs and take this or that node out of rotation.

P.S. Do not ask why load balancers and proxies are not used; there are architectural reasons for that.
P.P.S. I spent more time writing this article than on the initial setup of the system itself. All IP addresses, hostnames and configuration files are based on real ones and have been tested in operation.

Source: https://habr.com/ru/post/151612/

