In 2002, experts theorized about who was testing the strength of the weakest link in the infrastructure of the World Wide Web, and why. The answer was never found, although experts agreed that the attack was most likely a test of strength, a kind of rehearsal. The unknown attackers apparently wanted to check whether they could, if necessary, knock out the entire DNS system and "cut down" the Internet. Perhaps it was a demonstration of capabilities by someone, for someone. We do not know.
Who is behind yesterday's attack is also unknown, as are its motives, but this is the largest attack since 2002. It began on February 6 at 1:30 p.m. The three most affected root DNS servers belong to the United States Department of Defense and ICANN; they nearly stopped responding for about an hour, and a few more root servers experienced a significantly increased load. By about 2:30 p.m. the specialists had managed to set up filtering, and server load returned to near normal. However, the flood of fake DNS requests from botnets continued for another 12 hours.
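The "filtering" that brought the load back down generally means dropping traffic from abusive sources before it reaches the resolver logic. As a purely illustrative sketch (the actual mitigation used in this incident is not described in the article, and all names and thresholds below are hypothetical), a per-source token-bucket limiter is one common way to shed such a flood:

```python
import time
from collections import defaultdict

# Hypothetical illustration of per-source rate limiting, the kind of
# filtering operators can apply to drop a flood of bogus DNS queries.
# The rate and burst values are arbitrary, chosen only for the example.
class SourceRateLimiter:
    def __init__(self, rate=10.0, burst=20.0):
        self.rate = rate    # tokens (queries) refilled per second, per source IP
        self.burst = burst  # bucket capacity: maximum short-term burst allowed
        # each source IP gets its own bucket: [tokens remaining, last refill time]
        self.buckets = defaultdict(lambda: [burst, time.monotonic()])

    def allow(self, src_ip):
        """Return True if a query from src_ip should be served, False to drop it."""
        tokens, last = self.buckets[src_ip]
        now = time.monotonic()
        # refill tokens proportionally to elapsed time, capped at the burst size
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[src_ip] = [tokens - 1.0, now]
            return True
        self.buckets[src_ip] = [tokens, now]
        return False
```

A flooding bot exhausts its own bucket almost immediately and gets dropped, while a normal client querying a few times a second never notices the limiter. Real deployments at this scale do the equivalent in routers or firewalls rather than in application code.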
According to experts, this DDoS attack was weak compared to the largest DDoS attacks on commercial servers observed recently: the request flow was measured in megabytes, not gigabytes, as is often the case.
The failure of the three root DNS servers did not affect the majority of ordinary users, who noticed no slowdown of the Internet.
However, the same question now arises as five years ago: are just 13 root servers capable of serving as a truly reliable infrastructure for the World Wide Web, or are they still its weakest link? A few years ago there was an idea that it might be better to build some kind of peer-to-peer DNS system, so that the root-server function could be distributed across many machines. The idea is certainly not without flaws, and it never gained wide traction, but perhaps now it is worth returning to the discussion?
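The core of the peer-to-peer idea is spreading the root-zone lookup role across many nodes instead of 13 fixed hosts. One way to sketch that (this is a hypothetical illustration, not any concrete proposal from the article; the peer names are invented) is a consistent-hash ring, where each query name deterministically maps to one of many peers, and adding or losing a peer remaps only a small fraction of names:

```python
import hashlib
from bisect import bisect

# Hypothetical sketch: a consistent-hash ring that spreads the "root
# server" role across many peer nodes. Each peer is placed on the ring
# at several points (virtual nodes) to even out the load.
class PeerRing:
    def __init__(self, peers, replicas=100):
        self.ring = []  # list of (hash, peer), kept sorted by hash
        for peer in peers:
            for i in range(replicas):
                h = int(hashlib.sha1(f"{peer}#{i}".encode()).hexdigest(), 16)
                self.ring.append((h, peer))
        self.ring.sort()
        self.hashes = [h for h, _ in self.ring]

    def peer_for(self, name):
        # Hash the DNS name onto the ring; the next peer clockwise serves it.
        h = int(hashlib.sha1(name.lower().encode()).hexdigest(), 16)
        idx = bisect(self.hashes, h) % len(self.ring)
        return self.ring[idx][1]

# Invented peer names, for illustration only.
ring = PeerRing([f"peer{i}.example" for i in range(50)])
owner = ring.peer_for("example.com.")
```

Taking down the system would then require overwhelming a large, changing population of peers rather than a handful of well-known addresses. The flaws the article alludes to remain real, of course: in such a scheme, trust, cache poisoning, and consistency of the root zone all become harder problems than with 13 tightly controlled servers.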