There is an old joke: admins come in two kinds, those who do not make backups yet, and those who already do. Awareness of the need for backups usually arrives only after a major personal data loss. And despite the abundance of sad stories about people who lost everything, many still hope that someone else will take care of backups for them. As a reminder of how misguided that approach is, I want to give a few examples of how people lost their data, or came close to losing it, completely unexpectedly.

My own story of a big loss happened 7-8 years ago. At the time I had a couple of small sites and a forum attached to one of them. The sites did not use a database and lived entirely on files, of which I had a local copy. But the forum... its last backup had been made when I switched engines, about a year and a half before the sad event. The server where I was hosted had four disks combined into RAID 5 for reliability. At some point one of the disks failed. Yes, the RAID 5 array kept working and carried on rustling away, but the load on the surviving disks became critical. The database did not have long to live...
While the engineers dawdled instead of quickly installing a new disk, a second one departed for a better world. The gap was only 2-3 days. And out of youth and inexperience, even knowing about the situation with the first disk, I quietly waited for it to be replaced. As a result, I had to lose the forum database in order to become wiser for the future. I suspect that many people, if not everyone, have stories like this.
There are many reasons and ways to lose data, and they differ in how predictable they are. Some are more or less expected: system failure, hacking, an admin's mistake. There have also been cases of bad faith, when hired administrators in a conflict withheld access to the data or damaged it. But there are situations that nobody expects, and those tend to bring far more serious losses.
Fire
Perhaps the most common of the unexpected causes of data loss. Despite all the fire-prevention measures, data centers have burned, are burning, and will keep burning; the only question is the scale. In top-tier data centers each server rack has its own fully isolated space with independent cooling and fire suppression. Even if something catches fire, the flames will not spread beyond the rack.
But in some data centers such isolation simply does not exist, which is why they burn very quickly: nothing stops the fire from spreading through the halls. Many probably remember the fire at hosting.ua, when people lost not only their main sites but also the backup copies stored on neighboring servers.

The photo, taken through a broken window, shows that the data center used "warehouse-style" placement of equipment, which contributed to the spread of the flames. Incidentally, cabling laid "vermicelli-style" did a fair bit to help another data center burn.

Storing backups in the same data center as the production servers has let people down more than once. I once came across a post, dated January 2008, from a man watching in horror as a data center in the United States burned with both his production and his backup servers inside. Two years later the customers of the Ukrainian data center mentioned above found themselves in the same situation, and that is when I began making backups to an independent data center in another country.
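For anyone who has not set this up yet, here is a minimal sketch of such an offsite job. It assumes a MySQL database, rsync over SSH with key-based authentication already configured, and credentials available via ~/.my.cnf; the host names and paths are hypothetical.

```python
import datetime
import subprocess

REMOTE = "backup@backup.example.net:/backups/forum"  # hypothetical offsite host in another data center
LOCAL_DUMP = "/var/backups/forum.sql.gz"             # hypothetical local dump path

def dump_database() -> None:
    """Dump the forum database to a compressed local file (MySQL assumed)."""
    with open(LOCAL_DUMP, "wb") as out:
        dump = subprocess.Popen(
            ["mysqldump", "--single-transaction", "forum"], stdout=subprocess.PIPE
        )
        subprocess.run(["gzip", "-c"], stdin=dump.stdout, stdout=out, check=True)
        dump.wait()

def push_offsite() -> None:
    """Copy the dump to an independent server over SSH, keeping a dated name."""
    stamp = datetime.date.today().isoformat()
    subprocess.run(
        ["rsync", "-az", LOCAL_DUMP, f"{REMOTE}/forum-{stamp}.sql.gz"],
        check=True,
    )

if __name__ == "__main__":
    dump_database()
    push_offsite()
```

Run it nightly from cron; the important part is not the tooling but the fact that the copies end up outside the building where the production servers live.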
Fire can break out anywhere, and no matter what super-reliability a data center promises you, hedge your bets. In July 2012 an infrastructure in Canada that held a great deal of government information (driver's licenses, car registrations, hunting and fishing licenses, as well as medical data such as records and treatment plans) was hit by an explosion and fire. Fortunately, the backup copies survived. And in August 2013 in India, a fire destroyed servers containing the personal data of 1.2 billion citizens, collected as part of a government project.
Flood
On October 29-30, 2012, Hurricane Sandy reached the US coast. The data centers of New York and New Jersey prepared to take the blow: they stocked up on fuel for the generators, arranged emergency deliveries, and mentally prepared the on-duty crews to live at work for 3-5 days. In short, they prepared for the power outages that usually accompany hurricanes. What they were not ready for was flooding.
In many data centers in the flood zone, the backup generators, their fuel tanks and pumps, and in some places the communications equipment as well, were located in basements. When the high water came, all the on-duty engineers could do was shut down the equipment cleanly and switch off the generators. The level the water reached can be judged from a photograph of a hall in one of the Verizon data centers.

By the way, this is not the only case of flooding. In September 2009, heavy rains in Turkey left the lower equipment in the Vodafone operator's server racks under water, and in July 2013 a technical facility in Toronto that houses about a hundred and fifty different providers was left without its cooling systems because of heavy rains and the power outages that followed.
"Masks Show"
Seizure of equipment "for investigation", or the shutdown of part of it by decision of government bodies, is another possible cause of data loss. It more often affects large projects. Residents of Ukraine remember the fate of Infostore, ex.ua, and the popular online store Rozetka. In Russia the same fate befell the file-sharing service iFolder.ru, whose servers were switched off as part of a search for unspecified evidence in a case opened against an unidentified person (the wording is from the press).
But do not delude yourself just because you only have a small site with a small hoster. In our not-exactly-law-governed states, they can haul away anything. There have been cases where, as part of an investigation into some pornography case, the servers of a small hosting provider with only two or three machines were seized, and seized for a long time. Unfortunately, we are not yet in Europe, where during an investigation hard drives are usually taken away for a day, all the information is copied off, and they are returned.
Unfair cooperation
Such cases are extremely rare, but they do happen. In 2010, a conflict between the companies Makhost and Oversan-Mercury led to a large number of servers being disconnected from the network. Naturally, each company tried to prove it was right and blame the other, but that made things no easier for the clients whose sites had gone dark.
The causes can be more exotic still: military action or special regimes imposed by the state, acts of terrorism, earthquakes (although in seismically active zones special technologies are used to improve the equipment's chances of survival). I suspect that if you dig through the press thoroughly enough, you will find real cases for at least some of these scenarios.
I invite readers to share in the comments their own experiences and the situations that taught them, or are still teaching them, to make backups. And to those whom life has not yet taught, I want to remind you that the safety of your data matters first and foremost to you, and that you must ensure and control it yourself, without relying on the provider, the data center, or the heavens.