Welcome to the weekly digest of security events: season one, episode two. In the previous episode we learned about self-opening typewriters, the chronic Stagefright dread in Android, and how we will no longer be tracked on the net (in fact, we will). This episode covers two seemingly unrelated news items that nevertheless have one thing in common: the point is not that someone somewhere is vulnerable, but that vulnerability sometimes comes from a reluctance to turn on security measures that are already available. The third item is not about security at all, but rather about a particular case of relations within the industry. Interestingly, all three, if you dig in, turn out to be not quite what they seem at first.
A reminder of the rules: each week the editors of the news site
Threatpost select the three most significant news items, to which I add an extended and merciless commentary. All the episodes of the series can be found
here.
Hacking hotel doors
News
They say there are humanities people and there are techies, and the two categories barely understand each other; and that a humanities person can never turn into a techie. That stereotype was refuted in its day by John Wiegand, who initially chose a career as a musician. In the 1930s he
played the piano and conducted a children's choir, until he became interested in how audio amplifiers work. In the 1940s he worked on the novelty of that era, magnetic sound recording, and in 1974 (at the age of 62, mind you) he made his main discovery.
In Wiegand wire, made of a cobalt-iron-vanadium alloy and placed in a magnetic field, the polarity of the core and the shell swap places, producing a voltage pulse. Moreover, the polarity does not change again until the next magnetization, which made it possible to use the effect for, say, hotel keys. Unlike modern cards, the ones and zeros are recorded not in a microchip but directly as a sequence of specially laid wires. Such a key is impossible to reprogram, and in its design it resembles not so much modern metro and credit cards as magnetic-stripe cards, only more reliable.
So, hacking contactless cards?
Not really. Wiegand's name is attached not only to the effect but also to a
protocol, a rather ancient one, used for exchanging the data. The protocol is in fairly bad shape. First, it was never really standardized, and many different implementations exist. Second, the card ID could originally be at most 16 bits long, which yields very few possible combinations. Third, a peculiarity of those wire-based contactless cards, invented before anyone learned to put an entire computer on a credit card, limits the key length to 37 bits; beyond that, reading reliability drops.
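To make the scale of the problem concrete, here is a minimal Python sketch of how the most common 26-bit flavor of the protocol packs a card: one even-parity bit, an 8-bit facility code, a 16-bit card number, one odd-parity bit. Layouts vary between implementations, so treat this as an illustration with made-up card values, not a reference.

```python
def encode_wiegand26(facility: int, card: int) -> list:
    """Pack an 8-bit facility code and a 16-bit card number into a
    26-bit Wiegand frame (common layout; real deployments differ)."""
    payload = [int(b) for b in format(facility, "08b") + format(card, "016b")]
    p_even = sum(payload[:12]) % 2        # even parity over first 12 payload bits
    p_odd = (sum(payload[12:]) + 1) % 2   # odd parity over last 12 payload bits
    return [p_even] + payload + [p_odd]

def decode_wiegand26(bits: list) -> tuple:
    """Recover (facility, card) from a 26-bit frame, checking both parity bits."""
    if len(bits) != 26:
        raise ValueError("expected a 26-bit frame")
    if sum(bits[:13]) % 2 != 0 or sum(bits[13:]) % 2 != 1:
        raise ValueError("parity check failed")
    facility = int("".join(map(str, bits[1:9])), 2)
    card = int("".join(map(str, bits[9:25])), 2)
    return facility, card

frame = encode_wiegand26(42, 1337)
print(decode_wiegand26(frame))  # (42, 1337)
```

With only 16 bits for the card number there are just 65,536 possible IDs per facility code, which is exactly why the "very few possible combinations" complaint is not an abstraction.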
And so, at the Black Hat conference last week, researchers Eric Evenchick and Mark Baseggio showed their device for intercepting (unencrypted) key sequences during authorization. Most interestingly, the cards have nothing to do with it: the data is stolen in transit from the card reader to the door controller, where, for historical reasons, that very same Wiegand protocol is used.
They called the device BLEKey: a small board that can be wired directly into the reader's body, say at a hotel door, and the whole process was shown to take a few seconds. After that everything is simple: read the key, wait until the real owner leaves, open the door. Or don't wait. Or don't open it. Without going into technical subtleties, the dialogue between the door and the reader/wireless key looks like this:
- Who's there?
- It's me.
- Ah, it's you. Come on in.
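The dialogue above can be boiled down to a toy model (pure Python, hypothetical card values): because the wire carries the same static bit sequence every time and the controller issues no challenge, a recorded frame is indistinguishable from a genuine swipe.

```python
class DoorController:
    """Toy controller: trusts any frame that arrives on the reader wire."""

    def __init__(self, enrolled_frames):
        self.enrolled = set(enrolled_frames)

    def receive(self, frame):
        # No nonce, no challenge-response: the controller cannot tell
        # a replayed frame from a live card presentation.
        return frame in self.enrolled

# A hypothetical enrolled card: (facility code, card number).
controller = DoorController(enrolled_frames={(42, 1337)})

genuine = (42, 1337)                 # the guest swipes their own card...
print(controller.receive(genuine))   # True

captured = genuine                   # ...a sniffer on the wire records the bits
print(controller.receive(captured))  # True: same bits, same answer, door opens
```

A protocol with freshness, where the controller issues a per-transaction challenge the card must answer, would make the captured bits worthless; plain Wiegand has nothing of the sort.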
A satisfied researcher in front of the vulnerable door model.
Everything seems clear, but there is a nuance. As usual, not all access control systems are susceptible to this attack. And even those that are can be protected without replacing the system wholesale. According to the researchers, the readers do have built-in protection against such hacks; it is just usually, ahem, turned off. Some even support the
Open Supervised Device Protocol, which makes it possible to encrypt the transmitted key sequence. These "features" go unused because, as I never tire of repeating, security is neither cheap nor easy.
Here is another interesting
study on the topic, from 2009, with technical details. Apparently the vulnerability of the cards (as opposed to the readers) was pointed out as far back as 1992, but back then it was proposed that the card be disassembled or X-rayed, for which it would, for example, have to be taken away from its owner. Now a gadget the size of a coin does the whole job. Progress!
Invulnerability in Microsoft. The subtleties of Windows Server Update Services in companies.
News
The researchers' original
whitepaper.
Windows Server Update Services lets large companies install updates centrally on a fleet of computers, using an internal server for distribution instead of an external one. And it is a very reliable and reasonably secure system. First, all updates must be signed by Microsoft. Second, communication between the corporate update server and the vendor's server is encrypted using SSL.
It is also a fairly simple system. The company's server receives the list of updates as an XML file that states what to download and how to update. And this initial exchange, as it turned out, happens in clear text. More precisely, not quite: it can be encrypted (the key word being "can"), and when deploying WSUS the administrator is strongly advised to enable encryption. But by default it is off.
It's not a total horror show: replacing the "instructions" is not easy, but if the attacker can already intercept traffic (a man-in-the-middle position has already been achieved), it is possible. Researchers Paul Stone and Alex Chapman found that substituting the instructions lets you run arbitrary code with high privileges on the system being updated. The check for a Microsoft digital signature is still performed, but any Microsoft-signed binary is accepted. For example, you can sneak in the PsExec utility from the Sysinternals suite this way, and use it to launch anything at all.
Why does this happen at all? The thing is that enabling SSL cannot be automated when deploying WSUS: a certificate has to be generated. And, as the researchers note, Microsoft here can do nothing except strongly recommend turning SSL on. So the vulnerability sort of exists, and yet sort of doesn't. Nothing can be done about it, and nobody but the admin is to blame.
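The takeaway for admins fits in one line: if the configured WSUS URL starts with http:// instead of https://, the metadata exchange is open to tampering. A small Python sketch; the hostnames are made up, and 8530/8531 are simply WSUS's usual HTTP/HTTPS ports.

```python
from urllib.parse import urlparse

# On a Windows client the configured WSUS server is stored in the registry
# under HKLM\Software\Policies\Microsoft\Windows\WindowsUpdate (WUServer).
# The vulnerability boils down to the scheme of that URL.

def wsus_channel_is_protected(wuserver_url: str) -> bool:
    """True if the WSUS update channel uses SSL/TLS."""
    return urlparse(wuserver_url).scheme.lower() == "https"

print(wsus_channel_is_protected("http://wsus.corp.example:8530"))   # False: MitM can swap instructions
print(wsus_channel_is_protected("https://wsus.corp.example:8531"))  # True: metadata goes over SSL
```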
By the way, the Flame cyber-spyware discovered by Kaspersky Lab also
used the Windows update system for infection, though in a different way: a fake proxy intercepted requests to the Microsoft server and answered them with its own files, some of which were even signed with vendor certificates.
Reverse engineering and pain
News
The original post by Oracle's CSO (Google
cache ,
another copy).
The two Black Hat presentations discussed above share one trait: the authors of the research, security experts, discovered a vulnerability in a technology or product that someone else develops. They went public with their findings, and in the case of BLEKey even released all the code and hardware designs. In general this is the security industry's standard way of interacting with the outside world, but not everyone likes the arrangement. I will deliberately refrain from passing judgment here; I'll only say that this is a very delicate topic. May one analyze someone else's code, and under what conditions? How should vulnerability information be disclosed so as not to cause harm? May one pay for the holes that are found? Legislation, criminal codes, the industry's unwritten rules: all of it comes into play.
The effect of a bull in a china shop was produced by a recent post from Oracle's Chief Security Officer, Mary Ann Davidson. Titled (in a loose translation from English) "
In fact, no , you can't", it is addressed almost entirely to the company's customers (not to the industry as a whole) who send in information about vulnerabilities they find in the vendor's products. The post, published on Oracle's blog on August 10, could be quoted by the paragraph, but the gist is this: if the customer could not have obtained the vulnerability information other than through reverse engineering, the customer has violated the license agreement, and that is not good.
Quote:
The customer can't analyze all the code and make sure there is an algorithm that blocks the potential attack some scanner is reporting... The customer can't create a patch to fix the problem; only the vendor can. The customer absolutely violates the license agreement by using a static analysis tool.
The public's reaction looked like
this :
Or
so :
Or even
like this :
In short, the post hung around for no more than a day before being deleted for "inconsistency with [our official] views on customer interaction" (but the Internet remembers everything). Let me remind you that Oracle develops Java, whose vulnerabilities everyone and their dog has exploited. Three years ago we
calculated the number of vulnerabilities found in Java over 12 months and got 160 (!) of them. Perhaps in an ideal world only the software's own developer should be looking for and closing holes in it. In the real world, might it not sometimes happen that the people responsible for this work on the "bees against honey" principle?
But look at it from the other side. Last week Jeff Moss, the founder of that same Black Hat,
spoke about software developers' responsibility for the holes in their products. It's time, he says, to strike from EULAs all those lines saying the company owes its customers nothing. An interesting statement, but no less grandiose than "let's ban disassemblers". So far only one thing is clear: if users (corporate and ordinary), vendors, and researchers ever do reach an agreement, it clearly won't be through loud statements and jokes on Twitter.
What else happened:
Another
presentation from Black Hat, about hacking the Square Reader, the plastic-card reader that plugs into a smartphone so you can pay the courier for your sushi delivery. Requires a soldering iron.
Some (not all) Lenovo laptops were once again found to contain a vendor-installed
rootkit . Here is the previous
story .
Antiquities:
The "Small" family
Resident viruses; they are usually written to the end of COM files as the files are loaded into memory ("Small-114, -118, -122" are exceptions: they go at the beginning). Most viruses in the family use the POPA and PUSHA instructions of 80x86 processors. "Small-132" and "-149" infect some files incorrectly. The viruses belong to various authors. Apparently, the emergence of the Small family can be viewed as a contest for the shortest resident virus for MS-DOS. All that remains is to settle on the prize fund.
Quote from the book "Computer Viruses in MS-DOS" by Eugene Kaspersky, 1992, page 45.
Disclaimer: this column reflects only the personal opinion of its author. It may coincide with the position of Kaspersky Lab, or it may not. That's the luck of the draw.