At the end of January, the logs of our internal system for analyzing user clicks on
kidsreview.ru showed hundreds of clicks on strange links like:
compareiseries.in/goto.php?url=aHR0cDovL24uYWN0aW9ucGF5LnJ1L2NsaWNrLzUyZDhmODY2ZmQzZjViMjYxYTAwNDFjNS82OTIzMy81MDI1OS9DJJ

A short investigation revealed that compareiseries.in is an intermediary layer: its script issues a JS redirect to the link passed in the address. In this case, base64 hid the real address:
n.actionpay.ru/click/52d8f866fd3f5b261a0041c5/69233/50259/subaccount

As you might guess, the site turned out to be a pay-per-click traffic exchange (with the parameters of someone's particular account in the URL). In other words, someone was faking clicks and earning money on them, all on our site and, most offensively, without our permission.
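For illustration, here is a minimal sketch (TypeScript under Node, not the exact tooling we used) of how the hidden destination can be unpacked from such a goto.php-style link; the sample value is a truncated piece of the string from our logs, not the full payload:

```typescript
// Decode the real destination hidden in a goto.php?url=<base64> redirect link.
function decodeRedirectTarget(link: string): string | null {
  const url = new URL(link.startsWith("http") ? link : `http://${link}`);
  const encoded = url.searchParams.get("url"); // base64-encoded real destination
  if (!encoded) return null;
  return Buffer.from(encoded, "base64").toString("utf-8");
}

// Truncated payload, so the decoded result is also truncated:
console.log(
  decodeRedirectTarget(
    "compareiseries.in/goto.php?url=aHR0cDovL24uYWN0aW9ucGF5LnJ1L2NsaWNr"
  )
); // -> "http://n.actionpay.ru/click"
```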
Problem
Problem: in the browsers of a large number of visitors, pages of the site show spam links leading to many disparate advertising resources.
Question: at what level are they injected?
Possible explanations that come to mind (in subjectively estimated order of decreasing probability):
1. Malicious JavaScript is injected into visited pages inside the user's browser, by a virus on the computer, a malicious browser plugin, or other malware in all its diversity;
2. Our servers themselves serve spam scripts/links to visitors, as a result of a hack and subsequent injection into the site code, the web server, or even a network driver;
3. One of the external JS resources we load (Rambler Top100, VK/FB, Twitter, Google, Yandex Metrika, ... there are about a dozen of them) was hacked and began distributing advertising spam along with its normal content.
A careful search for any traces on our servers turned up nothing, and we dismissed point 3 as the least likely. In parallel, we modified our click-analysis system so that it also logged the content of the pages on which clicks on the rogue URLs were made.
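As a rough illustration of that "trap", here is a minimal client-side sketch (TypeScript, present-day browser APIs); the KNOWN_DOMAINS list and the /click-log endpoint are hypothetical placeholders, and the real system is our existing click-analysis pipeline:

```typescript
// Illustrative whitelist of domains we consider "ours"/legitimate.
const KNOWN_DOMAINS = ["kidsreview.ru", "vk.com", "facebook.com", "twitter.com"];

document.addEventListener("click", (event) => {
  const target = event.target instanceof Element ? event.target : null;
  const link = target?.closest("a");
  if (!link || !link.href) return;

  const host = new URL(link.href, location.href).hostname;
  const known = KNOWN_DOMAINS.some((d) => host === d || host.endsWith("." + d));
  if (known) return;

  // sendBeacon keeps working while the page unloads after the click;
  // the page HTML is included so any injected markup ends up in the logs.
  navigator.sendBeacon(
    "/click-log", // hypothetical logging endpoint
    JSON.stringify({
      clickedUrl: link.href,
      page: location.href,
      html: document.documentElement.outerHTML,
    })
  );
});
```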
In its very first hours of operation, our improvised "trap" caught a few dozen pages, and with them injected JS of the form:
api.cpatext.ru/js/cpatext.js?r
5ayrqejixqe.ru/js/start_ad.js?u=7_05032014
yhcwxeqhzg.ru/?d=kidsreview.ru&t=KidsReview.ru%20%7C
www.superfish.com/ws/sf_main.jsp?dlsource=pcom&userId=4826451239324129823&CTID=p1272

Or, for example, like this:
"Data: text / javascript; base64, KGZ1bmN0aW9uKHdpbmRvdykgew0KICAgIHZhciBkb21haW5fbGlzdCA9ICcsMS5hemFydG55ZS1pZ3J5LXBva2VyLnJ1LDEWPHPHP2GHZ2ZS1pZ3J5LXBva2VyLnJ1LBWHPHGTZHPGHZ2ZS1pZ3J5BXbcB2bFbGbzbc2121W5fbGlzdCA964
etc.
Got it! we decided, and opened cpatext.ru. And indeed, here is what we saw there: "We automatically replace links, and you earn more!"

These guys offer to replace links in various ways, in particular through browser plugins. They have built a whole system around this and, naturally, they are far from the only ones.
What to do?
One of the most useful options for dealing with the consequences is the Content-Security-Policy header (http://en.wikipedia.org/wiki/Content_Security_Policy), which lets a site declare restrictions on how its pages may work with external content. In particular, it lets the site tell modern browsers from which external domains it allows external JS to be loaded.
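For illustration, here is a sketch of setting such a header, assuming an Express-style Node server (not necessarily what we actually run); the whitelisted domains are illustrative:

```typescript
import express from "express";

const app = express();

app.use((_req, res, next) => {
  // Illustrative list of external script hosts; a real policy has to cover
  // every legitimate script source the site actually uses.
  const scriptSources = [
    "'self'",                       // our own JS
    "https://mc.yandex.ru",         // Yandex Metrika
    "https://counter.rambler.ru",   // Rambler Top100
    "https://connect.facebook.net", // Facebook SDK
    "https://vk.com",               // VK widgets
  ];
  res.setHeader(
    "Content-Security-Policy",
    // report-uri makes browsers POST JSON reports about blocked scripts,
    // which helps notice when the whitelist has gone stale.
    `script-src ${scriptSources.join(" ")}; report-uri /csp-report`
  );
  next();
});

app.listen(8080);
```

The report-uri directive is worth a mention here: browsers will report blocked scripts to it, which softens the whitelist-maintenance problems listed below.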
This approach works especially well if all JS is hosted within a single domain (for example, a CDN), but in the general case, when there may be a bunch of third-party JS on the site (for example, widgets like ours), several problems arise:
1. You need to build a complete whitelist by analyzing all loadable external JS;
2. You have to keep the whitelist up to date as development goes on;
3. Changes in how legitimate external scripts are hosted, for example a change of the Facebook CDN domain or the appearance of a new legitimate third-party domain, lead to blocking errors;
4. Among other things, this blocks a lot of scripts from advertising toolbars and potentially legitimate browser extensions.
Within the team we agreed that extensions that climb into the page and allow themselves to change its content can go take a hike, but is that true in 100% of cases?
After we enabled blocking via Content-Security-Policy, the number of rogue clicks dropped by orders of magnitude (only clicks from ancient browser versions that do not support CSP remain), but a few questions are left:
1. Is there any progress on the part of browser developers in restricting extensions from injecting JS into site pages?
2. What are the best practices for dealing with similar issues with infected users on large sites? We could not find any decent information.
3. And most importantly, why do sites like these quietly keep living:
metabar.ru,
cpatext.ru?