Service owners, project managers, and SEO specialists sometimes want to peek over the user's shoulder: to see which buttons they press and where they stumble. It happens that this curiosity helps identify interface problems, which may indirectly affect the efficiency of the service, and even its profit.
I know several ways to solve this problem:
A perfectly viable option, which nevertheless requires solving a number of issues:
Even successfully resolving all of the above does not guarantee a reasonably reliable result, and the cost of organizing the process is already daunting. On top of that, there is a risk of missing edge cases, which are not so rare in production use.
Perhaps not this time.
Most services offer heat maps of clicks and scroll depth, which make it possible to determine more or less reliably what users saw on the site and what drew their attention.
Some also offer form analytics, keystroke logging, and tracking of text selection and copying, which makes it possible to trace more complex patterns of human-machine interaction.
Offhand, I found a couple of services:
For lack of an economically justified goal, let's take WebVisor. The service promises to record and replay the entire user session for us, i.e., to reproduce the user's behavior on the resource from start to finish. Initial integration is trivial: just add the snippet to the page.
It should be noted right away that requests which are potentially destructive to session state (that is, everything except GET) are ignored.
There are two versions, and they work on roughly the same principle. First, the user's authentication parameters are obtained and passed to the server, which downloads the current page and saves it. On the client side, user actions are collected: mouse movements, clicks, data entry, and so on. The embedded code sends these actions to the server with a timestamp (a delta from the start, or an absolute date/time — I didn't figure out which).
Later, you can open the session list in Yandex.Metrica and replay the user's actions. So far so good, but there are a number of problems.
Version one:
The second version is in beta testing and has the same flaws; on top of that, like any self-respecting beta, it sometimes simply does not work at all.
I never got around to HotJar. If you have experience integrating it with an SPA, please share, but I suspect the suffering will be much the same.
And here we smoothly arrive at the "oh, come on, I'll just try it myself!" moment.
What you need to implement:
Analytics collection should be designed for two network-quality scenarios: a slow, unstable channel and more favorable conditions.
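As a sketch of how the two scenarios could be told apart at runtime, the Network Information API can be probed where available; the thresholds and batch limits below are assumptions (the 10-second/100-event pair reappears later in the measurements):

```javascript
// A sketch of picking batching parameters by channel quality.
// navigator.connection (Network Information API) is not available
// in all browsers, hence the fallback; the numbers are arbitrary.
function pickBatchConfig() {
  var conn = navigator.connection;
  var slow = conn && /2g/.test(conn.effectiveType || '');
  return slow
    ? { flushIntervalMs: 30000, maxEvents: 300 }  // send rarely, in bigger batches
    : { flushIntervalMs: 10000, maxEvents: 100 }; // the limits used later on
}
```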
The initial state can be obtained in two ways:
In both cases, all scripts must be disabled in the saved page, since during playback the interface's response to user actions will be reproduced from the recorded events themselves.
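A minimal sketch of such sanitization on the client side, using the standard DOMParser; the function name is illustrative:

```javascript
// Strip all <script> elements from a captured HTML snapshot before
// storing it, so nothing executes during playback. DOMParser is a
// standard browser API; the function name is illustrative.
function sanitizeSnapshot(html) {
  var doc = new DOMParser().parseFromString(html, 'text/html');
  doc.querySelectorAll('script').forEach(function (node) {
    node.remove();
  });
  return '<!DOCTYPE html>' + doc.documentElement.outerHTML;
}
```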
Styles and static assets remain an open question. In general they can be left as-is, but surprises are possible if the static files change between the moment the initial state is captured and the moment the recording is viewed.
When the page loads, save the current dimensions of the window object.
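A minimal sketch of that capture, assuming recording starts on page load:

```javascript
// Save the viewport dimensions once, at the moment recording starts,
// so the player can later reproduce the same window geometry.
var initialViewport = {
  width: window.innerWidth,
  height: window.innerHeight
};
```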
All events in the DOM subtree must be tracked regardless of whether the page's own code calls stopImmediatePropagation()/stopPropagation(). For this we will use addEventListener() with the useCapture = true parameter.
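A sketch of what such registration could look like; the event list and the recordEvent() collector are illustrative assumptions:

```javascript
// Register capture-phase handlers on the document: with useCapture = true
// the handler runs on the way down to the target, so page code calling
// stopPropagation()/stopImmediatePropagation() while bubbling cannot hide
// the event from us. recordEvent() is a hypothetical collector.
['click', 'mousemove', 'keydown', 'input', 'change'].forEach(function (type) {
  document.addEventListener(type, function (event) {
    recordEvent(type, event);
  }, true); // true === useCapture
});
```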
To track changes to the document structure and to attributes, the MutationObserver mechanism can be used. The DOM node on which to register handlers will be taken from the initialization parameters, with document as the default.
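A possible shape of that wiring, with the observed node defaulting to document as described; the onMutations callback is assumed to hand mutation records to the same collector:

```javascript
// Watch structural and attribute changes across the whole subtree.
// The node to observe comes from the init options; document by default.
function observeMutations(root, onMutations) {
  var observer = new MutationObserver(onMutations);
  observer.observe(root || document, {
    childList: true,       // nodes added/removed
    attributes: true,      // attribute changes
    characterData: true,   // text node changes
    subtree: true,         // the whole subtree, not just the root
    attributeOldValue: true
  });
  return observer;
}
```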
You also need to track the document's scroll position (the scroll event) and changes to the window size (the resize event).
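A sketch of both subscriptions, reusing the hypothetical recordEvent() collector:

```javascript
// Track scrolling and window resizing; recordEvent() is the same
// hypothetical collector as above.
window.addEventListener('scroll', function () {
  recordEvent('scroll', { x: window.pageXOffset, y: window.pageYOffset });
}, true); // capture = true also catches scrolls inside nested containers
window.addEventListener('resize', function () {
  recordEvent('resize', { width: window.innerWidth, height: window.innerHeight });
});
```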
To identify the element an event targets, we will build a CSS selector for it.
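One possible way to build such a selector, walking up the tree and anchoring on the nearest id (a sketch; it assumes ids are unique on the page):

```javascript
// Build a CSS selector that identifies the event target.
// An id anchors the selector; otherwise fall back to tag + :nth-child().
function buildSelector(el) {
  var parts = [];
  while (el && el.nodeType === 1 && el !== document.documentElement) {
    if (el.id) {
      parts.unshift('#' + el.id);
      break; // an id is assumed unique, no need to go higher
    }
    var index = 1, sib = el;
    while ((sib = sib.previousElementSibling)) index++;
    parts.unshift(el.tagName.toLowerCase() + ':nth-child(' + index + ')');
    el = el.parentElement;
  }
  return parts.join(' > ') || 'html';
}
```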
For lack of practical data, we will use JSON as the exchange format. To get around ad blockers and the like, it is worth using GET requests and returning a small image, say in GIF format. For example, a URL like /path/to/api/[JSON string].gif.
Keep in mind that a URL is not infinitely stretchable: in practice it is limited to about 2,000 characters. That is rather little once you start sending information about changes to the document structure, so data compression should be addressed right away, for example with the GZIP algorithm, for which a JavaScript implementation exists. To transfer the compressed data, it additionally has to be BASE64-encoded. Transmission in chunks also needs to be provided for, but at the proof-of-concept stage that would be unnecessary.
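Putting the pieces together, here is a sketch of the send path; it assumes the pako library for the GZIP step, and /path/to/api/ stands in for a real endpoint:

```javascript
// Serialize, GZIP, BASE64, then issue a GET disguised as an image
// download. Assumes pako (https://github.com/nodeca/pako) for GZIP.
function sendEvents(events) {
  var json = JSON.stringify(events);
  var bytes = pako.gzip(json); // Uint8Array
  // btoa() expects a binary string, so convert byte by byte.
  var binary = '';
  for (var i = 0; i < bytes.length; i++) {
    binary += String.fromCharCode(bytes[i]);
  }
  var payload = encodeURIComponent(btoa(binary));
  new Image().src = '/path/to/api/' + payload + '.gif';
}
```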
The source code for the client library prototype is here.
For the experiments, a real project was chosen:
Integration caused no problems, and no impact on browser performance was visually noticeable.
The generated traffic was not as heavy as expected. With the chosen limits — a send every 10 seconds, or sooner once 100 events accumulate — each payload did not exceed 4 kilobytes. The compression ratio fluctuates widely, from a few times to dozens, but for reasonably large volumes (tens of kilobytes) it settles around the dozens, which is logical, since the data is text with many repeated substrings.
The prototype proved viable. To get the full picture, the player and the server side still need to be implemented, and a number of problems are foreseeable there:
Don't forget about the quirks of attribute changes. For example, the style attribute cannot simply be taken from the attribute collection (Element.attributes); the HTMLElement.style.cssText property has to be used instead. The number of such nuances cannot be assessed at the moment.
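For illustration, a sketch of this special case inside a mutation callback; handleStyleChange() is a hypothetical handler:

```javascript
// Handle the style-attribute special case in the MutationObserver
// callback; handleStyleChange() is hypothetical.
function onMutations(mutations) {
  mutations.forEach(function (m) {
    if (m.type === 'attributes' && m.attributeName === 'style') {
      // Read the serialized inline styles via cssText
      // rather than the attribute collection.
      handleStyleChange(m.target, m.target.style.cssText);
    }
  });
}
```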
If a headless browser is used to replay the recorded user activity in advance, video recording is worth considering. In that case there is no need for a player, but the required computing resources and the storage size of the resulting data grow, which may not always be rational.
Source: https://habr.com/ru/post/345234/