
How user factors influence ranking: 2 critical metrics



We continue our series of translated articles on behavioral factors and how they influence a site's position in search. To summarize the previous publications: behavioral factors are among the hardest signals to fake, and therefore one of the most reliable ways for search engines to judge the "quality" of a site. Simply put, the better your site's behavioral metrics, the higher it appears in organic results. In this post, originally published back in 2012 on SEOMoz.com, Peter Meyers explains in detail how Google and Bing account for behavioral factors; in our opinion, these systems are still actively evolving today. SERPClick was designed, among other things, on the basis of the data examined in this post: it lets you run a campaign aimed at improving your site's ranking by improving its behavioral metrics. We thank the analytics department of ALTWeb Group for providing the translation.

The rollout of the Panda algorithm update made us all worry about so-called user metrics and how they would affect optimization. Many began to fear that "bad" signals from analytics, especially a high bounce rate and a short time on site, could hurt a site's position.

I think (and I will explain why below) that Google does not look directly at our analytics. And I do not think it needs to, because there are two metrics to which Bing and Google have immediate access:
(1) SERP click-through rate (SERP CTR)
(2) Time to return to search ("dwell time")

And I think these two metrics can tell search engines enough about your site.

Google Analytics and Optimization


Google's official position on this issue is as follows: analytics data is not used for ranking. I will not argue the point here; you can decide for yourself whether to believe it. I will only note that there are few things Matt Cutts has emphasized so strongly. I think the arguments against using analytics directly for ranking are practical ones:

(1) Not everyone uses GA (Google Analytics)

It is difficult to say with certainty what percentage of websites currently use GA, but a large study in 2009 put the figure at 28%. I have seen numbers closer to 40% in some sources, but it is reasonable to assume that, on average, about 2/3 of sites do not have GA installed. It is therefore hard to imagine Google ranking or banning sites based on a factor present on only about 1/3 of all web resources. What makes the picture worse is that many of the largest sites do without GA because they can afford traditional enterprise analytics systems (WebTrends, Omniture, and the like).

(2) GA can be set up incorrectly.

Google cannot control the quality of the GA installation on the sites that use it. From my consulting experience and from the Q&A section here at Moz, I can say that analytics is often set up badly. As a result, bounce rates and time on site can be far from reliable, which adds a fair amount of noise to the data.

(3) GA can be manipulated.

As a "hacker" option (2) - analytics can be set "crooked" on purpose. Thus, most user metrics can be manipulated. In this case, Google is not possible to double-check the quality of each installation. At that moment, when the GA tags are in your hands, Google itself has little to control.

In fairness, it should be noted that some believe Google uses any data it can get. I have even come across circumstantial evidence that bounce rate matters. I am ready to argue with this: Google and Bing do not need analytics data or bounce rates. They have all the data they need in their own logs.

The main reason I don’t believe it

The most common argument is that Google cannot use metrics like bounce rate as a ranking signal, since bounce rate varies greatly depending on the type of site and cannot be interpreted unambiguously. I hear this so often that I would like to dwell on it, because I have a very specific reason for not believing it.

ANY ranking signal, taken by itself, cannot be interpreted unambiguously. I do not know a single optimizer who would say that the page title does not matter. And yet the page title is the easiest thing to tweak however you like. In principle, every on-page factor can be manipulated: that is why Google added links to its ranking algorithm. Links, in turn, can also be spammed, so Google added social and user signals to the overall algorithm as well. That is how we end up with more than 200 ranking factors; Bing claims to take 1,000 factors into account. Not one of these hundreds of factors is perfect on its own.

Metric #1: SERP click-through rate (SERP CTR)


The main metric that I believe is widely used by Google is the click-through rate of the results page. Whether or not a user clicked on a result is, for Google and Bing, the primary indicator of whether that result answers the query. We know that both Google and Bing collect this data because they expose it indirectly. (The relevance of this conclusion is supported by the Yandex report we translated earlier, which describes an algorithm in which users' clicks from the results page indicate the quality of the results as a whole and, indirectly, of the clicked page. - Translator's note.)

In Webmaster Tools you can find click-through data under "Your site on the web" > "Search queries". It looks like this:

[Screenshot: the Search Queries report in Google Webmaster Tools]

Bing shows similar data: on the Dashboard, click "Traffic Report":
[Screenshot: the Traffic Report in Bing Webmaster Tools]

Of course, we also know that Google relies on the click-through rate of the results page when assessing quality, and Bing has been following this example for the past few years. The paid search algorithm differs significantly from the organic one, but CTR is important there as well: more relevant results attract more clicks.
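
To make the first metric concrete, here is a minimal sketch of how SERP CTR could be computed from impression and click logs. It is an illustration only, not any search engine's actual code; the log structure and field names are hypothetical.

```python
from collections import defaultdict

def serp_ctr(impressions, clicks):
    """Compute click-through rate per (query, url) pair.

    impressions: iterable of (query, url) pairs, one per time a result was shown
    clicks:      iterable of (query, url) pairs, one per time that result was clicked
    """
    shown = defaultdict(int)
    clicked = defaultdict(int)
    for query, url in impressions:
        shown[(query, url)] += 1
    for query, url in clicks:
        clicked[(query, url)] += 1
    # CTR = clicks / impressions for every result that was actually shown
    return {key: clicked[key] / shown[key] for key in shown}

# Toy example: for the same query, the result users click more often gets the higher CTR.
impressions = [("best bikes", "a.com")] * 100 + [("best bikes", "b.com")] * 100
clicks = [("best bikes", "a.com")] * 30 + [("best bikes", "b.com")] * 5
print(serp_ctr(impressions, clicks))  # a.com -> 0.30, b.com -> 0.05
```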

Metric #2: Time to return to search (dwell time)


Last year, Duane Forrester of Bing wrote a post called "How to create high-quality content," in which he mentioned a term called dwell time:

Your goal should be for a user to come to your page, have the content answer all of their questions and needs, and stay with you. If the content does not give users a reason to stay, they leave. The search engine records this as dwell time: the time between the moment a user clicks through from search to your site and the moment they return to the search results potentially says a great deal about the quality of your content. A minute or two is quite good, since it can mean the user actually read your content. A couple of seconds, enough for a quick glance, is a bad result.


Dwell time combines elements of both bounce rate and time on site: it shows how much time passes between a user leaving the results page for your page and returning to the results (and these figures can be read directly from the search engine's own logs).
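
As a rough illustration of how such a figure could be read from a search engine's own logs, here is a minimal sketch. The event structure is hypothetical and simplified (one user, events sorted by time, no parallel tabs); it is not any search engine's real log format.

```python
from datetime import datetime

def dwell_times(events):
    """events: list of (timestamp, action, url) tuples for one user, sorted by time,
    where action is "click" (left the SERP for url) or "return" (came back to the SERP).
    Returns a list of (url, seconds spent away before returning to search)."""
    results = []
    pending = None  # the click we are still waiting to see a return for
    for ts, action, url in events:
        if action == "click":
            pending = (ts, url)
        elif action == "return" and pending is not None:
            click_ts, clicked_url = pending
            results.append((clicked_url, (ts - click_ts).total_seconds()))
            pending = None
    return results

events = [
    (datetime(2012, 1, 1, 10, 0, 0), "click", "a.com"),
    (datetime(2012, 1, 1, 10, 0, 4), "return", None),   # back after 4 seconds: a bad sign
    (datetime(2012, 1, 1, 10, 0, 10), "click", "b.com"),
    (datetime(2012, 1, 1, 10, 2, 30), "return", None),  # back after 140 seconds: a good sign
]
print(dwell_times(events))  # [('a.com', 4.0), ('b.com', 140.0)]
```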

Google has never taken an unambiguous position on this issue, but indirect evidence suggests that it uses dwell time (or some similar measure). Last year Google tested a feature in which, if you clicked on a search result and then quickly returned to the results page (that is, dwell time was minimal), you were offered the option to block that site:
[Screenshot: the option to block a site after quickly returning to the results page]

The feature is not currently available to a wide audience; with the launch of personalized search, Google temporarily shelved it. But the fact that a quick return to search was the trigger for showing the block option suggests that Google relies on dwell time as a signal of site quality.

1 + 2 = A killer combination


The two metrics work brilliantly in combination. CTR by itself can be gamed: you can write misleading titles and meta tags that have little to do with the actual content of the page. But that kind of manipulation leads to a short dwell time: you inflate CTR, the site then fails to meet the expectations the user formed from the snippet, and people go straight back to the results. The combination of CTR and dwell time gives a tangible way to track the quality of pages and of the results while relying on only two metrics. If you have both a high CTR and a long dwell time, your result is most likely relevant and satisfying.
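
A minimal sketch of this reasoning is below: a high CTR alone can be faked with a misleading snippet, but the fake shows up as a short dwell time. The thresholds are arbitrary illustrations, not values used by Google or Bing.

```python
def classify_result(ctr, median_dwell_seconds):
    """Combine the two signals into a rough quality verdict (illustrative thresholds only)."""
    if ctr >= 0.20 and median_dwell_seconds >= 60:
        return "likely relevant and satisfying"
    if ctr >= 0.20 and median_dwell_seconds < 10:
        return "likely a misleading snippet: clicks, then immediate returns"
    if ctr < 0.05:
        return "snippet fails to attract clicks"
    return "inconclusive"

print(classify_result(0.30, 140))  # likely relevant and satisfying
print(classify_result(0.30, 4))    # likely a misleading snippet: clicks, then immediate returns
```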

Are other metrics taken into account?


I am not saying that bounce rate and other behavioral parameters are not taken into account. As I said above, dwell time is related to (and probably correlated with) bounce rate and the time a user spends on the site. Glenn Gabe published a good post about "Actual Bounce Rate" in which he also explained why dwell time can reflect the situation more accurately than bounce rate. As before, we need to keep an eye on the traditional behavioral metrics we see in analytics, without forgetting broader indicators such as site speed and social signals, which in turn are also connected with behavioral factors.

I would like you to develop a broader understanding of behavioral factors. Look at them from the search engine's point of view: it is time to step back from your own site analytics. I have recently seen cases where people removed or manipulated GA tags in order to dress up their numbers because they feared consequences from the search engine. In fact this produced a bad result: they lost the reliability of their own data. I do not think Google or Bing use data from our analytics, and even if they did resort to it, they would analyze it alongside data from their own logs and with other factors taken into account.

So what should I do?


Write snippets that attract clicks to relevant pages. (Important! This was advice for readers in 2012; today Google can choose the snippet text itself. - Translator's note.) Create pages that users stay on. In the end this is fairly obvious, and it also benefits both your optimization metrics and your conversion rate. Think especially about the combination: simply attracting clicks is useless (and may even hurt the site's position) if people leave the site immediately after clicking. Work on the balance between relevant keywords and quality visits.

From the translators: we also recommend using the SERPClick product to improve your site's position by improving its behavioral metrics.

Source: https://habr.com/ru/post/238335/

