How to work with ranking factor research

 

Where the data for ranking factor research comes from

The purpose of the Laboratory’s research is to discern, behind the parameters of sites and pages that we are able to measure in various ways, the ranking factors that search engines actually use to form their results.

To do this, we first need to understand which characteristics of sites and pages can matter for ranking, and then figure out how to “count” them. We now track more than 600 parameters (although, of course, not all of them are actually ranking factors).

Data Sources

  • Our sources. We compute most of the parameters ourselves from the pages that appear in search results. For example, we determine the size of the HTML code, the proportion of text and links in it, the number of occurrences of query words in the h2-h4 headings, the presence of a Google Analytics counter or the https protocol, the maximum price on the page, and the concentration of query words and their synonyms in the SEO text.

  • External services. We receive the values of some parameters from the search engines themselves or from third-party services. For example, we learn the value of the IKS (and formerly TIC), the number of pages in the index, and the number of pages found on the site from Yandex. The number of “readers” on social networks is available from VKontakte and Odnoklassniki, and the age of a site comes directly from whois services or indirectly from its first mention in Archive.org. We get link data from MegaIndex, and traffic and some behavioral data from SimilarWeb and Alexa.com.
  • Expert evaluations. A large group of parameters, about a hundred, is based on expert evaluations. These are given by our assessors, who fill out a special questionnaire for each page in the search results. The “manual” (assessor) parameters are mostly commercial or social, and most of them refer to the site as a whole. If other pages from the same site have been evaluated before, the assessor receives an almost finished questionnaire and only has to fill in the lines relating to the particular page. Assessor evaluations are available only for commercial sites: if the assessor has marked a site as informational, the questionnaire ends there.

Many of the assessor parameters for individual pages are duplicated by similar automatic parameters, but the results are not always the same. For example, an assessor will only count the phone number of the company that owns the site, while the automated system also counts customers’ phone numbers; as a result, automatic detection finds more phone numbers in general, and more 8-800 numbers in particular, on aggregator sites.

Types of parameters

Parameters are divided into binary (whether or not the site or page has some property) and numeric.

When it comes to multiple search results (e.g., the top 10) and/or a sample of queries, it is convenient to express binary parameters as percentages. For example, 31% means that the parameter is present for 31% of the sites occupying the positions of interest on this sample.

Numerical parameters are a little more complicated: if the range of values is small, we use averages; if it is significant, we take the median for each query and then average the medians across all queries in the sample.
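As a small illustration of this aggregation (a hypothetical helper in Python; the function name and sample data are invented, and the real pipeline is internal):

```python
from statistics import median

def aggregate_numeric(values_by_query):
    """Median of the parameter within each query's results,
    then the average of those medians across the sample."""
    medians = [median(vals) for vals in values_by_query.values() if vals]
    return sum(medians) / len(medians)

# e.g. page size in KB for the top results of three queries;
# the 4000 KB outlier barely affects the result thanks to medians
sample = {
    "buy sofa": [120, 95, 4000, 110],
    "sofa moscow": [100, 130, 90],
    "cheap sofas": [80, 85, 140],
}
print(aggregate_numeric(sample))  # 100.0
```

Taking the median per query first keeps one anomalous page from dominating the sample-wide figure, which is exactly why medians are preferred when the range of values is large.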

Parameters that have multiple values we usually reduce to binary ones, which makes them much more convenient to work with. For example, our assessor questionnaire has options for the number of brands in the assortment: one, several, or many. We use the data from this line of the questionnaire as two binary parameters: One brand (yes if one; no if several or many) and Many brands (yes if many; no if one or several). There is also a “not relevant” option, which simply switches off this group of parameters for the site.
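For illustration, this reduction could be encoded as follows (a hypothetical helper; the actual questionnaire processing is internal to the Laboratory):

```python
def brand_flags(answer):
    """Turn the multi-valued questionnaire answer ('one', 'several',
    'many', or 'not relevant') into the two binary parameters
    described above; None switches the group off entirely."""
    if answer == "not relevant":
        return None
    return {
        "one_brand": answer == "one",     # yes only for a single brand
        "many_brands": answer == "many",  # yes only for many brands
    }

print(brand_flags("several"))  # {'one_brand': False, 'many_brands': False}
```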

Samples

We deal mostly with commercial topics, in which almost any query (except perhaps the most exotic low-frequency ones) finds dozens of pages that are not only highly relevant but also specially optimized for it. For medium-frequency queries there are hundreds of such pages; for high-frequency ones, thousands. Search engines have plenty to choose from, and getting into the top 30 of any of them for a competitive commercial query is, in effect, a guarantee of high relevance.

When Yandex (or Google) arranges search results by query in some order, it gives preference to some relevant pages over others that are also relevant. Why are these particular pages in the top 30 – and not the other 30 (probably no less worthy) that we also know and with which we can quite easily compare them? What is the difference between the pages that made it to the top positions and those at the bottom of the top 30? Can we see a pattern in all this?

 

If you take a sufficiently large sample of queries and obtain parameter values for the search results of each query, then for each parameter you can check whether its values are statistically significantly related:

  • with position in the search results (within the top 30), including getting into the top 3 or top 10;
  • with getting into the top 30 (as a background we can use pages from the top 30s of the other two search engines).

The larger the sample, the more accurate the results. But even for small samples, on the order of 100 queries, the results are statistically significant, and large samples largely confirm them.

Of course, a lot depends on the queries, and ranking factors may behave differently for different kinds of them. The significance of the same parameters can vary greatly across different query samples.

This can be due both to objective features of ranking for certain topics or intents, and to the “landscape” of sites competing for a place in the top: parameters that happen to be strong on high-visibility sites will look important even if the search engine pays no attention to them. It is therefore important, first, that the control sample of queries be sufficiently representative and diverse and, second, that the results obtained on it be checked against other samples.


We are currently working with four types of samples:

  • One standard sample of 160 commercial queries of different topics. It is not very large and not perfectly balanced by topic or query frequency, but we have been monitoring it since spring 2015, and it is useful for tracking changes in ranking.
  • The industry benchmark samples are slightly larger, and we used them this year to prepare analytical reports on ranking factors in e-commerce, finance, medicine, automobiles, and real estate.
  • Many narrow topical samples, mostly built from the queries for which our clients’ sites are promoted. Their size varies, but is usually in the tens or hundreds of queries.

 

  • Consolidated samples of thousands of queries, which we compile from the thematic samples several times a year for research purposes. These are large enough to make the graphs of average parameter values by position smooth, but unfortunately they differ each time, because they are simply made up of the queries evaluated over a certain period. Summary samples can also be compiled by topic (e.g., medicine or furniture), by query type (e.g., informational), by region, and so on.

How the data are evaluated

Now we come to the most important part. We have a sample, and we have parameter values for all (or almost all) sites/pages in the top 30 of Yandex, Google, and Mail.ru. How do we tell whether a parameter matters for ranking in each of these search engines? And what exactly does “important” mean?

Correlations with position

The simplest relationship with ranking: the value of the parameter increases (or decreases) as you approach the first position. Sometimes this correlation is obvious from a chart of the parameter’s average values by position. But there can be nuances, especially for numerical parameters, where the overall picture may be strongly influenced by “spikes” in the values for individual sites. It is therefore better to assess the dependence using methods of mathematical statistics.

As the main measure we use Spearman’s rank criterion (see box); it is its value that we mean when we talk about the correlation coefficient between position and a parameter’s value. We consider a correlation with position strong if the coefficient is 0.10 or greater (or -0.10 or less). We also use Spearman’s rank criterion when assessing correlations between parameters.
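For readers who want to reproduce such calculations, here is a minimal pure-Python sketch of Spearman’s rank correlation (tied values receive average ranks); this is an illustration, not the Laboratory’s code:

```python
def ranks(xs):
    """1-based ranks; tied values share their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    """Pearson correlation computed on the ranks of xs and ys."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Applied to positions 1..30 and a parameter’s values, a coefficient of -0.10 or below (the parameter grows toward position 1) would count as a strong correlation by the threshold above.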

We prefer not to rely on only one metric and control it with Fisher’s exact test (as well as the Mann-Whitney U-test). A correlation counts only if it is “confirmed” by a statistically significant difference between the parameter values for the top 3 or top 10 and the rest of the top 30.
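As a sketch of the idea, here is a one-sided Fisher’s exact test on a 2×2 table, say, parameter present vs. absent in the top 10 against positions 11-30 (pure Python via the hypergeometric distribution; real analysis would typically use a two-sided version and a stats library):

```python
from math import comb

def fisher_exact_greater(a, b, c, d):
    """One-sided Fisher's exact test for the 2x2 table
        [[a, b],    # top 10:          parameter yes / no
         [c, d]]    # positions 11-30: parameter yes / no
    Returns P(at least `a` 'yes' pages land in the top 10 by chance)."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    return sum(
        comb(row1, k) * comb(n - row1, col1 - k)
        for k in range(a, min(row1, col1) + 1)
    ) / denom

# 9 of 10 top pages have the parameter vs. 4 of 20 below: p is tiny,
# so the difference between top 10 and the rest is hardly random
print(fisher_exact_greater(9, 1, 4, 16))
```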

The stronger the correlation, the greater the difference in values between the top 3 and the third ten (positions 21-30); exceptions occur, for example, when the maximum values fall not in the first positions but in the middle of the first ten, as we often saw in the real estate report. In our standard parameter charts we do not show the third ten, but where the correlation is strong, the difference in average values between the top 3 and the top 30 is also large.

Such charts show correlation coefficients as well as the strength of the relationship with being in the top 30. But the “strength” of a parameter can usually be seen even without them: if the values for the top 3 are noticeably higher than for the top 30, there is most likely a strong correlation between the parameter and position; if the values for the top 30 are noticeably higher than for the background, there is likely a strong relationship with getting into the top.

For example, for localization in Moscow in Yandex, the average value for the top 3 is 97%, for the top 30 it is 94%, and for the third ten (positions 21-30) it is 92%; the correlation coefficient is 0.06. In Google, the correlation coefficient is noticeably higher, 0.19, and the spread of values is wider: 94% for the top 3, 83% for the top 30, and 74% for the third ten.

Note, by the way, that the closer the values are to zero or 100%, the “weightier” the difference between them. It would seem that 98% versus 96%, like 58% versus 56%, is the same two percentage points. But an increase from 96% to 98% means that the share of sites lacking the parameter has halved!

Contrast with the background

Most of the queries in our samples are commercial and highly competitive; there is no shortage of relevant pages. The search quality of all three major engines is high enough that getting into the top 30 of at least one of them can be considered a guarantee of sufficient relevance and quality. So if a page is not in the top 30 of, say, Google, but got into Yandex or Mail.ru, it is not because the page is bad, but because Google preferred other pages (or sites) that better meet criteria important to it.

So by comparing a search engine’s top 30 with the background, the pages that are absent from it but appeared for the same queries in other search engines, we can learn a lot about which parameters matter to it. For example, we can say that localization in Moscow is more important for Yandex than for Google, even though the correlation with position is higher in Google: in Yandex’s top 30, 94% of sites are localized in Moscow, while in Google’s top 30 only 83% are. In other words, Google has almost three times as many sites without Moscow addresses and phone numbers: 17% versus 6%.


Our main tool for evaluating the difference between the top 30 and the background is Fisher’s exact test. The closer its value is to zero, the less likely it is that the difference in parameter values between the search engine output and the background is random.

But if the relationship with getting into the top 30 is not random, what can explain it? There are several basic possibilities, which can also complement each other in various proportions.

  • Preselection. Modern ranking formulas are very complex – and therefore can be resource-intensive and time-consuming to compute. To reduce server load and speed up processing, ranking can be performed in two or more steps, with simplified formulas optimized for fast calculation being applied first to a large number of pages lifted from the index. As a result, relatively few (e.g., a thousand) results are selected, to which the full ranking formula is already applied. If the parameter is included in the simplified formula and taken into account in the early stages of ranking, its values for sites that have reached the last stage will initially be on average high. And even if the parameter is no longer taken into account at this last stage, its impact on the overall ranking results can be very large.
  • Continued gradient. Some of the background pages actually do make it into the ranking, only below the 30th position. If there is a correlation with position, i.e., parameter values fall with distance from the top, it is not surprising that sites (pages) landing at 47th or 350th position have lower average values of the parameter than those landing at 2nd or 25th.
  • Correlations with other parameters. If Fisher’s exact test shows that the distribution is not random, it only means that higher values of the parameter in the top 30 compared to the background are not the result of a random lottery win, but a consequence of some objective reasons. The reasons may be different – and they are not necessarily related to the selection for the parameter we are interested in (or close to). For example, there are usually noticeably fewer aggregators in Google search than in Yandex search. The average values of many parameters for aggregators are noticeably higher or lower than for other commercial sites – and this can greatly affect the difference between the average values of these parameters for the two search engines.
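The preselection mechanism from the first bullet can be sketched roughly like this (scoring formulas and field names are invented purely for illustration):

```python
def cheap_score(page):
    """Simplified formula, fast enough to run over many index pages."""
    return page["query_words"] + 0.1 * page["link_count"]

def full_score(page):
    """Expensive formula, applied only to preselected candidates."""
    return cheap_score(page) + 2.0 * page["behavioral"] + 1.5 * page["quality"]

def rank(pages, preselect=1000, final=30):
    """Two-stage ranking: cheap cut first, full formula second."""
    shortlist = sorted(pages, key=cheap_score, reverse=True)[:preselect]
    return sorted(shortlist, key=full_score, reverse=True)[:final]

pages = [
    {"query_words": 10, "link_count": 5, "behavioral": 1, "quality": 1},
    {"query_words": 9, "link_count": 2, "behavioral": 3, "quality": 2},
    {"query_words": 1, "link_count": 0, "behavioral": 100, "quality": 100},
]
# The last page would win on the full formula, but never reaches it:
top = rank(pages, preselect=2, final=2)
```

Note how a page cut at the cheap stage can never recover, however good its full-formula factors are; this is why parameters used only in preselection still show up strongly in the final top 30.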

The greater the contrast between the top 30 and the background, the more likely it is that the parameter actually counts in ranking (or is at least close to one of the ranking factors). As before, differences near zero or 100% carry more weight. However, there can always be alternative explanations, for example ones related to the distribution of site types in the results.

On the other hand, a lack of contrast between the top 30 and the background, or even higher values in the background, does not necessarily mean there is no selection on the parameter. While the correlation with position depends on only one search engine, the background depends on all three. If one of its competitors pays more attention to the parameter we are interested in, “our” search engine may be left in its shadow, even though it also prefers sites with high values of the parameter. That is why we mark negative correlations with position on our charts, but not negative relationships with getting into the top (except for inverted parameters such as rankings, where less is better).

 

How to use research results

The parameters we call important are not necessarily the ranking factors that search engines use. A search engine may well not have included a particular parameter in its ranking formula. Why does it look important then? There are several possibilities:

  • There are other parameters that do affect ranking and on which this parameter directly or indirectly depends (the most common situation).
  • The value of the parameter is higher for sites that already rank high (e.g., the better a page ranks, the more visits it gets, simply from search engine traffic).
  • The parameter is often included in site optimization programs because of a widespread belief that it is important. As a result, on “highly optimized” sites (of which there are many in the top), its value is higher on average, although in reality other parameters are doing the work.
  • The parameter was important some time ago and no longer affects ranking, but many sites with the “correct” values of this parameter remain entrenched in the top.
  • Even when a ranking factor really is used by a search engine, we cannot claim it is exactly the same as our parameter; most likely there are sites (pages) for which the value of “our parameter” and “their factor” differ significantly. Nevertheless, if our data show that a parameter “is important for ranking,” it is in most cases really important, and these findings are worth taking into account when optimizing sites, even though you should not strive to improve important parameters at all costs.

What should you do in this case?

Fix critical errors

Parameters that correlate tangibly with ranking are important. If you have a limited assortment, no localization in Moscow, or no prices, then for many queries you simply have no chance of reaching the top.

This is the situation with parameters that are taken into account at the preliminary stages of ranking (as you can guess from the large contrast between parameter values in the top 30 and the background), as well as with those whose share in the top approaches one hundred percent.

If your site is not in the top at all for a query (the exact cutoff of thirty is not important here), it is worth paying attention to the parameters strongly associated with getting into the top that your site lacks, or on which it lags far behind its competitors.


It is important to remember that Yandex’s factors responsible for text relevance also belong to this category. Pages with few occurrences of query words have a good chance of being left out, even if their other ranking factors are great. That said, “few” does not mean none at all. If your competitors have 40 occurrences of query words and you have five, you are taking a big risk.

Appeal to common sense

For example, our data show that the larger the HTML size and text size, and even the longer the page load time (the latter only in Yandex and within reasonable limits: under a second, not counting scripts, etc.), the higher sites rank on average.

This is because HTML size is strongly correlated with the number of outgoing internal links, pictures, prices, etc., that is, with the size of the “storefront.” The storefront is especially important for online stores: its size affects the assortment, the number of pages in the index, traffic, and so on. It is the storefront that makes the major contribution to textual relevance: it will not be penalized for “keywords” and repetitions, and its large volume will not be treated as over-optimization. This means that HTML size correlates with the number of occurrences of query words in “text fragments” (i.e., outside the SEO text), in links and alt attributes, and with the number of repetitions and “meaningful” words in the page text.

The size of the HTML code is important. But that doesn’t mean you have to optimize your site by intentionally increasing the size of the HTML or the load time!

Compare your site to your competitors

HTML size is not likely to be directly considered in ranking, but it can be used as a control. If your site’s pages are significantly smaller than your competitors’ pages in the top 3, top 10 or top 30, that’s a reason to wonder if you’re doing everything right and if something needs to be changed.
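A control check of this kind might look as follows (the function and the 2× threshold are invented for illustration):

```python
from statistics import median

def size_gap(my_size_kb, competitor_sizes_kb, factor=2.0):
    """Flag a page whose HTML size is far below the competitors'
    median in the top; returns (flag, median) for reporting."""
    m = median(competitor_sizes_kb)
    return my_size_kb * factor < m, m

flagged, benchmark = size_gap(40, [120, 95, 110])
print(flagged, benchmark)  # True 110
```

A flagged gap is a prompt to investigate (thin assortment? missing listings?), not an instruction to pad the page.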

Comparing your site with competitors (and sometimes blindly imitating them) is, to put it mildly, nothing new in SEO. For example, this is how services that generate a “brief for copywriters” from a list of keywords work. Our data help you choose the right criteria for such comparisons.

The criteria monitored by search engines can be formal

Representatives of Yandex and Google have been saying for years how important it is to make sites for people, not for search engines, and this is the most direct way to high positions in search. The longer it goes on, the more true it becomes, with ranking factors increasingly successfully modeling the usefulness of sites to humans.

Nevertheless, it is likely that even now, a significant portion of the factors rely on simple criteria. For example, the presence of reviews can be judged by a search engine by the presence of headlines containing the word reviews, and the presence of video by the presence of video player code on the page.

Don’t count on the fact that you can easily fool a search engine. We do not urge you to put a fake 8-800 number on your site (as many did two years ago). But finding and inserting relevant videos in appropriate places, since search engines want to see them so badly, is definitely a good idea. Recording your own would probably be even more useful – but also much more difficult, and therefore not necessarily more effective.

If a parameter is irrelevant to you, nothing will happen to you for not having it

Delivery information on a site is undoubtedly an important parameter. In Google it shows a correlation with position; in Yandex, a strong relationship with getting into the top 30 (89% vs. 83% in the background), and for specific delivery types (courier, pickup) a correlation with position as well. Nevertheless, owners of sites offering services need not worry about delivery, simply because it is not relevant to them.

Card payment is a no less important parameter. On our overall sample it shows a strong correlation with position in both search engines and a relationship with getting into the top 30 (in Yandex a strong one). But it is not customary to pay for cars by card, and automobile sites prefer not to raise the question of payment at all. Unsurprisingly, when card payment (like other payment-related parameters) correlates with position on automotive samples, it correlates only negatively.

How the search engines will understand that the parameter is not relevant is, after all, not that important (although certainly interesting). You can safely assume that they’ll get it right: they just can’t afford to be too wrong about these things.

Build data into your promotion strategy

The search analytics Laboratory of the company “Ashmanov and Partners” is not only engaged in research; in fact, research is not even the main part of our work. We have developed practical tools for SEO specialists.

Our main product is designed for internal use, and within the company we simply call it “Laboratory,” without any frills. It gathers data for a given sample of queries and then builds the desired report for the optimizer, or a draft report for the client (to be further elaborated by the optimizer).

We compare the parameter values of a site and of its competitors from the top 30 and give detailed recommendations ranked by importance. The importance of each parameter is determined not only by its initial estimate (based on correlation with position and the relationship with getting into the top 30), but also by context: by how the parameter behaves in the particular search results and how its values for the site of interest compare with those of competitors.
