Where ecommerce is heading in Yandex and Google search: a report based on a sample of 200,000 queries

The problem

In many topics, the first thing a user sees in the Yandex and Google results in response to a query is a SERP answer widget (a Yandex "wizard"):


The widget siphons off clicks, hurting CTR for everything below it. After that come the contextual ads, which take another share of the clicks.

2. Contextual advertising.jpg

And only then do the organic results begin.

Google is no different. The same contextual ads:
4. Google context.jpg

The same widgets:
5. The sorcerer of maps in Google.jpg

And only then the organic results. Getting search traffic for highly competitive queries is very difficult unless you are a giant company.

Many people promote sites by position. That is normal; the market demands it. But in practice, a site sitting at position 6 in the results receives almost no traffic.

Dependence of CTR on position:

8. Positions and CTR.jpg

Let's do the math. Suppose we have a query with a frequency of 3,000 per month, and the site ranks 6th for it. Multiplying the frequency by the position-6 CTR of 6.23% gives about 187 visits; applying a 5% conversion rate (the average for online stores), we get about 9 leads per month. And that is leads only, before conversion to sales.
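The arithmetic above can be written out explicitly. The numbers are the ones assumed in the text: a frequency of 3,000, the position-6 CTR from the chart, and an average conversion rate of 5%.

```python
# Rough lead estimate for a query at position 6.
frequency = 3000            # monthly query frequency
ctr_position_6 = 0.0623     # CTR at position 6, from the CTR-by-position chart
conversion_rate = 0.05      # average conversion rate for online stores

visits = frequency * ctr_position_6   # estimated monthly visits
leads = visits * conversion_rate      # estimated monthly leads

print(round(visits), round(leads))    # -> 187 9
```

Nine leads a month from a 3,000-frequency query shows how little is left for position 6.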

What can you do about it? You can complain and write posts, but that gets you nowhere. Or you can take another route and identify the queries that actually bring traffic. That is what this study was conducted for.

Research objectives and inputs

More than 200,000 queries from 110 commercial topics (selling goods and services, as well as betting and casinos) were used for the research.

9. Topics of research.jpg
10. Topics.  Part two.jpg
From January to August 2019 (for some projects since December 2018), the top 10 search results from Yandex and Google were collected in these topics.

In January, data was collected every week; from February to August, every 3 weeks (changes over 1-2 weeks are minimal). Research regions: Moscow; million-plus cities (Yekaterinburg, Voronezh); cities with a population over 500 thousand (Tomsk, Penza); and over 300 thousand (Saransk, Vladimir).

The first task in the study was site markup. From the URLs in the search results, we extracted the domains and labeled them by type:

  1. aggregator;
  2. thematic aggregator;
  3. complex commerce;
  4. commercial site;
  5. social network;
  6. government site;
  7. search engine service;
  8. information site.

In total, 103,617 domains were labeled this way.
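A minimal sketch of what the markup step produces: a manually prepared mapping from domain to one of the eight site types. The specific domains and labels below are hypothetical examples, not data from the study.

```python
# Manually prepared domain markup (in the study this was done by hand).
# The domains and labels here are illustrative, not the study's data.
domain_type = {
    "avito.ru": "aggregator",
    "vk.com": "social network",
    "example-shop.ru": "commercial site",   # hypothetical domain
}

def classify(domain: str) -> str:
    # Domains without a manual label are flagged for another markup pass.
    return domain_type.get(domain, "unlabeled")

print(classify("avito.ru"))         # -> aggregator
print(classify("unknown-site.ru"))  # -> unlabeled
```

The "unlabeled" bucket matters later: the promotion tool described below refuses to run until every domain in the collected SERPs has a label.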

We checked whether site characteristics correlate with position:

  • number of pages indexed in Yandex and Google;
  • presence in the Yandex Catalog;
  • commercial markers in Title, H1, and Description;
  • backlink profile;
  • presence in the Keys.so top 500 thousand;
  • Yandex SQI;
  • Whois data and the date of the first web archive crawl;
  • social network links;
  • phone numbers;
  • email.

But the only correlation found was for complex commerce.

Research objectives:

  • determine the share of organic results across the entire sample for Yandex and Google, and how it changed over 2019;
  • identify the topics in the sample with the maximum share of aggregators and complex commerce in Yandex and Google, and vice versa;
  • understand how the share of aggregators and complex commerce is changing in the regions;
  • answer the question of whether the SEO channel is suitable for topics dominated by complex commerce and aggregators;
  • build a tool for scoring queries by position for each site type, plus an "Except the selected type" mode.

Results

At the moment, commercial sites make up 65% of Yandex's organic results. Most of them are aggregators.

11. The composition of organics.jpg

And here are the dynamics:

12. Change in the percentage of organic matter.jpg

Yandex is systematically squeezing out third-party commercial sites (according to aggregated data for 6 months). But there are always exceptions.

13. Exceptions.jpg

In general, aggregators receive little traffic in complex topics (and vice versa).

14. General conclusion.jpg

So if you have a highly specialized site and a strong offer, you will always make money. In the complex-products category, you can work in either search engine.

But in mass-market topics, the situation is different:

15. Conclusion by subject.jpg

The entry threshold for starting a business there is minimal, and aggregators abound.

In Moscow, Yandex prioritizes aggregators and complex commerce in its ranking, and the share of URLs from such sites keeps growing. So in complex topics you can work in both search engines, but in simple ones it is better to focus on Google.

For regions:

14. General conclusion.jpg

In the regions, the share of aggregators is growing in both Yandex and Google, while complex commerce is practically unchanged. The reason is that regional sites are often of poor quality, so traffic goes to aggregators.

Is SEO dead?

No. SEO accounts for 65% of commercial traffic in Yandex.

17. Features.jpg

The existing problem needs to be addressed.

Tool for professionals

Ozhgibesov's solution: query scoring and iterative promotion.

You can act like this:

  1. Collect data to research your topic.
  2. Prepare the domain classification.
  3. Get data on the composition of the top-10 in your topic.
  4. Get data on URL changes for phrases across the collection iterations (weeks, months, etc.).
  5. Select queries whose results contain none of the chosen site type (for example, aggregators) and work that semantics first.
  6. Select queries with minimal presence of the chosen site type by position, and work that semantics next.
  7. Only after reaching the top for these queries, move on to the most difficult semantics.
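Steps 5 and 6 above can be sketched as a filter over the classified top-10 of each query. The SERP data and the 30% threshold below are invented for illustration; the study's tool does this inside an Excel macro.

```python
# Given the classified top-10 for each query, pick queries where the chosen
# site type ("aggregator" here) is absent, then those where it is scarce.
# The queries, SERP compositions, and threshold are illustrative.
serp = {
    "buy red bricks": ["commercial site"] * 8 + ["information site"] * 2,
    "buy apartment moscow": ["aggregator"] * 9 + ["commercial site"],
    "garden gnome price": ["commercial site"] * 7 + ["aggregator"] * 3,
}

def score(serp, site_type="aggregator", max_share=0.3):
    none, scarce = [], []
    for query, types in serp.items():
        share = types.count(site_type) / len(types)
        if share == 0:
            none.append(query)       # step 5: no aggregators at all
        elif share <= max_share:
            scarce.append(query)     # step 6: minimal aggregator presence
    return none, scarce

none, scarce = score(serp)
print(none)    # -> ['buy red bricks']
print(scarce)  # -> ['garden gnome price']
```

Queries falling into neither bucket (here, "buy apartment moscow") are the ones where the results are dominated by aggregators and promotion is hardest.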

You need to collect the results for your topic's queries for at least a month. A single collection will not let you see the dynamics or predict whether the topic's prospects are good or bad; it only gives you a snapshot. For example, commercial sites currently hold 65% of the Yandex results, but over the past six months that share has dropped by about 10%. To see this, you have to track changes.

Next, you need to prepare the domain classification. This is done manually; one specialist can process about 1,500 domains per day.

Then we collect the data. There are two tool options:

  • Key Collector with XMLproxy (for Yandex) and XMLRiver (for Google);
  • Topvisor with its SERP Snapshot tool.

You need to do this for at least four weeks for each search engine.

For Yandex, let's walk through it using Key Collector. Take the semantics file and make copies.

18. KeyCollector.jpg

For Google, let's take Topvisor as an example. Download the file from Topvisor by date.

19. TopVisor.jpg

We open all iterations in Key Collector and export the search results data in Multi-group mode. If you collected the results 4 times, export all four files.

20. Unloading to KS.jpg

Topvisor files differ in format from Key Collector files, so they need to be converted to the same layout as file 1. To do this, run the macro that processes the data in the folder.

21. Run the macro.jpg

22. Before and after processing.jpg

After collecting the data, we rename the files.

23. Rename the .jpg files

Yandex files come first, Google second. This order is mandatory: the macro processes the data according to it.

From the first base file, we need to extract the domains from the URLs.

24. Getting domains.jpg

Next, we need the domain classification. Copy the URL column into a separate column and apply Text to Columns.

25. Distribute by columns.jpg

We get the list:

26. List of urls.jpg

Delete everything from column D onward so that only the protocol and domain remain.

27. Leave the protocol and domain.jpg

Insert "//" into column B and fill down. In cell D1, write the formula =CONCATENATE(A1;B1;C1) and fill down.

28. Writing a formula.jpg

Copy column D and use Paste Special (values only) to replace the formulas with their results.

29. Copy column D.jpg

Select the entire column and remove duplicates.

30. Remove duplicates.jpg
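The Excel routine above (Text to Columns, CONCATENATE, de-duplication, plus the trailing-slash cleanup described later) collapses to a few lines of Python if you prefer scripting. The URLs here are placeholders.

```python
# Extract unique "protocol://domain" values from a list of SERP URLs.
# The URLs are hypothetical placeholders.
from urllib.parse import urlparse

urls = [
    "https://shop.example.ru/catalog/item-1",
    "https://shop.example.ru/catalog/item-2",
    "http://aggregator.example.com/listing/",
]

domains = sorted({
    f"{urlparse(u).scheme}://{urlparse(u).netloc}".rstrip("/")  # protocol + domain
    for u in urls                                               # set removes duplicates
})
print(domains)
```

The set comprehension replaces both the Remove Duplicates step and the slash-trimming formula in one pass.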

We feed the resulting 344 domains into Netpeak Checker and retrieve the following parameters:

  • title;
  • phone numbers;
  • indexed URLs in Yandex;
  • domain registration date and date of the first web archive scan.

For example, if a site has a commercial title, it is easy to conclude that it is a commercial project.

Optionally, check traffic via SimilarWeb and the number of phrases in the TOP-100 via Serpstat. This helps you get your bearings: if everyone in your topic has about 5,000 phrases in the TOP-100 and some site has 30,000, it is clearly an aggregator.
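That phrase-count heuristic can be expressed directly: flag any site whose TOP-100 phrase count is far above what is typical for the topic. The counts and the 3x-median threshold below are illustrative assumptions, not values from the report.

```python
# Heuristic: a site with far more TOP-100 phrases than is typical for the
# topic is likely an aggregator. Data and threshold are illustrative.
from statistics import median

phrases_in_top100 = {         # hypothetical Serpstat-style counts
    "shop-a.ru": 4800,
    "shop-b.ru": 5200,
    "shop-c.ru": 5100,
    "big-catalog.ru": 30000,
}

typical = median(phrases_in_top100.values())
likely_aggregators = [d for d, n in phrases_in_top100.items() if n > 3 * typical]
print(likely_aggregators)  # -> ['big-catalog.ru']
```

The median is used rather than the mean so that a single outlier aggregator does not inflate the "typical" baseline it is compared against.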

We prepare the received data in the following form:

31. Decoding domains.jpg

Based on the collected data and a manual review of the sites, prepare the classification in the following form:

32. Manual check.jpg

Netpeak Checker returns sites in the form https://url.ru/; the trailing slash needs to be removed. Use the formula =MID(A2;1;LEN(A2)-1) (ПСТР/ДЛСТР in the Russian Excel locale).

33. Remove the last slash.jpg

Now to the macro. On the last sheet, "Grades", insert the domains and their classification. Then go to the Input sheet, select the folder with the prepared data, and click Convert. If everything was done correctly, you will see "OK".

34. Let's go to the macro.jpg

Press the second button. The macro will list the domains across all files that have no classification yet. Repeat the classification process for them.

35. Repeat the decryption process.jpg

In total, 796 domains had to be classified for this topic. On success, pressing the button again makes the macro display an OK message.

36. The macro displays the message Ok.jpg

As a result, we get full analytics on the search results. Go to the Report1 sheet and press the button to fill in Report 0.0-1.3.
37. Go to the sheet Report1.jpg

When you click the Fill in Report 4 button, the Report 2 sheet is populated with data on URL changes, linked to phrases and files.

38. The data about the URL change will be filled in.jpg

The third report sheet contains the query scoring.

39. Scoring requests.jpg

The macro has a handy "Except" function. After the file is loaded and processed, it shows all queries whose results contain no aggregation.

For example, take the real estate segment: 6,000 queries in total. We found 218 queries without aggregation in Yandex and 529 in Google. Those are the ones to work with.

Some queries make it easy to promote a site; for others, the results consist entirely of aggregators and progress is hard. It is better to drop the latter and prioritize accordingly: it is more efficient to work where you are not being squeezed out.
