The problem
In many topics, the first thing a user sees in Yandex and Google in response to a query is a wizard (Yandex's enriched answer widget):
It soaks up CTR, which is bad. After it comes contextual advertising, which takes away another share of the clicks.
And only then do the organic results begin.
In Google it is the same: contextual ads first:
Then the same widgets:
And only then the organic results. Getting search traffic for highly competitive queries is very difficult unless you are a giant company.
Many people promote sites by position. That is normal; the market demands it. But in practice, a site sitting at position 6 in the results receives almost no traffic.
Dependence of CTR on position:
Let's do the math. Suppose we have a query with a frequency of 3,000 and the site ranks 6th for it. Multiply the frequency by the CTR for position 6 (6.23%) to get about 187 visits per month, then apply the average online-store conversion rate of 5%: roughly 9 leads per month. And that is leads, not sales.
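The same arithmetic as a quick Python sketch (the 6.23% CTR and 5% conversion rate come from the example above; the numbers are purely illustrative):

```python
# Rough traffic estimate for a query ranked at position 6.
frequency = 3000         # monthly query frequency
ctr_position_6 = 0.0623  # CTR for position 6 (from the chart above)
conversion = 0.05        # average conversion rate for online stores

visits = frequency * ctr_position_6  # ~187 visits per month
leads = visits * conversion          # ~9 leads per month
print(f"{visits:.0f} visits -> {leads:.0f} leads per month")
```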
What can you do about it? You can complain and write posts, but that will get you nowhere. Or you can take a different path and identify the queries that still bring a lot of traffic. That is what this study was conducted for.
Research objectives and inputs
More than 200,000 queries from 110 commercial topics (sales of goods and services, plus betting and casinos) were used for the research.
From January to August 2019 (for some projects, from December 2018), the top 10 results in Yandex and Google were collected for these topics.
In January, data was collected every week; from February to August, every 3 weeks (changes over 1-2 weeks are minimal). Research regions: Moscow; million-plus cities (Yekaterinburg, Voronezh); cities with a population over 500,000 (Tomsk, Penza); cities over 300,000 (Saransk, Vladimir).
The first task in the study was site markup. From the SERP URLs we extracted domains and assigned each a type:
- aggregator;
- thematic aggregator;
- complex commerce;
- commercial site;
- social network;
- government site;
- search engine's own service;
- information site.
In total, 103,617 domains were identified.
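For illustration, a minimal sketch of this markup step, assuming the manually prepared domain-to-type mapping lives in a plain dictionary (the domain names below are made up, not from the study):

```python
from urllib.parse import urlparse

# Manually prepared classification: domain -> site type (hypothetical entries).
DOMAIN_TYPES = {
    "example-aggregator.ru": "aggregator",
    "example-shop.ru": "commercial site",
    "vk.com": "social network",
}

def classify(url: str) -> str:
    """Extract the domain from a SERP URL and look up its type."""
    domain = urlparse(url).netloc.removeprefix("www.")  # Python 3.9+
    return DOMAIN_TYPES.get(domain, "unclassified")

print(classify("https://www.example-shop.ru/catalog/item"))  # commercial site
```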
We checked whether site characteristics correlate with position:
- number of pages indexed in Yandex and Google;
- presence in Yandex Directory;
- commercial markers in the Title, H1, and Description;
- backlink profile;
- presence in the Keys.so top 500,000;
- SQI (Yandex's site quality index);
- Whois data and the date of the first Web Archive crawl;
- links to social networks;
- phone numbers;
- email addresses.
But the only correlation found was for complex commerce.
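A correlation check of this kind could look like the sketch below; the file and column names are assumptions for illustration, not the study's actual tooling (Spearman's rank correlation suits position data):

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical table: one row per (domain, query), with the SERP position
# and the characteristics listed above as columns.
df = pd.read_csv("serp_features.csv")  # assumed file name

for feature in ["indexed_pages_yandex", "indexed_pages_google",
                "backlinks", "sqi", "phones", "emails"]:
    rho, p = spearmanr(df["position"], df[feature])
    print(f"{feature}: rho={rho:.2f}, p={p:.3f}")
```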
Research objectives:
- get the share of organic results across the entire sample for Yandex and Google, and how it changed in 2019;
- get the list of topics in the sample with the maximum share of aggregators and complex commerce in Yandex and Google, and vice versa;
- understand how the share of aggregators and complex commerce changes in the regions;
- answer the question: is the SEO channel suitable for topics dominated by complex commerce and aggregators;
- build a tool for scoring queries by position for each site type, plus an "Except the selected type" mode.
Results
At the moment, commercial sites occupy 65% of Yandex's search results, and most of them are aggregators.
And here are the dynamics:
Yandex is systematically squeezing out third-party commercial sites (based on aggregated data over 6 months). But there are always exceptions.
In general, aggregators receive little traffic in complex topics (and vice versa).
So if you have a highly specialized site and a strong offer, you will always make money. In the complex-product category you can work in any search engine.
But in mass-market topics the situation is different:
The entry threshold for starting such a business is minimal, and there are a lot of aggregators.
In Moscow, Yandex prioritizes aggregators and complex commerce in its ranking, and the share of URLs of such sites keeps growing. So in complex topics you can work in both search engines, but in simple ones it is better to go to Google.
In the regions:
In the regions, the share of aggregators is growing in both Yandex and Google, while complex commerce is practically unchanged. The reason is that many regional sites are of very poor quality, so traffic goes to the aggregators.
Is SEO dead?
No. SEO is still responsible for 65% of commercial traffic in Yandex.
The existing problem needs to be addressed.
Tool for professionals
Ozhgibesov's solution is query scoring and iterative promotion.
You can proceed like this:
- Collect data to research your topic.
- Prepare the domain classification.
- Get data on the composition of the top 10 in your topic.
- Get data on URL changes for phrases across collection iterations (weeks, months, etc.).
- Select queries whose SERPs contain none of the chosen site type (for example, aggregators) and implement that semantics (see the sketch after this list).
- Select queries where the chosen type has only a minimal presence by position, and implement that semantics too.
- Only after reaching the top for these queries, move on to the most difficult semantics.
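A minimal sketch of the selection in steps 5-6, assuming each SERP snapshot is a query-to-top-10-domains mapping and the classification from step 2 is a domain-to-type dictionary (both structures are assumptions for illustration, not the macro's real format):

```python
# serp: query -> list of top-10 domains; domain_types: domain -> type label.
def aggregator_share(domains: list[str], domain_types: dict[str, str]) -> float:
    """Share of top-10 results classified as aggregators."""
    hits = sum(1 for d in domains if domain_types.get(d) == "aggregator")
    return hits / len(domains)

def score_queries(serp: dict[str, list[str]], domain_types: dict[str, str]):
    """Sort queries by aggregator presence, easiest targets first."""
    scored = {q: aggregator_share(d, domain_types) for q, d in serp.items()}
    return sorted(scored.items(), key=lambda kv: kv[1])

# Queries with a share of 0 form the "no aggregators in the SERP" set to start with.
```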
You need to monitor the SERPs in your topic for at least a month. If you collect the data only once, you will not see the dynamics and will not be able to judge whether the prospects in the chosen topic are good or bad; you can only state the current picture. For example, the current share in Yandex is 65%, but over the past six months commercial sites have lost about 10%. To see that, you have to track the changes.
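To track such dynamics, you could compute the share of each site type per snapshot and compare across dates; a sketch under the same assumed structures as above:

```python
from collections import Counter

def type_shares(serp: dict[str, list[str]], domain_types: dict[str, str]):
    """Percentage of top-10 URLs per site type in one snapshot."""
    counts = Counter(domain_types.get(d, "unclassified")
                     for domains in serp.values() for d in domains)
    total = sum(counts.values())
    return {t: 100 * n / total for t, n in counts.items()}

# Run this over each weekly snapshot and watch, e.g., the "commercial site"
# share: a steady decline is exactly the trend described above.
```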
Next, prepare the domain classification. This is done manually; one specialist can process about 1,500 domains per day.
Then we collect the data. There are two tool options:
- Key Collector with XMLproxy (for Yandex) and XMLRiver (for Google);
- Topvisor and its SERP Snapshot tool.
You need to do this for at least four weeks for each search engine.
For Yandex, let's walk through the process using Key Collector. Take the semantics file and make copies of it.
For Google, let's use Topvisor as the example. Download the file from Topvisor by date.
Open all iterations in Key Collector and export the SERP data in Multi-Group mode. If you collected the SERP four times, export it for all four files.
Topvisor files differ in format from Key Collector ones, so they need to be converted to a single format. To do this, run the macro that processes the data in the folder.
After collecting the data, rename the files.
Yandex files come first, Google files second. This order is mandatory for our macro: it processes the data in that order.
From the first base file, we need to extract the domains from the URLs.
Next, we need the domain classification. Copy the URL column into a separate column and apply Text to Columns.
We get the list:
Delete everything from column D onward so that only the protocol and domain remain.
Insert "//" into column B and fill it down. In cell D1, enter the formula =CONCATENATE(A1,B1,C1) and fill it down.
Copy column D and use Paste Special → Values to replace the formulas with their results.
Select the entire column and remove duplicates.
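The same result can be obtained without Excel; a small sketch assuming the SERP URLs sit one per line in a urls.txt file:

```python
from urllib.parse import urlparse

# Keep unique "protocol://domain" values, mirroring the
# Text to Columns + CONCATENATE + Remove Duplicates steps above.
domains = set()
with open("urls.txt", encoding="utf-8") as f:
    for line in f:
        url = line.strip()
        if url:
            p = urlparse(url)
            domains.add(f"{p.scheme}://{p.netloc}")

for d in sorted(domains):
    print(d)
```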
We feed the resulting 344 domains into Netpeak Checker and collect the following parameters:
- title;
- phone numbers;
- indexed URLs in Yandex;
- domain registration date and the date of the first Web Archive crawl.
For example, if a site has a commercial Title, it is easy to conclude that it is a commercial site.
Optionally, check traffic via SimilarWeb and the number of phrases in the TOP-100 via Serpstat. This helps you get your bearings: if everyone in your topic has about 5,000 phrases in the TOP-100 and some site has 30,000, it is clearly an aggregator.
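This heuristic is easy to automate; a sketch assuming you have exported per-domain phrase counts (the names and numbers below are illustrative):

```python
from statistics import median

# domain -> number of phrases in the TOP-100 (e.g., exported from Serpstat).
phrase_counts = {"shop-a.ru": 4800, "shop-b.ru": 5300, "big-agg.ru": 30000}

med = median(phrase_counts.values())
# Flag domains whose phrase count is far above the topic's median.
likely_aggregators = [d for d, n in phrase_counts.items() if n > 3 * med]
print(likely_aggregators)  # ['big-agg.ru']
```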
We prepare the received data in the following form:
Based on this data and a manual review of the sites, prepare the classification and bring it into the following form:
Netpeak Checker outputs sites in the form https://url.ru/; we need to remove the trailing slash. Use the formula =MID(A2,1,LEN(A2)-1).
Now let's move on to the macro. On the last sheet, "Grades", insert the domains and their type labels. Then go to the Input sheet, select the folder with the data we prepared, and click Convert. If you did everything correctly, you will see "OK".
Press the second button. The macro will show the domains across all files that have no classification yet. Repeat the classification process for them.
In total, 796 domains had to be classified for this topic. If everything is in order, pressing the button again will display the OK message.
As a result, we get full analytics on the SERP. Go to the Report1 sheet and press the button to fill in Report 0.0-1.3.
When you click the Fill in Report 4 button, the Report 2 sheet is filled with data on URL changes tied to phrases and files.
The third report sheet contains the query scoring.
The macro has a handy "Except" function. After loading and processing the file, the program shows all queries whose SERPs contain no aggregators.
For example, we took the real estate segment: 6,000 queries in total. We found 218 queries without aggregators in Yandex and 529 in Google. Those are the ones to work with.
There are queries for which it is easy to promote a site, and there are those whose SERPs consist entirely of aggregators, making it hard to move up. It is better to drop the latter and reprioritize your promotion: it is more efficient to work where you are not being squeezed out.