Semantic core for Yandex.Direct, with examples of how to fill it. Distributing queries across pages


What is important to consider when compiling a semantic core.


How to collect the correct semantic core

If you think that some service or program can build the correct core for you, you will be disappointed. The only service capable of collecting correct semantics weighs about one and a half kilograms and consumes about 20 watts of power. It is the brain.

Moreover, here the brain is put to very specific practical use rather than abstract formulas. In this article, I will show the rarely discussed steps of the semantics collection process that cannot be automated.

There are two approaches to collecting semantics

Approach one (ideal):

  • You sell fences and their installation in Moscow and the Moscow region.
  • You need leads from contextual advertising.
  • You collect all the semantics (expanded phrases) for the query “fences” from every source: from Wordstat to search suggestions.
  • You receive a huge number of queries, tens of thousands.
  • You then spend several months clearing them of garbage and end up with two groups: “needed” queries and negative keywords.

Pros: you get 100% coverage. You took all the real queries with traffic behind the main query “fences” and selected everything you need from them: from the elementary “buy fences” to the non-obvious “installation of concrete parapets on a fence price”.

Cons: two months have passed, and you have only just finished working with the queries.

Approach two (mechanical):

Business schools, trainers and contextual advertising agencies have long wondered what to do about this. On the one hand, they cannot realistically work through the entire array for the query “fences”: it is expensive and labor-intensive, and it cannot be taught quickly. On the other hand, they still have to take money from students and clients somehow.

So a solution was invented: take the query “fences”, multiply it by “prices”, “buy” and “installation”, and off you go. There is no need to parse, clean or cluster anything; the main thing is to multiply the queries in a “multiplier script” (a minimal sketch of such a script follows the list below). At the same time, few people worried about the problems that arose:

  • Everyone comes up with more or less the same multiplications, so queries like “installation of fences” or “buy fences” instantly “overheat.”
  • Thousands of high-quality queries like “corrugated fences in Dolgoprudny” never make it into the semantic core.
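For illustration, here is a minimal sketch of what such a “multiplier script” boils down to. The seed and transactional words are taken from the example above; everything else about the approach stays exactly as naive as described:

```python
from itertools import product

# Seed queries and transactional words from the example above.
seeds = ["fences", "corrugated fences"]
transactional = ["", "buy", "price", "installation"]

# Naive multiplication: every seed is combined with every transactional word.
# This is exactly why everyone ends up bidding on the same overheated phrases.
queries = sorted({f"{seed} {word}".strip() for seed, word in product(seeds, transactional)})

for q in queries:
    print(q)
```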

The multiplication approach has completely exhausted itself. Difficult times are coming, and the winners will be the companies that can solve the problem of high-quality processing of a genuinely large, real semantic core: from selecting bases to cleaning, clustering and creating content for websites.

The purpose of this article is to teach the reader not only to select the correct semantics, but also to maintain a balance between labor costs, core size and personal effectiveness.

What is a basis and how to search for queries

First, let's agree on terminology. A basis is a general query. Returning to the example above: you sell all kinds of fences, which means “fences” is your main basis. If you sell only fences made of corrugated sheets, then your main basis will be “fences made of corrugated sheets”.

But if you are working alone, there are a lot of queries, and campaigns need to be launched, then you can take “corrugated sheet fences price” or “buy corrugated board fences” as a basis. Functionally, the basis serves not so much as an advertising query as a seed for collecting extensions.

For example, the query “fences” gets more than 1.3 million impressions per month in the Russian Federation.

These are not users, not clicks and not queries. This is the number of impressions of Yandex advertising blocks across all queries that include the word “fences”. It is a measure of coverage that applies to a large array of queries united by the occurrence of the word “fences”.

Today, any business represented on the Internet (which is, in fact, any company or organization that does not want to lose its audience of customers online) pays considerable attention to search engine optimization. This is the right approach: it can significantly reduce promotion and advertising costs and, once the desired effect kicks in, create a new source of customers for the business. Among the tools used for promotion is the compilation of a semantic core. In this article we will explain what it is and how it works.

What is “semantics”

So let's start with a general idea of what “collecting semantics” means. On various websites dedicated to search engine optimization and promotion, the semantic core is described as a list of words and phrases that fully describe a site's topic, field of activity and focus. Depending on how large the project is, its semantic core may be large or fairly small.

The task of collecting semantics is considered key if you want to start promoting your resource in search engines and receive “live” search traffic. So there is no doubt it should be taken with complete seriousness and responsibility. A correctly assembled semantic core is often a significant contribution to the further optimization of your project, to improving its positions in search engines and to the growth of indicators such as popularity and traffic.

Semantics in advertising campaigns

Compiling a list of keywords that best describe your project is important not only for search engine promotion. When working with systems such as Yandex.Direct and Google AdWords, it is equally important to carefully select the “keys” that will bring you the most interested customers in your niche.

For advertising, the selection of such thematic words also matters because it can uncover more affordable traffic in your category. For example, this is relevant if your competitors work only on expensive keywords, while you “bypass” those niches and promote where the traffic is secondary to your project but nevertheless interested in it.

How to collect semantics automatically?

In fact, there are now well-developed services that let you create a semantic core for your project in a matter of minutes. One of them is the automatic promotion service Rookee. Working with it can be described in a nutshell: go to the appropriate page of the system, where you are offered to collect all the data about your site's keywords, and enter the address of the resource for which you want to compile the semantic core.

The service automatically analyzes the content of your project, determines its keywords, and extracts the most identifiable phrases and words the project contains. From these it forms a list of words and phrases that can be called the “base” of your site. Frankly, this is the easiest way to assemble semantics; anyone can do it. Moreover, while analyzing suitable keywords, the Rookee system will also tell you the cost of promotion for each keyword and forecast how much search traffic can be obtained by promoting for these queries.

Manual compilation

When selecting keywords automatically, there is really not much to discuss: you simply rely on a ready-made service that suggests keywords based on the content of your site. In practice, the result of this approach will not always suit you 100%. Therefore, we recommend also trying the manual option. We will cover how to collect semantics for a page with your own hands later in this article. Before that, a couple of notes. Collecting keywords manually will take you longer than working with an automatic service; but in return, you will be able to identify higher-priority queries for yourself, based not on the cost or efficiency of their promotion but primarily on the specifics of your company's work, its direction and the features of the services it provides.

Defining the topic

First of all, when talking about how to collect semantics for a page manually, you need to pay attention to the company's subject matter and field of activity. A simple example: if your site represents a company selling spare parts, then the basis of its semantics will naturally be the most frequently used queries (something like “auto parts for Ford”).

As search engine promotion experts note, there is no need to be afraid of high-frequency queries at this stage. Many optimizers mistakenly believe that competition for these frequently used, and therefore more promising, queries is prohibitive. In reality this is not always the case, and the return from a visitor who comes with a specific query like “buy a battery for Ford in Moscow” will often be much higher than from someone looking for general information about batteries.

It is also important to pay attention to the specifics of your business. For example, if your company works in wholesale, the semantic core should include keywords such as “wholesale”, “buy in bulk” and so on. After all, a user who wants to purchase your product or service at retail will simply be of no interest to you.

We focus on the visitor

The next stage is to focus on what the user is looking for. If you want to know how to assemble semantics for a page based on what visitors search for, you need to study the key queries they make. For this there are services such as Yandex.Wordstat and the Google Keyword Tool. These projects serve as a guide for webmasters searching for Internet traffic and make it possible to identify interesting niches for their projects.

They work very simply: you type a search query into the appropriate form, and on its basis the service finds related, more specific ones. So here you will need the high-frequency keywords identified in the previous step.

Filtration

If you want to collect semantics for SEO, the most effective next step is to filter out the “extra” queries that turn out to be inappropriate for your project. These include keywords that are morphologically related to your semantic core but differ from it in essence, as well as keywords that fail to characterize your project properly or characterize it incorrectly.

Therefore, before finalizing the semantics, you need to get rid of the inappropriate keys. This is done very simply: from the entire list of keywords compiled for your project, select those that are unnecessary or unsuitable for the site and simply delete them. In the process of such filtering, you establish the most suitable queries to focus on in the future.
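A minimal sketch of this deletion step, assuming the list of inappropriate words has already been agreed on (all data below is illustrative):

```python
# Words that signal an off-topic query for our project (the list is illustrative).
unwanted = {"free", "photo", "drawing", "diy"}

collected = [
    "buy fences moscow",
    "fences photo",
    "corrugated fences price",
    "fence drawing diy",
]

# Keep a query only if none of its words appears on the unwanted list.
filtered = [q for q in collected if not set(q.split()) & unwanted]
print(filtered)  # ['buy fences moscow', 'corrugated fences price']
```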

In addition to the semantic analysis of the collected keywords, due attention should be paid to filtering them by search volume.

This can be done using the same Google Keyword Tool and Yandex.Wordstat. By typing a query into the search form, you will not only receive additional keywords but also find out how many times a particular query is made per month. This way you will see the approximate amount of search traffic that can be obtained by promoting for these keys. At this stage we are mostly interested in discarding the rarely used, unpopular, simply low-frequency queries whose promotion would be a wasted expense.
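In code, this second filtering pass might look like the sketch below; the frequencies are assumed to have been exported from Wordstat or the Keyword Tool beforehand, and the numbers are made up:

```python
# query -> monthly impressions, as exported from Wordstat (toy numbers).
frequencies = {
    "buy fences moscow": 4400,
    "corrugated fences price": 1900,
    "fence with a wicket gate cheap": 12,
}

MIN_FREQUENCY = 50  # the threshold is a judgment call for each niche

# Drop the low-frequency queries and review the rest, most popular first.
popular = {q: f for q, f in frequencies.items() if f >= MIN_FREQUENCY}
for query, freq in sorted(popular.items(), key=lambda kv: -kv[1]):
    print(f"{freq:>6}  {query}")
```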

Distribution of requests across pages

Once you have a list of the most suitable keywords for your project, you need to match these queries with the pages of your site that will be promoted for them. The most important thing here is to determine which page is most relevant to a particular query, with a correction for the link weight of each page. The rule of thumb is roughly this: the more competitive the query, the more authoritative the page selected for it should be. The most competitive queries go to the home page, while less competitive ones can go to pages at the third nesting level, and so on.
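One way to express this rule of thumb in code; the competition scores and level thresholds below are hypothetical:

```python
# query -> competition score, e.g. the number of advertisers (hypothetical data).
competition = {
    "buy fences": 95,
    "corrugated fences price": 60,
    "fences dolgoprudny": 15,
}

def page_for(score: int) -> str:
    """The more competitive the query, the more authoritative the page for it."""
    if score >= 80:
        return "home page"
    if score >= 40:
        return "section page (level 2)"
    return "inner page (level 3)"

for query, score in competition.items():
    print(f"{query!r} -> {page_for(score)}")
```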

Competitor analysis

Don’t forget that you can always “peek” at how the sites in the top positions for your key queries are being promoted. However, before collecting competitors' semantics, you need to decide which sites belong on this list; it will not always consist of your business competitors' resources.

From the point of view of search engines, those companies may be promoting for entirely different queries, so we recommend paying attention to morphology. Simply enter queries from your semantic core into the search form, and you will see your competitors in the results. Then analyze them: look at the parameters of their domain names and collect their semantics. What this procedure looks like, and how easily it can be done with automated systems, we have already described above.

In addition to everything described above, here are some general tips from experienced optimizers. The first is to work with a combination of high- and low-frequency queries. If you target only one of these categories, your promotion campaign may end in failure. High-frequency queries alone will not bring you the targeted visitors who are looking for something specific; low-frequency queries alone will not give you the required volume of traffic.

You already know how to collect semantics: Wordstat and the Google Keyword Tool will help you determine which words are searched for alongside your keywords. However, do not forget about associated words and typos. These categories of queries can be very profitable if you use them in your promotion. Both can bring a certain amount of traffic; and if a query is low-competition but targeted for us, that traffic will also be as affordable as possible.

Users often ask: how do I collect semantics specifically for Google or Yandex? The question implies optimizing for one particular search engine. This approach is quite justified, but in practice there are no significant differences. Yes, each search engine uses its own algorithms for filtering and ranking content, but it is quite difficult to guess where a site will rank higher. You can only find general recommendations on what to apply when working with a particular search engine, but there are no universal rules (especially in a proven and publicly available form).

Compiling semantics for an advertising campaign

You may be wondering how to collect semantics for Direct. The answer: in general, the procedure matches the one described above. You need to decide which queries are relevant to your site, which pages will interest the user most (and for which key queries), which keys will be most profitable to promote, and so on.

The specifics of collecting semantics for Direct (or any other advertising system) are that you must categorically refuse off-topic traffic, since the cost of a click is much higher than in search engine optimization. For this purpose, “stop words” (or “negative words”) are used. Understanding how to assemble a semantic core with negative keywords requires somewhat deeper knowledge. In this case we are talking about words that bring traffic you are not interested in. Often such a word is “free”, for example when we are talking about an online store in which, a priori, nothing can be free.
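A toy sketch of how negative words cut off such traffic before it costs you a click (the negative list and queries are illustrative):

```python
# Negative words for a paid online store (the list is illustrative).
negatives = {"free", "download"}

queries = [
    "buy thermos stanley",
    "stanley thermos free",
    "stanley catalog download",
]

# Split the array: phrases with a negative word never reach the campaign.
targeted = [q for q in queries if not set(q.lower().split()) & negatives]
excluded = [q for q in queries if set(q.lower().split()) & negatives]

print("to the campaign:", targeted)
print("cut off by negatives:", excluded)
```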

Try to create a semantic core for your website yourself, and you will see that there is nothing complicated here.

Brand semantics

Ideally the process works like this:

1) You determine the marker queries (bases) that reflect your demand.

2) You collect semantics from the marker queries from every source: from Wordstat to search suggestions.

It does not matter which tool you use for collection. All services differ in quality, but the principle is the same: you enter a marker query as input, and the program returns extensions that contain the phrase.

The task that has to be solved manually is determining those very markers (bases). Each of them carries its own demand, key phrases, extensions and coverage. To do this well, you need at least minimal familiarity with the assortment.

With brand semantics, it is clear how to look for markers. A brand, as a rule, has Russian and English spellings, plus series and model names. It is important to account for all erroneous and synonymous spellings. Other cases, with examples, are discussed later in the article.

3) You receive tens or hundreds of thousands of queries, clean them of “garbage” and end up with two groups: the needed queries and negative keywords.

Let's consider brand semantics using the example of “My Planet”, an online store of tourist equipment and goods.

The store carries about 70-80 brands, one of which is Stanley: tools, furniture, dishes and much more. There is no point in collecting all extensions of the single word stanley, or there will be far too much “garbage”. Therefore, we keep markers of 2-3 words:

Most often it is better to take three-word or two-word markers; in some specific cases, one-word markers are acceptable.

Thermoses are the most popular product. The brand name has three common spellings (the correct Latin stanley plus misspelled and transliterated variants), and there are markers for the series: stanley mountain, stanley classic.
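Generating such markers is mechanical once the spelling variants are known. A small sketch; the variant lists here are assumptions for illustration only:

```python
from itertools import product

# Brand spelling variants and series names (assumed lists, for illustration only).
brand_variants = ["stanley", "stanly", "стэнли"]
series = ["mountain", "classic"]

# Two- and three-word markers: brand + series, brand + product word.
markers = {f"{b} {s}" for b, s in product(brand_variants, series)}
markers |= {f"{b} thermos" for b in brand_variants}

print(sorted(markers))
```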

The more bases, the wider the coverage. We have 70 product types, each with 20-50 bases. The total volume of the “tail” is several hundred thousand expanded queries. They can intersect, but only partially: as a rule, the percentage of intersection is low.
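Checking how strongly two bases' tails intersect is a simple set operation; a sketch with stand-in data:

```python
def overlap_pct(a: set, b: set) -> float:
    """Share of the smaller tail that also occurs in the other tail, in percent."""
    if not a or not b:
        return 0.0
    return 100 * len(a & b) / min(len(a), len(b))

# Toy extension tails for two bases of the same brand.
tail_thermos = {"stanley thermos buy", "stanley mountain thermos", "stanley thermos 1l"}
tail_mountain = {"stanley mountain thermos", "stanley mountain 0.75"}

print(f"{overlap_pct(tail_thermos, tail_mountain):.0f}% overlap")  # 50% here
```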

As a result you get 100% coverage, but you spend a lot of time processing the data. To speed up the process, people often fall back on multiplying queries in a multiplier script.

For a branded semantic core, this method speeds up the work. But what do you do when you offer services in a highly competitive market?

Semantics for complex services

In this situation there are many more non-obvious queries, which can only be identified through in-depth analysis.

An example is a “diesel” car service, a client of the MOAB agency.

Initial data: the previous contractor multiplied standard service names by transactional words such as “prices”, “buy” and others. As a result, the bases were “injector repair”, “injection pump repair” and the like.

This approach produces only the most banal formulations. An exact copy of the key, a rearrangement of words, different cases and word forms leave no room for creativity. Everyone, contractors and clients alike, thinks the same way and uses the same wording and transactional words. Queries invented off the top of one's head quickly overheat.

The result is a loss of coverage and, as a consequence, insufficient workload for the service center, since there are no impressions for non-obvious queries. Simple multiplication cannot produce them.

The paradox of the situation: there is little traffic (up to 10 visitors per day), yet the auction is brutal (up to 40 rubles per click). Even for a service center with a huge facility, low costs and a large flow of clients, it is almost impossible to recoup the bids on such a key.

Based on the results of the analysis, we found additional bases (frequency in Moscow is indicated):

Most of them were a revelation for the customer himself: he did not suspect that potential clients could formulate their searches this way, even though he had been working in this field for a long time.

These queries are far from obvious to competitors, and therefore not overheated. The forecast daily traffic is about 400-500 users in total across all systems. The average price for them is much lower than for phrases like “injector repair.”

How do you systematize things when the markers are not tied to a brand and the demand is vague? What the target audience is searching for cannot be invented on the spot or simply heard from the customer.

One problem generates an unknown, large array of queries: the diesel smokes black, gray or white, does not run, knocks, rattles, and so on. Your task is to divide this array into clusters in order to distinguish a finite number of needs.

Demand Variables

In the case of a brand, the “anchor” of demand is the name itself. Stanley products cannot be called anything else; one way or another, it is something with the word “stanley”.

For a complex service, demand breaks down into several components (variables). It is impossible to formulate the problem without at least one of them:

  • Through the unit: the user knows or assumes what is broken (injectors, injection pump, plunger). Then the flight of fancy begins: “the injector is knocking,” “the injector is rattling,” “the injector is smoking,” and so on.
  • Through the car: the user does not know what is broken and does not want to find out; he just writes the brand of his car (Scania, KamAZ, MAN). In our case gasoline cars are not our profile; we select only those that run on diesel.
  • Through the fuel: the person indicates neither the car nor the unit, only the fuel type: “diesel engine”, “diesel”, “diesel car repair”.
  • Through the manifestation of the problem (“smokes”, “does not run”). For example, black smoke is a typical diesel problem; there is no need to specify whether the engine is gasoline or diesel.
  • Through the error code on the car scanner (“error 1235”, “error 0489”).

With high probability, a person whose diesel engine has broken down will use at least one value of these variables in the query. This is the “anchor” around which the demand revolves.

Recommendation: to break queries down into variables and select their values, you need to imagine how your potential audience talks about its problems. For this it is useful to study competitors' websites, thematic forums, communities and so on.
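As a sketch, the variables can be encoded as dictionaries of anchor words, and each query assigned to the first cluster whose anchors it mentions. All word lists here are illustrative and far from complete:

```python
import re

# Anchor words per demand variable (illustrative and far from complete).
clusters = {
    "unit":    ["injector", "injection pump", "plunger"],
    "car":     ["scania", "kamaz", "man"],
    "fuel":    ["diesel"],
    "symptom": ["smokes", "knocks", "rattles", "does not start"],
}
error_code = re.compile(r"\berror\s*\d{3,4}\b")

def classify(query: str) -> str:
    """Naive first-match classification; a real core needs manual review on top."""
    q = query.lower()
    if error_code.search(q):
        return "error code"
    for name, anchors in clusters.items():
        if any(re.search(rf"\b{re.escape(a)}\b", q) for a in anchors):
            return name
    return "unclassified"

for q in ["injector knocks", "kamaz smokes black", "error 1235", "gearbox hums"]:
    print(f"{q!r} -> {classify(q)}")
```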

How is this different from multiplication?

Imagine a mountain with gold bars inside that you need to extract. The standard method is to dig a mine into the mountain and collect the gold that turns up along the way.

Another option: you tear down the whole mountain with an excavator and haul it to a processing plant. This is more labor-intensive and requires more competencies, but from the entire mass of rock you will collect all the gold.

By analogy, we take all the demand for the query “diesel” and its spelling variants and work through all the expansions in Wordstat in depth. Then we collect search suggestions for each of them. On the resulting array we check frequencies, remove duplicates and get the final volume of queries.
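A rough sketch of the dedup step over the merged array; the normalization here is deliberately simplistic (real tools also collapse word forms):

```python
from collections import defaultdict

# (query, frequency) pairs merged from Wordstat and search suggestions (toy data).
raw = [
    ("Diesel smokes black", 880),
    ("diesel  smokes black", 880),  # duplicate differing in case and spacing
    ("diesel does not start", 590),
]

totals = defaultdict(int)
for query, freq in raw:
    key = " ".join(query.lower().split())  # deliberately simplistic normalization
    totals[key] = max(totals[key], freq)   # collapse duplicates, keep one count

print(dict(totals))
```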

Let's say we end up with 100 thousand queries. What do we do with them next? We select the phrases we need.

To do this, we load each array into “Group Analysis” in Key Collector and use the frequency dictionary of queries: group it by phrases and review the keywords in the tab.

What we get:

At this stage there is no need to clear the array of negative keywords and the like. You just need to look through the frequency dictionary and identify the two-word phrases that clearly indicate auto repair.

What are the advantages? Frequently used groups of queries are tied to the most popular problems. The program sorts the groups by the number of phrases they contain. You look through all the results and identify the groups that fit your problem.

You get everything that can be collected in this topic. And it is normal if you start with 100 thousand queries containing the word “diesel” and end up with only 10 thousand after analysis.
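What the frequency dictionary does can be approximated with a bigram counter: the most frequent two-word groups are the ones worth reviewing by eye (toy data below):

```python
from collections import Counter

# A tiny slice of the collected array (toy data).
queries = [
    "diesel injector repair",
    "diesel injector repair price",
    "diesel smokes black",
    "injector repair moscow",
]

# Count two-word combinations: frequent bigrams point at popular problems.
bigrams = Counter()
for q in queries:
    words = q.split()
    bigrams.update(zip(words, words[1:]))

for (w1, w2), n in bigrams.most_common(3):
    print(n, w1, w2)
```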

You do similar work with all variable values.

Semantics for exact demand

For the YapiMotors auto repair center, the MOAB agency compiled one of the largest semantic cores in its history. The client's specifics: he needs precisely targeted traffic.

The customer clearly outlined the initial conditions: there is an exact list of 300 jobs that he performs, and a list of 70 brands he works with.

At the first stage, we looked for all possible names for each item on the list of jobs:

  • Brake repair: repair of the brake system, replacement of the brake system, brakes do not work, etc.
  • Engine repair: internal combustion engine repair, internal combustion engine replacement, motor repair, etc.

Japanese names are complex and often misspelled. As a result, the 70 given makes/models turned into 270 lines of various spellings in Latin and Cyrillic.

The market is large, and the company did not lay claim to all of it. That is logical: one car service center cannot serve the whole of Moscow. Its goal is a small share of this demand, but as hot as possible and for minimal money. Therefore, we identified the queries that already carry an urgent need.

If a user types “black smoke diesel” into a search engine, he may drive the smoking car for another week before it stops completely. But a user typing “chassis repair price” can be converted right now.

We multiply 450 job names by 270 model spellings, get a list of markers, and check their frequencies.
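The generation itself is a plain Cartesian product; the frequency check then discards combinations nobody searches for. In the sketch below, get_frequency is a hypothetical stand-in for a real Wordstat lookup:

```python
from itertools import product

works = ["brake repair", "engine repair"]    # 450 name variants in the real project
models = ["toyota corolla", "honda civic"]   # 270 spellings in the real project

def get_frequency(phrase: str) -> int:
    """Hypothetical stand-in for a real Wordstat frequency lookup."""
    return len(phrase) % 3  # dummy value, only so that the sketch runs

markers = [f"{work} {model}" for work, model in product(works, models)]
bases = [m for m in markers if get_frequency(m) > 0]  # keep non-zero frequency only
print(bases)
```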

About 5 thousand bases showed non-zero frequency, with a “tail” of 50 thousand extensions. Unlike the diesel core, which contained a lot of “garbage,” this one contains a minimal number of negative keywords, and almost all queries are targeted.

Semantics for feeds

Why collect it?

Semantics for feeds ensures a reliable negative-keyword file and clean traffic.

A standard list of negative keywords is not enough: at best it covers 30-40% of the real negatives. Each topic also has queries containing characteristic words that make the query itself untargeted and irrelevant to you. Therefore, you need to collect negative keywords for feeds based on real queries.

An example: queries for Bosch auto parts, an array of several hundred thousand. From it we identified the queries containing numbers (these are product queries); there were 20-30 thousand of them. From these we compiled a frequency dictionary to find groups with irrelevant demand. It is important to take real phrases for the specific brand.
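Separating the product queries from the rest of the array reduces to a digit check; a sketch with toy data:

```python
import re

queries = [
    "bosch spark plugs",
    "bosch 0242235666",
    "bosch wiper 3397007462 buy",
]

# Queries containing digits are treated as product (part-number) queries.
has_digit = re.compile(r"\d")
product_queries = [q for q in queries if has_digit.search(q)]
print(product_queries)
```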

This gives you a more accurate negative-keyword file, which lets you block untargeted impressions. As a result, across Russia the average click price was 7-10 rubles, and the cost per application was 60-70 rubles. We achieved high conversion because we attracted only traffic that was close to a purchase.

Pitfalls of feed advertising

Suppose you have 10 thousand auto parts. Along the way, it is important to check whether any part numbers have double meanings. A part number can denote both a product and, say, a GOST standard or instructions unrelated to your topic, or a product from a completely different field.

How do you check this? Take the list of part numbers and check their frequencies, then manually identify the numbers with double meanings. For these, you clarify the semantics: add the brand name or qualifying words to exclude impressions on untargeted coverage.
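Once the ambiguous numbers are known, clarifying them is a string operation; a sketch (the set of ambiguous SKUs is assumed to have been compiled by hand, as described above):

```python
# Part numbers found by hand to have double meanings (illustrative set).
ambiguous = {"1235", "52857"}

def feed_phrase(sku: str, brand: str = "bosch") -> str:
    """Ambiguous SKUs get the brand prepended; unambiguous ones run as-is."""
    return f"{brand} {sku}" if sku in ambiguous else sku

for sku in ["1235", "0242235666"]:
    print(sku, "->", feed_phrase(sku))
```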

Is it enough to add the part number to the ad, or do you need “%part_number + %buy” (or another transactional word)?

The second option does not bring additional traffic, but it lets you directly control the ad position not only for the part-number query but also for the query “%part_number + %buy” by connecting a bid manager.


Once you have created an account and Key Collector has been successfully configured, you can proceed directly to compiling the semantic core.

Compiling a semantic core in Key Collector

Before you start collecting key phrases for Yandex.Direct, I recommend reading up on key phrases first; you will find a lot of useful information there (aimed at beginners). Have you read it? Then it will not be difficult for you to assemble the key phrase masks that are essential for parsing with Key Collector.

  1. Be sure to specify the region for which keywords are collected:
  2. Click the “Batch collection of words from the left column of Yandex.Wordstat” button:
  3. Enter the key phrase masks and distribute them into groups: This is the result. Click “Start collection”: Grouping is done for the convenience of processing key phrases: queries will not be mixed together in one group, and it will be much easier for you to process them;
  4. Wait until the collection of key phrases is complete. Once the process has finished, you can collect the exact frequency of the queries, and also find out the approximate cost of a click on the ad, the approximate number of ad impressions, the approximate budget and the number of competitors for a specific query. All of this is available through a single button, “Collecting Yandex.Direct statistics” (we added it to the quick access panel):
    Check all the boxes as in the screenshot above and click “Get data”;
  5. Wait for the process to complete and view the results. To make this convenient, click the column auto-adjustment button, which leaves visible only those columns that contain data:
    We need the statistics we have just collected in order to analyze the competitive situation for each key phrase and estimate the approximate advertising costs for them;
  6. Next, we’ll use Key Collector’s most convenient tool, “Group Analysis”. We’ve added it to the quick access panel, so just open it from there:
    Key Collector will group all key phrases by words, making it convenient to process each group of queries. Your task: look through the entire list of groups; find the groups of queries containing non-target words, that is, negative words, and add those words to the appropriate list; mark these query groups so you can delete them later. You can add a word to the list by clicking the small blue button: A small window will then appear where you need to select a list of negative words (list 1(-)) and click the “Add to stop words” button: Work through the entire list this way. Don’t forget to mark the groups with non-target words; the key phrases are marked automatically in the search queries table;
  7. Then delete the marked non-target phrases in the search queries table. This is done with the “Delete phrases” button:
  8. We continue processing the phrases. As you remember, the “Few impressions” status appeared in Yandex.Direct at the beginning of 2017 (we have dealt with it before), and to avoid this status you need to separate the low-frequency queries into their own group (a script sketch of this split follows the list). First, apply a filter to the “Base Frequency” column:
    Filter parameters: base frequency less than or equal to 10. I chose these parameters based on the display region, Izhevsk:
    Then mark all the filtered phrases:
  9. Create a subgroup inside the group you are currently working in using the simple keyboard shortcut Ctrl+Shift+T: Then transfer the filtered phrases from the “Buy iPhone 6” group to the “Few impressions” subgroup. We do this by transferring the phrases to another group:
    Then specify the transfer parameters as in the screenshot below (Run transfer of the marked phrases):
    Remove the filter from the “Base Frequency” column:
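Outside Key Collector, the same low-frequency split from step 8 is a one-line partition; a sketch with toy data (the threshold of 10 matches the filter above):

```python
FEW_IMPRESSIONS_THRESHOLD = 10  # matches the base-frequency filter above

# phrase -> base frequency (toy data for the "Buy iPhone 6" group).
phrases = {"buy iphone 6": 5400, "buy iphone 6 rose gold 128gb": 7}

main_group = {q: f for q, f in phrases.items() if f > FEW_IMPRESSIONS_THRESHOLD}
few_impressions = {q: f for q, f in phrases.items() if f <= FEW_IMPRESSIONS_THRESHOLD}

print("main group:", main_group)
print("few impressions:", few_impressions)
```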

You process the remaining groups in exactly the same way. The method may seem tedious at first glance, but with some practice you can build a semantic core for Yandex.Direct quite quickly, then create the campaigns in Excel and upload them. Processing a semantic core this way takes me about 2 hours, although that depends entirely on the amount of work.

Export key phrases to Excel

All that remains is to export the key phrases to a file for work in Excel. Key Collector offers two export formats: csv and xlsx. The second is much preferable, since it is much more convenient and, for me personally, more familiar to work with. You can specify the file format in the program settings, on the “Export” tab:

You can export key phrases by clicking the green icon in the quick access panel:

Each group is exported separately, that is, each group becomes a separate xlsx file. You can, of course, put all the query groups into one file using the “Multi-Groups” tool, but then working with that file will be extremely inconvenient, especially if there are many groups.

Next, export your negative keywords. To do this, go to “Stop Words” and copy the negative words to the clipboard so you can paste them into Excel:

This is how I work with Key Collector, and now you know how too. I sincerely hope this lesson helps you master this wonderful tool, and that your semantic core brings you nothing but targeted traffic and lots and lots of sales.

See you soon, friends!
