What to do with the semantic core after compiling it. The Yandex Wordstat keyword selection service


The semantic core of a site is the set of keywords and phrases that most fully describe the site's theme and focus.

Compiling a semantic core is the second most important step in creating a website, after choosing its theme. All future promotion of the site in search engines depends on how well the semantic core is compiled.

At first glance, choosing keywords for a website is not difficult. But this process has a large number of nuances that need to be taken into account when compiling a semantic core.

In today's article we will try to understand all the features of compiling a semantic core.

Why is the semantic core compiled?

A semantic core is important for a website of any subject, from an ordinary blog to an online store. Blog owners need to work constantly on improving their search rankings, and the keywords of the semantic core play the main role in this. Owners of online stores need to know how customers search for the products the store sells.

To bring your site into the TOP 10 of search results, you need to compile the semantic core correctly and optimize high-quality, unique content for it. Without unique content there is no point in talking about the benefits of a semantic core: every unique article should be optimized for one query or several similar ones.

How to create a semantic core of a website?

An online keyword selection service can help you compile the semantic core of your site. Almost every search engine has one: Yandex has Wordstat, Google has Google AdWords (Keyword Planner), and Rambler has Rambler Adstat. In these services you can select the main keywords on a specific topic in various word forms and combinations.

In Wordstat's left column you can see the number of queries per month not only for a given word, but also for various combinations of that word or phrase. The right column shows statistics on queries that users searched for along with the given one. This information can be useful for creating relevant content for your site.

Yandex Wordstat also lets you select a specific region to see query statistics for that region only. This is useful for companies that provide services within a single region.

The Key Collector program can also help compile a semantic core. With it, you can quickly collect keywords and assess their effectiveness and competitiveness. The program can also analyze a site for how well it matches its semantic core.

The main disadvantage of Key Collector is that it is paid; the program costs 1500 rubles.

You can also use the drop-down suggestions in search engines. If you enter "semantic core" into Google, it will suggest several more keywords related to the entered query.

What to do with the semantic core of the site?

After compiling the list of keywords, divide it into groups by query frequency. All search queries fall into three categories: high-frequency, mid-frequency, and low-frequency.

It is convenient to arrange the semantic core as a table with high-frequency queries at the top, mid-frequency below them, and low-frequency at the bottom. The words and phrases in each row should be similar in theme and morphology.

A properly composed semantic core of a website can greatly facilitate further website promotion in search engines. Website promotion determines its traffic, and along with it, income.

If you are asking "how do I compose a semantic core?", then before answering that question you first need to understand what you are dealing with.

The semantic core of a site is a list of phrases that users enter into search engines. Accordingly, the pages being promoted must answer those queries. You cannot cram a pile of unrelated key phrases onto the same page: one main search query = one page.

It is important that the keywords correspond to the theme of the site, do not have grammatical errors, have a reasonable frequency, and also correspond to a number of other characteristics.

The semantic core is usually stored in an Excel table. This table can be stored/created anywhere - on a flash drive, in Google Docs, on Yandex.Disk or somewhere else.

Here is a clear example of the simplest layout:

Features of selecting the semantic core of a site

First, you need to understand (at least roughly) what phrases your audience uses when working with a search engine. This will be quite enough for working with tools for selecting key phrases.

What keywords does the audience use?

Keys are simply the phrases users enter into search engines to find particular information. For example, someone who wants to buy a refrigerator types "buy a refrigerator", "buy an inexpensive refrigerator", "buy a Samsung refrigerator", and so on, depending on their preferences.

Now let's look at the characteristics by which keys can be classified.

Characteristic 1 - popularity. Here keys can be roughly divided into high-frequency, mid-frequency and low-frequency.

Low-frequency queries (LF) have a frequency of up to 100 impressions per month, mid-frequency (MF) up to 1000, and high-frequency (HF) above 1000.

However, these figures are purely conventional, and there are many exceptions. Take cryptocurrency: there it is more sensible to treat queries with up to 10,000 impressions per month as low-frequency, 10 to 100 thousand as mid-frequency, and everything above as high-frequency. Today the keyword "cryptocurrency" has over 1.5 million impressions per month, and "bitcoin" has passed 3 million.
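If the core lives in a spreadsheet or a plain list, tagging each phrase with its tier is easy to automate. Below is a minimal Python sketch that assumes the conventional thresholds mentioned above; the cutoffs and the sample numbers are illustrative and should be adjusted per niche.

    # Classify key phrases into low-, mid- and high-frequency tiers.
    # Thresholds follow the conventional values from the text; adjust per niche.
    def frequency_tier(impressions_per_month, lf_max=100, mf_max=1000):
        """Return 'LF', 'MF' or 'HF' for a monthly impression count."""
        if impressions_per_month <= lf_max:
            return "LF"
        if impressions_per_month <= mf_max:
            return "MF"
        return "HF"

    # Illustrative phrases and frequencies, not real Wordstat data.
    keywords = {
        "buy an inexpensive samsung refrigerator": 45,
        "refrigerator": 8700,
        "buy a refrigerator with delivery": 950,
    }

    for phrase, freq in keywords.items():
        print(f"{frequency_tier(freq):>2}  {freq:>6}  {phrase}")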

Even though "cryptocurrency" and "bitcoin" look like very tasty search queries at first glance, it is much wiser (at least in the early stages) to focus on low-frequency queries. Firstly, they are more precise, which makes it easier to prepare relevant content. Secondly, there are ALWAYS tens to hundreds of times more low-frequency queries than high-frequency and mid-frequency ones (and in 99.5% of cases, more than both combined). Thirdly, the "low-frequency core" is much easier and faster to expand than the other two. But this does not mean that mid- and high-frequency queries should be ignored.

Characteristic 2 - user intent. Queries can be roughly divided into 3 groups (a small classification sketch follows the list):

  • transactional - imply some kind of action (contain the words “buy”, “download”, “order”, “delivery”, etc.)
  • informational - simply searching for certain information (“what will happen if”, “what is better”, “how to do it correctly”, “how to do it”, “description”, “characteristics”, etc.)
  • others. This is a special category, because it is not clear what exactly the user wants. For example, let's take the request “cake”. "Cake" what? Buy? Order? Bake according to the recipe? View photos? Unclear.
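As a rough illustration of this classification, intent can be pre-sorted by marker words before the manual SERP check described below. A minimal Python sketch; the marker lists are illustrative, not exhaustive:

    # Rough intent classification by marker words (illustrative lists only).
    TRANSACTIONAL_MARKERS = {"buy", "download", "order", "delivery", "price"}
    INFORMATIONAL_MARKERS = {"how", "what", "why", "review", "description", "characteristics"}

    def classify_intent(query):
        """Return 'transactional', 'informational' or 'other' for a query."""
        words = set(query.lower().split())
        if words & TRANSACTIONAL_MARKERS:
            return "transactional"
        if words & INFORMATIONAL_MARKERS:
            return "informational"
        return "other"  # ambiguous queries like "cake" need a SERP check

    for q in ("buy a cake", "how to bake a cake", "cake"):
        print(q, "->", classify_intent(q))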

Now, how to apply the second characteristic.

Firstly, it is better not to "mix" these queries. Say we have 3 search queries: "dell 5565 amd a10 8 GB hd laptop buy", "dell 5565 amd a10 8 GB hd laptop review" and "dell 5565 amd a10 8 GB hd laptop". The keys are almost identical, but it is the differences that play the decisive role. The first is a transactional query, for which the product card should be promoted. The second is informational, and the third is "other". The informational key needs its own page, so what do we do with the third key? Very simple: look at the TOP 10 of Yandex and Google for that query. If it contains many commercial offers, the query is effectively commercial; if not, it is informational.

Secondly, transactional queries can also be divided into "commercial" and "non-commercial". On commercial queries you will have to compete with heavyweights: for "buy samsung galaxy" you are up against Euroset and Svyaznoy, for "buy an ariston oven" against M.Video and Eldorado. What to do? Aim at queries with much lower frequency. Today the query "buy samsung galaxy" has about 200,000 impressions per month, while "buy samsung galaxy a8" (a quite specific model line) has 3600. The difference in frequency is enormous, yet the second query (precisely because a specific model is implied) can bring you far more real traffic than the first.

Anatomy of search queries

The key phrase can be divided into 3 parts: body, specifier, and tail.

For clarity, take the previously discussed "other" query, "cake". What the user wants is unclear, because the query consists only of a body and has no specifier or tail. It is high-frequency, which means fierce competition in the search results, yet 99.9% of the people who land on a page for it will say "no, this is not what I was looking for" and leave immediately, and that is a negative behavioral factor.

Let’s add the “buy” specifier and get a transactional (and as a bonus, also a commercial) request “buy a cake.” The word “buy” reflects the user’s intent.

Let’s change the specifier to “photo” and get the request “cake photo”, which is no longer transactional, because the user is simply looking for photos of cakes and is not going to buy anything.

That is, it is the specifier that determines what kind of query we have: transactional, informational, or other.

We've sorted out the sale of cakes. Now let's add the phrase "for a wedding" to the query "buy a cake"; this will be the query's "tail". Tails make queries more specific and more detailed without changing the user's intent. In this case, since the cake is for a wedding, cakes with the inscription "happy birthday" are discarded at once - they simply do not fit.

That is, if we take the queries:

  • buy a birthday cake
  • buy a wedding cake
  • buy an anniversary cake

we see that the user's goal is the same - "buy a cake" - while "for a birthday", "for a wedding" and "for an anniversary" reflect the need in more detail.

Now that you know the anatomy of search queries, you can derive a kind of formula for selecting a semantic core. First you define the basic terms directly related to your activity, and then you collect the most suitable specifiers and tails for them (more on how a little later).
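The same anatomy can be written down as a tiny sketch: given a known body and a list of possible specifiers, whatever is left over is the tail. This is purely illustrative; real queries need morphology-aware matching.

    # Split a query into body / specifier / tail, assuming the body and the
    # set of possible specifiers are known in advance (illustrative only).
    SPECIFIERS = {"buy", "order", "photo", "recipe"}

    def split_query(query, body):
        words = query.lower().split()
        specifier = next((w for w in words if w in SPECIFIERS), None)
        tail = " ".join(w for w in words if w not in (body, specifier))
        return body, specifier, tail or None

    print(split_query("buy a wedding cake", "cake"))
    # -> ('cake', 'buy', 'a wedding')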

Clustering of the semantic core

Clustering refers to the distribution of previously collected requests across pages (even if the pages have not yet been created). This process is often called “grouping the semantic core.”

Here many people make the same mistake: queries must be grouped by meaning, not by how many pages the site or a section currently has. Pages can always be created later if needed.

Now let's figure out which keys should be distributed where. Let's do this using the example of a structure that already has several sections and groups:

  1. Home page. For it, only the most important, competitive and high-frequency queries are selected, which are the basis for promoting the site as a whole. (“beauty salon in St. Petersburg”).
  2. Categories of services/products. It is logical to place queries here that do not carry much specificity. In the case of a "beauty salon in St. Petersburg", it makes sense to create several categories using keys such as "makeup artist services", "men's section", "women's section", etc.
  3. Services/products. More specific queries should already appear here - “wedding hairstyles”, “manicure”, “evening hairstyles”, “coloring”, etc. To some extent, these are “categories within a category.”
  4. Blog. Information requests are suitable here. There are many more of them than transactional ones, so there should be more pages that will be relevant to them.
  5. News. Keys that are most suitable for creating short news notes are highlighted here.

How Query Clustering Is Performed

There are 2 main methods of clustering - manual and automatic.

Manual clustering has 2 main disadvantages: it is slow and labor-intensive. On the other hand, you control the entire process personally, which means you can achieve very high quality. For manual clustering Excel, Google Sheets or Yandex.Disk spreadsheets are quite sufficient; the main thing is being able to filter and sort the data by the parameters you need.

Many people use the Keyword Assistant service for clustering. Essentially, this is manual clustering with elements of automation.

Now let’s look at the pros and cons of automatic grouping; fortunately, there are many services (both free and paid) and there is plenty to choose from.

For example, the free clustering service from the SEOintellect team is worth attention; it is suitable for working with small semantic cores.

For "serious" volumes (several thousand keys) it makes sense to use paid services such as Topvisor, SerpStat and Rush Analytics. They work like this: you upload the key queries and in the end receive a ready Excel file. The 3 services mentioned follow roughly the same scheme: they group phrases by meaning, analyze how phrases intersect, and check the TOP-30 search results for each query to see which URLs rank for several phrases at once. Based on all this, the queries are distributed into groups. Everything happens "in the background".
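The heart of such automatic grouping is simple: two phrases land in the same cluster when their top search results share enough URLs. A minimal Python sketch of this idea; the SERP data here is a hand-made stub, whereas the services above fetch it from live search results.

    # Group phrases whose top search results overlap (simplified single-link clustering).
    # serp is a stub: phrase -> set of URLs from the top of the search results.
    serp = {
        "wedding hairstyles": {"a.com/1", "b.com/2", "c.com/3"},
        "hairstyles for a wedding": {"a.com/1", "b.com/2", "d.com/4"},
        "manicure price": {"e.com/5", "f.com/6", "g.com/7"},
    }

    MIN_SHARED_URLS = 2  # how many common URLs count as "the same intent"

    def cluster(serp, threshold=MIN_SHARED_URLS):
        clusters = []
        for phrase, urls in serp.items():
            for group in clusters:
                if any(len(urls & serp[other]) >= threshold for other in group):
                    group.append(phrase)
                    break
            else:
                clusters.append([phrase])
        return clusters

    print(cluster(serp))
    # -> [['wedding hairstyles', 'hairstyles for a wedding'], ['manicure price']]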

Programs for creating a semantic core

There are many paid and free tools, there is plenty to choose from.

Let's start with the free ones.

The wordstat.yandex.ru service. It is free. For convenience, it is recommended to install the Wordstat Assistant plugin in your browser, which is why we will look at these 2 tools together.

How it works?

Very simple.

For example, let's put together a small core on travel packages to Antalya. As the "base" we will take the query "tours to Antalya" (the number of "base" queries does not matter here).

Now go to https://wordstat.yandex.ru/, log in, enter the first "base" query and get a list of keys. Then, using the plus icons, add the suitable keys to the list. Note: if a key phrase is colored blue and has a plus on its left, it can be added to the list; if the phrase is greyed out and marked with a minus, it is already in the list, and clicking the minus removes it. The key list on the left and these plus/minus icons are exactly the features added by the Wordstat Assistant plugin, without which working in Yandex.Wordstat hardly makes sense at all.

Also note that the list is kept until you personally edit or clear it. That is, if you type "samsung tvs" into the line, the Yandex.Wordstat results will be refreshed, but the keys already collected in the plugin list will remain.

Following this scheme, we run all the prepared "base" keys through Wordstat, collect everything we need, and then copy the collected list to the clipboard with one of the two copy buttons. Note that the button with two sheets copies the list without frequencies, while the one with two sheets and the number 42 copies it with frequencies.

The list copied to the clipboard can then be pasted into an Excel spreadsheet.

During collection you can also view impression statistics by region; Yandex.Wordstat has a dedicated switch for this.

Well, as a bonus, you can look at the request history - find out when the frequency increased and when it decreased.

This feature can be useful in determining the seasonality of a request, as well as for identifying a decline/growth in popularity.

Another interesting feature is the statistics of impressions for the specified phrase and its forms. To do this, you must enclose the query in quotation marks.

Well, if you add an exclamation mark before each word, then the statistics will display the number of impressions by key without taking into account word forms.

No less useful is the minus operator. It removes key phrases that contain the word (or several words) you specify.

Another handy operator is the vertical bar. It combines several key lists into one (for keys of the same type). For example, take two keys: "tours to Antalya" and "vouchers to Antalya". We write them in the Yandex.Wordstat line as shown below and get the two lists for these keys combined into one.
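For reference, the operator syntax in the Wordstat query line looks roughly like this (schematic illustrations of the operators described above, using the example phrases):

    tours to Antalya | vouchers to Antalya   <- vertical bar: two key lists combined into one
    "tours to Antalya"                       <- quotes: impressions for the phrase and its word forms only
    !tours !to !Antalya                      <- exclamation marks: impressions without other word forms
    tours to Antalya -cheap                  <- minus: drop phrases that contain "cheap"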

As a result we get many keys that contain "tours" but not "vouchers", and vice versa.

Another important feature is binding the frequency to a region; the region is selected with the corresponding switch.

Using Wordstat to collect a semantic core is suitable if you are collecting mini-cores for some individual pages, or you don’t plan large cores (up to 1000 keys).

SlovoEB and Key Collector

We're not kidding, that's exactly what the program is called. In a nutshell, the program allows you to do exactly the same thing, but in automatic mode.

This program was developed by the LegatoSoft team, the same team behind Key Collector, which we will also discuss. In essence, Slovoeb is a heavily trimmed-down (but free) version of Key Collector, yet it copes fine with collecting small semantic cores.

Especially for Slovoeb (or Key Collector) it makes sense to create a separate account on Yandex (if they ban you, it’s not a pity).

A few small one-time settings are required.

The login-password pair must be entered separated by a colon and without spaces. That is, if your login is my_login@yandex.ru and the password is 15101510ioioio, the pair looks like this: my_login:15101510ioioio

Please note that there is no need to enter @yandex.ru in your login.

This setup is a one-time event.

Let's make a couple of points clear:

  • How many projects to create for each site is up to you to decide
  • Without creating a project, the program will not work.

Now let's look at the functionality.

To collect keys from Yandex.Wordstat, on the “Data Collection” tab, click on the “Batch collection of words from the left column of Yandex.Wordstat” button, insert a list of previously prepared key phrases, click “Start collection” and wait for it to finish. There is only one drawback to this collection method - after parsing is completed, you have to manually delete unnecessary keys.

The output is a table with the keywords collected from Wordstat and their base impression frequency.

But remember that quotation marks and the exclamation mark can be used too, right? That is what we will do, and this functionality is implemented in Slovoeb.

We start collecting frequencies in quotes and watch how the data gradually appears in the table.

The only negative is that the data is collected through the Yandex.Wordstat service, which means that even collecting frequencies for 100 keys will take quite a lot of time. However, this problem is solved in Key Collector.

And one more function that I would like to talk about - collecting search tips. To do this, copy the list of previously parsed keys to the clipboard, click the button for collecting search tips, paste the list, select the search engines from which search tips will be collected, click “Start collection” and wait for it to finish.

As a result, we get an expanded list of key phrases.

Now let’s move on to Slovoeb’s “big brother” - Key Collector.

Key Collector is paid but has much broader functionality. If you are professionally involved in website promotion or marketing, Key Collector is simply a must-have, because Slovoeb will no longer be enough. In short, Key Collector can:

  • Parse keys from Wordstat*.
  • Parse search suggestions*.
  • Filter search phrases by stop words*.
  • Sort queries by frequency*.
  • Identify duplicate queries.
  • Identify seasonal queries.
  • Collect statistics from Liveinternet.ru, Metrica, Google Analytics, Google AdWords, Direct, Vkontakte and others.
  • Determine the relevant page for a given query.

(the * marks functionality that is also available in Slovoeb)

The process of collecting keywords from Wordstat and collecting search suggestions is exactly the same as in Slovoeb. Frequency collection, however, is implemented in two ways: through Wordstat (as in Slovoeb) and through Direct. Collecting frequencies through Direct is several times faster.

It is done as follows: click the D button (short for "Direct"), tick the box to fill in the Wordstat statistics columns, tick (if necessary) which frequency you want to get (base, in quotes, or in quotes with exclamation marks), click "Get data" and wait for the collection to finish.

Collecting data through Yandex.Direct takes much less time than through Wordstat. There is one drawback: statistics may not be collected for every key (for example, when the key phrase is too long), but this is compensated by collecting the missing data from Wordstat.

Google Keyword Planner

This tool is extremely useful for collecting a core based on the needs of Google search engine users.

Using Google Keyword Planner you can find new queries based on a query (however strange that sounds), and even based on a site or topic. As a bonus, you can also forecast traffic and combine new search queries.

For existing queries, statistics can be obtained by selecting the appropriate option on the service's home page. If necessary, you can choose a region and negative keywords. The result is exported in CSV format.
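Once the CSV is exported, folding it into your core table takes only a few lines of Python. A minimal sketch; the file name and column headers here are assumptions, so check them against your own export (the headers depend on the interface language and format).

    import csv

    # Read a Keyword Planner export and keep phrases with a numeric frequency.
    # File name and column names are assumed; check the headers of your export.
    with open("keyword_planner_export.csv", newline="", encoding="utf-8") as f:
        rows = csv.DictReader(f)
        core = [
            (row["Keyword"], int(row["Avg. monthly searches"]))
            for row in rows
            if row["Avg. monthly searches"].isdigit()
        ]

    for phrase, freq in sorted(core, key=lambda pair: -pair[1]):
        print(freq, phrase)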

How to find out the semantic core of a competitor’s website

Competitors can also be our friends, because you can borrow keyword ideas from them. For almost any page you can obtain, even manually, the list of keywords it is optimized for.

The first way is to study the page content, Title, Description, H1 and KeyWords meta tags. You can do everything manually.

The second way is to use Advego or Istio services. This is quite enough to analyze specific pages.

If you need to perform a comprehensive analysis of the semantic core of the site, then it makes sense to use more powerful tools:

  • SEMrush
  • Searchmetrics
  • SpyWords
  • Google Trends
  • Wordtracker
  • WordStream
  • Ubersuggest
  • Topvisor

However, the tools above suit those who professionally promote several sites at once. "For yourself", even the manual method is quite enough (or Advego at most).

Errors when compiling a semantic core

The most common mistake is a very small semantic core

Of course, in a highly specialized niche (say, hand-made production of elite musical instruments) there will be few keys in any case: one hundred, one hundred and fifty, two hundred.

The larger the semantic core (but without “garbage”), the better. In some niches, the semantic core can consist of several... MILLIONS of keys.

The second mistake is synonymization - or rather, the lack of it

Remember the Antalya example. In that context "tours" and "vouchers" mean the same thing, yet the 2 key lists can differ radically. Likewise, a "stripper" may well be searched for as "wire stripper" or "insulation removal tool".

At the bottom of the search results page, both Google and Yandex show a block of related searches:

It is there that you can often spot synonyms.

Compiling a semantic core exclusively from high-frequency queries

Remember what was said at the beginning of the post about low-frequency queries, and the question "why is this a mistake?" will not arise again. Low-frequency queries bring the bulk of the traffic.

"Garbage", i.e. non-targeted requests

All queries that do not suit you must be removed from the assembled core. If you run a cell phone store, "cell phone sales" is a target query for you and "cell phone repair" is garbage. For a cell phone repair service center it is the other way round: "cell phone repair" is targeted and "cell phone sales" is garbage. The third option: if your store has a service center attached to it, both queries are targeted.

Once again: there should be no garbage in the core.

No grouping of requests

It is strictly necessary to split the core into groups.

Firstly, this will allow you to create a competent site structure.

Secondly, there will be no "key conflicts". Take a page promoted simultaneously for "buy self-leveling floor" and "buy acer laptop": the search engine gets confused and the page fails for both keys. But for the queries "hp 15-006 laptop buy" and "hp 15-006 laptop price" it does make sense to promote a single page - in fact, that is the only correct solution.

Thirdly, clustering lets you estimate how many pages still need to be created to cover the core completely (and, importantly, whether that is even necessary).

Errors in separating commercial and informational queries

The main mistake: queries that do not contain the words "buy", "order", "delivery", etc. can also turn out to be commercial.

For example, take the query "". How do you determine whether a query is commercial or informational? Very simple - look at the search results.

Google tells us it is a commercial query: the first 3 positions in the results are documents with the word "buy", and although the fourth position is a "reviews" page, look at its address - it belongs to a fairly well-known online store.

With Yandex things are not so simple: the TOP 5 contains 3 pages with reviews and 2 pages with commercial offers.

Nevertheless, the query still counts as commercial, since commercial offers are present in both search engines.

However, there is also a tool for mass verification of keys for “commerce” - Semparser.

We picked up “empty” queries

Collect both the base frequency and the frequency in quotes. If the frequency in quotes is zero, it is better to delete the query - it is a dummy. It often happens that the base frequency runs to several thousand impressions per month while the quoted frequency is zero. A concrete example: the key "inexpensive skin cream" has a base frequency of 1032 impressions. Looks delicious, doesn't it?

But all the appeal is lost as soon as you put the same phrase in quotation marks: the frequency drops to zero.
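If both frequencies are already in your table, weeding out the dummies takes one filter. A minimal Python sketch over a plain list of (phrase, base frequency, frequency in quotes) tuples; the numbers are illustrative:

    # Drop "dummy" phrases: noticeable base frequency but zero frequency in quotes.
    keywords = [
        ("inexpensive skin cream", 1032, 0),   # looks tasty, but it is a dummy
        ("buy a wedding cake", 880, 140),
        ("wedding cake to order", 400, 95),
    ]

    core = [(phrase, base, exact) for phrase, base, exact in keywords if exact > 0]
    for phrase, base, exact in core:
        print(f"{phrase}: base {base}, in quotes {exact}")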

A related problem: not all users type without errors. Because of typos, "crooked" key queries end up in the statistics. Including them in the semantic core is pointless, since Google redirects the user to the corrected query anyway.

And Yandex behaves in exactly the same way.

So we delete "crooked" queries (even high-frequency ones) without regret.

An example of the semantic core of a site

Now let's move from theory to practice. After collection and clustering, the semantic core should look something like this:

Bottom line

What do we need to compile a semantic core?

  • at least a little businessman's (or at least marketer's) thinking
  • at least some SEO skills.
  • special attention paid to the site structure
  • an idea of what queries users might use to search for the information they need
  • based on those estimates, a list of the most suitable queries collected (Yandex.Wordstat + Wordstat Assistant, Slovoeb, Key Collector, Google Keyword Planner), with frequencies both including word forms (without quotes) and excluding them (in quotes), and the "garbage" removed
  • the collected keys grouped, i.e. distributed across site pages (even if those pages have not yet been created).

No time? Contact us, we will do everything for you!

The semantic core is a scary name that SEOs came up with to denote a rather simple thing. We just need to select the key queries for which we will promote our site.

And in this article I will show you how to correctly compose a semantic core so that your site quickly reaches the TOP, and does not stagnate for months. There are also “secrets” here.

Before we move on to compiling the semantic core, let's figure out what it is and what we should end up with.

What is the semantic core in simple words

Oddly enough, the semantic core is an ordinary Excel file listing the key queries for which you (or your copywriter) will write articles for the site.

For example, this is what my semantic core looks like:

I have marked in green those key queries for which I have already written articles. Yellow - those for which I plan to write articles in the near future. And colorless cells mean that these requests will come a little later.

For each key query I have determined the frequency and competitiveness and come up with a "catchy" title. You should end up with roughly the same kind of file. Right now my semantic core consists of 150 keywords, which means I have "material" for at least 5 months in advance (even writing one article a day).

Below we will talk about what to expect if you decide to order a semantic core from specialists. In short, they will give you the same kind of list, only with thousands of "keys". But in a semantic core it is not quantity that matters, it is quality, and that is what we will focus on.

Why do we need a semantic core at all?

But really, why do we need this torment? You can, after all, just write quality articles and attract an audience, right? Yes, you can write, but you won’t be able to attract people.

The main mistake of 90% of bloggers is simply writing high-quality articles. I'm not kidding, they have really interesting and useful materials. But search engines don’t know about it. They are not psychics, but just robots. Accordingly, they do not rank your article in the TOP.

There is another subtle point, the title. Say you have a very high-quality article on the topic "How to properly run a business in the face book". In it you describe everything about Facebook in great detail and professionally, including how to promote communities there. Your article is the most useful and interesting piece on the Internet on this topic; nothing else even comes close. But it still won't help you.

Why high-quality articles fall out of the TOP

Imagine that your site is visited not by a robot but by a live inspector (an assessor) from Yandex. He realizes you have the coolest article and manually puts you in first place in the results for the query "Promoting a community on Facebook".

Do you know what happens next? You will still fly out of there very soon, because no one will click on your article even in first place. People enter the query "Promoting a community on Facebook", and your headline is "How to properly run a business in the face book". Original, fresh, funny, but it does not match the query. People want to see exactly what they searched for, not your creativity.

As a result, your article will vacate its place in the TOP of the search results. The live assessor, an ardent admirer of your work, can beg his superiors all he likes to keep you at least in the TOP 10, but it won't help. All the first places will be taken by articles as empty as sunflower-seed husks, copied from one another by yesterday's schoolchildren.

But those articles will have the correct "relevant" title - "Promoting a community on Facebook from scratch" (step by step, in 5 steps, from A to Z, for free, etc.). Annoying? Of course it is. So let's fight the injustice and create a competent semantic core, so that your articles take the first places they deserve.

Another reason to start compiling the semantic core right now

There is one more thing that for some reason people don’t think much about. You need to write articles often - at least every week, and preferably 2-3 times a week - to gain more traffic and faster.

Everyone knows this, but almost no one does it. And all because they have “creative stagnation”, “they just can’t force themselves”, “they’re just lazy”. But in fact, the whole problem lies in the absence of a specific semantic core.

Step #1 - Selecting the basic keys

I entered one of my basic keys, "smm", into the search field, and Yandex immediately gave me a dozen suggestions about what else interests people who search for "smm". All I have to do is copy these keys into a notepad. Then I check each of them the same way and collect suggestions for them as well.

After the first stage of collecting the semantic core, you should end up with a text document containing 10-30 broad basic keys, which we will work with next.

Step #2 — Parsing basic keys in SlovoEB

Of course, if you write an article for the request “webinar” or “smm”, then a miracle will not happen. You will never be able to reach the TOP for such a broad request. We need to break the basic key into many small queries on this topic. And we will do this using a special program.

I use Key Collector, but it is paid. You can use the free analogue, the SlovoEB program, which can be downloaded from the official website.

The most difficult part of working with this program is setting it up correctly. I show how to properly set up and use Slovoeb in a separate article, but there the focus is on selecting keys for Yandex Direct.

Here, let's go step by step through using this program to create a semantic core for SEO.

First we create a new project and name it after the broad key you want to parse.

I usually give the project the same name as the base key to avoid confusion later. And let me warn you against one more mistake: don't try to parse all the base keys at once, or it will be very hard to separate the "empty" key queries from the golden grains. Parse one key at a time.

After creating the project, we perform the basic operation: we actually parse the key through Yandex Wordstat. To do this, click the "Wordstat" button in the program interface, enter your base key, and click "Start collection".

For example, let's parse the base key for my blog “contextual advertising”.

After this, the process will start, and after some time the program will give us the result - up to 2000 key queries that contain “contextual advertising”.

Also, next to each request there will be a “dirty” frequency - how many times this key (+ its word forms and tails) was searched per month through Yandex. But I do not advise drawing any conclusions from these numbers.

Step #3 - Collecting the exact frequency for the keys

The "dirty" frequency by itself tells us nothing. If you rely on it, don't be surprised when a key with 1000 "requests" does not bring a single visitor per month.

We need the exact frequency. To get it, first tick all the found keys, then click the "Yandex Direct" button and start the process again. Now Slovoeb will look up the exact monthly frequency for each key.

Now we have an objective picture of how many times each query was actually entered by users over the last month. Next, I suggest grouping all key queries by frequency to make them easier to work with.

To do this, click the filter icon in the "Frequency" column and set it to show keys with values "less than or equal to 10".

The program will now show only queries whose frequency is less than or equal to 10. You can delete these queries or move them to a separate group for the future. A frequency below 10 is very little; writing articles for such queries is a waste of time.

Now we need to select those key queries that will bring us more or less good traffic. And for this we need to find out one more parameter - the level of competitiveness of the request.

Step #4 — Checking the competitiveness of requests

All "keys" in this world are divided into 3 types: high-frequency (HF), mid-frequency (MF) and low-frequency (LF). They can also be highly competitive (HC), moderately competitive (MC) and low-competition (LC).

As a rule, HF queries are also highly competitive: if a query is searched often, there are many sites that want to rank for it. But this is not always the case; there are happy exceptions.

The art of compiling a semantic core lies precisely in finding queries with high frequency and low competition. Determining the level of competition manually is very difficult.

You can look at indicators such as the number of home pages in the TOP 10, the length and quality of the texts, and the trust and citation index (TIC) of the sites ranking for the query. All of this gives some idea of how tough the competition for this particular query is.

But I recommend using the Mutagen service. It takes into account all the parameters mentioned above, plus a dozen more that neither you nor I have probably even heard of. After the analysis, the service gives an exact value for the query's level of competition.

Here I checked the query "setting up contextual advertising in google adwords". Mutagen showed that this key has a competition level of "more than 25", the maximum value it displays, while the query gets only 11 views per month. So it definitely does not suit us.

You can copy all the keys found in Slovoeb and run a bulk check in Mutagen. After that, all that remains is to look through the list and pick the queries with many impressions and a low level of competition.

Mutagen is a paid service, but you can run 10 checks per day for free, and the cost of checks is very low. In all the time I have worked with it, I have not yet spent even 300 rubles.

By the way, about the level of competition. If you have a young site, then it is better to choose queries with a competition level of 3-5. And if you have been promoting for more than a year, then you can take 10-15.

By the way, regarding the frequency of requests. We now need to take the final step, which will allow you to attract a lot of traffic even for low-frequency queries.

Step #5 — Collecting “tails” for the selected keys

As has been proven and tested many times, your site will receive the bulk of its traffic not from the main keywords but from the so-called "tails" - strange queries people type into the search bar with a frequency of 1-2 per month, but of which there are a great many.

To see the “tail”, just go to Yandex and enter the key query of your choice into the search bar. Here's roughly what you'll see.

Now you just need to write down these additional words in a separate document and use them in your article. Moreover, there is no need to always place them next to the main key. Otherwise, search engines will see “over-optimization” and your articles will fall in search results.

Just use them in different places of your article and then you will get additional traffic also on them. I would also recommend that you try to use as many word forms and synonyms as possible for your main key query.

For example, take the query "Setting up contextual advertising". Here is how it can be reformulated (a small expansion sketch follows the list):

  • Setup = set up, make, create, run, launch, enable, place...
  • Contextual advertising = context, direct, teaser, YAN, adwords, kms...
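Combining such synonym groups into draft phrases is easy to automate. A minimal Python sketch using lists like the ones above; the output is raw material, so prune the nonsense combinations by hand and check their frequencies:

    from itertools import product

    # Build draft phrase variants from synonym groups (illustrative lists).
    setup_synonyms = ["setting up", "launching", "creating", "enabling"]
    ad_synonyms = ["contextual advertising", "direct", "adwords", "yan"]

    variants = [f"{verb} {ad}" for verb, ad in product(setup_synonyms, ad_synonyms)]
    print(len(variants), "draft phrases")   # 16
    print(variants[:3])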

You never know exactly how people will search for information. Add all these additional words to your semantic core and use them when writing texts.

So, we collect a list of 100 - 150 key queries. If you are creating a semantic core for the first time, it may take you several weeks.

Or is that too much eye strain? Maybe you could delegate compiling the semantic core to specialists who will do it better and faster? Yes, such specialists exist, but you don't always need their services.

Is it worth ordering a semantic core from specialists?

By and large, specialists in compiling semantic cores will only give you steps 1-3 of our scheme. Sometimes, for a large additional fee, they will also do steps 4-5 (collecting tails and checking the competitiveness of queries).

After that, they will give you several thousand key queries that you will need to work with further.

And the question here is whether you are going to write the articles yourself, or hire copywriters for this. If you want to focus on quality rather than quantity, then you need to write it yourself. But then it won't be enough for you to just get a list of keys. You will need to choose topics that you understand well enough to write a quality article.

And here the question arises: why do we actually need semantic core specialists at all? Agree, parsing a base key and collecting the exact frequencies (steps #1-3) is not difficult at all; it literally takes half an hour.

The hardest part is choosing high-frequency queries with low competition - and, on top of that, ones you can actually write a good article about. This is exactly what takes up 99% of the time spent on a semantic core, and no specialist will do it for you. So is it worth spending money on such services?

When are the services of semantic core specialists useful?

It's another matter if you plan from the start to bring in copywriters. Then you don't have to understand the subject of the query - your copywriters won't understand it either; they will simply take several articles on the topic and compile "their own" text from them.

Such articles will be empty, miserable, almost useless. But there will be many of them. On your own, you can write a maximum of 2-3 quality articles per week. And an army of copywriters will provide you with 2-3 shitty texts a day. At the same time, they will be optimized for requests, which means they will attract some traffic.

In that case, yes, go ahead and hire semantic core specialists, and have them draw up the technical brief for the copywriters at the same time. But, as you understand, this also costs money.

Summary

Let's go over the main ideas in the article again to reinforce the information.

  • The semantic core is simply a list of key queries for which you will write articles on the site for promotion.
  • It is necessary to optimize texts for precise key queries, otherwise even your highest-quality articles will never reach the TOP.
  • The semantic core is like a content plan for social networks. It keeps you out of a "creative crisis": you always know exactly what you will write about tomorrow, the day after tomorrow and in a month.
  • To compile a semantic core it is convenient to use the free Slovoeb program; you just need to set it up correctly.
  • Here are the five steps of compiling a semantic core: 1 - selecting the basic keys; 2 - parsing the basic keys; 3 - collecting the exact frequency of the queries; 4 - checking the competitiveness of the keys; 5 - collecting the "tails".
  • If you want to write the articles yourself, it is better to compile the semantic core yourself, for yourself. Semantic core specialists will not be much help here.
  • If you want to work on quantity and use copywriters to write the articles, then it is quite possible to delegate compiling the semantic core as well - as long as there is enough money for everything.

I hope this guide was useful to you. Save it to your bookmarks so you don't lose it, and share it with friends. And don't forget to download my book, where I show the fastest way from zero to your first million on the Internet (a squeeze from 10 years of personal experience =)

See you later!

Yours Dmitry Novoselov

A relevance map is a plan for optimizing the site's content, compiled on the basis of the queries from the semantic core. Put more simply, it is a table where, for each page, you record the queries it will be optimized for.

This document will help in the work of everyone involved in promoting your website: SEO specialist, copywriter, targetologist (advertising), marketer. Or just for you, if you are a multi-armed SEO god and do everything on your own.

2. How to create a relevance map?

Rules for drawing up a relevance map:

1. Queries. In the relevance map, for each page select 3-5 queries from the keyword list for which the page will be optimized: for the home page, HF and MF queries; for internal pages, the emphasis is on MF and LF. If you ordered the core from us, you can skip this point, since we select the keys page by page. If you compiled it yourself, check which pages search engines already consider relevant to the keywords. Here are 2 ways to do this:

– Manual check. Enter each query into the search bar and, in the advanced search settings, specify the site address and region. The first result is the page that, in the search engine's opinion, answers the query better than any other page on your site. If you agree, enter the page and its keys in one row; if not, make a note that the page needs to be reworked.

– Automatic check. Find the relevant pages using special services, e.g. Key Collector or Majento.

2. Tags. Well-written Title, Description and H1. Add the keywords chosen for the page to these tags, and don't forget the rules for composing them.

3. Images. If the pages contain images, also work on the alt attribute of the img tag.

As a result, if you do everything correctly, you will get a table like this:
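An illustrative fragment of such a table (the pages, phrases and tags are made up, continuing the beauty-salon example used earlier):

    Page                   Queries (3-5 per page)                    Title / H1 draft
    /                      beauty salon in St. Petersburg (HF, MF)   Beauty salon in St. Petersburg - ...
    /services/manicure     manicure spb (MF), manicure price (LF)    Manicure in St. Petersburg: prices ...
    /blog/wedding-hair     wedding hairstyles how to choose (LF)     Wedding hairstyles: how to choose ...

A Description column is filled in the same way for each page.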


This is the relevance map. Next comes systematic optimization work: add the tags to the site, adjust the texts to take the selected keys into account, work on the link profile. And there it is - the TOP. Just one big request: in the pursuit of the top, do not forget about the users. The site should be convenient and understandable. We'll talk about this more in future letters.

3. Will there be materials to help?

Time to read the article is 5 minutes, including a coffee break. But drawing up a relevance map, I believe, will take you 1-2 days. Yeah, with a break for lunch.

Good afternoon. Lately I have been receiving a lot of emails like these:

  • “Last time I stupidly didn’t have time to register for the marathon because I was on vacation, and there were few announcements as such...”
  • "Sing, I saw that some kind of course is being prepared, you can say exact dates and how many classes will there be?
  • "How much will the course cost? What will the material be like? Is it a marathon or an electronic recording?"

I'll try to answer some of the questions:

  1. I can’t tell you the exact release date of the course. It will definitely be in October and most likely at the end.
  2. The course will be open for sale for a maximum of 5 days, I will recruit a group with whom I would be interested in working and achieving specific numbers, then I will close access. So don't miss your registration date.
  3. At the last marathon, some participants achieved incredible results (I will share the graphs in the next lessons), but these results were achieved only by those who did all their homework and attended all classes, so registration will be limited in time and quantity. Most likely I will give the first 30 some kind of significant bonus.

That's all for now, you can ask me a question by email (petr@site), in the comments, or sign up for pre-registration by completing this survey.

Now let's move on to the tasty stuff. 🙂

The semantic core has been assembled, what next?

All optimizers keep saying you need to collect a semantic core for your site. That is certainly true, but unfortunately many have no idea what to do with this treasure afterwards. Well, we've collected it - what next? I wouldn't be surprised if you fall into this category too. Some clients order a semantic core and, even when it is assembled to the highest possible quality, throw the work down the drain. I want to cry when I see that. Today I will talk about what to actually do with the semantic core.

If you haven’t yet created it as expected, here are links to lessons on creating a semantic core.

I will demonstrate everything using a simple example to facilitate your understanding of this difficult matter. Let's say we needed to assemble a semantic core for a site that talks about WordPress. Quite naturally, one of the sections of this site will be “WordPress Plugins”.

By the way, don't forget to check phrases spelled in Russian when parsing keywords. That is, for the "WordPress plugins" category you need to parse not only the phrase with "WordPress" in its original spelling, but also its spelling transliterated into Russian letters. It often happens that a brand or product name is searched for in Russian even more often than in the original English spelling. Keep this in mind.

After collecting the SC (short for "semantic core") for this category, we get an Excel file something like this:

As you can see, there are quite a lot of queries and everything is piled together. So we simply group them in Excel by cutting and pasting keywords that are similar in meaning, separating the groups with an empty row for clarity.

It is also worth sorting the keywords inside each subgroup by exact frequency (this will be useful later). Each subgroup, in effect, becomes the set of keywords for one future article. If the semantic core is compiled well, we will not miss anything and will cover ALL the queries that belong to this section.

General phrases such as "wordpress plugins" we leave for the category page itself - that is, we place the necessary SEO-optimized text directly on the category page. Be sure to read my article on this topic to learn how to do it correctly.

Even if you do not write the articles yourself, this file with the semantic core broken into groups is an ideal guide for a copywriter: he already sees the structure of the article and understands what he should write about. Needless to say how much traffic can be collected this way.

Ideally, of course, you write the articles yourself or have a smart SEO copywriter. In any case, read the article mentioned above; even if you don't write yourself, show it to your copywriter, and the effect on your content will not be long in coming. After a while you will be pleasantly surprised by the growth in traffic.

By the way, if possible, suitable keywords should be made into headings, of course in a more natural form. That is, something like this:

Remember, friends: no spam - over-optimization is evil. Here, as with the structure of the whole site, the correct structure of the article matters. Remember once and for all: search engines love well-structured sites, and as for people, that goes without saying. We all love it when everything on a website is laid out neatly, clearly, and beautifully.

Well, guys, that's all for today - see you in the next lesson, which I hope you'll like too. You do like what I write, right? 🙂 If so, don't forget the retweets, likes and other "goodies", and I especially love comments. It's a small thing for you, but very pleasant for me. You're generous, my friends, right? 🙂

P.S. Do you need a website? Then professional website development in Kyiv may be just what you need. Trust the professionals.






