# [Myth Dispelling] Why the Anchor Ratio Competitor Analysis is Nonsense

#### splishsplash

##### Jr. VIP
This is a slightly more advanced topic that a lot of you probably won't know about.

It involves analysing competitors to get their anchor text ratios, averaging those, and then trying to "match" that average with your own link building.

I'm not going to do a deep analysis here on why it's bad. I have a good enough grounding in mathematics, statistics and systems to know at a gut level that it's just not a reliable method.

Here's a post from Matt Diggity that explains this method. https://diggitymarketing.com/anchor-text-optimization/

The reason I don't find this hugely useful is that the google ranking algorithm is incredibly complex, and this method is essentially trying to compress a complicated algorithm into a small set of workable rules.

You have a complicated system with thousands of variables influencing ranking, and you're looking at a tiny subset of those in an attempt to reproduce the same output.

Let me try and come up with an analogy.

Imagine you want to maximize academic success for a group of students.

So you look at the top 10% of students and you analyse their study methods. You then take the average of their study methods and apply that to your group.

It's semi-useful, but you're still missing so much. The intelligence of the students, their discipline, their motivation, their individual teachers. It's such a complicated system with so many factors.

Ranking is like this.

What also makes this harder is that you have to classify the anchor texts yourself: exact, partial, longtail, topic, title, empty. There are so many "types" of anchors, and none of us really knows how google views them.

What makes an exact bad? What makes an exact an exact? Is it search volume, or is it just how optimized a page is for that keyword? It's an incredibly complex topic, and you're essentially making up your own groups like "topic keywords", then deciding which anchors fit into your made-up category.

Now, even IF we had exact classifications of anchor types that we knew google used, this STILL doesn't take into account the influence of other factors.

For example:

The age of a site. The topical relevance. The trust of a site. The authority. Even these things are just made up. What is authority? What is topical relevance? We don't know how google classifies any of this. We can only make a best guess, and mine is that you have :-

1) Topical relevance
2) Trust
3) Power

Words like "authority" are just a combination of power, trust and topical relevance.

This is exactly why SEO is an art, not a science. It *IS* a science if you can see how the algorithm is made, but like any complex system, without knowing the rules, you have to treat it as an art.

So how do trust, power and topical relevance influence things? The average anchor text ratio analysis method doesn't take any of these into account. It treats every link as if it's the same and assumes there's some magic formula.

A high level mathematician could show you math that proves this method doesn't work. If we took a large sample, categorized the anchors, and ran the analysis on a general sample across all niches, then on individual niches, then on groups of keywords, then on individual keywords, we would find no consistent patterns.

We don't know the "weight" of each link. One brand link could carry more branding/trust power than another, so because of that one site might be able to get away with more anchors.

Likewise, we don't know if more dangerous anchors like exact anchors also carry weight. Knowing what I know about engineering, I'd say they do. For example, 5 exact anchors from DR90+ sites will not be viewed the same as 5 exact anchors from small blogs, and they certainly aren't viewed the same on directories/comment links.

Google does expect certain types of links to carry certain types of anchor. Exact-match anchors on comments are, in 99.9999% of cases, complete spam.

So you have a machine that takes thousands of variables as input to produce an output. You don't know what those variables are or how they're classified. And then you decide to guess at 10 of those variables, average them, push your own variables near that average, and expect to get the same output.

This is not how to approach SEO.

SEO is an art.

Therefore, you should try to understand, from a 50,000-foot view, how the system generally operates: what kinds of things it likes and dislikes, and how it generally goes about producing its output.

And the biggest thing we know about google is it's looking for UNNATURAL signals. So our primary concern shouldn't be micro-optimizing ratios, but trying to stay natural.

Look at what kind of links sites with natural link profiles get, what kind of anchors they get, what link building velocities can occur naturally, and then aim to be as close to that as possible.

It's generally not overly complex.

Rather than trying to create some fancy anchor text plan, for the most part just build lots of brand links to the homepage. Add in a few phrase/misc brand anchors like "visit Brand" or "toasters at Brand". To your inner pages, do a couple of URL anchors, a couple of phrase-brand anchors like "this product is reviewed at Brand", and a couple of long phrases like "the super duper rigatator is reviewed here".
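For what it's worth, that plan is simple enough to sketch as code. A minimal sketch; the brand name, URL, product, and the 8:2 brand-to-phrase mix below are made-up placeholders, not numbers from this post:

```python
import random

random.seed(7)
BRAND = "Acme"  # hypothetical brand name

def homepage_anchor():
    """Pick a homepage anchor: mostly plain brand, with a few phrase/misc-brand mixed in."""
    pool = [BRAND] * 8 + [f"visit {BRAND}", f"toasters at {BRAND}"]
    return random.choice(pool)

def inner_page_anchors(url, product):
    """A couple of URL, phrase-brand, and long-phrase anchors for an inner page."""
    return [
        url, url,                                # plain URL anchors
        f"this product is reviewed at {BRAND}",  # phrase brand
        f"the {product} is reviewed here",       # long phrase
    ]

print(inner_page_anchors("https://example.com/toaster-review", "super duper rigatator"))
```

The point isn't the exact percentages; it's that the plan is a handful of safe anchor shapes, not a ratio reverse-engineered from competitors.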

Does this also apply to local SEO?

This is a very important topic, and most of us fail on this. Definitely worth a read.

Does this also apply to local SEO?

Yeah, it applies to everything.

The problem is this method will give you "false positives".

But it's mostly a waste of time, and it's time consuming.

You'll be roughly on the right path with it, but what if you see in your local niche that a bunch of sites have a lot of exacts/partials? Sometimes this is the case. It doesn't mean that's why they're ranking, or that your site is going to benefit from more exacts. It just means that, for whatever reason, those sites have quite a few exacts. Your best approach is still just following the general best practices.

All niches are the same. Google doesn't want more exacts for health and fewer for law. Any patterns you see are just patterns the brain wants to see; if you applied some high-level math, you'd see they're not real patterns.

You can think of the google algorithm in terms of chaos theory.

https://simple.wikipedia.org/wiki/Chaos_theory

> As an example, take a pendulum that is attached at some point, and swings freely. Connecting a second pendulum to the first will make the system completely different. It is very hard to start in exactly the same position again - a change in starting position so small that it cannot even be seen can quickly cause the pendulum swing to become different from what it was before.

This.

This is SEO.

This is exactly why people will start one site, succeed, and do the exact same things, in almost the exact same order, with content that's almost exactly the same, and they get COMPLETELY different results.

However..

This doesn't mean SEO is totally random and that we can't predict anything.

It just means it's not an exact science.

We should think of it more in terms of probabilities.

"If I build 30 strong backlinks to my new site in the first 3 months, with mostly brand anchors, I increase the probability of success"

There are things we can do, which have a high probability of success.

Can you tell me more about the false positives? I am confused.

Do people really do that shit? I'm doing fine with my anchor ratio without studying my competitors.


What about internal anchors? Do you use a more aggressive approach?

Looks like @splishsplash is taking on the job of sharing knowledge on BHW...

Great thread. Thanks for sharing.

I don't think there is a relevant specific ratio to be derived from the competition. Not even a dofollow/nofollow ratio.

But I'm 100% sure you have to have a nice-looking anchor profile with good diversity that avoids concentrating the anchors in just one kind.

Going full contextual (which is generally what ends up happening if one is too focused on a bunch of keywords) is not a good idea either.

I believe the idea of using the competitor's profile as a benchmark was a rule of thumb for teachers that didn't want to deal with the uncertainty of their students (I've seen this in multiple SEO courses). Pure heuristics for spoon-feeders.

What about internal anchors? Do you use a more aggressive approach?

Internal anchors are aggressive par excellence (e.g. the nav menu).

I'm going to put together a really cool case study soon. More of a collection of programs to analyse data than a single case study.

I've got some ideas that I think might yield some really interesting results.

What we want to do is analyse sites that we know are entirely natural with zero link building done, then remove the effects of google updates.

So I'm going to create a small database of every google update and create a weighting factor for that update. Ie, some of them were more extreme than others, both in the positive and negative.
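One possible shape for that update database, as a minimal sketch. The names, dates, and weights below are pure placeholders to show the structure, not real update data:

```python
from datetime import date

# Hypothetical entries; real dates and weights would come from the research described above.
GOOGLE_UPDATES = [
    {"name": "update-A", "date": date(2022, 3, 1), "weight": 1.4},   # a more extreme update
    {"name": "update-B", "date": date(2022, 9, 12), "weight": 0.7},  # a milder, negative one
]

def weight_for_month(year, month):
    """Combined weighting factor of all updates that landed in a given month."""
    w = 1.0
    for u in GOOGLE_UPDATES:
        if (u["date"].year, u["date"].month) == (year, month):
            w *= u["weight"]
    return w
```

Traffic for a given month could then be divided by this factor to roughly strip an update's effect before looking for link-related patterns.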

Then we want to look at the entire history of that site, which involves :-

1) When each page was created.
2) The date each backlink appeared (a rough date is enough, and Ahrefs is good enough for this).
3) The keywords and traffic for the history of the site.
4) When each internal link was created. (This is very important and can't be excluded.)

Next, code the program with the following :-

1) Graphing not just the keyword and traffic growth, but also the derivative and the second derivative. The derivative shows us the rate of change of traffic. Ie, on a monthly basis, if traffic goes like this:

50 100 150 250 600 500 450 520

Then the derivative would be

50, 50, 100, 350, -100, -50, 70

This is the rate the traffic is increasing. If it's increasing at the exact same rate every month, 100, 200, 300, 400, then the rate of change would be a flat line at 100: constant growth of 100 each month. In calculus that would be f(x) = 100x, so month 3 = f(3) = 100(3) = 300, and the derivative of f(x) = 100x is 100.
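That month-over-month "derivative" is just consecutive differences, so it's a one-liner to compute. A quick sketch reproducing the series above:

```python
def first_diff(series):
    """Month-over-month change: the discrete 'first derivative' of a traffic series."""
    return [b - a for a, b in zip(series, series[1:])]

traffic = [50, 100, 150, 250, 600, 500, 450, 520]
print(first_diff(traffic))  # [50, 50, 100, 350, -100, -50, 70]
```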

This is interesting to know, and we want to plot it over the entire life of the site for both traffic AND keywords, because we will look for correlations with certain data points (article creation date, internal links, external page links, external RDs). With a large enough data set we can also see if there's a correlation between certain types of sites linking to a page and certain types of anchors. We can look at a TON of data points, way more than a typical analysis could.

Next, we'd also want to know the second derivative. This is the rate of change, of the rate of change. This is VERY important, because..

Let's say there's a bunch of backlinks getting built and traffic is increasing: 0, 100, 300, 700, 1200, 2000, for example. So our first derivative is 100, 200, 400, 500, 800. By calculating the second derivative..

100, 200, 100, 300, we see the rate of change of that traffic growth. Ie, how much the GROWTH is changing, not how much the traffic is changing.
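The second derivative is the same differencing applied twice, which is worth scripting because it's easy to slip on by hand. A sketch using a traffic series like the one in this example:

```python
def diff(series):
    """Consecutive differences of a series."""
    return [b - a for a, b in zip(series, series[1:])]

traffic = [0, 100, 300, 700, 1200, 2000]
growth = diff(traffic)        # first derivative: how fast traffic grows
acceleration = diff(growth)   # second derivative: how fast the growth itself changes
print(growth)        # [100, 200, 400, 500, 800]
print(acceleration)  # [100, 200, 100, 300]
```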

Then we can look at that data for the site as a whole, and each page.

And look at how different links influenced those graphs at different points.

Ie, there was a 0 rate of change of growth on an article for 3 months, then perhaps an internal link is made, and we can look at the changes over the following 3 months.

So we can see how page growth looks on natural sites without links, then what happens with certain links.

With plenty of data we can ask questions to the data like, "how many times after 1 link was applied was there a change in the rate of change of growth?" or "what type of anchors caused the rate of change of growth to increase the most?" or any sort of variation, "Did contextual links cause an increase in the rate of change of growth in more pages", "is there more of an increase in rate of change of growth when a page gets links from DR50+ sites" and so on. From that we should be able to find some distinct patterns.
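A question like "did the rate of change of growth shift after one link was applied?" only takes a few lines to ask. A minimal sketch, where the data shapes and the `growth_shift` helper are my own illustration, not an existing tool:

```python
def first_diff(series):
    """Month-over-month change in a traffic series."""
    return [b - a for a, b in zip(series, series[1:])]

def growth_shift(traffic, link_month, window=3):
    """Average monthly growth in the `window` months after a link event,
    minus the average growth in the `window` months before it."""
    growth = first_diff(traffic)
    before = growth[link_month - window:link_month]
    after = growth[link_month:link_month + window]
    return sum(after) / len(after) - sum(before) / len(before)

# A page flat for months 0-3; an internal link lands in month 3, then growth picks up.
traffic = [500, 500, 500, 500, 600, 750, 950]
print(growth_shift(traffic, link_month=3))  # 150.0
```

Run across many pages and many link events, a metric like this is what would let us count "how many times after 1 link was applied was there a change in the rate of change of growth".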

The most important thing is the starting point data, but I know a bunch of sites that are completely natural, where the owners never do a single bit of link building.

It wouldn't work on sites where people are building hidden PBNs or employing blackhat tactics, because those are harder to measure. We just want data from sites that grow naturally, without any custom tactics that would skew the results.

SEO is mostly guesswork, I agree. I've been saying this for a while. There are a lot of questions we don't have answers for because the fact is that we don't know.


Some good points here..

Diversity is necessary for bigger sites. For your typical small-to-medium niche sites you could just build 30 strong PBN links and nothing else and you'll rank. A massive site with hundreds of thousands of backlinks needs diversity, because at that level it's operating in an open, chaotic system, and it's impossible for a site to have 100k backlinks without having comments, forum links, directories and everything in between. For 99% of us here, we don't need to concern ourselves with diversity though.

Yeah, the anchor analysis method is more something that looks impressive on paper.

Nav menu links aren't counted the same way as contextuals, so you won't get a penalty from them. You can get a penalty for being too aggressive with internal anchors, but it's hard. The best approach is not to repeat anchors: one of each type of exact and partial, then some longer partials, mix in 10-30% safer anchors, and you'll be good.

Can you tell me more about the false positives? I am confused.

In simple terms, it means:

You are assuming that your successful SEO campaign is because you applied the anchor ratio analysis.

In specific, inferential-statistics terms:

You have what's called a null hypothesis, which is the general statement that there is no relationship between two measured phenomena.

Ie, say you want to test whether applying the anchor ratio competitor method results in an improvement to your site.

State 1 = your site before
State 2 = your site after applying the method

The null hypothesis states that there is no measurable difference before and after.

A false positive is where you wrongly treat the null hypothesis as INVALID: you believe your method worked and did make a difference when it actually didn't. It's a Type 1 error in inferential statistics.

A false negative, a Type 2 error, is where you accept the null hypothesis and conclude the method didn't produce any results when, in fact, it did.
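A quick simulation shows how easy a Type 1 error is to make here. Everything below is synthetic: 1000 "campaigns" where the method has zero real effect, judged by the naive rule "traffic went up, so it worked":

```python
import random

random.seed(42)

trials = 1000
looks_like_a_win = 0
for _ in range(trials):
    # Before/after monthly traffic drawn from the SAME distribution: no real effect.
    before = [random.gauss(1000, 100) for _ in range(12)]
    after = [random.gauss(1000, 100) for _ in range(12)]
    if sum(after) > sum(before):  # naive "the method worked" conclusion
        looks_like_a_win += 1

rate = looks_like_a_win / trials
print(rate)  # roughly 0.5: about half the zero-effect campaigns still look like wins
```

In other words, with natural traffic noise and no proper test, "I applied the ratio method and traffic went up" is evidence of almost nothing.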

My 2c...

The biggest issue is indeed the fact that we are incapable of seeing the whole picture. Backlink analyzers themselves, for example, are unable to index all backlinks. So we fail to reach 100% accuracy before we even start.

That's why testing is so important.

Testing is powerful because you create a closed experiment with your own variables and rules. Then you just feed it to the engines and await their response.

It's like ping pong

EXAMPLE:
Test group A - copying competitors 100%
Test group B - 25% more EMAs
Test group C - 50% more EMAs
etc.

Observe the results.

Still not 100% accurate (it can't really be), but it gives a strong indication.

It's also very important to note that, other than the above differences, everything else must be close to identical, including starting point, link velocity, quality/power of sources, etc. Otherwise the test is void.

The most important thing about testing, however, is to interpret the results correctly and not fool yourself.

To sum it up...
All in all, anchor text is definitely a big part of optimization. Something that can make or break your website. It never hurts to check what is happening on the front page before you start your campaign but take it with a grain of salt.

Play it cool. Better to under-optimize than over-optimize.