The New Perils of Google Rankings

Not open for further replies.


Power Member
Feb 24, 2008
This was originally a reply in another thread regarding rankings, but I thought it would be a good topic of its own.

NOTE: Please take this with a grain of salt... SEO and business are arts not sciences. This is just my OCD perspective on the issues.

The original question: How to check rankings?

It is trickier these days... If you don't capture the SERPs in a rendered browser view that supports JavaScript and cookies, then what you will see is the legacy accessibility-view SERPs, which are different from the Google Instant SERPs. If you want to know what most of the world is seeing, then you need the Google Instant SERPs. A lot of tools and services are built on coding and sending HTTP requests to google to get the SERPs. All those tools and services are displaying the accessibility SERPs, which are typically 10-25% different from the instant SERPs. In some cases I have encountered, the accessibility SERPs will say my site isn't even listed when I am actually on page one in the Google Instant SERPs. I used to use Rank Tracker (part of the SEO Power Suite), but after Google Instant launched I started seeing the discrepancies.

Another issue is that google is putting alternative result types in with the organic results, like images, definitions, shopping, brands, news, videos, etc... Whether you personally count these or not, the number of organic result locations per page can vary from 8-12 results per page. Tools and services that accurately state where you placed at a point in time need to display page # and result # on page, and have the results per page from each prior page, in order to calculate your actual ranking. Simply saying #6 on page 2 = 16 will produce an incorrect answer a lot of the time, and the deeper you go in page number the more wildly off the result becomes.
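As a sketch of the arithmetic above (the per-page organic counts here are hypothetical examples), the true rank is the sum of the organic slots on all prior pages plus the position on the current page:

```python
# Sketch: compute a true absolute rank from observed per-page organic
# result counts. Since google may show 8-12 organic slots per page,
# "page 2, result #6" is NOT simply 10 + 6.

def absolute_rank(page: int, result_on_page: int,
                  results_per_page: list[int]) -> int:
    """results_per_page[i] is the organic-slot count observed on page i+1.
    Returns the 1-based rank across all pages."""
    if page > len(results_per_page):
        raise ValueError("need observed counts for every page up to this one")
    return sum(results_per_page[:page - 1]) + result_on_page

# Pages 1 and 2 had 9 and 11 organic slots; result #6 on page 3:
print(absolute_rank(3, 6, [9, 11, 10]))  # 9 + 11 + 6 = 26
```

With 10 slots per page the naive formula would have said 26 too, but with 9 and 11 on the earlier pages the naive "page 3, #6 = 26" is only right by coincidence; vary the counts and the answers diverge.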

Another issue is being consistent in the time and place you conduct your rank checking. Google is estimating client-side connection speed and is filtering search results toward sites that would perform well for your current connection. I discovered this while commuting to work on a ferry. My sites always jumped up a lot on the ferry's slow internet connection, because my web sites are optimized for load times, # of requests, and total page weight. All of my slower competition fell in the ranks on the slower connection. I believe this also explains why different browsers see different results from google despite changing the User-Agent strings... Subtle differences in the page rendering engines influence google's estimation of client-side connection speed and thus produce differences in the sort order.

So given all this new information, what is the best way to check SERPs? A nut case like me would use automation tools to pilot the top 5 versions of the top 5 browsers, using a large list of dedicated proxies, to search my list of keywords of interest and cache the result pages locally. Then I would write a script to parse the results into a file with fields like this:

date|browserVersion|keyword|page|resultOnPage|totalResultsOnPage|url|title|summary|googleCacheURL|linkToCacheSerpPageForReference

I would do this for every result, not just my own sites, because then the data can be used to analyze your competition too.
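A minimal sketch of the record-writing step, assuming the HTML parsing has already produced one dict per result (the parsing itself is page-specific and omitted; the field names follow the list above and the file path is illustrative):

```python
import csv

# Field names taken from the pipe-delimited layout suggested above.
FIELDS = ["date", "browserVersion", "keyword", "page", "resultOnPage",
          "totalResultsOnPage", "url", "title", "summary",
          "googleCacheURL", "linkToCacheSerpPageForReference"]

def append_records(path: str, results: list[dict]) -> None:
    """Append already-parsed SERP results to a pipe-delimited file,
    writing a header row only when the file is new/empty."""
    with open(path, "a", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS, delimiter="|")
        if f.tell() == 0:
            writer.writeheader()
        writer.writerows(results)
```

Using the stdlib `csv` module with `delimiter="|"` also handles the edge case where a title or summary happens to contain a pipe character.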

You will also find interesting things, like when you report rankings by site you need to have a min, max, and avg ranking, since a site can have multiple pages in the SERPs. I even found a couple of rare instances of the same URL being on page 1 and page 4 or so for a keyword. I think this occurs because page one is served from a cluster of "page 1" servers with a high degree of caching, and pages 2+ probably go to a differently configured cluster with different caches.
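The per-site min/max/avg summary described above could be sketched like this, over hypothetical (url, absolute rank) pairs:

```python
from collections import defaultdict
from statistics import mean
from urllib.parse import urlparse

# Sketch: summarize rankings per site. A site can hold several slots
# for one keyword, so report min, max, and average rank rather than a
# single number. The input rows are hypothetical example data.

def site_rank_summary(rows: list[tuple[str, int]]) -> dict:
    ranks = defaultdict(list)
    for url, rank in rows:
        ranks[urlparse(url).netloc].append(rank)  # group by hostname
    return {site: {"min": min(r), "max": max(r), "avg": mean(r)}
            for site, r in ranks.items()}

rows = [("http://example.com/a", 3),   # two pages of the same site...
        ("http://example.com/b", 17),  # ...ranking in different spots
        ("http://rival.com/x", 5)]
summary = site_rank_summary(rows)
```

Here `example.com` would report min 3, max 17, avg 10, which is exactly the same-site, multiple-positions situation mentioned above.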

Then, to get the market perspective, you weight your data by browser market share and you'll have the closest estimate of your rankings you can reasonably achieve. Then determine a sample size for your niche. I have found that many small niches have a keyword scope of about 50,000 keywords, and things like online stores with 10 or more categories tend to be in the area of around 250,000 keywords... you want at least a 5% sample of your keyword space. Use your sample religiously and try to avoid the temptation of changing it. Determining the averages over your space is how you really employ SEO at scale. Tracking only your best words is probably the most common SEO mistake that even the big firms make. If a service or tool doesn't enable you to search up to 50,000 keywords weekly or monthly, then it is not a tool that can scale with your business. I find that very few services and tools can do this, and I largely build and host my own systems. Even when I find a service that can handle the scale, they often mess up one of the other critical areas of rank tracking, and that produces bad data.
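The market-share weighting step could look like the following; the browser names and share figures are purely illustrative, not real market data:

```python
# Sketch: blend per-browser ranks for one keyword into a single
# market-weighted rank. Shares are renormalized over the browsers we
# actually have data for, so a missing browser doesn't skew the result.

def weighted_rank(ranks_by_browser: dict[str, int],
                  market_share: dict[str, float]) -> float:
    total_share = sum(market_share[b] for b in ranks_by_browser)
    return sum(rank * market_share[b]
               for b, rank in ranks_by_browser.items()) / total_share

# Hypothetical example: rank 4 in one browser, 6 and 9 in two others.
blended = weighted_rank({"chrome": 4, "firefox": 6, "ie": 9},
                        {"chrome": 0.5, "firefox": 0.3, "ie": 0.2})
```

With those made-up shares the blended rank works out to 4·0.5 + 6·0.3 + 9·0.2 = 5.6, i.e. closer to what the majority browser sees.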

Once you have your system up and running, you need to spot check a sample of the rankings to produce a margin of error. Just verify for 3-5% of your results that they did appear at, say, #4 on page 4 in your cached SERP pages. This can be added to your data as a +- x% error. This will also show where your system is flawed, and you can track improvement to the system. It will also inform you when google sneaks in new surprises. Oftentimes a change in the number of results per page will signal a new google gizmo appearing in an organic slot, and the cached pages let you see what appeared at that time.
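A sketch of that spot-check step: sample a few percent of the stored records, re-verify each against its cached SERP page via a caller-supplied check, and report the disagreement rate. The `verify` callback is hypothetical; in practice it would re-parse the cached page and confirm the stored page/position:

```python
import random

# Sketch: estimate a +- x% margin of error by re-checking a random
# 3-5% sample of records against the locally cached SERP pages.

def margin_of_error(records: list, verify, sample_frac: float = 0.04,
                    seed: int = 0) -> float:
    """verify(record) -> bool: True if the stored rank matches the
    cached page. Returns the percent of sampled records that disagree."""
    rng = random.Random(seed)  # fixed seed so re-runs check the same sample
    n = max(1, int(len(records) * sample_frac))
    sample = rng.sample(records, n)
    mismatches = sum(1 for r in sample if not verify(r))
    return 100.0 * mismatches / n
```

A rising error rate over time is itself a signal, per the point above: it often means google changed something (a new gizmo in an organic slot) and your parser no longer matches the cached pages.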

I know... this is crazy... no normal human being would do this...

Poor Man's Method:

Check your rankings on the same computer, on the same network connection, at the same time of day, after logging out and clearing your cookies and cache (also clear your auto-fill form text and disable your password manager tools... they narc your account to google for browser piloting). Capture the same data fields I suggested above and cache a local copy of the SERP pages so you can verify things you see in the data as they appeared at that point in time.

This won't tell you what the world sees, but it will be a relative indicator as to whether or not you are doing better or worse overall in SEO. There are a lot of assumptions with this, though, like browser market share isn't changing, and an improvement in one browser's SERPs means an equivalent improvement in the others... etc... but at the end of the day you can only go so far trying to apply the scientific method to an art. Knowing your limitations and the limits of the problem is a good place to start.


I just check my rankings with SEScout, Market Samurai, Traffic Travis, Scroogle, and Rank tracker, which I think does the trick :)

I would expect them to largely agree, because my understanding is that they all send HTTP requests and do not use Google Instant. This could be a "good enough" method for a small number of keywords, provided you are verifying the accuracy of the results you are getting. I was in the same school as you for years, and then deviated once the tools no longer agreed with what I was seeing in the Google Instant SERPs. I would also imagine the issue is more pronounced for newer websites, because Instant will show ranking changes faster than the legacy HTTP-request SERPs that most tools use. It's an art, not a science, so if you ultimately make more than you spend, then you are not wrong about it working for you, no matter what the data or anyone says. That is a good point to keep in mind.

Of course everyone should take my post as a perspective and not gospel by any means. I would encourage everyone not to trust me either and just verify their own results. :) I added a disclaimer to my post and gave thanks for your point.


Honestly Microsite Masters is great for checking the rankings so far, and Serp IQ does everything you want and more in terms of analyzing yourself and the competition.
I use Traffic Travis, and I just checked several links and they are in the right position.

But you are right, sometimes the ranking in the SERPs changes if you use another connection, browser...

I use an 8086 machine.... with DOS... using IE .1a.... on dial up 300 baud..

so all my stats should be within .001% accurate.
Ted, do you have any more experiments to share in 2012? I know a lot has changed with Panda and Caffeine and Instant.

If I may, I would be interested to know what you would do today to rank an affiliate ecommerce site?

I am also curious to know how the ecommerce sites that you manage are faring in the post-panda world? Is it easier or harder to keep the traffic consistent?


p.s. A great many thanks for all the posts you made in the last couple of years. I read many of them and have picked up many pearls.
Thread bumping
You are always sharing gems, cool to stumble upon one of your many insightful shares. Thank you!
Bruh, this thread is 7 years old and the OP hasn't been on this forum since 2013.
He is just trying to get to 100 posts (10 of his last 12 publications are like this one), so he's gonna get the boot soon.