
How to track SERPs?

Discussion in 'Black Hat SEO' started by maildigger, Dec 29, 2011.

  1. maildigger

    maildigger Power Member

    Joined:
    Jan 30, 2009
    Messages:
    558
    Likes Received:
    60
    Location:
    EU
    I'm looking for a service like SEscout to track positions on multiple keywords of several websites. Any suggestions? What do you use?
     
  2. peterbruce

    peterbruce Supreme Member

    Joined:
    Sep 1, 2010
    Messages:
    1,480
    Likes Received:
    374
    Occupation:
    Social Media & Internet Marketing
    Location:
    UK | US
    Home Page:
    Market Samurai; it can be found in the download section.
     
  3. gm2026

    gm2026 Registered Member

    Joined:
    Dec 8, 2010
    Messages:
    50
    Likes Received:
    6
    I use Market Samurai Rank Tracker. It works for me.
     
    • Thanks x 1
  4. kirkonpolttaja

    kirkonpolttaja Senior Member

    Joined:
    Feb 6, 2010
    Messages:
    1,027
    Likes Received:
    669
    Market Samurai Rank Tracker or Traffic Travis :) For some bloody reason my MS says it's corrupted!!!!! It's even a legit copy and I can't use it ;(
     
    • Thanks x 1
  5. EzioFiranzee

    EzioFiranzee Newbie

    Joined:
    Dec 28, 2011
    Messages:
    12
    Likes Received:
    2
    What about Traffic Travis?
     
    • Thanks x 1
  6. peterbruce

    peterbruce Supreme Member

    Joined:
    Sep 1, 2010
    Messages:
    1,480
    Likes Received:
    374
    Occupation:
    Social Media & Internet Marketing
    Location:
    UK | US
    Home Page:
    TT is faster and has a better interface, but MS is more in-depth and accurate.
     
    • Thanks x 1
  7. ipopbb

    ipopbb Power Member

    Joined:
    Feb 24, 2008
    Messages:
    626
    Likes Received:
    844
    Occupation:
    SEO & Innovative Programming
    Location:
    Seattle
    Home Page:
    It is trickier these days... If you don't capture the SERPs in a rendered browser view that supports JavaScript and cookies, what you will see is the legacy accessibility-view SERPs, which are different from the Google Instant SERPs. If you want to know what most of the world is seeing, then you need the Google Instant SERPs. A lot of tools and services are built on coding and sending HTTP requests to Google to get the SERPs. All of those tools and services are displaying the accessibility SERPs, which are typically 10-25% different from the instant SERPs. In some cases I have encountered, the accessibility SERPs will say my site isn't even listed when I am actually on page one of the Google Instant SERPs. I used to use Rank Tracker (part of the SEO PowerSuite from link-assistant.com), but after Google Instant launched I started seeing the discrepancies.

    Another issue is that Google is putting alternative result types in with the organic results, like images, definitions, shopping, brands, news, videos, etc. Whether or not you personally count these, the number of organic result slots per page can vary from 8 to 12 results per page. Tools and services that accurately state where you placed at a point in time need to record the page #, the result # on that page, and the results per page for every page before it in order to calculate your actual ranking. Simply saying #6 on page 2 = 16 will produce an incorrect answer a lot of the time, and the deeper you go in page number, the more wildly off the result becomes.
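
    To make the arithmetic concrete, here is a rough Python sketch of that calculation (the per-page counts in the example are made up, not real data):

    Code:
    # Absolute rank from per-page counts, instead of assuming 10 results/page.
    def absolute_rank(result_on_page, results_per_prior_page):
        """result_on_page: position on the page where the URL appeared.
        results_per_prior_page: organic-slot counts for every earlier page."""
        return sum(results_per_prior_page) + result_on_page

    # Example: #6 on page 2, but page 1 actually showed 11 organic slots
    # (images/news/shopping mixed in), so the real position is 17, not 16.
    print(absolute_rank(6, [11]))  # 17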

    Another issue is being consistent in the time and place you conduct your rank checking. Google is estimating client-side connection speed and is filtering search results toward sites that would perform well on your current connection. I discovered this while commuting to work on a ferry. My sites always jumped up a lot on the ferry's slow internet connection because my web sites are optimized for load times, number of requests, and total page weight. All of my slower competition fell in the rankings on the slower connection. I believe this also explains why different browsers see different results from Google despite changing the User-Agent strings... subtle differences in the page rendering engines influence Google's estimate of client-side connection speed and thus produce differences in the sort order.

    So, given all this new information, what is the best way to check SERPs? A nut case like me would use automation tools to pilot the top 5 versions of the top 5 browsers through a large list of dedicated proxies, search my list of keywords of interest, and cache the result pages locally. Then I would write a script to parse the results into a file with fields like this:

    date|browserVersion|keyword|page|resultOnPage|totalResultsOnPage|url|title|summary|googleCacheURL|linkToCacheSerpPageForReference
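
    Roughly like this in Python (a sketch only; the CSS selectors are guesses, since Google's markup changes all the time, and BeautifulSoup is a third-party library):

    Code:
    import os, datetime
    from bs4 import BeautifulSoup  # pip install beautifulsoup4

    HEADER = ("date|browserVersion|keyword|page|resultOnPage|totalResultsOnPage|"
              "url|title|summary|googleCacheURL|linkToCacheSerpPageForReference")

    def parse_serp(path, browser, keyword, page_no):
        """Turn one locally cached SERP page into pipe-delimited records."""
        with open(path, encoding="utf-8") as fh:
            soup = BeautifulSoup(fh, "html.parser")
        results = soup.select("div.g")        # hypothetical organic-result selector
        total = len(results)
        rows = []
        for i, res in enumerate(results, start=1):
            link = res.select_one("a")
            title = res.select_one("h3")
            summary = res.select_one(".st")   # hypothetical snippet selector
            rows.append("|".join([
                datetime.date.today().isoformat(),
                browser, keyword, str(page_no), str(i), str(total),
                link["href"] if link else "",
                title.get_text(" ", strip=True) if title else "",
                summary.get_text(" ", strip=True) if summary else "",
                "",                           # google cache URL, if captured
                os.path.abspath(path),        # link back to the cached SERP page
            ]))
        return rows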

    I would do this for every result including my sites because then it can be used to analyze your competition too.

    You will also find interesting things, like the fact that when you report rankings by site you need a min, max, and average ranking, since a site can have multiple pages in the SERPs. I even found a couple of rare instances of the same URL being on page 1 and page 4 or so for a keyword. I think this occurs because page one is served from a cluster of "page 1" servers with a high degree of caching, and pages 2+ probably go to a differently configured cluster with different caches.
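
    The per-site rollup is simple enough; something like this (sample rows are invented for illustration):

    Code:
    from collections import defaultdict
    from statistics import mean
    from urllib.parse import urlparse

    def site_rank_summary(rows):
        """rows: (keyword, absolute_rank, url) tuples from the parsed SERP data."""
        by_site = defaultdict(list)
        for keyword, rank, url in rows:
            by_site[urlparse(url).netloc].append(rank)
        return {site: {"min": min(r), "max": max(r), "avg": mean(r)}
                for site, r in by_site.items()}

    sample = [("blue widgets", 3, "http://example.com/a"),
              ("blue widgets", 17, "http://example.com/b"),
              ("blue widgets", 9, "http://rival.com/x")]
    print(site_rank_summary(sample))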

    Then, to get the market perspective, you weight your data by browser market share, and you'll have the closest estimate of your rankings you can reasonably achieve. Then determine a sample size for your niche. I have found that many small niches have a keyword scope of about 50,000 keywords, and things like online stores with 10 or more categories tend to be in the area of around 250,000 keywords... you want at least a 5% sample of your keyword space. Use your sample religiously and resist the temptation to change it. Determining the averages over your keyword space is how you really employ SEO at scale. Tracking only your best words is probably the most common SEO mistake, one that even the big firms make. If a service or tool doesn't enable you to track up to 50,000 keywords weekly or monthly, then it is not a tool that can scale with your business. I find that very few services and tools can do this, so I largely build and host my own systems. Even when I find a service that can handle the scale, they often mess up one of the other critical areas of rank tracking, and that produces bad data.
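
    The weighting step is just a share-weighted average per keyword; a sketch (the share numbers below are placeholders, not real market data):

    Code:
    # Made-up browser shares for illustration only.
    BROWSER_SHARE = {"chrome": 0.25, "firefox": 0.25, "ie": 0.40, "safari": 0.10}

    def weighted_rank(rank_by_browser, shares=BROWSER_SHARE):
        """rank_by_browser: {'chrome': 4, 'firefox': 6, ...} for one keyword."""
        covered = sum(shares[b] for b in rank_by_browser)
        return sum(rank * shares[b] for b, rank in rank_by_browser.items()) / covered

    print(weighted_rank({"chrome": 4, "firefox": 6, "ie": 9, "safari": 5}))

    # Sample-size rule of thumb: track at least 5% of the keyword space.
    keyword_space = 250_000
    print(int(keyword_space * 0.05))  # 12,500 keywords in the tracking sample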

    Once you have your system up and running, you need to spot-check a sample of the rankings to produce a margin of error. Just verify, for 3-5% of your results, that they really did appear at, say, #4 on page 4 in your cached SERP pages. This can be added to your data as a +/- x% error. It will also show where your system is flawed so you can track improvements to it, and it will tell you when Google sneaks in new surprises. Oftentimes a change in the number of results per page on a given page will signal a new Google gizmo appearing in an organic slot, and the cached pages let you see what appeared at that time.
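
    The spot-check itself is a few lines; verify_against_cache here is a stand-in for whatever function re-parses the cached page and confirms the recorded position:

    Code:
    import random

    def spot_check(records, verify_against_cache, fraction=0.04):
        """Re-verify 3-5% of recorded rankings against the cached SERP pages."""
        sample = random.sample(records, max(1, int(len(records) * fraction)))
        mismatches = sum(1 for rec in sample if not verify_against_cache(rec))
        return mismatches / len(sample)  # e.g. 0.02 -> report rankings as +/- 2%

    # Usage: error = spot_check(all_rows, recheck_cached_page)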


    I know... this is crazy... no normal human being would do this...

    Poor Man's Method:

    Check your rankings on the same computer, on the same network connection, at the same time of day, after logging out and clearing your cookies and cache. Capture the same data fields I suggested above and cache a local copy of the SERP pages so you can verify what you see in the data as it appeared at that point in time.
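
    A bare-bones version of that capture step, sketched with Selenium (the search URL pattern and file layout are assumptions; a fresh driver profile stands in for "logged out, cookies cleared"):

    Code:
    import datetime, pathlib, urllib.parse
    from selenium import webdriver  # pip install selenium

    def capture_serp(keyword, page=0, out_dir="serp_cache"):
        driver = webdriver.Firefox()  # fresh profile: no cookies, no login
        try:
            query = urllib.parse.quote_plus(keyword)
            driver.get(f"https://www.google.com/search?q={query}&start={page * 10}")
            stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
            path = pathlib.Path(out_dir) / f"{query}-p{page}-{stamp}.html"
            path.parent.mkdir(exist_ok=True)
            path.write_text(driver.page_source, encoding="utf-8")
            return path  # local cached copy to verify against later
        finally:
            driver.quit()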

    This won't tell you what the world sees, but it will be a relative indicator of whether you are doing better or worse overall in SEO. There are a lot of assumptions baked in, though, like browser market share not changing, and improvement in one browser meaning equivalent improvement in the others... etc. At the end of the day you can only go so far trying to apply the scientific method to an art, but knowing your limitations and the limits of the problem is a good place to start.

    Cheers!

    Ted
     
    • Thanks x 2
    Last edited: Dec 29, 2011