I was reading something you may find useful: Components of Google's Ranking Algorithm in 2010 - Linking Still King. It would be cool to post here your own thoughts on the factors and how much you think each weighs in Google's algorithm. Personally I would break it down like this (these are 100% my own opinions and are not necessarily backed up by others or by hard data/tests, other than my own subjective observations):

Trust/authority of host domain: 35%
Among the factors that determine domain authority I would mention: number and quality of links, topic coverage of the link sources, IP and class C distribution (number of distinct class C IPs divided by total number of IPs), and number of IPs divided by number of domains.

Link popularity of the specific page: 20%
Here I would say we have to consider both quantity and, especially, quality.

Anchor text of external links to the page: 15%
Varies depending on the quality of the links and on convergence (the percentage of links whose anchors share the same/similar/subset/superset words, out of all links - higher convergence means Google can trust more that the page is about that particular keyword).

Anchor text of internal links to the page: 5%
Could increase if domain trust and topic coverage are high, and/or if not enough external links have useful anchor text to determine what the page is about, or if the external links are low quality (not to be trusted much).

Topic coverage: 20%
By topic coverage I am referring to how much content/information about the keyword (the topic of interest) is available on a site. Basically, a site with 10 pages about "weight loss" and 10,000 pages about various other, mostly unrelated topics will have a much lower topic coverage rank than a site with 1,000 pages about "weight loss" and 10 pages about various other topics.
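The convergence idea for anchor text can be made concrete with a toy sketch. This is purely illustrative - the function name and the "any shared word counts" matching rule are my own simplifications, not anything Google has published:

```python
def anchor_convergence(anchors, keyword):
    """Fraction of inbound anchors whose words overlap the target keyword.

    An anchor counts as 'converging' when its word set shares at least one
    word with the keyword - a crude stand-in for the same/similar/subset/
    superset matching described above.
    """
    kw_words = set(keyword.lower().split())
    matching = sum(1 for a in anchors if set(a.lower().split()) & kw_words)
    return matching / len(anchors) if anchors else 0.0

anchors = ["weight loss tips", "lose weight fast", "click here",
           "weight loss", "my homepage"]
print(anchor_convergence(anchors, "weight loss"))  # → 0.6 (3 of 5 anchors)
```

A page where 60% of anchors converge on "weight loss" would, in this view, be a safer bet for that query than one where only the occasional anchor mentions it.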
Basically this indicates how much of a "specialist" the site is on a topic - much like comparing a butcher with a hobby interest in quantum mechanics to a physics researcher working on quantum mechanics at CERN.

On-page keyword usage: 5% (a function of domain trust, topic coverage, and anchor text similarity)
I think this is a variable-weight factor. If domain trust (and several other factors) is high, Google will "trust" your site more, and hence trust that you used the keyword on the page in a way that is useful to the visitor, not just to rank better (spammy). Because of this, in practice the influence of this factor becomes a tiebreaker between results of similar rank (based on the other factors, primarily links, topic coverage and anchor text).

Registrar & host data: critical
This is critical because it works in a pass/fail way. It is used as part of the anti-spam algorithm to determine whether a site is a spam result. If the site passes the test, the other factors are used to rank it. If it fails, it receives a ranking "penalty" (it ranks significantly lower, or doesn't rank at all, either for that keyword or for all keywords where there are results that are clearly not spam). Obviously, this is not the only factor used in a pass/fail way: link graph analysis, linking patterns, content patterns, etc. are also used as anti-spam signals.

Traffic + CTR data: low, tiebreaker
I see traffic stats as difficult to use reliably as a ranking factor. A domain with a lot of traffic (based on Google Analytics) doesn't necessarily have better coverage of a given topic than a domain with lower traffic. SERP CTR data, on the other hand, I think can be used as an indicator of the quality of the SERP excerpt (how appealing it looks to the searcher), so it could have a low impact in the ranking algorithm.
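Putting the percentages and the pass/fail idea together, a toy model might look like the sketch below. The weights are the ones I guessed above, and the 0.01 demotion multiplier is an arbitrary illustration of a "penalty" - none of these numbers are real:

```python
# Hypothetical weights from my estimates above (they sum to 1.0).
WEIGHTS = {
    "domain_trust": 0.35,
    "page_link_popularity": 0.20,
    "topic_coverage": 0.20,
    "external_anchor_text": 0.15,
    "internal_anchor_text": 0.05,
    "on_page_keywords": 0.05,
}

def rank_score(factors, passes_spam_check):
    """Weighted sum of per-factor scores (each in 0..1), gated by the
    pass/fail anti-spam check: a failed check demotes the page outright,
    regardless of how strong the other factors are."""
    score = sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)
    return score if passes_spam_check else score * 0.01  # illustrative penalty

print(rank_score({"domain_trust": 0.9, "topic_coverage": 0.8}, True))
print(rank_score({"domain_trust": 0.9, "topic_coverage": 0.8}, False))
```

The point of the gate is that it runs before the weighting matters at all, which is why I call registrar & host data "critical" rather than giving it a percentage.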
However, I think this is used more as a tiebreaker: if two results score similarly on the other factors, Google will use this data to pick one of them as the better answer for the searcher.

Social graph metrics: I have no idea
This is not my thing. I never did social optimization, so I don't have an opinion about it, other than that it has much lower importance than the factors above. That said, it makes sense to a certain extent for Google to take into account, even if just a little, mentions of your domain name, URLs from your site that are not clickable hyperlinks, mentions of your brand name, etc.

---

An important note I want to make is that I truly believe the algorithm is not linear. That is, the factors are not taken separately; each is a function of other factors. In layman's terms it goes something like: "if factor F1 meets criteria F1c1, apply function f1(F1); else if it meets criteria F1c2, apply f2(F1); else if f3(F1, F2, F3) = X, apply f4(F1)". Maybe not that clear, but read it a few times and you should see what I mean. The factors that are usually combined to compute an intermediary result are those of the same type (e.g. all factors that have to do with links: number, anchor text, IP distribution), but it also makes sense for "unrelated" factors to be coupled (e.g. giving the final result more weight in the SERP when there is a very clear concordance between multiple factors, such as link anchor texts, on-page keywords, and title tag keywords).

Feel free to give reputation or say thanks if this helps, but most importantly let us know your thoughts. I think understanding the factors as deeply as possible, and how they interact with each other, is fundamental to being a good SEO and not just an average one.
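To make the non-linearity point concrete, here is the on-page keyword example from earlier written out as code. The thresholds and returned weights are invented for illustration; the shape (the weight of one factor depending on other factors) is the idea:

```python
def on_page_weight(domain_trust, topic_coverage):
    """Variable weight for on-page keyword usage: the more Google trusts
    the domain, the more it trusts that on-page keywords serve the visitor.
    All thresholds and weights here are illustrative guesses, not real values.
    """
    if domain_trust > 0.7 and topic_coverage > 0.5:
        return 0.08  # trusted specialist site: on-page signals count more
    elif domain_trust > 0.3:
        return 0.05  # average trust: the baseline 5% weight
    return 0.02      # low trust: on-page keywords nearly ignored (spam risk)

print(on_page_weight(0.9, 0.8))  # → 0.08
print(on_page_weight(0.1, 0.9))  # → 0.02
```

This is the "if F1 meets criteria, apply f1(F1), else f2(F1)" pattern in miniature: the same on-page signal contributes differently depending on the trust context it appears in.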