So, I'm somewhat confused about this. I run a couple of game-based scraper sites, and I know other sites that use the same CMS. Those PhD dudes in the big chocolate factory seem pretty good at filtering out dupes. My site has about 5,000 games, yet only 370 or so are indexed. By pinging the RSS feed after a large scrape I've managed to push that up to about 1,000, but a while later Google drops them again and it falls back to around the 370 mark.

Now, 'duplication' has several aspects as far as on-page SEO goes:

- meta tags
- page titles
- URLs
- on-page descriptions

Another site I know has 30,000 games but only about 700 indexed pages. This doesn't appear to be universal though; take a look at this search result: clicky. There's a lot of the same content indexed there.

I'm going to try adding variance to the meta descriptions, plus some other on-page tweaks (rough sketch of what I mean below).

Any ideas on why Google is indexing and then dropping these pages, and how to get around it? I presume there's some 'domain filtering' as well as 'page filtering', or some mix of that plus near-total duplication across the site, plus possible other factors?
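For the meta description variance, this is roughly the kind of thing I have in mind. It's just a minimal sketch, assuming a hypothetical game record with title, category, and plays fields pulled from the CMS database; the field names and templates are purely illustrative, not anything specific to the CMS.

```python
import random

# Hypothetical game record as it might come out of the CMS database;
# the field names here are just illustrative.
game = {"title": "Example Tower Defense", "category": "Strategy", "plays": 10423}

# Several hand-written templates so every game page doesn't end up
# with the same boilerplate meta description.
templates = [
    "Play {title} online for free - one of our most popular {category} games.",
    "{title}: a free {category} game played {plays} times so far. No download needed.",
    "Looking for {category} games? Try {title} right in your browser, free.",
]

def meta_description(game: dict) -> str:
    # Seed on the title so the same game always gets the same description,
    # rather than a different one on every crawl.
    rng = random.Random(game["title"])
    return rng.choice(templates).format(**game)

print(meta_description(game))
```

The idea is that descriptions stay stable per page (so the crawler doesn't see churn) but differ from page to page instead of being one template repeated 5,000 times.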