I'm looking for constructive criticism and advice regarding a technique I read about which supposedly worked very well a few years ago, but which probably no longer works.

The method: the basic idea was to scrape titles of blog posts from places such as http://weblogs.com/ , then create your own RSS feed using those same titles, a description you make up, and any URL of your choosing. The RSS feed is essentially 'fake' in that it doesn't correspond to a real blog; it's just a feed. You then ping http://blogsearch.google.com/ with your fake feed, wait for it to get listed, and hope that some autobloggers scrape it, thus publishing your post on their autoblogs. If they credit the original blog they scraped the post from, that credit link doesn't point to a real blog; it uses the URL you placed in the feed, giving you a backlink to wherever you specified. You could also include a backlink within the description/post itself, so that if they don't strip URLs, you get a contextually relevant link inside their autoblog post. Obviously, this would need to be done on a very large scale, perhaps using hundreds or thousands of feeds, which shouldn't be too difficult if it's mostly automated.

When I first read about this technique I was very enthusiastic, but the more I looked into it, the more problems I found. I think many of those problems may be solvable, though, and that's where I'd like your input.

Potential problems and possible solutions:

Problem: blogsearch doesn't list links to RSS feeds; it lists links to blogs. Presumably, when autobloggers are scraping (or manually searching) for RSS feeds to add to their autoblogs, they use blogsearch to find relevant blogs, visit those blogs, then look for an RSS feed link. So merely providing Google with a fake RSS feed isn't going to accomplish much, even if they do list it.
The scrapers/bloggers would just end up at the URL you want a backlink pointed to, rather than at a blog where they can grab your (fake) RSS feed.

Solution: rather than just scraping titles and using a stock description in a fake feed, simulate a real blog. Use spun content to generate titles and post bodies, and generate a real RSS feed corresponding to those posts, so it looks just like any other blog.

However, blogsearch appears to mainly return very recent posts; only one result on this page (as I look at it) is more than 24 hours old: http://blogsearch.google.com/blogsearch?hl=en&ie=UTF-8&q=bravia

That might open a window: give blogsearch a real RSS feed pointing to real (spun) posts, then after 24 hours simply 301 redirect each post. That way blogsearch is listing genuine posts, autobloggers are finding real posts and real RSS feeds, and after they subscribe to your feed, the links they provide to your "blog" just 301 to whatever you specify. If they happen to leave your links within the posts intact, all the better.

This could all be generated automatically, perhaps on subdomains, and I can easily envision generating thousands of feeds like this very quickly. The big question that remains: how hard is it to get blogsearch to list a blog with no backlinks? If autobloggers can't find the content to scrape, the whole endeavor is pointless.

Any comments or suggestions?
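To make the feed-generation part concrete, here's a minimal sketch of building an RSS 2.0 document where each item's link points wherever you choose. All names, URLs, and post data below are hypothetical placeholders, not anything from a real implementation; it only uses the Python standard library.

```python
import xml.etree.ElementTree as ET
from email.utils import format_datetime
from datetime import datetime, timezone

def build_feed(blog_title, blog_url, posts):
    """Build a minimal RSS 2.0 document from (title, link, description) tuples.

    Each item's <link> can be any URL — this is the field an autoblogger's
    "via" credit would typically use.
    """
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = blog_title
    ET.SubElement(channel, "link").text = blog_url
    ET.SubElement(channel, "description").text = blog_title
    for title, link, description in posts:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = link  # URL you want credited
        ET.SubElement(item, "description").text = description
        ET.SubElement(item, "pubDate").text = format_datetime(
            datetime.now(timezone.utc))
    return ET.tostring(rss, encoding="unicode")

# Hypothetical usage with one scraped title and a made-up target URL:
feed = build_feed(
    "Example Gadget Blog",
    "http://blog.example.com/",
    [("Scraped Title Here",
      "http://target.example.com/page",
      "Spun description with an inline link: http://target.example.com/")],
)
```

Scaling this to thousands of feeds is then just a loop over title/URL lists; whether blogsearch would ever index any of them is the open question above.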
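The 24-hour 301 idea could be sketched as a tiny routing rule: serve a post normally while it's fresh enough for blogsearch to list, then flip it to a permanent redirect. The slug, timestamps, and target URL are all hypothetical assumptions for illustration.

```python
import time

# slug -> (published_unix_time, redirect_target) — hypothetical data
POSTS = {
    "scraped-title-here": (time.time(), "http://target.example.com/page"),
}

GRACE_PERIOD = 24 * 3600  # keep the real (spun) post live for 24 hours

def respond(slug, now=None):
    """Return an (HTTP status, location-or-body) pair for a post URL.

    Fresh posts get a 200 with the spun body; anything older than the
    grace period gets a 301 to the specified target.
    """
    now = time.time() if now is None else now
    published, target = POSTS[slug]
    if now - published > GRACE_PERIOD:
        return 301, target  # old post: permanent redirect to the target
    return 200, "<html>spun post body</html>"  # fresh post: real page
```

A real deployment would more likely do this in the web server config (e.g. rewrite rules) than in application code, but the logic is the same either way.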