Alright folks, these are my twists on existing methods that should yield pretty cool results with Scrapebox. Let's get started.

Requirements:
- Scrapebox, Hrefer, or any other harvesting tool you can adapt (this tutorial is geared towards Scrapebox though)
- VPS or dedicated server (HIGHLY recommended because we'll be using tons of bandwidth)

1. Start by running a few searches here on the forum and on Google: ".edu blog link", "auto approve list", "PR7 blog link", "free PR6 blog link"...etc, whatever you can come up with.
2. Grab the ones that seem most popular. Focus on very rare links that are shared in public, like high PR auto-approve .edus. We want links that are very heavily spammed.
3. Compile what you find into one list, then remove duplicates and low PRs. I keep PR6 and above, in addition to the .govs, .edus, etc. if there are any. (I know there won't be many URLs left.)
4. [OPTIONAL] Now here's a cool twist. Visit a couple of the very best ones you can find, PR7+. You'll notice they're spammed to death, of course. Many of them will have tens of thousands of comments spread across 80+ pages, with each page holding just a hundred or so. Use my PHP script (or anything else you like) to generate all of those comment page URLs. Instructions inside:
Code:
http://www.mediafire.com/?o5fjd2cy4deadhe
5. Repeat step 4 for the best blogs you can find that have many comment pages.
6. Compile a file of all the URLs gathered in steps 3, 4 and 5.
7. Open the Scrapebox link extractor plugin and import the list from step 6.
8. Choose "outbound links" and start scraping the links on each page.
9. You should end up with tens of thousands of spammer URLs, all gathered from high PR blogs.
10. These are your keywords: add them to the Scrapebox text editor and put the "link:" operator before each of them.
11. Use the custom footprint harvester.
12. [OPTIONAL] Use the blog analyzer addon to filter out non-compatible blogs.
13. [OPTIONAL] Use CrazyFlx's tutorial to deal with massive text files:
Code:
http://crazyflx.com/scrapebox-tips/remove-duplicate-domains-urls-from-huge-txt-files-with-ease/
14. [OPTIONAL] Repeat the whole process with the new list, if you find it worthwhile, until you've scraped the whole web.

This method has great potential and lets you scrape billions of URLs. Even if 70% or more turn out to be duplicates, you would still end up with millions of unique URLs. Spammers occasionally compile and use their own lists, so your chances of finding URLs nobody else has are good.

That's my primary scraping method. It HAS been mentioned time and time again, but here it comes with a few additional tweaks. The "comment page" tweak in particular helps you reach spammer URLs that others don't, letting you steal their backlinks while others can't.

Enjoy folks!
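For anyone who can't (or doesn't want to) use the PHP script linked in step 4, here's a minimal Python sketch of the same idea: expanding one blog post URL into all of its comment-page URLs. It assumes the standard WordPress "/comment-page-N/" permalink convention; the example URL and page count are hypothetical.

```python
def comment_page_urls(post_url, num_pages):
    """Yield one URL per comment page of a WordPress post.

    Assumes the WordPress "/comment-page-N/" pagination convention.
    """
    base = post_url.rstrip("/")  # avoid doubled slashes when joining
    for n in range(1, num_pages + 1):
        yield f"{base}/comment-page-{n}/"


if __name__ == "__main__":
    # Hypothetical PR7 .edu blog post with 80 comment pages (step 4).
    for url in comment_page_urls("http://example.edu/blog/some-post", 80):
        print(url)
```

Dump the output into a text file and it slots straight into the list you compile in step 6.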
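Steps 7-8 are handled by the Scrapebox link extractor plugin, but if you're curious what "outbound links" means under the hood, here's an illustrative stand-in using only the Python standard library: it collects every anchor whose resolved URL points off the page's own domain. Class and example URLs are my own, not part of the plugin.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse


class OutboundLinkParser(HTMLParser):
    """Collect links on a page that point to a different domain."""

    def __init__(self, page_url):
        super().__init__()
        self.page_url = page_url
        self.page_domain = urlparse(page_url).netloc.lower()
        self.outbound = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href")
        if not href:
            return
        absolute = urljoin(self.page_url, href)
        domain = urlparse(absolute).netloc.lower()
        # Skip same-domain links and non-http hrefs like mailto: (empty netloc).
        if domain not in ("", self.page_domain):
            self.outbound.append(absolute)
```

On a heavily spammed comment page, almost every one of those outbound links is a spammer's money site, which is exactly what step 9 harvests.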
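Step 10 is just string prefixing, but since Scrapebox expects one query per line, a tiny helper like this (names are mine, not a Scrapebox feature) keeps the list clean while adding the operator:

```python
def make_link_queries(urls):
    """Prefix each non-empty URL with the "link:" search operator."""
    return ["link:" + u.strip() for u in urls if u.strip()]
```

Feed it the scraped spammer URLs and paste the result into the custom footprint harvester for step 11.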
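If you'd rather not follow the CrazyFlx write-up in step 13, the core trick for huge text files is streaming: read line by line and keep only the first URL seen per domain, so memory grows with the number of unique domains rather than the file size. A sketch (file paths are hypothetical):

```python
from urllib.parse import urlparse


def dedupe_domains(in_path, out_path):
    """Keep the first URL per domain from a (possibly huge) URL list."""
    seen = set()
    with open(in_path) as fin, open(out_path, "w") as fout:
        for line in fin:  # streams the file; never loads it all at once
            url = line.strip()
            if not url:
                continue
            domain = urlparse(url).netloc.lower()
            if domain not in seen:
                seen.add(domain)
                fout.write(url + "\n")
```

This mirrors what Scrapebox's "remove duplicate domains" does, just without the size limits of loading everything into the GUI.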