Since I've only started threads asking for help and advice, I thought I might share something myself. I've spent a lot of time playing with ScrapeBox and it's an amazing tool, but I have other things to do than babysit it all day, so I've come up with a streamlined approach to using it effectively. Here's what I do:

* Generate keywords
* Harvest URLs - WordPress
* Harvest URLs - BlogEngine
* Harvest URLs - Movable Type
* Break the harvest down into groups of 100,000 URLs (post to the batches over time)
* Save successful posts to a "success" folder
* Save failed posts to be checked later
* Post to the failed URLs a second time
* Save successful posts to the "success" folder
* Move posts that failed again to a "manual" folder - don't delete them, they might still be worthwhile for manual commenting after checking for edu, gov, high PR, etc.
* Scrape additional URLs for the "success" domains
* Break the additional URLs into batches and post

I'm doing very high volumes, running 4 instances of ScrapeBox on a VPS 24/7. I've come to the conclusion that I simply don't want to spend time checking links, creating auto-approve lists, rechecking what ScrapeBox missed on the first run, etc. The way I see it, I don't really care whether the successful posts are auto-approved or not. Some will be, most won't, but if I've spun some decent comments then there's a good chance that many of them will be approved given time. That gives me a steady stream of backlinks as people approve them and helps my footprint look natural to big G.
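Since ScrapeBox works with plain .txt URL lists (one URL per line), the batching and folder shuffling above doesn't need to be done by hand. Here's a minimal Python sketch of the two fiddly steps: splitting a big harvest into 100k batches, and pulling the unique domains out of the "success" folder to feed back into the harvester. All the file and folder names (harvested_wordpress.txt, success/, etc.) are just placeholders for my own layout, not anything ScrapeBox creates itself - adjust to taste.

```python
#!/usr/bin/env python3
"""Batch-splitting and success-domain extraction for the workflow above.

Assumes ScrapeBox exports URL lists as plain .txt files, one URL per
line. File/folder names are hypothetical examples.
"""
import os
from urllib.parse import urlparse

BATCH_SIZE = 100_000  # groups of 100,000 URLs, as in the list above

def split_into_batches(harvest_file, out_dir):
    """Split a big harvested list into batch files of BATCH_SIZE URLs."""
    os.makedirs(out_dir, exist_ok=True)
    with open(harvest_file, encoding="utf-8", errors="ignore") as f:
        urls = [line.strip() for line in f if line.strip()]
    for i in range(0, len(urls), BATCH_SIZE):
        batch_path = os.path.join(out_dir, f"batch_{i // BATCH_SIZE + 1:03d}.txt")
        with open(batch_path, "w", encoding="utf-8") as out:
            out.write("\n".join(urls[i:i + BATCH_SIZE]) + "\n")

def success_domains(success_dir, out_file):
    """Collect unique domains from every .txt list in the success folder,
    ready to load back into ScrapeBox for the additional-URL scrape."""
    domains = set()
    for name in os.listdir(success_dir):
        if not name.endswith(".txt"):
            continue
        path = os.path.join(success_dir, name)
        with open(path, encoding="utf-8", errors="ignore") as f:
            for line in f:
                netloc = urlparse(line.strip()).netloc
                if netloc:
                    domains.add(netloc)
    with open(out_file, "w", encoding="utf-8") as out:
        out.write("\n".join(sorted(domains)) + "\n")

if __name__ == "__main__":
    split_into_batches("harvested_wordpress.txt", "batches")
    success_domains("success", "success_domains.txt")
```

The success_domains output is what goes straight back into the harvester for the "scrape additional URLs" step, so the whole loop runs with almost no babysitting.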