Here is a little secret I decided to share =) When scraping Google I used public proxies (lots of them), call me a cheapskate. There have been a lot of topics saying that private proxies are best for scraping etc., but let's face it, they cost a lot, while public proxies cost next to nothing. The problem is that they get banned a lot, because people use them for scraping. So how do you scrape without getting banned? Even better, how do you scrape via semi-banned proxies?

The fact is that Google never actually bans an IP completely; it bans certain requests from it, e.g. footprints. How do I know this? I have my own scraping tools that I sometimes share on this forum, and while writing them I came across a strange thing: my scripts couldn't scrape and got asked to enter a captcha, while I could google normally using a browser from the same IP. So I set out to find out why this was happening. I tried a LOT of stuff: headers, cookies, requests to some hidden stuff, etc. Here are the quickest tips for you guys:

Tip 1 (Scrapebox does this automatically, but maybe this will be of use to someone): replace ' ' with +. By default, URL encoding turns spaces into %20, and Google notices that. If you look at any Google search URL you will see only +; Google rewrites your query to have + instead of %20. So do the same.

Tip 2. This is real gold. The problem with getting banned is your FOOTPRINT. When searching with footprints that contain stuff like inurl: and "powered by wordpress" I got banned almost instantly. Later, even when using a browser, such queries would produce a captcha. The problem with Scrapebox is that it gives up on a keyword after a certain number of tries, so if you use footprints, usually around 30% of keywords will fail: Google bans you for some time, and during that time Scrapebox wastes keywords. I think Google also "bans" footprints, meaning that if it spots suspicious footprints it may ban the IP. So instead of "powered by" and inurl:, concentrate on other text that appears on the page.
For example, when I searched for vBulletin forums I didn't use inurl:"member.php" "powered by vbulletin", but instead: "Home+page"+"find+all+posts+by+this+user"+"about+me" etc. If you rewrite your footprints to be less "generic", Scrapebox will lose far fewer keywords. I tried this and my SB had 0 keyword loss with 5000 keywords over public proxies. Hope this helps.
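Tip 1 can be sketched in a few lines of Python with the standard urllib module; the query string here is just an example footprint, not from the original post:

```python
from urllib.parse import quote, quote_plus

# An example footprint with spaces, as you'd type it into Google.
query = '"find all posts by this user" vbulletin'

# Default percent-encoding turns spaces into %20 -- this is what
# many naive scrapers send in the URL.
print(quote(query))       # -> %22find%20all%20posts%20by%20this%20user%22%20vbulletin

# quote_plus encodes spaces as +, matching what Google's own
# search URLs look like.
print(quote_plus(query))  # -> %22find+all+posts+by+this+user%22+vbulletin
```

In most HTTP libraries the params/data helpers already use the quote_plus style, so this mainly matters if you build the search URL string yourself.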
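To make Tip 2 concrete, here is a minimal sketch of building a less "generic" query URL from a keyword plus on-page phrases instead of inurl: operators. The phrase list and helper name are made up for illustration:

```python
from urllib.parse import quote_plus

# Content-based footprints: phrases actually visible on vBulletin
# pages, instead of "generic" operators like inurl:"member.php"
# or "powered by vbulletin". (Hypothetical example list.)
FOOTPRINT_PHRASES = ['Home page', 'find all posts by this user', 'about me']

def build_query_url(keyword):
    # Quote each phrase so Google matches it exactly.
    parts = [keyword] + ['"%s"' % p for p in FOOTPRINT_PHRASES]
    # quote_plus encodes spaces as +, per Tip 1.
    return 'https://www.google.com/search?q=' + quote_plus(' '.join(parts))

print(build_query_url('fishing'))
# -> https://www.google.com/search?q=fishing+%22Home+page%22+%22find+all+posts+by+this+user%22+%22about+me%22
```

The resulting URL carries no inurl: or "powered by" footprint, which is the property the post says keeps public proxies alive longer.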