Hey guys, I have been fiddling around lately with Scrapebox, still trying to master the art of footprints and how to harvest well.

Lately I am experiencing a problem: whenever I try to harvest URLs, Scrapebox says that my proxies are blocked by Google. I have 10 proxies from Squid Proxies and 10 from Yourprivateproxy to test them out, and all of them seemed to die after a fairly small harvest (30,000 URLs). I know I should do the heavy lifting with public proxies, but I was a little lazy about harvesting them; still, I would expect private proxies to last longer. Also, if I browse with Firefox, those 'supposedly' dead proxies work just fine: I can access Google and do my searches. So I think there might be a problem with my Scrapebox settings.

I am still a newbie, so I am definitely missing something, and I thought you could give me a hint. I tried different connection settings, starting with 90, which gave me around 160 URLs/s but quickly dropped to 100 and below; is that normal?

Thanks for your help.
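In case it helps anyone diagnose the same thing: here is a rough sketch of how you could test a proxy against Google outside of both Scrapebox and Firefox, using Python's `requests` library. The proxy address, port, and credentials below are placeholders, not real values; the status codes checked are just the ones Google commonly returns when it rate-limits an IP (it often serves a captcha page with a 429 or 503), which could explain why a proxy looks "dead" in Scrapebox but still works in a browser that sends cookies and a full browser fingerprint.

```python
import requests

def proxy_dict(host, port, user=None, password=None):
    """Build the requests-style proxies mapping for one HTTP proxy."""
    auth = f"{user}:{password}@" if user else ""
    url = f"http://{auth}{host}:{port}"
    return {"http": url, "https": url}

def check_proxy(proxies, timeout=5):
    """Return (status_code, blocked) for a Google search via this proxy.

    blocked is True when Google answers with a rate-limit status
    (403/429/503) or when the proxy is unreachable at all.
    """
    try:
        r = requests.get(
            "https://www.google.com/search",
            params={"q": "test"},
            proxies=proxies,
            timeout=timeout,
            headers={"User-Agent": "Mozilla/5.0"},  # mimic a browser UA
        )
        return r.status_code, r.status_code in (403, 429, 503)
    except requests.RequestException:
        return None, True  # unreachable counts as unusable

if __name__ == "__main__":
    # Placeholder proxy; substitute one of your own IP:port pairs.
    status, blocked = check_proxy(proxy_dict("1.2.3.4", 8080), timeout=3)
    print(f"status={status} blocked={blocked}")
```

If a proxy comes back blocked here but works in Firefox, the difference is probably headers/cookies rather than the IP being truly banned, which would point at Scrapebox's request settings (connections, delay, user agent) rather than the proxies themselves.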