Google is always failing in ScrapeBox: 302 error (IP Blocked)

ningning

When I first got ScrapeBox the program could harvest URLs from Google.

But lately I get a "302 error (IP Blocked)" every time I try to harvest with Google.

This happens even with working proxies.

Does anyone know how to solve this problem?
 
It means what it says: the IP is already blocked by Google. You're using public proxies, right? Get some private ones; around 10 should be good for scraping.
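If you want to see which of your proxies Google has already burned, here's a minimal sketch (not ScrapeBox's own check): request a search page through each proxy without following redirects, and treat a 302 (usually a bounce to Google's captcha page) as "blocked". The URL, query, timeout, and status list are illustrative assumptions.

```python
# Hedged sketch: detect proxies Google has blocked by catching the 302 itself.
import urllib.error
import urllib.request

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, *args, **kwargs):
        return None  # don't follow redirects; surface the 302 as an HTTPError

def google_blocked(proxy: str, timeout: float = 10.0) -> bool:
    """Return True if Google refuses a search made through `proxy`."""
    opener = urllib.request.build_opener(
        NoRedirect(),
        urllib.request.ProxyHandler({"http": proxy, "https": proxy}),
    )
    req = urllib.request.Request(
        "http://www.google.com/search?q=test",
        headers={"User-Agent": "Mozilla/5.0"},  # look a bit less like a bot
    )
    try:
        return opener.open(req, timeout=timeout).status != 200
    except urllib.error.HTTPError as e:
        return e.code in (302, 403, 429, 503)  # redirected / blocked / rate-limited
    except OSError:
        return True  # dead or unreachable proxy: unusable either way
```

Run it over your proxy list before a harvest and drop anything that comes back True.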
 
Hi, I don't get this error code, but I can't harvest with Google either; it gives no results, whereas I get many with Yahoo and Bing.

The keyword shows in red with Google, which means it hasn't been taken into consideration.

I am using public proxies, but since it works with Yahoo and Bing, I don't think it's the proxies' fault.

Strange.
 
I am also having this same problem with Google (IP blocked), whereas I can scrape from Bing and Yahoo.

I have opened a couple of threads on it, but nobody seemed to know the answer.

Bumping this thread for a solution.
 
I have no idea; for me it is working. Maybe try harvesting for a few seconds without proxies.
 
Have you tried checking "Use new Proxy Harvester" in the Options tab?
It will filter out all proxies Google is "hating on" if you leave the "ignore Google" box unchecked.
 
Google is much quicker to block them than Yahoo is. You just need to gather a bunch of proxies and let them run. A lot of them will be blocked, but if you start with enough of them you will still be able to scrape the URLs you want. Don't rely only on the proxies that ScrapeBox scrapes for you; find some other sources as well that are not as abused.
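The "start with a big list and let them run" approach above can be sketched like this: merge proxies from several sources, dedupe, then keep whatever passes a per-proxy check. `check` here is any callable that tests one proxy and returns True/False; the worker count is an arbitrary illustrative value.

```python
# Sketch: filter a merged proxy list down to the ones that still work.
from concurrent.futures import ThreadPoolExecutor

def filter_working(proxies, check, workers=50):
    """Run `check` over every proxy in parallel; return the ones that pass."""
    deduped = list(dict.fromkeys(proxies))  # merge sources, keep order, drop dupes
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(check, deduped))
    return [p for p, ok in zip(deduped, results) if ok]
```

The thread pool matters because checking thousands of proxies one at a time, each with a multi-second timeout, would take hours.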
 
Let me just say that I am able to get free public proxies that work just fine with Google, and they give great results.

I know many of you will be pissed at me when I tell you that I'm not going to share my sources, but just think about it for a second. If I told you guys which proxies to use and where to get them, how long do you think they would continue to work?

The nice thing is that many of the proxies I use are elite proxies, but even the highly anonymous ones are usually not ruined by the hordes of SB users.

Nothing personal, just don't want to lose my resource.


~A great man once said: "Everything is for sale."
 
Have you tried checking "Use new Proxy Harvester" in the Options tab?
It will filter out all proxies Google is "hating on" if you leave the "ignore Google" box unchecked.

Thanks for the answer, but where is this found?

Under "Options" or "Settings"?

Under "Options" I see:
- name instance
- export xml
- user agents
- confirm delete
- remove dup
- check addon
 
Having a similar problem... Still can't work out why...
 
Sorry to bump a really old thread, but I am having the same problem!
I have 5 private proxies; they are definitely working with Google, because I test them in IE and I can visit and search on Google.

But when I try to harvest with ScrapeBox it fails: it produces loads of errors and no results.
And when I test the proxies in ScrapeBox it shows the same error: 302 - Found.
 
A browser is going to be different, as it loads JavaScript and adds the additional browser-footprint cookies; this isn't possible with ScrapeBox or any scraper that uses web requests/sockets.

Also check that you're doing the same searches: Google will block more complex searches, but not ones a normal user would do.
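The footprint point above can be made concrete. A scraper that talks HTTP directly sends a bare request: no cookies, no JavaScript execution, and (by default) a library User-Agent. Extra headers can make it *look* more browser-like, but can't replicate a real browser. The header values below are illustrative, not what ScrapeBox sends.

```python
# Sketch: a bare scraper request vs. the same request with browser-style headers.
import urllib.request

BROWSER_LIKE = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Accept": "text/html,application/xhtml+xml",
    "Accept-Language": "en-US,en;q=0.9",
}

def plain_request(url: str) -> urllib.request.Request:
    """What a simple scraper sends: default headers, no session state."""
    return urllib.request.Request(url)

def browser_like_request(url: str) -> urllib.request.Request:
    """The same request dressed up with browser-style headers."""
    return urllib.request.Request(url, headers=BROWSER_LIKE)
```

Even the dressed-up version still carries no cookies and runs no JavaScript, which is exactly why testing a proxy in IE can succeed while the same proxy fails inside a scraper.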
 
Hi, I had only 5 or 6 links succeed out of thousands; all the rest failed. Last year it was working well, and now I have this problem. Does anybody know why?
 
Google doesn't want you to scrape; they've got way, way stricter. Do you have working private proxies? How many threads per proxy?

Tbf, just do Bing or Yahoo...
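On "threads per proxy": one way to keep a single proxy from being hammered is a per-proxy concurrency cap. This is a hypothetical sketch, not a ScrapeBox setting; the default of one slot per proxy is a conservative guess.

```python
# Sketch: cap how many threads may use one proxy at the same time.
import threading

class PerProxyLimiter:
    """Hand out at most `limit` concurrent slots per proxy address."""

    def __init__(self, limit: int = 1):  # 1 thread per proxy: conservative guess
        self._limit = limit
        self._slots: dict[str, threading.Semaphore] = {}
        self._guard = threading.Lock()

    def slot(self, proxy: str) -> threading.Semaphore:
        with self._guard:  # protect the dict from concurrent first-use races
            if proxy not in self._slots:
                self._slots[proxy] = threading.Semaphore(self._limit)
            return self._slots[proxy]
```

Each worker would wrap its request in `with limiter.slot(proxy): ...`, so requests through the same proxy queue up instead of firing simultaneously.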
 