Am I screwing something up?

mojito56 · Registered Member · Joined Apr 24, 2016 · Messages: 54 · Reaction score: 52
Bought a bunch of proxies from ProxyMillion cause they were hella cheap. Thing is, they don't work in Scrapebox no matter what I do. I get Error 403 in the proxy tester and the same error in the harvester (but for some reason Yahoo is showing up as completed?).

I then tried the proxies in Firefox where they seem to work just fine:

I still really need the proxies to work in Scrapebox though. Is it an error on my side? My proxy provider? Scrapebox? Sorry if this is in the wrong forum
 
It's a problem because you didn't read what you were buying :) And because you don't know the main rule: there is no such thing as CHEAP and GOOD. You can have good, or you can have cheap.
Better to message ProxyMillion support; I bet they will help you.
Anyway, a 403 error means you don't have permission. ProxyMillion proxies work on 5 sources only, so maybe usage with Scrapebox isn't allowed. Or they use a user:pass login which you forgot to enter there.
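If it's the user:pass case, the entry format matters. As a quick sketch (Python, names are mine, not Scrapebox's; the two formats below are the common ones providers document, so check which one your tool expects):

```python
# Hypothetical helper: build a proxy entry string for an authenticated proxy.
# Tools commonly accept either user:pass@host:port or host:port:user:pass.
def format_proxy(host, port, user=None, password=None, style="at"):
    """Return host:port if no credentials, else one of the two common formats."""
    if not (user and password):
        return f"{host}:{port}"
    if style == "at":
        return f"{user}:{password}@{host}:{port}"
    return f"{host}:{port}:{user}:{password}"   # colon-separated variant
```

If the proxies work in Firefox only after a login popup, that's a strong hint you need to add the credentials in Scrapebox too.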
 
ProxyMillion supports only a couple of pages... they're not made for spamming.
 
The problem is the way Scrapebox checks the proxies. When you verify them in the Proxy Manager (even when you run the Google test), the first request goes to Scrapebox's own check server, and only if that connection completes does the next request go to e.g. the Google page.
Now, if your proxies only allow connections to 5 services and none of them is the SB check server, a 403 code is returned because the SB server was blocked.

To really test them for Scrapebox, you need to verify them directly against the pages they can operate on, e.g. Google or Yahoo.
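A direct check like that is easy to script yourself. A minimal sketch (Python; this is my own code, not what Scrapebox does internally, and the fetch step is injectable so you can swap in whatever HTTP client you like):

```python
import urllib.request

def check_proxy(proxy, url="https://www.google.com", fetcher=None):
    """Return True if fetching `url` THROUGH the proxy gives HTTP 200.

    `proxy` is a host:port (or user:pass@host:port) string. By default the
    request is routed through urllib's ProxyHandler; pass a custom `fetcher`
    (url -> status code) to test without touching the network.
    """
    if fetcher is None:
        def fetcher(u):
            handler = urllib.request.ProxyHandler({"http": proxy, "https": proxy})
            opener = urllib.request.build_opener(handler)
            return opener.open(u, timeout=10).getcode()
    try:
        return fetcher(url) == 200
    except Exception:
        return False  # connection refused, 403, timeout, etc.
```

Run it against Google or Yahoo directly and you'll see whether the proxies actually work for harvesting, regardless of what the Proxy Manager says.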
 
Yup, pretty much: the proxies you signed up for weren't meant for SB.
 
Not exactly :)
He can use them with SB, for example to harvest Google, but he won't be able to verify them in the Proxy Manager.

So you could rather say that Scrapebox is not a suitable tool for checking proxies with limited access to only a few pages, because of the way it does the check.

All the best!
 
That's right. As a rule, you either need more proxies or more patience.

You can, however, make it less painful with the appropriate settings. In the video below, Loopline showed the general rules to follow when planning a harvest:


Appropriate delays between queries, the number of threads matched to the number of proxies you hold, etc.

These are general rules, but in the end the final shape of these settings depends on the experience the user has gained.
 
Google scraping will continuously give you trouble, and you'll need to tackle it all the time :)

With new IPs, captcha solutions, programming changes, etc.
 