Discussion in 'Black Hat SEO Tools' started by JustUs, Mar 23, 2015.
What is the difference in the harvester between the Google API and Google in the Beta?
The Google API is deprecated AFAIK, but it still works. Proxies banned from normal search should generally still work with it (again, AFAIK — it's been a while since I messed with it). I think it delivers 50 results max at a time.
Not 100% sure, but as with other bots, the API option actually uses the API, whereas the regular one mimics the browser.
This does not seem to make sense. I can search for a term in Google, specifying the starting result and the number of results (up to 100), with a standard search like www.google.com/search?q=term&start=0&num=100
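The URL pattern above can be sketched like this — a minimal example, assuming the q/start/num parameters behave as described (Google caps num at 100 and can change or rate-limit this at any time):

```python
from urllib.parse import urlencode

def search_urls(term, total=300, per_page=100):
    """Yield paginated Google search URLs covering `total` results,
    `per_page` at a time, using the start/num parameters."""
    for start in range(0, total, per_page):
        params = {"q": term, "start": start, "num": per_page}
        yield "https://www.google.com/search?" + urlencode(params)

# Three pages of 100 results each for one keyword:
urls = list(search_urls("example term", total=300, per_page=100))
for url in urls:
    print(url)
```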
Of course it doesn't make sense if you don't read what I wrote. The API is what is more limited, not the normal search. Try harvesting with the Google API in the detailed harvester and you will see that only 20 results come up at a time. As it turns out, 20 is the maximum (just tested it), not 50.
But the important thing to keep in mind is that you can use it with proxies that are banned from search.
This is where the confusion is coming in. The only Google API, outside of Maps, that I am aware of is the one that a developer can pay for to get unlimited queries. Mostly this is used for big data.
There are multiple engines out there that act as a third party when you search: they just grab Google results and display them to you. Like DeeperWeb, and several others in Scrapebox. The best way to think of it is that the Google API is more or less like this. It gives a limited number of Google results back. The thing is, they all use different IP bans.
So you can take an IP that is blocked by Google and still get results from these other engines. They typically won't return as many results as Google, but if you get 50 results times 1,000 keywords, that's 50,000 results you could not get from Google with the same blocked IPs. If that makes sense.
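The routing idea above can be sketched like this — a toy illustration, not real Scrapebox logic; the engine names and per-query result counts are made-up assumptions:

```python
# Hypothetical sketch: a proxy banned from normal Google search is routed
# to an API-style engine, which returns fewer results per query but keeps
# an otherwise-dead proxy productive. Numbers are illustrative only.
RESULTS_PER_QUERY = {"google_search": 1000, "google_api": 50}

def choose_engine(banned_by_google: bool) -> str:
    """Fall back to the API-style engine when the proxy is banned."""
    return "google_api" if banned_by_google else "google_search"

def expected_results(num_keywords: int, banned_by_google: bool) -> int:
    """Rough harvest estimate: keywords times results per query."""
    return num_keywords * RESULTS_PER_QUERY[choose_engine(banned_by_google)]

# 1,000 keywords through a banned proxy: 50 * 1000 = 50,000 results
# that would otherwise be zero through normal Google search.
print(expected_results(1000, banned_by_google=True))
```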