Hi guys! I was wondering how tools like Scrapebox gather information from websites like Google. Normally I would think this kind of tool has to behave like a browser. I know that in Java you can use HtmlUnit (which is not perfect for scraping or accessing websites if we think in BlackHat terms), but I don't know what C (or C++, C#) programs use for this. In conclusion, could you please give some hints about the architecture currently needed to build scraping / social network software?
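
To make it concrete, here is roughly the kind of thing I mean in Java with HtmlUnit. This is just a minimal sketch of a headless-browser fetch; the search URL and the disabled options are placeholders I picked, not something taken from Scrapebox:

```java
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlAnchor;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class ScrapeSketch {
    public static void main(String[] args) throws Exception {
        // WebClient is HtmlUnit's headless "browser"
        try (WebClient client = new WebClient()) {
            // Turning off JS/CSS speeds things up for simple scraping
            client.getOptions().setJavaScriptEnabled(false);
            client.getOptions().setCssEnabled(false);

            // Hypothetical target URL, just for illustration
            HtmlPage page = client.getPage("https://www.google.com/search?q=example");

            // Print every link found on the result page
            for (HtmlAnchor a : page.getAnchors()) {
                System.out.println(a.getHrefAttribute());
            }
        }
    }
}
```

So my question is basically: what do the C / C++ / C# tools use instead of something like this, and how is the overall architecture put together?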