They've introduced a lot of new query filters in the last six months. In some cases you may be getting blocked because of how your query is formed, regardless of how quickly, frequently, or simultaneously the queries are made from the same IP address. While an extremely large number of simultaneous connections from one IP address will cause the CAPTCHA to pop, plenty of legitimate queries also arrive simultaneously from a single IP address, so they aren't as strict about this as one might expect. (Think of public schools and libraries with older networks and small budgets.) If they can identify a specific query type as having very little value to real users, they either neuter it or make it complicated to scrape.
If you're programming scraping utilities yourself, one thing to consider is staggering your queries so that you're not scraping page after page of the same query, but rather a page of one query, a page of a second, a page of a third, then the second page of the second query, the second page of the first, and so on (see the sketch below). Spreading query types across proxies is another way to avoid the CAPTCHA. We've actually watched what appears to be the dynamic generation of new query filters in the wild over the course of only a few minutes. Google are good at identifying generalized patterns, but not until those patterns have been introduced to the system. By impatiently and rampantly hammering a specific query type, you're all but ensuring it will be blocked. Worse, once a generalized pattern of abusive querying is identified, similar but different queries get flagged faster too: the more information stored on a specific query type, the faster the filters can be generated.
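Here's a minimal sketch of that breadth-first staggering combined with per-query proxy pinning. Everything in it is hypothetical: the `fetch()` stub, the `QUERIES` and `PROXIES` values, and the delay range are placeholders, not a real scraping API.

```python
import itertools
import random
import time

# Hypothetical example queries and proxy endpoints.
QUERIES = ["site:example.com widgets", "intitle:gadgets", "inurl:doodads"]
PROXIES = ["http://proxy-a:8080", "http://proxy-b:8080", "http://proxy-c:8080"]
PAGES_PER_QUERY = 5

def fetch(query: str, page: int, proxy: str) -> None:
    # Stand-in for the actual HTTP request; a real implementation would
    # issue the search through `proxy` and parse the result page.
    print(f"[{proxy}] {query!r} page {page}")

def interleaved_schedule(queries, pages):
    # Yield (query, page) pairs breadth-first: page 1 of every query,
    # then page 2 of every query, and so on -- never page after page
    # of the same query in a row.
    for page in range(1, pages + 1):
        order = list(queries)
        random.shuffle(order)  # vary the within-page order as well
        for query in order:
            yield query, page

# Pin each query type to its own proxy so no single IP shows a deep,
# repetitive pagination pattern for any one query.
proxy_for = dict(zip(QUERIES, itertools.cycle(PROXIES)))

for query, page in interleaved_schedule(QUERIES, PAGES_PER_QUERY):
    fetch(query, page, proxy_for[query])
    time.sleep(random.uniform(2.0, 6.0))  # jittered delay between requests
```

The point of the breadth-first order is that no single (query, IP) pair ever produces the deep, rapid pagination signature that's easiest to fingerprint; the jittered delay and shuffled within-page order just make the timing less regular on top of that.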