So I've started to get hardcore into ScrapeBox - especially for finding expired Web 2.0 domains with strong metrics. I've been having GREAT success, except now I have about 10 million unique Web 2.0 domains to sort through. I know I'll first sort them by availability using the vanity checker, but what's your next step after that? I've always run them through Moz, but with the new limitations on their free API I find it hard to process large quantities of data - even with the handful of accounts I already have. Paying is an option, of course, but their fees are a little far out there, IMHO. How do/would you sort through something like this?
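For context, the workaround I've been using so far is just splitting the domain list into batches and rotating them across the accounts I have, so no single free key eats all the requests. Rough sketch of the idea (the key names, batch size, and quota are placeholders, not Moz's actual limits):

```python
import itertools

def batches(iterable, size):
    """Yield successive chunks of `size` items from an iterable."""
    it = iter(iterable)
    while chunk := list(itertools.islice(it, size)):
        yield chunk

def assign_batches(domains, api_keys, batch_size=10):
    """Round-robin batches of domains across API keys so the
    request load is spread evenly over multiple free accounts."""
    keys = itertools.cycle(api_keys)
    return [(next(keys), chunk) for chunk in batches(domains, batch_size)]

# Example: 25 domains spread over 3 accounts -> batches of 10, 10, 5
domains = [f"site{i}.example.com" for i in range(25)]
plan = assign_batches(domains, ["KEY_A", "KEY_B", "KEY_C"])
```

Then you'd loop over `plan`, fire each batch at the metrics API with the assigned key, and sleep between calls to stay under the per-account rate limit. It works, but at 10 million domains it's still painfully slow, hence the question.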