Well, let's start with what we know machine learning can do.
1. Identify unintuitive patterns in large data sets.
2. Use large data sets and patterns to provide insight and model data.
That basically sums up machine learning in a nutshell: a lot of ML is just advanced statistics, not voodoo magic, even if the math can look like voodoo at times.
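To make points 1 and 2 concrete, here's a minimal sketch of "find patterns in a data set, then use them for insight." Everything in it is an assumption for illustration: the file name `pages.csv`, the column names, and the feature list are made up, not from any real tool export.

```python
# Minimal sketch of points 1 and 2: fit a simple model to a (hypothetical)
# table of page features and rankings, then look at which features matter.
# File name and column names are placeholders for illustration only.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("pages.csv")  # hypothetical export: one row per page
features = ["referring_domains", "word_count", "page_speed_ms"]
X, y = df[features], df["serp_position"]

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, y)

# "Unintuitive patterns" often show up as feature importances you didn't expect.
for name, score in sorted(zip(features, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```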
Anyway, there's a common denominator: large data sets. You have to be able to collect a lot of data, and more importantly, it has to be "clean", meaning not corrupted or full of junk, but as accurate and consistent as you can get it.
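"Clean" in practice mostly means boring sanity checks before you model anything. A rough sketch, reusing the same hypothetical `pages.csv` and columns from above:

```python
# Rough idea of "clean": drop duplicates, drop rows with missing values,
# and throw out rows that are obviously wrong before any modeling happens.
import pandas as pd

df = pd.read_csv("pages.csv")
df = df.drop_duplicates(subset=["url", "crawl_date"])
df = df.dropna(subset=["serp_position", "referring_domains"])
df = df[(df["serp_position"] >= 1) & (df["serp_position"] <= 100)]  # sanity check
print(f"{len(df)} usable rows after cleaning")
```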
The holy grail would be split testing or experimenting on your own sites to better understand Google's ranking algorithm (good luck with that), but it would take an absurd amount of resources to pull off. What I think can realistically be done is collecting data on other people's sites and observing which changes in their link profiles or content cause ranking fluctuations.
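One way to frame that "watch other people's sites" idea: keep a daily log of rank and link counts per URL, then flag days where a ranking jump lines up with a change in the link profile. The table layout (`daily_tracking.csv` and its columns) and the thresholds are assumptions, not a real data source.

```python
# Hypothetical daily log: url, date, rank, referring_domains per row.
import pandas as pd

log = pd.read_csv("daily_tracking.csv", parse_dates=["date"])
log = log.sort_values(["url", "date"])

log["rank_change"] = log.groupby("url")["rank"].diff()
log["link_change"] = log.groupby("url")["referring_domains"].diff()

# Events worth a closer look: rank moved noticeably on a day links also moved.
events = log[(log["rank_change"].abs() >= 5) & (log["link_change"].abs() > 0)]
print(events[["url", "date", "rank_change", "link_change"]])
```

Correlation isn't causation, of course, but across enough sites these flagged events are the raw material you'd feed into anything smarter.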
The problem is collecting the data: we know Majestic/Moz/etc. don't crawl the web as comprehensively as Google does, and scraping search results is hard enough of a task on its own, requiring way too many proxies as it is.
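For context, here's a very rough sketch of the proxy-rotation problem, where each request goes out through a different proxy from a pool. The proxy addresses are placeholders and the HTML parsing is omitted; in practice that rotation and parsing is the fragile, expensive part.

```python
# Sketch only: placeholder proxy pool, no retry logic, no parsing.
import random
import requests

PROXIES = [
    "http://user:pass@proxy1.example.com:8080",  # placeholder entries
    "http://user:pass@proxy2.example.com:8080",
]

def fetch_serp(query: str) -> str:
    proxy = random.choice(PROXIES)
    resp = requests.get(
        "https://www.google.com/search",
        params={"q": query},
        proxies={"http": proxy, "https": proxy},
        headers={"User-Agent": "Mozilla/5.0"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text  # extracting rankings from the HTML is left out here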
Start by figuring out what data you can reliably collect and think about what insight you could gain from it. Then collect a crap ton of it and figure out how to apply ML.
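Once you do have a pile of (hopefully clean) data, the first ML step is usually just checking whether any signal generalizes at all: hold some data out, train on the rest, and score it. Again a sketch using the same hypothetical `pages.csv` and columns as above.

```python
# Train/test split sanity check: does anything learned actually generalize?
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("pages.csv")
features = ["referring_domains", "word_count", "page_speed_ms"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["serp_position"], test_size=0.2, random_state=0
)

model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)
print("MAE in SERP positions:", mean_absolute_error(y_test, model.predict(X_test)))
```

If the held-out error is no better than guessing, the insight isn't there yet and it's back to collecting better data, not fancier models.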