I'm aware this isn't the best way to keep a website out of the index, but I have a dev environment that sits behind password protection. The site is set to noindex, and robots.txt disallows all crawlers everywhere (the robots.txt itself is not behind the password protection). I know the crawler can't read the noindex anyway, because the pages themselves are behind the password protection.

If a deep link to this dev environment exists by mistake and a crawler follows it, will the crawler first check robots.txt to see whether the URL is disallowed before indexing it? Or will it skip that step and simply index the page?
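
For context, the setup is roughly the standard "disallow everything" robots.txt plus a noindex meta tag on every page. This is a simplified sketch, not the literal files; the real ones may differ slightly:

    # robots.txt (publicly readable, not behind the password)
    User-agent: *
    Disallow: /

    <!-- on each page of the dev site (behind the password, so crawlers never see it) -->
    <meta name="robots" content="noindex">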