If you're coming to the meet on Friday, we can have a debate on this one.
Would it be seen as natural to link to a property that isn't indexed? After all, how would you have found it?
I've thought about experimenting with building links to un-indexed properties.
What if the newer link gets indexed before the older property? How would that affect things?
I've got another theory on reverse tier linking that I want to put to the test as well.
I won't be able to come this time - it's the holidays, so I have to be around - but next time for sure!
Well... it all comes down to either letting Google know you own the domain (by submitting a sitemap), or letting it index the links "naturally" - which from the crawler's point of view means: when you crawl a page, get the list of internal and external links, then check whether we already have each one in our index (rough sketch after this list):
If we do: check whether the content has changed (and if so, is it duplicate/whatever/linking out/<random content factor we don't know>)
If we don't: check whether it qualifies (is it unique?) to be indexed, plus other analysis for relevancy/KWs/whatnot
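In code, that decision loop looks roughly like this (a minimal Python sketch, not how Google actually does it - the `index` dict just stands in for a real index, and the change/uniqueness checks are left as comments):

```python
import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup

index = {}  # url -> last-seen content hash; stands in for the real index

def crawl(url):
    """Fetch one page, extract its links, and run the index check above."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    links = {urljoin(url, a["href"]) for a in soup.find_all("a", href=True)}

    for link in links:
        if link in index:
            # Already in the index: later, re-fetch and compare hashes to
            # see whether the content changed (duplicate checks etc. go here).
            continue
        # Not in the index: decide whether it qualifies (uniqueness,
        # relevancy, KWs) before adding it; here we just queue it.
        index[link] = None
    return links
```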
There is a 3rd way that used to be popular, and was way more effective than it is these days - Pinging. By definition:
In blogging, a ping is an XML-RPC-based push mechanism by which a weblog notifies a server that its content has been updated.
In other words, a protocol that lets you tell search engines (or any platform that holds an index of content/links) that new content is available for indexing.
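For example, here's a minimal ping using Python's built-in XML-RPC client - the Pingomatic endpoint is just one well-known aggregator, and the site name/URL are placeholders:

```python
import xmlrpc.client

# weblogUpdates.ping(site_name, site_url) is the classic ping signature.
server = xmlrpc.client.ServerProxy("http://rpc.pingomatic.com/")
result = server.weblogUpdates.ping("My Blog", "https://example.com/")
print(result)  # usually something like {'flerror': False, 'message': '...'}
```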
And it's not just Google using this mechanism either. Every backlink checker out there is doing the same thing - using each of these methods to index links... the only difference is, they don't care about the content per se, just the links pointing to other pages.
I actually coded my own crawler that does the same as the above for some personal use, and when doing so, these were the only ways I could figure out to get links into my DB. I might be wrong or missing something though (social signals and real-time indexes for custom sites, to name a few).
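For what it's worth, the storage side of a link-only crawler can be this simple (a hypothetical sketch, not my actual code) - every discovered link becomes a (source, target) row, which is all a backlink index really needs:

```python
import sqlite3

db = sqlite3.connect("links.db")
db.execute("""CREATE TABLE IF NOT EXISTS links (
    source TEXT,
    target TEXT,
    UNIQUE(source, target)
)""")

def store_links(source_url, links):
    # Ignore duplicates so re-crawls don't bloat the table.
    for target in links:
        db.execute(
            "INSERT OR IGNORE INTO links (source, target) VALUES (?, ?)",
            (source_url, target),
        )
    db.commit()
```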