Discussion in 'White Hat SEO' started by krishnanayak, Jun 10, 2016.
How do search engines use links? Could someone please share their own thoughts to help me understand this?
1. Simple crawl - The search engine crawler visits a site and first looks for a robots.txt file. It then collects all of the site's pages and makes a list of them.
2. Processing links - Once all that information is collected, the link graph is processed: the search engine pulls all of those links out of its database, connects them, and assigns relative values to them.
3. Blocking pages with robots.txt - Going back to the crawl above: if the robots.txt file tells the search engine not to access one of those pages, the crawler skips it.
4. Using 404 or 410 to remove pages - Pages that return a 404 (Not Found) or 410 (Gone) status code are dropped.
5. Indexing or not indexing pages according to the robots.txt file.
That is the process a search engine follows; a small code sketch of steps 1, 3, and 4 follows below.
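To make the crawl and robots.txt steps concrete, here is a minimal sketch of a crawler in Python using only the standard library. The example.com URL is just a placeholder, and real search engine crawlers are far more elaborate; this only illustrates fetching robots.txt first, skipping disallowed pages, and dropping 404/410 pages.

```python
import urllib.error
import urllib.request
import urllib.robotparser
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collects absolute URLs from the href attributes of <a> tags."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def crawl_page(url, robots):
    """Fetch one page if robots.txt allows it and return the links it contains."""
    if not robots.can_fetch("*", url):
        return None  # blocked by robots.txt (step 3): do not fetch or index
    try:
        with urllib.request.urlopen(url) as resp:
            html = resp.read().decode("utf-8", errors="replace")
    except urllib.error.HTTPError as err:
        if err.code in (404, 410):
            return None  # page is gone (step 4): drop it from the list
        raise
    collector = LinkCollector(url)
    collector.feed(html)
    return collector.links

site = "https://example.com"  # placeholder domain for the example
robots = urllib.robotparser.RobotFileParser(site + "/robots.txt")
robots.read()  # fetch and parse the site's robots.txt first (step 1)
print(crawl_page(site + "/", robots))
```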
At first the crawler visits a website or blog.
It fetches the robots.txt file.
It also collects a list of all the pages and the links on each page.
After that, the search engine pulls all of that data out, connects the links, and assigns values to them (a rough sketch of that valuation step follows below).
While the search engine crawls the pages and builds its list of links, it leaves out any pages or links that the robots.txt file blocks.
That is how search engines use links.
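As a rough illustration of how a search engine might "connect the links and assign values to them", here is a small PageRank-style sketch in Python. The three-page graph is made up for the example, and the damping factor and iteration count are the usual textbook defaults, not anything Google has published.

```python
def pagerank(graph, damping=0.85, iterations=50):
    """graph maps each page to the list of pages it links to."""
    n = len(graph)
    rank = {page: 1.0 / n for page in graph}
    for _ in range(iterations):
        # every page keeps a small base value regardless of links
        new_rank = {page: (1.0 - damping) / n for page in graph}
        for page, outlinks in graph.items():
            if not outlinks:
                # dangling page with no outlinks: spread its rank evenly
                share = damping * rank[page] / n
                for p in new_rank:
                    new_rank[p] += share
            else:
                # each outlink passes on an equal share of the page's rank
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# hypothetical three-page site: home links to both subpages, which link back
graph = {
    "/home": ["/about", "/blog"],
    "/about": ["/home"],
    "/blog": ["/home"],
}
print(pagerank(graph))
```

In this toy graph, /home ends up with the highest value because both other pages link to it, which is the basic idea behind assigning relative values to links.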
Thanks to all for your valuable contributions.
If you have a new site and submit it to Google Webmaster Tools, it can take some time, from hours to days, for your site to be indexed, depending on the site's value.
Google crawls the site to assess the quality of your content and the effectiveness of your URLs.
Your site can get de-indexed if you have not followed Google's guidelines.
Does anyone else have their own thoughts to share on this?