What are the uses of robots.txt?
The Robots Exclusion Protocol (REP), or robots.txt, is a text file webmasters create to instruct robots (typically search engine crawlers) how to crawl pages on their website.
Robots.txt can be used for the following purposes (see the example after this list):
1. It indicates which pages of the website should or should not be crawled.
2. It keeps crawlers away from pages that you don't want search engines to show in the SERPs.
3. It tells search engines where the XML sitemap is located so that they can find new pages quickly.
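For instance, a minimal robots.txt covering these uses might look like the sketch below; the /private/ path and the sitemap URL are placeholders, not values from any real site:

User-agent: *
Disallow: /private/

Sitemap: https://www.example.com/sitemap.xml

The Sitemap line is an independent directive, so it can be placed anywhere in the file rather than inside a particular user-agent group.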
Robots.txt is useful for keeping crawlers out of parts of a website that the owner does not want search engines to access; note that blocking crawling does not by itself guarantee a page will never appear in the index.
Using this file, a website owner can block robots from crawling certain paths on the website. For example:

User-agent: *
Disallow: /

The code above means the website owner is restricting all robots ("*") from visiting the whole website ("/").
Robots.txt is a text file containing instructions for search engine robots. It lists which webpages are allowed and which are disallowed for search engine crawling.
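As a sketch of how such rules can be combined (the user-agent and paths here are hypothetical), a more specific Allow rule can re-open a single page inside a disallowed directory for crawlers that support the Allow directive, such as Googlebot:

User-agent: Googlebot
Disallow: /admin/
Allow: /admin/status.html

Crawlers that honour Allow generally apply the most specific matching rule, so /admin/status.html stays crawlable while the rest of /admin/ is blocked.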