Robots.txt is a text file webmasters create to instruct web robots (typically search engine robots) how to crawl pages on their website. The robots.txt file is part of the robots exclusion protocol (REP), a group of web standards that regulate how robots crawl the web, access and index content, and serve that content up to users.
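In practice, a compliant crawler requests this file from the root of the host before fetching other pages, so it lives at a URL like www.example.com/robots.txt. As a sketch of the general format (Googlebot and /private/ are illustrative placeholders, not directives from the example below), rules are grouped under a User-agent line naming the robot they apply to:

User-agent: Googlebot
Disallow: /private/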
Allowing all web crawlers access to all content
User-agent: *
Disallow:
Using this syntax in a robots.txt file tells web crawlers that they may crawl every page on
www.example.com, including the homepage: the asterisk applies the rule to all user agents, and the empty Disallow value means no path is off-limits.
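For contrast, an illustrative variant (not part of the example above): replacing the empty value with a single slash blocks all compliant crawlers from the entire site, because / matches every path.

User-agent: *
Disallow: /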