Robots.txt file
A robots.txt file contains directives for search engine crawlers; you can use it to prevent them from crawling specific parts of your website.
By default, the robots.txt file loaded with your site is a basic setup containing only the following directives:
User-agent: *
Disallow: /myaccount/
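To see what these default rules mean in practice, the following sketch uses Python's standard `urllib.robotparser` module to evaluate them against example URLs (the example.com URLs are placeholders, not your actual site):

```python
from urllib.robotparser import RobotFileParser

# The default rules shown above, as a string.
rules = """\
User-agent: *
Disallow: /myaccount/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Anything under /myaccount/ is blocked for all user agents ("*");
# every other path remains crawlable.
print(parser.can_fetch("*", "http://www.example.com/myaccount/orders"))  # False
print(parser.can_fetch("*", "http://www.example.com/products"))          # True
```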
When implementing robots.txt, keep the following best practices in mind:
- Be careful when making changes to your robots.txt: this file has the potential to make big parts of your website inaccessible for search engines.
- The robots.txt file should reside in the root of your website (e.g. http://www.example.com/robots.txt).
- The robots.txt file is only valid for the full domain it resides on, including the protocol (http or https).
- Different search engines interpret directives differently. By default, the first matching directive wins; Google and Bing instead apply the most specific matching directive.
- Avoid using the crawl-delay directive wherever possible: support for it varies between search engines (Google ignores it), and it can needlessly limit how much of your site gets crawled.
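As a hypothetical illustration of the specificity point above, consider a robots.txt like this (the /downloads/ paths are made up for the example):

```
User-agent: *
Disallow: /downloads/
Allow: /downloads/free/
```

For Google and Bing, a URL such as /downloads/free/report.pdf is allowed, because the Allow rule matches a longer (more specific) path than the Disallow rule. A crawler that applies the first matching directive would block it instead, since the Disallow line comes first. This is why the same file can behave differently across search engines.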
You can change this file if you want to. Currently, we must upload the new robots.txt file for you: attach the new file to a new ticket request and we will upload it to your site.