The robots.txt protocol is used to tell search engines (Google, MSN, etc.) which parts of a website should not be crawled.
For JIRA instances where non-logged-in users are able to view issues, a robots.txt file is useful for preventing unnecessary crawling of the Issue Navigator views (and unnecessary load on your JIRA server).
JIRA (version 3.7 and later) installs the following robots.txt file at the root of the JIRA webapp:
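The exact contents vary by JIRA version; a representative default, assuming the standard exclusions for search-request views (/sr/) and issue views (/si/), looks like this:

```
# robots.txt for JIRA
# Excludes SearchRequestViews in the Issue Navigator (Word, XML, RSS, etc.)
# and IssueViews (XML, Printable, Word) from crawling.
User-agent: *
Disallow: /sr/
Disallow: /si/
```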
Alternatively, if you already have a robots.txt file, simply edit it and add the Disallow: /sr/ and Disallow: /si/ entries to it.
The robots.txt file needs to be published at the root of your JIRA internet domain, e.g. jira.example.com/robots.txt (a placeholder domain; substitute your own).
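A well-behaved crawler reads this file and skips the disallowed paths. You can check how the rules will be interpreted with Python's standard urllib.robotparser module; the Disallow directives and the jira.example.com URLs below are illustrative:

```python
from urllib.robotparser import RobotFileParser

# Rules matching the directives discussed above (illustrative).
robots_txt = """\
User-agent: *
Disallow: /sr/
Disallow: /si/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Search-request views are blocked for all user agents...
print(rp.can_fetch("*", "https://jira.example.com/sr/jira.issueviews:searchrequest-rss/123/SearchRequest-123.xml"))  # False

# ...while regular issue pages remain crawlable.
print(rp.can_fetch("*", "https://jira.example.com/browse/TEST-1"))  # True
```

This is also a quick way to sanity-check edits to an existing robots.txt before publishing it.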