
The robots.txt protocol is used to tell search engines (Google, MSN, etc.) which parts of a website should not be crawled.

For JIRA instances where non-logged-in users are able to view issues, a robots.txt file is useful for preventing unnecessary crawling of the Issue Navigator views (and unnecessary load on your JIRA server).

Editing robots.txt

JIRA (version 3.7 and later) installs the following robots.txt file at the root of the JIRA web app ($JIRA-INSTALL/atlassian-jira):
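
# robots.txt for JIRA
# Exclude search request views (/sr/) and issue views (/si/) from crawling
User-agent: *
Disallow: /sr/
Disallow: /si/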

Alternatively, if you already have a robots.txt file, simply edit it and add Disallow: /sr/ and Disallow: /si/.
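
For example, a pre-existing file might end up looking like this (the /admin/ rule is just a placeholder for whatever rules you already have):

User-agent: *
# existing rules
Disallow: /admin/
# added for JIRA
Disallow: /sr/
Disallow: /si/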

Publishing robots.txt

The robots.txt file needs to be published at the root of your JIRA internet domain, e.g. jira.mycompany.com/robots.txt.
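
How you do this depends on your setup. As one sketch, if Apache httpd fronts your JIRA instance, the mod_alias Alias directive can serve the file from the domain root (the file path below is only an example):

# in httpd.conf or the relevant VirtualHost block
Alias /robots.txt "/var/www/robots.txt"

If JIRA itself is deployed at the root context, the robots.txt installed in the webapp root is already served at this URL.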

If your JIRA instance is published at jira.mycompany.com/jira, change the contents of the file to Disallow: /jira/sr/ and Disallow: /jira/si/. However, you still need to put the robots.txt file in the root directory, i.e. jira.mycompany.com/robots.txt (not jira.mycompany.com/jira/robots.txt).
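
For that setup, the complete file would be:

User-agent: *
Disallow: /jira/sr/
Disallow: /jira/si/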