Runbook: Bulk move worklogs in Jira Cloud
Platform Notice: Cloud - This article applies to Atlassian products on the cloud platform.
Goal
This page shares a script that moves worklogs between issues, freeing capacity to create new worklogs on an issue that has hit the 10,000-worklog limit.
Documentation
Get Data
Fetch the IDs of the worklogs you plan to move using the Get Issue Worklogs API.
- Use the startedAfter and startedBefore parameters to narrow the timeframe (Unix epoch timestamps in milliseconds)
def get_worklog_ids(issue_key, after=None, before=None):
    url = f"{SITENAME}/rest/api/3/issue/{issue_key}/worklog?" \
          f"maxResults=5000&" \
          f"startedAfter={get_timestamp_in_milliseconds(after)}&" \
          f"startedBefore={get_timestamp_in_milliseconds(before)}"
    response = requests.get(url,
                            auth=(USER, PWD),
                            timeout=30)
    return [worklog['id'] for worklog in response.json()['worklogs']]
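The snippet above calls a get_timestamp_in_milliseconds helper that is defined in the full script further down. For reference, a minimal self-contained version, assuming the same DATE_TIME_FORMAT used in the full script:

```python
from datetime import datetime

# Must match the format of the STARTED_AFTER / STARTED_BEFORE strings
DATE_TIME_FORMAT = '%Y-%m-%dT%H:%M:%S%z'

def get_timestamp_in_milliseconds(date):
    # The API expects Unix epoch timestamps in milliseconds;
    # an empty string effectively omits the parameter.
    if not date:
        return ''
    return 1000 * int(datetime.strptime(date, DATE_TIME_FORMAT).timestamp())
```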
Get Data by author
The Get Issue Worklogs API does not accept an author parameter, so to move worklogs authored by a specific user you must filter the results yourself.
Example script in Python 3 (by email address)
def get_worklog_ids_by_emails(issue_key, emails, after=None, before=None):
    url = f"{SITENAME}/rest/api/3/issue/{issue_key}/worklog?" \
          f"maxResults=5000&" \
          f"startedAfter={get_timestamp_in_milliseconds(after)}&" \
          f"startedBefore={get_timestamp_in_milliseconds(before)}"
    response = requests.get(url,
                            auth=(USER, PWD),
                            timeout=30)
    worklog_ids = []
    for worklog in response.json()['worklogs']:
        if 'emailAddress' in worklog['author'] and worklog['author']['emailAddress'] in emails:
            worklog_ids.append(worklog['id'])
    return worklog_ids
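Note that emailAddress can be absent from the author object when a user's email visibility is restricted by their profile settings, so filtering by accountId is more reliable. A minimal sketch of that variant (the function name is hypothetical; it operates on the worklogs array from the same API response):

```python
def filter_worklog_ids_by_account_ids(worklogs, account_ids):
    # worklogs: the 'worklogs' array from a Get Issue Worklogs response.
    # account_ids: set of Atlassian account IDs whose worklogs should move.
    return [w['id'] for w in worklogs
            if w.get('author', {}).get('accountId') in account_ids]
```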
Move Worklogs
Move the worklogs by calling the Move Issue Worklogs API with the IDs returned by the previous call.
Example script in Python 3
def bulk_move_worklogs(issue_key, worklog_ids, destination_issue):
    url = f"{SITENAME}/rest/api/3/issue/{issue_key}/worklog/move"
    body = {
        "issueIdOrKey": destination_issue,
        "ids": worklog_ids
    }
    response = requests.post(url, json=body, auth=(USER, PWD), timeout=30)
    return response.status_code, response.text
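The Move Issue Worklogs API accepts at most 5,000 IDs per call (see the FAQ below), so a larger ID list must be split into batches. A minimal sketch:

```python
def chunked(ids, size=5000):
    # Yield successive slices of at most `size` IDs.
    for i in range(0, len(ids), size):
        yield ids[i:i + size]

# Hypothetical usage, one move call per batch:
# for batch in chunked(all_worklog_ids):
#     bulk_move_worklogs(SOURCE_ISSUE_KEY, batch, DESTINATION_ISSUE_KEY)
```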
Full script
The script below moves all worklogs started after STARTED_AFTER and before STARTED_BEFORE from issue SOURCE_ISSUE_KEY to issue DESTINATION_ISSUE_KEY.
from datetime import datetime
import requests

# Replace these placeholders with your own credentials:
# USER is the account email, PWD is an API token for Jira Cloud REST authentication.
USER = 'admin@yahoo.com'
PWD = 'prod_token'
SITENAME = "the instance address"  # e.g. 'https://<your-site>.atlassian.net'

SOURCE_ISSUE_KEY = 'TEST-1'
DESTINATION_ISSUE_KEY = 'TEST-2'

# Adjust if you'd prefer to use a different format
DATE_TIME_FORMAT = '%Y-%m-%dT%H:%M:%S%z'

# Can be set to None
STARTED_AFTER = '2023-01-01T11:00:00+1100'
STARTED_BEFORE = '2024-01-01T11:00:00+1100'
def get_worklog_ids(issue_key, after=None, before=None):
    url = f"{SITENAME}/rest/api/3/issue/{issue_key}/worklog?" \
          f"maxResults=5000&" \
          f"startedAfter={get_timestamp_in_milliseconds(after)}&" \
          f"startedBefore={get_timestamp_in_milliseconds(before)}"
    response = requests.get(url,
                            auth=(USER, PWD),
                            timeout=30)
    return [worklog['id'] for worklog in response.json()['worklogs']]

def get_worklog_ids_by_emails(issue_key, emails, after=None, before=None):
    url = f"{SITENAME}/rest/api/3/issue/{issue_key}/worklog?" \
          f"maxResults=5000&" \
          f"startedAfter={get_timestamp_in_milliseconds(after)}&" \
          f"startedBefore={get_timestamp_in_milliseconds(before)}"
    response = requests.get(url,
                            auth=(USER, PWD),
                            timeout=30)
    worklog_ids = []
    for worklog in response.json()['worklogs']:
        if 'emailAddress' in worklog['author'] and worklog['author']['emailAddress'] in emails:
            worklog_ids.append(worklog['id'])
    return worklog_ids

def get_timestamp_in_milliseconds(date):
    if not date:
        return ''
    return 1000 * int(datetime.strptime(date, DATE_TIME_FORMAT).timestamp())

def bulk_move_worklogs(issue_key, worklog_ids, destination_issue):
    url = f"{SITENAME}/rest/api/3/issue/{issue_key}/worklog/move"
    body = {
        "issueIdOrKey": destination_issue,
        "ids": worklog_ids
    }
    response = requests.post(url, json=body, auth=(USER, PWD), timeout=30)
    return response.status_code, response.text
while True:
    worklog_ids = get_worklog_ids(SOURCE_ISSUE_KEY, STARTED_AFTER, STARTED_BEFORE)
    if len(worklog_ids) == 0:
        print("No more worklogs to move")
        break
    status, body = bulk_move_worklogs(SOURCE_ISSUE_KEY, worklog_ids, DESTINATION_ISSUE_KEY)
    if status == 200:
        # Partial success: some worklogs were not moved; details are in the response body
        print(f"Successfully moved *some* worklogs, response \n\"{body}\"")
    elif status == 204:
        print(f"Successfully moved {len(worklog_ids)} worklogs from issue {SOURCE_ISSUE_KEY} to issue {DESTINATION_ISSUE_KEY}")
    else:
        print(f"Received different status than HTTP 200/204 from the Bulk Move endpoint, status {status}, response body \n\"{body}\"")
        break
FAQ
Q. How many worklogs, at most, can be moved at a time?
A. The limit, as described in the API docs, is 5,000 worklogs per call.
Q. What are the various criteria that can be used to identify the worklogs to move?
A. The Bulk Move API accepts only worklog IDs. Those IDs must be fetched using one of the available APIs, e.g. the Get Issue Worklogs API.
Q. What is the maximum time range to get worklog IDs?
A. The Get Issue Worklogs API does not limit the time range, as long as the timestamps are valid.
Q. Are worklogs moved from a single issue at a time?
A. Yes, the worklogs must all belong to a single source issue, specified in the {issueIdOrKey} API path parameter.
Q. Are worklogs moved to a single issue at a time?
A. The worklogs cannot be moved to more than one issue with one API call.
Q. What if I move worklogs by mistake? Can I restore them back?
A. The worklogs can be restored by calling the same API, but with the issues reversed.
Q. Does the Bulk Move API update the time-tracking field?
A. No, time-tracking values are not updated on either issue.
Q. Does the Bulk Move API update issue change history?
A. No, issue change history is not updated on either issue.