This would allow us to schedule workflows without using the Alteryx Scheduler in the Gallery and tag them to the appropriate worker. Why do it outside the Gallery? It would let us use enterprise job-control tools like AutoSys and Control-M.
We really need to be able to make workflows part of a larger enterprise schedule that includes non-Alteryx processes. One obvious example is running a workflow as soon as a database is ready, rather than hoping it is ready at the scheduled time and leaving a large gap after the normal completion time. We use Control-M for enterprise scheduling and would like to call an API to start workflows, but some workflows must run on specific Alteryx worker servers.
On a personal note, I've painted myself into a corner because I believed the V1 workflow POST API's "workerTag" parameter functioned as described.
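For context, here's roughly the call I expected to work, going by the V1 API reference. This is a minimal sketch, not verified behavior: the body fields are my reading of the docs, the "etl-worker" tag is hypothetical, and GalleryUrl, app_id, and token are placeholders defined elsewhere.

import requests

# Sketch of the V1 jobs POST that I expected to honor workerTag.
# Assumption: endpoint shape and body fields per the public API
# reference; older Galleries may require OAuth 1.0a signing instead
# of a bearer token.
headers = {
    'Accept': 'application/json',
    'Authorization': 'Bearer ' + token
}
body = {
    'questions': [],           # analytic app answers; empty for a plain workflow
    'workerTag': 'etl-worker'  # hypothetical tag assigned to the target worker
}
response = requests.post(GalleryUrl + '/v1/workflows/' + app_id + '/jobs/',
                         headers=headers, json=body)
print(response.status_code, response.json())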
Think of an ad hoc execution of a job as a one-time schedule: when creating the schedule, the iteration type is "once". Set the start and end times a few seconds or minutes after DateTimeNow(), and make them the same value.
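If it helps, here is roughly what that payload could look like against the V3 schedules endpoint. Treat the field names as assumptions from my reading of the API reference, not a verified call; verify against your Server version. GalleryUrl, workflow_id, and token are placeholders.

import requests
from datetime import datetime, timedelta, timezone

# Hedged sketch: create a run-once schedule via the V3 API.
# Assumption: POST /v3/schedules accepts an iteration object with
# iterationType "Once" and matching start/end times.
run_at = (datetime.now(timezone.utc) + timedelta(minutes=2)).strftime('%Y-%m-%dT%H:%M:%SZ')
payload = {
    'workflowId': workflow_id,
    'iteration': {
        'iterationType': 'Once',
        'startTime': run_at,   # start and end are the same value
        'endTime': run_at
    }
}
response = requests.post(GalleryUrl + '/v3/schedules',
                         headers={'Authorization': 'Bearer ' + token},
                         json=payload)
print(response.status_code, response.json())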
Yes @TheCoffeeDude, we can start a job like that. I also needed the assigned jobId so I could monitor the job while it was queued and running, then check the disposition on completion and send that back to Control-M as return code 0 for success or 1 for failure. The API endpoint I used returns the jobId, which is much easier and more reliable than searching for a jobId to monitor as the job runs.
There's also an old command-line method of the add-to-queue endpoint that works similarly to your suggestion (rough sketch below).
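From memory, that route uses the AlteryxService "addtoqueue" command run on the controller machine. The exact argument format here is an assumption from older documentation, so verify it before relying on it; the workflow path is a placeholder.

import subprocess

# Hedged sketch of the legacy command-line route. Assumption:
# AlteryxService addtoqueue=<path>, run on the controller; the
# path below is a placeholder.
result = subprocess.run(
    ['AlteryxService', 'addtoqueue=C:\\workflows\\my_workflow.yxmd'],
    capture_output=True, text=True
)
print(result.returncode, result.stdout)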
Here's the part of my primitive Python that does the start-monitor-feedback process, in case anyone stumbles into this and wants to use it:
#######################################################################
### START THE WORKFLOW ID USING API, GET BACK RUNNING JOB ID
#######################################################################
import sys
import time

import requests

# GalleryUrl, workflow_id, and token are set earlier in the script.
headers = {
    'Accept': 'application/json',
    'Authorization': 'Bearer ' + token
}
url = GalleryUrl + '/user/v2/workflows/' + workflow_id + '/jobs'
params = {
    'values': '{}'
}
response = requests.post(url, headers=headers, params=params)
print(f'response.status_code = {response.status_code}')
post_submit_return = response.status_code
data = response.json()
job_id = data.get('id')
print(f'job_id = {job_id}')
if post_submit_return != 200:
    print('exiting with bad return because job submit return code is ' + str(post_submit_return))
    sys.exit(1)

url = GalleryUrl + '/v3/jobs/' + job_id + '?includeMessages=false'
params = {
    'values': '{}'
}

#########################################################################
### LOOP UNTIL JOB STATUS IS COMPLETED, THEN CHECK FOR SUCCESS
### EXIT WITH 0 FOR SUCCESS OR 1 FOR NOT SUCCESS
#########################################################################
job_status = 'Primed'
job_disposition = None
while True:
    response = requests.get(url, headers=headers, params=params)
    print(f'response.status_code = {response.status_code}')
    get_submit_return = response.status_code
    data = response.json()
    job_status = data.get('status')
    print(f'job_status = {job_status}')
    job_disposition = data.get('disposition')
    print(f'job_disposition = {job_disposition}')
    if get_submit_return != 200:
        print('exiting with bad return because job status return code is ' + str(get_submit_return))
        sys.exit(1)
    if job_status == 'Completed':
        break  # Exit the loop once the job reaches Completed
    time.sleep(30)  # Poll every 30 seconds; adjust the delay as needed

if job_disposition != 'Success':
    print('exiting with bad return because job disposition is ' + str(job_disposition))
    sys.exit(1)
print('Good Job')
sys.exit(0)
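A note on the design: Control-M treats the script's exit code as the job status, so sys.exit(0) on a Success disposition and sys.exit(1) on anything else is all the integration needed. The 30-second sleep is just a polling interval; shorten it for fast workflows or lengthen it to reduce API chatter on long-running jobs.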