
Alteryx Server Ideas


API function for assigning a worker node when submitting a job request via the API

Can we make an API function available to assign a specific worker node when submitting a job request via the API?

8 Comments
KylieF
Alteryx Community Team

Thank you for your idea! We appreciate the time our users take to provide us feedback and help us improve our product. Be sure to check out our other product boards and user ideas as well as our updated Submission Guidelines.

KhZ
5 - Atom

 

Assigning a job to a specific worker via the API is very important for the scenarios below:

1. A dedicated worker for special jobs or business units (BUs).

2. Important jobs that need to run on different workers, e.g. a primary job on worker A and a standby job on worker B to mitigate risk at the worker level.

Hope this feature can be made available in the next version.

TheCoffeeDude
11 - Bolide

This would allow us to schedule workflows outside the Alteryx scheduler in the Gallery and tag them to the appropriate worker. Why do it outside the Gallery? It would let us use job control tools like AutoSys and Control-M.

hroderick-thr
11 - Bolide

We really need to be able to make workflows part of a larger enterprise schedule that includes non-Alteryx processes. One obvious example is running a workflow as soon as a database is ready (versus hoping it is ready at the scheduled time and leaving a big time gap after the normal completion time). We use Control-M for enterprise scheduling and would like to call an API to start workflows, but some workflows must run on specific Alteryx worker servers.

 

On a personal note, I've painted myself into a corner because I believed the V1 workflow POST API's "workerTag" parameter functioned as described.

TheCoffeeDude
11 - Bolide

The V3 API allows you to create a schedule and specify a worker. Below is the contract for the payload.

BTW, this comes from Server 2023.2. I don't know if it's available in older versions of Alteryx Server.

{
  "workflowId": "string",
  "iteration": {
    "iterationType": "Once",
    "startTime": "2024-05-07T22:43:40.816Z",
    "endTime": "2024-05-07T22:43:40.816Z",
    "hourlyContract": {
      "hours": 0,
      "minutes": 0
    },
    "dailyContract": {
      "runOnlyWorkWeek": true
    },
    "weeklyContract": {
      "daysOfWeek": [
        "Sunday"
      ]
    },
    "monthlyContract": {
      "simpleDayOfMonth": true,
      "dayOfMonth": 0,
      "occurrence": 0,
      "dayOfWeek": "Sunday"
    },
    "customContract": {
      "daysOfMonth": [
        0
      ],
      "months": [
        0
      ]
    }
  },
  "name": "string",
  "comment": "string",
  "priority": "Default",
  "workerTag": "string",
  "credentialId": "string",
  "timeZone": "string"
}
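 

To make that concrete, here is a minimal Python sketch of posting that contract to create a one-time schedule on a tagged worker. The /v3/schedules path, base URL, and all field values are assumptions based on the contract above and may differ by Server version:

import requests
from datetime import datetime, timezone

# Hypothetical values - adjust for your environment.
gallery_url = 'https://your-server/webapi'
token = 'your-oauth2-bearer-token'
workflow_id = 'your-workflow-id'

# Timestamp format inferred from the contract, e.g. 2024-05-07T22:43:40.816Z
now = datetime.now(timezone.utc).strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3] + 'Z'

payload = {
    'workflowId': workflow_id,
    'iteration': {
        'iterationType': 'Once',
        'startTime': now,
        'endTime': now
    },
    'name': 'ad hoc run via API',
    'comment': 'submitted by scheduler integration',
    'priority': 'Default',
    'workerTag': 'workerA',   # the tag assigned to the target worker node
    'timeZone': 'UTC'
}

response = requests.post(gallery_url + '/v3/schedules',
                         headers={'Authorization': 'Bearer ' + token},
                         json=payload)
print(response.status_code, response.json())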

 

hroderick-thr
11 - Bolide

Thanks @TheCoffeeDude, we are on 2023.1.

I see that 2024.1 has what looks like a v3 rewrite of the v1 endpoint to add a workflow to the queue.

The one you suggested adds a schedule, which wouldn't work for me.

I did find a window out of the corner I was in.

I was already using the v3 API to add a workflow into a private studio, and it had a workerTag parameter that worked.

When the workflow is tagged in the private studio and run using the old v1 endpoint to add it to the queue, the intended worker tag is used to pick a worker.

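A rough sketch of that two-step workaround, in case anyone wants to follow it. The exact endpoint paths and required fields are assumptions pieced together from this thread and vary by Server version; the queue endpoint matches the one used in the script further down:

import requests

# Hypothetical values - adjust for your environment.
gallery_url = 'https://your-server/webapi'
token = 'your-oauth2-bearer-token'
workflow_id = 'your-workflow-id'
headers = {'Authorization': 'Bearer ' + token}

# Step 1: set the worker tag on the workflow via the v3 workflows
# endpoint (hroderick-thr set workerTag when adding the workflow to a
# private studio; a PUT update may require additional fields).
requests.put(gallery_url + '/v3/workflows/' + workflow_id,
             headers=headers,
             json={'workerTag': 'workerA'})

# Step 2: queue the workflow with the older add-to-queue endpoint; the
# run is picked up by a worker matching the tag set in step 1.
response = requests.post(gallery_url + '/user/v2/workflows/' + workflow_id + '/jobs',
                         headers=headers,
                         params={'values': '{}'})
print(response.json().get('id'))   # jobId, usable for monitoring
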
TheCoffeeDude
11 - Bolide

Think of an ad hoc execution of a job as a one-time schedule. When creating the job, set the iteration type to Once. The start and end times will be a few seconds or minutes after the current datetime, and they will be the same value.

 

"iterationType": "Once",
    "startTime": "2024-05-07T22:43:40.816Z",
    "endTime": "2024-05-07T22:43:40.816Z",

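In Python terms, here is a sketch of building that start/end pair (timestamp format inferred from the contract above):

from datetime import datetime, timedelta, timezone

# Start a minute from now; for a one-time ("Once") schedule the start
# and end times carry the same value.
run_at = datetime.now(timezone.utc) + timedelta(minutes=1)
stamp = run_at.strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3] + 'Z'

iteration = {
    'iterationType': 'Once',
    'startTime': stamp,
    'endTime': stamp
}
print(iteration)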
 

hroderick-thr
11 - Bolide

Yes @TheCoffeeDude, we can start a job like that. I also needed the assigned jobId so that I could monitor the job as it was queued and running, then, once complete, check the disposition and transmit it back to Control-M as a return code of 0 for success or 1 for failure. The API endpoint I used returns the jobId, which is much easier and more reliable than searching for a jobId to monitor as the job runs.

 

There's also an old command-line method of the add-to-queue endpoint that works similarly to your suggestion.

 

Here's the part of my primitive Python that does the start-monitor-feedback process, in case anyone stumbles into this and wants to use it.

 

import sys
import time

import requests

#######################################################################
###    START THE WORKFLOW ID USING API, GET BACK RUNNING JOB ID
#######################################################################
# token, GalleryUrl, and workflow_id are set earlier in the script.
headers = {
    'Accept': 'application/json',
    'Authorization': 'Bearer ' + token
    }

url = GalleryUrl + '/user/v2/workflows/' + workflow_id + '/jobs' 
params = {
    'values': '{}'
    }

response = requests.post(url, headers=headers, params=params)
print ('response.status_code=',response.status_code)
post_submit_return = response.status_code
data = response.json()
job_id = data.get('id')
print ('job_id=',job_id)
 
if post_submit_return != 200:
    print ('exiting with bad return: job submit return code is ' + str(post_submit_return))
    sys.exit(1)
else:
    url = GalleryUrl + '/v3/jobs/' + job_id + '?includeMessages=false'
    params = {
        'values': '{}'
        }

#########################################################################
###   LOOP UNTIL JOB STATUS IS COMPLETED, THEN CHECK FOR SUCCESS
###   EXIT WITH 0 FOR SUCCESS OR 1 FOR NOT SUCCESS
#########################################################################    
    job_status = 'Primed'
    get_submit_return = '200'
    while True:
        response = requests.get(url, headers=headers, params=params)
        print(f'response.status_code = {response.status_code}')
        get_submit_return = str(response.status_code)
        data = response.json()
        job_status = data.get('status')
        print(f'job_status = {job_status}')
        job_disposition = data.get('disposition')
        print(f'job_disposition = {job_disposition}')

        if get_submit_return != '200':
            print ('exiting with bad return: job status return code is ' + get_submit_return)
            sys.exit(1)
        if job_status == 'Completed':
            break  # Exit the loop when job_status is Completed
        time.sleep(30)  # Adjust the delay as needed
        
    if job_disposition != 'Success':
        print ('exiting with bad return: job disposition is ' + str(job_disposition))
        sys.exit(1)
        
    print('Good Job')
    sys.exit(0)