Hi, is it the case that,
- using the 'run command' tool to run an external script
- scheduled on Alteryx server
- if the script takes ages / hangs and we cancel the Alteryx workflow in the scheduler options
- the script itself is not cancelled / closed, and remains on the server, using up memory etc
Specifically this is happening with the Runner Macros from the @AdamR_AYX Crew macro pack.
- We have a 'scheduling workflow' that runs lots of other workflows, some of which run scripts.
- If these hang and we cancel the scheduling workflow from the 'view schedules' window
- Both the 'AlteryxRunner.exe' and 'xx.ps1' processes remain running on the server machine
Is this correct, or is there some extra setting to make sure these processes are properly killed?
Thanks, Greg.
Hey @gregh
My understanding is that because these run under a separate process ID, Windows does not track any dependency between the Alteryx flow and the command initiated via the Run Command tool. From Windows' perspective, this process is entirely separate and is not running WITHIN the context of the Alteryx flow, so it will continue even if Alteryx is killed completely. This means you do need to ensure that anything you run has a timeout built in, or that you have a regular habit of rebooting your Alteryx Server to prevent a build-up of hanging processes (e.g. a scheduled restart on Sundays).
Adam ( @AdamR_AYX), is my understanding correct? Is there any way that Alteryx has of tracking these process dependencies internally, and garbage collecting these processes as a workflow is shut down?
@SeanAdams Hello!
I have a situation where I have a workflow (say WF 1) that runs a Python script, kick-started via Run Command, which starts a local SMTP service. Later I run a parallel process (say WF 2) which crunches data and sends out a notification with file attachments via an Email tool that uses this Python SMTP service to send mail. What I want is to set up a process to kill this Python code / SMTP service (ideally kill WF 1) after a successful run of WF 2. In short, I want to kill one workflow once another completes. I'm not sure how to get this done and have been making the rounds in the community looking for a post on it.
Any thoughts would be greatly appreciated. Looking forward.
http://tweaks.com/windows/39559/kill-processes-from-command-prompt/
Sounds like a taskkill challenge.
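The taskkill idea from that link can be scripted. A hedged sketch in Python; the image name "python.exe" is just an example, and the dry_run flag exists only so the command can be inspected without running it:

```python
import subprocess

def kill_by_image(image, dry_run=False):
    """Build (and optionally run) a Windows taskkill command for an image name.

    /F forces termination; /T takes the whole process tree with it, which is
    what you want when AlteryxRunner.exe has spawned a child script.
    """
    cmd = ["taskkill", "/F", "/T", "/IM", image]
    if dry_run:
        return cmd
    return subprocess.run(cmd).returncode
```

Note that killing by image name is indiscriminate: kill_by_image("python.exe") ends every python.exe on the server, not just the one your workflow started.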
@MarqueeCrew Hey! Thanks for the push! Here is what I have done.
1. Py script to start SMTP service in Run command
2. Use the List Conditional Runner from the CReW macros to kick off the data crunch + email send process
3. A .bat file to kill python.exe and exit out of cmd (executed via Events after successful completion of the WF)
This works as expected when I do it manually. The bottleneck is that I am not able to execute 1 & 2 in parallel in the same workflow, as the SMTP service usually runs indefinitely. I tried a combination of List Runners from the CReW macros, but ran into the same infinite loop.
Have you run across such an issue?
@MarqueeCrew I also tried placing the script in the Events manager before the Run, and it still mimics the Run Command behaviour: the WF waits indefinitely, since the script is an SMTP service and is always running.
Just thought I'd post an update on this latest attempt as well.
Any thoughts on this, friends?