

'Kill job' feature based on run length condition - is this possible?

BYJE
7 - Meteor

Does such a feature exist?

 

I am not a server admin, but I have an interest in this topic and would like to know more. From previous posts on related topics, it seems there is a global server-side setting for the maximum runtime of all workflows executed in the Gallery (Solved: Gallery Workflow Runtime Limits - Alteryx Community).

 

But is there no way to restrict runtime for specific jobs? Or am I simply unaware of how to do this? Say I know my workflow is supposed to take 10 seconds to run, but sometimes (when something is going wrong) it takes 20 minutes, and I want to kill the job once it has been running for a minute - is there some way to do that? (If not, are there any plans to implement something like this?)
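
To make the idea concrete, here is a rough sketch (in Python) of the kind of external watchdog I have in mind. Every endpoint path, response field, and credential below is a placeholder I made up, not the documented Gallery API (which varies by Server version), so treat it as an illustration of the logic rather than working integration code:

```python
# Watchdog sketch: queue a job, poll its status, cancel it if it runs too long.
# All URLs, endpoints, response fields, and credentials are placeholders --
# check your own Server/Gallery API documentation for the real ones.
import time
import requests

GALLERY = "https://your-gallery/api"      # placeholder base URL
WORKFLOW_ID = "your-workflow-id"          # placeholder workflow id
MAX_RUNTIME_SECONDS = 60                  # kill anything running longer than this
AUTH = ("api_key", "api_secret")          # placeholder credentials


def start_job():
    # Hypothetical "queue a job" endpoint.
    resp = requests.post(f"{GALLERY}/workflows/{WORKFLOW_ID}/jobs", auth=AUTH, timeout=10)
    resp.raise_for_status()
    return resp.json()["id"]              # assumed response field


def job_status(job_id):
    # Hypothetical "job status" endpoint.
    resp = requests.get(f"{GALLERY}/jobs/{job_id}", auth=AUTH, timeout=10)
    resp.raise_for_status()
    return resp.json().get("status", "Unknown")


def cancel_job(job_id):
    # Hypothetical "cancel job" endpoint.
    requests.delete(f"{GALLERY}/jobs/{job_id}", auth=AUTH, timeout=10)


def run_with_timeout():
    # Start the job, then poll until it finishes or exceeds the threshold.
    job_id = start_job()
    started = time.monotonic()
    while True:
        status = job_status(job_id)
        if status in ("Completed", "Error", "Cancelled"):
            return status
        if time.monotonic() - started > MAX_RUNTIME_SECONDS:
            cancel_job(job_id)
            return "KilledByWatchdog"
        time.sleep(5)


if __name__ == "__main__":
    print(run_with_timeout())
```

Of course, a watchdog like this only works if the API actually exposes start, status, and cancel operations, and it means running and maintaining an external script - ideally this would just be a per-workflow runtime limit setting on the Server.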

 

I know that in Designer it is possible to auto-abort jobs based on conditions such as record counts or formula expressions, and I use those options, but there are lots of cases those kinds of checks can't handle, and runtime problems are a major one.

5 REPLIES
RishiK
Alteryx

@BYJE for your "specific" jobs, you can simulate an "error" condition, i.e. if the workflow has processed x number of records, you can kill the run. The Test tool might be useful for you:

https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Stopping-workflow-using-Test-Tool/td-p...


BYJE
7 - Meteor

Hi RishiK,

 

I'm familiar with the Test tool; I was alluding to it in my post. I'm not happy with this solution because, in some cases, it can't really address the issue you are trying to catch.

 

Unacceptable execution times often seem to be linked to problems with inputs and outputs, and the Test tool handles such situations poorly. It might be a database connection issue: perhaps no records enter the data stream for a long time, and the Test tool won't detect that, because you can't put it in front of the Input Data tool, and if you put it after the Input Data tool it has nothing to process while the data isn't being read yet - which is exactly the situation you want to catch.

Or it might be a write issue at the output level. I've had this sort of problem in a flow writing to Tableau Server, where the performance profile showed that the write process (or related processes, like credential handling?) sometimes took far longer than was usual for that flow and that amount of data. Again, you can put a Test tool in front of the Publish to Tableau macro and it won't stop the flow in such a situation, because the issue comes later, and you can't put a Test tool after the macro because at that point the data has left Alteryx - even though it took far too long to do so!

 

More generally, runtime is a variable we try to optimize: when it's off, it creates excess server load and indicates a problem with the workflow or related processes. It would be desirable to target runtime directly instead of using potentially poor proxies like record counts.

RishiK
Alteryx

@BYJE I understand your perspective. When you run a workflow in Designer on your laptop/PC, there can be limitations in the performance of the external sources and systems (i.e. databases) connected to that workflow.

 

Performance profiling does help when building workflows and understanding what the overheads may be, and you can use that analysis to make the workflow more efficient.

 

Can you not leverage the ability to stop the workflow running on the Server? I would have thought that if you are dealing with larger volumes, you will be running the workflow on the Server. You can also use the Server Usage Report to understand performance issues and analyse previous job runs of the workflow to see where you need to make improvements.
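
As a rough illustration of analysing previous runs, something along these lines (Python/pandas) would surface the workflows whose worst runs blow out far beyond their typical runtime - assuming you have exported a job history (e.g. from the Server Usage Report) to CSV; the column names here are assumptions and will differ from your actual export:

```python
# Sketch: profile past job runtimes from an exported job history.
# Column names ('workflow_name', 'start_time', 'end_time') are assumptions --
# adjust them to whatever your export actually contains.
import pandas as pd

jobs = pd.read_csv("job_history.csv", parse_dates=["start_time", "end_time"])
jobs["runtime_s"] = (jobs["end_time"] - jobs["start_time"]).dt.total_seconds()

# Per-workflow runtime profile: typical (median) vs worst-case runs.
profile = jobs.groupby("workflow_name")["runtime_s"].agg(
    runs="count",
    median="median",
    p95=lambda s: s.quantile(0.95),
    worst="max",
)

# Flag workflows whose worst run is far beyond their typical runtime --
# these are the candidates for closer investigation (or a kill rule).
suspects = profile[profile["worst"] > 10 * profile["median"]]
print(suspects.sort_values("worst", ascending=False))
```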

BYJE
7 - Meteor

Obviously I'm looking into the performance of my (many) server runs, and obviously I'm evaluating whether the workflows can be improved. Yes, you can take down a workflow that's performing poorly and identify why it's performing poorly, and ideally you optimize all you can during development. But I don't want a flow I've scheduled to run hourly to take 20 minutes per run for two weeks while I'm on vacation because of some new database connection issue, for example. I'd want to kill it and send an email to someone who's not on vacation, so that it can be looked into and/or fixed. And that process should be automated too.
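
For the notification half, something along these lines is what I mean - a sketch only, with the SMTP host and addresses as placeholders:

```python
# Sketch: email whoever is on call when the watchdog kills a long-running job.
# SMTP host and addresses are placeholders.
import smtplib
from email.message import EmailMessage


def notify_kill(workflow_name, runtime_s, recipient="oncall@example.com"):
    msg = EmailMessage()
    msg["Subject"] = f"Killed long-running job: {workflow_name}"
    msg["From"] = "alteryx-watchdog@example.com"
    msg["To"] = recipient
    msg.set_content(
        f"{workflow_name} was cancelled after {runtime_s:.0f}s, "
        "well beyond its normal runtime. Please check the input/output connections."
    )
    with smtplib.SMTP("smtp.example.com") as server:  # placeholder SMTP host
        server.send_message(msg)
```

Hooked onto the runtime check, that would at least stop a broken flow from quietly burning server time for two weeks.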

mbarone
16 - Nebula

Unfortunately there is no "workflow-based" or "Designer-based" setting for this.