We recently upgraded from 2023.1 to 2024.2 and are now seeing several issues.
1. Workflow logs suggest the jobs are not picking up the correct memory and allowed-threads settings from the server and are using a default or old setting instead. The only way I've been able to get the values to change is by modifying the workflow and checking the option to override the server settings for memory.
2. The Gallery "Run" button appears to execute jobs successfully 100% of the time, while calling the same job from the API yields errors. The code has been stable for a long time and ran fine on 2023.1 both ways. The affected workflows include:
-- Workflows with Container Controls
-- Workflows with batch macros
-- Workflows with mixed AMP settings.
Has anyone else seen similar behavior? A ticket has been opened with Alteryx as well.
Just wanted to close the loop on this. What I found is that analytic app questions behave differently between 2023.1 and 2024.2.
We tend to create a single piece of code (workflow/app) that performs a data load. For maintenance and production-support reasons, we often build these as analytic apps so that, if we need to refresh a time period, the user selects a start/end date and then runs the load.
We have always relied on the fact that, when the app is run from the Gallery, the questions are presented to the user, the start/end dates default to the current date, and the selected values are sent back with the run request.
When the analytic app is run from a schedule or the API, the questions are never presented anywhere, and for the last 12-plus years that simply produced a question value that IsEmpty. We trap that condition in a Formula tool and set a default date (in this case 1900-01-01). I then check the dates: 1900-01-01 tells me the run came from the scheduler, so I apply the default logic of loading the last 7 days and set the dates accordingly.
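For anyone following along, here is a rough Python sketch of that default-date logic. The names are purely illustrative; in the real workflow this check lives in a Formula tool, not Python:

```python
from datetime import date, timedelta

SENTINEL = date(1900, 1, 1)  # default written when no question value arrives

def resolve_load_range(start_date, end_date):
    """Mimics the Formula tool check: the sentinel date means the run came
    from the scheduler/API, so fall back to loading the last 7 days."""
    if start_date == SENTINEL or end_date == SENTINEL:
        end_date = date.today()
        start_date = end_date - timedelta(days=7)
    return start_date, end_date

# Example: a scheduled run that sent no question values
print(resolve_load_range(SENTINEL, SENTINEL))
```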
WORKAROUND: I had been setting the defaults for the questions in Formula tools, but it appears you now need to set them in the Workflow Configuration -- Workflow -- Constants. In my case I set them to 1900-01-01, so I can still assume the run is coming from somewhere that is not actually presenting questions to the user, and then adjust those dates to fit my default business logic. It took me a while to figure this out, so hopefully I can save someone else 10 days of effort.
Thank you so much for taking the time to write out the solution you found! This is an interesting case, and one that would only be discovered either by trial and error or by reading about the issue from someone else who has gone through it.
I'll say, it's also an interesting way to apply defaults. I like it.
It seems like scheduling an analytic app is not in fashion, but for us it gives the best of both worlds. It lets me use the fact that the workflow is running from the scheduler to apply a default date logic, or let the user pick the dates instead. It also lets me add a few controls, like a start/end date, so I can tell my support team to re-run data between a range and know with 100% certainty that the exact same logic is being applied, rather than anyone having to modify the code.
For that reason, I am surprised Alteryx did not ensure this behavior stayed consistent, because it is a risk-reducing measure for the business. All they need to do at the code level on the server is handle the case where the API sends no question values: Questions = [] SHOULD work the same as a run triggered from the Schedule Now function, which sends <WizardValues/>, because they mean exactly the same thing. I do think this is a bug, but I have not yet been able to spend the time with Alteryx to show them directly.
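For context, this is roughly what our API-triggered run looks like (a minimal sketch only: the Gallery URL, keys, and app ID are placeholders, and the v1 endpoint path and payload shape may differ depending on your Server version and authentication setup):

```python
import requests
from requests_oauthlib import OAuth1  # the Gallery v1 API uses OAuth 1.0a signing

GALLERY = "https://myserver/gallery/api/v1"   # placeholder URL
auth = OAuth1("MY_API_KEY", "MY_API_SECRET")  # placeholder credentials
app_id = "0123456789abcdef01234567"           # placeholder app ID

# Queue the analytic app with NO question values -- the same situation
# the scheduler creates when it sends <WizardValues/>.
resp = requests.post(
    f"{GALLERY}/workflows/{app_id}/jobs/",
    json={"questions": []},
    auth=auth,
)
resp.raise_for_status()
print(resp.json())  # job id / status returned by the Gallery
```

The point is simply that an empty questions list from the API and an empty <WizardValues/> from the scheduler describe the same thing, so the app should see them identically.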
Hi @dataguyW,
Thanks for working on this and coming up with a solution. You can submit a bug report directly to our technical support team by following the steps below:
1. Please log in to my.alteryx.com
2. Switch over to the Case Management tab.
3. Click on 'Submit New Case' to report the bug with all the details you have about it.
Following these steps will allow you to keep tabs on our team's work to resolve bugs like these.
Hope this helps.
Take care.