Alteryx Designer Ideas

Share your Designer product ideas - we're listening!

1 Review our submission guidelines & status definitions before getting started

2 Search the community for a solution or existing idea before posting

3 Vote by clicking the star in the top left corner of an idea you support

4 Submit a new idea to suggest a product enhancement or new feature



I've seen this question before and have run into it myself. I'd like to see a new tool that would allow a workflow developer to choose a path of logic based on criteria known only during the execution of a module.

 

IF LEFT INPUT count of records < 10,000 THEN Path 1 (e.g. use a Calgary join)
ELSE Path 2 (e.g. use a standard join)
ENDIF
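
To make the request concrete, here is a minimal sketch of that branching logic in Python (illustrative only, outside of Alteryx; the 10,000-row threshold and the two join strategies simply mirror the pseudocode above):

# Illustrative sketch only: pick a downstream path at run time based on the
# record count of the left input. Not an Alteryx tool or API.
def choose_join_path(left_record_count: int, threshold: int = 10_000) -> str:
    if left_record_count < threshold:
        return "Path 1: Calgary join"    # small input
    return "Path 2: standard join"       # large input

print(choose_join_path(500))          # -> Path 1: Calgary join
print(choose_join_path(2_000_000))    # -> Path 2: standard join

The point is that the decision can only be made once the record count is known at run time, which is exactly what the requested tool would need to expose.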

 

Thanks,

 

Mark

Problem:

Currently, scheduling via the Designer controller is independent of the Gallery. Even after a canvas is deleted, the scheduler continues to execute the cached version of the canvas for as long as the schedule exists.

Note that this issue does not occur when the canvas is scheduled directly in the Gallery; it only occurs when you schedule via Designer against the controller.

 

Steps to replicate issue:

1) Publish a canvas to the Gallery

2) Schedule the canvas to run daily via Designer --> Options --> View Schedules --> Select Controller --> Create new workflow and schedule

3) Delete the canvas from the Gallery

4) The canvas still runs on the defined schedule, even though it has been deleted

 

Observed in Alteryx 11.5.1

 

Idea Recommendation:

The golden copy of a canvas should be the version that exists in the Gallery. Once the Gallery instance of the canvas is deleted or replaced with a new version:

  • All artifacts related to the old version should be marked as "Deleted"
  • All existing schedules should be stopped from executing
  • All metadata attributes and execution history related to the old version should be retained (not wiped out) but clearly marked as archived/deleted
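
A minimal sketch of the proposed check, in Python (illustrative only; the schedule structure and Gallery lookup here are hypothetical, not an actual Alteryx API):

# Hypothetical sketch: before executing a schedule, confirm the source canvas
# still exists in the Gallery; otherwise archive the schedule instead of
# running a cached copy.
def run_due_schedules(schedules, gallery_workflow_ids, execute):
    for schedule in schedules:
        if schedule["workflow_id"] not in gallery_workflow_ids:
            # Gallery copy was deleted/replaced: stop future runs but keep
            # the schedule's metadata and run history, marked as archived.
            schedule["enabled"] = False
            schedule["status"] = "archived: Gallery canvas deleted"
            continue
        execute(schedule)

# Example: one schedule whose canvas still exists, one whose canvas was deleted.
schedules = [{"workflow_id": "wf1", "enabled": True},
             {"workflow_id": "wf2", "enabled": True}]
run_due_schedules(schedules, gallery_workflow_ids={"wf1"},
                  execute=lambda s: print("running", s["workflow_id"]))
print(schedules[1])   # wf2 is now disabled and marked archived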

 

@SeanAdams @LizaNemchynova

 

@AdamR,

 

Limit conversion warning allows a minimum of 1 message. Can we set the minimum to 0 to ignore the message completely?

 

Perhaps warning messages could be given the same handling options as ERROR messages, allowing the designer to choose Ignore, Warn, or Cancel?

 

ConvError: Imputation (441): Tool #104: No demand: 0.200000000000031 had more precision than a double. Some precision was lost.

ConvError: Summarize (456): Data: 0.360000000004675 had more precision than a double. Some precision was lost.

 

End: Designer x64: Finished running FP Model - Marquee Crew v3.yxmd in 32.3 seconds with 16 field conversion errors and 4 warnings
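
To make the suggestion concrete, here is a rough Python sketch of the three-level handling being described; the message types and actions are purely illustrative, not an existing Designer setting:

from enum import Enum

class Action(Enum):
    IGNORE = 1   # suppress entirely (the requested "minimum of 0" behaviour)
    WARN = 2     # report it but keep running
    CANCEL = 3   # stop the run, as errors can today

# Hypothetical per-message-type policy chosen by the workflow designer.
policy = {"ConvError": Action.IGNORE, "Warning": Action.WARN, "Error": Action.CANCEL}

def handle(message_type: str, text: str) -> bool:
    """Return True if the run should continue."""
    action = policy.get(message_type, Action.WARN)
    if action is Action.IGNORE:
        return True
    if action is Action.WARN:
        print(f"WARNING: {text}")
        return True
    print(f"CANCELLED: {text}")
    return False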

 

Thanks,

 

Mark

Preface: I have only used the in-DB tools with Teradata, so I am unsure whether this applies to other supported databases.

 

When building a fairly sophisticated workflow using in-DB tools, the workflow may sometimes fail because the underlying queries run up against CPU / memory limits. This is most common when doing several joins back to back, as Alteryx sends this as one big query with various nested subqueries. When working with datasets in the hundreds of millions and billions of records, this can be extremely taxing for the DB to run as one huge query. (It is possible to get around this by using an in-DB write out to a temporary table as an intermediate step in the workflow.)

 

When a routine does hit an in-DB resource limit and the DB kills the query, Alteryx immediately fails the workflow run. Any "temporary" tables Alteryx creates are in reality permanent tables that Alteryx usually just drops at the end of a successful run. If the run does not end successfully because it hit a resource limit, these "temporary" (permanent) tables are not dropped. I only noticed this after building out a workflow and running up against a few resource limits; I then started getting database out-of-space errors. Upon looking into it, I found all the previously created "temporary" tables were still there and taking up many TBs of space.

 

My proposed solution is for Alteryx's in-DB tools to drop any "temporary" tables they have created when a run ends, regardless of whether the entire module finished successfully.
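
As an illustration of the proposed behaviour, a try/finally pattern guarantees the cleanup runs whether or not the main query succeeds. This sketch uses sqlite3 as a stand-in for the in-DB back end; the table names are hypothetical:

import sqlite3

def run_in_db_workflow(conn: sqlite3.Connection) -> None:
    created = []
    cur = conn.cursor()
    try:
        cur.execute("CREATE TABLE tmp_stage AS SELECT 1 AS id")    # intermediate "temporary" table
        created.append("tmp_stage")
        cur.execute("SELECT * FROM tmp_stage JOIN no_such_table")  # fails, like hitting a resource limit
    finally:
        for table in created:                                      # cleanup on success *and* failure
            cur.execute(f"DROP TABLE IF EXISTS {table}")
        conn.commit()

conn = sqlite3.connect(":memory:")
try:
    run_in_db_workflow(conn)
except sqlite3.OperationalError as exc:
    print("query failed, but the temporary tables were still dropped:", exc)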

 

 

Thanks,

Ryan

Hi all,

I was wondering if any of you have achieved a "transaction rollback" type of feature in Alteryx.

 

Following is the use case:

If a workflow that writes data into multiple outputs (could be relational tables / files) fails halfway through writing to one of the outputs, is there an option to roll back the partially loaded data and reset everything to the original state (i.e., before the execution of the workflow)? Or does this need to be done programmatically?

 

There is a workflow-level property, "Cancel Running Workflow on Error". This stops the execution but doesn't perform a rollback.
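
For reference, this is roughly what the all-or-nothing behaviour looks like when done programmatically with a database transaction (a minimal sqlite3 sketch; the output tables are hypothetical, and file outputs would still need their own compensating cleanup):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE out_a (v INTEGER)")
conn.execute("CREATE TABLE out_b (v INTEGER)")
conn.commit()

try:
    with conn:  # one transaction: commits on success, rolls back on any exception
        conn.execute("INSERT INTO out_a VALUES (1)")
        conn.execute("INSERT INTO out_missing VALUES (2)")  # second output fails halfway through
except sqlite3.OperationalError as exc:
    print("run failed, partial load rolled back:", exc)

print(conn.execute("SELECT COUNT(*) FROM out_a").fetchone())  # (0,) - the first write was undone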

 

Thanks,

Sandeep.

The Listbox (interface macro) is currently populated statically when sourcing values through a Connected tool. Whatever I configure in the macro is retained. When I use the macro in a workflow, the Listbox values are not updated when the fields in the connected tool change. This practically limits my ability to build a truly dynamic macro/app.

The Listbox should be able to dynamically show the fields coming in through the connected tool.

Currently, if a user has multiple connections in a workflow that connect to a password-protected source and that password changes, the user will be locked out of their account by repeated login attempts as Alteryx tries to validate each connection.

 

Today I had to manually edit the XML of another user's workflow to remove references to their server so they could correct their password without locking the account for a third time.

 

While I understand that aliases are a good workaround to this problem, the issue still has potential to occur.

 

Having an option to load a workflow in a "SECURE" or "SAFE" mode, where queries are not validated until runtime or until the metadata is refreshed manually, would significantly reduce lockouts and improve the usability of the tool.
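
A rough sketch of what "do not validate until runtime" could mean, using a deferred-connection wrapper (illustrative Python only; these classes are not Alteryx internals, and the driver call is a stand-in):

def driver_connect(dsn: str, user: str, password: str):
    """Stand-in for a real database driver login (hypothetical)."""
    print(f"attempting login to {dsn} as {user}")
    return object()   # pretend connection handle

class LazyConnection:
    def __init__(self, dsn: str, user: str, password: str):
        self.dsn, self.user, self.password = dsn, user, password
        self._conn = None              # nothing is validated when the workflow loads

    def connection(self):
        if self._conn is None:         # the first login attempt happens here, at runtime
            self._conn = driver_connect(self.dsn, self.user, self.password)
        return self._conn

# Opening the workflow only builds the objects; credentials are not exercised
# until a query actually runs or the metadata is refreshed manually.
conn = LazyConnection("dsn=sales_db", "jsmith", "old_password")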

It would be helpful to have Read Uncommitted available as a global runtime setting.

Most of the workflows I design need this set, so rather than risk forgetting to click this option on one of my inputs it would be beneficial as a global setting.

For example: the user could set the option on specific inputs as needed while the check box on the global runtime setting remains unchecked.

However, if the user checked the box on the global runtime setting for Read Uncommitted, then the entire workflow would automatically use an uncommitted read on all of the inputs.

When the user unchecks the global runtime setting for Read Uncommitted, only the inputs that were individually configured with this option will continue to use read uncommitted.
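
To pin down the precedence described above, here is a small illustrative sketch of how the effective setting could be resolved (Python; the option names are hypothetical):

# Hypothetical resolution rule: the global check box applies to everything;
# when it is unchecked, only individually configured inputs keep the option.
def effective_read_uncommitted(global_setting: bool, input_setting: bool) -> bool:
    return global_setting or input_setting

inputs = {"orders": True, "customers": False}   # per-input configuration
for name, per_input in inputs.items():
    print(name, effective_read_uncommitted(global_setting=False, input_setting=per_input))
# orders True, customers False -> only the individually configured input uses it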

 

I understand that Server and Designer + Scheduler versions have the option to "cancel workflows running longer than X".

 

I'd like to see that functionality in the desktop edition as well.
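
The behaviour being asked for is essentially a run-time timeout. A rough Python illustration of the idea (the engine executable and workflow name are placeholders, not a documented desktop feature):

import subprocess

MAX_RUNTIME_SECONDS = 2 * 60 * 60   # "cancel workflows running longer than X"

try:
    subprocess.run(["AlteryxEngineCmd.exe", "MyWorkflow.yxmd"],
                   timeout=MAX_RUNTIME_SECONDS, check=True)
except subprocess.TimeoutExpired:
    print("Workflow exceeded the limit and was cancelled.")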


When using the R Tool for simple tasks (like renaming files, for example) in an iterative macro, there's a delay on every iteration as the R Tool starts up R.

 

The following are repeated on every iteration (with delays):

(screenshot: the startup log messages repeated on each iteration)

 

Can we look at an option to forward-scan an Alteryx job for R Tools, then load R into the process once, to eliminate these delays on every iteration?
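
To illustrate why this matters, here is a rough sketch (Python, illustrative only; not how the R Tool is implemented) contrasting a per-iteration launch with a single persistent R session that is started once and reused, assuming R and Rscript are on the PATH:

import subprocess

# Per-iteration launch (what the macro effectively pays for today): full R
# start-up on every iteration.
def run_per_iteration(script: str, iterations: int) -> None:
    for _ in range(iterations):
        subprocess.run(["Rscript", "-e", script], check=True)

# Persistent session (the requested behaviour): start R once, feed it the
# commands for every iteration, then quit.
def run_with_persistent_session(commands: list) -> None:
    worker = subprocess.Popen(["R", "--vanilla", "--quiet"], stdin=subprocess.PIPE, text=True)
    worker.communicate("\n".join(commands) + "\nq(save='no')\n")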

 
