The Product Idea boards have gotten an update to better integrate them within our Product team's idea cycle! However, this update does have a few unique behaviors; if you have any questions about them, check out our FAQ.

Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!
Submitting an Idea?

Be sure to review our Idea Submission Guidelines for more information!


Featured Ideas

We are trying to use the Alteryx workflow migration workflow to set up proper SDLC environments and to reduce human intervention in the process. For example, if we create a Gallery data connection XYZ in multiple Alteryx environments and try to run the migration workflow, the connection IDs are different in those environments regardless of how we name them. So even after we migrate the workflow, we still have to go into each environment manually, update the connection(s) and upload it again. That rather defeats the purpose of the migration concept itself.

The suggestion is to use the Gallery connection name/alias as the connection ID so that when workflows are migrated, the connections are mapped accordingly.
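Until something like this exists, one workaround is to rewrite the connection references in the workflow's XML before publishing to each environment. A minimal sketch, assuming a per-environment alias-to-ID mapping; the element and attribute names used here are hypothetical and would need to be checked against your actual .yxmd files:

```python
# Sketch: remap Gallery connection IDs by alias before publishing to a
# target environment. Attribute names below are assumptions, not the real
# Alteryx schema - inspect your own workflow XML to find the right ones.
import xml.etree.ElementTree as ET

# Alias -> connection ID in the target environment (hypothetical values)
TARGET_CONNECTIONS = {"XYZ": "11111111-2222-3333-4444-555555555555"}

def remap_connections(src: str, dst: str) -> None:
    tree = ET.parse(src)
    for node in tree.iter():
        alias = node.attrib.get("ConnectionAlias")
        if alias in TARGET_CONNECTIONS:
            node.set("ConnectionId", TARGET_CONNECTIONS[alias])
    tree.write(dst, encoding="utf-8", xml_declaration=True)

remap_connections("my_workflow.yxmd", "my_workflow_prod.yxmd")
```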

Hi all,

I was wondering if any of you have achieved a "transaction rollback" type of feature in Alteryx.

 

Following is the use case:

If a workflow that writes data into multiple outputs (relational tables / files) fails halfway through writing to one of the outputs, is there an option to roll back the partially loaded data and reset the process to the original state (i.e., before the execution of the workflow)? Or does this need to be done programmatically?

 

There is a workflow-level property, "Cancel Running Workflow on Error". This stops the execution but doesn't perform a rollback.
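Not something I've seen built in, but when all the outputs are tables on the same database you can approximate a rollback today by doing the writes inside a single transaction, e.g. from the Python tool or pre/post SQL. A minimal sketch with a generic DB-API connection (table names are made up):

```python
# Sketch: write to several tables inside one transaction so that a failure
# part-way through rolls everything back. Table names are made up.
import sqlite3  # any DB-API connection behaves the same way here

conn = sqlite3.connect("warehouse.db")
try:
    cur = conn.cursor()
    cur.execute("INSERT INTO orders_out  SELECT * FROM orders_staging")
    cur.execute("INSERT INTO revenue_out SELECT * FROM revenue_staging")
    conn.commit()      # both writes become visible only here
except Exception:
    conn.rollback()    # undo the partial load
    raise
finally:
    conn.close()
```

Files are harder: there you are probably left with writing to temporary paths and renaming them into place only after every output has succeeded.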

 

Thanks,

Sandeep.

Transferring records via the Python SDK RecordRef seems to be slow when sending large amounts of data to the Alteryx Engine (e.g. the discussion here). Although the exact specifics are unclear, it seems that there is a copy-and-convert process in play.

 

Apache Arrow appears to address this issue, and the roadmap and specs are impressive! It seems (again, I have no insight into the Alteryx Engine specifics) that something like this would be excellent for expanding SDK use cases as well as for other connectors such as the Apache Spark connector.

 

And it looks like it'd be fun to build into Alteryx! 🙂
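For a feel of what Arrow offers, here is a tiny pyarrow sketch: the record batch is a columnar buffer that can be handed to pandas (or across processes) without a per-record copy-and-convert loop. This only illustrates the Arrow API, not how the Alteryx Engine would actually consume it:

```python
# Sketch: build an Arrow record batch and hand it to pandas without a
# per-record conversion loop. Illustrates the Arrow API only.
import pyarrow as pa

batch = pa.RecordBatch.from_arrays(
    [pa.array([1, 2, 3]), pa.array(["a", "b", "c"])],
    names=["id", "label"],
)
df = pa.Table.from_batches([batch]).to_pandas()  # columnar, zero-copy where types allow
print(df)
```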

The new Cache tool does not function if the 'Disable All Tools that Write Output' option is selected in the workflow runtime properties.  There is no indication of why the cache is not working, which may be confusing because many users won't think of the cache as a normal output.  The interface should be changed to make this clearer, or the cache function should be configured to ignore this workflow runtime option.

 

Like many of you, I have a lot of modules and macros ... and growing. I keep them fairly organized in different folders and subfolders, but sometimes I can't find that particular module I was working on weeks ago... and I need it now. I end up doing an advanced search in Windows Explorer by date and maybe looking for certain keywords.
It would be nice to keep track of them in Alteryx - add tags, customer names, depts to modules (meta info tab)?
Maybe a special container/GUI with a timeline could read the meta info tab so you can more easily find that one module/macro.

Also, another GUI containing tool name tags so you can easily find all modules that use that one tool you're looking for.
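Until something like this exists in the product, the "find every module that uses tool X" part can be approximated with a small script that scans the workflow XML, since each tool's plugin name is recorded there. A rough sketch, assuming your modules live under one root folder:

```python
# Sketch: list every workflow/macro under a folder that uses a given tool,
# by scanning the workflow XML for the tool's plugin name.
import xml.etree.ElementTree as ET
from pathlib import Path

def modules_using_tool(root_folder, plugin_fragment):
    for pattern in ("*.yxmd", "*.yxmc", "*.yxwz"):
        for path in Path(root_folder).rglob(pattern):
            try:
                tree = ET.parse(path)
            except ET.ParseError:
                continue
            if any(plugin_fragment in node.attrib.get("Plugin", "")
                   for node in tree.iter("GuiSettings")):
                yield path

for hit in modules_using_tool(r"C:\Alteryx\Modules", "Join"):
    print(hit)
```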



Can we have an option to disable all tool containers at once, similar to the options to disable all Browse tools or all tools that write output?

Please evaluate the option to add 2 new containers:

1. parallel - execute the tasks inside it in parallel
2. serial - execute the tasks inside it in strict order, imposed at design time. In the future the order of operations could be enforced by parameters or other input conditions at runtime.

 

Please give us the capacity to mix and match these two containers.
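To illustrate the intended semantics outside of Alteryx, the two containers would behave like composing tasks with a thread pool versus calling them one after the other; a purely illustrative sketch with made-up task functions:

```python
# Illustrative sketch of the two proposed container semantics,
# using made-up Python tasks in place of Alteryx tools.
from concurrent.futures import ThreadPoolExecutor

def serial(*tasks):
    for t in tasks:                      # strict design-time order
        t()

def parallel(*tasks):
    with ThreadPoolExecutor() as pool:   # run concurrently, wait for all
        for f in [pool.submit(t) for t in tasks]:
            f.result()

# Mix and match: two extracts in parallel, then transform and load in order.
parallel(lambda: print("extract A"), lambda: print("extract B"))
serial(lambda: print("transform"), lambda: print("load"))
```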

Thank you

 

Regards,
Cristian.

  • Engine

As we begin to adopt the AMP engine - one of the key questions in every user's mind will be "How do I know I'm going to get the same outcome?"

One of the easiest ways to build confidence in AMP - and also to get some examples back to Alteryx where there are differences - is to allow users to run both engines in parallel and compare the differences, and then have an easy process that allows users to submit issues to the team.

 

For example:

  • Instead of the option being to run in AMP or run in E1 - can we have a 3rd option called "Run in comparison mode"?
  • This runs the process in both AMP and E1, checks for differences, and points them out to the user in a differences report that comes up after the run.
  • Where there's a difference that seems like a bug (not just a sorting difference but something more material) - the user then has a button that they can use to "Submit to Alteryx for further investigation". This will make it much simpler for Alteryx to identify any new issues, and much simpler for users to report these issues (meaning that more people will be likely to do it since it's easier).

 

The benefit of this is that not only will it make users more comfortable with AMP (since they will see that in most cases there are no differences); it will also give them training on the differences between AMP and E1 to make the transition easier; and finally, where there are real differences, this will make the process of getting this critical info to Alteryx much easier and more streamlined, since the "Submit to Alteryx" process can capture all the info that Alteryx needs - such as your machine, version number, etc. - automatically, without taxing the user.
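Until a comparison mode exists, one way to approximate it by hand is to run the workflow once per engine, write each run's output to its own file, and diff the two; a rough pandas sketch (file and key names are placeholders):

```python
# Sketch: compare the output of an E1 run against an AMP run of the same
# workflow. File names and the key column are placeholders.
import pandas as pd

e1  = pd.read_csv("output_e1.csv")
amp = pd.read_csv("output_amp.csv")

# Sort on a stable key first so pure row-order differences don't count.
key = ["RecordID"]
e1  = e1.sort_values(key).reset_index(drop=True)
amp = amp.sort_values(key).reset_index(drop=True)

diffs = e1.compare(amp)   # cell-level differences only
print(f"{len(diffs)} rows differ")
print(diffs.head())
```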

 

 

 

Hello all,

In addition to the Create Index idea, I think the equivalent for Vertica may also be useful.

On Vertica, data is stored in projections, the equivalent of indexes on other databases, and a table is linked to those projections. When you query a table, the engine chooses the most performant projection to answer the query.

What I suggest: instead of a Create Index box, a Create Index/Projection box.
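For reference, the statement such a box would need to emit looks roughly like the DDL below (issued here through the vertica_python client; the table, column and projection names are made up):

```python
# Sketch: the kind of DDL a "Create Index/Projection" box would run on
# Vertica. Table, column and projection names are made up.
import vertica_python

DDL = """
CREATE PROJECTION sales_by_date
AS SELECT sale_date, store_id, amount
   FROM sales
   ORDER BY sale_date
SEGMENTED BY HASH(store_id) ALL NODES;
"""

with vertica_python.connect(host="vertica-host", port=5433, user="dbadmin",
                            password="...", database="analytics") as conn:
    cur = conn.cursor()
    cur.execute(DDL)
    cur.execute("SELECT REFRESH('sales')")  # populate the new projection
```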

Best regards,

Simon

Hello all,

A whole field of performance improvement has not been explored by Alteryx: hardware acceleration, i.e. using something other than a CPU for calculation.

Here are some good readings about that:
https://blog.esciencecenter.nl/why-use-an-fpga-instead-of-a-cpu-or-gpu-b234cd4f309c

https://en.wikipedia.org/wiki/Application-specific_integrated_circuit

The kind of acceleration we can dream of!

 
 


 

Best regards,

Simon

Hello Alteryx,

 

Would it be possible to extend the "Cache and Run" functionality to tools with multiple outputs as well? Our clients use the R and Python tools very frequently, and the runtimes tend to be pretty long. For development purposes, it would be great to have caching available on these tools too.

 

Thank you very much for considering this idea.

 

Regards,

Jan Laznicka

It would be great to dynamically update the next Analytic App based on an interface input. This means I have a chained app: in step 1 I ask a yes/no question, and the answer to this question determines whether step 2 opens Analytic App A (with its own interface inputs) or Analytic App B (with other interface inputs).

Many users face this issue when they want to create a tool (e.g. for mapping purposes) that contains two data streams/flows with different interface input requirements.

Adding this feature would allow us to create different data flows with different input requirements. This helps us to differentiate between different mapping schemes and improves the user experience (currently users have to fill in a lot of unnecessary interface inputs). Thanks.

 

H.

As per this discussion, I'd like to create constants that stay with me as I create new workflows, rather than re-creating the same user constant across multiple workflows.

 

This could perhaps be done by editing an XML file in the bin folder.

  • Engine

In short:
Add an option to cache the metadata for a particular tool, so that the configuration isn't forgotten when using tools that have dynamic metadata (such as batch macros) or whose metadata the Alteryx metadata engine can't resolve (such as the Python tool).

 

 

Longer explanation:

The Problem:

One of the issues I often encounter when making dynamic workflows, or ones that require calling external services, is that Alteryx often forgets the metadata of which columns to expect. This causes the workflow to forget the configuration of downstream tools when a workflow is first opened or when the metadata engine refreshes. There is currently an option to stop the metadata engine from refreshing automatically, but that isn't a good option because you miss out on much of the value the engine brings.

 

Some of the common tools where I encounter this issue:

  • JSON Parse
  • Batch macros
  • Python tool
  • RegEx parsing to rows

 

Solution:

Instead, could we add an option to cache the metadata for a particular tool? This would save the metadata from the last time the workflow ran into the workflow's XML, so that it persists when the workflow is closed and reopened. Then, when the metadata engine gets to this tool, instead of resolving the metadata from the tool it would use the saved version in the XML. Obviously, when the workflow actually runs it would ignore this cache and any errors would still occur.
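To make the idea concrete, the cached metadata could be as simple as an extra node persisted inside the tool's entry in the workflow XML. The node and attribute names below are purely a hypothetical illustration of what "save the last run's metadata" might serialise to:

```python
# Purely hypothetical sketch of a cached-metadata node that could be stored
# inside the tool's XML and read back by the metadata engine on open.
import xml.etree.ElementTree as ET

cached = ET.Element("CachedMetaInfo")            # hypothetical node name
for name, ftype, size in [("CustomerID", "Int64", 8),
                          ("Segment", "V_WString", 64)]:
    ET.SubElement(cached, "Field", Name=name, Type=ftype, Size=str(size))

print(ET.tostring(cached, encoding="unicode"))
```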

 

This could be an option in the navigation pane of each tool. Mockup below:

[Mockup.png: proposed per-tool option to cache metadata]

 

 

 

This would make developing dynamic workflows far easier and resolve the issue of configuration being lost when the metadata changes and Alteryx forgets the options.

I learnt Alteryx for the first time nearly 5 years ago, and I guess I've been spoilt with implicit sorts after tools like joins: if I want to find the top 10 after joining two datasets, I know that the data coming out of the join will be sorted. However, with how AMP works this implicit sort cannot be relied upon. The solution at the moment is to turn on compatibility mode, however...

 

1) It's a hidden option in the runtime settings, and it can't be turned on by default as it's set only at the workflow level

2) I imagine that compatibility mode runs a bit slower, but I don't need an implicit sort after every join, cross tab, etc.

 

So could the affected tools (Engine Compatibility Mode | Alteryx Help) have a tick box within the tool to let the user decide at the tool level, instead of the canvas level, what behaviour they want - and maybe change the name from compatibility mode to "sort my data"?

 

Ability to run a workflow from the failed tool onwards.

If a workflow has 10 tools and one of them fails with an error (say tool 5), in an ETL world we don't want to run it again from the beginning. Instead, we fix the tool that had the error (tool 5) and run from that tool to finish the workflow - tool 5 to tool 10.


  • Engine

There are three places that provide log information:

1) Regular results window:

Pro: In the process sequence so the user can understand the order of the process.

Con: Doesn't have info on how long each tool takes to process.

2) Workflow -> Runtime -> Enable Performance Profiling

Pro: Processes are sorted in the processing duration descending order which helps to identify the ones that took long to run.

Con: Doesn't show the process sequence.

3) Actual Alteryx log file:

Pro: There are timestamps for each process so the duration can be calculated.

Con: Not readily accessible and not user-friendly to view from the interface. Not clickable to see more details in the workflow.

I think it would be SUPER HELPFUL to integrate all three together to show the process order along with the running time.
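On point 3, the per-step durations can at least be recovered from the engine log today with a little scripting, by pairing consecutive timestamps. A rough sketch - the log line layout is an assumption and may need adjusting for your version:

```python
# Sketch: derive per-step durations from an Alteryx engine log by pairing
# consecutive timestamped lines. The exact line layout is an assumption.
import re
from datetime import datetime

TS = re.compile(r"^(\d{2}:\d{2}:\d{2}\.\d+)\s+(.*)")  # "HH:MM:SS.fff message"

events = []
with open("AlteryxEngineLog.log", encoding="utf-8", errors="ignore") as fh:
    for line in fh:
        m = TS.match(line)
        if m:
            events.append((datetime.strptime(m.group(1), "%H:%M:%S.%f"),
                           m.group(2).strip()))

for (t0, msg), (t1, _) in zip(events, events[1:]):
    print(f"{(t1 - t0).total_seconds():8.3f}s  {msg}")
```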

It would be nice if Alteryx were able to directly output data and the workflow into an Excel PowerPivot data model, so that people without Alteryx access can pivot the data.

  • Engine

It's often challenging to estimate the run time of various workflows, and a run time of 3+ hours can often be indicative of errors in the workflow. Could we have an estimated runtime calculator? This would also help when pushing against deadlines for timing.

 

Fingers crossed and thanks! 

Hi All,

 

This is a fairly straightforward request. I'd like to be able to pass interface tool values through to the workflow events the same way I would pass them to a tool in the workflow (%Question.<tool name>%). One use case for this is that we are calling a workflow and passing in an ID; if this workflow fails, I'd like to trigger an event that will call back to the application and say that this specific workflow for this ID failed.

 

The temporary solution is to have the workflow write to a temp file and have the event reference that temp file, but this is clunky and risky if there are parallel runs occurring. 
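For anyone stuck with the same workaround in the meantime, the parallel-run risk can at least be reduced by giving each run its own temp file (e.g. named after the ID) and having the event's Run Command invoke a small callback script. A hypothetical sketch of that script - the endpoint URL is a placeholder:

```python
# Hypothetical sketch of a callback script a "Run Command" event could
# invoke: read the per-run temp file written by the workflow and report the
# failed ID back to the calling application. The URL is a placeholder.
import sys
import urllib.request

def report_failure(id_file: str) -> None:
    with open(id_file, encoding="utf-8") as fh:
        run_id = fh.read().strip()
    url = f"https://example.internal/api/runs/{run_id}/failed"
    urllib.request.urlopen(urllib.request.Request(url, method="POST"))

if __name__ == "__main__":
    report_failure(sys.argv[1])   # the event passes the temp-file path
```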

 

Best,

devKev
