
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

We are working on building out training content in a story mode and would like to have short snippets playing in a loop, embedded in the workflow, for people to see. Currently you can add a .gif to a comment background: it shows as a still image on the workflow canvas itself but plays as a gif in the configuration display. The interesting part is that while the workflow is running the .gif plays, and it pauses when the workflow has completed!

Example Gif
Gif upload in comment, playing in the configuration window
Gif in workflow when uploaded
Gif after workflow run

Currently in 2020.2 (but I assume all versions), when you have a workflow running and click on the Tool Name/ID (1 in the picture below) in the results window, it is not possible to click on the canvas or get back to the messages for the full workflow, as the results window is then locked to that tool.

 

The idea is that it should be possible to get back to all of the workflow messages after clicking on a tool name in the results window whilst the workflow is running.

 

However, a neat little tip that I found: if you click on the input, output or browse hyperlink (2 in the picture below), it will open a pop-out browse rather than show the data in the results window, meaning you can still see all of the messages.

 

joe_lipski_0-1599821194599.png

 

This leads me to think that it could and should be possible to see browse anywhere data whilst the workflow is running if this is fixed. Here's a separate idea for that. 

 

Hey @apolly 

 

You and the team have made a lot of innovative changes to the results window for data.

Could I ask for an uplift to the results window for Workflow Messages?

 

Summary: Error messages in the workflow results window cannot be fully viewed - they have to be copied into Notepad and then reformatted before you can read them.

Request: Allow the user to double-click a workflow result message to see a full, readable version of it.

Detail:

 

If you have an error message in a workflow result, the message is often longer than the window allows, and there is no cell-viewer option.

 

DoubleClickError.png

 

As a result, there is really no way to get to the important part of the error message to understand what's going on, other than to use Notepad.

 

Step 1: Copy into Notepad

DoubleClickError-part2.png

(you can see the end of line characters being misunderstood)

 

Step 2: Manually clean this up by breaking on the line breaks

DoubleClickError-part3.png

And now you can see the important part of the result message.

 

 

Could we instead add the ability to double-click a result message in the results window and bring up a modal window that formats the error message for you (similar to the modal window used for XML editing of a tool)? That would eliminate the entire wasteful effort of having to use Notepad just to read an error message.

 

Bring up a modal window, similar to this one, so that I can see the error without having to go to Notepad

 

Hello,

 

We have several environments in our organization: dev, recept, and production.

 

In order to make changes safely, we intend to create several connections (standard aliases), like:

alias_in_memory_pour_support.PNG

PRODUCTION_HIVE

DEV_HIVE

RECEPT_HIVE

 

In our workflows, we want to use aka:%Question.v_environment%HIVE

 

Sadly, this solution does not work, even with a default value set.

 aka_et_alias_in_memory.PNG

 

CI / CD is critical to any production-level process, especially when multiple authors are contributing new features to the same workflow. Currently, multi-author editing of workflows is extremely difficult, and something that would be aided greatly by using git to control different branches of ongoing work. Luckily, that's something we can already do today! However, the ability to test before merging a pull request is critical to modern CI / CD pipelines. For this, we need to be able to run a headless workflow from a CI / CD environment. Also, having the ability to pass in parameters to the workflow would allow for robust integration testing - something that isn't straightforward today without running on production environments.
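As a hedged sketch of what such a CI step could look like in Python (AlteryxEngineCmd.exe ships with Server-licensed installs; the paths and app-values file here are placeholders, not a confirmed layout):

```python
import subprocess

# Hypothetical paths -- adjust to your install and repository layout.
ENGINE_CMD = r"C:\Program Files\Alteryx\bin\AlteryxEngineCmd.exe"
WORKFLOW = r"workflows\etl_pipeline.yxwz"
APP_VALUES = r"tests\app_values.xml"   # parameters for integration testing

# Run the workflow headlessly; a non-zero exit code fails the CI job.
result = subprocess.run(
    [ENGINE_CMD, WORKFLOW, APP_VALUES],
    capture_output=True, text=True,
)
print(result.stdout)
result.check_returncode()
```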

Environment variables act as a shortcut so that different computers can be configured in different ways, but a particular path will still point to the right place.

 

For example, if you open up Explorer and go to %TEMP%\, you will open up whichever folder is set up as Temp on this machine. This is super useful, as you can use a particular logical folder without knowing its actual placement on every machine (for example, the Windows directory).

 

envVariables.png

 

 

This works partially in the Directory and Input tools: when you put in the environment variable, Designer is able to search possible subdirectories (screenshot 1), but it does not work once you run the workflow (screenshot 2).

It seems as if Designer hits the Windows API directly, but the expansion does not happen within the engine.

 

Please could you alter the engine to make full use of the environment variables on the machine in question, in the Directory tool path or Input tool path?
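For illustration, this is the expansion behaviour being asked for, sketched in Python (the configured path is made up):

```python
import os

# A configured path as it might appear in a Directory or Input tool.
configured_path = r"%TEMP%\staging\input.csv"   # hypothetical configuration

# What Designer already appears to do, and what the engine should do too:
# expand environment variables before resolving the path.
resolved = os.path.expandvars(configured_path)
print(resolved)   # e.g. C:\Users\me\AppData\Local\Temp\staging\input.csv
```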

 

Screenshot 1 - works in designer

 

Doesnt work in Engine.png


Idea:

I know cache-related ideas have already been posted (cache macros; cache tools), but I would like it if cache were simply built into every tool, similar to the way it is on the Input Tool.

 

Reasoning:

During workflow development, I'll run the workflow repeatedly, and especially if there is sizeable data or an R tool involved, it can get really time consuming.

 

Implementation ideas:

I can see where managing cache could be tricky: in a large workflow processing a lot of data, nobody would want to maintain dozens of copies of that data. But there may be ways of just monitoring changes to the workflow in order to know whether something needs to be rebuilt: e.g. suppose I cache a Predictive tool, and then make no changes to any tool preceding it in the workflow... the next time I run, the engine should be able to look at "cache flags" and/or "modified tool flags" to determine where it should start: basically, start at the furthest-along cache that has no modified tools preceding it.
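To make that rule concrete, here is a minimal sketch of the invalidation logic (all names and structures are hypothetical, not engine internals):

```python
def tools_that_can_serve_cache(edges, cached, modified):
    """Return the cached tools untouched by any upstream modification.

    edges: dict mapping each tool id to the tool ids directly downstream;
    cached / modified: sets of tool ids. All names are illustrative.
    """
    # Propagate a "dirty" flag downstream from every modified tool.
    dirty = set(modified)
    frontier = list(modified)
    while frontier:
        tool = frontier.pop()
        for child in edges.get(tool, ()):
            if child not in dirty:
                dirty.add(child)
                frontier.append(child)
    # A cache is usable only if nothing preceding it changed; the engine
    # would start the run from the furthest-downstream usable cache.
    return cached - dirty
```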

 

 

Anyway, just a thought.

 

Vanilla Alteryx chained apps can only progress linearly, which means developers cannot let users skip a few applications, jump straight to the last app in the chain, or select which specific app to trigger based on the requirement.

 

Maithreyan_0-1721668066939.png

 

 

This can be bypassed by using a Render tool with PCXML output and an HTML link to the application you are trying to divert to, which does not affect the existing workflow in any way.

 

 

 

Maithreyan_1-1721668066955.png

 

 

By using the below set of tools on any workflow/chained app, you can either branch the flow of apps or skip a few apps in the chain.

 

  1. An Excel or CSV file which has the links of the apps - the reason for keeping the hyperlinks in an external file is so that we can update the link if the server link changes - refer Image 1
  2. A Filter tool to specify which application to move to (can be changed using a radio button/drop-down to app 2/3/4/5, etc.)
  3. A Text tool (this is where the magic is) - configure it to pick the server link from the incoming data from the Filter tool as a hyperlink and generate an output preview, as shown in Image 2 and the sketch after this list
  4. A Render tool as output, writing to any PCXML file, e.g. "File.pcxml" - refer Image 3
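Here is the gist of steps 1-3 as a short, hedged Python sketch (the CSV name and column names are hypothetical; in the actual app this logic lives in the Filter and Text tools):

```python
import csv

# Step 1: external file with the app links, so URLs can be updated
# without republishing (columns "app_name" and "url" are made up).
with open("app_links.csv", newline="") as f:
    links = {row["app_name"]: row["url"] for row in csv.DictReader(f)}

# Step 2: the user's radio button / drop-down selection.
choice = "App 3"

# Step 3: build the hyperlink the Text tool renders; the Render tool
# then writes this preview out as a .pcxml file (step 4).
html = f'<a href="{links[choice]}">Continue to {choice}</a>'
print(html)
```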

 

Image 1 - Input Configuration with the flow that can be part of any existing application

 

Maithreyan_2-1721668066452.png

 

 

Image 2 - Text Tool Configuration

 

 

Maithreyan_3-1721668067700.png

 

 

 

Image 3 - Render tool Configuration

 

 

Maithreyan_4-1721668068301.png

 

 

 

 

POC in action

 

  • Let's assume our 3rd application is located at www.alteryx.com, and the user selects the 3rd app in the radio button

 

 

Maithreyan_5-1721668067060.png

 

 

 

  • Which would generate an Output Preview like below

 

Maithreyan_6-1721668067084.png

 

 

 

 

Now, if I click on App 1, it diverts me to www.Alteryx.com

 

 

 

Maithreyan_7-1721668071187.png

 

Keywords : Chained Applications, Chained Apps, Application Sequence, Skip Application Sequence, Branch Application Sequence, Application Order, Controlled Order, Trigger Next Application

 

Regards,

Maithreyan S

The current version of the Publish to Tableau macro retrieves an API key at the start of the workflow run. Often the workflow may take several hours to run before it's ready to write to Tableau, by which time the API key may have expired. (I think the default Tableau Server setting times out after 2 hours.) It's one of those soul-crushing "I should've forked the output!" moments.

Sample Log Error - 

  •         Tool #46: TableauServer.UploadChunks (238): Iteration #1: Tool #19: Tool #4: Tableau Server API Request (Upload file) Error Code 401002: Unauthorized Access -- Invalid authentication credentials were provided. 
  •          Tool #46: Tool #252: Tool #4: Tableau Server API Request (Publish file) Error Code 401002: Unauthorized Access -- Invalid authentication credentials were provided.

The idea would be to change when the macro obtains the API key: from when the workflow is initiated to just before the workflow is ready to write to Tableau, avoiding these timeouts.

 

(If you're having this issue in the meantime, you can have your Tableau Server admin increase the timeout.)
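In the spirit of the request, a hedged sketch of "authenticate just before publishing" against the Tableau Server REST API (the server, site, and credentials are placeholders; check which API version your server supports):

```python
import requests

SERVER = "https://tableau.example.com"          # placeholder server
payload = {
    "credentials": {
        "name": "publisher",                     # hypothetical account
        "password": "********",
        "site": {"contentUrl": "mysite"},
    }
}

# Sign in immediately before the publish step, not hours earlier at
# workflow start, so the token is still fresh when UploadChunks runs.
resp = requests.post(
    f"{SERVER}/api/3.4/auth/signin",
    json=payload,
    headers={"Accept": "application/json"},
)
resp.raise_for_status()
token = resp.json()["credentials"]["token"]     # valid ~2 h by default
```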

7-9-2020 2-53-42 PM.png

Today when we install custom tools that use DLLs, the DLLs must be placed in the Plugins folder inside the Alteryx installation directory. This requires a second step after the YXI installer runs. I would like to be able to package the DLL with the YXI installer and have Alteryx search for the DLL inside the tool's directory, just as happens with custom Python tools. This would allow custom tools that use DLLs to be installed with the same 1-step installation process as Python tools.

 

For example, this today does not work, but I want it to:

Screen Shot 2020-06-05 at 8.35.17 AM.png
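For reference, a custom Python tool can already approximate this today by loading the DLL relative to its own folder; a minimal sketch (the DLL name is made up):

```python
import ctypes
import os

# Resolve the DLL from the tool's own install directory instead of the
# global Plugins folder -- the behaviour the idea asks Alteryx to support
# natively for YXI-packaged tools. "mytool.dll" is hypothetical.
tool_dir = os.path.dirname(os.path.abspath(__file__))
lib = ctypes.CDLL(os.path.join(tool_dir, "mytool.dll"))
```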

So I discovered this neat little tip today: if you have a Browse tool in your workflow and click on the hyperlink (2 in the picture below) whilst the workflow is running, it will open a pop-out browse rather than show the data in the results window, meaning you can still see all of the messages. However, if you click on the Tool name/ID (1 in the image), it locks the results window to that tool. Idea for a fix here.

 

joe_lipski_0-1599822085052.png

 

 

And this leads me to think that Alteryx must be populating the temporary Browse Anywhere data in memory as it runs, so it would be great if it were possible to click on either the tool anchors or the tool names in the results window whilst the workflow is running to see the Browse Anywhere data.

 

 

The Alteryx Python tool currently throws an error if the inbound record set has zero rows (screenshot 1).

In order to manage that, you need to create a try-except block around the Alteryx.read call that instead creates an empty data frame (screenshot 2). This is inefficient because every time you change the canvas before the Python tool, you need to re-code a static field list into the try-except block (i.e. you can no longer deal with variable fields).
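The workaround looks roughly like this (a minimal sketch; the column names are hypothetical, which is precisely the maintenance burden described):

```python
from ayx import Alteryx
import pandas as pd

try:
    df = Alteryx.read("#1")
except Exception:
    # Zero-row input: fall back to an empty frame with a static schema.
    # These column names are made up -- and must be kept in sync with
    # the canvas by hand, which is the problem.
    df = pd.DataFrame(columns=["customer_id", "order_date", "amount"])
```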

 

Please could you change the Alteryx.read method to create a zero-record dataframe with the correct column names if the input is zero-length?

 

Thank you

Sean

 

Screenshot 1:

ErrorMessage.png

 

Screenshot 2:

TryExceptBlock.png

In order to perform audit-trail logging - it would be valuable to have 2 new capabilities

 

a) Environment variables which show the workflow name, filepath, version, run start date and time, etc. For any workflows we build, we need to have a solid audit trail to be SOX compliant, so having this detail available as a data field to write and manipulate is essential.

b) A logging component. What would be great is a component that you can drop on a workflow, not connected to anything, which is able to trap the start, end, runtime, version, etc. of a workflow, and commit this to any output data format (CSV or ODBC etc.). This logging tool would need to capture the full runtime, so it would need to be the last thing that runs (which means it may need to exist in parallel to the main workflow in some way). This is not currently possible with a complex workflow with outputs, because it's not possible to identify when the entire workflow ended, or the runtime (since output tools don't have an onward connector to pass flow-of-control to catch the final end-time).
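A hedged sketch of the kind of record such a component could commit (every value below is faked, since these are exactly the variables the engine does not yet expose):

```python
import csv
import datetime
import getpass

run_record = {
    "workflow_name": "etl_pipeline.yxmd",             # requested variable
    "workflow_path": r"C:\workflows\etl_pipeline.yxmd",
    "version": "12",                                   # requested variable
    "run_start": datetime.datetime.now().isoformat(),
    "run_end": None,    # only knowable if the component truly runs last
    "run_user": getpass.getuser(),
}

# Commit to CSV; an ODBC target would work the same way.
with open("audit_log.csv", "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(run_record))
    if f.tell() == 0:
        writer.writeheader()
    writer.writerow(run_record)
```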

 

Again, both of these are necessary to meet audit requirements for workflows and production-quality ETLs for BI data warehouses.

In cases where there are dynamic tools, you often get a situation where zero rows are returned, which means that the output of something like a Transpose, a JSON Parse or a RegEx tool may not have the field names expected.

 

However - any downstream filter tools (or other similar tools) fail even though there are no rows (see screenshot below).

 

The only way to get around this is to insert fake rows using a Union tool, or to use the CReW Ensure Fields macro. However, this is all waste: since there are no rows, there is no point in even evaluating the predicate in the Filter tool. Rather than making users work around this, can we please change the engine so that a tool can skip evaluation when there are zero rows? This would significantly reduce the number of workarounds needed with any dynamic tools (including any API calls).
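The requested engine behaviour, sketched (names are illustrative, not engine internals):

```python
def run_filter(records, schema, predicate):
    """Filter tool semantics with the proposed zero-row short-circuit."""
    if not records:
        # Zero rows: pass the schema through both outputs without ever
        # evaluating the predicate, so missing dynamic fields cannot fail.
        return [], [], schema
    true_out = [r for r in records if predicate(r)]
    false_out = [r for r in records if not predicate(r)]
    return true_out, false_out, schema
```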

 

thank you

Sean

Predicate.png
 

When opening an Alteryx workflow that has been saved in a newer version, a warning message is shown, but you are still able to open the workflow, provided that it doesn't contain tools that don't exist in your current Alteryx version.

 

This does not work for packaged workflows that contain macros, for instance; you have to manually edit the XML of the extracted package file.
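That manual edit can be sketched as follows (file names are placeholders; the yxmdVer attribute is what the version warning keys on, as far as I can tell):

```python
import re
import zipfile

# A .yxzp package is a zip archive; extract it first.
with zipfile.ZipFile("packaged.yxzp") as z:
    z.extractall("unpacked")

# Lower the saved-version attribute on the extracted workflow so an
# older Designer will open it with just a warning.
path = "unpacked/workflow.yxmd"            # hypothetical extracted name
with open(path, encoding="utf-8") as f:
    xml = f.read()
xml = re.sub(r'yxmdVer="[^"]*"', 'yxmdVer="2019.4"', xml, count=1)
with open(path, "w", encoding="utf-8") as f:
    f.write(xml)
```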

 

It would be great if we could have the same ability with packaged workflows that exists for normal workflows, i.e. the ability to extract and execute them with a warning.

In Workflow Constants, it would be really useful to be able to populate a new field associated with each user-created constant.

 

E.g. Type, Name, Value, "Description"

 

The description could be left blank but also populated by workflow designers to attach commentary / business logic to the constant. 

 

E.g. Type = User, Name = MyUserConstant, Value = 0.25, Description = "This describes the weighting factor used in Product Calculations"

 

 

We are trying to utilize the Alteryx workflow migration workflow to set up proper SDLC environments and reduce human intervention in the process. For example, if we create a gallery data connection XYZ in multiple Alteryx environments and run the migration workflow, the connection IDs are different in those environments regardless of how we name them. So even after migrating the workflow, we still have to manually go to each environment, update the connection(s) and upload it again. That rather defeats the purpose of the migration concept itself.

Suggestion is to use gallery connection name/alias as connection ID so that when workflows migrated, connections are mapped accordingly. 

It appears that the Workflow Dependencies window does not report dependencies from all tools. In the example image, you can see that the file input from the Amazon S3 Download tool is not listed. Some tools may have dependencies that do not easily fit the current field structure of the window, but maybe the input/download tools could be listed with an asterisk or partial reference.

Missing Amazon S3 Dependency

When I have AMP enabled, I can no longer performance-profile my workflows. I get that there may be issues with calculating this across multiple threads, but it'd be great to have performance profiling available for the new engine.

#Deployment #LargeScale #CleanCode #BareBonesCode

 

Request to add an option to strip out all unnecessary text within a workflow / Gallery app when deploying to the Alteryx Server to be scheduled or used as a Gallery app. Run at file location still causes the reading of unnecessary information across the network.

 

Often the workflows are bloated with unused metadata that at a small scale is not an issue, but at scale, all the additional bloat (kBs to MBs in size) sent from the controller to the worker does impact the server environment.

 

The impact explodes when leveraging the Alteryx API to launch the same job over and over with different parameters - all the non-useful information in the workflow is always sent to the various workers to handle each one of these jobs.

 

Even having a "compiled" version of the workflow could be a great solution. #CompiledCode
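As a rough illustration of the "strip before deploy" step, a hedged sketch that removes tool annotations from the workflow XML (element names follow the .yxmd layout as I understand it; verify against your own files before relying on this):

```python
import xml.etree.ElementTree as ET

tree = ET.parse("workflow.yxmd")

# Drop the free-text annotation block under each tool's Properties node;
# the same pass could remove other metadata the engine does not need.
for props in tree.getroot().iter("Properties"):
    for ann in props.findall("Annotation"):
        props.remove(ann)

tree.write("workflow_stripped.yxmd", encoding="utf-8", xml_declaration=True)
```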

 

Attached is a simple workflow that shows how bloated the workflows can become.

 

I appreciate your consideration.
