
Alteryx Designer Desktop Ideas


Featured Ideas

Hello all,

Like much software on the market, Alteryx uses third-party components developed by other teams/providers/entities. This is a good thing, since it provides standard features at a very low cost. However, these components are upgraded very regularly (usually several times a year) while Alteryx doesn't upgrade them... this leads to missing features, performance issues, bugs left uncorrected or, worse, security vulnerabilities.

Among these third-party components:

- cURL (behind the Download tool, used for APIs): Alteryx is on 7.15 (2006) while the current release is 8.0 (2023)
- Active Query Builder (behind the Visual Query Builder): several years behind

- R: Alteryx is on 4.1.3 (March 2022) while the latest is 4.3 (April 2023)
- Python: Alteryx is on 3.8.5 (2020) while the current is 3.10 (April 2023)
- etc.

Of course, you can't upgrade every time, but once a year seems like a minimum...
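For reference, a quick way to see what is actually bundled is to run a couple of lines in the Python tool (a minimal sketch; the R equivalent would be R.version.string in the R tool):

import ssl
import sys

# Run inside Designer's Python tool to see the embedded interpreter version.
print("Embedded Python:", sys.version)          # e.g. 3.8.5 on older builds
print("Bundled OpenSSL:", ssl.OPENSSL_VERSION)  # the OpenSSL linked to the embedded Python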

Best regards,

Simon

We have discussed on several occasions, and in different forums, the importance of providing Alteryx with execution-order control, conditional execution, design patterns and even orchestration.

I presented this idea some time ago, but someone asked me if it was posted, and since it was not, I’m putting it here so you can give some feedback on it.

 

The basic concept behind this idea is to allow us (users) to have:

  • Design Patterns
    • Repetitive patterns to be reusable.
    • Select after an Input tool
    • Drop Nulls
    • Get non-matching records from a join
  • Conditional execution
    • Tell Alteryx to execute some logic if something happens.
    • Record count
    • Errors
    • Any other condition
  • Order of execution
    • Need to tell Alteryx what to run first, what to run next, and so on…
    • Run this first
    • Execute this portion after the previous one finishes
    • Wait until “X” finishes to execute “Y”
  • Orchestration
    • Putting it all together

This approach involves functionality that is already within the product (filtering logic, loading & saving, caching, blocking, among others), exposed within a Tool Container with enhanced attributes, like this example:

OnCanvas.png

 

 

The approach is to extend the Tool Container's attributes.

This proposal uses functionality we already have in Designer.

So, basically, the Tool Container gets ‘superpowers’, with the addition of capabilities like accepting input data, saving the contents of the container (to create a design pattern, or a very commonly used sequence of tools chained together), outputting data, running the tools included in the container, etc., plus a configuration screen like:

 

ToolcontainerConfig_Comment.png

 

 
  1. Refers to the actual interface of the Tool Container.
  2. Provides the ability to disable a Container (and all tools within) once it runs.
    • Idea based on actual behavior: when we enable or disable a Tool Container from an Interface tool.
  3. Input and output data for the container's logic will allow picking up and/or saving files from a particular container, to be used in later containers, or persisting data as a partial result of the entire workflow's logic (for example, updating a dimension table).
    • Based on actual behavior: Input & Output Data, Cache, Run Command tools, and some macros like Prepare Attachment.
  4. Order of Execution: can be Absolute or Relative. In an Absolute run, we take the containers in order, executing their contents. If Relative, we have options to configure which container should run before and after, block until the previous container finishes, or wait until this container finishes before executing the next container in the list.
    • Based on actual behavior: Block Until Done, Cache, Find Replace, some Interface Designer capabilities (for chained apps, for example), macros' basic behaviors.
  5. Conditional Execution: to conditionally execute other containers, conditions must be evaluated. The idea is to evaluate conditions based on the data, interface tools, or the occurrence of errors/warnings (see the sketch after this list).
    • Based on actual behavior: Filter tool, some Interface tools, Test tool, Cache, Select.
  6. Notes: documentation text that will appear automatically inside the container, with options to place it above or below the tools, or hide it.
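To make the behavior concrete, here is a purely illustrative Python sketch of the semantics I have in mind for order of execution and conditional execution (class and attribute names are invented for the example; this is not an existing Alteryx API):

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Container:
    name: str
    run: Callable[[], int]                                  # returns e.g. a record count
    condition: Callable[[Dict], bool] = lambda ctx: True    # conditional execution
    after: List[str] = field(default_factory=list)          # order of execution


def orchestrate(containers: List[Container]) -> Dict:
    """Run containers respecting 'after' dependencies and their conditions."""
    ctx, done = {}, set()
    pending = list(containers)
    while pending:
        for c in list(pending):
            if set(c.after) <= done:                 # wait until predecessors finished
                if c.condition(ctx):                 # e.g. record count, error flag
                    ctx[c.name] = c.run()
                done.add(c.name)
                pending.remove(c)
    return ctx


# Example: only refresh the dimension table when staging produced records.
flow = [
    Container("staging", run=lambda: 1200),
    Container("dimensions", run=lambda: 42,
              condition=lambda ctx: ctx.get("staging", 0) > 0,
              after=["staging"]),
]
print(orchestrate(flow))                             # {'staging': 1200, 'dimensions': 42}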

 

That should serve as a brief introduction to the idea, but taking it a little further, it would even allow something like an orchestration layout, where users can drag and drop containers or patterns and orchestrate them into a solution, like we can do with the Visual Layout tool or the Interactive Chart tool:

Alteryx Choreographer.png

 

I'm looking forward to hearing what you think.

Best

Hello all,

As of today, we can easily copy or duplicate a table with the in-database tools. This is really useful when you want data in a development environment that comes from a production environment.

But can we, really?

 

Short answer: no, we can't, because the copy does not carry over:

- partitions
- statistics
- indexes
- any constraints, such as primary/foreign keys

And even if those ideas were implemented, it would still mean setting these parameters manually.

So my proposal is simply a "clone table" tool that would clone the table from its SHOW CREATE TABLE statement and just let you specify the destination path (database.table).
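To illustrate what such a tool would do under the hood, here is a minimal sketch assuming a MySQL-compatible database (one that supports SHOW CREATE TABLE) and a generic DB-API connection; the table names are just examples:

import re


def clone_table(conn, source: str, destination: str) -> None:
    """Clone a table's structure by replaying its SHOW CREATE TABLE statement."""
    cur = conn.cursor()
    cur.execute(f"SHOW CREATE TABLE {source}")
    _, ddl = cur.fetchone()                           # (table_name, create_statement)
    # Point the CREATE TABLE statement at the destination path (database.table).
    ddl = re.sub(r"CREATE TABLE\s+\S+", f"CREATE TABLE {destination}", ddl, count=1)
    cur.execute(ddl)
    conn.commit()


# e.g. clone_table(conn, "prod_db.sales", "dev_db.sales")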

simonaubert_bd_0-1680504054872.png

 


Best regards,

Simon

 

Hello all,

Here's the issue: I have a workflow in my OneDrive folder
image.png


In that workflow, I use a macro that writes a file with a relative path (..\6_Big_Data\EN\.csv):


image.png

Strangely, it doesn't work, and the error message refers to a folder that doesn't exist (and also not the one I set):
image.png

ErrorLink: Output Data (1): https://community.alteryx.com/t5/*/*/ta-p/724327?utm_source=designer&utm_medium=resultsgrid|Cannot access the folder C:\Users\saubert\OneDrive - Business & Decision\Documents\B&D_Market\6_Big_Data\EN\.
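My guess is that the relative path is being resolved against a different base folder than the one I expect. A tiny illustration of how the same relative path lands in different places depending on the base directory (the folders below are made up):

import os

rel = r"..\6_Big_Data\EN"
# Two hypothetical base folders (illustrative only):
bases = [
    r"C:\Users\me\OneDrive\Documents\B&D_Market\Macros",
    r"C:\Users\me\OneDrive\Documents\B&D_Market",
]
for base in bases:
    print(os.path.normpath(os.path.join(base, rel)))
# -> C:\Users\me\OneDrive\Documents\B&D_Market\6_Big_Data\EN
# -> C:\Users\me\OneDrive\Documents\6_Big_Data\EN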


I really would like that to work :)

Best regards,

Simon

 

Hello all,

 

I'm currently learning the Python language and there is this cool feature: you can multiply a string.

image.png
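For reference, the behavior shown in the screenshot boils down to this:

# In Python, multiplying a string by an integer simply repeats it.
print("-" * 20)        # --------------------
print("Hello " * 3)    # Hello Hello Hello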

 

 

 

Pretty cool, no? I would like the same syntax to work for Tableau.

 

Best regards,

 

Simon

Hello all,

So, right now, we have two very separate products: Alteryx Designer and Alteryx Designer Cloud. But what if you want to go from Alteryx Designer on your desktop to the cloud?

Well, you will have to rewrite every single workflow, because you can't publish or import your current workflows into Alteryx Designer Cloud. You cannot export a Designer Cloud workflow to Alteryx Designer on the desktop either.

This is a huge limitation on cloud adoption and sales, and it's the ONLY product I know of that's not compatible between on-premise and cloud.

Please, Alteryx, this is a no-brainer if you want to convince your customers!

Best regards,

Simon

I can't even count how often I have looked at an Excel, CSV or even YXDB file where I KNEW that it was generated by Alteryx, but I couldn't remember the workflow. Currently, I simply have to go through all the workflows I ever built and see if I can find it.

Theoretically, I could run a text search across all workflows and see if I can find the output names - the problem here: most of my output filenames are generated dynamically at run time.

 

It would be amazing if Alteryx could simply write the Workflow name (maybe even path) into the metadata of a file.

2b32a469-58fc-4219-b567-795509ca50dd.png

(Screenshot from Google, as my OS is set to German)

 

How about we write "This file was created by Create Controlling Reports.yxmd on 2023-02-06 with Alteryx Designer 2021.4.298434" in the field 'Comments'?

 

This would make it extremely easy to find which workflow generated the file. It could be an option to write the file path instead of just the filename, but the path could include the local machine name, which might contain GDPR-relevant information.
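Until something like this is built in, a rough workaround from the Python tool, assuming the output is an .xlsx file and openpyxl is available (the core 'description' property is what Explorer typically shows as Comments; the path and workflow name below are placeholders):

from datetime import date
from openpyxl import load_workbook

path = r"C:\temp\controlling_report.xlsx"         # placeholder output path
wb = load_workbook(path)
wb.properties.description = (                     # surfaces as 'Comments' in Explorer
    f"This file was created by Create Controlling Reports.yxmd "
    f"on {date.today()} with Alteryx Designer 2021.4.298434"
)
wb.save(path)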

 

@Community: Is there any additional information that you'd like to see in the metadata?

 

 

Best

Alex

Hello,

Just like MonetDB or Vertica, ClickHouse is a column-store database, claiming to be the fastest in the world. It's available in the cloud (like Snowflake) and on Linux and macOS (there for free, as it's open source). It's also very well ranked among analytics databases (https://db-engines.com/en/system/ClickHouse) and it would be a good differentiator from competitors.

https://clickhouse.com/

 

image.png

It has become more popular than Greenplum, which is supported (black: Snowflake, red: Greenplum, orange: ClickHouse):

simonaubert_bd_0-1677359498791.png

 


Best regards,

Simon

I've seen this question before and have run into it myself.  I'd like to see a new tool that would allow a developer (of a workflow) to choose a path of logic based upon criteria known only during the execution of a module.

 

IF [LEFT INPUT record count] < 10,000 THEN Path 1 (e.g. use a Calgary join)

ELSE Path 2 (e.g. use a standard join)

ENDIF
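In plain Python terms, the decision such a tool would make is nothing more than this (the 10,000 threshold and the two join strategies are just the example above):

def pick_join_path(left_record_count: int) -> str:
    """Choose the join strategy at run time, based on the left input's record count."""
    if left_record_count < 10_000:
        return "Path 1: Calgary join"    # small input: indexed lookup is cheaper
    return "Path 2: standard join"       # large input: stream through a normal join


print(pick_join_path(8_500))      # Path 1: Calgary join
print(pick_join_path(250_000))    # Path 2: standard join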

 

Thanks,

 

Mark

Hello all,

MonetDB is a very light, fast, open-source database, available here:
https://www.monetdb.org/

 
image.png

 

I really enjoy it; it works pretty well with Tableau and it's a good introduction to column-store concepts and analytics with SQL.

 

It has also gained a lot of popularity over the last few years:
https://db-engines.com/en/ranking_trend/system/MonetDB



Sadly, Alteryx does not support it yet.

Best regards

Hello,

SQLite is:

- free
- open source
- easy to use
- widely used

https://en.wikipedia.org/wiki/SQLite


It also works well with the Alteryx Input and Output tools. 🙂

However, I think an in-DB SQLite connector would be great, especially for learning purposes: you don't have to install anything, so it's really easy to get started.
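As a taste of how lightweight it is, everything needed already ships with Python's standard library; a minimal sketch:

import sqlite3

conn = sqlite3.connect("learning.db")   # creates the database file if it doesn't exist
conn.execute("CREATE TABLE IF NOT EXISTS sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", [("EMEA", 120.5), ("APAC", 98.0)])
conn.commit()
for row in conn.execute("SELECT region, SUM(amount) FROM sales GROUP BY region"):
    print(row)
conn.close()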

Best regards,

Simon

Hello all,

Change Data Capture (https://en.wikipedia.org/wiki/Change_data_capture) is an effective way to deal with changes in a database, enabling streaming or delta processing. Several techniques, more or less intrusive, can be applied (and combined), e.g. reading the logs.

Qlik: https://www.qlik.com/us/streaming-data/data-streaming-cdc

Talend: https://www.talend.com/resources/change-data-capture/

 

Best regards,

Simon

Hi UX interested parties,

 

capture.png

 

Here are some ideas for you to consider:

 

1.  These lines are BORING and UNINFORMATIVE.  I'd like to understand (pic = 1,000 words) more when looking at a workflow.  

  • A line could communicate:
    • Qty of Records
    • Size of Data
    • Is the data SORTED
      • What sort order
    • Quality of Data 

If you look at lines A, B, C in the picture above, nothing is communicated. Weight of line, color of line, type of line, beginning/ending line markers: these are all potential ways that we could get a picture of the data without having to resort to Browse Everywhere to see the information. If we hover over the data connection, even more information could appear (e.g. # of records, size of file) without having to toggle the configuration parameters.

 

2.  Wouldn't it be nice not to have to RUN a workflow to see the last SAVED (run) metadata of a workflow?  I'd like to open a "saved" workflow and know what to expect when I run it.  Heck, how long the beast takes to run is something we've never seen unless we run it.

 

3.  I'd like to set the metadata to display SORT keys and order: Sort1 Asc, Sort2 Desc, and so on.  This sort information is very helpful for the engine, and I'll likely post about that thought.  As a preview, when a JOIN tool has sorted data and one of the anchors is at EOF, why do we need to keep reading from the other anchor?  There won't be another matched record on the (J) anchor.  In my example above, we don't ask for the L/R outputs, so why worry about the rest of the join?  (A sketch of that idea follows below.)
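Here is a hedged sketch of that engine-side point, assuming both inputs arrive sorted on the join key with unique keys and only the inner (J) output is requested (obviously a simplification of what the real Join tool does):

def inner_merge_join(left, right):
    """left/right: iterables of (key, payload), sorted by key, keys unique per side."""
    left, right = iter(left), iter(right)
    try:
        l, r = next(left), next(right)
        while True:                            # StopIteration = EOF on one side: done
            if l[0] == r[0]:
                yield (l[0], l[1], r[1])
                l, r = next(left), next(right)
            elif l[0] < r[0]:
                l = next(left)
            else:
                r = next(right)
    except StopIteration:
        return                                 # no further J matches are possible


pairs = inner_merge_join([(1, "a"), (2, "b"), (4, "c")],
                         [(2, "x"), (4, "y"), (7, "z"), (9, "w")])
print(list(pairs))   # [(2, 'b', 'x'), (4, 'c', 'y')] - keys 7 and 9 are never read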

 

4.  Have you ever seen a map (online) that didn't display watermark information?  I think that the canvas experience should allow for a default logo (like mine above, but transparent) in the lower right corner of the canvas that is visible at all times.  Having the workflow name at the top in a tab is nice, but having it display as a watermark is handy.

 

5.  Once the workflow has RUN, all anchors are the same color.  How about providing GREY/White or something else on EMPTY anchors instead of the same color?  This might help newbies find issues in JOIN configuration too.

 

6.  If a tool has ERRORs, you put a RED exclamation mark on it.  I despise warnings, but how about a puke-colored question mark for them?  With conversion errors, the lines could be marked to indicate the relative quantity of conversion errors (system messages have a limit).

 

Just a few top of mind things to consider ....

 

Cheers,

 

Mark

Please add official support for newer versions of Microsoft SQL Server and the related drivers.

 

According to the data sources article for Microsoft SQL Server (https://help.alteryx.com/current/DataSources/SQLServer.htm), and validation via a support ticket, only the following products have been tested and validated with Alteryx Designer/Server:

 

Microsoft SQL Server

Validated On: 2008, 2012, 2014, and 2016.

  • No R2 releases are mentioned (2008 R2, for instance)
  • SQL Server 2017, which was released in October of 2017, is notably missing from the list.
  • SQL Server 2019, while fairly new (~6 months old), is also missing

This is one of the most popular data sources, and the lack of support for newer versions (especially a 2+ year old product like SQL Server 2017) is hard to fathom.

 

ODBC Driver for SQL Server/SQL Server Native Client

Validated on ODBC Driver: 11, 13, 13.1

Validated on SQL Server Native Client: 10,11

Hello,

More and more databases have complex data types such as array, struct, or map. It would be nice if we could use them in Alteryx as input, internally, and as output, with calculations available on them.

https://cwiki.apache.org/confluence/display/hive/languagemanual+types#LanguageManualTypes-ComplexTyp...
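Until these are first-class types, a typical workaround is to carry them as JSON strings and explode them in the Python tool; a minimal sketch (column and field names are invented):

import json

rows = [
    {"order_id": 1, "items": '[{"sku": "A", "qty": 2}, {"sku": "B", "qty": 1}]'},
    {"order_id": 2, "items": '[{"sku": "C", "qty": 5}]'},
]

# Explode the array-of-struct column into one flat record per element.
flat = [{"order_id": r["order_id"], **item}
        for r in rows
        for item in json.loads(r["items"])]
print(flat)
# [{'order_id': 1, 'sku': 'A', 'qty': 2}, {'order_id': 1, 'sku': 'B', 'qty': 1}, ...]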

 

Best regards,

Simon

We see canvases every day where dozens of fields are brought into a canvas or a macro but never used - and this just creates slowness for no benefit.

 

Given that one of the selling features of Alteryx is the speed of processing  - could we look at three improvements to the Alteryx engine & designer:

  • easiest: Keep track of every field brought in / created - and if they are not used in an output, then throw a warning at the end of the execution process (a rough sketch of this check follows after this list)
    • For example - you bring in fields a,b,c - you create field d and e during the flow in formula tools
    • Field d is never used as an input to any filters or formulae - and it doesn't appear on any output - so it's just waste
    • Field a and b are part of the output, so they are fine
    • Field c is never used at all - so that's just waste.
    • Field e is used to filter the records before output - so this one is fine.
    • So we've immediately found 2 fields that we can eliminate and make this canvas faster
  • Medium: Ignore the unused fields in the execution engine
  • Hardest: Tell the users that a field is unused in Alteryx Designer by doing a lineage analysis of the tools, just as software environments like Visual Studio do. This may require a change to the engine & to Designer, because we would need each tool to capture the full detail of the fields it knows about in its configuration in order to do this trace.
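For the 'easiest' option, a very rough approximation can already be scripted against the workflow XML. The sketch below just counts how often each incoming field name appears in the XML and flags ones that never reappear after their definition; it ignores wildcard selects and dynamically named fields, so treat it as illustrative only:

import re
import xml.etree.ElementTree as ET


def probably_unused_fields(yxmd_path: str, incoming_fields: list) -> list:
    """Very rough check: a field mentioned at most once in the XML is probably unused."""
    xml_text = ET.tostring(ET.parse(yxmd_path).getroot(), encoding="unicode")
    unused = []
    for field in incoming_fields:
        mentions = len(re.findall(re.escape(field), xml_text))
        if mentions <= 1:          # only its own definition, never referenced again
            unused.append(field)
    return unused


# e.g. probably_unused_fields("my_flow.yxmd", ["a", "b", "c", "d", "e"])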

 

 

 


I love the Workflow Meta Info, especially the ability to put the author, the search tags, the version, the description, etc...

workflow meta info.png

But why can't we use these as Engine Constants? It doesn't seem very hard to implement, and it would make development much easier.

 

engine_constant.png

Hey YXDB Bosses,

 

Let's move forward with our YXDB.  Maybe give AMP a real edge over e1.  Here are some things that could make YXDB super-powered:

 

  • Metadata
    • Workflow information about what created that poorly named output file.
    • When was the file originally created/updated.
    • SORT order.  If there is a sort order for the data, what is it?
  • Other stuff
    • INDEX.  Currently you get spatial indexes (or you can opt out).  If I want to search through a 100+MM record file, it is a sequential read of all of the data.  With an index I could grab data without the expense of a Calgary file creation.  Don't go crazy on the indexing option, just allow users to set 1+ fields as an index (takes more time to write).
    • I'm sure that you've been asked before, but CREATE DIRECTORY if the output directory doesn't exist.
  • Old School - Crazy Idea
    • Generation Data Groups (GDG)
      This will likely make @NicoleJ 's eyes roll 🙄 but back in the day, we could write our data to the SAME filename and the old data became one version older.  You could read the (0) version of the file, or read from 1, 2, 3 or more previous versions of the data using the same name (e.g. .\Customers|||3).  Writing the output file would do all of the backing up of the data (easy to use), and when the defined retention limit is exceeded, the oldest data drops off.  (A rough sketch of imitating this follows below.)
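Outside the engine, the GDG behavior can be imitated today with a small rotation step run before the Output tool writes (for example from the Python tool or a pre-run event); the file name and retention limit below are just examples:

import os


def rotate_generations(path: str, keep: int = 3) -> None:
    """Shift Customers.yxdb -> Customers.1.yxdb -> Customers.2.yxdb, dropping the oldest."""
    base, ext = os.path.splitext(path)
    oldest = f"{base}.{keep}{ext}"
    if os.path.exists(oldest):
        os.remove(oldest)                          # beyond the retention limit: drop off
    for gen in range(keep - 1, 0, -1):             # shift 2 -> 3, then 1 -> 2
        src = f"{base}.{gen}{ext}"
        if os.path.exists(src):
            os.replace(src, f"{base}.{gen + 1}{ext}")
    if os.path.exists(path):
        os.replace(path, f"{base}.1{ext}")         # the current (0) file becomes generation 1


rotate_generations(r".\Customers.yxdb", keep=3)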

Just a little more craziness from me

 

cheers

This idea arose from a conversation with a colleague, @Carlithian, where we were trying to work out a way to remove tools from the canvas which might be redundant - for example, a Select tool added to the canvas which hasn't been configured to change a data type or rename a field. So we were looking for ways of identifying, in the workflow XML, tools which didn't have any configuration applied to them.

 

This highlighted to me an issue with something like the data cleanse tool, which is a standard macro.

 

The XML view of the Data Cleanse configuration looks like this:

<Configuration>
  <Value name="Check Box (135)">False</Value>
  <Value name="Check Box (136)">False</Value>
  <Value name="List Box (11)">""</Value>
  <Value name="Check Box (84)">False</Value>
  <Value name="Check Box (117)">False</Value>
  <Value name="Check Box (15)">False</Value>
  <Value name="Check Box (109)">False</Value>
  <Value name="Check Box (122)">False</Value>
  <Value name="Check Box (53)">False</Value>
  <Value name="Check Box (58)">False</Value>
  <Value name="Check Box (70)">False</Value>
  <Value name="Check Box (77)">False</Value>
  <Value name="Drop Down (81)">upper</Value>
</Configuration>

 

As it is a macro, the default labelling of the drop-downs is what is specified in the XML. If you were to do something useful with it, wouldn't it be much nicer if the interface tools were named properly - such as:

cgoodman3_0-1674658512759.png

So when you look at the XML of the workflow, it's clearer to the user what is actually specified.

cgoodman3_1-1674658649253.png
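Coming back to the original goal of spotting unconfigured tools, this is the kind of scan we were attempting against the workflow XML. The plugin name and the default '*Unknown' entry are what I see in current .yxmd files, but treat both as assumptions:

import xml.etree.ElementTree as ET


def idle_select_tools(yxmd_path: str) -> list:
    """List ToolIDs of Select tools whose configuration changes nothing."""
    idle = []
    for node in ET.parse(yxmd_path).iter("Node"):
        gui = node.find("GuiSettings")
        if gui is None or "AlteryxSelect" not in gui.get("Plugin", ""):
            continue
        fields = node.findall(".//SelectFields/SelectField")
        # Only the default '*Unknown' entry means nothing was renamed, retyped or dropped.
        if all(f.get("field") == "*Unknown" for f in fields):
            idle.append(node.get("ToolID"))
    return idle


# e.g. print(idle_select_tools("my_flow.yxmd"))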

 

 

 

In short:
Add an option to cache the metadata for a particular tool, so that Designer doesn't forget it when using tools that have dynamic metadata (such as batch macros) or whose metadata the engine can't resolve (such as the Python tool).

 

 

Longer explanation:

The Problem:

One of the issues I often encounter when building dynamic workflows, or ones that call external services, is that Alteryx forgets the metadata of which columns to expect. This causes the workflow to lose the configuration of downstream tools when a workflow is first opened or when the metadata engine refreshes. There is currently an option to stop the metadata engine from refreshing automatically, but this isn't a good option because you miss out on much of the value it brings.

 

Some of the common tools where I encounter this issue:

  • JSON Parse
  • Batch macros
  • Python tool
  • RegEx parsing to rows

 

Solution:

Instead, could we add an option to cache the metadata for a particular tool? This would save the metadata from the last time the workflow ran into the workflow's XML, so that it persists when the workflow is closed and reopened. Then, when the metadata engine gets to this tool, instead of resolving the metadata from the tool it would use the saved version from the XML. Obviously, when the workflow actually runs, it would ignore this cache and any errors would still occur.
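Purely to illustrate the persistence part: the saved fields could live right inside the tool's node in the workflow XML, something like the sketch below (the CachedMetaData element is invented; no such tag exists today):

import xml.etree.ElementTree as ET


def cache_metadata(yxmd_path: str, tool_id: str, fields: list) -> None:
    """Write last-run field metadata into the given tool's node (hypothetical element)."""
    tree = ET.parse(yxmd_path)
    for node in tree.iter("Node"):
        if node.get("ToolID") == tool_id:
            cache = ET.SubElement(node, "CachedMetaData")          # invented element name
            for name, ftype, size in fields:
                ET.SubElement(cache, "Field", name=name, type=ftype, size=str(size))
    tree.write(yxmd_path)


# e.g. cache_metadata("my_flow.yxmd", "12",
#                     [("customer_id", "Int64", 8), ("payload", "V_WString", 254)])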

 

This could be an option in the navigation pane of each tool. Mockup below:

Mockup.png

 

 

 

This would make developing dynamic workflows far easier and resolve the issue of configuration being lost when the metadata changes and Alteryx forgets the options.
