The Product Idea boards have gotten an update to better integrate them within our Product team's idea cycle! However, this update does have a few unique behaviors; if you have any questions about them, check out our FAQ.

Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!
Submitting an Idea?

Be sure to review our Idea Submission Guidelines for more information!

Submission Guidelines

Featured Ideas

In the tools that embed the "Rename" option (Select, Append Fields, Join, Join Multiple), copying the new name currently copies all of the field configuration information: tick/untick, original field name, type, size, new name, and description.

 

Renaming the field "Rename_Field"

 

 

Capture2.PNG

 

In my opinion, it should copy only the new name. This would be useful, especially because when you change the name of a field, it isn't automatically changed in subsequent tools, so copying it to replace it in those tools is faster than retyping it every time.

When loading multiple sheets from an Excel file with either the Input Data tool or the Dynamic Input tool, I usually want a field to identify which sheet the data came from. Currently, I have to import the Full Path and then remove everything except the SheetName.

 

It would be great if there was an option to output the SheetName as a field.

 

DavidP_0-1657614328613.png
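Until a SheetName output option exists, the usual workaround is to parse the sheet name out of the Full Path field. Below is a minimal sketch (e.g., for the Python tool), assuming the full path follows the workbook.xlsx|||SheetName pattern that Excel inputs typically produce; the function name is just for illustration.

def sheet_name(full_path: str) -> str:
    """Return the sheet name from a full path such as C:/data/report.xlsx|||Sheet1."""
    # Everything after the last '|||' separator is the sheet name.
    parts = full_path.rsplit("|||", 1)
    return parts[1] if len(parts) == 2 else ""

print(sheet_name(r"C:\data\report.xlsx|||Sheet1"))  # -> Sheet1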

 

My organization uses the SharePoint Files Input and SharePoint Files Output tools (v2.1.0) and connects with the Client ID, Client Secret, and Tenant ID. After a workflow is saved and scheduled on the Server, users receive the error "Failed to connect to SharePoint AADSTS700082: The refresh token has expired due to inactivity" every 90 days. My organization is not able to extend the 90-day limit or create non-expiring tokens.

 

It would be great if the SharePoint connectors could automatically refresh the token when it expires so users don't have to open the workflow and do it manually.
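For context on why this seems feasible: the connector authenticates with a client ID/secret, and the standard Azure AD client-credentials grant can mint a new access token on demand without any long-lived refresh token. A rough sketch of such a request (the tenant/client values and the Graph scope are placeholders; the connector's actual internals aren't public):

import requests

TENANT_ID = "<tenant-id>"          # placeholder values
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"

def fetch_access_token() -> str:
    """Request a fresh token via the client-credentials grant (no refresh token involved)."""
    url = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"
    response = requests.post(url, data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "https://graph.microsoft.com/.default",  # scope depends on the API being called
    })
    response.raise_for_status()
    return response.json()["access_token"]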

There is no existing tool that outputs all the records that are duplicates (those sharing the selected values with at least one other record) and also outputs the records that are not duplicates (those not sharing the selected values with any other record).

 

The Unique tool is not sufficient. It only provides the first record of each duplicate group along with any non-duplicates, and then provides a secondary output that contains only the additional records of each duplicate group. Sometimes you only care about the duplicates and want to quickly see what differs within each duplicate group.

 

For example, if there are 4 records with the City of Austin and I am looking for duplicates on City I want to see all 4 records with Austin in the output so I can quickly compare additional fields to see what might differ, or if they are all indeed truly duplicates.
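To make the requested behaviour concrete, here is a minimal pandas sketch of the two outputs such a tool could produce (the column and data are made up for illustration):

import pandas as pd

df = pd.DataFrame({
    "City": ["Austin", "Austin", "Austin", "Austin", "Dallas"],
    "Store": [1, 2, 3, 4, 5],
})

# keep=False marks every row whose City appears more than once.
is_duplicate = df.duplicated(subset=["City"], keep=False)

duplicates = df[is_duplicate]        # all 4 Austin rows, ready for side-by-side comparison
non_duplicates = df[~is_duplicate]   # the single Dallas row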

Connecting to Smartsheets using Alteryx Desktop (and, by extension, Alteryx Server) is extremely cumbersome. If a user wants to read data from Smartsheet, they are required to get an API token (preferred) or use a username/password.

 

Then do one of the following to read data from Smartsheets:

1. a. Install an ODBC driver
    b. Configure a DSN connection for ODBC
    c. Use the Input Data tool with a generic ODBC connection
or
2. Use Python

 

To write data to Smartsheets, a user can use Python or upload the data using an API call - both very hard for end users, especially if they're not Python developers.

 

Regardless, all of these are problematic. On the server I manage, I have over 15 ODBC connections to Smartsheets, and it's getting very hard to upgrade the server hardware because of them. Creating a native connector for input/output of data to Smartsheets would eliminate the headache of managing ODBC connections and make it simple for Alteryx Desktop users to read and write data.
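For reference, this is roughly what the current Python workaround looks like: a single authenticated GET against the Smartsheet REST API (the token and sheet ID are placeholders). A native connector would essentially wrap calls like this.

import requests

API_TOKEN = "<smartsheet-api-token>"   # placeholder
SHEET_ID = "<sheet-id>"                # placeholder

def read_sheet(sheet_id: str) -> dict:
    """Fetch a sheet (columns and rows) from the Smartsheet REST API."""
    response = requests.get(
        f"https://api.smartsheet.com/2.0/sheets/{sheet_id}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
    )
    response.raise_for_status()
    return response.json()

sheet = read_sheet(SHEET_ID)
rows = [[cell.get("value") for cell in row["cells"]] for row in sheet["rows"]]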

 

 

Hello all,

I really appreciate the ability to test tools in the Laboratory category:

simonaubert_bd_0-1672223871200.png



However, these nice tools should leave the Laboratory and become supported after a few months/quarters. Right now, without Alteryx support, we cannot use them in production workflows.

simonaubert_bd_1-1672223991592.png



Examples:
Visual Layout Tool, introduced in 2017
https://community.alteryx.com/t5/Alteryx-Designer-Knowledge-Base/Tool-Mastery-Visual-Layout/ta-p/835...

Make Columns Tool, also introduced in 2017
https://community.alteryx.com/t5/Alteryx-Designer-Knowledge-Base/Make-Columns-Tool/ta-p/67108

Transpose In-DB, introduced in 10.6 (2016)
https://help.alteryx.com/10.6/LockInTranspose.htm

etc, etc...

Best regards,

Simon

Hello all,

Like many software products on the market, Alteryx uses third-party components developed by other teams/providers/entities. This is a good thing, since it means standard features for a very low price. However, these components are upgraded very regularly (usually several times a year) while Alteryx doesn't upgrade them... this leads to missing features, performance issues, bugs left uncorrected or, worse, security vulnerabilities.

Among these third-party components:

- cURL (behind the Download tool for APIs): Alteryx ships 7.15 (2006) while the current release is 8.0 (2023)
- Active Query Builder (behind the Visual Query Builder): several years behind
- R: Alteryx ships 4.1.3 (March 2022) while the next release is 4.3 (April 2023)
- Python: Alteryx ships 3.8.5 (2020) while the current release is 3.10 (April 2023)
- etc., etc.

Of course, you can't upgrade every time, but once a year seems like a minimum...

Best regards,

Simon

Hello all,

As of today, we can easily copy or duplicate a table with the in-database tools. This is really useful when you want data in a development environment coming from the production environment.

But can we, really?

 

Short answer: no, we can't do it in these cases:
- partitions
- statistics
- indexes
- any constraints such as primary/foreign keys

But even if these ideas were implemented, it would still mean setting these parameters manually.

So my proposition is simply a "clone table" tool that would clone the table from its SHOW CREATE TABLE statement and just let you specify the destination path (base.table).

simonaubert_bd_0-1680504054872.png
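To illustrate what such a tool could do under the hood, here is a rough sketch against a MySQL-style engine that supports SHOW CREATE TABLE (the DB-API cursor and table names are placeholders; other engines would need their own DDL-extraction query):

import re

def clone_table(cursor, source: str, destination: str) -> None:
    """Clone a table's full DDL (keys, indexes, ...) under a new base.table path, then copy the data."""
    cursor.execute(f"SHOW CREATE TABLE {source}")
    ddl = cursor.fetchone()[1]          # second column holds the CREATE TABLE statement
    # Swap the source name for the destination in the CREATE statement.
    ddl = re.sub(r"CREATE TABLE\s+\S+", f"CREATE TABLE {destination}", ddl, count=1)
    cursor.execute(ddl)
    cursor.execute(f"INSERT INTO {destination} SELECT * FROM {source}")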

 


Best regards,

Simon

 

When working on a complex, branching workflow, I sometimes go down paths that do not give the correct result, but I want to keep them because they are helpful for determining the correct path. I do not want these branches to run, as they slow down the workflow or may produce errors/warnings that muddy debugging. These paths can be several tools long and are not easily put in a container and disabled. Similar to the Cache and Run Workflow feature that prevents upstream tools from refreshing, I am suggesting a Disable All Downstream Tools feature. In the workflow below, the tools in the container could all be disabled by a right click on the first Sample tool in the container.

 

T_Willins_0-1663214830996.png

 

Hello all,

 

As of today, you must set which database (e.g., Snowflake, Vertica...) you connect to in your in-DB connection alias. This is fine, but I think we should also be able to define the version or release of the database. There are a lot of new database features that Alteryx could use to improve user experience, performance, and security (e.g., in Hive 3.0, there is a catalog that could be used in the Visual Query Builder instead of slowly querying each schema).

I'm thinking of a menu with the following choices:
- default (legacy), stating which version Alteryx assumes by default for that database
- autodetect (with a query launched every time you run the workflow, when possible); if the detected version is newer than the last supported one, show a warning and run with the last supported version's settings (sketched below)
- manually set a release (to avoid launching the version query every time); the choices would be every version Alteryx supports
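A rough sketch of how the autodetect option could behave (the supported-version list is purely illustrative, not Alteryx's real matrix, and the probe query differs per engine; SELECT version() works on many of them):

import re

SUPPORTED_VERSIONS = {"Hive": [(2, 3), (3, 0), (3, 1)]}   # illustrative only

def parse_major_minor(version_string: str) -> tuple:
    """Extract (major, minor) from output such as '3.1.3000.7.1.7.0-551'."""
    match = re.search(r"(\d+)\.(\d+)", version_string)
    return (int(match.group(1)), int(match.group(2))) if match else (0, 0)

def autodetect_version(cursor, db_family: str) -> tuple:
    """Probe the database version and clamp it to the newest supported release."""
    cursor.execute("SELECT version()")
    detected = parse_major_minor(cursor.fetchone()[0])
    newest_supported = max(SUPPORTED_VERSIONS[db_family])
    if detected > newest_supported:
        print(f"Warning: detected {detected} is newer than {newest_supported}; "
              "running with the latest supported settings.")
        return newest_supported
    return detected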

Best regards,

Simon

Hello,

As of today, we can't choose exactly which file format is used for Hadoop when writing/creating a table. There are several file formats (Parquet, ORC, Avro, etc.), each with its own specificities.

Therefore I suggest the ability to choose this file format:

- by default on the connection (in-DB connection or in-memory alias)

- on the writing tool itself, to override the default (see the sketch below)
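As an illustration of the second option, this is roughly the HiveQL a writing tool would have to emit once a format is chosen (table and column names are placeholders):

def create_table_ddl(table: str, columns: dict, file_format: str = "PARQUET") -> str:
    """Build a HiveQL CREATE TABLE statement for the chosen storage format."""
    allowed = {"PARQUET", "ORC", "AVRO", "TEXTFILE", "RCFILE"}
    if file_format.upper() not in allowed:
        raise ValueError(f"Unsupported file format: {file_format}")
    cols = ", ".join(f"{name} {dtype}" for name, dtype in columns.items())
    return f"CREATE TABLE {table} ({cols}) STORED AS {file_format.upper()}"

print(create_table_ddl("sales.daily", {"id": "INT", "amount": "DOUBLE"}, "ORC"))
# CREATE TABLE sales.daily (id INT, amount DOUBLE) STORED AS ORC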

Best regards,

Simon

Hello all,

Big picture: on Hadoop, a table can be

- internal (it's managed by Hive or Impala and acts like any other database table)
- external (it's managed by Hadoop, can be shared among the different Hadoop engines such as Hive and Impala, and its underlying data isn't deleted by default when you drop the table)

 

For info about dropping external tables:

https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/using-hiveql/content/hive_drop_external_table_...

Alteryx only creates internal tables, while it would be nice to have the ability to create external tables that we can query with several tools (Hive, Impala, etc.).

It should be implemented:

- by default on the connection
- per tool, if we want to override the default (see the sketch below)
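For reference, the per-tool override would boil down to emitting EXTERNAL plus a LOCATION clause in the generated DDL; a small sketch with placeholder names:

def create_external_table_ddl(table: str, columns: dict, location: str) -> str:
    """Build a HiveQL CREATE EXTERNAL TABLE statement pointing at an HDFS path."""
    cols = ", ".join(f"{name} {dtype}" for name, dtype in columns.items())
    # EXTERNAL + LOCATION: dropping the table later keeps the underlying files.
    return (f"CREATE EXTERNAL TABLE {table} ({cols}) "
            f"STORED AS PARQUET LOCATION '{location}'")

print(create_external_table_ddl("staging.events", {"id": "INT"}, "/data/staging/events"))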

Best regards,

Simon

Hello all,

We all pretty much love the in-memory Multi-Row Formula tool: easy to use, etc. However, the in-DB counterpart does not exist.

I see that as a wizard that would generate windowing functions like LEAD or LAG
https://mode.com/sql-tutorial/sql-window-functions/
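To make the idea concrete, here is a sketch of the kind of SQL such a wizard could generate (table, field, and ordering column are placeholders):

def multi_row_sql(table: str, field: str, order_by: str, offset: int = 1) -> str:
    """Generate a windowed SELECT that adds previous/next values of a field."""
    return (
        f"SELECT *, "
        f"LAG({field}, {offset}) OVER (ORDER BY {order_by}) AS prev_{field}, "
        f"LEAD({field}, {offset}) OVER (ORDER BY {order_by}) AS next_{field} "
        f"FROM {table}"
    )

print(multi_row_sql("sales", "amount", "sale_date"))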

 

Best regards,

Simon

Ability to color the connector lines to symbolize a path or data set. This would help when you have multiple sources feeding into a Join, to confirm that a path still carries the same set of data when multiple paths have been created.

Not sure I'd call this a user setting, but I couldn't figure out the right heading this belongs to. 

 

When opening files, there are often a couple of files that aren't run on any kind of schedule or set time frame, but that you come back to when you need to run them.

 

There should be a way to set "FAVORITES" for a handful of files that you find yourself referring to on a repeated basis but that are too far back to be on the 'recents' list because you open too many other files.

Hello --

 

Many times, I want to summarize data by grouping it, but to really reduce the number of rows, some data needs to be concatenated.

 

The problem is that some of the grouped data is repeated, and concatenating it will double or triple the values, or produce a very large field of concatenated data.

 

As an example:

Name    State

A       New York
A       New York
A       New Jersey
B       Florida
B       Florida
B       Florida

 

The above, if we group by Name and concatenate State, would look like:

A    New York, New York, New Jersey
B    Florida, Florida, Florida

 

What I propose is a new option called Concatenate Unique so I would get:

A    New York, New Jersey
B    Florida

 

This would prevent us from having to use a Regex formula to make the column unique.
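For reference, the proposed behaviour in a few lines of pandas (purely to illustrate the expected output; the tool itself would of course do this natively):

import pandas as pd

df = pd.DataFrame({
    "Name": ["A", "A", "A", "B", "B", "B"],
    "State": ["New York", "New York", "New Jersey", "Florida", "Florida", "Florida"],
})

# 'Concatenate Unique': drop repeats within each group before joining.
result = df.groupby("Name")["State"].agg(lambda values: ", ".join(dict.fromkeys(values)))
print(result)
# A    New York, New Jersey
# B    Florida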

 

Thanks,

Seth

Hello!

I am just making a quick suggestion, specifically for the Formula tool within Alteryx.

 

Often, when I am working on a larger workflow, I will end up optimising it towards the end. I typically end up removing unnecessary tools and fields, and rethinking my logic.

 

Much of this optimisation is also merging formula tools where possible. For instance, if I have 3 formulas, it's much cleaner (and, I would suspect, faster) to have these all within one tool. A scaled-down example:

TheOC_0-1638886556192.png

 

to this:

TheOC_1-1638886598494.png

 

This requires a lot of copying and pasting - especially if the formulas/column names are long, this can be two copy-and-pastes, plus waiting for tools to load between them, per formula. (I do appreciate this sounds like an incredibly small problem to have, but on what I would consider a large workflow, a tool loading can actually take a couple of seconds, and this can burn some time. Additionally, there are always potential problems when it comes to copying/pasting or retyping, such as errors.)

 

My proposed solution to this is the ability to drag a formula onto another - very similar to dragging a tool onto a connection. This integration would look like:

TheOC_4-1638886826166.png

 

Drag to the first formula:

 

TheOC_5-1638886837420.png

 

 

Release:

 

TheOC_6-1638886865299.png

 

Formula has been appended to the formula tool:

TheOC_7-1638886879753.png

 

 

I think this will help people visually optimise their workflows!

Cheers,
TheOC

 

 

 

Hello all,

 

I'm currently learning the Python language and there is this cool feature: you can multiply a string

image.png
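Since the screenshot may not render here, this is the Python behaviour being referred to:

print("ab" * 3)       # ababab
print("-" * 20)       # a quick way to build a 20-character separator line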

 

 

 

Pretty cool, no? I would like the same syntax to work for Tableau.

 

Best regards,

 

Simon

Lots of use cases involve concatenating some values based on group by clauses within the Summarize tool.

It would be great to have the option to Concatenate Unique as an aggregation method, so that each value appears only once in the results.

Plus, having the option to have them sorted or not would be awesome.

I can't even count how often I have looked at an Excel, CSV, or even YXDB file where I KNEW that it was generated by Alteryx, but I couldn't remember the workflow. Currently, I simply have to go through all the workflows I ever built and see if I can find it.

 

Theoretically, I could use a text search across all workflows and see if I can find the output names - the problem here: most of my output filenames are generated dynamically at run time.

 

It would be amazing if Alteryx could simply write the Workflow name (maybe even path) into the metadata of a file.

2b32a469-58fc-4219-b567-795509ca50dd.png

(Screenshot from Google, as my OS is set to German)

 

How about we write "This file was created by Create Controlling Reports.yxmd on 2023-02-06 with Alteryx Designer 2021.4.298434" in the 'Comments' field?

 

This would make it extremely easy to find which workflow generated the file. We could also consider writing the full file path instead of just the filename, but the path could include the local machine name, which might include GDPR-relevant information.
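As a rough illustration of how little is needed on the file side, this is how the 'Comments' document property of an .xlsx can be set with openpyxl (the filename is hypothetical; Excel's 'Comments' field maps to the core 'description' property):

from openpyxl import load_workbook

path = "Create_Controlling_Reports_output.xlsx"   # hypothetical output file
wb = load_workbook(path)

# Excel's 'Comments' document property is the core 'description' field.
wb.properties.description = (
    "This file was created by Create Controlling Reports.yxmd "
    "on 2023-02-06 with Alteryx Designer 2021.4.298434"
)
wb.save(path)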

 

@Community: Is there any additional information that you'd like to see in the metadata?

 

 

Best

Alex
