
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas


It would be nice if the fields selected in the Unique tool were easily visible (for example, by grouping the selected fields).

 

The issue is that when only a few of many fields are selected for the Unique tool, it is hard to review which fields have been chosen in the tool's configuration.

 

Here's an example where it is difficult to see all of the selected fields (seven fields are selected here).

 

Amin_0-1654796229216.png

 

It'd be great to have all DCM connections available in the Data connections window.

And when "Use Data Connection Manager (DCM)" is ticked, the screen should default to the DCM connection list.

 

Aguisande_1-1654699262092.png

 


The XML Parse tool has a checkbox to ignore errors and continue. This idea applies to any tool with an option to ignore errors. It would be great if XML Parse had two outputs: one for successful records and another for errored records. This would make it much easier to identify and update (if necessary) the errored records.

In my view, this would make it more similar to other tools like Filter or Spatial Match, where records that don't fit your criteria follow a different flow.
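For illustration, here is a rough Python sketch (my own, not how the tool works internally) of the success/error routing I have in mind, assuming the records arrive as XML strings:

```python
import xml.etree.ElementTree as ET

def split_xml_records(xml_strings):
    """Route parseable records to 'ok' and unparseable ones to 'errored',
    mimicking the proposed success/error outputs of the XML Parse tool."""
    ok, errored = [], []
    for record in xml_strings:
        try:
            ok.append(ET.fromstring(record))
        except ET.ParseError as exc:
            errored.append((record, str(exc)))
    return ok, errored

# Example: the second record is malformed and lands in the error output.
good, bad = split_xml_records(["<a><b>1</b></a>", "<a><b>1</a>"])
```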

 

Thanks for considering

I would like to suggest adding a configuration option to the Block Until Done tool that allows the user to prioritize the release of data streams through multiple Block Until Done tools in the same module.

 

In the example below, the objective is to update multiple sheets in a single Excel workbook. Each sheet is a different data stream that cannot be unioned with the others, so the usual solution of filtering a single stream into multiple Block Until Done tools is not possible.

 

What I would like is a configuration where Block Until Done #2 will not allow its data stream to pass through until Block Until Done #1 is complete, then Block Until Done #3 will not pass its data stream until Block Until Done #2 is complete, and so forth through all of the Block Until Done instances.

 

ScubaGeek_0-1654554889263.png

 


In the current Output Data tool, when choosing a bulk loader option, say for Teradata, the tool automatically makes the first column the primary index. That is often wrong, especially on Teradata, because of how the system might be configured. My Teradata management team tells me that the created table, whether in a temp space or not, becomes very lopsided and doesn't distribute across the AMPs appropriately.

They recommend that I instead specify "NO PRIMARY INDEX", but that is not an option in the Output tool.

 

The Output tool does not allow any database-specific tweaks that might actually make things more efficient.
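For illustration, this is the kind of Teradata-specific tweak I mean, sketched in Python with pyodbc (the DSN, database, table, and columns below are made up): pre-creating the staging table with NO PRIMARY INDEX before the bulk load runs.

```python
import pyodbc

# Hypothetical DSN and table; the DDL is the Teradata-specific tweak
# (NO PRIMARY INDEX) that the Output tool currently cannot emit.
conn = pyodbc.connect("DSN=my_teradata_dsn", autocommit=True)
cursor = conn.cursor()
cursor.execute(
    """
    CREATE MULTISET TABLE staging_db.my_stage_table (
        customer_id VARCHAR(20),
        amount DECIMAL(18, 2)
    )
    NO PRIMARY INDEX
    """
)
conn.close()
```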

 

Additionally, when using the bulk loader, if the Post SQL references the table created by the bulk load, I get an error message that the data load is not yet complete.

 

It would be very useful if the Post SQL were executed only after the bulk data is actually loaded and committed, not merely cached by Teradata or the database engine.

 

Furthermore, there is currently no way for the Post SQL (or anything similar) to return data, a status, or other output from the Output tool.

It would be very helpful if there were a way to allow that.

 

Today, I am able to take an Excel file from a folder and drag it onto the canvas, which automatically creates an Input Data tool.

I would like to be able to drag an Excel file straight from Outlook to do the same!

Hello,

The release notes quality is not exactly at its best nowadays. The 2022.1 release notes available here https://help.alteryx.com/release-notes/designer/designer-20221-release-notes don't mention at least two cool new features:
- DCM for in-database connections.
- The distinction between Greenplum and PostgreSQL, which is important for me since I had posted it as an idea: https://community.alteryx.com/t5/Alteryx-Designer-Ideas/Separate-entry-in-in-db-configuration-for-Po...

 

Note that the corresponding ideas aren't up to date either.

It's cool to have new features; it's even better if you give the full list.

Best regards,

Simon

I am investigating a super messy and huge workflow, and I am having a hard time tracing the data streams back.

 

It only has around 5 wireless connections; some are easy to find, but some connect to a Union tool.

 

A button to turn all wireless connections back into wired ones would help a lot, and an option to temporarily display them as solid lines would be even better.


All the other data types get basic filters, but Time doesn't get any besides a NULL check:

IraWatt_0-1654095977829.png

 

Typing '%userprofile%\documents' in File Explorer resolves to 'C:\Users\username\Documents'. It would be very useful if Alteryx allowed this %path% environment-variable syntax. When using OneDrive or SharePoint, the user name often has to come first in the path, which will differ for every user if you are sharing workflows. Environment-variable support would solve this problem very cleanly.
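For reference, this is roughly the behaviour I'm asking for, sketched in Python: on Windows, os.path.expandvars resolves %...% variables the same way File Explorer does (the path below is just an example):

```python
import os

# '%USERPROFILE%\Documents' expands to e.g. 'C:\Users\username\Documents',
# so a shared workflow would not need the user name hard-coded in the path.
template = r"%USERPROFILE%\Documents\shared_workflow\input.xlsx"
resolved = os.path.expandvars(template)  # %VAR% expansion works on Windows
print(resolved)
```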


If I draw a line that crosses the Pacific Ocean, the path is split in half and connected by a line that goes across the Atlantic instead.

 

This isn't just cosmetic: if I intersect this object with another polygon, it will report that the two objects intersect even though they should not. The only way I can fix this is to manually divide my polyline into two objects where it crosses the Pacific Ocean.
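To illustrate the manual workaround, here is a small Python sketch of how I divide the path myself, splitting wherever consecutive longitudes jump by more than 180 degrees (i.e. at the antimeridian); it is a simplification that does not interpolate the exact crossing points:

```python
def split_at_antimeridian(coords):
    """Split a (lon, lat) polyline into separate parts wherever consecutive
    longitudes jump by more than 180 degrees (an antimeridian crossing)."""
    parts = [[coords[0]]]
    for prev, point in zip(coords, coords[1:]):
        if abs(point[0] - prev[0]) > 180:  # crossing detected: start a new part
            parts.append([])
        parts[-1].append(point)
    return parts

# A path from Japan to California gets split into two objects.
print(split_at_antimeridian([(140.0, 35.0), (170.0, 40.0), (-150.0, 45.0), (-120.0, 38.0)]))
```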

 

Please fix this.

 

 

Matthew_0-1654091156864.png

 

 

We need the ability to call stored procedures in a Snowflake database.
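As a point of reference, calling a procedure from the Python tool might look something like this (a sketch only, assuming the snowflake-connector-python package is available; the connection details and procedure name are placeholders):

```python
import snowflake.connector

# Illustrative credentials and procedure name; replace with your own.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    warehouse="my_wh",
    database="my_db",
    schema="public",
)
try:
    cur = conn.cursor()
    cur.execute("CALL refresh_sales_summary('2022-06-01')")  # hypothetical procedure
    print(cur.fetchone())  # a stored procedure returns a single result row
finally:
    conn.close()
```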


Hello all,

EDIT: silly me, it's an Excel limitation on output, not an Alteryx limitation :( Can you please delete this idea?
I had to convert some strings into dates and I got this error message (with both the Select tool and the DateTime tool):

 

 

ConvError: Output Data (10): Invalid date value encountered - earliest date supported is 12/31/1899  error in field: DateMatch  record number: 37399

 

 

 

This earliest supported date is far too recent. Just think of birthdates or geological/archaeological data!

Also, other products such as Tableau support earlier dates!

Hope to see that changed soon.

Best regards,

Simon

Hello,


As of today, when you connect to a database, Alteryx goes through a batch of queries to work out which database it is (cf. https://community.alteryx.com/t5/Alteryx-Designer-Ideas/Smart-Visual-Query-Builder-for-in-db-less-te... where I suggest a solution to speed up that process), and then it queries the metadata. To get the columns of each table, Alteryx runs a SHOW TABLES and then loops over every table. This is really slow.

However, since Hive 3.0, an information_schema listing the columns of every table is available. I suggest using information_schema.columns instead of the time-consuming loop.
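As a sketch of the idea (assuming the PyHive package, illustrative connection details, and that Hive 3's information_schema exposes the standard columns), a single query against information_schema.columns returns every table's columns at once, instead of a SHOW TABLES followed by a per-table loop:

```python
from pyhive import hive

# Illustrative connection details.
conn = hive.Connection(host="hive-server.example.com", port=10000, username="alteryx")
cursor = conn.cursor()

# One metadata query instead of SHOW TABLES plus one describe per table.
cursor.execute(
    """
    SELECT table_name, column_name, data_type
    FROM information_schema.columns
    WHERE table_schema = 'my_database'
    ORDER BY table_name, ordinal_position
    """
)
for table_name, column_name, data_type in cursor.fetchall():
    print(table_name, column_name, data_type)
```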

 
 

image.png


PS: I don't know whether this is linked to Active Query Builder, the third-party component behind the Visual Query Builder. If so, it would be a good idea to update it, as suggested here: https://community.alteryx.com/t5/Alteryx-Designer-Ideas/Update-Query-Builder-component/idi-p/799086



Best regards,

Simon

I believe many have voiced this as a pain point within the Community. Essentially, there is no straightforward way to import multiple password-protected Excel files.

 

I understand that there is an R solution suggested by several users; however, that is not ideal, as it can be difficult to obtain permission from the internal tech team to install the package on users' computers.

 

Re-saving the files without a password is not only a hassle but also raises data protection and security concerns.
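For completeness, the usual scripted workaround looks something like the Python sketch below (assuming the msoffcrypto-tool and pandas packages, which carries the same install-permission drawback as the R approach):

```python
import io
import msoffcrypto
import pandas as pd

def read_protected_excel(path, password):
    """Decrypt a password-protected Excel file in memory and read it."""
    decrypted = io.BytesIO()
    with open(path, "rb") as f:
        office_file = msoffcrypto.OfficeFile(f)
        office_file.load_key(password=password)
        office_file.decrypt(decrypted)
    decrypted.seek(0)
    return pd.read_excel(decrypted)

# df = read_protected_excel(r"C:\data\protected.xlsx", "s3cret")  # illustrative path
```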


It would be great if you could support Snowflake window functions within the In-DB Summarize tool.

This may have been raised before, but we would like to see equivalents of the PRICE and YIELD formulas from Excel in Alteryx's Formula tool. I believe many users in the finance industry use formulas like these frequently, and it would be helpful to be able to replicate them in Alteryx.

 

Manually building the formula is possible; however, it is unnecessarily complicated, especially if you are working on a different calendar basis, e.g. 30/360 European.
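To show why: even a stripped-down PRICE equivalent in Python, assuming settlement falls exactly on a coupon date (so no accrued interest or day-count adjustment, which are precisely the parts that 30/360 European complicates), already takes this much:

```python
def bond_price(face, coupon_rate, yield_rate, freq, n_periods):
    """Simplified price of a bond valued on a coupon date.
    Ignores day-count conventions and accrued interest entirely."""
    coupon = face * coupon_rate / freq
    y = yield_rate / freq
    discounted_coupons = sum(coupon / (1 + y) ** t for t in range(1, n_periods + 1))
    discounted_redemption = face / (1 + y) ** n_periods
    return discounted_coupons + discounted_redemption

# 5% semi-annual coupon, 4% yield, 5 years (10 periods) to maturity.
print(round(bond_price(100, 0.05, 0.04, 2, 10), 4))  # ~104.4913
```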

 

Thank you!

Hello!
Currently I develop on a 2560 x 1440 monitor, and it is great for developing Alteryx workflows.

However, from an accessibility perspective (and for demonstration purposes), the Alteryx interface text and icons are far too small to read comfortably. For instance, this is what Designer looks like at the most common monitor resolution, 1920 x 1080:

TheOC_1-1653667217648.png

 



And at my native resolution (2560 x 1440):

TheOC_2-1653667235714.png

 





And at 4K resolution, for comparison:

TheOC_3-1653667284677.png

 

As you will notice, virtually everything gets smaller and becomes unreadable at higher resolutions. There doesn't appear to be a setting for this within Alteryx, so I have to resort to Windows settings to change the size:

TheOC_4-1653667346598.png



Or, as @CharlieS mentions here, change the size of text across all applications.

It would be useful to have a 'scaling' slider/dropdown within Alteryx so that I do not have to change the resolution or application size within Windows in order to easily read or demonstrate data from Alteryx Designer.

Thanks,
TheOC

Hello!
Currently, when using the DateTime and Text Pre-processing tools (I'm sure there are a few others), the default (and only) option is to output a new column. For instance, with DateTime:

TheOC_0-1653665348080.png

There is no option to replace the original field, only to create a new one. Setting the new name to match the original results in:

TheOC_3-1653665485305.png

 




And with Text Pre-processing, there is no option to specify an output column at all; the column you process will become [field]_processed:

TheOC_1-1653665427373.png

 

TheOC_2-1653665434647.png

 





It would be awesome if both of these tools had functionality similar to the Multi-Row Formula tool, with the ability to either create a new field or update an existing one:

TheOC_4-1653665598032.png

 


This would reduce data redundancy and the need for additional Select tools. Additionally, with the Text Pre-processing tool specifically, it's very easy to make the mistake of not using the 'processed' field in subsequent text-based analysis, especially when the tool is inserted into an already-built connection.


Thanks,
TheOC

Hello,
Currently, the Sentiment Analysis tool scores textual data with a Compound_Sentiment_Score, which is effectively a score from -1 (negative) to 1 (positive).

 

The tool is great in that it has a built-in classification function, so we can say that anything under -0.5 should be labelled negative and anything over 0.5 should be labelled positive.

 

However, the defaults are currently set to:

TheOC_0-1653665068657.png



This effectively creates a negative-weighted classification: anything from -1 to -0.1 is classified as negative, -0.1 to 0.5 is labelled neutral, and 0.5 to 1 is labelled positive.

 

My suggestion would be to change the default maximum negative classification to -0.5, to level the weighting of the tool's scoring.
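A small Python sketch of the symmetric classification I'm proposing (the thresholds here are the suggested defaults, not the tool's current ones):

```python
def classify_sentiment(compound_score, negative_max=-0.5, positive_min=0.5):
    """Label a compound score in [-1, 1] using symmetric cut-offs,
    so 'Negative' and 'Positive' cover equally sized ranges."""
    if compound_score <= negative_max:
        return "Negative"
    if compound_score >= positive_min:
        return "Positive"
    return "Neutral"

print(classify_sentiment(-0.3))  # Neutral with symmetric defaults,
                                 # but Negative under the current -0.1 default
```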

 

 

Thanks,
TheOC
