
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

Having the open/close (expand/collapse) button for the Tool Container in the top right corner means that every time a large container is expanded, the user has to move the pointer to the button's new position to close it, which sometimes means scrolling or zooming out and then zooming back in to locate it.

I suggest moving that button to the top left corner, next to the enable/disable switch, or even adding a double-click mechanism for open/close. That would let the user open a container, see what is inside, and close it again without moving the mouse to hunt for the button's new location.

 

 

Hello,

As of today, only English is available. But it's hard to convince French customers with French-language data to buy the AIS (Alteryx Intelligence Suite) if it cannot work with their data.

Best regards,

Simon

Hello!
I like to annotate my workflows when finished, and it can be a bit of a pain to add more and more Comment tools by searching for them or going through the current right-click menu.



What would be nice is the option to right-click anywhere on the canvas and choose 'add comment', similar to how we have the option of 'add container' when selecting tools on the canvas.

 

Cheers!

Hello,

 

I had a business case requiring a cost-effective and quick storage solution for real-time online survey data from customers. A MongoDB instance fit the need, so I quickly spun up a cluster on MongoDB Atlas. Atlas was launched by MongoDB in 2016 as a database-as-a-service deployed on AWS, and all Atlas instances require TLS/SSL to connect. Currently, the Alteryx MongoDB connector does not support TLS/SSL connections and therefore doesn't work against Atlas. So I was left with a breakdown in my plan that would require manual intervention before ingesting data into Alteryx (not ideal).
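For context, this is roughly what a TLS connection to Atlas looks like from Python with pymongo; the cluster host, credentials, and collection names below are illustrative placeholders, and the tls flag is the capability the Alteryx connector is missing:

# Minimal sketch of a TLS connection to MongoDB Atlas using pymongo.
# Cluster host, credentials, and names are hypothetical placeholders.
from pymongo import MongoClient

client = MongoClient(
    "mongodb+srv://user:password@cluster0.example.mongodb.net/",
    tls=True,  # Atlas rejects unencrypted connections, so TLS is mandatory
)
responses = client["survey_db"]["responses"]
print(responses.count_documents({}))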

 

Please consider expanding this functionality to all connectors. I am building Alteryx out in my agency as a data platform that handles sensitive customer information (name, address, email, etc.). Most tools I use to connect to secure servers today support this type of connection, so this should be a priority for Alteryx to resolve.

 

Thanks,

Mike Schock

 

 

 

 

Hi,

I'm really missing a search in the metadata pane. The data pane has a search box, but there is no equivalent when I'm browsing through the metadata.



Have you ever had the business deliver an Excel (EEK!) file to be passed into Alteryx with a different number of header rows (because it looks pretty and is convenient)? Never, you say? Lies! 

 

I would suggest adding an option to the Input Data tool that gives us the ability to concatenate multiple header rows. This would enable accurate data profiling for columns on output and eliminate unnecessary conversion errors. Currently, the options allow us to Start Data Input on Line X; however, if a column's header spans multiple rows, the header parts have to be entered manually after input, because we can only select the lowest row that keeps the data intact. The solution would be to specify the number of rows that contain headers, concatenate them into a single row (ignoring nulls and carriage returns), and output that as the header.

 

The current functionality, in a situation where each file has a variable number of header rows, causes forced errors such as a numeric value being converted to a scientific-notation string.
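To illustrate the proposed behaviour, here is a rough Python/pandas sketch of the same logic (the file name and two-row header count are hypothetical examples): read the header rows as a block, concatenate the non-blank parts per column, and use the result as a single header.

# Sketch of the proposed option: read N header rows and concatenate
# them into one. "report.xlsx" and header_rows=2 are example values.
import pandas as pd

header_rows = 2
df = pd.read_excel("report.xlsx", header=list(range(header_rows)))

# Join the MultiIndex levels per column, skipping blank/unnamed parts.
df.columns = [
    " ".join(
        str(part) for part in col
        if str(part) != "nan" and not str(part).startswith("Unnamed")
    ).strip()
    for col in df.columns
]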

As of Alteryx version 2020.3, the Browse tool no longer shows a profile of the complete dataset (profiling is capped once the cumulative record data size reaches 300 MB).

 

My proposed solution is an optional override of the record size limit on the Browse tool (which would make profiling take longer, but would actually profile the entire dataset). I would also like a general user setting that sets the default Browse tool behavior to either limited or unlimited.

 

Below is the newly included documentation of the Data Profiling Limit, which I'm proposing can be overridden.

 

 

Data Profiling Limit
Data Profiling in the Browse tool is capped at 300 MB. This allows you to process very large datasets faster. For each record in the incoming dataset, we process the record and add the record size to a counter. Once the counter reaches 300 MB, we stop processing records.

It is important to note that there is no specific number of records that we can process. This depends on the dataset since a record size can range from 1 byte to a few thousand bytes. This record size is different from the file size, displayed in the Results grid and Data Profiling Holistic View. The file size is generally different since it has been compressed to optimize spacing.

In other words, 300 MB of record size is not the same as 300 MB of file size.
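In pseudocode terms, the documented cap amounts to something like the following sketch (an illustration of the described behaviour, not Alteryx's actual implementation; the per-record size attribute is hypothetical):

# Accumulate record sizes and stop profiling once 300 MB is reached.
PROFILING_CAP = 300 * 1024 * 1024  # 300 MB of cumulative record size

def records_to_profile(records):
    total = 0
    for record in records:
        total += record.size_in_bytes  # hypothetical per-record size
        if total >= PROFILING_CAP:
            break  # remaining records are excluded from the profile
        yield record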

 

 

 

This new behavior can cause confusion when looking at the data profile (e.g. if you expect the sum to be $3 million, but the Browse tool is only profiling 2% of your total records, the profile sum may only show $60 thousand).

 

The sampled version with a 300 MB cutoff is rarely useful when using Browse tools to get a quick sense of variable profiles on medium-sized datasets (around 1 million records), since these rarely fit within the 300 MB record size limit.

 

An example is shown in the image below, where the dataset contains 855,085 records but the Browse tool profiles only the first 20,338.

 

alteryxExample1.png
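The arithmetic implied by that example shows how quickly the cap is hit (a quick sanity check, using the numbers above):

# Implied sizes from the example: 20,338 of 855,085 records profiled.
cap_bytes = 300 * 1024 * 1024
profiled_records = 20_338
total_records = 855_085

avg_record_size = cap_bytes / profiled_records       # ~15 KB per record
full_dataset_size = avg_record_size * total_records  # ~12 GB of record data
print(f"{avg_record_size / 1024:.1f} KB/record, "
      f"{full_dataset_size / 1024 ** 3:.1f} GB total")

At roughly 15 KB per record, the full dataset is around 12 GB of record data, some forty times the cap, so over 97% of the records never make it into the profile.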

 

Again, being able to override this 300 MB record size limit would fix the problem created by the 2020.3 change to the Browse tool.

 

 

 

As it stands now, only a standard input tool can be used to pull data from Google BigQuery tables. The issue is that the data is streamed and processed locally, meaning the processing power of BigQuery isn't actually being leveraged.

Adding BigQuery In-Database as a connection option would appeal to a wide audience. BigQuery is also compliant with the SQL:2011 standard, which may make for an even easier integration.
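For comparison, this is roughly what pushing the work down to BigQuery looks like from Python with the google-cloud-bigquery client (project, dataset, and table names are placeholders); an In-Database connection would keep the aggregation on BigQuery's side in the same way:

# Sketch: letting BigQuery do the processing instead of streaming rows locally.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project
query = """
    SELECT region, SUM(sales) AS total_sales
    FROM `my-project.my_dataset.orders`
    GROUP BY region
"""
for row in client.query(query).result():  # aggregation runs inside BigQuery
    print(row.region, row.total_sales)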

On the "Join Tool"  allow to click on a connection and say “switch L & R” connection.  Currently if only one connection is there you can move to the other, but if they're both there, you have to disconnect one, and then 'switch'.   

When using the text mining tools, I have found that a template is only applied to pages with the same page number as the page used to create the template.

 

So in my use case I've got a PDF file with 100+ claim statements, all laid out the same (one page per statement). When setting up the template I used one page to set the annotations and then fed this into the T anchor of the Image to Text tool. Into the D anchor goes my PDF document with 100+ pages. However, when examining the output I only get results for page 1.

 

On examining the JSON for the template I can see that there is reference to the template page number:

cgoodman3_0-1604393391514.png

 

Playing around with a Generate Rows tool and a formula to replace the page number with pages 1-100 in the JSON doesn't work. I then discovered that if I change the page number on the image input side, I get the desired results.

 

cgoodman3_1-1604393499357.png

However, an improvement to the tool (as I suspect this is a common use case for Image to Text) would be an option in its configuration to apply the same template to all pages.

 

cgoodman3_4-1604393738275.png

 

 

 

 

 

Please add the Parquet data format (https://parquet.apache.org/) as a read/write option for Alteryx.

 

Apache Parquet is a columnar storage format available to any project in the Hadoop ecosystem, regardless of the choice of data processing framework, data model or programming language.
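Until native support exists, one common stopgap is to round-trip Parquet through the Python tool with pandas and pyarrow; a minimal sketch (the file name is illustrative):

# Parquet round trip with pandas + pyarrow, the kind of
# read/write support being requested natively.
import pandas as pd

df = pd.DataFrame({"id": [1, 2, 3], "value": ["a", "b", "c"]})
df.to_parquet("example.parquet", engine="pyarrow")          # write
df2 = pd.read_parquet("example.parquet", engine="pyarrow")  # read
print(df2.dtypes)  # column types survive the round trip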

 

Thank you.

 

Regards,

Cristian.

Please update the Publish to Tableau Server connector tool to support Tableau's Ask Data feature. The data source must be recognized as an extract on Tableau Server for Ask Data to work. Currently, all data sources published using version 2.0 of the connector tool are recognized as live data sources. The workaround is cumbersome and requires multiple copies of data sources to be created and managed.

I decided to get real fancy when building a standard macro the other day. I checked the box on my macro input that made the connection optional:

Capture2.PNG

 


 

It worked really well. My macro then became more complex, so I changed it to a batch macro. To my great surprise/astonishment/shock, the optional incoming connection is no longer optional:

Capture.PNG

 

The standard macro is working as expected on the left, but the batch macro is producing an error because my optional connection is requiring that something be connected to it.


 

I've been told that the code to make the connection optional simply doesn't exist for batch macros, and that adding it would be a product feature/improvement.

 

With the continued growth of graph databases, it would be nice for Alteryx to create a new tool set of input/output connectors for graph databases like Neo4j, which software tools like Pentaho and Talend already have.

 

Keith. 

At the moment, if part of your Python code takes more than 30 seconds to run, Jupyter times out and Alteryx cancels the workflow. This makes the Python tool unusable for anything intensive; the timeout should be removed by default or be configurable per workflow.

 

I've made this idea as none of the solutions in these threads feel satisfactory:

 

https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Python-tool-NbConvertApp-Timeout/m-p/3...

https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Python-Tool-Timeouts-When-Running-Work...

https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Python-SDK-timeout-error-cell-executio...
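For reference, the workaround those threads circle around is raising nbconvert's cell-execution timeout in a Jupyter config file, along these lines (whether the Designer-bundled Jupyter honours a given config path varies, which is exactly why a supported per-workflow setting is needed):

# jupyter_nbconvert_config.py
# nbconvert's ExecutePreprocessor defaults to a 30-second cell timeout;
# -1 (or None) disables it.
c.ExecutePreprocessor.timeout = -1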

Similar to previous ideas from @patrick_mcauliffe and @shailesh_patel, I would like to request two things:

 

Default on Folder Picker Interface tool

The folder picker tool does not currently allow a default value - this unnecessarily adds work if users have the same value 90% of the time.

Please add a field for the default value that will show when the interface starts up.

 

Annotation 2019-09-20 074835.png

 

 

 

 

Similar ideas:

- Default on Date interface: https://community.alteryx.com/t5/Alteryx-Designer-Ideas/Default-Date-for-Interface-Tool/idi-p/35770

- Default on File Selector: https://community.alteryx.com/t5/Alteryx-Designer-Ideas/Default-file-location-in-file-broswer-Interf...

Hello all,

 

Some databases, including Hive, natively support scheduled queries (yes, the scheduling configuration lives inside the database, not in the ETL/data-prep system). I think this would be an interesting feature for in-DB workflow output: you run the workflow once and then only have to run it again when it changes; the database does the scheduling.



https://cwiki.apache.org/confluence/display/Hive/Scheduled+Queries

Intro

Executing statements periodically can be useful for

  • Pulling information from external systems
  • Periodically updating column statistics
  • Rebuilding materialized views
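As a sketch of what an in-DB output option could generate, the Hive DDL from the page above can be issued from Python, e.g. via PyHive (host, cron schedule, and table names are illustrative placeholders):

# Creating a Hive scheduled query via PyHive -- the database then
# re-runs the insert on its own schedule, with no external scheduler.
from pyhive import hive

conn = hive.connect(host="hive.example.com", port=10000)
cursor = conn.cursor()
cursor.execute("""
    CREATE SCHEDULED QUERY refresh_sales_summary
    CRON '0 0 * * * ? *'
    AS INSERT OVERWRITE TABLE sales_summary
       SELECT region, SUM(amount) FROM sales GROUP BY region
""")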

 

Best regards,

Simon

Having just participated in weekly challenge 293, where the requirement is to output a table with certain conditional row colours, I found that the configuration is based on RGB colour codes whereas the desired output specifies its colours using hex codes. 95% of my development time on this challenge went into matching the colour formatting, so being able to enter hex codes directly would improve this experience.
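The conversion being done by hand here is trivial, which is part of the frustration; a quick sketch of what the tool could accept:

# Hex-to-RGB conversion currently done manually for the colour rules.
def hex_to_rgb(hex_code):
    h = hex_code.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

print(hex_to_rgb("#2ECC71"))  # (46, 204, 113)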

 

cgoodman3_0-1637588819650.png

cgoodman3_1-1637588840307.png

 

cgoodman3_2-1637588900043.png

 

Alteryx doesn't support querying tables within Apache Ignite via the Ignite ODBC connector. Since Ignite is an in-memory database, adding ODBC connectivity to it would be a valuable addition to Alteryx.

 

https://apacheignite-sql.readme.io/docs/overview 

The idea is to store credentials (login/password) in a "credential alias".

 

Then, those credential aliases could be used in:

- traditional aliases/connections
- in-database aliases/connections
- HDFS aliases/connections
- API connections
- user aliases for connected controllers/Gallery
- ...etc.

 

The idea is that I would only have to change the credentials once for all connection types (for Hive, I have an in-DB alias, a traditional alias, and even an HDFS alias all using exactly the same credentials, and I have to change each of them manually).

 
