Alteryx Designer Ideas

Share your Designer product ideas - we're listening!

Featured Ideas

Using the Snowflake Input and Output connectors when Snowflake is set up to use Single Sign-On generates a lot of browser windows: the connectors authenticate multiple times, opening a browser window each time, which is very disruptive and distracting for users. Any time the workflow interacts with these tools, it authenticates again and another browser window opens.

This is driven by the Snowflake ODBC driver's externalbrowser authentication method. It would be helpful if Snowflake and Alteryx worked together to refine how the connectors authenticate, reducing the number of browser windows or eliminating them entirely, which would greatly improve the user experience.
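For context, each new ODBC connection using externalbrowser authentication performs its own browser round trip. A minimal sketch of the equivalent connection in Python via pyodbc (the account, database, warehouse, and user values are placeholders):

```python
import pyodbc

# DSN-less connection to Snowflake using browser-based SSO.
conn_str = (
    "Driver={SnowflakeDSIIDriver};"
    "Server=myaccount.snowflakecomputing.com;"
    "Database=MY_DB;"
    "Warehouse=MY_WH;"
    "Authenticator=externalbrowser;"  # opens a browser window for SSO
    "UID=user@example.com;"
)

# Each pyodbc.connect() call triggers its own browser-based login,
# which is why a workflow touching several Snowflake tools can open
# many windows unless connections (or SSO tokens) are reused.
conn = pyodbc.connect(conn_str)
print(conn.cursor().execute("SELECT CURRENT_USER()").fetchone())
```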

Simple request: revert to the earlier behavior that generated an error at the onset of running the workflow.

I would like it to be easier to change Input (and Output) tools to UNC paths. I think adding this to the right-click menu would be great. Currently, I have to go to Options >> Advanced >> Workflow Dependencies; a right-click option would be easier.

 

Thanks!

As of Alteryx 2020.3, the Browse tool no longer shows a profile of the complete dataset: profiling is capped once the cumulative record data size reaches 300 MB.

 

My proposed solution is an optional override of the record size limit on the Browse tool (which will make profiling take longer, but actually profile the entire dataset). I would also like a general user setting that sets the Browse tool's default behavior to either limited or unlimited.

 

Below is the newly added documentation of the Data Profiling Limit, which I'm proposing should be overridable.

Data Profiling Limit
Data Profiling in the Browse tool is capped at 300 MB. This allows you to process very large datasets faster. For each record in the incoming dataset, we process the record and add the record size to a counter. Once the counter reaches 300 MB, we stop processing records.

It is important to note that there is no specific number of records that we can process. This depends on the dataset since a record size can range from 1 byte to a few thousand bytes. This record size is different from the file size, displayed in the Results grid and Data Profiling Holistic View. The file size is generally different since it has been compressed to optimize spacing.

In other words, 300 MB of record size is not the same as 300 MB of file size.
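The quoted behavior is straightforward to model. A minimal sketch of the sampling logic as documented (the record stream is a hypothetical stand-in for the Browse tool's input):

```python
PROFILING_CAP_BYTES = 300 * 1024 * 1024  # 300 MB, per the documentation

def records_to_profile(records):
    """Yield records for profiling until their cumulative size hits the cap.

    `records` is any iterable of bytes-like rows; record sizes vary,
    so the number of profiled records depends entirely on the dataset.
    """
    total = 0
    for record in records:
        total += len(record)
        if total >= PROFILING_CAP_BYTES:
            break  # remaining records are excluded from the profile
        yield record
```

At an average record size of roughly 15 KB, the cap is reached after about 20,000 records, which matches the example further down.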

This new behavior can cause confusion when looking at the data profile (e.g. if you expect the sum to be $3 million but the Browse tool is profiling only 2% of your total records, the profiled sum may show only $60 thousand).

 

The sampled version with a 300 MB cutoff is rarely useful when you use Browse tools to get a quick sense of the variable profiles on medium-sized datasets (around 1 million records), since such datasets will rarely fit within the 300 MB record size limit.

 

An example is shown in the image below: the dataset contains 855,085 records, but the Browse tool is profiling only the first 20,338 (about 2.4% of the rows, implying an average record size of roughly 15 KB against the 300 MB cap).

 

[Image: alteryxExample1.png - Browse tool profiling the first 20,338 of 855,085 records]

 

Again, being able to override this 300 MB record size limit would fix the problem created by the 2020.3 change to the Browse tool.

AD/LDAP authentication should be an option for the Mongo tool, and the ability to use Gallery connections would also be great. Most enterprises no longer allow local SQL authentication, in order to simplify security configuration control.

This has probably been mentioned before, but in case it hasn't....

 

Right now, if the Dynamic Input tool skips a file (which it often does!), it just raises a warning and continues processing. While the ability to continue is still useful, could the tool offer an option to 'error if files are skipped'?

 

Right now it is easy to miss that this is happening, and in production / on Server you may want the process to stop instead.
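A minimal sketch of what that option would do, validating the file list up front and failing fast (the function and paths are hypothetical):

```python
import os

def gather_inputs(paths, error_if_skipped=True):
    """Return readable input files, optionally erroring instead of skipping."""
    missing = [p for p in paths if not os.path.isfile(p)]
    if missing:
        if error_if_skipped:
            # Proposed option: stop the workflow rather than warn and continue.
            raise FileNotFoundError(f"Input files skipped: {missing}")
        print(f"Warning: skipping {missing}")  # current behavior
    return [p for p in paths if p not in missing]
```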

 

Thanks,

 

Andy 

As of 2019.4, Alteryx leverages the Tableau Hyper API to output .hyper files. Unfortunately, our hardware is not compatible with the Tableau Hyper API. It would be great if Alteryx could offer the best of both worlds: use the new Tableau Hyper API when possible, but fall back to the old (pre-2019.4) method when the machine's hardware doesn't support it. Thanks!

Have External Tables in Snowflake be accessible in the Visual Query Builder.

Current state: external tables in the Snowflake DBMS are not available on the "Visual Query Builder" tab of the green Input tool; they are only available on the "Tables" tab.

The Esri guideline for shapefiles is below; it recommends using only letters and numbers.

 

"Spaces and certain characters are not supported in field names. Special characters include hyphens such as in x-coordinate and y-coordinate; parentheses; brackets; and symbols such as $, %, and #. Essentially, eliminate anything that is not alphanumeric or an underscore."

https://desktop.arcgis.com/en/arcmap/latest/manage-data/tables/fundamentals-of-adding-and-deleting-f...

 

But many GIS tools can read and write 2-byte field names in shapefiles.

(e.g. QGIS https://qgis.org/en/site/index.html)

And Esri Japan says shapefiles can use 2-byte field names.

https://www.esrij.com/gis-guide/esri-dataformat/shapefile/

 

We want to be able to use 2-byte field names in shapefiles in Alteryx Designer (e.g. UTF-8, Shift-JIS).
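For reference, other tools expose this through an encoding option when writing the DBF. A minimal GeoPandas sketch writing a Shift-JIS field name (the column, point, and file name are illustrative):

```python
import geopandas as gpd
from shapely.geometry import Point

# A tiny layer with a Japanese (2-byte) field name.
gdf = gpd.GeoDataFrame(
    {"名前": ["東京"]},
    geometry=[Point(139.6917, 35.6895)],
    crs="EPSG:4326",
)

# `encoding` sets the DBF codepage; cp932 is the Shift-JIS variant.
gdf.to_file("tokyo.shp", driver="ESRI Shapefile", encoding="cp932")
```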

Thanks,

Kajitani

 

Hi Alteryx community,

 

It would be really nice to have v_string/v_wstring with the maximum character size as the standard for Text Input columns.

I've lost count of how many times an error turned out to be string truncation caused by the string size limit on a Text Input column.

 

Thumbs-up those who lost their minds after discovering that the error was that! 😄

Referencing the previous idea: Inputs/Output should have the option to read/write a compressed file (ZIP or GZIP)

 

This idea has been implemented for inputting .zip files. However, we still need the Run Command workaround for outputs (see the sketch below). It's very common for users to want to output their .csv, .xlsx, or .pdf files to a .zip, and the functionality would also need to extend to Gallery.
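What the workaround boils down to is a post-processing step like the following, a minimal sketch that zips a file the workflow has already written (file names are illustrative):

```python
import zipfile

# Compress a finished workflow output into a .zip archive.
with zipfile.ZipFile("report.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.write("report.csv", arcname="report.csv")
```

A native option on the Output Data tool would remove the need to shell out for this.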

 

See the following threads from people looking for this type of functionality:

https://community.alteryx.com/t5/Alteryx-Server-Discussions/Download-Multiple-Outputs-from-the-Galle...

https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Output-files-to-ZIP/td-p/163502

https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Zip-files/td-p/151456

 

Feel free to merge this idea with the previous one for continuity.

When using the Output Data tool, it would save me and my cluttered organizational skills a lot of effort if the writing workflow were saved as part of the .yxdb metadata.

I've often had to search to find the workflow that created a .yxdb. I tend to use naming conventions to help me, but it would be easier if the workflow file name and/or path could be found directly.

cheers,

 

 mark

Hello Dev Gurus - 

 

Populating a parent/child relationship into an RDBMS in Alteryx is a lot harder than it should be. The hacks workflows must go through to populate anything other than single tables make Alteryx extremely cumbersome for any non-trivial ETL process, especially if you live in an existing database environment that uses database-housed, sequentially generated primary key values.

 

Consider a workflow that generates data to the point where you create a cascading set of rows:

 

  • One super parent row in Table A.
  • Two child rows in Table B.
  • Two child rows in Table C for each row in Table B.

In Alteryx, the only way I have found to do this is by altering my tables to add UUID columns, populating them, and then using dynamic selects afterwards, i.e.:

 

  • Populate the row bound for Table A with a UUID I generate in-stream.
  • Insert the new row into Table A.
  • Perform a dynamic select from Table A where UUID == current row UUID.
  • Get the primary key back out of that result set.
  • Add the primary key to the data bound for Table B.
  • Generate a UUID column for the data bound for Table B.
  • Repeat...

Not only this, but the 'technique' mandates that you either use a Block Until Done and a batch macro to ensure the data going to Table A finishes up, or a Block Until Done with the WaitASecond tool for the lazy among us (like me). It is amazingly clunky.

 

In my coding days, there were a variety of ORM tools that would let you do an insert and then immediately hand back the primary key that was generated, whether the key was created by the database or by the code library itself. If Hibernate 3.0, released 12 years ago, could make this work, I'm pretty sure the people creating Alteryx tomorrow can do the same thing one way or the other.

 

Basically, what we need is an output/insert tool with a data stream back out: the original data plus the fancy new primary key assigned to each row. Easy mode is to have it operate only on tables with sequentially generated primary key values. Alteryx 2022.1 mode is to give the user some key-generation options at tool configuration time.
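For illustration, this is the insert-and-return pattern such a tool would wrap. A minimal sketch with psycopg2 against PostgreSQL's RETURNING clause (the connection string, table, and column names are hypothetical):

```python
import psycopg2

conn = psycopg2.connect("dbname=etl_demo")  # placeholder connection string
cur = conn.cursor()

# Insert the parent and get its database-generated key back immediately.
cur.execute(
    "INSERT INTO table_a (name) VALUES (%s) RETURNING id",
    ("super parent",),
)
parent_id = cur.fetchone()[0]

# Children can now reference the parent key with no UUID round trip.
for child in ("child 1", "child 2"):
    cur.execute(
        "INSERT INTO table_b (parent_id, name) VALUES (%s, %s) RETURNING id",
        (parent_id, child),
    )
    b_id = cur.fetchone()[0]  # available for Table C inserts in turn

conn.commit()
```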

 

Thank you for listening to my Ted Talk regarding improvements to the output tool to make ETL operations more efficient.  

Currently we are able to write data to an .xlsb file if the Access Driver 2010 is installed. What we are not able to do is write data to a specific cell location, the way we can with .xlsx files. For example:

 

C:\Users\kk\Downloads\Testfile.xlsx|||'Sheet1$A2:C4'  (This works)

C:\Users\kk\Downloads\Testfile.xlsb|||'Sheet1$A2:C4' (This doesn't work currently in Alteryx)

C:\Users\kk\Downloads\Testfile.xlsb|||Sheet1 (This too works)

 

The problem is that for .xlsb files Alteryx goes through the Access database engine. If we could find the correct range syntax to pass to the database engine, the way we do in VBA scripts, then everyone in the BFSI sector would have highly space-optimized, pixel-perfect reporting possible in Alteryx itself.
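Until then, the usual workaround is to drive Excel itself. A minimal sketch via COM automation with pywin32, reusing the range from the example above (the values are illustrative):

```python
import win32com.client

excel = win32com.client.Dispatch("Excel.Application")
excel.Visible = False
wb = excel.Workbooks.Open(r"C:\Users\kk\Downloads\Testfile.xlsb")
try:
    # Write a 3x3 block of values into a specific cell range of the .xlsb.
    wb.Worksheets("Sheet1").Range("A2:C4").Value = (
        (1, 2, 3),
        (4, 5, 6),
        (7, 8, 9),
    )
    wb.Save()
finally:
    wb.Close(SaveChanges=False)
    excel.Quit()
```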

Hi GUI Gang

 

At the moment, I have a lovely formatted Excel file with corporate branding, logos, filled cells, borders, etc. The data from the Alteryx output needs to start in cell B6. I have tried pointing the Output tool at this named range, but Alteryx destroys all the Excel formatting in the data block.

 

As a workaround from the forums, many Alteryx users pump the data out to a hidden "Output" tab and then put formulas like =Output!A1 in the formatted sheet. This looks messy to the users, who then go hunting for the hidden tab. Personally, I end up pumping the workflow out to a temporary CSV file, then opening that in Excel, selecting all, and pasting values into the pretty Excel file.

 

This is fine for one file, but I need to split the output report block by a country field and do this hundreds of times at each month end.

 

Please can we have an output tool that does the same as my workaround: outputs directly from a workflow to a range in Excel without destroying the workbook's formatting.
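A minimal sketch of that behavior with openpyxl, which writes values into an existing workbook while leaving cell styling in place (the template, sheet name, and start cell are illustrative; some objects such as charts may not survive an openpyxl round trip):

```python
from openpyxl import load_workbook

rows = [("France", 1200), ("Germany", 980)]  # stand-in for the workflow output

wb = load_workbook("branded_template.xlsx")
ws = wb["Report"]

# Write values starting at B6; existing fills, borders, and fonts remain.
for r, row in enumerate(rows, start=6):
    for c, value in enumerate(row, start=2):  # column B is index 2
        ws.cell(row=r, column=c, value=value)

wb.save("branded_report.xlsx")
```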

 

Jay

Please add official support for newer versions of Microsoft SQL Server and the related drivers.

 

According to the data sources article for Microsoft SQL Server (https://help.alteryx.com/current/DataSources/SQLServer.htm), and validation via a support ticket, only the following products have been tested and validated with Alteryx Designer/Server:

 

Microsoft SQL Server

Validated On: 2008, 2012, 2014, and 2016.

  • No R versions are mentioned (2008 R2, for instance)
  • SQL Server 2017, which was released in October of 2017, is notably missing from the list.
  • SQL Server 2019, while fairly new (~6 months old), is also missing

This is one of the most popular data sources, and the lack of support for newer versions (especially a 2+ year old product like SQL Server 2017) is hard to fathom.

 

ODBC Driver for SQL Server/SQL Server Native Client

Validated on ODBC Driver: 11, 13, 13.1

Validated on SQL Server Native Client: 10,11

Alteryx 2019.4 added support in the Input tool for Tableau .hyper extract files. The tables stored in .hyper files have a schema and a table name. Tableau's old .tde files, and .hyper files created by Alteryx and Tableau Desktop, use "Extract.Extract" as the schema.tablename. However, when using Tableau's Hyper API, the default schema is "public" and the table name is arbitrarily specified by the user or application.

 

This has two impacts:

1) Without this support Alteryx can't open many .hyper files created by other applications. By way of example I've attached a sample .hyper file (in a .zip because the community software doesn't allow .hyper files) that has the schema.tablename "public.table1".

2) Support for names beyond Extract.Extract is also required in order to support multiple table extracts (submitted as a separate Idea).

 

Please update the Input tool so the user can select the particular schema and table name from the .hyper file.
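For reference, the Hyper API makes the schema and table names discoverable, which is exactly what the Input tool could surface in a picker. A minimal sketch (the database file name is illustrative):

```python
from tableauhyperapi import Connection, HyperProcess, Telemetry

with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    with Connection(endpoint=hyper.endpoint, database="example.hyper") as conn:
        # List every schema and the tables it contains, e.g. "public"."table1".
        for schema in conn.catalog.get_schema_names():
            for table in conn.catalog.get_table_names(schema):
                print(table)
```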

 

Jonathan

Tableau's Hyper file structure can store multiple tables, and the published Hyper API exposes a SQL interface. Therefore, instead of supporting only the standard file-based interface (like text, Excel, etc.) for connecting to Hyper files, how about supporting the database server interface used for MS SQL Server, PostgreSQL, etc., so we can select the schema, tables, and fields, or even write SQL?
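That SQL interface already exists in the Hyper API. A minimal sketch querying a schema-qualified table (the file and table names are illustrative):

```python
from tableauhyperapi import Connection, HyperProcess, TableName, Telemetry

with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    with Connection(endpoint=hyper.endpoint, database="example.hyper") as conn:
        table = TableName("public", "table1")
        # Arbitrary SQL against the .hyper file, just like a database server.
        count = conn.execute_scalar_query(f"SELECT COUNT(*) FROM {table}")
        print(count)
```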

 

Two related ideas:

 

Supporting alternative schema & table names: https://community.alteryx.com/t5/Alteryx-Designer-Ideas/Input-tool-Support-more-than-Extract-Extract...

Supporting multiple table extracts: https://community.alteryx.com/t5/Alteryx-Designer-Ideas/Input-tool-support-multiple-table-extracts-f...

 

Jonathan

 

Alteryx 2019.4 introduced support for Tableau's .hyper extract format; however, it only supports single-table extracts. .hyper files have supported multiple tables since mid-2018, so I'd like Alteryx to support that as well.

 

Here are a couple of current use cases (as of February 2020) and one future one.

 

- We have malaria incidence data that is joined to multiple sets of spatial data. Doing all of the joins in the extract creation process to build a single table extract is not possible due to processing time & memory constraints, so we use a multiple-table extract.

- There are multiple ways to do row level security in Tableau. A common way is to have separate tables for the data & the entitlements and then use calculations at run-time to filter the data, and for that having a multiple table extract is ideal.

- In 2020 Tableau will be introducing new data modeling capabilities (this was first demoed at the 2018 Tableau Conference, there were sessions on it at the 2019 Tableau Conference) where one goal is vastly improved performance for large fact table to fact table joins where previously we'd have to do much more data preparation. This is another case where multiple table extracts would be useful.

 

I've attached a sample Hyper file with two tables in the extract (it's zipped because the Community site doesn't accept .hyper files).

 

Supporting alternative schema and table names in Hyper extracts https://community.alteryx.com/t5/Alteryx-Designer-Ideas/Input-tool-Support-more-than-Extract-Extract... is a prerequisite for this because by definition multiple table extracts have multiple table names.

 

A related idea is supporting multiple table extracts for the Output tool: https://community.alteryx.com/t5/Alteryx-Designer-Ideas/Support-multiple-table-extracts-in-the-Table...

 

Jonathan

Hi Alteryx 🙂

 

When you set maximum records per file, the filename gets _# appended.  Great!  But in reality you get:

 

Filename.csv

Filename_1.csv

Filename_2.csv

 

The first filename doesn't get a number.  I think that it should.
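A minimal sketch of the proposed behavior, numbering every split file consistently (the helper function is hypothetical):

```python
def split_file_names(base, ext, count):
    """Name every chunk Filename_1.csv, Filename_2.csv, ... with no unnumbered first file."""
    return [f"{base}_{i}.{ext}" for i in range(1, count + 1)]

print(split_file_names("Filename", "csv", 3))
# ['Filename_1.csv', 'Filename_2.csv', 'Filename_3.csv']
```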

 

Cheers,

 

Mark
