Alteryx Designer Ideas

Share your Designer product ideas - we're listening!

Featured Ideas

Dear Alteryx team and community,

All the best for 2021!

Thank you very much for enhancing the Excel output option in Alteryx Designer to preserve existing formatting.

For a lot of my use cases this is very helpful!

 

Still, there are some use cases left. When I want to overwrite a calculated/linked number (e.g. a calculated prediction) with the actual number, it would be very helpful to be able to write into those cells as well. At the moment Alteryx does the job, but I receive a lot of Excel errors (XML errors) and a corrupt Excel file when overwriting calculated/linked fields.

 

Is there a chance to extend the current setup for all of those cases?
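
For illustration, a minimal sketch of the desired behavior using openpyxl outside Alteryx (the file, sheet, and cell names are made up; note openpyxl keeps cell styles but can drop artifacts such as charts):

```python
from openpyxl import load_workbook

wb = load_workbook("forecast.xlsx")   # hypothetical workbook
ws = wb["Plan"]                       # hypothetical sheet
ws["C5"] = 1234.56                    # replaces the formula in C5 with the actual number
wb.save("forecast.xlsx")              # cell styles survive; charts may not
```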

 

Thanks and best regards

Christoph

This is a QoL-request, and I love me some QoL-updates!

 


 

While I'm developing, I often need the output of a workflow as input for the next phase of my development. For example: an API run returns a job location, status, and authentication IDs. I want to use these in a new workflow to start experimenting with what will work best. Because of the experimenting, I always do this in a new workflow rather than caching and continuing in my main flow.

 

Writing a temporary output file always feels like an unnecessary step, and to be honest I don't want to write a file for a step that will be gone before it reaches production. Especially if there is sensitive information in it.

 

Thanks.

Hello Alteryx Dev Gurus - 

 

We are migrating, and some workflows that used to update a data source successfully are now giving a useless error message: "An unknown error occurred."

 

Back in my coding days, we could configure the ORM to be highly verbose at database interaction time, to the point where it would show you every SQL statement it was trying to execute, and this was extremely useful at debug time. Somewhere down the pipe, Alteryx is generating a SQL statement to perform an update, so why not have something on the Runtime tab that says 'Show all SQL statements for Output tools'? Or allow it on an Output-tool-by-Output-tool basis? If this was possible by changing a log4j properties file 15 years ago, I'm pretty sure it can be done today.
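
For reference, that ORM-style verbosity still exists today; a minimal SQLAlchemy sketch (the DSN is a placeholder) where setting the engine logger to INFO echoes every SQL statement as it runs:

```python
import logging

from sqlalchemy import create_engine, text

logging.basicConfig(level=logging.INFO)
logging.getLogger("sqlalchemy.engine").setLevel(logging.INFO)  # log all SQL

engine = create_engine("mssql+pyodbc://@my_dsn")               # placeholder DSN
with engine.begin() as conn:
    conn.execute(text("UPDATE t SET col = 1 WHERE id = 2"))    # statement is logged
```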

 

Thank you for attending my TED talk on how letting detailed SQL statements bubble back up to the user would be a useful feature improvement.

This idea is to fix one of the Power BI Output tool options for existing datasets.

 

Currently, if the 'Replace existing dataset' option is selected, the dataset is dropped and replaced with one having the same name. The problem with this is that any reports or dashboards using that dataset become invalid (likely due to a changed internal identifier).

 

The idea is to change the 'Replace existing dataset' functionality to delete and replace the data within a dataset, rather than deleting and replacing the dataset itself.

 

This behavior is described in the following thread and flagged as 'solved', although the workaround isn't practical as a true solution to the issue. We'd like to see this supported more seamlessly by Alteryx.

https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Publish-to-Power-BI-breaks-linked-Powe...
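
In REST terms, the ask is: keep the dataset (and its internal identifier) and replace only its rows. A hedged sketch against the Power BI REST API's push-dataset row endpoints (the token, dataset id, and table name are placeholders, and this only applies to push datasets):

```python
import requests

rows_url = ("https://api.powerbi.com/v1.0/myorg/datasets/<datasetId>"
            "/tables/MyTable/rows")
headers = {"Authorization": "Bearer <access_token>"}

# Delete the rows, not the dataset, so reports keep their reference.
requests.delete(rows_url, headers=headers).raise_for_status()

# Push the refreshed rows into the same dataset.
requests.post(rows_url, headers=headers,
              json={"rows": [{"Country": "US", "Sales": 100}]}).raise_for_status()
```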

This has probably been mentioned before, but in case it hasn't....

 

Right now, if the Dynamic Input tool skips a file (which it often does!), it just raises a warning and continues processing. While continuing is still useful, could an option be added to the tool to 'error if files are skipped'?

 

Right now it is easy to miss that this is happening, and in production / on Server you may want the process to stop instead.

 

Thanks,

 

Andy 

 

 

Hi Alteryx community,

 

It would be really nice to have v_string/v_wstring with the maximum character size as the standard for text columns in the Text Input tool.


I've lost count of how many times I've found that an error was caused by string truncation due to the string size limit from the Text Input tool.

 

Thumbs-up those who lost their minds after discovering that the error was that! 😄

I would like it to be easier to change Input (and Output) tools to UNC paths. I think adding it to the right-click menu would be great. Currently, I have to go to Options >> Advanced >> Workflow Dependencies. A right-click option would be easier.

 

Thanks!

 

 

AD/LDAP Authentication should be an option for the Mongo tool, and the ability to use Gallery Connections would also be great. Local SQL authentication is no longer allowed in most enterprises to simplify security configuration control.

Referencing the previous idea: Inputs/Output should have the option to read/write a compressed file (ZIP or GZIP)

 

This idea has been implemented for inputting .zip files. However, we still need to use the Run Command workaround for outputs. It's very common for users to want to output their .csv, .xlsx, or .pdf files to a .zip. The functionality would also need to extend to the Gallery.
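
For context, the Run Command workaround boils down to a small script like this (a minimal sketch; the paths are made up):

```python
import zipfile

# Bundle the files the Output tools just wrote into a single archive.
with zipfile.ZipFile(r"C:\reports\month_end.zip", "w", zipfile.ZIP_DEFLATED) as z:
    z.write(r"C:\reports\summary.xlsx", arcname="summary.xlsx")
    z.write(r"C:\reports\detail.csv", arcname="detail.csv")
```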

 

See the following links for people that are looking for this type of functionality:

https://community.alteryx.com/t5/Alteryx-Server-Discussions/Download-Multiple-Outputs-from-the-Galle...

https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Output-files-to-ZIP/td-p/163502

https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Zip-files/td-p/151456

 

Feel free to merge this idea with the previous one for continuity.

As of 2019.4+, Alteryx leverages the Tableau Hyper API to output .hyper files. Unfortunately, our hardware is not compatible with the Tableau Hyper API. It would be great if Alteryx could offer the best of both worlds: use the new Tableau Hyper API when possible, but fall back to the old method (pre-2019.4) when the machine's hardware doesn't support it. Thanks!

As of Alteryx 2020.3, the Browse tool no longer shows a profile of the complete dataset (profiling is capped once the record data size reaches 300 MB).

 

My proposed solution is an optional override of the record size limit on the Browse tool (which will make profiling take longer, but will actually profile the entire dataset). I would also like a general user setting to set the default behavior of the Browse tool to either limited or unlimited.

 

Below is the newly included documentation of the Data Profiling Limit, which I'm proposing can be overridden.

 

 

Data Profiling Limit
Data Profiling in the Browse tool is capped at 300 MB. This allows you to process very large datasets faster. For each record in the incoming dataset, we process the record and add the record size to a counter. Once the counter reaches 300 MB, we stop processing records.

It is important to note that there is no specific number of records that we can process. This depends on the dataset since a record size can range from 1 byte to a few thousand bytes. This record size is different from the file size, displayed in the Results grid and Data Profiling Holistic View. The file size is generally different since it has been compressed to optimize spacing.

In other words, 300 MB of record size is not the same as 300 MB of file size.
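
As a toy sketch of the documented behavior (not Alteryx's actual code; the records are simulated):

```python
records = [b"x" * 1024] * 1_000_000   # simulate one million 1 KB records
LIMIT = 300 * 1024 * 1024             # the 300 MB cap

profiled, counter = [], 0
for record in records:
    profiled.append(record)           # profile the record
    counter += len(record)            # add its size to the counter
    if counter >= LIMIT:
        break                         # later records never reach the profile

print(f"profiled {len(profiled):,} of {len(records):,} records")
```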

 

 

 

This new limit can cause confusion when looking at the data profile (e.g. if you expect the sum to be $3 million but the Browse tool is only profiling 2% of your total records, the profile sum may only show $60 thousand).

 

The sampled version with a 300 MB cutoff is rarely useful when using Browse tools to get a quick sense of the variable profiles on medium-sized datasets (around 1 million records), since these rarely fit within the 300 MB record size limit.

 

An example is shown in the image below, where the dataset contains 855,085 records but the Browse tool profiles only the first 20,338.

 

[Image alteryxExample1.png: Browse tool data profile showing only 20,338 of 855,085 records profiled]

 

Again, being able to override this 300 MB record size limit would fix the problem introduced by the 2020.3 change to the Browse tool.

 

 

 

Hello all,


DuckDB is a new embeddable database project from the team behind MonetDB. From what I understand, it's like a SQLite database but for analytics (a columnar-vectorized query execution engine on a single file). And of course it's open source and free.

More info on their website: https://duckdb.org/
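
Until there's a native connector, it's already reachable from the Python tool; a minimal sketch (the file name is made up):

```python
import duckdb

con = duckdb.connect("analytics.duckdb")  # a single-file database, like SQLite
con.execute("CREATE OR REPLACE TABLE t AS SELECT 42 AS answer")
print(con.execute("SELECT * FROM t").fetchall())  # [(42,)]
con.close()
```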


Best regards,


Simon

The guideline for the Shapefile format is below. It recommends that you use only letters and numbers.

 

"Spaces and certain characters are not supported in field names. Special characters include hyphens such as in x-coordinate and y-coordinate; parentheses; brackets; and symbols such as $, %, and #. Essentially, eliminate anything that is not alphanumeric or an underscore."

https://desktop.arcgis.com/en/arcmap/latest/manage-data/tables/fundamentals-of-adding-and-deleting-f...

 

But many GIS tools can read and write 2-byte field names in Shapefiles.

(e.g. QGIS https://qgis.org/en/site/index.html)

And Esri Japan says Shapefiles can use 2-byte field names:

https://www.esrij.com/gis-guide/esri-dataformat/shapefile/

 

We want to use 2-byte field names in Shapefiles in Alteryx Designer.

(e.g. UTF-8, Shift-JIS)
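
For comparison, this is how the Python GIS stack handles it; a hedged GeoPandas sketch (the file and field names are made up):

```python
import geopandas as gpd

gdf = gpd.read_file("cities.shp", encoding="cp932")  # Shift-JIS (Windows cp932)
gdf = gdf.rename(columns={"NAME": "都市名"})          # a 2-byte (Japanese) field name
gdf.to_file("cities_ja.shp", encoding="cp932")       # written back in the same encoding
```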

 

 

Thanks,

Kajitani

 

Have External Tables in Snowflake be accessible in the Visual Query Builder.

Current state: external tables in the Snowflake DBMS are not available in the 'Visual Query Builder' tab of the green Input Data tool. These tables are only available in the 'Tables' tab.

When using the Output Data tool, it would save me and my cluttered organizational skills a lot of effort if the writing workflow were saved as part of the .yxdb metadata.

I've often had to search to find the workflow that created a .yxdb. I tend to use naming conventions to help me, but it would be easier if the file and/or path were easily found.

cheers,

 

Mark

Please add official support for newer versions of Microsoft SQL Server and the related drivers.

 

According to the data sources article for Microsoft SQL Server (https://help.alteryx.com/current/DataSources/SQLServer.htm), and validation via a support ticket, only the following products have been tested and validated with Alteryx Designer/Server:

 

Microsoft SQL Server

Validated On: 2008, 2012, 2014, and 2016.

  • No R versions are mentioned (2008 R2, for instance)
  • SQL Server 2017, which was released in October of 2017, is notably missing from the list.
  • SQL Server 2019, while fairly new (~6 months old), is also missing

This is one of the most popular data sources, and the lack of support for newer versions (especially a 2+ year old product like SQL Server 2017) is hard to fathom.

 

ODBC Driver for SQL Server/SQL Server Native Client

Validated on ODBC Driver: 11, 13, 13.1

Validated on SQL Server Native Client: 10, 11
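
In practice, many of us already connect through the newer, unvalidated drivers; a minimal pyodbc sketch (the server and database names are placeholders):

```python
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"  # the driver shipped for SQL Server 2017+
    "Server=myserver;Database=mydb;Trusted_Connection=yes;"
)
print(conn.cursor().execute("SELECT @@VERSION").fetchone()[0])
```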

Hi GUI Gang

 

At the moment, I have a lovely formatted XLS with corporate branding, logos, filled cells, borders, etc. The data from the Alteryx output needs to start in cell B6. I have tried the output tools with this named range, but Alteryx destroys all of the Excel cell formatting in the data block.

 

As a workaround on the forums, many Alteryx users pump the output to a hidden "Output" tab and then reference it (e.g. =Output!A1) in the formatted sheet. This looks messy to users, who then go hunting for the hidden tab. Personally, I end up pumping the workflow out to a temporary CSV file, opening that in Excel, selecting all, and then pasting values into the pretty Excel file.

 

This is fine for one file, but I need to split the output report block by a country field and do this hundreds of times at each month end.

 

Please can we have an output tool that does the same as my workaround: output directly from a workflow to a range in Excel without destroying the workbook's formatting.
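
In the meantime, here is a sketch of what that tool would do, via openpyxl in the Python tool (the template, sheet, and data are made up; openpyxl preserves cell styles but can drop artifacts such as charts):

```python
from openpyxl import load_workbook

wb = load_workbook("branded_template.xlsx")    # the pretty, formatted workbook
ws = wb["Report"]                              # hypothetical sheet name

data = [("France", 100), ("Germany", 200)]     # stand-in for the workflow output
for r, row in enumerate(data, start=6):        # data block starts at row 6...
    for c, value in enumerate(row, start=2):   # ...column B
        ws.cell(row=r, column=c, value=value)  # values only; formatting untouched
wb.save("report_FR.xlsx")
```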

 

Jay

Alteryx 2019.4 introduced support for Tableau's .hyper extract format; however, it only supports single-table extracts. .hyper files have supported multiple tables since mid-2018, so I'd like Alteryx to support that as well.
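
On the format side this is straightforward; a hedged Tableau Hyper API sketch that writes two tables into one extract (the table, column, and file names are illustrative):

```python
from tableauhyperapi import (Connection, CreateMode, HyperProcess, Inserter,
                             SqlType, TableDefinition, Telemetry)

incidence = TableDefinition("incidence", [
    TableDefinition.Column("region", SqlType.text()),
    TableDefinition.Column("cases", SqlType.big_int()),
])
entitlements = TableDefinition("entitlements", [
    TableDefinition.Column("region", SqlType.text()),
    TableDefinition.Column("username", SqlType.text()),
])

with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    with Connection(hyper.endpoint, "multi_table.hyper",
                    CreateMode.CREATE_AND_REPLACE) as conn:
        conn.catalog.create_table(incidence)    # two tables, one .hyper file
        conn.catalog.create_table(entitlements)
        with Inserter(conn, incidence) as ins:
            ins.add_row(["north", 120])
            ins.execute()
```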

 

Here are a couple of current use cases (as of February 2020) and one future one.

 

- We have malaria incidence data that is joined to multiple sets of spatial data. Doing all of the joins during extract creation to build a single-table extract is not possible due to processing time and memory constraints, so we use a multiple-table extract.

- There are multiple ways to do row-level security in Tableau. A common way is to have separate tables for the data and the entitlements, then use calculations at run time to filter the data; for that, a multiple-table extract is ideal.

- In 2020 Tableau will be introducing new data modeling capabilities (first demoed at the 2018 Tableau Conference, with sessions at the 2019 Tableau Conference) where one goal is vastly improved performance for large fact-table-to-fact-table joins that previously required much more data preparation. This is another case where multiple-table extracts would be useful.

 

I've attached a sample Hyper file with two tables in the extract (it's zipped because the Community site doesn't accept .hyper files).

 

Supporting alternative schema and table names in Hyper extracts https://community.alteryx.com/t5/Alteryx-Designer-Ideas/Input-tool-Support-more-than-Extract-Extract... is a prerequisite for this because by definition multiple table extracts have multiple table names.

 

A related idea is supporting multiple table extracts for the Output tool: https://community.alteryx.com/t5/Alteryx-Designer-Ideas/Support-multiple-table-extracts-in-the-Table...

 

Jonathan

 

 

 

Alteryx 2019.4 added support in the Input tool for Tableau .hyper extract files. The tables stored in .hyper files have a schema and a table name. Tableau's old .tde files, and Hyper files created by Alteryx and Tableau Desktop, use "Extract.Extract" as the schema.tablename. However, when using Tableau's Hyper API, the default schema is "public" and the table name is arbitrarily specified by the user or application.

 

This has two impacts:

1) Without this support, Alteryx can't open many .hyper files created by other applications. By way of example, I've attached a sample .hyper file (in a .zip, because the community software doesn't allow .hyper files) that has the schema.tablename "public.table1".

2) Support for names beyond Extract.Extract is also required in order to support multiple-table extracts (submitted as a separate idea).

 

Please update the Input tool so the user can select the particular schema and table name from the .hyper file.
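
The Hyper API already exposes what the Input tool would need to enumerate; a minimal sketch (the file name is illustrative):

```python
from tableauhyperapi import Connection, HyperProcess, Telemetry

with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    with Connection(hyper.endpoint, "sample.hyper") as conn:  # open existing file
        for schema in conn.catalog.get_schema_names():
            for table in conn.catalog.get_table_names(schema):
                print(table)  # e.g. "public"."table1" rather than "Extract"."Extract"
```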

 

Jonathan

 

 

It would be wonderful for Alteryx to be able to connect to and query OData feeds natively, rather than using a 3rd-party driver or custom macro.   

 

OData querying is supported by quite a few familiar products, including Excel, Power BI, SSIS/SSRS, FME (Safe Software), Tableau, and many others. And the protocol is used to publish feeds from Microsoft Dynamics and SharePoint, as well as many of the 10,000 publicly available government datasets with APIs (especially those hosted by Socrata).

 

I didn't see it in the Ideas section, but questions and workarounds have been discussed in the community a few times (11/15, 3/18, 4/18), and the suggestions seem to be to buy the $400-600 ODBC driver from CDATA (or ZappySys), use a VBA script in Excel to trigger a refresh, or create my own Alteryx connector macro (great series, btw, though most of it was beyond my understanding!).

   

While I'm not opposed to paying, kludging, or learning to program, each is just one more thing to build/buy, install, maintain, and have break at the most inconvenient time 🙂

 

Thanks,
Chadd

 

OData Overview:

OData (Open Data Protocol) is an ISO/IEC-approved OASIS standard that defines a set of best practices for building and consuming RESTful APIs. OData helps you focus on your business logic while building RESTful APIs without having to worry about the various approaches to define request and response headers, status codes, HTTP methods, URL conventions, media types, payload formats, query options, etc. OData also provides guidance for tracking changes, defining functions/actions for reusable procedures, and sending asynchronous/batch requests. OData RESTful APIs are easy to consume. The OData metadata, a machine-readable description of the data model of the APIs, enables the creation of powerful generic client proxies and tools.

More info at http://odata.org
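
As a taste of how simple consumption is, a hedged sketch against the public OData sample service at services.odata.org (in Designer today this would live in the Python or Download tool):

```python
import requests

url = "https://services.odata.org/V4/Northwind/Northwind.svc/Customers"
resp = requests.get(url,
                    params={"$top": 3, "$select": "CustomerID,CompanyName"},
                    headers={"Accept": "application/json"})
resp.raise_for_status()
for row in resp.json()["value"]:  # OData v4 wraps results in a "value" array
    print(row["CustomerID"], row["CompanyName"])
```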
