
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

Idea: Allow the user to set the data type including character field width in the Text Input tool.

 

The Text Input tool currently auto-senses the type and width of each field. However, this sometimes restricts how the data can be used downstream.

 

Examples:

1 - I often run into the situation where I've copied some data from a Browse tool and pasted it as the input to a new workflow, then turned that workflow into a macro. But then I run into an issue where the data that comes into the macro is wider than the original field width in the Text Input tool and gets truncated, which causes problems downstream.

 

2 - The tool senses that a field containing ZIP codes should be numeric and converts the data. This drops the leading zeros and forces me to insert a Select/Formula tool combo to pad them back on the left.

A common problem with the R tool is that it outputs "false errors" like the following: "The R.exe exit code (4294967295) indicated an error"

I call this a false error because data passes out of the R script the same as if there were no error. As such, this error can generally be ignored. In my use case, however, my R tool is embedded within an iterative macro, and the error causes the iterator to stop running.

 

I was able to create a workaround by moving the R tool to a separate workflow and calling it from the CReW runner macro within my iterator, effectively suppressing the error message, but this solution is a bit clumsy, requires unnecessary read/writes, and uses nonstandard macros.

 

I propose the solution suggested by @mbarone (https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Boosted-Model-Error/td-p/5509) to only generate an error when the R return code is 1, indicating a true error, and to either ignore these false errors or pass them as warnings. This will allow R scripts and R-based tools to be embedded within iterative macros without breaking.
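To illustrate the requested behaviour, here is a minimal sketch (Python, outside of Designer) of the exit-code policy described above; the Rscript call and the warning handling are assumptions for illustration, not Alteryx's actual implementation:

```python
import subprocess
import sys
import warnings

def run_r_script(script_path: str) -> int:
    """Run an R script and apply the proposed policy: only exit code 1 is a
    true error; any other nonzero code (e.g. 4294967295) is downgraded to a
    warning so an enclosing iterative process can keep running."""
    result = subprocess.run(["Rscript", script_path], capture_output=True, text=True)
    if result.returncode == 1:
        # True error reported by R: surface it and stop.
        raise RuntimeError(f"R script failed:\n{result.stderr}")
    if result.returncode != 0:
        # "False error": output was still produced, so warn instead of failing.
        warnings.warn(f"Rscript exited with code {result.returncode}; treating it as a warning.")
    return result.returncode

if __name__ == "__main__":
    run_r_script(sys.argv[1])
```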

 

 

The .bak file that is automatically created (and re-created if deleted) really clutters up our folders.

Please allow us to either turn it off, or specify a different location to hold our backup files.

Thanks

Hi,

A lot of companies now are deploying on both AWS and Microsoft Azure.

Alteryx supports AWS S3 object storage out of the box; it would be important to support Microsoft Azure Blob Storage as part of the native Alteryx product as well.
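In the meantime, blob data can be pulled in through the Python tool; a minimal sketch assuming the azure-storage-blob package, with placeholder connection string, container, and blob names:

```python
# Minimal sketch: read a CSV from Azure Blob Storage in the Python tool.
# Assumes `pip install azure-storage-blob pandas`; the connection string,
# container, and blob names below are placeholders.
import io
import pandas as pd
from azure.storage.blob import BlobServiceClient

conn_str = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;EndpointSuffix=core.windows.net"
service = BlobServiceClient.from_connection_string(conn_str)
blob = service.get_blob_client(container="my-container", blob="data/sales.csv")

df = pd.read_csv(io.BytesIO(blob.download_blob().readall()))
print(df.head())
```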

Cheers,

Adrian

When using the Output Data tool, it would save me and my cluttered organizational skills a lot of effort if the workflow that wrote the file were saved as part of the yxdb metadata.

I've often had to search to find the workflow which created a yxdb. I tend to use naming conventions to help me, but it would be easier if the workflow's file name and/or path were easy to find.

cheers,

 

 mark

It would be great if we could have a Windows Active Directory data connector tool added to the standard Alteryx toolset.

 

MS Excel Power Query and Power BI can both connect to Active Directory as a data source, but both are very cumbersome to use. Having a connector in Alteryx that can read AD data into a workflow would be super helpful for a long list of use cases. A couple that are top of mind for me are:

 

-Leveraging group membership info for dynamic distribution of reports or datasets

-Being able to build reporting and dashboards about the organization (useful for Tech audit, HR, etc.) 

 

I've seen links to an old project on GitHub from someone who started development on this, but the method (just copy these random .dlls into your program directory) is seriously frowned upon by any enterprise IT. It would be great if Alteryx could pick up that work, polish it a bit, and add it to the actual Alteryx Designer toolset.
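As a stopgap, AD can already be queried from the Python tool over LDAP; a minimal sketch assuming the ldap3 package, with placeholder server, credentials, and DNs:

```python
# Minimal sketch: pull group membership from Active Directory via LDAP.
# Assumes `pip install ldap3`; server, credentials, group, and base DN are placeholders.
from ldap3 import Server, Connection, ALL, SUBTREE

server = Server("ldaps://dc.example.com", get_info=ALL)
conn = Connection(server, user="EXAMPLE\\svc_alteryx", password="********", auto_bind=True)

conn.search(
    search_base="DC=example,DC=com",
    search_filter="(&(objectClass=user)(memberOf=CN=ReportReaders,OU=Groups,DC=example,DC=com))",
    search_scope=SUBTREE,
    attributes=["sAMAccountName", "mail", "department"],
)

# Each entry is one member of the group, ready to drive report distribution.
for entry in conn.entries:
    print(entry.sAMAccountName, entry.mail, entry.department)
```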

Statistics are used by a lot of databases (Hive, Vertica, etc.) to improve query speed. It would be interesting to have an option on the Write Data In-DB or Data Stream In tools to calculate statistics (something like a check box).

 

Example on Hive: ANALYZE TABLE {table} COMPUTE STATISTICS; ANALYZE TABLE {table} COMPUTE STATISTICS FOR COLUMNS;

Hi there,

 

My idea comes from building an application where the user selects a filter value from a drop-down list. The list contains thousands of records, so it takes a lot of time to find the desired record.

In Excel and MS Access, when you use a filter you can type several letters and the filter shows the rows that match the input. In Alteryx the user can only type the first letter, which is a huge drawback for my users.

 

This is how it works in Excel:

[Screenshot: werwe.jpg]

 

Hope you like it!

Given that the Crew Macro Pack increases Alteryx's capability so much, and is used so pervasively, is there a reason not to include it in Alteryx Designer or Alteryx Server by default?

 

Can anyone give a reason why Alteryx wouldn't bundle Crew Macro Pack?

 

If not, can we get Crew Macro Pack bundled into Alteryx and have official support for it?

I propose another wildcard, %ErrorLog%, that would simply output the error codes and narratives instead of having to use the %OutputLog% to see these.  I'd rather not have a 4 MB text email depicting every line of code and action in the module when all I really need to see are the errors.

It would be great if there was an option in the configuration of the Output Tool to create the output directory if it doesn't already exist. Maybe also to append instead of overwrite for all file types too?
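For reference, this is the behaviour being requested; a minimal sketch of the workaround available today via the Python tool (the output path is a placeholder):

```python
# Minimal sketch: create the output directory if it is missing before writing,
# which is what the requested Output tool option would do automatically.
import os
import pandas as pd

out_path = r"C:\Reports\2024\Q1\summary.csv"
os.makedirs(os.path.dirname(out_path), exist_ok=True)  # no error if it already exists

df = pd.DataFrame({"example": [1, 2, 3]})
df.to_csv(out_path, index=False)
```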

It would be great to have the below functionality in Alteryx.

A workflow is built in Alteryx, and a button click in Alteryx could generate SQL code that can be run on a specific database platform, such as SQL Server, in external editors such as SQL Server Management Studio. Thanks.

Would like to directly query a Hyperion Cube / Essbase data source - please provide this functionality in the next release or add a user macro to the Gallery. Thanks -cb

In the community and in mixed teams, it's very common for people to be caught out by the error "This document was created in a more recent version". Although there are several workarounds (e.g. this one from @WayneWooldridge: https://community.alteryx.com/t5/Alteryx-Knowledge-Base/Adjusting-Alteryx-Files-for-Different-Versio...), this seems like it may be an easy problem to solve more permanently.

 

Could we add an option to Alteryx to save the file with the lowest compatible version number?

So - for example - if I'm only using components that shipped with version 10, then please mark the file as version 10. If I've used a tool that shipped in 11.0.6, then that needs to be the version number.

 

This way, files will be backward-compatible by default as far as possible, unless newer components are used.
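For context, the manual workaround in the linked article boils down to editing the version attribute in the workflow XML; a minimal sketch of that, assuming the root element's yxmdVer attribute and a placeholder file path and target version (it only helps when no newer-version tools are actually used):

```python
# Minimal sketch of the manual workaround: rewrite the version attribute in a
# workflow's XML so an older Designer will open it. The path and target
# version are placeholders, and the yxmdVer attribute name is an assumption
# based on how .yxmd files are commonly described.
import xml.etree.ElementTree as ET

path = "MyWorkflow.yxmd"
tree = ET.parse(path)
root = tree.getroot()          # e.g. <AlteryxDocument yxmdVer="...">
root.set("yxmdVer", "10.6")
tree.write(path, encoding="utf-8", xml_declaration=True)
```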

 

Many thanks

Sean

Please include IBM DB2 as an in-Database option. Currently, my primary use of Alteryx is for copying DB2 tables into Teradata for use on that server. Copying large tables and particularly joining several tables and copying the results to Teradata is too slow in Alteryx.

Alteryx Server was recently updated to allow TLS-mediated connections to the MongoDB persistence layer. This allowed us to switch off of the embedded MongoDB to a highly-available MongoDB Atlas cluster. To our surprise after the switch, when we went to edit our workflows that make use of the persistence layer's data (Server Usage Report, etc.) to hit the new Atlas cluster, we found that the MongoDB Input tool does not support TLS connections. This absolutely needs to be changed. Based on organizational constraints, Atlas is our only option for a HA persistence layer. We absolutely have to have TLS support for the MongoDB Input tool. There is no other way for us to natively query our server persistence layer in Designer. Please bring the MongoDB Input tool into alignment with the MongoDB connections that are supported by Alteryx Server.
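Until the tool is updated, the closest native route is the Python tool with pymongo over TLS; a minimal sketch with placeholder connection string, database, and collection names (AlteryxService / AS_Results are assumptions for illustration):

```python
# Minimal sketch: query a TLS-enabled MongoDB (e.g. Atlas) from the Python tool,
# since the MongoDB Input tool cannot. Assumes `pip install pymongo pandas`;
# the connection string, database, and collection names are placeholders.
import pandas as pd
from pymongo import MongoClient

client = MongoClient(
    "mongodb+srv://user:password@cluster0.example.mongodb.net/?retryWrites=true",
    tls=True,
)
collection = client["AlteryxService"]["AS_Results"]

# Pull a sample of documents into a DataFrame for downstream use.
df = pd.DataFrame(list(collection.find({}).limit(100)))
print(df.head())
```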

 

Presently, when mapping an Excel file in the Input Data tool, the tool only recognizes sheets; it does not recognize named tables (ranges) as possible inputs. When using Power BI to read Excel inputs I can select either sheets or named ranges as the input. The Alteryx Input tool should do the same.
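As an interim route, a named table can be read via the Python tool; a minimal sketch assuming openpyxl 3.x, with placeholder file, sheet, and table names:

```python
# Minimal sketch: read a named Excel table (not just a sheet) with openpyxl.
# Assumes `pip install openpyxl pandas`; file, sheet, and table names are placeholders.
import pandas as pd
from openpyxl import load_workbook

wb = load_workbook("Budget.xlsx", data_only=True)
ws = wb["Inputs"]
ref = ws.tables["SalesTable"].ref        # the table's cell range, e.g. "B2:E50"

rows = list(ws[ref])
header = [cell.value for cell in rows[0]]
df = pd.DataFrame([[cell.value for cell in row] for row in rows[1:]], columns=header)
print(df.head())
```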

Hello,

 

  After using the new "Image Recognition Tool" for a few days, I think you could improve it:

  > by listing the dimensional constraints in front of each of the pre-trained models,

  > by adding a proper tool to split the training data correctly (so that each label has an equivalent number of images),

  > and, at the very least, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me it requires RGB images); a conversion sketch follows below.
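In the meantime, grayscale images (e.g. MNIST) can be expanded to RGB before they reach the tool; a minimal sketch using Pillow, with placeholder paths:

```python
# Minimal sketch: convert grayscale images to RGB so an RGB-only model can
# accept them. Assumes `pip install pillow`; the source and destination
# directories are placeholders.
from pathlib import Path
from PIL import Image

src = Path("mnist_png/train")
dst = Path("mnist_rgb/train")

for png in src.rglob("*.png"):
    out = dst / png.relative_to(src)
    out.parent.mkdir(parents=True, exist_ok=True)
    Image.open(png).convert("RGB").save(out)   # replicate the single channel into R, G, B
```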

 

  Question: will you, in the future, allow the user to choose between CPU and GPU usage?

 

  In any case, thank you again for this new tool; it can certainly be improved, but it is very simple to use, and I sincerely think it will allow a greater number of people to understand the many use cases made possible by image recognition.

 

  Thank you again

  Kévin VANCAPPEL (France ;-))

 


When using the text mining tools, I have found that a template is only applied to pages that have the same page number as the template.

 

So in my use case I've got a PDF file with 100+ claim statements which are all laid out the same (one page per statement). When setting up the template I used one page to set the annotations, and then input this into the T anchor of the Image to Text tool. Into the D anchor of this tool is my PDF document with 100+ pages. However when examining the output I only get results for page 1.

 

On examining the JSON for the template I can see that there is reference to the template page number:

[Screenshot: cgoodman3_0-1604393391514.png]

 

Playing around with a Generate Rows tool and a formula to replace the page number with pages 1-100 in the JSON doesn't work. I then discovered that if I change the page number on the image input side, I get the desired results.

 

[Screenshot: cgoodman3_1-1604393499357.png]

However, as I suspect this is a common use case for the Image to Text tool, an improvement would be an option in the tool's configuration to apply the same template to all pages.

 

[Screenshot: cgoodman3_4-1604393738275.png]

 

 

 

 

 

I reported this to the support team but was told it was by design and to post here.

 

In-DB Inefficient SQL

I would like to report that the In-DB tools are generating horribly inefficient SQL code for simple operations. It seems that no matter what tools you use, every statement starts with a nested 'SELECT * FROM'.

 

Example Simple workflow:

[Screenshots: Support1.jpg, Support2.jpg]

 

This is a simple Select and Group by but the SQL Generated is:

 

SELECT "ShipTo", "ShipTo_Name", SUM("ECM_3PL_OVERHEADS_Unit") AS "Sum_ECM_3PL_OVERHEADS_Unit"

FROM (SELECT * FROM "_SYS_BIC"."shell.app.gsap.FL000_LSC.FL002_CTS.INT.RPT/CA_CTS_RPT_MAIN_001") AS "a"

GROUP BY "ShipTo", "ShipTo_Name"

 

This is taking a very long time to execute:

 

Statement 'SELECT "ShipTo", "ShipTo_Name", SUM("ECM_3PL_OVERHEADS_Unit") AS "Sum_ECM_3PL_OVERHEADS_Unit" FROM ...'

successfully executed in 15.752 seconds  (server processing time: 15.699 seconds)

 

Whereas if I take the same query and remove the nested Select *:

 

SELECT "ShipTo", "ShipTo_Name", SUM("ECM_3PL_OVERHEADS_Unit") AS "Sum_ECM_3PL_OVERHEADS_Unit"

FROM "_SYS_BIC"."shell.app.gsap.FL000_LSC.FL002_CTS.INT.RPT/CA_CTS_RPT_MAIN_001" AS "a"

GROUP BY "ShipTo", "ShipTo_Name"

 

It is very quick:

 

Statement 'SELECT "ShipTo", "ShipTo_Name", SUM("ECM_3PL_OVERHEADS_Unit") AS "Sum_ECM_3PL_OVERHEADS_Unit" FROM ...'

successfully executed in 1.211 seconds  (server processing time: 1.157 seconds)

 

So Alteryx is generating queries up to 13x slower than they should be, thereby defeating the point of using In-DB. As you can imagine, in a workflow with multiple Connect In-DB tools this adds up to a really substantial amount of time. The example above is from an SAP HANA DB view with 1.9m rows and ~90 columns, but we have much bigger tables/views than this.

 

If you look, you will see the same behaviour for all In-DB tools: each tool creates another nested SELECT with its particular operation.

 

MY SUGGESTION:

So my suggestion is that Alteryx should combine the SQL of the first few tools and avoid using SELECT * completely unless no Select tools have been used.  So it should combine:

- Connect In-DB + Select

- Connect In-DB + Filter

- Connect In-DB + Summarise

 

Preferably it should combine/flatten everything up until the first join or union.  But Select + Filter are a must!

 

Note: it seems some DBs can cope OK with un-nesting these big nested queries in their query plans for some tables, but normally not for views. Some cannot cope at all, and so the In-DB tools cannot even be used to browse 100 records (due to the SELECT *).
