
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

As @JordanB mentioned in his post (https://community.alteryx.com/t5/Alteryx-Knowledge-Base/Stop-workflow-on-a-condition/tac-p/74403#M19...) - there's a common need to stop a workflow when a condition is met.

However, at present there's no way to do this without generating an error.

 

Please can we either alter the Message/Test tool to allow for error-free termination on a formula condition, or alternatively implement the fuller idea that Mark ( @MarqueeCrew) mentioned in his programmatic Detour idea?

 

https://community.alteryx.com/t5/Alteryx-Product-Ideas/Programmatic-Detour/idi-p/12763
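In the meantime, one workaround is to starve downstream tools rather than error out. A minimal sketch inside the Alteryx Python tool, assuming its standard ayx helper - the stop condition itself is invented for illustration:

from ayx import Alteryx

df = Alteryx.read("#1")                        # incoming records
stop = df.empty                                # hypothetical stop condition
# Emitting an empty frame gives downstream tools nothing to process,
# so the flow effectively "stops" without raising an error.
Alteryx.write(df.head(0) if stop else df, 1)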


When entering a number of column names in the RegEx tool's parse mode - please can you allow either Enter or down-arrow to move down to the next cell (the standard Windows convention)?

 

Currently, Enter just exits the edit mode, and down-arrow does nothing.

 


 

cc: @Hollingsworth 

When we industrialize our workflows, we often use a parameter file with a command like:


AlteryxEngineCmd.exe MyAnalyticApp.yxwz AppValues.xml


I would like to have the parameter file path, with its extension, available as an engine constant, like we have for the workflow name.
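For illustration, here is roughly what we have to do today inside a Python tool - the AppValues path must be hard-coded, which is exactly what the requested constant would remove. The <Value name=...> layout below is an assumption about the AppValues.xml schema, mirroring the value files we've seen:

import xml.etree.ElementTree as ET

# Hard-coded today; the requested engine constant would supply this path.
app_values_path = r"C:\jobs\AppValues.xml"

tree = ET.parse(app_values_path)
for value in tree.iter("Value"):               # assumed AppValues.xml layout
    print(value.get("name"), (value.text or "").strip())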

As a best practice, I'd like any drive mappings automagically changed to UNC paths when saving my workflows. This applies to both local and Gallery saves.
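For reference, a sketch of the drive-to-UNC lookup the save step could perform, using the Windows WNetGetConnectionW API via ctypes (Windows only; the drive letter is an example):

import ctypes
from ctypes import wintypes

def drive_to_unc(drive):
    # Ask Windows for the UNC path behind a mapped drive letter.
    buf = ctypes.create_unicode_buffer(1024)
    size = wintypes.DWORD(len(buf))
    rc = ctypes.windll.mpr.WNetGetConnectionW(drive, buf, ctypes.byref(size))
    return buf.value if rc == 0 else None      # 0 == NO_ERROR

print(drive_to_unc("Z:"))                      # e.g. \\server\share if Z: is mapped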

 

Cheers,

 

Mark

The Azure Machine Learning Training and Scoring Tools seem like a great way to improve the Azure ML process.

Introducing: The Azure Machine Learning Training and Scoring Tools 

We tried to use these tools but can't log in to Azure ML correctly. We have several tenant IDs, and the tools log us in to the tenant used for Office 365, not the one for Azure ML.

====================== <Error Message> ==========================================================
Error: Azure ML Training (367): UserErrorException:
    Message: You are currently logged-in to 55f0a...-.............................................. tenant. You don't have access to d846a...-............................................. subscription, please check if it is in this tenant. All the subscriptions that you have access to in this tenant are =
 [SubscriptionInfo(subscription_name='Microsoft Azure Enterprise', subscription_id='754c5...-...........................')].
 Please refer to aka.ms/aml-notebook-auth for different authentication mechanisms in azureml-sdk.
    InnerException None
    ErrorResponse
=======================================================================================================

Microsoft states that the tenant needs to be specified if you have access to multiple tenants.

Set up authentication for Azure Machine Learning resources and workflows 
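For what it's worth, the azureml-sdk already accepts a tenant at authentication time, so exposing a Tenant ID field should be enough. A sketch with invented IDs and names:

from azureml.core import Workspace
from azureml.core.authentication import InteractiveLoginAuthentication

# tenant_id pins the login to the Azure ML tenant instead of the
# Office 365 one (all IDs and names below are placeholders).
auth = InteractiveLoginAuthentication(tenant_id="55f0a000-0000-0000-0000-000000000000")
ws = Workspace(
    subscription_id="d846a000-0000-0000-0000-000000000000",
    resource_group="my-resource-group",
    workspace_name="my-aml-workspace",
    auth=auth,
)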

Could you add a Tenant ID field to the Azure credentials so that we can use these tools?


I would like Alteryx to offer a native Fuzzy Join tool that allows two datasets with completely different schemas to be joined using fuzzy-matching logic (Dice coefficient, Levenshtein distance, etc.). Any matches would be output to a new table with either exactly matched or fuzzy-matched primary and secondary records. I want this tool to be supported by Server as well.
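To make the matching logic concrete, here is a minimal sketch of a bigram Dice-coefficient match between names from two schemas (the 0.5 threshold and the sample values are arbitrary):

def bigrams(s):
    s = s.lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def dice(a, b):
    # Dice coefficient: 2 * |A intersect B| / (|A| + |B|) over bigrams.
    ba, bb = bigrams(a), bigrams(b)
    return 2 * len(ba & bb) / (len(ba) + len(bb)) if ba and bb else 0.0

primary = ["Acme Corporation", "Globex LLC"]
secondary = ["ACME Corp.", "Initech Inc."]

for p in primary:
    for s in secondary:
        score = dice(p, s)
        if score >= 0.5:                       # arbitrary match threshold
            print(f"{p!r} ~ {s!r}  Dice={score:.2f}")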

Hi there,

 

When creating a database connection - Alteryx's default behaviour is to create an ODBC DSN-linked connection.

 

However, DSN-linked connections do not work in a large server environment - because this would require administrators to create these DSNs on every worker node and on every disaster-recovery node, and to update them all every time a canvas changes.

They are also not fully safe, because part of the configuration of your canvas is held in the DSN - so you cannot rely solely on the code that's under version control.

 

So:

Could we add a feature to Alteryx Designer that allows a user to expand a DSN into a fully-declared connection string?

In other words - if the connection string is listed as 

- odbc:DSN=DSNSnowFlakeTest;UID=Username;PWD=__EncPwd1__|||NEWTESTDB.PUBLIC.MYTESTTABLE

Then offer the user the ability to expand this out - by interrogating the ODBC connection manager - into the fully described connection string, like this:
odbc:DRIVER={SnowflakeDSIIDriver};UID=Username;pwd=__EncPwd1__;authenticator=Snowflake;WAREHOUSE=compute_wh;SERVER=xnb27844.us-east-1.snowflakecomputing.com;SCHEMA=PUBLIC;DATABASE=NewTestDB;Staging=local;Method=user

 

NOTE: This is exactly what users have to do manually today anyway to get a DSN-less connection string - they create a file DSN to figure out all the attributes (by opening it up in Notepad) and then paste these into the connection string manually.
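A sketch of how that expansion could work programmatically on Windows: user DSN attributes live in the registry under HKCU\Software\ODBC\ODBC.INI (system DSNs are under HKLM), so the DRIVER= string can be assembled directly. Error handling is omitted:

import winreg

def expand_dsn(dsn):
    root = winreg.HKEY_CURRENT_USER
    # The "ODBC Data Sources" key maps each DSN to its driver name.
    with winreg.OpenKey(root, r"Software\ODBC\ODBC.INI\ODBC Data Sources") as key:
        driver, _ = winreg.QueryValueEx(key, dsn)
    parts = ["DRIVER={%s}" % driver]
    # Every other attribute of the DSN sits under its own key.
    with winreg.OpenKey(root, r"Software\ODBC\ODBC.INI" + "\\" + dsn) as key:
        i = 0
        while True:
            try:
                name, value, _ = winreg.EnumValue(key, i)
            except OSError:                    # no more values
                break
            if name.lower() != "driver":
                parts.append("%s=%s" % (name, value))
            i += 1
    return ";".join(parts)

print(expand_dsn("DSNSnowFlakeTest"))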

 

Thanks all 

Sean


Hello Team,

 

Currently, in the Select tool, we have to scroll up or down to see the list of fields. I work on mid-size data that sometimes contains 300+ fields, and if I need to change a data type I have to find the field by scrolling up or down.

 

The idea here is: provide a search bar under Field - it would be a great help to all. If anyone needs a specific field, the user just types the name in the search bar and makes the change quickly. The Select tool is important, and we spend a lot of time in it while working on a flow.

 

Thank you,

Mayank


Hi there,

 

Adam ( @AdamR_AYX ), Mark ( @MarqueeCrew) and many others have done a great job in putting together super helpful add-in macros in the CREW pack - and James ( @jdunkerley79 ) has really done an incredible job of filling in some gaps in a very useful way in the formula tools.

 

Would it be possible to include a subset of these in the core product as part of the next release?

I'm thinking of (but others will chime in here to vote for their favourite):

- Unique only tool (CReW)

- Field Sort (CReW)

- Wildcard XLSX input (CReW) - this would eliminate a whole category of user queries on the discussion boards

- Runner (CReW) - although this may have issues with licensing, since many people don't have command-line permission; Alteryx really does need the ability to do chained dependency flows in a smoother way

- Date Utils (JDunkerley) - all of James's date utils; again, these would immediately solve many of the support questions asked on the discussion forum

 

I think that these would really add richness & functionality to the core product, and at the same time get ahead of many of the more common queries raised by users.   I guess the only question is whether the authors would have any objection?

 

Thank you

Sean

Please enhance the Input tool with a feature to test whether the file is there, and another to allow the workflow to pause for a definable period if the input file is locked by another user, then retry opening. The pause time-frame would be definable in N seconds, and the number of iterations it cycles through should also be definable, so you can limit how many attempts it makes to open a file.

 

File presence should be something we could use to control workflow processing.  

 

A use case would be a process that runs periodically, looks to see if a file is there, and if so opens and processes it. If the file is not there, it either goes to sleep for a definable period before trying again, or simply ends the workflow without attempting to run any downstream tools that might otherwise throw "errors" trying to process a null stream.

 

An extension of this idea would be a separate tool that could evaluate a condition - a null stream, field content, or a file-not-found condition - and terminate the process without raising an error, or be configurable so you can choose whether or not an error is raised.

 

Using this latter idea, we have an enhanced input tool that can pass a value downstream or generate a null data stream to the next tool. This next tool can then evaluate a condition - like a filter tool - such as a null stream, a file-not-found indicator, or some other condition, and terminate processing per its configuration, either with or without a failure indicated, according to the wishes of the user. I have had times when a file was not there and I just wanted the workflow to stop without throwing errors; other times I may want it to error out so I investigate. In other scenarios, my data goes through a filter or two, no data passes the last filter, and downstream tools still run and generally cause a failure as they have no data to act on - and I don't want that, since it may be perfectly valid that on a Sunday or holiday no data passes the filters.

 

Having meandered through this, I'll sum up: the ideal is to enhance the Input tool to test file presence and pass that info on to another tool that can evaluate it and control the workflow run accordingly. As a separate tool, it could be applied to a wider variety of scenarios and test a broader scope of conditions to decide whether to proceed or terminate the workflow.
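To illustrate the retry half of this, a minimal sketch (the file name, delay and attempt count are placeholders):

import time

def open_with_retry(path, attempts=5, delay_seconds=30):
    # Try to open the input; if it's locked or missing, wait and retry,
    # up to a configurable number of attempts.
    for attempt in range(1, attempts + 1):
        try:
            return open(path, "rb")
        except (PermissionError, FileNotFoundError):
            if attempt == attempts:
                raise           # or: return None to end the flow quietly
            time.sleep(delay_seconds)

handle = open_with_retry(r"\\server\share\daily_extract.csv")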

 

This functionality would allow the user to select (through a highlight box, or Ctrl+click) only the tools in a workflow they want to run, and the tools that are not selected would be skipped. The idea is similar to the new "add selected tools to a new tool container", but it would run them instead.

 

I know the conventional wisdom is to either put everything you don't want run into a tool container and disable it, or just copy/paste the tools you want run into a blank workflow. However, for very large workflows it is very time-consuming to disable a dozen or more containers only to re-enable them shortly afterwards, especially if those containers have to be created just to isolate the tools that need to be run. Overall, this would be a quality-of-life improvement that could save the user time, especially with large or cumbersome workflows.

This idea arose from a conversation with a colleague, @Carlithian, where we were trying to work out a way to remove tools from the canvas which might be redundant - for example, a Select tool added to the canvas that hasn't been configured to change a data type or rename a field. So we were looking for ways of identifying, in the workflow XML, tools which don't have a configuration applied to them.
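A rough sketch of that scan - the plugin name and XML layout are taken from inspecting .yxmd files and may vary by version; the workflow file name is a placeholder:

import xml.etree.ElementTree as ET

tree = ET.parse("MyWorkflow.yxmd")             # placeholder workflow file
for node in tree.iter("Node"):
    gui = node.find("GuiSettings")
    if gui is None or "AlteryxSelect" not in (gui.get("Plugin") or ""):
        continue
    config = node.find(".//Configuration")
    fields = [] if config is None else config.findall(".//SelectField")
    # A Select tool whose only entry is the default "*Unknown" field
    # changes nothing and is a candidate for removal.
    if all(f.get("field") == "*Unknown" for f in fields):
        print("Possibly redundant Select tool, ToolID", node.get("ToolID"))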

 

This highlighted to me an issue with something like the data cleanse tool, which is a standard macro.

 

The xml view of the data cleanse configuration looks like this:

<Configuration>
  <Value name="Check Box (135)">False</Value>
  <Value name="Check Box (136)">False</Value>
  <Value name="List Box (11)">""</Value>
  <Value name="Check Box (84)">False</Value>
  <Value name="Check Box (117)">False</Value>
  <Value name="Check Box (15)">False</Value>
  <Value name="Check Box (109)">False</Value>
  <Value name="Check Box (122)">False</Value>
  <Value name="Check Box (53)">False</Value>
  <Value name="Check Box (58)">False</Value>
  <Value name="Check Box (70)">False</Value>
  <Value name="Check Box (77)">False</Value>
  <Value name="Drop Down (81)">upper</Value>
</Configuration>

 

As it is a macro, the default labelling of the interface tools is what gets written into the XML. If you were to do something useful with it, wouldn't it be much nicer if the interface tools were named properly - such as:

[screenshot: the Data Cleanse macro's interface tools given meaningful names]

So when you look at the XML of the workflow, it's clearer to the user what is actually specified.

[screenshot: workflow XML showing the meaningful value names]


Sometimes I need to connect to the data in my database after doing some filtering and modeling with CTEs. To ensure that the connection runs quicker than with the regular Input tool, I would like to use the In-DB tools. But it doesn't work, because the In-DB input tool doesn't support CTEs. CTEs are helpful in everyday life, and it would be terribly tedious to replicate all my SQL logic in Alteryx on top of what I'm already doing inside the tool.
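A minimal illustration of what appears to be the failure mode - the In-DB input seems to wrap the entered query as a derived table, and SQL does not allow a WITH clause inside one (this wrapping is an assumption based on observed errors; table and column names are invented):

# The query you'd paste into the In-DB input:
user_query = """
WITH recent AS (
    SELECT customer_id, MAX(order_date) AS last_order
    FROM orders GROUP BY customer_id
)
SELECT * FROM recent
"""

# What apparently gets sent to the database - invalid once wrapped:
wrapped = "SELECT * FROM (%s) AS t" % user_query
print(wrapped)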

I found a lot of people having the same issue; it would be great if we could have that feature added to the tool.

It would be great if you could include, in the next version of Alteryx, a new Parse tool to process data set descriptions (metadata) formatted using the DCAT (W3C) standard.

DCAT is a standard for the description of data sets. It provides a comprehensive set of metadata that can be used to describe the content, structure, and lineage of a data set.

We believe that supporting DCAT in Alteryx would be a valuable addition to the product. It would allow us to:

  • Improve the interoperability of our data sets with other systems (M2M)
  • Make it easier to share and reuse our data sets
  • Provide a more consistent way to describe our data sets
  • Bring down the costs of describing and developing interfaces with other Government Entities
  • Work on making our data Findable - Accessible - Interoperable - Reusable (FAIR)

We understand that implementing support for this standard requires some development effort (perhaps done in stages, building from minimal viable support to full-blown support). However, we believe that the benefits to the Alteryx community worldwide, and to Alteryx as a top-quality data preparation tool, outweigh the cost.

 

I also expect the effort to be manageable (perhaps a macro will do as a start), since the standard RDF syntax being used is similar to JSON.
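To show how compact the standard is, here is a minimal, hypothetical DCAT description in JSON-LD, built as a Python dict (the property names come from the DCAT and Dublin Core vocabularies; the dataset itself is invented):

import json

dataset = {
    "@context": {
        "dcat": "http://www.w3.org/ns/dcat#",
        "dct": "http://purl.org/dc/terms/",
    },
    "@type": "dcat:Dataset",
    "dct:title": "Example sales extract",
    "dct:description": "Daily sales figures, one row per store.",
    "dcat:distribution": [{
        "@type": "dcat:Distribution",
        "dcat:downloadURL": "https://example.org/sales.csv",
        "dcat:mediaType": "text/csv",
    }],
}
print(json.dumps(dataset, indent=2))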

 

DCAT, which stands for Data Catalog Vocabulary, is a W3C Recommendation for describing data catalogs in RDF. It provides a set of classes and properties for describing datasets, their distributions, and their relationships to other datasets and data catalogs. This allows data catalogs to be discovered and searched more easily, and it also makes it possible to integrate data catalogs with other Semantic Web applications. 

DCAT is designed to be flexible and extensible, so it can be used to describe a wide variety of data sets. It is also designed to be interoperable, so it can be combined with other vocabularies to create rich and interconnected descriptions of data and knowledge.

 

Here are some of the benefits of using DCAT:

  • Improved discoverability: DCAT makes it easier to discover and use data sets, as it provides a standard way of describing their attributes.
  • Increased interoperability: DCAT allows data catalogs to be integrated with other Semantic Web applications, making it possible to create more powerful and interoperable applications.
  • Enhanced semantic richness: DCAT provides a way to add semantic richness to data set descriptions, making it possible to describe them in a more detailed and nuanced way.

Here are some examples of how DCAT is being used:

  • The DataCite metadata standard uses DCAT to describe data catalogs.
  • The European Data Portal uses DCAT to discover and search for data sets.
  • The Dutch Government made it a mandatory standard for all Dutch Government Agencies.

As the Semantic Web continues to grow, DCAT is likely to become even more widely used.


Hello all,

I'm trying to use Alteryx with MonetDB, a very cool open-source column-store database.
 
When I use the Visual Query Builder, I get this:

 

[screenshot: the Visual Query Builder shows the tables but no field names]

The field names are totally absent.



The reason is that Alteryx does not use the standard ODBC SQLColumns() function at all, but instead sends a query (here "select * from demo.exemplecomparetable.fruit1 a where 0=1") to get a sample of data. At the same time, MonetDB sends the error "SELECT: only a schema and table name expected" (not shown to the user - totally silent).

I think this should be implemented like this:
1/ Use the SQLColumns() function, which is an ODBC standard and should work most of the time.
2/ If SQLColumns() does not work, fall back to the current queries.
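For reference, step 1 sketched with pyodbc, whose cursor.columns() call maps onto the ODBC SQLColumns() function (the DSN is a placeholder; schema and table names are the ones from the example above):

import pyodbc

conn = pyodbc.connect("DSN=MonetDB-Test")      # placeholder DSN
cur = conn.cursor()
# cursor.columns() is pyodbc's wrapper around ODBC SQLColumns().
for row in cur.columns(table="fruit1", schema="exemplecomparetable"):
    print(row.column_name, row.type_name)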

It's widely discussed with the MonetDB team here:

https://github.com/MonetDB/MonetDB/issues/7313



Best regards,

Simon


All too often, we build an Alteryx flow just to realise that step 8 out of 10 was wrong - so it's back to the beginning to rerun the entire thing. This is often tedious if your work involves a big data set.

 

There is a workaround using the Cache macro, which can be downloaded (but this requires quite a bit of fiddling with containers, disabling items, setting flags, etc.) - it would be good to allow the user to "restart from here", like you can with a PowerPoint slide deck. I appreciate that this may be tricky, since Alteryx may be flushing data out of memory as it goes along and so cannot restart from an arbitrary point - but if we could put the workflow into a "testing cached mode" that caches data at each step, or allow users to set particular tools as breakpoints and cache at those points, that would help immensely.

 

Thank you

Sean

 

Right now it is not possible to open .xlsx files in Alteryx that have access restricted to specific users from the Excel file, even when you are logged in to Alteryx and Excel with the same user. If it were possible to make Alteryx recognize which users/email addresses should be able to input a file, I think it would be a great enhancement. To get around the problem, we currently change the file restrictions by right-clicking the file -> Properties -> Security, but this is time-consuming and not a smooth fix.

 

All the best,

Elin

We now have the ability to output to an ESRI File Geodatabase, which is great, but it only allows output in the WGS84 coordinate system. I would like the same functionality to export to other projections or coordinate systems, similar to the ESRI Shapefile or ESRI Personal Geodatabase output tools (we specifically need NAD83, but I'm sure others would like other options as well).
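For context, the reprojection we currently have to apply upstream ourselves, sketched with the third-party pyproj library (EPSG:4326 is WGS84 and EPSG:4269 is NAD83; the sample point is invented):

from pyproj import Transformer

transformer = Transformer.from_crs("EPSG:4326", "EPSG:4269", always_xy=True)
lon, lat = -105.0, 39.7                        # sample point near Denver
x, y = transformer.transform(lon, lat)
print(x, y)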

Hello all,

HDFS (Hadoop Distributed File System) connections are widely used to load data efficiently into Hadoop - for Hive, Spark or Impala. However, they are not compatible with the new DCM (Data Connection Manager).


Best regards,

Simon

Most databases treat null as "unknown", and as a result null fails all comparisons in SQL. For example, null does not match null in a join, and null will fail any > or < tests. This is ANSI- and ISO-standard behaviour.

 

Alteryx treats null differently - if you have two data sets going into a join, a row with value null will match a row with value null.

 

We've seen this create confusion for our users who are becoming more fluent in SQL and who are using In-DB tools - where the query layer treats null differently than the Alteryx layer does.

 

Could we add a setting flag to Alteryx so that users can turn on ISO / ANSI standard processing of Null so that data works the same at all levels of the query stack?
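A toy sketch of what such a flag would change (pure Python, not Alteryx internals): with ansi_nulls=True a null key matches nothing, per the SQL standard; with ansi_nulls=False nulls match each other, as Alteryx joins do today:

def join(left, right, ansi_nulls=True):
    out = []
    for l in left:
        for r in right:
            if l is None or r is None:
                # SQL: null compares as unknown, so never a match.
                if not ansi_nulls and l is None and r is None:
                    out.append((l, r))
            elif l == r:
                out.append((l, r))
    return out

print(join([1, None], [None, 1], ansi_nulls=True))   # [(1, 1)]
print(join([1, None], [None, 1], ansi_nulls=False))  # [(1, 1), (None, None)]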

 

Many thanks

Sean
