The Product Idea boards have gotten an update to better integrate them within our Product team's idea cycle! However, this update does have a few unique behaviors; if you have any questions about them, check out our FAQ.

Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!
Submitting an Idea?

Be sure to review our Idea Submission Guidelines for more information!


Featured Ideas

The Find Replace tool has a checkbox to do a case-insensitive find. It would be fabulous if the Join and Join Multiple tools had a similar checkbox.

 

I frequently have to create a new field in each data stream, convert the data I want to join on to upper case, perform the join, and then remove the extra "helper" fields. Using helper fields is necessary in my case to preserve the original capitalization (e.g., acronyms within the string).
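For anyone hitting the same limitation, the workaround looks roughly like this (field names are just for illustration): add a Formula tool on each stream to create a helper field, join on the helper fields, then deselect them in the Join tool's configuration:

JoinKey_Upper = Uppercase([JoinKey])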

Formula Tool --> Functions --> Operators list

 

The operator titles for the two comment functions are too similar; the difference cannot be determined without checking the hover text.

Can the title for /* Comment */ be adjusted to make it clearer that it is for block or multi-line usage?

I didn't understand the difference until I saw this post on LinkedIn:
https://www.linkedin.com/feed/update/urn:li:activity:7165816592063266817/

/* Comment */ --> /* Block Comment */   |   /* Multi-line Comment */
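For reference, a quick sketch of how the two behave in an expression (the expression itself is made up):

// Single-line comment: everything to the end of the line is ignored
/* Block comment: everything between the markers is ignored,
   even when it spans multiple lines */
IIF([Sales] > 0, "Active", "Inactive")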

 


Good morning!

 

This may be a very simple thing, but would it be possible to add a DateTimeQuarter() function? We have DateTimeSecond, DateTimeMinute, DateTimeDay, DateTimeMonth, and DateTimeYear, and having an equally easy function for the quarter would be incredibly convenient.
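In the meantime, a workaround in the Formula tool (a sketch assuming a Date or DateTime field named [Date]) is to derive the quarter from the month:

Quarter = Ceil(DateTimeMonth([Date]) / 3)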

 

Thanks,

Kat

In some cases, the information about incoming columns to tools is (temporarily) forgotten, e.g. if Autoconfig is switched off, if the incoming connection is temporarily missing, or if column names are generated dynamically and the workflow has not been executed yet.

Many tools deal with that situation well, e.g. Select, Formula, or Summarize. In these cases, the tools tell the user that they cannot find the incoming columns, but they preserve the configuration, so the user can still (at least partially) work on these tools and important configuration information is not lost:

 

Example Select Tool

  1. First step: Connections present, configuration typed in:
    [Screenshot: select-step1-configuration_entered.png]
  2. Second step: Connection cut, configuration opened. The configuration looks screwed up but implicitly contains all settings:
    [Screenshot: select-step2-incoming_connection_missing.png]
  3. Third step: Connection re-connected. The configuration is as before:
    [Screenshot: select-step3-incoming_connection_present.png]

 

Other tools behave in the opposite way, for example Unique or Macro Input (and surely many other tools). If the incoming columns are currently unknown to the Designer and you click once on the tool, the entire configuration of that tool is lost. You might try to get the configuration back by pressing undo, but in most cases this does not work. Or, even worse, you find out what happened later, when it's too late for undo. In that case, you either have an old version of the workflow to look up the configuration, or you have to re-develop it. In any case, this is unnecessary and time-consuming software behaviour.

 

Example Unique Tool

  1. Step 1: Connections present, configuration typed in:
     [Screenshot: unique-step1-configuration_entered.png]
  2. Step 2: Connection cut, configuration opened. The configuration is empty:
    [Screenshot: unique-step2-incoming_connection_missing.png]
  3. Step 3: Connection re-connected. The entire configuration is permanently lost:
    [Screenshot: unique-step3-incoming_connection_present.png]

I wasn't sure whether I should report this as a bug or a feature enhancement. It is somehow in between. Two aspects tell me that this should be changed:

  • Inconsistent behaviour of different tools for no reason,
  • Easy loss of programming work, resulting in time-consuming bug fixing.

Please make sure that all tools preserve their configuration even if information on incoming columns is temporarily lost.

Hi everyone,

 

Add two additional features to the Directory tool. Something like this:


Use cases:

1. Since it is not possible to use a folder browse on the Gallery, this could help a basic user create a list of possible folders to select from with the help of a drop-down.

2. Directory analysis for cleaning purposes: currently, if you want to get a list of folders with Alteryx, it takes forever on big file servers, since Alteryx is mapping all the files.

 

Both are achievable today through regex or a bat script.
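For use case 2, the bat-script route is essentially a one-liner (the path is just an example): /ad restricts the listing to directories, /b outputs bare names, and /s recurses into subfolders:

dir "D:\data" /ad /b /s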

 

Thank you,

Fernando Vizcaino

Alteryx offers the ability to add new formulae (e.g. the Abacus add-in) and new tools (e.g. the Marketplace, custom macros, etc.), which is a very valuable and valued way to extend the capability of the platform.

 

However, if you add a new function or tool that has the same name as an existing function or tool, this can lead to a confusing user experience (a namespace conflict).

 

Would it be possible to add a capability to Alteryx to help work around this? Two potential approaches are listed below:

- Check for name conflicts when loading tools or when loading Alteryx, and warn the user, e.g. "The Coalesce function in package CORE Alteryx conflicts with the same function name in package XXX - this may cause mysterious behaviours".

- Potentially allow prefixes to disambiguate functions with the same name, e.g. CoreAlteryx.Coalesce or Abacus.Coalesce. If a function is used in a formula in a way that is ambiguous (e.g. "Coalesce"), give the user a simple dialog that allows them to pick which one they meant, and then Alteryx can clean this up itself.

 

cc: @JarrodT  @NicoleJ 


Problem statement:

Currently we store our Alteryx data in the .yxdb file format, and whenever we want to fetch the data, the whole dataset first loads into memory; only then can we apply a Filter tool to get the required subset of the .yxdb data, which is a complete waste of time and resources.

 

Solution:

My idea is to introduce a YXDB SQL statement tool that can be used directly in a workflow to get the required dataset from a .yxdb file. I hope this will reduce the overall runtime of the workflow and let the user get the desired data in record time, which improves performance and reduces memory consumption.
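To illustrate the idea, such a tool might accept something like the following (the file, field, and table names are made up, and this syntax is only a sketch of what the tool could support):

SELECT OrderID, Region, Amount
FROM 'sales_2023.yxdb'
WHERE Region = 'EMEA'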

Hello all,

According to Wikipedia (https://en.wikipedia.org/wiki/Materialized_view):

 In computing, a materialized view is a database object that contains the results of a query. For example, it may be a local copy of data located remotely, or may be a subset of the rows and/or columns of a table or join result, or may be a summary using an aggregate function.

The process of setting up a materialized view is sometimes called materialization.[1] This is a form of caching the results of a query, similar to memoization of the value of a function in functional languages, and it is sometimes described as a form of precomputation.[2][3] As with other forms of precomputation, database users typically use materialized views for performance reasons, i.e. as a form of optimization.

 

So, I would like to create that in Alteryx, for obvious performance reasons in some use cases.
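For reference, this is what the feature looks like in databases that support it (PostgreSQL-style syntax as a sketch; the table and view names are made up):

CREATE MATERIALIZED VIEW monthly_sales AS
    SELECT region, SUM(amount) AS total_amount
    FROM sales
    GROUP BY region;

-- refreshed on demand, so readers get precomputed results
REFRESH MATERIALIZED VIEW monthly_sales;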

This is not a duplicate of https://community.alteryx.com/t5/Alteryx-Designer-Desktop-Ideas/In-DB-Create-View/idi-p/157886

 

Best regards,

Simon

Today, there is a checkbox to "Disable All Tools that Write Output" within the Runtime settings for a workflow.  Setting this option requires at least 3 clicks:

  • Click on the canvas
  • Click the "Runtime" tab in the Configuration pane
  • Click the checkbox

Could a keyboard shortcut be added for this?  I've spoken to several users who leverage this feature and, while it is already a time saver, it seems helpful enough that a keyboard shortcut is warranted.

Hello all,

According to Wikipedia (https://en.wikipedia.org/wiki/Embedded_database):

 

An embedded database system is a database management system (DBMS) which is tightly integrated with an application software; it is embedded in the application.

 

 

It's often a single file/DLL that you can use inside an application without the user having to connect to it (or at least having to configure the connection); it's all done inside the application. So, it's widely portable.


Why does it matter?

As of today, there is not a single example of an in-database workflow, because all the supported databases require the user to:

1. Install an ODBC driver (most of the time, they won't have the rights to do so)

2. Configure an ODBC connection (sometimes, they don't have the rights to)

3. Configure a connection in Alteryx (OK, that they can do)

So it requires IT action, which can take a long time (in many organizations, it requires several weeks!). And even with all of that, the users must be granted privileges to access the database, and the customer needs to develop its own examples and write its own specific documentation.

Well, this is not efficient.


What I suggest is for Alteryx to use an embedded database for training support/one-tool examples. SQLite seems good; maybe a more analytics-oriented one (like DuckDB) would be even more efficient.
The requirements are, I think, the following:
- Open source and free
- Fast
- SQL-compliant
- With a bulk-load ability
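To sketch the idea: a one-tool example could then ship a single database file and run plain SQL against it, with no driver or DSN setup (DuckDB-style syntax; the file and table names are made up):

ATTACH 'training_examples.duckdb' AS training;
SELECT category, AVG(price) AS avg_price
FROM training.products
GROUP BY category;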

Best regards,

Simon


When making any type of macro, it's important to test the functionality of the macro via a debug.  This is accomplished successfully with normal tools; however, there's a bug that will not allow the user to debug In-DB macros that use either of the following standard Alteryx tools:

  • Macro Input In-DB
  • Macro Output In-DB

 

If either of these tools is included in the macro you are building, an error message will appear and you will not be able to open a debug.

Error message: Question Tool Load Error:  A question tool with a tool id of XXX is missing the associated question data.

 

Of course, Macro Input and Output tools do not require any specific Action/Question tool associated with them.  This is a bug.  A user pointed out the XML issue almost 3 years ago here:

In summary: "It appears that the tool itself inserts a hidden Question attribute into the XML which can also be seen in Workflow Configuration"

Source:

https://community.alteryx.com/t5/Alteryx-Designer-Desktop-Discussions/In-DB-Macro-Input-and-Output-t...

 

Examples....

 

A normal macro, using standard tools:

[Screenshot: Debug_Standard1.png]

 

After debugging a standard macro, the Macro Input/Output tools correctly change to a Text Input and a Browse tool.  This allows the macro author to test the macro.

[Screenshot: Debug_Standard2.png]

 

However, when trying the same thing with In-DB tools in a macro, an error message appears:

In-DB macro 1:

[Screenshot: Debug_indb1.png]

 

In-DB Macro error message (after clicking "Open Debug"):

[Screenshot: Debug_indb2.png]

Hello all,

Sometimes, when it takes too long to retrieve your tables' metadata, you can get this message:


Initialization Timed Out: Workflow must be run for field meta info to be accurate.


From what I understand, it's Alteryx and the source system that drive the time-out value. However, I have some cases where the long retrieval time is "normal", and that really hurts the user experience.
So, I would like the ability to change the default time-out value in the settings.

Best regards,

Simon

We have lots of tools that create new column(s) from the inputs, e.g., Generate Rows. It would be very nice if the new column(s) were highlighted in the output. This would make it a lot easier for users when developing the workflow.

It would be great if we could add example workflows to our macros, accessible in the same way as for the original tools (an example hyperlink shown after single-clicking a tool in the tool palette or when searching in the search bar).

 

There is a post on how to do it for custom tools: "How to add an example link in the custom tool" (alteryx.com). The approach described there has limitations and does not seem to work for macros: I was able to get the link to show up, but nothing happens when I click it.

 

My suggestion: make it as easy to add an example workflow to a macro as it is to change the logo or add a help link.

[Screenshot: example workflow.png]

Add a Unicode category to the Data Cleansing tool.

It would be great if you could include, in the next version of Alteryx, a new Parse tool to process data set descriptions (metadata) formatted using the DCAT (W3C) standard.

DCAT is a standard for the description of data sets. It provides a comprehensive set of metadata that can be used to describe the content, structure, and lineage of a data set.

We believe that supporting DCAT in Alteryx would be a valuable addition to the product. It would allow us to:

  • Improve the interoperability of our data sets with other systems (M2M)
  • Make it easier to share and reuse our data sets
  • Provide a more consistent way to describe our data sets
  • Bring down the costs of describing and developing interfaces with other Government Entities
  • Work on some parts of making our data Findable – Accessible – Interoperable – Reusable (FAIR)

We understand that implementing support for this standard requires some development effort (possibly done in stages, building from minimal viable support to full-blown support). However, we believe that the benefits to the Alteryx Community worldwide, and to Alteryx as a top-quality data preparation tool, outweigh the cost.

 

I also expect the effort to be manageable (perhaps a macro will do as a start), given that the standard RDF syntax being used is similar to JSON.

 

DCAT, which stands for Data Catalog Vocabulary, is a W3C Recommendation for describing data catalogs in RDF. It provides a set of classes and properties for describing datasets, their distributions, and their relationships to other datasets and data catalogs. This allows data catalogs to be discovered and searched more easily, and it also makes it possible to integrate data catalogs with other Semantic Web applications. 

DCAT is designed to be flexible and extensible, so it can be used to describe a wide variety of data sets. It is also designed to be interoperable, so descriptions can be combined to create rich and interconnected descriptions of data and knowledge.
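As a minimal sketch, a DCAT description in RDF (Turtle syntax; the URIs, titles, and values are made up) looks like this:

@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix dct:  <http://purl.org/dc/terms/> .

<https://example.org/catalog/dataset-1>
    a dcat:Dataset ;
    dct:title "Example dataset" ;
    dct:description "A made-up data set description." ;
    dcat:distribution [
        a dcat:Distribution ;
        dcat:downloadURL <https://example.org/files/dataset-1.csv> ;
        dcat:mediaType "text/csv"
    ] .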

 

Here are some of the benefits of using DCAT:

  • Improved discoverability: DCAT makes it easier to discover and use data sets, as it provides a standard way of describing their attributes.
  • Increased interoperability: DCAT allows data catalogs to be integrated with other Semantic Web applications, making it possible to create more powerful and interoperable applications.
  • Enhanced semantic richness: DCAT provides a way to add semantic richness to data set descriptions, making it possible to describe them in a more detailed and nuanced way.

Here are some examples of how DCAT is being used:

  • The DataCite metadata standard uses DCAT to describe data catalogs.
  • The European Data Portal uses DCAT to discover and search for data sets.
  • The Dutch Government made it a mandatory standard for all Dutch Government Agencies.

As the Semantic Web continues to grow, DCAT is likely to become even more widely used.

 


I have developed many workflows, macros, and apps, and I have always had to find a workaround for displaying information on the user config page or user interface.

 

For example, I want to input 'Default text' into the Text Box interface tool, but the problem is that it does not accept any external connection.

It would be great if this tool had a Q input anchor that could accept data from a connected tool (in both single- and multi-line mode) or from an external input (such as a file, as the Drop Down and List Box tools do).

 

[Screenshot: TextBox.PNG]

[Screenshot: TextBox_with Default_text.PNG]

 

Hi

 

The action of the 'tab' key in the configuration window recently appears to have changed from indenting to a navigation function.

 

The user should be able to select which action the tab key performs. 

 

Alternatively, tab should indent and shift-tab (or an alternative) should navigate. I'm not the only one who would appreciate the choice.

 

PuffinPanic

 

I would love a tool to be created for looking up a value in a table based on a condition. It could be called "Lookup". One input to the tool would be the lookup list; the other would be the main database. Inside the tool, you could enter functions that query the lookup table and return the results, either as an overwrite of an existing field in the main DB or as a new field in the main DB, similar to the options in the Multi-Row Formula tool.

 

Here is a link to my post in the Community that explains the problem. The solution, in a nutshell, was to create a Join (which resulted in millions of additional rows), run the conditional formula, and then filter to get rid of the millions of rows created by the Join, so that only those that met the condition (the original database rows) remained.

 

Here is the text of my Community post describing my project (slightly modified for clarity):

 

Table 1:  A list of Pay Dates (the lookup table)

Table 2:  Daily timekeeper data with Week Start and Week End Date fields.

 

The goal:  To find the Pay Date in Table 1 that is greater than the Week Start Date in Table 2 and no more than 13 days after the Week End Date in Table 2.

 

[Table 2: Week Start Date] < [Table 1: Pay Date]

and [Table 2: Week End Date] < [Table 1: Pay Date]

and DateTimeDiff([Table 1: Pay Date], [Table 2: Week End Date], 'Days') <= 13
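In SQL terms, what I'm describing is a non-equi join; a sketch of the equivalent query (table and field names simplified, and the date arithmetic syntax varies by database):

SELECT t2.*, t1.pay_date
FROM table2 t2
JOIN table1 t1
  ON t1.pay_date > t2.week_start_date
 AND t1.pay_date > t2.week_end_date
 AND t1.pay_date <= t2.week_end_date + INTERVAL '13' DAY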

 

There are many different flows I could use this type of tool in; it would save time and simplify the workflow.

Thanks!

Hi!

 

Just thought up a simple improvement to the US Geocoder macro that could potentially speed up the results. I'm doing an analysis on some technician data where they visit the same locations over & over again. I'm doing a full-year analysis (200k+ records) & the geocoder takes a while to churn through that much data. In the case of my data, though, it's the same addresses over & over again, & the geocoder will go through each one individually.

 

What I did in my process, & what could be added to the macro, is to put a Unique tool into the process based on address, city, state, zip, then geocode the reduced list, then simply join back to the original data stream using a join based on the address, city, state, zip fields (or use a Record ID tool to create a unique process ID to join on).

 

In my case, the 200k records were reduced to 25k, which Alteryx completed in under a minute, then joined back so my output was still the 200k records (all geocoded now).
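The same dedupe-then-join pattern in SQL terms, in case it helps picture the macro change (table and field names are made up):

-- geocode only the distinct addresses (25k rows instead of 200k)...
CREATE TABLE unique_addresses AS
SELECT DISTINCT address, city, state, zip FROM visits;

-- ...then join the geocoded results back to the full data
SELECT v.*, g.latitude, g.longitude
FROM visits v
JOIN geocoded_addresses g
  ON v.address = g.address AND v.city = g.city
 AND v.state = g.state AND v.zip = g.zip;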

 

Not everyone will have this many duplicates, but I'd bet most data has a few, & every little bit of time savings helps when management is waiting on the results haha!
