The Product Idea boards have gotten an update to better integrate them with our Product team's idea cycle! However, this update does have a few unique behaviors; if you have any questions about them, check out our FAQ.

Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!
Submitting an Idea?

Be sure to review our Idea Submission Guidelines for more information!

Featured Ideas

Hello,

It's nice to have this OpenAI Connector, but it seems to be limited to the default OpenAI URL. In my company, we use OpenAI on an Azure instance, and I'm unable to connect to it.
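For reference, the openai Python SDK already handles the Azure variant; here's a minimal sketch of what the connector would ideally support, assuming a hypothetical endpoint, key, and deployment name:

```python
# Minimal sketch using the openai Python SDK (v1+). The endpoint,
# API version, and deployment name below are hypothetical placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-company.openai.azure.com",  # assumption: your Azure instance
    api_key="<azure-api-key>",
    api_version="2024-02-01",
)

# On Azure, the model argument is the deployment name, not the model id.
response = client.chat.completions.create(
    model="my-gpt4-deployment",  # hypothetical deployment name
    messages=[{"role": "user", "content": "Hello from Alteryx"}],
)
print(response.choices[0].message.content)
```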

(By the way, I know pre-sales teams have developed a lot of connectors for Fireworks, Mistral, etc. It would be very cool to have them available.)

Best regards,

Simon

Problem statement - 

Currently we store our Alteryx data in the .yxdb file format, and whenever we want to fetch the data, the whole dataset is first loaded into memory; only then can we apply a Filter tool to get the required subset from the .yxdb. This is a complete waste of time and resources.

 

Solution - 

My idea is to introduce a YXDB SQL statement tool that can be used directly in a workflow to get the required dataset from a .yxdb file. I hope this will reduce the overall runtime of the workflow so the user gets the desired data in record time, which improves performance and reduces memory consumption.
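To illustrate the intent (the yxdb_reader module and its API below are invented placeholders, not a real library), the difference would be roughly:

```python
# Illustrative sketch only; yxdb_reader and its functions are hypothetical
# stand-ins for whatever the engine would use internally.
import yxdb_reader

# Today: the whole file is materialized, then filtered.
all_rows = yxdb_reader.read_all("sales.yxdb")          # loads everything into memory
subset = [r for r in all_rows if r["Region"] == "EMEA"]

# Proposed: push the predicate down so only matching records are read.
subset = yxdb_reader.query("sales.yxdb",
                           "SELECT * FROM data WHERE Region = 'EMEA'")
```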

Add a Unicode category to the Data Cleansing tool.
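For context, "Unicode category" refers to the standard general categories (control, format, punctuation, etc.); a minimal Python sketch of the kind of cleansing this option could perform, using the standard unicodedata module:

```python
import unicodedata

def strip_by_unicode_category(text: str, categories=("Cc", "Cf")) -> str:
    """Remove characters whose Unicode general category is in `categories`."""
    return "".join(ch for ch in text
                   if unicodedata.category(ch) not in categories)

# Zero-width space (Cf) and the BEL control character (Cc) are removed.
print(strip_by_unicode_category("abc\u200bdef\x07"))  # -> "abcdef"
```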

Sometimes, Control Containers produce error messages even if they are deactivated by feeding an empty table into their input connection.

 

screenshot_error_in_spite_control_container_deactivated.png

(Note that this is a made-up example of something that can happen if input tables come from different sources and have different columns, so that they need separate treatment.)

 

According to the product team, this is expected behaviour since a selection does not allow zero columns to be selected. This might be true (which I doubt a bit), but it is at least counter-intuitive. If this behaviour cannot be avoided entirely, I have a proposal which would improve the user experience without changing the entire workflow validation logic.

(The support engineer understands the point and has raised a defect.)

 

Instead of writing messages from tools inside Control Containers directly to the log output (on screen, in the log file) and marking the workflow as erroneous, I propose introducing a message stack (messages, warnings, errors) for tools inside Control Containers:

  1. When the configuration validation is executed:
    1. Messages (messages, warnings, errors) produced outside of Control Containers are output to the screen log and to the log files (as today).
    2. Messages (messages, warnings, errors) produced inside of Control Containers are not yet output but are stored in a message stack.
  2. At the moment it is decided whether a Control Container is activated or deactivated:
    1. If the Control Container is activated: write the previously stored message stack for this Control Container to the screen and to the log output, and increase the error and warning counts accordingly.
    2. If the Control Container is deactivated: delete the message stack for this Control Container (without reporting anything to the log and without increasing the error and warning counts).

This would result in a different sequence of messages than today (because everything inside activated Control Containers would be reported later than it is now). Since there's no logical order of messages anyway, this should not matter. And it would avoid the apparently illogical case of deactivated Control Containers producing errors.
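A rough sketch of the proposed logic (Python used purely as pseudocode; all names are invented stand-ins for engine internals):

```python
# Illustrative pseudocode of the proposed deferred-message logic.
from collections import defaultdict

message_stacks = defaultdict(list)  # per-Control-Container message buffer

def report(container, severity, text, log):
    """Called whenever a tool emits a message during configuration validation."""
    if container is None:                    # tool sits outside any Control Container
        log.append((severity, text))         # behave exactly as today
    else:
        message_stacks[container].append((severity, text))  # defer output

def on_container_resolved(container, activated, log):
    """Called once the engine knows whether the container will run."""
    if activated:
        log.extend(message_stacks.pop(container, []))  # replay, counts updated
    else:
        message_stacks.pop(container, None)  # discard without reporting anything

# Tiny demo: a deactivated container's error never reaches the log.
log = []
report("Container A", "error", "Select: zero columns selected", log)
on_container_resolved("Container A", activated=False, log=log)
print(log)  # -> []
```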

Currently, when debug mode is entered in analytic apps and macros, the direct inputs to the app/macro at the time the error occurred are hardcoded into the debug-mode workflow so that errors can be detected more easily.

 

However, inputs into analytic apps also create global variables, which can be used in the more code-heavy aspects of Alteryx such as the Formula tool. These are not updated in the same way, which can cause workflows to break in debug mode. It would be really helpful if global variables could be updated in the same way as the inputs into tools are.

In short:
Add an option to cache the metadata for a particular tool so that it isn't forgotten when using tools that have dynamic metadata (such as batch macros) or tools whose metadata the Alteryx metadata engine can't resolve (such as the Python tool).

 

 

Longer explanation:

The Problem:

One of the issues I often encounter when building dynamic workflows, or ones that call external services, is that Alteryx forgets the metadata of which columns to expect. This causes the workflow to lose the configuration of downstream tools when it is first opened or when the metadata engine refreshes. There is currently an option to stop the metadata engine from refreshing automatically, but this isn't a good one because you miss out on much of the value the engine brings.

 

Some of the common tools where I encounter this issue:

  • JSON Parse
  • Batch macros
  • Python tool
  • RegEx parsing to rows

 

Solution:

Instead, could we add an option to cache the metadata for a particular tool? This would save the metadata from the last time the workflow ran into the workflow's XML, so that it persists when the workflow is closed and reopened. Then, when the metadata engine gets to this tool, instead of resolving the metadata from the tool it would use the saved version from the XML. Obviously an actual run would ignore the cache, and any errors would still occur.
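Conceptually, the design-time resolution would change roughly as follows (Python as pseudocode; the cache layout and names are invented, not Alteryx internals):

```python
# Illustrative pseudocode only; the cache layout and function names are
# invented to show the intended behaviour.
workflow_xml_cache = {}   # stand-in for metadata persisted in the workflow XML

def resolve_metadata(tool_id, resolve_live, cache_enabled):
    """Design-time resolution: prefer the cached schema when resolution fails."""
    live = resolve_live()                      # today's behaviour
    if live is not None:
        workflow_xml_cache[tool_id] = live     # refresh the cache after a run
        return live
    if cache_enabled and tool_id in workflow_xml_cache:
        return workflow_xml_cache[tool_id]     # fall back to the saved schema
    return None                                # metadata stays unknown, as today

# Demo: a Python tool whose metadata the engine cannot resolve at design time.
workflow_xml_cache["python_tool_7"] = ["CustomerID", "Score"]  # saved by last run
print(resolve_metadata("python_tool_7", lambda: None, cache_enabled=True))
```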

 

This could be an option in the navigation pane of each tool. Mockup below:

Mockup.png

 

 

 

This would make developing dynamic workflows far easier and resolve the issue of configuration being lost when the metadata changes and Alteryx forgets the options.

This is a hybrid idea related to both of the posts regarding configuring tools dynamically at runtime / without having to run an analytic app.

 

What I would like to propose is a new optional connection type for the interface tools that can be updated by incoming connections (those having a Q letter on a white background), namely the Drop Down, List Box, Tree and Map tools. This could be a simple R letter in a square, for example, located to the left of the incoming question anchor.

 

Use Case

 

Imagine an app with two control containers and three interface tools (Action tools excluded from the count) outside those containers. One of them is a Text Box connected (via an Action tool) to a Filter tool in the first control container, with the purpose of limiting the dataset by specifying a city, for example; another is a Numeric Up Down for limiting the dataset to average transaction amounts greater than the specified amount. These two interface tools are contained in a Group Box in the Interface Designer.

 

The third interface tool is a Drop Down tool which obtains its values (Store Name for this example) from the results of a Select tool (in the second control container, which is connected to the output anchor of the first control container) that is fed by an incoming Filter tool modified by the previously mentioned interface tools. The output anchor of this Select tool is connected to the hypothetical R anchor on top of the Drop Down tool, which in turn is connected to an outgoing Filter tool feeding a series of tools that ends with a Browse tool displaying basic KPI information for the store specified in the Drop Down tool.

 

The main difference between the R (Refresh) anchor and the Q anchor is that it would enable the user to dynamically update the incoming values (i.e., the choices for a Drop Down tool) without having to run the workflow. Alteryx Designer would automatically execute only the tools necessary to update the values (up to a certain point of the workflow, which may also be indicated by the boundaries of the control containers containing the target tool) for the applicable interface tools connected to an R anchor. This would be triggered by clicking a hypothetical confirm button (same appearance as the Apply Data Manipulations button) which appears only next to the interface tools (or the Group Boxes containing them) that Alteryx Designer determines to be providing downstream data to the tools (the T anchor of the Filter tool, for example) that send values to the applicable interface tools with an incoming R anchor connection.

 

I saw that a similar feature recently became available on the Alteryx Analytics Cloud Platform with the App Builder product, and I think Alteryx Designer Desktop could definitely benefit both from this feature and from additional App Builder features (those that can be adapted to the Desktop counterpart) in upcoming releases.

It would be very helpful to have an output of the workflow as a step-by-step document, so that someone who does not have access to Alteryx can understand the steps taken to create the flow and hence the result or output.

Have you ever had the business deliver an Excel (EEK!) file to be passed into Alteryx with a different number of header rows (because it looks pretty and is convenient)? Never, you say? Lies! 

 

I would suggest adding an option to the Input Data tool that would give us the ability to concatenate multiple header rows. This would help enable accurate data profiling for columns on output and eliminate loss from unnecessary conversion errors. Currently, the options allow us to start data input on line X; however, if a column's header spans multiple rows, the headers have to be entered manually after input, because you can only select the lowest possible row to ensure the data is passed accurately. The solution would be to let us specify the number of rows that contain headers, concatenate them into a single row (ignoring nulls and carriage returns), and output that as the header.

 

The current functionality, in a situation where each file has a variable number of header rows, causes forced errors such as a scientific-notation string conversion of a numeric value.
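For comparison, pandas already offers this pattern via header=[0, 1]; a sketch of the requested concatenation, assuming a hypothetical report.xlsx with a two-row header:

```python
import pandas as pd

# Read the first two rows as a multi-level header (report.xlsx is a placeholder).
df = pd.read_excel("report.xlsx", header=[0, 1])

# Concatenate the header rows into one, skipping blank/unnamed parts.
df.columns = [
    " ".join(part for part in map(str, col)
             if part and not part.startswith("Unnamed"))
    for col in df.columns
]
```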

Would be nice to have a way to cache/uncache all inputs or a selected group of tools. Caching and uncaching workflows with many Input tools, or with slow data-read tools, gets to be a bit cumbersome. Would be a nice QoL improvement :)

 

I looked around for something like this but didn't see a solution, so I thought I'd recommend it. Please let me know if something like this already exists natively in Designer Desktop.

I try to use the Comment tool for documentation within workflows for team members (and my future self when I have to revisit a workflow months after I built it). It would be helpful to be able to use Markdown formatting inside the tool.

This might even encourage more documentation. *fingers crossed*

Hi there,

 

When you connect to a DB using a connection string or an alias, this shows up in the Workflow Dependencies window in a way that is very useful for identifying impacts if a DB is moved or migrated.

 

However, in 2023.1, if you use DCM then the database dependencies just show up as ".\", which makes dependency management much more difficult.

 

screenshot1.png

 

Please could you add the capability to view DCM dependencies correctly in the dependency window?

 

BTW, this Workflow Dependencies window would be a great place to build a simple process for moving existing DB connections to DCM connections!

 

CC: @wesley-siu @_PavelP 

Alteryx should seriously consider incorporating certain Excel features into its Browse tool, as they would greatly enhance usability and functionality.

 

Currently, when selecting specific records in the Browse tool, users are unable to obtain important metrics such as sum, average, or count without resorting to additional steps, such as adding a Summarize tool or filters.

 

Integrating a concise bar below the results window that provides these essential statistics, which are immensely beneficial to users, would undoubtedly elevate the Browse tool to the next level.

 

By implementing this enhancement, Alteryx would make a significant impact and establish the Browse tool as a must-have resource.

 

 

SaadNaser_0-1684918867896.png

 

 

SaadNaser_1-1684918880407.png

Alteryx is not able to read generated Excel sheets which have the prefix "x:" within their XML tags. This often occurs when an xlsx file is created by a bot or RPA process. Example file attached.
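For what it's worth, namespace-aware XML parsing handles the prefix transparently, which suggests the fix is contained to the reader. A sketch with Python's standard library, assuming sheet1.xml has been extracted from the workbook:

```python
import xml.etree.ElementTree as ET

NS = {"main": "http://schemas.openxmlformats.org/spreadsheetml/2006/main"}

# sheet1.xml extracted from the .xlsx; whether tags are written as <row> or
# <x:row>, they resolve to the same namespaced element when parsed this way.
tree = ET.parse("sheet1.xml")
for cell in tree.getroot().iterfind(".//main:c", NS):
    print(cell.get("r"), cell.findtext("main:v", namespaces=NS))
```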

Hi there,

 

the Snowflake documentation only refers to connection strings which use a DSN, such as this page (Snowflake | Alteryx Help), which gives the connection string as odbc:DSN=Simba_Snowflake_JWT;UID=user;PRIV_KEY_FILE=G:\AlteryxDataConnectorsTeam\OAuth project\PEMkey\rsa_key.p8;PRIV_KEY_FILE_PWD=__EncPwd1__;JWT_TIMEOUT=120

 

However, for canvases which need to be productionized on Alteryx Server, it is critical to use DSN-less connection strings so that the canvases can be deployed and run on any worker node without having to set up DSNs on every worker node.

 

A DSN-less connection string looks like this: 

ODBC:DRIVER={SnowflakeDSIIDriver};UID=UserName;pwd=Password;WAREHOUSE=compute_wh;SERVER=server.us-east-1.snowflakecomputing.com;SCHEMA=PUBLIC;DATABASE=NewTestDB;Staging=local;Method=user|||NEWTESTDB.PUBLIC.MYTESTTABLE
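For anyone needing this in the meantime, the same DSN-less string can be used from the Python tool via pyodbc (a sketch; the server, credentials, and warehouse/database names are placeholders):

```python
import pyodbc

# DSN-less connection: the driver and all settings live in the string itself,
# so nothing has to be configured on the worker node beforehand.
conn = pyodbc.connect(
    "DRIVER={SnowflakeDSIIDriver};"
    "SERVER=server.us-east-1.snowflakecomputing.com;"
    "UID=UserName;PWD=Password;"
    "WAREHOUSE=compute_wh;DATABASE=NewTestDB;SCHEMA=PUBLIC"
)
rows = conn.cursor().execute("SELECT 1").fetchall()
print(rows)
```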

 

Please could you consider updating the help texts to provide and describe a DSN-free connection string as well as the DSN-driven connections?

 

Many thanks

Sean

 

Hello all,

As of today, when you want to retrieve or create a file on Apache Spark for Databricks, you have only two choices: CSV and Avro.

 

 

image.png

However, it's clearly missing the Parquet file type:
- it's faster
- it's better for storage
- it's standard and already supported as input/output in Alteryx and for HDFS, so it doesn't seem hard to add here.
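Parquet is a first-class format in Spark itself, so the gap appears to be only on the connector side; a minimal PySpark sketch (paths are placeholders):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read and write Parquet with Spark's built-in support.
df = spark.read.parquet("/mnt/landing/sales")             # hypothetical input path
df.write.mode("overwrite").parquet("/mnt/curated/sales")  # hypothetical output path
```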

Best regards,

Simon

The basic premise is this: 

 

Phantom spacing: something that looks like it has spaces in Excel but is actually formatted as an indentation.

Unfortunately, to read the indentation we need either VBA prep or to read the XML inside, the latter of which is difficult.

As for VBA, the general steps are to create an indentation formula in order to see the numbers, then go from there. The idea is credited to @clmc9601, as we discussed privately.

 

As of now, I do not see any way to do this in Alteryx as a function or even an expression. It would be very helpful, especially when reading trial balances or Bloomberg outputs, as they are formatted with indentation.


Reading indentation from Excel or any other file within Alteryx would be much appreciated, especially in actuarial and finance spaces.
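Outside Alteryx, openpyxl exposes the indent level directly, which is essentially the value the requested function would surface (a sketch; the file name is a placeholder):

```python
from openpyxl import load_workbook

wb = load_workbook("trial_balance.xlsx")  # hypothetical file
ws = wb.active
for row in ws.iter_rows(min_col=1, max_col=1):
    cell = row[0]
    # alignment.indent is the same value Excel's VBA IndentLevel reports.
    print(cell.coordinate, cell.alignment.indent, cell.value)
```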

After using the PCA tool, could there be a model object output so that new data can be "scored"?

 

Similar to the PCA transform here: https://stackoverflow.com/questions/26182329/how-do-i-convert-new-data-into-the-pca-components-of-my...

 

As it stands, there is no way to use this model with new data.
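For comparison, this is the fit-once / transform-new-data pattern from scikit-learn that the idea is asking for (a sketch with made-up data):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
train = rng.normal(size=(100, 5))     # data used to fit the components
new_data = rng.normal(size=(10, 5))   # unseen data to be "scored"

pca = PCA(n_components=2).fit(train)  # the reusable "model object"
scores = pca.transform(new_data)      # project new data onto the same components
print(scores.shape)                   # (10, 2)
```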

Alteryx offers the ability to add new formulae (e.g. the Abacus add-in) and new tools (e.g. the Marketplace, custom macros, etc.), which is a very valuable and valued way to extend the capability of the platform.

 

However, if you add a new function or tool that has the same name as an existing function or tool, this can lead to a confusing user experience (a namespace conflict).

 

Would it be possible to add capability to Alteryx to help work around this? Two potential vectors are listed below:

- Check for name conflicts when loading tools or when loading Alteryx, and warn the user, e.g. "The Coalesce function in package Core Alteryx conflicts with the same function name in package XXX - this may cause mysterious behaviours."

- Potentially allow prefixes to disambiguate functions with the same name, e.g. CoreAlteryx.Coalesce or Abacus.Coalesce. If a function is used in a Formula tool in a way that is ambiguous (e.g. "Coalesce"), give the user a simple dialog that lets them pick which one they meant, and then Alteryx can clean up after itself.

 

cc: @JarrodT  @NicoleJ 

I would love a tool for looking up a value in a table based on a condition. It could be called "Lookup." One input to the tool would be the lookup list, the other the main database. Inside the tool you could enter functions that query the lookup table and return the results, either as an overwrite of an existing field in the main DB or as a new field in the main DB, similar to the options in the Multi-Row Formula tool.

 

Here is a link to my post in the Community that explains the problem. The solution, in a nutshell, was to create a Join (which resulted in millions of additional rows), run the conditional formula, then filter out the millions of rows created by the Join so that only those that met the condition remained (the original database rows).

 

Here is the text of my Community post describing my project (slightly modified for clarity):

 

Table 1:  A list of Pay Dates (the lookup table)

Table 2:  Daily timekeeper data with Week Start and Week End Date fields.

 

The goal:  To find the Pay Date in Table 1 that is greater than the Week Start Date in Table 2 and no more than 13 days after the Week End Date in Table 2.

 

[Table 2: Week Start Date] < [Table 1: Pay Date]

and [Table 2: Week End Date] < [Table 1: Pay Date]

and DateTimeDiff([Table 1: Pay Date], [Table 2: Week End Date], 'Days') <= 13
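For comparison, here is how the same conditional lookup reads in pandas as a cross join plus filter - exactly the workaround the proposed tool would hide (tiny made-up tables, column names as above):

```python
import pandas as pd

# Hypothetical miniature versions of the two tables from the post.
pay_dates = pd.DataFrame({"Pay Date": pd.to_datetime(["2023-01-13", "2023-01-27"])})
timekeeper = pd.DataFrame({
    "Week Start Date": pd.to_datetime(["2023-01-02"]),
    "Week End Date": pd.to_datetime(["2023-01-08"]),
})

# Cross join (the row explosion described above), then keep matching rows.
joined = timekeeper.merge(pay_dates, how="cross")
mask = (
    (joined["Week Start Date"] < joined["Pay Date"])
    & (joined["Week End Date"] < joined["Pay Date"])
    & ((joined["Pay Date"] - joined["Week End Date"]).dt.days <= 13)
)
print(joined[mask])  # only 2023-01-13 qualifies
```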

 

There are many different flows in which I could use this type of tool; it would save time and simplify the workflow.

Thanks!
