
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

Please consider adding a new setting to the Render tool so users can choose whether an existing file should be overwritten (otherwise throw an error, as the Output Data tool does when configured to create a new sheet and that sheet already exists).

 

[Image: Aguisande_1-1651515071841.png]

 

 

In Japan, people usually use the date format "yyyy/mm/dd", but there is no such preset in the Date tool. So I usually use a custom setting, which is a waste of time.

 

So please add the yyyy/mm/dd format to the presets in the Date tool configuration for Japanese users.

 

[Image: AkimasaKajitani_0-1660969609039.png]

 

Working in the accounting department, this has come up too many times now to ignore! 

 

Would LOVE LOVE LOVE to see a new formula available in the DateTime formula suite that mimics the function of the EOMONTH() formula when working with dates in Excel. 


The beauty of the EOMONTH() formula in Excel is that I can just give it a date and tell it how many months in the future or past to shift. By contrast, Alteryx can require 2 or 3 nested DateTime functions to arrive at the same answer.


Example: To find the end of the month 2 months in the future from today's date, I would use the following formula...

Excel = EOMONTH(TODAY(), 2)

Alteryx = DateTimeAdd(DateTimeAdd(DateTimeTrim(DateTimeToday(),"month"),3,"months"),-1,"days")

 

This seems much more complicated than it needs to be in Alteryx, and it's easy to get lost in the nested formulas and the non-intuitive adding/subtracting of months and days! I can see a new formula (something like DateTimeEOMonth?) being structured as follows in Alteryx: DateTimeEOMonth([Field], increment)
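To pin down the requested behaviour, here is a minimal Python sketch of what a DateTimeEOMonth([Field], increment) function could compute (the function name is the poster's proposal, not an existing Alteryx function):

```python
from datetime import date
import calendar

def datetime_eomonth(d: date, increment: int) -> date:
    """Last day of the month `increment` months away from d,
    mirroring Excel's EOMONTH(d, increment)."""
    # Count months from year 0, then shift by the increment
    total = d.year * 12 + (d.month - 1) + increment
    year, month = divmod(total, 12)
    # calendar.monthrange returns (first weekday, number of days in month)
    return date(year, month + 1, calendar.monthrange(year, month + 1)[1])

# End of the month 2 months after 2024-01-15 -> 2024-03-31
print(datetime_eomonth(date(2024, 1, 15), 2))
```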

 

Please consider! Our accounting department thanks you heartily in advance... 🙂

 

Cheers,

NJ

I would love a tool to be created for looking up a value in a table based on a condition. It could be called "Lookup." One input to the tool would be the lookup list, the other is the main database. Inside the tool you could enter functions that can query the lookup table and return the results either as an overwrite of an existing field in the main DB or as a new field in the main DB, similar to the options in the Multi-Row Formula tool.

 

Here is a link to my post in Community that explains the problem. The solution, in a nutshell, was to create a Join (which resulted in millions of additional rows), run the conditional formula, then filter to get rid of the millions of rows that were created by the Join so only those that met the condition remained (the original database rows).

 

Here is the text of my Community post describing my project (slightly modified for clarity):

 

Table 1:  A list of Pay Dates (the lookup table)

Table 2:  Daily timekeeper data with Week Start and Week End Date fields.

 

The goal:  To find the Pay Date in Table 1 that is greater than the Week Start Date in Table 2 and no more than 13 days after the Week End Date in Table 2.

 

[Table 2: Week Start Date] < [Table 1: Pay Date]

and [Table 2: Week End Date] < [Table 1: Pay Date]

and DateTimeDiff([Table 1: Pay Date], [Table 2: Week End Date], 'Days') <= 13
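For illustration, a minimal pandas sketch of the join-then-filter workaround described above (the sample rows are made up):

```python
import pandas as pd

# Hypothetical sample data mirroring the post's two tables
pay_dates = pd.DataFrame({"Pay Date": pd.to_datetime(["2024-01-12", "2024-01-26"])})
timekeeper = pd.DataFrame({
    "Week Start Date": pd.to_datetime(["2024-01-01"]),
    "Week End Date": pd.to_datetime(["2024-01-07"]),
})

# Cross join (the step that explodes row counts in Alteryx) ...
joined = timekeeper.merge(pay_dates, how="cross")

# ... then filter on the lookup condition from the post
mask = (
    (joined["Week Start Date"] < joined["Pay Date"])
    & (joined["Week End Date"] < joined["Pay Date"])
    & ((joined["Pay Date"] - joined["Week End Date"]).dt.days <= 13)
)
print(joined[mask])  # keeps only the 2024-01-12 pay date
```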

 

There are many different workflows where this type of tool would save time and simplify the flow.

Thanks!

It would be great if a future version of Alteryx could include a new Parse tool to process dataset descriptions (metadata) formatted using the DCAT (W3C) standard.

DCAT is a standard for the description of data sets. It provides a comprehensive set of metadata that can be used to describe the content, structure, and lineage of a data set.

We believe that supporting DCAT in Alteryx would be a valuable addition to the product. It would allow us to:

  • Improve the interoperability of our data sets with other systems (M2M)
  • Make it easier to share and reuse our data sets
  • Provide a more consistent way to describe our data sets
  • Bring down the costs of describing and developing interfaces with other Government Entities
  • Work on some parts of making our data Findable – Accessible – Interoperable – Reusable (FAIR)

We understand that implementing support for this standard requires some development effort (possibly done in stages, building from minimal viable support to full-blown support). However, we believe that the benefits to the Alteryx Community worldwide, and to Alteryx as a top-quality data preparation tool, outweigh the cost.

 

I also expect the effort to be manageable (perhaps a macro will do as a start), given that the standard RDF serialization commonly used for DCAT (JSON-LD) is similar to plain JSON.
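As a rough feasibility sketch, a DCAT catalog serialized as JSON-LD can be read with the Python rdflib library; the file name and catalog contents here are assumptions:

```python
from rdflib import Graph
from rdflib.namespace import RDF, DCAT, DCTERMS

g = Graph()
# "catalog.jsonld" is a placeholder; rdflib parses JSON-LD out of the box
g.parse("catalog.jsonld", format="json-ld")

# Print the title of every dcat:Dataset described in the catalog
for ds in g.subjects(RDF.type, DCAT.Dataset):
    for title in g.objects(ds, DCTERMS.title):
        print(ds, title)
```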

 

DCAT, which stands for Data Catalog Vocabulary, is a W3C Recommendation for describing data catalogs in RDF. It provides a set of classes and properties for describing datasets, their distributions, and their relationships to other datasets and data catalogs. This allows data catalogs to be discovered and searched more easily, and it also makes it possible to integrate data catalogs with other Semantic Web applications. 

DCAT is designed to be flexible and extensible, so it can be used to describe a wide variety of datasets. It is also designed to be interoperable, so it can be combined with other vocabularies to create rich and interconnected descriptions of data and knowledge.

 

Here are some of the benefits of using DCAT:

  • Improved discoverability: DCAT makes it easier to discover and use datasets, as it provides a standard way of describing their attributes.
  • Increased interoperability: DCAT allows datasets to be integrated with other Semantic Web applications, making it possible to create more powerful and interoperable applications.
  • Enhanced semantic richness: DCAT provides a way to add semantic richness to dataset descriptions, making it possible to describe them in a more detailed and nuanced way.

Here are some examples of how DCAT is being used:

  • The DataCite metadata standard uses DCAT to describe data catalogs.
  • The European Data Portal uses DCAT to discover and search for data sets.
  • The Dutch Government made it a mandatory standard for all Dutch Government Agencies.

As the Semantic Web continues to grow, DCAT is likely to become even more widely used.

 

Links: DCAT, RDF

 

 

I would love to be able to see the actual curl statement that is executed as part of the Download tool. Maybe a debug switch could be added that produces one extra output field containing the curl statement itself? This would greatly enhance the ability to debug when things aren't working as expected with the Download tool.
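To illustrate what such a debug field might contain, here is a hypothetical Python sketch that reconstructs a curl equivalent from the parts of a request (purely illustrative; this is not how the Download tool works internally):

```python
import shlex

def build_curl(url: str, method: str = "GET",
               headers: dict | None = None,
               payload: str | None = None) -> str:
    """Reconstruct the curl command equivalent to an HTTP request --
    the kind of string a Download tool debug field could carry."""
    parts = ["curl", "-X", method]
    for name, value in (headers or {}).items():
        parts += ["-H", f"{name}: {value}"]
    if payload is not None:
        parts += ["--data", payload]
    parts.append(url)
    # Quote each piece so the command is safely copy-pasteable
    return " ".join(shlex.quote(p) for p in parts)

print(build_curl("https://example.com/api", "POST",
                 {"Content-Type": "application/json"}, '{"id": 1}'))
```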

Report Text tools currently only give the option to align left, right, or center. It would be great if we could also have a true 'Justify' option, as it makes chunks of text look so much cleaner.

 

It would be nice if we could arrange tools on the canvas neatly with one click, distributing them evenly (horizontally/vertically).

 

See this picture, which is worth a thousand words.

 

[Image: Dsitribute Tools Horizontally/Vertically.jpg]

Hi 

 

The wording of the tooltip displayed in Results window cells with long strings is misleading. The current wording is "This cell has truncated characters".

[Image: danilang_0-1616587137476.png]

 

 

New users tend to infer that this means the data value has been truncated somewhere upstream. See here, here and here. Changing this message to something like "Only a portion of long strings is displayed" would help reduce the confusion immensely.

 

Dan  

Hi Community,

Microsoft will deprecate Basic Authentication, so OAuth2 will need to be included in the Email tool. https://docs.microsoft.com/en-us/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic...

Microsoft is removing the ability to use Basic Authentication in Exchange Online for Exchange ActiveSync (EAS), POP, IMAP, Remote PowerShell, Exchange Web Services (EWS), Offline Address Book (OAB), Outlook for Windows, and Mac. This change will be effective on October 1, 2022.

Best regards, JP

When creating a connection using DCM (example being ODBC for SQL) - the process requires an ODBC Data Source Name (see screenshot 1 below).

However, the Alias Manager (another way to make database connections) does allow DSN-free connections, which are essential for large enterprises (see screenshot 2 below).

 

NOTE: the connection manager screens do have another option - Quick Connect - which seems to allow DSN-free connections, but it is non-intuitive: you're asked to type in the name of the driver yourself, which seems to be an obvious failure point (especially since the list of all installed drivers can be read straight from the registry).

 

Could we please change DCM to use the same interfaces and concepts as the alias screens, so that all DCM connections can easily be created without requiring an ODBC DSN, and so that DSN-free connections are the default mode of operation?

 

 

 

Screenshot 1: DCM connection:

[Image: SeanAdams_0-1685360285460.png]

 

Screenshot 2: Alias Manager connection:

[Image: SeanAdams_2-1685360473900.png]

 

cc: @wesley-siu  @_PavelP @ToddTarney 

 

 

It would be absolutely marvellous if the ability to use a field as the replacement value could be incorporated into the RegEx tool. Currently the "Replacement Text" field is a hardcoded text value, so to make it dynamic you have to wrap the tool in a batch macro and feed in the value as a Control Parameter. If we could just select a field to use as the replacement value, that would be spiffy.
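For illustration, a small Python/pandas sketch of the requested per-row behaviour (the pattern, columns, and data are made up):

```python
import re
import pandas as pd

# Replace the matched pattern in [Text] with the per-row value of
# [Replacement] -- what a field-driven RegEx replacement could do
df = pd.DataFrame({
    "Text": ["order 123 shipped", "order 456 delayed"],
    "Replacement": ["ABC", "XYZ"],
})
df["Result"] = df.apply(
    lambda row: re.sub(r"\d+", row["Replacement"], row["Text"]), axis=1
)
print(df["Result"].tolist())  # ['order ABC shipped', 'order XYZ delayed']
```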

 

[Image: mceleavey_0-1647001959072.png]

 

M.

Hello!
I appreciate this is a very underused element of Alteryx functionality; however, I have noticed a few issues with field descriptions.

 

Firstly, if you set a description on a field within a Select tool:

[Image: TheOC_0-1681228654695.png]



If you then attempt to clear the description later in the workflow (in another Select tool), you cannot. When you delete the description, it reverts to the original value (in this case, 'test'):

[Image: TheOC_1-1681228698380.png]


This can be easily recreated, and is especially relevant for yxdb outputs that contain field descriptions. In that scenario, you cannot go back to the previous Select tool and remove the description. The closest you can come to clearing the description is replacing it with a space ' '.

 

As a secondary issue, the Score tool currently removes field descriptions and overrides the source. For example, if I open the Score tool example workflow and add a Select tool/description:

[Image: TheOC_10-1681229323907.png]

 


You can see the metadata going into the Score tool:

[Image: TheOC_8-1681229240520.png]

 

But unfortunately the output of the tool looks like:

[Image: TheOC_9-1681229254843.png]

 

This shows that it has completely removed the descriptions and also replaced all of the 'source' information. My suggestion would be that it should not replace the source information or descriptions.

 

 

Thirdly - and quite a niche issue - an int64 field specifically will break when the description differs between the data and the model.

Again, this is easy to recreate within the Score tool example workflow. Apply a Select tool to both streams, setting 'First_Years' to an int64. Within the bottom stream (the model creation), set a description, in this case 'test':

[Image: TheOC_11-1681229464488.png]

 

Make sure to leave the top stream's description blank.

Run the workflow and observe the error:
Error: Score (106): Score: The variable testFirst_Years is missing from the input data stream.
Interestingly, it seems to be using the description as part of the field name within the Score tool, which causes issues when the descriptions differ. My suggestion would be that it should not use descriptions at all.

 

Kind Regards,

Owen

Hi all,

 

The Salesforce Input tool is great, but it has some serious limitations when it comes to reports.

I think there are two main limitations:

 

A - It can only consume 2000 rows due to the REST API limitation. There are plenty of articles about it in the community.

B - Long strings such as text comments are cut off after a certain number of characters.

 

Thanks to this great article: https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Salesforce-Input-Tool-amp-Going-Beyond... , I had the idea of exporting to a CSV file and then importing the data into Alteryx.

I've done it using two consecutive Download tools. The first download gets the session id and the second exports a report to a CSV file in the temp folder. This temp file can then be read using a dynamic input workflow.
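Roughly, the two-step trick looks like this in Python; the login flow, report id, and export parameters are assumptions based on the linked article's approach, not an official API:

```python
import requests

# Step 1: obtain a session id (credentials are placeholders; the OAuth
# username-password flow is one way to get a session token)
token = requests.post(
    "https://login.salesforce.com/services/oauth2/token",
    data={
        "grant_type": "password",
        "client_id": "...", "client_secret": "...",
        "username": "...", "password": "...",
    },
).json()["access_token"]

# Step 2: ask the instance to render the report as CSV, the way a
# browser export would (instance URL and report id are placeholders)
resp = requests.get(
    "https://yourInstance.my.salesforce.com/00OXXXXXXXXXXXXXX",
    params={"export": "1", "enc": "UTF-8", "xf": "csv"},
    cookies={"sid": token},  # reuse the session id as the UI session cookie
)
with open("report.csv", "wb") as f:
    f.write(resp.content)
```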

 

Long story short, I think Alteryx should upgrade the Salesforce connector to make it more robust and usable. Using the export-to-CSV feature would make Alteryx fully compatible with Salesforce reports.

 

Regards,

Idea: Allow the user to set the data type including character field width in the Text Input tool.

 

The Text Input tool currently auto-senses the type and width of each field. However, this sometimes restricts the usage of the data downstream.

 

Examples:

1 - I often run into the situation where I've copied some data from a Browse tool and pasted it as the input to a new workflow, then turned that workflow into a macro. But then the data coming into the macro is wider than the original width in the Text Input tool, which causes problems.

 

2 - The tool senses that a field containing zip codes should be numeric and converts the data. This corrupts the data and forces me to insert a Select/Formula tool combo to pad the zeros back on the left.
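For illustration, a tiny Python sketch of the zip-code corruption and the manual repair that the Select/Formula combo performs (sample values are made up):

```python
import pandas as pd

# Auto-sensing reads zip codes as numbers, so leading zeros are lost
df = pd.DataFrame({"zip": [2116, 90210, 501]})

# The manual repair: cast back to string and left-pad with zeros
df["zip"] = df["zip"].astype(str).str.zfill(5)
print(df["zip"].tolist())  # ['02116', '90210', '00501']
```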

The .bak file that is automatically created (and re-created if deleted) really clutters up our folders.

Please allow us to either turn it off or specify a different location to hold our backup files.

Thanks

Hello, 

 

This is one thing that my OCD cannot cope with. 

 

Some tools, like the Union tool, allow you to 'Ignore warnings', like when fields are missing. 

 

Some other tools, however, don't give the option - the DateTime tool, for instance. Sometimes I feel like yelling at Alteryx, "I know that field already exists! I want to change it!" Or the Join tool, when you join on a double.

 

I know that these warnings don't really affect anything, and they may be useful to highlight something that might need changing, but pleeeeaaassee give us a tick box or something like the Union tool has, where we can ignore warnings. It makes my workflow messy.

 

(I'm on designer v 2021.1 btw, so if this has already been done, then please ignore my rant. 😁 )

 

Thanks

 

Edit: What I'm talking about 

[Image: Rags1982_0-1655908955080.png]

 

Sometimes, Control Containers produce error messages even if they are deactivated by feeding an empty table into their input connection.

 

[Image: screenshot_error_in_spite_control_container_deactivated.png]

(Note that this is a made up example of something which can happen if input tables might be from different sources and have different columns so that they need separated treatment.)

 

According to the product team, this is expected behaviour, since a selection does not allow zero columns selected. This might be true (which I somewhat doubt), but it is at least counter-intuitive. If this behaviour cannot be avoided entirely, I have a proposal which would improve the user experience without changing the entire workflow validation logic.

(The support engineer understands the point and has raised a defect.)

 

Instead of writing messages from tools inside Control Containers directly to the log output (on screen, in the logfile) and marking the workflow as erroneous, I propose to introduce a message (message, warning, error) stack for tools inside Control Containers:

  1. When the configuration validation is executed:
    1. Messages (messages, warnings, errors) produced outside of Control Containers are output to the screen log and to the log files (as today).
    2. Messages (messages, warning, errors) produced inside of Control Containers are not yet output but stored in a message stack.
  2. At the moment when it is decided whether a Control container is activated or deactivated:
    1. If Control Container activated: Write the previously stored message stack for this Control Container to the screen and to the log output, and increase error and warning counts accordingly.
    2. If Control Container deactivated: Delete the message stack for this Control Container (w/o reporting anything to the log and w/o increasing error and warning count).

This would result in a different sequence of messages than today (because everything inside activated Control Containers would be reported later than today). Since there's no logical order of messages anyway, this would not matter. And it would avoid the apparently illogical case of deactivated Control Containers producing errors.
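A minimal Python sketch of the proposed deferral logic, with all names illustrative: messages raised inside a Control Container are buffered, then flushed or discarded once the container's activation state is known.

```python
from collections import defaultdict

class MessageStack:
    def __init__(self):
        self._stacks = defaultdict(list)

    def log(self, container_id, severity, text):
        if container_id is None:
            print(f"[{severity}] {text}")  # outside any container: emit now
        else:
            self._stacks[container_id].append((severity, text))  # defer

    def resolve(self, container_id, activated: bool):
        messages = self._stacks.pop(container_id, [])
        if activated:
            for severity, text in messages:  # flush the stored messages
                print(f"[{severity}] {text}")
        # deactivated: the stack is discarded without raising any error

stack = MessageStack()
stack.log("CC1", "Error", "Select: zero columns selected")
stack.resolve("CC1", activated=False)  # container skipped -> nothing reported
```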

Hi UX interested parties,

 

[Image: capture.png]

 

Here are some ideas for you to consider:

 

1.  These lines are BORING and UNINFORMATIVE.  I'd like to understand more (a pic = 1,000 words) when looking at a workflow.

  • A line could communicate:
    • Qty of Records
    • Size of Data
    • Is the data SORTED
      • What sort order
    • Quality of Data 

If you look at lines A, B, and C in the picture above, nothing is communicated. Weight of line, color of line, type of line, beginning/ending line markers - these are all potential ways we could see a picture of the data without having to add Browse tools everywhere to see the information. If we hover over the data connection, even more information could appear (e.g. # of records, size of file) without having to toggle the configuration parameters.

 

2.  Wouldn't it be nice not to have to RUN a workflow to see the last SAVED metadata (run) of a workflow?  I'd like to open a saved workflow and know what to expect when I run it.  Heck, how long the beast takes to run is something we've never seen unless we run it.

 

3.  I'd like the metadata to display SORT keys and order: Sort1 Asc, Sort2 Desc...  This sort information is very helpful for the engine, and I'll likely post about that thought separately.  As a preview: when a JOIN tool has sorted data and one of the anchors is at EOF, why do we need to keep reading from the other anchor?  There won't be another matched record (J anchor).  In my example above, we don't ask for the L/R outputs, so why worry about the rest of the join?

 

4.  Have you ever seen a map (online) that didn't display watermark information?  I think that the canvas experience should allow for a default logo (like mine above, but transparent) in the lower right corner of the canvas that is visible at all times.  Having the workflow name at the top in a tab is nice, but having it display as a watermark is handy.

 

5.  Once the workflow has RUN, all anchors are the same color.  How about making EMPTY anchors GREY/white or some other color instead?  This might help newbies find issues in JOIN configuration too.

 

6.  If the tool has ERRORs you put a RED exclamation mark.  I despise warnings, but how about a puke-colored question mark for them?  With conversion errors, the lines could be marked to indicate the relative quantity of conversion errors (system messages have a limit).

 

Just a few top of mind things to consider ....

 

Cheers,

 

Mark

The Cross Tab tool replaces any non-alphanumeric characters with underscores in column names. It would be helpful to keep the original values as column names (or to have the option to toggle whether or not special characters are replaced with underscores).

 

This is often an issue for reporting and for dynamically-populated app inputs (e.g. drop-down), where we need to retain the special characters.

 

For example, say I have the following dataset: 

[Image: kelly_gilbert_0-1593130247986.png]

 

Currently, the crosstab tool produces this:

[Image: kelly_gilbert_1-1593130318568.png]

 

I would like this:

[Image: kelly_gilbert_2-1593130621660.png]

 

There are currently (somewhat cumbersome) workarounds, such as adding an extra row containing the original names and then using Dynamic Rename to rename the columns, but it would be great to be able to use the data straight out of the Cross Tab!
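For comparison, pandas' pivot keeps special characters intact in the generated column names, which is the behaviour requested here (sample data is made up):

```python
import pandas as pd

df = pd.DataFrame({
    "id": [1, 1, 2, 2],
    "header": ["Qty (kg)", "Price/Unit", "Qty (kg)", "Price/Unit"],
    "value": [10, 2.5, 8, 3.0],
})

# Pivot the header values into columns, keeping their original text
wide = df.pivot(index="id", columns="header", values="value")
print(list(wide.columns))  # ['Price/Unit', 'Qty (kg)'] -- punctuation intact
```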