
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

Please consider adding a new setting to the Render tool so users can choose whether an existing file should be overwritten (otherwise throw an error, as the Output Data tool does when configured to create a new sheet and that sheet already exists).

 

Aguisande_1-1651515071841.png

 

 

Tools should not error with zero rows. When working with macros it is often possible to have a scenario where zero rows or columns is legitimate. Some tools are fine with this and some are not. In my case the Select tool does not allow it, so I have to create a workaround with a Text Input tool.

IraWatt_0-1661023969561.png

 

As an international organization we deal with clients in multiple countries.

 

Name matches for names including Chinese characters generate a Unicode conversion warning and are excluded from the fuzzy match.

 

It would be good if the Fuzzy Match tool could be enhanced to handle Chinese characters.

In Japan, people usually use the date format "yyyy/mm/dd", but there is no such preset in the Date tool. So I usually use a custom setting, which is a waste of time.

 

So please add the yyyy/mm/dd format to the presets in the Date tool configuration for Japanese users.

 

AkimasaKajitani_0-1660969609039.png

 

I would love to be able to see the actual curl statement that is executed as part of the Download tool. Maybe a debug switch could be added that produces one extra output field containing the curl statement itself? This would greatly enhance the ability to debug when things aren't working as expected with the Download tool.
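To illustrate what such a debug field might contain, here is a rough sketch (Python, e.g. in a Python tool) that assembles a curl command from hypothetical method/URL/header/payload values. The function and the sample values are assumptions for illustration, not the Download tool's actual internals:

```python
import shlex

def build_curl(method, url, headers, payload=None):
    """Assemble an equivalent curl command from request parts (illustrative only)."""
    parts = ["curl", "-X", method, shlex.quote(url)]
    for name, value in headers.items():
        parts += ["-H", shlex.quote(f"{name}: {value}")]
    if payload is not None:
        parts += ["--data", shlex.quote(payload)]
    return " ".join(parts)

# Hypothetical values mirroring a Download tool configuration
print(build_curl(
    "POST",
    "https://api.example.com/v1/items",
    {"Content-Type": "application/json", "Authorization": "Bearer <token>"},
    '{"name": "test"}',
))
```

A string like this, emitted per request as an extra field, would make it easy to replay the call outside Designer when troubleshooting.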

Report Text tools currently only give the option to align left, right or center. It would be great if we could also have a true 'Justify' option, as it makes chunks of text look so much cleaner.

 

It would be nice if we could arrange tools on the canvas neatly with one click, distributing them evenly (horizontally/vertically).

 

See the picture below, which is worth a thousand words.

 

Dsitribute Tools Horizontally/Vertically.jpg

Hi 

 

The wording of the tooltip displayed in Results window cells with long strings is misleading. The current wording is "This cell has truncated characters".

danilang_0-1616587137476.png

 

 

New users tend to infer that this means the data value has been truncated somewhere upstream. See here, here and here. Changing this message to something like "Only a portion of long strings is displayed" would help reduce the confusion immensely.

 

Dan  

Hi Community,

Microsoft will deprecate Basic authentication, so users will need OAuth2 to be included in the Email tool. https://docs.microsoft.com/en-us/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic...

Microsoft is removing the ability to use Basic authentication in Exchange Online for Exchange ActiveSync (EAS), POP, IMAP, Remote PowerShell, Exchange Web Services (EWS), Offline Address Book (OAB), Outlook for Windows, and Mac. This change will be effective on October 1, 2022.

Best regards,
JP
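In the meantime, and as a sketch of what OAuth2 support might involve, something along these lines can be run from a Python tool: it acquires an app-only token with MSAL and sends mail through Microsoft Graph. The tenant/client IDs, sender and recipient are placeholders, and the required Graph permissions depend on your organisation's setup:

```python
import requests
import msal

TENANT_ID = "<tenant-id>"          # placeholders - use your own app registration
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"
SENDER = "workflow.alerts@example.com"

# Acquire an app-only access token via the client credentials flow
app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

message = {
    "message": {
        "subject": "Workflow finished",
        "body": {"contentType": "Text", "content": "The Alteryx workflow completed."},
        "toRecipients": [{"emailAddress": {"address": "someone@example.com"}}],
    }
}

# Send the mail through Microsoft Graph instead of SMTP Basic authentication
resp = requests.post(
    f"https://graph.microsoft.com/v1.0/users/{SENDER}/sendMail",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    json=message,
)
resp.raise_for_status()  # Graph returns 202 Accepted on success
```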

It would be great if you could include a new Parse tool to process data set descriptions (metadata) formatted using the DCAT (W3C) standard in the next version of Alteryx.

DCAT is a standard for the description of data sets. It provides a comprehensive set of metadata that can be used to describe the content, structure, and lineage of a data set.

We believe that supporting DCAT in Alteryx would be a valuable addition to the product. It would allow us to:

  • Improve the interoperability of our data sets with other systems (M2M)
  • Make it easier to share and reuse our data sets
  • Provide a more consistent way to describe our data sets
  • Bring down the costs of describing and developing interfaces with other Government Entities
  • Work on some parts of making our data Findable – Accessible – Interoperable – Reusable (FAIR)

We understand that implementing support for this standard requires some development effort (possibly done in stages, building from minimal viable support up to full support). However, we believe that the benefits to the Alteryx Community worldwide, and to Alteryx as a top-quality data preparation tool, outweigh the cost.

 

I also expect the effort to be manageable (perhaps a macro will do as a start), given that the standard RDF syntax being used is similar to JSON.

 

DCAT, which stands for Data Catalog Vocabulary, is a W3C Recommendation for describing data catalogs in RDF. It provides a set of classes and properties for describing datasets, their distributions, and their relationships to other datasets and data catalogs. This allows data catalogs to be discovered and searched more easily, and it also makes it possible to integrate data catalogs with other Semantic Web applications. 

DCAT is designed to be flexible and extensible, so it can be used to describe a wide variety of data sets. It is also designed to be interoperable, so it can be used together with other vocabularies to create rich and interconnected descriptions of data and knowledge.

 

Here are some of the benefits of using DCAT:

  • Improved discoverability: DCAT makes it easier to discover and use data sets, as it provides a standard way of describing their attributes.
  • Increased interoperability: DCAT allows data catalogs to be integrated with other Semantic Web applications, making it possible to create more powerful and interoperable applications.
  • Enhanced semantic richness: DCAT provides a way to add semantic richness to data set descriptions, making it possible to describe them in a more detailed and nuanced way.

Here are some examples of how DCAT is being used:

  • The DataCite metadata standard uses DCAT to describe data catalogs.
  • The European Data Portal uses DCAT to discover and search for data sets.
  • The Dutch Government made it a mandatory standard for all Dutch Government Agencies.

As the Semantic Web continues to grow, DCAT is likely to become even more widely used.
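To give a sense of how small a first step could be, here is a rough sketch (Python, e.g. in a Python tool) that parses a tiny, made-up DCAT catalog serialized as Turtle using rdflib and emits one row per dataset. The sample data and the chosen fields are illustrative assumptions, not a proposed Alteryx interface:

```python
from rdflib import Graph, Namespace, RDF

DCAT = Namespace("http://www.w3.org/ns/dcat#")
DCT = Namespace("http://purl.org/dc/terms/")

# Tiny, made-up DCAT catalog in Turtle (illustrative only)
sample = """
@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix dct:  <http://purl.org/dc/terms/> .

<http://example.org/dataset/1> a dcat:Dataset ;
    dct:title "Monthly air quality measurements" ;
    dct:publisher <http://example.org/org/environment-agency> .
"""

g = Graph()
g.parse(data=sample, format="turtle")

# Emit one row per dataset: URI, title, publisher
for ds in g.subjects(RDF.type, DCAT.Dataset):
    print(ds, g.value(ds, DCT.title), g.value(ds, DCT.publisher))
```

A native Parse tool could expose the same idea through configuration rather than code, mapping DCAT classes and properties to output fields.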

 

DCAT

 

RDF

 

 

Idea: Allow the user to set the data type including character field width in the Text Input tool.

 

The Text Input tool currently auto-senses the type and width of each field. However, this sometimes restricts the usage of the data downstream.

 

Examples:

1 - I often run into the situation where I've copied some data from a Browse tool and pasted it as the input to a new workflow, which I then turn into a macro. But then I run into an issue where the data that comes into the macro is wider than the original width in the Text Input tool. This causes problems.

 

2 - The tool senses that a field containing zip codes should be numeric and converts the data. This corrupts the data and forces me to insert a Select/Formula tool combination to pad the zeros back onto the left.

I would love a tool to be created for looking up a value in a table based on a condition; it could be called "Lookup". One input to the tool would be the lookup list, the other the main database. Inside the tool you could enter functions that query the lookup table and return the results either as an overwrite of an existing field in the main database or as a new field, similar to the options in the Multi-Row Formula tool.

 

Here is a link to my post in the Community that explains the problem. The solution, in a nutshell, was to create a Join (which resulted in millions of additional rows), run the conditional formula, then filter out the millions of rows created by the Join so that only those that met the condition (the original database rows) remained.

 

Here is the text of my Community post describing my project (slightly modified for clarity):

 

Table 1:  A list of Pay Dates (the lookup table)

Table 2:  Daily timekeeper data with Week Start and Week End Date fields.

 

The goal:  To find the Pay Date in Table 1 that is greater than the Week Start Date in Table 2 and no more than 13 days after the Week End Date in Table 2.

 

[Table 2: Week Start Date] < [Table 1: Pay Date]

and [Table 2: Week End Date] < [Table 1: Pay Date]

and DateTimeDiff([Table 1: Pay Date], [Table 2: Week End Date], 'Days') <= 13
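As a sketch of both the proposed tool's logic and today's workaround, here is a rough pandas version of the cross-join-then-filter approach; the column names follow the description above and the data is made up:

```python
import pandas as pd

# Made-up sample data matching the two tables described above
pay_dates = pd.DataFrame({
    "Pay Date": pd.to_datetime(["2023-01-13", "2023-01-27", "2023-02-10"]),
})
timekeeping = pd.DataFrame({
    "Timekeeper": ["A", "B"],
    "Week Start Date": pd.to_datetime(["2023-01-02", "2023-01-16"]),
    "Week End Date": pd.to_datetime(["2023-01-08", "2023-01-22"]),
})

# Today's workaround: cross join every row, then filter on the conditions
joined = timekeeping.merge(pay_dates, how="cross")
mask = (
    (joined["Week Start Date"] < joined["Pay Date"])
    & (joined["Week End Date"] < joined["Pay Date"])
    & ((joined["Pay Date"] - joined["Week End Date"]).dt.days <= 13)
)
result = joined[mask]
print(result)
```

A dedicated Lookup tool could apply the same conditions without materialising the full cross join first.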

 

There are many different workflows in which I could use this type of tool; it would save time and simplify the flow.

Thanks!

It would be absolutely marvellous if the ability to use a field as the replacement value could be incorporated into the RegEx tool. Currently the "Replacement Text" field is a hardcoded text value, so to make it dynamic you have to wrap the tool in a batch macro and feed in the value as a Control Parameter. If we could just select a field to use as the replacement value, that would be spiffy.
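For anyone needing this today without a batch macro, a rough per-row equivalent in a Python tool might look like the following (the column names and pattern are just examples):

```python
import re
import pandas as pd

# Example data: the replacement value differs per row
df = pd.DataFrame({
    "Text": ["Order 123 shipped", "Order 456 shipped"],
    "Replacement": ["ABC", "XYZ"],
})

# Replace the first run of digits with the value from the Replacement field
df["Result"] = df.apply(lambda r: re.sub(r"\d+", r["Replacement"], r["Text"]), axis=1)
print(df)
```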

 

mceleavey_0-1647001959072.png

 

M.

Hi all,

 

The Salesforce Input tool is great, but it has some really bad limitations when it comes to reports.

I think there are 2 main limitations:

 

A - It can only consume 2,000 rows due to the REST API limitation. There are plenty of articles about it in the Community.

B - Long strings such as text comments are cut off after a certain number of characters.

 

Thanks to this great article: https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Salesforce-Input-Tool-amp-Going-Beyond... , I had the idea of exporting to a CSV file and then importing the data into Alteryx.

I've done it using two consecutive Download tools. The first download is used to get the session ID and the second to export a report to a CSV file in the temp folder. This temp file can then be read using a Dynamic Input workflow.
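A rough Python equivalent of that two-step pattern is sketched below. The token call uses Salesforce's standard OAuth2 username-password flow, but the report-export URL and its parameters are placeholders that vary by org and API version - treat them as assumptions rather than a documented endpoint:

```python
import io
import requests
import pandas as pd

# Step 1: obtain an access token (OAuth2 username-password flow; credentials are placeholders)
auth = requests.post(
    "https://login.salesforce.com/services/oauth2/token",
    data={
        "grant_type": "password",
        "client_id": "<consumer-key>",
        "client_secret": "<consumer-secret>",
        "username": "<user>",
        "password": "<password+security-token>",
    },
).json()

# Step 2: export the report as CSV (URL and query parameters are illustrative placeholders)
report_id = "<report-id>"
resp = requests.get(
    f"{auth['instance_url']}/{report_id}",
    params={"csv": "1", "exp": "1", "enc": "UTF-8"},
    headers={"Authorization": f"Bearer {auth['access_token']}"},
)

# Read the CSV straight from memory instead of a temp file
df = pd.read_csv(io.StringIO(resp.text))
```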

 

Long story short, I think Alteryx should upgrade the Salesforce connector to make it more robust and usable. Using the export-to-CSV feature should enable Alteryx to be fully compatible with Salesforce reports.

 

Regards,

The .bak file that is automatically created (and re-created if deleted) really clutters up our folders.

Please allow us to either turn it off or specify a different location to hold our backup files.

Thanks

When creating a connection using DCM (example being ODBC for SQL) - the process requires an ODBC Data Source Name (see screenshot 1 below).

However, when you use the Alias Manager (another way to make database connections), it does allow for DSN-free connections, which are essential for large enterprises (see screenshot 2 below).

 

NOTE: the connection manager screens do have another option - Quick Connect - which seems to allow for DSN-free connections, but it is non-intuitive, and you're asked to type in the name of the driver yourself, which seems to be an obvious failure point (especially since the list of all installed drivers can be read straight from the registry).

 

Please could we change DCM to use the same interfaces/concepts as the alias screens, so that all DCM connections can easily be created without requiring an ODBC DSN, and so that DSN-free connections are the default mode of operation?
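For reference, a DSN-free connection is one where everything lives in the connection string rather than in an ODBC Data Source Name registered on the machine - roughly like this (driver name, server and database are placeholders):

```python
import pyodbc

# DSN-free: driver, server and database are spelled out in the connection string itself,
# so nothing has to be pre-registered in the Windows ODBC Data Source Administrator.
conn_str = (
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=myserver.example.com;"
    "Database=Sales;"
    "Trusted_Connection=yes;"   # or UID=...;PWD=... for SQL authentication
)
conn = pyodbc.connect(conn_str)
print(conn.cursor().execute("SELECT 1").fetchone())
```

That is what the Alias Manager already supports and what DCM should offer as the default.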

 

 

 

Screenshot 1: DCM connection:

SeanAdams_0-1685360285460.png

 

Screenshot 2: Alias Manager connection:

SeanAdams_2-1685360473900.png

 

cc: @wesley-siu  @_PavelP @ToddTarney 

 

 

Hello!
I appreciate this is a very underused element of Alteryx functionality; however, I have noticed a few issues with field descriptions.

 

Firstly, if you set a description on a field within a select tool:

TheOC_0-1681228654695.png



and then attempt to clear the description later in the workflow (in another Select tool), you cannot. When you delete the description, it reverts to the original value (in this case, 'test'):

TheOC_1-1681228698380.png


This can be easily recreated, and it also applies to yxdb outputs that contain field descriptions. In that scenario, you cannot go back to the previous Select tool and remove the description. The closest you can come to easily clearing the description is replacing it with a space ' '.

 

As a secondary issue, the Score tool currently removes field descriptions and overrides the source. For example, if I open the Score tool example workflow and add a Select tool/description:

TheOC_10-1681229323907.png

 


You can see the meta data going into the score tool:

TheOC_8-1681229240520.png

 

But unfortunately the output of the tool looks like:

TheOC_9-1681229254843.png

 

This shows that it has completely removed the descriptions and also replaced all of the 'source' information. My suggestion would be that it not replace the source information or the descriptions.

 

 

Thirdly - and quite a niche issue - an int64 field specifically will break when the description differs between the data and the model.

Again, this is easy to recreate within the Score tool example workflow. Apply a Select tool to both streams, setting 'First_Years' to an int64. Within the bottom stream (the model creation), set a description - in this case, 'test':

TheOC_11-1681229464488.png

 

Make sure to leave the top stream's description blank.

Run the workflow and observe the error:
Error: Score (106): Score: The variable testFirst_Years is missing from the input data stream.
Interestingly, the Score tool seems to be using the description as part of the field name, which causes issues when the descriptions differ. My suggestion would be that it not use descriptions at all.

 

Kind Regards,

Owen

Hello, 

 

This is one thing that my OCD cannot cope with. 

 

Some tools, like the Union tool, allow you to 'Ignore warnings', for example when fields are missing.

 

Some other tools, however, don't give that option - the DateTime tool, for instance. Sometimes I feel like yelling at Alteryx: "I know that field already exists! I want to change it!" Or the Join tool, when you join on a double.

 

I know that these warnings don't really affect anything, and they may be useful to highlight something that might be best changed, but pleeeeaaassee give us a tick box or something like the Union tool has, so we can ignore warnings. It makes my workflow messy.

 

(I'm on designer v 2021.1 btw, so if this has already been done, then please ignore my rant. 😁 )

 

Thanks

 

Edit: What I'm talking about 

Rags1982_0-1655908955080.png

 

Please add the ability to globally, within a module, forget all missing fields.

The Cross Tab tool replaces any non-alphanumeric characters in column names with underscores. It would be helpful to keep the original values as column names (or to have the option to toggle whether or not special characters are replaced with underscores).

 

This is often an issue for reporting and for dynamically populated app inputs (e.g. drop-downs), where we need to retain the special characters.

 

For example, say I have the following dataset: 

kelly_gilbert_0-1593130247986.png

 

Currently, the crosstab tool produces this:

kelly_gilbert_1-1593130318568.png

 

I would like this:

kelly_gilbert_2-1593130621660.png

 

There are currently (somewhat cumbersome) workarounds, such as adding an extra row with the original names and then using a Dynamic Rename tool to rename the columns, but it would be great to be able to use the data straight out of the Cross Tab tool!
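For context, the mangling is effectively a substitution like the one below, and the Dynamic Rename workaround amounts to mapping the mangled names back to the originals (illustrative Python, not the tool's actual code):

```python
import re

original = ["Store #12", "Price ($)", "Region/Zone"]

# What the Cross Tab tool effectively does to column names today
mangled = [re.sub(r"[^0-9A-Za-z]", "_", name) for name in original]
print(mangled)   # ['Store__12', 'Price____', 'Region_Zone']

# The Dynamic Rename workaround: map mangled names back to the originals
rename_map = dict(zip(mangled, original))
```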
