
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

I've obviously been doing lots of work with APIs, as this is my second idea posted today that relates to an improvement based on recent API work, but I believe this one is wider reaching.

 

I've been using Alteryx now for over 4 years and always assumed this behaviour of the Select tool was implicit, so as best practice I would add a Select tool after input tools to catch any data type issues. However, I discovered that only fields where you change the data type, length, or field name have that configuration written and subsequently enforced. I discovered this during API development, where I had an input field that was a string, e.g. 01777777. Placing a Select tool after this shows a string data type; however, if the input changed to 11777777, the Select tool changed to a numeric data type, so downstream formulas such as concatenating two strings would fail.

 

The workaround is to change the Select tool to string:forced, which is fine when you know about it, but I suspect a large majority of users don't. Also, if you have something like 2022-01-26, which is initially recognised as a string, the forced option will be string:forced; if you wanted date:forced, you would need a first Select tool to change the type to date and a second Select tool to set it to date:forced.

 

Therefore my suggestion is to add a checkbox option in the Select tool to "Force all field types", which would update the tool's XML and ensure that what I had assumed was implicit behaviour is actually implemented.
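The same pitfall exists in any tool that sniffs types from sample values. A minimal Python/pandas analogy of inferred vs. forced types (the column name is made up):

```python
import io
import pandas as pd

# Type inference guesses from the values it sees: "11777777" parses as a
# number, so string operations downstream would fail.
inferred = pd.read_csv(io.StringIO("account\n11777777\n"))
print(inferred["account"].dtype)  # int64, concatenation breaks

# Forcing the type up front (the equivalent of string:forced) pins the
# schema no matter what the data looks like on a given run.
forced = pd.read_csv(io.StringIO("account\n11777777\n"),
                     dtype={"account": str})
print(forced["account"].dtype)  # object (string)
```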

 

 

Hi there Alteryx team,

 

When we load data from raw files into a SQL table, we use the pattern below in almost every single loader, because the "Update, insert if new" functionality is so slow: it cannot take advantage of SSVB, it does not do deletes, and it doesn't check for changes in the data, so your history tables get polluted with updates that are not real updates.

 

This pattern below addresses these concerns as follows:

- You explicitly separate out the inserts by comparing to the current table, and use SSVB on the connection, thereby maximizing speed.

- Rows that no longer exist in the source are deleted, and the history table keeps the history.

- Finally, rows that exist in both source and target are checked for data changes and only updated if one or more fields have changed.

 

Given how commonly we have to do this (on almost EVERY data pipe from files into our database), could we look at making an Incremental Update tool in Alteryx to make this easier? This is common functionality in other ETL platforms and would be a great addition to Alteryx.

 

 

[Image: incremental update workflow pattern]
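Until such a tool exists, here is a minimal sketch of the splitting logic in Python/pandas, assuming a single key column and identical schemas in source and target (names are illustrative):

```python
import pandas as pd

def incremental_split(source: pd.DataFrame, target: pd.DataFrame, key: str):
    """Split rows into inserts, deletes, and genuine updates."""
    merged = source.merge(target, on=key, how="outer",
                          suffixes=("_src", "_tgt"), indicator=True)

    inserts = merged[merged["_merge"] == "left_only"]   # new rows: bulk insert
    deletes = merged[merged["_merge"] == "right_only"]  # gone from source: delete
    both = merged[merged["_merge"] == "both"]

    # Only count a row as an update if at least one non-key field changed,
    # so the history table is not polluted with no-op updates.
    value_cols = [c for c in source.columns if c != key]
    changed = both[[f"{c}_src" for c in value_cols]].ne(
        both[[f"{c}_tgt" for c in value_cols]].to_numpy()).any(axis=1)
    updates = both[changed]

    return inserts, deletes, updates
```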

 

Hi,

 

I appreciate this could be a repeat of an existing topic, but I'm not able to find it easily if it is.

 

I want to locate a column in my Results window with a simple search. I'd like to search for a column name, be shown the potential matches, and then select one to jump straight to it in the Results window. With a lot of columns it's painful to keep scrolling to find the one you're after:

 

[Image: Results window with many columns]

 

All the best,

BS

 

Please update the Render tool to allow users to name the Excel sheet for the output. Alteryx currently errors when using the same naming convention that works in the normal Output tool.

It would be great if the Formula tool could extend IntelliSense to the Select Column box. For example, I could start typing in the Select Column box and it would whittle down the list of fields. Suppose I wanted to update field 79A: I could type 7 and it might show something like

7

17

27

37

70

71

79A

79B

 

So if I then typed 79, it would further reduce it to

79A

79B

 

And I could select 79A.
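A sketch of the matching behaviour described above, as a simple contains-filter over the field list:

```python
fields = ["7", "17", "27", "37", "70", "71", "79A", "79B"]

def suggest(typed: str) -> list[str]:
    # Keep only fields whose name contains what has been typed so far.
    return [f for f in fields if typed in f]

print(suggest("7"))   # ['7', '17', '27', '37', '70', '71', '79A', '79B']
print(suggest("79"))  # ['79A', '79B']
```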

 

[Image: Formula tool Select Column box]

 

Having the open/close (expand/collapse) button for the Tool Container in the top right corner means that every time a big container is expanded, the user has to move the pointer to the button's new position to close it, which sometimes means scrolling or zooming out and then zooming back in to locate it.

I suggest moving that button to the top left corner, beside the enable/disable switch, or even adding a double-click mechanism for open/close. That would let the user open a container, see what is inside, and close it again without moving the mouse to find the button's new location.

 

 

We now have the ability to output to an ESRI File Geodatabase, which is great, but it only allows output in the WGS84 coordinate system. I would like the same ability to export to other projections or coordinate systems, as in the ESRI Shapefile or ESRI Personal Geodatabase output tools (we specifically need NAD83, but I'm sure others would like other options as well).
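As a stopgap outside Alteryx, the reprojection itself is straightforward; a sketch using pyproj (WGS84 is EPSG:4326, NAD83 is EPSG:4269; the coordinates are just an example):

```python
from pyproj import Transformer

# WGS84 (EPSG:4326) -> NAD83 (EPSG:4269); always_xy keeps lon/lat order.
to_nad83 = Transformer.from_crs("EPSG:4326", "EPSG:4269", always_xy=True)
lon, lat = to_nad83.transform(-104.99, 39.74)
print(lon, lat)
```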

 

 

Hi there,

 

Adam ( @AdamR_AYX ), Mark ( @MarqueeCrew) and many others have done a great job in putting together super helpful add-in macros in the CREW pack - and James ( @jdunkerley79 ) has really done an incredible job of filling in some gaps in a very useful way in the formula tools.

 

Would it be possible to include a subset of these in the core product as part of the next release?

I'm thinking of (but others will chime in here to vote for their favourite):

- Unique only tool (CReW)

- Field Sort (CReW)

- Wildcard XLSX input (CReW) - this would eliminate a whole category of user queries on the discussion boards (see the sketch after this list)

- Runner (CReW) - although this may have licensing issues since many people don't have command-line permission; Alteryx really does need a smoother way to do chained dependency flows

- Date Utils (JDunkerley) - all of James's date utilities; again, these would immediately solve many of the support questions asked on the discussion forum
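For context on the wildcard XLSX item above, here is the whole job as a Python/pandas sketch (the folder path is hypothetical); it's easy to see why so many discussion-board questions reduce to it:

```python
import glob
import pandas as pd

# Read every matching .xlsx and stack them into a single table, which is
# the job the CReW wildcard input macro does inside Designer.
paths = glob.glob(r"C:\data\sales_*.xlsx")
combined = pd.concat((pd.read_excel(p) for p in paths), ignore_index=True)
```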

 

I think that these would really add richness & functionality to the core product, and at the same time get ahead of many of the more common queries raised by users.   I guess the only question is whether the authors would have any objection?

 

Thank you

Sean

Please enhance the Input tool with an option to test whether a file is present, and another to let the workflow pause for a definable period if the input file is locked by another user, then retry opening it. The pause should be definable as N seconds, and the number of retry iterations should also be definable, so you can limit how many times it attempts to open the file.
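A minimal sketch of the requested retry behaviour, with both knobs (pause length and attempt count) user-definable; names are illustrative:

```python
import time

def open_with_retry(path, attempts=5, pause_seconds=30):
    """Try to open `path`, pausing and retrying while it is locked."""
    for attempt in range(1, attempts + 1):
        try:
            return open(path, "rb")
        except OSError:  # covers locked or missing files on most platforms
            if attempt == attempts:
                raise  # give up after the configured number of tries
            time.sleep(pause_seconds)
```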

 

File presence should be something we can use to control workflow processing.

 

A use case would be a process that runs periodically, checks whether a file is there, and if so opens and processes it. If the file is not there, it either sleeps for a definable period before trying again, or simply ends the workflow without attempting to run any downstream tools that might otherwise throw errors trying to process a null stream.

 

An extension of this idea would be a separate tool that could evaluate a condition, such as a null stream, specific field content, or a file-not-found condition, and terminate the process without raising an error indicator; or it could be configurable, so you choose whether or not an error is raised.

 

Using this latter idea, the enhanced Input tool could pass a value downstream or generate a null data stream to the next tool. That next tool could then evaluate a condition, like a Filter tool, such as a null stream or a file-not-found indicator, and terminate processing per its configuration, either with or without a failure indicated, according to the user's wishes. Sometimes a file isn't there and I just want the workflow to stop without throwing errors; other times I want it to error out so I'm prompted to investigate. In other scenarios, my data goes through a filter or two, no data passes the last filter, and downstream tools still run and generally fail because they have no data to act on. I don't want that, since it may be perfectly valid that on a Sunday or holiday no data passes the filters.

 

Having meandered through this, to sum up: the ideal would be to enhance the Input tool to test file presence and pass that information to another tool that can evaluate it and control the workflow run accordingly. As a separate tool, though, it could be applied to a wider variety of scenarios and test a broader scope of conditions to decide whether to proceed or terminate the workflow.

 

This functionality would allow the user to select (through a highlight box, or Ctrl+click) only the tools in a workflow they want to run, and the tools that are not selected would be skipped. The idea is similar to the new "add selected tools to a new tool container", but it would run them instead.

 

I know the conventional wisdom is to either put everything you don't want run into a tool container and disable it, or to copy/paste the tools you want run into a blank workflow. However, for very large workflows it is very time consuming to disable a dozen or more containers only to re-enable them shortly afterwards, especially if those containers have to be created just to isolate the tools that need to run. Overall, this would be a quality-of-life improvement that could save the user time, especially with large or cumbersome workflows.

Scenario:

Upstream tools end in a Summarize tool that produces a set of records with the following fields: EmailAddress, AttachmentUNCPath. So you get a bunch of recipients with various attachments. Each recipient can have different attachments, and this changes each time it's run; in other words, it's fully dynamic.

 

If the same recipient has multiple attachments, then it would be nice to group the recipient and just separate the attachments with a semi-colon (or whatever) in the same field.  Essentially creating one record per recipient, and therefore one email per recipient, and having the Email Tool attach each file.  In other words, mbarone@paychex.com gets one email with 5 attachments.  And next week maybe only 3 attachments, and so on.  

 

Currently the only way I can see to accomplish this is with a batch macro.


It would be infinitely more convenient if the Email tool by default accepted multiple attachments in a field as long as they are separated by a semicolon, much as the "To" field does.
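The grouping step itself is simple; a sketch in Python/pandas with made-up addresses and paths, producing one row per recipient with semicolon-separated attachments:

```python
import pandas as pd

records = pd.DataFrame({
    "EmailAddress": ["a@example.com", "a@example.com", "b@example.com"],
    "AttachmentUNCPath": [r"\\srv\f1.pdf", r"\\srv\f2.pdf", r"\\srv\f3.pdf"],
})

# One record per recipient, attachments joined with semicolons: the shape
# the Email tool would consume if it accepted delimited attachment lists.
grouped = (records.groupby("EmailAddress")["AttachmentUNCPath"]
                  .agg(";".join)
                  .reset_index())
print(grouped)
```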

The Table tool duplicates effort by forcing the user to decide the decimal places again.

 

In the normal situation, all data preparation has been completed before the Table tool; we just want to use this tool to format the header or apply conditional formatting. However, once the Table tool is connected, we have to re-configure the decimal places for all the numeric columns, and since the column names vary from year to year, this adds manual intervention to the workflow.

 

We recommend providing the flexibility to take the original upstream data as-is, without changing the underlying data set.

Alteryx lacks tools to extract data from true (text-based) PDFs. The current tool set (Computer Vision) only lets us extract data from images, which is not ideal for true PDF documents in terms of accuracy.
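For reference, a true PDF carries a text layer that can be read directly, with no OCR involved; a sketch using the pdfplumber library (the filename is hypothetical):

```python
import pdfplumber

# A text-based PDF can be read directly, with far better accuracy than
# running computer vision over a rendered image of the page.
with pdfplumber.open("statement.pdf") as pdf:
    for page in pdf.pages:
        print(page.extract_text())        # raw text layer
        for table in page.extract_tables():
            print(table)                  # rows detected from ruling lines
```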

All too often, we build an Alteryx flow just to realise that step 8 out of 10 was wrong, so it's back to the beginning to rerun the entire thing. This is often tedious if your work requires a big data set.

 

There is a workaround using the downloadable Cache macro (though this requires quite a bit of fiddling with containers, disabling items, setting flags, etc.), but it would be good to let the user "restart from here", as you can with a PowerPoint slide deck. I appreciate this may be tricky, since Alteryx may be flushing data out of memory as it goes, so it cannot restart from an arbitrary point; but if we could put the workflow into a "testing cached mode" that caches data at each step, or let users set particular tools as breakpoints and cache at those points, that would help immensely.
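A sketch of the caching idea in Python (names are illustrative): each step's output is written to disk the first time, so a rerun resumes from the last good step instead of from scratch:

```python
import os
import pickle

def cached_step(name, compute, cache_dir="cache"):
    """Run compute() once; reuse its pickled result on later runs."""
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, f"{name}.pkl")
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)  # cache hit: skip the expensive work
    result = compute()
    with open(path, "wb") as f:
        pickle.dump(result, f)
    return result

# step8 = cached_step("step8", lambda: expensive_transform(step7_output))
```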

 

Thank you

Sean

 

Ok Alteryx, we totally love your product. And I've got a super quick fix for you. Why on earth would you autocomplete the ubiquitous tick mark as ReadRegistryString(Key, ValueName, DefaultValue='')?

[Image: autocomplete expanding a tick mark into ReadRegistryString]

I find myself in this situation constantly, where 'dummy' suddenly becomes 'dummyReadRegistryString('HKEY_LOCAL_MACHINE\SOFTWARE\SRC\Alteryx\4.1', 'InstallDir')' the moment I hit the Enter key.

Pls help, I don't ask for much.

Hello Alteryx Community,

If, like me, you've been developing in Alteryx for a few years, or if you find yourself as a new developer creating solutions for your organization, chances are you'll need to create some form of support procedure or automation configuration file at some point. In my experience, the foundation of these files is typically an explanation of what each tool in the workflow is doing and what transformations are being made to the data. They are typically laborious to create and often created in a non-standardized way.

 

The proposal: create native Alteryx Designer functionality to parse a workflow's XML and translate the tool configurations into a step-by-step Word document for a given workflow.

 

Although the expectation is that a user may still need to add contextual details around the logic afterwards, this proposal should eliminate a lot of the upfront work in creating these documents.

 

Understandably, some workflows may be very complex, but for a simple workflow like the one below, a proposed output could look like the following; and if annotations are provided at the tool level, the output could pick those up as well:

 

[Image: sample workflow]

Workflow Name: Sample

1) Text Input tool (1) - contains 1 row with data across columns test and test1. This tool connects to Select Tool (2).

2) Select Tool (2) - deselects "Unknown" field and changes the data type of field test1 to a Double. This tool connects to Output (3).

3) Output (3) - creates .xlsx output called test.xlsx 
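A .yxmd file is already XML, so a rough sketch of the parsing side is possible today. This assumes the usual Node/GuiSettings/Annotation layout of a workflow file and only prints tool names and annotations; a real implementation would also translate each tool's Configuration block into prose:

```python
import xml.etree.ElementTree as ET

tree = ET.parse("Sample.yxmd")

for step, node in enumerate(tree.getroot().iter("Node"), start=1):
    gui = node.find("GuiSettings")
    plugin = gui.get("Plugin", "") if gui is not None else ""
    tool_name = plugin.split(".")[-1] or "Unknown"  # e.g. TextInput, AlteryxSelect
    annotation = node.findtext(".//AnnotationText", default="").strip()
    line = f"{step}) {tool_name} ({node.get('ToolID')})"
    if annotation:
        line += f" - {annotation}"
    print(line)
```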

Hello!
Just another QOL change from me today. 
When building a workflow - just for fun sometimes I like to make mistakes. It's never by accident I promise 😎

 

Now, theoretically, if I did make a mistake and put a tool in the wrong place (or want to refactor, or move a Select earlier in the workflow, etc.), I would typically right-click, cut, connect around it, and then right-click the connection I want to paste onto. This works fine; however, some users are unaware of it, and it can still be a bit of a pain.

 

What would be really nice is if we could hold Ctrl and click/drag a tool to lift it out of its connections and move it. I have attempted to create a couple of GIFs to illustrate.

The current method of moving a tool within a workstream:

[Image: GIF of the current method]

 

What I'd love, if you could hold ctrl + drag:
[Image: GIF of the proposed Ctrl + drag]

 

Cheers!
Owen

It would be great if we could set the default size of the window presented to the user upon running an Analytic App. Better yet, give us the option to have it dynamically sized (auto-sized to the number of input fields required).

As of Alteryx version 2020.3, the Browse tool no longer shows a profile of the complete dataset: profiling is capped once the cumulative record data size reaches 300 MB.

 

My proposed solution is an optional override of the record-size limit on the Browse tool (which would make profiling take longer, but actually profile the entire dataset). I would also like a general user setting to set the default behaviour of the Browse tool to either limited or unlimited.

 

Below is the newly added documentation of the Data Profiling Limit, which I'm proposing should be overridable.

 

 

Data Profiling Limit
Data Profiling in the Browse tool is capped at 300 MB. This allows you to process very large datasets faster. For each record in the incoming dataset, we process the record and add the record size to a counter. Once the counter reaches 300 MB, we stop processing records.

It is important to note that there is no specific number of records that we can process. This depends on the dataset since a record size can range from 1 byte to a few thousand bytes. This record size is different from the file size, displayed in the Results grid and Data Profiling Holistic View. The file size is generally different since it has been compressed to optimize spacing.

In other words, 300 MB of record size is not the same as 300 MB of file size.
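To make the cap concrete: it is a running byte counter, not a record count. A sketch of the documented behaviour (the per-record sizing below is a rough stand-in for whatever Alteryx actually measures):

```python
def profile_sample(records, limit_bytes=300 * 1024 * 1024):
    """Yield records for profiling until their cumulative size hits the cap."""
    total = 0
    for record in records:
        if total >= limit_bytes:
            break  # everything past this point is excluded from the profile
        total += sum(len(str(value).encode()) for value in record)
        yield record
```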

 

 

 

This new behaviour can cause confusion when looking at the data profile (e.g. if you expect the sum to be $3 million, but the Browse tool is only profiling 2% of your total records, the profile sum may show only $60 thousand).

 

The sampled version with a 300 MB cutoff is rarely useful when you're using Browse tools to get a quick sense of the variable profiles on medium-sized datasets (around 1 million records), since these rarely fit within the 300 MB record-size limit.

 

An example is shown in the image below, where the dataset contains 855,085 records but the Browse tool profiles only the first 20,338.

 

[Image: Browse tool profiling only the first 20,338 of 855,085 records]

 

 

Again, being able to override this 300 MB record-size limit would fix the problem created by the 2020.3 change to the Browse tool.

 

 

 

When using the Text Mining tools, I have found that a template is only applied to document pages with the same page number as the template page.

 

In my use case I've got a PDF file with 100+ claim statements, which are all laid out the same (one page per statement). When setting up the template I used one page to set the annotations, and then fed this into the T anchor of the Image to Text tool. Into the D anchor of this tool went my PDF document with 100+ pages. However, when examining the output, I only get results for page 1.

 

On examining the JSON for the template, I can see that it references the template page number:

[Image: template JSON showing the page number]

 

Playing around with a Generate Rows tool and a formula to replace the page number with pages 1-100 in the JSON doesn't work. I then discovered that if I change the page number on the image input side, I get the desired results.

 

[Image: page number changed on the image input side]

An improvement to the tool, since I suspect this is a common use case for the Image to Text tool, would be an option in its configuration to apply the same template to all pages.

 

[Image: proposed configuration option]

 

 

 

 

 
