
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

Many software and hardware companies take a very quantitative approach to driving their product innovation: they establish a standard baseline of how the product is used today, compare that to how the new version solves the same problem, and measure the improvement over time.

 

For example:

- Database vendors have been doing this for years using TPC benchmarks (http://www.tpc.org/), where a FIXED set of tasks is agreed as a benchmark and the vendors then iterate year over year to improve performance against it

- Graphics card / GPU companies have used benchmarks for years (e.g. TimeSpy, Cinebench, etc.)

 

How could this translate for Alteryx?

- Every year at Inspire we hear the stat that 90-95% of the time taken is data preparation

- We also know that the reason for buying Alteryx is to reduce the time and skill level required to achieve these outcomes - again, as reinforced by the message that we're driving towards self-service analytics and citizen data analytics.

 

The dream:

Wouldn't it be great if Alteryx could say: "In the 2019.3 release - we have taken 10% off the benchmark of common tasks as measured by time taken to complete" - and show a 25% reduction year over year in the time to complete this battery of data preparation tasks?

 

One proposed method:

  • Take an agreed benchmark set of tasks / data / problems / outcomes, based on a standard data set - these should include all of the common data preparation problems that people face like date normalization; joining; filtering; table sync (incremental sync as well as dump-and-load); etc.
  • Measure the time it takes users to complete these data-prep/ data movement/ data cleanup tasks on the benchmark data & problem set using the latest innovations and tools
  • This time then becomes the measure - if it takes an average user 20 minutes to complete these data prep tasks today, and in the 2019.3 release it takes 18 minutes, then we've taken 10% off the cost of the largest piece of the data analytics pipeline (see the sketch below).
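
To make the measurement concrete, here is a minimal Python sketch of how the headline number could be computed - illustrative only, with the release labels assumed and the 20/18 minute figures taken from the example above:

# Average minutes for a panel of users to complete the fixed benchmark task set,
# keyed by release (labels are assumed for illustration; times match the example above).
benchmark_minutes = {
    "2019.2": 20.0,  # baseline release
    "2019.3": 18.0,  # new release
}

def improvement_pct(baseline: float, current: float) -> float:
    """Percentage reduction in time to complete the benchmark."""
    return (baseline - current) / baseline * 100.0

print(improvement_pct(benchmark_minutes["2019.2"], benchmark_minutes["2019.3"]))
# -> 10.0, i.e. "we have taken 10% off the benchmark of common tasks"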

 

What would this give Alteryx?

This could be very simple to administer; and if done well it could give Alteryx:

- A clear and unambiguous marketing message that they are super-focussed on solving for the 90-95% of your time that is NOT being spent on analytics, but rather on data prep

- It would also provide focus to drive the platform in the direction of the biggest pain points - all the teams across the platform can then rally around a really deep focus on the user and accelerating their "time from raw data to analytics".   

- A competitive differentiation - invite your competitors to take part too just like TPC.org or any of the other benchmarks

 

What this is / is NOT:

  • This is not a run-time measure - i.e. this is not measuring transactions or rows per second
  • This should be focussed on "Given this problem; and raw data - what is the time it takes you, and the number of clicks and mouse moves etc - to get to the point where you can take raw data, and get it prepped and clean enough to do the analysis".
  • This should NOT be a test of "Once you've got clean data - how quickly can you do machine learning; or decision trees; or predictive analytics" - as we have said above, that is not the big problem - the big problem is the 90-95% of the time which is spent on data prep / transport / and cleanup.

 

There are loads of ways this could be administered - the starting point is to agree to drive this quantitatively against a fixed benchmark of tasks and data.

 

@LDuane ; @SteveA ; @jpoz ; @AshleyK ; @AJacobson ; @DerekK ; @Cimmel ; @TuvyL ; @KatieH ;  @TomSt ; @AdamR_AYX ; @apolly 

 

 

 

 

As of Alteryx version 2020.3, the Browse tool no longer shows a profile of the complete dataset (profiling is capped once the cumulative record data size reaches 300 MB).

 

My proposed solution is an optional override of the record size limit on the Browse tool (which will make the profiling take longer, but will actually profile the entire dataset). I would also like a general user setting to set the default behavior of the Browse tool to either limited or unlimited.

 

Below is the newly included documentation of the Data Profiling Limit, which I'm proposing can be overridden.

 

 

Data Profiling Limit
Data Profiling in the Browse tool is capped at 300 MB. This allows you to process very large datasets faster. For each record in the incoming dataset, we process the record and add the record size to a counter. Once the counter reaches 300 MB, we stop processing records.

It is important to note that there is no specific number of records that we can process. This depends on the dataset since a record size can range from 1 byte to a few thousand bytes. This record size is different from the file size, displayed in the Results grid and Data Profiling Holistic View. The file size is generally different since it has been compressed to optimize spacing.

In other words, 300 MB of record size is not the same as 300 MB of file size.
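
To illustrate the capping behaviour that documentation describes, here is a minimal Python sketch of my reading of it (not Alteryx source code), assuming each record is a bytes-like row:

PROFILING_LIMIT_BYTES = 300 * 1024 * 1024  # 300 MB of cumulative *record* size, not file size

def records_to_profile(records):
    """Yield records for profiling until the running total of record sizes hits the cap."""
    total = 0
    for record in records:            # record is assumed to be a bytes-like row
        total += len(record)          # add this record's size to the counter
        if total >= PROFILING_LIMIT_BYTES:
            break                     # stop profiling; remaining records are ignored
        yield record

# With large records this can stop after only ~2% of an 855,085-row dataset,
# which is why the profile's sums and counts can look far too small.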

 

 

 

This new limit can cause confusion when looking at the data profile (e.g. if you expect the sum to be $3 million, but the Browse tool is only profiling 2% of your total records, the profile sum may only show $60 thousand).

 

The sampled version with a 300 MB cutoff is rarely useful if you are using Browse tools to get a quick sense of the variable profiles on medium-sized datasets (around 1 million records), since these rarely fit within the 300 MB record size limit.

 

An example is shown in the image below, where the dataset contains 855,085 records, but the Browse tool is profiling only the first 20,338.

 

alteryxExample1.png

 

Again, being able to override this 300 MB record size limit would fix the problem created by the 2020.3 change to the Browse tool.

 

 

 

I'm sure there's a reason behind it, but can we please be allowed to run calculations on null values in a Formula tool? Right now, summing three values (1 + 3 + [Null]) produces [Null]. Can the Formula tool just ignore the null values? The only way around this is to fill the [Null] cells with a value, which adds an extra step to what should be a fairly straightforward process. That fill value would also have to differ between a multiplication formula and an addition formula to avoid materially changing the answer, whereas ignoring the value is a more consistent solution.
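
To make the two behaviours concrete, here is a minimal Python sketch (not Alteryx formula syntax), with None standing in for [Null]:

values = [1, 3, None]  # None stands in for Alteryx [Null]

# Current behaviour: any Null makes the whole result Null.
propagating_sum = None if any(v is None for v in values) else sum(values)

# Requested behaviour: ignore Nulls and sum whatever is present.
ignoring_sum = sum(v for v in values if v is not None)

print(propagating_sum)  # None
print(ignoring_sum)     # 4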

When moving a tool container, all of the tools within it become mis-aligned with the canvas grid.  Moving any single tool immediately re-aligns it to the grid, which puts it out of alignment with the rest of the tools in the container.

 

Example:  Put 3 tools in a row in a tool container, all aligned horizontally.  Next, move the container.  Now, move the middle tool, then try to place it back in alignment with the other two.  You won't be able to, because they are out of alignment with the canvas grid.

 

Please fix this.  

When using the Text Mining tools, I have found that a template is only applied to document pages that have the same page number as the template.

 

So in my use case I've got a PDF file with 100+ claim statements which are all laid out the same (one page per statement). When setting up the template I used one page to set the annotations, and then input this into the T anchor of the Image to Text tool. Into the D anchor of this tool is my PDF document with 100+ pages. However when examining the output I only get results for page 1.

 

On examining the JSON for the template I can see that there is reference to the template page number:

cgoodman3_0-1604393391514.png

 

Playing around with a Generate Rows tool and a formula to replace the page number with pages 1-100 in the JSON doesn't work. I then discovered that if I change the page number on the image input side, I get the desired results.

 

cgoodman3_1-1604393499357.png

However, since I suspect this is a common use case for the Image to Text tool, an improvement would be to add an option in its configuration to apply the same template to all pages.

 

cgoodman3_4-1604393738275.png

 

 

 

 

 

Add Unicode category to the cleansing tool

DELETE FROM Source_Data WHERE ID IN
(SELECT ID FROM My_Temp_Table WHERE FLAG = 'Y');

 

.... 

 

Essentially, I want to update a DB table with either an update or the deletion of rows. I can't delete all of the data. My workaround will be to create/insert into a table the keys that I want to delete, and then use an Input/Output tool with SQL that performs the delete. Any other suggestions are welcome, but a tool is best.
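
As an illustration only (not an existing Alteryx tool), the same keyed delete could be scripted, for example from the Python tool, assuming a hypothetical ODBC DSN called MyDSN and the two tables named in the SQL above:

import pyodbc

# "MyDSN" is a placeholder DSN; Source_Data and My_Temp_Table match the SQL above.
conn = pyodbc.connect("DSN=MyDSN")
cursor = conn.cursor()

# Delete only the rows whose keys were staged in the temp table.
cursor.execute(
    "DELETE FROM Source_Data "
    "WHERE ID IN (SELECT ID FROM My_Temp_Table WHERE FLAG = 'Y')"
)
conn.commit()
conn.close()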

 

Thanks,

Mark

There is a duplicated action in the Table tool that forces the user to decide the decimal places again.

 

In the normal situation, all of the data preparation has been completed prior to the Table tool; we just want to leverage this tool to format the header or apply conditional formatting. However, once the Table tool is connected, we have to re-configure the decimal places for all of the numeric columns. The column names vary from year to year, which brings additional manual intervention into the workflow.

 

We recommend providing the flexibility to take the original upstream data source as-is, without changing the underlying data set.

There is a lack of tools in Alteryx to extract data from true (text-based) PDFs. The current set of Computer Vision tools only allows us to extract data from images, which is not ideal for true PDF documents in terms of accuracy.

Please add a configuration option to the Redshift bulk load to EITHER use access keys OR an IAM EC2 role for access.

 

We should not have to specify access keys when we are in an IAM enabled environment.

 

Thanks

As each version of Alteryx is rolled out, it would be much easier for our users and admin team to validate the new version if Alteryx allowed parallel installs of multiple versions of the software.

 

So - our team is currently on 11.3 - if we could roll out 11.5 in parallel then we could very easily allow users to revert to 11.3 if there are issues, or else remove 11.3 after 2-3 weeks if no issues.

The same goes for versions which are in BETA.

 

This would be a huge help!

 

cc: @avinashbonu ; @Deeksha ; @revathi

It can be daunting to find the tool that is currently being processed by the engine in workflows that contain hundreds of tools with many ins, outs, and branches. During runtime, I want to be shown the tool that is running on the canvas. This functionality should be in the form of a button or something to direct focus to that area. It should not be the default.

To avoid some errors occurring during an upgrade or even an installation, it would be great to add an option in the installer to go with a fresh installation (removing any previous Alteryx Designer).

 

If selected, the option would:

- Warn users that everything Alteryx related is going to be deleted

- Generate a log of what is going to be removed

- Rename folders and registry keys listed there: https://community.alteryx.com/t5/Alteryx-Designer/Complete-Uninstall-of-Alteryx-Designer/ta-p/402897

(rename instead of delete to avoid "bad surprises")

 

A similar option could exist when one would like to uninstall Alteryx Designer.

 

This would remove the frustration of having to rely on a "white knight" when something happens in the middle of an upgrade or an installation.

 

Thanks,

 

PaulN 

My team uses a shared macro repository (say F:\AlteryxMacros), and we recently ran into an issue with the default save location for macros. While we save most macros to our repository, there are times when people save their macros elsewhere (let's say C:\MyAwesomeWorkflow). The issue we've encountered is that if you go to File >> Save As with a macro, it will ALWAYS default to the macro repository, even when the macro is currently saved elsewhere (C:\MyAwesomeWorkflow). Speaking for a friend: people have accidentally saved things to the macro repository, or they waste time navigating from the macro repository back to their current folder.

 

If a macro is already saved somewhere, please change File >> Save As to default to the macro's current folder. Thanks!

Alteryx doesn't support querying tables within Apache Ignite via the Ignite ODBC connector. Since Ignite is an in-memory database, better ODBC connectivity between Ignite and Alteryx would be a big help.

 

https://apacheignite-sql.readme.io/docs/overview 

Right now, in order to pass a parameter to pre/post SQL, we need to build a macro as a workaround.

It would be really great if this were a native capability of the Output tool, so we don't have to replicate all the Output tool fields as macro inputs.

Here is the issue I have: when you are using a Join tool and you are joining on multiple columns (to the point that they don't all show in the Configuration window), I tend to use the mouse scroll wheel to move down to see the additional columns I am joining on. The mouse scroll controls different things depending on where your cursor is. If your cursor is over the Left or Right columns, the scroll wheel changes the fields you are joining on. I have messed up more workflows than I care to mention because of this. I do not think it is appropriate for the scroll wheel to change the fields in the Configuration window; it should only be used to scroll up and down within the window.

 

Ryan_Myers_0-1616702929504.png

 

I use a mouse which has a horizontal scroll wheel. This allows me to quickly traverse the columns of excel documents, webpages, etc.

 

This interaction is not available in Alteryx Designer and when working with wide data previews it would improve my UX drastically. 

Please support GZIP files in the input tool for both Designer and Server.

 

I get several large .gz files every day containing our streaming server logs. I need to parse and import these using Alteryx (we currently use Sawmill). Extracting each of these files would take a huge amount of space and time.

 

This was previously requested and marked as "now available", but what is available only addresses a small part of the request. First, that request was for both ZIP and GZIP; what is now available is only ZIP. Second, it requested both input and output; what is now available is input only. Third, while not explicitly stated in the request, it needs to work in Alteryx Server so it can be scheduled on a daily basis.
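
Until native .gz support exists, here is a minimal sketch of an interim workaround (for example via the Python tool), assuming a hypothetical tab-delimited log file name; it streams the compressed log without extracting it to disk:

import gzip

log_path = "streaming_server.log.gz"  # hypothetical file name for illustration

# Read the compressed log line by line; nothing is written out uncompressed.
with gzip.open(log_path, "rt", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        fields = line.rstrip("\n").split("\t")  # adjust the delimiter to the log format
        # ... parse and clean the fields here before handing rows back to the workflow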

 

It would be very helpful to have an option to output the workflow as a step-by-step document, so someone who does not have access to Alteryx can understand the steps taken to create the flow, and hence the result or output.
