The Product Idea boards have gotten an update to better integrate them within our Product team's idea cycle! However, this update does have a few unique behaviors; if you have any questions about them, check out our FAQ.

Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!
Submitting an Idea?

Be sure to review our Idea Submission Guidelines for more information!

Submission Guidelines

Featured Ideas

I know y'all are working on data lineage for some future offering, and it is very much needed. For the highest quality results, please make logs a primary source of lineage information. Dynamic naming with some tools and macros means the names in the workflows are simple foobar placeholders that do not reflect what actually happened. Today, Connect doesn't use logs and leaves many lineage gaps because of this.

 

Please move this to a more appropriate category if needed. This future feature work is not part of Connect.

Dear Alteryx Solution Architects,

 

When we were implementing an analytics solution for a government client in the UAE, we came across a situation that required validation of supporting documents for data quality issues. While working on this challenge, we came to the conclusion that Alteryx has some limitations in incorporating attachments into workflows.

 

I would like Alteryx to come up with something to overcome this issue.

1. There should be a tool or technique to incorporate multiple attachments (I know Alteryx has a limited attachment feature, but it's not great).

2. There should be an option to visualize attachments in the Results window; if that's possible, it would be a great value add.

 

Thanks

Ajin

Hi all!

 

Based on the title, here's some background information: SHAPLEY Values

 

Currently, one way of doing so is to use the Python tool to write the script and install the package. However, this requires running Alteryx as an administrator in order to successfully load, test, and run the script. The problem is that a substantial number of companies do not grant their Alteryx teams full administrator privileges, as it would require admin credentials just to reopen Alteryx after closing it.
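
For illustration, here is a minimal sketch of what such a Python tool script could look like, assuming the shap and scikit-learn packages are installed; the input anchor and the "target" column are placeholder assumptions:

```python
# Minimal sketch, assuming shap and scikit-learn are installed and the
# incoming data on anchor #1 has a numeric placeholder "target" column.
from ayx import Alteryx  # Alteryx Python tool helper
import shap
from sklearn.ensemble import RandomForestRegressor

df = Alteryx.read("#1")                           # data from input anchor #1
X, y = df.drop(columns=["target"]), df["target"]  # "target" is illustrative

model = RandomForestRegressor(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Summary plot of per-feature SHAP values, similar to the heat-map idea below
shap.summary_plot(shap_values, X)
```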

 

I am aware that there is a macro covering SHAP, but I've recently tested it and it did not work as intended; plus, it covers only non-categorical values as determinants, thereby requiring a conversion of categorical variables into numeric or binary categories.

 

It would be nice to have a built-in Alteryx ML tool that does this analysis and produces a graph, akin to a heat map, that showcases the values like below:

[image: caltang_0-1680442322684.png (example SHAP value plot)]

 

By doing so, it adds more value to the ML suite and actually helps convince companies to get it.

 

Otherwise, teams will just use Python and be done with it, leaving Alteryx as only the clean-up ETL tool. That leaves much to be desired and can leave some teams hanging.

 

I hope for some consideration on this - thank you.

 

Alteryx currently shows 100% in the profiling of spatial fields in the Results window, regardless of whether there are rows with missing spatial features. I opened a ticket about this and was told it is expected behavior.

 

Therefore, I submit the idea that the profiling for spatial fields should give an accurate profile of the field: if there are nulls in the field, it should identify that the column isn't 100% OK and show the % of records that have null values, like the profiling does for every other column in workflows.
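
As a tiny illustration of the expected behavior, here is how that percentage could be computed; a pandas sketch with made-up values:

```python
# Hypothetical sketch: percent of non-null records in a spatial column,
# which is what the profiling should report instead of a flat 100%.
import pandas as pd

df = pd.DataFrame({"SpatialObj": ["POINT(1 2)", None, "POINT(3 4)"]})
pct_ok = 100 * df["SpatialObj"].notna().mean()
print(f"{pct_ok:.1f}% OK")  # 66.7%, not 100%
```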

[image: 1.JPG (Results window profiling screenshot)]

Thank you!

We have the Browse tool, which connects to one output at a time. To be more efficient, I would like a Browse tool that could connect to two or three outputs with one icon: for example, to the True and False outputs of a Filter tool, or to the L, J, and R outputs of a Join tool.

 

Please add a data validator workflow.

 

Suggested features would be the following:

1. Add a validation name and set the field(s) of your data that you want to validate. (There can be more than one validation name in one workflow.)

2. For the selected validation (name), add features that will check/validate the information below:

   A. Verify data type
   B. Contains Null
   C. Max and Min string length
   D. Allowed values only, else it will give you an error
   E. Regex expected to match and not allowed to match.

3. It can have two (2) outputs: True (which is a match) and False (which is failover/error), as sketched below.
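
To make the proposal concrete, here is a rough Python sketch of the intended logic; the rule names, fields, and sample data are purely illustrative assumptions:

```python
# Rough sketch of the proposed validator (names and rules are illustrative).
import pandas as pd

rules = {
    "Status": {"allowed": {"Active", "Inactive"}},        # check D
    "Email":  {"not_null": True,
               "regex": r"^[^@\s]+@[^@\s]+\.[^@\s]+$"},   # checks B, E
    "Code":   {"min_len": 3, "max_len": 10},              # check C
}

def validate(df: pd.DataFrame) -> pd.Series:
    """Row-level pass/fail mask; data-type checks (A) would extend similarly."""
    ok = pd.Series(True, index=df.index)
    for col, r in rules.items():
        s = df[col].astype("string")
        if r.get("not_null"):
            ok &= s.notna()
        if "allowed" in r:
            ok &= s.isin(r["allowed"])
        if "min_len" in r:
            ok &= s.str.len().ge(r["min_len"]).fillna(False).astype(bool)
        if "max_len" in r:
            ok &= s.str.len().le(r["max_len"]).fillna(False).astype(bool)
        if "regex" in r:
            ok &= s.str.match(r["regex"]).fillna(False).astype(bool)
    return ok

df = pd.DataFrame({"Status": ["Active", "Unknown"],
                   "Email": ["a@b.com", None],
                   "Code": ["ABC1", "XY"]})
mask = validate(df)
true_out, false_out = df[mask], df[~mask]  # the two proposed outputs
```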

Hi,

The Imputation tool only allows the exchange of numeric values. It would be great if we were given the option to impute string values and NULL values too.
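
For reference, a small sketch of the requested behavior in pandas terms (the column and fill values are made up):

```python
# Sketch of string imputation: replace nulls with the mode or a constant.
import pandas as pd

s = pd.Series(["red", None, "blue", None, "red"])
imputed_mode = s.fillna(s.mode()[0])   # fills nulls with "red"
imputed_const = s.fillna("unknown")    # or with a user-supplied value
```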

The Basic Data Profile tool cannot handle files larger than about 40 MB and 33 fields. When I add the 34th field, with the file size staying at 40 MB (Browse tool rounding), it breaks.

 

I'm trying to get the count of non-nulls for the "Empl Current" field. Adding the 34th field drops the non-null count from the correct 25,894 down to 26, and if I add more fields, the count of non-nulls drops to zero.

 

The Basic Data Profile tool is configured with a 10 million limit on exact counts and a 100,000 limit on unique values.

 

The whole point of the BDP tool is to get one's hands around large data files that are too big to inspect manually, so this tiny limit is really a problem.

I think there should be a tool that can take two proportions from the same row (so four columns: numerator population 1, denominator population 1, numerator population 2, denominator population 2) and return the z-score between the two groups, with a check box to select the desired confidence level.

 

We monitor quarter-over-quarter changes in satisfaction diagnostics for some of our surveys, and we report the change in Top 2 box % (answers of very satisfied or somewhat satisfied); we typically only investigate the changes that are statistically significant at a 95% confidence level.

 

I have to write about eight different formulas within the Formula tool to get the z-scores between the groups I'm investigating. A z-score tool would greatly reduce my workload!
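
For reference, here is a minimal sketch of the calculation such a tool could wrap, using a pooled two-proportion z-test; the counts and names are illustrative:

```python
# Two-proportion z-test sketch (pooled standard error).
from math import sqrt
from scipy.stats import norm

def two_prop_z(x1, n1, x2, n2, confidence=0.95):
    """z-score for p1 - p2, plus a significance flag at the given level."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    critical = norm.ppf(1 - (1 - confidence) / 2)  # e.g. 1.96 at 95%
    return z, abs(z) > critical

# Example: Top 2 box counts for two quarters (made-up numbers)
z, significant = two_prop_z(420, 600, 380, 610)
```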

The Browse tool is really powerful. We can see all the information regarding datasets very rapidly.

Unfortunately, we can only export information (graphs, tables) manually, as PNG files...

 

One major interest of Alteryx in big companies is performing data quality reviews.

 

If we could export Browse tool information (graphs, tables) automatically to a PDF file or another format, we could save a lot of time on data quality tasks.

 

Currently, the only solution is to use a DataViz tool or to set up a specific render in Alteryx (very time-consuming).

 

The main benefit would be the ability to share data quality insights with other business units.
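
As a stopgap, something similar can be scripted from the Python tool; a rough sketch, where the file name, data, and chart choice are all assumptions:

```python
# Rough workaround: export simple per-column profile charts to one PDF.
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages

df = pd.DataFrame({"Sales": [10, 20, None, 40],
                   "Region": ["N", "S", "N", "E"]})

with PdfPages("data_quality_review.pdf") as pdf:
    for col in df.columns:
        fig, ax = plt.subplots()
        df[col].value_counts(dropna=False).plot.bar(ax=ax)
        ax.set_title(f"{col} ({df[col].isna().sum()} nulls)")
        pdf.savefig(fig)   # one page per column
        plt.close(fig)
```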

 

Best Regards

Ref: https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Is-there-a-search-in-quot-Choose-Table...

 

With large tables, it is tedious to search for a field. It would be a great efficiency gain to allow a user to search for a column in a table by entering a full or partial column name.

Hello all,

 

Would love to see an analysis tool or a major upgrade to the Browse tool.

 

Nine times out of ten, if I want to understand the data in my Browse tool, I have to export it to Excel just to filter and sort. This functionality is very much needed in Alteryx, either in a new Analysis tool or (more ideally) in the Browse tool.

 

What are y'all's thoughts?

Nick

We don't have a separate ANOVA tool in Alteryx; can you think of any reason why?

 

It's not the raw data or row-blended data but the insights gathered that are important:

 

The Linear Regression tool has a report for Type II ANOVA based on the model table we provide.

But neither Type II nor the other types are available as standalone statistics tools...

 

[image: Untitled.png]

 

Here is a list of the different types of ANOVA that may be useful:

 

ANOVA models and definitions:

  • t-tests: Comparison of means between two groups; if independent groups, then an independent-samples t-test. If not independent, then a paired-samples t-test. If comparing one group against a fixed value, then a one-sample t-test.
  • One-way ANOVA: Comparison of means of three or more independent groups.
  • One-way repeated measures ANOVA: Comparison of means of three or more within-subject variables.
  • Factorial ANOVA: Comparison of cell means for two or more between-subject IVs.
  • Mixed ANOVA (SPANOVA): Comparison of cell means for one or more between-subjects IVs and one or more within-subjects IVs.
  • ANCOVA: Any ANOVA model with a covariate.
  • MANOVA: Any ANOVA model with multiple DVs. Provides omnibus F and separate Fs.

 

Looking forward to the addition of ANOVA tools to the Data Investigation toolbox...
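
In the meantime, a one-way ANOVA can be scripted; a minimal sketch with made-up groups, using scipy's f_oneway:

```python
# One-way ANOVA sketch: compare means across three independent groups.
from scipy.stats import f_oneway

group_a = [23, 20, 25, 22]
group_b = [30, 28, 27, 31]
group_c = [26, 24, 25, 27]

f_stat, p_value = f_oneway(group_a, group_b, group_c)
```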

Many software & hardware companies take a very quantitative approach to driving their product innovation, so that they can show an improvement over time on a standard baseline of how the product is used today, and then compare this to the way the new version solves the problem and measure the improvement.

 

For example:

- Database vendors have been doing this for years using TPC benchmarks (http://www.tpc.org/), where a FIXED set of tasks is agreed as a benchmark and the database vendors then iterate year over year to improve performance on these benchmarks

- Graphics card and GPU companies have used benchmarks for years (e.g. TimeSpy, Cinebench, etc.)

 

How could this translate for Alteryx?

- Every year at Inspire, we hear stats saying that 90-95% of the time taken is data preparation

- We also know that the reason for buying Alteryx is to reduce the time & skill level required to achieve these outcomes - again, as reinforced by the message that we're driving towards self-service analytics & citizen data analytics.

 

The dream:

Wouldn't it be great if Alteryx could say: "In the 2019.3 release - we have taken 10% off the benchmark of common tasks as measured by time taken to complete" - and show a 25% reduction year over year in the time to complete this battery of data preparation tasks?

 

One proposed method:

  • Take an agreed benchmark set of tasks / data / problems / outcomes, based on a standard data set - these should include all of the common data preparation problems that people face like date normalization; joining; filtering; table sync (incremental sync as well as dump-and-load); etc.
  • Measure the time it takes users to complete these data-prep/ data movement/ data cleanup tasks on the benchmark data & problem set using the latest innovations and tools
  • This time then becomes the measure - if it takes an average user 20 mins to complete these data prep tasks today; and in the 2019.3 release it takes 18 mins, then we've taken 10% off the cost of the largest piece of the data analytics pipeline.

 

What would this give Alteryx?

This could be very simple to administer; and if done well it could give Alteryx:

- A clear and unambiguous marketing message that they are super-focussed on solving for the 90-95% of your time that is NOT being spent on analytics, but rather on data prep

- It would also provide focus to drive the platform in the direction of the biggest pain points - all the teams across the platform can then rally around a really deep focus on the user and accelerating their "time from raw data to analytics".   

- A competitive differentiation - invite your competitors to take part too just like TPC.org or any of the other benchmarks

 

What this is / is NOT:

  • This is not a run-time measure - i.e. this is not measuring transactions or rows per second
  • This should be focussed on "Given this problem; and raw data - what is the time it takes you, and the number of clicks and mouse moves etc - to get to the point where you can take raw data, and get it prepped and clean enough to do the analysis".
  • This should NOT be a test of "Once you've got clean data - how quickly can you do machine learning; or decision trees; or predictive analytics" - as we have said above, that is not the big problem - the big problem is the 90-95% of the time which is spent on data prep / transport / and cleanup.

 

There are loads of ways this could be administered - the starting point is to agree to drive this quantitatively on a fixed benchmark of tasks and data.

 

@LDuane ; @SteveA ; @jpoz ; @AshleyK ; @AJacobson ; @DerekK ; @Cimmel ; @TuvyL ; @KatieH ;  @TomSt ; @AdamR_AYX ; @apolly 

 

Dear GUI Gurus,

 

A minor but time-saving GUI enhancement would be appreciated. When adding a tool to the canvas, the current behavior is to make visible the tool anchor that was last used on prior tools. That being said, I might add a freshly configured tool to the canvas and find myself staring at a BLANK results window. When users are adding tools to the canvas, I suggest that the best practice is to VIEW the incoming data before configuring the tool.

 

I ALWAYS set the results to view the INCOMING DATA ANCHOR.

 

This minor change would be welcome to me.

 

Cheers,

 

Mark

Sometimes, when I am working with new data sources, it would be nice to have a dockable pane that would allow me to view the schema of all of my connected data sources. That way, I could rename fields and change data types as needed without having to jump from one Select tool to another to see how the schemas compare.

On 2019.2.5.62427, the interactive results grid is only available in the embedded results window, but not if you open the results in a new window ('Open Results in New Window' -> New Window).

 

This is verified by @PaulN - https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Interactive-Results-Grid-not-available...

 

It also appears that the interactive grid is not available if you double-click a yxdb file to open it and view its contents.

 

It would be useful to have the interactive grid in both of these areas instead of just the embedded results window.

 

Often in larger workflows, I will copy data partway down the stream into a new workflow in order to troubleshoot a small section, to avoid having to run the whole workflow over and over again, which can take a while. I'm aware (and thankful) of caching, but sometimes, when there are many parallel streams, I'd rather just copy the data from the data preview built into the tool so I don't have to take the time to run the workflow again. I'm also aware I could output a yxdb file and use that, but again, that takes longer than I would like.

 

The issue I run into is that if I copy the data and paste it into a Text Input tool, all the field types change to their defaults. This is fine with new data, but for data that has specific field types throughout the workflow, this can be a hassle. If copying data could also copy the field type and size, that would be great.

I would like to see more file types supported for dragging from a folder onto a workflow, more precisely .txt and .dat files. This would greatly help my team and me analyze the new and unknown data files that we receive on a daily basis.

 

Thank you. 

My proposal is to create a new support offering at the Alteryx team level, as shown in the provided picture, with an Alteryx toolbox logo, for example, in the same spirit.
