
Alteryx Designer Desktop Ideas


Featured Ideas

We use several files that are fairly large in canvas size. To get back to where we last left off, or to examine a specific X, Y position at a specific zoom level, one currently needs to either remember a tool number to search for or search for a keyword that returns several matches.

 

My suggested solution would be to create bookmarks. A named bookmark would save exactly where you are in the canvas (X, Y) and the zoom level (Z). This way you could easily switch back and forth within a canvas just by clicking a bookmark.

 

If anyone has ever used a CAD program that allows the creation of 'Views' within the same diagram, this would be similar.

 

Thank you

Our DAT file structure is as follows:

 

The first line of the .DAT file must be a header row identifying the field names.

The .DAT file must use the following Concordance default delimiters:

Comma: ASCII character 020

Quote: ASCII character 254

Newline: ASCII character 174
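
For illustration, here is a minimal Python sketch of writing a .DAT file with those delimiters (the field names, sample rows and encoding are assumptions, not part of our spec):

    # Minimal sketch: write a Concordance-style .DAT file using the
    # default delimiters above. Field names and encoding are examples.
    FIELD_SEP = chr(20)    # ASCII 020, the "comma"
    QUOTE     = chr(254)   # ASCII 254, the "quote"
    NEWLINE   = chr(174)   # ASCII 174, an embedded newline

    def write_dat(path, header, rows):
        with open(path, "w", encoding="latin-1") as f:
            for record in [header] + rows:
                cells = [QUOTE + str(v).replace("\n", NEWLINE) + QUOTE
                         for v in record]
                f.write(FIELD_SEP.join(cells) + "\n")

    write_dat("export.dat",
              ["DOCID", "TITLE"],
              [["0001", "First document"], ["0002", "Line one\nLine two"]])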

 

Thank you,

Pete Vara

Hi,

 

I've been working on reporting for a while now and figured out that creating subtotals isn't part of any tool.

 

Any chance this could be implemented in a future version, or is there a macro available?
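
To show what I mean, here is a rough sketch (in Python/pandas, purely for illustration; the column names and data are made up) of the kind of subtotal rows a reporting tool could append:

    import pandas as pd

    # Hypothetical illustration of the requested behaviour: append a
    # subtotal row under each group, the way a reporting tool might.
    df = pd.DataFrame({"Region": ["East", "East", "West"],
                       "Sales":  [100, 150, 200]})

    pieces = []
    for region, grp in df.groupby("Region", sort=True):
        pieces.append(grp)
        pieces.append(pd.DataFrame({"Region": [f"{region} subtotal"],
                                    "Sales":  [grp["Sales"].sum()]}))
    report = pd.concat(pieces, ignore_index=True)
    print(report)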

 

Thanks

Simon

I constantly find myself using Pre and Post SQL commands in the Output tool to run SQL when I don't actually have any data to output.

 

One example is when I load data into S3 and want to load it into Redshift. I have SQL code to run but no data to output, so I end up writing a dummy row into a temp table.

 

So can we have an SQL tool that simply acts the same as a Pre-SQL command, without the associated data output? Once the command has run we should be able to continue the workflow, so the tool should have an optional input and output, like the Run Command tool.
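
To illustrate the idea, here is a rough Python sketch of what such a tool would do internally: run a statement and nothing else. The connection details, table name and IAM role below are placeholders.

    import psycopg2  # assumes the psycopg2 driver; details are placeholders

    # Sketch of the proposed "SQL only" step: run a statement (here a
    # Redshift COPY from S3) with no data output, then let the workflow
    # continue.
    conn = psycopg2.connect(host="redshift.example.com", dbname="analytics",
                            user="loader", password="...")
    with conn, conn.cursor() as cur:
        cur.execute("""
            COPY staging.events
            FROM 's3://my-bucket/events/'
            IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopy'
            FORMAT AS CSV;
        """)
    # no rows are fetched or written -- the statement is the whole point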

 

 

Preface: I have only used the in-DB tools with Teradata, so I am unsure whether this applies to the other supported databases.

 

When building a fairly sophisticated workflow using in-DB tools, the workflow may sometimes fail because the underlying queries run up against CPU / memory limits. This is most common when doing several joins back to back, as Alteryx sends these as one big query with various nested subqueries. When working with datasets in the hundreds of millions or billions of records, this can be extremely taxing for the DB to run as one huge query. (It is possible to get around this by using an in-DB Write Out to a temporary table as an intermediate step in the workflow.)

 

When a routine does hit an in-DB resource limit and the DB kills the query, Alteryx immediately fails the workflow run. Any "temporary" tables Alteryx creates are in reality permanent tables that Alteryx usually just drops at the end of a successful run. If the run does not end successfully because it hit a resource limit, these "temporary" (permanent) tables are not dropped. I only noticed this after building out a workflow and running up against a few resource limits; I then started getting database out-of-space errors. Upon looking into it, I found all the previously created "temporary" tables were still there and taking up many TBs of space.

 

My proposed solution is for Alteryx's in-DB tools to drop any "temporary" tables they have created when a run ends, regardless of whether the entire module finished successfully.
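
As a stopgap, something like the following sketch could be scheduled to clean up after failed runs. It assumes Teradata with the teradatasql driver, and that the leftover tables share a recognizable prefix; the database, credentials and 'AYX%' pattern are all placeholders to adapt to what your environment actually shows.

    import teradatasql  # assumes the teradatasql driver; names are placeholders

    # Sketch of the proposed cleanup: find leftover Alteryx "temporary"
    # tables (assumed here to share an AYX prefix) and drop them.
    with teradatasql.connect(host="tdprod", user="etl", password="...") as conn:
        cur = conn.cursor()
        cur.execute("""
            SELECT TableName FROM DBC.TablesV
            WHERE DatabaseName = 'SANDBOX' AND TableName LIKE 'AYX%'
        """)
        for (name,) in cur.fetchall():
            cur.execute(f'DROP TABLE SANDBOX."{name}"')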

 

 

Thanks,

Ryan

When a workflow has run, it shows its final message.


It would be great to have a datetime stamp included in that final message. When running multiple jobs and editing multiple workflows, it would be great to be able to check the last run time so you know when each job finished. I often flip between multiple workflows during development and can be working on up to 10 or more at any one time.

 

 

Hi,

 

I think that the sample tool should have a T or F port.

 

Let's say I want to keep the first N records but would like to stream the rest of the data (the records not sampled) somewhere else in my workflow. It's possible today, but it would be easier to have that built into the Sample tool.
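
For illustration, the T/F split I have in mind is just this (a Python/pandas sketch; the file name and N are made up):

    import pandas as pd

    # Sketch of the requested T/F behaviour: first N rows out one port,
    # everything else out the other.
    N = 100
    df = pd.read_csv("input.csv")   # placeholder input
    sampled   = df.iloc[:N]         # "T" port: the first N records
    remainder = df.iloc[N:]         # "F" port: the records the sample skipped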

 

Simon

Korem

When something goes wrong in Alteryx, the last thing I can ever remember is which email address to use for support. Instead of trying to remember whether to use customersuccess@ or clientsvcs@, why can't it simply be support@?

 

At your time of need, please make it as simple as possible to get help.

When using server data sources, Alteryx can take a long time to query metadata, particularly for long, complex queries. This often happens on opening a module or clicking on the Input tool to edit its properties.

 

This leads to frustration with these modules. It would be good for Alteryx to cache the metadata (i.e. the columns) from these inputs and prompt the user to reuse the cached data if it takes longer than, say, 2 seconds to retrieve the query metadata.
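
Something along these lines is what I am picturing; a rough Python sketch where fetch_metadata() stands in for the real metadata query (all names here are hypothetical):

    import concurrent.futures

    # Sketch of the proposed behaviour: try the live metadata query, but
    # fall back to a cached column list if it takes more than ~2 seconds.
    _cache = {}
    _pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)

    def columns_for(query, fetch_metadata, timeout=2.0):
        future = _pool.submit(fetch_metadata, query)
        try:
            cols = future.result(timeout=timeout)
            _cache[query] = cols          # refresh the cache on success
            return cols
        except concurrent.futures.TimeoutError:
            return _cache.get(query)      # reuse cached columns, if any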

 

 

 

As you may be aware, localisation is the adaptation of computer software to the regional differences of a target market, while internationalisation is the process of designing software so that it can potentially be adapted to various languages without engineering changes.

 

The idea is to make the Alteryx Designer tool, the web help content and the example workflows multilingual (possibly with the use of "lic" language files or similar). Hopefully the software and tutorials could all be localised by crowdsourcing initiatives within the Alteryx community.

 

I sincerely believe this will help the tool gain a lot of traction not only in the US and UK but in other parts of the world.

Mandarin and Spanish would most likely be the first two language versions...

 

Top languages by population, per Nationalencyklopedin 2007 (2010):

Language     Native speakers (millions)   Fraction of world population (2007)
Mandarin     935 (955)                    14.1%
Spanish      390 (405)                    5.85%
English      365 (360)                    5.52%
Hindi        295 (310)                    4.46%
Arabic       280 (295)                    4.43%
Portuguese   205 (215)                    3.08%
Bengali      200 (205)                    3.05%
Russian      160 (155)                    2.42%
Japanese     125 (125)                    1.92%
Punjabi      95 (100)                     1.44%

I am trying to run batch regressions on a pretty sizable set of data: about ~1M distinct groups of data, each with 30-500 x,y pairs.

 

A batch macro with a linear regression works OK, but it is really slow. It started at about 2-3 s per regression; after stripping a bunch of reporting out of the macro, I am down to ~2 s. This still feels quite slow compared to something purpose-built.

 

Has anyone experimented with higher-speed versions that just dump out m, b, and r2?
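
For comparison, a single vectorised pass over all groups can produce the same three numbers without fitting each group separately. A rough Python/pandas sketch using the closed-form formulas (column names are examples):

    import pandas as pd

    # One vectorised pass instead of ~1M macro iterations: closed-form
    # slope (m), intercept (b) and r-squared per group.
    def grouped_ols(df, group="group", x="x", y="y"):
        d = df.assign(xy=df[x] * df[y], xx=df[x] ** 2, yy=df[y] ** 2)
        g = d.groupby(group).agg(n=(x, "size"), sx=(x, "sum"), sy=(y, "sum"),
                                 sxy=("xy", "sum"), sxx=("xx", "sum"),
                                 syy=("yy", "sum"))
        cov  = g["n"] * g["sxy"] - g["sx"] * g["sy"]
        varx = g["n"] * g["sxx"] - g["sx"] ** 2
        vary = g["n"] * g["syy"] - g["sy"] ** 2
        out = pd.DataFrame(index=g.index)
        out["m"]  = cov / varx
        out["b"]  = (g["sy"] - out["m"] * g["sx"]) / g["n"]
        out["r2"] = cov ** 2 / (varx * vary)
        return out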

Hi - I miss the functionality in 9.5 of being able to set a default tool in a tab and then drag in tools from the tab. This seems to be gone in 10. Is there any possibility of it coming back?

Many thanks - Nathalie

Hi,

 

I've been using the desktop scheduler to download and parse out a streaming JSON file. My script takes 4 seconds to execute, and the data is updated on a per-second basis.

 

Currently, my only option is to execute the scheduled job at the lowest level of temporal granularity, 1 minute, so I'm missing records.

 

Would it be possible to add a second(s) option to the scheduler?

 

I can see particular benefits from running a CRON job at under a minute, especially for event data capture.
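
In the meantime, my workaround idea is a minute-level scheduled wrapper that fires the workflow every few seconds; a rough Python sketch (the engine path and workflow are placeholders to adjust for your install):

    import subprocess, time

    # Workaround sketch while the scheduler is minute-based: a minute-long
    # wrapper, itself scheduled every minute, that kicks off the workflow
    # every N seconds.
    ENGINE = r"C:\Program Files\Alteryx\bin\AlteryxEngineCmd.exe"
    WORKFLOW = r"C:\workflows\parse_stream.yxmd"
    INTERVAL = 5  # seconds

    start = time.monotonic()
    while time.monotonic() - start < 60:
        subprocess.Popen([ENGINE, WORKFLOW])   # fire and forget each run
        time.sleep(INTERVAL)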

 

Best Regards,

 

Allan 

Currently only DateTime-based functions are available; Time-based functions, like TimeAdd(), TimeDiff(), etc., should be introduced.

This would help users a lot with different kinds of time-based calculations...
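
To illustrate the semantics such functions might have, here is a rough Python sketch (the function names mirror the suggestion; the wrap-around-midnight behaviour is my assumption):

    from datetime import datetime, timedelta, time, date

    # Sketch of possible TimeAdd/TimeDiff semantics, built on Python's
    # datetime; anchoring to an arbitrary date lets time-of-day values
    # wrap around midnight cleanly.
    def time_add(t: time, seconds: int) -> time:
        return (datetime.combine(date.min, t)
                + timedelta(seconds=seconds)).time()

    def time_diff(t1: time, t2: time) -> int:
        """Seconds from t2 to t1."""
        return int((datetime.combine(date.min, t1)
                    - datetime.combine(date.min, t2)).total_seconds())

    print(time_add(time(23, 59, 30), 45))      # 00:00:15
    print(time_diff(time(9, 30), time(9, 0)))  # 1800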

 

Ashok Bhatt

Given that Redshift prefers many small files for bulk loading, it would be good to have a max record limit within the S3 Upload tool (similar to the functionality in the S3 Download tool).

 

The other functionality that would be useful in the S3 Upload tool is the ability to append file names with datetimestamp_001, 002, 003 etc., similar to the current Output tool.
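
For illustration, here is a rough Python sketch of the chunk-and-suffix behaviour I mean, using boto3 (the bucket, key prefix and chunk size are placeholders):

    import boto3

    # Sketch of the requested behaviour: cap records per file, append a
    # numbered suffix, and upload each chunk.
    MAX_RECORDS = 50000
    s3 = boto3.client("s3")

    def upload_in_chunks(rows, header, stamp):
        for i in range(0, len(rows), MAX_RECORDS):
            key = f"loads/{stamp}_{i // MAX_RECORDS + 1:03d}.csv"
            body = "\n".join([header] + rows[i:i + MAX_RECORDS])
            s3.put_object(Bucket="my-bucket", Key=key,
                          Body=body.encode("utf-8"))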

Within the Select tool, when you have highlighted a set of rows it would be nice to be able to right-click for options rather than having to move the cursor up to the Options menu to get to the choices.

 

Thanks,

Mark

Hi,

 

Carlson Companies is moving to a Vertica environment, and it would be great if that were supported by the In-Database tools. That would definitely help and expand the use of Alteryx at our company!

 

Thanks,

 

Tyler Mittelstadt

At the moment, we are not able to use input data field names and their values in the Output tool, mainly in the Pre-SQL and Post-SQL statements. I've seen some discussion of this in the community, and many scenarios require it. It would be great to have this option.
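
To illustrate, here is a hypothetical sketch of the substitution I have in mind; the %FieldName% placeholder syntax is made up purely to show the concept:

    import re

    # Hypothetical illustration: substitute incoming field values into a
    # Post-SQL statement before it is run.
    def render_sql(template, record):
        return re.sub(r"%(\w+)%", lambda m: str(record[m.group(1)]), template)

    post_sql = ("UPDATE audit SET loaded_rows = %RowCount% "
                "WHERE batch = '%BatchId%';")
    print(render_sql(post_sql, {"RowCount": 1250, "BatchId": "2016-06-01"}))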

Idea:

Functionality added to the Impute Values tool for multiple imputation and maximum-likelihood imputation of fields with data missing at random would be very useful.

 

Rationale:

Missing data are a problem, and advanced techniques are complicated. One great idea in statistics is multiple imputation: filling the gaps in the data not with the average, median, mode or a user-defined static value, but with plausible values inferred from the other fields.

 

SAS has the PROC MI tool; here is a page detailing its usage, with examples: http://www.ats.ucla.edu/stat/sas/seminars/missing_data/mi_new_1.htm

There is also PROC CALIS for maximum likelihood...

 

The same useful tool exists in SPSS as well: http://www.appliedmissingdata.com/spss-multiple-imputation.pdf
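
As a rough illustration of the idea outside SAS/SPSS, scikit-learn's IterativeImputer does model-based imputation in a similar spirit (the data below is made up):

    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    # Each missing value is predicted from the other fields rather than
    # filled with a static mean/median; sample_posterior=True draws from
    # the predictive distribution, as multiple imputation does.
    X = np.array([[1.0, 2.0], [3.0, np.nan], [5.0, 6.0], [np.nan, 8.0]])
    imputer = IterativeImputer(sample_posterior=True, random_state=0)
    print(imputer.fit_transform(X))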

 

Best

I've come to realize that the Join tool is case-sensitive by design, but it would be helpful if you could turn that behavior on/off (via a checkbox?) within the Join tool. For those of us who work predominantly in database environments that are not case-sensitive, this default behavior has caused problems many times. Having to force the case to either upper or lower upstream of the Join, on both flows, to ensure a successful join is an extra step that would not be necessary if you could disable case sensitivity with a checkbox.
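
For illustration, this is the workaround the checkbox would automate; a Python/pandas sketch with made-up frames:

    import pandas as pd

    # Sketch of the workaround: normalise the key case on both sides
    # before joining. The duplicate key columns come back as Key_x/Key_y.
    left  = pd.DataFrame({"Key": ["ABC", "def"], "L": [1, 2]})
    right = pd.DataFrame({"Key": ["abc", "DEF"], "R": [3, 4]})

    joined = left.assign(_k=left["Key"].str.lower()).merge(
        right.assign(_k=right["Key"].str.lower()), on="_k").drop(columns="_k")
    print(joined)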
