Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas


It would help if there were an option to test the outcome of a formula while building it, rather than creating dummy workflows with dummy data to test the same.

 

For instance, there could be a dynamic window that generates input fields based on the fields selected in the actual formula; one could provide test values there and click some kind of 'Test' button to check the output within the tool itself.

 

This would also be very handy when writing big or complex formulas involving regular expressions, so that a user can test her formula without having to switch screens to third-party on-the-fly testing tools, run the entire original workflow, or create test workflows.
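For what it's worth, the out-of-tool routine described above usually looks something like this today - a minimal Python sketch, with the pattern and sample values made up for illustration:

    import re

    # Stand-ins for a Formula tool's REGEX_Match() expression and its input field.
    pattern = r"^\d{3}-\d{2}-\d{4}$"
    test_values = ["123-45-6789", "12-345-6789", ""]

    for value in test_values:
        # re.fullmatch anchors to the whole string, as REGEX_Match does.
        print(value, "->", bool(re.fullmatch(pattern, value)))

Having that little test loop built into the Formula tool's configuration window is essentially the ask.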

It would be good if an option were provided whereby clicking a particular data profiling output (at the cellular level) showed the underlying records.

 

Perhaps the configurator/designer could be given this option, selecting her choice of technical/business keys, so that when an end user of the Data Profiling report output clicks a profiling result he is redirected to the keys selected earlier.

 

One option might be to generate the data profiling output as a zip folder containing the profiling results along with the key fields (hyperlinked files, etc.).

 

Since in that case the data itself would be maintained/stored, it would be good to either encrypt or password-protect the zip file in line with industry standards.

 

This could be provided as an optional feature under something like the tool's advanced properties, following industry best practices for report formatting and rendering.

 

The reason this should be optional is that there may not always be a need for detailed linking back to source-level records.

 

For example, if the need is only to highlight the data profiling outcome at a high level to a Data Analyst, this might not be useful.

On the other hand, if a Data Steward needs to actually go and correct the data based on the profiling results, linking the profiling results back to the source data could come in handy.

I'd like to see a tool that you can drop into a workflow so that the workflow stops running at that tool and/or starts running after it. I know about the cache dataset macro, but I think it could be simplified and incorporated into the standard set of tools.

In version 10.5, if the taskbar is auto-hidden and Alteryx is the active window, you cannot access the taskbar by moving the mouse to the bottom of the screen.

You have to use the Windows key or switch to another application window.

Hi there,

 

Just a quick note on a really small improvement to the Data Cleansing tool that could help a lot.

 

Currently, this tool allows us to convert input data with NULLs to either blank or 0 values, depending on the data type.

 

It would be really appreciated to be able to do the opposite: converting blank or 0 values to NULLs.
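Until then, the fix is a manual replace downstream; a rough pandas sketch of the requested direction (column names invented for illustration):

    import pandas as pd

    # Toy frame standing in for the tool's input.
    df = pd.DataFrame({"name": ["Ann", "", "Bob"], "sales": [100, 0, 250]})

    # The requested direction: blanks and zeros become NULLs (None/NaN here).
    df = df.replace({"name": {"": None}, "sales": {0: None}})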

When you press Tab from the Test Type combo box, focus jumps to the OK button; I think it should go to the Test Value text box.

I am always copying the open workflow's path for various reasons - would love a COPY button right here to copy it to the clipboard.

 

Please?

 

Capture.PNG

Sometimes in a crowded workflow, connector lines bunch up and align across the title bar of a tool container. This blocks my view of the title and also makes it hard to 'grab' the tool container and move it.

 

Could Alteryx divert lines around tool containers that they don't connect into, or make tool containers 'grab-able' at locations other than the title bar?

 

Image demonstrating connector line overlap

Hi,

 

When you create a data model in Excel, you can create measures (aka KPIs). These are then something you can use when you pivot the data, and the measure is dynamically updated as you segment the data in your pivot table.

 

For example, let's say you have a field with the customer name and a field with the revenue; you could create a measure that calculates the average revenue per customer (sum of revenue / distinct count of customers).

Now if you have a third field in your data that indicates the region, the measure lets you see the average revenue per customer by region (but the measure formula remains the same and doesn't refer to the region field at all).
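To make the behaviour concrete, a rough pandas analogue (toy data and field names, purely for illustration) - the "measure" is defined once and re-cut by region without ever mentioning region:

    import pandas as pd

    df = pd.DataFrame({
        "customer": ["A", "A", "B", "C", "C"],
        "region":   ["East", "East", "East", "West", "West"],
        "revenue":  [100, 50, 200, 80, 120],
    })

    # The "measure": sum of revenue / distinct count of customers.
    def avg_revenue_per_customer(g):
        return g["revenue"].sum() / g["customer"].nunique()

    print(avg_revenue_per_customer(df))  # 183.33... overall (550 / 3 customers)
    print(df.groupby("region").apply(avg_revenue_per_customer))  # East 175.0, West 200.0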

 

Excel integrates well with PowerBI and currently these measures flow into PowerBI.

 

While we have a "Publish to PowerBI" tool in Alteryx, I haven't seen any way to create such measures and export them to PowerBI.

 

Hence I still need to load the data into Excel to create these measures before I can publish to PowerBI; it'd be great to avoid that intermediate step.

 

Thanks

 

Tibo

 

It would be super helpful if there were a way to:

1. Have an active list of all inputs/outputs such that, if the links were changed, the connection would update for every occurrence of that input/output in the workflow.

2. Have a similar list of formulas that one could simply reference in a Formula tool, so that if you change the source formula, it is automatically updated in all linked occurrences of that formula.

Currently we resort to a manual CREATE TABLE script in Redshift in order to define a distribution key and a sort key.

 

See below:

http://docs.aws.amazon.com/redshift/latest/dg/tutorial-tuning-tables-distribution.html

 

It would be great to have similar functionality in the Redshift bulk loader, whereby one can define distribution keys and sort keys, as these greatly improve performance on larger datasets.
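For reference, the kind of DDL we run by hand today, sent here through psycopg2 (table, keys, and connection values are all placeholders) - the ask is for the bulk loader to emit the DISTKEY/SORTKEY clauses itself:

    import psycopg2

    # Illustrative table and keys; the point is the DISTKEY/SORTKEY clauses.
    ddl = """
    CREATE TABLE sales (
        sale_id     BIGINT,
        customer_id BIGINT,
        sale_date   DATE,
        amount      DECIMAL(12, 2)
    )
    DISTKEY (customer_id)
    COMPOUND SORTKEY (sale_date);
    """

    # Placeholder connection values.
    conn = psycopg2.connect(
        "host=example.redshift.amazonaws.com port=5439 dbname=dev user=me password=secret"
    )
    with conn, conn.cursor() as cur:
        cur.execute(ddl)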

 

One of the common ways to generalize various normal and beta distributions is the triangular distribution.

Alteryx doesn't have a function for this, and even Excel doesn't, but tools such as

  • SAS (randgen(x, "Triangle", c)) and
  • Mathematica (TriangularDistribution[{min,max},c])

include one.

Can we add something like randtriangular(min,mode,max)?
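In the meantime, Python's standard library already ships one, and the inverse CDF behind it is simple enough to port into a Formula tool; a quick sketch:

    import math
    import random

    # Built-in equivalent of the proposed randtriangular(min, mode, max).
    x = random.triangular(10.0, 50.0, 30.0)  # arguments: low, high, mode

    # The same draw by hand, via the triangular inverse CDF -- the formula a
    # native function (or a Formula-tool workaround) would implement.
    def rand_triangular(lo, mode, hi):
        u = random.random()
        cut = (mode - lo) / (hi - lo)
        if u < cut:
            return lo + math.sqrt(u * (hi - lo) * (mode - lo))
        return hi - math.sqrt((1 - u) * (hi - lo) * (hi - mode))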

I have my solution attached, but a native function would ease the flow...

 

Picture1.png

 

Best

We don't have Server. Sometimes it's easy to share a workflow the old-fashioned way - just email a copy of it or drop it in a shared folder somewhere. When doing that, if the target user doesn't have a given alias on their machine, they'll have trouble getting the workflow to run.

 

So, it would be helpful if saving a workflow could save the aliases along with the actual connection information.  Likewise, it would then be nice if someone opening the workflow could add the aliases found therein to their own list of aliases.

 

Granted, there may be difficulties - this is great for connections using integrated authentication, but not so much for userid/password connections. Perhaps (if implemented) it could be limited along these lines.

 

Would it be possible to change the default setting for writing to a TDE output to "overwrite file" rather than "create new file"? Writing to a YXDB automatically overwrites the old file, but for some reason we have to make that change manually when writing to a TDE output. I can't tell you how many times I run a module and have it error out at the end because it can't create a new file that already exists from an earlier run!

 

Thanks!

When viewing spatial data in the Browse tool, the colors that distinguish a selected feature from a non-selected one are too similar. If you are zoomed out and have lots of small features, it's nearly impossible to tell which spatial feature you have selected.

 

It would be a great option to give the user the ability to specify the border and/or fill color for selected features. This would really help them stand out. The custom option would also be nice so we can choose a color consistent with other GIS software we may use.

 

As an example, I attached a pic where I have 3 records selected, but it takes some scanning to find where they are on the "map".

 

selection_colors.PNG

 

Thanks

 

As I understand it, SFTP support is planned for the next release (10.5). Are there plans to support PKI-based authentication as well?

 

This would be handy, as lots of companies move files around with third parties (and sometimes internally too), and automating these processes would be very helpful. Also, some company policies prevent using only username/password authentication.
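For anyone scripting around the gap today (e.g. via the Run Command tool), key-based SFTP is only a few lines in paramiko; a minimal sketch, with host, account, key path, and file names as placeholders:

    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for testing; pin host keys in production
    client.connect("sftp.example.com", username="svc_account",
                   key_filename="/path/to/id_rsa")  # PKI: no password involved

    sftp = client.open_sftp()
    sftp.put("local_report.csv", "/inbound/report.csv")
    sftp.close()
    client.close()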

 

Anybody else have this requirement? Comments? 

 

Very confusing.

 

DateTimeFormat

- Format string - %y is a 2-digit year, %Y is a 4-digit year. How about yy or yyyy? Much easier to remember and consistent with other tools like Excel.

 

DateTimeDiff

- Format string - 'year', but in the function above the year is referenced as %y?? Too easy to mix these up.
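For context, the %y/%Y codes follow the C strftime convention; the same distinction in Python, just to show how easy the mix-up is (this illustrates standard strftime, not Alteryx itself):

    from datetime import date

    d = date(2016, 3, 1)
    print(d.strftime("%y"))  # "16"   -- 2-digit year
    print(d.strftime("%Y"))  # "2016" -- 4-digit year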

 

 

Also, the documentation is limited. Give each function a separate page, plus an overview discussing date handling.

 

The Field Summary tool is a very useful addition for quickly creating data dictionaries and analysing data sets. However, it ignores Boolean data types and seems to raise a strange Conversion Error - 'DATETIMEDIFF1: "" is not a valid DateTime' - with no indication that it doesn't like Boolean field types. (Note: I'm guessing this error is about the Boolean data types, as there's no other indication of an issue and actual DateTime fields make it through the tool problem-free.)

 

Using the Field Summary tool can therefore give the wrong impression of the contents of files with many fields, as it simply ignores fields of a data type it doesn't like.

 

The only way to get a view of all fields in the table is the Field Info tool, which is also very useful; however, it should be unnecessary to 'left join' (in the SQL sense) between Field Info and Field Summary to get a reliable overview of the file being analysed. A rough sketch of that workaround is below.
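In pandas terms (column names are illustrative, not the tools' exact output), the workaround amounts to:

    import pandas as pd

    # Stand-ins for the two tools' outputs.
    field_info = pd.DataFrame({"Name": ["id", "active", "created"],
                               "Type": ["Int64", "Bool", "DateTime"]})
    field_summary = pd.DataFrame({"Name": ["id", "created"],
                                  "Unique_Values": [1000, 998]})

    # Left join: every field survives, Boolean rows just lack summary stats.
    overview = field_info.merge(field_summary, on="Name", how="left")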

 

Therefore, can the Field Summary tool be altered to at least acknowledge the existence of all data types in the file?

I have run into an issue where the progress display does not show the proper number of records after certain tools in my workflow. It was explained to me that this is because only a certain amount of data is cached, and the number is therefore based on that. If I put a Browse tool in, I can see the data properly.

 

For my team and me, this is actually a great inconvenience. We have grown to rely on the counts that appear after each tool. The point of showing progress is that I do not have to insert a Browse after everything I do, so it takes up less space on my computer. I would like to see the actual numbers appear again. I don't see why this changed in the first place.

Under Options/Restore Defaults, it would be nice if the canvas could be reset (I sometimes lose windows), but the favorites be left intact.

Thanks!
Susan

 
