
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas


Other tools that I have used let you choose where you are caching from, so instead of always having to cache at the input, you could cache after a big join. This would be great for efficiency: having to run everything through the entire workflow every time is inefficient, and I end up spending a lot of time waiting for my workflow to go through the same tools.

For the purpose of debugging a workflow, I often filter just one customerID or any other ID to analyse the workflow.

 

With the Browse tool (Ctrl+Shift+B) you can just double-click a cell and copy its value. This is not possible in the Results window; it would be nice if it were.

 

Thanks,

 

Hans

 

It would be great if you could create default settings for the Tool Containers. As workflows become larger, I use containers a lot. But once I have 10-15 containers, I have to set all of them to have a Transparency of 1 and a margin of None. While the changes don't take long to make, it would be nice if they could be preset.

I'd like to be able to disable a tool container but not minimize it so I can still see what's in there. Maybe disabled containers could be grayed out the way the output tools are when you disable them. We would still need to retain current features in case people like it that way, but it would be nice to choose.

With SSIS, you can invoke precedence constraint(s) so that downstream flows will not run until one or more upstream flows complete. A simple connector should allow you to do this. Right now, I have my workflows in containers and have to disable/enable different workflows, which can be time consuming. Below is a better definition:

 

Precedence constraints link executables, containers, and tasks in packages in a control flow, and specify conditions that determine whether executables run. An executable can be a For Loop, Foreach Loop, or Sequence container; a task; or an event handler. Event handlers also use precedence constraints to link their executables into a control flow.
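For illustration only (this is not an Alteryx or SSIS API; every name below is made up), the gating behaviour such a connector would provide boils down to running a downstream task only after all of its upstream tasks have succeeded:

```python
# Minimal sketch of precedence-style gating, assuming tasks report success.
from typing import Callable, Dict, List

def run_with_precedence(tasks: Dict[str, Callable[[], bool]],
                        upstream: Dict[str, List[str]]) -> None:
    """Run each task only after all of its upstream tasks have succeeded."""
    succeeded: set = set()
    pending = list(tasks)
    while pending:
        progressed = False
        for name in list(pending):
            if all(dep in succeeded for dep in upstream.get(name, [])):
                if tasks[name]():          # task returns True on success
                    succeeded.add(name)
                pending.remove(name)
                progressed = True
        if not progressed:                 # an unmet dependency blocks the rest,
            break                          # like a failed precedence constraint

# Hypothetical usage: "load" is gated on both extracts finishing first.
run_with_precedence(
    tasks={"extract_a": lambda: True, "extract_b": lambda: True,
           "load": lambda: True},
    upstream={"load": ["extract_a", "extract_b"]},
)
```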


Hey guys!!

 

I was just thinking... they might not need to fully build out a Python IDE, but could still reach the same objective.

 

You should be able to keep a Python file on its own and call it from R. By doing this, you might be able to have the JSON/XML handling of Python with the visual/stats power of R, all nicely bundled in your workflow. This uses base functions in R and does a good job of turning a pandas dataset into an R data frame you can move along your workflow.

 

You could always just use this same idea to write a file somewhere and, once it's written, your workflow will continue. If you do, the code is literally one line in R... Anyway, let me know your thoughts! 🙂

 

Will this work for your organization?

 

https://www.linkedin.com/pulse/using-python-r-windows-7-subhash-jaini?trk=hp-feed-article-title-publ...
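To make the file-handoff idea above concrete (all file names here are invented), the standalone Python side could be as small as this, with the R one-liner just shelling out to it and then reading the output file once it appears:

```python
# Hypothetical standalone script, e.g. process_json.py, called from the R tool
# via something like system("python process_json.py").
import pandas as pd

df = pd.read_json("input.json")        # assumed JSON dropped by the workflow
df.to_csv("handoff.csv", index=False)  # once this exists, R reads it and the
                                       # workflow continues downstream
```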

@AdamR_AYX,

 

The Limit Conversion Warning setting allows a minimum of 1 message. Can we set the minimum to 0 to completely ignore these messages?

 

Perhaps warning messages could be given a function similar to ERROR messages, allowing the designer to Ignore, Warn or Cancel?

 

ConvError: Imputation (441): Tool #104: No demand: 0.200000000000031 had more precision than a double. Some precision was lost.

ConvError: Summarize (456): Data: 0.360000000004675 had more precision than a double. Some precision was lost.

 

End: Designer x64: Finished running FP Model - Marquee Crew v3.yxmd in 32.3 seconds with 16 field conversion errors and 4 warnings
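For context on what these warnings report: a 64-bit double stores binary fractions, so most decimal literals land on the nearest representable value rather than the exact one. A quick Python illustration:

```python
from decimal import Decimal

# The exact value stored for the decimal literal differs in the trailing
# digits -- that difference is the "precision was lost" the warning reports.
x = 0.200000000000031
print(Decimal(x))
```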

 

Thanks,

 

Mark

Field Summary is a great tool, but it would be nice to have a count and a count of non-nulls on it.

It would save an extra Select tool when parsing text files into the correct format for dates and times.

 

 

It would be super helpful if there were a way to

1. have an active list of all inputs/outputs that, if a link were changed, would update the connection for every occurrence of that input/output in the workflow, and

2. a similar list of formulas that one could simply reference in a Formula tool, so that if the source formula changes, it is automatically updated in all linked occurrences of that formula.


In the Designer, it would be nice if the projection of a .shp file could automatically be read from its corresponding .prj file.
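The .prj sidecar is just a text file holding the projection in WKT, so the lookup Designer would need is tiny. A minimal sketch (the function name is mine, not Designer's):

```python
from pathlib import Path
from typing import Optional

def read_projection(shp_path: str) -> Optional[str]:
    """Return the WKT projection from the .shp file's .prj sidecar, if present."""
    prj = Path(shp_path).with_suffix(".prj")
    return prj.read_text() if prj.exists() else None

# e.g. read_projection("parcels.shp") -> 'PROJCS["NAD83 ...", ...]'
```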

Both of these can be partially accomplished with the output of the directory tool:

- List Directories - Summarize unique list of directories from the directory tool output

- Exclude Paths - use the Filter tool to exclude files or directories based on patterns

 

 However, here are some scenarios that aren't addressed cleanly:

- Directories are not listed unless they contain at least one file. The tool is called Directory, but it only lists files; a directory has to be non-empty to appear. An option is needed to list files and/or directories (optionally including empty ones).

- I can't figure out whether the file specification is a wildcard expansion only, or supports regular expressions for inclusion/exclusion. I see in the File Browse tool that you can list multiple formats, e.g. Text Files (*.txt)|*.txt|All Files (*.*)|*.*. Here is a use case where this is required: our network shares have Windows file-restore snapshots stored in a ~snapshot directory. We don't want the Directory tool to traverse it (because that literally takes hours to scan), but there isn't an elegant way to exclude it; if you filter it from the Directory tool output, it has already been scanned. What we've done is generate the top-level directory list outside of the tool and feed it into a macro that has a Directory tool (with sub-directory scanning enabled) inside.

- Another way to address this specific scenario would be an option to exclude traversal of "hidden" folders, but a more generic approach is ideal (see the sketch below).
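To show what "exclude before traversal" means, as opposed to filtering after a full scan, here is a minimal Python sketch; the share path and the skip list are examples only:

```python
import os

SKIP = {"~snapshot"}   # directories we never want to descend into

def walk_pruned(root):
    for dirpath, dirnames, filenames in os.walk(root):
        # Pruning dirnames in place stops os.walk from ever entering these
        # directories -- unlike filtering the output after an hours-long scan.
        dirnames[:] = [d for d in dirnames if d not in SKIP]
        yield dirpath, dirnames, filenames

for dirpath, dirnames, filenames in walk_pruned(r"\\share\data"):
    if not dirnames and not filenames:
        print("empty directory:", dirpath)   # empty directories surface too
```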


There is a great feature in Excel that lets users "seek" a value that makes whatever chain of formulas you might have work out to a given value. Here's what Microsoft explains about Goal Seek: https://support.office.com/en-us/article/Use-Goal-Seek-to-find-a-result-by-adjusting-an-input-value-...

 

My specific example was this:

 

In the Excel file (attached), all you have to do is click on the highlighted blue cell, select the "Data" tab up top, then "What-If Analysis" and finally "Goal Seek." Then you set the dialogue box up to look like this:

 Set cell: G9

To Value: 330

By changing cell: J6

 

And hit “Okay.” Excel then iteratively finds the value for the cell J6 that makes the cell G9 equal 330. Can I build a module that will do the same thing? I’m figuring I wouldn’t have to do it iteratively, if I could build the right series of formulas/commands. You can see what I’m trying to accomplish in the formulas I’ve built in Excel, but essentially I’m trying to build a model that will tell me what the % Adjustment rate should be for the other groups when I’ve picked the first adjustment rate, and the others need to change proportionally to their contribution to the remaining volume.

 

There doesn't really seem to be a way to do this in Alteryx that I can see. I hate to think there is something Excel can do that Alteryx can't!
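Under the hood, Goal Seek is just one-variable root-finding, so it can be approximated today in a Python/R tool or an iterative macro. A minimal bisection sketch, where the model lambda is a made-up stand-in for the spreadsheet's G9-from-J6 formula chain:

```python
def goal_seek(f, target, lo, hi, tol=1e-9, max_iter=200):
    """Find x in [lo, hi] with f(x) ~= target, assuming f is monotonic there."""
    g = lambda x: f(x) - target
    a, b = lo, hi
    if g(a) * g(b) > 0:
        raise ValueError("target not bracketed by [lo, hi]")
    for _ in range(max_iter):
        mid = (a + b) / 2
        if abs(g(mid)) < tol:
            return mid
        if g(a) * g(mid) <= 0:   # sign change in the left half: root is there
            b = mid
        else:
            a = mid
    return (a + b) / 2

# Hypothetical stand-in for the spreadsheet: G9 as a function of J6.
model = lambda j6: 100 + 2.3 * j6
print(goal_seek(model, target=330, lo=0, hi=1000))  # -> 100.0
```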

Currently we resort to a manual CREATE TABLE script in Redshift in order to define a distribution key and a sort key.

 

See below:

http://docs.aws.amazon.com/redshift/latest/dg/tutorial-tuning-tables-distribution.html

 

It would be great to have functionality similar to the Redshift bulk loader whereby one can define distribution keys and sort keys, as these greatly improve performance with larger datasets.
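For reference, here is a sketch of the DDL such a loader would need to emit (table and column names are illustrative; DISTSTYLE/DISTKEY/SORTKEY are standard Redshift CREATE TABLE options):

```python
# Build the Redshift DDL string; run it via your Redshift client of choice.
table, distkey, sortkeys = "sales", "customer_id", ["sale_date", "region"]

ddl = (
    f"CREATE TABLE {table} (\n"
    "    customer_id BIGINT,\n"
    "    sale_date   DATE,\n"
    "    region      VARCHAR(32),\n"
    "    amount      DECIMAL(18,2)\n"
    ")\n"
    f"DISTSTYLE KEY DISTKEY({distkey})\n"
    f"SORTKEY({', '.join(sortkeys)});"
)
print(ddl)
```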

 

One of the common methods for approximating different types of normal and beta distributions is the triangular distribution.

Alteryx doesn't have a function for this (even Excel doesn't), but tools like

  • SAS (randgen(x, "Triangle", c)) and
  • Mathematica (TriangularDistribution[{min,max},c])

include one.

Can we add something like randtriangular(min,mode,max)?

I have my solution attached, but this will ease the flow...
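For anyone wanting the formula in the meantime, the inverse-CDF sampler is short; a sketch in Python (whose own stdlib already ships this as random.triangular, with a different argument order than the randtriangular(min, mode, max) asked for here):

```python
import random

def randtriangular(lo: float, mode: float, hi: float) -> float:
    """Inverse-CDF draw from a triangular(lo, mode, hi) distribution."""
    u = random.random()
    c = (mode - lo) / (hi - lo)      # CDF value at the mode
    if u < c:
        return lo + ((hi - lo) * (mode - lo) * u) ** 0.5
    return hi - ((hi - lo) * (hi - mode) * (1 - u)) ** 0.5

print(randtriangular(0, 2, 10))
# Stdlib equivalent (note the argument order): random.triangular(0, 10, 2)
```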

 

Picture1.png

 

Best

I'm constantly using the rand() function, but I also need:

 

  • a normal distribution function like we have in Excel, and
  • a triangular distribution function too...

Picture1.jpg

Idea: can we please add normdist() and triang(min,mode,max) functions...

 

Best

 

Edit: for the normal dist., I've attached a discretized example...
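For reference, here is what the requested normdist() would compute; Python's statistics module has it built in, and the triangular sampler sketched above covers the other function (the parameters shown are just example values):

```python
from statistics import NormalDist

# Counterparts of Excel's NORMDIST(x, mean, sd, cumulative):
nd = NormalDist(mu=100, sigma=15)
print(nd.cdf(130))        # cumulative = TRUE
print(nd.pdf(130))        # cumulative = FALSE
print(nd.inv_cdf(0.975))  # the NORMINV counterpart
```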

Hi, All.

 

As a newbie, I am impressed with Alteryx's ability to deal with lots of formats and connections when importing data, and in a pretty simple way.

 

However, I feel it misses something much more "basic", in my opinion at least: the option of telling Alteryx which decimal separator is used in the data being imported. Excel, SAS and IBM SPSS, to name a few, all offer this: a default decimal separator, with the option for the user to change it. Numbers in the US separate the integer part from the fractional part with a dot; almost all of the rest of the world (there are a few other exceptions) uses a comma instead...

 

I have posted a flow to deal with it on Alteryx Gallery (it is attached here), but it is, at least in my opinion, something cumbersome that should be pretty straightforward.

 

So... am I the only one who feels this way, or is this a suggestion that could be considered as an improvement for the Input and Output tools in future releases of Alteryx?
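For comparison, pandas treats the decimal separator as a simple parse option, which is exactly the kind of switch being asked for here (the sample data is made up):

```python
import pandas as pd
from io import StringIO

# European-style numbers: "." as thousands separator, "," as decimal separator.
raw = StringIO("id;price\n1;1.234,56\n2;99,90")
df = pd.read_csv(raw, sep=";", decimal=",", thousands=".")
print(df["price"].sum())   # 1334.46 -- parsed as numbers, not text
```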

 

My best regards,

 

Bruno.

Idea:

I know cache-related ideas have already been posted (cache macros; cache tools), but I would like it if cache were simply built into every tool, similar to the way it is on the Input Tool.

 

Reasoning:

During workflow development, I'll run the workflow repeatedly, and especially if there is sizeable data or an R tool involved, it can get really time consuming.

 

Implementation ideas:

I can see where managing cache could be tricky: in a large workflow processing a lot of data, nobody would want to maintain dozens of copies of that data. But there may be ways of just monitoring changes to the workflow to know whether something needs to be rebuilt: e.g. suppose I cache a Predictive tool and then make no changes to any tool preceding it in the workflow... the next time I run, the engine should be able to look at "cache flags" and/or "modified tool flags" to determine where it should start: basically, start at the furthest-along cache that has no modified tools preceding it.
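A toy sketch of that "furthest-along clean cache" rule (tool names and flags are invented, not Alteryx internals): a tool's cache is reusable only if neither it nor anything upstream of it has been modified since caching.

```python
from typing import Dict, List

# Hypothetical three-tool workflow: input -> join -> predict.
upstream: Dict[str, List[str]] = {"input": [], "join": ["input"],
                                  "predict": ["join"]}
modified = {"input": False, "join": False, "predict": False}
cached   = {"input": True,  "join": True,  "predict": True}

def cache_is_valid(tool: str) -> bool:
    """A cache is clean only if the tool and its whole upstream are unmodified."""
    if modified[tool] or not cached[tool]:
        return False
    return all(cache_is_valid(dep) for dep in upstream[tool])

modified["join"] = True                 # user edits the Join tool...
print(cache_is_valid("predict"))        # False: rerun from "join" onward
print(cache_is_valid("input"))          # True: the input cache is still usable
```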

 

 

Anyway, just a thought.

 

We don't have Server. Sometimes it's easy to share a workflow the old-fashioned way - just email a copy of it or drop it in a shared folder somewhere. When doing that, if the target user doesn't have a given alias on their machine, they'll have issues getting the workflow to run.

 

So, it would be helpful if saving a workflow could save the aliases along with the actual connection information.  Likewise, it would then be nice if someone opening the workflow could add the aliases found therein to their own list of aliases.

 

Granted, there may be difficulties - this is great for connections using integrated authentication, but not so much for userid/password connections. Perhaps (if implemented) it could be limited along these lines.

 

In the Report Map tool, I'm locked from changing the 'Background Color' menu, and the color appears to be set to R=253, G=254, B=255, which is basically white. 

 

However, when we use our TomTom basemap, we see that the background is actually blue, despite what's listed in the Background Color window. (This goes beyond the 'Ocean' layer, and appears to cover all space 'under' the continents and ocean.) Since we often print large maps of the east coast, this tends to use a lot of blue ink. I've attached a sample image to illustrate this.

 

My solve to-date has been to edit the underlying TeleAtlas text file and change the default background (117 157 181) to white (255 255 255).  Unfortunately, we lose these changes with each data update.

 

Could Alteryx unlock the Background Color menu, and have it affect the 'base' layer, underneath oceans and continents in TomTom maps?  Not sure how it might affect aerial imagery.
