
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

Today, when we install custom tools that use DLLs, the DLLs must be placed in the Plugins folder inside the Alteryx installation directory. This requires a second step after the YXI installer runs. I would like to be able to package the DLL with the YXI installer and have Alteryx search for the DLL inside the tool's directory, just as it does for custom Python tools. This would allow custom tools that use DLLs to be installed with the same one-step process as Python tools.

 

For example, this does not work today, but I want it to:

[Image: Screen Shot 2020-06-05 at 8.35.17 AM.png]
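To make the request concrete, here is a hypothetical YXI package layout (the file names are illustrative, not taken from the screenshot):

MyCustomTool.yxi          (a renamed zip archive)
  Config.xml
  MyCustomTool\
    MyCustomToolConfig.xml
    MyCustomToolGui.html
    MyCustomToolEngine.dll   <- today this must be copied by hand into the Plugins folder of the Alteryx install directory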

The Alteryx Python tool currently throws an error if the inbound record set has zero rows (screenshot 1).

To manage that, you need to wrap Alteryx.read in a try-except block that instead creates an empty data frame (Screenshot 2). This is inefficient because every time you change the canvas upstream of the Python tool, you need to re-code a static field list into the try-except block (i.e. you can no longer deal with variable fields).
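A minimal sketch of that workaround, assuming the standard ayx package inside the Python tool and hypothetical field names:

from ayx import Alteryx
import pandas as pd

try:
    df = Alteryx.read("#1")
except Exception:
    # Workaround: hand-build an empty frame with a hard-coded schema.
    # This static field list has to be kept in sync with the upstream
    # canvas, which is exactly the maintenance burden described above.
    df = pd.DataFrame(columns=["Field1", "Field2"])  # hypothetical field names

Alteryx.write(df, 1)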

 

Could you please change the Alteryx.read method to return a zero-record data frame with the correct column names when the input is zero-length?

 

Thank you

Sean

 

Screenshot 1:

[Image: ErrorMessage.png]

 

Screenshot 2:

[Image: TryExceptBlock.png]

With the release of 2018.3, caching has become an ad hoc task. With complex workflows and multiple inputs, we need a way to cache by tool and to save that cache selection. Once the workflow runs after opening, the cache would be saved at the latest tool downstream.


This way we don't have to create ad hoc cache steps and run the workflow twice before realizing the time-saving benefits of caching.

 

This would work similarly to the cache feature in 11.0 but with enhanced functionality - the best of the old cache combined with the intent of the new one.

 

Embed the cache option into tools.

 

Thanks!

Please offload map rendering in the Browse tool to the video card using DirectX or OpenGL; the software rendering currently used is embarrassingly slow and disruptive.

Hello all,

Here is the issue: I have a workflow in my OneDrive folder:
[Image: image.png]


In that workflow, I use a macro that writes a file with a relative path (..\6_Big_Data\EN\.csv):


[Image: image.png]

Strangely, it doesn't work, and the error message seems to refer to a folder that doesn't exist (but also not the one I set):
[Image: image.png]

ErrorLink: Output Data (1): https://community.alteryx.com/t5/*/*/ta-p/724327?utm_source=designer&utm_medium=resultsgrid|Cannot access the folder C:\Users\saubert\OneDrive - Business & Decision\Documents\B&D_Market\6_Big_Data\EN\.


I really would like that to work :)

Best regards,

Simon

Can we have a User Setting that allows users to choose whether Alteryx should prevent the computer from going into Sleep or Hibernate mode while running a workflow?
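For reference, a minimal sketch of what such a setting could do under the hood on Windows, using the Win32 SetThreadExecutionState call (the constants below are from the Windows API documentation; wiring it into Designer is the feature being requested):

import ctypes

ES_CONTINUOUS = 0x80000000
ES_SYSTEM_REQUIRED = 0x00000001

def prevent_sleep(active: bool) -> None:
    # Tell Windows to stay awake while a workflow runs; passing only
    # ES_CONTINUOUS restores normal power management afterwards.
    flags = ES_CONTINUOUS | (ES_SYSTEM_REQUIRED if active else 0)
    ctypes.windll.kernel32.SetThreadExecutionState(flags)

prevent_sleep(True)    # before the run starts
# ... workflow runs ...
prevent_sleep(False)   # after the run finishes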

 

It would be helpful to have Read Uncommitted listed as a global runtime setting.

Most of the workflows I design need this set, so rather than risk forgetting to tick this option on one of my inputs, it would be beneficial as a global setting.

For example: the user would be able to set specific inputs according to their needs while the check box for the global runtime setting remains unchecked.

However, if the user checked the global runtime setting for Read Uncommitted, then the whole workflow would automatically use an uncommitted read on all of the inputs.

When the user unchecks the global runtime setting for Read Uncommitted, only the inputs that were individually set up with this option would keep the uncommitted read.
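A tiny sketch of the proposed precedence, with hypothetical flag names:

def effective_read_uncommitted(global_setting: bool, input_setting: bool) -> bool:
    # Checking the global runtime setting forces Read Uncommitted on every
    # input; unchecking it falls back to whatever each input was set to.
    return global_setting or input_setting

assert effective_read_uncommitted(True, False) is True    # global overrides all inputs
assert effective_read_uncommitted(False, True) is True    # per-input setting survives
assert effective_read_uncommitted(False, False) is False  # nothing set, nothing forced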

 

So I discovered this neat little tip today: if you have a Browse tool in your workflow and click on the hyperlink (2 in the picture below) while the workflow is running, it will open a pop-out Browse rather than show the data in the results window, meaning you can still see all of the messages. However, if you click on the tool name/ID (1 in the image), it locks the results window to that tool. Idea for a fix here.

 

[Image: joe_lipski_0-1599822085052.png]

 

 

And this led me to think that Alteryx must be populating the temporary browse anywhere data in memory as it runs, so it would be great if it were possible to click on the tool anchors or the tool names in the results window while the workflow is running to see the browse anywhere data.

 

 

In order to run a canvas using either AMP or E1, the user has to perform at least five operations, none of which are obvious:

a) click on white space on the canvas to get to the workflow configuration (if this configuration pane is not docked, you first have to enable it)

b) set focus in this window

c) change to the Runtime tab

d) scroll down past all the confusing and technical settings that most end users are nervous to touch - memory limits, temporary file location, code page settings - to click on the last option, the AMP engine

e) and then hit the run button

 

[Image: SeanAdams_0-1641577970387.png]

 

A better way!

Could we instead simplify this and put a drop-down on the Run button so that you can run with the old engine or run with the new engine? Or even better, have two Run buttons: run with the old engine, and run with the super-fast cool new engine?

  • This puts the choice where the user is looking at the time they want to run (if I want to run a canvas, I'm thinking about the Run button, not a setting at the bottom of the third tab of a workflow configuration)
  • It also makes it super easy for users to run with both E1 and AMP without a dozen clicks to compare - this way they can very easily see the benefit of AMP
  • It makes it less scary, since you are not wading through configuration changes like memory or code pages
  • And finally, it exposes the new engine to people who may not even know it exists, 'cause it's buried at the bottom of the third tab of a workflow configuration panel, under a bunch of scary-sounding config options

 

cc: @TonyaS 

 

When I have AMP enabled, I can no longer performance-profile my workflows. I get that there may be issues with calculating this across multiple threads, but it would be great to have performance profiling available for the new engine.

It would be cool to have annotations that dynamically update - e.g. a record count displayed in the annotation that updates after a run if changes occurred.

This idea arose from a conversation with a colleague, @Carlithian, where we were trying to work out a way to remove tools from the canvas which might be redundant - for example, have you added a Select tool to the canvas which hasn't been configured to change a data type or rename a field? So we were looking for ways of identifying, in the workflow XML, tools which didn't have a configuration applied to them.

 

This highlighted an issue with something like the Data Cleanse tool, which is a standard macro.

 

The XML view of the Data Cleanse configuration looks like this:

<Configuration>
  <Value name="Check Box (135)">False</Value>
  <Value name="Check Box (136)">False</Value>
  <Value name="List Box (11)">""</Value>
  <Value name="Check Box (84)">False</Value>
  <Value name="Check Box (117)">False</Value>
  <Value name="Check Box (15)">False</Value>
  <Value name="Check Box (109)">False</Value>
  <Value name="Check Box (122)">False</Value>
  <Value name="Check Box (53)">False</Value>
  <Value name="Check Box (58)">False</Value>
  <Value name="Check Box (70)">False</Value>
  <Value name="Check Box (77)">False</Value>
  <Value name="Drop Down (81)">upper</Value>
</Configuration>

 

As it is a macro, the default labelling of the drop-downs is specified in the XML. If you were to do something useful with it, wouldn't it be much nicer if the interface tools were named properly - such as:

[Image: cgoodman3_0-1674658512759.png]

So when you look at the XML of the workflow, it's clearer to the user what is actually specified.

[Image: cgoodman3_1-1674658649253.png]
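A rough sketch of the kind of check we were attempting, assuming the Configuration fragment above has been saved to a file (the "all check boxes still False" heuristic is an assumption about the macro's defaults):

import xml.etree.ElementTree as ET

root = ET.parse("cleanse_config.xml").getroot()

# Collect only the check-box values; the drop-down keeps a default ("upper")
# even when unused, so it can't distinguish configured from unconfigured.
checkboxes = [v.text for v in root.iter("Value")
              if v.get("name", "").startswith("Check Box")]

if checkboxes and all(v == "False" for v in checkboxes):
    print("Data Cleanse tool appears to be unconfigured")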

 

 

 

Hi 

I'm really missing a search in the metadata pane.

If I am on the data pane:

[Image: Hamder83_0-1658922640426.png]


If I'm browsing through metadata:

[Image: Hamder83_1-1658922660398.png]



We are trying to use the Alteryx workflow migration workflow to set up proper SDLC environments and reduce human intervention in the process. For example, if we create a Gallery data connection XYZ in multiple Alteryx environments and run the migration workflow, the connection IDs are different in those environments regardless of how we name them. So even after migrating the workflow, we still have to go to each environment manually, update the connection(s), and upload it again. That rather defeats the purpose of the migration concept itself.

The suggestion is to use the Gallery connection name/alias as the connection ID so that when workflows are migrated, connections are mapped accordingly.

#Deployment #LargeScale #CleanCode #BareBonesCode

 

Request to add an option to strip out all unnecessary text within a workflow / Gallery app when deploying to Alteryx Server to be scheduled or used as a Gallery app. "Run at file location" still causes the reading of unnecessary information across the network.

 

Often the workflows are bloated with unused metadata that at a small scale is not an issue, but at scale all the additional bloat (kBs to MBs in size) sent from the controller to the worker does impact the server environment.

 

The impact explodes when leveraging the Alteryx API to launch the same job over and over with different parameters - all the non-useful information in the workflow is sent to the various workers for every one of these jobs.

 

Even having a "compiled" version of the workflow could be a great solution. #CompiledCode
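As a rough illustration of the kind of stripping being requested, here is a sketch that removes tool annotations from a workflow's XML (the element names are based on inspecting .yxmd files; treat them as assumptions and work on a copy):

import xml.etree.ElementTree as ET

tree = ET.parse("workflow.yxmd")  # .yxmd files are plain XML
root = tree.getroot()

# Drop the free-text annotation blocks attached to each tool's Properties;
# they are useful on the canvas but dead weight on a worker.
for props in root.iter("Properties"):
    for annotation in props.findall("Annotation"):
        props.remove(annotation)

tree.write("workflow_stripped.yxmd", encoding="utf-8", xml_declaration=True)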

 

Attached is a simple workflow that shows how bloated the workflows can become.

 

I appreciate your consideration.

When using the R Tool for simple tasks (like renaming files, for example) in an iterative macro, there's a delay on every iteration as the R Tool starts up R.

 

The following messages are repeated on every iteration (with delays):

[Image: 2018-08-19_21-41-23.jpg]

 

Can we look at an option to forward-scan an Alteryx job for R Tools, then load R into the process once, to eliminate these delays on every iteration?

 

CI/CD is critical to any production-level process, especially when multiple authors are contributing new features to the same workflow. Currently, multi-author editing of workflows is extremely difficult, and something that would be aided greatly by using git to control different branches of ongoing work. Luckily, that's something we can already do today! However, the ability to test before merging a pull request is critical to modern CI/CD pipelines. For this, we need to be able to run a headless workflow from a CI/CD environment. Also, the ability to pass parameters into the workflow would allow for robust integration testing - something that isn't straightforward today without running on production environments.
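For example, a hedged sketch of a CI step, assuming a Desktop Automation license and its AlteryxEngineCmd.exe command-line runner (the install path and the analytic-app AppValues.xml convention are assumptions to adapt to your environment):

import subprocess

ENGINE = r"C:\Program Files\Alteryx\bin\AlteryxEngineCmd.exe"  # adjust per install

# Run the workflow headless; for an analytic app, a second argument
# supplies parameter values, enabling the integration testing described.
result = subprocess.run(
    [ENGINE, r"C:\ci\checkout\MyWorkflow.yxwz", r"C:\ci\checkout\AppValues.xml"],
    capture_output=True, text=True,
)

if result.returncode != 0:
    raise SystemExit(result.stdout + result.stderr)  # fail the pipeline before merge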

In cases where there are dynamic tools, you often get a situation where zero rows are returned, which means that the output of something like a Transpose, a JSON Parse, or a RegEx tool may not have the expected field names.

 

However, any downstream Filter tools (or other similar tools) fail even though there are no rows (see screenshot below).

 

The only way to get around this is to insert fake rows using a Union, or to use the CReW Ensure Fields macro. However, this is all waste: since there are no rows, there's no point in even evaluating the predicate in the Filter tool. Rather than making users work around this, can we please change the engine so that a tool can skip evaluation when there are zero rows? This would significantly reduce the number of workarounds needed with any dynamic tools (including any API calls).
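For illustration, a sketch of the "ensure fields" style workaround done in a Python tool (EXPECTED is a hypothetical list of the field names downstream tools rely on):

import pandas as pd

EXPECTED = ["id", "status", "amount"]  # hypothetical downstream schema

def ensure_fields(df: pd.DataFrame) -> pd.DataFrame:
    # Add any missing columns so a zero-row frame still carries the
    # field names that downstream Filter tools expect.
    for col in EXPECTED:
        if col not in df.columns:
            df[col] = pd.Series(dtype="object")
    return df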

 

Thank you

Sean

[Image: Predicate.png]

In workflow Constants, it would be really useful to be able to populate a new field associated with each user-created constant.

 

E.g. Type, Name, Value, "Description"

 

The description could be left blank, but it could also be populated by workflow designers to attach commentary / business logic to the constant.

 

E.g. Type = User, Name = MyUserConstant, Value = 0.25, Description = "This describes the weighting factor used in Product Calculations"

 

 

Hello all,

A whole field of performance improvement has not been explored by Alteryx: hardware acceleration, using something other than a CPU for calculation.

Here are some good readings about that:
https://blog.esciencecenter.nl/why-use-an-fpga-instead-of-a-cpu-or-gpu-b234cd4f309c

https://en.wikipedia.org/wiki/Application-specific_integrated_circuit

The kind of acceleration we can dream of!

 
 

[Image: image.png]

 

Best regards,

Simon