Alteryx Designer Ideas

Share your Designer product ideas - we're listening!

Hi All,

 

I was very happy to see the Bulk Loader introduced for Snowflake in the last release. The bulk loader is currently only available for Snowflake environments hosted on AWS and does not support environments running on Azure. As Snowflake continues to build momentum, I imagine this will be a common request. Is there something in the pipeline to add this functionality?

 

As an interim solution, we will work on some generic scripts/SnowSQL to mimic the bulk load, but ultimately we'd love to have this built into the tool.
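
In case it helps anyone else in the meantime, here is a minimal sketch of the internal-stage approach via the Snowflake Python connector (it works regardless of which cloud the account runs on; the account, stage, table and file names below are placeholders):

import snowflake.connector  # assumes the Snowflake Python connector is installed

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="LOAD_WH", database="MY_DB", schema="STAGING",
)
cur = conn.cursor()
# Upload the local extract to a named internal stage, then bulk load it with COPY INTO.
cur.execute("PUT file:///tmp/extract.csv @my_stage AUTO_COMPRESS=TRUE")
cur.execute(
    "COPY INTO staging.my_table "
    "FROM @my_stage/extract.csv.gz "
    "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
)
conn.close()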

 

Best,

devKev

One of the common things we need to do is take a delta copy of a file or a DB table into the staging area of the analytical database.

This always looks very similar, so it would be useful to make it a wizard-based process that teams can build very quickly rather than having to hand-roll it every time:

 

Process:

- Check which primary keys exist and fill the gaps where they don't

- Determine whether any rows update over time or whether the table is insert-only. If rows do update, identify which column is the "updated date" column so that updates can be spotted; if there is no update-date column, fall back to a column-by-column comparison of some kind (such as a hash or a checksum - see the sketch after the Outputs list below)

- Do you want to sync deletes?

- Do you want to keep updates?

 

Outputs:

- Target table in the staging area, now updated to match the source

- Logging (similar to what Kimball recommends in the ETL Toolkit) with the run date/time, summary stats, and any errors

- Errors table, with row numbers, for any errors that arose

- Target tables created (with a history table if requested)
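
To make the hash/checksum comparison concrete, here is a rough Python sketch of the delta detection such a wizard could generate (the key and column names are whatever the wizard collects; this is an illustration, not an actual Alteryx implementation):

import hashlib
import pandas as pd

def row_hash(df, cols):
    # Stable per-row hash over the compared columns (a stand-in for a DB checksum).
    joined = df[cols].astype(str).agg("|".join, axis=1)
    return joined.map(lambda s: hashlib.md5(s.encode()).hexdigest())

def delta(source, target, keys, compare_cols):
    src, tgt = source.copy(), target.copy()
    src["_hash"] = row_hash(src, compare_cols)
    tgt["_hash"] = row_hash(tgt, compare_cols)
    merged = src.merge(tgt[keys + ["_hash"]], on=keys, how="outer",
                       suffixes=("", "_tgt"), indicator=True)
    inserts = merged[merged["_merge"] == "left_only"]    # new in source
    deletes = merged[merged["_merge"] == "right_only"]   # gone from source (sync if requested)
    updates = merged[(merged["_merge"] == "both") &
                     (merged["_hash"] != merged["_hash_tgt"])]
    return inserts, updates, deletes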

 

With the release of 2018.3, caching has become an ad hoc task. With complex workflows and multiple inputs, we need a way to cache and to save the cache selection per tool. Once the workflow runs after opening, the cache would be saved at the latest tool downstream.


That way we wouldn't have to create ad hoc cache steps and run the workflow twice before getting the time-saving benefits of caching.

 

This would work similarly to the cache feature in 11.0 but with enhanced functionality: the best of the old cache combined with the intent of the new one.

 

Embed the cache option into tools.

 

Thanks!

While the In-DB tools are very helpful and cut down the time needed to write complex SQL, some steps are faster to write directly in SQL, such as window functions (OVER (PARTITION BY ...)). In Alteryx, we need to create multiple joins and summaries to perform a window function. It would be immensely helpful if there were a SQL editor tool for In-DB workflows where we could edit the SQL code at any point in the workflow - or, even better, an "edit" option on every In-DB tool that let us customize the generated SQL before it is sent to the next tool.

 

This would cut down development time immensely and streamline workflows, making Alteryx a true contender in the ETL solution space.
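
As a small illustration of the kind of SQL we would like to hand-edit mid-stream, here is the window function written directly (run against an in-memory SQLite database just so the example is self-contained; the table and column names are made up):

import sqlite3

# One line of SQL does the per-group ranking; with standard In-DB tools this would
# take a Summarize plus a Join.
con = sqlite3.connect(":memory:")  # SQLite 3.25+ supports window functions
con.execute("CREATE TABLE sales (region TEXT, rep TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                [("East", "Ann", 100), ("East", "Bob", 250), ("West", "Cal", 175)])
for row in con.execute(
        "SELECT region, rep, amount, "
        "RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rank_in_region "
        "FROM sales"):
    print(row)
con.close()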

Currently, pip is the package manager used within Designer. Unfortunately, this doesn't fit our requirements as data scientists. We prefer conda for the following reasons:

  1. conda also manages non-Python library dependencies. This means conda can manage R packages as well, which comes in quite handy (even though not all packages from the CRAN repository are available).
  2. conda provides a very simple way of creating conda envs (similar to virtualenv, except that conda can also install and manage pip packages, whereas virtualenv cannot install conda packages) to isolate the required packages, at specific versions, used in a workflow (e.g. for a Python tool in Designer).

 

So I would like to have conda instead of, or in addition to, pip, and I would like to create conda envs where I install the packages I need for a specific task within my workflow. Moreover, if you are considering an R Jupyter notebook capability (like the Python tool), it could be beneficial to switch from pip to conda to manage packages in both worlds.
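
For reference, the kind of per-workflow isolation we have in mind looks roughly like this (a sketch that shells out to conda; the environment and package names are just examples):

import subprocess

env_name = "alteryx_wf_env"
# Create an isolated env for one workflow, then install Python and R packages into it.
subprocess.run(["conda", "create", "--yes", "--name", env_name, "python=3.8"], check=True)
subprocess.run(["conda", "install", "--yes", "--name", env_name,
                "pandas", "r-base", "r-dplyr"], check=True)
# pip-only packages can still be installed inside the same env when needed.
subprocess.run(["conda", "run", "--name", env_name, "pip", "install", "requests"], check=True)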


We need some way (unless one exists that I am unaware of, beyond disabling all but the container I want to run) to fire off containers in a particular order: run container "Step1", then run container "Step2", and so on.

Hi 

I want to control the order of execution of objects in an Alteryx workflow, but right now we ONLY have Block Until Done, which is not the right choice in many cases.

Could we have a container (say, a Sequence Container), put a piece of logic in each container, and control the flow by connecting the containers? That way we could control the execution order.



When commenting out part of an expression (with // or /* ... */), the popup box shouldn't appear, since a comment is essentially free text.

 

It's quite irritating when writing a block explanation of logic or something similar.

 

Luke

I've seen this question before and have run into it myself. I'd like to see a new tool that would allow a developer (of a workflow) to choose a path of logic based on criteria known only during the execution of a module.

 

IF LEFT INPUT count of records < 10,000 THEN Path 1 (e.g. use a Calgary join)

ELSE Path 2 (e.g. use a standard join)

ENDIF
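
A minimal sketch of the same branching idea outside of Alteryx, with made-up join helpers standing in for the Calgary and standard joins:

def hash_join(left, right, key):
    # Build an index on the right input, then probe it (stand-in for "Path 1").
    index = {}
    for row in right:
        index.setdefault(row[key], []).append(row)
    return [{**l, **r} for l in left for r in index.get(l[key], [])]

def nested_loop_join(left, right, key):
    # Simple comparison of every pair (stand-in for "Path 2").
    return [{**l, **r} for l in left for r in right if l[key] == r[key]]

def adaptive_join(left, right, key, threshold=10_000):
    # IF LEFT INPUT count of records < threshold THEN Path 1 ELSE Path 2
    if len(left) < threshold:
        return hash_join(left, right, key)
    return nested_loop_join(left, right, key)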

 

Thanks,

 

Mark

Per my initial community posting, it seems that in environments where the firewall blocks pip, the YXI installation process takes longer than it needs to. In my case it took 9 minutes 15 seconds for a 'simple' custom tool (one dependency wheel included in the YXI).

 

Given the helpful explanation of the YXI installation process, it seems the pip and setuptools --upgrade step is causing the delay. Disconnecting from the internet entirely lets the same custom YXI install in 1 minute 29 seconds.

 

My idea is to provide a configuration option to install the YXI files 'offline': that is, to skip the pip install --upgrade steps and perhaps specify the --find-links and --no-index options with the pip install -r requirements.txt command. The --no-index option would assume that the developer has included the dependency wheel files in the YXI package. If possible, a second config option to set the path used for --find-links would help companies that keep their dependencies in a central location.
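
Roughly, the offline install step I have in mind would look like this (the wheel directory path is just an example):

import subprocess
import sys

wheel_dir = r"C:\ProgramData\MyCompany\wheels"   # local wheel cache bundled with or alongside the YXI
subprocess.run([
    sys.executable, "-m", "pip", "install",
    "--no-index",               # never reach out to PyPI
    "--find-links", wheel_dir,  # resolve dependencies from the local wheels instead
    "-r", "requirements.txt",
], check=True)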

I use a mouse which has a horizontal scroll wheel. This allows me to quickly traverse the columns of excel documents, webpages, etc.

 

This interaction is not available in Alteryx Designer, and when working with wide data previews, adding it would improve my UX drastically.

DELETE FROM Source_Data WHERE ID IN

(SELECT ID FROM My_Temp_Table WHERE FLAG = 'Y')

 

.... 

 

Essentially, I want to update a DB table either by updating rows or by deleting them; I can't simply delete all of the data. My workaround will be to create/insert into a table the keys that I want to delete, and then use an Input/Output tool with SQL that performs the delete. Any other suggestions are welcome, but a tool would be best.
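
For anyone trying the same workaround, here is a rough sketch of what the pre/post-SQL step could do (shown with pyodbc; the connection string is a placeholder and the table names follow the example above):

import pyodbc

conn = pyodbc.connect("DSN=MyWarehouse;UID=my_user;PWD=...")
cur = conn.cursor()
# Stage the keys to delete, then issue the targeted DELETE.
cur.executemany("INSERT INTO My_Temp_Table (ID, FLAG) VALUES (?, 'Y')",
                [(101,), (102,), (103,)])
cur.execute("DELETE FROM Source_Data WHERE ID IN "
            "(SELECT ID FROM My_Temp_Table WHERE FLAG = 'Y')")
conn.commit()
conn.close()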

 

Thanks,

Mark

As we do more work analyzing the canvases that our folks are producing, it's becoming more and more necessary to have a well-documented definition and schema for the XML used for Alteryx canvases.

 

Please could you publish the full XML definition and schema for Alteryx canvases? This would allow groups to perform deeper analytics on how people are using Alteryx, automate quality checks, look for learning gaps, scan for dependencies, etc.
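
Even without an official schema, a workflow (.yxmd) file is XML today, so some of this analysis is already possible. A small sketch of counting tool types per canvas (the element and attribute names here reflect what typically appears in workflow files and may vary by version; the file path is a placeholder):

from collections import Counter
import xml.etree.ElementTree as ET

tree = ET.parse("MyWorkflow.yxmd")
# Count how often each tool/plugin appears on the canvas.
plugin_counts = Counter(
    node.find("GuiSettings").get("Plugin", "unknown")
    for node in tree.getroot().iter("Node")
    if node.find("GuiSettings") is not None
)
for plugin, count in plugin_counts.most_common():
    print(f"{count:4d}  {plugin}")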

 

Note: this relates to an idea from @dataprep here: https://community.alteryx.com/t5/Alteryx-Designer-Ideas/Documentation-tool-list-fileformat/idi-p/184...

 

cc: @revathi @LizaNemchynova @ydmuley

I'm loving the new Python tool. One feature I'm missing is better handling of reference shortcuts. If my code is something like this:

test="%Question.Text%"
test

and I run my workflow, it will replace %Question.Text% in the Python tool's code like this:

test="Hello World"
test

Similar to the R tool (or any other tool, really), I think my code should remain unchanged, but my variable test should be set to whatever value is entered for Question.Text.

 

Thanks!

I have many use cases that involve one or more of the following:
  • moving or renaming a file after importing it
  • deleting a file after importing it
  • moving or copying a file after successfully exporting it
  • writing a temporary file (e.g. a batch file for the Run Command tool), then deleting it when finished
A complete suite of file management tools (Copy, Delete, Move/Rename) would make this much easier.
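In the meantime, a minimal sketch of the workaround (e.g. run from the Python tool or a Run Command step); all paths are examples:

import shutil
from pathlib import Path

src = Path(r"C:\data\incoming\extract.csv")
archive_dir = Path(r"C:\data\archive")
archive_dir.mkdir(parents=True, exist_ok=True)

shutil.copy2(src, archive_dir / src.name)                   # copy (keeps timestamps)
src.rename(src.with_name(src.name + ".processed"))          # move/rename after import
(archive_dir / "old_extract.csv").unlink(missing_ok=True)   # delete if present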

There is a great question in the Designer space right now asking about saving logs to a database: https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Save-workflow-messages-log-in-database...

 

This got me to think a little more about localized logging options in Alteryx.

 

At a high level, there are ways to accomplish this in Designer at a user or system level by enabling a logging directory and then parsing those logs with a separate Alteryx job. However, this involves logging ALL Designer executions, which seems like overkill for this need. A user can also manually save a log after each execution, although that requires manual intervention.

 

I think adding an option in the workflow configuration Runtime settings to enable logging and (optionally) specify a logging directory would be a great feature for Designer. In my opinion this should not apply once a workflow runs on Server (Server logging should be handled in a fully standardized way), but it should apply to Designer "UI" execution. The ability to define a log naming convention (perhaps including the workflow name and run date in the log name) would be icing on the cake.

 

This would allow for a piecemeal logging solution that captures specific flows or processes that are high visibility or high importance, while avoiding hundreds or thousands of daily logs for less important processes and dev tests. It would also reduce or eliminate the manual work of saving these logs individually.
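
The naming convention could be as simple as workflow name plus run timestamp; a quick sketch (the paths and names are examples):

from datetime import datetime
from pathlib import Path

log_dir = Path(r"C:\AlteryxLogs\HighVisibilityFlows")
log_dir.mkdir(parents=True, exist_ok=True)
workflow_name = "Daily_Sales_Load"
log_path = log_dir / f"{workflow_name}_{datetime.now():%Y%m%d_%H%M%S}.log"
log_path.write_text("workflow messages would be written here\n")
print(log_path)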

Every time I add a tool container I default the Margin to "none." Could you make a default selection part of user settings? Thank you.

 

I'd like to append a field to an Excel file name in the Output tool, but for Excel it appends the field to the table (sheet) name instead.

 

The solution is to build the filename ahead of time, as described in @HenrietteH's KB article, but that seems less than elegant.

 

 

https://community.alteryx.com/t5/Alteryx-Knowledge-Base/How-to-Guide-to-dynamically-renaming-output-...

 

Solution:

 

Expand the list of options to:

 

Append Suffix to File Name

Prepend Prefix to File Name

Append Suffix to Table Name

Prepend Prefix to Table Name

 

Cheers,

Bob

 

 

Hi All,

 

With the integration of various platforms into Alteryx, connectors provide real ease of use.

 

One request is a Yammer connector. It would:

1. Help extract insights from organisation pages.

2. Help us understand the productivity/ideas of the organisation overall and support enterprise content management.

 

Currently, the process to extract such data is through the REST API/Bulk API, and a connector would solve these issues.
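
For context, the REST workaround today looks roughly like this (the OAuth token is a placeholder and the fields follow the Yammer v1 API as documented):

import requests

token = "YOUR_OAUTH_TOKEN"
resp = requests.get(
    "https://www.yammer.com/api/v1/messages.json",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
for msg in resp.json().get("messages", []):
    print(msg.get("id"), str(msg.get("body", {}).get("plain", ""))[:80])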

 

Thanks

Harsh

A cache tool would allow a user to temporarily store a snapshot of inline data from a previous run of the module.

Imagine a Browse tool that is inline rather than a terminus (i.e. it has both an input and an output). Now allow that tool to persist its data after a run of the module. When an option on that tool is activated, it would block all of the dependent tools upstream of it and instead send its cached data downstream.

The reason I think this would be a useful tool is that I often come to the end of creating a module when I'm working on the Reporting tools. I run the module multiple times to see the changes I've made. When the module has a lot of incoming data and complex data transformations, it can take a long time just for the data to reach the reporting tools. This cache tool would eliminate that wait.
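
Until then, the behaviour can be mimicked by hand, e.g. in the Python tool: persist the expensive intermediate result on the first run and reuse it afterwards. A sketch (the cache path and placeholder transformation are examples):

from pathlib import Path
import pandas as pd

CACHE = Path("expensive_step.pkl")

def expensive_transformations():
    # Stand-in for the slow upstream inputs and complex transformations.
    return pd.DataFrame({"value": range(1_000_000)}).assign(squared=lambda d: d.value ** 2)

if CACHE.exists():
    df = pd.read_pickle(CACHE)       # reuse the cached snapshot, skip the upstream work
else:
    df = expensive_transformations()
    df.to_pickle(CACHE)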