Alteryx Designer Ideas

Share your Designer product ideas - we're listening!

1. Review our submission guidelines & status definitions before getting started

2. Search the community for a solution or existing idea before posting

3. Vote by clicking the star in the top left corner of an idea you support

4. Submit a new idea to suggest a product enhancement or new feature



I would like to see more file types supported for dragging from a folder onto a workflow, more precisely .txt and .dat files. This would greatly help my team and me analyze the new and unknown data files that we receive on a daily basis.

Thank you.

Can we get the Input tool to automatically convert long filenames to the 8.3 convention inside a macro?

 

I've written a batch macro that opens files individually in order to trap files that fail to open. However, when I pass in really long file names it bombs: beyond some length the Input tool converts the path to 8.3, but that logic doesn't fire inside my macro.

 

Example of filename:
\\ccogisgc1sat\d$\Dropbox (Clear Channel Outdoor)\Mapping\BWI MapInfo\Workspaces\Local\AEs\Archives\Cara\Sunrise Senior Living\Washington+DC_Adults+55++With+HHI+Of+$75,000++Who+Are+Caregiver+Of+Aging+Parent_Relative+Or+Planning+To+Shop+For+Nursing+Care_Assisted+Living_Retirem.TAB
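
In the meantime, a workaround sketch of the same conversion the Input tool appears to do, done up front so the macro only ever sees short paths. This calls the Windows API directly (Python here purely for illustration) and assumes the volume still generates 8.3 names:

```python
import ctypes

def short_path(long_path: str) -> str:
    """Return the 8.3 short form of an existing path via GetShortPathNameW."""
    GetShortPathNameW = ctypes.windll.kernel32.GetShortPathNameW
    needed = GetShortPathNameW(long_path, None, 0)  # ask for the required buffer size
    if needed == 0:
        raise ctypes.WinError()
    buf = ctypes.create_unicode_buffer(needed)
    GetShortPathNameW(long_path, buf, needed)
    return buf.value
```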

Now that we have a Snowflake Bulk Loader option, it would be great to utilize the built-in Snowflake internal staging.  This eliminates the need for an end-user to have the technical know-how or access to IT resources to utilize a separate S3 bucket and generally reduces friction in the process. 

 

There was pretty widespread support in the original Bulk Load thread: https://community.alteryx.com/t5/Alteryx-Designer-Ideas/Snowflake-Bulk-Loader/idi-p/105291/page/2#co...
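
For reference, the internal-stage route is only two statements in Snowflake itself. A minimal sketch with snowflake-connector-python (all connection values and names below are placeholders):

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="***",
    warehouse="my_wh", database="my_db", schema="public",
)
cur = conn.cursor()
# PUT uploads (and gzips) the local file into the user's internal stage.
cur.execute("PUT file:///tmp/extract.csv @~/alteryx_stage/ AUTO_COMPRESS=TRUE")
# COPY INTO loads it from the stage; no separate S3 bucket required.
cur.execute("""
    COPY INTO my_table
    FROM @~/alteryx_stage/extract.csv.gz
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
""")
```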

Microsoft Access 2000-2003 does not support big integers. From the field size documentation:

 

Integer — For integers that range from -32,768 to +32,767. Storage requirement is two bytes.


You can find the MS documentation here: https://support.office.com/en-us/article/set-the-field-size-ba65e5a7-2e6f-4737-8e72-36b93f966a33


So when you use Alteryx with a big integer (e.g. Int64), it won't work the way you expect and the field will be converted to Double.

So what I propose:

-a change in the documentation about this behaviour

-a warning message in the output box when you write to Access with Int64 fields

-the ability to target Access 2016 in order to use its Large Number (big integer) format
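
Until then, here is a hypothetical sketch of the kind of pre-flight check the warning could be based on (pandas used purely for illustration, not Alteryx's actual mechanism):

```python
import numpy as np
import pandas as pd

def downcast_for_access(df: pd.DataFrame) -> pd.DataFrame:
    """Downcast Int64 columns that fit in 32 bits; flag the ones that don't."""
    i32 = np.iinfo(np.int32)
    for col in df.select_dtypes(include=["int64"]).columns:
        if df[col].between(i32.min, i32.max).all():
            df[col] = df[col].astype("int32")  # fits Access's Long Integer
        else:
            print(f"Warning: {col} exceeds 32 bits; Access will coerce it to Double")
    return df
```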

Best regards,

Simon

Hi Everyone,

 

Many of the workflows my colleagues and I work with use big databases to get their data. After a few steps downstream and some testing, we normally just add an output and then open up that data in a new workflow, to save time running the original workflow. Not that this is much of a burden, but I am used to copying and pasting tools from workflow A to workflow B, and you can't do that with the output, because in workflow B the output needs to be converted to an input. I just think it would be a cool added feature if possible. Anyone else agree?

 

Thank you,

Justin

 

 

  • Input Output

One of the common things that we need to do, is to take a delta-copy of a file or a DB table into the staging area of the analytical database.

This always looks very similar, so it would be useful to make this a wizard-based process so that teams can easily build these very quickly rather than having to hand-roll them:

 

Process:

- Check which primary keys exist - fill the gaps where they don't

- Are there any rows that update over time, or is this insert-only? If they update over time, which column is the "updated date" column, so that we can spot updates? If there is no update date, then we need a column-by-column check of some kind, like a hash or a checksum (a sketch of this appears after the Outputs list below)

- Do you want to sync deletes?

- Do you want to keep updates?

 

Outputs:

- Target table in staging area which is now updated compared to the source

- Logging done (similar to what Kimball recommends in The Data Warehouse ETL Toolkit) with the run date/time, summary stats, and any errors

- Errors table for any errors that arose with row numbers

- Tables in target created (with history table if requested)
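
To make the hash/checksum step concrete, here is a minimal sketch of the comparison logic such a wizard could generate (Python for illustration; "pk" is a hypothetical primary key column):

```python
import hashlib

def row_hash(row: dict) -> str:
    """Stable checksum over a row's non-key columns."""
    payload = "|".join(f"{c}={row[c]}" for c in sorted(row) if c != "pk")
    return hashlib.md5(payload.encode()).hexdigest()

def delta(source: dict, target: dict):
    """source/target map primary key -> row; returns insert/update/delete lists."""
    inserts = [source[k] for k in source.keys() - target.keys()]
    deletes = [target[k] for k in target.keys() - source.keys()]
    updates = [source[k] for k in source.keys() & target.keys()
               if row_hash(source[k]) != row_hash(target[k])]
    return inserts, updates, deletes
```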

 

Hello,

As of today, the in-db connection window is divided into:
-a Write tab

-a Read tab

However, writing means two different things: inserting and in-db writing. Alteryx already has two different tools for these (Data Stream In and Write Data In-DB).

So what I propose is to divide the window into:
-read
-write
-insert

Best regards,

Simon

 

 

Hi All,

 

I was very happy to see the Bulk Loader introduced for Snowflake in the last release. This bulk loader is specifically available for Snowflake environments hosted on AWS, but it does not provide functionality for environments using Azure. As Snowflake continues to build momentum, I imagine this will be a common request. Is there something in the pipeline to add this functionality?

 

For an interim solution, we will be working toward developing some generic scripts/snowsql to mimic that bulk load, but ultimately we'd love to have this as part of the tool.
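
For anyone attempting the same interim route, a sketch of the snowsql-style approach: COPY INTO can read straight from Azure Blob storage with a SAS token (all names and tokens below are placeholders):

```python
import snowflake.connector

conn = snowflake.connector.connect(account="my_account", user="my_user", password="***")
# Load directly from an Azure external location, authenticated by a SAS token.
conn.cursor().execute("""
    COPY INTO my_db.public.my_table
    FROM 'azure://myaccount.blob.core.windows.net/mycontainer/extracts/'
    CREDENTIALS = (AZURE_SAS_TOKEN = '?sv=...')
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
""")
```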

 

Best,

devKev

It would be wonderful for Alteryx to be able to connect to and query OData feeds natively, rather than using a 3rd-party driver or custom macro.   

 

OData querying is supported by quite a few familiar products, including Excel, Power BI, SSIS/SSRS, FME (Safe Software), Tableau, and many others. And the protocol is used to publish feeds from Microsoft Dynamics and SharePoint, as well as many of the 10,000 publicly available government datasets with APIs (esp. those hosted by Socrata).

 

I didn't see it in the Ideas section, but questions and workarounds have been discussed in the community a few times (11/15, 3/18, 4/18), and the suggestions seem to be to buy the $400-600 ODBC driver from CData (or ZappySys), use a VBA script in Excel to trigger a refresh, or create my own Alteryx connector macro (great series btw, though most of it was beyond my understanding!).

   

While I'm not opposed to paying, kludging, or learning to program, those are just one more thing to build/buy, install, maintain, and have break at the most inconvenient time. :)

 

Thanks,
Chadd

 

OData Overview:

OData (Open Data Protocol) is an ISO/IEC approved, OASIS standard that defines a set of best practices for building and consuming RESTful APIs. OData helps you focus on your business logic while building RESTful APIs without having to worry about the various approaches to define request and response headers, status codes, HTTP methods, URL conventions, media types, payload formats, query options, etc. OData also provides guidance for tracking changes, defining functions/actions for reusable procedures, and sending asynchronous/batch requests. OData RESTful APIs are easy to consume. The OData metadata, a machine-readable description of the data model of the APIs, enables the creation of powerful generic client proxies and tools.

More info at http://odata.org
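
To illustrate how little a native connector would need, here is a sketch querying the public Northwind demo feed from odata.org with standard OData query options:

```python
import requests

url = "https://services.odata.org/V4/Northwind/Northwind.svc/Customers"
params = {
    "$select": "CustomerID,CompanyName",
    "$filter": "Country eq 'Germany'",
    "$top": "5",
}
# OData v4 returns JSON with the matching rows under "value".
resp = requests.get(url, params=params, headers={"Accept": "application/json"})
for row in resp.json()["value"]:
    print(row["CustomerID"], row["CompanyName"])
```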

With the release of 2018.3, caching has become an ad hoc task. With complex workflows and multiple inputs, we need a method to cache and to save the cache selection by tool. Once the workflow runs after opening, the cache would be saved at the latest tool downstream.

This way we don't have to create ad hoc cache steps and run the workflow twice before realizing the time-saving features of caching.

 

This would work similarly to the cache feature in 11.0, but with enhanced functionality: the best of the old cache with the new cache intent.

 

Embed the cache option into tools.

 

Thanks!

As of today, for a full refresh, I can:

-create a new table

-overwrite a table (this will drop and then create the new table)

But sometimes the workflow fails and the old table is dropped while the new one is not created. I have to modify the tool (setting it to "create a new table") to launch it again, which may be a complex process in companies. After that, I have to modify it again back to "overwrite".

What I want:

-create a new table: error if the table already exists

-overwrite a table: error if the table doesn't exist

-overwrite a table: no error if the table doesn't exist (easy in SQL: DROP TABLE IF EXISTS ...)
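
The lenient variant really is one line of SQL in most dialects. A runnable sketch against SQLite, which also shows that wrapping the drop-and-create in a single transaction avoids the half-finished state described above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
ddl = "CREATE TABLE sales (id INTEGER, amount REAL)"
con.execute(ddl)

with con:  # one transaction: on failure it rolls back and the old table survives
    con.execute("DROP TABLE IF EXISTS sales")
    con.execute(ddl)

try:
    con.execute(ddl)  # strict create: errors because the table already exists
except sqlite3.OperationalError as e:
    print("create failed as expected:", e)
```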

 

Thanks!

 

The current SharePoint API pull tool does not support pulling managed metadata columns. It would be great if Alteryx would update the SharePoint List tools to be able to read managed metadata columns.

I really love how I can drag and drop a file directly onto the canvas from Windows Explorer and Alteryx knows to create an Input Data tool. But when I tried it with a folder today, hoping to see a Directory Input tool appear, it wouldn't do it. Could we have similar functionality that automatically creates a Directory Input tool?

Presently, when mapping an Excel file to an Input tool, the tool only recognizes sheets; it does not recognize named tables (ranges) as possible inputs. When using Power BI to read Excel inputs, I can select either sheets or named ranges as input. The Alteryx Input tool should do the same.
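
Named tables are stored explicitly in the xlsx format, so they should be straightforward to enumerate; a short sketch with openpyxl (the file name is a placeholder):

```python
from openpyxl import load_workbook

wb = load_workbook("report.xlsx")
for ws in wb.worksheets:
    for name, ref in ws.tables.items():  # e.g. ('SalesTable', 'A1:D200')
        print(f"{ws.title}: table '{name}' covers {ref}")
```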

@AdamR gave a talk this year at Inspire EU about testing Alteryx canvases, and it seems that there is a lot we can do here to improve the product:

https://www.youtube.com/watch?v=7eN7_XQByPQ&t=1706s

 

One of the biggest and most impactful changes would be support for detailed unit testing for a canvas - this could work much like it does in Visual Studio:

 

Proposal:

In order to fully test a workflow - you need 3 things:

  • Ability to replace the inputs with test data
  • Ability to inspect any exceptions or errors thrown by the canvas
  • Ability to compare the results to expectation

To do this:

  • Create a second tab behind a canvas which is a Testing view of the canvas, allowing you to define tests. Each test contains values for one or more of the inputs; expected exceptions/errors; and expected outputs
  • Alteryx then needs to run each of these tests one by one, and for each test:
    • Replace the data inputs with the defined test input
    • Check for, and trap, errors generated by Alteryx
    • Compare the output
    • Generate a test score (pass or fail against each test case)

This would allow:

  • Each workflow / canvas to carry its own test cases
  • Automated regression testing overnight for every tool and canvas

 

 

Example:

 

[Screenshot: Testing.jpg]

 

For this canvas, there are two inputs and one output.

Each test case would define:

  • Test rows to push into input 1
  • Test rows to push into input 2
  • Any errors we're expecting
  • The expected output of the browse tool
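
A sketch of what one such test case could look like, written as a plain unit test. run_workflow() is hypothetical, standing in for whatever API Alteryx would expose to swap inputs and collect outputs and errors:

```python
import unittest

def run_workflow(path, inputs):
    """Hypothetical harness: run the canvas with test inputs, return (outputs, errors)."""
    raise NotImplementedError

class TestCanvas(unittest.TestCase):
    def test_happy_path(self):
        outputs, errors = run_workflow(
            "my_canvas.yxmd",
            inputs={
                "Input 1": [{"id": 1, "name": "a"}],
                "Input 2": [{"id": 1, "amount": 9.5}],
            },
        )
        self.assertEqual(errors, [])  # no errors expected for this case
        self.assertEqual(outputs["Browse"], [{"id": 1, "name": "a", "amount": 9.5}])
```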

 

 

This would make Alteryx SUPER robust and allow people to really test every canvas in an incredibly tight way!

With more and more enterprises moving to cloud infrastructure, and Azure being one of the most used, there should be support for its authentication service, Azure Active Directory (AAD).

 

Currently, if you are using cloud services like Azure SQL Server, the only way to connect is with a SQL login, which in a corporate environment is insecure and an administrative overhead to manage.

 

The only workaround I have found so far is creating an ODBC 17 connection that supports AAD authentication and connecting to it in Alteryx.

 

Please see the post below covering that topic:

https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Alteryx-Designer-How-to-Connect-to-Azu...
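
For anyone needing the workaround in the meantime, a sketch of the connection string: ODBC Driver 17 for SQL Server understands AAD via the Authentication keyword (server, database, and user below are placeholders):

```python
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=myserver.database.windows.net;"
    "Database=mydb;"
    "Authentication=ActiveDirectoryInteractive;"  # or ActiveDirectoryPassword, etc.
    "UID=user@mydomain.com;"
)
```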

Hi,

 

We use the in-db tools a lot to join our databases and filter before extracting (which seems logical). But to do this dynamically, we have to use the Dynamic Input In-DB tool, which lets us pass in a kind of parameter for dates, calculated locally, or even based on a parameter table in Excel or wherever. It would be great to be able to dynamically plug non-in-db tools into the in-db stream, to provide parameters for filters or for the Connect In-DB tool. The problem is that when you use Dynamic Input In-DB, you lose the code-free part, and it becomes harder to maintain for non-SQL users who are used to building simple queries visually.

You could say that an analytic application could do the trick, or that we could develop a macro to do so, but that would be complicated with hundreds of tables.

 

Hope this will be interesting for others!

Hello,

 

We have several environments in our organization: dev, recept (acceptance), and production.

 

In order to make switching between them safe, we intend to create several connections (standard aliases), like:

[Screenshot: alias_in_memory_pour_support.PNG]

PRODUCTION_HIVE

DEV_HIVE

RECEPT_HIVE

 

In our workflows, we want to use aka:%Question.v_environment%HIVE

 

Sadly, this solution does not work, despite the default value.

[Screenshot: aka_et_alias_in_memory.PNG]

 

The SQL Editor window could have a better presentation of the SQL code; two issues observed:

  • First, it's simply plain text without even a fixed-width font, much less syntax highlighting
  • Second, if you type in some manually formatted SQL code (e.g. with line feeds and indentation), then click the "Visual Query Builder" button, then click back to the "SQL Editor" button, all the formatting is lost; it is converted to one run-on line of code, which is very difficult to read.

I understand that going between the Visual Query Builder and the SQL Editor is bound to have some issues; nonetheless, the "idea" is to allow a user-friendly display in the SQL Editor window:

My "implementation ideas" are based on a couple minutes with google, so hopefully this is a very feasible request; my user base is very likely to spend more time in the SQL editor than not, so this would be a valuable UX addition.  Thanks!

 

  • Input Output

Right now we can create Tableau extract files (.tde) but cannot read them into Alteryx; this limits the partnership of these two companies.
Please add the functionality to import .tde files.
Best,
Jeremy

  • Input Output