
Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!

Featured Ideas

I love the dynamic rename tool because quite often my headers are in the first row of data in a text file (or sometimes, Excel!).

 

However, whenever I open a workflow, I have to run the workflow first in order to make the rest of the workflow aware of the field names that I've mapped in the dynamic rename tool, and to clear out missing fields from downstream tools. When a workflow takes a while to run, this is a cumbersome step.

 

Alteryx Designer should be aware of the field names coming out of the Dynamic Rename tool, and make them available for use downstream as soon as they are mapped (or when the workflow is first opened without having been run).

Hi,

 

I didn't find this mentioned before, but apologies if it has been. I and a few clients have found that the Download tool's 'Query String/Body' box is quite difficult to use from a UX perspective. If you have a long query (or anything over three short rows, in fact), it's almost impossible to navigate to the part you want with ease: you have to highlight and move the cursor manually or use the up and down arrows, and you only ever see a few lines of the query.

 

My suggestion(s) would be to:

1: include a scroll bar as a minimum

and ideally in addition, either:

2a: give the box itself more real estate (fixed) in the Payload tab, or

2b: allow the user to resize this box in the tool config (probably more complicated?) 

 

Specific problem below, including my (terrible!) drawings on it:

 

[Screenshot: downloadtool.PNG]

 

 

I appreciate there are other methods, including taking the query from a field, etc., but many users - particularly newer ones - prefer to copy a query into this String/Body box and edit it as needed, and right now that is an almost unmanageable approach.

 

Thanks,

 

Andy 

Currently - in workflows which make active use of dynamic queries - if the record set is empty then we end up with errors such as "a record was created with no fields".   This creates issues  when a dynamic query pumps rows out to a macro output, or where dynamic queries go into a join.

 

Could we change the result of the dynamic query so that if it returns an empty record set it still has columns but no rows and therefore doesn't cause errors in workflows?

 

Let me know if you need us to provide a worked example.

 

Thank you

Sean

Hi Team,

 

The Download tool's encryption support has not been kept up to date. The tool still supports SHA-1, but per our organization's requirements we need a stronger mechanism. In my case we are using an SFTP connection to download data, but our SFTP server uses SHA-2, so we are not able to configure the workflow.

Please upgrade the Download tool for a better experience in Alteryx.

 

Regards,

Kaustubh 

When the Python Tool operates, it seems to always ingest all the data before processing any of it (i.e. no batch processing). Python can handle this type of functionality with generators. Can we update the tool so that it does some preprocessing (like imports and data prep) and then allows a defined generator function to be called repeatedly from a separate input handle, providing batched data frames on output for more parallel-like processing of data?

 

The Python Tool could be updated as follows:

  • Multi-Input - Same functionality as now, and also allow this data to be used for preprocessing and setting up the Python functions and a single batch function.
  • Data Input - Ingests data in batches (as most other tools operate) where each batch passes in a dataframe (in this case, a subset of processed entries) into an existing Python function (with a name that is in globals()), and returns another dataframe with that desired output. This can give the option of adding/removing rows as necessary to a subset of the data.
  • Data Output - Partial set of data after data processing to allow tools further in the chain to process in parallel.
  • "On Complete" Multi-Outputs - Same functionality as now, to pass process-complete data to the next tool once all data ingested has been processed. Perhaps give the option to pass the complete set from Data Output.

 

A simple use-case, if a user wanted to use only the Python Tool:

Let's say a user wants to get all URLs from every post in a thread (containing millions of posts) that are in blacklisted domains.

  1. Data prep that sends the list of blacklisted domains into the Python Tool's Multi-Input handle, and that data is transformed and stored in a set within the Python tool once.
  2. A series of posts (strings) are sent in batches (let's say ~10000) to the Data Input of the Python Tool. The tool calls a defined Python function that extracts all the URLs and keeps those in blacklisted domains (see the sketch after this list).
  3. That data is then transformed into a DataFrame which is then sent to the Data Output of the Python Tool, and only contains results corresponding to the small batch of posts that were ingested. Alteryx can also use this to track progress during execution.
  4. Once all posts have been processed, one of the Python Tool's Multi-Outputs can return a total count of URLs found that were NOT in the blacklist (sure this can be a part of the Data Output, but just for the sake of this example). Could also be used to trigger "on-complete events."
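
To make the proposal concrete, here is a rough sketch of what the per-batch function from steps 2-3 might look like. This is hypothetical: only Alteryx.read exists in today's tool (the batch hook does not), and the "domain" and "post" column names are assumptions for illustration.

    import pandas as pd
    from ayx import Alteryx  # the import used by today's Python Tool

    # One-time setup from the Multi-Input handle (step 1): assumes input #1
    # carries a "domain" column listing the blacklisted domains.
    blacklist = set(Alteryx.read("#1")["domain"])

    def process_batch(batch: pd.DataFrame) -> pd.DataFrame:
        # Hypothetical callback invoked once per ~10000-row batch (step 2),
        # where each batch has a "post" column of raw post text.
        matches = batch["post"].str.extractall(
            r"(?P<url>https?://(?P<domain>[^/\s]+)\S*)"
        )
        # Keep only URLs whose domain is blacklisted, and return them as the
        # batch's Data Output (step 3).
        hits = matches[matches["domain"].isin(blacklist)]
        return hits[["url"]].reset_index(drop=True)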

 

I know I used the term "generators" above; the design could probably be simplified to use actual Python generators, by calling an Alteryx-provided function that yields to await the next batch of input. However, I feel my initial approach is the simpler process, since generators are more of an intermediate-level feature.

 

I hope this makes sense and is elaborate enough to pursue. Thanks for the consideration!

 

Idea:  I need a function that given two dates, will return the number of business days between them. I need to know the # of business days between when a sales order is placed and when it ships to the customer. I'm in the US, so I would want to not count Saturdays, Sundays, and US Holidays, but I can foresee others wanting the option to change to other calendars or ignore holidays.

 

There are a couple of posts on this in the community, but everything I've found so far is too laborious to implement or not robust.
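
A minimal sketch of the calculation using NumPy's busday_count (the holiday list here is illustrative, not a full US calendar):

    import numpy as np

    us_holidays = ["2024-07-04"]  # example entry only

    # Counts weekdays from the start date (inclusive) to the end date
    # (exclusive), skipping any dates in the holidays list.
    days = np.busday_count("2024-07-01", "2024-07-10", holidays=us_holidays)
    print(days)  # -> 6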

 

When commenting an expression (with // or /* <> */), the popup box shouldn't appear, as it's essentially free text.

 

Quite irritating when writing a block explanation of logic or something similar.

 

Luke

One of the common things that we need to do, is to take a delta-copy of a file or a DB table into the staging area of the analytical database.

This always looks very similar - so it would be useful to make this a wizard-based process so that teams can build these very quickly rather than having to hand-roll them every time:

 

Process:

- Check which primary keys exist - fill the gaps where they don't

- Are there any rows that update over time, or is this insert-only? If rows update over time, which column is the "updated date" column so that we can spot updates? If there is no update date, then we need a column-by-column check of some kind, like a hash or a checksum (see the sketch after this list)

- Do you want to sync deletes?

- Do you want to keep updates?
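
A minimal sketch of that hash check, assuming pandas-style tables (the key-versus-value column split and the SHA-256 choice are illustrative):

    import hashlib
    import pandas as pd

    def row_hash(df: pd.DataFrame, key_cols) -> pd.Series:
        # Fingerprint every non-key column so any changed value changes the hash.
        value_cols = [c for c in df.columns if c not in key_cols]
        joined = df[value_cols].astype(str).apply("|".join, axis=1)
        return joined.map(lambda s: hashlib.sha256(s.encode()).hexdigest())

    # Rows whose hash differs between source and the staged copy are updates.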

 

Outputs:

- Target table in staging area which is now updated compared to the source

- Logging done (similar to what Kimball recommends in the ETL Handbook) with the run date/time; summary stats; and any errors

- Errors table for any errors that arose with row numbers

- Tables in target created (with history table if requested)

 

In environments with a large number of designers - we are now starting to bump into the issue of many people re-inventing the wheel - or editing a canvas in ways that overwrite each other.

 

Can we make an addition to the flow of work so that I can check an item out of the Server, work on it, and check it back in? That way, people can see that I'm working on it in Designer, my changes are being sent back, and when I commit my changes people can work accordingly.

 

The other alternative would be code branches & trunks, which would be more effective and more useful, but I'd guess that's a tougher ask (unless Alteryx just embedded Git under the covers).

The Dynamic Input tool fails when attempting to input a set of Excel files, with the following error:

Error: Dynamic Input (1): The file "Test2.xlsx|||<List of Sheet Names>" has a different schema than the 1st file in the set.

 

Each spreadsheet contains two tabs and all tabs contain the same columns.

 

The root cause of the schema error is that maximum sheet name length in the two spreadsheets is different.  The first spreadsheet uses "East" and "West" for sheet names.  The second spreadsheet uses "North" and "South" for sheet names.  The Dynamic Input tool uses the longest sheet name when defining the effective Schema.

 

Excel limits sheet name length to 31 characters. It would be helpful if the Dynamic Input tool used 31 as the minimum string length when defining a schema from Excel sheet names.

 

The Input Data tool exhibits similar behavior when using a wildcard in the filename and the "Import only the list of sheet names" option.

 

A batch macro can be used as a workaround.

Hi there,

 

The download tool currently does not work if the user is behind a corporate proxy setup - and the only way to download web-content is using CURL.

This is a significant impediment, because it prevents almost all corporate Alteryx users from being able to access this capability.

 

Could you please look into using the proxy settings that the workstation uses to access the internet in a corporate environment?

 

thank you

Sean

Hi!

 

So Dynamic Select is a wonderful tool - but in Formula mode it effectively acts as a filter. It drops all of the fields which don't match the filter, and they disappear - floating in the workflow ether, dreaming of the Join tool or some other way they can be given XML life anew. It would be super cool if, instead of just having the fields which evaluate True exit and continue into the workflow, the False fields could be launched back into the workflow via a False anchor, like on a Filter tool....

 

Hypothetical situation - I'm looking to isolate some fields and convert them to a different format based on name or another characteristic. I'm doing this not to jettison my data set, but to improve it. I run Dynamic Select and a Multi-Field tool, and suddenly I'm scratching my head: how do I easily rejoin my workflow with my new and improved data? The most direct, albeit stylistically immature, way is apparently to add a New_ prefix to my newly created fields, join them back against the original data stream, and drop the old fields in favor of the New_ versions (soon to shed their prefixes in a Dynamic Rename)... It works, but it could be much easier.

 

Thanks!

Python pandas DataFrames and data types (NumPy arrays, lists, dictionaries, etc.) are much more robust in general than their counterparts in R, and they play together much more easily as well. Moreover, there are only a handful of packages that do everything a data scientist would need, including graphing - scikit-learn, pandas, NumPy, and Seaborn. After utilizing R, Python, and Alteryx, I'm still a big proponent of integrating with the Python language much as Alteryx has integrated with R. At the very least, I propose adding the ability to write custom code in something like a Python tool.

The Download tool is so much more than downloads. Think about the situation where you are using the Download tool to upload invoices, and try explaining that to co-workers: "Oh yes - I'm going to implement the API to upload the invoices using the Alteryx Download tool..." Could we call it the Curl tool or something?

The Python tool has been a tremendous boon in being able to add capability that is not yet available in the Alteryx platform.

 

It would make the Python Tool much more usable and useful if you could define the inputs explicitly, rather than relying on the good behaviour of both the user and the Python code that reads the inbound data (Alteryx.read('#1')).

 

This is not something that the Jupyter notebook code interface can handle directly (because the notebook has no privileged knowledge of the workflow outside it), so it may be best handled by the container itself.

 

The key here is that if my Python app requires two inputs, it should be possible to define these explicitly so that we can test, prevent errors, and make the tool more bullet-proof.

 

The same would apply on the outbound nodes for the Python tool.
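
In the meantime, the closest stand-in is a guard inside the notebook itself. A minimal sketch under the current API, relying on the FileNotFoundError the tool raises today when an input is absent ('#1' and '#2' are illustrative connection names):

    from ayx import Alteryx

    required = ["#1", "#2"]  # the connections this workflow expects
    inputs = {}
    for conn in required:
        try:
            inputs[conn] = Alteryx.read(conn)
        except FileNotFoundError:
            # Raised today when an input is unconnected or received no rows.
            raise RuntimeError("Required input " + conn + " is missing or empty")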

 

It would be a handy feature if it were possible to choose a data type for the Input Data tool to read the data in as. For example, if a dataset has multiple fields with different data types, it would be handy to be able to make the tool read and output them all as strings if needed. This would also make a handy standalone tool: a sort of blanket conversion of all fields to a specified type.
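
For comparison, pandas already exposes exactly this switch on read; a one-line sketch (the file name is a placeholder):

    import pandas as pd

    # Every column arrives as a string, regardless of what the file contains.
    df = pd.read_csv("input.csv", dtype=str)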

There is a great question in the Designer space right now asking about saving logs to a database: https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Save-workflow-messages-log-in-database...

 

This got me to think a little more about localized logging options in Alteryx.

 

At a high level, there are ways to accomplish this in Designer at a User or System level by enabling a Logging directory and then parsing those logs with a separate Alteryx job.  However, this would involve logging ALL Designer executions, which seems like it may be overkill for this need.  A user can also manually save a log after each execution, although this requires manual intervention.

 

I think adding an option in the Runtime settings for Workflow Configuration to Enable Logging and (optionally) specify a Logging directory would be a great feature add for Designer.  In my opinion this should not apply once a workflow runs on Server (Server logging should be handled in a fully standardized way), but should apply to designer "UI" execution.  Having the ability to add a logging naming convention (perhaps including a workflow name and run date in the log name) would be icing on the cake.

 

This would allow for a piecemeal logging solution to log specific flows or processes that might be high-visibility or high-importance, while avoiding saving hundreds or thousands of logs daily for less important processes and dev tests. It would also reduce or eliminate the manual process of saving these logs individually.

It would be nice if Alteryx had the ability to run a Teradata stored procedure and/or macro with the ability to accept input parameters. This ability appears to exist for MS SQL Server. It seems odd that I can issue a SQL statement to the database via a pre- or post-processing command on an input or output, but can't call a stored procedure or execute a macro. The only way we can seem to call a stored procedure is by creating a Teradata BTEQ script and using the Run Command tool to execute that script. It works, but it's a bit messy and doesn't quite fit the no-coding theme of Alteryx.

It seems that currently the Python tool raises a `FileNotFoundError` exception when there is no data incoming on an input connection. I have, for example, a Filter tool before the Python tool, and sometimes there is simply no data coming to the Python tool - as intended.

 

Unfortunately, in those cases the Python tool gives me an error, preceded by this message:

 
Python (15)Unable to connect to input data (C:\Users\CCEB8~1.HAR\AppData\Local\Temp\3a9bb9672d7abbe6af3176379ae8c3b1\15\4460abb7be83bae8f01b9bf1238a923c.sqlite)

 

This is only the case when there is no data incoming. In all other cases, the tool works fine.

 

Since this is not really an error, a way to either catch this before using `Alteryx.read("#1")`, or just having `Alteryx.read()` return an empty DataFrame (as I would expect it to), would be appreciated.
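
Until then, a workaround sketch: wrap the read so a missing input becomes an empty DataFrame (the column list is a placeholder for whatever schema downstream code expects):

    import pandas as pd
    from ayx import Alteryx

    def read_or_empty(connection, columns):
        try:
            return Alteryx.read(connection)
        except FileNotFoundError:
            # No rows arrived, e.g. the upstream Filter passed nothing.
            return pd.DataFrame(columns=columns)

    df = read_or_empty("#1", columns=["id", "value"])  # placeholder schema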

While the In-DB tools are very helpful and cut down the time needed to write complex SQL, some steps are faster to write directly in SQL, such as window functions: OVER (PARTITION BY ...). In Alteryx, we need to create multiple joins and summaries to perform a window function. It would be immensely helpful if there were a SQL editor tool for In-DB workflows where we could edit the SQL code at any point in the workflow - or, even better, an "edit" function on every In-DB tool where we could customize the generated SQL code before it is sent to the next tool.

 

This would cut down time immensely and streamline workflows, making Alteryx a true contender in the ETL solution space.
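
To illustrate what such an edit would buy: a window function attaches a per-group aggregate to every row without collapsing the rows. In pandas terms (column names illustrative):

    import pandas as pd

    df = pd.DataFrame({"customer": ["a", "a", "b"], "amount": [10, 20, 5]})
    # Equivalent of SUM(amount) OVER (PARTITION BY customer): a per-group
    # total on every row, with no extra joins or summaries required.
    df["customer_total"] = df.groupby("customer")["amount"].transform("sum")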
