The Product Idea boards have gotten an update to better integrate them within our Product team's idea cycle! However, this update does have a few unique behaviors; if you have any questions about them, check out our FAQ.

Alteryx Designer Desktop Ideas

Share your Designer Desktop product ideas - we're listening!
Submitting an Idea?

Be sure to review our Idea Submission Guidelines for more information!

Submission Guidelines

Featured Ideas


Currently we have to run the full workflow every time. Please consider something like partial running of the workflow; also, while designing the flow, if we add new tools we have to run the entire flow again.
Why can't it hold the intermediate data to avoid re-running the entire workflow?
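To make the request concrete, here is a minimal sketch (plain Python outside of Alteryx; the step names and cache paths are hypothetical) of what holding intermediate data between runs could look like:

```python
# Minimal sketch of "hold intermediate data": each step writes its result
# to a cache file and is skipped on the next run if the cache exists.
# All step names and file paths here are hypothetical illustrations.
import os
import pickle

CACHE_DIR = "cache"

def cached_step(name, func, *inputs):
    """Run `func` only if no cached result exists; otherwise reuse it."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, f"{name}.pkl")
    if os.path.exists(path):
        with open(path, "rb") as f:
            return pickle.load(f)          # reuse the intermediate data
    result = func(*inputs)                 # compute only when needed
    with open(path, "wb") as f:
        pickle.dump(result, f)
    return result

# Adding a new downstream step re-runs only that step, because the
# upstream results are already cached on disk.
raw = cached_step("input", lambda: list(range(1_000_000)))
clean = cached_step("clean", lambda rows: [r for r in rows if r % 2 == 0], raw)
total = cached_step("summarize", sum, clean)
print(total)
```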

It appears that the Workflow Dependencies window does not report dependencies from all tools. In the example image, you can see that the file input from the Amazon S3 Download tool is not listed. Some tools may have dependencies that do not easily fit the current field structure of the window, but maybe the input/download tools could be listed with an asterisk or partial reference.

Missing Amazon S3 Dependency


Hi,

 

I think it would be great if the run time of a workflow could be displayed by tool or container. This would make refining a completed workflow a lot easier and also help with thinking of better solutions. Even cooler would be some kind of speed heat map.

 

Thanks

Ability to run a workflow from the failed tool onwards.

If a workflow has 10 tools and one of them fails with an error (say tool 5), in an ETL world we don't want to run it again from the beginning. Instead, we fix the tool that had the error (tool 5) and run from that tool to finish the workflow: tools 5 through 10.
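A minimal sketch of the requested resume behavior, assuming a checkpoint file and placeholder tool names (all hypothetical):

```python
# Sketch of "run from the failed tool onwards": a checkpoint file records
# which steps finished, so a re-run resumes at the step that failed.
import json
import os

CHECKPOINT = "checkpoint.json"

def run_pipeline(steps):
    done = set()
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            done = set(json.load(f))       # steps that already succeeded
    for name, func in steps:
        if name in done:
            continue                       # e.g. skip tools 1-4 on the re-run
        func()                             # may raise, e.g. at tool 5
        done.add(name)
        with open(CHECKPOINT, "w") as f:
            json.dump(sorted(done), f)     # record progress after each tool
    os.remove(CHECKPOINT)                  # clean finish: next run starts fresh

# Ten placeholder "tools"; if tool 5 raised, fixing it and re-running
# would resume at tool 5 and finish tools 5 through 10.
steps = [(f"tool{i}", lambda i=i: print(f"tool{i} ok")) for i in range(1, 11)]
run_pipeline(steps)
```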


  • Engine

 Hi there,

 

When working through a question with our team on how Excel and MS SQL represent dates, we did a quick test and confirmed that SQL and Excel both store dates and date-times as a number (technically the offset from a fixed date). This really helps for things like BI applications, where a fact table may store a very large number of dates on each record (entered date/time, updated date/time, transaction date/time, etc.).

 

However, when we look at the same in Alteryx, it seems to store these dates as plain text (see screenshot below). This means that instead of an 8-byte field for every date and date-time (which could be compressed further using offset logic, as in Parquet), these appear to be represented as a 19-byte field for date-times.

 

Would it make sense to change the internal representation to a number to make date-offsetting and processing easier (all date-logic then becomes simple addition / subtraction instead of string manipulation)?

 

Note: You can see this in the screenshot below. The date field takes 10 bytes and the date-time field 19 bytes, where both of these are stored and represented in MSSQL in 8 bytes in total.

 

Capture.PNG
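To illustrate the size and arithmetic difference described above, a small Python example (the epoch choice here is just for illustration):

```python
# A date-time kept as text ("YYYY-mm-dd HH:MM:SS") needs 19 bytes, while
# the same instant stored as an offset from a fixed date fits in an 8-byte
# number, and date logic becomes plain arithmetic.
import struct
from datetime import datetime, timedelta

dt = datetime(2023, 6, 15, 13, 45, 30)

as_text = dt.strftime("%Y-%m-%d %H:%M:%S")
print(len(as_text.encode("ascii")))        # 19 bytes as plain text

epoch = datetime(1970, 1, 1)               # any fixed date works
as_offset = (dt - epoch).total_seconds()   # seconds since the fixed date
print(len(struct.pack("<d", as_offset)))   # 8 bytes as a double

# Offset math: "add 30 days" is simple addition, not string manipulation.
plus_30_days = epoch + timedelta(seconds=as_offset + 30 * 86400)
assert plus_30_days == dt + timedelta(days=30)
```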

  • Engine

Hi,

I would like to see global variables made available in Alteryx. I have seen the global constants available under the Workflow "User" configuration, but these are constant and need to be defined at design time.

How about a process ID that is auto-generated and made available across the Formula tools used within the workflow?
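As a rough sketch of the idea in Python (the field names and tagging helper are hypothetical): a value generated once at run start and stamped onto every record by any downstream formula:

```python
# Sketch of a run-scoped "process ID": generated once per execution and
# visible to every downstream step, like a global constant that is set at
# run time rather than design time.
import uuid
from datetime import datetime, timezone

# Generated once when the run starts...
PROCESS_ID = uuid.uuid4().hex
RUN_STARTED = datetime.now(timezone.utc).isoformat()

def tag_record(record):
    """Every 'formula' step can stamp the same run-level values."""
    record["process_id"] = PROCESS_ID
    record["run_started"] = RUN_STARTED
    return record

print(tag_record({"amount": 42}))
print(tag_record({"amount": 7}))   # same PROCESS_ID on both records
```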


During development, it seems the syntax checker (or whatever process runs behind the scenes after a tool is modified) reviews the full workflow.

For example, just from observation: if I modify the file name in an Output tool, I don't see why it would rerun the full syntax check.

Reducing this would cut the time spent waiting to continue development.

We often build very large Alteryx projects that break down large data processing jobs into multiple self-contained workflows.

 

We use CReW Runner tools to automate running the workflows in sequence, but it would be nice if Alteryx supported this natively with a new panel for "Projects" (see the sketch after the feature list below).

 

Nice features for Projects could be:

  • Set the sequence
  • Conditional sequence
  • Error handling
  • Shared constants
  • Shared aliases
  • Shared dependencies
  • Chained Apps
  • Option to pass data between workflows - Input from yxmd Output - no need to persist intermediary data
  • Input/output folder/project folder setups for local data sources in dependencies window
  • Ability to package like "Export Workflow" for sharing
  • Results log the entire project
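For illustration, a minimal sketch of the sequencing and error-handling features, assuming workflows can be invoked from the command line (AlteryxEngineCmd.exe ships with Desktop Automation; the install path and workflow paths below are placeholders):

```python
# Sketch of a "Project" runner: workflows execute in a set sequence, and a
# failure halts the project (conditional sequence / error handling).
import subprocess
import sys

ENGINE = r"C:\Program Files\Alteryx\bin\AlteryxEngineCmd.exe"  # assumed path

PROJECT = [
    r"C:\project\01_extract.yxmd",
    r"C:\project\02_transform.yxmd",
    r"C:\project\03_load.yxmd",
]

for wf in PROJECT:
    print(f"Running {wf} ...")
    result = subprocess.run([ENGINE, wf], capture_output=True, text=True)
    if result.returncode != 0:             # stop the sequence on error
        print(result.stderr, file=sys.stderr)
        sys.exit(f"Project halted: {wf} failed.")
print("Project finished.")
```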

Countless times I've been asked by management how long a process will take to run, and I really can't say beyond an educated guess (using input file size and complexity of the workflow). Yet, when downloading files off the internet or moving files around a network, Microsoft will give an estimated time to completion (e.g. 10 minutes remaining until files are downloaded). It would be great if Alteryx would show something similar for how long a workflow will take to finish running. I'm not sure if you can create an algorithm based on the number of tools, import file size, network connection, etc. to give an ETA on when a workflow may finish, but it would be super helpful when working on high-priority projects so I can communicate with the business side.
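Even a crude progress-based estimate would help; a minimal sketch of that heuristic (a hypothetical helper, not an actual Alteryx algorithm):

```python
# Simplest possible ETA heuristic: once a run reports what fraction of its
# input it has processed, remaining time is a straight extrapolation of
# elapsed time. Real accuracy would need per-tool cost modeling.
import time

def eta_seconds(started_at, fraction_done):
    """Estimate remaining seconds from elapsed time and progress so far."""
    if fraction_done <= 0:
        return None                        # no basis for an estimate yet
    elapsed = time.monotonic() - started_at
    return elapsed * (1 - fraction_done) / fraction_done

start = time.monotonic()
time.sleep(2)                              # pretend 2 seconds of work...
print(f"~{eta_seconds(start, 0.25):.0f}s remaining")   # ...at 25% done -> ~6s
```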

 

Thanks!

It would be handy if it were possible to order (i.e. right-click to drag, as in the Select Tool) ALL constants created by the user, including Question constants etc.

Currently if a user has multiple connections in a workflow that connect to a password-protected source, and that password changes, the user will be locked out of their account by login attempts as Alteryx attempts to validate the connection.

 

Today I had to manually edit the XML of another user's workflow in order to remove references to their server, so they could correct their password without locking the account for a third time today.

 

While I understand that aliases are a good workaround to this problem, the issue still has potential to occur.

 

Having an option to load a workflow in a "SECURE" or "SAFE" mode, where it would not validate a query until runtime or until the metadata is manually refreshed, would significantly reduce lockouts and improve the usability of the tool.

I understand that Server and Designer + Scheduler versions have the option to "cancel workflows running longer than X".

 

I'd like to see that functionality in the desktop edition as well.

@AdamR_AYX,

 

Limit conversion warning allows for a minimum of 1 message.  Can we set the minimum to 0 to completely ignore the message?

 

Perhaps warning messages could be given a function similar to ERROR messages, allowing the designer to Ignore, Warn, or Cancel?

 

ConvError: Imputation (441): Tool #104: No demand: 0.200000000000031 had more precision than a double. Some precision was lost.

ConvError: Summarize (456): Data: 0.360000000004675 had more precision than a double. Some precision was lost.

 

End: Designer x64: Finished running FP Model - Marquee Crew v3.yxmd in 32.3 seconds with 16 field conversion errors and 4 warnings

 

Thanks,

 

Mark

Idea:

I know cache-related ideas have already been posted (cache macros; cache tools), but I would like it if cache were simply built into every tool, similar to the way it is on the Input Tool.

 

Reasoning:

During workflow development, I'll run the workflow repeatedly, and especially if there is sizeable data or an R tool involved, it can get really time consuming.

 

Implementation ideas:

I can see where managing cache could be tricky: in a large workflow processing a lot of data, nobody would want to maintain dozens of copies of that data. But there may be ways of just monitoring changes to the workflow in order to know whether something needs to be rebuilt. E.g., suppose I cache a Predictive tool and then make no changes to any tool preceding it in the workflow: the next time I run, the engine should be able to look at "cache flags" and/or "modified tool flags" to determine where it should start. Basically, start at the "furthest along cache" that has no "modified tools" preceding it.
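A minimal sketch of that rule over a toy tool graph (the graph, cache flags, and modified flags are all hypothetical):

```python
# Sketch of the "furthest along cache" rule: walk the tool graph and find
# cached tools with no modified tool anywhere upstream; those caches are
# still valid, and the run can start just after them.
def upstream(tool, parents):
    """All tools feeding into `tool`, directly or indirectly."""
    seen = set()
    stack = list(parents.get(tool, []))
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(parents.get(t, []))
    return seen

parents = {"B": ["A"], "C": ["B"], "D": ["C"]}   # A -> B -> C -> D
cached = {"B", "C"}                              # tools with a cache flag
modified = {"C"}                                 # tools edited since last run

valid = {t for t in cached
         if t not in modified and not (upstream(t, parents) & modified)}
print(valid)   # {'B'}: C's cache is stale because C itself was modified
```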

 

 

Anyway, just a thought.

 

I've seen this question before and have run into it myself.  I'd like to see a new tool that would allow a developer (of a workflow) to choose a path of logic based upon criteria known only during the execution of a module.

 

IF [LEFT INPUT record count] < 10,000 THEN Path 1 (e.g. use a Calgary join)

ELSE Path 2 (e.g. use a standard join)

ENDIF
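A runnable sketch of the same logic, with the join strategies as placeholder functions (names are hypothetical):

```python
# The path is chosen from a value known only at execution time: the
# record count arriving on the left input.
def calgary_join(left, right):
    print("Path 1: indexed (Calgary-style) join")

def standard_join(left, right):
    print("Path 2: standard join")

def conditional_path(left, right, threshold=10_000):
    if len(left) < threshold:
        calgary_join(left, right)
    else:
        standard_join(left, right)

conditional_path(left=[1] * 500, right=[2] * 500)       # -> Path 1
conditional_path(left=[1] * 50_000, right=[2] * 500)    # -> Path 2
```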

 

Thanks,

 

Mark

I just noticed, in a workflow I'm looking at, that I derived a column but after a bit of developing forgot about it, so there it sat, unused. It doesn't hurt anything, but it would be useful if that sort of thing would automatically generate a soft warning on the tool in question: e.g. any item not referenced downstream automatically generates an "Unused variable" warning.

 

  • Engine

In order to perform audit-trail logging - it would be valuable to have 2 new capabilities

 

a) Environment variables which show the workflow name, file path, version, run start date and time, etc. For any workflows we build, we need to have a solid audit trail to be SOX compliant, so having this detail available as a data field to write and manipulate is essential.

b) A logging component. What would be great is a component that you can drop on a workflow, not connected to anything, which is able to trap the start, end, runtime, version, etc. of a workflow and commit this to any output data format (CSV or ODBC etc.). This logging tool would need to capture the full runtime, so it would need to be the last thing that runs (which means it may need to exist in parallel to the main workflow in some way). This is not currently possible with a complex workflow with outputs, because it's not possible to identify when the entire workflow ended, or the runtime (since Output tools don't have an onward connector to pass flow-of-control to catch the final end time).

 

Again, both of these are necessary to meet audit requirements for workflows and production-quality ETLs for BI data warehouses.
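A minimal sketch of component (b) outside of Alteryx, assuming the metadata from (a) is available (field names and the workflow callable are placeholders):

```python
# Wrap a run, capture start/end/runtime plus workflow metadata, and append
# one audit row to a CSV -- written even when the workflow fails.
import csv
import getpass
import os
from datetime import datetime, timezone

def run_with_audit(workflow_name, version, run_workflow, log_path="audit_log.csv"):
    start = datetime.now(timezone.utc)
    status = "success"
    try:
        run_workflow()                     # the actual work happens here
    except Exception:
        status = "failed"
        raise
    finally:
        end = datetime.now(timezone.utc)   # captured even on failure
        new_file = not os.path.exists(log_path)
        with open(log_path, "a", newline="") as f:
            writer = csv.writer(f)
            if new_file:
                writer.writerow(["workflow", "version", "user",
                                 "start", "end", "runtime_sec", "status"])
            writer.writerow([workflow_name, version, getpass.getuser(),
                             start.isoformat(), end.isoformat(),
                             (end - start).total_seconds(), status])

run_with_audit("daily_etl", "1.4", lambda: print("workflow body runs here"))
```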

It would be helpful to have the Read Uncommitted listed as a global runtime setting.

Most of the workflows I design need this set, so rather than risk forgetting to click this option on one of my inputs it would be beneficial as a global setting.

For example: the user would be able to set specific inputs according to their need and the check box on the global runtime setting would remain unchecked.

However, if the user checked the box on the global runtime setting for Read Uncommitted, then the workflow would automatically use an uncommitted read on all of the inputs.

When the user unchecks the global runtime setting for Read Uncommitted, then only the inputs that were set up with this option will remain set up with the read uncommitted.
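For reference, this is what the setting would apply per input, in SQL Server terms; a sketch with pyodbc (the connection string and table are placeholders):

```python
# A "read uncommitted" input issues dirty reads instead of waiting on
# locks; a global toggle would apply this statement to every input.
import pyodbc

conn = pyodbc.connect("DSN=my_sql_server;UID=user;PWD=secret")  # placeholder
cursor = conn.cursor()

cursor.execute("SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;")
cursor.execute("SELECT order_id, amount FROM dbo.orders;")      # dirty read OK
for row in cursor.fetchall():
    print(row.order_id, row.amount)
conn.close()
```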

 

Please evaluate the option to add 2 new containers:

1. Parallel - execute the tasks inside in parallel.
2. Serial - execute the tasks in strict order, imposed at design time. In the future, the order of operations could be enforced by parameters or other input conditions at runtime.

 

Please give us the capacity to mix and match these two containers.
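A minimal sketch of the two containers and how they could be mixed and matched (task bodies are placeholders):

```python
# A "serial" group runs tasks in strict design-time order; a "parallel"
# group fans them out; groups nest freely.
from concurrent.futures import ThreadPoolExecutor

def serial(*tasks):
    for t in tasks:                        # strict order, imposed at design time
        t()

def parallel(*tasks):
    with ThreadPoolExecutor() as pool:     # tasks inside run concurrently
        for f in [pool.submit(t) for t in tasks]:
            f.result()                     # propagate any task error

task = lambda name: lambda: print(f"{name} done")

# Mix and match: extract A and B in parallel, then transform, then load.
serial(
    lambda: parallel(task("extract_A"), task("extract_B")),
    task("transform"),
    task("load"),
)
```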

Thank you

 

Regards,
Cristian.

  • Engine

Hi all,

I was wondering if any of you have achieved a "transaction rollback" type of feature in Alteryx.

 

Following is the use case:

If a workflow that writes data into multiple outputs (could be relational tables / files) fails halfway through writing to one of the outputs, is there an option to roll back the partially loaded data and reset the process to the original state (i.e., before the execution of the workflow)? Or does this need to be done programmatically?

 

There is a workflow level property - "Cancel Running Workflow on Error". This stops the execution but doesn't perform rollback.
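For database outputs, the pattern being asked about looks roughly like this (a sketch with pyodbc; the connection string and tables are placeholders; file outputs would need a separate staging/rename strategy, since files are not transactional):

```python
# Write all outputs inside one database transaction, so a failure halfway
# through leaves nothing behind and the original state is restored.
import pyodbc

conn = pyodbc.connect("DSN=my_dwh;UID=user;PWD=secret", autocommit=False)
cursor = conn.cursor()
try:
    cursor.execute("INSERT INTO dbo.output_a SELECT * FROM dbo.staging_a;")
    cursor.execute("INSERT INTO dbo.output_b SELECT * FROM dbo.staging_b;")
    conn.commit()                          # both outputs land, or...
except Exception:
    conn.rollback()                        # ...the database returns to its
    raise                                  # original, pre-workflow state
finally:
    conn.close()
```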

 

Thanks,

Sandeep.
