We have discussed on several occasions, and in different forums, the importance of providing Alteryx with order-of-execution control, conditional execution, design patterns, and even orchestration.
I presented this idea some time ago, but someone asked me if it had been posted, and since it had not, I'm putting it here so you can give feedback on it.
The basic concept behind this idea is to allow us (users) to have:
This approach builds on functionality that already exists within the product (filtering logic, loading & saving, caching, blocking, among others), exposed within a Tool Container with enhanced attributes, like this example:
The approach is to extend the Tool Container's attributes.
This proposition uses functionality we already have in Designer.
So, basically, the Tool Container gets 'superpowers', with the addition of capabilities like accepting input data, saving the contents of the container (to create a design pattern, or a commonly used sequence of tools chained together), outputting data, running the tools included in the container, and so on, plus a configuration screen like:
That's a brief introduction to the idea, but taking it a little further, it would even allow something like an orchestration layout, where users can drag and drop containers or patterns and orchestrate them into a solution, like we can do with the Visual Layout tool or the Interactive Chart tool:
I'm looking forward to hearing what you think.
This has probably been mentioned before, but in case it hasn't....
Right now, if the Dynamic Input tool skips a file (which it often does!), it just raises a warning and continues processing. While continuing is often still useful, could an option be built into the tool to select 'error if files are skipped'?
Right now it is easy to miss that this is happening, and in production / on Server you may want the process to be stopped.
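Until the tool has such an option, a pre-check can approximate the behavior outside Designer. Below is a minimal Python sketch of the 'error if files are skipped' contract; the function name and flag are made up for illustration:

```python
# Hypothetical pre-check: fail fast if any expected input file is missing,
# instead of letting a downstream dynamic input silently skip it.
from pathlib import Path

def assert_files_exist(paths, error_on_skip=True):
    missing = [p for p in paths if not Path(p).is_file()]
    if missing and error_on_skip:
        raise FileNotFoundError(f"Refusing to continue; missing files: {missing}")
    return missing  # with error_on_skip=False, this behaves like today's warning

# Example: check a batch of monthly extracts before loading them
skipped = assert_files_exist(["jan.csv", "feb.csv", "mar.csv"], error_on_skip=True)
```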
Surprisingly, I couldn't find this posted anywhere else, as I know it's been discussed in person on many occasions.
Basically, the Formula tool needs to be smarter in many ways, but this particular post focuses on the data type component.
The Formula tool should not always default to V_String as the data type when entering data or a formula; it should look at the data and estimate the most likely type.
I know there are times when the logical type might not be consistent across all fields, but the data preview and the function used in the formula should be used to determine the most likely option.
E.g., if I type a number or a date directly into the Formula tool, Alteryx should be smart enough to change the data type from the default V_String to Int, Double, or Date.
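To make the request concrete, here is a rough Python sketch of the kind of inference being asked for. The type names mirror Alteryx's, but the parsing rules (and the single date format) are simplified assumptions:

```python
# Look at the value actually produced and pick Int / Double / Date
# instead of defaulting to V_String.
from datetime import datetime

def infer_alteryx_type(value: str) -> str:
    try:
        int(value)
        return "Int64"
    except ValueError:
        pass
    try:
        float(value)
        return "Double"
    except ValueError:
        pass
    try:
        datetime.strptime(value, "%Y-%m-%d")
        return "Date"
    except ValueError:
        return "V_String"  # fall back to today's default

print(infer_alteryx_type("42"))          # Int64
print(infer_alteryx_type("3.14"))        # Double
print(infer_alteryx_type("2019-06-01"))  # Date
print(infer_alteryx_type("hello"))       # V_String
```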
This is an extension to the ideas posted here:
I often need to create a record ID that automatically increments, but grouped by a specific field. I currently do it with the Multi-Row Formula tool, using [Field-1:ID]+1, because there is no group-by option in the Record ID tool.
Also, sometimes I need to start at 0, but the Multi-Row Formula tool doesn't allow this, so I have to use a Formula tool right after to subtract 1.
So adding a group-by option to the Record ID tool would let users avoid the Multi-Row Formula workaround and start at any value they want.
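For reference, here is what that group-by option would compute, sketched in Python with pandas (a pandas-style groupby is just an analogy here, not how the Record ID tool works):

```python
# An ID that restarts for each group, starting at any chosen value.
import pandas as pd

df = pd.DataFrame({"Region": ["East", "East", "West", "West", "West"]})

start = 0  # start at 0 without needing a follow-up Formula tool
df["RecordID"] = df.groupby("Region").cumcount() + start
print(df)
```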
Love the new updates to the Browse tool in 2019.2! However, if you choose the option Open results in new window, which I do often so I can see my whole dataset, the search/filter/sort functionality goes away. Would be great if that new functionality also worked in the new window. Thanks!
Can't wait for the new base maps!
In app screens, a lot of space is wasted because components/tools can only be stacked one below the other.
It would be great if we could also insert them horizontally.
Tags: screen, app, macro, layout, tools, UI
Please add official support for newer versions of Microsoft SQL Server and the related drivers.
According to the data sources article for Microsoft SQL Server (https://help.alteryx.com/current/DataSources/SQLServer.htm), and validation via a support ticket, only the following products have been tested and validated with Alteryx Designer/Server:
Microsoft SQL Server
Validated On: 2008, 2012, 2014, and 2016.
This is one of the most popular data sources, and the lack of support for newer versions (especially a 2+ year-old product like SQL Server 2017) is hard to fathom.
ODBC Driver for SQL Server/SQL Server Native Client
Validated on ODBC Driver: 11, 13, 13.1
Validated on SQL Server Native Client: 10, 11
I've seen this question before and have run into it myself. I'd like to see a new tool that would allow a developer (of a workflow) to choose a path of logic based upon criteria known only during the execution of a module.
If LEFT INPUT count of records < 10,000 THEN Path 1 (e.g. use a Calgary join)
ELSE Path 2 (e.g. use a standard join)
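To illustrate the logic, here is a small Python sketch of the branching; indexed_join and standard_join are hypothetical stand-ins for the Calgary and standard join paths:

```python
# Route records down one branch or another based on a condition
# only known at run time (here: the left input's record count).
def run_conditional_join(left_records, right_records, threshold=10_000):
    if len(left_records) < threshold:
        return indexed_join(left_records, right_records)   # e.g. a Calgary-style indexed join
    return standard_join(left_records, right_records)

def indexed_join(left, right):
    index = {r["key"]: r for r in right}                   # build a lookup index once
    return [{**l, **index[l["key"]]} for l in left if l["key"] in index]

def standard_join(left, right):
    return [{**l, **r} for l in left for r in right if l["key"] == r["key"]]

left = [{"key": 1, "a": "x"}]
right = [{"key": 1, "b": "y"}]
print(run_conditional_join(left, right))   # small input -> indexed path
```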
You and the team have been making a lot of innovative changes to the results window for data.
Could I ask for an uplift to the results window for workflow messages?
Summary: Error messages in the workflow results window cannot be fully viewed - they have to be copied into Notepad and then reformatted before you can read them.
Request: Allow the user to double-click a workflow result message to see a full, readable version.
If you have an error message in a workflow result, it is often longer than the window allows, and there is no cell-viewer option.
As a result, there is really no way to get to the important part of the error message to understand what's going on, other than to use Notepad.
Step 1: Copy into Notepad
(you can see the end of line characters being misunderstood)
Step 2: Manually clean this up by breaking on the line breaks
And now you can see the important part of the result message.
Could we instead add the ability to double-click a result message in the results window and bring up a modal window that formats the error message for you (similar to the modal window used for XML editing of a tool)? That would eliminate this entire wasteful effort of reading an error message via Notepad.
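For what it's worth, the formatting step such a viewer would need is tiny - roughly this Python sketch, assuming the line breaks arrive as literal escape sequences, as in the Notepad example above:

```python
# Render the embedded line breaks that Notepad currently "misunderstands".
def format_result_message(raw: str) -> str:
    return raw.replace("\\n", "\n").replace("\\r", "")

raw = "Error in tool #12: field validation failed\\nExpected: Int64\\nFound: V_String"
print(format_result_message(raw))
```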
The current version of the Publish to Tableau macro retrieves an API key at the start of the workflow run. Oftentimes the workflow may take several hours to run before it's ready to write to Tableau, by which time the API key may have expired (I think the default Tableau Server setting times out in 2 hours). It's one of those soul-crushing "I should've forked the output!" moments.
Sample Log Error -
The idea would be to change when the macro obtains the API key: from when the workflow is initiated to just before the workflow is ready to write to Tableau, avoiding these timeouts.
(If you're having this issue in the meantime, you can ask your Tableau Server admin to raise the timeout.)
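To sketch the proposed change in Python: acquire the token lazily, right before the publish step. The endpoint and payload below follow the Tableau Server REST API sign-in, but treat the version number and details as illustrative rather than exact:

```python
import requests

def get_fresh_token(server, username, password, site=""):
    """Sign in and return an auth token; call this just before publishing."""
    resp = requests.post(
        f"{server}/api/3.4/auth/signin",
        json={"credentials": {"name": username, "password": password,
                              "site": {"contentUrl": site}}},
        headers={"Accept": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()["credentials"]["token"]

# ...hours of upstream processing happen here...
# token = get_fresh_token("https://tableau.example.com", "user", "pw")
# publish_datasource(token, data)   # hypothetical publish step using the fresh token
```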
When developing and/or troubleshooting workflows, I frequently disable the outputs using the checkbox in the Runtime configuration settings to speed up the workflow and prevent sending emails and/or overwriting data in the output sources... however, 9 times out of 10 I forget to uncheck this box when I save my workflow back up to the Gallery. This results in countless emails from users to the tune of "I ran the workflow successfully, but there was no output?" 🙂
Would love love love to see some sort of warning notification (similar to the ones already shown for data sources etc.) when saving to the Gallery if the "Disable All Tools that Write Output" option is selected in the Runtime settings.
In some of our larger workflows it's sometimes tedious to run the whole workflow just to see some data after adding something at the beginning. Running it and stopping it as soon as the relevant tool gets a green border is sometimes an option.
It would be convenient to have an option in the context menu to run a workflow only up to a specific tool.
In effect, only this specific tool would have an output visible for inspection, and only the streams necessary for this tool would be run - everything else is ignored, and I'm fine not seeing data for the other tools.
This would make developing small parts of a larger workflow much faster and more convenient.
PS: Yes, I can put everything else in a container and deactivate it. But a straightforward way, without turning containers on and off, would be preferable in my opinion. (I think KNIME has something similar.)
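Under the hood this is just an upstream-dependency walk. A minimal Python sketch, where the graph representation is made up for illustration:

```python
# "Run to this tool": walk the workflow graph backwards from the chosen
# tool and collect only its upstream dependencies, skipping the rest.
from collections import deque

def tools_to_run(target, inputs_of):
    """inputs_of maps tool id -> list of upstream tool ids."""
    needed, queue = {target}, deque([target])
    while queue:
        for upstream in inputs_of.get(queue.popleft(), []):
            if upstream not in needed:
                needed.add(upstream)
                queue.append(upstream)
    return needed

# Toy workflow: 1 -> 2 -> 3, and 4 -> 5 on a separate stream
inputs_of = {2: [1], 3: [2], 5: [4]}
print(tools_to_run(3, inputs_of))  # {1, 2, 3}; tools 4 and 5 are ignored
```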
We are working on building out training content in a story mode and would like to have short snippets playing in a loop for people to see, embedded in the workflow. Currently you can add a .gif to a comment background: it provides a still image on the workflow itself but functions as a gif in the configuration display. The interesting part is that the .gif plays while the workflow is running, and pauses when the workflow has completed!
With the release of 2018.3, caching has become an ad hoc task. With complex workflows and multiple inputs, we need a method to cache and save the cache selection by tool. Once the workflow runs after opening, the cache would be saved at the latest tool downstream.
This way we don't have to create ad hoc cache steps and run the workflow twice before realizing the time-saving features of caching.
This would work similarly to the cache feature in 11.0, but with enhanced functionality... the best of the old cache with the new cache intent.
Embed the cache option into tools.
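A rough Python sketch of what embedding the cache option into tools could mean internally: each tool's output is cached under a key derived from its configuration plus its upstream results, so only tools whose inputs changed get recomputed. All names here are hypothetical, not Designer internals:

```python
import hashlib
import pickle

_cache = {}  # survives across runs in this sketch; Designer would persist it

def run_tool(tool_id, config, upstream_results, compute):
    """Return the tool's output, recomputing only when its inputs changed."""
    key = hashlib.sha256(pickle.dumps((tool_id, config, upstream_results))).hexdigest()
    if key not in _cache:
        _cache[key] = compute(upstream_results)
    return _cache[key]

# Second call with identical config/inputs is a cache hit -> no recompute
out = run_tool("Select_12", {"fields": ["a", "b"]}, ("csv-v1",), lambda up: f"selected from {up}")
out = run_tool("Select_12", {"fields": ["a", "b"]}, ("csv-v1",), lambda up: f"selected from {up}")
```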
It would be cool to have annotations that dynamically update. E.g. a record count would be displayed in the annotation and updated after a run if changes occurred.
The limit on conversion warnings allows for a minimum of 1 message. Can we set the minimum to 0 to completely suppress the message?
Perhaps we can give warning messages a function similar to ERROR messages and allow the designer to Ignore, Warn, or Cancel?
ConvError: Imputation (441): Tool #104: No demand: 0.200000000000031 had more precision than a double. Some precision was lost.
ConvError: Summarize (456): Data: 0.360000000004675 had more precision than a double. Some precision was lost.
End: Designer x64: Finished running FP Model - Marquee Crew v3.yxmd in 32.3 seconds with 16 field conversion errors and 4 warnings
When using the R Tool for simple tasks (like renaming files, for example) in an iterative macro, there's a delay on every iteration as the R Tool starts up R.
The following are repeated on every iteration (with delays):
Can we look at an option to forward-scan an Alteryx job for R Tools, then load R into the process once, to eliminate these delays on every iteration?
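For illustration, the "load R once, call it many times" pattern looks like this in Python using the rpy2 bridge. This is an assumption for the sketch, not how the R Tool works today: R starts once at import, then each call is cheap:

```python
import rpy2.robjects as robjects  # starting R happens once, here

# Compile a small R function once, reuse it on every iteration
rename_file = robjects.r("function(src, dst) file.rename(src, dst)")

for i in range(100):                              # the iterative macro's loop
    rename_file(f"in_{i}.csv", f"out_{i}.csv")    # no R startup cost per iteration
```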
The new Cache tool does not function if the 'Disable All Tools that Write Output' option is selected in the workflow runtime properties. There is no indication of why the cache is not working, and this may be confusing because many users won't associate the cache with a normal output. The interface should be changed to make this clearer, or the cache function configured to ignore this workflow runtime option.
Preface: I have only used the in-DB tools with Teradata, so I am unsure if this applies to other supported databases.
When building a fairly sophisticated workflow using in-DB tools, the workflow may sometimes fail because the underlying queries run up against CPU / memory limits. This is most common when doing several joins back to back, as Alteryx sends these as one big query with various nested subqueries. When working with datasets in the hundreds of millions and billions of records, this can be extremely taxing for the DB to run as one huge query. (It is possible to get around this by using an in-DB write-out to a temporary table as an intermediate step in the workflow.)
When a routine does hit an in-DB resource limit and the DB kills the query, Alteryx immediately fails the workflow run. Any "temporary" tables Alteryx creates are in reality permanent tables that Alteryx usually just drops at the end of a successful run. If the run does not end successfully due to hitting a resource limit, these "temporary" (permanent) tables are not dropped. I only noticed this after building out a workflow and running up against a few resource limits; I then started getting database out-of-space errors. Upon looking into it, I found all the previously created "temporary" tables were still there and taking up many TBs of space.
My proposed solution is for Alteryx's in-DB tools to drop any "temporary" tables they have created when a run ends - regardless of whether the entire module finished successfully.
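The contract I'm proposing is essentially a try/finally around the run. A hedged Python sketch using pyodbc, where the connection string, table names, and query are placeholders:

```python
import pyodbc

def run_with_cleanup(conn_str, work_tables, main_query):
    """Drop the run's work tables even when the main query dies on a resource limit."""
    conn = pyodbc.connect(conn_str)
    try:
        conn.execute(main_query)          # may die on a CPU / spool limit
        conn.commit()
    finally:
        for t in work_tables:             # drop "temporary" tables regardless of outcome
            try:
                conn.execute(f"DROP TABLE {t}")
                conn.commit()
            except pyodbc.Error:
                pass                      # table may not have been created yet
        conn.close()
```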
It would be helpful to have Read Uncommitted listed as a global runtime setting.
Most of the workflows I design need this set, so rather than risk forgetting to click this option on one of my inputs, it would be beneficial to have it as a global setting.
For example: the user would be able to set specific inputs according to their needs, and the checkbox on the global runtime setting would remain unchecked.
However, if the user checked the global runtime setting for Read Uncommitted, then the whole workflow would automatically use an uncommitted read on all of the inputs.
When the user unchecks the global runtime setting for Read Uncommitted, only the inputs that were individually set up with this option would remain set to read uncommitted.
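The precedence rule being described is simple; a tiny Python sketch to pin it down (all names are made up):

```python
# A checked global setting forces Read Uncommitted everywhere;
# unchecked, each input keeps its own per-input choice.
def effective_read_uncommitted(global_setting: bool, input_setting: bool) -> bool:
    return global_setting or input_setting

inputs = {"orders": True, "customers": False}
for name, per_input in inputs.items():
    print(name, effective_read_uncommitted(global_setting=False, input_setting=per_input))
# orders True, customers False -- unchecking the global box restores per-input choices
```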
Currently, if a user has multiple connections in a workflow that connect to a password-protected source, and that password changes, the user will be locked out of their account by failed login attempts as Alteryx tries to validate the connections.
Today I had to manually edit the XML of another user's workflow in order to remove references to their server, so they could correct their password without locking the account for a third time today.
While I understand that aliases are a good workaround to this problem, the issue still has potential to occur.
Having an option to load a workflow in a "SECURE" or "SAFE" mode, where it would not validate a query until runtime or until the metadata is manually refreshed, would significantly reduce lockouts and improve the usability of the tool.
In the Overview pane, can you please show which tools have completed during the current run, when viewing this pane while the canvas is running? That would allow a progress check at a glance.
This is a fairly straightforward request. I'd like to be able to pass interface tool values through to the workflow events the same way I would pass them to a tool in the workflow (%Question.<tool name>%). One use case for this is that we are calling a workflow and passing in an ID, and if this workflow fails, I'd like to trigger an event that calls back to the application and says this specific workflow for this ID failed.
The temporary solution is to have the workflow write to a temp file and have the event reference that temp file, but this is clunky and risky if there are parallel runs occurring.
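Until then, the temp-file workaround can at least be made parallel-safe by keying the file to a unique run ID. A small Python sketch, with an invented file-naming scheme:

```python
import json
import os
import tempfile
import uuid

def write_event_context(workflow_id: str) -> str:
    """Write this run's values to a uniquely named file so parallel runs can't collide."""
    run_id = uuid.uuid4().hex
    path = os.path.join(tempfile.gettempdir(), f"wf_event_{run_id}.json")
    with open(path, "w") as f:
        json.dump({"workflow": workflow_id, "run": run_id}, f)
    return path   # pass this path to the event instead of a fixed filename
```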
Currently, scheduling via the Designer-to-controller connection is independent of the Gallery. So, even after a canvas is deleted, the scheduler continues to execute the cached version of the canvas for as long as the schedule exists.
Note, this issue does not occur when the canvas is scheduled directly in the Gallery; it only occurs when you schedule via Designer on the controller directly.
Steps to replicate the issue:
1) Publish a canvas into the Gallery
2) Schedule the canvas to run daily via the Designer --> Options --> View Schedules --> Select Controller --> Create new workflow and schedule
3) Delete the canvas from the Gallery
4) You will notice that the canvas is still being run on the defined schedule, even though you have deleted the canvas
Observed in Alteryx 11.5.1
The golden copy of a canvas should be the version existing in the Gallery. Once the Gallery instance of the canvas is deleted or replaced with a new version, the scheduler should pick up that change rather than continuing to run a stale cached copy.
I was wondering if any of you have achieved a "transaction rollback" type of feature in Alteryx.
Following is the use case:
If a workflow that writes data into multiple outputs (which could be relational tables or files) fails halfway through writing to one of the outputs, is there an option to roll back the partially loaded data and reset the process to the original state (i.e., before the execution of the workflow)? Or does this need to be done programmatically?
There is a workflow-level property, "Cancel Running Workflow on Error". This stops the execution but doesn't perform a rollback.
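For anyone doing this programmatically today, database transactions give exactly this behavior when the outputs are tables in the same transactional database (files would need a staging-and-rename approach instead). A minimal Python sketch with sqlite3, where two outputs share one transaction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE out1 (v INTEGER NOT NULL)")
conn.execute("CREATE TABLE out2 (v INTEGER NOT NULL)")
try:
    with conn:                                          # one transaction spanning both outputs
        conn.execute("INSERT INTO out1 VALUES (1)")
        conn.execute("INSERT INTO out2 VALUES (NULL)")  # fails halfway through the writes
except sqlite3.IntegrityError:
    # The partial write to out1 was rolled back along with the failed one
    print("rolled back; out1 has",
          conn.execute("SELECT COUNT(*) FROM out1").fetchone()[0], "rows")
conn.close()
```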
Countless times I've been asked by management how long a process will take to run, and I really can't say beyond an educated guess (using input file size and the complexity of the workflow). Yet, when downloading files off the internet or moving files around on a network, Microsoft will give an estimated time of completion (e.g. 10 minutes remaining until files are downloaded). It would be so great if Alteryx showed something similar for how long a workflow will take to finish running. I'm not sure if you can create an algorithm based on the number of tools, import file size, network connection, etc. to give an ETA on when a workflow may finish, but it would be super helpful for me when working on high-priority projects so I can communicate with the business side.
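The simple extrapolation a download progress bar uses could be a starting point. A toy Python sketch, assuming Designer could expose some completed-fraction metric (records or tools finished so far):

```python
import time

def eta_seconds(started_at: float, done: int, total: int) -> float:
    """Estimate remaining time by extrapolating the rate observed so far."""
    elapsed = time.time() - started_at
    fraction = done / total
    return elapsed * (1 - fraction) / fraction   # time remaining at the current rate

start = time.time() - 120          # pretend we started two minutes ago
print(f"{eta_seconds(start, done=40, total=100):.0f}s remaining")  # ~180s
```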