Was very happy to see the Bulk Loader introduced for Snowflake in the last release. This bulk loader is currently available only for Snowflake environments hosted on AWS and does not support environments hosted on Azure. As Snowflake continues to build momentum, I imagine this will be a common request. Is there something in the pipeline to add this functionality?
As an interim solution, we will be working toward developing some generic scripts/SnowSQL to mimic that bulk load, but ultimately we'd love to have this as part of the tool.
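A minimal sketch of the interim approach we have in mind, assuming the snowflake-connector-python client; the account, credentials, file path, and table name below are all placeholders:

```python
import snowflake.connector

# All connection details and object names below are placeholders.
ctx = snowflake.connector.connect(
    account="xy12345.west-europe.azure",  # an Azure-hosted Snowflake account
    user="LOADER",
    password="********",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)
cs = ctx.cursor()
try:
    # Stage the local extract produced by the workflow, then bulk-load it.
    cs.execute("PUT file:///tmp/extract.csv @%TARGET_TABLE AUTO_COMPRESS=TRUE")
    cs.execute("""
        COPY INTO TARGET_TABLE
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)
finally:
    cs.close()
    ctx.close()
```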
We wanted to control the order of execution of objects in an Alteryx workflow, but right now we have ONLY Block Until Done, which is not the right choice in many cases.
Could we have a container (say, a Sequence Container), put a piece of logic in each container, and control the flow by connecting the containers? That way we could control the execution order. It might look something like the example below.
As we do more work analyzing the canvases that our folks are producing, it's becoming more and more necessary to have a well-documented definition and schema for the XML that is used for Alteryx canvases.
Could you please publish the full XML definition and schema for Alteryx canvases? This would allow groups to perform deeper analytics on how people are using Alteryx, automate quality checks, look for learning gaps, scan for dependencies, etc.
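As a rough illustration of the kind of analysis we'd like to run, here is a minimal sketch that counts tool usage in a workflow file, assuming the usual .yxmd layout (an AlteryxDocument root with Node/GuiSettings elements); a published schema would let us do this reliably instead of guessing at the structure:

```python
import xml.etree.ElementTree as ET
from collections import Counter

tree = ET.parse("MyWorkflow.yxmd")   # hypothetical workflow file
root = tree.getroot()

# Count which tools (plugins) appear on the canvas.
tools = Counter(
    node.find("GuiSettings").get("Plugin", "unknown")
    for node in root.iter("Node")
    if node.find("GuiSettings") is not None
)
for plugin, count in tools.most_common():
    print(count, plugin)
```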
I've seen this question before and have run into it myself. I'd like to see a new tool that would allow a developer (of a workflow) to choose a path of logic based upon criteria known only during the execution of a module.
If LEFT INPUT count of records < 10,000 THEN Path1 (e.g. use a Calgary join)
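Purely as an illustration of the decision logic such a tool would express (the threshold and path names are hypothetical, not an Alteryx API):

```python
def choose_path(left_input_record_count: int) -> str:
    # Decide the downstream path from a value only known at run time.
    if left_input_record_count < 10_000:
        return "Path1"  # e.g. use a Calgary join
    return "Path2"      # e.g. fall back to a standard join
```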
This got me to think a little more about localized logging options in Alteryx.
At a high level, there are ways to accomplish this in Designer at a User or System level by enabling a Logging directory and then parsing those logs with a separate Alteryx job. However, this would involve logging ALL Designer executions, which seems like it may be overkill for this need. A user can also manually save a log after each execution, although this requires manual intervention.
I think adding an option in the Runtime settings for Workflow Configuration to Enable Logging and (optionally) specify a Logging directory would be a great feature add for Designer. In my opinion this should not apply once a workflow runs on Server (Server logging should be handled in a fully standardized way), but should apply to designer "UI" execution. Having the ability to add a logging naming convention (perhaps including a workflow name and run date in the log name) would be icing on the cake.
This would allow for a piecemeal logging solution to log specific flows or processes that might be high-visibility or high-importance, while avoiding saving hundreds or thousands of logs daily for less important processes and for dev/test runs. It would also reduce or eliminate the manual process of saving these logs individually.
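As a rough sketch of the naming convention I mean (the directory and workflow name are placeholders, not an Alteryx API):

```python
import datetime
from pathlib import Path

log_dir = Path(r"C:\Logs\Alteryx")      # hypothetical per-workflow logging directory
workflow_name = "DailySalesRefresh"     # hypothetical workflow name
run_stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")

# e.g. C:\Logs\Alteryx\DailySalesRefresh_20240101_093000.log
log_path = log_dir / f"{workflow_name}_{run_stamp}.log"
log_dir.mkdir(parents=True, exist_ok=True)
log_path.write_text("workflow execution messages would be written here\n")
```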
Essentially, I want to update a DB table with either an update or a deletion of rows. I can't delete all of the data. My workaround will be to create/insert into a table the keys that I want to delete and use an Input/Output tool with SQL that performs the delete. Any other suggestions are welcome, but a tool would be best.
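For anyone considering the same workaround, a minimal sketch of the delete step, assuming pyodbc and placeholder connection, table, and column names:

```python
import pyodbc

# Connection string and object names are placeholders.
conn = pyodbc.connect("DSN=MyWarehouse;UID=user;PWD=secret")
cur = conn.cursor()

# The workflow has already inserted the keys to remove into keys_to_delete.
cur.execute("""
    DELETE FROM target_table
    WHERE key_col IN (SELECT key_col FROM keys_to_delete)
""")
conn.commit()
conn.close()
```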
When I maximize the SQL Editor window within the Input tool, only half of the screen shows the SQL window. The bottom half of the screen is useless grey space. Why not have most of the screen be the SQL window, with only a small grey border for the Test Qry, OK, Cancel, and Help buttons? I'd like to see more SQL and less wasted space. Thanks!
In environments with a large number of designers, we are now starting to bump into the issue of many people re-inventing the wheel, or editing a canvas in ways that overwrite each other's changes.
Can we make an addition to the workflow process so that I can check an item out of the server, work on it, and check it back in? That way, people can see that I'm working on it in Designer, my changes are sent back, and when I commit my changes others can work accordingly.
The other alternative would be code branches and trunks, which would be more effective and more useful, but I'd guess this is a tougher ask (unless Alteryx just embedded Git under the covers).
A cache tool would allow a user to temporarily store a snapshot of inline data from a previous run of the module.
Imagine a browse tool that was inline as opposed to a terminus tool (input and output). Now allow that browse tool to persist its data after a run of the module. When an option on that tool was activated, it would block all of the dependent tools upstream from it and instead send its cached data downstream.
The reason I think this would be a useful tool is that I often come to the end of creating a module when I'm working on the Reporting tools. I run multiple times to see the changes I've made. When the module has a lot of incoming data and complex data transformations, it can take a long time just to get to the point where the data gets to the reporting tools. This cache tool would eliminate that wait.
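A minimal sketch of the behaviour I'm describing, assuming pandas and a stand-in for the slow upstream part of the module:

```python
import os
import pandas as pd

CACHE_PATH = "reporting_input_cache.csv"   # hypothetical snapshot location

def run_expensive_upstream() -> pd.DataFrame:
    # Stand-in for the heavy inputs and transformations upstream of reporting.
    return pd.DataFrame({"value": range(5)})

def get_reporting_input(use_cache: bool = True) -> pd.DataFrame:
    # When the cache option is on and a snapshot exists, skip the upstream
    # tools and replay the stored data downstream instead.
    if use_cache and os.path.exists(CACHE_PATH):
        return pd.read_csv(CACHE_PATH)
    df = run_expensive_upstream()
    df.to_csv(CACHE_PATH, index=False)     # persist the snapshot for later runs
    return df
```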
I need support for outbound data streams to be gzip compressed. Ideally, this would be done by a new tool that can be inserted into a workflow (maybe similar to the Base 64 Encoding tool). Just including it in the Output Tool will not address my needs as I will be sending gzip payloads to a cloud API. There are two main reasons why this is necessary (and without it, quite possibly a roadblock for our enterprise's use of Alteryx):
Some APIs enforce gzip encoding; therefore, Alteryx cannot currently be used to interact with such APIs
When transmitting large volumes of data across the Internet, gzip compression will significantly decrease transmission times
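To illustrate the pattern the tool would need to support, here is a minimal sketch of sending a gzip-compressed payload to a hypothetical API endpoint using Python's gzip and requests:

```python
import gzip
import json
import requests

url = "https://api.example.com/ingest"      # hypothetical cloud API endpoint
payload = json.dumps({"rows": [{"id": 1, "value": "a"}]}).encode("utf-8")

resp = requests.post(
    url,
    data=gzip.compress(payload),             # compress the body before sending
    headers={
        "Content-Type": "application/json",
        "Content-Encoding": "gzip",           # tell the API the body is gzip-compressed
    },
)
resp.raise_for_status()
```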
Since the tool is trying to connect to the target email server to deliver the messages, you are at the mercy of the target server accepting the connection, so it will always be a hit-or-miss process.
In my case:
So there is no solution for me, because our SMTP server always requires authentication (user, password) and our IP is dynamic.
For me it would be necessary that the Email Tool in Alteryx allow authenticated settings when the tool is not set to auto-detect the SMTP server. But that is not implemented. Maybe it will be possible in the future. So I must wait!
I cannot use the Email Tool as an error-free solution.
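For what it's worth, the behaviour I need from the Email Tool is plain authenticated SMTP, along these lines (a minimal sketch; the server, credentials, and addresses are placeholders):

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Workflow finished"
msg["From"] = "alerts@example.com"       # placeholder sender
msg["To"] = "me@example.com"             # placeholder recipient
msg.set_content("Run completed without errors.")

# Explicit SMTP host with authentication instead of auto-detection.
with smtplib.SMTP("smtp.example.com", 587) as server:   # placeholder server
    server.starttls()
    server.login("smtp_user", "smtp_password")
    server.send_message(msg)
```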
When a custom (bespoke for @chrislove) macro is created, I would like the option to create an annotation that goes along with the tool. This is entirely cosmetic, but might help users to recognize the macro.