I would like to see more file types supported for dragging from a folder onto a workflow, specifically .txt and .dat files. This would greatly help my team and me analyze the new and unknown data files that we receive on a daily basis.
Can we get the Input tool to automatically convert long filenames to the 8.3 convention inside a macro?
I've written a batch macro that opens files individually in order to trap files that fail to open. However, when I pass in very long file names it bombs: beyond some length the Input tool normally converts the path to 8.3, but that logic doesn't fire inside my macro.
Example of filename: \\ccogisgc1sat\d$\Dropbox (Clear Channel Outdoor)\Mapping\BWI MapInfo\Workspaces\Local\AEs\Archives\Cara\Sunrise Senior Living\Washington+DC_Adults+55++With+HHI+Of+$75,000++Who+Are+Caregiver+Of+Aging+Parent_Relative+Or+Planning+To+Shop+For+Nursing+Care_Assisted+Living_Retirem.TAB
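For reference, Windows exposes this conversion through the Win32 GetShortPathNameW call. Below is a minimal Python sketch of the conversion the Input tool presumably applies; the example path is a placeholder, and this only runs on Windows:

```python
import ctypes
from ctypes import wintypes

def to_short_path(long_path: str) -> str:
    """Convert a long Windows path to its 8.3 short form via the Win32 API."""
    GetShortPathNameW = ctypes.windll.kernel32.GetShortPathNameW
    GetShortPathNameW.argtypes = [wintypes.LPCWSTR, wintypes.LPWSTR, wintypes.DWORD]
    GetShortPathNameW.restype = wintypes.DWORD

    # A first call with a zero-length buffer returns the required size.
    needed = GetShortPathNameW(long_path, None, 0)
    if needed == 0:
        # Path doesn't exist, or 8.3 name generation is disabled on the volume.
        raise ctypes.WinError()
    buf = ctypes.create_unicode_buffer(needed)
    GetShortPathNameW(long_path, buf, needed)
    return buf.value

# Hypothetical usage:
# print(to_short_path(r"\\server\share\Very Long Folder Name\file.TAB"))
```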
Now that we have a Snowflake Bulk Loader option, it would be great to utilize the built-in Snowflake internal staging. This eliminates the need for an end-user to have the technical know-how or access to IT resources to utilize a separate S3 bucket and generally reduces friction in the process.
Many workflows that I and my colleagues work with use big databases to get data. After a few steps downstream and some testing, we normally just add an Output tool and then open that data in a new workflow to save time re-running the original workflow. Not that this is much of a burden, but I am used to copying and pasting tools from workflow A to workflow B, and you can't do that with the Output tool, because in workflow B it needs to be converted to an Input. I just think it would be a cool added feature if possible. Anyone else agree?
One of the common things we need to do is take a delta-copy of a file or a DB table into the staging area of the analytical database.
This always looks very similar, so it would be useful to make it a wizard-based process so that teams can build these very quickly rather than having to hand-roll them:
- Check which primary keys exist, and fill the gaps where they don't
- Determine whether any rows update over time, or whether the table is insert-only. If rows update, which column is the "updated date" column that lets us spot updates? If there is no update-date column, then we need a column-by-column check of some kind, like a hash or a checksum (see the sketch after this list)
- Do you want to sync deletes?
- Do you want to keep updates?
- A target table in the staging area, now updated to match the source
- Logging (similar to what Kimball recommends in the ETL Handbook) with the run date/time, summary stats, and any errors
- An errors table for any errors that arose, with row numbers
- Tables created in the target (with a history table if requested)
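To illustrate the no-update-date case, here's a rough Python sketch of hash-based change detection; the function names and column handling are made up for illustration, not how any existing wizard does it:

```python
import hashlib

def row_hash(row, key_cols):
    """Hash all non-key columns so changed rows can be detected
    when there is no reliable "updated date" column."""
    payload = "|".join(str(row[c]) for c in sorted(row) if c not in key_cols)
    return hashlib.md5(payload.encode("utf-8")).hexdigest()

def classify(source_rows, target_rows, key_cols):
    """Split source rows into inserts/updates and spot deletes by
    comparing per-row hashes against the staging copy."""
    target_index = {
        tuple(r[c] for c in key_cols): row_hash(r, key_cols) for r in target_rows
    }
    inserts, updates, seen = [], [], set()
    for r in source_rows:
        key = tuple(r[c] for c in key_cols)
        seen.add(key)
        if key not in target_index:
            inserts.append(r)                      # new row
        elif row_hash(r, key_cols) != target_index[key]:
            updates.append(r)                      # changed row
    deletes = [k for k in target_index if k not in seen]  # for the sync-deletes option
    return inserts, updates, deletes
```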
Was very happy to see the Bulk Loader introduced for Snowflake in the last release. This bulk loader is specifically available for Snowflake environments hosted on AWS, but it does not provide functionality for environments using Azure. As Snowflake continues to build momentum, I imagine this will be a common request. Is there something in the pipeline to add this functionality?
As an interim solution, we will be working toward developing some generic scripts/SnowSQL to mimic that bulk load, but ultimately we'd love to have this as part of the tool.
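In the meantime, the internal-stage route looks roughly like this with the snowflake-connector-python package; the connection details, table, and file paths below are placeholders:

```python
import snowflake.connector

# Placeholder credentials; fill in your own account details.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="...",
    warehouse="my_wh", database="my_db", schema="my_schema",
)
cur = conn.cursor()

# PUT uploads the local file to the table's internal stage (@%TABLE),
# so no separate S3 bucket or Azure blob container is needed.
cur.execute(r"PUT file://C:\temp\extract.csv @%MY_TABLE AUTO_COMPRESS=TRUE")

# COPY INTO then bulk-loads the staged file into the table.
cur.execute("""
    COPY INTO MY_TABLE
    FROM @%MY_TABLE
    FILE_FORMAT = (TYPE = CSV FIELD_OPTIONALLY_ENCLOSED_BY = '"' SKIP_HEADER = 1)
""")
conn.close()
```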
I didn't see it in the Idea section, but questions and workarounds have been discussed in the community a few times (11/15, 3/18, 4/18), and the suggestions seem to be: buy the $400-600 ODBC driver from CData (or ZappySys), use a VBA script in Excel to trigger a refresh, or create my own Alteryx connector macro (great series, btw, though most of it was beyond my understanding!).
While I'm not opposed to paying, kludging, or learning to program, each of those is just one more thing to build/buy, install, maintain, and have break at the most inconvenient time.
OData (Open Data Protocol) is an ISO/IEC approved, OASIS standard that defines a set of best practices for building and consuming RESTful APIs. OData helps you focus on your business logic while building RESTful APIs without having to worry about the various approaches to define request and response headers, status codes, HTTP methods, URL conventions, media types, payload formats, query options, etc. OData also provides guidance for tracking changes, defining functions/actions for reusable procedures, and sending asynchronous/batch requests. OData RESTful APIs are easy to consume. The OData metadata, a machine-readable description of the data model of the APIs, enables the creation of powerful generic client proxies and tools.
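For anyone unfamiliar, consuming an OData feed is about as simple as REST gets, which is why a native connector feels overdue. A minimal Python sketch against a hypothetical v4 service root:

```python
import requests

# Hypothetical OData v4 service root; swap in the real endpoint.
base = "https://example.com/odata/v4"

# Standard OData query options handle projection, filtering, and paging
# directly in the URL, so no driver is needed for a basic pull.
resp = requests.get(
    f"{base}/Orders",
    params={
        "$select": "OrderID,CustomerID,OrderDate",
        "$filter": "OrderDate ge 2018-01-01",
        "$top": "100",
    },
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()
for row in resp.json()["value"]:  # v4 wraps result rows in a "value" array
    print(row)
```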
With the release of 2018.3, caching has become an ad hoc task. With complex workflows and multiple inputs, we need a method to cache and save the cache selection by tool. Once the workflow runs after opening, the cache would be saved at the latest tool downstream.
This way we don't have to create ad hoc cache steps and run the workflow twice before realizing the time-saving benefits of caching.
This would work similarly to the cache feature in 11.0 but with enhanced functionality... the best of the old cache with the new cache intent.
Currently, the Output tool lets me:
- Overwrite a table (this will drop and then create the new table)
But sometimes the workflow fails and the old table is dropped while the new one is not created. I then have to modify the tool (setting it to "create a new table") to launch it again, which may be a complex process in companies. After that, I have to modify it back to "overwrite".
What I want:
- Create a new table: error if the table already exists
- Overwrite a table: error if the table doesn't exist
- Overwrite a table: no error if the table doesn't exist (easy in SQL: DROP TABLE IF EXISTS...; see the sketch below)
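To spell out that third behavior, here's a minimal sketch of the SQL involved, driven from Python via pyodbc; the DSN and table are hypothetical, and it assumes an engine that supports DROP TABLE IF EXISTS (Postgres, MySQL, SQL Server 2016+):

```python
import pyodbc

# Hypothetical DSN and table name.
conn = pyodbc.connect("DSN=my_dsn")
cur = conn.cursor()

# IF EXISTS means the drop never errors on a missing table, so a
# workflow that failed between drop and create can simply be re-run.
cur.execute("DROP TABLE IF EXISTS staging.my_table")
cur.execute("""
    CREATE TABLE staging.my_table (
        id   INT PRIMARY KEY,
        name VARCHAR(100)
    )
""")
conn.commit()
```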
The current SharePoint API pull tool does not support pulling managed metadata columns. It would be great if Alteryx updated the SharePoint List tools to be able to read managed metadata columns.
I really love how I can drag and drop a file directly onto the canvas from Windows Explorer and Alteryx knows to create an Input Data tool. But when I tried it with a folder today, hoping to see a Directory Input tool appear, it wouldn't do it. Could we have a similar functionality for automatically creating a Directory Input tool?
Presently, when mapping an Excel file to an Input tool, the tool only recognizes sheets; it does not recognize named tables (ranges) as possible inputs. When using Power BI to read Excel inputs, I can select either sheets or named ranges as input. The Alteryx Input tool should do the same.
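As a stopgap, named ranges can be read outside the Input tool; here's a rough sketch with openpyxl (3.1+, where defined_names is dict-like), using a hypothetical file and range name:

```python
from openpyxl import load_workbook

wb = load_workbook("report.xlsx", data_only=True)

# Defined names live at the workbook level; each resolves to one or
# more (sheet name, cell range) destinations.
defn = wb.defined_names["SalesTable"]  # hypothetical named range
for sheet_name, coord in defn.destinations:
    ws = wb[sheet_name]
    for row in ws[coord]:
        print([cell.value for cell in row])
```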
One of the biggest and most impactful changes would be support for detailed unit testing for a canvas - this could work much like it does in Visual Studio:
In order to fully test a workflow - you need 3 things:
Ability to replace the inputs with test data
Ability to inspect any exceptions or errors thrown by the canvas
Ability to compare the results to expectation
To do this:
Create a second tab behind a canvas which is a Testing view of the canvas which allows you to define tests. Each test contains values for one or more of the inputs; expected exceptions / errors; and expected outputs
Alteryx then needs to run each of these tests one by one, and for each test:
Replace the data inputs with the defined test input
Check for, and trap, any errors generated by Alteryx
Compare the output to the expected output
Generate a test score (pass or fail against each test case)
This would allow:
Each workflow / canvas to carry its own test cases
Automated regression testing overnight for every tool and canvas
For this canvas, there are two inputs and one output.
Each test case would define:
Test rows to push into input 1
Test rows to push into input 2
Any errors we're expecting
The expected output of the browse tool
This would make Alteryx SUPER robust and allow people to really test every canvas in an incredibly tight way!
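To make this concrete, here's roughly the harness you have to hand-roll today outside Designer: swap test inputs in, run the canvas from the command line, and diff the output. The engine executable, its exit-code behavior, and all file paths are assumptions for illustration:

```python
import csv
import shutil
import subprocess
import unittest

# Assumed locations; adjust for your install and workflow.
ENGINE = r"C:\Program Files\Alteryx\bin\AlteryxEngineCmd.exe"
WORKFLOW = r"C:\tests\my_canvas.yxmd"

class CanvasTest(unittest.TestCase):
    def test_case_1(self):
        # 1. Replace the data inputs with the defined test input.
        shutil.copy(r"C:\tests\case1\input1.csv", r"C:\data\input1.csv")
        shutil.copy(r"C:\tests\case1\input2.csv", r"C:\data\input2.csv")

        # 2. Run the canvas and trap errors (assumes a non-zero exit
        #    code when the workflow raises errors).
        result = subprocess.run([ENGINE, WORKFLOW], capture_output=True, text=True)
        self.assertEqual(result.returncode, 0, msg=result.stdout)

        # 3. Compare the output file to the expected rows.
        with open(r"C:\data\output.csv", newline="") as got, \
             open(r"C:\tests\case1\expected.csv", newline="") as want:
            self.assertEqual(list(csv.reader(got)), list(csv.reader(want)))

if __name__ == "__main__":
    unittest.main()
```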
We use the In-DB tools a lot to join our databases and filter before extracting (which seems logical), but to do it dynamically we have to use the Dynamic Input In-DB tool, which lets you input a kind of parameter for the dates, calculated locally and easily, or even based on a parameter table in Excel or whatever. It would be great to be able to dynamically plug a non-In-DB tool in, to provide parameters for filters or for the Connect In-DB tool. The thing is, when you use Dynamic Input In-DB you lose the code-free part, and it can be harder to maintain for non-SQL users who are just used to doing simple queries.
You could say that an analytic application could do the trick, or that you could develop a macro to do so, but that would be complicated with hundreds of tables.
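Under the hood, all the Dynamic Input In-DB tool is really doing for us is substituting parameters into a SQL template, which is exactly the part that stops being code-free. A trivial sketch (table and column names made up):

```python
from datetime import date, timedelta

# Hypothetical query template; the placeholder is filled from a locally
# calculated value, or could come from a parameter table in Excel.
TEMPLATE = """
SELECT *
FROM sales.orders
WHERE order_date >= '{start_date}'
"""

start = date.today() - timedelta(days=30)
sql = TEMPLATE.format(start_date=start.isoformat())
print(sql)  # this string is what gets handed to the In-DB query
```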
The SQL Editor window could have a better presentation of the SQL code; two issues observed:
First, it's simply plain text without even a fixed-width font, much less syntax highlighting.
Second, if you type in some manually formatted SQL code (e.g. with line feeds and indentation), then click the "Visual Query Builder" button and then click back to the "SQL Editor" button, all the formatting is lost: the code is converted to one run-on line, which is very difficult to read.
I understand that going between the Visual Query Builder and the SQL Editor is bound to have some issues; nonetheless the "idea" is to allow a user friendly display in the SQL Editor window:
Use a fixed-width font (should be trivial to implement)
My "implementation ideas" are based on a couple of minutes with Google, so hopefully this is a very feasible request; my user base is likely to spend more time in the SQL Editor than not, so this would be a valuable UX addition. Thanks!
Right now we can create Tableau extract files (.tde) but cannot read them into Alteryx, which limits the partnership of these two companies. Please add the functionality to import .tde files. Best, Jeremy