We have discussed on several occasions, and in different forums, the importance of giving Alteryx order-of-execution control, conditional execution, design patterns, and even orchestration.
I presented this idea some time ago, but someone asked me if it was posted, and since it was not, I’m putting it here so you can give some feedback on it.
The basic concept behind this idea is to allow us (users) to have:
This approach builds on functionality that is already in the product (filtering logic, loading & saving, caching, blocking, among others), exposed through a Tool Container with enhanced attributes, as in this example:
The approach is to extend Tool Container’s attributes.
This proposition uses actual functionalities we already have in Designer.
So, basically, the Tool Container gets 'superpowers' with the addition of capabilities like accepting input data, saving the contents of the container (to create a design pattern, or a very commonly used sequence of tools chained together), outputting data, and running the tools included in the container, plus a configuration screen like:
That concludes a brief introduction to the idea, but taking it a little further, it would even allow something like an orchestration layout, where users can drag and drop containers or patterns and orchestrate them into a solution, as we can with the Visual Layout tool or the Interactive Chart tool:
I'm looking forward to hearing what you think.
Best
This has probably been mentioned before, but in case it hasn't....
Right now, if the Dynamic Input tool skips a file (which it often does!), it just raises a warning and continues processing. While continuing is often useful, could an 'error if files are skipped' option be built into the tool?
Right now it is easy to miss that this is happening, and in production / on Server you may want the process to stop.
Thanks,
Andy
I surprisingly couldn't find this anywhere else as I know it's been discussed in person on many occasions.
Basically the Formula tool needs to be smarter in many ways, but this particular post focuses on the Data Type component.
The Formula tool should not always default to V_String as the data type when data or a formula is entered; it should look at the data and estimate the most likely type.
I know there are times when the logical type might not be consistent across all fields, but the data preview and the function used in the formula should be used to determine the most likely option.
E.g., if I type a number or a date directly into the Formula tool, Alteryx should be smart enough to change the data type from the default V_String to Int, Double, or Date.
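To make the request concrete, here's a minimal sketch of the kind of inference being asked for; the heuristic and function name are hypothetical, not Alteryx's actual logic:

```python
from datetime import datetime

def infer_type(literal: str) -> str:
    """Guess the most likely Alteryx data type for a typed literal
    (hypothetical heuristic, not Alteryx's actual logic)."""
    try:
        int(literal)
        return "Int64"
    except ValueError:
        pass
    try:
        float(literal)
        return "Double"
    except ValueError:
        pass
    try:
        datetime.strptime(literal, "%Y-%m-%d")
        return "Date"
    except ValueError:
        pass
    return "V_String"  # today's default becomes the fallback

print(infer_type("42"))          # Int64
print(infer_type("3.14"))        # Double
print(infer_type("2024-01-31"))  # Date
print(infer_type("hello"))       # V_String
```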
This is an extension to the ideas posted here:
I often need to create a record ID that automatically increments but is grouped by a specific field. I currently do it with the Multi-Row Formula tool, using [Field-1:ID]+1, because there is no Group By option in the Record ID tool.
Also, sometimes I need to start at 0, but the Multi-Row Formula tool doesn't allow this, so I have to add a Formula tool right after it to subtract 1.
Adding a Group By option to the Record ID tool would let users skip the Multi-Row Formula workaround and start at any value they want.
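For reference, here's what the requested behavior looks like in pandas (a sketch with made-up sample data):

```python
import pandas as pd

df = pd.DataFrame({
    "Region": ["East", "East", "West", "West", "West"],
    "Sales":  [100, 200, 300, 400, 500],
})

# Record ID that restarts per group; cumcount() starts at 0, so any
# starting value (0, 1, ...) is just an offset -- no subtraction step needed.
start = 1
df["RecordID"] = df.groupby("Region").cumcount() + start
print(df)
```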
Love the new updates to the Browse tool in 2019.2! However, if you choose the option Open results in new window, which I do often so I can see my whole dataset, the search/filter/sort functionality goes away. Would be great if that new functionality also worked in the new window. Thanks!
Can't wait for the new base maps!
In app screens, a lot of space is wasted because components/tools can only be stacked one below the other.
It would be great if we could also arrange them horizontally.
Thanks !
Arno
Tags : screen, app, macro, layout, tools, UI
Please add the ability to globally, within a module, forget all missing fields.
Dear Alteryx team and community,
All the best for 2021!
Thank you very much for enhancing the output option from Alteryx Designer to Excel while keeping the formatting.
For a lot of my use cases this is very helpful!
Still, there are some use cases left. In cases where I want to overwrite a calculated/linked number (e.g. a calculated prediction) with the actual number, it would be very helpful to be able to feed into those cells as well. At the moment Alteryx does the job, but I receive a lot of Excel errors (XML errors) and a corrupt Excel file when overwriting calculated/linked fields.
Is there a chance to extend the current setup for all of those cases?
Thanks and best regards
Christoph
This is a QoL-request, and I love me some QoL-updates!
While I'm developing, I often need the output of a workflow as input for the next phase of my development. For example: an API run returns job location, status, and authentication IDs. I want to use these in a new workflow to start experimenting with what'll work best. Because of the experimenting part, I always do this in a new workflow rather than cache and continue in my main flow.
Writing a temporary output file always feels like an unnecessary step, and honestly I don't want to write a file for a step that'll be gone before it reaches production, especially if there is sensitive information in it.
Thanks.
Hello Alteryx Dev Gurus -
We are migrating, and some workflows that used to successfully update a data source now give a useless error message: "An unknown error occurred".
Back in my coding days, we could configure the ORM to be highly verbose at database interaction time, to the point where you could tell it to give you every SQL statement it was trying to execute, and this was extremely useful at debug time. Somewhere down the pipe Alteryx is generating a SQL statement to perform an update, so why not have something on the Runtime tab that says 'Show all SQL statements for Output tools'? Or allow it on a per-Output-tool basis? If this was possible by changing a log4j properties file 15 years ago, I'm pretty sure it can be done today.
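For comparison, here's roughly what that looks like with SQLAlchemy today; echo=True is a real flag, and the table and statement are made up for illustration:

```python
from sqlalchemy import create_engine, text

# echo=True logs every SQL statement (with bind parameters) the engine
# emits -- the kind of visibility being requested for Output tools.
engine = create_engine("sqlite:///:memory:", echo=True)

with engine.begin() as conn:
    conn.execute(text("CREATE TABLE t (id INTEGER, val TEXT)"))
    conn.execute(text("UPDATE t SET val = :v WHERE id = :i"),
                 {"v": "new", "i": 1})
# A failing UPDATE can now be diagnosed from the logged statement
# instead of from "An unknown error occurred".
```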
Thank you for attending my TED talk on how allowing for detailed sql statements to bubble back up to the user would be a useful feature improvement.
This idea is to fix one of the Power BI Output tool options for existing datasets.
Currently, if the 'Replace existing dataset' option is selected, the dataset is dropped and replaced with one having the same name. The problem with this is that any reports or dashboards using that dataset become invalid (likely due to a changed internal identifier).
The idea is to change the 'Replace existing dataset' functionality to delete & replace the data within the dataset rather than deleting & replacing the dataset itself.
This behavior is described in the following thread and flagged as 'solved', although the workaround isn't practical as a true solution to the issue. We'd like to see this supported more seamlessly in Alteryx.
https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Publish-to-Power-BI-breaks-linked-Powe...
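For what it's worth, the Power BI REST API already exposes the building blocks for this on push datasets; a rough sketch (the dataset ID, table name, and token are placeholders, and AAD authentication is omitted):

```python
import requests

BASE = "https://api.powerbi.com/v1.0/myorg"
dataset_id = "YOUR-DATASET-ID"              # placeholder
table = "Sales"                             # placeholder
headers = {"Authorization": "Bearer <AAD token>"}

# 1. Clear the rows instead of dropping the dataset -- the dataset ID
#    is preserved, so reports and dashboards built on it stay linked.
requests.delete(f"{BASE}/datasets/{dataset_id}/tables/{table}/rows",
                headers=headers)

# 2. Push the refreshed rows into the same dataset.
rows = {"rows": [{"Region": "East", "Amount": 100}]}
requests.post(f"{BASE}/datasets/{dataset_id}/tables/{table}/rows",
              headers=headers, json=rows)
```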
Hi Alteryx community,
It would be really nice to have v_string/v_wstring with a maximum character size as the standard for Text Input columns.
I've lost count of how many times the error turned out to be string truncation caused by the string size limit of the Text Input tool.
Thumbs-up those who lost their minds after discovering that the error was that! 😄
I would like for it to be easier to change input (and output) tools to UNC pathing. I think adding it to the right click menu would be great. Currently, I have to go to options >> advanced >> workflow dependencies. A right click option would be easier.
Thanks!
AD/LDAP Authentication should be an option for the Mongo tool, and the ability to use Gallery Connections would also be great. Local SQL authentication is no longer allowed in most enterprises to simplify security configuration control.
Referencing the previous idea: Inputs/Output should have the option to read/write a compressed file (ZIP or GZIP)
This idea has been implemented for inputting .zip files. However, we still need the Run Command workaround for outputs. It's very common for users to want to output their .csv, .xlsx, or .pdf to a .zip, and the functionality would also need to extend to Gallery.
See the following links for people that are looking for this type of functionality:
https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Output-files-to-ZIP/td-p/163502
https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Zip-files/td-p/151456
Feel free to merge this idea with the previous one for continuity.
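Until it's native, the post-processing step most of us script looks something like this (paths are placeholders):

```python
import zipfile
from pathlib import Path

# Compress a file written by an Output Data tool, e.g. from a
# Run Command tool or a scheduled post-process script.
out = Path(r"C:\temp\report.csv")  # placeholder path
with zipfile.ZipFile(out.with_suffix(".zip"), "w",
                     compression=zipfile.ZIP_DEFLATED) as zf:
    zf.write(out, arcname=out.name)
out.unlink()  # optionally drop the uncompressed original
```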
As of 2019.4+, Alteryx leverages the Tableau Hyper API to output .hyper files. Unfortunately, our hardware is not compatible with the Hyper API. It would be great if Alteryx offered the best of both worlds: use the new Hyper API when possible, but fall back to the old method (pre-2019.4) when the machine's hardware doesn't support it. Thanks!
As of Alteryx 2020.3, the Browse tool no longer shows a profile of the complete dataset (profiling is capped once the cumulative record data size reaches 300 MB).
My proposed solution is an optional override of the record-size limit on the Browse tool (which would make profiling take longer but would actually profile the entire dataset). I would also like a general user setting that sets the default behavior of the Browse tool to either limited or unlimited.
Below is the newly included documentation of the Data Profiling Limit, which I'm proposing can be overridden.
Data Profiling Limit
Data Profiling in the Browse tool is capped at 300 MB. This allows you to process very large datasets faster. For each record in the incoming dataset, we process the record and add the record size to a counter. Once the counter reaches 300 MB, we stop processing records.
It is important to note that there is no specific number of records that we can process. This depends on the dataset since a record size can range from 1 byte to a few thousand bytes. This record size is different from the file size, displayed in the Results grid and Data Profiling Holistic View. The file size is generally different since it has been compressed to optimize spacing.
In other words, 300 MB of record size is not the same as 300 MB of file size.
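To make the documented behavior concrete, here is a toy sketch of the counter logic described above (the record sizes are made up):

```python
LIMIT = 300 * 1024 * 1024  # 300 MB profiling cap

def profiled_record_count(records):
    counter = profiled = 0
    for rec in records:
        if counter >= LIMIT:
            break              # remaining records never enter the profile
        counter += len(rec)    # stand-in for Alteryx's per-record size
        profiled += 1
    return profiled

# With ~15 KB records, the cap is hit after roughly 21,000 records --
# the same order of magnitude as the 20,338-of-855,085 example below.
print(profiled_record_count([b"x" * 15_000] * 855_085))
```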
This new limit can cause confusion when looking at the data profile (e.g., if you expect the sum to be $3 million but the Browse tool is profiling only 2% of your total records, the profiled sum may show only $60 thousand).
The sampled version with a 300 MB cutoff is rarely useful when you use Browse tools to get a quick sense of variable profiles on medium-sized datasets (around 1 million records), since these rarely fit within the 300 MB record-size limit.
An example is shown in the image below, where the dataset contains 855,085 records but the Browse tool profiles only the first 20,338.
Again, being able to override this 300 MB record-size limit would fix the problem created by the 2020.3 change to the Browse tool.
Hello all,
DuckDB is a new embeddable-database project by the team behind MonetDB. From what I understand, it's like a SQLite database but for analytics (a columnar-vectorized query execution engine on a single file). And of course it's open source and free.
More info on their website : https://duckdb.org/
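For anyone who hasn't tried it, a minimal taste of the Python API (the file and table names are made up):

```python
import duckdb

con = duckdb.connect("analytics.duckdb")  # the whole database is one file
con.execute("CREATE TABLE IF NOT EXISTS sales AS "
            "SELECT * FROM read_csv_auto('sales.csv')")
print(con.execute("SELECT region, SUM(amount) FROM sales "
                  "GROUP BY region").fetchall())
con.close()
```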
Best regards,
Simon
The guideline for Shapefile field names is below; it recommends using only letters and numbers.
"Spaces and certain characters are not supported in field names. Special characters include hyphens such as in x-coordinate and y-coordinate; parentheses; brackets; and symbols such as $, %, and #. Essentially, eliminate anything that is not alphanumeric or an underscore."
But many GIS tools can read and write 2-byte field names in Shapefiles.
(e.g. QGIS https://qgis.org/en/site/index.html)
And Esri Japan says Shapefiles can use 2-byte field names.
https://www.esrij.com/gis-guide/esri-dataformat/shapefile/
We want to be able to use 2-byte field names in Shapefiles in Alteryx Designer (e.g. UTF-8, Shift-JIS).
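As a reference point, this is how one of those other stacks (GeoPandas/Fiona, assuming they are installed) writes a 2-byte field name today; the data below is invented for illustration:

```python
import geopandas as gpd
from shapely.geometry import Point

# A Shapefile with a Japanese (2-byte) field name, written with a
# Shift-JIS encoded .dbf so other GIS tools can read it.
gdf = gpd.GeoDataFrame(
    {"名前": ["東京", "大阪"]},
    geometry=[Point(139.69, 35.69), Point(135.50, 34.69)],
    crs="EPSG:4326",
)
gdf.to_file("cities.shp", encoding="Shift-JIS")
```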
Thanks,
Kajitani
Have External Tables in Snowflake be accessible in the Visual Query Builder.
Current state: external tables in the Snowflake DBMS are not available on the "Visual Query Builder" tab of the (green) Input Data tool. They are only available on the "Tables" tab.
When using the Output Data tool, it would save me and my cluttered organizational skills a lot of effort if the writing workflow were saved as part of the .yxdb metadata.
I've often had to search to find the workflow that created a .yxdb. I tend to use naming conventions to help me, but it would be easier if the file and/or path could be looked up directly.
cheers,
mark
Please add official support for newer versions of Microsoft SQL Server and the related drivers.
According to the data sources article for Microsoft SQL Server (https://help.alteryx.com/current/DataSources/SQLServer.htm), and validation via a support ticket, only the following products have been tested and validated with Alteryx Designer/Server:
Microsoft SQL Server
Validated On: 2008, 2012, 2014, and 2016.
This is one of the most popular data sources, and the lack of support for newer versions (SQL Server 2017 is already more than two years old) is hard to fathom.
ODBC Driver for SQL Server/SQL Server Native Client
Validated on ODBC Driver: 11, 13, 13.1
Validated on SQL Server Native Client: 10,11
Hi GUI Gang
At the moment, I have a lovely formatted XLS with corporate branding, logos, filled cells, borders, etc. The data from the Alteryx output needs to start in cell B6. I have tried outputting to a named range, but Alteryx destroys all the formatted cells in the data block.
As a workaround discussed on the forums, many Alteryx users pump the output to a hidden "Output" tab and then code =OutputA1 in the formatted sheet. This looks messy to users, who then go hunting for the hidden tab. Personally, I end up writing the workflow output to a temporary CSV file, opening that in Excel, selecting all, and pasting values into the pretty Excel file.
This is fine for one file, but I need to split the output report block by a country field and do this hundreds of times at each month end.
Please can we have an output option that does the same as my workaround: output directly from a workflow to a range in Excel without destroying the workbook's formatting.
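In the meantime, a sketch of the workaround in Python with openpyxl (file, sheet, and data are placeholders); it writes values only and leaves existing cell styling alone, though openpyxl has its own limits (e.g. charts/images may not survive a round-trip):

```python
from openpyxl import load_workbook

wb = load_workbook("corporate_template.xlsx")   # formatted template
ws = wb["Report"]

rows = [["Country", "Sales"], ["UK", 100], ["DE", 200]]  # workflow output
for r, row in enumerate(rows, start=6):         # data block starts at row 6
    for c, value in enumerate(row, start=2):    # ...and at column B
        ws.cell(row=r, column=c, value=value)   # set values, keep styles

wb.save("corporate_output.xlsx")
```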
Jay
Alteryx 2019.4 introduced support for Tableau's .hyper extract format; however, it only supports single-table extracts. .hyper files have supported multiple tables since mid-2018, so I'd like Alteryx to support that as well.
Here are a couple of current use cases (as of February 2020) and one future one.
- We have malaria incidence data that is joined to multiple sets of spatial data. Doing all of the joins in the extract creation process to build a single table extract is not possible due to processing time & memory constraints, so we use a multiple-table extract.
- There are multiple ways to do row level security in Tableau. A common way is to have separate tables for the data & the entitlements and then use calculations at run-time to filter the data, and for that having a multiple table extract is ideal.
- In 2020 Tableau will be introducing new data modeling capabilities (this was first demoed at the 2018 Tableau Conference, there were sessions on it at the 2019 Tableau Conference) where one goal is vastly improved performance for large fact table to fact table joins where previously we'd have to do much more data preparation. This is another case where multiple table extracts would be useful.
I've attached a sample Hyper file with two tables in the extract (it's zipped because the Community site doesn't accept .hyper files).
Supporting alternative schema and table names in Hyper extracts (https://community.alteryx.com/t5/Alteryx-Designer-Ideas/Input-tool-Support-more-than-Extract-Extract...) is a prerequisite for this, because by definition multiple-table extracts have multiple table names.
A related idea is supporting multiple table extracts for the Output tool: https://community.alteryx.com/t5/Alteryx-Designer-Ideas/Support-multiple-table-extracts-in-the-Table...
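For reference, creating a two-table extract with the Hyper API is straightforward; a sketch (the table names and columns are invented for illustration):

```python
from tableauhyperapi import (Connection, CreateMode, HyperProcess,
                             Inserter, SchemaName, SqlType,
                             TableDefinition, TableName, Telemetry)

incidence = TableDefinition(TableName("Extract", "incidence"), [
    TableDefinition.Column("region_id", SqlType.int()),
    TableDefinition.Column("cases", SqlType.int()),
])
regions = TableDefinition(TableName("Extract", "regions"), [
    TableDefinition.Column("region_id", SqlType.int()),
    TableDefinition.Column("name", SqlType.text()),
])

with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    with Connection(hyper.endpoint, "malaria.hyper",
                    CreateMode.CREATE_AND_REPLACE) as conn:
        conn.catalog.create_schema(SchemaName("Extract"))
        conn.catalog.create_table(incidence)   # two tables,
        conn.catalog.create_table(regions)     # one .hyper file
        with Inserter(conn, incidence) as ins:
            ins.add_row([1, 42])
            ins.execute()
```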
Jonathan
Alteryx 2019.4 added support in the Input tool for Tableau .hyper extract files. The tables stored in the .hyper files have a schema and a table name. Tableau's old .tde files and Hyper files created by Alteryx & Tableau Desktop use "Extract.Extract" as the schema.tablename. However when using Tableau's Hyper API the default schema is "public" and the table name is arbitrarily specified by the user or application.
This has two impacts:
1) Without this support Alteryx can't open many .hyper files created by other applications. By way of example I've attached a sample .hyper file (in a .zip because the community software doesn't allow .hyper files) that has the schema.tablename "public.table1".
2) Support for names beyond Extract.Extract is also required in order to support multiple-table extracts (submitted as a separate idea).
Please update the Input tool so the user can select the particular schema and table name from the .hyper file.
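A sketch of the discovery step such a picker would need, using the Hyper API (the file name is a placeholder):

```python
from tableauhyperapi import Connection, HyperProcess, Telemetry

# List every schema and table in an existing .hyper file -- exactly
# the metadata a schema/table picker in the Input tool would show.
with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    with Connection(hyper.endpoint, "some_extract.hyper") as conn:
        for schema in conn.catalog.get_schema_names():
            for table in conn.catalog.get_table_names(schema):
                print(table)  # e.g. "public"."table1" or "Extract"."Extract"
```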
Jonathan
It would be wonderful for Alteryx to be able to connect to and query OData feeds natively, rather than using a 3rd-party driver or custom macro.
OData querying is supported by quite a few familiar products, including Excel and Power BI, SSIS/SSRS, Safe Software's FME, Tableau, and many others. The protocol is used to publish feeds from Microsoft Dynamics and SharePoint, as well as many of the 10,000 publicly available government datasets with APIs (especially those hosted by Socrata).
I didn't see this in the Ideas section, but questions and workarounds have been discussed in the community a few times (11/15, 3/18, 4/18). The suggestions seem to be to buy the $400-600 ODBC driver from CData (or ZappySys), use a VBA script in Excel to trigger a refresh, or create my own Alteryx connector macro (great series, by the way, though most of it was beyond my understanding!).
While I'm not opposed to paying, kludging, or learning to program, each of those is just one more thing to build/buy, install, maintain, and have break at the most inconvenient time 🙂
Thanks,
Chadd
OData Overview:
OData (Open Data Protocol) is an ISO/IEC approved, OASIS standard that defines a set of best practices for building and consuming RESTful APIs. OData helps you focus on your business logic while building RESTful APIs without having to worry about the various approaches to define request and response headers, status codes, HTTP methods, URL conventions, media types, payload formats, query options, etc. OData also provides guidance for tracking changes, defining functions/actions for reusable procedures, and sending asynchronous/batch requests. OData RESTful APIs are easy to consume. The OData metadata, a machine-readable description of the data model of the APIs, enables the creation of powerful generic client proxies and tools.
More info at http://odata.org
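To show how little is needed on the wire, here's a plain-HTTP query against the public OData reference service (the standard $filter/$select/$top query options do the work):

```python
import requests

url = "https://services.odata.org/V4/Northwind/Northwind.svc/Customers"
params = {
    "$filter": "Country eq 'Germany'",
    "$select": "CustomerID,CompanyName",
    "$top": 5,
}
resp = requests.get(url, params=params)
resp.raise_for_status()
for row in resp.json()["value"]:   # OData v4 wraps results in "value"
    print(row["CustomerID"], row["CompanyName"])
```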