Hello,
After using the new "Image Recognition Tool" for a few days, I think you could improve it:
> by showing the input dimension constraints next to each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so that each label ends up with an equivalent number of images),
> and finally, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me it strictly requires RGB images); a possible workaround is sketched just below.
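As a stopgap for that last point, grayscale images can be converted to 3-channel RGB before being fed to the tool. A minimal sketch with the Pillow library (the folder names are made up for the example):

```python
# Pre-processing workaround sketch (not part of the Alteryx tool):
# convert grayscale PNGs to 3-channel RGB so an RGB-only tool accepts them.
from pathlib import Path
from PIL import Image

src = Path("mnist_grayscale")   # hypothetical input folder
dst = Path("mnist_rgb")         # hypothetical output folder
dst.mkdir(exist_ok=True)

for png in src.glob("*.png"):
    # "L" (grayscale) -> "RGB" replicates the single channel three times
    Image.open(png).convert("RGB").save(dst / png.name)
```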
Question: do you plan to let the user choose between CPU and GPU usage in the future?
In any case, thank you again for this new tool. It is certainly perfectible, but it is very simple to use, and I sincerely think it will allow a greater number of people to understand the many use cases made possible by image recognition.
Thank you again.
Kévin VANCAPPEL (France ;-))
Please allow disabling or ignoring conversion errors in the SharePoint List Input tool.
In SharePoint List Input I see the same conversion error about 10 times. Then....
"Conversion Error Limit Reached".
Can you simply show the error once, or allow users to choose to ignore it? (The Union tool already lets users ignore errors.)
I am not even using that SharePoint column in my workflow, yet I have to show the workflow to a third party within the company, and it is very annoying to have errors displayed that do not apply to my workflow.
It would be neat to add a feature to the Output tool to allow grouping by rows, with all the data related to the group column viewable under a drop-down of the selected field.
I've heard that this is possible with a power pivot but would be a nice feature in Alteryx.
Ex. A listing of all customers in a specific city, grouped by the "Neighborhood" column: the output would be a list of all neighborhoods in the city, with an option to drop down on each neighborhood to see its residents and their relevant data.
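Outside Alteryx, the closest equivalent I know of is Excel's row grouping (outline) feature. A rough sketch of the grouped output using pandas and openpyxl, assuming openpyxl's row-grouping API and a made-up customers table:

```python
# Write one header row per neighborhood and collapse its resident rows
# under it, so each group can be expanded via a drop-down in Excel.
import pandas as pd
from openpyxl import Workbook

customers = pd.DataFrame({
    "Neighborhood": ["Old Town", "Old Town", "Riverside"],
    "Resident": ["Ann", "Bob", "Cara"],
    "Age": [34, 41, 29],
})

wb = Workbook()
ws = wb.active
row = 1
for hood, grp in customers.groupby("Neighborhood"):
    ws.cell(row=row, column=1, value=hood)              # group header row
    first, last = row + 1, row + len(grp)
    for r, (_, rec) in enumerate(grp.iterrows(), start=first):
        ws.cell(row=r, column=2, value=rec["Resident"])
        ws.cell(row=r, column=3, value=rec["Age"])
    ws.row_dimensions.group(first, last, hidden=True)   # collapsible group
    row = last + 1
wb.save("customers_grouped.xlsx")
```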
Thanks!
Problem statement -
Currently we store our Alteryx data in the .yxdb file format, and whenever we want to fetch data, the whole dataset is first loaded into memory and only then can we apply a Filter tool to get the required subset, which is a complete waste of time and resources.
Solution -
My idea is to introduce a YXDB SQL statement tool that can be used directly in a workflow to read only the required dataset from a .yxdb file. I hope this would reduce the overall workflow runtime and let the user get the desired data in record time, improving performance and reducing memory consumption. A purely hypothetical example of such a query is sketched below.
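To make the idea concrete, here is a hypothetical query such a tool might accept; no such tool or API exists today, and the file and column names are invented for the illustration:

```python
# Purely hypothetical: the proposed "YXDB SQL" input tool would accept a
# query like this and read only the matching records from the .yxdb file,
# instead of loading the whole file into memory and filtering afterwards.
query = """
    SELECT CustomerID, Region, Sales
    FROM 'sales_history.yxdb'
    WHERE Region = 'EMEA' AND Sales > 10000
"""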
Hello all,
ADBC is a database connection standard (like ODBC or JDBC) but specifically designed for columnar storage (so databases like DuckDB, ClickHouse, MonetDB, Vertica...). This is typically the kind of thing that could make Alteryx way faster.
More info: https://arrow.apache.org/blog/2023/01/05/introducing-arrow-adbc/
Here is a benchmark by the DuckDB team: a 38x improvement
https://duckdb.org/2023/08/04/adbc.html
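For anyone curious what ADBC looks like from Python, a minimal sketch using the SQLite ADBC driver published by the Apache Arrow project (assuming the adbc-driver-sqlite and pyarrow packages are installed); the same DBAPI-style code is meant to work with the DuckDB, PostgreSQL, or Snowflake drivers:

```python
# Connect through ADBC and fetch results as Arrow tables, i.e. columnar
# data end to end, which is what makes ADBC interesting for columnar engines.
import adbc_driver_sqlite.dbapi

with adbc_driver_sqlite.dbapi.connect() as conn:   # in-memory SQLite database
    with conn.cursor() as cur:
        cur.execute("SELECT 1 AS answer")
        table = cur.fetch_arrow_table()            # pyarrow.Table, not tuples
        print(table)
```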
Best regards,
Simon
Writing to XLSB Files using Delete and Append does not behave properly.
Alteryx currently has an issue writing to an XLSB file when using the "Delete and Append" option together with "Take the file/table name From field".
Issue: instead of the old records being replaced, the original rows are left blank and the new records are appended after them (see the example below).
Workaround:
Create a Batch macro to simulate the Take the file/table name From field function without actually using it.
Example of Issue:
| Record ID | Original File | | Updated File |
| --- | --- | --- | --- |
| 1 | Old Data | ----> | |
| 2 | Old Data | ----> | |
| 3 | Old Data | ----> | |
| … | Old Data | ----> | |
| 1200 | Old Data | ----> | |
| 1201 | | ----> | New Data |
| 1202 | | ----> | New Data |
| 1203 | | ----> | New Data |
| … | | ----> | New Data |
I am using Alteryx as an ETL Tool and then QlikSense for data visualization.
Alteryx only provides QVX output, which is the older QlikView format. It works with QlikSense but slows down the system, so QlikSense support suggested using QVD output instead.
I want to suggest supporting QVD files, as QlikSense is now used more widely than QlikView and most users are migrating to it.
It would be more useful and efficient if Alteryx supported the latest file format.
Hello all,
Sometimes, when retrieving your table metadata takes too long, you can get this message:
Initialization Timed Out: Workflow must be run for field meta info to be accurate.
From what I understand, the timeout value is driven by Alteryx and the source system. However, I have cases where the long retrieval time is "normal", and the message really hurts the user experience.
So, I would like a setting to change this default timeout value.
Best regards,
Simon
Hello all,
Apache Doris ( https://doris.apache.org/ ) is a modern data warehouse with a lot of ambition. It's probably the next big thing.
You can read the full documentation here: https://doris.apache.org/docs/get-starting/what-is-apache-doris but, to sum it up, it aims to be THE reference solution for OLAP, claiming even better performance than ClickHouse, DuckDB, or MonetDB. Even benchmarks from the ClickHouse team seem to agree.
Best regards,
Simon
Hello all,
As of today, we use the good old in-memory alias to connect to our data sources. We have several environments, so we use constants to change the name of the in-memory alias at execution time.
To illustrate:
Depending on the environment, the constant « v_gp_contexte » takes different values:
Sounds nice, right? But now we would like to use DCM, and the nightmare begins:
We can't manually change the name or set it from a question:
If we look at the XML of the workflow, we only find an ID, so editing it is useless:
(For information, DCM connections are stored in an SQLite database in C:\Users\{yourname}\AppData\Local\Alteryx)
So, I would like to be able to use DCM inside the in-memory alias (the in-memory alias is stored and can be edited), just like for the in-DB connection alias.
Best regards,
Simon
The ability to output to Amazon WorkDocs via a dedicated Output tool would be very helpful for anyone looking into using WorkDocs for personal or professional purposes. This would be similar in functionality to the OneDrive connector.
Hi all,
At present, Alteryx does not support DSN-free connections to Snowflake using the Bulk Connector. This is a critical functionality for any large company that uses Alteryx - and so I'm hoping that this can be changed in the product in an upcoming release. As a corollary - every DB connection type has to be able to work without DSNs for any medium or large size server instance - so it's worth extending this to check every DB connection type available in Alteryx.
Here are the details:
What is DSN-Free?
In order to be able to run our Alteryx canvasses on a multi-node server - we have to avoid using DSNs - so we generally expand connection strings that look like this:
odbc:DSN=DSNSnowFlakeTest;UID=Username;PWD=__EncPwd1__|||NEWTESTDB.PUBLIC.MYTESTTABLE
to instead have the fully described connection string like this:
odbc:DRIVER={SnowflakeDSIIDriver};UID=Username;pwd=__EncPwd1__;authenticator=Snowflake;WAREHOUSE=compute_wh;SERVER=xnb27844.us-east-1.snowflakecomputing.com;SCHEMA=PUBLIC;DATABASE=NewTestDB;Staging=local;Method=user
For the Snowflake Bulk Loader:
Now - for the Snowflake Bulk Loader the same process does not work and Alteryx gives the classic error below
With DSN:
snowbl:DSN=DSNSnowFlakeTest;UID=Username;pwd=__EncPwd1__;Staging=local;Method=user|||NEWTESTDB.PUBLIC.MYTESTTABLE
Without DSN:
snowbl:driver=SnowflakeDSIIDriver;UID=SeanBAdamsJPMC;pwd=__EncPwd1__;SERVER=xnb27844.us-east-1.snowflakecomputing.com;WAREHOUSE=compute_wh;SCHEMA=PUBLIC;DATABASE=NewTestDB;Staging=local;Method=user|||NEWTESTDB.PUBLIC.MYTESTTABLE
Many thanks
Sean
All the input tools, such as Input Data and Dynamic Input, would have a new "Skip on fail" flag: whether all, part, or none of the requested data can be read, the tool would return whatever data it managed to read and would not raise any error in the workflow.
If the "Skip on fail" flag is false, the system should behave as it does now.
If the "Skip on fail" flag is true, the system should return only the data it managed to read on the default output, and could have a second output connection for the error log, so we can parse it and act on it while the workflow still runs. A sketch of this behaviour is below.
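To illustrate the intent outside Alteryx, a minimal Python sketch of the behaviour the flag would mimic (the file names are made up):

```python
# Read every file we can, keep going on failures, and collect the errors on
# a separate "output" instead of stopping the whole run.
import pandas as pd

files = ["jan.csv", "feb.csv", "corrupted.csv"]
good, errors = [], []

for f in files:
    try:
        good.append(pd.read_csv(f))
    except Exception as exc:            # skip on fail: log it, keep running
        errors.append({"file": f, "error": str(exc)})

data_output = pd.concat(good) if good else pd.DataFrame()   # default output
error_output = pd.DataFrame(errors)                          # second output
```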
I am a big user of the browse tool and the filter option within the browse tool. In many cases I filter on multiple columns at the same time as I'm sure many users do. I am suggesting the following 2 enhancements to filter functionality in the browse tool:
1. After applying some filters, although I can see the filter icon activate at the top of the tool, it is difficult to tell at a glance which columns have filters applied without clicking on every column heading and examining its filter settings. A filter icon at the top of each filtered column would make filtered columns easy to identify, removing the need for users to memorise them.
2. After applying multiple filters, if a user clicks onto another tool in the workflow or anywhere else on the canvas - even accidentally - all filters are removed and have to be reapplied. In my view it would make more sense to make the filters persistent, or at least give users the option to do so. That would be a big time saver.
Have you ever had the business deliver an Excel (EEK!) file to be passed into Alteryx with a different number of header rows (because it looks pretty and is convenient)? Never, you say? Lies!
I suggest adding an option to the Input Data tool to concatenate multiple header rows. This would enable accurate data profiling of output columns and eliminate losses from unnecessary conversion errors. Currently, the options allow us to Start Data Input on Line X; however, if a column header spans multiple rows, the extra header rows have to be handled manually after input, because we can only select the lowest possible row that keeps the data intact. The solution would be to let us specify how many rows contain headers, concatenate them into a single row (ignoring nulls and carriage returns), and output that as the header.
With the current functionality, a file with a variable number of header rows causes forced errors, such as a numeric value being read as a scientific-notation string.
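For illustration, a minimal sketch of the same idea in pandas (the file name and the two-row header are made up for the example):

```python
# Read a file whose header spans two rows, then concatenate the header rows
# into a single row, skipping blank / placeholder cells.
import pandas as pd

df = pd.read_excel("report.xlsx", header=[0, 1])   # two header rows -> MultiIndex

df.columns = [
    " ".join(part for part in map(str, col)
             if part and not part.startswith("Unnamed"))
    for col in df.columns
]
print(df.columns)
```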
If you cancel a workflow while it is writing to a file, the file creation is not rolled back, so a partial file is left behind.
This is problematic when an incremental load relies on the file from a previous run.
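Until something like a rollback exists, one common workaround pattern is to write to a temporary file and rename it into place only on success; a minimal Python sketch of that pattern (the target path is up to you):

```python
# A cancelled or failed run can only ever leave a partial .tmp file behind;
# the real target file is replaced atomically once the write completes.
import os
import pandas as pd

def write_atomically(df: pd.DataFrame, target: str) -> None:
    tmp = target + ".tmp"
    df.to_csv(tmp, index=False)   # partial data only ever lands in the .tmp file
    os.replace(tmp, target)       # atomic rename on the same filesystem
```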
To embed the "Not ok" filter option in the browse tool
After closing the Table or Query option on an Input Data tool, the table layout in the Visual Query Builder view resets, stacking the tables/views on top of each other. It would be great if the layout stayed the way I left it the last time I closed it.
Hi everyone,
Add two additional features to the Directory tool. Something like this:
Use cases:
1. Since it is not possible to use a Folder Browse on the Gallery, this could help a basic user create a list of folders to select from with the help of a drop-down.
2. Directory analysis for cleaning purposes - currently, if you want to get a list of folders with Alteryx, it takes forever on big file servers since Alteryx maps all the files.
Both are achievable today through regex or a bat script; a small sketch of a folders-only listing is shown below.
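For reference, a minimal Python sketch of the folders-only listing such an option would return (the server path is made up):

```python
# List only the immediate sub-folders of a root path, without walking files.
from pathlib import Path

root = Path(r"\\fileserver\share")     # hypothetical file server path
folders = [p for p in root.iterdir() if p.is_dir()]
for f in folders:
    print(f)
```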
Thank you,
Fernando Vizcaino
Using File Browse on Excel files is, first of all, inconsistent between running an Analytic App in Designer and in the Gallery:
Depending on the use case, both behaviours can be the right one:
Thus, my idea is as follows:
Enhancement request for the option to Encrypt ODBC credentials instead of just hashing them