Hello,
After using the new "Image Recognition Tool" for a few days, I think you could improve it:
> by showing the dimensional constraints next to each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so that each label has an equivalent number of images),
> lastly, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me it strictly requires RGB images).
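Until then, a possible workaround is to replicate the gray channel into three RGB channels before feeding the images in; a minimal sketch assuming Pillow is available (file paths are placeholders):

    # Hypothetical workaround: expand a grayscale MNIST-style image to 3-channel RGB
    # so that a tool which insists on RGB input will accept it.
    from PIL import Image

    def grayscale_to_rgb(path_in: str, path_out: str) -> None:
        img = Image.open(path_in).convert("L")  # force single-channel grayscale
        img.convert("RGB").save(path_out)       # replicate the channel into R, G, B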
Question: will you allow the user, in the future, to choose between CPU and GPU usage?
In any case, thank you again for this new tool. It is certainly perfectible, but it is very simple to use, and I sincerely think it will allow a greater number of people to understand the many use cases made possible by image recognition.
Thank you again.
Kévin VANCAPPEL (France ;-))
We have a feature to limit the number of records, and I thought why not have a column limit as well?
Columns take up a lot of space and processing; the more columns we have, the more the workflow slows down. If we could declare at the start to always import only the first 20 columns, any new or unwanted columns in the Excel file would be avoided.
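For reference, here is what the requested behavior looks like outside Alteryx; a minimal sketch with pandas (the file name is a placeholder):

    import pandas as pd

    # Read only the first 20 columns (0..19); any columns added to the
    # Excel file later are simply ignored.
    df = pd.read_excel("input.xlsx", usecols=range(20))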
Hello,
Right now you can write a file into SharePoint. However, sometimes you just want to upload a file. There is already the ability to download (for the SharePoint input). I would like the same for uploading a file (based on a path or workflow dependencies).
Best regards,
Simon
Hi
Currently the Date Time Now tool outputs data only in string format; it would be useful to have an option to output the data in date or datetime format.
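For illustration, this is the conversion users currently have to do downstream; a Python sketch, assuming the tool's default yyyy-MM-dd HH:mm:ss string format:

    from datetime import datetime

    # Parse the string emitted by the Date Time Now tool into a real datetime value.
    now_str = "2024-01-31 09:30:00"  # example value; format assumed
    now_dt = datetime.strptime(now_str, "%Y-%m-%d %H:%M:%S")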
Hello,
A very simple idea:
as of today, there are dedicated connectors to SharePoint, OneDrive and Azure Data Lake.
For all these connectors, the files we can read are limited, very limited: xlsx, csv, yxdb.
The storage location is not relevant; we should be able to read any already-supported file format in these locations (like parquet or shp or whatever).
Best regards,
Simon
In 24.1 the email tool was adjusted such that any error in the workflow prevents the email tool from sending any emails. Previously, if AMP was enabled, the email tool could still send emails even if the workflow contained an error, but this is no longer the case. There are many cases where this is not ideal, one example being:
In larger workflows, I have multiple data streams which, after a point, operate independently. Even if one stream errors, I would like emails to be sent from the other streams. I have tried nesting the email tool within multiple layers of macros, but if any of the parent or child workflows/macros contain an error, the email tool will not send any emails.
I would like a checkbox option in the email tool or workflow configuration that will still allow emails to be sent if the workflow errors. Then, with the use of control containers, I will have full control over email distribution with errors.
My organization uses the SharePoint Files Input and SharePoint Files Output (v2.1.0) and connects with the Client ID, Client Secret, and Tenant ID. After a workflow is saved and scheduled on the Server, users receive the error "Failed to connect to SharePoint AADSTS700082: The refresh token has expired due to inactivity" every 90 days. My organization is not able to extend the 90-day limit or create non-expiring tokens.
It would be great if the SharePoint connectors could automatically refresh the token when it expires so users don't have to open the workflow and do it manually.
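For comparison, an app-only (client credentials) flow never relies on a refresh token; a minimal sketch with the msal package (tenant ID, client ID and secret are placeholders), illustrating the kind of silent re-authentication the connector could do:

    import msal

    # Client-credentials flow: a fresh access token is requested on demand,
    # so there is no 90-day refresh-token inactivity limit to hit.
    app = msal.ConfidentialClientApplication(
        client_id="<client-id>",
        client_credential="<client-secret>",
        authority="https://login.microsoftonline.com/<tenant-id>",
    )
    token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])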
For the Output Tool it would be very beneficial to be able to pass a password in order to populate a password-protected Excel spreadsheet. There appears to be a decent amount of interest based on the Community feedback on the subject.
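For reference, this is already possible outside Alteryx via Excel's COM interface; a sketch with pywin32 (paths and password are placeholders), suggesting the Output Tool could expose the same SaveAs password parameter:

    import win32com.client

    # Save a workbook with an open-password via Excel COM automation.
    excel = win32com.client.Dispatch("Excel.Application")
    wb = excel.Workbooks.Open(r"C:\data\report.xlsx")
    wb.SaveAs(r"C:\data\report_protected.xlsx", Password="s3cret")
    wb.Close()
    excel.Quit()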
Hello all,
As of today, you can populate the Drop Down tool in the Interface category with a query launched from an in-memory connection. I would really appreciate the ability to use an in-db connection instead.
Why?
Because today it means managing two connections instead of one, and finding ways to manage both of them on the Server, etc. Simplicity is key.
Best regards,
Simon
When loading multiple sheets from an Excel file with either the Input Data tool or the Dynamic Input tool, I usually want a field to identify which sheet the data came from. Currently I have to import the Full Path and then remove everything except the sheet name.
It would be great if there was an option to output the SheetName as a field.
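For reference, the manual cleanup in question; a sketch assuming the usual Full Path form path.xlsx|||SheetName:

    # Strip everything up to and including the '|||' separator, keeping the sheet name.
    full_path = r"C:\data\input.xlsx|||Sheet1"  # example value
    sheet_name = full_path.split("|||")[-1]     # -> "Sheet1"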
Hello,
There are several dozen data sources... maybe it would be useful to have a search box for them?
Best regards,
Simon
Hello,
As of now, you can't choose the DCM connections to synchronize. It's either all or none.
However, I have one designer and two servers (Sandbox/Production). Most connections must be common, but not all.
Best regards,
Simon
Connecting to Smartsheets using Alteryx Desktop (and by extension, Alteryx Server) is extremely cumbersome. If a user wants to read data from Smartsheet, they are required to get an API token (preferred) or use a username/password.
Then do one of the following to read data from Smartsheets:
1. a. Install an ODBC driver
b. Configure a DSN connection for ODBC
c. Use the Input Data tool with a generic ODBC connection
or
2. Use Python
To write data to Smartsheets, a user can use Python or upload the data using an API call - both very hard for end users, especially if they're not Python developers.
Regardless, all of these are problematic. On the server I manage, I have over 15 ODBC connections to Smartsheets, and it's getting very hard to upgrade the server hardware because of them. Creating a native connector for input/output of data to Smartsheets would eliminate the headache of managing ODBC connections and make it simple for Alteryx Desktop users to read and write data.
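For context, the Python route looks roughly like this today (a sketch using the smartsheet-python-sdk; the token and sheet ID are placeholders) - exactly the kind of code a native connector would spare end users:

    import smartsheet

    # Read a sheet through the Smartsheet SDK using an API token.
    client = smartsheet.Smartsheet("<api-token>")
    sheet = client.Sheets.get_sheet(1234567890)  # sheet ID placeholder
    for row in sheet.rows:
        print([cell.value for cell in row.cells])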
Idea: “Create THEN Append” Output Mode for Files and Databases
When outputting data in Alteryx—whether to an Excel file or a database table—the standard practice is:
First run: Set the output tool to “Create New Sheet” or “Create New Table.”
Subsequent runs: Manually change the setting to “Append to Existing.”
This works fine, but it’s very easy to forget to switch from "Create" to "Append" after the first run—especially in iterative development or when building workflows for others.
Suggested Enhancement:
Add a new option to the Output Data tool called:
“Create THEN Append”
Behavior:
On the first run, it creates the file/sheet or table.
On future runs, it automatically switches to append mode without needing manual intervention.
Why This Matters:
Prevents data loss from accidentally overwriting files/tables.
Improves automation and reusability.
Makes workflows more reliable when shared with others.
Mirrors functionality found in many ETL tools that allow dynamic "upsert-like" behavior.
Applies To:
Excel outputs (new sheet creation vs. append)
Database outputs (new table vs. append to existing)
CSV or flat file outputs where structure remains consistent
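For illustration, the requested behavior sketched in Python with pandas/openpyxl (names and paths assumed): create on the first run, append automatically on every later run:

    import os
    import pandas as pd

    def create_then_append(df: pd.DataFrame, path: str, sheet: str = "Sheet1") -> None:
        if not os.path.exists(path):
            # First run: create the file and the sheet.
            df.to_excel(path, sheet_name=sheet, index=False)
        else:
            # Later runs: append below the existing rows, no manual switch needed.
            with pd.ExcelWriter(path, mode="a", engine="openpyxl",
                                if_sheet_exists="overlay") as writer:
                start = writer.sheets[sheet].max_row
                df.to_excel(writer, sheet_name=sheet, index=False,
                            header=False, startrow=start)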
Hello,
Version 2021.4 does not allow workflows to run if any of their input files are open. It would be great to have an option for the Input tool that switches on/off the ability to read from open files. Some of my input files have frequent data changes, and I tend to keep them open while testing/simulating results.
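As a stopgap, copying the open file to a temporary location before reading usually works, since Windows generally allows copying a workbook that Excel has open; a rough sketch:

    import os
    import shutil
    import tempfile

    import pandas as pd

    def read_even_if_open(path: str) -> pd.DataFrame:
        # Copy first: the copy is never locked, so the read always succeeds.
        tmp = os.path.join(tempfile.gettempdir(), os.path.basename(path))
        shutil.copy2(path, tmp)
        return pd.read_excel(tmp)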
Thank you,
abdou
Hi all,
I prepare formatted reports for my stakeholders, and they want these sent straight to SharePoint; this can be achieved via OneDrive shortcuts on a laptop. However, when submitting the workflow for full automation, the server's C drive is not set up with the appropriate shortcuts, and our admin team does not allow it.
So my request is to have the SharePoint output tool upgraded to push formatted files to SharePoint.
Thank you!
Please improve the Excel XLSX output options in the Output tool, create a new Excel Output tool,
or enhance the Render tool to include an Excel output option with no focus on margins, paper size, or paper orientation.
The problem with the current Basic Table and Render tools is that they are geared towards reporting, with a focus on page size and margins.
Many of us use Excel as simply a general output method, with no consideration for fitting the output on a printed page.
The new tool or Render enhancement would handle different formats/different schemas without the need for a batch macro, and would include the options below.
The only current option to export different schemas to different Sheets in one Excel file, without regard to paper formatting, is to use a batch macro and include the CReW macro Wait a Second, to allow Excel to properly shut down before a new Sheet is created, to avoid file-write-contention issues.
Including the Wait a Second macro increased the completion time for one of my workflows by 50%, as shown in the screenshots below.
I have a Powershell script that includes many of the formatting options below, but it would be a great help if a native Output or Reporting tool included these options:
Allow options below for specific selected Sheet names, or for All Sheets
AllColumns_MaxWidth: Maximum width for ALL columns in the spreadsheet. Default value = 50. This value can be changed for specific columns by using option Column_SetWidth.
Column_SetWidth: Set selected columns to an exact width. For the selected columns, this value will override the value in AllColumns_MaxWidth.
Column_Centered: Set selected columns to have text centered horizontally.
Column_WrapText: Set selected columns to Wrap text.
AllCells_WrapText: Checkbox: wrap text in every cell in the entire worksheet. Default value = False.
AllRows_AutoFit: Checkbox: to set the height for every row to autofit. Default value False.
Header_Format: checkbox for Bold, specify header cells background color, Border size: 1pt, 2pt, 3pt, and border color, Enable_Data_Filter: checkbox
Header_freeze_top_row: checkbox, or specify A2:B2 to freeze panes
Sheet_overflow: checkbox: if the number of Sheet rows exceeds Excel limit, automatically create the next sheet with "(2)" appended
Column_format_Currency: Set selected columns to Currency: currency format, with comma separators, and negative numbers colored red.
Column_format_TwoDecimals: Set selected columns to Two decimals: two decimals, with comma separators, and negative numbers colored red.
Note: If the same field name is used in Column_Currency and Column_TwoDecimals, the field will be formatted with two decimals, and not formatted as currency.
Column_format_ShortDate: Set selected columns to Short Date: the Excel default for Short Date is "MM/DD/YYYY".
File_suggest_read_only: checkbox: Set flag to display this message when a user opens the Excel file: "The author would like you to open 'Analytic List.xlsx' as read-only unless you need to make changes. Open as read-only?"
vb code: xlWB.ReadOnlyRecommended = True
File_name_include_date_time: checkboxes to add file name Prefix or Suffix with creation Date and/or Time
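For reference, a few of these options sketched with openpyxl (file, sheet and column names are placeholders); a native tool would expose the same knobs without any scripting:

    from openpyxl import load_workbook
    from openpyxl.styles import Alignment, Font

    wb = load_workbook("report.xlsx")
    ws = wb["Sheet1"]
    ws.column_dimensions["A"].width = 50        # Column_SetWidth
    ws.freeze_panes = "A2"                      # Header_freeze_top_row
    for cell in ws[1]:                          # Header_Format: bold header row
        cell.font = Font(bold=True)
    for cell in ws["B"]:                        # Column_Centered
        cell.alignment = Alignment(horizontal="center")
    wb.save("report.xlsx")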
========
Examples:
My only current option: use a batch macro, plus a Wait a Second macro, to write different formats/schemas to multiple Sheets in one Excel file:
Using the Wait a Second macro, to allow Excel to shut down before writing a new Sheet, to avoid write-contention issues, results in a workflow that runs 50% longer:
Hello,
As of today, we can't choose the exact file format for Hadoop when writing/creating a table. There are several file formats, each with its own specificities.
Therefore I suggest the ability to choose this file format:
-by default on the connection (in-db connection or in-memory alias),
-per write, on the writing tool itself.
Best regards,
Simon
Hello all,
We all pretty much love the in-memory Multi-Row Formula tool. Easy to use, etc. However, the in-db counterpart does not exist.
I see it as a wizard that would generate windowing functions like LEAD or LAG (see the sketch below):
https://mode.com/sql-tutorial/sql-window-functions/
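For instance, a multi-row "previous value" could translate to SQL like this (a sketch; the table and column names are made up):

    # The SQL an in-db multi-row wizard might generate for "value from the previous row".
    query = """
    SELECT
        order_id,
        amount,
        LAG(amount, 1) OVER (PARTITION BY customer_id ORDER BY order_date) AS prev_amount
    FROM orders
    """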
Best regards,
Simon
Hello all,
As of today, you can only (officially) connect to PostgreSQL through ODBC with the Simba driver.
Help page:
https://help.alteryx.com/current/en/designer/data-sources/postgresql.html#postgresql
You have to download the driver from your license page.
However, there is a perfectly fine official driver for PostgreSQL here: https://www.postgresql.org/ftp/odbc/releases/
I would like Alteryx to support it, for several obvious reasons:
1/ I don't want several drivers for the same database;
2/ the Simba driver does not support the latest releases of PostgreSQL;
3/ the Simba driver is somewhat less robust than the official driver;
4/ well... it's the official driver, and the current situation leads to unnecessary friction between Alteryx admins/users and the PostgreSQL DB admins.
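For what it's worth, the official driver already works through a generic ODBC connection; a sketch with pyodbc (server, database and credentials are placeholders), assuming the "PostgreSQL Unicode" driver from the link above is installed:

    import pyodbc

    # Connect through the official PostgreSQL ODBC driver (psqlODBC).
    conn = pyodbc.connect(
        "Driver={PostgreSQL Unicode};"
        "Server=db.example.com;Port=5432;"
        "Database=mydb;Uid=myuser;Pwd=secret;"
    )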
Best regards,
Simon
Hello all,
As of today, you must set which database (e.g. Snowflake, Vertica...) you connect to in your in-db connection alias. This is fine, but I think we should also be able to define the version/release of the database. There are a lot of new database features that Alteryx could use to improve user experience, performance, and security (e.g. in Hive 3.0, there is a catalog that could be used in the Visual Query Builder instead of slowly querying each schema).
I think of a menu with the following choices:
-default (legacy), with an indication of which version Alteryx assumes for that database;
-autodetect (with a version query launched each time you run the workflow, when possible); if the detected version is higher than the last supported one, show a warning and run with the last supported version's settings (see the sketch below);
-manually set a release (to avoid launching the version query every time); the choices would be every version Alteryx supports.
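The autodetect option could be as cheap as one query at run time; a rough sketch (assuming a PostgreSQL-style version() function; other databases expose equivalents):

    # Hypothetical autodetect: ask the database for its version once per run.
    def detect_version(cursor) -> str:
        cursor.execute("SELECT version()")  # the exact query varies per database
        return cursor.fetchone()[0]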
Best regards,
Simon