Hello,
After using the new "Image Recognition Tool" for a few days, I think you could improve it:
> by adding the dimensional constraints in front of each of the pre-trained models,
> by adding a proper tool to split the training data correctly, so that each label gets an equivalent number of images (see the sketch after this list),
> lastly, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me it requires RGB images).
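To illustrate the balancing point, here is a minimal Python sketch of what such a tool could do, assuming a hypothetical layout with one sub-folder of images per label:

```python
import os
import random

def balance_by_label(root_dir, seed=42):
    """Keep an equal number of image paths per label, assuming root_dir
    contains one sub-folder per label (hypothetical layout)."""
    random.seed(seed)
    by_label = {}
    for label in os.listdir(root_dir):
        folder = os.path.join(root_dir, label)
        if os.path.isdir(folder):
            by_label[label] = [os.path.join(folder, f) for f in os.listdir(folder)]
    n = min(len(paths) for paths in by_label.values())  # size of the smallest class
    return {label: random.sample(paths, n) for label, paths in by_label.items()}
```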
Question: will you allow the user to choose between CPU and GPU usage in the future?
In any case, thank you again for this new tool: it can certainly be improved, but it is very simple to use, and I sincerely think it will allow a greater number of people to understand the many use cases made possible by image recognition.
Thank you again.
Kévin VANCAPPEL (France ;-))
Hello all,
As of today, you can only (officially) connect to PostgreSQL through ODBC with the Simba driver.
Help page:
https://help.alteryx.com/current/en/designer/data-sources/postgresql.html#postgresql
You have to download the driver from your license page.
However, there is a perfectly fine official ODBC driver for PostgreSQL here: https://www.postgresql.org/ftp/odbc/releases/
I would like Alteryx to support it, for several obvious reasons:
1/ I don't want several drivers for the same database.
2/ The Simba driver is not supported for the latest releases of PostgreSQL.
3/ The Simba driver is somewhat less robust than the official driver.
4/ Well... it's the official driver, and not supporting it leads to unnecessary friction between Alteryx admins/users and PostgreSQL DB admins.
Best regards,
Simon
Hello,
As of today, DCM is great for storing credentials. But once we want to go deeper technically, like using macros or Applications, it's really bad. One of the things I hate is that we can't retrieve any information from the DCM connection, just the ID. That's not good for logs, and really bad for understanding a workflow or building conditional logic based on the connection type or name.
Here is an example:
Nice, I managed to retrieve an ID, but I have no idea what it means: what kind of connection is it? What's its name?
Best regards,
Simon
Hello,
Here is a proposal about an issue that I face frequently at work.
Problem Statement -
Workflows that are scheduled or run manually on Server frequently fail because the Excel input file is open by another user, because someone forgot to close the file before going out of office, or for some other reason.
Proposed Solution -
The Input/Dynamic Input tools should have the ability to read Excel files even when they are open, so that workflows do not fail. This would have a huge impact in terms of time savings and would avoid constant monitoring of the scheduled workflows.
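For reference, here is a rough Python sketch of the kind of workaround the tools could apply internally. It assumes Excel's lock blocks writing but not copying (usually, though not always, the case), so copying the workbook to a temporary file and reading the copy works even while the file is open:

```python
import os
import shutil
import tempfile
import pandas as pd

def read_excel_even_if_open(path, **kwargs):
    """Copy the workbook to a temp file first, then read the copy.
    Excel's lock normally blocks writing, not copying, so this usually
    succeeds even while another user has the file open."""
    with tempfile.NamedTemporaryFile(suffix=".xlsx", delete=False) as tmp:
        tmp_path = tmp.name
    try:
        shutil.copy2(path, tmp_path)
        return pd.read_excel(tmp_path, **kwargs)
    finally:
        os.remove(tmp_path)
```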
Hi all,
When preparing reports with formatting for my stakeholders, they want these sent straight to SharePoint. This can be achieved via OneDrive shortcuts on a laptop; however, when submitting the workflow for full automation, the server's C drive is not set up with the appropriate shortcuts, and our admin team does not allow that.
So my request is to have the SharePoint Output tool upgraded to push formatted files to SharePoint.
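For what it's worth, pushing an already-formatted file looks feasible through the Microsoft Graph API. A hedged Python sketch (the site ID and token acquisition are assumed to be handled elsewhere, and files over 4 MB would need an upload session instead):

```python
import requests

def upload_to_sharepoint(token, site_id, folder_path, file_name, local_path):
    """Upload a file (< 4 MB; larger files need an upload session) to a
    SharePoint document library via Microsoft Graph."""
    url = (f"https://graph.microsoft.com/v1.0/sites/{site_id}"
           f"/drive/root:/{folder_path}/{file_name}:/content")
    with open(local_path, "rb") as f:
        resp = requests.put(url, headers={"Authorization": f"Bearer {token}"}, data=f)
    resp.raise_for_status()
    return resp.json()
```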
Thank you!
Whenever I overwrite an Excel sheet with data of the same format but different values (e.g. Q2 data versus Q1 data), all of my Pivot Tables break and I have to manually recreate them, even though the schema didn't change. Somehow the Table is being deleted and replaced with a completely different Table, which is what causes the Pivot Tables to break. The only way to avoid this is to manually set the Cell Range, but who has time for that? The only solution I have found is to manually copy all values and paste them over the existing data, which gets very inefficient the more sheets you are working with.
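The copy/paste-values workaround can at least be scripted. Here is a minimal Python sketch with openpyxl that writes new values into the existing sheet cell by cell instead of recreating it, so the Table the Pivot Tables point at keeps its name and range (whether pivot caches survive a round-trip depends on the openpyxl version, so treat this as a sketch, not a guarantee):

```python
from openpyxl import load_workbook

def overwrite_values(path, sheet_name, rows):
    """Write new values into an existing sheet cell by cell, instead of
    deleting and recreating the sheet, so the Table the Pivot Tables
    point at keeps its name and range."""
    wb = load_workbook(path)
    ws = wb[sheet_name]
    for r, row in enumerate(rows, start=1):
        for c, value in enumerate(row, start=1):
            ws.cell(row=r, column=c, value=value)
    # Note: cells beyond the new data are left untouched; clear them
    # first if the new extract can be shorter than the old one.
    wb.save(path)
```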
Hello all,
As of today, you can populate the Drop Down tool in the Interface category with a query launched from an in-memory connection. I would really appreciate the ability to use an in-db connection instead.
Why?
Because the current situation means managing two connections instead of one, finding ways to manage both of them on Server, etc. Simplicity is key.
Best regards,
Simon
Hello,
It's nice to have this OpenAI Connector, but it seems it must use the default OpenAI URL. In my company, we use OpenAI on an Azure instance, and I'm unable to connect to it.
(By the way, I know the pre-sales teams have developed a lot of connectors for Fireworks, Mistral, etc. It would be very cool to have them available.)
Best regards,
Simon
When loading multiple sheets from an Excel file with either the Input Data tool or the Dynamic Input tool, I usually want a field to identify which sheet the data came from. Currently I have to import the Full Path and then remove everything except the sheet name.
It would be great if there were an option to output the SheetName as a field.
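As a point of comparison, this is nearly a one-liner in pandas, where sheet_name=None returns a dict keyed by sheet name. A minimal sketch of the requested behavior:

```python
import pandas as pd

def read_all_sheets_with_name(path):
    """Read every sheet and add a SheetName column, i.e. the field the
    idea asks the Input tools to emit directly."""
    sheets = pd.read_excel(path, sheet_name=None)  # dict: {sheet name: DataFrame}
    frames = []
    for name, df in sheets.items():
        df = df.copy()
        df["SheetName"] = name
        frames.append(df)
    return pd.concat(frames, ignore_index=True)
```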
My organization uses the SharePoint Files Input and SharePoint Files Output tools (v2.1.0) and connects with the Client ID, Client Secret, and Tenant ID. After a workflow is saved and scheduled on the server, users receive the error "Failed to connect to SharePoint AADSTS700082: The refresh token has expired due to inactivity" every 90 days. My organization is not able to extend the 90-day limit or create non-expiring tokens.
It would be great if the SharePoint connectors could automatically refresh the token when it expires, so users don't have to open the workflow and do it manually.
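Since the connection already holds a Client ID, Client Secret, and Tenant ID, the connector could presumably request a fresh app-only token on every run via the standard Azure AD client-credentials flow, which involves no refresh token at all. A minimal Python sketch of that flow (the scope value assumes Microsoft Graph):

```python
import requests

def get_app_token(tenant_id, client_id, client_secret):
    """Fetch a fresh app-only access token via the client-credentials
    flow; no refresh token is involved, so nothing expires from inactivity."""
    resp = requests.post(
        f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token",
        data={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": "https://graph.microsoft.com/.default",
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]
```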
Hello all,
As of today, when you want to retrieve or create a file on Apache Spark for Databricks, you have only two choices: CSV and Avro.
However, the Parquet file type is clearly missing:
-it's faster
-it's better for storage
-it's a standard, already supported as input/output by Alteryx and for HDFS, so it doesn't seem hard to add here.
Best regards,
Simon
Connecting to Smartsheet using Alteryx Desktop (and by extension, Alteryx Server) is extremely cumbersome. If a user wants to read data from Smartsheet, they are required to get an API token (preferred) or use a username/password,
then do one of the following to read the data:
1. a. Install an ODBC driver
b. Configure a DSN connection for ODBC
c. Use the Input Data tool with a generic ODBC connection
or
2. Use Python
To write data to Smartsheet, a user can use Python or upload the data using an API call - both very hard for end users, especially if they're not Python developers.
Regardless, all of these are problematic. On the server I manage, I have over 15 ODBC connections to Smartsheet, and it's getting very hard to upgrade the server hardware because of them. Creating a native connector for input/output of data to Smartsheet would eliminate the headache of managing ODBC connections and make it simple for Alteryx Desktop users to read and write data.
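For context, the Python route mentioned above typically goes through the smartsheet-python-sdk. A minimal read sketch (sheet ID and token assumed) looks like this, which is exactly the plumbing a native connector would hide from end users:

```python
import smartsheet  # pip install smartsheet-python-sdk

def read_smartsheet(api_token, sheet_id):
    """Pull a sheet's rows as a list of dicts keyed by column title."""
    client = smartsheet.Smartsheet(api_token)
    sheet = client.Sheets.get_sheet(sheet_id)
    columns = {col.id: col.title for col in sheet.columns}
    return [
        {columns[cell.column_id]: cell.value for cell in row.cells}
        for row in sheet.rows
    ]
```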
For companies that have migrated to OneDrive/Teams for data storage, employees need to be able to dynamically input and output data within their workflows in order to schedule a workflow on Alteryx Server and avoid building batch macros.
With many organizations migrating to OneDrive, a Dynamic Input/Output tool for OneDrive and SharePoint is needed.
The enhancement should have the following components:
OneDrive/SharePoint Directory Tool
OneDrive/SharePoint Dynamic Input Tool
OneDrive/SharePoint Dynamic Output Tool
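To make the Directory tool component concrete, here is a hedged Python sketch of the underlying Microsoft Graph call it could wrap: list the children of a folder, following paging links (drive ID and token acquisition assumed to be handled elsewhere):

```python
import requests

def list_folder(token, drive_id, folder_path):
    """Directory-tool analogue: list files under a OneDrive/SharePoint
    folder via Microsoft Graph, following paging links."""
    url = (f"https://graph.microsoft.com/v1.0/drives/{drive_id}"
           f"/root:/{folder_path}:/children")
    headers = {"Authorization": f"Bearer {token}"}
    items = []
    while url:
        resp = requests.get(url, headers=headers)
        resp.raise_for_status()
        data = resp.json()
        items.extend(data["value"])
        url = data.get("@odata.nextLink")  # next page, if any
    # keep files only; folders carry a "folder" facet instead
    return [(i["name"], i.get("size")) for i in items if "file" in i]
```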
Hello,
As of today, we can't choose the exact file format for Hadoop when writing/creating a table. There are several file formats, each with its specificities.
Therefore I suggest the ability to choose this file format:
-by default on the connection (in-db connection or in-memory alias)
-per writing tool, to override the default.
Best regards,
Simon
Hello all,
As of today, you must set which database (e.g. Snowflake, Vertica...) you connect to in your in-db connection alias. This is fine, but I think we should also be able to define the version/release of the database. There are a lot of new database features that Alteryx could use to improve user experience, performance and security. (e.g. in Hive 3.0, there is a catalog that could be used in the Visual Query Builder instead of querying each schema slowly.)
I think of a menu with the following choices:
-default (legacy), stating which version Alteryx assumes by default for that database
-autodetect (with a query launched every time you run the workflow, when possible); if the detected version is newer than the last supported one, show a warning and run with the last supported version's settings
-manually set a release (to avoid launching the version query every time); the choices would be every version Alteryx supports
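A minimal Python sketch of the autodetect logic, assuming a DB-API cursor and a hypothetical list of supported releases (the version query and its parsing vary per engine):

```python
def detect_db_version(cursor, supported=("3.0", "3.1", "4.0")):
    """Autodetect sketch: query the engine version (syntax varies per
    database; 'SELECT version()' works for PostgreSQL and recent Hive),
    then fall back to the last supported release with a warning."""
    cursor.execute("SELECT version()")
    raw = cursor.fetchone()[0]
    version = raw.split()[1] if " " in raw else raw  # crude, engine-specific parse
    if version > supported[-1]:  # lexicographic; real code would parse into tuples
        print(f"Warning: {version} is newer than {supported[-1]}; "
              f"running with {supported[-1]} settings.")
        return supported[-1]
    return version
```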
Best regards,
Simon
Hello all,
As you all know, you can call APIs with the Alteryx Download tool. However, this tool is not that easy to configure.
On the other hand, the API world makes heavy use of tools such as Postman or Bruno (an open-source clone), which allow easy testing, debugging... I use one every time I have to work on a REST API, and then I try to translate the calls to the final tool (such as the Alteryx Download tool). Both tools offer "collections", i.e. sets of requests, and also environment configuration. Here are some examples from the project I'm working on:
And you can even get some code.
I would like to leverage those collections in my Download tool configuration; that would be much easier to use!
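To show this is not far-fetched, here is a minimal Python sketch that flattens a Postman v2.1 collection export into the pieces a Download tool configuration needs (name, method, URL, headers), based on the published collection format:

```python
import json

def requests_from_collection(path):
    """Flatten a Postman v2.1 collection export into (name, method, url,
    headers) tuples, i.e. the pieces a Download tool configuration needs."""
    with open(path, encoding="utf-8") as f:
        collection = json.load(f)

    def walk(items):
        for item in items:
            if "item" in item:            # folder: recurse into it
                yield from walk(item["item"])
            elif "request" in item:       # leaf request
                req = item["request"]
                url = req["url"]["raw"] if isinstance(req["url"], dict) else req["url"]
                headers = {h["key"]: h["value"] for h in req.get("header", [])}
                yield item["name"], req["method"], url, headers

    return list(walk(collection.get("item", [])))
```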
Best regards,
Simon
Hello all,
As of today, we can easily copy or duplicate a table with the in-database tools. This is really useful when you want data in a development environment coming from the production environment.
But can we really?
Short answer: no, we can't do it in these cases:
-partitions
-any constraints, such as primary/foreign keys
And even if those ideas were implemented, it would still mean setting these parameters manually.
So my proposal is simply a "clone table" tool that would clone the table from its SHOW CREATE TABLE statement and just let you specify the destination path (base.table).
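In other words, the tool would boil down to something like this hedged Python sketch, assuming a DB-API cursor (the plain-text name swap is naive and only for illustration):

```python
def clone_table(cursor, source, destination, copy_data=True):
    """Sketch of the proposed "clone table" tool: replay the source's
    SHOW CREATE TABLE DDL (which carries partitions and constraints)
    under the destination name, then optionally copy the rows."""
    cursor.execute(f"SHOW CREATE TABLE {source}")
    # Engine-specific detail: some engines return the DDL in one cell,
    # others split it across rows; this assumes one row per DDL line.
    ddl = "\n".join(row[0] for row in cursor.fetchall())
    cursor.execute(ddl.replace(source, destination, 1))  # naive name swap
    if copy_data:
        cursor.execute(f"INSERT INTO {destination} SELECT * FROM {source}")
```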
Best regards,
Simon
Hello all,
Big picture: on Hadoop, a table can be
-internal (it's managed by Hive or Impala and acts like a table in any other database)
-external (it's managed by Hadoop, can be shared among the different Hadoop engines such as Hive and Impala, and by default is not deleted when you drop the table)
For info, about deletion of external tables:
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/using-hiveql/content/hive_drop_external_table_...
Alteryx only creates internal tables, while it would be nice to have the ability to create external tables that we can query with several tools (Hive, Impala, etc.).
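To illustrate, the DDL difference is small. A minimal Python sketch that builds either statement (the column spec and location below are hypothetical):

```python
def create_table_ddl(name, columns, external=False, location=None):
    """Build Hive/Impala DDL; the EXTERNAL keyword plus LOCATION clause
    is the only difference the idea asks Alteryx to expose."""
    kind = "EXTERNAL TABLE" if external else "TABLE"
    ddl = f"CREATE {kind} {name} ({columns}) STORED AS PARQUET"
    if external and location:
        ddl += f" LOCATION '{location}'"
    return ddl

# e.g. create_table_ddl("sales", "id INT, amount DOUBLE",
#                       external=True, location="/data/shared/sales")
```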
It should be configurable
-by default on the connection
-per tool, to override the default
Best regards,
Simon
Hello all,
We all pretty much love the in-memory Multi-Row Formula tool. Easy to use, etc. However, the in-db counterpart does not exist.
I see it as a wizard that would generate windowing functions like LEAD or LAG:
https://mode.com/sql-tutorial/sql-window-functions/
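A minimal Python sketch of what such a wizard could generate from a multi-row-style reference like [Row-1:field] (the translation rule is my assumption of how the wizard might work):

```python
def multirow_to_sql(field, offset, partition_by, order_by):
    """Translate a multi-row-formula-style reference into the
    window-function SQL an in-db counterpart could generate."""
    func = "LAG" if offset < 0 else "LEAD"
    return (f"{func}({field}, {abs(offset)}) OVER "
            f"(PARTITION BY {partition_by} ORDER BY {order_by})")

# e.g. multirow_to_sql("amount", -1, "customer_id", "order_date")
# -> "LAG(amount, 1) OVER (PARTITION BY customer_id ORDER BY order_date)"
```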
Best regards,
Simon
Hello,
Version 2021.4 does not allow workflows to run if any of their input files are open... It would be great to have an option on the Input tool that switches on/off the ability to read from open files. Some of my input files have frequent data changes, and I tend to keep them open while testing/simulating results.
Thank you,
abdou