Hello,
After using the new "Image Recognition Tool" for a few days, I think you could improve it:
> by displaying the input dimension constraints next to each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so that each label gets an equivalent number of images),
> lastly, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me it strictly requires RGB images; a workaround sketch follows this list).
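Until grayscale support lands, here is a minimal workaround sketch, assuming the Pillow package and a local MNIST image (the file names are hypothetical):

from PIL import Image

img = Image.open("mnist_digit.png")   # single-channel grayscale image
rgb = img.convert("RGB")              # replicates the gray channel into R, G and B
rgb.save("mnist_digit_rgb.png")       # now accepted by tools that require RGB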
Question: do you plan to let the user choose between CPU and GPU usage in the future?
In any case, thank you again for this new tool, it is certainly perfectible, but very simple to use, and I sincerely think that it will allow a greater number of people to understand the many use cases made possible thanks to image recognition.
Thank you again
Kévin VANCAPPEL (France ;-))
Hello all,
As of today, when you want to read or write a file on Apache Spark for Databricks, you have only two choices: CSV and Avro.
However, the Parquet file type is clearly missing (a stopgap sketch follows this list):
-it's faster
-it's better for storage
-it's a standard, already supported as input/output of Alteryx and for HDFS, so it doesn't seem hard to add here.
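In the meantime, a stopgap sketch from the Python tool, assuming the pyarrow package is installed (the file names are hypothetical):

import pyarrow.csv as pv
import pyarrow.parquet as pq

# Convert a CSV written by Alteryx into a Parquet file
table = pv.read_csv("export.csv")
pq.write_table(table, "export.parquet")  # columnar and compressed (snappy by default)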
Best regards,
Simon
Hello all,
As of today, we can easily copy or duplicate a table with the in-database tools. This is really useful when you want data in a development environment coming from a production environment.
But can we really?
Short answer: no. We can't do it in these cases:
-partitions
-any constraints, such as primary/foreign keys
And even if those were implemented, it would still mean setting these parameters manually.
So my proposal is simply a "clone table" tool that would recreate the table from its SHOW CREATE TABLE statement and just let you specify the destination path (base.table). A sketch of the logic follows.
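A minimal sketch of that logic, assuming a pyodbc connection to a database that understands SHOW CREATE TABLE (Hive, Impala, MySQL...); the DSN and table names are hypothetical:

import pyodbc

conn = pyodbc.connect("DSN=MyHive")  # hypothetical DSN
cur = conn.cursor()

# Fetch the full DDL of the source table (may span several result rows)
ddl = "\n".join(row[0] for row in cur.execute("SHOW CREATE TABLE prod_db.sales"))

# Point the DDL at the destination path, then replay it
cur.execute(ddl.replace("prod_db.sales", "dev_db.sales", 1))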
Best regards,
Simon
Hello all,
Big picture: on Hadoop, a table can be
-internal (it's managed by Hive or Impala and acts like a table in any other database)
-external (the data is managed by Hadoop itself, can be shared among the different Hadoop engines such as Hive and Impala, and by default is not deleted when you drop the table)
For more information about dropping external tables:
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/using-hiveql/content/hive_drop_external_table_...
Alteryx only creates internal tables; it would be nice to also be able to create external tables that we can then query with several engines (Hive, Impala, etc.).
This should be implemented:
-by default, at the connection level
-per tool, when we want to override the default
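For illustration, the kind of DDL such an option could emit, sent here over a hypothetical ODBC connection (the schema and location are made up):

import pyodbc

conn = pyodbc.connect("DSN=MyHive")  # hypothetical DSN
conn.cursor().execute("""
    CREATE EXTERNAL TABLE mydb.events (
        event_id   BIGINT,
        event_date STRING
    )
    STORED AS PARQUET
    LOCATION '/data/shared/events'
""")
# Being external, the data at /data/shared/events survives a DROP TABLE by default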
Best regards,
Simon
Hello,
It's nice to have this OpenAI Connector, but it seems to be tied to the default OpenAI URL. In my company we use OpenAI on an Azure instance, and I'm unable to connect to it.
(By the way, I know pre-sales teams have developed a lot of connectors for Fireworks, Mistral, etc.; it would be very cool to have those available too.)
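For reference, this is roughly what the difference looks like with the openai Python package; the endpoint, deployment name, and API version below are placeholders:

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",  # company Azure instance
    api_key="...",
    api_version="2024-02-01",  # assumed; depends on your deployment
)
resp = client.chat.completions.create(
    model="my-gpt-4-deployment",  # Azure uses deployment names, not model names
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)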
Best regards,
Simon
Referencing the previous idea: Inputs/Output should have the option to read/write a compressed file (ZIP or GZIP)
This idea has been implemented for inputting .zip files. However, we still need to use the Run Command workaround for outputs. It's very common for users to want to output their .csv, .xlsx, or .pdf files to a .zip, and the functionality would also need to extend to the Gallery.
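For context, the Run Command workaround essentially boils down to something like this (the file names are hypothetical), which the Output tool could do natively:

import zipfile

# Compress an output file after the workflow has written it
with zipfile.ZipFile("report.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.write("report.xlsx", arcname="report.xlsx")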
See the following links for people who are looking for this type of functionality:
https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Output-files-to-ZIP/td-p/163502
https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Zip-files/td-p/151456
Feel free to merge this idea with the previous one for continuity.
Hi everyone,
Add two additional features to the Directory tool so that it can also return folders, not just files.
Use cases:
1. Since it is not possible to use a folder browse on the Gallery, this could help a basic user build a list of possible folders to select from with the help of a drop-down.
2. Directory analysis for cleaning purposes - currently, if you want to get a list of folders with Alteryx, it takes forever on big file servers since Alteryx maps all the files.
Both are achievable today through regex or a bat script; a sketch follows.
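For instance, use case 2 boils down to a few lines of Python, fast because the files inside each folder are never enumerated (the root path is hypothetical):

import os

root = r"\\fileserver\share"  # hypothetical file server root
folders = [entry.path for entry in os.scandir(root) if entry.is_dir()]
for folder in folders:
    print(folder)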
Thank you,
Fernando Vizcaino
Hello all,
As you all know, you can call APIs with the Alteryx Download tool. However, this tool is not that easy to configure.
On the other hand, the API world relies heavily on tools such as Postman or Bruno (an open-source alternative), which make testing and debugging easy. I use one every time I have to work on a REST API, and then I translate the requests to the final tool (such as the Alteryx Download tool). Both tools offer "collections" (sets of requests) as well as environment configuration.
These tools can even generate code for a request.
I would like to leverage those collections in my Download tool configuration; that would be much easier to use!
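To illustrate, a sketch that reads a flat Postman v2.1 collection export and lists the pieces the Download tool needs (the file name is hypothetical; nested folders are ignored for brevity):

import json

with open("my_api.postman_collection.json") as f:
    collection = json.load(f)

# Each item carries a method, a URL and headers - exactly what the Download tool asks for
for item in collection["item"]:
    req = item["request"]
    url = req["url"]["raw"] if isinstance(req["url"], dict) else req["url"]
    headers = {h["key"]: h["value"] for h in req.get("header", [])}
    print(req["method"], url, headers)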
Best regards,
Simon
Hello,
Just like MonetDB or Vertica, ClickHouse is a column-store database, claiming to be the fastest in the world. It's available in the cloud (like Snowflake) and on Linux and macOS (there for free; it's open-source). It's also very well ranked among analytics databases (https://db-engines.com/en/system/ClickHouse), and supporting it would be a good differentiator against competitors.
https://clickhouse.com/
It has become more popular than Greenplum, which is supported (per a search-popularity chart: black = Snowflake, red = Greenplum, orange = ClickHouse).
Best regards,
Simon
This is a QoL-request, and I love me some QoL-updates!
While I'm developing, I often need the output of a workflow as input for the next phase of my development. For example: an API run returns job location, status, and authentication IDs. I want to use these in a new workflow to start experimenting with what will work best. Because of the experimenting part, I always do this in a new workflow rather than caching and continuing in my main flow.
Writing a temporary output file always feels like an unnecessary step, and tbh I don't want to write a file for a step that'll be gone before it reaches production, especially if there is sensitive information in it.
Thanks.
Hello all,
As specified in the title, this idea is to distinguish between Append Prefix/Suffix to File and to Table on the Output Data Tool.
For most file types (CSV, etc.), the table name does not really exist. However, for Excel files at least, if you choose this option the result will be one sheet per suffix, and the only way to get one file per suffix is to change the entire file path.
Best regards,
Simon
In the Input tool, I rely heavily on the recent connection history list. As soon as a file falls off this list, it takes me a while to recall where it's saved and navigate to the file I want to use. It would be great to have a feature that lets users mark favorite connections/files so that they remain at the top of the connection history list for easy access.
As per a recent discussion (https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Geopackages-Can-Alteryx-Open-GeoPackag...), please add the GeoPackage datatype to the Input tool.
For reference, the open-source ogr2ogr utility already has this functionality (https://gdal.org/programs/ogr2ogr.html).
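Until native support arrives, one stopgap is to shell out to ogr2ogr to convert the GeoPackage into a format Alteryx already reads (the file names are hypothetical):

import subprocess

# Convert a GeoPackage into a shapefile the Input tool can read
subprocess.run(
    ["ogr2ogr", "-f", "ESRI Shapefile", "roads.shp", "roads.gpkg"],
    check=True,
)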
Thanks!
Hello all,
As you may know, Alteryx uses the Active Query Builder component. This component itself keeps evolving with cool new features:
https://www.activequerybuilder.com/blog/2018-04-28-much-faster-visual-sql-query-building-in-the-new-...
You can also try the online demo
https://www.activequerybuilder.com/
Best regards,
Simon
Whenever I overwrite an Excel sheet with data of the same format but different values (e.g., Q2 data versus Q1 data), all of my Pivot Tables break and I have to recreate them manually, even though the schema didn't change. Somehow the underlying Table is deleted and replaced with a completely different Table, which is what causes the Pivot Tables to break. The only way to avoid this is to set the cell range manually, but who has time for that? The only workaround I have found is to manually copy all the values and paste them over the existing data, which gets very inefficient the more sheets you are working with.
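A scripted version of that copy/paste workaround, assuming the openpyxl package (which preserves existing Tables and Pivot Table definitions on save); the file, sheet, and data below are hypothetical:

from openpyxl import load_workbook

wb = load_workbook("report.xlsx")
ws = wb["Data"]

new_rows = [["Q2", 100], ["Q2", 250]]  # same shape as the existing data

# Overwrite the cell values in place; the Table object the Pivot Tables
# point at is left untouched
for r, row in enumerate(new_rows, start=2):  # row 1 holds the headers
    for c, value in enumerate(row, start=1):
        ws.cell(row=r, column=c, value=value)

wb.save("report.xlsx")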
Hello all,
In the help (https://help.alteryx.com/current/designer/write-data-db-tool), we can read:
"Update/Delete is currently only supported for SQL Server ODBC connections."
I don't know about you, but while SQL Server is widely used for transactional workloads, in analytics... well... I have come across it only once in several dozen contexts!
Maybe it would be cool to make this work on many more databases?
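For context, the statement the tool would need to emit is nothing exotic; any ODBC source can run it, as in this sketch (the DSN, table, and columns are hypothetical):

import pyodbc

conn = pyodbc.connect("DSN=MyWarehouse")  # hypothetical non-SQL-Server DSN
cur = conn.cursor()
cur.execute(
    "UPDATE customers SET status = ? WHERE customer_id = ?",
    ("active", 42),
)
conn.commit()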
Best regards,
Simon
The SharePoint file tools are certainly a step in the right direction, but it would be great to expand the file types that can be written to SharePoint from Alteryx.
The missing format that is probably most in demand is PDF. If we're using the Alteryx reporting suite to create PDF reports, it would be awesome to have an easy way to output these to SharePoint.
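In the meantime, a hedged sketch using the Office365-REST-Python-Client package to push a generated PDF (the site URL, credentials, and paths are placeholders):

from office365.sharepoint.client_context import ClientContext
from office365.runtime.auth.user_credential import UserCredential

site = "https://contoso.sharepoint.com/sites/reports"  # hypothetical site
ctx = ClientContext(site).with_credentials(
    UserCredential("user@contoso.com", "password")  # hypothetical credentials
)
with open("report.pdf", "rb") as f:
    folder = ctx.web.get_folder_by_server_relative_url("Shared Documents")
    folder.upload_file("report.pdf", f.read()).execute_query()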
https://help.alteryx.com/20213/designer/sharepoint-files-output-tool
https://community.alteryx.com/t5/Public-Community-Gallery/Sharepoint-Files-Tool/ta-p/877903
Alteryx Designer is slow when using In-DB tools.
We use Alteryx 2019.1 on Hive/HortonWorks with the Simba ODBC driver configured with SSL enabled.
Comparing In-DB with in-memory, we found that Alteryx opens a new connection for each action:
- first link to the joiner = 1 connection,
- second link to the joiner = 1 connection,
- click on the canvas = 1 connection.
Each connection takes about 2.5 seconds... It really slows down the Designer.
Please keep the first connection alive instead of closing it and opening a new one for each action in the Designer.
Hello all,
According to Wikipedia:
https://en.wikipedia.org/wiki/Join_(SQL)
CROSS JOIN returns the Cartesian product of rows from tables in the join. In other words, it will produce rows which combine each row from the first table with each row from the second table.
Example of an explicit cross join:
SELECT *
FROM employee CROSS JOIN department;
Example of an implicit cross join:
SELECT *
FROM employee, department;
The cross join can be replaced with an inner join with an always-true condition:
SELECT *
FROM employee INNER JOIN department ON 1=1;
For us Alteryx users, it would be very similar to the Append Fields tool, but in-DB.
Best regards,
Simon
Hello,
We use the pre-SQL statement of the Input tool to set some connection parameters. Sadly, we cannot do that in an in-DB workflow. This would be a total game-changing feature for us.
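For example, the kind of pre-SQL we run today on a regular Input tool, shown here over a hypothetical ODBC connection (hive.execution.engine is a real Hive setting; your parameters may differ):

import pyodbc

conn = pyodbc.connect("DSN=MyHive")  # hypothetical DSN
cur = conn.cursor()

# The pre-SQL statement: tune the session before the real query runs
cur.execute("SET hive.execution.engine=tez")

# ...then the actual workflow query
cur.execute("SELECT * FROM mydb.sales")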
Best Regards,
Simon