Hello,
After using the new "Image Recognition Tool" for a few days, I think you could improve it:
> by displaying the input dimension constraints next to each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so that each label has an equivalent number of images),
> lastly, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me it strictly requires RGB images; a conversion workaround is sketched below).
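As a stopgap while grayscale support is missing, here is a minimal pre-processing sketch in Python (assuming Pillow is installed; the folder names are placeholders) that replicates the single gray channel into three RGB channels so the tool will accept the images:

```python
from pathlib import Path
from PIL import Image  # Pillow

# Hypothetical workaround: convert grayscale MNIST exports to 3-channel RGB.
# "mnist_gray" and "mnist_rgb" are placeholder folder names.
src, dst = Path("mnist_gray"), Path("mnist_rgb")
dst.mkdir(exist_ok=True)

for png in src.glob("*.png"):
    # convert("RGB") copies the single gray channel into R, G and B
    Image.open(png).convert("RGB").save(dst / png.name)
```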
Question: will you allow the user to choose between CPU and GPU usage in the future?
In any case, thank you again for this new tool. It can certainly be improved, but it is very simple to use, and I sincerely think it will help a greater number of people understand the many use cases that image recognition makes possible.
Thank you again.
Kévin VANCAPPEL (France ;-))
Hi, currently when using the reporting tools, you need to use the Render tool to produce the output, which makes sense.
However, is there a way to render an output when using a connector tool, e.g. the SharePoint Output tool?
Hello,
Right now you can write data into SharePoint. However, sometimes you just want to upload a file. The ability to download already exists (in the SharePoint Input tool); I would like the same for uploading a file (based on a path or workflow dependencies).
Best regards,
Simon
Currently, this option is available in the SharePoint Input tool, which can output a list of the files/items found in the specified directory. This is helpful when we need to add comparison logic to avoid re-reading files that have already been processed (for example, in a data-copy scenario). However, this feature was not included in the other connectors (Azure Data Lake File Input, OneDrive Input, Box Input, etc.).
Additionally, an optional input anchor to feed in a list of files to read would also be extremely helpful, similar to the Dynamic Input tool, and would avoid the need to create a batch macro for this operation; the kind of listing-and-comparison logic involved is sketched below.
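To illustrate that logic, here is a minimal sketch against the Microsoft Graph API (the site ID, folder, token, and processed-file list are all placeholders):

```python
import requests

SITE_ID = "<site-id>"                      # placeholder
FOLDER = "Shared Documents/incoming"       # placeholder library folder
TOKEN = "<azure-ad-access-token>"          # placeholder OAuth token

# List the items in a SharePoint document-library folder via Microsoft Graph.
url = f"https://graph.microsoft.com/v1.0/sites/{SITE_ID}/drive/root:/{FOLDER}:/children"
resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()

already_processed = {"sales_2023.csv"}     # e.g. loaded from a processing log
for item in resp.json()["value"]:
    if item["name"] not in already_processed:
        print("to read:", item["name"], item["lastModifiedDateTime"])
```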
Hello,
A very simple idea:
As of today, there are dedicated connectors for SharePoint, OneDrive and Azure Data Lake.
For all these connectors, the file types we can read are very limited: xlsx, csv, yxdb.
The storage location should not matter: we should be able to read any already-supported file type from these locations (parquet, shp, or whatever).
Best regards,
Simon
Currently, the SharePoint Input tool allows downloading a file from SharePoint to the local machine so we can process it locally. This is great, since the tool itself can only handle three file types and won't support the rest (ZIP or anything else).
The SharePoint Output tool only supports the same three file types, but will not allow uploading a local file to SharePoint (the Input tool's download capability in reverse). Why is this important?
1. We can create local files in formats like KML, KMZ, ZIP, etc., which we then cannot upload to SharePoint.
2. When updating multiple sheets in the same file, updating locally and then uploading the file to SharePoint would be a lot faster and more efficient.
These challenges are easier to overcome when using Alteryx Designer, by syncing SharePoint or mapping it to a local drive or folder, but that is impossible when running and scheduling workflows on Alteryx Server.
Please update the SharePoint Output tool to allow uploading any type of file(s). Thank you.
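For reference, this is roughly what the requested upload looks like against the Microsoft Graph API; a minimal sketch with placeholder site ID, token and paths (files over about 4 MB need an upload session rather than a single PUT):

```python
import requests

SITE_ID = "<site-id>"                 # placeholder
TOKEN = "<azure-ad-access-token>"     # placeholder OAuth token
local_file = "output/regions.kmz"     # placeholder local path

# Upload an arbitrary local file into a SharePoint document library.
url = (f"https://graph.microsoft.com/v1.0/sites/{SITE_ID}"
       "/drive/root:/Shared Documents/regions.kmz:/content")
with open(local_file, "rb") as f:
    resp = requests.put(url, headers={"Authorization": f"Bearer {TOKEN}"}, data=f)
resp.raise_for_status()
print("uploaded:", resp.json()["name"])
```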
I was exploring how to make the Outlook 365 tool run faster, and I noticed in my Action tool that there are Start and End Date parameters. However, they come across as large numbers. I learned today from Alteryx Support that "The numbers you provided for StartDate (1725163200000) and EndDate (1726751052709) represent Unix timestamps in milliseconds. Unix timestamps indicate the number of milliseconds that have passed since January 1, 1970, at 00:00:00 UTC."
While I am attempting to build a custom macro myself, it feels like this enhancement could just be included in the tool itself! It is a relatively simple conversion using DateTimeDiff([#1],"1970-01-01 00:00:00",'milliseconds') 😊 (see the sketch below).
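For anyone building the same macro in the meantime, the equivalent conversion in Python, using the StartDate value from the support reply:

```python
from datetime import datetime, timezone

# Unix timestamps in milliseconds: milliseconds since 1970-01-01 00:00:00 UTC.
start_ms = 1725163200000
print(datetime.fromtimestamp(start_ms / 1000, tz=timezone.utc))  # 2024-09-01 04:00:00+00:00

# And back again: a chosen date to the millisecond value the tool expects.
dt = datetime(2024, 9, 19, tzinfo=timezone.utc)
print(int(dt.timestamp() * 1000))  # 1726704000000
```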
Edit: it's there for the Calendar option, but ideally it could be included for the Email option too! And while you can use the search parameters, we could make it easier for the user.
Hello all,
As you all know, you can call APIs with the Alteryx Download tool. However, this tool is not that easy to configure.
On the other hand, the API world relies heavily on tools such as Postman or Bruno (an open-source alternative), which make testing and debugging easy. I use one every time I have to work on a REST API, and then I translate the result into the final tool (such as the Alteryx Download tool). Both tools offer "collections" (sets of requests) as well as environment configuration. Here are some examples from the project I'm working on:
And you can even generate code from them.
I would like to leverage those collections in my Download tool configuration; that would be much easier to use! A sketch of how a collection export could be read is below.
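As a sketch of what "leveraging a collection" could mean in practice, a Postman collection export (v2.1 JSON) can be parsed to recover exactly the fields you re-type into the Download tool today. The file name is a placeholder, and nested folders are ignored for brevity:

```python
import json

with open("my_api.postman_collection.json") as f:   # placeholder file name
    collection = json.load(f)

# Each top-level item holds a request: method, URL and headers are the
# pieces that currently have to be re-entered in the Download tool.
for item in collection["item"]:
    req = item["request"]
    headers = {h["key"]: h["value"] for h in req.get("header", [])}
    print(item["name"], req["method"], req["url"]["raw"], headers)
```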
Best regards,
Simon
Hello all,
As of today, when you want to retrieve or create a file on Apache Spark for Databricks, you have only two choices: CSV and Avro.
However, the Parquet file type is clearly missing:
- it's faster
- it's better for storage
- it's standard and already supported as Alteryx input/output and for HDFS, so it doesn't seem hard to add here (see the sketch below).
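To illustrate how natively Spark handles the format, a minimal PySpark sketch (the paths are placeholders in the usual Databricks /mnt style):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# CSV in, Parquet out: columnar storage with compression built in.
df = spark.read.csv("/mnt/raw/sales.csv", header=True, inferSchema=True)
df.write.mode("overwrite").parquet("/mnt/curated/sales")

# Reading back only scans the requested columns (column pruning).
spark.read.parquet("/mnt/curated/sales").select("region").show()
```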
Best regards,
Simon
Hello,
It's nice to have this OpenAI Connector, but it seems to be tied to the default OpenAI URL. In my company, we use OpenAI on an Azure instance, and I'm unable to connect to it.
(By the way, I know pre-sales teams have developed a lot of connectors for Fireworks, Mistral, etc.; it would be very cool to have those available.)
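For context, this is what the connector would need to support; a minimal sketch with the openai Python package, where the endpoint, key, API version and deployment name are all placeholders:

```python
from openai import AzureOpenAI

# Same chat-completions call, but against a company Azure OpenAI endpoint
# instead of the default api.openai.com URL. All values are placeholders.
client = AzureOpenAI(
    azure_endpoint="https://my-company.openai.azure.com",
    api_key="<azure-openai-key>",
    api_version="2024-02-01",
)
resp = client.chat.completions.create(
    model="my-gpt4-deployment",  # the Azure deployment name, not the raw model id
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```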
Best regards,
Simon
Hi all
Currently, when you configure your workflow not to write outputs ("Disable All Tools that Write Output" under the Runtime tab of the workflow configuration), the Render and green Output tools become greyed out and do not write an output (as expected).
However, this is not the case for connectors. For example, if you use the SharePoint Output tool and select "Disable All Tools that Write Output", it will not be greyed out and will still write an output. Is it possible for these connectors to also not run when this option is selected in the configuration? Otherwise, you currently have to add them to a container and disable the container.
I know y'all are working on data lineage for some future offering, and it is very much needed. For the highest-quality results, please make logs a primary source of lineage information. The ability to use dynamic naming with some tools and macros means the names in the workflows are simple foobar placeholders that do not reflect what actually happened. Today, Connect doesn't use logs and leaves many lineage gaps because of this.
Please move this to a more appropriate category if needed. This future feature work is not part of Connect.
Multi-Fill Tool
Please consider a new Multi-Fill tool, not for Apps, but for regular workflows, manually run or scheduled.
Similar to the Interface tool combination of the Text Box and Action (Update Value) tools, this Multi-Fill tool would enable the user to update, for example, the User Name and Password in one place for multiple Download tools. It could also be used to update other tool settings, such as Filter, Sort, Unique, etc.
Hey all,
At present, if you have an existing canvas and you want to move to a DCM Connection, you are asked something like "this will reset all of your connection details - are you sure?". If you have complex queries or pre/post SQL, you first have to copy all of this out into Notepad before you can convert to DCM and then reconfigure it all again.
However, if you are not using DCM, you can change data sources in Workflow Dependencies without losing your queries etc.
Could we revisit the user experience of changing to or from a DCM connection to eliminate this "start from scratch" phenomenon? If you are converting from an existing SQL ODBC, ODB, or SSVB connection to a SQL connection via DCM, it should allow you to make this conversion without losing your current configuration, and the same for any other database type.
cc: @mbarone
Request: Google Drive Output Tool to be able to set the maximum records per file and create multiple files
For the regular Alteryx Output tool, we're able to set a maximum number of records per file. This is helpful in a variety of ways: we use it as part of a workflow where the output gets uploaded into Salesforce, which can only load 5,000 records at a time. I also use it to split large CSV files to stay under Excel's ~1M row limit, so my teammates without Alteryx can open their reports and not lose data.
The Google Drive Output tool does not have this ability to split based on the number of records. If I use the RecordID tool plus a Filter, it crashes Alteryx due to a bug with RecordID + Google Drive Output (currently in the Accepted Defect stage).
It would be very helpful to have the same functionality that we have with the regular Output tool; a sketch of the splitting logic is below.
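The logic being requested is simple; a pandas sketch of the splitting step (the file names are placeholders, and the 5,000-record limit is taken from the Salesforce example above):

```python
import pandas as pd

MAX_RECORDS = 5000                       # e.g. the Salesforce load limit
df = pd.read_csv("big_extract.csv")      # placeholder input

# Write at most MAX_RECORDS rows per file, as the regular Output tool can,
# before each part is uploaded to Google Drive.
for i, start in enumerate(range(0, len(df), MAX_RECORDS), start=1):
    df.iloc[start:start + MAX_RECORDS].to_csv(f"extract_part{i}.csv", index=False)
```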
Hi there,
When you connect to a DB using a connection string or an alias, this shows up in the Workflow Dependencies window in a way that is very useful for identifying impacts if a DB is moved or migrated.
However, in 2023.1, if you use DCM then the database dependencies just show up as .\ which makes dependency management much more difficult.
Please could you add the capability to view the DCM dependencies correctly in the dependency window?
BTW, this Workflow Dependency window would be a great place to build a simple process to move existing DB connections to a DCM connection!
CC: @wesley-siu @_PavelP
When you start using DCM, you may have existing canvases which use regular old connection strings that you want to migrate to DCM.
Currently (in 2023.1.1.123), when you select "Use Data Connection Manager", it shreds the configuration of your input tool, which makes it difficult to convert these from an existing connection to a DCM connection.
The only way to make sure that you don't lose any configuration on the tool is to use the XML editing functionality of the tools and copy across your old configuration.
Could you please add the capability to keep my current tool configuration, but just change from using a regular old connection string to using DCM?
Many thanks
Sean
cc: @wesley-siu @_PavelP
Hi there,
When connecting to data sources using DCM - could we please add the ability to make JDBC connections?
see:
https://community.alteryx.com/t5/Engine-Works/JDBC-Connections-in-Alteryx/ba-p/968782
As mentioned in these threads, JDBC is very common in large enterprises and is often better supported by the technology teams and developer community, so it is much easier to make a connection. Added to this, there are many databases (e.g. DB2) where JDBC connections are just much easier.
Please could you add JDBC connections to the DCM tooling?
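For reference, a JDBC connection boils down to four things DCM would need to capture: driver class, JDBC URL, credentials and the driver jar. A minimal sketch using the jaydebeapi package, with placeholder DB2 values:

```python
import jaydebeapi

conn = jaydebeapi.connect(
    "com.ibm.db2.jcc.DB2Driver",              # driver class
    "jdbc:db2://db2host:50000/SAMPLE",        # JDBC URL (placeholder host/db)
    ["db2user", "db2password"],               # credentials (placeholders)
    "/opt/drivers/db2jcc4.jar",               # path to the driver jar
)
curs = conn.cursor()
curs.execute("SELECT 1 FROM SYSIBM.SYSDUMMY1")
print(curs.fetchall())
conn.close()
```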
Thank you
Sean
cc: @wesley-siu @_PavelP
When creating a connection using DCM (example being ODBC for SQL) - the process requires an ODBC Data Source Name (see screenshot 1 below).
However, when you use the alias manager (another way to make database connections), this does allow for DSN-free connections, which are essential for large enterprises (see screenshot 2 below).
NOTE: the connection manager screens do have another option, Quick Connect, which seems to allow for DSN-free connections, but this is non-intuitive; you're asked to type in the name of the driver yourself, which seems to be an obvious failure point (especially since the list of all installed drivers can be read straight from the registry).
Please could we change DCM to use the same interfaces and concepts as the alias screens, so that all DCM connections can easily be created without requiring an ODBC DSN, and so that DSN-free connections are the default mode of operation?
Screenshot 1: DCM connection
Screenshot 2: alias manager connection
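For comparison, this is what a DSN-free connection looks like in code: everything the driver needs travels in the connection string, so no Data Source Name has to exist on the machine. A pyodbc sketch with placeholder server and database values:

```python
import pyodbc

# pyodbc.drivers() returns the installed ODBC drivers (read from the registry
# on Windows), which is exactly the list a driver picker could be built from.
print(pyodbc.drivers())

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlhost.example.com;"    # placeholder
    "DATABASE=Sales;"                # placeholder
    "Trusted_Connection=yes;"
)
print(conn.cursor().execute("SELECT @@VERSION").fetchone()[0])
```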
cc: @wesley-siu @_PavelP @ToddTarney
For companies that have migrated to OneDrive/Teams for data storage, employees need to be able to dynamically input and output data within their workflows in order to schedule a workflow on Alteryx Server and avoid building batch macros.
With many organizations migrating to OneDrive, a Dynamic Input/Output tool for OneDrive and SharePoint is needed.
The enhancement should have the following components:
OneDrive/SharePoint Directory Tool
OneDrive/SharePoint Dynamic Input Tool
Dynamic OneDrive/SharePoint Output Tool