Hello,
After using the new "Image Recognition Tool" for a few days, I think you could improve it:
> by adding the dimensional constraints in front of each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so that there is an equivalent number of images for each of the labels),
> lastly, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me that it requires RGB images); a possible conversion workaround is sketched below.
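As a stopgap until grayscale support is added, a dataset like MNIST can be converted to RGB before it is fed to the tool. This is only an illustrative sketch using Pillow; the folder paths are hypothetical.

```python
from pathlib import Path
from PIL import Image

src = Path("mnist_png/train")       # hypothetical folder of grayscale PNGs
dst = Path("mnist_png_rgb/train")   # converted RGB copies are written here

for png in src.rglob("*.png"):
    out = dst / png.relative_to(src)
    out.parent.mkdir(parents=True, exist_ok=True)
    # convert("RGB") duplicates the single grayscale channel into R, G and B
    Image.open(png).convert("RGB").save(out)
```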
Question: will you, in the future, allow the user to choose between CPU and GPU usage?
In any case, thank you again for this new tool. It certainly has room for improvement, but it is very simple to use, and I sincerely think it will allow a greater number of people to understand the many use cases made possible by image recognition.
Thank you again.
Kévin VANCAPPEL (France ;-))
It would be great to have an outbound connector on output tools for 2 reasons:
a) if this outbound connector can carry key results of the output process, these can be saved in an audit log, for example row counts and success/failure status (a minimal sketch of such a log record follows below). This kind of capability (to generate a log, or to be able to check the count of rows committed to a database) is important for any large BI ETL process
b) this would also allow the process to continue after the output step and act as flow control. For example:
- First output the product dimension
- once done, connect (using the outbound connector) to the next macro, which then updates the Sales fact table using this product dimension (foreign key dependency)
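To make reason (a) concrete, here is a rough sketch of the kind of record such an outbound connector could emit for an audit log. The field names, values and CSV destination are my own assumptions for illustration, not an existing Alteryx feature.

```python
import csv
import os
from datetime import datetime, timezone

def append_audit_record(log_path, tool_name, row_count, succeeded, message=""):
    """Append one output-tool result to a simple CSV audit log (illustrative only)."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool_name,            # e.g. the name of the Output tool
        "rows_committed": row_count,  # row count reported by the output process
        "status": "success" if succeeded else "failure",
        "message": message,
    }
    write_header = not os.path.exists(log_path) or os.path.getsize(log_path) == 0
    with open(log_path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=record.keys())
        if write_header:
            writer.writeheader()
        writer.writerow(record)

# Hypothetical usage: log the product dimension load before the fact table macro runs
append_audit_record("etl_audit_log.csv", "Output - Product Dimension", 125000, True)
```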
Improve the Hive connector and make write access available.
Regards,
Cristian.
Hive
Type of Support: Read-only
Supported Versions: 0.7.1 and later
Client Versions: --
Connection Type: ODBC
Driver Details: The ODBC driver can be downloaded here. Read-only support to Hive Server 1 and Hive Server 2 is available.
Please add the Parquet data format (https://parquet.apache.org/) as a read-write option for Alteryx.
Apache Parquet is a columnar storage format available to any project in the Hadoop ecosystem, regardless of the choice of data processing framework, data model or programming language.
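For context, reading and writing Parquet is already a one-liner in the Python ecosystem, which illustrates the round-trip support being requested. The sketch below assumes pandas with the pyarrow engine installed; the file names are just examples.

```python
import pandas as pd

# Read a Parquet file (columnar, compressed) into a data frame
df = pd.read_parquet("sales.parquet", engine="pyarrow")

# ... transform the data as an Alteryx workflow would ...

# Write the result back out as Parquet
df.to_parquet("sales_out.parquet", engine="pyarrow", index=False)
```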
Thank you.
Regards,
Cristian.
There are a number of requests for bulk loaders to DBs, and I'm adding MySQL to the list.
Really, every DB connection (on-prem and cloud) needs some bulk loader capability added (if it doesn't have it already).
Our Teradata environments use LDAP authentication, and we really need the Teradata bulk load connection in Alteryx to support this feature. ODBC works fine with LDAP, but the bulk load connection doesn't.
Alteryx Development has confirmed this is currently not a feature and has put it into their backlog.
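For reference, this is roughly how LDAP authentication is requested when connecting to Teradata outside of Alteryx, for example with the teradatasql Python driver; the ask is for the bulk load connection to accept the same option. The host name and credentials below are placeholders.

```python
import teradatasql

# A standard (non-bulk) connection using LDAP authentication works today;
# the Alteryx bulk load connection offers no equivalent LDAP setting yet.
with teradatasql.connect(
    host="td-prod.example.com",   # placeholder hostname
    user="my_ldap_user",          # placeholder credentials
    password="********",
    logmech="LDAP",               # use LDAP instead of the default TD2 mechanism
) as con:
    cur = con.cursor()
    cur.execute("SELECT SESSION")  # simple connectivity check
    print(cur.fetchone())
```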
Hi, currently the S3 Upload tool only allows the file formats *.yxdb, *.json, *.csv and *.avro.
In order to optimize loading to Redshift, it would be good to have a few more functions:
1. Ability to S3 upload in *.gz format (a rough workaround sketch is shown after this list)
e.g. reading in a file using the Input tool -> S3 Upload tool (which has a gzip function with the following options: record limit, delimiter, UTF-8)
http://docs.aws.amazon.com/redshift/latest/dg/t_loading-gzip-compressed-data-files-from-S3.html
2. Ability to change the max record limit, delimiter and UTF-8 setting
3. Ability to set the object name to 'take file/table name from field', with the file name containing all or part of a field value, similar to the Output tool
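Until the tool supports this natively, the workaround below sketches the intent of item 1: gzip the extract, push it to S3 with boto3, and COPY it into Redshift with the GZIP option (bucket, table, IAM role and file names are placeholders).

```python
import gzip
import shutil

import boto3

# 1. Compress the CSV extract produced by the workflow
with open("orders_extract.csv", "rb") as src, gzip.open("orders_extract.csv.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

# 2. Upload the .gz file to S3
s3 = boto3.client("s3")
s3.upload_file("orders_extract.csv.gz", "my-etl-bucket", "staging/orders_extract.csv.gz")

# 3. Statement to run against Redshift; COPY reads gzip directly (see the AWS doc linked above)
copy_sql = """
COPY staging.orders
FROM 's3://my-etl-bucket/staging/orders_extract.csv.gz'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
GZIP DELIMITER ',';
"""
```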
Adrian
Preface: I have only used the in-DB tools with Teradata, so I am unsure whether this applies to other supported databases.
When building a fairly sophisticated workflow using in-DB tools, the workflow may sometimes fail because the underlying queries run up against CPU/memory limits. This is most common when doing several joins back to back, as Alteryx sends these as one big query with various nested subqueries. When working with datasets in the hundreds of millions or billions of records, this can be extremely taxing for the DB to run as one huge query. (It is possible to get around this by using an in-DB write out to a temporary table as an intermediate step in the workflow.)
When a routine does hit an in-DB resource limit and the DB kills the query, it causes Alteryx to immediately fail the workflow run. Any "temporary" tables Alteryx creates are in reality permanent tables that Alteryx normally drops at the end of a successful run. If the run does not end successfully because a resource limit was hit, these "temporary" (permanent) tables are not dropped. I only noticed this after building out a workflow and running up against a few resource limits; I then started getting database out-of-space errors. Upon looking into it, I found all the previously created "temporary" tables were still there and taking up many TBs of space.
My proposed solution is for Alteryx's in-DB tools to drop any "temporary" tables they have created when a run ends, regardless of whether the entire module finished successfully.
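Until that behaviour exists, a periodic cleanup job along these lines can reclaim the space. The 'AYX%' name pattern, DSN and database name are assumptions about one particular environment; verify what your in-DB temp tables are actually named before dropping anything.

```python
import pyodbc

# Connect to the same Teradata system the in-DB tools write their temp tables to
con = pyodbc.connect("DSN=TeradataProd")   # placeholder DSN
cur = con.cursor()

# Find leftover tables matching the assumed Alteryx temp-table prefix
cur.execute(
    "SELECT DatabaseName, TableName FROM DBC.TablesV "
    "WHERE DatabaseName = ? AND TableName LIKE 'AYX%'",
    "ETL_STAGING",                         # placeholder staging database
)
for db, table in cur.fetchall():
    db, table = db.strip(), table.strip()  # names can come back blank-padded
    print(f"Dropping leftover temp table {db}.{table}")
    cur.execute(f'DROP TABLE "{db}"."{table}"')
con.commit()
```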
Thanks,
Ryan
Given that Redshift prefers accepting many small files for bulk loading, it would be good to have a max record limit within the S3 Upload tool (similar to the functionality in the S3 Download tool).
The other functionality that would be useful for the S3 Upload tool is the ability to append file names with datetimestamp_001, 002, 003 etc., similar to the current Output tool. A rough sketch of both behaviours follows below.
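A minimal sketch of what both requests would do, chunking an extract to a maximum record count and naming each file with a datetimestamp plus a sequence number; the chunk size, bucket and file names are arbitrary choices for illustration.

```python
from datetime import datetime

import boto3
import pandas as pd

MAX_RECORDS = 100_000                           # hypothetical max record limit per file
stamp = datetime.now().strftime("%Y%m%d%H%M%S")
s3 = boto3.client("s3")

df = pd.read_csv("orders_extract.csv")          # data the workflow wants to stage in S3
for i, start in enumerate(range(0, len(df), MAX_RECORDS), start=1):
    chunk = df.iloc[start:start + MAX_RECORDS]
    local_name = f"orders_{stamp}_{i:03d}.csv"  # e.g. orders_20240101120000_001.csv
    chunk.to_csv(local_name, index=False)
    s3.upload_file(local_name, "my-etl-bucket", f"staging/{local_name}")
```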
It would be cool if a connector line would turn red when you select it, making it easier to trace the path (similar to how the lines turn red when you click on a join tool).
The challenge:
We have hundreds of SOAP based Salesforce (SF) connectors in our scheduled modules that were created with Alteryx 9.0-9.5. Alteryx 10.0+ is now using REST API based SF connectors. We have to replace all of these connectors when we move to 10.0+.
Proposed idea:
Alteryx creates an automated process for converting SOAP SF connectors to REST API SF connectors, so that when you open an old module in 10.0+, they are automatically updated.
This seems feasible, as the information supplied by Alteryx users for the SOAP SF connectors is sufficient for the REST API SF connectors to work (e.g. URL, username, password, security token, table name, fields, WHERE clause, etc.).
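To make the feasibility argument concrete, the sketch below shows how the fields an old SOAP connector already stores could be mapped onto the SOQL query a REST-based connector issues. The values, field names and API version are purely illustrative assumptions.

```python
# What an old SOAP-based SF connector already stores (illustrative values only)
soap_config = {
    "url": "https://login.salesforce.com",
    "username": "etl_user@example.com",
    "password": "********",
    "security_token": "XXXXXXXX",
    "table": "Account",
    "fields": ["Id", "Name", "AnnualRevenue"],
    "where": "AnnualRevenue > 1000000",
}

# The same information is all a REST connector needs: credentials for an OAuth
# session plus a SOQL query string built from table, fields and WHERE clause.
soql = "SELECT {fields} FROM {table} WHERE {where}".format(
    fields=", ".join(soap_config["fields"]),
    table=soap_config["table"],
    where=soap_config["where"],
)
# Shape of the REST call (API version is a placeholder):
#   GET {instance_url}/services/data/v39.0/query?q=<url-encoded soql>
print(soql)  # SELECT Id, Name, AnnualRevenue FROM Account WHERE AnnualRevenue > 1000000
```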
Thanks,
Jeremy
Currently we are limited to choosing one of two layout direction options, vertical or horizontal. Why not make the direction assignable at the tool icon instead of as a module-level control? I could right-click the tool and have layout direction as an option, which would activate a visual handle allowing either infinite rotation control or rotation in 45-degree increments. You can use Visio as an example of rotational control for a shape. In Visio the shape itself rotates; in our case, since we are really looking to change the flow direction, the icon could remain in the same orientation as it does now, but the connector point(s) would rotate around the compass in, say, 45-degree increments based on the drag of the rotation handle that appears.
Hello,
I think it would be extremely useful to have a switch connector available in Alteryx. What I mean by a switch connector is a connecting line with an on/off state that will block the data stream through it when off. Something like below:
[Image: Switch Connector in an "Off" state]
This would be extremely useful when you only want data to flow down some of the paths. In the example above, I might turn the switch connector to off because I want to see the Summarize results without outputting to a document.
The current methods for having a path/set of tools present but unused are insufficient for my needs. The two methods that Alteryx Support and I were able to find were: