Hello,
After using the new "Image Recognition Tool" for a few days, I think it could be improved:
> by listing the dimensional constraints in front of each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so that each label ends up with an equivalent number of images),
> and finally, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me it requires RGB images; a possible workaround is sketched below).
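For the black & white limitation, here is a minimal workaround sketch (the file names are illustrative, and Pillow's availability in the tool's environment is an assumption): convert the single-channel image to three channels before feeding it to the tool.

```python
from PIL import Image  # Pillow; availability in the embedded environment is an assumption

# Convert a single-channel (black & white) image such as an MNIST digit
# to a 3-channel RGB file so the tool will accept it.
img = Image.open("mnist_digit.png").convert("RGB")
img.save("mnist_digit_rgb.png")
```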
Question: will you allow the user to choose between CPU and GPU usage in the future?
In any case, thank you again for this new tool. It certainly has room for improvement, but it is very simple to use, and I sincerely think it will allow a greater number of people to understand the many use cases made possible by image recognition.
Thank you again
Kévin VANCAPPEL (France ;-))
Currently, when you add an event to notify you of workflow failure/success, you have to enter the SMTP settings every time. It would be more efficient to set this up as a user setting that serves as the default across all canvases that the user creates.
Often as I am scraping web sites, some clever developer has put an invisible character (ASCII or Unicode) in the data which causes terrible trouble.
I've identified 89 instances of zero-width or non-zero-width glyphs that are not visible and/or Alteryx does not classify as whitespace. There are probably more, but Unicode is big y'all.
Unfortunately, the Trim() string function only removes 4 of these characters (Tab, Newline, Carriage Return, and Space).
REGEX_REPLACE with the \s option (which is what the Cleanse macro uses) is a little better but still only removes 20. And it removes all instances, not just leading and trailing.
I've attached a workflow which proves this issue.
@apolly: this is what I mentioned at GKO.
And I did see this post (https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Elegantly-remove-all-ASCII-characters-...), but it's too brute force. Especially as Alteryx is localized and more users need those Unicode characters.
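Until then, here is a rough sketch of the kind of character-class approach I mean, in Python for illustration (the code point list below is illustrative, not exhaustive; the same class could be used in REGEX_REPLACE with anchors so only leading and trailing glyphs are removed):

```python
import re

# A small, non-exhaustive set of invisible code points that Trim() and
# \s-based cleansing miss: zero-width space/joiner/non-joiner, word joiner,
# BOM, soft hyphen, NBSP, plus the ordinary whitespace characters.
INVISIBLE = "\u200b\u200c\u200d\u2060\ufeff\u00ad\u00a0\t\r\n "
pattern = re.compile(f"^[{INVISIBLE}]+|[{INVISIBLE}]+$")

def hard_trim(value: str) -> str:
    """Strip leading/trailing invisible glyphs only, keeping interior ones."""
    return pattern.sub("", value)

print(repr(hard_trim("\u200b\ufeff hello world\u00a0")))  # 'hello world'
```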
With an increasing number of different projects, involving different machine learning models, it's becoming difficult to manage different package versions across workflows. Currently, the Python tool has a single virtual environment, so we need to develop models in different projects always using the same Python and package versions as the Python tool venv. While this doesn't bother the code itself too much, it becomes a problem as soon as we store and load pickled models, which are sensitive to even minor changes in packages.
This is even more so a problem when we are working on the Alteryx server, where different teams might use different packages. Currently, there is only the server admin who can install packages on the server and there can only be one version per package.
So, a more robust venv management in the Python tool would be much appreciated!
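For context, one possible stopgap (a rough sketch only; the paths, package pins and score.py are hypothetical) is to have the Python tool bootstrap a per-project environment and run the pickled-model scoring there instead of in the shared venv:

```python
import subprocess
import sys
from pathlib import Path

# Hypothetical per-project environment living next to the workflow.
venv_dir = Path(r"C:\projects\churn_model\venv")
venv_python = venv_dir / "Scripts" / "python.exe"

if not venv_dir.exists():
    # Create an isolated environment and pin the package versions
    # the project's pickled models were built with.
    subprocess.run([sys.executable, "-m", "venv", str(venv_dir)], check=True)
    subprocess.run([str(venv_python), "-m", "pip", "install",
                    "scikit-learn==1.2.2", "pandas==1.5.3"], check=True)

# Run the scoring step inside that environment instead of the shared venv.
subprocess.run([str(venv_python), r"C:\projects\churn_model\score.py"], check=True)
```

This sort of workaround is fragile and invisible to the Server admin, which is why first-class venv management in the tool itself would be so much better.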
Hello,
My issue is very easy to solve. I want to use the generic in-memory ODBC connection for a specific database (MonetDB here, but that isn't important).
I am trying to output a flow to a MonetDB SQL database using only very simple field types.
However, I get this error message:
Error: Output Data (3): Error creating table "exemplecomparetable.toto": [MonetDB][ODBC Driver 11.44.0][MONETDB_SAU]Type (datetime) unknown in: "create table "exemplecomparetable"."toto" ("ID" int,"Libellé" char(50),"Date d"
syntax error in: ""Prix""
CREATE TABLE "exemplecomparetable"."toto" ("ID" int,"Libellé" char(50),"Date de Maj" datetime,"Prix" float,"PMP" float)
Reminder: SQL is an ISO standard. Default types should follow it, not the MS SQL Server configuration. Interoperability is key.
Related link for in-db:
https://community.alteryx.com/t5/Alteryx-Designer-Ideas/Generic-In-database-connection-please-stop-i...
Issues observed: MonetDB
https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Output-Data-date-is-now-datetime-makin...
Informix: https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Alteryx-date-data-type-error-trying-to...
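In the meantime, a possible workaround (sketch only, assuming pyodbc and an ODBC DSN named "MonetDB"; the table and column names are taken from the example above) is to pre-create the table with ISO-standard types such as TIMESTAMP and set the Output Data tool to append to the existing table:

```python
import pyodbc  # assumes a configured ODBC data source named "MonetDB"

conn = pyodbc.connect("DSN=MonetDB", autocommit=True)
# TIMESTAMP instead of the MS-style "datetime" that the tool emits;
# DOUBLE used here in place of float for the numeric columns.
conn.cursor().execute(
    'CREATE TABLE "exemplecomparetable"."toto" ('
    '"ID" INT, "Libellé" CHAR(50), "Date de Maj" TIMESTAMP, '
    '"Prix" DOUBLE, "PMP" DOUBLE)'
)
```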
Best regards,
Simon
I believe many have voiced this as a pain point within the Community. Essentially, there is no straightforward method to import multiple Excel files which are password protected.
I understand that there is an R solution suggested by several users; however, that is not ideal as it can be difficult to obtain permission from the internal Tech team to install the package on users' computers.
Re-saving them without password is not only a hassle, but also raises concerns for data protection and security.
This may have been raised before, but we would like to see the equivalent of PRICE and YIELD formulas from Excel in Alteryx's Formula tool. I believe many users in the finance industry are using formulas like these frequently and it would be helpful to be able to replicate the formula in Alteryx.
Manually building the formula is possible, but it is unnecessarily complicated, especially if you are working on a different day-count basis, e.g. 30/360 European.
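For illustration only, here is a heavily simplified sketch of the kind of calculation involved (it assumes settlement falls exactly on a coupon date; Excel's PRICE additionally handles the fractional first period and day-count bases such as 30/360 European, which is exactly the part that is painful to rebuild by hand):

```python
def bond_price(rate, yld, periods, redemption=100.0, freq=2):
    """Simplified clean price per 100 face value, settlement on a coupon date."""
    coupon = 100.0 * rate / freq
    y = yld / freq
    pv_coupons = sum(coupon / (1 + y) ** t for t in range(1, periods + 1))
    return pv_coupons + redemption / (1 + y) ** periods

# e.g. 5% coupon, 4% yield, 10 semi-annual periods remaining -> about 104.49
print(round(bond_price(0.05, 0.04, periods=10), 2))
```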
Thank you!
It would be good if the Email Tool could be enhanced so that it can send HTML e-mails; by that I mean the body of the e-mail would be HTML taken from a field in the workflow that contains a string of HTML.
Currently we are having to use batch files with command line e-mail clients to send e-mail with HTML generated within Alteryx workflows.
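For reference, this is the kind of thing we're scripting around today; a minimal sketch in Python (the server name and addresses are placeholders, and the HTML string would come from the workflow field):

```python
import smtplib
from email.mime.text import MIMEText

# In practice this string would come from the workflow field holding the HTML.
html_body = "<h1>Daily report</h1><p>Generated by an Alteryx workflow.</p>"

msg = MIMEText(html_body, "html")
msg["Subject"] = "Workflow report"
msg["From"] = "alteryx@example.com"
msg["To"] = "recipient@example.com"

with smtplib.SMTP("smtp.example.com", 25) as server:
    server.send_message(msg)
```

Having the Email Tool accept an HTML body field directly would remove the need for this kind of external scripting.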
Most databases treat null as "unknown" and as a result, null fails all comparisons in SQL. For example, null does not match to null in a join, null will fail any > or < tests etc. This is an ANSI and ISO standard behaviour.
Alteryx treats null differently - if you have 2 data sets going into a join, then a row with value null will match to a row with value null.
We've seen this creating confusion with our users who are becoming more fluent with SQL and who are using inDB tools - where the query layer treats null differently than the Alteryx layer.
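For anyone who wants to see the standard behaviour side by side, here's a small illustration using SQLite (any ANSI-compliant database behaves the same way):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a (k INTEGER)")
con.execute("CREATE TABLE b (k INTEGER)")
con.executemany("INSERT INTO a VALUES (?)", [(1,), (None,)])
con.executemany("INSERT INTO b VALUES (?)", [(1,), (None,)])

# NULL = NULL evaluates to unknown, so the NULL rows do not join:
print(con.execute("SELECT COUNT(*) FROM a JOIN b ON a.k = b.k").fetchone())  # (1,)

# NULL also fails ordinary comparisons:
print(con.execute("SELECT NULL = NULL, NULL > 1").fetchone())  # (None, None)
```

In Alteryx's Join tool, by contrast, the two null rows would match, giving two joined rows.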
Could we add a setting flag to Alteryx so that users can turn on ISO / ANSI standard processing of Null so that data works the same at all levels of the query stack?
Many thanks
Sean
We build some pretty robust maps with multiple connections and it would be great to copy the map tool and paste it with all of the connections when we want to tweak the map slightly but keep our original map. It is a regular occurrence for us to have a very detailed map grouping by trade area name and then may want to have an overview map with all of the same connections but slightly different layout. Tracking down the connections, reconnecting them and naming them accordingly takes a substantial amount of time even in the most organized of workflows. This function would be a huge time-saver. It would also be of value with joins and unions - anywhere you have multiple streams coming in.
Here's a reason to get excited about amp! Create a runtime setting that gets Alteryx working even faster.
When you configure a file input you see 100 records. Imagine the delight if, after you run your workflow, all input tools were automatically cached. You would run so much faster.
Now think of the absolute delight if, even before you run the workflow, a configured input tool kicked off a background read of the input data. Whether it is a new workflow or an opened existing flow, that reading could start ahead of you hitting the run button.
What do you think 🤔?
I often find myself needing to create unique IDs for a given category. Currently I end up using the Multi-Row Formula tool and leveraging the "group by" option. Enabling the Record ID tool to create a unique count by grouping on distinct categories in an underlying data set would unlock a new level of grouping and consolidate record-keeping functionality in a single tool.
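To make the two flavours concrete, here's a small pandas sketch of what a grouped Record ID could produce (column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"category": ["A", "A", "B", "A", "B"]})

# One stable ID per distinct category (a grouped "Record ID" per category)...
df["category_id"] = df.groupby("category").ngroup() + 1

# ...or a counter that restarts within each category (what the
# Multi-Row Formula + Group By workaround produces today).
df["row_in_category"] = df.groupby("category").cumcount() + 1
print(df)
```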
Now that we have a Snowflake Bulk Loader option, it would be great to utilize the built-in Snowflake internal staging. This eliminates the need for an end-user to have the technical know-how or access to IT resources to utilize a separate S3 bucket and generally reduces friction in the process.
There was pretty widespread support in the original Bulk Load thread: https://community.alteryx.com/t5/Alteryx-Designer-Ideas/Snowflake-Bulk-Loader/idi-p/105291/page/2#co...
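For illustration, this is roughly what the loader could do under the hood with the internal table stage; a sketch using the Snowflake Python connector (the account, credentials, table and file path are placeholders):

```python
import snowflake.connector  # assumes the Snowflake connector is installed

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="***",
    warehouse="my_wh", database="my_db", schema="public",
)
cur = conn.cursor()

# Stage the file in the table's internal stage (no S3 bucket required)...
cur.execute("PUT file://C:/temp/orders.csv @%orders AUTO_COMPRESS=TRUE")

# ...then bulk load it.
cur.execute("COPY INTO orders FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
```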
Cross Tab automatically alphabetizes the column headers, which can be a little awkward when unioning on column position later on. It would be nice to have this as an optional feature through a tick box on the tool.
It would be great to have the below functionality in Alteryx.
A workflow is built in Alteryx, and a button click in Alteryx could be used to generate SQL code that can be run on a specific database platform, such as SQL Server, in external editors such as SQL Server Management Studio. Thanks.
Hi there,
We often get the following error message from the download tool
"00:00:23.555 - Error - ToolId 106: Error in libCURL: You have found a bug. Replicate, then let us know. We shall fix it soon."
Unfortunately this seems to be a transient error so we've not been able to replicate this in a useful & repeatable way. However - we see this happening at least a few times per week on one of our servers, so this is a continuing issue.
Please could you provide more detailed error messaging on the download tool so that this error can be debugged and/or replicated?
Many thanks
Sean
Hi to all,
I have seen one or two posts requesting the ability to total up rows and/or columns of numbers; however, this idea also requests the ability to subtotal data by a field and to produce an overall total.
This could be an extension to existing tools such as 'Summarise' and 'Cross Tab' or could be a stand-alone tool. The desired output would be the detail rows with a subtotal for each group plus a grand total (a rough sketch of that shape is shown below).
This would be incredibly useful for building reports within Alteryx as well as analysing the data, and would cut down the number of tools currently required to produce this. I have seen a third-party tool which does some of this, but this idea adds the ability to subtotal.
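To show the shape I mean, here's a small pandas sketch (field names and values are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"Region": ["North", "North", "South", "South"],
                   "Sales": [100, 150, 80, 120]})

parts = []
for region, grp in df.groupby("Region"):
    parts.append(grp)  # the detail rows for this group
    parts.append(pd.DataFrame({"Region": [f"{region} subtotal"],
                               "Sales": [grp["Sales"].sum()]}))

# Overall total at the bottom.
parts.append(pd.DataFrame({"Region": ["Grand total"],
                           "Sales": [df["Sales"].sum()]}))

report = pd.concat(parts, ignore_index=True)
print(report)
```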
thanks - Roger
Hello!
I remember a while ago running into a peculiar error:
'The R.exe exit code (4294967295) indicated an error'. This was peculiar, as the data output was still seemingly correct; however, the error made me double-check the community for answers.
There are some very technical sources here:
https://community.alteryx.com/t5/Alteryx-Designer-Discussions/R-tool-Fake-Errors/td-p/25163
https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Boosted-Model-Error/td-p/5509
but in short, this seems to be caused by a return code from C++ libraries being understood by R as an error. It's a very inconsistent error, typically caused by low memory. This creates what most call a 'fake error' - the code runs perfectly fine, but seems to produce an error that doesn't actually indicate anything wrong.
Within those threads, it's also stated that calling the garbage collection function (gc()) does tend to solve the problem on R exit; however, this requires a user to understand basic R and have access to the macro to be able to change the code - thus making predictive analytics more intimidating than it already is for new Alteryx users.
The first occurrence of this error seems to be way back in 2015, however the error is still being reported by users (see posts from 2020 and 2021):
https://community.alteryx.com/t5/Alteryx-Designer-Discussions/Password-protected-Excel-files-R-solut...
https://community.alteryx.com/t5/Alteryx-Designer-Knowledge-Base/Error-The-R-exe-exit-code-n-indicat...
An important issue with these 'fake errors' is not only that they cause confusion, but also that they will cause analytic apps and server workflows to not work as expected and stop running, depending on the configuration.
My suggestion would be to revisit this issue, as by my understanding it occurs inconsistently and calling garbage collection does not always seem to fix it. Even if the error message is still created, it may be worth Alteryx suppressing these errors in cases where they are not real errors.
Steps to reproduce:
(as mentioned, it's very inconsistent)
1. Open the Boosted Model example workflow
2. Multiply the maximum number of trees by 10 in the Boosted Model configuration (Model customization)
3. Run the workflow, inspect the results (which are seemingly correct), and the error message in the results window.
Hope this helps!
TheOC
The introduction of a Rank tool would be hugely beneficial. Whilst there are currently means to rank using a combination of other tools (Formula, Running Total, Multi-Row, etc.), a specific "Rank Tool" would provide a seamless and smoother way to rank your data, either for further analysis or purely to output this field.
This tool should include sort by and group by functionality, as well as options for ranking (such as dense ranking or unique ranking) and, in addition, multiple levels of ranking (i.e. rank by "Field A", then by "Field B", etc.).
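To make the options concrete, here's a small pandas sketch of the kind of output such a tool could produce (field names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"region": ["E", "E", "E", "W", "W"],
                   "sales":  [50, 75, 75, 30, 90]})

# Dense rank within each group, highest sales first (ties share a rank).
df["rank_in_region"] = (df.groupby("region")["sales"]
                          .rank(method="dense", ascending=False)
                          .astype(int))

# Multi-level ordering: by region, then by sales descending.
df = df.sort_values(["region", "sales"], ascending=[True, False])
print(df)
```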
We aren't getting a huge amount of help from support on this, so I'm posting this idea to raise awareness for the product teams responsible for the Salesforce connectors and the embedded Python environment.
This post from user Dubya describes the issue in detail:
I have a workflow with several salesforce tools in it, which works fine on my machine. But we need another alteryx user in our office to be able to access, run and maintain the workflow too, via their machine and copy of alteryx designer.
However we're finding that the salesforce inputs and outputs can only be authenticated on one machine at a time.
When the other new user opens the original workflow from the shared network location, the salesforce tools display an error "Salesforce Input (1): {'error': 'invalid_grant', 'error_description': 'authentication failure'}" and the tools fail to load any data. But we can see the full query in the tool and we can even set the custom query option and validate the query successfully, which suggests the source is being correctly connected to and queried, but we just cant run the tool.
The only way to run the tool successfully is to change the credentials and re-authenticate the tool. However, this then de-authenticates the original machine, and when we open up the workflow on there and try to run it, the workflow brings back the same error.
We've both tried this authentication back and forth on our own machines and each time one of us re-authenticates, it de-authenticates the other, leading to it triggering the error.
Can someone help explain what's going on and how to fix it, as this doesn't bode well for our collaboration.
We're both running:
The latest build of Designer version 2021.2 (the original machine is also running Desktop Automation)
Salesforce Input Tool v4.1.0
Salesforce Output Tool v1.3.0
My response here identifies that this is a problem for our organization as well:
We're experiencing the same issue. It appears to be related to how the tool handles password and security token decryption. I've found that when you modify the related registry entry from "true" to "false", you can see in the tool's xml that the encrypted password and security token are still in there. I'm not sure what else is going on behind the scenes beyond that, but that ought to be addressable by the product teams handling the Salesforce connectors and the Python installation embedded in Designer.
The only differences in our environment compared to u/Dubya's are that we're running on 2020.4 and attempting to use Salesforce Input Tool v4.2.4.
This is a must have for anyone who needs the ability to share workflows among multiple users. This is part of a series of problems that these updated connectors have been plagued with since introducing them years ago, and no one at Alteryx seems to care enough to truly fix the problems. Salesforce is a core system for our organization, so having tools that utilize the latest version of Salesforce's APIs is very important to us. The additional features that the Input tool provides are welcome, but these bugs have to be sorted out in order for us to extract any kind of value out of them. If the "deprecated" Salesforce tools were ever to be removed from Designer while there are issues with the "new" connectors, we would have no choice other than to never upgrade Designer/Server again and be forced to look for another product to serve as our ETL platform.
Please, please, please address this.