Hello,
After using the new "Image Recognition Tool" for a few days, I think you could improve it:
> by displaying the dimensional constraints in front of each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so that there is an equivalent number of images for each label),
> and finally, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me it requires RGB images); see the workaround sketch below.
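In the meantime, here is a minimal workaround sketch, assuming the tool genuinely rejects single-channel images and that the images sit on disk as PNG files (folder names are placeholders): pre-convert the grayscale files to 3-channel RGB before pointing the tool at them.

```python
# Hedged workaround sketch: convert grayscale images (e.g. MNIST exported as PNGs)
# to 3-channel RGB copies so an RGB-only tool will accept them. Folder names are placeholders.
from pathlib import Path
from PIL import Image

src, dst = Path("mnist_grayscale"), Path("mnist_rgb")
dst.mkdir(exist_ok=True)
for path in src.glob("*.png"):
    Image.open(path).convert("RGB").save(dst / path.name)
```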
Question: will you allow the user to choose between CPU and GPU usage in the future?
In any case, thank you again for this new tool. It certainly has room for improvement, but it is very simple to use, and I sincerely think it will help a greater number of people understand the many use cases made possible by image recognition.
Thank you again.
Kévin VANCAPPEL (France ;-))
Hey all,
At present, if you have an existing canvas and you want to move to a DCM connection, you are asked something like "this will reset all of your connection details - are you sure?" If you have complex queries or pre/post SQL, you first have to copy all of this out into Notepad before you can convert to DCM and then reconfigure it all again.
However, if you are not using DCM, you can change data sources in Workflow Dependencies without losing your queries, etc.
Could we revisit the user experience of changing to or from a DCM connection to eliminate this "start from scratch" problem? If you are converting from an existing SQL ODBC or ODB or SSVB connection to a SQL connection via DCM, it should allow you to make this conversion without losing your current configuration, and the same should hold for any other database type.
cc: @mbarone
Please consider implementing a consistent case-sensitive option for all tools and functions.
This post had a good description of the challenge of comparing string values with case-sensitivity, but it has since been archived.
For all the time I've used Alteryx, I thought that IF "test" = "TEST" would evaluate to false. Today I realised that isn't the case, and I'm very surprised that "equals" performs the way it does.
A few existing Ideas request case-sensitivity for individual tools:
Case insensitive option while joining two data sets
https://community.alteryx.com/t5/Alteryx-Designer-Desktop-Ideas/Case-insensitive-option-while-joinin...
Unique tool enhancement - deal with case sensitive data
https://community.alteryx.com/t5/Alteryx-Designer-Desktop-Ideas/Unique-tool-enhancement-deal-with-ca...
This new Idea requests system-wide consideration for case-sensitivity, for all tools and functions.
Current state:
These tools and functions are case-sensitive:
These tools and functions are NOT case-sensitive:
These tools and functions can be either case-sensitive or NOT case-sensitive, depending on the options used:
Current Challenges:
How do we easily identify Lower Case, Upper Case, Mixed Case?
How do we easily compare strings for equality, using case sensitivity?
Request:
Ensure all tools and functions include an option to ignore or consider Case
Create new functions for IsUpperCase, IsLowerCase, IsMixedCase
Create a new function for IsEqual, with an option to ignore or consider Case
See the attached workflow.
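To make the request concrete, here is a minimal Python sketch (not Alteryx formula syntax, just an illustration of the requested behaviour) of what IsUpperCase, IsLowerCase, IsMixedCase and a case-aware IsEqual could do:

```python
def is_upper_case(s: str) -> bool:
    return s == s.upper() and s != s.lower()

def is_lower_case(s: str) -> bool:
    return s == s.lower() and s != s.upper()

def is_mixed_case(s: str) -> bool:
    return not is_upper_case(s) and not is_lower_case(s)

def is_equal(a: str, b: str, case_sensitive: bool = True) -> bool:
    # With case_sensitive=False this mirrors how "test" = "TEST" currently evaluates to true.
    return a == b if case_sensitive else a.casefold() == b.casefold()

print(is_equal("test", "TEST"))            # False (case considered)
print(is_equal("test", "TEST", False))     # True  (case ignored)
```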
Hello All,
As of today, Alteryx can use the proxy settings set in the Windows Network and Internet settings: "Server pulls the proxy settings displayed in Engine > Proxy from the Windows internet settings for the user logged into the machine. If there are no proxy settings for the user logged into the machine, Engine > Proxy isn't available within the System Settings menu." You can then override the credentials (but not the address) in System Settings, and also in User Settings.
The issue: in many organizations there are several proxies used for different use cases, and by default it can happen that access to APIs is blocked by these proxies. A user who is not an admin cannot change their Windows settings... and even if IT does it for them, it impacts the whole system, including other software, and can lead to security failures.
What I suggest:
- the ability to change credentials AND the address
- multi-level settings for both credentials and address (a rough illustration follows the list below):
default: Windows Settings
System Settings
User Settings
Workflow Settings
Download Tool Settings
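Purely as an illustration of the multi-level idea (all names and values below are hypothetical), each level would override whatever the previous one defined:

```python
# Illustrative sketch only: resolve proxy settings level by level, with each later level
# overriding the previous one (Windows -> System -> User -> Workflow -> Download tool).
def resolve_proxy(*levels: dict) -> dict:
    resolved = {}
    for level in levels:
        resolved.update({k: v for k, v in level.items() if v is not None})
    return resolved

windows_settings  = {"address": "http://proxy.corp:8080", "user": "svc_user", "password": "***"}
user_settings     = {"address": "http://proxy-api.corp:8080", "user": None, "password": None}
workflow_settings = {"address": None, "user": "api_user", "password": "***"}

print(resolve_proxy(windows_settings, user_settings, workflow_settings))
# {'address': 'http://proxy-api.corp:8080', 'user': 'api_user', 'password': '***'}
```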
Best Regards,
Simon
I'd like to see Alteryx allow a second install of your license on a second, personal machine. Tableau allows this and IMO is why there is such a robust online / blog community around that product.
For those of us who work at mid-size to large organizations, there are often strict rules governing internal data and the use of cloud-based data sources. If I discover some new trick I'd like to share with fellow Alteryx analysts outside my company, I have no clear way to do that, unlike with Tableau, where I can do it at home without using my company's data.
Being able to learn new features and test things out on commonly available public data (ever notice that Superstore data set everyone who gets Tableau has?) would accelerate what we're able to do with the community site here and the larger analytics blogging community.
Hello all,
as of today, an in-DB join can only be done with an equality operator.
Example: table1.customer_id = table2.customer_id
This is sufficient most of the time. However, sometimes you need to perform another kind of join operation (especially with calendar or period tables, etc.).
Here is an example of a clause you can find in existing SQL that helps to solve that case:
inner join calendar on calendar.id_year_month between fact.start_period and fact.end_period
(The workaround I use today: I do a full cartesian product with a join on 1=1 and then filter the rows for the BETWEEN condition.)
The same goes for other operators such as <, >, and so on.
This can be very useful for solving the most difficult issues. Note that a product like Tableau already offers this feature.
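For illustration, here is a minimal pandas sketch (pandas ≥ 1.2) of the cartesian-product-then-filter workaround described above; the table and column names are just placeholders:

```python
import pandas as pd

# Placeholder stand-ins for the fact and calendar tables.
fact = pd.DataFrame({"amount": [100, 200],
                     "start_period": [202201, 202204],
                     "end_period":   [202203, 202206]})
calendar = pd.DataFrame({"id_year_month": [202201, 202202, 202203, 202204, 202205]})

# Workaround: join on 1=1 (full cartesian product), then filter on the BETWEEN condition.
crossed = fact.merge(calendar, how="cross")
joined = crossed[crossed["id_year_month"].between(crossed["start_period"],
                                                  crossed["end_period"])]
print(joined)
```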
Best regards,
Simon
When searching for a workflow in the application, we severely struggle to locate the workflows we need. The same thing happens when searching in the Gallery. The text entered doesn't seem to be matched against the workflow name, nor does the search support standard search-engine behaviour, e.g. quoting a "search term" to return all and only results that contain exactly that phrase.
Example:
WF Name: "Magic_Workflow_business_purpose"
We can search for various parts of that name, but for THIS particular workflow, let's say only the search term "business" works.
It seems to be completely inconsistent. We've had MANY circumstances where NO entered search parameters return the desired results and we find ourselves having to sort all workflows by name and slowly scroll through (waiting for more to load) until we locate the named workflow. Out of all the amazing things Alteryx can do, if we can't find the work we've developed in it, we can't use it.
Thank you!
Regards, MAKpfe
Note: This idea doesn't strictly fit into any given category as it involves enabling support for something that affects numerous aspects of Alteryx's already existing spatial features.
I live in Australia, as do a large number of your users. Like me, many of those users use Alteryx to process spatial data. There is only one problem: we live on a roving continent. Every year our continent shifts ever so slightly, but over time that shift becomes significant. For this reason we have our own continental system of spatial coordinate projections: the Geocentric Datum of Australia, or GDA.
Since 2000, the official Australian geodetic datum has been GDA94. However, as the Intergovernmental Committee on Surveying and Mapping (ICSM) explains, because the coordinates of features on our maps, such as roads, buildings and property boundaries, are all based on GDA94, they do not change over time even though the continent itself keeps moving. This is why a new datum has since been adopted: GDA2020. It has now become the standard for mapping in Australia, bringing Australia's national coordinates into line with global satellite positioning systems.
A more detailed explanation of this can be found on the ICSM's website: What is changing and why? | Intergovernmental Committee on Surveying and Mapping (icsm.gov.au).
Of course Alteryx supports the more global WGS84 standard, which like GDA94 is a fixed datum. But there is up to a 1.8 metre discrepancy between GDA94 (and WGS84) and GDA2020. For spatial analysis projects that don't require metre accuracy that's not a problem. But imagine you are building a bridge, plotting the lanes of a road or programming a GPS enabled tractor. That 1.8 metre discrepancy between the real world coordinates and the projection is enough to cause problems.
And it is, which is why we request that Alteryx include support for GDA2020 in its existing selection of spatial projections.
This would allow spatial datasets configured in GDA2020 to be used without conversion, and therefore without the risk of corruption or error. This includes providing the ability to configure GDA2020 as the spatial projection in the Input tool and in all spatial tools.
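Until native support exists, one hedged workaround sketch is to re-project coordinates outside Alteryx with pyproj; EPSG:4283 (GDA94) and EPSG:7844 (GDA2020) are the registry codes I believe apply, but please verify them for your data:

```python
# Hedged sketch: transform a GDA94 lon/lat pair to GDA2020 using pyproj.
# EPSG:4283 = GDA94, EPSG:7844 = GDA2020 (verify these codes for your use case).
from pyproj import Transformer

transformer = Transformer.from_crs("EPSG:4283", "EPSG:7844", always_xy=True)
lon_gda94, lat_gda94 = 151.2093, -33.8688        # approximate Sydney coordinates
lon_gda2020, lat_gda2020 = transformer.transform(lon_gda94, lat_gda94)
print(lon_gda2020, lat_gda2020)
```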
Doing so would go a long way to supporting your ever growing Australian user base and maintaining Alteryx's position as a trusted software for processing spatial data.
Hello!
As many of you know, I'm a big fan of Alteryx Apps. However, one of the most painful parts of Alteryx Apps is moving elements around in the Interface Designer. Currently, when you have many elements in your Interface Designer:
And add a new element from the dropdown, or through a new tool:
It is added to the bottom of the interface. Moving it to the top is currently done with the arrows; however, this is very slow, especially when you have many interface elements:
Currently (with 9 radio buttons) it takes 18 clicks (each taking a couple of seconds due to delay between movements) to move it, because it moves between each step:
It would be fantastic if we could drag and drop the elements of the interface to where we like, for speed of development and ease of use.
Thanks,
TheOC
Please add support for Databricks' Unity Catalog
Currently, when selecting a Databricks connection in the "Connect In-DB" tool and opening the "Query Builder", only tables in the catalog named "hive_metastore" are listed. That is, Alteryx submits the following SQL query to Databricks:
Listing tables 'catalog : hive\_metastore, schemaPattern : %, tableTypes : null, tableName : %'
However, with Unity Catalog in Databricks the namespace is three-level (catalog.schema.table) and there may be multiple catalogs (not just the "hive_metastore" catalog); see https://docs.microsoft.com/en-gb/azure/databricks/lakehouse/data-objects#--what-is-a-catalog
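For example, with the databricks-sql-connector Python package (the hostname, HTTP path, token and the catalog/schema/table names below are all placeholders), catalogs beyond hive_metastore can be listed and tables are addressed with the three-level identifier:

```python
# Hedged sketch using the databricks-sql-connector package; connection details and
# object names are placeholders.
from databricks import sql

with sql.connect(server_hostname="<workspace-host>",
                 http_path="<sql-warehouse-http-path>",
                 access_token="<personal-access-token>") as conn:
    with conn.cursor() as cur:
        cur.execute("SHOW CATALOGS")                      # more than just hive_metastore
        print(cur.fetchall())
        # Unity Catalog objects are addressed as catalog.schema.table
        cur.execute("SELECT * FROM main.sales.orders LIMIT 10")
        print(cur.fetchall())
```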
I reached out to Alteryx support, who replied that there is currently a feature request for implementing this change (ID TDCB-4056) and suggested that I also post the idea here.
Thanks in advance.
It would be oh so nice to be able to copy a container's properties and paste those formatting options onto other containers. It could be accomplished through a Paint Brush icon, or via Ctrl-Copy and a right-click to paste the format. Either way, it would save setting the color (a multi-step selection), margin, and transparency each time.
Cheers,
Mark
When you bring in a Comment box or Tool Container to your canvas, it should come in with your preferred defaults for fill colors, font color & size, etc. I have specific color schemes to identify what my comments are for, and the scheme I use most often has a font size, position & color, and background color that I have to set every time I bring in a new Comment box.
I LOVE working in Alteryx because, unlike Excel, you set up a "macro" to perform repeating operations once and then ignore it while you do your real work. This concept should extend to the little things within Alteryx, i.e. settings for preferred defaults for Comment boxes & Tool Containers.
I'm testing out the new Data Connection Manager (DCM) and think it needs one enhancement based on the way we'd use DCM. Whenever Alteryx opens, it should sync the data sources/credentials from the Server. This is critically important when sharing them with more than one user and a password needs to be updated.
For example, DesignerA updates the password in the source system and then updates the password in their Designer DCM settings. In order for this new password to get synced to DesignerB, two things have to happen manually: DesignerA needs to sync the new password to the Server, and then DesignerB needs to sync the new password from the Server. I can live with the first part, where DesignerA syncs to the Server; it's just part of the password update process. The second step, though, seems perilous. DesignerB should get the new password without having to do anything; as things currently stand, DesignerB will have the old password until they manually intervene and sync it. Now imagine a scenario where it's not just DesignerB, but hundreds of people who would all have to sync their credentials.
I also think this idea makes sense in light of the way Data Connections currently work (pre-21.4), where a similar sync happens automatically every time Alteryx opens.
It would be wonderful for Alteryx to be able to connect to and query OData feeds natively, rather than using a 3rd-party driver or custom macro.
OData querying is supported by quite a few familiar products, including Excel and Power BI, SSIS/SSRS, Safe Software's FME, Tableau, and many others. And the protocol is used to publish feeds from Microsoft Dynamics and SharePoint, as well as many of the 10,000+ publicly available government datasets with APIs (esp. those hosted by Socrata).
I didn't see it in the Idea section, but questions and workarounds have been discussed in the community a few times (11/15, 3/18, 4/18), and the suggestions seem to be to buy the $400-600 ODBC driver from CDATA (or ZappySys), use a VBA script in Excel to trigger a refresh, or create my own Alteryx connector macro (great series btw, though most of it was beyond my understanding!).
While I'm not opposed to paying, kludging, or learning to program, each of those is just one more thing to build/buy, install, maintain, and have break at the most inconvenient time 🙂
Thanks,
Chadd
OData Overview:
OData (Open Data Protocol) is an ISO/IEC approved, OASIS standard that defines a set of best practices for building and consuming RESTful APIs. OData helps you focus on your business logic while building RESTful APIs without having to worry about the various approaches to define request and response headers, status codes, HTTP methods, URL conventions, media types, payload formats, query options, etc. OData also provides guidance for tracking changes, defining functions/actions for reusable procedures, and sending asynchronous/batch requests. OData RESTful APIs are easy to consume. The OData metadata, a machine-readable description of the data model of the APIs, enables the creation of powerful generic client proxies and tools.
More info at http://odata.org
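As a quick illustration of how simple native consumption could be, here is a minimal Python sketch that queries a public OData demo feed with standard query options; the service URL, entity set and field names are example values, not anything Alteryx-specific:

```python
# Hedged sketch: query an OData feed over plain HTTP using standard query options
# ($filter, $select, $top). The service URL and entity set below are example values.
import requests

base_url = "https://services.odata.org/V4/Northwind/Northwind.svc"
params = {
    "$filter": "Country eq 'Germany'",
    "$select": "CustomerID,CompanyName,City",
    "$top": 5,
    "$format": "json",
}
response = requests.get(f"{base_url}/Customers", params=params, timeout=30)
response.raise_for_status()
for row in response.json().get("value", []):
    print(row)
```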
Idea: Prompt the user to find a missing macro instead of the current UX of a question mark icon.
Issue: When a macro referenced in a workflow is missing, there is no way to a) know what the name of the missing macro was (assuming you were lazy like me and didn't document it with a comment) and b) find the macro so you can get back to business.
When this happens to me now, I have to go to the XML view, search for macros, and then cycle through them until I find the one that's missing. Then I have to either copy the macro back into that location or manually edit the workflow XML. Not cool, man.
Solution: When a macro is missing, the image below at the right should be shown. In the properties window, a file browse tool should allow the user to find the macro.
I constantly find myself using pre- and post-SQL commands in the Output tool to run SQL when I don't actually have any data to output.
One example is when I load data into S3 and want to load it into Redshift. I have SQL code to run but no data to output, so I end up writing a dummy row into a temp table.
So can we have a SQL tool that simply acts the same as a pre-SQL command, without the associated data output? Once the command has run we should be able to continue the workflow, so the tool should have an optional input and output, like the Run Command tool.
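In the meantime, one way to picture what such a tool would do is a small Python sketch that just executes a statement and passes nothing downstream; the DSN name, schema, S3 path and IAM role below are placeholders:

```python
# Hedged sketch: run a standalone SQL statement (here a Redshift COPY from S3) with no
# data output. The DSN, table, S3 path and IAM role are placeholders.
import pyodbc

conn = pyodbc.connect("DSN=my_redshift_dsn", autocommit=True)
cur = conn.cursor()
cur.execute("""
    COPY analytics.staging_sales
    FROM 's3://my-bucket/unload/sales/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy'
    FORMAT AS CSV;
""")
conn.close()
```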
It would be useful to be able to select a single container (containing a data input) or multiple containers using Shift, and run those and only those.
When building a new element of a larger workflow, I often add a new Input in a new container; the ability to run just that container, without having to turn off all my other containers, would be really useful in speeding up the start of joining things together.
Hope that makes sense.
Thanks,
Doug
Credit to @pgdelafuente in his post Export Variables from Assisted Modelling Feature I... - Alteryx Community
This came up in a call with a large client: basically, there's no easy way to output the feature importance plot or the accuracy metrics of the selected model (e.g. root mean squared error, correlation, max error, etc.). I would assume this is an easy addition to the Assisted Modeling tools, and it could also be useful for some of the Predictive tools!
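For context, the kind of output being asked for is straightforward to produce in plain Python (e.g. via the Python tool); here is a minimal sketch with scikit-learn, where the dataset and model are placeholders and not what Assisted Modeling actually uses:

```python
# Hedged sketch of exporting feature importances and accuracy metrics for a fitted model;
# the data and model choice are placeholders.
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, max_error

X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)
pred = model.predict(X)

importances = pd.DataFrame({"feature": [f"x{i}" for i in range(X.shape[1])],
                            "importance": model.feature_importances_})
metrics = pd.DataFrame({"rmse": [mean_squared_error(y, pred) ** 0.5],
                        "max_error": [max_error(y, pred)]})
print(importances)
print(metrics)
```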
Most of the time I don't want/need the column that I parsed. Provide a check box to choose whether the original (root) column is included in the output.
Hi!
Just thought up a simple improvement to the US Geocoder macro that could potentially speed up the results. I'm doing an analysis on some technician data where they visit the same locations over & over again. It's a full-year analysis (200k+ records) and the geocoder takes a while to churn through that much data. In my data, the same addresses appear over & over again, yet the geocoder still processes each record individually.
What I did in my process, and what could be added to the macro, is to put a Unique tool into the process based on address, city, state, and zip, geocode the reduced list, and then simply join back to the original data stream using a join on the address, city, state, and zip fields (or use a Record ID tool to create a unique process ID to join on).
In my case, the 200k records were reduced to 25k, which Alteryx completed in under a minute, then joined back so my output was still the 200k records (all geocoded now).
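In pandas terms, the pattern looks roughly like this; the sample data and the geocode step are placeholders for whatever the macro actually does:

```python
import pandas as pd

# Placeholder data standing in for the 200k technician visit records.
records = pd.DataFrame({
    "tech_id": [1, 2, 3],
    "address": ["1 Main St", "1 Main St", "9 Elm Ave"],
    "city":    ["Springfield", "Springfield", "Springfield"],
    "state":   ["IL", "IL", "IL"],
    "zip":     ["62701", "62701", "62702"],
})
keys = ["address", "city", "state", "zip"]

def geocode(addresses: pd.DataFrame) -> pd.DataFrame:
    # Hypothetical stand-in for the real geocoder: attach coordinates to each unique address.
    out = addresses.copy()
    out["lat"], out["lon"] = 39.78, -89.65
    return out

unique_addresses = records[keys].drop_duplicates()   # e.g. 200k rows -> 25k rows
coded = geocode(unique_addresses)                     # geocode only the unique list
records = records.merge(coded, on=keys, how="left")   # join results back to every record
print(records)
```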
Not everyone will have this many duplicates, but I'd bet most data has a few, & every little bit of time savings helps when management is waiting on the results haha!
Hello all,
I suggest a new string function Repeat()
Repeat() forms a string consisting of the input string repeated the number of times defined by the second argument.
Repeat(text[, repeat_count])
Repeat('to',3) gives tototo
It's also a standard SQL function
https://www.w3schools.com/sql/func_mysql_repeat.asp
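For reference, the equivalent behaviour is a one-liner in Python, which shows the intended semantics:

```python
def repeat(text: str, repeat_count: int) -> str:
    # Same semantics as the proposed Repeat(): the input string repeated repeat_count times.
    return text * repeat_count

print(repeat("to", 3))   # tototo
```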
Best regards,
Simon