Hello,
After using the new "Image Recognition Tool" for a few days, I think you could improve it:
> by adding the dimensional constraints in front of each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so that each label ends up with an equivalent number of images),
> lastly, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me it strictly needs RGB images; see the workaround sketch below).
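In the meantime, I convert the images myself before loading them. A minimal Pillow sketch, assuming single-channel image files on disk (file names are illustrative):

```python
from PIL import Image

# Replicate the grayscale channel into R, G, and B so an RGB-only tool accepts it.
img = Image.open("mnist_digit.png")              # single-channel ("L") image
img.convert("RGB").save("mnist_digit_rgb.png")   # same pixels, three channels
```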
Question: will you, in the future, allow the user to choose between CPU and GPU usage?
In any case, thank you again for this new tool. It is certainly perfectible, but it is very simple to use, and I sincerely think it will allow a greater number of people to understand the many use cases made possible by image recognition.
Thank you again
Kévin VANCAPPEL (France ;-))
Our company often builds applications where we need dropdowns that update dynamically based on a user's previous selections.
For example: a first dropdown selects a Region, and a second dropdown then only offers the Cities within that Region.
We can do this in other programs, but unfortunately the lack of dynamic selections/dependent dropdowns is a big limitation for us when building Alteryx applications. Our current workarounds are chaining applications together or using PyQt within the workflow. Chaining is clunky and often causes unforeseen issues with non-descriptive errors when uploading to Server, and PyQt comes with Python versioning issues.
If this interactivity can somehow be added to Alteryx applications it would be a huge upgrade to our current Alteryx processes. Any suggestions for further workarounds would also be helpful!
Thank you,
Amanda
Multi-Fill Tool
Please consider a new Multi-Fill tool, not for Apps, but for regular workflows, manually run or scheduled.
Similar to the Interface tool combination of the Text Box and Action (Update Value) tools, this Multi-Fill tool would enable the user to update, for example, the User Name and Password in one place for multiple Download tools. It could also be used to update variables in other tools such as Filter, Sort, Unique, etc.
Hello Alteryx Devs -
When I go to write some scripting in the Formula tool, the fields from my data stream should be the first suggestions once I start typing a letter, not the last.
Typing uppercase(Ad gives me:
DateTimeAdd
FileAddPaths
PadLeft
PadRight
ReadRegistryString
[Address]
I think we would need a dedicated R macro to ascertain the chances that anyone is going to need [ReadRegistryString] before they need a column of their own data that starts with [Ad...].
Easy fix. Makes a big difference.
Thanks.
Dear Alteryx GUI Gang,
I'll create a container, customize its colours, margins, transparency, and border, and then want consistency for my other containers. It would be nice to have a format painter function (brush) to apply the format of one container to another. This could of course be extended to other tools, like comments. There might be a desire to apply it to more tools too, but comments and containers would be my focus, as they are almost always custom configured.
Cheers,
Mark
Right now it is not possible to open .xlsx files in Alteryx that have access restricted to specific users from within Excel, even when you are logged in to Alteryx and Excel as the same user. If Alteryx could be made to recognize which users/email addresses are allowed to input a file, I think it would be a great enhancement. To get around the problem we currently change the file restrictions by right-clicking the file -> Properties -> Security, but this is time consuming and not a smooth fix.
All the best,
Elin
Please add the Parquet data format (https://parquet.apache.org/) as a read-write option for Alteryx.
Apache Parquet is a columnar storage format available to any project in the Hadoop ecosystem, regardless of the choice of data processing framework, data model or programming language.
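For anyone unfamiliar with the format, here is a minimal illustration of the requested read-write support, sketched in Python with pandas and the pyarrow engine (file and column names are illustrative, not part of any Alteryx API):

```python
import pandas as pd

# Write and read back a Parquet file; this round trip is what we'd like natively.
df = pd.DataFrame({"id": [1, 2, 3], "value": ["a", "b", "c"]})
df.to_parquet("example.parquet", engine="pyarrow")
back = pd.read_parquet("example.parquet")
assert back.equals(df)  # columnar storage preserves the data and dtypes
```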
Thank you.
Regards,
Cristian.
I know this has been posted before, but the posts are fairly old, and I have just confirmed with Support that it is still an issue. Seems to be a pretty basic request, so I'm putting it out there again under this new heading.
The issue is that if you have data in a field separated by new lines (\n), it will show up fine in a Browse tool or pretty much any other output (database file, Office document file, etc.). But if you try to use the Table tool under Reporting, it ignores the line breaks and strings the data together.
Example:
The field data looks like this in a browse or most other outputs:
Hello, my name is
Michael Barone
and I love
Alteryx
But when I try to pull this field into a Table Tool, it shows up like this:
Hello, my name is Michael Barone and I love Alteryx
Putting this out here again in hopes that it gets lots and lots of stars so it gets put on the road map!!
Hello all,
When looking at the Results window, I often find it a headache to read the numeric results because of the lack of commas. I understand that incorporating commas into the data itself could make for some weird errors; however, would it be possible to toggle an option that displays all numeric fields with proper commas and right-aligned in the Results window? I am referring to using a display mask to make numeric fields look like they have the thousands separator while retaining numeric functionality (as opposed to converting the fields to strings).
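For illustration, this is the kind of mask I mean, sketched in Python purely to show the formatting (not how Designer would implement it):

```python
# Format for display while the underlying value stays numeric.
value = 1234567.89
print(f"{value:>15,.2f}")  # '   1,234,567.89' (right-aligned, thousands separators)
```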
What do you think?
It'd be great to have all DCM connections available in the Data Connections window.
And when "Use Data Connection Manager (DCM)" is ticked, the screen should default to the DCM connection list.
Hello!
A quite minor, pedantic issue from me today.
Currently, the Oversample Field tool's naming and configuration suggest that the tool oversamples data.
However, I would argue the tool undersamples data instead.
Effectively, both approaches aim to end up with a similar (or equal) number of records in each class. Undersampling takes samples from the majority class, ending up with a smaller dataset than you started with. Oversampling duplicates records within the minority class, creating a larger dataset.
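To make the distinction concrete, a toy pandas sketch (illustrative only, not the tool's implementation):

```python
import pandas as pd

# Toy imbalanced dataset: 6 records of class A, 2 of class B.
df = pd.DataFrame({"label": ["A"] * 6 + ["B"] * 2})
n_min = df["label"].value_counts().min()  # 2
n_max = df["label"].value_counts().max()  # 6

# Undersampling (what the tool actually does): sample the majority class down.
under = df.groupby("label", group_keys=False).apply(lambda g: g.sample(n_min))

# Oversampling (what the name implies): duplicate minority records up.
over = df.groupby("label", group_keys=False).apply(lambda g: g.sample(n_max, replace=True))

print(len(under), len(over))  # 4 12 - smaller vs. larger than the original 8
```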
When using the Oversample tool within Alteryx (with the example workflow for reference) and summarizing both the input and the output, it's clear that the data has actually been undersampled: random samples have been taken from the majority class to match the minority, rather than duplicate minority records being created.
I would suggest a quick renaming of the tool to "Undersample Field Tool", along with updated documentation, so as not to confuse new users of the platform.
Kind Regards,
TheOC
There are a few workarounds for this task, but it would be really easy if the Data Cleansing tool could delete null rows and null columns. After all, it's just a macro that can be modified and re-packaged into Alteryx Designer.
Currently, deleting a null row requires validating multiple columns for common null attributes; similarly, deleting a null column requires comparing every column at row level and flagging it for removal. Both of these approaches are clumsy.
Wouldn't it be so simple if the Data Cleansing tool offered such check boxes!
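For what it's worth, the requested behaviour is a one-liner in other environments. A pandas sketch of what the two check boxes would do (data is made up):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "a": [1.0, 2.0, 3.0],
    "b": [np.nan, np.nan, np.nan],   # all-null column
})
df.loc[3] = [np.nan, np.nan]         # all-null row

# Drop rows, then columns, that are entirely null.
cleaned = df.dropna(how="all").dropna(axis=1, how="all")
print(cleaned)  # keeps rows 0-2 and column "a" only
```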
Alteryx Server was recently updated to allow TLS-mediated connections to the MongoDB persistence layer. This allowed us to switch off of the embedded MongoDB to a highly-available MongoDB Atlas cluster. To our surprise after the switch, when we went to edit our workflows that make use of the persistence layer's data (Server Usage Report, etc.) to hit the new Atlas cluster, we found that the MongoDB Input tool does not support TLS connections. This absolutely needs to be changed. Based on organizational constraints, Atlas is our only option for a HA persistence layer. We absolutely have to have TLS support for the MongoDB Input tool. There is no other way for us to natively query our server persistence layer in Designer. Please bring the MongoDB Input tool into alignment with the MongoDB connections that are supported by Alteryx Server.
I recently came to know that Alteryx doesn't support Denodo data sources. At our company we use Denodo as a data virtualization tool and Alteryx for data blending. The request is for Alteryx to start supporting Denodo as a data source, so that our company can reach out to Alteryx for support on any Denodo-related issues.
Highlighted in this post: Solved: DateDiff question - Alteryx Community. Under certain conditions the DateTimeDiff function does not work as you would expect, and I suspect most people would not notice the inaccuracy.
Here is the formula for the Result column below:
DateTimeDiff("2022-11-30",[Date],"months")
| Date | Expected | Result |
| --- | --- | --- |
| 2022-11-15 | 0 | 0 |
| 2022-10-31 | 1 | 0 |
| 2022-09-30 | 2 | 2 |
| 2022-08-31 | 3 | 2 |
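The pattern in the failures is that a month is lost whenever the end day number is smaller than the start day number, even when the end date is the last day of its month. A Python sketch of month arithmetic that reproduces the Expected column (my reconstruction of the intended behaviour, not Alteryx's internals):

```python
import calendar
from datetime import date

def month_diff(end: date, start: date) -> int:
    """Whole months between start and end, treating month-end to month-end as a full month."""
    months = (end.year - start.year) * 12 + (end.month - start.month)
    end_is_month_end = end.day == calendar.monthrange(end.year, end.month)[1]
    # Only knock off a month when the end day genuinely hasn't been reached
    # (2022-11-30 is the last day of November, so Oct 31 -> Nov 30 is 1 month).
    if end.day < start.day and not end_is_month_end:
        months -= 1
    return months

for d in (date(2022, 11, 15), date(2022, 10, 31), date(2022, 9, 30), date(2022, 8, 31)):
    print(d, month_diff(date(2022, 11, 30), d))  # 0, 1, 2, 3 - matches "Expected"
```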
In the Gallery, the File Browse tool returns the file location on the server where the file was uploaded. This allows the file to then be read in as input to a workflow.
If you need the file path of the original file location, you have to add a Text Input for the user to manually add it.
In my case (#00293302), I used a chained app to populate a list box so the user could select the Sheet Names they would like to process through the application. Unfortunately, since I was not able to capture the original file location, the application errored out. This is because the second app uses the file location on the server where the file was uploaded, which is provided by the first workflow. That file location (from the File Browse tool) is a temporary one, and uploaded inputs are deleted immediately after processing.
Want to test this out? Create an application where you output the file path from a File Browse tool.
I know... grrr, it doesn't match your original file location!
Thank you,
Mark
Hello,
I had a business case requiring a cost-effective and quick storage solution for real-time online survey data from customers. A MongoDB instance would fit the need, so I quickly spun up a cluster on MongoDB Atlas. Atlas was launched by MongoDB in 2016 as a database-as-a-service deployed on AWS, and all Atlas instances require TLS/SSL to connect. Currently, the Alteryx MongoDB connector does not support TLS/SSL connections and doesn't work against Atlas. So I was left with a breakdown in my plan that would require manual intervention before ingesting data into Alteryx (not ideal).
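For context, this is all that "TLS support" amounts to on the client side. A minimal pymongo sketch (the URI and credentials are illustrative):

```python
from pymongo import MongoClient

# Atlas only accepts encrypted connections; the Alteryx connector would need
# the equivalent of this tls=True option.
client = MongoClient(
    "mongodb+srv://user:password@cluster0.example.mongodb.net/survey",
    tls=True,
)
print(client.admin.command("ping"))  # verify the encrypted connection works
```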
Please consider expanding this functionality to all connectors. I am building out Alteryx in my agency as a data platform that handles sensitive customer information (name, address, email, etc.). Most tools I use to connect to secure servers today support this type of connection, and this should be a priority for Alteryx to resolve.
Thanks,
Mike Schock
From Wikipedia
Druid is a column-oriented, open-source, distributed data store written in Java. Druid is designed to quickly ingest massive quantities of event data, and provide low-latency queries on top of the data.[1] The name Druid comes from the shapeshifting Druid class in many role-playing games, to reflect the fact that the architecture of the system can shift to solve different types of data problems. Druid is commonly used in business intelligence/OLAP applications to analyze high volumes of real-time and historical data.[2] Druid is used in production by technology companies such as Alibaba,[2] Airbnb,[2] Cisco,[3] eBay,[4] Netflix,[5] PayPal,[2] Yahoo,[6] and the Wikimedia Foundation.[7]
More and more companies are moving from Hive to Druid for their dataviz needs; maybe it's time to look into a Druid integration for Alteryx?
Hi Alteryx Devs -
It would be *really tight* to have a drop-down interface tool that supports autocompletion based on an ODBC connection to a table/column, or an AJAX call. I recently had a situation where we needed to give users the ability to select an address and then run a workflow. But the truth is our address data is terrible, and what I really needed was to let users start typing the address, give them a list of choices to pick from, have them pick the correct (but usually wrongly formatted) address, and then send that value into the workflow.
I could not find a decent way to give a Gallery user a reliable way to pick an address from our list, so I eventually wound up writing an AJAX piece to handle the autocompletion, capture the user input, then post to a service that would in turn interact with the Gallery through the API, get the response, and send it back to the calling page and on to the user. That is a significant amount of work to put into what is an exceedingly common web operation: autocompletion.
This would make a lot of gallery operations flow so much more naturally.
Thanks for listening!
brian
I reported this to the support team but was told it was by design and to post here.
In-DB Inefficient SQL
I would like to report that the In-DB tools generate horribly inefficient SQL code for simple operations. No matter which tools you use, every statement starts with a nested 'SELECT * FROM'.
Example: a simple workflow with just a Select and a Group By generates this SQL:
SELECT "ShipTo", "ShipTo_Name", SUM("ECM_3PL_OVERHEADS_Unit") AS "Sum_ECM_3PL_OVERHEADS_Unit"
FROM (SELECT * FROM "_SYS_BIC"."shell.app.gsap.FL000_LSC.FL002_CTS.INT.RPT/CA_CTS_RPT_MAIN_001") AS "a"
GROUP BY "ShipTo", "ShipTo_Name"
This takes a very long time to execute:
Statement 'SELECT "ShipTo", "ShipTo_Name", SUM("ECM_3PL_OVERHEADS_Unit") AS "Sum_ECM_3PL_OVERHEADS_Unit" FROM ...'
successfully executed in 15.752 seconds (server processing time: 15.699 seconds)
Whereas if I take the same query and remove the nested Select *:
SELECT "ShipTo", "ShipTo_Name", SUM("ECM_3PL_OVERHEADS_Unit") AS "Sum_ECM_3PL_OVERHEADS_Unit"
FROM "_SYS_BIC"."shell.app.gsap.FL000_LSC.FL002_CTS.INT.RPT/CA_CTS_RPT_MAIN_001" AS "a"
GROUP BY "ShipTo", "ShipTo_Name"
It is very quick:
Statement 'SELECT "ShipTo", "ShipTo_Name", SUM("ECM_3PL_OVERHEADS_Unit") AS "Sum_ECM_3PL_OVERHEADS_Unit" FROM ...'
successfully executed in 1.211 seconds (server processing time: 1.157 seconds)
So Alteryx is generating queries up to 13x slower than they should be, thereby defeating the point of using In-DB. As you can imagine, in a workflow with multiple Connect In-DB tools this adds up to a really substantial amount of time. The example above is from an SAP HANA DB with 1.9m rows and ~90 columns, but we have much bigger tables/views than this.
If you look, you will see the same behaviour for all In-DB tools: each tool wraps another nested SELECT around its particular operation.
MY SUGGESTION:
Alteryx should combine the SQL of the first few tools and avoid using SELECT * completely unless no Select tools have been used. So it should combine:
- Connect In-DB + Select
- Connect In-DB + Filter
- Connect In-DB + Summarise
Preferably it should combine/flatten everything up until the first Join or Union, but Select + Filter are a must! A toy sketch of the idea follows.
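Something along these lines, sketched in Python purely to illustrate the flattening (this is not Alteryx's actual query builder; the function names are made up):

```python
# Each In-DB tool composes SQL. Instead of always wrapping the upstream query
# in a new nested layer, fold a Select into the base query when it is safe.
def connect_indb(table: str) -> str:
    return f"SELECT * FROM {table}"

def select_tool(query: str, columns: list[str]) -> str:
    cols = ", ".join(columns)
    prefix = "SELECT * FROM "
    if query.startswith(prefix):
        return f"SELECT {cols} FROM {query[len(prefix):]}"   # flattened
    return f'SELECT {cols} FROM ({query}) AS "a"'            # today's nesting

print(select_tool(connect_indb('"SCHEMA"."BIG_VIEW"'), ["ShipTo", "ShipTo_Name"]))
# SELECT ShipTo, ShipTo_Name FROM "SCHEMA"."BIG_VIEW"
```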
Note: some DBs seem to cope OK with un-nesting these big nested queries in their query plans for some tables, but normally not for views; and some cannot cope at all, so the In-DB tools cannot even be used to browse 100 records (due to the SELECT *).
Tableau v2018.3 introduced multiple table extracts. These are particularly useful for fact table to fact table joins and fact table to entitlement table joins for row-level security where the number of rows created by the join and/or size of join results would be prohibitively large. Also they are useful for fact table to spatial joins where we might have multiple spatial objects (for example custom province/district/health facility catchment) for each row of fact table data.
So in Alteryx I'd like to be able to specify two or more tables and their join keys, and then write out a .hyper multiple-table extract.
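For a sense of what the output side involves, a minimal sketch using Tableau's Hyper API (the tableauhyperapi Python package); the schema, table, and column names are made up, and this only illustrates writing two tables into one .hyper file, not Alteryx's eventual design:

```python
from tableauhyperapi import (
    Connection, CreateMode, HyperProcess, Inserter, SchemaName,
    SqlType, TableDefinition, TableName, Telemetry,
)

# Two related tables destined for the same .hyper extract.
facts = TableDefinition(TableName("Extract", "facts"), [
    TableDefinition.Column("region_id", SqlType.int()),
    TableDefinition.Column("sales", SqlType.double()),
])
regions = TableDefinition(TableName("Extract", "regions"), [
    TableDefinition.Column("region_id", SqlType.int()),
    TableDefinition.Column("name", SqlType.text()),
])

with HyperProcess(telemetry=Telemetry.DO_NOT_SEND_USAGE_DATA_TO_TABLEAU) as hyper:
    with Connection(hyper.endpoint, "multi_table.hyper",
                    CreateMode.CREATE_AND_REPLACE) as conn:
        conn.catalog.create_schema(SchemaName("Extract"))
        for table in (facts, regions):
            conn.catalog.create_table(table)
        with Inserter(conn, facts) as ins:   # populate one table as an example
            ins.add_rows([[1, 100.0], [2, 250.0]])
            ins.execute()
```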
Jonathan