Hello,
After using the new "Image Recognition Tool" for a few days, I think it could be improved:
> by listing the input dimension constraints next to each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so that there is an equivalent number of images for each label),
> and, at the least, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me it strictly needs RGB images); a possible workaround is sketched below.
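Until grayscale input is supported, here is a minimal sketch of the workaround (assuming Python with the Pillow library; the folder names are hypothetical) that converts MNIST-style grayscale images into 3-channel RGB copies the tool will accept:

from pathlib import Path
from PIL import Image

# Convert every grayscale PNG in a folder to an RGB copy the tool accepts.
src = Path("mnist_grayscale")   # hypothetical input folder
dst = Path("mnist_rgb")         # hypothetical output folder
dst.mkdir(exist_ok=True)
for png in src.glob("*.png"):
    Image.open(png).convert("RGB").save(dst / png.name)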
Question: do you plan to let the user choose between CPU and GPU usage in the future?
In any case, thank you again for this new tool. It is certainly perfectible, but it is very simple to use, and I sincerely think it will allow a greater number of people to understand the many use cases made possible by image recognition.
Thank you again.
Kévin VANCAPPEL (France ;-))
I've used the Table tool with large data sets to make tables with conditional formatting, etc. There are a couple of suggestions I'd like to make.
1. I noticed an issue where, if you disconnect the tool feeding the Table tool, the Table tool forgets your settings quite easily and you may need to redo them. This is quite frustrating if you have lots of columns.
2. The controls for sorting and interacting with columns aren't very good; if they were more like the Select tool controls, that would be fantastic. Perhaps this could be worked around with a Select tool beforehand, but I still think it is worth putting on the Table tool itself.
3. Render output: when making Excel outputs with multiple sheets of varying sizes, the layout is very difficult to control. The sheets all stretch to the largest size. I've found I've had to put whitespace in Report Text tools on one side of a Table tool to make up the space and prevent stretching. (I found that solution on the forums.)
Thanks.
Frank
I've always wondered why the Data Cleansing tool has the option to convert nulls to blanks, but not to convert blanks/empty cells to nulls.
I'm sure it's debatable given different approaches, but we always look to convert blank/empty strings to NULL. Currently I have to add an extra cleansing step via a Formula tool any time I want to clean up these blanks.
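For reference, that extra step is typically a Multi-Field Formula along these lines (a minimal sketch applied to the selected string fields; whether to Trim first is a judgment call):

IF Trim([_CurrentField_]) = "" THEN Null() ELSE [_CurrentField_] ENDIF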
Work on allowing scheduled workflows to run successfully the same way they do in Designer.
1. Allow use of DCM connections in Workflow.
2. Allow use of the AMP engine.
3. Expose the full execution log in the Scheduler Output Window.
4. Refresh the screen when a Schedule is running frequently showing the same detail as in the Execution Log.
5. Allow retry options for the Scheduler: a number of retries and an interval between retries, similar to the SQL Server Agent or other schedulers.
I noticed the Workflow appears to validate the SQL when you click on the three dots next to the SQL statement in the Workflow Design. My suggestion is to not run the validation until after the workflow is saved.
While Alteryx allows for a proxy username and password in the settings, these are not passed properly to an NTLM proxy. Support for NTLM authentication would be incredibly useful for a number of corporations who utilize this firewall setup.
We currently have to either download via Python or run cURL through batch commands called by Alteryx (an example of the kind of command is below). Since Alteryx uses a cURL back end, this should be a fairly simple addition to the existing Download tool: allow selection of proxy server, port, and authentication method in addition to the proxy username and password. This could be done either in the tool itself or in User Settings.
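For illustration, today's workaround is a batch command along these lines (proxy host, port, credentials, and URLs are all placeholders):

curl --proxy http://proxy.corp.example:8080 --proxy-ntlm --proxy-user "DOMAIN\user:password" -o data.csv https://example.com/data.csv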
Hello,
Tableau has a very useful "split" function that allows you to split a string on a delimiter and specify which token of the result you want:
https://onlinehelp.tableau.com/current/pro/desktop/en-us/functions_functions_string.htm
Qlik has the same function, SubField: https://help.qlik.com/en-US/sense/February2019/Subsystems/Hub/Content/Sense_Hub/Scripting/StringFunc...
I think this is quite useful and a very standard feature.
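For example, the requested function might look like this (hypothetical syntax, mirroring Tableau's SPLIT):

Split("a-b-c", "-", 2) would return "b"

Today the closest single-expression equivalent I know of is a RegEx formula such as:

REGEX_Replace("a-b-c", "^(?:[^-]*-){1}([^-]*).*", "$1")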
Best regards,
Simon
I want to jump straight to expression #3 of Formula tool (3) when I see the following error message. Currently I can jump to Formula (3), but only expression #1 is opened, not #3. If I have 30 expressions, it is hard to find #20 among the 30.
Hello,
As of today, if you want to connect to Snowflake or MongoDB, you have to overwrite the default Lua files, and that requires admin rights. I don't see the point of not including the correct Lua files directly in the Alteryx packaging.
Best regards,
Simon
Hey all,
I would love to be able to have an interface tool that allows a user to search through drop down values (when there are more than 100 or so) similar to autocomplete. It would be helpful as a multiselect or single select drop down. I have inserted a very poorly mocked up picture below. It would essentially be a modified version of the drop down as all the values would be in the tool, but the user could type to find what they are looking for.
Trying to integrate Alteryx workflows into modern data catalogues got me thinking about transformation lineage. To integrate the transformations into those applications, an understanding of what transformations are happening, and in what order, is needed. Why not take this one step further for documentation use?
So my suggestion is:
Create a natural-language description of the transformations and sequencing of a workflow. This could be used as the default description and exported as a readme file for review (e.g., during workflow handover activities), for adding workflows to version control, or for project plans. A rough sketch of how this could start is below.
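Since a workflow (.yxmd) is just XML, a first cut could walk the nodes and connections and emit plain-English lines. A minimal sketch (the file name is hypothetical, and the tag/attribute names are assumptions based on inspecting .yxmd files):

import xml.etree.ElementTree as ET

tree = ET.parse("MyWorkflow.yxmd")  # hypothetical workflow file
root = tree.getroot()

# Map each ToolID to its plugin name (the tool type).
tools = {}
for node in root.iter("Node"):
    gui = node.find("GuiSettings")
    tools[node.get("ToolID")] = gui.get("Plugin", "unknown") if gui is not None else "unknown"

# Describe each connection in plain English.
for conn in root.iter("Connection"):
    origin = conn.find("Origin").get("ToolID")
    dest = conn.find("Destination").get("ToolID")
    print(f"Tool {origin} ({tools.get(origin)}) feeds tool {dest} ({tools.get(dest)})")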
When building custom tools for Alteryx using the Python SDK, there is no current way to test these outside of the Alteryx Designer.
This means that your development process is:
- write some code (with no code sense / IntelliSense / auto-complete, because Jupyter, VS Code, Visual Studio, etc. cannot access AlteryxEngine or any of the other imports)
- hope
- copy that .py module into your C:\Users\<username>\AppData\Roaming\Alteryx\Tools\<toolname>
- fire up Alteryx
- drop this new custom tool on a canvas
- run it to see if you get any errors
- then copy these errors out of the Alteryx results window into Notepad to be able to read them
- then go back into your development environment to make changes
- repeat.
This is very painful, and it will directly scare most people away from learning how to create custom tools, since it's not only inefficient but also scary and frustrating for beginners.
Proposal:
Could we instead create mock Python libraries and a development harness (like Google does with Android development in Eclipse) in this SDK, where:
- you have full code intelligence (intellisense, autocomplete)
- you can simulate engine events in a test harness (for example, in the Android SDK you can simulate the user rotating their phone, turning off GPS, hitting a volume button, etc.)
- you can also write test cases which can run automatically
- then, once you know that your tool will work, and only then, you drop it into the Alteryx Designer environment.
NOTE: This IDE way of thinking also allows you to bring the configuration pieces (like the number of inputs, etc.) out of raw code and into configuration options.
Although you may be able to do remote debugging using platforms like PyCharm, that really does not give you the full ability to check in your tool's code, along with all its test cases, in a harness that lets you automatically exercise different events and make sure your tool works before deploying. A minimal sketch of what the mock library could look like is below.
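To make this concrete, here is a minimal sketch of a stub that lets a custom tool's module be imported and unit-tested outside Designer (the class and method names here are assumptions about the SDK surface, not the real API):

# mock_sdk.py - hypothetical stand-in for the real AlteryxPythonSDK module
import sys
import types

sdk = types.ModuleType("AlteryxPythonSDK")

class AlteryxEngine:
    """Fake engine: prints messages instead of surfacing them in Designer."""
    def output_message(self, tool_id, status, message):
        print(f"[tool {tool_id}] {status}: {message}")

sdk.AlteryxEngine = AlteryxEngine
sys.modules["AlteryxPythonSDK"] = sdk

# With the stub registered, "import AlteryxPythonSDK" in a custom tool's
# module resolves to the mock, so test cases can drive the tool directly.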
Thank you
cc: @BlytheE @SteveA @Ozzie @tlarsen7572 @cam_w @jdunkerley79
Hello Alteryx Devs -
When I go to write an expression in the Formula tool, my data stream's fields should be the first suggestions once I start typing a letter, not the last.
Typing uppercase(Ad gives me:
DateTimeAdd
FileAddPaths
PadLeft
PadRight
ReadRegistryString
[Address]
I think we would need a dedicated R macro to ascertain the chances that anyone is going to need [ReadRegistryString] before they need a column of their own data that starts with [Ad...].
Easy fix. Makes a big difference.
Thanks.
It would be great if you could include a new parse tool in the next version of Alteryx to process dataset descriptions (metadata) formatted using the DCAT (W3C) standard.
DCAT is a standard for the description of data sets. It provides a comprehensive set of metadata that can be used to describe the content, structure, and lineage of a data set.
We believe that supporting DCAT in Alteryx would be a valuable addition to the product, allowing us to describe, discover, and exchange dataset metadata in a standard way.
We understand that implementing support for this standard requires some development effort (possibly done in stages, building from minimal viable support to full-blown support). However, we believe that the benefits to the Alteryx community worldwide, and to Alteryx as a top-quality data preparation tool, outweigh the cost.
I also expect the effort to be manageable (perhaps a macro will do as a start), given that DCAT uses standard RDF serializations such as JSON-LD, which is close to plain JSON.
DCAT, which stands for Data Catalog Vocabulary, is a W3C Recommendation for describing data catalogs in RDF. It provides a set of classes and properties for describing datasets, their distributions, and their relationships to other datasets and data catalogs. This allows data catalogs to be discovered and searched more easily, and it also makes it possible to integrate data catalogs with other Semantic Web applications.
DCAT is designed to be flexible and extensible, so it can be used to describe a wide variety of datasets, and it is designed to be interoperable, so descriptions can be combined into rich, interconnected descriptions of data and knowledge.
The main benefits are easier discovery and search across data catalogs and straightforward integration with other Semantic Web applications. As the Semantic Web continues to grow, DCAT is likely to become even more widely used.
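To illustrate how manageable a first step could be, here is a minimal sketch using the rdflib Python package (one assumption among several possible RDF libraries) to list the datasets in a DCAT catalog, e.g. from within the Python tool:

from rdflib import Graph
from rdflib.namespace import RDF, DCAT, DCTERMS

# Parse a DCAT catalog serialized as Turtle and list its datasets.
g = Graph()
g.parse("catalog.ttl", format="turtle")  # hypothetical catalog file

for dataset in g.subjects(RDF.type, DCAT.Dataset):
    print(dataset, g.value(dataset, DCTERMS.title))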
The Multi-Field formula tool has three really powerful features that it supports:
[_CurrentField_]
[_CurrentFieldName_]
[_CurrentFieldType_]
These are really powerful within Multi-Field formulas because they allow for a dynamic process to apply across multiple fields.
However, they would also be very helpful in regular formulas and Multi-Row formulas, for code transportability.
A basic example: I have a Longitude field that is a string. I need to set it to a value of 0 if there is a null value.
My formula today:
IF ISNULL([Longitude]) THEN 0 ELSE [Longitude] ENDIF
Now let's say I want to use the same formula somewhere else, but for Latitude instead.
That formula looks like:
IF ISNULL([Latitude]) THEN 0 ELSE [Latitude] ENDIF
If I could use [_CurrentField_] instead, that would allow me to instead write both formulas as:
IF ISNULL([_CurrentField_]) THEN 0 ELSE [_CurrentField_] ENDIF
This code can easily be copied for any field that requires replacing Nulls with 0s, and doesn't require refactoring to use a Multi-Field formula instead.
This also means that if I later change my field name, the code will remain consistent. This not only speeds up development time and flexibility, but more readily allows for validation that the existing code has not changed.
There are a few workarounds for this task, but it would be really easy if the Data Cleansing tool could delete null rows and null columns. After all, it is just a macro, which can be modified and re-packaged into Alteryx Designer.
Currently, deleting a null row requires validating multiple columns for common null attributes; similarly, deleting a null column requires comparing every column at row level and flagging it for removal. Both of these approaches are clumsy.
Wouldn't it be simple if the Data Cleansing tool offered such check boxes! (For comparison, see the sketch below.)
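The requested behavior is essentially a one-liner in pandas, which the macro could approximate via the Python tool (a minimal sketch, assuming empty cells arrive as nulls):

import pandas as pd

df = pd.DataFrame({"a": [1, None], "b": [None, None]})

# Drop rows where every value is null, then columns where every value is null.
df = df.dropna(axis=0, how="all").dropna(axis=1, how="all")
print(df)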
Hello all,
When looking at the Results window, I often find it a headache to read the numeric results because of the lack of commas. I understand that incorporating commas into the data itself could make for some weird errors; however, would it be possible to toggle an option that displays all numeric fields with proper commas and right-aligned in the Results window? I am referring to using a display mask to make numeric fields look like they have the thousands separator while retaining numeric functionality (as opposed to converting the fields to strings).
What do you think?
After talking with support, we found out that Oracle Financial Cloud ERP is not listed among the supported data sources, as stated in the URL below:
We would like this added, as our company will begin working heavily with Oracle Financial Cloud ERP to bring data from it into our SQL servers. Is there a reason why that connection is not currently being investigated and set up?
Thanks,
Chris
Depending on the file format, whether .xlsb or another, end users sometimes need to install additional drivers/engines.
Some of these driver installations require installing outdated software, e.g. Microsoft Access 2013 (Microsoft Access Database Engine 2013), which poses an unnecessary security risk.
We therefore recommend that a future version incorporate such drivers into the installation package so that there is no need to install them separately.
From Wikipedia:
Druid is a column-oriented, open-source, distributed data store written in Java. Druid is designed to quickly ingest massive quantities of event data, and provide low-latency queries on top of the data.[1] The name Druid comes from the shapeshifting Druid class in many role-playing games, to reflect the fact that the architecture of the system can shift to solve different types of data problems. Druid is commonly used in business intelligence/OLAP applications to analyze high volumes of real-time and historical data.[2] Druid is used in production by technology companies such as Alibaba,[2] Airbnb,[2] Cisco,[3] eBay,[4] Netflix,[5] Paypal,[2] Yahoo,[6] and the Wikimedia Foundation.[7]
More and more companies are moving from Hive to Druid for their dataviz needs, so maybe it's time to look at Druid integration with Alteryx?
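For a sense of what such an integration would talk to, Druid brokers expose an HTTP SQL endpoint. A minimal sketch (the host, port, and datasource name are placeholders):

import requests

# Query Druid's SQL endpoint on the broker.
resp = requests.post(
    "http://druid-broker:8082/druid/v2/sql",
    json={"query": "SELECT COUNT(*) AS cnt FROM wikipedia"},
)
print(resp.json())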
Hello,
I had a business case requiring a cost effective and quick storage solution for real time online sourced survey data from customers. A MongoDB instance would fit the need, so I quickly spun up a cluster on Mongo Atlas. Atlas was launched by MongoDB in 2016 as a database-as-a-service deployed on AWS. All instances for Atlas require TLS/SSL to connect. Currently, the Alteryx MongoDB connector does not support TLS/SSL connections and doesn't work against Atlas. So, I was left with a breakdown in my plan that would require manual intervention before ingesting data to Alteryx (not ideal).
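For context, the manual intervention today means making the TLS connection outside Alteryx, e.g. with pymongo (a minimal sketch; the connection string, database, and collection names are placeholders):

from pymongo import MongoClient

# Atlas requires TLS; the mongodb+srv scheme enables it by default.
client = MongoClient("mongodb+srv://user:password@cluster0.example.mongodb.net/")
print(client["surveys"]["responses"].count_documents({}))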
Please consider expanding this functionality on all connectors. I am building Alteryx out in my agency as a data platform that handles sensitive customer information (name, address, email, etc.). Most tools I use to connect to secure servers support this type of connection today, so this should be a priority for Alteryx to resolve.
Thanks,
Mike Schock