Hello,
After using the new "Image Recognition Tool" for a few days, I think you could improve it:
> by adding the dimensional constraints in front of each of the pre-trained models,
> by adding a proper tool to split the training data correctly (so that each label ends up with an equivalent number of images),
> lastly, by allowing the tool to use black & white images (I wanted to test it on MNIST, but the tool tells me it strictly requires RGB images; see the workaround sketch below).
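In the meantime, a minimal workaround sketch for the RGB limitation, assuming Pillow is available; "digit.png" is a placeholder file name, not part of the tool.

# Minimal sketch: convert a grayscale image to RGB so a tool that requires
# 3-channel input will accept it. Assumes Pillow is installed; "digit.png"
# is a placeholder file name.
from PIL import Image

gray = Image.open("digit.png")   # single-channel grayscale image
rgb = gray.convert("RGB")        # replicate the channel into R, G and B
rgb.save("digit_rgb.png")        # 3-channel copy the tool can ingest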
Question: do you plan to allow the user to choose between CPU and GPU usage in the future?
In any case, thank you again for this new tool. It certainly has room for improvement, but it is very simple to use, and I sincerely think it will allow a greater number of people to understand the many use cases made possible by image recognition.
Thank you again
Kévin VANCAPPEL (France ;-))
Here's a reason to get excited about AMP! Create a runtime setting that gets Alteryx working even faster.
When you configure a file input, you see 100 records. Imagine the delight if, after you run your workflow, all input tools were automatically cached: subsequent runs would be so much faster.
Now think of the absolute delight if, even before you run the workflow, a configured input tool triggered a background read of the input data. Whether it is a new workflow or an existing one you have just opened, that read could start ahead of you pressing the Run button.
What do you think 🤔?
The find and replace feature is great. Unfortunately, I was unpleasantly surprised to learn the hard way that workflow events are outside of its reach. Please expand it to cover the entire workflow, so it acts on everything that opening the XML in Notepad could find and replace. The following demonstrates the omission...
On the left side I search for the string “v022”.
Below that, zero matches are shown.
In the open event box near the center, “v022” appears in the command box.
The occurrence in the event command box should appear as a match, but it does not.
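In the meantime, a minimal Python sketch of the Notepad-style whole-file replace described above, run outside Designer; the .yxmd path and version strings are placeholders, and the workflow should be closed in Designer before rewriting the file.

# Minimal sketch: find-and-replace across the entire workflow XML, including
# the event commands that Designer's Find/Replace currently misses.
# "MyWorkflow.yxmd" and the search/replace strings are placeholders.
from pathlib import Path

path = Path("MyWorkflow.yxmd")
xml = path.read_text(encoding="utf-8")
count = xml.count("v022")                      # occurrences anywhere, events included
path.write_text(xml.replace("v022", "v023"), encoding="utf-8")
print(f"replaced {count} occurrence(s)")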
We are trying to utilize the Alteryx workflow migration workflow to set up proper SDLC environments and reduce human intervention in the process. For example, if we create a gallery data connection XYZ in multiple Alteryx environments and try to run the migration workflow, the connection IDs are different in those environments regardless of how we name them. So even after we migrate a workflow, we still have to manually go to each environment, update the connection(s) and upload it again. That rather defeats the purpose of the migration concept itself.
The suggestion is to use the gallery connection name/alias as the connection ID so that when workflows are migrated, connections are mapped accordingly.
See this community link for context:
tl;dr:
An option to clear the In-DB File History is not available in the Designer's GUI. If this feature is required, it's recommended to open an Idea on the Alteryx Community to submit an enhancement request.
Please implement this idea; I need to clear some In-DB connections that are no longer valid, and in a managed environment, accessing the registry is laughable.
Thank you!
Would like to be able to connect to the Stibo STEP system/database as a Data Source. Some people have the Stibo server on-premise while others have it hosted in Amazon (AWS).
Not sure what else I could provide at this point for further details.
Hello Alteryx Gurus -
I've got some workflows that run daily, but there are times, depending on the breaks, when I don't get any data from one of my data sources. Which is actually fine; nobody did Job X today. But it makes Alteryx puke out and I get an error message emailed to me. Ultimately, I've got to hop into the rather voluminous log entries to determine whether this was a "data stream not initialized" / empty-stream error, or something else that I actually need to care about.
That being said, in the coding realm it is relatively simple to look for specific flavors of exceptions and just swallow them without notifying anyone (sketched below). So why not add something to the runtime / events panel for emailing at error time that allows ignoring "data stream not initialized" errors? That way I would get notified when there is a real error I need to pay attention to, and not get notified when there is simply no new data, which isn't really that big a deal.
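For illustration only, a minimal Python sketch of the exception classification described above; EmptyDataStreamError and notify() are made-up stand-ins for whatever the engine raises internally and for the "email on error" event.

# Minimal sketch: swallow one specific, expected error class and escalate
# everything else. EmptyDataStreamError and notify() are hypothetical
# stand-ins for the engine's internal error type and the email event.
class EmptyDataStreamError(Exception):
    """Raised when an input produced no records (hypothetical)."""

def notify(message: str) -> None:
    print(f"EMAIL: {message}")              # placeholder for the email-on-error event

def run_daily_job(fetch):
    try:
        return fetch()
    except EmptyDataStreamError:
        pass                                # expected: no data today, stay quiet
    except Exception as exc:
        notify(f"Workflow failed: {exc}")   # real problem: raise the alarm
        raise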
Thank you for attending my TED talk on enhanced error reporting and exception classification capabilities.
When using the SharePoint Output tool, we have seen a few situations (widely reported in the Designer discussion threads) where the write to SharePoint fails but no error is raised.
This often happens because of a mismatch in data types, but we've also seen it for other reasons (e.g. we had it once due to column ordering). In the worst case this can end up with the SharePoint list being emptied out if the write fails on the first item, again with no error indicator.
The SharePoint Input and Output tools are very widely used as a way of giving users a very simple UI to enter data that flows into an Alteryx canvas, so this is a very commonly used pattern in our environment.
Could we request that the SharePoint Output tool be changed to include explicit errors and warnings on write, so that the user has a guarantee that either the write took place or there was an error reflecting the issue?
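To make the request concrete, a hypothetical sketch of the verify-after-write behaviour we would like the tool to adopt; write_rows() and read_row_count() are invented placeholders, not real SharePoint or Alteryx APIs.

# Hypothetical sketch of 'explicit errors on write': push the rows, then
# confirm they landed, and raise instead of failing silently.
# write_rows() and read_row_count() are invented placeholders.
def safe_write(rows, write_rows, read_row_count):
    before = read_row_count()
    write_rows(rows)
    after = read_row_count()
    if after - before != len(rows):
        raise RuntimeError(
            f"SharePoint write incomplete: expected {len(rows)} new rows, "
            f"found {after - before}"
        )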
Thank you.
Hello,
SQLite is:
-free
-open source
-easy to use
-widely used
https://en.wikipedia.org/wiki/SQLite
It also works well with the Alteryx Input and Output tools. 🙂
However, I think an In-DB SQLite connector would be great, especially for learning purposes: you don't have to install anything, so it's really easy to get started (see the small illustration below).
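As a small illustration of the "nothing to install" point, Python's standard library already ships an SQLite driver; the file and table names below are just examples.

# Minimal sketch: create and query an SQLite database with nothing but the
# Python standard library. "demo.db" and the sales table are example names.
import sqlite3

with sqlite3.connect("demo.db") as conn:
    conn.execute("CREATE TABLE IF NOT EXISTS sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("North", 120.5), ("South", 98.0)])
    for region, total in conn.execute(
            "SELECT region, SUM(amount) FROM sales GROUP BY region"):
        print(region, total)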
Best regards,
Simon
Hello all,
In addition to the Create Index idea, I think the equivalent for Vertica may also be useful.
On Vertica, data is stored in projections, which are the equivalent of indexes on other databases, and a table is linked to those projections. When you query a table, the engine chooses the most performant projection to answer the query.
What I suggest: instead of a Create Index box, a Create Index/Projection box (illustrated below).
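For illustration, the kind of statement such a box could emit, wrapped in a small Python/DB-API call; the connection object, table, columns and segmentation clause are placeholders, and the exact projection definition depends on the table and Vertica version.

# Illustrative sketch of the SQL a "Create Projection" In-DB box could emit.
# 'conn' is any DB-API connection to Vertica; the table, columns and
# segmentation clause are placeholders to adapt to the real schema.
PROJECTION_SQL = """
CREATE PROJECTION sales_by_date AS
    SELECT sale_id, sale_date, amount
    FROM sales
    ORDER BY sale_date
    SEGMENTED BY HASH(sale_id) ALL NODES;
"""

def create_projection(conn):
    cur = conn.cursor()
    cur.execute(PROJECTION_SQL)
    cur.execute("SELECT REFRESH('sales')")   # populate the new projection
    conn.commit()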
Best regards,
Simon
It would be better if the current column filter feature covered the entire data set, not just the partial results. This would be especially useful when, after running a complicated workflow, you just want to inspect the data at particular nodes on the canvas.
#Deployment #LargeScale #CleanCode #BareBonesCode
Request to add an option to strip out all unnecessary text within a workflow / Gallery app when deploying to the Alteryx Server to be scheduled or used as a Gallery app. "Run at file location" still causes unnecessary information to be read across the network.
Workflows are often bloated with unused metadata that is not an issue at a small scale, but at scale all the additional bloat (kBs to MBs in size) sent from the controller to the worker does impact the server environment.
The impact explodes when leveraging the Alteryx API to launch the same job over and over with different parameters: all the non-useful information in the workflow is sent to the various workers for every one of these jobs.
Even having a "compiled" version of the workflow could be a great solution. #CompiledCode
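As a rough illustration of the kind of stripping meant here, a Python sketch that removes selected elements from a workflow's XML before deployment; the tag names in STRIP_TAGS are assumptions about where Designer keeps cached metadata, not a confirmed list, so test on a copy of the workflow.

# Rough sketch: strip selected elements from a .yxmd copy to shrink what the
# controller ships to the workers. The tag names below are assumptions about
# where Designer keeps cached metadata -- verify against your own workflows.
import xml.etree.ElementTree as ET

STRIP_TAGS = {"MetaInfo", "Annotation"}          # assumed metadata elements

def slim_workflow(src="MyWorkflow.yxmd", dst="MyWorkflow_slim.yxmd"):
    tree = ET.parse(src)
    for parent in list(tree.iter()):             # materialise first, then modify
        for child in list(parent):
            if child.tag in STRIP_TAGS:
                parent.remove(child)
    tree.write(dst, encoding="utf-8", xml_declaration=True)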
Attached is a simple workflow that shows how bloated the workflows can become.
I appreciate your consideration.
It really would be great if Alteryx supported a 'set' data type.
I often have situations where I really wish I could make a field with a data type of "set".
For example, I have a table of pets owned by each person.
The "Pets" Field would be perfect if it could be processed as a "set" type.
| Person | Pets |
| --- | --- |
| John | Dog, Cat |
| Susan | Fish |
(I sort the pet values alphabetically and concatenate the string values using the Summarize tool; this is the best workaround I could think of.)
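To illustrate what a native set type would buy over the concatenation workaround, a small Python sketch comparing the two representations; the sample values mirror the table above.

# Small sketch: a real set compares by membership, order-free, while the
# concatenated-string workaround only matches if the sort order is identical.
concat_a = "Cat, Dog"            # Summarize-style workaround
concat_b = "Dog, Cat"            # same pets, different order
print(concat_a == concat_b)      # False: the strings differ

set_a = {"Cat", "Dog"}           # what a 'set' field could hold
set_b = {"Dog", "Cat"}
print(set_a == set_b)            # True: order doesn't matter
print("Dog" in set_a)            # membership test without string parsing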
Hello all,
A whole field of performance improvement has not been explored by Alteryx: hardware acceleration, i.e. using something other than a CPU for computation.
Here are some good reads about that:
https://blog.esciencecenter.nl/why-use-an-fpga-instead-of-a-cpu-or-gpu-b234cd4f309c
https://en.wikipedia.org/wiki/Application-specific_integrated_circuit
The kind of acceleration we can dream of!
We have discussed on several occasions, and in different forums, the importance of having or providing Alteryx with order-of-execution control, conditional execution, design patterns and even orchestration.
I presented this idea some time ago, but someone asked me if it was posted, and since it was not, I’m putting it here so you can give some feedback on it.
The basic concept behind this idea is to allow us (users) to have:
This approach involves some functionalities that are already within the product (like exploiting Filtering logic, loading & saving, caching, blocking among others), exposed within a Tool Container with enhanced attributes, like this example:
The approach is to extend Tool Container’s attributes.
This proposition uses actual functionalities we already have in Designer.
So, basically, the Tool Container gets 'superpowers', with the addition of capabilities like accepting input data, saving the contents within the container (to create a design pattern, or a very commonly used sequence of tools chained together), outputting data, running the contents of the tools included in the container, etc., plus a configuration screen like:
That ends a brief introduction to the idea, but taking it a little further, it would even allow something like an orchestration layout, where users can drag and drop containers or patterns and orchestrate them into a solution, like we can do with the Visual Layout Tool or the Interactive Chart tool:
I'm looking forward to hearing what you think.
Best
The original engine supports extending the Formula tool with custom functions written in either XML or C++. The new AMP engine doesn't support these yet.
There are a fair number of users who rely on these in E1, and it would be good to have them available in AMP.
There are three places that provide log information:
1) Regular results window:
Pro: In the process sequence so the user can understand the order of the process.
Con: Doesn't have info on how long each tool takes to process.
2) Workflow -> Runtime -> Enable Performance Profiling
Pro: Processes are sorted by duration in descending order, which helps identify the ones that took long to run.
Con: Doesn't show the process sequence.
3) Actual Alteryx log file:
Pro: There are timestamps for each process, so durations can be calculated (see the sketch after this list).
Con: Not readily accessible and not user friendly to view from the interface. Not clickable to see more details in the workflow.
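To make option 3's "duration can be calculated" point concrete, a small Python sketch that diffs consecutive timestamps in a log; it assumes each relevant line starts with an HH:MM:SS timestamp, which may not match the actual Alteryx log layout.

# Small sketch: derive per-step durations from timestamped log lines.
# Assumes each line starts with an HH:MM:SS timestamp, which may not match
# the real Alteryx log layout; adjust the slice and format string as needed.
from datetime import datetime

log_lines = [
    "09:00:01 Tool 1 started",
    "09:00:07 Tool 3 started",
    "09:00:42 Tool 5 started",
]

times = [datetime.strptime(line[:8], "%H:%M:%S") for line in log_lines]
for line, start, end in zip(log_lines, times, times[1:]):
    print(f"{(end - start).total_seconds():6.1f}s  {line[9:]}")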
I think it would be SUPER HELPFUL to integrate all three together, showing the messages in process order along with the running time of each tool.
When I run this command in a Python tool:
from ayx import Package
Package.installPackages(package='pandas',install_type='install --upgrade')
in Alteryx it only updates pandas to 0.25, but the latest version is 1.1.2.
When I try to upgrade from the Python side, I get the following:
ERROR: ayx 1.0.54 has requirement pandas<0.25.0,>=0.24.2, but you'll have pandas 1.1.2 which is incompatible.
Can you please make sure we can upgrade to the latest version of pandas without any compatibility issue?
This is important because of json_normalize, a really useful function available from pandas 1.0.3!
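For context, a minimal example of the json_normalize call in question, assuming a recent pandas (1.x) can be imported; the sample records are made up.

# Minimal example of what a newer pandas unlocks: pd.json_normalize flattens
# nested JSON records into a tabular frame. The sample data is made up.
import pandas as pd

records = [
    {"id": 1, "customer": {"name": "Ann", "city": "Paris"}},
    {"id": 2, "customer": {"name": "Bob", "city": "Lyon"}},
]

df = pd.json_normalize(records)
print(df)        # columns: id, customer.name, customer.city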
So I discovered this neat little tip today: if you have a Browse tool in your workflow and click on its hyperlink (2 in the picture below) whilst the workflow is running, it will open a pop-out browse rather than show the data in the results window, meaning you can still see all of the messages. However, if you click on the tool name/ID (1 in the image), it locks the results window to that tool. Idea for a fix here.
And this led me to think that Alteryx must be populating the temporary browse-anywhere data in memory as it runs, so it would be great if it were possible to click on either the tool anchors or the tool names in the results window whilst the workflow is running to see the browse-anywhere data.
Currently in 2020.2 (but I assume all versions), when you have a workflow running and click on the tool name/ID (1 in the picture below) in the results window, it is then not possible to click on the canvas OR get back to the messages for the full workflow, as the window is locked to that tool.
The idea is that it should be possible to get back to all of the workflow messages if you click on a tool name in the results window whilst the workflow is running.
However, a neat little tip I found is that if you click on the input, output or browse hyperlink (2 in the picture below), it will open a pop-out browse rather than show the data in the results window, meaning you can still see all of the messages.
This leads me to think that it could and should be possible to see browse-anywhere data whilst the workflow is running if this is fixed. Here's a separate idea for that.
Hello Alteryx,
Would it be possible to extend the "Cache and Run" functionality also to tools with multiple outputs? Our clients use the R and Python tools very frequently and the runtimes tend to be pretty long. For the development purposes, it would be great to have the caching possibilities also on these tools.
Thank you very much for considering this idea.
Regards,
Jan Laznicka