We have discussed, on several occasions and in different forums, the importance of giving Alteryx order-of-execution control, conditional execution, design patterns, and even orchestration.
I presented this idea some time ago, but someone asked me whether it had been posted, and since it had not, I'm putting it here so you can give feedback on it.
The basic concept behind this idea is to allow us (users) to have:
This approach relies on functionality that is already in the product (filtering logic, loading and saving, caching, blocking, among others), exposed within a Tool Container with enhanced attributes, as in this example:
The approach is to extend Tool Container’s attributes.
This proposal uses functionality we already have in Designer.
So, basically, the Tool Container gets 'superpowers' with the addition of capabilities such as accepting input data, saving the contents of the container (to create a design pattern, or a commonly used sequence of tools chained together), outputting data, running the tools included in the container, and so on, plus a configuration screen like:
That concludes a brief introduction to the idea, but taking it a little further, it would even allow something like an orchestration layout, where users can drag and drop containers or patterns and orchestrate them into a solution, much as we can with the Visual Layout tool or the Interactive Chart tool:
I'm looking forward to hearing what you think.
Best
This has probably been mentioned before, but in case it hasn't....
Right now, if the Dynamic Input tool skips a file (which it often does!), it just raises a warning and continues processing. While continuing to process is still useful, could an option be added to the tool to 'error if files are skipped'?
As it stands, it is easy to miss that this is happening, and in production or on Server you may want the process to stop instead.
Thanks,
Andy
I surprisingly couldn't find this anywhere else as I know it's been discussed in person on many occasions.
Basically the Formula tool needs to be smarter in many ways, but this particular post focuses on the Data Type component.
The Formula tool should not always default to V_String as the data type when you enter data or a formula; it should look at the data and estimate the most likely type.
I know there are times when the logical type might not be consistent across all fields, but the Data Preview and the function used in the formula should be used to determine the most likely option.
E.g., if I type a number or a date directly into the Formula tool, Alteryx should be smart enough to change the data type from the default V_String to Int, Double, or Date.
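As a rough illustration of the kind of inference being asked for (a sketch in Python, not Alteryx's actual logic; the type names simply mirror Designer's):

import re

def guess_type(value: str) -> str:
    # Try integer, then double, then an ISO-style date, falling back to V_String
    try:
        int(value)
        return "Int64"
    except ValueError:
        pass
    try:
        float(value)
        return "Double"
    except ValueError:
        pass
    if re.fullmatch(r"\d{4}-\d{2}-\d{2}", value):
        return "Date"
    return "V_String"

print(guess_type("42"))          # Int64
print(guess_type("3.14"))        # Double
print(guess_type("2020-06-01"))  # Date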
This is an extension to the ideas posted here:
I often need to create a record ID that automatically increments but is grouped by a specific field. I currently do it with the Multi-Row Formula tool using [Row-1:ID]+1, because there is no Group By option in the Record ID tool.
Also, sometimes I need to start at 0, but the Multi-Row Formula tool doesn't allow this, so I have to add a Formula tool right after to subtract 1.
So adding a Group By option to the Record ID tool would let users avoid the Multi-Row Formula workaround and start at any value they want.
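For illustration, the behaviour being requested looks roughly like this grouped cumulative count, sketched in Python/pandas (the column names are made up):

import pandas as pd

df = pd.DataFrame({"Group": ["A", "A", "B", "B", "B"]})

# Record ID that restarts for each group and can start at any chosen value (here 0)
start_at = 0
df["RecordID"] = df.groupby("Group").cumcount() + start_at
print(df)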
Love the new updates to the Browse tool in 2019.2! However, if you choose the option Open results in new window, which I do often so I can see my whole dataset, the search/filter/sort functionality goes away. Would be great if that new functionality also worked in the new window. Thanks!
Can't wait for the new base maps!
In app screens, a lot of space is wasted because components/tools can only be stacked one below the other.
It would be great if we could also arrange them horizontally.
Thanks !
Arno
Tags: screen, app, macro, layout, tools, UI
Please add the ability to globally forget all missing fields within a module.
When configuring a FILTER tool, the results of your formula are uncertain until you RUN/PLAY the workflow. Compare that experience with the configuration of a FORMULA tool, where you see a "Data Preview" of the first record's results.
A TRUE or FALSE preview could readily be added to the Filter tool, saving a workflow run just to check the expression.
When you get to HTML tool versions, you could check many rows of data and potentially give back counts of TRUE and FALSE results as well.
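As a rough sketch of that preview idea, here is how TRUE/FALSE counts might be computed over a handful of preview rows (Python/pandas; the expression and column name are made up):

import pandas as pd

preview = pd.DataFrame({"Sales": [120, 80, 95, 310, 0]})

# Evaluate the filter expression against the preview rows and count each outcome
mask = preview["Sales"] > 100
print(f"TRUE: {mask.sum()}, FALSE: {(~mask).sum()}")   # TRUE: 2, FALSE: 3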
I'll put this on my x-mas list and see if Santa has me on the naughty or nice list.
Cheers,
Mark
#Deployment #LargeScale #CleanCode #BareBonesCode
Request to add an option to strip out all unnecessary text from a workflow/Gallery app when deploying it to Alteryx Server to be scheduled or used as a Gallery app. Running at the file location still causes unnecessary information to be read across the network.
Workflows are often bloated with unused metadata that is not an issue at small scale, but at scale all the additional bloat (KBs to MBs in size) sent from the controller to the worker does impact the server environment.
The impact explodes when leveraging the Alteryx API to launch the same job over and over with different parameters: all the non-useful information in the workflow is sent to the various workers for each one of these jobs.
Even having a "compiled" version of the workflow could be a great solution. #CompiledCode
Attached is a simple workflow that shows how bloated the workflows can become.
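As a very rough sketch of the kind of stripping meant here, a .yxmd is XML, so annotation-style elements could be removed before deployment (the element names below are assumptions, not a documented schema):

import xml.etree.ElementTree as ET

tree = ET.parse("workflow.yxmd")
root = tree.getroot()

# Remove presentation-only elements from each tool's Properties block
for props in root.iter("Properties"):
    for tag in ("Annotation", "MetaInfo"):
        for elem in props.findall(tag):
            props.remove(elem)

tree.write("workflow_stripped.yxmd", encoding="utf-8", xml_declaration=True)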
I appreciate your consideration.
When I run this command in a Python tool:
from ayx import Package
# Attempt to upgrade pandas inside the embedded Python environment
Package.installPackages(package='pandas', install_type='install --upgrade')
In Alteryx it only upgrades pandas to 0.25, but the latest version is 1.1.2.
When I try to upgrade from the Python side, I get the following:
ERROR: ayx 1.0.54 has requirement pandas<0.25.0,>=0.24.2, but you'll have pandas 1.1.2 which is incompatible.
Can you please make sure we can upgrade to the latest version of pandas without any compatibility issue?
This is important because of json_normalize. Really useful tool, available from pandas 1.0.3!
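For context, this is the kind of call that needs a newer pandas (a minimal made-up example, not from the workflow above):

import pandas as pd

records = [{"id": 1, "address": {"city": "Boston", "zip": "02101"}}]

# json_normalize flattens nested JSON into columns such as address.city / address.zip
flat = pd.json_normalize(records)
print(flat.columns.tolist())   # ['id', 'address.city', 'address.zip']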
Currently in 2020.2 (but I assume in all versions), when you have a workflow running and click on the tool name/ID (1 in the picture below) in the results window, it is then not possible to click on the canvas or get back to the messages for the full workflow, as the results window is locked to that tool.
The idea is that it should be possible to get back to all of the workflow messages if you click on a tool name in the results window whilst the workflow is running.
However, a neat little tip I found: if you click on the input, output, or browse hyperlink (2 in the picture below), it opens a pop-out browse rather than showing the data in the results window, meaning you can still see all of the messages.
This leads me to think that it could and should be possible to see browse anywhere data whilst the workflow is running if this is fixed. Here's a separate idea for that.
So I discovered this neat little tip today: if you have a Browse tool in your workflow and click on its hyperlink (2 in the picture below) while the workflow is running, it opens a pop-out browse rather than showing the data in the results window, meaning you can still see all of the messages. However, if you click on the tool name/ID (1 in the image), it locks the results window to that tool. Idea for a fix here.
And this led me to think that Alteryx must be populating the temporary Browse Anywhere data in memory as it runs, so it would be great if it were possible to click on the tool anchors or the tool names in the results window while the workflow is running to see the Browse Anywhere data.
Hello all,
A whole field of performance improvement has not been explored by Alteryx: hardware acceleration, using something other than a CPU for computation.
Here are some good reads about that:
https://blog.esciencecenter.nl/why-use-an-fpga-instead-of-a-cpu-or-gpu-b234cd4f309c
https://en.wikipedia.org/wiki/Application-specific_integrated_circuit
The kind of acceleration we can only dream of!
The current version of the Publish to Tableau macro retrieves an API key at the start of the workflow run. Oftentimes the workflow may take several hours to run before it's ready to write to Tableau, by which time the API key may have expired. (I think the default Tableau Server setting times out after 2 hours.) It's one of those soul-crushing "I should've forked the output!" moments.
Sample Log Error -
The idea would be to change when the macro obtains the API key, from when the workflow is initiated to just before the workflow is ready to write to Tableau, avoiding these timeouts.
(If you're having this issue in the meantime, you can have your Tableau Server admin raise the timeout.)
Please enhance the Dynamic Select tool to allow dynamically changing data types too. The use case could be by formula or via an update from an action in a macro. If you've ever wanted to mass-change types or adjust precision in a macro, you're currently forced to use a Multi-Field Formula. It would be rather helpful and appreciated.
Cheers,
Mark
Today when we install custom tools that use DLLs, the DLLs must be placed in the Plugins folder inside the Alteryx installation directory. This requires a second step after the YXI installer runs. I would like to be able to package the DLL with the YXI installer and Alteryx will search for the DLL inside the tool's directory, just the same as what happens with custom Python tools. This will allow custom tools that use DLLs to be installed just as easily as the 1-step installation process for Python tools.
For example, this does not work today, but I want it to:
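As a hypothetical sketch of the pattern this would enable, a custom Python tool could load a DLL shipped in its own folder (the file name and path here are made up):

import ctypes
import os

# Load a DLL that ships alongside the tool instead of from the global Plugins folder
tool_dir = os.path.dirname(os.path.abspath(__file__))
my_lib = ctypes.CDLL(os.path.join(tool_dir, "MyCustomEngine.dll"))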
Can we have an option to save a workflow in a prior version for backward compatibility? I think Tableau offers this functionality.
Example:
If I have 2019.4.8 and a colleague has 2019.1.x, I cannot share my workflows because my colleague will receive a notice that the workflow was built in a newer version. I want to be able to save my workflow in 2019.1.x and send to my colleague.
This is predicated on the workflow not containing any tools/features not present in the older version. In that case, give me a warning about the specific tools/features that are not backward compatible. Thank you.
The Alteryx Python tool currently throws an error if the inbound record set has zero rows (screenshot 1).
To manage that, you need to create a try/except block around Alteryx.read that instead creates an empty DataFrame (screenshot 2). This is inefficient because every time you change the canvas before the Python tool, you need to re-code a static field list into the try/except block (i.e., you can no longer deal with variable fields).
Please could you change the Alteryx.read method to create a zero-record DataFrame with the correct column names if the input is zero-length?
Thank you
Sean
Screenshot 1:
Screenshot 2:
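For reference, a minimal sketch of the guard described above (the fallback column names are hypothetical, which is exactly the maintenance burden being complained about):

from ayx import Alteryx
import pandas as pd

try:
    df = Alteryx.read("#1")            # currently raises if the inbound connection has zero rows
except Exception:
    # Fall back to an empty frame; the field list has to be hard-coded and kept in sync
    df = pd.DataFrame(columns=["Field1", "Field2"])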
Environment variables act as a shortcut so that different computers can be configured in different ways, but a particular path will still point to the right place.
For example, if you open up Explorer and go to %TEMP%\, you will open whichever folder is set up as Temp on that machine. This is super useful because you can refer to a particular logical folder without knowing its actual location on every machine (for example, the Windows directory).
This works partially in the Directory and Input tools: when you put in the environment variable, the configuration can browse possible subdirectories (screenshot 1), but it does not work once you run the workflow (screenshot 2).
It seems as if Designer hits the Windows API directly, but the expansion does not happen within the engine.
Please could you alter the engine to be able to make full use of the environment variables on the machine in question in the directory path or input tool path?
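For what it's worth, the expansion the engine would need is the same thing os.path.expandvars does in Python (a sketch; the path is made up):

import os

raw_path = r"%TEMP%\input_files"
expanded = os.path.expandvars(raw_path)   # e.g. C:\Users\<user>\AppData\Local\Temp\input_files
print(expanded)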
Screenshot 1 - works in designer
Currently we can't use any PaaS MongoDB products (MongoDB Atlas / CosmosDB) as Alteryx Gallery doesn't support SSL for connecting to the MongoDB back end.
SSL is also good security practice when the MongoDB back end is split onto a different machine.
With an increasing number of different projects, involving different machine learning models, it's becoming difficult to manage different package versions across workflows. Currently, the Python tool has a single virtual environment, so we need to develop models in different projects always using the same Python and package versions as the Python tool venv. While this doesn't bother the code itself too much, it becomes a problem as soon as we store and load pickled models, which are sensitive to even minor changes in packages.
This is even more of a problem when we are working on Alteryx Server, where different teams might use different packages. Currently, only the server admin can install packages on the server, and there can only be one version of each package.
So, a more robust venv management in the Python tool would be much appreciated!
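Until something like that exists, one rough workaround is to keep per-project virtual environments outside the embedded interpreter and call into them from the Python tool (a sketch; the paths and script name are made up):

import subprocess

# Run project code under its own virtual environment rather than the shared tool venv
venv_python = r"C:\venvs\project_a\Scripts\python.exe"
subprocess.run([venv_python, r"C:\projects\project_a\score_model.py"], check=True)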
Hi!
I just installed 2020.2 & I LOVE this update within the Formula tool:
Are there any plans to bring this ability to the Filter tool as well, where I can double click on my column name & have it grab the brackets as well?
Not going to lie, love that you added the Open/Save/Undo buttons back to the top toolbar too! Latest version looks great, thank you!
I love the Workflow Meta Info, especially the ability to set the author, the search tags, the version, the description, etc.
But why can't we use it as an Engine Constant? It doesn't seem very hard to implement, and it would make life much easier for development.
Hello,
We have several environments in our organization: dev, recept, and production.
In order to make changes safe, we intend to create several connections (standard aliases) like:
PRODUCTION_HIVE
DEV_HIVE
RECEPT_HIVE
In our workflows, we want to use aka:%Question.v_environment%HIVE
Sadly, this solution does not work, even with the default value set.
There are three places that provide log information:
1) Regular results window:
Pro: Messages appear in process sequence, so the user can understand the order of execution.
Con: Doesn't have info on how long each tool takes to process.
2) Workflow -> Runtime -> Enable Performance Profiling
Pro: Processes are sorted in descending order of duration, which helps identify the ones that took longest to run.
Con: Doesn't show the process sequence.
3) Actual Alteryx log file:
Pro: There are timestamps for each process so the duration can be calculated.
Con: Not readily accessible and not user-friendly to view from the interface. Not clickable to see more details in the workflow.
I think it would be SUPER HELPFUL to integrate all three, showing messages in process order along with each tool's running time.
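As a sketch of the duration calculation mentioned for option 3, timestamped log lines can be diffed per tool (the log format here is invented for illustration, not Alteryx's actual format):

from datetime import datetime

# Hypothetical timestamped entries: (timestamp, tool name)
log = [
    ("2020-06-01 10:00:00", "Input Data (1)"),
    ("2020-06-01 10:00:05", "Filter (2)"),
    ("2020-06-01 10:02:35", "Output Data (3)"),
]

fmt = "%Y-%m-%d %H:%M:%S"
for (t1, tool), (t2, _) in zip(log, log[1:]):
    duration = datetime.strptime(t2, fmt) - datetime.strptime(t1, fmt)
    print(f"{tool}: {duration}")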
Hello Alteryx,
Would it be possible to extend the "Cache and Run" functionality to tools with multiple outputs as well? Our clients use the R and Python tools very frequently, and the runtimes tend to be pretty long. For development purposes, it would be great to have caching available on these tools too.
Thank you very much for considering this idea.
Regards,
Jan Laznicka
It appears that the Workflow Dependencies window does not report dependencies from all tools. In the example image, you can see that the file input from the Amazon S3 Download tool is not listed. Some tools may have dependencies that do not easily fit the current field structure of the window, but maybe the input/download tools could be listed with an asterisk or partial reference.
Missing Amazon S3 Dependency