
Alteryx Server Ideas

Share your Server product ideas - we're listening!

Featured Ideas

Sometimes when using someone else's Gallery App which has a long list of options to select from, I will hit Run before realizing that I haven't set all of the options.  Then, the App fails (obviously).

Rather than just getting a message on the Gallery that my app run failed, it would be nice to have a link that automatically reloads the options I had set, so I could see which option I didn't fill out properly.

 

If a job fails, it would be perfect if we could set something in the workflow settings so that the job retries in X minutes, up to Y times. We have jobs that connect to external resources, and sometimes a network reset causes all of the connections to drop. For example, I might want a workflow to retry every 10 minutes for a maximum of 5 attempts, so over the next 50 minutes it keeps retrying until it succeeds or the attempts run out.
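
As a rough sketch of the requested behaviour (plain Python, not a Server feature), assuming a hypothetical run_workflow() helper that raises an exception when the job fails; the interval and attempt count mirror the 10-minute / 5-attempt example above:

    import time

    def run_with_retries(run_workflow, retry_minutes=10, max_attempts=5):
        # Try the job, waiting retry_minutes between failed attempts.
        for attempt in range(1, max_attempts + 1):
            try:
                return run_workflow()  # hypothetical helper; raises on failure
            except Exception as err:
                if attempt == max_attempts:
                    raise  # give up after the final attempt
                print(f"Attempt {attempt} failed ({err}); retrying in {retry_minutes} min")
                time.sleep(retry_minutes * 60)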


When viewing the schedule viewer with more than the default page size of 25 workflows, sorting only sorts the 25 workflows on the current page. I would expect it to sort all workflows first and then show the first 25. I actually used the interface with that expectation for longer than I'd care to admit, which led to a lot of confusion when I couldn't figure out why workflows I had scheduled were not showing up. Obviously this comes down to user error, and now that I understand it I don't have any issue, but why would the sort not apply before the list is populated?

 

Interested to know if I'm the only one who's been annoyed by this.

 

~ Clive

In some organizations it may be difficult, if not impossible, to apply permissions or make exemptions that grant a wide range of users the "Log on as a batch job" right needed to run workflows on a Server with the current run-as credential capability.

 

If possible, could the Alteryx process still run as the server admin or "Run As" account, but allow the workflow to access the various data sources (Windows authentication) using specific credentials entered when the workflow is run? So while the whole process runs as Service Account A, access to databases, file systems, etc. would be done using the user's own specified credentials.

 

Some of this can be accomplished today by embedding credentials in database connections, but that isn't an ideal scenario, and a more holistic solution that covers a wider array of (or all) supported data sources would be preferred.

 

We've noticed on our Server Gallery that users must click 'Run' twice when running an app.  The first time they see 'Run', it's really taking them to a configuration screen rather than actually executing the app.  What if that first 'Run' button were changed to 'Configure'?  We've seen that users hesitate to run apps because they aren't sure what they're getting when they click 'Run' the first time.

 


 

Not sure if this has already been suggested but I couldn't find it in the ideas...

 

It would be awesome if better documentation could be created in the Gallery for the naming of the different private studios, collections, and districts. The naming is so different from most other products that it causes confusion in our company.

Isn't it great that we can toggle between outputs from an app in the Gallery, being able to view and download each one? Yes! However, what if an app produces a lot of outputs...such as 1 per state...or 1 per product type...or 1 per month...etc. What you get is something like this:

[Screenshot: Gallery results page listing each output file as a separate download]

Now imagine how long it would take to download all of these...ouch. I recommend Alteryx add a "Download All" button to the page that appears after an app finishes running.

 

Related Community Post here.


I'm really loving the new data connections in 11.0. We have deployed them in a private Gallery for our users and it's great. The only drawback is that the server itself cannot easily be configured to use these same credentials. I would like some way for the service account running jobs on our server to use the data connections from the Gallery. I can assign the credentials to that service account just fine, but it never picks them up, since the Alteryx GUI is never opened when it's running jobs. This would allow us to have one spot with all of the credentials. Currently, we're going to have two locations with all of our credentials that we'll have to keep in sync: one in the Gallery for the users, and one on the server box for the server.

 

Thanks!

In any large IT environment, you will have multiple systems that each use different nomenclature to describe the same thing.

 

This relates to products; currencies; customers; suppliers; trade types; etc.

 

At present, our users are bridging this gap either by:

a) creating a bunch of Excel spreadsheets with "Magic Code Translation Tables", which is unfortunate because these quickly become unmaintainable and live on people's desktops (so they are not reusable assets); or

b) creating a whole morass of one-way translation tables to translate from each input source to the normalized format; these are all hand-rolled translation tables, with hand-rolled ways of adding translations, etc.

 

 

What would be very useful is to allow Alteryx users to specify these kinds of domain concepts on the Alteryx Server, with a flexible way of adding synonyms.  For example, our master customer list is kept on the server with a master customer ID (call it MID, for Master ID).  If I'm dealing with a new system that uses a different customer ID (call it NsID, for New System ID), then I can map the NsID to the MID centrally, so that anyone who wants to do analytics on this data can just drag in a converter from NsID to MID, along with the master customer list keyed by MID.

 

This would allow all these Magic Translation Tables to become an enterprise asset rather than isolated data islands, and act as an accelerator for every other team using this data.
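
As a rough illustration (plain Python/pandas, not Alteryx syntax) of the kind of lookup this would centralize: the mapping is just a shared translation table that every workflow joins against, instead of each team keeping its own spreadsheet. The table and column names below are hypothetical:

    import pandas as pd

    # Hypothetical centrally maintained translation table: new-system ID -> master ID
    id_map = pd.DataFrame({
        "NsID": ["A-100", "A-101", "A-102"],
        "MID":  ["M-001", "M-002", "M-003"],
    })

    # Incoming data keyed on the new system's customer ID
    new_system_data = pd.DataFrame({
        "NsID": ["A-100", "A-102"],
        "sales": [1200, 800],
    })

    # One shared join replaces every team's private "magic translation" spreadsheet
    normalized = new_system_data.merge(id_map, on="NsID", how="left")
    print(normalized[["MID", "sales"]])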

 


Would it be possible to specify whether a worker handles scheduled jobs, ad-hoc jobs or both?  Right now it seems that the workers treat both types of jobs the same, meaning that a slew of ad-hoc jobs initiated from the Gallery could slow down jobs that are scheduled to run on a regular cadence.  It'd be great if those scheduled jobs could have a dedicated worker (or workers) and have any ad-hoc jobs handled by a separate worker (or workers) so that the scheduled jobs (which might be more important) are not held up by one-off jobs.

Can we add the ability for a developer using Alteryx Designer, with a workflow that uses in-DB tools, to pass along their credentials with the workflow when scheduling it to run on Alteryx Server?  An individual user may have access to only certain tables in Teradata, for example, and the ODBC connection on the Server may not have access to the same tables, so we need to be able to pass along the encrypted credentials.  Thanks!

As large enterprises continually strengthen security around their system and data assets, we're seeing adoption of products like CyberArk's Enterprise Password Vault (https://www.cyberark.com/products/privileged-account-security-solution/enterprise-password-vault/).

 

The system is essentially a central repository that secures and automatically rotates passwords for privileged accounts: things like a functional account you would use to run workflows against a certain database or set of systems.

 

It would be great if Alteryx could build both Server (Run As Account) and Designer (for individual database connections) integrations with a tool like that.

Hi,

Currently, most of our workflows run on our private Alteryx Server via executables that call AlteryxEngineCmd and write logs to dynamically created log files.  However, there is currently no way to use a custom log location for a workflow that is run directly from the Alteryx Gallery (apart from sending an email, which has scalability problems).

I would like an option to create the output log as part of a workflow, so that when we save that workflow to the Gallery, it can output a log with a dynamic name easily.
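
For reference, a minimal sketch of the pattern we use outside the Gallery today, assuming AlteryxEngineCmd.exe lives in the default install location and that capturing its console output is an acceptable stand-in for an engine log; all paths and names below are placeholders:

    import subprocess
    from datetime import datetime
    from pathlib import Path

    # Placeholder paths; adjust to your install and workflow locations
    ENGINE = r"C:\Program Files\Alteryx\bin\AlteryxEngineCmd.exe"
    WORKFLOW = r"C:\Workflows\daily_refresh.yxmd"
    LOG_DIR = Path(r"C:\Logs\Alteryx")

    LOG_DIR.mkdir(parents=True, exist_ok=True)
    log_file = LOG_DIR / f"daily_refresh_{datetime.now():%Y%m%d_%H%M%S}.log"

    # Run the workflow and capture the engine's console output under a dynamic log name
    with open(log_file, "w", encoding="utf-8") as log:
        subprocess.run([ENGINE, WORKFLOW], stdout=log, stderr=subprocess.STDOUT, check=True)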

Because of the sensitive nature of the data we deal with, all of our infrastructure is located in a restricted area. As a result, our Alteryx Server can only be accessed from machines with a corporate-built system connected to the corporate network, unless the access has been authenticated.

What this means is that while I am on my corporate-built laptop, I can access the Gallery node from anywhere, since we use Windows login for authentication. However, if I wanted to schedule a workflow while connected to a non-corporate network (e.g. when working from home), I wouldn't be able to, because the controller server can't be accessed: it uses the HTTP protocol without authentication.

 

Currently there's no workaround, and the situation creates a number of challenges for colleagues using the scheduler. It would be great if it were possible to use DirectAccess or an alternative way of identifying that the connection is coming from a corporate client.

Starting with the Windows Server 2016 edition, Docker container technology can be used in Windows environments. My idea is to dynamically convert Designer jobs/workflows into Docker containers at runtime.

While Alteryx is a self-service tool for the business, administrators need a little more insight into how the Server installation is being used. Each customer is forced to create their own monitoring tools. I understand New Relic and CloudWatch can easily be used, but the admin view needs to be split by organizational department, since each department needs its own resource utilization and workflow report. This logical split-view reporting should be facilitated by the Alteryx Server product.

I was reading a post on the Community (http://community.alteryx.com/t5/Publishing-Gallery/Macro-sharing-Best-practice/m-p/38330) which reminded me of an idea that I had.

 

It would be really nice if a Gallery location could be used as a "Macro Search Path" so that macros don't need to be downloaded from the Gallery and saved locally to be used in a workflow.

 

So in addition to going to Options>User Settings>Edit User Settings>Macros and adding a local/network path, you could add your internal gallery information...

I have tables that I need to run ETL jobs on every 5 minutes. But a batch job runs every day for an hour, at the same hour, and during that time I can't query the source.
With the current scheduler setup, it appears my only option would be to set up multiple schedules: a separate schedule, each running once a day, for every 5-minute increment of the day, except for the one hour when my source can't be touched.
Rather than that degree of hassle, doesn't it make more sense to be able to set up a schedule with and/or/not criteria?
Example:
Run every X minutes on Y days
Except: during %t am - %t am on Z days

I've seen some applications that have a visual scheduler for setting the exception times. That would be pretty cool too.
Bonus points if you can make an admin console for the server which allows the admin to set blackout date/time by table or DSN for all users (override their schedules).
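
As a rough illustration of the "run every X minutes except during a blackout window" logic (plain Python, not scheduler syntax; the window times below are placeholders):

    from datetime import datetime, time

    # Placeholder blackout window: the nightly batch hour when the source can't be queried
    BLACKOUT_START = time(2, 0)   # 2:00 am
    BLACKOUT_END = time(3, 0)     # 3:00 am

    def should_run(now: datetime) -> bool:
        # Skip the run if it falls inside the blackout window; run otherwise
        return not (BLACKOUT_START <= now.time() < BLACKOUT_END)

    # A scheduler firing every 5 minutes would call this check before each run
    if should_run(datetime.now()):
        print("OK to run the ETL job")
    else:
        print("Inside blackout window; skipping this run")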

It would be great for Alteryx to provide a UI that allows the user to maintain the data in a target table through Alteryx. The workflow application would be a standard way to maintain reference data.

 

This would allow us to deliver a quick way to interface with relational tables. Something similar to the following projects:

 

- django admin site

- phpgrid

- etc.

 

 

This would avoid using Microsoft Access, for example, for quick table edits, and let us use a simplified Alteryx app instead.

Right now we can set the limit on the number of processes a worker can run, and each worker pings the controller when a slot is available so that it can be assigned a new process. However, this is not efficient, and you can't truly load balance because you don't know how busy that worker actually is...

 

We need something similar to how we do it in a DataStage GRID environment, where the LSF platform balances the workload across the entire grid.
