
Alteryx Server Ideas

Share your Server product ideas - we're listening!

Featured Ideas

Hi all,

 

In an enterprise environment, DB connections need to be set up from the server and pushed down to your users, and they need to be managed across the various servers in your software lifecycle.

 

In other words, you may have a sandpit / dev server environment, a UAT environment, a pre-prod, and a prod environment - and each of these needs to have the same DCM credential IDs so that users can access them.

(Before you say "you can do this from the desktop": that is true; however, it's not a workable solution in an enterprise environment, because it means users could change the password on a prod environment from their desktop, which is a breach of IT General Controls.)

 

The solution here is to break DCM out into a separate service, where:

- all your servers (dev, UAT, pre-prod, prod) point to one instance of DCM

- users can maintain their own connections and credentials

        - each needs up to 2 owners so that you can deal with people moving jobs / leaving the firm

- users can also entitle these connections and credentials to their team members, so that when a team member logs in, a popup says "you've just been given access to new credentials / connections"

- a particular connection may have multiple different variants, depending on the environment (see the sketch after this list)

        - HR Data may point to a UAT version of HR data if you're on the UAT server, and to Prod if you're on the Prod server

        - if a connection is environment-specific, then it also needs segregated credentials (since the login to your UAT HR Data may not be the same as Prod).
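
A minimal sketch of what one such environment-aware connection record could look like (every field name here is illustrative, not an existing DCM schema):

# Illustrative only -- not an existing DCM schema. One logical connection ID
# shared across all environments, with per-environment variants and
# segregated credentials.
hr_data_connection = {
    "connectionId": "dcm-hr-data",                     # same ID on dev/UAT/pre-prod/prod
    "owners": ["user.a@firm.com", "user.b@firm.com"],  # up to 2 owners
    "entitledUsers": ["team.member@firm.com"],         # notified on next login
    "variants": {
        "UAT":  {"dsn": "hrdata-uat",  "credentialId": "cred-hr-uat"},
        "PROD": {"dsn": "hrdata-prod", "credentialId": "cred-hr-prod"},
    },
}

def resolve(connection: dict, environment: str) -> dict:
    """Each server resolves the variant (and its credential) for its own environment."""
    return connection["variants"][environment]

print(resolve(hr_data_connection, "UAT"))  # the UAT server gets the UAT login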

 

Thank you all

sean

 

cc: @wesley-siu @_PavelP 

 

 

Today, when you trigger a job using the Server API, it is considered a manual run type. In fact, there are only two job types: "Scheduled" and "Alteryx_Run".

I think "Alteryx_Run" should be segregated into "API_Run" and "Manual_Run". That way, future versions could treat these job types differently.

We could also get more stats around the types of jobs.
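
For instance, once the run type is stored per job, stats become a one-line aggregation (a pymongo sketch; the "Type" field name and its values are assumptions, not the documented AS_Queue schema):

from pymongo import MongoClient

# "Type" as the run-type field is an assumption about the AS_Queue schema.
client = MongoClient("mongodb://localhost:27018")
queue = client["AlteryxService"]["AS_Queue"]

# Count jobs per run type, e.g. Scheduled / API_Run / Manual_Run.
for row in queue.aggregate([{"$group": {"_id": "$Type", "count": {"$sum": 1}}}]):
    print(row["_id"], row["count"])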

Good Day.

 

We would like a built-in process that would search for, and resolve, workflows that are stuck in the "initializing" state. These seem to happen for various reasons, but communication problems between the controller and workers (usually a socket timeout) appear to be the most problematic. These types of errors should probably be expected in all but the most stable environments.

 

Currently, the only tool we have to solve this problem is restarting the Alteryx Service on the controller. While this works, it tends to cause collateral damage: other workflows erroring out or restarting from the beginning.

 

There may be a way to solve this without restarting the service, by editing MongoDB with a tool like Robo 3T, but that approach is unproven and carries its own risk.

 

After dealing with this issue and struggling for quite some time, we think the best option is a "clean-up" DB process that runs every 5 minutes or so: capture a list of workflows in the "initializing" state, compare that list to the one from the next 5-minute cycle, and fix any workflows that appear in both lists (see the sketch below). We think that returning any stuck workflows to the queued state would be the best fix.
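
A minimal sketch of that two-snapshot loop (assuming direct pymongo access to the persistence layer; the "Status" field and its values are assumptions about the AS_Queue schema):

import time
from pymongo import MongoClient

# "Status" and its values are assumptions about the AS_Queue schema.
client = MongoClient("mongodb://localhost:27018")
queue = client["AlteryxService"]["AS_Queue"]

def initializing_ids():
    return {doc["_id"] for doc in queue.find({"Status": "Initializing"}, {"_id": 1})}

previous = initializing_ids()
while True:
    time.sleep(300)                 # the 5-minute cycle described above
    current = initializing_ids()
    stuck = previous & current      # seen "initializing" on two consecutive cycles
    if stuck:
        # Return stuck workflows to the queue instead of restarting the service.
        queue.update_many({"_id": {"$in": list(stuck)}},
                          {"$set": {"Status": "Queued"}})
    previous = current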

 

We just don't want to keep using the restart-the-service process to solve this issue and accepting the collateral damage.

 

Thank you for your consideration

 

Tom D  

 

 

 

Currently, when we need to disable or enable a schedule via the API, we have to update all of the schedule info. Could you provide a single attribute to disable it?

Can we just update one parameter?

"enabled": true --> "enabled": false

In the current update example, we need to update everything:

 

{
  "workflowId": "string",
  "ownerId": "string",
  "iteration": {
    "iterationType": "Once",
    "startTime": "2022-09-06T08:01:52.717Z",
    "endTime": "2022-09-06T08:01:52.717Z",
    "hourlyContract": {
      "hours": 0,
      "minutes": 0
    },
    "dailyContract": {
      "runOnlyWorkWeek": true
    },
    "weeklyContract": {
      "daysOfWeek": [
        "Sunday"
      ]
    },
    "monthlyContract": {
      "simpleDayOfMonth": true,
      "dayOfMonth": 0,
      "occurrence": 0,
      "dayOfWeek": "Sunday"
    },
    "customContract": {
      "daysOfMonth": [
        0
      ],
      "months": [
        0
      ]
    }
  },
  "name": "string",
  "comment": "string",
  "priority": "Default",
  "workerTag": "string",
  "enabled": true,
  "credentialId": "string"
}
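
Until a single-attribute update exists, the workaround is a full read-modify-write against the V3 schedules endpoint: GET the schedule, flip the one flag, and PUT the whole object back (a sketch; base URL and token are placeholders):

import requests

BASE = "https://your-gallery/webapi"           # placeholder Gallery API base URL
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder OAuth2 token
schedule_id = "<scheduleId>"

# Read the full schedule object...
schedule = requests.get(f"{BASE}/v3/schedules/{schedule_id}", headers=HEADERS).json()

# ...flip the single flag we care about...
schedule["enabled"] = False

# ...and write the entire object back, since partial updates are not supported.
resp = requests.put(f"{BASE}/v3/schedules/{schedule_id}", json=schedule, headers=HEADERS)
resp.raise_for_status()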

I would like to suggest the idea of handling row-level-security data sources in a more seamless way using Kerberos passthrough, where Alteryx Gallery passes along the fact that User A is running the workflow to the underlying DB and authenticates as User A.

 

We have many workflows that are built to handle different queries of a database that are reliant on knowing who is running the workflow in Gallery. We also have many regional workers, and we want to keep the administration of these connections to the data as simple as possible. 

 

For more information, check out the Community thread on this subject.

We have groups within our organization asking for ways to check the status of a running workflow in Gallery. For longer-running workflows, they want to understand which step in the process the workflow has completed.

 

They are looking for an experience similar to running in Designer, where they can see which tools have completed. At the very least, they would like the log to be reported live rather than only at the end of the run.

 

Currently, the run feels like a black box where they do not know how close it is to completion or which steps it has made it through.

 

We have tried to build workarounds, like the Email tool, but have been unsuccessful; the Email tool does not send an email until the workflow completes, which defeats the purpose. The closest workaround is writing our own log along the way that can be reported on, which is not a clean solution.
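
The custom-log workaround, for what it is worth, boils down to something like this at each checkpoint in the workflow (the path and line format are our own convention, not anything Alteryx provides):

from datetime import datetime, timezone

# Our own convention, not an Alteryx feature: each checkpoint appends a line
# to a shared log that can be tailed while the job is still running.
def checkpoint(run_id: str, step: str,
               path: str = r"\\fileshare\alteryx\progress.log"):
    stamp = datetime.now(timezone.utc).isoformat()
    with open(path, "a", encoding="utf-8") as log:
        log.write(f"{stamp}\t{run_id}\t{step}\n")

checkpoint("run-42", "finished loading source tables")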

By design, Alteryx Server retains all FAILED jobs in the Queue and Results collections, even when we set the server to keep run history and results for x days.

 

Purging these records from Designer is a manual activity.

 

Proposing the idea of purging these error records through an automation script (sketched below):

 

Step 1: Stop the Alteryx Service

Step 2: Back up MongoDB

Step 3: Replace the large files (AS_ResultsFiles.Files.bson, AS_Results.bson, AS_ResultsFiles.bson, AS_Queue.bson) with empty .bson files of the same name in backup/AlteryxService

Step 4: Restore MongoDB from the backup (with the replaced files)

Step 5: Restart the Alteryx Service
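
A hedged sketch of steps 1-5 as a script (the AlteryxService emongodump/emongorestore switches and the paths should be verified against the MongoDB backup documentation for your version before running anything against production):

import subprocess
from pathlib import Path

BACKUP = Path(r"C:\MongoBackup")  # assumption: backup target directory
BIG_FILES = ["AS_ResultsFiles.Files.bson", "AS_Results.bson",
             "AS_ResultsFiles.bson", "AS_Queue.bson"]
SVC = r"C:\Program Files\Alteryx\bin\AlteryxService.exe"

# Step 1: stop the service.
subprocess.run(["net", "stop", "AlteryxService"], check=True)

# Step 2: back up the embedded MongoDB (verify the switch for your version).
subprocess.run([SVC, f"emongodump={BACKUP}"], check=True)

# Step 3: swap the large collections for empty .bson files of the same name.
for name in BIG_FILES:
    (BACKUP / "AlteryxService" / name).write_bytes(b"")

# Step 4: restore from the modified backup (verify the switch and target path).
subprocess.run(
    [SVC, f"emongorestore={BACKUP},C:\\ProgramData\\Alteryx\\Service\\Persistence\\MongoDB"],
    check=True)

# Step 5: restart the service.
subprocess.run(["net", "start", "AlteryxService"], check=True)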

In the example given, there are four scheduled jobs running on the server and one manual job that has been triggered by a user and queued for more than 30 minutes waiting to start. However, in MongoDB, when a job is triggered, the trigger time is captured as the start time (queue time is not considered). If we use the start time from AS_Queue in our workflow, we end up with a mess: since that manual job was queued for 30 minutes and has been running for only 3 hours 30 minutes, our workflow kills it, when it should only be killed after 4 hours of actual run time.

 

  • How do we determine the total queue time and the execution start time for running/queued jobs?
  • How do I kill such a job automatically after four hours while taking queue time into consideration?
  • Is there any other way to kill manual jobs after four hours? Please note that scheduled jobs are killed automatically by the system after four hours.
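
A sketch of the timing logic this would require (all field names below are hypothetical; today MongoDB records only the trigger time, which is exactly the gap described above):

from datetime import datetime, timedelta
from pymongo import MongoClient

# All field names below are hypothetical assumptions about AS_Queue --
# today only the trigger time is recorded, which is the gap described above.
client = MongoClient("mongodb://localhost:27018")
queue = client["AlteryxService"]["AS_Queue"]

MAX_RUNTIME = timedelta(hours=4)
now = datetime.utcnow()  # BSON datetimes come back as naive UTC

for job in queue.find({"Status": "Running"}):
    queued_at = job["CreationDateTime"]      # hypothetical: when the job entered the queue
    started_at = job["ExecutionStartTime"]   # hypothetical: when a worker picked it up
    run_time = now - started_at              # excludes the time spent queued
    if run_time > MAX_RUNTIME:
        print(f"kill {job['_id']}: ran {run_time}, queued {started_at - queued_at}")
        # ...terminate via the same mechanism the system uses for scheduled jobs...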

For our large organization, a requirement is that Server connects to a MongoDB cluster requiring keytab-based Kerberos authentication. Can we please upgrade to a MongoDB version that supports this, and enable it within Alteryx Server?

 

(https://docs.mongodb.com/v3.2/tutorial/control-access-to-mongodb-with-kerberos-authentication/)
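
For reference, this is what keytab/Kerberos authentication looks like from a GSSAPI-capable MongoDB driver (a pymongo sketch; the host and service principal are placeholders):

from pymongo import MongoClient

# Requires: pip install "pymongo[gssapi]" and a valid Kerberos ticket/keytab.
# The host and service principal below are placeholders.
client = MongoClient(
    "mongodb://mongo-cluster.example.com:27017",
    authMechanism="GSSAPI",
    username="alteryxsvc@EXAMPLE.COM",
)
print(client.admin.command("ping"))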

We have a MongoDB connection string that we plug into RuntimeSettings.xml during install. Upon startup, the Mongo connection fails because RuntimeSettings.xml expects the connection string to be encrypted. The fix is for someone to manually update the MongoDB connection via the Alteryx UI. We would like a way to encrypt the MongoDB connection string for RuntimeSettings.xml before starting Alteryx for the first time.


Hi Team, 

 

The Alteryx API documentation for Audit is only available for certain entities (workflow, collection, etc.) and not for Schedule entities.

If you create a schedule in Gallery, information about the schedule (creation date/time, frequency, owner, type, last run, next run, etc.) is written to MongoDB. If a user edits or modifies a schedule in the Gallery, the updated information is only available in MongoDB. There is no way to see audit information such as the old value (before the change), the new value, or the operation (update, delete, insert).

 

We require audit information for all Gallery operations, such as schedule, collection, and workflow creation, update, and deletion.

 

Regards, 
Ariharan Rengasamy


Hi All, 

 

The Alteryx AuditLog API documentation is only available for certain entities (workflow, collection, etc.) and not for Schedule entities.

 

https://help.alteryx.com/developer-help/auditlog-endpoint

 

I want to see schedule audit information such as the old value (before the change), the new value, and the operation (update, delete, insert).


E.g., if you create a schedule in Gallery, information about the schedule (creation date/time, frequency, owner, type, last run, next run, etc.) is written to MongoDB. If a user edits or modifies a schedule in the Gallery, the updated information is only available in MongoDB.
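
Ideally the existing AuditLog endpoint would simply accept a schedule entity. A sketch of the call we would like to make (base URL and token are placeholders, "schedule" is precisely the entity value that is not supported today, and note that some Gallery versions sign Admin API calls with OAuth 1.0a rather than a bearer token):

import requests

BASE = "https://your-gallery/webapi"           # placeholder Gallery API base URL
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder; older versions use OAuth 1.0a

# "schedule" is the entity we are asking the AuditLog endpoint to support.
resp = requests.get(f"{BASE}/admin/v1/auditlog/",
                    params={"entity": "schedule", "page": 0, "pageSize": 100},
                    headers=HEADERS)
resp.raise_for_status()
for event in resp.json():
    print(event)  # old value, new value, operation, user, timestamp...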

 

Regards, 

Ariharan R


When I publish a workflow, I want it to have different behaviour based on the collection from which it was executed or scheduled. Let us suppose that I create a DEV collection and a PROD collection, and the workflow is shared in both. If the workflow is executed from DEV, then I will have some logic to do X (output to specific folders, etc.); if from PROD, the logic will do Y. The logic will be up to me.

If the collection information is captured in the MongoDB layer, then I can pull it out on the fly by joining the collections, appInfos, and AS_Queue or AS_schedules collections. Querying the DB on the fly during execution and pulling that value is trivial. Once I have it, I can use it for any sort of logic as an environment-sensing variable. Can you please store the info in MongoDB?
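
If the collection ID were stamped on the queue or schedule record, pulling it at run time would look something like this (a pymongo sketch; the "CollectionId" field is the very thing this idea asks to be stored, and the other names are assumptions, since database and collection names vary by install and version):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27018")
db = client["AlteryxGallery"]

job_id = "<current job id>"  # supplied to the workflow at run time

# "CollectionId" on AS_Queue is the field this idea asks Alteryx to populate;
# joining to the collections collection then yields an environment variable.
job = db["AS_Queue"].find_one({"_id": job_id})
coll = db["collections"].find_one({"_id": job["CollectionId"]})
env = coll["Name"]  # e.g. "DEV" or "PROD"
print(f"running from the {env} collection: branch the workflow logic accordingly")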


I would like to see the ability to limit the number of versions of a workflow a user can save. As a server environment ages, we see the database grow unnecessarily large because of the number of workflow versions. For servers in DEV and QA environments, as an Admin I would like to be able to say users can keep their 10-25 most recent versions of a workflow; anything over that would auto-delete, just like the auto-save feature in Designer.


Alteryx saves log files in UTF-16. Please change the format to UTF-8, as logs can then be streamed by applications like New Relic to track performance; these tools only support UTF-8.
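
Until the format changes, a small re-encoding shim can sit between the log directory and the forwarder (a sketch; both directories are placeholders):

from pathlib import Path

# Re-encode Alteryx's UTF-16 logs as UTF-8 so forwarders like New Relic can
# ingest them. Both directories below are placeholders.
src = Path(r"C:\ProgramData\Alteryx\ServiceLogs")
dst = Path(r"C:\Logs\utf8")
dst.mkdir(parents=True, exist_ok=True)

for log_file in src.glob("*.log"):
    text = log_file.read_text(encoding="utf-16")
    (dst / log_file.name).write_text(text, encoding="utf-8")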

 

 
