Hello all,
This may be a little controversial. As of today, when you buy an Alteryx Server, the basic package covers up to 4 cores:
https://community.alteryx.com/t5/Alteryx-Server-Knowledge-Base/How-Alteryx-defines-cores-for-licensing-our-products/ta-p/158030
I have always known that. But in recent years the technology and the world have evolved, especially the number of cores in a server. As an example, AMD EPYC CPUs for servers start at 8 cores:
https://www.amd.com/en/processors/epyc-7002-series
So the idea is to raise the number of cores in the initial package to 8 or even 16 cores. It would:
- make Alteryx more competitive
- cost very little
- end some user frustration
Moreover, Alteryx Server Additional Capacity license should be 4 cores.
Best regards,
Simon
I think a file explorer or file management feature related to workflows on the Server should allow files to be seen, linked, and referred to, so that much more creative ways to check and use them across multiple workflows on the Server can be realized.
Data Connections and Workflow Credentials are a key part of the migration process for workflows to the Gallery.
They are provisioned for each user upon request.
When a developer leaves the organization, there is no easy way to identify all the Data Connections and Workflow Credentials assigned to that user.
The current option in the Gallery is for the Admin to browse through each Data Connection and Workflow Credential, navigate to the Users tab, and identify the list of users.
For a large organization with many Data Connections and Workflow Credentials, this is hard to manage because:
1. If the workflows change ownership, the new owner has to be given access to the Data Connections and Workflow Credentials.
2. The departing user's access to the Data Connections and Workflow Credentials has to be removed.
An Admin page in the Gallery should include all assets a user owns or has access to. By selecting a user, the list should populate.
We have implemented a solution to capture this information by pulling the details from MongoDB, along with an automated process where the list goes to the manager when a developer leaves the organization, so that they can manage the assets by identifying a new owner for them.
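As an illustration only, here is a minimal sketch of that kind of MongoDB lookup, assuming pymongo, the Gallery database name AlteryxGallery, and the dataConnections collection whose Users array holds the user IDs a connection is shared with (exact names and ID types can differ per Server version):

from pymongo import MongoClient

def connections_for_user(mongo_uri: str, user_id: str) -> list[str]:
    """Return the display names of Data Connections shared with a given user."""
    client = MongoClient(mongo_uri)
    db = client["AlteryxGallery"]          # Gallery database (assumed name)
    # Match documents whose Users array contains this user ID
    shared = db["dataConnections"].find({"Users": user_id})
    return [doc.get("ConnectionName", "<unnamed>") for doc in shared]

if __name__ == "__main__":
    # Placeholder URI and user ID; replace with the controller MongoDB details
    names = connections_for_user("mongodb://user:pass@controller:27018", "<departing-user-id>")
    print("\n".join(names))

A similar query against the Workflow Credentials collection gives the other half of the departing user's assets.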
In the new version, we have an easy way to change ownership of a workflow in the Gallery. The other assets should be handled in the same manner.
There are some products on the market which allow installing multiple instances of a service as Windows services on a single server.
In large organizations running Alteryx Server in a multi-node setup, each worker server has only one Alteryx service installed, running as a single Windows service. If that service is configured to log on as a functional ID (FID), and that FID reaches its shared path mapping capacity (1018k), authentication stops and the Windows service is not able to start.
If we had multiple services installed on the same server, we could configure, for example:
AlteryxService.exe : FID1
AlteryxService2.exe : FID2
AlteryxService3.exe : FID3
In this case we can utilize the server's compute and enhance multi-tenancy instead of adding additional servers.
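Purely to illustrate the proposal (this is not a supported configuration today), registering additional service instances under different FIDs could look roughly like the sketch below; the install paths, service names, and accounts are placeholders:

import subprocess

# Sketch: register extra AlteryxService instances as separate Windows services,
# each logging on as a different functional ID (FID). Paths and accounts are
# placeholders; side-by-side service instances are the idea being proposed,
# not something Alteryx supports out of the box.
SERVICES = [
    ("AlteryxService2", r"C:\AlteryxNode2\bin\AlteryxService.exe", r"DOMAIN\FID2"),
    ("AlteryxService3", r"C:\AlteryxNode3\bin\AlteryxService.exe", r"DOMAIN\FID3"),
]

for name, bin_path, account in SERVICES:
    # sc.exe expects each "option=" token followed by its value as the next token
    subprocess.run(
        ["sc", "create", name,
         "binPath=", bin_path,
         "start=", "auto",
         "obj=", account,
         "password=", "<fid-password>"],
        check=True,
    )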
The V3 jobs API endpoint woefully lacks usefulness. The current endpoint only has a GET jobs/{jobid} method, which is not useful because a database admin must query the database to get a list of all job IDs. To add insult to injury, this method is limited to the user whose job is running or queued.
These are the new features I am proposing:
1. GET jobs/list—This method must be callable by all users. Parameters such as none (default—full list), running, or queued will display jobs with the appropriate status. The job ID of the running or queued job and the worker it is running on must be included in the resultset.
2. GET jobs/{ownerid} — This method must be callable by all users. Like the GET jobs/list above, the resultset must include the job ID of the running or queued job and the worker it is running on.
3. DELETE jobs/{jobid} — This method must be callable by the person who scheduled the job, the owner of the workflow, or the curator. This method is the equivalent of cancelling a job on the Server Admin page - #!/admin/jobs by a curator. All three mentioned people have a vested interest in the running or queued jobs on the server and must be able to cancel those jobs.
4. POST jobs/reassign/{jobid}/{new_job_tag} — This method is restricted to the curator and applies to any job in a queued state. It allows a curator to reassign a job to another job tag or the first available worker for reasons determined by the curator.
This is an enhancement that I am proposing
1. GET jobs/{jobid} — This method must be callable by all users. This will allow any user to get the details of any running or queued job.
Logging requirements
All DELETE or POST methods must be logged and purged based on the Persistence Option > Delete queue and results after (days).
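To make the proposal concrete, here is a sketch of how a client might consume these endpoints, assuming the V3 API's base URL and bearer-token authentication; none of the endpoints below exist today, and the field names (jobId, worker, status) are illustrative:

import requests

BASE = "https://myserver/webapi/v3"            # assumed V3 base URL
HEADERS = {"Authorization": "Bearer <access-token>"}

# Proposed GET jobs/list: all queued jobs, including job ID and assigned worker
queued = requests.get(f"{BASE}/jobs/list",
                      params={"status": "queued"}, headers=HEADERS).json()
for job in queued:
    print(job["jobId"], job["worker"])

# Proposed DELETE jobs/{jobid}: cancel a job (job scheduler, workflow owner, or curator)
requests.delete(f"{BASE}/jobs/{queued[0]['jobId']}", headers=HEADERS).raise_for_status()

# Proposed POST jobs/reassign/{jobid}/{new_job_tag}: curator moves a queued job
requests.post(f"{BASE}/jobs/reassign/<jobId>/worker-tag-B",
              headers=HEADERS).raise_for_status()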
I would like the ability to "favorite" Public flows that are on the server and then have a separate "Favorites" section. As the volume of Public flows on the server increases, it would allow users to navigate to frequently used flows much faster.
The Gallery needs to implement basic auditing for data connections. Currently, there is no way to determine who created or updated a data connection, or when.
The dataConnections collection contains data connections with these keys:
_id: (ObjectId) Document primary key.
ConnectionString: (String) Hashed database connection string.
PasswordSecured: (String) Encrypted password for the database connection.
ConnectionName: (String) Data connection display name.
Subscriptions: (Array) Array of subscription IDs the data connection has been shared with.
Users: (Array) Array of user IDs the data connection has been shared with.
UserGroups: (Array) Array of group IDs the data connection has been shared with.
Add keys to provide a basic audit trail: who created the data connection and when, and who last updated it and when.
Modify the gallery to allow the values of the new keys to be displayed. Modify the API endpoint to retrieve this information.
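For illustration, a sketch of what a dataConnections document could look like with audit keys added; the four audit key names are suggestions, not existing schema fields:

from datetime import datetime, timezone

# Existing keys as documented above, plus the proposed audit-trail keys
data_connection_doc = {
    "_id": "<ObjectId placeholder>",
    "ConnectionString": "<hashed connection string>",
    "PasswordSecured": "<encrypted password>",
    "ConnectionName": "Finance DW (prod)",
    "Subscriptions": [],
    "Users": ["<userId>"],
    "UserGroups": [],
    # Proposed audit-trail keys (names are suggestions)
    "CreatedBy": "<curator userId>",
    "CreatedDate": datetime(2023, 1, 15, tzinfo=timezone.utc),
    "LastUpdatedBy": "<curator userId>",
    "LastUpdatedDate": datetime.now(timezone.utc),
}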
You cannot currently upload a new workflow and specify your own workflow_id GUID. This would be useful for systematic workflows that need to be referenced in code. Currently, you can search for a workflow by name, but you are not guaranteed it is the workflow instance you uploaded. This would be helpful for server and workflow administration.
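For example, a hypothetical upload call where the client supplies its own GUID; the workflowId form field is the proposed addition and does not exist today, and the multipart upload shape assumes the V3 workflow publish endpoint:

import uuid
import requests

BASE = "https://myserver/webapi/v3"            # assumed V3 base URL
HEADERS = {"Authorization": "Bearer <access-token>"}

my_guid = str(uuid.uuid4())                    # GUID our downstream code will reference
with open("daily_refresh.yxzp", "rb") as pkg:
    resp = requests.post(
        f"{BASE}/workflows",
        headers=HEADERS,
        files={"file": pkg},
        data={"name": "Daily Refresh",
              "workflowId": my_guid},           # proposed parameter, not an existing one
    )
resp.raise_for_status()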
Given there are multiple API versions, I need a way to call the API and get the server version so I can make the correct API call, or construct code logic that adapts to each version's features and limitations.
I propose an API call ../getserverinfo/ that returns server metadata like version, default worker thread count, and default memory allocation.
There is currently no way to call the API and find out the calling user. For instance, if I have a user API key and secret, I want to return the rest of the user info for the calling user, i.e., who is calling the API. I propose an API call like ../user/whoami
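A sketch of the two proposed calls and the kind of metadata they could return; the paths and response fields below are illustrative only, since neither endpoint exists today:

import requests

BASE = "https://myserver/webapi/v3"            # assumed V3 base URL
HEADERS = {"Authorization": "Bearer <access-token>"}

# Proposed ../getserverinfo/ equivalent
server_info = requests.get(f"{BASE}/serverInfo", headers=HEADERS).json()
# e.g. {"version": "2022.1.1.30569", "apiVersions": ["1", "2", "3"],
#       "defaultWorkerThreads": 2, "defaultSortJoinMemoryMb": 2047}

# Proposed ../user/whoami equivalent
me = requests.get(f"{BASE}/users/whoami", headers=HEADERS).json()
# e.g. {"id": "...", "firstName": "...", "lastName": "...", "role": "Artisan"}

# Client code can then branch on version features
if server_info["version"].startswith("2022"):
    print("Use V3 endpoints for", me["firstName"])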
After a job is run on Alteryx Server, the Gallery lists Job Results with a Status column containing one of two values:
If any WARNING messages are generated by the workflow, the operator/user is unaware unless they take the time to expand the message log details, then scroll through the long list of messages that typically appear in the log.
Because the Success Icon appears whether there are Warnings or not, the user must dutifully spend extra time scrolling through the list looking for Warnings even if there are none to be found.
My Idea: provide additional information under the Status column in one or more of these ways:
I think that the user would benefit from a filter where they could focus on errors, warnings, or other types of messages in the same spirit as the Designer interface, but I recognize that would be a lot of work and I am not asking for that now.
Workflows that are scheduled and fail 5 times in a row should have their schedule stopped/disabled, and a mail should be sent to the workflow owner stating that the schedule was stopped due to continuous errors.
This feature would help a lot with administering scheduled workflows. Many users create workflows, dump them onto the server, schedule them, and forget about them. If we implement this strategy, it will be helpful to both users and the Admin team.
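A rough sketch of the requested behaviour, expressed as an admin-side script; the schedule and job endpoints, field names, and mail helper are assumptions, and ideally this logic would live inside the Server itself:

import requests

BASE = "https://myserver/webapi/v3"            # assumed V3 base URL
HEADERS = {"Authorization": "Bearer <access-token>"}
MAX_CONSECUTIVE_FAILURES = 5

def notify_owner(owner_id: str, message: str) -> None:
    # Placeholder: in practice this would e-mail the workflow owner
    print(f"[to {owner_id}] {message}")

for schedule in requests.get(f"{BASE}/schedules", headers=HEADERS).json():
    # Last N runs of this schedule (endpoint name is an assumption)
    runs = requests.get(f"{BASE}/schedules/{schedule['id']}/jobs",
                        headers=HEADERS).json()[-MAX_CONSECUTIVE_FAILURES:]
    if len(runs) == MAX_CONSECUTIVE_FAILURES and all(r["status"] == "Error" for r in runs):
        # Disable the schedule and tell the owner why
        requests.put(f"{BASE}/schedules/{schedule['id']}",
                     json={"enabled": False}, headers=HEADERS)
        notify_owner(schedule["ownerId"],
                     f"Schedule '{schedule['name']}' was disabled after "
                     f"{MAX_CONSECUTIVE_FAILURES} consecutive errors.")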
Hello,
Maybe it's time to have a better licensing model. In addition to the current, restrictive core-based model, why not have a user-based model?
Best regards,
Simon
At my organization, we have many workflows on our server that take data from one database and store it in another. We would love an alert system that warns us when a job fails so that we can fix the problem immediately and not risk it going unnoticed for weeks or months.
Maintaining multiple workers (five currently, soon six) with identical setups is challenging when dealing with In-Database (In-DB) connections. I must log in to each worker, start Alteryx Designer, go to In-DB settings, and create the connection. This also becomes tedious when trying to update passwords, which happens every 90 days at my company.
The suggestion is to set up an In-Db connection on one worker and have it propagate to the other workers.
[Diagram: an In-DB connection is set up on one worker and propagates to the other workers]
This would save time maintaining workers in the gallery and help prevent errors during setup on each worker (e.g., typing in the wrong password).
As an "extra credit" mission, expose In-Db connections through an API that can list, create, update, or delete an In-Db connection.
When scheduling recurring workflows in the Gallery, it would be beneficial to also have start and end times. For example, when setting the frequency to hourly, an option to run only between 9 AM and 5 PM would be great. This would prevent us from scheduling workflows for all 24 hours and taking up system resources when other, more important workflows could run instead.
Hello all,
According to https://openlineage.io/
An open framework for data lineage collection and analysis
Data lineage is the foundation for a new generation of powerful, context-aware data tools and best practices. OpenLineage enables consistent collection of lineage metadata, creating a deeper understanding of how data is produced and used.
This is exactly the kind of open standard needed for lineage analysis, and I think it will become more and more of a differentiator from your competitors. As of today, DBT and Apache Airflow already support it (as producers), Egeria and Marquez already support it (as consumers), and the folks from Datahub are working on it (as consumers): https://feature-requests.datahubproject.io/p/openlineage
So I think Alteryx should implement this standard API as a producer; it's the next big thing in Data Governance and you don't want to fall behind!
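For reference, a minimal sketch of the kind of run event an Alteryx producer could emit when a workflow finishes; the event layout follows the OpenLineage spec, while the producer URL, job/dataset names, and the Marquez endpoint are examples only:

import uuid
from datetime import datetime, timezone

import requests

event = {
    "eventType": "COMPLETE",
    "eventTime": datetime.now(timezone.utc).isoformat(),
    "producer": "https://example.com/alteryx-openlineage-producer",   # hypothetical
    "schemaURL": "https://openlineage.io/spec/1-0-5/OpenLineage.json#/definitions/RunEvent",
    "run": {"runId": str(uuid.uuid4())},
    "job": {"namespace": "alteryx-server", "name": "Finance/Daily_Refresh"},
    "inputs": [{"namespace": "postgres://dwh", "name": "public.orders"}],
    "outputs": [{"namespace": "postgres://dwh", "name": "public.orders_clean"}],
}

# Send to an OpenLineage consumer; Marquez, for example, exposes POST /api/v1/lineage
requests.post("http://marquez:5000/api/v1/lineage", json=event).raise_for_status()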
Best regards,
Simon
I would like to be able to suspend scheduled jobs - those that are queued to run.
The only option currently is just to delete them.
I want to be able to 'put them on hold' and then release them as and when convenient. Once released, they just go back into the queue, or run if there is a free scheduling slot.
I just underwent an exercise of recovering my controller in the event of a catastrophic failure. One of the steps is to recover the DCME keys (DCM Encryption keys) - which is documented here: https://help.alteryx.com/20221/en/server/install/server-host-recovery-guide/dcme-keys-to-backup.html...
This DCME recovery process needs to be revisited. The document assumes that the previous controller is still running; in a disaster recovery situation, this is not possible. What, if anything, can be done to recover the DCME keys if the host is completely irrecoverable?
For context, having an irrecoverable host has happened. Complete hard drive failure (showing my age), nuked virtual machine and its backups (no one paid attention to the notices that the data center was shutting down), and fire.
Current State:
Currently, all workflows and applications are in list form within "My Workspace" (formerly Private Studio) and Collections. In My Workspace, I might have workflows and applications that support a broad range of domain spaces and audiences. As the developer (or Artisan), I see them all in My Workspace, shown as an exhaustive list with no categorization unless I name them to represent not only the function of the workflow/application but also the domain.
Once those same workflows/applications are moved to collections, there can exist confusion over whether the workflow/application is intended for a schedule, manual run, or application. Separating by naming convention gets messy and degrades clarity for non-developer roles.
Proposed Solution:
I would like to see folders, only one or two levels deep, be added to My Workspace and to Collections. This proposed solution would not alter permissions, as those would be common for the parent collection and any assigned roles would function the same for that entire collection. The solution is simply adding organization to enhance the user experience.
For example: I might have a Collection that is intended for my Finance team....
Finance_Collection / Scheduled_ETL_Workflows / Workflows
Finance_Collection / Scheduled_Analytic_Workflows / Workflows
Finance_Collection / Applications_for_AccountingDepartment_ReceivablesTeam / Workflows
Finance_Collection / Applications_for_AccountingDepartment_PayablesTeam / Workflows
Finance_Collection / Manual_ETL_Workflows / Workflows
Finance_Collection / Manual_Analytic_Workflows / Workflows
Finance_Collection / etc...
All persons who have been assigned the role connected to the "Finance Collection" will still see everything in all of the folders but would have a better sense of what "workflows/applications" are intended for their use according to the folders the workflows/applications are organized into.
Value Added (Why This Matters):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I welcome input and feedback from the community and would appreciate your support if you find this suggestion useful for your Alteryx experience!