My team manages an enterprise server for our company. I'm looking to see if any other server admins are charging out portions of their server to cover the licensing. For example, let's say I have 4 different internal teams using this server and the annual cost for the server licensing is $20k. The easiest option is to just charge each team $5k. However, I want to know if anyone is looking at various metrics to determine a more accurate chargeback. Things like number of jobs, duration of jobs, number of users, etc. I'd love to hear what others are doing! Thanks!
@afox1 This sort of question is more of an internal discussion specific to your own company's culture. However... I found myself in a similar discussion recently and here is what we were thinking:
If you had to make internal charges by team, it would be easiest to split the cost in proportion to the number of Designer/desktop licenses each team uses. This is mostly for the sake of simplicity.
The idea is that all Designers have an equal opportunity to deploy workflows and apps to the server; whether or not they choose to is less relevant.
If you wanted to charge based on use of the server you could, but you may end up with more headaches and detailed billings to handle.
For example: if one team deploys 100 apps that each get used twice in a month and another team deploys 10 workflows that run 20 times per day, which had the greater resource drain? That would depend more on your business, network load, and time of day.
Since users can deploy apps/workflows many times over, which may or may not actually be used, you'd have to look at the results history (pulling from the DB, of course). Users can delete their own results, though, so you'd need a regular data pull going to store that history elsewhere.
So, in the end, you'd spend more in administrative costs trying to track all of the metrics if your goal is a "fair" way to divide cost internally. I'd go with a flat rate by team, or charge by the number of Designers they have (assuming each has the same potential to use the server).
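The license-count approach above amounts to a simple proration. A minimal sketch of the arithmetic, assuming the $20k annual cost from the example and hypothetical per-team license counts:

```python
# Annual server licensing cost from the example above.
ANNUAL_COST = 20_000

# Hypothetical Designer license counts per team (not from the original post).
licenses = {"Team A": 10, "Team B": 5, "Team C": 3, "Team D": 2}

def prorate_by_licenses(total_cost, counts):
    """Split total_cost in proportion to each team's Designer license count."""
    total_licenses = sum(counts.values())
    return {team: round(total_cost * n / total_licenses, 2)
            for team, n in counts.items()}

charges = prorate_by_licenses(ANNUAL_COST, licenses)
for team, charge in charges.items():
    print(f"{team}: ${charge:,.2f}")
```

With these made-up counts, Team A (10 of 20 licenses) would carry half the cost. The appeal is that the inputs (license counts) are static and easy to audit, unlike usage metrics.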
Thanks @patrick_mcauliffe, this is a good idea to use the Designer licenses. I'm all for keeping it simple. One idea we recently discussed was to base the chargeback on the cumulative duration of jobs by studio. Meaning, we'd look at the total time all jobs ran on the server across 90 days, then look at the run time of each studio to calculate the percentage to charge them back. We thought this would be straightforward, but also drive the right behaviors. Those behaviors being: writing efficient workflows, and developing and testing as much as they can on their local Designer vs. the server. Thoughts on that method?
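For comparison, the run-time approach described above could be sketched like this. The job records and field names here are purely illustrative (they are not the actual Server DB schema); in practice you'd pull them from the server's run history over the 90-day window:

```python
from collections import defaultdict

# Annual server licensing cost from the example above.
ANNUAL_COST = 20_000

# Hypothetical 90-day run history; "studio" and "duration_sec" are
# illustrative field names, not the real Server schema.
jobs = [
    {"studio": "Finance",   "duration_sec": 3_600},
    {"studio": "Finance",   "duration_sec": 1_800},
    {"studio": "Marketing", "duration_sec": 600},
    {"studio": "Ops",       "duration_sec": 6_000},
]

def chargeback_by_runtime(total_cost, job_records):
    """Charge each studio its share of cumulative run time across all jobs."""
    runtime = defaultdict(float)
    for job in job_records:
        runtime[job["studio"]] += job["duration_sec"]
    grand_total = sum(runtime.values())
    return {studio: round(total_cost * secs / grand_total, 2)
            for studio, secs in runtime.items()}

charges = chargeback_by_runtime(ANNUAL_COST, jobs)
```

Note this charges purely on wall-clock duration, which is exactly the property questioned in the reply below a long-running job isn't necessarily an inefficient one.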
@afox1 I would hesitate to use duration of run time as a metric of efficiency.
The best example I could give you to illustrate is from one of our users. He's in IT, very knowledgeable, writes very efficient queries and workflows, and usually deals with some big requests from senior leadership. One such request was to pull together all sorts of various metrics and put them into a dashboard. Well, the way it works out, at least one of those Data Inputs is doing a "Select all" on a very large and slow DB2 table. It takes anywhere from 3 - 6 hours to run depending on the amount of data in the current table and other network conditions.
Several of us have looked over his workflow and it's so simple; it's just a matter of the data size and other conditions outside of his control.
That's also the only workflow he has on the server right now.
Although the example I gave is on the extreme side, sometimes there's nothing that can be done about how long something takes to run.
Now, if we were charging based on cumulative time to accurately account for resource utilization, would his team pay the most? Probably.
But if the purpose was to drive behaviors, that might backfire: it gives the user the wrong impression and may push them toward shortcuts they shouldn't take.