Caisse d’Epargne is a French bank more than 200 years old. Initially, Caisse d’Epargne was only allowed to offer savings accounts; quite recently (in 1978), we started offering checking accounts to our customers. The bank started as 200 Caisses d’Epargne, that is, 200 different companies under the same brand name, and after many mergers over the last 40 years we now belong to a network of 15 individual Caisses d’Epargne with their subsidiaries. About 3 years ago, our local bank (Hauts de France) merged with another Caisse d’Epargne, embracing everything you can imagine in a merger: changing processes, reorganizing the company, adopting new data software (Alteryx and Tableau), and building a new individual incentive plan for sales from scratch. Using Alteryx, the data and reporting team was able to pull data from multiple sources and create a successful incentive plan for each employee.
Describe the business challenge or problem you needed to solve
With new processes in place, a bigger organization, and new data software, one key project stood out: the sales incentive plan, which needed to be restructured. This is an important subject, and a very sensitive one for some people, so we put a lot of time into building it and making sure everything was right.
Before the merger, we used a collective incentive plan, which was quite simple. We had many different indicators coming from different data sources, such as our databases (Oracle) and specific files. The board decided they wanted an individual incentive plan based on each employee's job. It seems legitimate, but it is hard to manage: how can you establish the same incentives for a regular bank advisor and for a private banker specialized in complex operations? It was not possible! We have more than 35 different jobs with more than 90 different indicators, and that is before we even talk about the data sources!
The first year, we worked hard to get something up and running, but it was challenging to keep up with changes as they happened. For instance, if an employee had done a good job but lost the one customer who would have earned him his incentive, we needed to reconcile this between the production system and the decision-making system. We had a year of troubles, working with new tables (or tables we had never used before); we eventually delivered the plan, but we did it the hard way!
2019 was the year of the phoenix, our rebirth! We decided to rebuild the incentive system from scratch, making it better, more durable, more efficient, and, most of all, easier to correct whenever we find wrong data somewhere.
We built an employee incentive plan around a few principles:
We could not choose the indicators; they were already defined
We decided to split things as much as possible
We decided to make it easy to add or withdraw data if needed
To avoid people saying, “your numbers are false”, we wanted to provide each employee with the details of the contracts behind the indicators
We wanted to be able to add a new control whenever we detected a problem
We wanted to change the objectives quickly if the business realized expectations were not aligned
We wanted to understand where the data goes, or where it doesn’t go (most of the time)!
The main goal was to have a tool that performs well (running every day, responding quickly) while remaining easy to drill into and easy to maintain!
Describe your working solution
To follow all the principles, we had to do a little bit of data modeling and imagine multiple possible scenarios such as:
One person having only one job, all year, in one place (the easy one ;))
One person having 2 or more different jobs in the same place
One person having the same job in 2 or more places
One person having 2 or more jobs in 2 or more places
So we used a very innovative tool: the whiteboard! We wrote down the possibilities and sketched out how the data had to be modeled.
To split or not to split, that is the question
One huge problem we had was multiple enormous workflows that ran for at least 2 hours and were so complicated that we could not optimize them. They were so sensitive that changing one thing could massively change the output. To avoid this, we split them into multiple workflows that each take at most 5 minutes to run and do one precise thing. We also wanted anyone in the company to be able to understand each workflow, so instead of cramming everything into one huge Formula tool, we split everything we can! The workflows run in the following order:
1 – Objective preparation
From the files we received (you know, those beautiful Excel files, easy to read for a human, full of unneeded rows and merged cells…), a bit of Alteryx gave us a clean dataset with one column for the job, one for the objective, and one for the indicator, all normalized along the way.
What happens here is mostly a lot of prep: applying the same treatment to multiple files with different layouts. On one file the value is in column A, on another in column B… pure happiness to discover this!
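The actual work is done in Alteryx, but the idea of mapping each file's layout onto one normalized schema can be sketched in pandas. The file names and column mappings below are invented for illustration; the real objective files are Excel and messier:

```python
import pandas as pd

# Hypothetical column mappings: each source file uses its own headers,
# so we rename them all onto one normalized schema (job, indicator, objective).
COLUMN_MAPS = {
    "objectives_a.xlsx": {"Metier": "job", "Indic": "indicator", "Obj": "objective"},
    "objectives_b.xlsx": {"Poste": "job", "KPI": "indicator", "Cible": "objective"},
}

def normalize_objectives(frames):
    """Rename each file's columns to the shared schema and stack the results."""
    parts = []
    for name, df in frames.items():
        part = df.rename(columns=COLUMN_MAPS[name])
        part = part[["job", "indicator", "objective"]]
        part = part.dropna(how="all")  # drop the decorative empty rows
        parts.append(part)
    return pd.concat(parts, ignore_index=True)
```

The point of the mapping table is that adding a new source file with yet another layout only means adding one dictionary entry, not rewriting the prep.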
2 – HR preparation
To cover all the employees (more than 2,000 in Salesforce), we had to use the HR database. Of course, we don't have direct access to it; HR provided only the data we needed, with the other columns of the original database removed.
We had to clean all this up. Some codes only exist in the HR database, so we join in our structure database to end up with one line per coworker per job per place (plus a few other needed variables).
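As a rough sketch (with invented column names), that join is a many-to-one merge of the HR extract against the structure table, and pandas' `validate` option gives a cheap guarantee that the structure table has no duplicated keys:

```python
import pandas as pd

# Invented sample of the cleaned HR extract: one row per coworker/job/place.
hr = pd.DataFrame({
    "employee_id": [1, 1, 2],
    "job_code":    ["ADV", "PBK", "ADV"],
    "place_code":  ["P01", "P02", "P01"],
})

# Invented structure table: one row per place.
structure = pd.DataFrame({
    "place_code": ["P01", "P02"],
    "place_name": ["Lille", "Amiens"],
})

# validate="m:1" raises immediately if a place_code is duplicated in the
# structure table, instead of silently multiplying rows in the output.
roster = hr.merge(structure, on="place_code", how="left", validate="m:1")
```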
3 – Extractions
We extract from our data marts the data we need, deliberately extracting more than strictly necessary so that we still have everything if a new code appears that we would otherwise miss.
4 – Preparations
We used the data from the extractions or from the files we received and produced one file per indicator (thanks to the output option that allows us to do so). At this point we have “n” files in a detailed format: one line per sale, with who sold it, when, and which product it is. The trick here is to have one workflow per extraction: if, for instance, you have 10 indicators coming from one database, you extract the table once and then apply the same treatment for each KPI.
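Outside Alteryx, the "one detailed file per indicator" step could be sketched as a grouped write (field names and sample data invented):

```python
import tempfile
from pathlib import Path

import pandas as pd

# Invented detailed sales data: one line per sale, with who/when/what.
sales = pd.DataFrame({
    "indicator": ["loans", "loans", "deposits"],
    "seller_id": [1, 2, 1],
    "sale_date": ["2019-01-02", "2019-01-03", "2019-01-02"],
    "product":   ["mortgage", "mortgage", "savings"],
})

out_dir = Path(tempfile.mkdtemp())
for indicator, detail in sales.groupby("indicator"):
    # One detailed file per KPI, mirroring the grouped output option.
    detail.to_csv(out_dir / f"{indicator}.csv", index=False)
```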
5 – Corrections
This is just a workflow that pulls some information from the data marts and checks the format of the corrections made by colleagues. We do some simple checks, for example whether the client number and the coworker are right, and whether the product is actually measured.
6 – Regroup!
Having all these files, we can now regroup the KPI files, the HR file, and the objectives files into two different datasets: an aggregated version with one line per KPI per person/job/place, and a detailed version with the KPI, the person/job/place combination, and the underlying data, which is only there in case someone wants to see exactly what we counted.
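The aggregated version boils down to one group-and-sum over the stacked detail. A minimal sketch with invented fields:

```python
import pandas as pd

# Invented stacked detail: one line per sale, already tagged with the
# person/job/place combination and the KPI it counts toward.
detail = pd.DataFrame({
    "employee_id": [1, 1, 2],
    "job":         ["advisor", "advisor", "advisor"],
    "place":       ["P01", "P01", "P01"],
    "indicator":   ["loans", "loans", "loans"],
    "amount":      [10.0, 5.0, 7.0],
})

# Aggregated version: one line per KPI per person/job/place.
aggregated = (
    detail
    .groupby(["employee_id", "job", "place", "indicator"], as_index=False)["amount"]
    .sum()
)
```

Keeping both outputs from the same detail frame is what makes the later "same total in both versions" control meaningful.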
These are our workflow categories. The extraction and preparation categories contain one workflow per source: in extraction, for instance, one source (one query) means one workflow. For the other parts, there is only one workflow each.
In the end, all these workflows run daily; they don't take long to run and are quite safe. We have a lot of checks and balances, most of them quite simple: checking the number of employees, making sure today's file has more data than yesterday's, and many others, such as one simple control verifying that the detailed and aggregated versions have the same total.
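Sketched in code (frame and column names invented), those daily sanity checks might look like:

```python
import pandas as pd

def run_checks(today, yesterday, detail, aggregated, expected_employees):
    """Return a list of failed-check messages (empty when everything passes)."""
    errors = []
    # Check 1: the number of distinct employees should not drop.
    if today["employee_id"].nunique() < expected_employees:
        errors.append("employee count dropped")
    # Check 2: today's file should contain at least as much data as yesterday's.
    if len(today) < len(yesterday):
        errors.append("today's file has less data than yesterday's")
    # Check 3: the detailed and aggregated versions must share the same total.
    if abs(detail["amount"].sum() - aggregated["amount"].sum()) > 1e-6:
        errors.append("detailed and aggregated totals differ")
    return errors
```

Returning a list of messages rather than failing on the first check lets a single daily run report every broken invariant at once.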
Our solution does not use fancy tools or very advanced features; the complexity is more in how the pieces interlock and how smoothly the process runs, going from many files and many different sources down to two files, with everything as stable as possible. The hardest part was creating a data model that answered all our constraints. A lot of the hard work was about imagining the different possible cases and how to handle them, and making sure everything is clean, because when you realize how many possible cases there are, one that may seem dumb can generate real trouble for us:
Example: what we had in the HR database
And what we obtained at first
Whereas in this case we wanted to keep the data the way it was, so we had to calculate a “job order” to make sure we didn't summarize rows that shouldn't be summarized. We discovered many very small things that completely changed the data and implied very different numbers for the employees.
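One way such a "job order" could be computed (a hedged sketch with invented columns): number each contiguous spell of the same job per employee, so that a naive groupby can no longer merge two separate spells of the same job:

```python
import pandas as pd

# Invented HR history: employee 1 is an advisor, switches jobs, then
# comes back to advisor; the two advisor spells must stay separate.
hr = pd.DataFrame({
    "employee_id": [1, 1, 1],
    "month":       ["2019-01", "2019-02", "2019-03"],
    "job":         ["advisor", "private banker", "advisor"],
}).sort_values(["employee_id", "month"])

# A new spell starts whenever the job differs from the previous row
# (within the same employee); a cumulative sum then numbers the spells.
changed = hr["job"] != hr.groupby("employee_id")["job"].shift()
hr["job_order"] = changed.groupby(hr["employee_id"]).cumsum()
```

Grouping on (employee_id, job, job_order) now keeps the two advisor spells apart instead of collapsing them into one row.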
Describe the benefits you have achieved
In the end, everything is clear, and when we are notified of an error, such as numbers that differ between the detailed and aggregated versions, we know directly where to look: whether it's a problem linked to an extraction or something in the HR part. We built multiple workflows that can answer different needs quickly. The most important and effective thing we established with this project is a weekly meeting with the animation team and sometimes HR. If we see that they don't understand something, or if we have diagnosed an error and are already working on it, we can all work on it together. It gives them a better understanding, so they can communicate with more precision!
Splitting is not just about cutting one workflow into multiple small parts; it's about flexibility: being able to reuse a yxdb file quickly and to handle every control we need. When a salesperson forgets to register a sale, we can add it without too much trouble; when the board wants to create a new job and set its goals, we can do that with ease too. Everything gets easier: we only have to duplicate one workflow, or part of one, to have it up and running!
After doing all this, if you ask me what I think about incentives… hmm, a hard question indeed. If you have an incentive system, two things matter: keep it simple, and either make the incentive significant or don't do it at all. By “keep it simple” I mean that if you want people to be involved in the game, you shouldn't make the rules so hard to understand that the employees impacted by the system cannot see how to act on it. Moreover, I would say it is important to have something noticeable for all the coworkers, so keep an eye on the remuneration subject!
Finally, if you ask me about individual versus collective plans, I would say it depends on a lot of things; both have pros and cons. On the individual side, a top-performing person might be more motivated. On the collective side, it can create synergies by letting people specialize, and it values group work, but if you have weak elements in the team, it can be a pain for the best ones.
In fact, the best system probably combines a collective part and an individual part, so that the good performers can stand out while the team stays motivated! In the end, it all depends on your company, how it works, and what you do (or sell).