In this episode of Alter Everything, Alteryx Sr. Community Content Engineer, Sydney Firmin, will be guiding Podcast Producer, Maddie Johannsen through the concepts of Interpretability and Fairness through the lens of Causal Inference. You’ll also hear familiar voices of causality expertise, Dr. Victor Veitch, and Dr. Amit Sharma. This episode builds on concepts discussed in Episode 44: Causality.
Continue the fun and share your thoughts on Twitter using the hashtag #AlterEverythingPodcast, or leave us a review on your favorite podcast app. You can also subscribe on the Alteryx Community at community.alteryx.com/podcast. While you're there, fill out our audience engagement survey: https://www.surveymonkey.com/r/QN23V7B The first 100 people to leave their feedback will be entered to win one of five pairs of Bluetooth headphones.
Special thanks to Baba Brinkman for our theme music. You can access Baba’s full rap track for free on SoundCloud.
Maddie: [00:00:06] This is Alter Everything. A podcast about data science and analytics culture. I’m Maddie Johannsen, your producer for this episode. If you haven’t tuned in to Episode 44, I’d highly recommend starting there so you can first learn more about causality, and in today’s episode, Alteryx Sr. Community Content Engineer, Sydney Firmin will be guiding me through the concepts of interpretability and fairness through the lens of causal inference. You’ll also hear familiar voices, Dr. Victor Veitch, and Dr. Amit Sharma who will share their expertise on causality.
[00:00:45] Sydney let's talk about how machine learning algorithms affect our day-to-day lives.
Sydney: [00:00:52] Yeah! Machine learning algorithms are becoming more and more a part of our daily lives. Just as an example, we are interacting with the work of data scientists and machine learning engineers every time we get on the internet. From search engines to recommendations and advertisements, we're interacting with these algorithms daily. Beyond these seemingly small interactions, machine learning algorithms are also being applied to a wide variety of fields, from medicine to finance to criminal justice. And these applications can impact people's lives in really direct and important ways, like loan approvals or the length of a sentence in a court case. So with machine learning algorithms being applied more and more to important or high-stakes use cases, there's been a growing concern around the interpretability and fairness of machine learning algorithms.
So there's this trade-off in machine learning algorithms or statistical models that we talked about in Episode 43. It's a trade-off between accuracy, or how close a model's estimates are to the truth, and interpretability, or how clear it is why a model makes the estimates it does - what reasons it has.
So when we select a model for high accuracy, we're making the assumption that the data can speak for itself - that it has all the correct information it needs to make good, fair estimates. And the question really becomes: is that a reasonable or acceptable assumption to make?
Is it okay for us to put a black box model into production to make decisions that directly impact people's lives when we don't know why it makes the choices it does? And is interpretability an important part of making sure those models are fair?
Victor: [00:02:55] So it's obviously desirable, right? It's a good thing, and so you should have it. But of course there is a trade-off.
Maddie: [00:03:02] That was Dr. Victor Veitch. You probably recognize him from episodes 43 and 44.
Sydney: [00:03:08] So the trade-off Victor’s talking about is something we discussed a lot in Episode 43. Traditional statistical models tend to be more interpretable but less accurate. Machine learning models tend to be more accurate but less interpretable and there's an inherent trade-off between these two features.
Maddie: [00:03:25] So you're talking about a trade-off between being able to tell a better story using a stats model, versus being more accurate using a machine learning model.
Victor: [00:03:39] Typically what we mean by that is, they tell a story about how the data was generated, right? So like classical statistical models are very interpretable. And that story in particular involved some relatively small set of parameters where you can just literally stare at the value of those parameters afterwards and be like, oh, it turns out that like this particular covariate was an important one.
Sydney: [00:04:03] So think about a standard linear regression model, which is an interpretable model hailing from classical statistics. With linear regression you can determine an explicit approximate relationship between your target variable - or what you're trying to predict - and the different variables in your data set you're using to make that prediction - your predictor variables.
So if you have a data set where you're trying to create estimates of how much a house will sell for, you might have predictor variables like square footage or house quality or the age of the house when it's sold. With a linear regression you'd be able to tell explicitly the impact each of these variables has on the sale price.
So for every year a house ages, the house itself is worth $500 less. You can tell which predictor variables are most important based on how much they change the outcome, or the final price of the house, and those rates - the $500 a year - are the coefficients of the model.
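Sydney's house-price example can be sketched in a few lines of Python. The numbers, column choices, and the plain least-squares solve below are illustrative assumptions, not anything from the episode:

```python
import numpy as np

# Hypothetical housing data: each row is [square_footage, age_in_years],
# with prices generated (noiselessly) as 150 * sqft - 500 * age + 50,000.
X = np.array([
    [1200.0, 10.0],
    [1500.0, 30.0],
    [2000.0, 5.0],
    [900.0, 40.0],
    [1700.0, 20.0],
])
prices = 150 * X[:, 0] - 500 * X[:, 1] + 50_000

# Add an intercept column and solve ordinary least squares.
A = np.column_stack([X, np.ones(len(X))])
coefs, *_ = np.linalg.lstsq(A, prices, rcond=None)

# The coefficients ARE the interpretation: roughly +$150 per square foot,
# -$500 per year of age, plus a $50,000 baseline.
print(coefs)  # approximately [150., -500., 50000.]
```

Because the model is just a weighted sum, reading the fitted coefficients is the whole interpretation step; there is nothing else to unpack.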
Victor: [00:05:09] And of course the trade-off that you make there is that often black box or hard to interpret models are just much better predictors, right? And I think that in situations where that's true, the interpretability of interpretable models is like actually fairly questionable. Because if you fit a model to your data and the model doesn't actually give a good fit, then you try and “read the tea leaves” of the parameters of that model, right?
It's totally unclear whether that means anything and certainly probably doesn't mean what you think it means in terms of the story you told about where the data came from. So in that sense, I think you know interpretability is important - the ability to interpret your results is clearly important, but I think building models which are meant to be interpretable is not so important, or at least it's not necessarily a fruitful line of work.
Sydney: [00:06:12] What Victor's saying here is that if you spend all your time worrying about making an interpretable model, you might end up with a meaningless model: the assumptions you make shoehorn your data into an interpretable form, which results in incorrect relationships between the variables.
Thinking about the house price example again, the relationship between house age and price might not be linear, and it might not be consistent. What if people really like old houses in downtown areas and are willing to pay more for those, while houses built in the middle (like the '80s) kind of belly out and are worth less compared to the newest houses, which are worth more. So it's not a direct [relationship] - every single year your house ages, it's worth $500 less. It varies more [than that]. Does that make sense?
Maddie: [00:07:10] Yep.
Victor: [00:07:11] Assuming that you can write down a model and this is how nature has actually generated the data, I'm certainly skeptical that that is ever a sensible thing to do. Like I think you know, this is a thing where statisticians have, for reasons which may or may not have been good, really been willing to swallow this assumption when they shouldn't have. So I mean in that sense, I think the willingness of data science to engage with like nonparametric models or models that don't come with like a generative story, I think it's much better to you know, build whatever model provides the best possible prediction and then look for ways to interpret or understand how the model is making that prediction.
Sydney: [00:08:00] Nonparametric statistics is a branch of statistics that doesn't hold itself to assuming that a data set has a specific probability distribution.
So if you remember what a bell curve looks like from an “Introduction to Statistics” or a science class, nonparametric statistical analysis doesn't assume that's what the distribution of a data set, or the relationships within it, look like. So machine learning models might be more effective because you aren't making as many explicit or strong assumptions about the data or the model, but that's not to say that so-called black box models are foolproof.
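As a rough sketch of the difference Sydney is describing, here is a hypothetical comparison between a parametric (linear) fit and a simple nonparametric one (a hand-rolled k-nearest-neighbours average). The U-shaped price curve is invented for illustration:

```python
import numpy as np

# Hypothetical U-shaped relationship: mid-aged houses ("the belly")
# are worth less than both the oldest and the newest ones.
age = np.linspace(0, 100, 200)
price = 200_000 + 30 * (age - 40) ** 2  # cheapest around 40 years old

# Parametric fit: a straight line, which assumes the wrong shape.
slope, intercept = np.polyfit(age, price, 1)
linear_pred = slope * age + intercept

# Nonparametric fit: average the k nearest neighbours, assuming
# nothing about the functional form of the curve.
def knn_predict(x, xs, ys, k=5):
    nearest = np.argsort(np.abs(xs - x))[:k]
    return ys[nearest].mean()

knn_pred = np.array([knn_predict(a, age, price) for a in age])

linear_mse = np.mean((linear_pred - price) ** 2)
knn_mse = np.mean((knn_pred - price) ** 2)
print(knn_mse < linear_mse)  # True: the flexible model tracks the curve
```

The linear model is easy to read but encodes the wrong story; the nonparametric one makes no such assumption and fits far better.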
Victor: [00:08:44] I think that the weakness that black box algorithms have is that you're not quite sure that they'll answer the question that you care about, right? So in particular, often the question that you actually have is a causal question.
Maddie: [00:08:49] Oh causality. We talked about that in episode 44.
Sydney: [00:08:56] Yeah, so if you're just joining us now or want a refresher, what you need to know about causal inference is that it's the process of building a more qualitative model of how the data was generated before you start getting into statistical or machine learning analysis.
And with this qualitative model you're able to control for confounding factors or other parts of your data set that might get in the way of identifying a real causal relationship or a cause and effect relationship. And so, Dr. Victor Veitch who's been speaking so far and our other interviewee, Dr. Amit Sharma are both specialists in the area of causal inference in their research.
Victor: [00:09:59] If I, you know, intervene and give you a drug, are you going to get better or not? And an issue with a black box model is you might be like, “Oh, well, I don't really know what the effect of the drug covariate in my model was,” right? Because it doesn't come with a weight.
One thing is, once you appreciate the tools of causal inference, you'll realize that you can use these totally black box models to answer these causal questions, which are the things that you actually care about. And in fact, it goes further than that: you should prefer the black box models to simpler, more interpretable models, because either way you're going to succeed in answering the question, but you'll get a more reliable answer with the better predictor.
Sydney: [00:10:44] I think this makes a lot of sense. If you can control your data going in and isolate your data to the point where it's capturing a meaningful causal relationship, you should probably just use the model that's going to get you the very best results possible.
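Victor's point - that a flexible predictor plus causal adjustment can answer a causal question - can be illustrated with a small simulation. Everything here is invented; the "adjustment" is a simple stratification on the confounder, standing in for whatever predictive model you would actually use:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical confounded data: sicker patients are both more likely
# to receive the drug AND more likely to have a worse outcome.
severity = rng.uniform(0, 1, n)
treated = rng.uniform(0, 1, n) < severity
# True causal effect of the drug on the recovery score is +0.2.
recovery = 0.2 * treated - 0.5 * severity + rng.normal(0, 0.01, n)

# Naive comparison of treated vs. untreated is biased by severity...
naive = recovery[treated].mean() - recovery[~treated].mean()

# ...but comparing within severity strata (controlling for the
# confounder) recovers something close to the true effect.
bins = np.digitize(severity, np.linspace(0, 1, 21))
per_stratum = [
    recovery[(bins == b) & treated].mean() - recovery[(bins == b) & ~treated].mean()
    for b in np.unique(bins)
]
adjusted = float(np.mean(per_stratum))

print(round(naive, 2), round(adjusted, 2))
```

The naive comparison badly understates the drug's benefit because sicker patients get treated more often; adjusting for severity recovers an estimate near the true +0.2, and a better predictor of the outcome would only make that adjustment more reliable.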
Victor: [00:11:03] Another thing that people worry about with black box models is [for example], I have a bunch of training data like ImageNet or whatever. I fit a model there and then I go to some totally new domain - or maybe I trained a food classifier in North America, and then I go to China and I try and deploy it, and the whole thing just breaks down, right? And because I can't interpret the model, I won't know in advance that that's going to happen. But the thing is, models which reflect causal structure should not have this problem.
So if you learn a real causal relationship in the world, like X actually causes Y, then even if you go to a new domain, X continues to cause Y, right? Like, smoking causes cancer irrespective of whether you are a coal miner in West Virginia or a doctor in Germany, right? It's just a real relationship.
And so another way in which I think causality and black box models are synergistic is: if you train black box models in a way where they're forced to reflect a causal structure - for example, using these invariant risk minimization techniques - then you expect to automatically gain this robustness to domain changes. So I think just taking the causal component of the whole business seriously really alleviates whatever weaknesses black box models might have had.
Sydney: [00:12:45] Something Victor just mentioned or alluded to is domain transferability or knowing that the model you've trained on your data set can be applied to a data set from a different source and will still work in the same way. This is an important concern for data science projects - knowing that the patterns you've captured in your model are meaningful in all the situations that it's expected to work in, and not only being relevant to the specific case that your training data came from.
When a model is fit to a specific training data set and picks up on relationships that only exist within that sample, and not the entire population you're trying to capture, your model has been overfit to that subsample.
Typically, overfitting is a term used to describe a model that has identified random noise in a sample data set instead of meaningful relationships. But in a sense, you can also think of overfitting as a situation where the model only applies to the geographic population it was sampled from, or to relationships that exist within the subset of data you're working with and fall apart when you try to apply it to a broader use case or set of data points.
In this situation you failed to capture the causal relationship you were looking for. Doing effective causal inference can account for this concern which is something Amit talked about as well.
Machine learning models are often trained on a specific data set taken from a specific time and place. But people then try to use that model trained from that data set in a different time or a different place.
Amit: [00:14:36] People call this problem domain adaptation as well. Transfer learning is also another name. The more, I think, we want to build such models, the more there'll be an intersection between causal inference models and machine learning models.
The second thing is something that is often forgotten. I'm surprised how often I am part of these conversations where, when you think about data science, it's almost never about saying something about a data set, right? I think I've said this before as well. It's always about making a decision. The reason people look at a data set and want to build some algorithm is that that algorithm most likely is going to be used to make some decision at the end. You can think of the healthcare example, where you look at some past data sets of people with some illnesses, maybe look at the treatments, and you can predict that if you give this treatment then this is the outcome.
Sydney: [00:15:43] Amit is saying that people will often assume that their data captured at whatever time and place can be comfortably used to make decisions in other situations.
This can be demonstrated in the healthcare example.
Amit: [00:15:57] So for example, I'm given a data set from a hospital. I do not want to overfit on that, and that's why I'll do cross-validation. I'll do train-test splits. I'll keep the test split completely unknown and only test on it at the end. And all of that is good, but it only helps you prevent overfitting to this data set.
But there's another kind of overfitting which can happen, which is overfitting to the distribution. And by distribution, I mean: what is the sample of people that came in to this hospital from which this data set was collected, right? It could happen that hospital one tends to serve people who are elderly more often.
And so when you train your model, your model was optimized to give better accuracy for elderly people, maybe not so much for younger people, because there was not so much data. Now if you shift to hospital two, the opposite could be happening there, and younger people could be going more to that hospital.
Sydney: [00:16:59] And that's the healthcare example. Different hospitals are serving different populations.
So when you try and train a model with one population without controlling for causal factors like age or environment or population demographics, you can't guarantee the model is going to work as well in the other hospital.
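A tiny simulation of the two-hospitals problem, with invented numbers: a model fit on hospital one's elderly patients looks nearly perfect in-distribution, but breaks down on hospital two's younger population:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground truth: some health risk is flat until age 60,
# then rises linearly with age.
def true_risk(age):
    return np.where(age < 60, 0.1, 0.1 + 0.02 * (age - 60))

# Hospital one mostly serves elderly patients; fit a line on its data.
ages_one = rng.uniform(60, 90, 500)
slope, intercept = np.polyfit(ages_one, true_risk(ages_one), 1)

# Hospital two mostly serves younger patients.
ages_two = rng.uniform(20, 50, 500)

err_one = np.abs(slope * ages_one + intercept - true_risk(ages_one)).mean()
err_two = np.abs(slope * ages_two + intercept - true_risk(ages_two)).mean()

# In-distribution the model looks essentially perfect; shifted to the
# younger population, its predictions are badly wrong.
print(err_two > 100 * err_one)  # True
```

No amount of cross-validation on hospital one's data would reveal this failure, which is exactly Amit's point about overfitting to the distribution.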
Amit: [00:17:47] Now what's going to happen? I mean, once this model is built, someone is going to use that model to make future recommendations for treatment. The same happens in any kind of setting, and what is often overlooked is that the model was never trained to make the right decision. So, if I can just be a little bit technical here - what the model was trained on was to minimize the error between the recorded outcome in the data set and the predicted outcome that the model outputs, right?
Sydney: [00:17:50] So what Amit is saying here is that when you set up a machine learning algorithm to learn relationships within a data set, you also need to set it up with a cost function so that the model has something to correct itself with. When it is looking for patterns in the data, it will use the target variable - what you're estimating - to determine what those relationships are.
So it'll be like, “Okay, if I start with square footage X and try to predict the house cost Y, I'll circle back and check how off I was using the training data set. Oh, my estimate was off by X thousand dollars.” That's what the cost function is. It's giving the model a way to know when it's wrong and how wrong it is.
There's this video I really like by the mathematician 3Blue1Brown on YouTube where he's talking about neural networks and the cost functions for them. And he has these little characters, and one of them has a little newspaper and it's like, “bad model, be better!” That's what I always think about with cost functions.
And so the model is able to correct itself and kind of hone in on a relationship in the data set as opposed to taking random guesses. But that's all the model is really capable of doing as far as learning a pattern, because all the model knows about the world at large is the data we feed into it, and that's all it can really go off of.
Maddie: [00:19:29] Gotcha. Yeah, that makes sense.
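Sydney's description of a cost function (“a way to know when it's wrong and how wrong it is”) can be sketched as mean squared error driving a few gradient-descent corrections. The data and learning rate below are hypothetical:

```python
import numpy as np

# Hypothetical training data: square footage (in 1000s) -> price ($1000s).
# The true relationship is exactly 200 per thousand square feet.
sqft = np.array([1.0, 1.5, 2.0, 2.5])
price = np.array([200.0, 300.0, 400.0, 500.0])

def cost(w):
    """Mean squared error: 'how off was I?' averaged over the data."""
    return np.mean((w * sqft - price) ** 2)

# The model starts with a bad guess and repeatedly uses the cost's
# gradient to correct itself ("bad model, be better!").
w = 0.0
for _ in range(200):
    gradient = np.mean(2 * (w * sqft - price) * sqft)
    w -= 0.1 * gradient

print(round(w), round(cost(w), 6))  # 200 0.0
```

Each pass through the loop is one "how wrong was I?" check followed by a correction, which is all the learning the model ever does.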
Amit: [00:19:31] So what the model is being trained on is a very different problem. It's an error minimization problem over the past, whereas what it is often deployed for is a decision-making problem over the future.
Right? So a decision-making problem is different because here you're looking at interventions that happen in the real world and give back an outcome, whereas in the training step, you're looking at interventions that already happened under some other constraints, and you're just looking to maximize the predictive accuracy.
Sydney: [00:20:03] This is a concern that can be connected back with one of my favorite philosophers David Hume. Something we do in science generally speaking is assume that things in the future will look the same as they do in the past and present, and that's called the uniformity of nature.
Maddie: [00:21:12] I wonder about that because, I honestly don't know that much about it, but I wonder how accurate that is in everyday life.
Sydney: [00:21:25] Data science often assumes that the data we collected in the past will be useful for estimating things in the future. We learn these relationships from previous data and then we'll apply them to new incoming data. And I mean, it's a big part of science too. Like, we tend to assume, oh, every fire I've ever seen in the past is hot and hurts when I try and touch it, and so I know not to try and touch the fire
Maddie: [00:21:00] Right.
Sydney: [00:21:57] and yeah, what Hume is saying is that we don't have a rational reason for believing that. We don't have a real rational reason to assume that everything we've experienced in the past will continue to be true in the future.
But we do and there's nothing to be done about it, so we shouldn't worry about it too much. It's like the ultimate philosopher’s shoulder shrug, like “oh well.”
Maddie: [00:21:27] Can't control it.
Sydney: [00:21:29] It’s just how we are. Yeah, don't worry.
Maddie: [00:21:33] It is what it is.
Sydney: [00:21:34] Yeah, pretty much.
Amit: [00:21:37] And I think this is something that is also now coming to the fore, that even though they may look deceptively similar, like this idea of predicting past outcomes to the idea of doing something in the world and then predicting future outcome, they look deceptively similar, they're often not the same, especially in complicated social settings or social science human behavioral type of settings.
Sydney: [00:22:06] All right, so what Amit's saying here is that machine learning problems that don't take causal inference into account may be biased by their training data and may not actually be that helpful for decision making. Back to the hospital example.
Amit: [00:22:22] And so that's, I think, the first thing that a lot of us are realizing: that unless we take care of causality - and causality in this case would simply mean that the age variable was actually confounding our estimate of prediction error. And so what we could have done is probably built age-specific models for this hospital, and that would have worked, because now even if you went to a different city, you can just control for that and build age-specific predictions there as well, and it might work, right? But immediately you can also see that age is just one variable. There could be the severity of illness, there could be doctor biases, and so on. There are so many things that are different about two hospitals.
So I think this is one place where we'll see more and more of causality and causal thinking coming in: how do we make machine learning models more robust to different domains?
Maddie: [00:23:22] So to recap, we use models in order to make sense of our data. But models can mistakenly capture information that doesn't apply to all situations, such as a model built only on data from an elderly population being applied to a hospital that primarily treats a younger population. And a way for data scientists to approach this issue is to think about causality.
Sydney: [00:23:45] Yeah so, we've learned that causal inference can potentially help ensure domain transferability and help ensure that we're asking and attempting to answer the right questions. But where does this leave us with fairness and interpretability?
Amit: [00:23:59] One of the things that it's taken me a while to realize as well is that they're actually fundamentally connected.
Sydney: [00:22:06] Amit's perspective is a little different from Victor's on this.
Amit: [00:25:33] And the reason I say that is, let's take the first argument that you might think of: that they seem to be just different problems, right? And this is what I used to believe for a long time, probably for two or three years: that if you want fairness, we should try to do whatever it is to satisfy the fairness constraints that we have brought up. So for example, we should have equal accuracy on different demographic groups, for example race. And then it doesn't matter if a person can understand the algorithm or not. As long as we can prove, or as long as we can empirically show, that this complicated algorithm does the task that it was supposed to do - which is make the decisions fair on the basis of race - then it's good, right?
It's kind of arrogant, actually, as human beings to think that we need to understand what this algorithm is doing. Because if you think of society and societal constructs, they are so complicated that if it were the case that we were able to understand truly fair decisions, then we would have thought of heuristics ourselves to implement that algorithm, right? Why would we need a machine learning model to implement fair algorithms?
Sydney: [00:25:31] This is kind of related to what Victor was saying about making assumptions about how the world works. How are we really supposed to know what a relationship looks like unless we know exactly how it works?
Amit: [00:25:44] So that's what I used to think earlier: that interpretability is something that's just a nice add-on, which may be required in certain cases where you want the operator of the algorithm to have more trust. So for example, in a healthcare setting a doctor may want to actually know what the model is doing before they make a recommendation based on that machine learning algorithm.
Sydney: [00:26:06] But, this is where it changes for Amit.
Amit: [00:26:09] But let us unpack this argument, and I hope you'll quickly realize as well that it's a false dichotomy. I mean, they're both connected. So what we are missing in the fairness pitch that I just gave is: who made these constraints, and how do we know that they are actually capturing what we mean by equality and justice in the particular domain that we are working in?
If you take the loan example, there is a constraint that we choose which is a mathematical constraint, it's a statistical proxy for what may be written in law or in justice as something that's fair. Right?
Sydney: [00:26:53] The loan example might already be familiar to you. It's where machine learning or AI is used to approve or deny loan applications. To account for historical biases that might exist in a data set, for example, single women having lower approval rates than single men, the algorithm designer or data scientists might implement a statistical proxy to correct for the historic bias.
Maddie: [00:27:21] Yeah, this loan example is interesting because you know having loan applications approved or denied by AI is something that I've known has been going on, but it's also just kind of scary to think about just given how much it really intimately affects our lives.
Sydney: [00:27:37] Definitely and I think the intent behind a lot of this like “well, let's just throw machine learning at it” is good. I think the intent is good. They're like, “well, we know people have bias, machines don't have bias, so let's give the machines the data and everything will be fine.” And what that doesn't take into account is that there's bias in the data and so it's good that it's becoming more and more common to think about it.
There's a book published in 2016 called “Weapons of Math Destruction” that talks a lot about this and I do think it's more and more on people's radar of awareness when using machine learning or data to make decisions, but it is interesting because it's hard to know that you’re accounting for everything if you're just trying to go in and retroactively correct data. What if there's a blind spot we have now that we won't see for a long time and it's going to be perpetuated through algorithms?
Maddie: [00:28:41] For sure, and it definitely reminds me a lot of a book that I recently finished,
“Artificial Unintelligence: How Computers Misunderstand the World.” The author kind of goes into the same thing, where she says, you know, machines aren't biased but humans are, and humans made the machines, and humans are collecting this data and making these algorithms. So things are just inherently going to be baked into these algorithms, or any sort of AI applications. But there are definitely ways to kind of combat that, and being aware of it is a good place to start.
Sydney: [00:29:21] Definitely.
Amit: [00:29:22] But there are two problems: one is there could be multiple statistical proxies that may convey the same ideal because the ideal is often vague. And the second is that we might also be missing other kinds of unfairness that are not enshrined in this particular statistical proxy that we took.
So one simple example: it just so happens we only had data about race, and so that's why we make this black box algorithm optimized for fairness on race. But it might be that in doing so we introduced biases along other dimensions of the data set.
So for example, imagine there was this unobserved gender variable that was also important, and by making the algorithm fair on race, we might have created a situation where, let's say, women of the advantaged group would now face discrimination because the algorithm may move them towards the bad outcome setting. And in that sense, that's not what we wanted, right? That's not what we said when we wanted a fair algorithm. And unless this black box is interpretable, and unless we can actually look at the rules it's generating and discover that now it's actually discriminating against women of the advantaged race, we have no hope, right? We would sort of think that this black box is optimizing what we think, whereas what we have missed is anything outside our mathematical constraint.
Sydney: [00:30:58] It's really hard to keep track of all the possible biases in a data set and it's impossible to correct for bias you don't notice, don't know about, or potential interactions between variables that are harder to identify. When we start engineering to accommodate our data, we are also potentially introducing our own personal bias on what we think is most important.
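Amit's example can be made concrete with invented accuracy numbers: a constraint that equalizes accuracy across race can be perfectly satisfied while a gender gap hides inside it:

```python
# Hypothetical per-group accuracies of a model, chosen so that a
# race-based fairness check passes while a gender gap persists.
accuracy = {
    ("race_A", "women"): 0.75, ("race_A", "men"): 0.95,
    ("race_B", "women"): 0.90, ("race_B", "men"): 0.80,
}

def mean_accuracy(position, value):
    scores = [acc for group, acc in accuracy.items() if group[position] == value]
    return sum(scores) / len(scores)

# The statistical proxy we enforced: equal accuracy across race.
race_gap = abs(mean_accuracy(0, "race_A") - mean_accuracy(0, "race_B"))

# The unfairness the proxy never looked at: a gap across gender.
gender_gap = abs(mean_accuracy(1, "women") - mean_accuracy(1, "men"))

print(round(race_gap, 6), round(gender_gap, 6))  # 0.0 0.05
```

Any single proxy only measures what it was written to measure; dimensions the data doesn't record, or that nobody thought to check, stay invisible.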
Amit: [00:31:35] The constraints that we put on algorithms are a combination of the algorithm developers' own thinking, their own sort of limited cultural and lived experience, and their own interpretation of ideal law or justice, right? And we sort of have this problem that we are only optimizing what we have asked the algorithm to optimize, but we are probably missing the forest for the trees. And more and more, I feel they are interconnected, in the sense that to be truly fair, you have to be interpretable, and it is okay if you're not as complicated an algorithm as you could have been. I think it's okay to sort of get rid of this complexity and high-accuracy preference that we often have, because these domains we don't understand.
Sydney: [00:32:35] I kind of think of this type of approach as playing whack-a-mole. You can keep trying to correct your data to handle different variables the way you think it should be handled but that doesn't mean they won't keep popping up.
Amit: [00:32:47] I think these are complicated domains that social scientists have studied for years, and actually even in medicine, we don't understand the human body - biologists and physicians have looked at it for years and it's still not easy to understand what's going on. And so to think that we can come in and have very complicated models deployed, I think it's a bit risky. And what I think is more useful is to have models that have both of these desired properties: one, they can be understood by the operators, by the people who are going to evaluate these algorithms; and second, through that interpretability you can also get fairness contracts pretty easily.
Sydney: [00:33:30] I think the way Victor talks about this is by doing causal inference on the front end - so, you know, you've set up your data in a way that you're asking and answering the right questions. Amit argues it's also important to consider this on the back end of modeling, when you're implementing an algorithm in production.
Amit: [00:33:47] That's one of the goals of my own research: to look at how we can use causality to understand the effects of these systems. So coming back to one of the things that you had just mentioned - what happens when we start thinking of algorithms as interventions themselves, right? I think what happens is that almost all of the social science literature that looks at the effects of economic interventions and social interventions, and even the biomedical literature and epidemiology that look at the effects of drugs and diseases - they all become relevant and instantly applicable.
Maddie: [00:34:31] This starts to become about ethics to some extent, right?
Sydney: [00:34:35] Amit describes it as a new type of sociology.
Amit: [00:34:39] So now if you think about, “here is an algorithm, here are people who developed it, here are people who will be using this algorithm, and here are people who would be affected by the algorithm.” So now you instantly have a setting where you can start thinking about the effects of these algorithms on all these different stakeholders in different ways.
Sydney: [00:35:00] If we're thinking about the effects of these interventions, we're thinking about causality.
Amit: [00:38:37] You can use causal inference, or causality theory, as well as its methods, to start thinking about the effects of these algorithms on questions of fairness, or questions of performance with respect to any desired metric. So one of the ways people have started thinking about the fairness question is in terms of the causal effect of a particular demographic on the decisions that an algorithm makes, and we can again invoke the same kind of counterfactual idea.
Sydney: [00:35:42] You might remember from Episode 44 that a counterfactual is an alternative to whatever actually happened, and it's a big part of causal inference because it allows you to isolate cause and effect.
Maddie: [00:35:54] Got it.
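To make the counterfactual fairness check Amit describes concrete: the sketch below flips a sensitive attribute for each applicant while holding everything else fixed, and measures how often a toy model's decision changes. The model, feature names, and weights are all illustrative assumptions, not anything from the episode.

```python
# Hypothetical counterfactual fairness check: flip a sensitive attribute,
# hold all other features fixed, and count how often the decision changes.

def approve(applicant):
    """Toy scoring model: approve if a weighted score clears a threshold.
    Note that it (wrongly) uses the sensitive 'group' attribute."""
    score = (
        0.5 * applicant["income"]
        + 0.3 * applicant["education_years"]
        - 0.2 * applicant["group"]
    )
    return score >= 8.0

def counterfactual_flip_rate(applicants):
    """Fraction of decisions that change when only 'group' is flipped."""
    flipped = 0
    for a in applicants:
        counterfactual = dict(a, group=1 - a["group"])
        if approve(a) != approve(counterfactual):
            flipped += 1
    return flipped / len(applicants)

applicants = [
    {"income": 15.0, "education_years": 16, "group": 0},
    {"income": 8.0,  "education_years": 12, "group": 1},
    {"income": 7.8,  "education_years": 14, "group": 1},
]
print(round(counterfactual_flip_rate(applicants), 2))  # prints 0.33
```

Here the third applicant's decision flips when their group membership flips, which is exactly the kind of causal dependence on a demographic attribute that a fairness audit would flag.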
Amit: [00:35:55] Which is, for example, if you have a person's education and you're trying to decide whether the algorithm is biased by higher or lower education, you can ask, "What would have happened if I suddenly switched this person's education to a different value, but didn't change anything else?" In the simplest case, you can literally just plug in a different value and look at the response. That would be one way of knowing whether that education feature was really being used by this algorithm. I think the more nuanced answer is that we have to understand that in the real world, education doesn't just change by itself. And I think this is where causal relations and causality really come in. What most likely happens is that if you change your education, you also change your age; there are very few degrees you can get quickly, right? And so now you think, okay, that means not only do I have to think of this counterfactual, which is what happens in a world where education is changed, but I also have to think of how all the other features of this person change if their education changes.
Sydney: [00:37:10] This is a common issue in causal inference: people will often try to treat counterfactuals as a missing data problem. So when you think about a counterfactual, you're like, "Well, what if I had gone to school for another four years and gotten my PhD?" Sometimes people try to figure out the outcome of that "what if" just by changing that number in the input data set.
So they'll train the model, they'll have a person with X years of education, and then to figure out their counterfactual (what if I'd gone to more school?), they'll just plug in a higher education level and create an estimate with that. But that's a really limited way to deal with counterfactuals, because it's data-driven and not model-driven.
So when you think about what would have happened if you had gone to school for an extra four years: in those four years you spent working on your PhD, you could have done something else, like traveling or working in a career and getting career experience, and those are all contributing factors. If you change one part of your data, you likely have to change another part, just because of how the world works, and that's where the qualitative causal model becomes so important.
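The data-driven versus model-driven distinction Sydney describes can be sketched in a few lines. The variable names, coefficients, and the assumed causal link (more years in school means fewer years of career experience) are all hypothetical, purely for illustration.

```python
# Naive "edit one column" counterfactual vs. a model-driven one that
# propagates the change through an assumed causal graph:
# education -> experience -> income.

def predict_income(education_years, experience_years):
    """Toy outcome model with made-up coefficients."""
    return 2.0 * education_years + 1.5 * experience_years

def naive_counterfactual(person, new_education):
    """Data-driven: change education, leave everything else untouched."""
    return predict_income(new_education, person["experience_years"])

def causal_counterfactual(person, new_education):
    """Model-driven: extra years in school are years not spent in a
    career, per the assumed causal link between the two features."""
    delta = new_education - person["education_years"]
    adjusted_experience = max(0, person["experience_years"] - delta)
    return predict_income(new_education, adjusted_experience)

person = {"education_years": 16, "experience_years": 10}
print(naive_counterfactual(person, 20))   # prints 55.0
print(causal_counterfactual(person, 20))  # prints 49.0
```

The two estimates disagree because the causal version also adjusts the downstream feature; the naive version answers a "what if" about a person who could not actually exist.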
Maddie: [00:38:27] Got it. Yeah, that makes a lot of sense. If you wouldn't have received your degree, maybe you would have had kids. Maybe you wouldn't have had kids. There are so many different things that go into it, you know. Yeah, like maybe you would have moved to another country or bought a house or not bought a house. So yeah that makes a lot of sense.
Sydney: [00:38:43] So take the loan example: if you're trying to figure out what you would need to change about yourself to get approved for a loan after getting rejected, you might think, "Well, if I had more schooling, maybe that would fix it," but then you'd also have to change those other things in your life.
Maddie: [00:39:02] Right.
Amit: [00:39:03] Right, because that's the only counterfactual that would make sense. There's no use thinking of counterfactuals that would never happen in the world, which means no user will ever come to the algorithm with that profile. And these are, I think, very early ideas.
I don't think there's a clear case to be made yet that causality and causal tools will be the way people think about fairness and interpretability. There are also simpler tools, in some cases very simple statistical practices, that can help with these questions. But at least in terms of thinking about these issues, if you have a question about fairness or interpretability and you just want to frame it in the right way, I'm 100% certain that having exposure to counterfactuals and a causal way of thinking really helps, because you can now know, in a very precise sense, what I mean when I say interpretability for this domain, for this kind of user, and how I can formulate a precise statistical estimate for the interpretability metric I'm interested in.
Sydney: [00:40:21] So I think a value-add of causal inference is the ability to frame data science and scientific questions in a more holistic way. It's the ability to make the algorithms we implement more contextualized, with more awareness of their potential impacts.
Amit: [00:40:42] At least for me, especially in thinking about these issues, causal inference has helped a lot.
Maddie: [00:40:51] Thank you, Sydney, for walking us through these concepts again.
Sydney: [00:40:54] Thank you for continuing to tolerate me.
This episode of Alter Everything was produced by Maddie Johannsen (@MaddieJ).