Alter Everything

A podcast about data science and analytics culture.
Episode Guide

Interested in a specific topic or guest? Check out the guide for a list of all our episodes!


Showing your work isn’t just for math class, it’s also for AI! As AI systems become increasingly complex and integrated into our daily lives, the need for transparency and understanding in AI decision-making processes has never been more critical. We are joined by industry expert and Director of Data Science at Western Digital, Srinimisha Morkonda Gnanasekaran, for a discussion of the why, the how, and the importance of explainable AI.





  • Srinimisha Morkonda Gnanasekaran, Dir. of Data Science & Advanced Analytics @ Western Digital - LinkedIn
  • Megan Bowers, Sr. Content Manager @ Alteryx - @MeganBowers, LinkedIn





Episode Transcription

Ep 162 Explainable AI

[00:00:00] Megan Dibble: Welcome to Alter Everything, a podcast about data science and analytics culture. I'm Megan Bowers, and today I am talking with Sri Nimisha Morkonda Gnanasekaran, the Director of Data Science at Western Digital. In this episode, we chat about explainable AI: the why, the how, and the importance of getting it right.

Let's get started. Sri Nimisha, it's great to have you on our podcast today. Thanks for joining. Could you give a quick introduction to yourself for our listeners? 

[00:00:31] Nimisha Morkonda: Sure. Thank you for having me, Megan, to begin with. And so I'm, uh, Sri Nimisha Morkonda Gnanasekaran. So I work as Director of Data Science at Western Digital.

My responsibilities are basically around charting the technical strategy for the team, setting objectives, and driving the team in delivering tangible business outcomes from a technical aspect. My educational background was in the field of AI, so definitely much before AI became the buzzword. I have an undergrad in Computer Science and Cybernetics from the University of Reading in the UK and a master's in Computer Science from the University of Colorado Boulder.

I'm located in Colorado and love the mountains here, which is where my hobbies come in. So you know, we go out on hikes and explore the mountains, which is always very relaxing. Outside of work, I also serve as an advisory board member in the strategic AI program at the University of Colorado at Colorado Springs.

And I love participating in various university activities and going back to engage with the community. I also serve as a judge for various hackathons and school science fair events, across various categories in technology, AI, and, just in general, women in tech and leadership. So that's a little bit about me.

[00:01:53] Megan Dibble: That's great. I love how involved you are in the community in different education and technology efforts. Cool. In our episode today, we're gonna talk a lot about explainable AI. So I'd love to just start off, if you could give us kind of a background of where that field of explainable AI started.

[00:02:11] Nimisha Morkonda: The whole idea of explainable AI, I think, has been inherently around since the evolution of the field, but the significant milestone, if I have to pick one, would be around 2015 or 2016, when a paper was published under the title "Why Should I Trust You?": Explaining the Predictions of Any Classifier.

It was a group of researchers, and they start off with an example of what inspired them. The example they state is where they train a model, an image-based model that classifies various animals.

And the model was classifying a husky as a wolf in spite of having very good training data. So they started exploring why the model was reaching such a conclusion, irrespective of getting good, you know, performance from the model. And they found that the model was giving importance to the snow that was in the background.

So that's what led to the conclusion of the husky being a wolf. And again, just to clarify, right, this is really not a bias or hallucination problem. It's where the model is making an inference based on the information it has. The fact that the snow is present in the background is why it's getting a little confused and leading to an incorrect classification, more so than a bias or a hallucination problem.

So if you wanna think about it in a very simplistic way, it's exactly how we humans try to explain our decision or thought process in a series of steps, right? We thought about A, B, C; because of B and C, we decided to do D and E, and so on. It's very sequential, and we're trying to get the model to do the same in terms of explaining where and why it's making a certain decision.

[00:04:06] Megan Dibble: Yeah, that's a great explanation. And I like that example of the wolf versus the dog; we might not even know it's using the snow in the background to decide, right, until you start putting different inputs in. And I'd love to hear from you on why explainable AI is important and what really happens if we don't get it right.

[00:04:25] Nimisha Morkonda: Yeah, I think with all the different concepts in AI, you know, ethical, responsible, the why is always the big question. And again, I would like to start off with an example, right? I think a common term that you will hear around ML models is that they're a black box. What that means is people tend to not understand how the model is really working, right?

So it just basically is referring to a lack of transparency and interpretability in the ML models. When the model gets it right, it's all good, great, nobody thinks one step further. But when the model gets it wrong, then comes the need for accountability, right? The model predicts incorrectly; now how do we understand why it's making it incorrect?

So can we get at the aspects of why when things go wrong? Again, as an analogy, I would like to go back to our earlier example of humans trying to explain our decision process. Let's say we have a conflict with someone, and the best way to resolve the conflict is sitting down and talking through what each perspective was: us explaining our thought process to the other person, and the other person explaining their thought process, right?

So we try to do that explanation to rectify the wrong in all kinds of settings. I think it's essentially the same concept: when the model is doing something incorrect, or different than what we expected it to do, we need to uncover the root cause, where we are trying to get more explanation as to why it's doing a certain thing a certain way.

And it doesn't always necessarily mean incorrect. It could be just not very accurate, or not close enough to what we are expecting. So I think that's where the why comes in, and that leads to the space of having trust and transparency around AI systems. I mean, I know AI has been advancing so much.

We've had a lot of people saying it's the next greatest thing that is going to change the world. But if you think about trust in systems, how many of us are really comfortable sitting in a self-driving car and letting the car drive itself? We are not there yet. We still don't have that trust, and we still have a long way to go to establish that trust.

And I think that's where this whole importance of explainable AI comes into the picture, because you want us to be able to trust the system, and that comes when you know how exactly things are working, rather than it just being a very fancy thing that's going around.

[00:07:06] Megan Dibble: Definitely. On that thread, are there industries in which explainable AI is more crucial to implement than others, where, you know, we need the model to explain itself even more critically than in other industries?

[00:07:20] Nimisha Morkonda: Oh, certainly, certainly. Right? I mean, think of any industry that involves a high-stakes decision point, something like human lives, right? Wherever there is something of really high value involved is where it's very crucial. And, you know, a couple of examples to relate to: uh, healthcare would be one main field where explainability is very important.

I mean, let's say you adopt an AI system that diagnoses a patient and recommends a treatment, or comes up with a recommendation on a medication. The model should have the capability to explain its reasoning, its evidence, the thought process as to why it arrived at that, so that there is trust between doctors, patients, and the system.

Then, I think, yes, it makes sense, and we understand, you know, what the risks and benefits are much better, and how much we can rely on that system's recommendation. And another example is in finance, where, let's say, you adopt an AI application to approve your loan process, right? If it approves, well and great; again, back to our example, if it does well, then yes.

But when it does not, then that's where you wanna be able to get justification for the decision. So it's important, so you're not inducing any bias or discrimination that can be knowingly or unknowingly part of the system that you have trained. And again, going to our other example of full self-driving cars, right?

I think that's another area where explainability is very important. If your self-driving car is not doing the right thing, and if it's, you know, having trouble making decisions, I think inherent feedback there, getting explanations as to what's feeding the model's confidence in making that decision at that point in the middle of the road, is very important.

So I think anywhere there are really legal, ethical, or social implications, where all of the responsible AI concerns come into the picture, those are the areas where we should really focus on building systems that we can explain and understand how they are working.

[00:09:44] Megan Dibble: Yeah, and that relates back to a recent episode we had on bias and AI. We talked a lot in that episode about how historical biases work their way into the data that you have; the historical data becomes part of the record, and it perpetuates. And so I think this is a nice follow-up: you need that explainable piece in your models so that you can pinpoint, oh, this is why it recommended hiring these people.

It's basing it on this past practice of hiring only certain kinds of people. Implementing those pieces of explainability seems like a logical next step if people listened back to that episode and are thinking, well, how do I combat bias? Or how do I even make sure that it's not unintentionally biased?


[00:10:32] Nimisha Morkonda: Yeah, yeah. No, certainly. I think a lot of the challenge with bias is the unconscious and unintentional bias, right? Like you say, it creeps into the data. You're not even aware that it exists until you have some kind of a review, and that review, I think, happens very appropriately when you do explainable AI, trying to understand your model's behavior per se.

[00:10:57] Megan Dibble: So then, moving into the how part of this: what kinds of techniques can companies use to make their AI more explainable?

[00:11:05] Nimisha Morkonda: Yeah, so there's actually a lot of research around this and a lot of techniques available. My recommendation, always, to anyone looking at adopting AI systems is to understand your data first.

That's fundamental; you know, it ties back to: do you have any bias in your data? Do you have anything that will be a limitation in your data? So I think just understanding what data you have and what its limitations are is really the foundation for all of this. And then beyond that, in talking about explainable AI techniques, there are two kinds of techniques that people generally talk about.

One is intrinsic; the other one is post hoc. Intrinsic techniques are those that really aim to make the model, the AI or ML model itself, more transparent and interpretable. These basically operate on the principle of, you know, let's use simpler but more structured architectures, where we incorporate a lot of human knowledge into the model.

We try to reduce complexity. I think there is a misconception that the more complex your model is, the better the performance. That's not quite the case, right? They don't really translate linearly that way. In the earlier paper that I pointed out, they actually designed a method called LIME (Local Interpretable Model-agnostic Explanations), which is very commonly used today.

It is basically a method that generates local explanations for each prediction, and it provides you the ability to tweak the input data and see how the model's behavior changes. The way it works is: for each instance of your input, for each data point, you create a set of instances that are slightly modified from your original instance.

Then predict the outcome. You do this as a repetitive step: you assign some sort of weightage based on the proximity of each new instance to the original instance. And if you do this repetitively, you're gonna have a lot of data points, which you then use to train, let's say, a decision tree model that will basically break down your instances in more of an if-else scenario.

It says: if your instance was this, this is what your outcome was; if your instance changed to this, this is what your outcome changed to. And depending on how much of a shift in the outcome there is, you basically get to see how much importance that feature or instance was given in your whole model's interpretation.
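The perturb-predict-weight-fit loop described above can be sketched in a few lines of numpy. Everything here is a toy illustration (the linear black-box model, the Gaussian perturbation scale, the exponential kernel width), not the actual LIME library, which uses a more elaborate sampling and surrogate setup:

```python
import numpy as np

def lime_style_explanation(predict_fn, instance, n_samples=500, kernel_width=0.75, seed=0):
    """Approximate a local explanation for one instance: perturb it,
    query the black-box model, weight samples by proximity to the
    original, and fit a weighted linear surrogate."""
    rng = np.random.default_rng(seed)
    # 1. Create perturbed copies of the instance.
    perturbed = instance + rng.normal(scale=0.5, size=(n_samples, instance.size))
    # 2. Query the black-box model on each copy.
    preds = np.array([predict_fn(x) for x in perturbed])
    # 3. Weight each copy by its proximity to the original instance.
    dists = np.linalg.norm(perturbed - instance, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)
    # 4. Solve weighted least squares for a local linear surrogate.
    X = np.hstack([perturbed, np.ones((n_samples, 1))])  # add intercept column
    W = np.diag(weights)
    coef, *_ = np.linalg.lstsq(X.T @ W @ X, X.T @ W @ preds, rcond=None)
    return coef[:-1]  # per-feature local importance (intercept dropped)

# Toy black box: output depends strongly on feature 0, weakly on feature 1.
black_box = lambda x: 3.0 * x[0] + 0.1 * x[1]
coefs = lime_style_explanation(black_box, np.array([1.0, 2.0]))
# coefs ≈ [3.0, 0.1], since the black box happens to be exactly linear
```

On a real nonlinear model the surrogate only holds locally around the instance, which is the whole point: the coefficients describe the model's behavior near that one prediction.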

So this ties back to, you know, having decision trees where you are breaking down the rules of your model, so you visually see where your nodes are, what thresholds lead to each decision point, and so on. So those are the intrinsic techniques. And there's this whole other world, which is post hoc techniques.

So those are cases where you've already built a model and you already have it functioning very well. You are happy with the performance, but then you start to see it's missing certain things, and maybe you realize there is some bias and so on. So you want something that runs on top of your existing model.

So that's why it's called post hoc. These are things like feature importance analysis, and diving a little deeper into that would be techniques like SHAP. SHAP, S-H-A-P, is the acronym, for SHapley Additive exPlanations. What it essentially does is mimic a game theory approach, where you look at how much each of your features contributes to your predicted outcome.

And maybe if I break it down with an example, it might be better. So let's say you have 10 features that feed into your model, and you want to understand where your model is placing importance among those 10 features. It's not gonna give equal importance to the 10 features, because the learning is not equal, right?

So it'll tell you which features it's giving more weightage. And if you wanna tie it back to our earlier example of the husky and the wolf, in that scenario, it basically was placing more importance on the background, which was the snow. So it'll give you insight on where the root cause of your errors or biases is coming from.
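The game-theory allocation described here can be computed exactly for a small number of features by averaging each feature's marginal contribution over all feature orderings. A brute-force sketch, where the payoff function is a toy stand-in for "model performance using only this feature subset" (a real project would use the `shap` library, which approximates this efficiently):

```python
import itertools
import numpy as np

def shapley_values(value_fn, n_features):
    """Exact Shapley values: average each feature's marginal contribution
    to value_fn over all feature orderings.
    value_fn(subset) -> payoff of the model using only that feature subset."""
    phi = np.zeros(n_features)
    perms = list(itertools.permutations(range(n_features)))
    for perm in perms:
        included = set()
        for f in perm:
            before = value_fn(frozenset(included))
            included.add(f)
            after = value_fn(frozenset(included))
            phi[f] += after - before  # marginal contribution of f in this ordering
    return phi / len(perms)

# Toy payoff: features 0 and 1 contribute additively, plus an
# interaction term that the Shapley allocation splits evenly.
def payoff(subset):
    v = 0.0
    if 0 in subset: v += 2.0
    if 1 in subset: v += 1.0
    if {0, 1} <= subset: v += 4.0  # interaction between the two features
    return v

vals = shapley_values(payoff, 2)  # -> [4.0, 3.0]; they sum to payoff of the full set
```

The sum of the values always equals the payoff of the full feature set, which is what makes Shapley values a principled way to split a prediction across features.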

So that's SHAP, or Shapley values, as they're called. And then there are also a lot of visualization techniques being used. Two of the common ones are, uh, PDP, which stands for partial dependence plots. What that does is give a graphical representation of the relationship between a feature and the predicted outcome, and it does it one feature at a time, so it keeps all other features constant.

It tweaks one single feature, visualizes how the output changes, and then iteratively does that for all the features. So, uh, it kind of helps you understand the effect of one single variable on your whole model and how changing it has an impact. Double-clicking into the next level is the other technique, which is called ICE: individual conditional expectation plots.

Again, very similar to PDP, except it goes a little deeper in that it's more granular: it generates plots for individual outcomes, where you can dive deeper into especially the ones that are incorrect, so you can basically see which of your features contribute most to the incorrect ones.
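The PDP and ICE ideas just described can be sketched together: sweep one feature over a grid while holding the others at their observed values. Each row of the result is one ICE curve, and the column-wise average is the partial dependence curve. The model and data below are toy illustrations:

```python
import numpy as np

def ice_and_pdp(predict_fn, X, feature, grid):
    """For each grid value, set `feature` to that value in every row and
    predict. Rows of `ice` are individual ICE curves; their column-wise
    mean is the partial dependence (PDP) curve."""
    ice = np.empty((X.shape[0], grid.size))
    for j, g in enumerate(grid):
        X_mod = X.copy()
        X_mod[:, feature] = g        # overwrite only the swept feature
        ice[:, j] = predict_fn(X_mod)
    return ice, ice.mean(axis=0)     # (ICE curves, PDP curve)

# Toy model: prediction = 2*x0 + x1, vectorized over rows.
model = lambda X: 2.0 * X[:, 0] + X[:, 1]
X = np.array([[0.0, 1.0],
              [0.0, 3.0]])
grid = np.array([0.0, 1.0, 2.0])
ice, pdp = ice_and_pdp(model, X, feature=0, grid=grid)
# pdp -> [2.0, 4.0, 6.0]; each individual ICE curve has slope 2 in x0
```

When the ICE curves all share the PDP's shape (as in this additive toy), the average tells the whole story; when they fan out, that spread is exactly the per-instance detail ICE exists to surface.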

So if you think about all of these methods, I'm sure a lot of people, as they were listening, were thinking, okay, this is computationally intensive. And they are; a lot of the post hoc techniques are intensive operations, because you're doing a lot of permutations. I mean, the example I gave has 10 features; now let's scale that to 10,000 features.

I mean, you could be endlessly doing the permutations and combinations trying to get your explainability, so it's not gonna be very effective, right? It all boils down to selecting what is appropriate for the problem that you're trying to solve. The examples I stated are some of what's available out there, and depending on the problem you're trying to solve, you evaluate what is appropriate for your situation and use it based on what your objective is.

[00:17:45] Megan Dibble: That makes sense. It sounds like there are quite a few options for people to use. What would you recommend for someone starting out, if they see that their model's not doing quite what they expect? Is there one option you recommend starting with, or testing on a smaller sample of your model? How do you get started with testing out these different options for explainability?

[00:18:05] Nimisha Morkonda: I would really start with the SHAP values, because the learning curve is pretty simple. You basically have a Python library that you can import and apply on top of your model, and then calculate the SHAP values for all of your features. So from a usability standpoint, it's a pretty simple learning curve for someone wanting to do an exploration.

That would be very good. And you know, if they have a model, let's say, that has a lot of features, I'm sure they've already done feature importance analysis, which tells them what the important features are. So then maybe you can take the important features and just compute the SHAP values on top of those.
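The "rank features first, then explain only the top ones" workflow suggested here can be sketched with a simple permutation-importance pass: shuffle one feature at a time and see how much the error grows. The model and data are toy illustrations, and the SHAP step itself is left to the `shap` library:

```python
import numpy as np

def permutation_importance(predict_fn, X, y, n_repeats=10, seed=0):
    """Rank features by how much shuffling each one degrades the model's
    mean squared error; a common first-pass importance check."""
    rng = np.random.default_rng(seed)
    base_err = np.mean((predict_fn(X) - y) ** 2)
    scores = np.zeros(X.shape[1])
    for f in range(X.shape[1]):
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, f])  # break this feature's link to the target
            err = np.mean((predict_fn(X_perm) - y) ** 2)
            scores[f] += (err - base_err) / n_repeats
    return scores

# Toy model and data: only feature 0 actually matters.
model = lambda X: 5.0 * X[:, 0]
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = model(X)
scores = permutation_importance(model, X, y)
top_features = np.argsort(scores)[::-1][:1]  # keep the top feature(s) for the SHAP step
```

Restricting the more expensive SHAP computation to `top_features` is exactly the cost-reduction trade-off described above: you spend the permutation budget only where the model has demonstrably learned something.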

So it reduces the computationally intensive part. For someone that's a little more advanced, I think LIME is a very good option to try, because it balances the act of avoiding a very computationally intensive process while still handling a model that has, you know, a lot of features.

If you're talking about thousands of features, I think that's a very good balance you can strike without compromising your, uh, computation capability.

[00:19:17] Megan Dibble: Great. Thanks so much for all those explanations. Explainable AI is really just one piece of an overall AI strategy, so I'm curious to hear from you at a high level, what are some other aspects of AI strategy and adoption that our listeners should know about?

[00:19:35] Nimisha Morkonda: Mm-hmm. Yeah. I mean, this, I think, can be a workshop by itself. It's so wide in terms of what you think about when it comes to adoption, right? But like you said, I can give a very high-level gist. I think the first and foremost is understanding the problem that you're trying to solve with AI, right?

I mean, really evaluate if AI is the best solution you have out there to solve the problem. And the next question is: is that what your customer and your organization want? Does it align with your customers' goals and your organization's goals? Because ultimately, those are the stakeholders.

So if that alignment is not there, then it's probably not a great idea to adopt it, right?

[00:20:20] Megan Dibble: And it seems, from talking to other experts on the podcast, like it's easy for the technical team to just skip those first two steps. I hear people say that a lot: start with the problem, make sure AI is what you need.

Make sure machine learning is what you need. So I love that you pointed that out too.

[00:20:34] Nimisha Morkonda: Yeah, yeah. I mean, I think it is jumping on the bandwagon, right? Uh, I think AI is creating that peer pressure where people wanna adopt it. And the other big gap, the other big thing that you need to evaluate, is: does your organization have the needed skill set?

Do you have the needed technical folks and technical background, and is your workforce also aligned with the goals? Right? I mean, if the people working in your organization are not aligned, it's gonna really cause a lot of chaos and change the dynamics of the team. Do they see AI as a threat?

Are they bought into the plan? That is, I think, a lot of the prework that you need to do. And once all those are in place, then you can go about really defining your strategy. And in terms of strategy, the biggest thing that you need to start off with is having clear, measurable objectives for whatever you are building.

What is the cost that you're gonna incur in building that infrastructure, and how am I going to measure the return on investment after I put in all that cost? How am I going to measure the outcome? Do I have tangible KPIs so that I can say, here is exactly how I'm gonna measure, and here is how I will evaluate whether my strategy has been successful?

Again, even though I've saved it for last, I think this is really one of the most important pieces: have checkpoints in between, to see if failure is creeping into your plan. Yeah, just like any project, I think you need to have a backup plan for failure; let's say, you know, it's not going as well as you thought it was. This is where, again, having clear KPIs or measurements to track your progress helps, so you can really reevaluate your strategy before you go too far into it.

You have those kinds of milestones in the middle and say, here are my interim milestones that I'm going to use as checkpoints, and if I'm not seeing a certain outcome by this interim milestone, then here is what I'm gonna do to adjust my strategy. It might not be a complete revamp of your strategy; it might be just, you know, slight adjustments that you wanna make to get you to that end point without going further along the wrong path.

Looking at it all together is how I would recommend folks approach it.

[00:23:02] Megan Dibble: I love that. A lot of wisdom in that short time, and at a high level, so I appreciate that. And you mentioned too, like, a piece of it is having people with the right technical skill set, having the right people to work on these kinds of projects.

And from your experience, you mentioned at the top of the episode the advisory board for the strategic AI program at UCCS; you know, what do you see as the critical skills to be successful in this field of data science and AI?

[00:23:31] Nimisha Morkonda: Yeah, yeah. I'm glad you asked, because I think there are a lot of folks who tend to think that having a really strong understanding of what model to use, when to use it, and how to evaluate it is the only or the primary skill, right?

But I usually think of skills in this field in two different aspects: technical and non-technical. The non-technical is equally as important as the technical. I mean, from a technical aspect, I think it's very commonly known, right?

People need to understand how a data science model works. I would really go one step further: understand, at the mathematical level, how the model actually works. It's very convenient today that there are a lot of libraries available in Python where you can just import one, apply it, and you're done.

Right. But understanding how that model really works at the mathematical level is one step deeper that you can, uh, inculcate into your skill set. That'll take you a long way. But I think the technical part is still definitely a lot easier, and a lot of people do it very well. From a non-technical aspect, I think of the domain knowledge: the application of data science is very diverse, right? I mean, every single field has applications, so you could be working in various fields: healthcare, finance, the service industry, manufacturing, and so on. What will really make you stand out among others is understanding the domain knowledge.

So it's not like you have to become an expert in that field, but you need to understand how that business operates. Let's say your business is making a product: you need to really understand how the product works at a very basic level, just well enough for you to know the ins and outs.

And this will be very powerful in taking you one step further than the others, 'cause you understand both sides of the coin. You understand how your product works and where you are applying your skill set from a data science aspect, and it'll really make you comprehensive in knowing the application of certain models in making the product successful.

So just understanding that domain knowledge is very important. And I think the next big thing, one of my favorites, which I still work on sometimes, is storytelling skills. This is really an art; it's not technical, it's really an art. I mean, anyone can give a report that summarizes a bunch of data, but how you really convey it as a story is where I think you can stand out.

And I think there are multiple building blocks to it, right? You need to know who your audience is. You need to know the problem you're trying to solve. Most importantly, keep it simple. I mean, in my view, explaining the model in technical terms is a very easy job, but the difficulty lies in being able to explain it in simple terms to someone non-technical who ends up understanding what you're trying to convey.

These are skills that will really make you stand out in the long term and be successful. 

[00:26:48] Megan Dibble: Definitely. A few things came to mind as you were going through that. The piece about understanding the business side made sense because of what you were talking about earlier, you know, defining the success metrics, defining KPIs.

Without that business understanding, how do you determine whether your model is successful, you know, what it's really gonna do when you implement it? So that makes a lot of sense. And then on the storytelling piece: we had an episode on that kind of at the beginning of this year.

That was really helpful, so I encourage listeners to go back and check that one out. That's a topic I'm super passionate about, and it is an art, but it's also something that you can get better at with practice and, like you said, can really set you apart. So I think that's great.

[00:27:34] Nimisha Morkonda: Absolutely. Yeah. I'm still working on that.

You know, a lot of times when you're trying to explain something very complex, when you have, you know, multiple models that come into the picture, I think it can become complex, but trying to keep it simple is, like, my mantra.

[00:27:50] Megan Dibble: Yes. 

[00:27:51] Nimisha Morkonda: Yeah. 

[00:27:51] Megan Dibble: I love that mantra. Well, thanks so much for joining us on our podcast today.

I really enjoyed our conversation. 

[00:27:57] Nimisha Morkonda: Sure. Thank you so much for having me. It was fun talking about all of the explainable AI stuff, so thank you.

[00:28:05] Megan Dibble: Thanks for listening. To learn more about topics mentioned in this episode, including a white paper from Alteryx on explainable AI, head over to our show notes on

See you next time.

This episode was produced by Megan Dibble (@MeganDibble), Mike Cusic (@mikecusic), and Matt Rotundo (@AlteryxMatt). Special thanks to @andyuttley for the theme music track, and @mikecusic for our album artwork.