
Data Science Mixer

Tune in for data science and cocktails.
MaddieJ
Alteryx Alumni (Retired)

How does data science power the university student experience behind the scenes? We’re joined by Danielle Lyles, data and evaluation scientist at the University of Colorado Boulder, to learn more. 

 

 




Cocktail Conversation

 

During the podcast episode, Danielle talked about how some of her recent projects addressed challenges the university faced because of the pandemic. Have you done any interesting data projects that grew out of the pandemic? Any topic modeling projects that gave you "masks and hand sanitizer" as results?

 

Join the conversation by commenting below!

 

 



Episode Transcription

SUSAN: 00:01

Is there such a thing as data science celebrities? I'd say so. And if you want to hear from some of them firsthand for free, you'll want to join us for our virtual INSPIRE Conference, running from May 18th to 21st. Some of the data science celebrities joining us at INSPIRE are Dr. Hannah Fry, Billy Beane, Dr. DJ Patil, and Jake Porway. Plus, still more data and tech luminaries. There will also be special Women in Data Science panel discussions for different global regions, addressing data science in education and digital transformation. And we're thrilled to have two video sessions of Data Science Mixer at the conference. I'll chat with internationally recognized expert and author Alberto Cairo about data visualization and how to create and consume visualizations effectively. I'll also talk with Renee Teate, who's the Director of Data Science for HelioCampus, and also well-known for sharing her journey into data science through her Becoming a Data Scientist podcast and Twitter account. I hope you'll join us at INSPIRE for these exciting conversations and much more. Register for free now at inspire.alteryx.com.

 

[music]

SUSAN: 01:18

Hey, folks. So I've mentioned this before on Data Science Mixer, but I used to be a professor. So many long nights and weekends grading and coming up with class activities. Unfortunately, I never came up with a foolproof way to use NLP to grade papers or clustering to assign semester grades. But there are lots of ways data science is being used in higher ed, from admissions, to advising, to alumni relations.

 

[music]

SUSAN: 01:45

Welcome to Data Science Mixer, a podcast featuring top experts in lively and informative conversations that will change the way you do data science. I'm Susan Currie Sivek, the data science journalist for the Alteryx Community. And to learn more about data science in higher ed, I set up a chat with my friend Danielle Lyles.

DANIELLE: 02:04

My name is Danielle Lyles, my job title is Data and Evaluation Scientist, and I work at the University of Colorado, Boulder, jointly in the Office of Data Analytics and the Office of Undergraduate Education.

SUSAN: 02:17

Wow. That's a mouthful [laughter]. Sounds like you're doing a lot of different things in those roles, so that will be interesting to explore. Before we get into that, though, would you mind also sharing with us which pronouns you use?

DANIELLE: 02:29

She/her/hers.

SUSAN: 02:30

Awesome. And since this is Data Science Mixer, let's do a refreshment check.

DANIELLE: 02:36

I have a can of Coca-Cola [laughter], which I am opening right now.

SUSAN: 02:40

Wow. Actual ambient sound effects. I love it. Wow. All right. And I have a Hoplark tea, which is like iced tea but made with hops.

DANIELLE: 02:50

Yummy.

SUSAN: 02:51

Yeah, yeah. It's pretty good. It's actually made in Boulder, so it's very appropriate. With our beverages in hand, it's time to get started.

 

[music]

SUSAN: 03:05

So I would love to hear a little bit, first of all, about how you got into data science and how you ended up in your career with multiple levels and offices that you currently work for.

DANIELLE: 03:17

Awesome. And I'm starting with my PhD because it's an important part of the story.

SUSAN: 03:21

Yeah, yeah.

DANIELLE: 03:22

So I got my PhD in applied math in 2007 and I didn't know what to do with myself, so I went and did a postdoc. And then after that, I knew I didn't want a tenure track faculty job and I had tried teaching. And I wasn't aware, data science wasn't really a thing back then, there weren't a lot of options. So I went into teaching math at universities for ten years. It was a good gig, I liked it. I taught at CU Boulder, actually, for about four years, and while I was there, I learned from my students, I learned about data science. I taught a few modeling classes where they had to do projects, and I gave them freedom to do what they wanted and I learned about cool data sciency things that you could do through them. I also mentored a group for women in math, and from them, I learned about imposter syndrome, which was an important thing for me to learn. Because I thought data science sounded really fun, it was time for a change. And then I realized I had imposter syndrome. And when you have it, you think you're the only one. When you are aware of it, you can overcome it. So I decided to follow my dream of being a data scientist. I learned Python and machine learning on my own the cheap way, through DataCamp and Coursera, did a data science fellowship for people with PhDs, and started looking around for jobs. I lucked into my job in a way. A colleague referred me to someone who was newly in charge of the Office of Data Analytics. And I met with him, talked to some other people, they interviewed me, the job was never advertised, and somehow I ended up with the title Data and Evaluation Scientist. I was the first full-time data scientist hired at CU Boulder.

SUSAN: 05:10

Wow. Interesting.

DANIELLE: 05:11

And I love my job.

SUSAN: 05:13

Yay [laughter]. I'm sure your boss will be happy to hear that when he listens to the podcast. That's terrific. No, I love that story. I love the idea that your students taught you so much along the way, too.

DANIELLE: 05:24

I bet as a former teacher, you understand those things.

SUSAN: 05:28

Yeah, it's true. It's true. And this idea of imposter syndrome I know is one that comes up for many of us, but maybe especially for women pursuing data science, and especially for folks who are the first data scientist in their particular area. So how has that experience been for you?

DANIELLE: 05:46

In the beginning, I didn't really have a boss that was particularly in charge of data science. But I was used to working independently, having been a faculty and doing a PhD. So it was cool. The thing, my imposter syndrome, the way it manifested is I was like a perfectionist and afraid of making mistakes. But I have since made mistakes and watched other people make mistakes and learned that it's fine.

SUSAN: 06:16

Wait, what? It's fine? It's fine to make mistakes? What [laughter]?

DANIELLE: 06:20

I was able to help hire my boss, and I'm really glad that we got him. And he's amazing. So now I had someone actually in charge of our data science team and things got even better after that.

SUSAN: 06:33

Yeah. That's terrific to hear. Awesome.

DANIELLE: 06:34

And I will say that following my dream and actually getting a job, that grew my confidence. So now I know I can do anything I put my mind to.

SUSAN: 06:41

Yay. Terrific to hear. So tell us a little more about the office that you're working in and what it does at the university. What are some of the main kinds of projects that you work on?

DANIELLE: 06:52

For sure. I'm going to start with just the Office of Data Analytics. It's a big group that houses institutional research, which most universities have. So institutional research has been [inaudible] a while. We have a data engineering group, we have a survey and assessment group, and then the data science team, we're new. So it started with little me. And we're all one big ODA family. And my team-- did you want to hear about how we are organized?

SUSAN: 07:20

Sure, yeah.

DANIELLE: 07:21

I looked it up today and I believe it's called Federated. I don't know how common the lingo is. But we're a centralized team, we work together, yet we're also allocated to different units of the university. So, as I said before, I work with undergraduate education and student success. There's two other data scientists on my team now, one who works with admissions and one who works in financial aid. So we're able to build relationships and domain expertise learning from and with our stakeholders. But then together, we're working on breaking down silos across the university, leading cross-functional projects, and being an objective referee between the groups. So, for example, there's often a sort of tension between admissions and student success. Because admissions needs lots of students, but we want the right students so they can be successful. We just allow them to be data-driven, show them what we find, work together with both of them. Data science in higher ed is a wide-open space with lots of opportunities, so it's super fun to be working on stuff like this.

SUSAN: 08:25

Yeah, that sounds really awesome. So admissions, student success, these cross-functional kinds of projects. What are some of the favorite projects that you've worked on during this time, now that you are no longer the sole pioneer doing data science work?

DANIELLE: 08:40

My super favorite project was actually for admissions. Before we had an admissions data scientist, I was the only one there. And my boss actually saw a big need because he was in a meeting where people were trying to figure out, "Which admitted students should we reach out to? Should we do the high-touch outreach? We can't call all of them. We can't send them all CU Buffalo socks. But who should they be?" And he was there and thought a model could help us with that. So I built this tool, which we're calling Buffalo Trace because it's tracing CU Buffaloes, but Buffalo Trace is also a whiskey?

SUSAN: 09:17

Oh, wow. That's good for our Data Science Mixer theme. I like it [laughter].

DANIELLE: 09:20

Exactly. And what it does is it predicts which of our admitted students will pay a confirmation deposit. About 20% on average do that. So it spits out probabilities, and they use it to direct these high-touch recruitment and outreach resources to students who are more likely to pay a deposit. The reason it's my favorite is because right now it's confirmation season and there's a bunch of people using it and they're thankful for it, they're excited about it.

SUSAN: 09:53

Awesome.

DANIELLE: 09:54

But one of my favorite parts of the whole process was actually teaching these stakeholders what it is, how to use it, why we trust it. So I still get to scratch the teaching itch in my job.

SUSAN: 10:07

Nice. Yeah. What kinds of challenges did you encounter in teaching those stakeholders about your model and about how to use it, maybe that were different from challenges you encountered teaching undergrads?

DANIELLE: 10:21

Good question. I don't know if the challenges were different than teaching undergrads. But the challenges were one part, explaining it. So we were working on a presentation, it made complete sense to us. Then we went and showed it to one of the would-be stakeholders and she was like, "I still don't get it. Can you say it again?" And then I spent some time getting all teachery and coming up with a fun way to explain it with a lottery ticket metaphor. So that was a good learning experience, to get feedback from people that aren't in the weeds on the nerdy data science team about if what I'm saying makes sense. But the other thing that was a challenge but good is that they're all so smart that they ask really good questions. But that was good. Good and fun.

SUSAN: 11:10

Yeah. Yeah, absolutely. Can you tell us any more about the technical aspects of that project? What kind of model and any other interesting nuances that you discovered along the way?

DANIELLE: 11:20

I tried many different models. Yes. But the one that worked the best was good old logistic regression. So a challenge with the method was that the data set is highly imbalanced. Residents pay the deposit at a rate of around 37%, but for nonresidents it was around 17%. And so you have to be-- you have to do things when you build your model. Like either balance the data or have it weigh errors more heavily on the class that doesn't occur as frequently. So if you don't do that, it just predicts, "No one's going to pay deposit. I'm an awesome model, look how accurate I am." And so--

SUSAN: 12:00

And you will have no students [laughter]. That's a little scary.
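For readers who want to see what the fix Danielle describes can look like in practice, here is a minimal sketch of a class-weighted logistic regression. Danielle mentions learning Python, so the sketch uses Python and scikit-learn, but the episode doesn't confirm her actual tooling, and the feature names and data below are invented for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical admitted-student features; none of this is the real Buffalo Trace data.
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.integers(0, 2, n),    # e.g., resident vs. nonresident flag
    rng.poisson(2, n),        # e.g., admissions events attended during the cycle
    rng.normal(3.5, 0.4, n),  # e.g., high school GPA
])
y = (rng.random(n) < 0.2).astype(int)  # roughly the ~20% overall deposit rate mentioned

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" makes errors on the rare "paid deposit" class cost more,
# so the model can't score well by simply predicting "no deposit" for everyone.
model = LogisticRegression(class_weight="balanced", max_iter=1000)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

The alternative Danielle mentions, rebalancing the data itself (for example by resampling), serves the same purpose of keeping the model from ignoring the minority class.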

DANIELLE: 12:03

Making sure to-- Yeah. Making sure to have-- when you understand how the cost function works and it's just looking at the errors, and it can just be wrong on everybody who pays a deposit when it's so imbalanced. So forcing it not to do that was important. And then the biggest challenge, though - because I was already familiar with stuff like that - was the data. Getting it, understanding it, what it is. Some of the things I looked at were event attendance. But the systems--

SUSAN: 12:38

These were admissions events [crosstalk]?

DANIELLE: 12:40

Admissions events, yes. And they start as early as junior year in high school. But the data was loaded into their system in 2018. And so a lot of the events before that were lost. And so when I tried to use all of the events that were in there to train on 2018 and 2019, the event data was imbalanced because it didn't go back as far. But I found that and then only added up events during what we call the admissions cycle. And then it was fine. But the smart people, they asked me, "Why are you only adding up events during the admissions cycle?" And I said, "Hey, your data didn't go back the same amount of time for everyone, so I had to do that or it made the model act funny."
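As an aside, the event-data fix Danielle describes, counting only events inside a consistent admissions-cycle window so the feature is comparable across cohorts, might look something like this in pandas. The column names and dates are hypothetical.

import pandas as pd

# Hypothetical event log; some students have history before 2018, some don't.
events = pd.DataFrame({
    "student_id": [101, 101, 102, 103, 103, 103],
    "event_date": pd.to_datetime([
        "2017-10-01", "2019-02-15",
        "2019-01-20",
        "2016-05-12", "2018-11-05", "2019-03-01",
    ]),
})

# Count only events inside a fixed admissions-cycle window for every student,
# so cohorts with longer recorded histories don't get inflated counts.
cycle_start, cycle_end = pd.Timestamp("2018-08-01"), pd.Timestamp("2019-05-01")
in_cycle = events["event_date"].between(cycle_start, cycle_end)
events_in_cycle = (
    events[in_cycle]
    .groupby("student_id")
    .size()
    .rename("events_in_cycle")
    .reset_index()
)
print(events_in_cycle)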

SUSAN: 13:28

Right, right. So you don't know entirely yet about the outcomes of the first round of applying the model, it sounds like, but soon.

DANIELLE: 13:36

True. Very true. Yes, we don't.

SUSAN: 13:39

Very exciting, though.

DANIELLE: 13:41

Yes.

SUSAN: 13:41

Very cool. I know you also worked on a topic modeling project as well.

DANIELLE: 13:47

Yes. I've actually worked on a couple of topic modeling projects. One had to do with the pandemic. Actually, we did this-- we had this new student survey, from the survey group, that always went out to students with a pretty low response rate, and they managed to get it into the system in such a way that almost all of the students answered it. And one of the questions we asked them was, "What is one thing CU Boulder can do to make you feel more comfortable coming in fall 2020?" And I was asked to do topic modeling on that, find the common themes and their relative proportions, and that information was taken by the boss of my boss up to the higher level people when they were thinking about how to form-- what to do for fall 2020.
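Danielle doesn't say which topic modeling approach or tools she used; as one common possibility, here's a small sketch using latent Dirichlet allocation from scikit-learn on made-up survey responses.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented open-ended responses standing in for the real survey answers.
responses = [
    "masks and hand sanitizer stations everywhere please",
    "enforce distancing in the classroom and require masks",
    "I don't want to pay full tuition for online classes",
    "worried all of my classes will be online in the fall",
    "give us ways to meet other students and build community online",
    "hand sanitizer in every building and smaller class sizes",
]

vectorizer = CountVectorizer(stop_words="english")
dtm = vectorizer.fit_transform(responses)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(dtm)  # each row is one response's mix of topics

# Top words per topic approximate the "common themes"; averaging the document-topic
# mixtures gives a rough sense of each theme's relative proportion.
terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
print("Approximate theme proportions:", doc_topics.mean(axis=0).round(2))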

SUSAN: 14:34

Nice. So can you tell us what some of those main themes that you found were?

DANIELLE: 14:38

The most common theme was they wanted masks, they wanted hand sanitizer everywhere. They had all of these cool ideas about hand sanitizer stations everywhere, distancing in the classroom, following the guidelines, whatever they may be over time as things got worse and better. And making sure that other students would be-- that the rules would be enforced. That was the most common theme. But other things were concerns about online classes. They didn't necessarily want their classes to be online.

SUSAN: 15:13

Sure, yeah.

DANIELLE: 15:14

And they didn't want to pay the same amount for online classes.

SUSAN: 15:16

Right. Yep. Understandable, yeah.

DANIELLE: 15:20

And they still wanted ways of meeting each other and building community, even if it's online. Those are the top three things that come to mind.

SUSAN: 15:30

Interesting. So basically, you were able to take all of these open-ended survey responses, distill them down through using topic modeling, and then provide those insights back to the administration to inform the pandemic response for the fall.

DANIELLE: 15:44

Correct.

SUSAN: 15:46

So that's really cool. And it sounds like it's been a really good opportunity to try out some different techniques and to have a real impact on the university with the projects that you're doing. Are there other challenges you've observed in using your data science expertise in this higher education context? Other things that have come up?

DANIELLE: 16:04

Yes. Our biggest challenge is that the university wasn't founded on gathering data and bringing it together to better understand itself and the students. So the data, it's often very siloed. There's often no documentation. And there's data owners, and some of them don't even want to share it.

SUSAN: 16:30

Oh wow. Yeah.

DANIELLE: 16:32

But we're working on-- we have a big project going on to centralize the data into a big data lake and even democratize it so all the smart people that we work with can also dig into it. My boss says that we are data rich but information poor.

SUSAN: 16:47

Yeah. Yeah. So what will democratizing access to that data look like for you all?

DANIELLE: 16:52

So there's the big data lake that should have, hopefully, eventually, everything in it, and then people will be able to know what's in there and then request what they want, and there'll be a person in charge of getting the data to people. And also some standardized views of Tableau dashboards and things that we know people will want that they can just see. And of course, access levels will be dependent on roles in the university depending on who you are. But currently, my group, not necessarily my team, but the Institutional Research Group, just does a lot of fulfilling data requests. But they still can only get to one certain piece of the data, not all the other pieces that people want.

SUSAN: 17:39

And to some degree, I'm sure that's just to be expected in a university where you have a lot of personal information that can't be shared very easily without some safeguards.

DANIELLE: 17:48

True, yes.

SUSAN: 17:49

Darn that FERPA [laughter].

DANIELLE: 17:52

Yes, getting financial aid data was really hard. And you have to be careful with it, too.

SUSAN: 17:58

Sure, yeah.

DANIELLE: 18:00

You can't share certain student-level data with people, but-- so we don't do that.

SUSAN: 18:05

Yeah, yeah. Of course.

DANIELLE: 18:07

I have been in a situation where I've had students wanting to work on data, understand-- the parking people wanted to know if having a parking permit affected retention. So in order to let them study that, we anonymized it before we gave it out.
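Danielle doesn't describe how that data set was anonymized; one simple, illustrative approach is to replace student IDs with salted hashes and drop direct identifiers before sharing. This is only a sketch under those assumptions, not a complete FERPA-grade de-identification process.

import hashlib
import pandas as pd

SALT = "replace-with-a-secret-value"  # hypothetical; would be kept out of the shared extract

def pseudonymize(student_id: str) -> str:
    """Return a stable pseudonym so records can still be joined without exposing the ID."""
    return hashlib.sha256((SALT + str(student_id)).encode()).hexdigest()[:12]

students = pd.DataFrame({
    "student_id": ["S001", "S002", "S003"],
    "name": ["Ada", "Grace", "Alan"],
    "has_parking_permit": [True, False, True],
    "retained_year_2": [True, True, False],
})

shared = (
    students.assign(pseudo_id=students["student_id"].map(pseudonymize))
            .drop(columns=["student_id", "name"])  # remove direct identifiers
)
print(shared)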

SUSAN: 18:25

That's so interesting. Yeah, even having spent years in a university, I wouldn't have thought about doing an analysis on parking permits. But I love it. It makes sense, right [laughter]? That's cool. I wonder what they found.

DANIELLE: 18:37

One other challenge is that maybe since data science is so new in higher ed, when units want something and my team is new, they don't necessarily come to us first. So I feel this competition with the outsourced data science tools that are out there as well.

SUSAN: 18:54

Yeah, that's interesting, isn't it? That's a whole nother industry. Yeah. And actually, back to your point about having people bring you projects, that actually speaks to what we talked about in our last episode with John Thompson, talking about building that demand for the work of the data science team within the organization. So that's interesting to hear that as a new team, I'm sure that's something that you want to continue to work on so that folks recognize like, "Hey, here's more people who can solve our data questions and give us some answers." Including about parking permits [laughter].

DANIELLE: 19:25

And in that respect, the Buffalo Trace has been super successful. And I guess McKinsey came and built them some type of model. So one of the people that is using Buffalo Trace said they used to call it the McKinsey model, but now they were going to call it the Lyles model [laughter]. And I said, "You can't call it the Lyles model, though. It's Buffalo Trace, but thank you."

SUSAN: 19:49

Well, it's like in science, if you discover something, you get your name put on it. So that would be cool [laughter]. That's awesome. One of the things that I heard over the years in academia was just some skepticism about whether data and modeling could be used for dealing with academic issues, or how much is an art and how much is a science. And I wonder if that skepticism is anything that you've encountered in your setting.

DANIELLE: 20:20

I haven't encountered any skepticism. People are really excited to use data science. Though it does matter what you're trying to use it for, so we're very careful about what we try to predict, for example. Because 18-year-old students are not very predictable [laughter].

SUSAN: 20:37

That's awesome. Very true, very true. So one other thing that, of course, is a major push for most colleges and universities and other organizations is diversity and inclusion. And is that part of what your office does in terms of planning your work, looking at projects, those kinds of things?

DANIELLE: 20:57

We are definitely always thinking about diversity and inclusion. It's really cool, working with-- everyone that I work with cares about students and all the types of students that there are. So, yeah, we're trying to build right now as diverse a class as we can. And Buffalo Trace is helping with that. Because if somebody is trying to get women engineering students, for example, they can select them all and prioritize their resources to recruit the ones that are most likely to come, to hopefully help us get more of them. And you can do that with any student subgroup. And then one thing that I've always done, I've done a lot of exploratory data analysis, or people asked me-- for example, I looked at attrition and graduation of ACOd students. And if you haven't heard the term, they're students who didn't get into the college that they applied for. So we say, "Hey, you could still come here, but you can be in our program for exploratory studies. And you can try to transfer in to the college that you wanted. Or maybe we'll help you find something that you like better, that's better for you." So it's an attrition and graduation analysis of them. When do they leave? How many of them leave? Where do they go? Do they go to another university or community college? Does it seem like they just drop out of higher ed? Which subgroups have the highest rates of attrition? And when I did that, I always look at everything, but I also pay attention to first-generation students. In all of my analyses, so every time I do an analysis, there's this theme that's always true in the data I show: if you control for first-gen status, ethnicity doesn't have that much of an effect on retention. So it's really about the first-generation students. And it's got the leadership very interested in what are we doing with them and how can we better support them. And even then, it's hard to get the data on them. So she's working on pulling together all the data from all the different places to even answer the question, "How many of our first-gen students are in special programming?"

SUSAN: 23:06

Right. So when you talk about special programming, do you mean special academic supports, other kinds of social interventions?

DANIELLE: 23:15

Academic and social, yes. All of it.

SUSAN: 23:19

Yeah. Interesting. And Danielle, do I remember correctly that you were a first-gen college student?

DANIELLE: 23:25

Yes, I was [laughter]. I am or was a first-generation college student, so yeah. Always looking out for them. And also, I look out for everything, but I make sure I don't forget to check all that stuff.
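Going back to the Buffalo Trace example Danielle gave a moment ago, using the model's scores to prioritize outreach for a particular subgroup could look roughly like the following. The columns and scores are hypothetical; the real tool's output format isn't described in the episode.

import pandas as pd

# Hypothetical scored admit list, as if produced by a deposit-probability model.
admits = pd.DataFrame({
    "student_id": [1, 2, 3, 4, 5],
    "college": ["Engineering", "Engineering", "Arts", "Engineering", "Business"],
    "gender": ["F", "F", "M", "M", "F"],
    "deposit_probability": [0.42, 0.18, 0.55, 0.31, 0.27],
})

# e.g., focus high-touch outreach on admitted women in engineering,
# starting with those most likely to pay a deposit.
priority = (
    admits[(admits["college"] == "Engineering") & (admits["gender"] == "F")]
    .sort_values("deposit_probability", ascending=False)
)
print(priority)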

SUSAN: 23:36

Yeah. That's awesome. I love that. I love how our personal experiences end up shaping our careers. And even when you're looking at something as seemingly objective and scientific as data, we have our own interests that we still bring to the table. And particularly in this case, it can have a really positive effect on those students. So super cool. So we talked a little bit about your topic modeling project with the pandemic survey. Were there other data challenges that came up for you and your team as a result of the pandemic or were there maybe some ways that the pandemic weirdly kind of made data projects seem more urgent or more effective?

DANIELLE: 24:15

Yes. Well, there were a couple of urgent things. One is something my boss worked on. But we awarded scholarships based on high school GPAs and standardized test scores, and then standardized test scores were no longer required for this year. So he had to come up with a new way of awarding scholarships. I can't go into the details on that because--

SUSAN: 24:37

Sure. Sure.

DANIELLE: 24:39

But another time-sensitive thing that was pretty cool was cohorting students. So student success, pretty sure all the research out there shows that cohorting students is good for their success academically. But we hadn't ever done it--

SUSAN: 24:53

Can we back up for one second and--?

DANIELLE: 24:55

Yeah.

SUSAN: 24:56

So you said cohorting students is important for their success. Can you tell us what cohorting means in this context real quick?

DANIELLE: 25:02

Yes. So this would be entering freshmen, and if they're in the same dorm, they're also in the same classes so they can make friends more easily and have more of a community. That's the idea behind it.

SUSAN: 25:14

Yeah. Nice.

DANIELLE: 25:16

So they always knew it was a good idea, but then along came COVID and it became a better idea because it could also help blunt the spread of COVID. So they said, "Hey, we're going to do this. If they're in the same dorm, we want them to be in at least two classes together, we'll batch enroll them into the classes." They gave them a survey. I did this for the ACOd students. So, again, they didn't have a major, but they had an interest, because we asked them what they were interested in studying. And so there wasn't a lot of time and we needed to batch enroll them. And my math brain was like, "Hey, this is an optimization problem. I could program that, could just program-- there's no problem." But there wasn't a lot of time. I realized after I tried to do it, it was a little harder than I thought. And I didn't have all the information that I needed. But what I ended up helping them with, they were super thankful for, and that was-- they had to get data from seven different places even to do this process, and I was able to merge it all together. And then we put them in their classes by hand.

SUSAN: 26:24

Oh, gosh. And how many students are we talking about here?

DANIELLE: 26:28

Like 2,000.

SUSAN: 26:30

Wow.

DANIELLE: 26:30

But I was able to merge everything and sort everything at least in such a way that we were all just in there putting classes into a spreadsheet. And then after that, I was able to write code that would take that and put it in the format that the Registrar needed very easily. So she loved our files, ours were the best [laughter].
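For the cohorting work, the merging and reformatting Danielle describes might look something like this: combine extracts from several systems on a shared student ID, then reshape the hand-assigned classes into one row per student-class pair for a batch-enrollment file. The sources, columns, and the Registrar's required format are all hypothetical.

import pandas as pd

# Hypothetical extracts from a few of the separate systems.
housing = pd.DataFrame({"student_id": [1, 2, 3], "dorm": ["Willard", "Willard", "Baker"]})
interests = pd.DataFrame({"student_id": [1, 2, 3], "interest": ["Biology", "Biology", "History"]})
survey = pd.DataFrame({"student_id": [1, 3], "prefers_morning": [True, False]})

# Merge everything on the shared student ID so staff work from one combined table.
merged = (
    housing.merge(interests, on="student_id", how="left")
           .merge(survey, on="student_id", how="left")
)

# After classes are assigned by hand in a spreadsheet, reshape to one row per
# student-class pair for the upload file.
assignments = pd.DataFrame({
    "student_id": [1, 2, 3],
    "class_1": ["BIOL1000", "BIOL1000", "HIST1010"],
    "class_2": ["WRTG1150", "WRTG1150", "WRTG1150"],
})
upload = (
    assignments.melt(id_vars="student_id", value_name="class_code")
               .drop(columns="variable")
               .sort_values("student_id")
)
print(upload)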

SUSAN: 26:52

I have no doubt. Awesome. I just think it's funny, thinking about your topic modeling results just all showing up as hand sanitizer and masks.

DANIELLE: 27:01

Hand sanitizer and masks [laughter]. Hand sanitizer stations outside all the classrooms and everywhere. Please.

SUSAN: 27:10

So much sanitizer. Awesome [laughter]. Well, we have one question that we ask all of our guests and want to give you a crack at it as well. So this is for our Alternative Hypothesis segment. And that question is, what is something that people often think is true about data science or about being a data scientist, but that you have found to be incorrect in some way?

DANIELLE: 27:35

Now, that's a hard question because I don't know what people often think is true [laughter].

SUSAN: 27:41

I know. I'm asking you to assume some knowledge that--

DANIELLE: 27:44

I guess the one thing that people often think is true that's not true in my role, and I've heard other people say is not true, is they think if you're a data scientist, you're just doing machine learning all the time and that's it. And I'll say that I did a ton of exploratory data analysis, and I also did quality analysis on models built by outside companies. And the Buffalo Trace model was the first time that I did any machine learning. And even then, it was a very small part of the time I spent on the project, because that part, when you understand it, is actually very easy and very fast. And the hard part-- but I think this is something that most people know, the hard part is understanding and learning the data, as my boss says. Or sometimes called cleaning the data. But it's really about understanding it and learning it, and you end up also cleaning it up while you're in there.

SUSAN: 28:40

I like that. It's helpful to reframe cleaning the data, which just sounds like a chore, like cleaning your bathroom or tidying or vacuuming, but instead thinking about it as learning your data. That's a very nice reframing of it.

DANIELLE: 28:53

Learning the data. And there's lots of ways to learn your data. Because, like the event data, it's not like I ran it through some algorithm, right? I'm learning the data, making sure the data looks good. I had to look at it over time, before the admissions cycle, to notice that it didn't go back as far for the students from 2018, so.

SUSAN: 29:14

Right, right. Well, and in addition to all that cleaning and learning of data and becoming familiar with it, it sounds like for the projects that you've done so far, there's been a lot of time spent gathering that data, convincing people that you should have access to these data, helping them then understand the outcomes. So sounds like communication has been a big part of those projects as well.

DANIELLE: 29:36

Communication is huge. Yes.

SUSAN: 29:39

Is there anything that we haven't talked about yet that you want to get in there about your data science career, about doing data science in higher ed? Anything else that stands out to you that we haven't talked about yet that you would like to address?

DANIELLE: 29:54

It's fun to-- I had an opportunity to sort of-- I don't want to say blow people's minds, but there were assumptions in the advising world about the ACOd students and when they leave. They thought they're leaving after their third semester, when they don't get transferred into the major that they want. And so when I looked at the data, I looked at when they leave and said, "They're actually still leaving." Most students leave after their second semester, and the ACOd students are the same way. And they're actually not even going and getting their major at some other university. So being able to just look at the data, and then present it to them and hear them say, "Wow, that's not what I thought." And that at least leads to a whole different way of thinking about why students are leaving. And they can get a different idea of it because they don't see all the students, right? They only see the ones that actually come to an advisor. And it could be that those students are the ones leaving after their third semester.

SUSAN: 30:58

Yeah, yeah. I definitely noticed during my career in academia that it was easy to kind of latch on to a few anecdotes about certain students versus actually seeing the bigger picture that the data can drive home.

DANIELLE: 31:11

Exactly. Yeah. So that was interesting. Not that I want people to be wrong, but I do want to find and share insights that then make an impact somehow.

SUSAN: 31:21

Yeah. Yeah, absolutely. And it's fun to blow minds, so that's always good [laughter]. It's always a good thing. Very cool. Well, Danielle, thank you so much for joining us today on Data Science Mixer. It's been great to have you here.

DANIELLE: 31:33

Oh, thank you. It's been lovely to be here. Sorry [laughter].

SUSAN: 31:38

That's okay. It's just a cheesy ending that-- I do it every time, so.

DANIELLE: 31:44

No, I like it. It's like when you-- every time I start a presentation, I'm like, "Hello, my name is Danielle and I'm very happy to be here."

SUSAN: 31:50

Exactly.

 

[music]

SUSAN: 31:52

Thanks for listening to our Data Science Mixer chat with Danielle Lyles. Join us on the Alteryx Community for this week's Cocktail Conversation to share your thoughts. For this week's conversation, share about your pandemic data projects. Danielle talked about how some of her projects addressed challenges the university faced because of the pandemic. Have you done any interesting data projects that grew out of the pandemic? Any topic modeling projects that gave you masks and hand sanitizer as results? Share your thoughts and ideas by leaving a comment directly on the episode page at community.alteryx.com/podcast or post on social media with the hashtag Data Science Mixer and tag Alteryx. Cheers.

 

[music]

 

 


 

This episode of Data Science Mixer was produced by Susan Currie Sivek (@SusanCS) and Maddie Johannsen (@MaddieJ).
Special thanks to Ian Stonehouse for the theme music track, and @TaraM  for our album artwork.