
Alter Everything

A podcast about data science and analytics culture.
MaddieJ
Alteryx Community Team

In this bonus episode of the Data [in the] Sandbox mini-series, Maddie calls Susan because she’s creeped out when her TV starts making personalized TV show recommendations. It’s like the TV is in her head! Thankfully, Susan explains that the TV isn’t in Maddie’s head – it’s just machine learning and artificial intelligence!


Panelists

Maddie Johannsen - @MaddieJ, LinkedIn, Twitter

Susan Sivek - @SusanCS, LinkedIn, Twitter


Topics

[Episode artwork: Data [in the] Sandbox kids' mini-series]

Transcript


[music]

MADDIE: 00:03

Welcome to a special bonus episode of Data [in the] Sandbox, a mini-series powered by the Alter Everything Podcast. I'm Maddie Johannsen, and today my friend Susan Currie Sivek is going to teach me about machine learning and artificial intelligence. For any kids, teachers, or folks beginning their data journey, this episode is for you. Let's jump into it.

SUSAN: 00:34

Hello?

MADDIE: 00:35

Susan. Thank God I caught you. There's this weird thing that I noticed, and I thought maybe you could explain it to me because I think it has something to do with this whole data thing.

SUSAN: 00:45

Oh wow. Well, I can try. What's up?

MADDIE: 00:47

Okay. So when I watch one of my TV shows, the TV somehow knows some other shows I might like and tells me I should watch them. It's so creepy. Because sometimes it picks shows that I do like and I don't know how it figures it out. It's like it's in my head somehow.

SUSAN: 01:06

Yeah. It's a recommendation engine.

MADDIE: 01:10

There's no engine in the TV like the engine in a car. I think I'd hear it if there was something in there.

SUSAN: 01:19

Right. Well, it's actually not that kind of engine. This is all computer code and machine learning. It's sort of artificial intelligence, like AI. Have you heard that term?

MADDIE: 01:32

Oh, yeah. Like robots or in science fiction shows.

SUSAN: 01:37

Kind of. Yeah. Oh, this is a really great question to talk about. So let's talk a little bit about AI and machine learning and algorithms.

MADDIE: 01:46

Yeah. That all sounds pretty fancy. So let's do it. I want to understand how the TV figures out these shows.

SUSAN: 01:52

Definitely. [music] So when we talk about artificial intelligence, we're really just talking about machines, like robots or computers sometimes, that do things that seem like they show human intelligence. So in other words, we might say they make decisions or even do things physically sometimes like humans would.

MADDIE: 02:19

Well, I know robots do all kinds of things. They can build cars. They vacuum the floor. There's even a surgery robot that I've heard of.

SUSAN: 02:31

Yeah, that's so true. Yeah, it's really incredible that people have figured out how to build machines that can do all those things.

MADDIE: 02:37

And the machines don't even have brains like we do.

SUSAN: 02:40

No, they sure don't. And in reality, the vacuuming robot and even the surgery robot they're sort of smart but only in a really narrow way. If you ask the robot vacuum to go do surgery or if you wanted the surgery robot to go vacuum it probably wouldn't go too smoothly. Even if you could somehow give them the right tools for those jobs, they still really wouldn't do those jobs well.

MADDIE: 03:06

But human surgeons are smart. They do surgery and can also vacuum houses. So why can't the robot surgeon figure that out?

SUSAN: 03:15

Yeah, well, if we could put a vacuum in the surgery robot's hand, I guess it could hold it, but the problem is that it wouldn't really know what to do with it. It hasn't been trained to use that vacuum, just like you and I haven't been trained to do surgery.

MADDIE: 03:31

Hmm.

SUSAN: 03:32

Yeah, I know it's weird. The robots and even your TV, they've all been built and trained to do various things. These days all of the artificial intelligence that we have is pretty specific. We've figured out how to teach a computer to play chess super well or even to do surgery or to recommend TV shows, but all that AI isn't what we'd really call general AI that can solve lots of different problems. We have narrow AI. It's software and hardware that does really specific things. Sometimes they work really, really well. So well that we get those creepy feelings like you had. Like, "This thing is in my head." But we haven't really figured out how to build artificial intelligence that can do all the things that humans do well all in one package.

MADDIE: 04:19

Okay. Interesting. So how do we teach robots to do things or teach my TV to do things? I mean, it's probably not like how I go to school.

SUSAN: 04:30

No, but school for robots sounds pretty neat.

MADDIE: 04:34

Yeah, well sometimes to be honest my teacher sounds like a robot.

SUSAN: 04:38

Oh, no. Well, robots might be teachers someday. In fact, you're probably already using some kind of computer software for school that has some aspect of artificial intelligence. Something we'd call machine learning.

MADDIE: 04:51

But I'm not a machine. Well, let's get back to teaching the robots.

SUSAN: 04:58

Yeah. I mean, a lot of how we teach robots is what we would call machine learning, but it's actually not just for robots. Basically, machine learning refers to algorithms that improve through experience. [music] So I can break that down a bit.

MADDIE: 05:13

Yeah. Start with that algo whatever that word was.

SUSAN: 05:18

Algorithms. Yeah. Algorithms are strategies or rules that computer software uses to figure out how to make a decision. And that decision might be something like your streaming service picking shows to recommend to you, or the decision might be your math software for school deciding which kinds of math problems you need to practice next. Both of those decisions get better or more useful or more accurate for you as the computer gets more experience. So it's learning as it sees which TV shows you like to watch, and the recommendations get better and more interesting. It's learning as it sees which math problems are hard for you, and it makes you do more of those so you get more practice.

MADDIE: 06:06

Well, thanks a lot algorithms.

SUSAN: 06:09

But if it makes you better at math that's helpful. Besides, I know you like those hard math problems. And these algorithms are all based on math, so you'll probably be creating them yourself someday.

MADDIE: 06:22

So what are the numbers in the math for the algorithms?

SUSAN: 06:26

So the algorithms are using your data. It's that data, again, like we've been talking about all through our conversations. Just numbers that represent your behavior and your choices. So the algorithms crunch through all those numbers and they make decisions. They say, "Well, Maddie watched five episodes of show A, but only one episode of show B, so let's recommend for her more shows like show A."
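The decision rule Susan describes could be sketched in a few lines of Python. This is purely a toy illustration, not how a real recommendation engine works; the show names, tags, and the `recommend` helper are all made up for the example:

```python
# Toy version of the rule Susan describes: count how many episodes of
# each show were watched, then suggest unwatched shows like the favorite.
watch_history = ["Show A", "Show A", "Show A", "Show A", "Show A", "Show B"]
show_tags = {
    "Show A": "comedy",
    "Show B": "drama",
    "Show C": "comedy",
    "Show D": "drama",
}

def recommend(history, tags):
    # Count how many episodes of each show were watched.
    counts = {}
    for show in history:
        counts[show] = counts.get(show, 0) + 1
    # The most-watched show is the "favorite."
    favorite = max(counts, key=counts.get)
    # Suggest unwatched shows that share the favorite's tag.
    return [s for s, t in tags.items()
            if t == tags[favorite] and s not in counts]

print(recommend(watch_history, show_tags))  # ['Show C']
```

Maddie watched five episodes of Show A and one of Show B, so the rule picks Show A as her favorite and suggests the unwatched show with the same tag.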

MADDIE: 06:51

And the more data the algorithms have the better they work?

SUSAN: 06:56

Yeah, for sure. We try to find exactly the right kinds of data that will help the computers make the best decisions.

MADDIE: 07:02

That's crazy. Teaching computers with data. Too cool. So when the TV suggests a show for me, it's actually learned what I like and what I don't like?

SUSAN: 07:13

Yeah. It's making its best guesses about that at least. And just to make it a little more complicated the recommendations that you get, those are often based not just on your data but also other people's data. The streaming service or the shopping website, they have tons of data about what people like to watch or tend to buy. With all that data, the algorithms can learn or we would say be trained to make pretty good recommendations for you. That's why they might sometimes seem like they're in your head.
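The "other people's data" idea Susan mentions can be sketched the same way. Again, this is a hypothetical minimal example (the viewers and their likes are invented), just to show the "people with similar tastes also watched" intuition behind collaborative recommendations:

```python
# Minimal "viewers like you also watched" sketch with made-up data.
# Each viewer maps to the set of shows they liked.
likes = {
    "maddie": {"Show A", "Show B"},
    "viewer1": {"Show A", "Show B", "Show C"},
    "viewer2": {"Show A", "Show C"},
    "viewer3": {"Show D"},
}

def similar_viewer_recommend(user, likes):
    mine = likes[user]
    # Score every other viewer by how many liked shows they share with us.
    scores = {v: len(mine & shows)
              for v, shows in likes.items() if v != user}
    best = max(scores, key=scores.get)  # the most similar viewer
    # Recommend what they liked that this user hasn't seen yet.
    return sorted(likes[best] - mine)

print(similar_viewer_recommend("maddie", likes))  # ['Show C']
```

Here viewer1 shares the most liked shows with Maddie, so the one show viewer1 liked that Maddie hasn't seen becomes the recommendation.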

S3: 07:44

Ooh.

MADDIE: 07:45

So let me see if I get it. Artificial intelligence, AI, is when computers seem like they're smart because they make decisions or learn or act kind of like humans do?

SUSAN: 07:58

Mm-hmm.

MADDIE: 07:58

And machine learning is part of helping them do that. So the data is what the algorithms use to get trained to do whatever it is they do?

SUSAN: 08:08

That's right. Awesome.

S3: 08:09

Hooray.

SUSAN: 08:10

And it's important to remember that although all this stuff seems really high-tech and complicated, it's actually humans building all of it. We design the systems that collect data, we make the systems that learn from the data, and we create the algorithms that do things like make recommendations based on the data.

MADDIE: 08:28

Well, yeah, it's not like the robots and TVs just dropped in from outer space or something.

SUSAN: 08:34

No. But again, that would be kind of cool. I'm going to add that and school for robots to my list of novel ideas. Well, anyway. Yeah, exactly. Humans come up with all this stuff. So we have to be careful about how we design artificial intelligence systems.

MADDIE: 08:51

Let me guess, there's what my parents call a cautionary tale coming up here.

SUSAN: 08:57

Oh, yes. Definitely. So one of the more famous stories is from 2016 when people created a chatbot. That's a computer program that can respond automatically all by itself to people's questions or comments. But the people who made it, they wanted it to learn how to talk to people by learning from how people communicate on Twitter.

MADDIE: 09:20

Yeah. My parents won't let me go on Twitter.

SUSAN: 09:24

Yeah, yeah. They might have a good point there. Your parents don't want you to learn bad habits of communication from what you might see on Twitter. Twitter is sometimes great but sometimes pretty awful. And that's really the same reason this whole plan backfired. You don't want to teach your computer program to communicate based on questionable communications. We call that training data: the data that was used to teach the computer what to do.

MADDIE: 09:49

Oh, no. I think I see what's coming.

SUSAN: 09:51

Yeah, yeah. This chatbot learned some pretty bad behavior from the worst of what it saw on Twitter. It said such mean and offensive and even racist things that it had to be shut down.

MADDIE: 10:04

Okay. Yeah. That is a wild story. But did other people figure out how to help computers learn to be nice?

SUSAN: 10:12

Well, we figured out a lot, but things aren't perfect yet. There have been other cases where machine learning systems have made unfair or inaccurate recommendations for people who aren't men or people who aren't white or people from different backgrounds. That could mean that somebody doesn't get a chance at a job or is unfairly suspected of a crime or maybe even gets an incorrect medical diagnosis. So it's super important that people creating these systems are aware of that possibility and take every precaution they can to make sure people are treated fairly.

MADDIE: 10:47

Wow. And I thought we were just making cool robots and finding TV shows with this stuff. That's a lot of responsibility.

SUSAN: 10:56

Yeah. Yeah, it is. But you know the saying, with great power--

MADDIE: 11:01

Comes great responsibility.

SUSAN: 11:03

Yeah. Exactly. And this challenge is also why we need more people from all kinds of backgrounds with all kinds of experiences to study and work in data science. Remember, we talked about data jobs in our last conversation. And even if you don't end up working in machine learning or building robots or whatever, this stuff is still really important to understand because it affects all of us every day. Even things like your TV show recommendations.

MADDIE: 11:30

Oh, yeah. Speaking of that, I should probably get back to my shows.

SUSAN: 11:34

For sure. Just remember it's all data and algorithms behind the scenes. [music]

MADDIE: 11:44

Thanks for tuning in to Data [in the] Sandbox. This episode was written by Susan Currie Sivek, theme music by Andy Uttley, and album artwork by Jen Ho. To hear all of the Data [in the] Sandbox episodes, be sure to visit community.alteryx.com/podcast or find us on the Alteryx YouTube channel. There's an entire playlist of Alter Everything Podcast episodes, clips, and fun animated videos to learn about all things data. Catch you next time.


This episode of Alter Everything was produced by Maddie Johannsen (@MaddieJ).
Special thanks to @SusanCS for writing this episode, @andyuttley for the theme music track, and @jeho for our album artwork.