
Data Science Mixer

Tune in for data science and cocktails.

Do children interacting with AI-enabled systems need special protections? Steven Vosloo, policy specialist in digital connectivity at UNICEF, shares what data scientists should keep in mind to ensure the rights of children in digital spaces. 

 

 


Cocktail Conversation

 


 

If there are kids in your life, have you noticed anything interesting about how they respond to AI?

Have you considered how kids might engage with your data science work? How has that shaped your practice?

 

Join the conversation by commenting below!


Episode Transcription

SUSAN 00:00

[music] On Data Science Mixer, we've had a lot of conversations with data science and AI experts where issues around ethics and personal rights have come up. They've had a variety of opinions and concerns. But one thing all those experts had in common? They've all been adults. We haven't talked to any kids about these issues. But today's guest has done that around the world as part of his work with UNICEF on the rights of children with regard to AI policy and strategy. Welcome to Data Science Mixer, a podcast featuring top experts in lively and informative conversations that will change the way you do data science. I'm Susan Currie Sivek, senior data science journalist for the Alteryx community. For today's episode, I talked with Steven Vosloo, and I'm excited for you to hear more about his work on policy and strategy, plus his suggestions for what data scientists can do around this issue in both their professional and personal lives. Let's hear more from Steven.

STEVEN 01:01

Thanks, Susan. Thanks for having me. My name is Steven Vosloo. I am a digital policy specialist at UNICEF. So I'm based at their headquarters in New York City.

SUSAN 01:11

Excellent. And do you mind sharing with us which pronouns you use?

STEVEN 01:15

Yes. He/him.

SUSAN 01:16

And one more very important question. As you know on Data Science Mixer, we often try to have some sort of special beverage or snack or something with us while we're chatting. So do you have anything special with you there today?

STEVEN 01:27

I just have water, but for now, for now.

SUSAN 01:32

Yes. Very good.

STEVEN 01:32

But I'm looking forward to a glass of wine later, so there we go.

SUSAN 01:34

Oh, excellent. What kind of wine, if I may ask?

STEVEN 01:37

I like red. Yeah. I like red. Yeah. French red.

SUSAN 01:41

Cool. Very nice. Very nice. As an Oregon resident, I think I'm legally obligated to say Oregon Pinot Noir is one of my favorites. But it actually is, so that works out well. [laughter] Awesome. Maybe you could just tell us a little bit about how you got to the position you're in right now, kind of your journey into digital policy and the particular area that you primarily focus on.

STEVEN 02:02

Yes, of course. So I started my career after studying computer science, and I graduated in '95. So in '96, I was a web developer. And believe it or not, I'd spent three years learning about mainframes and COBOL. So the internet had just-- I remember, I went online I think in '94. It wasn't even part of our curriculum, but you could learn HTML and pick it up. So it was a great time. I spent the first five years working, yeah, as a web developer in Johannesburg - I'm from South Africa - and then in London. And of course, it was the dotcom time, so for techies, a great time, exciting, and we thought we had finally taken control of the world and this was our moment. And then, of course, the bubble burst. And I went travelling, and I remember picking up a magazine at the time called Yahoo! Internet Life, a French magazine from Yahoo. All right. So--

SUSAN 02:56

French magazine about the internet, wow, that is truly an artifact. [laughter] I remember those things too.

STEVEN 03:03

Right. And there was an article about all these techies in Silicon Valley who were out of work after the bubble burst and had been using their talents to support non-profit organizations, developing systems for social impact and education and health. And this really moved me. It was an aha moment for me. So since then, I've stayed in technology, and my work has really been about digital for social impact. I've worked in e-government in South Africa as a UX specialist. I gave up coding after a while. I realized I'm actually not that good at coding, but I could speak geek and I could speak English. So it was a good time to be a kind of interlocutor between the rest of the world and the tech teams. So I worked there. I've done a lot of work with young people, digital media, and youth and cross-cultural awareness. At Stanford, I did a project in 2007 on connecting young people around the world. And at that time, the research showed that this really was good at breaking down barriers and connecting people. And of course, it still is, but it just looks a little different now. And then I joined the UN and have been working mostly in digital learning. I joined UNICEF three years ago. So I'm really using my technology and implementation experience, looking at what kind of policies should be in place to make sure that digital spaces for children are both safe and empowering. Yeah. So a lot of my work has been on digital literacy, children and misinformation, and of course, children and AI. That's my biggest project.

SUSAN 04:49

Yeah. Yeah. And, of course, that brings us to our data science connection here, which is awesome. And I would love to hear more about that particular project. It sounds like there are a lot of different forms that AI can take in kids' lives, and I'm sure many of our listeners will be familiar with, if not involved in, actually building some of these. So I'm curious about some of the main areas where you see AI having a role and that you're focusing on right now.

STEVEN 05:17

Sure. So as you well point out, AI is very much in the lives of children, whether it's through the recommendation engines in video platforms or news feeds, or putting bunny ears on your selfie, or virtual assistants, to increasingly in the classroom through personalized learning systems. So children have this direct interaction with AI-enabled systems, but there are also indirect impacts. For example, AI systems that determine their educational opportunities or their parents' loan application, which have a kind of knock-on effect on the child's quality of life. So what we've seen is that AI is very much in children's lives, and at the same time, we didn't find in our research that children as a demographic and their rights were being recognized enough in AI policies and AI strategies, or even in the development of AI systems.

STEVEN 06:23

So I can give you an example. We looked at 20 national AI strategies, and there was almost no mention of children except as the next generation of AI talent. And there's nothing wrong with that, but you need to think more broadly than children as just a kind of resource. Think about how AI can support children's development or support their health and their education and their protection, their participation. So yeah, we really wanted to kind of dive down and raise awareness about children. I need to say something briefly, especially given what you said about your listeners developing these systems. There was a great report that came out a few months back from Data & Society called The Unseen Teen. And it was about how little children-- and when I say children, I'm using the UN definition of below 18.

STEVEN 07:20

So it includes adolescents, of course. How little they're actually involved in many big tech development programs. This is generally speaking, because of course there are many products developed with kids. But the report was based on interviews with many leading practitioners, and there was a real gap there. So the unseen teen is a gap that we thought a spotlight needed to be shone on, especially given that children are a major user group. One third of all online users are children. So, yeah, that basically needs to change. And part of that is recognizing children's rights. And I could talk about that if you want, and what that means in AI.

SUSAN 08:07

Yeah. I'd be curious to kind of hear your definition of rights in that context.

STEVEN 08:11

Well, being UNICEF, we look to the Convention on the Rights of the Child, which was developed in 1989, and it's the most ratified UN treaty of all time. Most countries in the world have signed on to it. And it basically says that a child has all the rights of an adult, plus a set of additional rights. So your right to healthcare, your right to education, your right to protection. Children have a right to play. It is a UN right, which is amazing. So if we look at how AI systems impact children, we have to look at it through the lens of those rights, both protection and empowerment, because children also have a right to participate in the matters that impact them. So designing with children and including children in the design process is part of the exercise of that right. The Convention does also say that children have evolving capacities, which we know. A 5-year-old and a 17-year-old are quite different in many ways. But those are, broadly speaking, the child rights.

SUSAN 09:20

Awesome. Thank you. Yeah. So it sounds like age is a potential distinguishing factor among how kids might be exposed to AI or involved with online activity generally. Are there particular areas of concern around different demographic groups kind of beyond age, maybe looking at gender or socioeconomic status? Particular groups that are maybe of more concern for any reason?

STEVEN 09:45

That's a really good question. Yes. So within age, of course, you get these developmental stages. So age itself is a bit of a blunt instrument, because not all 13-year-olds are the same, right? But we've had to come up with some kind of system. Even within age, you get early childhood, kind of early adolescence, later adolescence. These evolving capacities, of course, impact things like your ability to understand issues of consent. If the system that you're interacting with is collecting your data, do you understand that? Do you understand that it's an AI system or AI-enabled system versus a human being? Right. So that's the one. In terms of demographics, for sure we know that your gender makes a difference, where generally girls don't have the same exposure to digital literacy programs or AI literacy programs, or in many parts of the world, the same kind of opportunities to be online.

STEVEN 10:46

So your gender matters. Your socioeconomic status, I guess, matters in the sense that your data may not be used to train the models that you're interacting with. So you may be a child in South Africa, whereas the system you're using was largely trained on the American child. And even that is such a broad generalization, right? So I would just say as well that, because data is such a key part of AI, children's data is different. And when we say that, we mean, one, the way that children understand the use of their data, coming back to understanding whether it's being collected. They may not be aware of their rights, in the sense of their data rights as a consumer. Or if they feel something's going wrong on a tech platform, their ability to seek redress. And in the US, you have COPPA, which is the Children's Online Privacy Protection Act. So that, for example, says that data from children under 13 can't be used for [inaudible] advertising. So, yeah. The data is different, and we do need to think differently about this particular demographic of user.
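
For listeners who build these systems, here's a minimal sketch of what an age-aware data-use check like the one Steven describes might look like. The under-13 threshold is the real COPPA line he mentions; everything else, including the function and field names and the deliberately strict consent handling, is an illustrative assumption, not a real platform API or legal advice.

```python
from dataclasses import dataclass

COPPA_AGE_THRESHOLD = 13  # US Children's Online Privacy Protection Act

@dataclass
class UserProfile:
    user_id: str
    age: int                        # self-declared or verified age
    parental_consent: bool = False  # verifiable parental consent on file

def allowed_data_uses(user: UserProfile) -> set:
    """Return the data uses permitted for this user under a child-aware policy."""
    uses = {"core_functionality"}  # the minimum needed to run the service
    if user.age >= COPPA_AGE_THRESHOLD:
        uses |= {"analytics", "ad_personalization"}
    elif user.parental_consent:
        # This sketch takes the strict route: even with parental consent,
        # targeted advertising stays off for under-13 users.
        uses |= {"analytics"}
    return uses

print(allowed_data_uses(UserProfile("u1", age=11, parental_consent=True)))
print(allowed_data_uses(UserProfile("u2", age=16)))
```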

SUSAN 11:56

Right. Right. So I believe you have a report coming out pretty soon that's going to talk about some specific guidance around kids and AI and these issues around data and demographics and so forth. So I'm curious what some of the key takeaways are going to be in that report?

STEVEN 12:10

Absolutely. So let me zoom out for a second. What we'll be releasing is version two of the guidance, and we're very excited about this. We released version one in September last year. And drawing on my techie background, I wanted to take a slightly different approach to developing a policy guidance. So we went through a fairly consultative process to develop it last year. We met with AI experts around the world to get regional inputs into what this kind of guidance should be about, how you think about children and AI. We also, for the first time for the UN, met with a whole bunch of kids. So we held nine workshops, which were for me one of the best parts of the project, again, around the world, and asked them not what they think an AI policy should be about, but what excites them about AI and what worries them.

SUSAN 13:04

So can I pause you there for just one second? I'm curious about what those workshops look like. Do you just kind of have like a classroom setting with kids talking about these issues? Or how did that look?

STEVEN 13:15

So we actually did write this up in a workshop methodology, which I can send to you to put in the show notes. It's a really great methodology. Typically, we would do it more out of school than in school, so we set up a much more informal kind of space. It's about half a day, and we spend the first while just talking about broad AI concepts and also talking to the kids about their digital lives, just to say, "Okay. What do you do in your day? What do you do when you wake up? What's the last thing you do at night?" And then we map it out. And you look at the digital day and the footprints they've left, and you start contextualizing that within AI systems and algorithms. And the most interesting part was, we broke them into groups and presented them with a bunch of case studies. For example: you're applying to college, and for the first time, your application is going to be reviewed by an AI system.

STEVEN 14:15

And these are the criteria that they will look for and grade you on. And then we would speak about issues of fairness and issues of bias. So this was a really nice way to talk about a complex issue in a way that the kids could relate to. Yeah. And then we ended with these questions of what excites you and what worries you. That was interesting because it really showed how different groups have different reactions. So, what worries you? The kids in the US and in Europe spoke about being hacked or about how lazy they'd become because robots do everything for them.

SUSAN 14:52

[laughter] That's funny.

STEVEN 14:53

Yeah. Whereas kids in South Africa, in Johannesburg-- because unemployment is so high there, especially youth unemployment, it really concerned them that AI could be taking away jobs that are already scarce.

SUSAN 15:05

Oh, wow. Interesting.

STEVEN 15:07

Right. But on the other hand, they were excited about how AI could be used for alleviating poverty or improving healthcare at a national level. So, yeah. It was really great fun, and we wrote a report on it. Yeah. And we've got this methodology if anybody else wants to consult kids in the AI process.

SUSAN 15:27

Yeah, yeah. That's interesting. And I can imagine that, again, that might be of interest to some of our listeners who might be working in this space. So, yeah, thanks for the details there. It's a fun image just to think about kids sitting around talking about AI bias. That's awesome. Very cool. So I'm sorry. I think I disrupted your discussion of the takeaways from the report as well.

STEVEN 15:47

Yes. I'm sorry. I wanted to say we released the report and the guidance in draft last September based on these consultations, but purposefully in draft, which is quite unusual for the UN, because we recognize we don't have all the answers, and moving from AI principles to practice is the real challenge. Right? There are, I think, well over 160 sets of principles for ethical and responsible AI, so we don't need more principles. We need to work out how you apply them. Right? So we released it almost like an MVP of a policy guidance. And we've been working with eight organizations around the world to pilot the guidance and tell us what works and what doesn't, and we've written those up in case studies. So what we'll get at the end of November is version two, the non-draft version of the policy guidance, and this pack of case studies and some additional resources, like tips for kids and tips for parents.

STEVEN 16:44

But in terms of what's in the guidance, we have nine requirements for child-centered AI. And I won't go into all of them. But for example, we spoke about inclusion of children in the design process, not just the AI design process but the AI policy design process, and meaningful child participation. So not just rolling in three kids at the end to tick a box. Right? Really involving children throughout the process and really taking on board what they have to say about the impact of the features that might be implemented or how policy is worded. Obviously, diverse teams are something we're looking at. We talk about the requirement to provide transparency, explainability, and accountability for children. So your listeners will recognize some of these principles as being part of the general body of AI principles. But we kept asking: what does it mean for children?

STEVEN 17:45

So right. So in this case, explainability needs to be in language that's age appropriate. Right? And when there are children involved, there needs to be a human in the loop more often than for adults. And oversight bodies that look at accountability should have child rights experts on them to think about these additional impacts on children. We speak about protecting children's data and privacy and really taking a privacy-by-design approach. And I'm not sure if you've come across this, but recently in the UK, a code came into force called the Age Appropriate Design Code. So it's specific to the UK, but it really looks, from a very practical design perspective, at how, if your platform has child users, even if it's not intended for children, you need to take this privacy by design and default to the highest levels of privacy. So location tracking is off by default, and many of the data-sharing features really require you to opt in versus being opted in by default. So it's that kind of thinking.
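
To make that privacy-by-design-and-default idea concrete, here's a small sketch of how a platform that might have child users could resolve settings: start anyone who may be a child at the most private values and apply only explicit opt-ins on top. The setting names and the helper function are illustrative assumptions in the spirit of the Code Steven describes, not its actual requirements.

```python
# Most-private defaults for users who may be children (setting names are
# hypothetical, in the spirit of the UK Age Appropriate Design Code).
CHILD_DEFAULTS = {
    "location_tracking": False,          # off by default
    "public_profile": False,
    "data_sharing_with_partners": False,
    "personalized_feed": False,
}

# A platform might relax some defaults for confirmed adults.
ADULT_DEFAULTS = {
    "location_tracking": False,
    "public_profile": True,
    "data_sharing_with_partners": False,
    "personalized_feed": True,
}

def resolve_settings(explicit_opt_ins, may_be_child):
    """Start from the highest-privacy defaults for potential child users,
    then enable only the features the user explicitly opted into."""
    settings = dict(CHILD_DEFAULTS if may_be_child else ADULT_DEFAULTS)
    for key in explicit_opt_ins:
        if key in settings:
            settings[key] = True  # an opt-in enables exactly that feature
    return settings

# If the platform can't rule out child users, everyone defaults to private:
print(resolve_settings({"personalized_feed"}, may_be_child=True))
```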

STEVEN 18:53

And then we also talk about preparing children for the AI future, and for the present, so AI literacy, an AI curriculum. And it's not about every child being the next data scientist or coder. Right? Some people do say that not every child is going to be a coder, but every child should be a conscious and critical user of technology, asking questions about how the data is collected and what the privacy settings are. Right? And what are they getting in return for a free platform? I mean, questions that increasingly are being asked, but yeah, that children should be asking. So there were nine of them. I encourage everyone to go and look at the guidance and use it as much as possible. We had some good feedback and response from the draft version.

STEVEN 19:42

We've had some dev teams at ByteDance and TikTok contact us and say, "Can you come and talk to our teams and our protection teams, just to try and interpret what those guidelines, which are still quite high-level, could mean for our platform?" So yeah, that's been really rewarding. And we've had the government of Scotland officially adopt these requirements, this guidance.

SUSAN 20:03

Wow.

STEVEN 20:04

Yeah. They released their national AI strategy in April, so it's baked into that, which we're really excited about. Yeah.

SUSAN 20:12

Yeah. That's great to hear some concrete results of your work, and to hear from companies and governments that can make these things come to life in the hands of kids. I'm sure that is very rewarding. It's terrific. Awesome. So you mentioned Scotland here, but are there other differences in terms of policy, and maybe the openness to addressing this particular area of AI policy, that you've observed as you've looked at different countries around the world?

STEVEN 20:41

Yes. We have, actually. So now at least 14 governments have developed AI strategies around the world, although we know that most of the capacity and funding is concentrated in the north, that's kind of the US, Europe, and China. Right. So that's where the bulk of the heavy lifting happens. I wanted to just talk about something that's happening in Europe. So in terms of AI regulation, there isn't any of that globally. The first kind of attempt came out of the European Commission in April of this year. It's a proposal for a regulation of AI; they're calling it the AI Act. And it's pretty good. It's worth taking a look at. So the same way GDPR governs data in the European Union, this would apply to the European Union and those countries. What's interesting is that it does reference child rights and the Convention on the Rights of the Child, and it does actually pick out some wording around children and around their evolving capacities. What's also interesting is that they categorize AI systems into different risk categories. And I just mention this because children are mentioned in some of them. So the worst kind of scenario is what they call unacceptable risk. These systems would not be allowed. So this is AI used for social scoring or the exploitation of the vulnerabilities of children. They also ban live biometric identification in public spaces.

SUSAN 22:17

Oh, wow.

STEVEN 22:18

So facial recognition and [inaudible] cameras in public spaces, except there are one or two exceptions, and one of them is to find missing children. Right. But then they also have this next category of high-risk AI systems, which are acceptable, but you really have to show as a provider that you are complying with minimum standards and that you're checking the security and the robustness of the system. And these are ones like using AI in HR for hiring and firing, or for selecting who gets access to which benefits, educational benefits or health benefits. So that's really encouraging to see. And yeah, we supported that as UNICEF.
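
Here's a rough triage sketch of those risk tiers as Steven outlines them. The tier names and the child-related examples come from the conversation; the use-case strings and the simple lookup logic are simplifying assumptions for illustration, not the Act's actual legal tests.

```python
# Draft EU AI Act risk tiers, as described in the interview (simplified).
UNACCEPTABLE_RISK = {
    "social_scoring",
    "exploiting_child_vulnerabilities",
    "live_public_biometric_id",  # banned, with narrow exceptions
}

HIGH_RISK = {
    "hiring_and_firing",
    "education_access",
    "benefits_eligibility",
}

def triage(use_case, finding_missing_children=False):
    """Map a use case to a risk tier in the spirit of the proposed AI Act."""
    if use_case in UNACCEPTABLE_RISK:
        # One of the exceptions Steven mentions: live biometric identification
        # may be permitted to find missing children.
        if use_case == "live_public_biometric_id" and finding_missing_children:
            return "permitted exception (still tightly controlled)"
        return "prohibited"
    if use_case in HIGH_RISK:
        return "high risk: provider must show compliance, security, robustness"
    return "lower risk"

print(triage("exploiting_child_vulnerabilities"))
print(triage("live_public_biometric_id", finding_missing_children=True))
print(triage("hiring_and_firing"))
```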

SUSAN 23:04

Yeah. Yeah. And I imagine that like GDPR, there will be kind of a ripple effect of those policy changes around the world more broadly.

STEVEN 23:12

Exactly. Yeah. Yeah. Exactly. That's definitely the European Commission's hope, I would say. Yeah.

SUSAN 23:18

Yeah. Yeah. Interesting. So what are you excited about when it comes to looking at issues around AI and kids in terms of maybe new innovations or policy changes? What are some things that you're looking forward to in the future?

STEVEN 23:33

I think this experience of consulting with children was so rewarding, just seeing how digitally engaged they are. They really wanted to know more about AI systems. They wanted to know what's happening with their data. They wanted to be included in the design and the policy. So I was really encouraged by that. I was excited by that because, as the next generation of data scientists and AI policymakers and regulators, it's encouraging that they're aware of these issues, perhaps in a basic way now, but want to know more. What's interesting is that in the workshops, they really expected more of industry, of the tech platforms. That's what we found, anyway. It was a fairly small sample, 245 kids. But they were aware of this trade-off of, I give my data but I get ads. Right? There's an exchange. And they obviously love their-- they didn't think about their online life or offline life. It was just life.

SUSAN 24:40

Right. Yeah. No boundary there.

STEVEN 24:43

Exactly. Yeah. So they're excited about all the technology that they're using. But I think, yeah, they wanted to feel that there was transparency and more protection. Yeah. Which is also encouraging.

SUSAN 24:56

Yeah. Yeah. There are interesting generational shifts with regard to those opinions, and then maybe, eventually, in the way companies handle those issues. So good to hear.

STEVEN 25:04

Exactly. But beyond lots of kids using AI, I think about the use of AI to better provide for children's healthcare. Or UNICEF itself uses AI in our programming to do things like better model disease outbreak and spread, or we have an initiative that tries to measure the connectivity levels at schools and map that, and basically use that data to inform programs to connect all schools and connect kids.

SUSAN 25:34

Cool.

STEVEN 25:35

So the opportunities for using AI in international development are enormous. It's hugely exciting. Yeah.

SUSAN 25:42

Yeah. Yeah. Absolutely. So I thought I might throw one more question in here. You mentioned tips for parents, and I imagine we have probably a few parents out there among our listeners. I was wondering if maybe you would share one or two of those that might be useful for parents who are thinking about how their kids are relating to AI and data?

STEVEN 25:59

Sure. This was probably the hardest document we had to work on, because there's often such an intergenerational gap in terms of experiences and expectations. And because UNICEF is global, we think about-- again, I'm from South Africa, where your average parent is more disconnected from their children's digital lives, and vice versa, than perhaps in, let's say, Silicon Valley. But if you think about that, what kind of conversations might they have? And that really was one of the recommendations: to really talk to your children and take an interest in their digital lives, and have a conversation about-- it doesn't have to be about AI, but the implications of those systems. Like, what happens to your data, and what happens to your memories or your online activity today that could come back to haunt you later? "That's cool today, but not so cool in two years' time." And are you comfortable with that?

STEVEN 26:58

So, yeah, I think one is really taking time to have those conversations. And secondly, just to really take an interest in children's digital lives. We often hear parents going, "Well, I don't know about that stuff," and I don't know. They're on--

SUSAN 27:12

What is TikTok? I don't know.

STEVEN 27:14

Well, exactly. Yeah. But just on this [inaudible] day, well, you should find out what they're doing, because it's interesting and it opens the door for those conversations. But I think also the last one, which I think is quite an easy one, is to ask your school what their privacy policy is, what their policy is around profiling and surveillance, which we know is a big part of predictive analytics and data collection. So just being more curious about whether there are rules or policies in place.

SUSAN 27:50

Yeah. No. That's a great point. Thinking about all the different kinds of classroom management software and tools that are out there that are undoubtedly becoming more complex in terms of their capabilities, but where do all those data go? So I think that's a great point.

STEVEN 28:03

Yeah. Exactly. Yeah.

SUSAN 28:05

Cool. So I wanted to ask you one question that we always ask on the podcast, as our alternative hypothesis recurring segment. And the question is: what is the thing that people often think is true about data science generally, or about working on issues around AI and data science, but that you have personally found to be incorrect?

STEVEN 28:27

That is a great question. In our consultations with experts around the world-- they weren't AI experts; a lot of them were government policymakers-- you often get somebody who's suddenly put in charge of regulating AI or coming up with a policy. They don't have any background, and they're learning fast, but they're overwhelmed by the space. So I think my answer would be this perception, for those who are not in data science or in AI, that it's all magic and that it's unstoppable, that it has a life of its own. Right. And that perception really is very widespread. I know we laugh about it, but we really need to stress the point that there are people behind all of this, behind the systems, behind how they're designed, how they're optimized, how the data is collected. So we can really guide that. So the myth that AI is magic, that would be my answer.

SUSAN 29:24

Yeah. No. I like that. Fortunately, we're not at the point yet where we've built anything that is unstoppable, as far as I know. So that's good. There is hope at this point.

STEVEN 29:35

I know. No. Exactly. And I think that's true. And perhaps people have higher expectations as a result. Some of it they described like, "Wow. I didn't know AI could do that, or machine learning."

SUSAN 29:49

[laughter] That's funny. They're just very optimistic, I guess.

STEVEN 29:53

Well, exactly. Exactly. I should just, yeah, add, if I can, just two other bits. The one is, by the time this is edited, it will have happened, but we're hosting on the 30th of November and 1st of December a global forum on AI and children, which will look at AI in education and healthcare, and look at the guidance and how it's being used by these different case study organizations. So the recordings of that will be live. So--

SUSAN 30:20

Awesome.

STEVEN 30:21

--if somebody is interested in this, please check that out. And then the other point is, we're also doing a lot of work at UNICEF on data governance and around the governance of children's data. We have a manifesto and a whole bunch of papers. So again, from a data science perspective, this is more on the governance side. But yeah, there are some really great resources there that I'd encourage anyone to check out.

SUSAN 30:44

Yeah. Absolutely. And we'll put links to those in the show notes as well so folks can find them more easily. Terrific. Well, this has been really thought-provoking and really interesting to hear all about your work, Steven, and I really appreciate you taking the time to join us for the podcast.

STEVEN 30:58

Thank you, Susan. I appreciate it.

SUSAN 31:01

[music] Thanks for listening to our Data Science Mixer chat with Steven Vosloo. Join us on the Alteryx community for this week's cocktail conversation to share your thoughts. For this episode, let's chat about what you've observed with kids' interactions with data science and AI. If there are kids in your life, have you noticed anything interesting about how they respond to these technologies? Or from the professional side of things, have you considered how kids might engage with your data science work? And has that shaped your practice at all? Share your thoughts and ideas by leaving a comment directly on the episode page at community.alteryx.com/podcast or post on social media with the hashtag #DataScienceMixer, and tag Alteryx. Cheers. [music]

 


This episode of Data Science Mixer was produced by Susan Currie Sivek (@SusanCS) and Maddie Johannsen (@MaddieJ).
Special thanks to Ian Stonehouse for the theme music track, and @TaraM  for our album artwork.