48. Jessie Smith co-designing AI

This episode we're chatting with Jess Smith about the Radical AI podcast and defining the word radical, what AI is (a non-living ability to learn… maybe), AI consciousness, the responsibility of technologists, robot rights, what makes us human, creativity and more...
Date: 25th of November 2020
Podcast authors: Ben Byford with Jess Smith
Audio duration: 50:23 | Website plays & downloads: 219
Tags: Consciousness, Education, Podcast, Rights, Co-design, Creativity, RadicalAI | Playlists: Creativity, Consciousness

Jess is a PhD student in the department of Information Science at CU Boulder. Jess received her Bachelor’s in Software Engineering from California Polytechnic State University. Her research foci include machine learning fairness, algorithmic bias, value tensions in sociotechnical systems, and the unintended consequences of rapid tech growth.

Follow Jess on Twitter @_jessiejsmith_ and www.radicalai.org


Transcription:

Ben Byford[00:00:07] Hi and welcome to the 48th episode of the Machine Ethics podcast. This time, we're with Jessie Smith, from the Radical AI podcast. She's also a PhD student in the department of Information Science at CU Boulder. We chat about the Radical AI podcast and defining the word "radical", what AI is, discovering new ethical issues using science fiction in her podcast Sci-Fi IRL, and who is responsible for AI. We also discuss some current hot topics like GPT-3 and automated cars hitting people, we look at how we should value computer-generated creativity, co-designing systems, and creating ethical real-life example projects in CS education, and right at the end, we have a brief chat about AI consciousness. You can find more episodes of the Machine Ethics podcast at machine-ethics.net, you can contact us at hello@machine-ethics.net, you can follow us on Twitter at Machine_Ethics, or Instagram at MachineEthicsPodcast. If you'd like to support us, then support us on Patreon.com/machineethics, or get in contact for more information about sponsorships. Thanks again for listening, and hope you enjoy.

Ben Byford[00:01:24] Cool. Hi Jess, welcome to the podcast. If you could just quickly introduce yourself. Who are you and what do you do?

Jessie Smith[00:01:29] Of course. So, my full name is Jessie J Smith – that’s what you’ll see online in many different spheres, but you can call me Jess. I am currently a PhD student at the University of Colorado in Boulder in the United States, and I am pursuing a PhD in Information Science. It’s totally okay if you don’t know what that means, because I still don’t know what that means! But broadly speaking, my research bridges the gap between the computer sciences and the social sciences. So I’m taking a very holistic approach towards issues of ethics and fairness and equity, specifically as it relates to artificial intelligence and machine learning.

Ben Byford[00:02:17] Cool. Awesome. So, all those things are the meat of what we like to discuss on the Machine Ethics podcast. So, thanks very much for coming on, you're in good company. And actually, as well as doing your PhD, you are doing a podcast. I came to you through the Radical AI podcast, so check that out. I'm sure if people are listening to this podcast they also maybe listen to other podcasts in this area, and the Radical AI podcast is awesome, so go and check that out. If you could, could you just give us a quick promo?

Jessie Smith[00:02:53] Yes. Promotion. That’s actually my co-founder and co-host’s job, not mine. Just kidding. That was a plug for Dylan, who was recently on your podcast as well. But yes, the shameless plug is that we’re called the Radical AI podcast. It is co-founded, co-created, co-produced and co-hosted by me and Dylan Doyle-Burke, and we’ve been around for about five months now. We launched in mid-April of 2020, so we are… it’s kind of like a quarantine baby, this podcast. And if you want to know more, check out our website at radicalai.org. The podcast itself is similar to yours, in that we focus on all things AI and ethics, though we like to joke that AI has broadly expanded to mean all things technology-related, and ethics has expanded to mean all things society-related. So really, just get your society and technology fix on this podcast.

In general, one of our main missions is to promote the under-represented voices in the field, as opposed to those who stick to the status quo and who get all the limelight. So we really focus on sticking to people, topics, ideas and stories that are "radical" and that are under-represented in the field – ones that push the agenda a little bit, and make people squirm in their seats and feel a little bit uncomfortable, because they're not easy topics to unpack.

Ben Byford[00:04:21] Yeah, and I think at the end of each podcast you also challenge the interviewee to say what radical means to them and how that word – in relation to the ethics side – fits in with the technology, as well.

Jessie Smith[00:04:39] Yeah, thank you for pointing that out. That’s a big part of the project, in general, to co-define, with the community – that is the radical AI community, but also the broader AI ethics community – what this word radical means as it relates to AI. Because obviously “radical” is a word that’s been around for centuries, if not longer, and it’s been adapted by many different kinds of cultures, but of course it has its historical roots in the black radical tradition. Dylan and I don’t want to be the people who bastardise a word that has such historical meaning, so it’s important to us to really ask the community what they think this word is evolving into as it relates to technology specifically. Part of our project, really, is just asking every interviewee, “How do you define the word radical as it relates to AI? Do you think your work is radical?” and eventually, in the works, we have some projects to try to get all that – to glean all of those answers from our various guests on the show – and come up with some broader definition of what radical AI really is.

But a sneak peek into that is that it has a lot to do with systems of power and oppression, and getting to the root of some of the problems that have existed in society for a long time when it comes to power. And you can see that with the word radical too, just looking at the linguistics of radical. If you see the radical – its Latin roots – in different domains, so like in mathematics, the radical is like the root of the number; in linguistics, the radical means the root of a word; and in botany, radical means the root of a plant. So that's actually where we got our logo inspiration as well, for the podcast, this robot hand holding up this plant, and there's these imaginary roots coming out of the bottom, because it has this soil in its hand. And so it's all about getting to the root of these issues, which is why we ask such tough questions to our guests on our podcast.

Ben Byford[00:07:00] Yeah, that’s really cool and really interesting. I wasn’t aware that it had such a rich etymology, almost. The word.

Jessie Smith[00:07:09] Yes, that’s the word I was looking for. Thank you. Etymology.

Ben Byford[00:07:13] Etymology, yeah. It’s hard to – I mean, when I’m doing this podcast, I’m always searching for words and never quite finding them when it really matters, you know? It’s quite irritating. It’s like, “Oh…I know what I mean.” Anyway. So, the first question we ask on the podcast, Jess, is: What is AI?

Jessie Smith[00:07:33] Wow! I feel like you just put a mirror up to my face, because I’m seeing now what it feels like to be put on the spot.

Ben Byford[00:07:42] Yeah, sorry about that one. It’s part of the deal.

Jessie Smith[00:07:47] AI – okay, well let's break down the etymology of this word, why don't we? So we have artificial intelligence. I've actually seen a few talks about people defining AI as it relates to machine learning, because there's also this contention: is AI machine learning? Are they the same, are they different? And in the talks that I've seen, they were discussing that an artificially intelligent thing obviously has some sort of "intelligence", but then we have to ask, okay, what does it mean to be "intelligent"? And that's a tough question to answer, because what does it even mean for a human to be intelligent? Does it mean to have consciousness? Because if that's true, then we shouldn't be calling AI "AI", as it exists today, because as far as we know, no AIs have consciousness yet. Does it mean to have the ability to learn? Well, okay, then we should probably call dogs and cats and cows and pigs and other animals intelligent, because they also have the ability to learn. So for me, I have to ask myself, what do I think intelligence is? Maybe I'll just stick with the ability to learn for now, because I think that's the easiest definition.

Then the artificial piece of AI, I’m assuming comes from the fact that it’s not a human, and it’s not an animal, so maybe artificial means non-living. But then we have to ask what it means to be alive. And that’s also a super-philosophical, deep question. So, I’m answering your question by not answering your question, and saying that I really have no freaking idea. Maybe it means, “non-living ability to learn”.

Ben Byford[00:09:38] Before we started the podcast, you asked me how long this would be, and at this rate it could be all day. If we dig into this, and keep going. That definition is as good as any other, really. We always ask this question, and we get such different answers, and it really comes back to your cultural history, ideology, the technology itself, how much science fiction you’ve read. All these different things contribute to how you think about – and your opinion on – AI. And if you dig into it, you might think it’s clever statistics, or if you have no idea then you might think it’s something completely different, so it’s always nice to tease out these different themes, and different opinions before we get started, so we know where we are, basically.

Jessie Smith[00:10:30] Yeah, sorry I don’t have a more explicit definition, I’ll have to think about that. It’s a much more existential question than I expected it to be.

Ben Byford[00:10:37] That’s totally cool. So you run the Radical AI podcast, but you also host Sci-Fi IRL. Could you just give us a quick intro to that.

Jessie Smith[00:10:51] Totally, yeah. So Sci-Fi IRL started about a year ago, in fall 2019, with a colleague of mine, Shamika Goddard, who is also in the same year of her PhD as I am, and in the same programme at CU Boulder. It's a much broader view of technology ethics, so it's not focused specifically on AI and machine learning, but more on how technology in general has ethical concerns, which we approach through the lens of science fiction. So every month we come up with a new episode, based off of some sci-fi story that we either read, or watched, or listened to – I guess those are the only three mediums in which you can consume a sci-fi story. So we listen to, or we experience, a sci-fi story and then we come together and we discuss what ethical concerns were raised in that sci-fi story as it relates to contemporary and modern technology issues and concerns. So it's a good way for us to take a step back from our comfort levels and what we know to exist in the world today, to think and speculate about what the future could be and what the present is, but maybe we're not willing to admit. And to really just take a different lens and a different approach towards thinking of tech ethics issues.

Ben Byford[00:12:16] That's really cool, and are there some things in there? Some themes that keep coming back to you, or keep arising when you're looking at those different stories? Things like Black Mirror and stuff like that come to mind as a reflection of things in the present which we might not want in the future. Are there paramount issues – ones you see throughout science fiction being an issue, or re-arising – that we should sort out now, that we can probably do stuff with right now?

Jessie Smith[00:12:53] Oh, jeez. Well we kind of cover everything from very closely related to today, to very far off dystopic sci-fi future. So, in terms of themes, ironically, the one that keeps on coming back up is robot consciousness. Which I don’t think is quite relevant to us yet. Maybe in five or 10 years, we’ll see. Maybe 50 years. Maybe never. But I think the one that has probably come up at least in a few episodes that is really relevant to today is responsibility of technologists, and so asking, “Who is responsible when things go wrong?”. Especially in code, and with the technologies that are created. Is it the company, the organisation? Is it the coder, the engineer? Is it the designer of the technology? Is it the manager? Is it the person who was using the technology, because they signed their life away by using it? Or is no one responsible?

Ben Byford[00:13:50] And, what’s the answer?

Jessie Smith[00:13:54] We’re figuring it out.

Ben Byford[00:13:56] Sorry, I just like to probe a little bit into these answers. So, I flip-flop sometimes between thinking that this is a really big issue, and that it's not really an issue at all. But I guess when you talk about these kinds of systems that learn and, you know, you could treat them as tools, and you start thinking about how they're going to be used by governments, or open source, or hackers, and it starts becoming murky, as they're learning different things. Who's responsible at the time of learning from different people, especially if they're in their homes and people are teaching them stupid things, and they just learn those stupid things? So yeah, it becomes problematic. And some people would have a hard line and say, "No, this is a tool – someone's sold this to you and they're responsible," but others, maybe yourself, lean on the other side.

Jessie Smith[00:14:52] Yeah, definitely. And as we talk about what it means for something to be artificially intelligent, I wonder about the future, as things begin to learn and teach themselves and change fundamentally from what they were originally coded to do – maybe not fundamentally, unless robots are changing and wanting to kill society, and I'm adamantly against that view of AI – but for the most part, if a machine is teaching itself and changing in some way, and then it does something that is harmful to a human, then in theory, really, the AI should be the one that's accountable and responsible and at fault. But you can't really send an AI to jail, or send an AI to serve its time for the harm that it did to society, so what do you do? I don't think we know yet.

I'm thinking this has happened a few times already. A few years ago, the Uber self-driving car hit that woman in Arizona. I think it ended up being the driver's fault, because they were distracted listening to The Voice while they were driving, but people were wondering for a little while whether it was the driver's fault, or just a car malfunction – who would be the person to serve time for the death of another human? Because that's not really a thing that we've ever encountered before.

And even more recently, with this whole GPT-3 hype that's happening. I think there was an article in The Guardian a few weeks ago that a lot of people were buzzing about – this amazingly written article using GPT-3 software – that if you haven't read it, you should definitely read. It's super-terrifying. It's basically this AI that's trying to convince us that we should not fear AI, but instead we should just fear ourselves. Also very existential, so maybe on point with topics we're talking about today. But then you also have to ask, maybe not in terms of harm, but in terms of other regulatory questions, who is responsible for the copyright of an article that was written by an AI? There are other questions that come up, that are maybe not as pressing or terrifying, but they're still interesting, because you ask yourself: Do robots have rights in the same way humans do? Can they break those rights? Do they have to serve time in the way that humans do? I mean, that's a super-existential question that we actually have an episode of Sci-Fi IRL coming out about next month. So these are all unanswered questions that I have no answer to, but they're fun to speculate about.

Ben Byford[00:17:28] Yeah, they’re really, really interesting. In fact, in a previous episode we had David Gunkel with robot rights. So check out that previous episode, talking about and digging further into whether artificial intelligence should have rights. So, you’ve got some hot topics there, Jess. I just thought I’d dig into those a little bit.

So the automated car accident. There was a recent hearing about that, and I think you're right; they found the driver negligent. And it feels strange, because it's almost like they were set up to fail. They were given a system that supposedly doesn't need their help – it's supposed to be automated and it works until it doesn't – and then what are you going to do? You have a very short time to become cognisant of what's actually going on and take control. It's a difficult thing to do, and it's difficult to point the finger, because the whole idea around automated cars, in this whole industry, is this idea of automating and taking away control. So it's really terrifying for the industry that these sorts of things have happened, and some of those manufacturers have had to scale down operations because of it. The imperative is kind of still there. The ethical idea that – hopefully – you have better travel, fewer crashes, more time, greener places, fewer roads, because we won't be sitting in traffic all the time, and it will be more efficient, and all these sorts of things. All these great things that we do want, and we have this emotional reaction, almost an animal reaction, to things that could possibly kill you and get in the way of all this happening. So it's really interesting how that rationale affects things, and how we feel about it.

And also you talked about the GPT-3 article in The Guardian. It was quite funny, and the article itself highlighted that it was sort of a curation exercise. So they went, Here you are. Here's your starter for ten. Make some stuff. And then it produced that article it came out with, about coming in peace and not killing humans. But those writers were taking a cut of what was a best fit, what worked best for something interesting to read, so there's this human–machine process/interface/curatorial practice. They're collaborating on the end product, and it's really interesting for, you know, the future of copyright, the future of creativity, and what it means for artistic endeavour. Using some of these tools – and I'm going to go and do that and make loads of money, so bye!

Jessie Smith[00:20:54] It's evidently… it's a question almost asking ourselves, What makes us human? What is it about us that is essentially our humanness, and what does AI threaten to take from us in that? There's this YouTube video you reminded me of that I saw years ago, before I was even aware that AI ethics was a thing, and maybe you can link to this in your show notes; it's called "Humans Need Not Apply", and it probably has a ton of views now. I think it got pretty popular for a while, and in the video they discuss the automation of the human workforce and how we need to fear – in so many words – the fact that our jobs are just not going to be the same, because automation and robots are just going to take over every job that we could possibly think of. And then at the end, he says, Oh, except we're all creative snowflakes. Humans can take to work in the field of creativity, and make art, and that's what makes us so unique. And then he has a sceptical look and he comes on and says, Oh wait, just kidding, AI has already been making music, AI already makes art, AI is super-creative too, so we're not really immune to having those jobs stay strictly for humans either. It just reminded me that robots – I should say AI, not robots – AI is super-creative. It's a shame that some of that creativity will just be dismissed as mathematical algorithms and equations and geometry, because some of the things that I've seen AI create are honestly just incredible and I wish that I could give credit to something that was alive. But I can't.

Ben Byford[00:22:48] Yeah, and it’s kind of interesting, because you’re a human, and you’re making meaning out of it for yourself – isn’t that fine? You’re getting value, and you’re seeing the value in it, essentially?

Jessie Smith[00:23:06] Oh, wow. That is such a deep question. I’m going to have to think about that one for a while.

Ben Byford[00:23:12] Yeah, it's kind of where is the line, what is okay? When do we need to start caring about attribution? That's interesting. And value. But anyway, let's do a rough segue – and talk about you and your background, Jess. You come from the software engineering side, and you're now straddling the Humanities and Computer Science departments at the university for your PhD, trying to make sense of those two worlds, kind of cross-pollinating. You're a conduit, if you like. So, what's the really exciting thing in the middle ground at the moment that you're concerned with?

Jessie Smith[00:23:56] Yeah, there's a few research questions that I'm focusing on right now, and I'm co-advised by two PhD advisors who are really good for my research questions, in that one of my PhD advisors, Casey Fiesler – her research now is very heavily focused on ethics and education, and asking qualitative questions, doing qualitative research, and things like co-design, participatory design. Then my other advisor, Robin Burke, is much more technical right now, and he's asking questions that are a little bit more nitty-gritty for the engineers, like how do we encode and optimise for fair treatment in an algorithmic machine learning system? His lab is all about recommendation systems, so I've been in the headspace of recommender systems for the last year, just because of the work that they were already doing.

So, for me and my research, and bridging the gap between those two sides, now some of the questions that I’m wondering, on the qualitative side – and let’s stick to the example of recommender systems – so an example I use in this field that I think most people can at least understand is Spotify music. So a qualitative question that I have for Spotify music is, what does it mean to treat different stakeholders for Spotify fairly? So what does it mean to treat the musician fairly, in terms of having their music recommended to users? What does it mean to treat a music listener fairly, in terms of having recommendations actually match their interests and aren’t just stereotyping who they are? And then what does it mean for the system to be fair as a whole, to treat everyone equitably and fairly, given their different needs and interests? So that’s like a qualitative question. What does it actually mean, to treat them fairly?

And a quantitative question would be, well how do we change the recommender system to incorporate things like re-ranking, which is this technique where you take a recommendation list and re-rank the different items in a list, based off of different fairness concerns, and based off of a user’s tolerance for diverse music? So that’s like a very algorithmic, quantitative question.

Then my mixed-methods question, which is the meat of my work – because you can't really do the qualitative and the quantitative work and mesh them without doing some kind of mixed methods – would be to ask: when we take the qualitative definition of fairness, or the definitions that we're working with, and feasibly and pragmatically encode them into the algorithm using these re-ranking algorithms and fairness metrics – in a way that doesn't contextually collapse what we discovered qualitatively, but actually upholds the things that we learned and the fairness goals that we set – how do we make sure those are actually enacted and coded into the system in a way that does what we want it to do, so it's intentional, instead of black box and opaque?
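[Editor's note] To make the re-ranking idea Jess describes above a bit more concrete, here is a minimal, hypothetical sketch in Python. It is not the algorithm used in her lab – the item fields, the scoring rule, and the tolerance parameter are all illustrative assumptions – but it shows the general shape of taking a relevance-ordered recommendation list and re-ranking it against a fairness concern, weighted by a user's tolerance for diverse recommendations.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    relevance: float          # the model's predicted relevance for this user
    protected_provider: bool  # e.g. an under-recommended musician

def rerank(items: list[Item], k: int, tolerance: float) -> list[Item]:
    """Greedily build a top-k list, boosting items from under-exposed
    providers in proportion to the user's tolerance for diversity."""
    remaining = list(items)
    result: list[Item] = []
    while remaining and len(result) < k:
        # Share of the list so far that already goes to protected providers.
        exposure = sum(i.protected_provider for i in result) / max(len(result), 1)

        def score(item: Item) -> float:
            # Illustrative scoring rule: relevance plus a fairness bonus that
            # shrinks as protected providers gain exposure in the list.
            bonus = tolerance * (1.0 - exposure) if item.protected_provider else 0.0
            return item.relevance + bonus

        best = max(remaining, key=score)
        result.append(best)
        remaining.remove(best)
    return result

# Hypothetical usage: a listener with a modest tolerance for unfamiliar music.
catalogue = [
    Item("Major-label hit", 0.95, False),
    Item("Indie artist A", 0.80, True),
    Item("Major-label hit 2", 0.78, False),
    Item("Indie artist B", 0.70, True),
]
print([i.title for i in rerank(catalogue, k=3, tolerance=0.2)])
```

The point of a sketch like this is the one Jess makes: the fairness adjustment is explicit and tunable, defined by the goals set qualitatively, rather than buried opaquely inside the model.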

Ben Byford[00:27:17] Mmm, and I guess a lot of these services use your data to create the quantitative side, right? The interaction data, the usage data, anything they can get their hands on to make those suggestions and recommendations. But for recommendations from these sorts of systems for Spotify and things like that, you could almost ask, “I haven’t seen you click on any jazz recently, do you still want to be shown jazz?” Or anything like that. “Do you want to see much more different things? Open your eyes to some different kinds of music that you’ve maybe not seen before?” Serendipitous – that’s what I was looking for, “serendipity”. Does that come into it anywhere?

Jessie Smith[00:28:12] Isn’t it funny that it’s such a strange idea to actually ask the user of an AI system what they want and need? And by funny, I mean quite sad, because that’s totally what should be happening. When I say co-design, because I realise this isn’t a term that is commonly known, what I mean – in participatory design, too – is asking people who are impacted by the system to help design the system, and to participate and collaborate and communicate. And so, that’s exactly what I’m trying to do: using participatory design and co-design with all the different people who are impacted, and impact, the system. So, asking the people who consume recommendations, like Spotify listeners, “Hey, what would it mean for you to be treated fairly on this platform?”

This is an interview study that I did with one of my collaborators, Nasim Sonboli, and we’re currently working on a paper that will be hopefully accepted into a conference soon, and we dissect what it means, qualitatively, for the consumer of recommendations to be treated fairly. The next step would be asking the producers of recommendations – the people like the musicians, or the YouTube content creators, the LinkedIn employers, things like that – what it means for them to be treated fairly in the system. Because what we find usually is that these two stakeholders have conflicting needs and wants, so we take all these conflicting and convoluted needs and wants and we show them to the engineers, which is like the final step, and we say, “Okay, here’s what your users actually need and want, how can we feasibly make this happen? How do we make this work?” Whilst also having one businessman in the room that’s there to say, “Well, this is our bottom line and we have to make money”. So having everyone in the room.

Ben Byford[00:30:07] Yeah, that'd be nice. The YouTube example is a really good one – there's so much happening algorithmically on YouTube. You hear about all these things about getting into this YouTube hole, where you start to be shown really suspect stuff, and it's through your interacting with it, and the way that the algorithm is choosing stuff to keep you on, which is –

Jessie Smith[00:30:31] Conspiracy theories.

Ben Byford[00:30:33] Yeah, all this and all the different things that get shown – it keeps you there. It drives good interaction and stuff like that, so it's optimising for the wrong metric, or more of a business metric. So it would be really nice to see how co-design would work in that situation – maybe we'll have to wait for some insider information, or something.

Jessie Smith[00:30:59] Working on it.

Ben Byford[00:31:01] Yeah? Okay, cool. Well, anyway. I was going to ask you about the education side as well, so you’re interested in that area, and I always feel there should be more reflection, debating, ethics, things like this, incorporated in the educational process. How can we expect our developers and citizens of the future to make good things if we’re not enabling them to do that? So I wondered if you had any thoughts about what we should be doing there? How can we help our designers and developers to make good products?

Jessie Smith[00:33:41] Oh, Ben. I'm so glad you asked this question, because this is the last leg of my research, and my career interest here is the education piece, because I do think that's the missing piece in all this. I think there are a lot of people emerging in this space who are there to critique the space, and a lot of people in this space who are making the space – the engineers, the coders, the designers – but I think there's something missing in education and awareness-building that helps translate the critiques into practice. So in my opinion, all computer scientists should be trained to "ethically speculate", as we like to say in my lab (advised by Casey Fiesler), which really means just to think about the harms that our technologies can cause. So to think about the people who will misuse a technology, to think about the people who will abuse a technology, to think about unintended consequences of a technology that are of course no fault of the engineer or computer scientist, but just entropy in the universe causing chaos. And it might not be anything that we can predict, but to think through those things before making an algorithm that predicts gender, or making an algorithm that predicts sexuality, you know? Thinking through what people might use those things for, before making them.

This is something that is pretty near to my heart, because it's the reason why I started out in this AI ethics space in the first place. I was getting my undergraduate degree in software engineering, and the story that I've told a million times now, but I'll tell again just for fun here, is that I was taking my very first data science class, and the same semester I was also taking the one, single, required ethics course for computer scientists at my university. Coincidentally, they fell on the same day, so in the morning I would learn data science, and in the afternoon I would learn computer ethics, which largely related to a lot of data science topics – just because a lot of AI and machine learning topics are ethical conundrums, as you know. But there was one specific morning, I think it was a rainy fall morning – oh no, this was in the spring – it was a rainy spring morning. I went to my data science class, and my professor taught us how to scrape the web, how to build a web scraper, this incredibly powerful tool that we can use to get information from any website without an API, which means without needing or asking permission from people to get their data. Then that same day in the afternoon, in my computer ethics class, the professor told us exactly why scraping data from the web was so unethical and not okay. I had this "aha!" moment in my head, where I was like, Why the heck did I learn this in a different class, instead of in my data science class? And the 30 students who were in my data science class with me, they're not in this ethics class. They're not going to know that this is harmful, and they need to watch out what they're using this power for!

So I started thinking a lot about how there's this missing piece in computer science education. And it's the accountability and responsibility of the computer scientist to think through what it is that they're creating. What it is that they're saying yes to from their engineering manager, what it is that they're building and sending out into society to fundamentally change society, and to ask if they're willing to do that, and if they're wanting to do that. It's all about this intentionality, right? So asking, are they coding with intention, or are they just doing it to make money? And I hope that we do eventually include and incorporate the social sciences – humanities, philosophy, critical race theory, a lot of gender studies, a lot of the really core components of the social sciences – throughout the entire computer science curriculum, so that it doesn't just seem like a one-off module that you have to check off, but actually seems like part of building computer technologies. I think that would fundamentally change the entire discipline of computer science.

Ben Byford[00:36:16] Yeah, I totally agree. Sounds awesome.

Jessie Smith[00:36:19] Good.

Ben Byford[00:36:20] It's kind of nice that, in contrast to the UK system of higher education, in the American system you can pair subjects, so you can major and minor. You have more opportunity to join onto other courses in the first year or two, right? I wish the UK system had more of that; here you basically have your bubble and you're less likely to go outside your bubble. It's more on you, because you have this one course area and there's very little cross-pollination there, as far as I remember from my time at university, anyway. So it would be nice to have that cross-pollinating of ideas coming into it, especially in computer science.

We did have Miranda Mowbray, who was teaching ethics on the Computer Science course at Bristol, so there have been efforts happening there. I've heard of other anecdotal things going on, so that's really, really positive, but it is strange that you get taught these techniques, these powerful machine learning techniques and technical aspects that you can apply to whatever you want in the world, and then you're not necessarily taught, "Have an intention," and "Reflect on this stuff," you know. "Do good, make sure you don't use this tool in a way that would reflect badly on your grandparents, or your spouse," or "People that you love could be affected if it turned into a big thing, or ran wild," or what have you. So it's really interesting that these things should really be paired up and go hand-in-hand with each other to make sense out of them.

Jessie Smith[00:38:11] Totally, and you don’t even have to ask or tell computer scientists to do “good” necessarily. I’m not going round telling them to be saviours in society, but it’s a bit like Google’s original motto, you know, “Don’t be evil”. I think it’s important to train computer scientists to not be evil at the very least. And in terms of embedding it into the actual curriculum, and the feasibility of that, I have seen in practice that this actually makes computer science better to teach.

Over this last summer, I was the graduate professor of an introductory computer science course. It was in the Information Science department, not the Computer Science department, so I had the freedom to make whatever I wanted out of it, and I restructured the curriculum to include ethics and social science and humanities throughout every single module and assignment that we did, and they learned exactly the same amount of coding that they would have in the other form of the class that didn't include the ethics. But instead of me just teaching them, "Okay, here's what big data is, and here's how to create a table using this data," I taught them, "Okay, here's what big data is, here's how data can be subjective, here's how data can be super-powerful and how data changes our lives, and here's how you give your data away to companies, here's how people exploit your data." Making it almost more approachable, and realistic. Showing them, "Okay, here's computer science, but here's how it relates to our day-to-day lives and how it's super-important to understand the power structures at play with the way that we code these technologies, and the way that they're enacted in society." And all the students said at the end of the class that they absolutely enjoyed and loved all the assignments that had to do with real-life examples, because it made it tangible. And it wasn't just, "Okay, let's make a grocery store algorithm that creates a checklist where you check off your fruit that you have to get in the store," it was, "No, let's make a self-driving car algorithm that determines who to save or who to kill." And of course the students found the second one more interesting.

Ben Byford[00:40:31] Woo, so did they create anything that we can sell to Waymo or any of those? If so, get in touch with me and Jess at the end of the podcast. Stay tuned for our details. It sounds like it makes it more real, and they're going to go off and work on projects for organisations that are going to make an impact in the real world, so it just makes sense. I know that sometimes they have that in design courses, where they have real projects brought in by companies, and the students have to work on them. It's nice to make things a bit more concrete, and not be too heavy-handed with the ethical stuff, but make it apparent that this is the system that we live in. So, it does all make sense. Yes.

So, do you have a hope for AI in the future? Something that you're working towards? Some vision? Because we all love using and making technology, but I was just wondering if there was something that was dear to you, something that you might be working towards. What does that look like to you?

Jessie Smith[00:41:53] Yeah, I actually really appreciate you asking this question, because I do think that as AI ethicists, it's important for us to come back to hope and to ask about optimism for the future, because it's really easy to just stay pessimistic and want to crawl into a hole and never come out and see what the future holds. So for me, I think there are several possible futures that I'd love to see come out of where we're at right now. I think a lot of that has to do with shifting the balance of power – coming back to this idea of power that is so pervasive on the Radical AI podcast. Taking the power away from the Mark Zuckerbergs and the Jeff Bezoses, and instead shifting it to the people, and really letting them decide what it is they want and need from this technology. Changing the narrative so that technology isn't this capitalistic thing to exploit for gain, but this wonderful, amazing tool that helps uplift humanity, and helps us become the best society that we want to be.

That doesn't necessarily mean having technological interventions in every single aspect of our lives, and becoming like the humans in Wall-E, who are so supported by technology that they're not even really human anymore. Shout out to addictive design and persuasive design, because there's a lot of negative places we can go with technology in that realm too. Instead it means us still remaining human, whatever that means, and having technological tools that uplift a lot of the things that we already want to do: tools that help us be more creative, but don't replace our creativity; tools that help us be more efficient, but don't replace our jobs; or tools that help predict the future, whether it's weather or sports, or even harmful disasters that are going to happen in the world, without predicting things that are inherently unpredictable, like a person's risk of committing a crime. And then this can also expand into other realms, like environmental sustainability, and human kindness and empathy. I think there are a lot of realms of humanness that we haven't tapped into in terms of technological interventions, and I hope in the future we do, in an intentional way.

Ben Byford[00:44:33] Yeah. I totally agree with all of that except for "take the jobs". Take the jobs! I mean the BS jobs. Take them away! I don't want them. But I still have to reconcile that with the fact that I don't want to be floating around like a blob in Pixar's Wall-E, or anything like that. So we have to somehow make sense out of that situation – where we find meaning, and how we can drive our economy without jobs. Did you have a pessimistic view? Like, how could this work if we didn't have AI ethics?

Jessie Smith[00:45:15] Oh, jeez. I try not to think of this. It’s probably important to tap into for motivation. If we just keep going as is right now, without intervention from ethics and social sciences, I see technology – especially AI – perpetuating the problems that exist in society to the point where they are no longer solvable. So, issues of racism and sexism and oppression, and everything in between. Things that have existed so long, that are the worst parts of society. I see those being enacted and encoded into technology and perpetuated, and furthered. Unfortunately to the point of no return.

Ben Byford[00:46:05] Yeah, and I guess a lot of that is to do with intention, and bias and things creeping in. So, I’m going to segue to a bizarre and a big can of worms question. So, are we ready? Can AI be conscious?

Jessie Smith[00:46:29] Oh my gosh, I love this question so much. I am going to say yes, because I don’t think that humans have fundamentally decided or agreed upon what it means for us to be conscious yet. And so until we decide, or are able to scientifically prove what it means for us to be conscious, and agree on it, I don’t think that we will have the ability to say that an AI system is not conscious. Especially for those who think that consciousness exists in a place that is unobservable. The dualists who think that consciousness exists outside of the physical world, and is instead like an inner feeling that we can never tap into in someone else’s body – I think that those people in particular probably already think that AI has a consciousness, because there’s no way for us to know.

Ben Byford[00:47:27] Well, that’s a very good answer. I’m going to put a lid back on that can of worms, and then we’ll maybe come back to that another time. So thank you so much for your time, Jess. If people want to follow you, contact you, find out about the podcast, how can they do that?

Jessie Smith[00:47:41] Yes, you can check out my website jessiejsmith.com. You can follow me on Twitter _jessiejsmith_, because I have a super-generic name, so I gotta use those underscores. You can always look at the Radical AI podcast at radicalai.org, or shoot me an email at jessiejsmith01 – (’cause I’m generic) – @gmail.com.

Ben Byford[00:48:12] Cool, well, thanks for being on the show, Jess, and I’ll see you next time.

Jessie Smith[00:48:16] Bye. Thanks so much for having me.

Ben Byford[00:48:20] Hi and welcome to the end of the podcast. Thanks very much for listening, and thanks again to Jessie – or Jess, sorry – for spending time with us. I have a confession to make. It's taken quite a while to get this episode out, and I apologise for that: I had a great conversation with Jess, but my audio was terribly recorded. It didn't actually come out and wasn't usable, so – I don't know if you noticed, or whether it was really hard to listen to because of that fact – I had to re-record the whole of my part of the conversation with Jess. Re-act it, re-record it, make sense out of it, so hopefully it wasn't too garbled and confusing. My responses were somewhat like the ones we had when we spoke before. Tell me what you think. It was a bit of a monumental process, so sorry about that again.

I really enjoyed my conversation with Jess, so thanks very much for spending the time with us. Her ideas really resonated with me – imbuing sample projects with real-world humanities and ethical quandaries, and bringing those into the classroom, is a really good idea, and I guess it's somewhat related to co-designing systems and things like that. And it's just really fun to talk about some of these things like AI consciousness and creativity and values, so I hope we have a chance to speak again and get to grips with some of these bizarre and interesting things. So thanks, and again if you want to support the podcast and check out more things, Patreon.com/machineethics. Thanks again for listening, and see you next time. Bye.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design Ben and the team help organisations make better AI decisions leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford