81. The state of AI Ethics with Alice Thwaite

This time I'm chatting to Alice about teaching ethics, the idea of information environments, the importance of democracy, the ethics hype train and the ethics community, people to follow in AI and data ethics, ethics as innovation and more...
Date: 17th of September 2023
Podcast authors: Ben Byford with Alice Thwaite
Audio duration: 01:03:11 | Website plays & downloads: 115
Tags: Capitalism, Ethicists, Teaching, Democracy, Business, Humanities | Playlists: Philosophy, Business

Alice Thwaite is a technology ethicist and philosopher. She founded the Echo Chamber Club and Hattusia, where she won the CogX Award for Outstanding Achievements and Research Contributions in AI Ethics. She currently works as Head of Ethics at OmniGOV, MGOMD.


Transcription:

Transcript created using DeepGram.com

Hi, and welcome to the 81st episode of the Machine Ethics Podcast. This episode, we're talking with Alice Thwaite. This episode was recorded on 15th September 2023. Alice and I chat about teaching ethics, information environments, the importance of democracy, the ethics hype train and the current ethics community, the need for humanities in tech, and, indeed, the business case for working with humanities, people to follow in AI and data ethics, and the idea of ethics as innovation. If you'd like to listen to more episodes like this, you can go to machine-ethics.net, and you can email us at hello@machine-ethics.net.

You can follow us on Twitter at machine_ethics, Instagram at Machine Ethics Podcast, and, if you can, you can support us on Patreon at patreon.com/machineethics. Thanks so much for listening, and hope you enjoy. Hi, Alice.

Thanks very much for coming on the podcast. If you could introduce yourself, who you are, and what you do. Hi, Ben. Yeah. My name is Alice, Alice Thwaite.

I always find these kinds of introductions quite difficult to do, because you're trying to show that you are worth listening to, but we're also British. Right? So I've been in this field around ethics and technology broadly since about 2015, 2016. One of the highlights of my career that I can kind of talk about, because I think in the ethics world there are plenty of things that we achieve that no one can really talk about, and that's not nefarious in any way, you just know what I mean.

It's just kind of like it's all things under the radar and stuff. But I won the CogX award last year for outstanding research contributions and achievements in AI ethics with a consultancy I used to have called Hattusia. When it comes to what I really specialise in, I'd say it's a couple of themes. About 4 years ago, I was really a specialist in online polarisation and echo chambers, and I got quite frustrated with that community. As I think we often do in this field, you'll notice that people jump around a little bit, and I think sometimes that's for good reasons, which is that they get fed up with that kind of research field.

It's sometimes for not so good reasons, which is that they might notice there's hype somewhere else and they can make money from it. So, yeah, I was in that polarisation space for a while. I then started to turn my attention to what it means to incorporate ethics by design into technologies and technology structures and technology platforms. The way that came about was that I was at the Oxford Internet Institute at the time, doing a master's there, and I was the only one who came into that institute (this may or may not be true, but I was one of very few), but definitely in my cohort, I was the only one who came from a philosophy background.

And a lot of people would use this word ethics, and I'd be like, that doesn't relate at all to anything that I'd studied in my undergraduate degree, which was a Western analytic philosophy degree, right? So I started thinking about how can I teach ethics starting with these foundational principles that come from philosophy, which I think is absolutely required, because there's a lot of people who, you'll ask them, what should we talk about when it comes to AI ethics? And they'll jump straight into something like transparency. And you're like, yeah, transparency is great, but why? You know, think about why we need transparency, and the only way that you can actually design for transparency is by understanding that transparency relates to freedom. It relates to democracy.

Unless you're holding on to freedom and democracy as kind of foundational ethical labels, then you might get stuck down this transparency route that actually might cause more harm or kind of work against these bigger principles. So, that course was first done at the Saïd Business School in Oxford, then developed into a series of courses that I did at General Assembly, and that then led to various different advice around designing ethics and privacy into technologies, into operations strategy, thinking about organisational transformation from an ethical perspective. And now that's what I'm doing today. So I'm working at an advertising and media agency: the group is called Omnicom Group, and then I'm working at OMG, which is Omnicom Media Group, and then specifically with OmniGOV, and they're just such a fabulous group of people. And, obviously, you think, advertising agency, how does that relate to technology?

But then, ultimately, a lot of technologies get funded by advertising. So, that's my career trajectory. That's what I'm up to. And, hopefully, that was an interesting enough bio that means that you wanna hear more from me. I think what we do is we put a pithy title on the episode, and then that will reel people in, and then they'll be here, and they'll be, like, excited to listen to you, Alice.

And they're, like, what's this all about? With that trajectory, it feels like it started with the echo chamber, and I think I actually first heard of you or spoke to you when you were doing the Echo Chamber Club. Right. And we spoke, I think we mentioned before the podcast, like 4 or 5 years ago, which seems crazy now. You were thinking about this idea of the echo chamber, social media, and how the Internet and the kind of socially mediated situation changes people's behaviour or impacts their kind of mental health or emotional ability to interact with other people or whatever it is.

And then I think, stop me if I'm wrong, you then looked at kind of more tech in general and then AI stuff. And then you're kind of coming back to the social echo chamber in a way with the whole advertising bit. It's interesting, because it's always interesting when you hear people's perspectives on your career. Because the strand that I've always had as, like, a common denominator in what I'm doing is that I have always been super interested in information environments. And information environments, I think, is a more holistic way of looking at something that other people might call a public sphere, for instance.

Like, it's kind of funny when you're in these groups, people will suddenly not like a specific word for a very niche reason, and I'm definitely guilty of doing that, so I talk about information environments for a very niche reason. And so I've always been interested in that, and then I've always been interested in kind of democracy as a social structure that we can use to inclusively involve people in decisions and hopefully create better worlds. So for me, the trajectory has always been democracy and information environments, and to that extent, kind of, like, freedom and inclusivity. And then everything that I've done around that has been kind of, like, subdivisions of that. So polarisation was kind of, you know, I was interested in it before, and then, like, I think everyone gets into ethics by kind of jumping on some sort of hype train.

Right? And then you gradually get disillusioned with that hype train, and then you really start questioning it. And you're like, hang on. This is way too simplistic. And then you fall down the rabbit hole, and either you drop off entirely or you kind of get into the position I'm now in, which is like, god, this is so knotty, and we've been in this for a while.

But, yeah, for me, it's all around these deep philosophical concepts. And I think the other common thread throughout all of this is that when I was doing the Echo Chamber Club work, one of the things I realised is that the theory around echo chambers and the theory around polarisation that was presented at the time by Cass Sunstein and Eli Pariser was basically just, I think about it as, like, pop nonfiction. It's like, generally, it's a white dude in America who is like, there's this common sense issue, and I'm the only one who's thought about it. We're seeing it with AI ethics at the moment. Right?

Like, there's suddenly all of these people who are jumping on board the AI ethics train just being like, there's this issue. No one has thought about this. And then they kind of, like, write a book because they're able to get a book deal. And, suddenly, this book is incredibly popular because it taps into something that people are a bit fearful about, but actually has zero grounding in any sort of discipline that's come before it. Because, actually, if you look at most academic disciplines, they have dealt with a lot of these big subjects.

It's just that they refuse to read the literature around it. So I jumped on that train and then was like, hang on a second. It feels like what Pariser is talking about and what Sunstein is talking about is, like, an attack on a particular incarnation of democracy, which is in itself problematic. Right? And then that's why I got really into a rabbit hole around different types of democracy.

And, actually, we need to be thinking about the kinds of social structures. Because democracy, like, there are hundreds and hundreds of ways of, I mean, just look at how many countries there are, and there are so many more. Like, conceptually, there must be thousands, maybe even hundreds of thousands of ways of organising and creating a democratic country. And so then for Eli Pariser to come out and say quite simplistically, oh, you know, social media and echo chambers are problematic for democracy.

Just didn't really add up the more you got into it, and that then has led me to do what I'm constantly doing, which is, like, are we reading all the literature on this? Are you taking into account philosophy? Have you consulted a historian? What does art say about this? Like, there's this amazing person called Alice Bennett, who works out of Liverpool Hope University, who writes on attention from an English literature perspective.

Do you know how amazing her insight is? And when we're talking about kind of, like, the crisis of attention or attention studies, is anyone referencing her work? Like, this is the kind of space that I like operating in. And that, again, is kind of like an overall transition, because from an organisational perspective, people have constantly ignored, like, humanities and social sciences when designing these technologies. I mean, you even look at what Rishi Sunak says about how we need more STEM.

And there's this overall idea that if you're doing a degree in the humanities, then you are wasting your money and wasting your time. And, you know, I've got this personal kind of narrative, which I'm not gonna say I coined, because no one ever coins anything, but I do think I was one of the first to kind of come out on BBC Radio and say, you know, we need more humanities people working in developing technology. And I'm starting to see LinkedIn bios saying, I use humanities to kind of design technologies, which is really quite cool. But we do need more investment in these areas, and we need to recognise that these methodologies have existed for thousands of years.

They are pretty cool. They do help us answer a lot of these product questions. They give us ways of, like, understanding the complexity of it, the nuance of it. And I guess we're just stuck at this point again with AI ethics, where you get, like, you know, a computer scientist who has absolutely zero understanding of cognitive psychology. You know, I'm just gonna say it.

Like, Stuart Russell's book, for instance. Stuart Russell gets pulled into loads of things. There is a section in his book where I nearly threw it across the room, because he completely misdefined consequentialism, which is a foundational ethical concept. He completely misdefined it. And I was, like, how can you constantly be called up onto these kind of, like, ethics podcasts? And, yes, you know, he writes really well about the control problem or whatever.

Mhmm. But, fundamentally, you haven't got a grounding in this. Just consult people who do. Humane AI? Or, I know it's basically about the control problem.

Because this is what Stuart Russell talks about, which is, you know, a valid concern, but it's also not the only concern with AI: this idea that AI might one day destroy humanity because we can't control it anymore. Mhmm. Yeah.

Yeah. Yeah. I mean, I think we've had a couple of people on the show who have talked about the control problem and the, what's it called, alignment problem. Yeah. So definitely go check those ones out.

My personal opinion, which I feel like chimes with yours, is that we have, like, things now that we need to deal with. Right? Yeah. Absolutely. We can definitely think about those other things, but, like, there's loads of stuff right now. Yeah.

Which are issues. I really enjoyed it. I really want Stuart Russell to come on to the show, so I can talk about that with him. So if you're out there... I actually, I think you did. I was like, you hated it.

But I actually did meet Stuart once and gave him a bit of a going at, and I don't think he liked what I said about him. So, I mean, there's all of these people. Right? Yeah.

Human compatible. Exactly. Yeah. Yeah. Yeah.

Yeah. There's a lot of these people who, it's, do you know what? It's really tricky when you're in this space again, which is, like, people like Tristan Harris. Yep. On the one hand, he has done so much for, like, showing that this is an area that is important to talk about.

Right? Yep. For getting it onto people's agendas. Done a fabulous job. His ideas, I don't think, are very well formed.

And I don't think that he is very good at consulting the rich body of literature that exists. I think that he kind of sits in this space of, I've got this hunch, and I'm gonna talk about it. Mhmm. Mhmm. And I feel very, you know, complicated emotions towards these sorts of people.

Because on the one hand, you're like, I am very grateful to you for raising this up the agenda. What I'm not grateful to you for is not then stepping out of the way and letting other people speak. And, you know, you can see that with a lot of the people that I've just spoken about. Like, they manage to get these huge funds to do all of this work. And, yes, like, Eli Pariser might have an incredibly diverse team at the moment.

But you can see a lot of these institutes that are headed up by these kinds of people. Right? And on the one hand, I'm incredibly grateful. On the other hand, you know, I'm not gonna kind of, like, get into the kind of, like, privilege Olympics type thing, but, you know, I've got a lot of privilege. But being a woman, I was just absolutely unable to get that kind of funding. I found it impossible.

I'm not saying that my career is in any way... I'm quite grateful for the stuff that I'm up to, and it's really cool. But I do remember feeling a lot of frustration towards those kinds of people at the time, and it's really difficult to, on the one hand, be incredibly grateful towards this community and the fact that it's growing and the fact it's getting more press time and all of these things, but then the infighting that exists in the ethics community is also really difficult to manage. And I think you see that through a lot of these big movements. Like, you look at feminists, for instance, and feminists just can't work together, because they're also kind of, well, when I say they can't work together, of course they can, I don't want to kind of, like, you know, there's just these little kind of gripes that we all have. And it's just a really complicated emotional space in a way that doesn't really exist in other industries, I would say, because the stakes aren't so high. Or maybe I'm saying that because I am just, like, so blindsided and think that I'm the centre of the world and I'm really narcissistic.

But, actually, if you are working in insurance, then, actually, it doesn't matter. You know? Well, I have very little experience in the insurance industry, for sure. So I'm sure they are facing their own issues, I guess. I actually feel like I've got strong opinions about insurance, but that's probably not the time and place.

Yeah. Yeah. Yeah. Yeah. I I will say though that I do think it's important for us to talk about these things as, like, a group of ethicists.

It's just to acknowledge that, you know, there are these social structures that exist within our own movement, which potentially make it harder for us to do our jobs sometimes. Mhmm. And it would be really cool if someone was to study them, really, because then we could probably learn a lot from it. And I know that on these kinds of podcasts, you're expected to come in, and I'm expected to tell you everything about AI ethics and ethics in general and just kind of tell you how to do it. But, actually, I am super interested in these kinds of smaller structures, and whether or not the movement is gonna happen, and what change is actually gonna happen.

I'm super interested in these kind of, like, smaller social elements as well. Mhmm. So I feel like we've given a bit of a roasting to some people. So I was wondering if you had any institutions or individuals who you actually really rate in the space right now. Because I think that would be super useful for people listening who are interested in who you think has this grounding, or has this wealth of interest in the area, which is actually maybe more valid or just interesting or whatever it is?

Sure. I don't know if it's more valid. But I really have a lot of respect for Careful Industries and Projects by IF, for instance. I think that, you know, when Hattusia was around, we were kind of, like, the three kind of independent consultancies that were working in this space from London. So obviously you're based out in Bristol, but I think they're really cool.

So that's Rachel Coldicutt and Sarah Gold. Mhmm. I think that there are some institutes that are doing phenomenal work. I'm always really interested in what Privacy International is up to. The Montreal AI Ethics Institute newsletter is just, it's so good.

Yeah. It's so good. Yeah. Yeah. Privacy International, Access Now is really good.

In the advertising space right now, I think that the Conscious Advertising Network's up to some really interesting stuff. I think that Check My Ads is up to some really interesting stuff. There's some academics who are incredible. I really appreciate the work that comes out from Big Brother Watch as well. Yeah.

There's all sorts of these kind of, like, highly niche and highly specialised individuals who are really raising the bar when it comes to information in this area and really trying to think about what it means to change society, either through information gathering, research and/or activism. And I've probably missed out a ton of people from there. And the Ada Lovelace Institute has been powerful for a very long time. And, of course, everyone's gonna mention the DAIR Institute as well. Right?

Like, they're really cool. There's all sorts of people who I truly respect and who are doing phenomenal things in this space. And some of them get a lot of recognition, others don't. Yeah. Yeah.

And, actually, on that point, when you were talking right at the beginning, you mentioned that it's one of those really annoying, kind of activities, you know, that we do. Right? So when we do a good job, no one sees it. Right. Or, like, it's like a blog post somewhere about all the things we did.

And it doesn't have, like, this service which meets the public or businesses per se. Because you're trying to make the creation of a service or product better internally for a company, or maybe working on some research which covers a broad area, which will help defence or energy or advertising or whatever. And these things don't necessarily interface with the general public. So it's very hard to get good PR on some of the work that we do, almost, in that way. It's really, so there was a group that I was part of for a while, and it's run by Erika Cheung, who, I don't know where she's based at the moment, but she is also incredible.

So she was one of the major whistleblowers on Theranos. I hope I'm pronouncing that correctly. And we were kind of looking at what a consortium of ethics firms would look like, and what the services would look like. And it's super interesting, because we know what the advantages are when it comes to ethics. Like, we know it's an inherent strategic enabler. We know it creates a ton, I mean, I'm using kind of, like, business language at the moment.

Sometimes people don't like it. I'm gonna preface this with Yeah. Ethics is the right thing to do because it's the right thing to do. Right? It just is.

It's like sustainability is the right thing to do. Not because we're saving the planet, but because we're ensuring there's going to be a planet which is hospitable for humans in the future. It's just like a no-brainer. When you're working in the space that we're working in, which is maybe not quite so public facing and is maybe more business facing, then you do resort to this kind of language around competitive advantage, a strategic enabler. But it really does do all these things, and I think that what's really interesting about ethics is that it is so genuinely innovative. Like, what kind of winds me up a little bit is people saying AI is innovation. It's like, okay. Well, how do you define innovation?

Innovation for me is when you use a new method or a new tool to apply to an existing problem. Right? AI is not a new method. It's not a new tool. It's just, you know... And what are the existing problems?

And people are trying to invent problems for it to kind of, like, exist in. Yes. So I kind of look at it when people say, oh, we're gonna do innovation around AI. It's like, yeah, but everyone's doing that. Whereas ethics, on the other hand, it's like we have completely ignored the social sciences.

We've completely ignored things like anthropology and history to help us solve kind of, like, critical societal problems that exist through technology. Of course, there are some people who've done it. That for me is, like, genuine innovation. Right? And so you tell people this and it feels a bit too risky because this word innovation, people like using it, but actually, they're not that keen on doing it.

Like, that's just the way it is. And then, suddenly, like, I've been in the organisation that I've been in now for a year and 3 months, and it's really exciting just kind of, like, watching people have that real-world experience of just how much of an enabler ethics is. And you've just got to experience it, and you just need to kind of, like, take a chance almost. And I don't really know where I'm going with this, but just to say that if you are listening, I cannot tell you how much you will benefit from just having this different thinking in your organisation and in your company. And, also, if you're listening to this and maybe you're kind of on more of the public-facing, consumer-facing activist side, please bear with us who are working on that business side, and don't get annoyed with some of the language that we use, because that is really just kind of a translation for businesses to then get around these issues that you were just saying about, how do we prove that we're valuable.

You know? And like you say, Erika Cheung is doing some really interesting work in this space. One of the people I didn't mention was Olivia Gambelin, who is the Ethical Intelligence founder. Like, the things she's achieved are just very cool. So she was part of that group as well.

So there are things going on, basically. And what you've just said is a really perennial issue, but we just need people to believe us and to take a chance, basically. I think we've had Olivia on the podcast a couple of times now, actually. We very rarely get people on more than twice. So we've done that twice now, and we're nearing a 100 episodes now.

So I'm getting very tired of interviewing people. No. We've only scratched the surface of this space, unfortunately. I feel like the world of AI and ethics is just getting broader and broader over this time. So if anyone's listened from episode 1 going forward, it is such a difference.

It's like a gulf of difference between 2016, when we started this, and now. And it sounds like you've been on that journey since then as well. It's interesting, though, because there's this idea. I don't know. This is probably something to discuss.

Right? But there's this idea that AI is, like, accelerating at such a pace and no one can keep up with the pace of AI. Right? And I actually think that a lot of these things are exactly the same problems that existed in 2016. It's just that, you know, the kind of social and political structures around them have changed marginally.

Mhmm. Like, AI, if you think about it, rests on mathematical principles that were invented, I mean, when did multiple regression kind of, like, really come in? Like, linear regression, multiple regression statistics. I think it's around the 1920s, the 1930s.

Yeah. I mean, regression itself is older than that. Yeah. Yeah. Exactly.

Like, actually, this is not that new. And in the grand scheme of things, it's really, really not that new. Mhmm. And yet, like, I think it suits some businesses and some individuals to be, like, the pace of change here is crazy. And it's like, GPT-4 came out, and everyone's like, woah.

This is crazy. It's like, no. It's not. It's using exactly the same methods as existed for GPT-3. It's just got a bigger dataset.

It's used way more, like, actually, it doesn't take that much to kind of wrap your head around it. And everyone was like, god, we need the ethics of generative AI. It's like, I was just looking at, you know, HPS, which is the History and Philosophy of Science department in Cambridge; they did a really great reading group before this whole generative AI thing truly kicked off. Mhmm. They did a really great reading group around what the major ethical issues in AI are, and all of them applied to generative AI.

Like, there was nothing, and then you're just kinda thinking, like, are there new issues with generative AI? Not really. Like, yeah, there are, like, marginal differences that you can kind of, like, claim are gulfs or not. Mhmm. At the end of the day, we're kind of stuck in the same situation, which is that, you know, who is responsible for building these tools and who does it benefit?

What are the kind of financial situations there? You know, which lives and which stakeholders' lives are actually being affected by this? Either because they are quite literally dying from these tools, or they're being incarcerated, or their work is, like, deemed inappropriate and it doesn't matter. Like, these issues are kind of, like, perennial, and they all start with societal issues. Right?

They all start with, like, who do we value as human beings and for what reasons do we not value them? And then technology kind of either accelerates those changes and/or diminishes them slightly. And that's the space that we need to start with. So, yes, there are changes. And I think you're right. Like, since 2016, there has been far more of a spotlight on AI ethics in particular.

Mhmm. More and more people are, like, coming out and kind of speaking on this topic and kind of, like, claiming knowledge on this topic. But are the job openings there yet? If we're honest, not really. You know?

Are the institutes getting more money? If we're honest, not really. And these are kind of the issues that we need to, like, champion. You know? It's like, as far as I'm concerned, if you think that there's an issue with AI ethics, you know, Stephanie Hare, who wrote Technology Is Not Neutral, often says, show me your receipts.

Right? Where, you know, where is your AI ethics team? Like, where is the accountability that you've got in place? What money are you spending on this? And then everyone kind of looks at you like, you know, but we dedicated a 30-minute slot in our conference to this.

It's like, stop. You know? So I guess what I'm trying to say is, I hear you on a lot has happened since 2016, but has it? Well, I'm gonna claim it has, but I think that's probably more of a response to, I can talk to my mum about it now. Right.

That's probably the test, isn't it? Right? Right. I can talk to my mum. Hi, mum.

And, you know, it's in the news. It's available. The knowledge is being disseminated in a way that it hadn't been before. Right. And like you pointed out, maybe it's not because the technology has, like, fast-forwarded a massive amount, but these things are just more present in our lives now.

Exactly. And I think you're right there. Like, you know, you've got the Me Too movement, for instance, which, I think the Me Too movement was a major catalyst for the Black Lives Matter movement, for instance, and that really kind of put these structural inequities into people's minds. And then when you talk about technologies, like, their influence in that, then people kind of have something else that they can grab onto in their heads.

I think we're kind of in agreement, but we're choosing to put emphasis on different parts of it, which I think is often the root of most disagreements. It's like, yeah, yeah, I wanna push a different narrative to you. You know.

And that's, I've got a different story to tell, but actually, we have the same basic facts and stuff. So Yeah. Yeah. Definitely. Definitely.

I think, I mean, to your point about job openings, I think there aren't that many jobs, in that if you searched for AI ethics, or AI ethicist, or technology ethicist, I think there would probably still be only a handful of jobs there. But I think in 2015, 2016, there were no jobs there. I think this is maybe a new category, which had been something else and has kind of changed into this new term. I don't think, you know, there's people who talk about AI ethics now, they used to do other stuff, right, which was similar, parallel. Students, to be fair.

Like, you kind of look at our age. You look at our age. I'm talking about academics mostly, I guess. Alright. Alright.

Yeah. Yeah. Yeah. Like, a lot of my Twitter sphere, my, echo chamber is a lot of academics. Right.

Yeah. Giphy. Yeah. They're amazing in general. Sometimes not.

Yeah. Most of the time. So lots of people were like, oh, we've been talking about this for x number of years, blah blah blah. You know what I mean? Whereas, yeah, I mean, some of the upstarts in this area may be less so.

I do, like, coming back to it though, I do find these, like, inherent structures behind it all very, very interesting. And I know this is where it makes me super niche, and I quite like this space. I quite like working in this space where you're not necessarily, you're not a household name, not that any AI ethicist is a household name, but there are certain ones that are more kind of, like, prominent than others. Like Timnit Gebru, for instance. And I like being in this space where you've kind of been there for quite a long time. You know, if you saw me, I'm doing inverted, what's it?

Like little quotes. Yeah. You know, but I suppose anyone who's been working in the tech ethics space since 2016 has objectively been in it for a long time. Yeah. I think previously, when I first started this, you would wheel out Joanna Bryson or

Right. Right. Right. Alan Winfield or someone in the UK anyway who would. Yeah.

Luciano Floridi as well. Yep. Yeah. Yeah. Always good.

I love his work. So I'm just gonna put that out there. He is prolific. He is certainly prolific. Yeah.

But I think it's so great to just come on these things and just have a chat. And if I was to tell you what the things are that I'm trying to push and work on right now, it is mainly in that space around how we create more of those job openings. Mhmm. You know? How do we prove to organisations and to institutions that this is an incredibly difficult skill set that is worth paying for, and that you will benefit from it? Yeah.

So, I mean, you've probably already talked about this in your previous kind of discussion. But if you were gonna pitch to a company, let's say Amazon, because they're notoriously not good at this. But the other, the business department. Right? Other companies exist.

And they're interested in putting together an opening for an ethics team, let's say. I mean, that sort of organisation could have a whole team on this. I'm looking at Microsoft. They've got teams, and then they've got embedded individuals.

So what is that sell? What are you pitching to make that happen? Well, often it's quite good to kind of work in a smaller department rather than kind of have a centralised space. Like, in order to make any sort of impact, you do need to have a specific thing that you're trying to change, as opposed to trying to change everything generally. So that's one thing.

Unless, of course, you're in that centralised space, then you might have, like, an education piece to play. But anyway, like, something as massive as Amazon, it's kind of, I'm always reluctant, like, you know, loads of people are like, well, Unilever is really good at sustainability. And you're kind of like, yeah, but which brand? Which brand?

Unilever. Yeah. Exactly. Yeah. Yeah.

And also, which decision? Because, you know, some are good and some are not. So I think the main three kind of, like, ways that I pitch around this are that, A, it is innovative. Like, it's just by its very nature. If you are bringing new skills and new thinking, and I'm not just talking about AI, but genuinely, like, your first time ever bringing on an anthropologist to kind of work through with your design team, of course you're then gonna have different perspectives. And of course that's gonna just improve and benefit the work that you're doing.

Yeah. I think recently it was kind of said that I'm always the antithesis to groupthink. Like, what better, like, surely, surely you want that in your team. So that's kind of the innovation sell. The other one is around risk management.

It's super interesting working alongside compliance and legal colleagues, because quite often the things that I'm seeing and working on are not potentially as detailed as what you get in that compliance space, but they're definitely the sort of thing that's gonna come into the compliance space in, let's say, the next 7 to 8 years. Right? Yeah. So everything that was in the Digital Services Act, if you'd have kind of, like, been following what academics were writing about when it comes to these platforms and platform governance, you'd have been ahead of it really easily, and you wouldn't have had that mad scramble when it kind of came in a couple of weeks ago. So there's the risk management element to it, which also just increases, like, product life cycles.

Like, if you're building a product, then in your head, you're gonna be like, this is gonna be something that we can sell for the next 4, 5, 6 years. Yep. That's actually a relatively long product life cycle. But, you know, you're gonna have the same amount of investment into building a product. If you can sell it for 2 or 3 years compared to selling it for, let's say, 8 years, you've immediately, like, improved the return on that investment by, let's say, 400%.

And that's just through better governance and risk management. Like, it's kinda crazy to me that people can't see that immediate financial benefit to doing it. I hope I've explained that correctly, but, like, it's like Yeah. Yeah. Yeah. Just looking at, like, product life cycle stuff is super important.
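A rough sketch of that product-lifecycle arithmetic, with purely hypothetical build-cost and revenue figures (none of the numbers below come from the conversation itself), just to make the "same investment, longer sellable life" point concrete:

```python
# Hypothetical illustration of the lifecycle argument above:
# the build cost stays fixed; the only thing that changes is how many
# years the product remains sellable before governance or compliance
# problems force it off the market.

def simple_roi(build_cost: float, annual_revenue: float, years_on_market: int) -> float:
    """Return on investment expressed as a multiple of the build cost."""
    return (annual_revenue * years_on_market) / build_cost

build_cost = 1_000_000      # hypothetical one-off development spend
annual_revenue = 500_000    # hypothetical yearly revenue from the product

short_lived = simple_roi(build_cost, annual_revenue, years_on_market=2)
long_lived = simple_roi(build_cost, annual_revenue, years_on_market=8)

print(f"2-year lifecycle ROI: {short_lived:.1f}x the investment")
print(f"8-year lifecycle ROI: {long_lived:.1f}x the investment")
print(f"Longer lifecycle returns {long_lived / short_lived:.0%} of the shorter one")
# Stretching the sellable life from 2 to 8 years quadruples the return on the
# same investment, which is roughly the "400%" figure mentioned above.
```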

And then the other side is just around trust. Right? So those are kind of the big sells. But the main sell is, like, do you actually want to include, like, societal considerations in the building of your product? Do you actually want to have a holistic approach to sustainability or not?

Like, if you just wanna jump on the bandwagon and just kind of think that carbon reduction is it, and that we're just gonna trust what our suppliers say when they say they're carbon neutral, then that's one approach. Or do you wanna have someone on board who's, and I'm not a sustainability expert, but a sustainability expert who's able to scrutinise it and go, hang on a second. They're saying they're carbon neutral, but they're carbon neutral because they're actually doing a tonne of carbon offsetting. And that carbon offsetting is going to companies where we're not really sure what kind of trees they're planting, or about the biodiversity aspect of it.

They're doing it in landscapes that aren't going to support that. Like, do you want to have someone who's doing that? In my view, if you're a larger company, absolutely. Absolutely. Because you actually don't wanna be pouring money into something that ends up being corrupt.

So it's a long pitch, but ultimately, those are kind of the 3 buckets that I put things into. And it just makes you better. And you can interpret better in whichever way you want there. You know, whether it's like a moral betterment or like an efficiency betterment or, you know, whatever you want. But, yeah, I think people are mad not to be looking in this incredibly rich space, where people are graduating with PhDs the whole time and then they kind of go off and become a bartender or a lawyer, when they could be using their knowledge, using this great work, alongside maybe someone who does specialise in business transformation.

So you just set up a team where you've got this one person who's a product specialist and someone who specialises in business transformation. I'm doing a strategy for you. You can all take notes. And then suddenly you've got this incredible, like, team in a department that has a specific remit. And then you kind of let them go for 2 years.

I find it crazy that people aren't saying this. Mhmm. I think the counterpoint to that is, I think that people worry that these types of positions are a, no, we're just gonna say no to everything, sort of position. And we're gonna tell you that you're doing it wrong all the time. And I guess what you're pointing out is that it's not that.

It's a collaborative effort to make better things for a better business, for a better world, you know, all this sort of stuff. And it's not really about saying no to things. It's about how we build better. I think that's, I think you're right. And I think the reason for that comes from kind of compliance. Because compliance, if you think about compliance, it's just got, these are the legal initiatives.

Yes or no? Yeah. And that's where people get annoyed, and they're like, oh, it's illegal. And you're like, well, anyway, in my view, whatever, we're not gonna get into that. Whereas ethics for me is, like, look, it's not necessarily in a lawsuit yet, or it is in some sort of lawsuit. It's not in, I'm losing the word, I don't know.

But it's not necessarily there yet. Or it might be there in spirit. So quite a lot of ethicists do rely on, like, human rights conventions and that kind of thing. But actually interpreting it in a way which enables you to think through how you invest your money a little bit differently and, like, think through how you communicate with your suppliers is just a forward-thinking thing to do. Right?

So I do see ethics as using different methods than compliance because compliance is very much rooted in that legal methodology. And, yeah, like, as ethicists, you might be like, we've done the research on this. You know, one of our major stakeholder groups is, you know, the people that we contract and supply with, and it turns out that we're not paying them enough money. Mhmm. No.

You should not do that. But, hopefully, any good ethicist is then gonna come back with a reasonable figure or strategy. To be like, I'm thinking about that ChatGPT example, right, of, like, the Kenyan workers being paid $2 an hour. It's like, you're kinda gonna go, no. This isn't a good thing. But any ethicist should then turn around and go, okay.

Well, how do we get the research to figure out exactly what would make a humane working condition for these workers? Right? And then that just turns into business strategy as normal. It's like, you then think, okay. Well, actually, we need to make sure that they've got all of these benefits, that they've got kind of the mental health structures in place to help them through all of that.

That they're actually, like, able to have a seat at the table when it comes to making key business decisions, that maybe we don't contract it out and we bring it in house. How long does that take? Okay. So in 9 months' time, like, a quick fix for us is that we're gonna double their pay, and we're also gonna commit to these promises.

Like, for me, that's how I work, and that's how I operate. And you could construe that as just a no, but it's also an okay. Well, let's fix this. What's the solution? And how do we transparently communicate that to people so that if they do turn around to us and go, oh, actually, you're only paying $4, and this kind of annoys me, because tech companies do kind of, like, create narratives the whole time, and you're like, you're just, no.

This is a cover-up. The 2 that I always look at: I think Lush does a really good job at kind of, like, transparently communicating why they've made the decisions they have. Patagonia obviously has a really good reputation in this space as well. It's just about being proactive around it, as opposed to, if there's a media scandal, giving a comment and just being like, you know? You caught us.

Yeah. Yeah. Exactly. Oh, god. We haven't thought about that.

Like, actually, yeah, we probably did, like, on that kind of contract with, with ChatGPT. They probably just, like, went with the lowest bidder. Mhmm. They spent, like, it was something like a 120, like, 220, £1,000. No.

I'm not getting that right. It's 120, with 3 zeros on the end. So whatever that number is, on their content moderation. And it was valued at 27 billion.

Like, that just doesn't add up to me. It's like, surely you spend more money on something as fundamental as making your platform safe. It's bonkers. I think, for me, you've gotta weigh up, like, a part of that situation, right, is not that it costs so much, you know, that the Kenyan workers are being paid some amount of money. I mean, that's the reality of the situation.

But as you're doing that work, you're gonna think about, like, oh, is this actually feasible? Like, you know, is it feasible to employ people to look at the worst stuff that humans can throw at them? And what are the ethical implications of that? You know? And for me, there's a lot of, like, missed triage in doing this stuff anyway, which is probably part of my, you know, ethicist-says-no situation.

But, like... When you say triage, what do you mean? So when I say triage, like, there's this, you know, beginning of the project, let's say, and someone has this idea, and you have to look at the project viability for business, probably. Right. But you also want to look at it in terms of how it's going to have some sort of social good or impact, right? And if you're not doing that, you can quite easily get into these holes of, you know, what we're seeing with the Kenyan workers, ChatGPT and OpenAI, where people are, or indeed like Facebook and moderation there as well.

You know, where people are being, we can only assume, exploited. And, you know, there is a beginning conversation where we could have actually mitigated all that stuff. Yeah. Yeah. And it's about, yeah.

Just making sure that people are raising the right things, you know. Like, consequence scanning was something that Doteveryone produced. And it's like, actually, has that been updated? Probably not. Like, it's quite old now, the Doteveryone consequence scanning method. Yeah.

So I think, for me, there's so much that you could do with those people in those positions, where they're actually saving you so much money because you don't go down certain avenues. Right? Like, if you go down this avenue, then this catastrophic thing may happen. And it comes a bit into, like, the risk area then at that point as well. But I did some work, and a part of the work was putting ethics into the data science pipeline.

You know, so you're working on a project. How do you bring people into the room to help you with that product? How do you put in the kind of structural process which can bring in some ethical thinking or some better ways of doing things than just, we're making a product and this is what we think about it? Aligning with the kinds of people who are already in the room, and what extra things you need in the room, and extra processes, and things to think about. But at the beginning of all that process, you have to work out if it's worth doing in the first place.

Like, do we spend all this money and time and effort, like, doing this thing? Or, actually, is it way too risky, because it's gonna impact so many people negatively, or has the potential to do that? And when you're thinking about, like, arms and things like that, it's so much easier to see the consequences of those things. Right.

But everything has consequences, and giant, you know, tech firms and stuff like that, anything they produce is gonna have a consequence, because they're just at millions of people's fingertips. Right? Right. So I think, for me, the conversation's lacking in the area where it's more about what kinds of things we're making at all.

Right. And how do we make those things better if we're gonna make them, before we even consider, like, better ways of actually producing? I think you're totally right. I guess what I struggle with, there's, the chances are that someone has always done work on something. It's just about then kind of going out.

And I would then be interested in, like, how do you make that work from an organisational point of view? How do you make that feasible? Mhmm. Where do those individuals sit? Right?

Like, I'm not sure exactly how the decision comes about to start on an R&D project around a particular product. Right? Let's be honest. Quite often, it's startups, and it's the owners, you know, the founders, who just decide this is a good thing to do. Mhmm.

And they kind of, like, so do you put that responsibility on the VCs, on the investments? Like, do you see what I'm trying to get at? Like Yeah. Where does that structure come about? And I think you're really right in raising this, because, you know, that example I just kinda gave around how do you pay people better.

Like, there is an argument to say, actually, as an ethicist, you should just start at the beginning, but sometimes you can't. Yeah. Sometimes, yeah, it's not an option. We kind of all inherit these structures and biases that ultimately do come about because it's just not a priority for startup funders, startup owners. And even if it is, like, I mean, we've both owned small businesses.

Like, having to document things and write things down and document every conversation, kind of, like, that in itself, even if you're kind of, like, working on this, is really tricky. It's like, what are the actual tools that we're gonna ask founders to use and to create, and where does that funding come from? So, yeah, you're totally right. It's just, a lot of this just comes down to money. Which is my other favourite topic, that topic.

Yeah. I know you feel very strongly about democracy. Do you see democracy in and of itself at all in your kind of arsenal, essentially? Like a view which you can put onto things, like a value that we can use to pick things apart, let's say. Yeah.

I think there's a couple of ways of looking at this. So one is that people always use it almost like a verb. We're gonna democratise this. It's like, what version of democracy are you talking about? But I think Rachel Coldicutt, of Careful Industries and Promising Trouble, is really good at thinking about inclusive design.

I'm sure there are some other organisations too. I'm not so involved in that DEI space. And I can imagine you've got lots of listeners who are much more proficient in DEI than I am. But I think that's generally what you're looking at when you're thinking about democratisation of decisions within companies: you are kind of in that diversity, equity and inclusion space. And, kind of weirdly, yeah, I am really interested in democracies.

I guess I'm more kind of interested in literal political structures. Mhmm. As opposed to how you take the ethos of democracies and apply it to organisations, which are inherently anti-democratic unless they are, of course, a cooperative. Like, the structures of corporations are pretty authoritarian. Yeah. Or they're an oligarchy or whatever you wanna, you know, that's the way it is. It's like, you have to try pretty hard to make it not that, don't you?

Yeah. But the reality is that someone owns it. There is, like, there is someone who owns it or there are shareholders who own it. You've got someone who's responsible for keeping those people happy. Mhmm.

The only other structure is to have, yeah, a cooperative. And there are very few organisations that are cooperatives. So then the whole idea of trying to include democracy some way in those organisations is a bit of a farce. It is, you know, and it's also not to say as well that in some organisations, something which has a bit more of an oligarchy approach is not a good structure. So I suppose I've answered that question in a way that maybe 5 other people on the planet would answer that question.

But, yeah, I think that inclusive design decisions are really important. I think there are loads of ways of going about that. I think that one of the key things that is important to do is just ensure that you are open about the extent to which people who are taking part in that inclusive design actually have sway and decision-making potential over the final decision. Like, that's one thing.

So there's this idea, you know, like, I don't know if it's still kind of, I don't know if I'd call it a fashion or not, but this idea of citizens' juries, for instance. Like, the idea of a jury, that metaphor in your head makes you think that they actually do have the final say, whereas actually quite a lot of citizens' juries end up just being focus groups. Do you see what I mean? Yeah. Yeah. Yeah.

So they have literal power in that situation. Yeah. Yeah. Whereas a focus group is just kind of there to, yeah, exactly, provide some external perspective. So, and I do think the inclusive design is, so you've got that aspect, which is, like, you've got to be as upfront as possible with the participants about the extent to which their view matters.

Not matters, but is going to be integrated. And then, yeah, you've got to kind of offer, like, extreme transparency and extreme accountability as much as possible, within the structures of the organisation that's in play, and just be as upfront and honest about that. But that's all in kind of, like, participation and inclusive design. Like I say, Rachel Coldicutt does think a lot about this. And I'm sure there are a lot of other individuals who think a lot about it too.

My position has always been, but you need expertise as well. You know, you can't just, you've probably done this: you speak to people who haven't really thought about AI ethics, and they come out with these principles. And you're like, this is quite basic. Mhmm.

There's people who have thought far more about this. It's not that your opinion doesn't matter. Of course, it matters, but you need to have, like, a certain amount of expertise to educate people so that they can then know the right avenues to talk about. So Mhmm. There's always that kind of, like, yin and yang, I suppose.

Yeah. I know there was something on LinkedIn where we had a difference of opinion. And you said that you wanted to talk about it on Yeah. I think that was embedding teams into companies. Right.

And you said there could be an external consultant who could do that role. Yeah. I think my main opinion there is that I'm a believer in the fact that we can't do everything. So for the small startup, for example, it's going to be tough to have a dedicated person for this. So what do you do?

You either go and get an external party, and that's totally an option, and you already mentioned Olivia, and there are individuals that you can call on, or you educate your team. Mhmm. And I'm a true believer, right, in people who are studying, thinking, and doing a lot of research in this area. But I'm a true believer in the fact that the companies themselves just need to get educated.

Like, their staff need to be aware, like, everyone has to be aware at a basic level of what's going on and why this thing exists and how we can use it. And there are people who you can utilise, but there's also, like, this wealth of knowledge that we can pull together as well. How I think about it is, like, we don't need ethicists everywhere. We need everyone to be brought up by the work that academics and ethicists and individuals and companies and institutions are doing. And also, like, one of those really important things to me, and it's often extremely difficult, is just sharing.

Like, if you're a company and you've done extensive research in a particular area, and it's going to be useful, then you should hopefully be able to share some or all of that research, you know, if it has a social good about it. And like I say, it's difficult sometimes. But, I mean, it's tricky because, again, you're looking at, like, the market dynamics. But you're right. Like, if you're a small business, then what do you do? I mean, the advice that I've generally given is that if you are approaching 30 employees and you're building a tech product, and you haven't got a part-time person on board who's looking at the ethics of this stuff, then you're missing a trick.

Right? You know, should they be in the first five employees? I don't know. But, like, maybe the founders are interested in it. Right?

Yeah. Exactly. You know?

Yeah. But there's this moment that happens, I think, between that 5 to 30 mark where you've got a significant amount of investment now, or you're producing enough revenue, to bring these people on board. And even if it is, you know, a part-time ethicist who does 2 days a week with you and 2 days a week with another company, whatever. I do think I just can't stop thinking about organisational structures. And, like, you know how it is.

You're working as an external consultant. You're brought on to do a workshop, but there's no one internally to keep pushing on it. Yeah. Yeah. And that's kind of the constant frustration of being a consultant compared to, like, working internally.

Whereas working internally, you can see the improvement and the updates, but you don't necessarily get to do some of that cool research; you need to bring in other people to do that cool research. And as the consultant, you might get to do that cool research, but you then don't see that kind of shift and change. Yep. So, yeah, there are kind of arguments each way.

We're kind of just getting back to what you should be spending your money on. What is an adequate profit margin for your shareholders? Like, what are they actually demanding of you? You know, this kind of thing. Yeah.

I mean, I think that comes down to the structural nature of our situation though. In the capitalist situation, I think if you are just chasing profit, then we're all in a problem. And then we get legislation in for certain safety reasons, but often because runaway capitalism doesn't account for certain behaviours that we want to see, certain rights that we, as individuals, want to obtain or keep. So I think, for me, the capitalist situation is part of what we need to deal with, almost. But that's kind of this massive structural issue.

I don't know if you've been on a podcast before where, again, I'm very particular about defining capitalism. I guess let's talk about, like, Ayn Rand's capitalism. Let's talk about, like, individualism, neoliberalism, runaway markets, that sort of thing. Oh, without regulation. I guess because Yeah.

Okay. Because, I guess, the way that I think about capitalism, like, I think commerce is generally really good and really exciting. Like, I don't have a problem with commerce. I don't have a problem with entrepreneurs and businesses. In fact, I think, you know, they do a tonne of good, like, amazing stuff.

So you've got the Ayn Rand idea, but the best definition I've actually seen was in David Graeber's Debt. He basically defines capitalism as when you are using money to create more money. So it's not labour or resources which are then building money. It's just, like, literal investments. Like, the structures are in place to make sure that if you invest £5, you definitely will see that £5 increase Yeah.

You're doing literally nothing. And in his book Debt, he traces it back to, I can't remember where he traces it back to. But that's what I think about. It's like, if you are in a system where it is absolutely expected that someone, just by investing their money, will see a return on their investment of a certain amount, then we're in a problem. But if you're in a situation where someone's like, do you know what?

I'm doing some great work, and, whatever that labour really involves, I'm getting paid a wage for it; that for me is just commerce, and that should just be celebrated. And then regulation kind of comes in as part of that. So then you're like, okay, well, is this a problem with capitalism or not? It's like, I think, actually, it's the stories that we tell ourselves about the roles that matter in tech.

And I think that we're very familiar with the idea that a product manager or an operations executive, like, yes, they might not work in sales, but they have a really important part to play in product development and kind of making sure it is a good product. Like, people often bring on UX designers in the first 10 employees. Like, why is it that the UX designer doesn't have an ethical component brought in there? And before, you were saying as well that, you know, you might not see a role for a technology ethicist, but you might see something else like UX, and user experience and user design is really taking on a big lump of that at the moment. Do they have the skills to continue?

Maybe not. But then that comes back to your other point around education. I do think that there needs to be wide education, but there also needs to be a centre that is driving that education. And I also think that the accountability comes from the leadership. But that's true of, like, any other profession.

Like, if you've worked in tech, then you'll know that developers are very good at telling you when something's not their responsibility and where you need to go instead. Right? They're very good at saying, oh, you need to speak to the iOS developer for that, you need to speak to this person. I think that should be the case for ethics as well.

I feel, Alice, like this is gonna be a conversation that we could just keep having. So maybe we'll have you back at another time, and we can dig into the structural capitalist situation, and startup teams, and all that. I feel like you've got a nice bar there already with the kind of 30 individuals: there should be a tech ethicist there.

I think that's a nice idea. The last question we always have on the podcast is: what excites you and what scares you about our AI-mediated future? I think what excites me is, like, all of the people who are doing master's degrees or postgraduate degrees in some form who care about this stuff. Like, the research coming out from academia and from civil society is truly exciting. I'm also excited by businesses in the future just being like, woah.

Look at all this stuff. What kind of scares me is just general exploitation. You know, we need to treat each other with respect. We need to think about how we pay people a little bit more. So I think this whole conversation has come down to, like, what our organisational structures look like and how they operate.

Yeah. There's so much opportunity here. We just need to make sure that no one exploits it. Please don't ask someone to come in and speak to you for free. Don't tell them that there's no budget to speak at your conference.

Like, it's just not cool. So, yeah, I think those are the 2 things. Wicked. Thank you very much for coming on. And, yeah, I'm super excited.

I've got lots of things to think about, to mull our conversation over. How do people follow you, find out about you, all that sort of stuff? Oh, god. I don't know. It's such a shame.

It's such a shame that we've lost Twitter, now that Twitter has turned into X. Yeah. I actually don't know the answer to this question. I would say add me on LinkedIn, actually, and say you listened to this podcast.

And my potential new year's resolution is to start writing a bit more next year. So there might be a newsletter coming soon, but at the moment, there's not much. But, yes, it's always a pleasure to come on and have these kinds of conversations, so thank you for having this forum to do it. Yeah. I think LinkedIn is probably the best shout.

Very unsexy platform, but, yeah. Yeah. I mean, it depends who you're targeting, doesn't it? No? Sweet.

So thank you very much, and we'll speak to you again. Thanks, Ben. Hi, and welcome to the end of the show. Thanks again, Alice, for coming on the show. I've known Alice and been on panels with Alice for quite a while now, so it's really great to actually finally get her on the show, get it all out almost.

I think me and Alice have a lot of similar things to say, so it's really great that I was able to channel some of that through our conversation. It was really, really fun. I also feel like, across a lot of these episodes, there are a lot of repeating themes, like capitalism. And also, I really like the idea that Alice was proposing that ethics is actually part of your innovation strategy.

It's kind of part of what sets you apart, your USP. This is the thing which is gonna bring in coverage and money, and it's not gonna blow up in your face, and it's gonna be great. I think that's a really good message to go with. If you'd like to support the podcast, you can go to patreon.com/machineethics. Do get in contact with us.

With anything that you have to say about this area, or people and themes you'd love to hear on the podcast, email hello@machine-ethics.net. Thanks again, and see you next time.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design Ben and the team help organisations make better AI decisions leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford