98. Careful technology with Rachel Coldicutt

This episode we're chatting with Rachel about AI taxonomy, innovating for everyone, not just the few, Rachel's chronic honesty, responsibilities of researchers, socially responsible technology, ethics work as free labour, the right to repair, tinker, improve...
Date: 12th of March 2025
Podcast authors: Ben Byford with Rachel Coldicutt
Audio duration: 50:33 | Website plays & downloads: 64
Tags: LLM, Research, Rights, Unions | Playlists: Design, Values

Rachel Coldicutt is a researcher and strategist specialising in inclusive, community-powered innovation and the social impacts of new and emerging technologies. She is founder and executive director of research consultancy Careful Industries.

She was previously founding CEO of responsible technology think tank Doteveryone, where she led influential and ground-breaking research into how technology is changing society and developed practical tools for responsible innovation. Prior to that, she spent almost 20 years working at the cutting edge of new technology for companies including the BBC, Microsoft, BT, and Channel 4, and was a pioneer in the digital art world. Rachel is an advisor, board member and trustee for a number of companies and charities and, from 2020-2023, served as a non-executive director at Ofcom. In 2019, Rachel was awarded an OBE in the New Year’s Honours for services to the digital society.


Transcription:

Ben Byford:[00:00:04]

This was recorded on the 14th of January, 2025. We chat about how AI is now all LLMs; innovating for everyone, not just a few; responsibilities of researchers (and seriously, do people not think about the impacts of their work?); socially responsible technology; ethics as free labour or free thinking; AI as a tool in our toolbox; and our right to repair, tinker, improve, innovate.

Ben Byford:[00:00:42]

If you like this episode and would like to get in contact with us, you can email us at hello@machine-ethics.net. We're on Bluesky, machine-ethics.net; Instagram, Machine Ethics podcast; YouTube, @Machine-ethics. And if you can, you can support us on Patreon, patreon.com/machineethics. Thanks very much and I hope you enjoy.

Ben Byford:[00:01:10]

Hi, Rachel. Thanks for joining us on the podcast. Could you introduce yourself: who you are and what you do?

Rachel Coldicutt:[00:01:18]

I'm a technology researcher, something I've been doing for many years now, having worked initially in product. And I run a research studio called Careful Industries.

Ben Byford:[00:01:36]

Before we dive into... Basically, you've done a lot of things, right? And I'd love to talk to you about a lot of those different things. But we generally start the podcast with this question around AI, so I was wondering if you could tackle that with me. So for you, what is this amorphous AI thing that people are talking about?

Rachel Coldicutt:[00:01:57]

Right. This is interesting, because one of the things we do is ethics training and strategy development with people who are wondering how to get to grips with AI. And something we're about to publish, actually, is a 101 guide to what AI is. Because what I realised at the end of last year, working with lots of different people who were not technologists by trade, is that all the existing definitions tend to be for regulators, or they're written from either a legal perspective or a very technical one. There's not very much that isn't aimed at data engineers, or about automated decisions or LLMs. How I tend to talk about AI now is that it's a field of activity. So not a thing, but lots of things. And within that you get everything from research breakthroughs, like AlphaFold, things that are able to benefit from structured data and known outcomes, all the way through to those annoying little generative AI pop-ups that come in software, right. One of the things we've done is develop a bit of a taxonomy that puts all those things together within the world of AI. But it isn't a thing, it's lots of things.

Rachel Coldicutt:[00:03:42]

And one of the reasons I think the ambiguity arises is because it's in everyone's favour, basically. It's in the favour of everyone who works in industry to keep as much ambiguity there as possible. It suits Google to describe AI differently to OpenAI. I think they do that partly to say, they're not doing AI, we are. So it's completely okay to not know what it is. But I think we should think about it like biology is a field, for instance. I think about it like that: as a broad field, not a narrow one.

Ben Byford:[00:04:32]

What kinds of people are talking about what kinds of things, then, in that case? Are you seeing that there are maybe people in the industry talking about one type of area, and then the average Joe, or people in business, who are like, we don't know what to do with this stuff, and everyone's talking about this myriad of things?

Rachel Coldicutt:[00:04:56]

Yeah. So I think one of the things that's not very helpful is that over the last, I'm going to say two and a half years, in lots of the software we use all the time, we're starting to see AI assistants pop up as part of the new wave of generative AI. And so I think lots of people who are not domain experts quite reasonably think generative AI and those little chatbot things are AI, and that's what the whole thing is, right?

Ben Byford:[00:05:39]

Yep.

Rachel Coldicutt:[00:05:39]

And I think there's a lot of sales reasons that companies like Microsoft and OpenAI would make you want to think that, right? And very often what is being offered to you as AI is maybe slightly more complex decision making, but it's not particularly advanced. It's like something that has a shiny bow on it, rather than being particularly technically marvellous. But the way it works is that, because of the lack of clarity around how things are described, it would be totally reasonable for someone to use a chatbot and think they're doing something quite advanced that is connecting with very complex technical ways of working. And particularly the rise of the term prompt engineer, to me, raises questions.

Ben Byford:[00:06:53]

Yeah, yeah... The engineering bit.

Rachel Coldicutt:[00:06:56]

Yeah, it's a good example of how the language that is applied around AI is elevating. It's making fairly boring things seem extra and shiny.

Ben Byford:[00:07:11]

Yeah, yeah, yeah, exactly. And when I was looking at your website, your various websites, should I say, I was struck by the careful innovation part. So I was wondering if you had an idea of what that was for you, that careful innovation, or how it was different from other types of innovation or innovation in general?

Rachel Coldicutt:[00:07:39]

Yeah. So, I think one of the main things that is very important to me is to innovate in ways that don't leave people behind. Very often, in the AI Opportunities Action Plan that was published yesterday, for instance, there are lots of references to attracting elite talent, the cutting edge, being at the absolute forefront of the thing. And I think there's something about this methodology of leaping ahead. Every time there's a leap, that leap needs to be accompanied by a concerted effort to bring everybody along. Whereas what actually tends to happen is that the leaps carry on leaping and a very small number of people get to benefit from that.

Rachel Coldicutt:[00:08:44]

Our mission is to make technology work for eight billion people, not eight billionaires. And so really, the challenge I'd make as part of that is: there are certain things that do need to work for everyone. Who we leave out is a deliberate choice. I think there's an extent to which, within capitalism and market-driven technologies, we need to be really pragmatic and open and honest about what we can really achieve. So one of the things we do when we work with clients is get them to articulate and explain what their values are. Because the trade-offs you are willing to make look different based on who you are, what your politics are, what your services are. And I think what we see in technology as a whole is the normalisation of trading off everything. It doesn't really matter what is lost as long as progress of some sort is made. But progress for whom, and at what cost?

Rachel Coldicutt:[00:10:07]

I think really it's about two things. It's about being inclusive, and as well about having an active awareness of the consequences of the things you're doing, right? And mitigating those where you can, and where you can't, at least having an awareness of the impact that you're having and being able to write that down to some, I don't know, ethical ledger sort of a thing. Which I appreciate sounds a bit woolly, but I think it's about holding complex spaces like that. There are not so many things on which there are always definitely right answers for everyone, I think.

Ben Byford:[00:11:07]

Yeah, I mean, I think that's part of the reason ethics exists, right? Because otherwise it would be easy and we wouldn't need ethics. We would just do the right thing all the time. End. Good. Done. I really want one of the T-shirts, by the way, that says eight billion. I saw that and I was like, oh, yeah. I like that. It's a very clear and easy to understand strapline.

Rachel Coldicutt:[00:11:32]

Yeah, and I don't think it's complex, right? One of the things I find really fascinating about techno-optimism is how unmoored it sometimes becomes from the truth or facts. And one of the things I certainly see in tech policy, if I'm allowed to name names, is that the Tony Blair Institute hype everything up to the nth degree. And I think they know that what they're saying is not strictly true, but they are inflating it with that level of hyperbole in order to manipulate the end outcomes, right?

Ben Byford:[00:12:34]

Yeah.

Rachel Coldicutt:[00:12:35]

That's definitely, for instance, something that happens with OpenAI, with Elon Musk, with a lot of the existential risk people: you're almost, I think, knowingly overstating the impact, so that even if someone only meets you part way, you would get lots of gains. And one of the problems, I think, of working, or trying to work, in a more ethically sustainable way that centres values and better outcomes for people is that you're chronically honest, right? I wouldn't be able, in good conscience, to talk things up to that level, knowing it was untrue.

Rachel Coldicutt:[00:13:37]

But I think potentially we're getting to a place where there's no choice. Actually, the tone of debate is going to become just so totally heightened that there's no room to have normal, sensible conversations about it anymore.

Ben Byford:[00:14:02]

I wonder, though, because you're talking there specifically about the existential risk people. And I imagine it's almost like a belief system at that point where they think they're not hyping it up enough, right? Because they have this belief that things could and quite easily may go badly if we don't look at this stuff. So do you think there's a problem there with the known unknowns?

Rachel Coldicutt:[00:14:33]

I mean, I don't know. I'm very sceptical about this, right? I think in two different ways. So one of the things I always think, right, is about people like Geoffrey Hinton, who talks about the risks created by technologies that he has been very instrumental in creating and pioneering. There's quite a lot of egotism at play there. There's something in that that I don't really understand. I went to a talk that he gave a couple of years ago, in which he was almost laughing at the fact that in all the time he was working in the lab, he'd not really thought about people and society and what the impacts of the technologies he was making might be. And it was only later on, and he said something, and I'm going to get this slightly wrong, but I think it may be related to him being a parent or something, that he was like, oh no, I thought, actually, I do have this responsibility. And the idea that anybody might be operating in an environment where they feel like it's possible or plausible to act with zero responsibility to anybody else is extraordinary.

Rachel Coldicutt:[00:16:09]

And it seems extraordinary to me that we are elevating people who are admitting they have done that, on the one hand. And then on the other, a little bit of it seems to me like people are saying that they're godlike in their own capabilities, and oh, whoops, they've just accidentally made the worst thing in the world. And a little bit of me is like, if you're so smart, why didn't you stop? There's something about it that doesn't really... If you encounter it on a genuinely human, logical level, it doesn't really make sense.

Ben Byford:[00:16:52]

Yeah. I mean, it feels very akin to the Manhattan Project and that whole thing. And people being interested in the making and the science and the ability to do something, rather than the outcomes or the possibilities of manipulation on the other side.

Rachel Coldicutt:[00:17:10]

Yeah. Yeah, yeah, yeah. And I think there's something about the status that technologists have in society that means when people admit to those things, they're not treated as pariahs.

Ben Byford:[00:17:29]

Yep

Rachel Coldicutt:[00:17:30]

They're treated as seers who have looked ahead and unlocked a terrible, terrible thing, and we all now need to listen to them. A little bit like the fact that the ethicists with the most credibility are people who have worked for the worst companies. It seems extraordinary, because really the people with the most credibility, I think, ought to be the people who haven't done that, right?

Ben Byford:[00:18:05]

Yeah, 100%. It's bizarre, but I guess it's part of that survivorship bias as well. Is it survivorship bias? We're seeing the people who have already made a name for themselves, essentially, rather than the people who have purposely stopped, and therefore we don't know about them.

Rachel Coldicutt:[00:18:27]

Yeah, for sure.

Ben Byford:[00:18:28]

Unfortunately. Yeah, it is troubling. I wonder... taking a bit of a tangent, again. Okay, I'm going to do one more question, I guess, on existential risk, and then we'll move on.

Rachel Coldicutt:[00:18:43]

Okay, cool.

Ben Byford:[00:18:44]

So you mentioned that you don't dwell in this area, essentially. Do you think that it is maybe worth spending time on, or something that will be interesting or will happen? Or is it something that we should be spending as little time on as possible? And there's the easy answer of the 'we have problems that we need to solve right now' situation.

Rachel Coldicutt:[00:19:11]

I suppose my answer there comes in two parts. Because I absolutely think that long-term thinking is critical. But the idea that many people ought to be devoting their time and effort to mitigating the long-term imaginaries of a small group of powerful people doesn't seem to me to be a great use of time. So it's not that I think we ought to only be working on what's happening now. I do think we ought to be working on what's happening now. But I also think we need to be actually laying out more positive pathways. So lots of the work that we do is about how we can make it better, and what's the alternative? And there's an extent to which I think we get stuck in mitigating, when actually there's a lot of power in making an alternative and having it ready. And from a policy perspective, one of the things I talk about loads is that more than one thing can happen at once.

Rachel Coldicutt:[00:20:26]

So some of the work we're doing at the moment is trying to say, as well as some of the more techno-optimist, technologically aggressive moves, it would be really possible to roll out an inclusive innovation approach all over the country. We don't have to do only one. We can do both at once. I would definitely like to see more of that happening. Rather than dwelling on what very powerful people direct us to, we can be creating alternative paths, I think.

Ben Byford:[00:21:17]

And I guess, continuing from that, how does that look? You, Careful Industries, and to a certain extent before that, your research and policy work at Doteveryone, were about the socially responsible side of technologies. How do you see that playing out at the moment? How can we participate in that way?

Rachel Coldicutt:[00:21:44]

I mean, at the moment, I can't really believe I'm saying this. I've been working in technology for nearly 30 years now. I spent 20 years making things and the last 10 in policy. And what I'm so surprised by is the fact that socially responsible technology gets harder and harder and harder. Particularly, I think, in the UK at the moment, there's a huge amount of political energy directed towards a certain kind of growth. And I think what I'm seeing is that being a socially responsible technologist is becoming more and more of a niche activity. I would say, in my work, the number of people who expect our work to be free has grown: people who expect ethics or inclusive approaches to be something that we can just come and give you at no cost, that will just work, rather than intentional ways of being and doing. I think slightly it's because of all the escalation around generative AI over the last couple of years. But there is definitely a sense that there's a technology magic money tree. And I would say that those of us who are working to be more inclusive and more ethical are being increasingly marginalised and written out of budget lines. And it's very tough out there at the moment.

Ben Byford:[00:24:01]

Yeah. I mean, I was smiling when you were saying that. More of a smirk, really. It's a bit ridiculous. Some of the stuff that I do is talks and workshops and things like that, and the amount of free labour that I do... Part of that is promotion, but part of it is just dissemination. It's like, I'm trying to help you guys, and I know a bit about this stuff. Could you just pay me something? I also need to live.

Rachel Coldicutt:[00:24:40]

Yeah, it's really, really odd. I had a conversation with someone last year where they genuinely said to me, it never occurred to me you would need money. I don't know, is that because you think I'm a saint, or that social responsibility is something you need to be able to afford to do? It was a really odd interaction, in terms of them being genuinely surprised that we might need to be paid.

Ben Byford:[00:25:20]

Maybe they thought you were a quango or something or some other entity.

Rachel Coldicutt:[00:25:24]

Yeah, I mean, who knows? But there's an extent to which I think, just back to that point, ethical ways of working cannot come in at the end. We can't come in and wave a magic wand over something and say, we're going to take out all the bad bits now and make it okay. And so I think there's something about, how can I put it? I think maybe collectively, we need to be really assertive about the value of work that is not extractive. So one of the things we have been trying, and failing, as it happens, to get the funding for before, but maybe I can do a little advert for this now (Ben: Yeah, pitch it), is building a movement of progressive technologists, right? Because what we think is happening is that everyone who is doing socially positive work tends to be working totally up against it. No one has got time to do marketing. We're working with tiny budgets, and what we collectively need to do is come together as a movement. We need to be almost working as a union, right? We are putting out messages collectively. We are pitching what a world in which technology has positive impacts really looks like, and how that works, and what that looks like for people.

Rachel Coldicutt:[00:27:14]

And so there's a part of that that is about organising, a part of it that is about telling different stories and pitching alternative ways it could be. And the last part of it is about banding together to really make those things happen. The way that funding and things work just doesn't make it worth anyone's while to work collectively, because we're all competing against each other for tiny amounts of money. But it feels like, actually, in order to cut through... I'm going to get this number wrong, but the amount that tech companies spent lobbying in Europe last year, I'm actually going to look it up while I talk to you. So in 2021, tech companies spent at least €113 million on lobbying in Brussels.

Ben Byford:[00:28:22]

That sounds conservative, actually, I'd say.

Rachel Coldicutt:[00:28:25]

Yeah, right. But if you think about how little money goes into civil society, into people doing more ethical work: it's a tiny, tiny fraction of that. And so I think the only way we're able to muster ourselves against that is by working collectively and organising. And I would love to get to a point where that's possible.

Ben Byford:[00:29:03]

Yeah, yeah, yeah. Well, it sounds like a great idea. I don't know if I need to speak about this yet, but I've got this idea about environment-based unions as well. This is the first time I've talked about it, but basically the idea is that we spend a lot of time at work, and those people who are environmentally interested or action-based can sign up and start helping in their sectors en masse. It's unfair just to go, oh, we're all in it together at home, and you have to use less electricity, and you have to use whatever. It's like, well, most of us are in business, and most of these businesses do the very things that we're being asked to change at home. So we should probably be starting there. And we can do that ourselves; we can act collectively to do that. So I like the idea that there should be some loose association of ethically minded or responsible... There's all these words, isn't there, that people use? If you're allergic to using the word ethical, then you can use responsible technology.

Rachel Coldicutt:[00:30:20]

Responsible has its own issues. I do use responsible. One of the reasons we settled on careful is partly because responsibility begs the question, responsible to whom? And people have different incentives. But I do think that there is a political element here. I use the word progressive almost purposefully, because I think, for me, I would say I am certainly politically on the left. I'm interested in moving away from always having to centre capital and market value, and I want to have more conversations that centre well-being and social value. Whereas within the idea of being ethical, it would be very possible to make a case that your ethical perspective on the world leads you to, I don't know, navigate a path of extractive capitalism. It does happen. I do think, actually, we should be a bit more political, a bit more politically engaged.

Ben Byford:[00:32:01]

I like the idea that you could go down this line and then still be an extractive capitalist in that sense.

Rachel Coldicutt:[00:32:10]

I mean, I'm not going to name names here. But I can think of someone who's quite a celebrated technology ethicist, who might have had quite a lot of media coverage, who I'm fairly certain is doing some deals that I personally would not think of as ethically sound.

Ben Byford:[00:32:43]

I think I've got a few people in my head, so we'll... Yeah. I was wondering if we could go back to this idea of AI and this step change you were talking about earlier. Is there some obvious sense to you that AI can be leveraged for community action, or in a more socially minded way, these sorts of technologies, AI specifically, or...?

Rachel Coldicutt:[00:33:17]

I mean, I would always talk about AI as a tool. And so, with the caveat that I previously resisted the thinginess, it's a set of tools. And the point of tools is that we should use them to help us. So, you know, something I said previously is, imagine if the person who invented spanners said, everything now needs to have a spanner on it. We're never going to have any progress of any kind, or any changes to what people do, unless everybody uses a spanner. And I'm definitely not someone who thinks that technology is terrible, but I think that all technologies are tools which we ought to use to do the best we can. And if we've made tools that don't work, and I would define not working as actively creating harm, right, then I think we ought to make different things. And so I'm absolutely not going to say, I don't know, we should never use AI in diagnostic medicine. But if we do use AI in that context, we need to know that the quality of the data informing it means that people are not actively excluded or harmed. We need to know that we've got the skills and capabilities to make it work.

Rachel Coldicutt:[00:35:10]

I do think as well, there's this weird thing that has happened where lots of people are content with things just not quite working, with things just being a bit rubbish. Particularly if we're looking at rolling out emerging technologies, I feel like we collectively have a responsibility to demand that things work, and that they do what they do properly. And so if we're in a place where it's provable that an AI-enabled tool or way of working can meaningfully improve things and create better outcomes – great, right? But I don't think you want to be using it just because it's there.

Ben Byford:[00:36:06]

Yeah, yeah, yeah, yeah. And I guess, because of that hype we were talking about earlier, there's a lot of, well, we just need to use it. We just need to use it, quick, use it, because it's the thing that we need to use now, whatever it is. So if you're out there and you're wondering how you should use AI, contact Rachel at Careful Industries. Contact us at the show if you have any other questions or people you want us to talk to. Do those things. Let us know that you're listening. I had a wonderful conversation recently with an ethicist who says some lovely things about the show. So if you're not an ethicist listening to this, or even if you are, please just let us know that you're interested in the podcast and everything that's going on.

Ben Byford:[00:36:58]

So I've got another tangent, which I want to get to before we start running out of time, if that's okay. Thank you very much again for coming on. I've got a couple of questions, and some of them relate to this eight billion again. I'll do them in this order. So, I'm presuming I know what the answer is, really: should all technology basically be Google, Amazon, Microsoft, OpenAI? That's the first question. Second is, can I have a T-shirt? And then after that, I want to explore more the extent to which this stuff can play out, which I guess comes back to existential risk a little bit. But in the first instance, should everything be those big companies? And if not, what can we do about it?

Rachel Coldicutt:[00:37:56]

No. And I think there are lots of things we can do. One of the things we've been doing for the last few years is working with a group of community technology organisations. These are people who largely make their own technology, either because what they need or want is not available in the market, or because they don't want to align with the values of what is available in the market. And they are hugely various. We've got, within that community, social care co-ops, energy co-ops, skate parks, arts centres. So people doing all kinds of different things who are brought together by both values and technical capabilities. And I think one of the things that has happened increasingly over the last 10 years is that the ways technology works have become more and more obscure.

Rachel Coldicutt:[00:39:07]

So not only are you not able to take the back off your phone, but you're actively encouraged not to. The idea that we cannot tinker, repair, improve, I think, is ridiculous. One of the thought experiments I quite like offering up is: what would it look like if, as of tomorrow, we had no new technology and we had to live with, adapt and change what we have now? We could do such a lot within the constraint of improving and developing what already exists, without always glomming onto this idea that there'll be a new thing. Like the idea that you might even have to upgrade your phone every year or couple of years in order to have the processing capability to use modern tools. But think about how extraordinary it is that many, many, many people are walking around with computers in their pocket that are much more powerful than what was used to send the first rockets into space. The amount of untapped potential that we all have at the moment, that we're not using because we're giving that work to Google and Amazon, right, is huge. And I think that, with organising and creating space and opportunity, and I'm just in the middle of writing some proposals for what a network of innovation-first organisations all over the country could do to enable everyone to realise the technologies we have. I think we're missing out on loads, basically.

Ben Byford:[00:41:30]

Yeah, I'm singing from the same hymn sheet as you. The analogy I always give is that most people are watching YouTube and using Excel and Word, you know what I mean? And Alan Turing would be rolling in his grave, just like, what are you doing with these amazing machines? This incredible amount of compute and power and possibility space, I guess, that's the thing. It's like this huge possibility space, and we're doing these four or five things provided to us, like we're saying. It's quite astounding, really.

Rachel Coldicutt:[00:42:15]

Yeah, and I think we've all become content to outsource that work. I know it's partly because life is hard and complicated, with many competing needs, and, frankly, capitalism is exhausting. But we are realistically going to need to live differently, right? The climate is not going to heal itself without everyone making different choices. I know that it's probably really unrealistic to expect people to make personal trade-offs like that. But I think we could certainly make consumer, out-of-the-box technologies easier for people to reuse, manipulate, and do other things with, and the curiosity that that would enable might lead to some really interesting, different things.

Ben Byford:[00:43:28]

Yeah, definitely. Going to ask you, this is the penultimate question. So there's obviously a lot of hype, but do you think that this technology will evolve into, I guess, this other belief system, with transhumanism and all these things clashing together? Do you see there being this AGI or superintelligence thing coming out the other end and occurring in the world, or how do you feel about that trajectory?

Rachel Coldicutt:[00:44:04]

I mean, no.

Ben Byford:[00:44:08]

Okay, moving on.

Rachel Coldicutt:[00:44:11]

I would say, on AGI, right. Did you see, I think it was last week or maybe before Christmas, I can't remember, someone leaked an OpenAI-Microsoft memo in which they defined achieving AGI as making a hundred billion dollars?

Ben Byford:[00:44:40]

Right.

Rachel Coldicutt:[00:44:41]

And I think that tells us a lot about where that is.

Ben Byford:[00:44:49]

Yeah. It's all about what you measure and if they're measuring just the money, then...

Rachel Coldicutt:[00:44:55]

Yeah. And there is a great bit in Meredith Broussard's Artificial Unintelligence where she describes Hollywood AI, and the fact that Hollywood AI has captured lots of imaginations. Probably, I think, we're looking at narrow AI achieving more and more. But, a little bit like P(doom) and existential threat, I think AGI is almost like a religious belief, right? There's something about it: when people who are good at maths say they believe in something that is implausible, we believe them. But if the same people were coming and telling us about miracles they had witnessed, and they weren't good at maths, they would probably be decried as illogical. I do think it is almost religious.

Ben Byford:[00:46:17]

Yeah, I don't know. I think it's a nice thing to think about, in the same way that science fiction ponders different ways of being and different futures and things. But it doesn't necessarily need to take over our general ideologies.

Rachel Coldicutt:[00:46:37]

Yeah, for sure.

Ben Byford:[00:46:38]

Well, thank you so much for coming on the show.

Rachel Coldicutt:[00:46:42]

That's great. Thank you.

Ben Byford:[00:46:44]

The last question we normally ask is what scares you and what excites you about our AI-mediated future?

Rachel Coldicutt:[00:46:53]

What scares me is genuinely the fact that if you are a man in one of those jumpers with the half-zip thing at the neck, and you've got a deck in which lots of graphs show numbers going up exponentially, people will believe you without doing any diligence. There's just a huge amount of power that these, I think, largely unproven narratives have, and it's very difficult to cut through with just a bit of pragmatic, sensible stuff. I do genuinely find that quite scary. As for something that excites me... I mean, the thing that has kept me working in this space for such a long time is that I think technology is amazing, right? Even just on an everyday level, it enables such extraordinary things for all of us. And what keeps me going is the idea, maybe a slightly utopian idea, that it can be used and repurposed to liberate in a joyful, abundant way for everyone. And it doesn't just have to belong to the billionaires.

Ben Byford:[00:48:40]

Well, again, go buy the T-shirts. Yeah. Rachel, thank you so much for coming on. How do people find you, follow you, all that kind of thing?

Rachel Coldicutt:[00:48:50]

I am very easy to find on the Internet. I did leave Twitter last year. You can find me at Careful.Industries, that's the site, or I'm likewise very, very easy to find, particularly on Bluesky.

Ben Byford:[00:49:13]

Great. Thank you so much, and keep up the good work.

Rachel Coldicutt:[00:49:17]

Thanks a lot. Bye-bye.

Ben Byford:[00:49:19]

Hi, and welcome to the end of the podcast. Thanks so much to Rachel. She's one of those people that I've seen at conferences and expos, and I've followed her work for many, many years now. So it was really exciting to talk to her about some of those questions. Obviously, it's quite worrying what she was saying about how ethics is sometimes seen as free labour, or something that people should just be doing out of the goodness of their heart, as if it's not real work. That's slightly troubling, considering, obviously, we have to design things and make things in a certain way which is, hopefully, in my mind, good for people, good for the environment, and good for many, many more people, not just a few, as Rachel was discussing.

Ben Byford:[00:50:12]

Thanks again. And if you want to find more episodes like this, you can go to machine-ethics.net. And if you can, you can support us on Patreon, patreon.com/machineethics. Thanks again, and I'll see you soon.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; code, design and data science teacher; and freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design, Ben and the team help organisations make better AI decisions, leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford