95. Responsible AI strategy with Olivia Gambelin

For Olivia's 3rd time on the show we're chatting about Olivia's book on Responsible AI, scalable AI strategy, AI ethics and RAI, bad innovation, values for RAI, risk and innovation mindsets, who owns the RAI strategy, why work with an external consultant, agentic AI, predictions for the next two years, and more...
Date: 11th of December 2024
Podcast authors: Ben Byford with Olivia Gambelin
Audio duration: 57:46 | Website plays & downloads: 20
Tags: Responsible AI, Risk, Strategy, Author | Playlists: Responsible AI, Generative AI, Values

One of the first movers in Responsible AI, Olivia Gambelin is a world-renowned expert in AI Ethics and product innovation whose experience in utilising ethics-by-design has empowered hundreds of business leaders to achieve their desired impact on the cutting edge of AI development. Olivia works directly with product teams to drive AI innovation through human value alignment, as well as executive teams on the operational and strategic development of responsible AI.

As the founder of Ethical Intelligence, the world’s largest network of Responsible AI practitioners, Olivia offers unparalleled insight into how leaders can embrace the strength of human values to drive holistic business success. She is the author of the book Responsible AI: Implement an Ethical Approach in Your Organization with Kogan Page Publishing, the creator of The Values Canvas, which can be found at www.thevaluescanvas.com, and co-founder of Women Shaping the Future of Responsible AI (WSFR.AI).


Transcription:

Ben Byford:[00:00:00]

This episode was recorded on the second of December 2024. We chat about Olivia's new book, Responsible AI, Scalable AI Strategies, the Intentions of RAI or Responsible AI, Bad Innovation, Risk and Innovation Mindsets of Responsible AI, Reasons to Work with External Consultants, the Importance of Context and Use Cases of AI, as well as predictions for the next two years.

Ben Byford:[00:00:42]

If you like this episode, you can find more at machine-ethics.net. You can contact us, hello@machine-ethics.net. You can follow us on Twitter, machine_ethics, Instagram: MachineEthicspodcast, YouTube: @Machine-ethics. And if you can, you can support us on Patreon, patreon.com/machineethics. Thanks very much and hope you enjoy.

Ben Byford:[00:01:13]

Hi, Olivia. Welcome back.

Olivia Gambelin:[00:01:17]

Thanks, Ben. It's great to be back for the third time, I think.

Ben Byford:[00:01:21]

Yes. I actually had to look it up, and I realise this is our bi-yearly meeting, apparently. I love it. I feel like I obviously follow some of the work that you're doing from your websites, your book, which we'll talk about, and LinkedIn and stuff that you put out. But we first talked in 2020, and then two years later, or maybe a bit longer than two years because it was a review of 2022, and now we're back at the tail end of 2024 as well. So I think maybe we'll have to talk about the next two years and then we can regroup in 2026.

Olivia Gambelin:[00:02:03]

Exactly. Ben, I think you're one of the only people in podcasting that can do Olivia through the years. You're going to be able to track my thinking. Hopefully, I'm not going to be embarrassed by anything I said in the past.

Ben Byford:[00:02:15]

We're always embarrassed. I think there's a thing about the right to forgetting, right? Because people do say stuff, right? And people do change their mind on stuff, and I think that's fine.

Olivia Gambelin:[00:02:27]

Exactly. We got a lot more research and a lot more knowledge now than we did four years ago, for sure. Exactly.

Ben Byford:[00:02:32]

We're so much older.

Olivia Gambelin:[00:02:33]

For sure. That, too.

Ben Byford:[00:02:36]

Yeah, my bones. Anyway, so I brought you here today because I haven't spoken to you in two years. And I realised that last time we spoke, we actually spoke about so many things. I was looking over the stuff that we were talking about. So if anyone's interested, we talked about the new wave of Gen AI stuff and copyright and things like that. But we also spoke about consciousness and the Chinese Room. And it went all over the place, which is fabulous.

Olivia Gambelin:[00:03:06]

I got to give a listen to that episode.

Ben Byford:[00:03:08]

Yeah, it's really good. And obviously, it's the tail end of COVID, so there's probably a bit of that in there as well. So how are you doing? Are you okay?

Olivia Gambelin:[00:03:19]

I'm doing really well, Ben. I think 2024 was, let's summarise it as, the good chaos of the year. And 2025, I mean, it'll be interesting when we meet again in two years. Right now, I'm really excited for 2025, and I think in two years I'll be saying 2025 was a really great year. But we'll see.

Ben Byford:[00:03:42]

Yeah, yeah, yeah. Well, I'm hoping that it will be a good year because we're projecting, we're like, we're going to make it-

Olivia Gambelin:[00:03:49]

We're manifesting right now.

Ben Byford:[00:03:49]

Yeah, we're manifesting, right? 2025. So I graciously received a copy of your new book, which was out in 2024, which was called Responsible AI, and it has a longer title, doesn't it? What's the longer title?

Olivia Gambelin:[00:04:08]

Implement an Ethical Approach in your Organisation. And I know that having to say the entire title of my book, I'm like, All right, deep breaths, and here we go.

Ben Byford:[00:04:18]

Here's the whole thing. Yeah, but when you look at it-

Olivia Gambelin:[00:04:20]

I mess it up all the time, too.

Ben Byford:[00:04:22]

And it's like responsible AI, boom, and then... Yeah, exactly. And I have to say that I read half of it, so all my questions and my knowledge are based on the first half of the book and a cursory look at the rest of it. So forgive me if you have to be like, oh, well, actually, that's in the later bit of the book, and I've covered that, which is always a problem because I'm massively dyslexic, so I have to power read to the best of my ability, which is not great.

Olivia Gambelin:[00:04:54]

It's a thick book, too, so don't worry.

Ben Byford:[00:04:57]

Yeah, I mean, it's an enjoyable book. It's one of those books where I'm the person who is interested in the space, so it makes sense for me to absorb it in one way. But I guess the first question is, who is this book for?

Olivia Gambelin:[00:05:14]

Well, this book is actually for more of an AI audience rather than the responsible AI audience. Yes, the responsible AI audience, obviously this is all for you folks. This is built off of our years of honestly building this industry, this market, this field of knowledge. But you're not the main target audience, I would actually say, because hopefully, as you're reading this book, you're like, well, yeah, no doubt. This makes complete sense. This is just well-organized information I already knew. Hopefully, that's your experience. But it's for more of the AI folks, the practitioners, well, truly the business leaders that are looking to implement or develop AI within their organisation. This is for them. This is for the business leader that sat down going, I am faced with this AI imperative, and frankly, I'm excited and slightly scared, and I want to be able to do this well from the outset. I mean, at the end of the day, this is a strategy book for good business in AI. That's it. Nothing fancy.

Ben Byford:[00:06:17]

Great. And it does feel like a coming together of all these melting pot ideas, the things that you've done, and it has some examples and case studies in there. But it's this wealth of information going, well, here's a way you can do it, so just go away and do it, I guess.

Olivia Gambelin:[00:06:36]

It was one of those... I kept getting faced with the question over and over again, What am I missing? Where do I start? What am I missing? Where do I start? I said, Okay, I'll just write a book on that so I can give an easy answer. Just go read this. It's your manual on what you're missing, where to start, and how, honestly, to build a scalable AI strategy on those responsible foundations that everyone knows today are so key to success in the long term.

Ben Byford:[00:07:03]

Yeah. And I've got some quotes here. One of my favourite ones is, "Adopting an AI strategy is important, but ensuring it's the right one will lead to success". I'm paraphrasing a little bit there. But I feel like probably if you're a larger organisation, you probably have something going on in this space. But you need to keep up to date, but also you need to be like, Well, what makes sense for my context and sector and people? And there's a big focus on people in the book, which is one of the things I like because you need people to do stuff, right?

Olivia Gambelin:[00:07:43]

Exactly. Having a strategy is important, but you need people to actually execute on the strategy.

Ben Byford:[00:07:49]

And not just leave it on the shelf or in the file somewhere.

Olivia Gambelin:[00:07:52]

Exactly.

Ben Byford:[00:07:53]

One of the things I noticed, which I thought was amazing because I am one of those guys, was that you often mention responsible AI and ethics.

Olivia Gambelin:[00:08:07]

Yes.

Ben Byford:[00:08:07]

And I wondered, for you, does that mean that responsible AI doesn't also include ethics?

Olivia Gambelin:[00:08:13]

So how I break it down, or for me, how I conceptualise it, is responsible AI and ethics are two different, let's call them disciplines or practises. Responsible AI, with how the industry approaches it, is like this umbrella term. So underneath responsible AI, you'll find things like AI ethics or AI risk and safety or AI governance. It's this umbrella space of doing AI well. That's how I'm really dumbing it down, but making AI that's not falling apart or messing up parts of society and humanity. That's responsible AI, versus AI ethics is a very specific discipline within that. And how I break it down is responsible AI is good business practise, done. You can do responsible AI without ever touching ethics, because all you're doing is making sure that the business functions in a way that's supportive of AI done well.

Olivia Gambelin:[00:09:12]

And by well, I mean being able to bring models into production, being able to adopt it, being able to just actually use these tools. Because you can do AI really wrong without actually impacting humanity, but really wrong meaning screwing up your team or messing up your workflows or wasting time and resources. Dumping time and resources into a black hole. That's AI also done wrong in a business sense. So responsible AI is basically just getting those good business practises in place that allow you, enable you, empower you to actually engage with these tools.

Olivia Gambelin:[00:09:46]

Versus AI ethics is a very specific corner of that practise where you're focusing on specific values. So ethics is where you actually engage with these principles that we like to talk about. So you'll hear transparency, fairness, privacy. I throw those around like buzz terms, but those are actually specific ethical principles. And when you are engaging in the practise of AI ethics, that's when you are engaging with not only those principles, but also human values like empathy and creativity and curiosity. So dividing those out, I think, is really important because all companies need to do responsible AI, point blank. You literally, you cannot function in AI without those responsible foundations. Not every company is going to engage in AI ethics for a variety of reasons. It's the companies that engage in AI ethics that really get the competitive edge in the long term as well, because they're designing and developing and using these tools in a way that is coherent and complementary to how we as a society function. So it's that human bridge aspect there.

Olivia Gambelin:[00:10:55]

So yes, they are different practises in my mind. Dividing those out also helps me when working with clients to get around very sensitive topics when we're stuck on an ethical dilemma. I can detach from that dilemma, get the foundations in place that will enable us to actually make the ethical decisions and bring them into action, by moving the conversation into responsible AI, where it's just business practises. And then that gets us moving forward, gets us unstuck from a lot of those sticky ethical conversations as well.

Ben Byford:[00:11:31]

It's weird in my mind. I think I'm on the same page as you. I see it the other way around. I see AI ethics as the whole-

Olivia Gambelin:[00:11:42]

I like it.

Ben Byford:[00:11:43]

You know?

Olivia Gambelin:[00:11:43]

Yeah.

Ben Byford:[00:11:44]

Maybe because originally people called themselves AI ethicists, and it was a bit like... And then responsible AI came around because I feel like people are allergic to the word ethics, right? Sometimes.

Olivia Gambelin:[00:11:56]

Always.

Ben Byford:[00:11:57]

Which is a shame because it has these connotations attached to it. At the beginning of the book, you do cover some of those things which make businesses, specifically, more concerned with, are we going to do ethics? Because I don't know how that's going to work. And we get stuck in this position where we're not moving forward. So I think you do cover some of that stuff. And like you say, you can work those streams in tandem, I guess.

Olivia Gambelin:[00:12:26]

Yeah.

Ben Byford:[00:12:28]

I felt like the whole structure of what you were aiming for was value-led, right? So you have these overarching values, and they're extremely important because they bleed through to all the parts of what's going on in that strategy. So I wondered if that was important and if there was a way of doing that incorrectly. Because if you do that stage and you are embedding values which are, let's say... it's hard. Bad is a difficult word, but they are going to be not good for society, let's say.

Olivia Gambelin:[00:13:11]

Misaligned.

Ben Byford:[00:13:11]

Yes, exactly. Yeah. So do you think there's a thing there that could happen?

Olivia Gambelin:[00:13:16]

I mean, there's always a possibility. There's always a possibility to pick a value, and either the value or how you're defining that value is not necessarily aligned with the needs of your stakeholders or just society as a whole. It's definitely possible. We're humans. This is part of our DNA. I would say that if you are going through the practise, through the process of actually selecting those values, having those definitions, testing them both internally and externally, you are far less likely to actually have selected and focused on a value that is misaligned. But if you blindly go in and say, yeah, we do ethics, we're good people, without actually asking, well, what are those driving values? To me, the mistake is more going in blind and saying, well, we're good people, we'll figure it out, versus actually selecting those values.

Olivia Gambelin:[00:14:12]

But there is a chance. But I wouldn't say it's selecting the... It's not like a bad negative selection of those values. It's more of they weren't strategically the right values to select for what you're building for your company, for the market or industry that you're operating in, or for society that you're serving.

Ben Byford:[00:14:36]

And I guess an extension of that, does your strategy or your thinking around this account for bad intention? Because obviously there's an intention side of it as well, right? And for me, there's this thing around where things can get co-opted, right? An example of this is habit building and nudging, this idea of nudging. You can do small things over and over again over time and change people's behaviours a little bit. And I feel like these days, UX has been co-opted a little bit into the capitalist machine and for profit. And it's like, well, UX used to be about building better for people, right? And bringing people into that design process. You can use those same skills or the same techniques to design so that you are manipulative, basically, or exploitative. So I wonder, is that part of how this process could be co-opted, almost?

Olivia Gambelin:[00:15:45]

Yeah, it definitely can be. I mean, any process can be. Anything can be corrupted. It's part of life again. I think for me, my intentions in this book and designing this framework and the ability to build these strategies, and really the focus is I'm not writing for the... How do I put this nicely? I'm not writing for the people that are coming in with ill intentions. I'm not writing for the, I'm going to say, the lowest rung of AI development. When you get to that manipulative UX, to me, I look at that and I'm like, That's just not good innovation. That is a default into a mode that works for a little bit. But if you just look at any of these, they're crumbling. I mean, yeah, we had UX, let's say, for Facebook that was designed to keep users on the screen. Talk to the youngest generations. No one uses Facebook anymore. It works in the short term, that manipulative approach. But long term, people get tired. People wake up, people see that this is not what they want, and there is a pushback. So I'm not writing for that lowest rung of ill intentions. I'm writing for the people that see, that have the motivation to actually rise to the occasion, that have the drive to actually redefine this cutting edge of innovation in AI.

Olivia Gambelin:[00:17:23]

So yes, it can be used for ill intentions, but those ill intentions are short-sighted and eventually will crumble. Frankly, and I say this, let's say, with a grain of salt, it's not my job to catch the ill actors. That's the job of regulators and lots of amazing people with different nonprofits and different organisations that focus on helping balance out those ill intentions. I'm writing, and my job is to work with people that get this as the future of AI and have that desire and need to push it forward in a direction that's in alignment with what we as people want and need, not just technology for the sake of technology.

Ben Byford:[00:18:16]

Yeah, great. Because we don't want to make things which are unnecessary, I guess. Just.

Olivia Gambelin:[00:18:26]

We've made a lot of things that are unnecessary.

Ben Byford:[00:18:28]

I know, I feel like that's what we like doing, making unnecessary things.

Olivia Gambelin:[00:18:33]

It's easy to make unnecessary things. It's really incredibly hard to put in the work to find the necessary things that actually matter.

Ben Byford:[00:18:42]

Yeah, that's what we're here for, hopefully.

Olivia Gambelin:[00:18:46]

Exactly. I like to challenge people.

Ben Byford:[00:18:48]

I think you briefly mentioned it, actually, because there's another thing about what you had in the book around this risk and innovation thing. So you've got these two ways of strategizing and moving forward. And part of that is you could take the risk approach and you could take the innovation approach. So I wondered if you could describe that briefly. And then my question around that is whether you can do them both at the same time.

Olivia Gambelin:[00:19:19]

Yes, so risk and innovation. A lot of times, ethics and also responsible AI is framed as risk: we have to avoid the risk, we're reducing risk, which is an incredibly important side to all of this, but it's only one side. You also have the innovation side, which you hear me talk about much more often. That's where we are using our values and these responsible foundations to actually engage further with our creativity, engage with that innovative side.

Olivia Gambelin:[00:19:50]

So how I break it down, it's pretty simple. On the risk side, you are in a preventative mindset. You are protecting. You're protecting your stakeholders. You're protecting your users. You're protecting your company. You're protecting for those values that you have selected. It's a protective mindset. Let's say privacy. Let's use privacy as a value here. How do I protect my users' privacy? How do I protect against, let's say, data leakage that would be in violation of privacy? I'm preventing violations against privacy. And again, I selected just a simple value example there.

Olivia Gambelin:[00:20:31]

The innovation side, though, you are asking questions of how do I align here? How do I design for these values? How do I use these as guardrails? Not even guardrails. How do I use these as a compass that points me towards a purpose or a direction or a strategy or an angle and an objective that I am looking to achieve? And on that side, let's use privacy again. You're talking more of, how do I design for privacy? Is this in alignment with privacy? Does this redefine what privacy means for my users in a way that is coherent and resonates with my users? It's a creative mindset.

Olivia Gambelin:[00:21:16]

So you have the, how do I protect? How do I align? Either or. You can do both, like you said. You absolutely can do both. Some companies, some industries are better suited for the innovation side or the risk side, depending on these values. But ideally, a mature company is able to have a balance between the both. They've done the risk side. They've insured against the pretty obvious mistakes that companies are falling into. Well, I would call them obvious, blatantly obvious, although I'm always proved wrong about how obvious they are. They've protected, but now they're also engaging and pushing forward. And so you absolutely can engage with both. You can break it down into, for some values, you may be more risk-focused, for other values, you may be more innovation-focused. A mature company will have a balance between it.

Ben Byford:[00:22:15]

And it's up to them to choose the appetite for that strategy and align the company with that.

Olivia Gambelin:[00:22:24]

Exactly. And I go into it a little bit in the book about... Actually, I have a whole framework in the book. I should remember this. I wrote the book. I have a framework in the book that helps you walk through where your focus should be, if it's more on the innovation side, if it's more on the risk side. But it breaks it down per value, too. So you can get that overview of not saying only one, but which to put more time and resources in, which to put the strategy behind or alignment with.

Ben Byford:[00:22:54]

I think there's a perennial question in my mind, which is, who does this sit with? There's the receiver of the responsibility for the strategy, right?

Olivia Gambelin:[00:23:10]

Yes.

Ben Byford:[00:23:11]

And in the book, you talk about the business level, and then there's maybe a project level, depending on if the business has multifaceted services, products, stuff going on. And in my mind, I think there's people who are perfectly... If you don't have an ethics team or a due diligence team who are interested in ethics or interested in a resource way, then I feel like, in my head, when I've worked in various companies, there's the project managers, and then there's maybe QA. And I'm like, cool, man, could QA be the harbinger of the metrics for things being done? You know what I mean?

Olivia Gambelin:[00:23:57]

Yeah.

Ben Byford:[00:23:58]

Do you have an opinion on the kinds of people who could do stuff like this?

Olivia Gambelin:[00:24:03]

Well, I do think I go into this a little bit in, I think it's chapter 12 or 13.

Ben Byford:[00:24:11]

It's going to be the end of the book. I haven't read yet.

Olivia Gambelin:[00:24:12]

Yeah, exactly. It's towards the end of the book where I go through, I remember, a couple of the main business functions: if you're coming from more of the marketing, PR, communications background, what your role is, versus what your role is, say, as a product or project manager, quality assurance, versus you are a C-suite in the technical department. Everyone has a role to play. That's what makes responsible AI something that can be a bit difficult, because it quite literally is change management. You have to incorporate the entire business, and everyone does have a role to play if you're doing it well. Who it sits with, if you don't have an ethics team, I see oftentimes this strategy sits with the technical heads. So I'm watching a lot of companies hire chief AI officers or VPs of AI or so on. I mean, if you're not, then you're going to be left behind in this next year, I would say. You need actually someone to have their eye on that. And that person, because they own the AI strategy as a whole, should also be owning... Well, this is how I say it. AI and responsible AI are synonymous when it comes to that strategy.

Olivia Gambelin:[00:25:26]

So the strategy side is owned by whoever owns the AI strategy, and we're seeing people hired into that position, or chief technical officers, heads of data and so on; it depends on the company's structure. But they're the ones that are typically owning these strategies. But there are roles to play throughout the entire organisation.

Ben Byford:[00:25:48]

Yeah. So everyone's involved.

Olivia Gambelin:[00:25:50]

Yes. It's a team effort. Yeah.

Ben Byford:[00:25:53]

Come on, guys. Let's do it. And obviously, there's yourself, ethical intelligence. There's people like me sometimes and other people who can come in and deliver some of this stuff.

Olivia Gambelin:[00:26:08]

Yes. Which I find very helpful for companies when you work with an external consultant because we provide... We're like a neutral party. So because we're not internal to the system, we're not caught up in trying to... I'm going to put it nicely. We're not playing corporate politics. We're there to do a job. And so for us, our main motivator is ensuring that your company is doing this well. That's it. We're not trying to get a promotion internally. We're not trying to get more budget or team allocated to us. We're literally there to support your functionality in AI. So I see a lot of companies opting for that external consultant because of the neutrality and because we're coming from backgrounds of experience of working in multiple contexts. It's not just I've done this in one place. I've done this for countless organisations of different shapes and sizes in different markets and with different technologies. So throw at me whatever you want. I've seen it. We can deal with it.

Ben Byford:[00:27:19]

Yeah. I guess it's extremely useful. I always think there's a size issue. So do you think it's harder for smaller companies to get this right than the larger corporations?

Olivia Gambelin:[00:27:36]

I think smaller companies have the advantage that it is easier for them to get the practises in place because they're not a behemoth. It's far easier to train 20 employees than 2,000 or 20,000 employees in terms of that scale. So the rate at which smaller companies can pivot onto these responsible practises is far faster. Where they're strapped is in terms of resources. So smaller companies aren't necessarily going to be able to afford building out an ethics team or a responsible AI team, or even having a position allocated to this. And in some cases, it can be difficult to afford, say, a higher-end consultant that comes in with that experience, because this is a niche field with a very niche expertise, and there's a handful of us out there that know what we're doing. I'll put it that way. It's very easy to ask ChatGPT questions and get responses that aren't very helpful. So I would say smaller organisations have an advantage in terms of ability to change and adaptability, a disadvantage in terms of having to find where to pull the time and resources from, not just for responsible AI, but for AI itself, the digital transformation that needs to be undergone.

Ben Byford:[00:29:11]

Go buy the book.

Olivia Gambelin:[00:29:12]

Exactly. Go buy the book. Go buy the book and go give it an Amazon review because we are beholden to the... I am also beholden to the algorithms. The algorithms.

Ben Byford:[00:29:20]

Other vendors are available.

Olivia Gambelin:[00:29:23]

Yes. Other vendors are available. You can go through my publisher. You can go through, I think it's Goodreads or something like that. Ironically, and you can cut this out if you want or you can keep it in, being an author in the space of responsible AI, one of the things that you have to pay attention to is Amazon reviews, point blank, because how your book is reviewed on Amazon and the number of books purchased through Amazon does significantly impact how your book is received in the market, just with how Amazon works and how a lot of people's book purchasing habits go through Amazon. Ironically, as a responsible AI author, my audience and my main base does not use Amazon. Yeah, exactly.

Ben Byford:[00:30:20]

Yeah. I was just thinking that. I was like, I was definitely avoiding looking up on Amazon or purchasing. You know what I mean?

Olivia Gambelin:[00:30:27]

Because I had someone there, they were like, damn, you don't have a lot of reviews on Amazon. They're like, your book's not doing well. I was like, well, no, my book's actually doing very well. My audience doesn't like Amazon. So it's not a good place to look for whether or not this book's doing well.

Ben Byford:[00:30:46]

Yeah, that's really unfortunate, isn't it?

Olivia Gambelin:[00:30:50]

It is what it is. You know what? I'm gauging success based off of the number of books that I'm selling, which is counter to what you would think from the lack of Amazon reviews.

Ben Byford:[00:31:03]

So go review on Amazon. Don't buy it on Amazon, right?

Olivia Gambelin:[00:31:07]

Exactly. Exactly. Go give it a nice thumbs up on Amazon.

Ben Byford:[00:31:11]

So I feel like I'm having like a... So I'm nearly middle-aged, right? So I feel like I'm either having a crisis or the world of AI is changing around me. And that is what it has always been.

Olivia Gambelin:[00:31:30]

Probably a bit of both.

Ben Byford:[00:31:31]

A bit of both. Nothing to do with my crisis.

Olivia Gambelin:[00:31:35]

Really good crisis every so often.

Ben Byford:[00:31:39]

Just bought a brand new motorbike. No. I had this realisation recently, right? So follow me along with this if you can, because I'm hoping you'll have some answers and it'll be illuminating for me personally.

Olivia Gambelin:[00:31:57]

All right, I'll put my ... guru hat on.

Ben Byford:[00:31:59]

Yeah, exactly. Yeah, exactly.

Ben Byford:[00:31:59]

It feels like you and I have been thinking about this space for quite a long time now, and other people, obviously, many academics. And we've got this idea of AI ethics and responsible AI and all this stuff. And it mostly pertains to the building of AI products, right? So you're building stuff and you have this AI pipeline, data science pipeline. And there are certain times where you might want to consider doing certain types of ethics-based work or responsible-based work along that pipeline. And you can say some stuff about the product at the end and the usage, but it's contextual a lot of the time. So it depends what you're using for. And there's loads of good examples of this in using algorithms or AI products in distributing money in public sector and stuff like that and all this stuff.

Ben Byford:[00:33:00]

But more recently, we've had generative AI, so LLMs, stable diffusion stuff with images and video. And it feels like everyone's just suddenly sidestepped responsible AI. And there's a couple of big players, and they've just produced some products, and everyone has to be okay with that. And it feels like we've been talking about the making of these things, and now everyone's asking us for how to use these things, and whether they should use them and how you can use them responsibly, and all this stuff.

Ben Byford:[00:33:38]

So I have some things to say about that, but it feels like there's this massive amount of other work or other stuff now in this new situation. So I don't know how you've tackled that or you feel similarly or not.

Olivia Gambelin:[00:33:55]

Yeah. Actually, oh, man, I love how you've broken this down because it is very true. I would say back when we spoke in 2020, Olivia, through the years. Exactly. A little sidebar there, Olivia, through the years. Back in 2020, I was definitely much more focused on the design. I've always been more on the design rather than the development side of AI. For me, they're different. Design is designing the features, the scope, the system as a whole. Development is actually building it. And for me, I've always been more on the design side. That's just how my brain works. So, yeah, probably back in 2020, and even over the last few years before generative AI, I was much more focused on that design and development. Then with generative AI, there was a very clear shift towards adoption. How do I use this? And it's been very interesting. I think it's opened my mind up a lot more, actually, in my thinking on my design brain, because of this experience of suddenly going from working with people on how to embed ethics, how to align different design features with our values, to all of a sudden being considered an expert in AI because I understood the technology, and then, how do I use it? What do I do with this?

Olivia Gambelin:[00:35:29]

What it clued me into, and actually where a lot of my excitement, next rounds of research, next book, everything, I'm foreshadowing here, is I'm going back towards the design side, but I'm bringing with me a very clear understanding that, yes, we can build and develop this technology, but if we ignore the context and use of it, anything that we do on the design and development side won't actually matter. That use case tends to be far more important than the technology itself. And I know that sounds weird, but at least for right now, that use case is really heavily influential. So I'm enjoying actually working with clients on their adoption because it is showing me... It is highlighting a lot of these gaps and holes that we were ignoring, where we develop interesting technology, and then all of a sudden the user is not using it how we expected. And it's showcasing a lot of the limitations and the failure points of AI and where we are not fully meeting our needs. So I would say right now, I'm really enjoying the client work on this adoption and implementation.

Olivia Gambelin:[00:36:54]

It's opening up a lot of what I find are opportunities as I'm side questing in my off time, back down towards my design hole. But I like how you painted that out. It really was early days, AI ethics was very much more focused on the making of the technology, versus now it's, how do I use this technology?

Olivia Gambelin:[00:37:20]

But okay, I'm almost done. I'm almost done with this monologue, Ben. Sure. I think this is pivotal for us because back in the early days of AI ethics, something that I didn't realise, definitely didn't know in 2020, I just assumed everyone was using AI. Now, because of the past two years with generative AI, I've come to realise no one was using AI back then. So we were working in AI ethics and responsible AI, and we didn't even have AI. Now, companies have to use it. So we are being forced to actually engage with responsible AI practises in a way that the right conditions have manifested. The right conditions have manifested for us to actually need responsible AI. And so this is a huge opportunity and influx of responsible AI work, to where then we'll be back moving in a direction where we can look at working on the technology again, because we're going to be creating that demand cycle. As we're building these responsible AI practises, part of those responsible AI practises is demanding from vendors better technology, which is going to put us back on that build of the technology, which is the fun cycle that we're in.

Ben Byford:[00:38:46]

I almost think, as you're talking then, I was like, damn, are we going to be like, there's this new thing and it's called responsible generative AI.

Olivia Gambelin:[00:38:57]

Oh, my God.

Ben Byford:[00:38:58]

Because people love the terms.

Olivia Gambelin:[00:38:58]

I'm sure someone's coined that.

Ben Byford:[00:39:00]

Yeah, exactly. People love these branding terms. So we're doing responsible generative AI or Gen AI now, maybe. We just haven't realised yet.

Olivia Gambelin:[00:39:10]

Rgen AI? I don't know. We should get more acronyms.

Ben Byford:[00:39:14]

Rgr. Rgr. Yes. Yeah. Good. So in response to that, you mentioned that people are talking about it and people are getting you to think about it. Is that in terms of using some of your thinking and your work to deliver some of that into, we want to use text models or image models, and how do we do that responsibly? Is that a thing that is directly coming down the pipeline?

Olivia Gambelin:[00:39:47]

Yes. And for me, a lot of the work that I have right around the corner is more actually working with companies to become AI-enabled. And these are companies that are operating in highly sensitive markets where they have no choice. They're like, we cannot end up in the news about screwing this up, but we have to do this in order to survive the change, not even survive, but we want to lead the digital transformation that is happening, and we can't screw this up. And so I become the "she's going to make sure we don't screw this up" person with those AI strategies. And this is more because of my expertise. The book has a lot of influence on this. I sit a couple of layers up on that entire organisational AI change management. I'm less so on individual projects, not saying that I'm... Every so often I do dive into those. But right now, the driving need, at least for me as Olivia, is sitting on those AI enablement strategies.

Ben Byford:[00:41:03]

Yeah, and giving them the tools to feel better about using these technologies.

Olivia Gambelin:[00:41:09]

Yeah. It goes from... I'll walk into a meeting and I can see people are going, We got to talk about AI. We got to do AI. We're really nervous. And I come in, I'm like, All right, guys, don't worry. We can do this. There is a way to do this that doesn't compromise your entire company. Here's how we're going to start. And you just get that sense of, Okay, I can use the cool, shiny technology without the AI overlords taking over and me losing my job. Great. Win-win for everyone.

Ben Byford:[00:41:40]

I feel like I'd be behoved to not ask... I guess, with those people in the room, I think the most prescient, I'm probably saying the wrong words here, the most pressing question is probably around, am I jobless? Am I losing my job? Is this it, guys? You know what I mean? I sit around and write emails all day and...

Olivia Gambelin:[00:42:04]

Oh my God.

Ben Byford:[00:42:05]

You know?

Olivia Gambelin:[00:42:05]

Yeah.

Ben Byford:[00:42:06]

Okay. So is that something that people are asking about? Or can you talk to your feeling about some of that stuff?

Olivia Gambelin:[00:42:15]

Yeah. There is... That is a massive concern. We'll start there. That is a massive concern. I think that is one of the... Oh, I'm sure I should start making up statistics on how often I'm asked that question, because I'm sure I could ballpark it pretty... Yeah, exactly. Honestly, it's probably about 80%. I'm pretty sure I could gut check ballpark that pretty easily, because I get asked that question all the time. It is that fear. Because it's really being driven by that generative AI aspect. And what I explain to people is, if you are, let's say, mediocre at a repetitive-based job, you're probably going to be out of work. But there is still hope. I'm an optimist, so there is actually hope. With generative AI and these models, you actually still need a core foundational knowledge base of your own field if you are even going to begin to engage with these models. So for example, I do not know how to code in Python. I cannot sit down with ChatGPT and code a website in Python. I don't even know where to start. I would have to spend probably months of talking with ChatGPT just to understand what questions to ask in the first place to get the right code, to build in the right... I don't know where to start. Just as much as an engineer cannot sit down and engage with ChatGPT on these highly complex ethical dilemmas and strategy build and the work that I do, you need to have a core foundation of, what questions am I asking in the first place?

Olivia Gambelin:[00:44:04]

So being able to ask the right question is now more important than ever. And in order to ask the right question, you have to know what you're talking about. You have to know where your gap in knowledge is in order to pinpoint it and fill it. So coming back to people being afraid of losing their jobs, yes, if you're mediocre and you refuse to adapt with the times, you're probably going to lose your job. But if you are a critical thinker and you focus on, how do I ask the right question? What do I do with the information once I have it? Then you're setting yourself up for being highly competitive in what's turning, what is morphing into a very interesting job market. Were we five years from now... Well, our horizon is two years then. So two years from now, there are going to be vastly different job titles out there.

Olivia Gambelin:[00:45:02]

And that should be something exciting, not terrifying, if you are willing to rise to the challenge. That's the key difference. Are you going to push yourself or are you going to sit back and fear what you're losing currently?

Ben Byford:[00:45:22]

Yeah, exactly. So there's a bit of hope that what you will lose is something that you shouldn't be doing anyway, essentially. Yeah. In an ideal world, we all have leisure time and chillax, right? I think this idea of the Jetsons hasn't quite hit us, right? No. The '50s, '60s future where everything's flying cars and lounging around and drinking cocktails and stuff like that. If we could push more in that direction and less in, like, we're all cleaning toilets and stuff. That'd be great.

Olivia Gambelin:[00:46:02]

Yep, I would agree.

Ben Byford:[00:46:06]

Okay, so we briefly chatted before the record about the AI, EU.

Olivia Gambelin:[00:46:16]

EU AI Act.

Ben Byford:[00:46:18]

Yes, the act is coming in. And there are other things also coming in in different places as well. But the EU one is the most well-structured and comprehensive one, because it means that you can't deal with anyone in the EU without looking at it. Is that exciting, interesting for you personally? Is it a combination of a lot of this work that we've been doing over the last five, six years?

Olivia Gambelin:[00:46:49]

It's exciting for me in the sense that there is a very clear market for AI governance work now. It's quite literally, it's like the EU went, all right, here's an entire new industry, have at it, to be able to support this. I think it's exciting to see us on a global... Humanity on a global scale, because let's be real, the way we're so interconnected, it's very hard to develop and not eventually have to engage with the European market. So this isn't just for the European market. This is really a global reaching regulation. It's exciting for me to see that as people try and set standards in what used to be a very Wild West, have at it, good luck development space. So that part is exciting to me. Me on a personal level, I'm not much of a compliance and policy person. I'll always be very honest with that. I work with some brilliant minds that are, I mean, I don't even know how they do it, the depth of their expertise in the policies and the constant change in governance spaces. It's a lot. So I tend to work with people that have the mindset for that, because I don't.

Olivia Gambelin:[00:48:16]

My mindset, and you can probably look at this recording and easily pull it out, but I'm much more on that design innovation side. I'm a strategic thinker. That's always been me. So I am very excited to see the work being done that the EU AI Act is creating. I'm not excited to do that work because that's not the work that I do. I work with some brilliant minds that do do that. I work in tandem with them. So I guess in that way, I'm excited for the space that it is allowing me to further dig into the areas that I find really interesting, which is on that strategy side, which is on the innovation side.

Ben Byford:[00:48:57]

Yeah. And I guess part of that is, like you're saying, it's prime time for responsible AI stuff. It's acknowledging that, okay, we do want this to be a thing. If you're using it in this context, we want you to show your workings, essentially. And to tell us about it and to be able to be held accountable and all these nice things, and not just be like, here's a thing, and you just have to deal with it, guys. Don't worry about it. It's not going to be hurtful or problematic at all.

Olivia Gambelin:[00:49:35]

Look the other way.

Ben Byford:[00:49:36]

We're nice people, so don't worry about it.

Olivia Gambelin:[00:49:40]

Yeah, exactly.

Ben Byford:[00:49:41]

Yes. I mean, that's one of the things that I always talk about: you could be the most well-intentioned set of individuals and still do this badly because you haven't considered all sorts of things or thought about or reflected on all the things that go into making AI products.

Olivia Gambelin:[00:49:59]

Yeah. You didn't get the right foundations in place to be able to act on those intentions.

Ben Byford:[00:50:07]

Nice.

Olivia Gambelin:[00:50:08]

That's key.

Ben Byford:[00:50:09]

Wrapping up. I remember what I was going to say now, actually. I feel like, let's make predictions, okay? So we've already had one. So the next two years is coming, and it's nearly Christmas here in the UK and in other places around the world. I think the new big thing for me is agentic or agent-based AI. I feel like for me, if I was working in this space, it'd be like, okay, we were thinking about doing Jeeves or some butler type thing. Now, it's probably time we should get on that. Have you got any other thoughts about how the agent or other AI techniques are going to play out in the next two years? Is there any futurizing?

Olivia Gambelin:[00:51:02]

See, these are always hard to do because I'll say something today and then a week later, something else will come out. I always find the predictions hard, at least right now in the AI market. I would second, I would agree with you on that, agentic. At least over the last month or two, I've really seen that rise. And conceptually, it makes sense. It's a very humanistic way of engaging with... It's a very humanistic interface, basically, where it doesn't matter what your AI literacy rate or level is, you can still engage with AI. Plus, we've already been introduced to aspects of it. We've had Siri, we've had Alexa for years. It isn't a foreign idea to us to be able to have an agent to engage with. I would say in terms of generative AI, a move away from the giant foundational models. From what it's looking like, we're going to have a few providers of those large language models, the massive foundational ones. And I've seen far more adoption, use, and development in terms of narrow use cases, narrow Gen AI models, ones that are even closed systems where they're built solely on a company's data and for specific purposes. It helps with the accuracy. It helps with privacy concerns. It counterbalances a lot of the major concerns that the foundational models introduce. Those narrow and smaller models, smaller databases, smaller demand on our energy usage, I would say that is where we're going to see a lot of development just further in the generative AI space.

Olivia Gambelin:[00:53:07]

And then I'll have another prediction that's AI-related, AI-adjacent. I think we're going to see a bit of a backswing, meaning I'm watching younger generations. I'm listening to people talking now, and there is this desire for less, actually. There's a desire for less technology, less rapid development, less having to wake up every day and relearn a new model. I'm not saying that the development will slow down, but I think at least with the younger generations, we're seeing businesses pop up that are built off of meeting in real life. I mean, literally locking your phone away and not touching it for an hour. So I don't know exactly how this is going to play out, but I do see, as we have the rapid development and adoption of these AI tools, I think we're going to see a bit of a, maybe subculture, maybe different movements that are far more vocal and focused on disconnecting from the grid.

Olivia Gambelin:[00:54:29]

So kind of AI related. But I see that already happening and gaining more attention.

Ben Byford:[00:54:38]

Yeah, nice. Well, I personally think that's a good thing. And I wonder how those things coalesce, like you were saying.

Olivia Gambelin:[00:54:46]

Exactly.

Ben Byford:[00:54:46]

Yeah. Well, thank you very much for your time, Olivia, on this sunny day. And how do people find you, follow you, read your book?

Olivia Gambelin:[00:54:59]

Well, Ben, it's been a pleasure as always. And I'll see you in two years. You can find me at oliviagambelin.com. Through my website, you will be able to find my book. You can find the different platforms and providers that you can purchase it through. You can also find my newsletter through my website. My newsletter is called In Pursuit of Good Tech, and I cover a wide range of topics. In the new year, I'll be covering a lot of methodology for AI adoption, as you were asking. Then moving into some of what I call values-based product innovation. I'm playing with some new subjects there, some new, well, let's say frameworks and methods there that I'll be trickling out through that newsletter. And yeah, otherwise, you can contact me through the website or connect with me on LinkedIn. If you really have a burning desire to talk with me, please use the website. LinkedIn is one of those black holes that messages go to die in. I eventually respond, but my LinkedIn is a little bit overrun these days. But I do eventually respond and I do like talking with people. So reach out, and yeah.

Ben Byford:[00:56:15]

Great. Well, thanks very much for your time, and I'll see you next time.

Olivia Gambelin:[00:56:18]

Likewise, Ben.

Ben Byford:[00:56:24]

Hi, and welcome to the end of the episode. Thanks again to Olivia for coming on the show for the third time. It's always a pleasure speaking with her and to see what she's been getting up to, how some of her ideas have changed, and of course, what she's looking forward to in the future. I think it'd be quite interesting to see how our predictions have played out when eventually we come back together, maybe in two years, and chat about them again, maybe with a new book in hand, and definitely with much more experience and all that.

Ben Byford:[00:56:56]

If you're looking for help on AI or Gen AI or responsible AI in your corporation, organisation, etc, then do get hold of Olivia. You can also contact me at the Machine Ethics podcast, or you can check out the consultancy, EthicalBy.Design, as well. We're having a flurry of activity here on the podcast towards the end of the year, so expect an end of year roundup episode filmed with myself and Karin Rudolf very, very soon. And we already have some episodes in the bag for early next year with some excellent speakers. So please stay tuned for those as well.

Ben Byford:[00:57:30]

Thanks again for listening. And if you can support us, please go to patreon.com/machineethics, and I'll speak to you next time.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; code, design and data science teacher; and freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since, Ben has talked with academics, developers, doctors, novelists and designers on AI, automation and society.

Through Ethical by Design Ben and the team help organisations make better AI decisions leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford