73. 2022 in review with Olivia Gamblin

For this end of year episode I'm joined by Olivia Gamblin to discuss: ethics boards, generative image models and copyright, concept art, model bias and representation in generative models, paying artists to appear in training sets, plagiarism, ChatGPT and when it breaks down, factual “truth” in text models, expectations for AI and digital technologies generally, limitations of AGI, inner life and the Chinese room, consciousness, robot rights, animal rights and getting into AI Ethics...
Date: 1st of February 2023
Podcast authors: Ben Byford with Olivia Gamblin
Audio duration: 01:17:37 | Website plays & downloads: 206
Tags: Consciousness, Robot Rights, ChatGPT, Ethics board, Ethicists, Bias, AGI | Playlists: Philosophy, Consciousness

Olivia is an AI Ethicist who works to bring ethical analysis into tech development to create human-centric innovation. She believes there is strength in human values that, when applied to artificial intelligence, lead to robust technological solutions we can trust. Olivia holds an MSc in Philosophy from the University of Edinburgh, with a concentration in AI Ethics and a special focus on probability and moral responsibility in autonomous cars, as well as a BA in Philosophy and Entrepreneurship from Baylor University.

Currently, Olivia works as the Chief Executive Officer of Ethical Intelligence where she leads a remote team of over thirty experts in the Tech Ethics field. She is also the co-founder of the Beneficial AI Society, sits on the Advisory Board of Tech Scotland Advocates and is an active contributor to the development of Ethics in AI.


Transcription:

Transcript created using DeepGram.com

Hi, and welcome to the 73rd episode of the Machine Ethics podcast. This month is our end of year 2022 episode, where I get to chat with Olivia Gamblin about the year just gone. This episode is more of a relaxed kind of ramble chat to see in the new year and for insights of the year gone past. And we seem to get through quite a few topics. We chatted about ethics boards, generative image models and copyright, who and what is represented in the images created by these models, paying artists to appear in training sets, ChatGPT and when it breaks down, plagiarism, factual truth in text models, limitations of AGI, consciousness, and, indeed, are you a robot?

Robot rights, animal rights. We also discussed getting into AI ethics and the job of an AI ethicist, whatever that is. If you'd like to hear more episodes from us, you can go to machine-ethics.net, or you can contact us at hello@machine-ethics.net. You can also follow us on Twitter @machine_ethics, Instagram machine ethics podcast. And if you can, you can support us on Patreon at patreon.com/machineethics.

Thanks for listening, and hope you enjoy. Hi, Olivia. Thanks very much for coming on the podcast. Again, I think you're one of the, I think there's only 2 people we've had on twice. So, I'm hoping it's a a privilege, but also it's a privilege for us.

So thank you very much for coming on, the podcast and talking about yourself, how you've been thinking about, 2022, the last year, and what's been going on. So hi. Hi, Ben. And it's great to be back again. I am gonna say it's an honor to be back.

So Yeah. This year, I feel like, has been another roller coaster for AI, certainly. A little bit less of a catastrophic year for epidemics and things, which is nice. Obviously, we've had a lot of things about the environment and all that sort of stuff. So hopefully people are all aware about our current global situation in that way. But we're not here to talk about that. We're here to talk about AI and AI ethics and the impact on society and all those sorts of lovely things. So I know we've got a few things to talk about, but how has this year been for you and your organization, Olivia? So us at Ethical Intelligence, we've grown our team and we actually had a very cool kind of pivot about halfway through the year. We started pivoting to focus more on products.

So before when we first started out, we're about 3 and a half years old. When we first started out, we were working in more consulting services projects, research development, sometimes even just like a good old think tank. But when we started, there wasn't really any information in this field. It was still brand new. It was still this kind of what is AI ethics?

How would this even be applied? And over the past few years, we've done all these projects. We've collected all this, what we call data, all this information on how a company can actually practically apply ethics. And this was the year that we looked at all of our data and we went, hold on, we've got some really good stuff here. We can actually start to make some products that scale to help companies.

And these ones are the basic structural things. So one of the things that we did is start supplying ethics boards, which has been a very, very cool experience for us. I know ethics boards can kinda get kickback sometimes, but we figured out the format that works with tech teams, and we describe it at times kind of like your conscience. Like, if you've ever seen Finding Nemo, you've got Dory poking around saying, like, I'm your conscience. It's almost like these ethics boards are used like that by tech teams, where they're deep in it and they're working on it, like, should I be able to use this data set, or what should I be working on or watching out for here?

And they'll ping their ethics board and be like, hey, is this okay? Or what should I watch for? And their conscience comes back and says, it's all good. You're good. Keep going.

Or like, wait a second, dude. Wrong direction. So we we we like to joke around this. It's lighthearted, but we figured out a format that worked with tech teams. So we started rolling out these ethics boards and then we also started building up blueprints around how to implement specific ethical principles from an operational standpoint.

Like, how do you get your team and your processes set up to be able to handle trust and then make the technical decisions? So that was probably a longer answer than you're expecting there, but this year has been a really cool pivotal year for us at Ethical Intelligence of just growing the team and starting to focus on building out some products, still doing all of our our fun service, research projects, but being able to channel that that insight into products that can reach a wider audience. And I think on top of that, you've also been doing, like, you're saying other stuff, but you also have a like, an article newsletter, which I've been reading, that you put out. Yeah. Yeah.

We've got something called EI Insight, and that's our newsletter. That's done by Adil. She's our head of marketing and it is fantastic. Once a month she takes a group of our EI experts and collaborates on a tech scandal that came out and then writes about, you know, how could this have been done better? So it's not attacking the company or the scandal or something.

It's just like, here's what went wrong and here's how we can actually make it go right. So it's that insight into, you know, that company made a headline, let's not do that. Let's not repeat that same mistake. And then on top of that, we have The Equation, which is our quarterly tech ethics magazine. We just had our last issue come out.

It was on the business case for ethics. And our next issue, which I know is gonna lead into a little bit more of what we wanted to talk about, Ben, here, is actually gonna be on generative AI since that's been such a big topic of this last year carrying into the beginning of this year. Yeah. Yeah. And how are you feeling about this subject?

Right? You were talking about large language models and image generation, and I guess these both come under generative AI, if you're putting this big banner on it. And we've had things like this, which is more recent, but over this year we've had this idea that we have progressively become much better at taking a small language model, and I think 'small' in inverted commas. Small. Exactly. Small, like, I don't know, big 8 years ago, but, like, now small.

And matching that up with image recognition image creation stuff that we've also been looking at, and you get this, ability that we can guide images into creation, stable diffusion, all that sorts of stuff. And this is it just feels like it's suddenly just exploded, right, over the last year. We've had this ability to create stuff before, and we've had NVIDIA looking at, how you paint scenes and interpret that into, generative kind of photos with this kind of kind of painting style. So we've had all these projects, but suddenly, we're able to write something down. I want the Mona Lisa smoking a cigarette, you know, and it would just try.

It would try and produce you a thing, because it's seen cigarettes in images. It's had a caption saying a woman smoking a cigarette, and then it's using those things that it's seen before to augment into this actually very amazing and imaginative, almost, imagery. So what's your gut reaction, before we kind of deep dive into some things that people were saying about this stuff? Oh, gosh. Okay.

So I have so much to talk about, both in terms of models like DALL·E and then ChatGPT. So you've got the image generation. You've got the text generation. So much to say in both of those, but overall, my gut reaction, and this is more on a personal level: I am challenging myself this year, since ethicists are trained to look at where are the holes first, where are the holes, where's the risk, how do we safeguard against that, I am challenging myself this year to actually, as the first question, be asking myself, what could this technology be used for in a good way? Like, what's the cool side of this?

What's I don't know the cool side, but what could we use this to do to really be tech for good, that pursuit of good tech? And this is just on a personal level of I I wanna take that positive route this year, and so I'm trying to do it now with generative AI. And so my gut reaction is, wow, that's impressive technology. Look. I mean, we have to admit that.

It's very impressive what these models can do. And on top of that, my gut reaction, the ethicist is kicking back in now with the risk factors. I see it as, and I'm painting broad general strokes, not to be ironic with that use of term there. But the systems themselves, it's going to be interesting to see what information is fed into those systems, because if it's the same information, so say, take DALL·E, for example, we're using images from the Internet, fed into DALL·E, and then what happens when those images start becoming the same images that are fed back into DALL·E, what kind of feedback loops we're gonna end up in there. So there's that concern, also just, of course, you're feeding information into these systems.

ChatGPT breaks on certain subjects versus it's pretty strong in other areas, and you can kind of tell, okay, there was more influence or there was more information fed and trained on in this specific subject area. Nothing wrong with that, but it's just something that we need to be aware of. And then the second part of where I am more curious slash concerned, because I have this motto of curiosity this year, but curious slash concerned. It's just more in the use cases. What are we actually gonna be using these models for?

There are great use cases and there are ones already that we've seen where you're looking at going, oh, come on guys. Really? Like, again, so that's more of my first gut reaction of, let's see what the technology can build and do and what we can build with that technology, but let's actually be conscious about how we're engaging with this technology. That's where I think the main conversation is now, with their general access. Yeah.

I think, it's funny because I've I've already have used it. Right? So if I Oh, yeah. Definitely. And I had a toy example to start with, obviously.

I was using Midjourney, so there's several of these. There's DALL·E, Midjourney. There's several you can actually download onto your own computer, which is nice. And phone. And phone.

Yeah. Yeah. Yeah. Apps, pre-trained apps. And, I think, OpenAI and Google also have their own image ones, which I can't remember what they're called, but they're out there.

They're all in there. And one of the things, that you were you were saying, right, was, like, what are the use cases? I was I had this, idea that I was gonna make a poster, for this upcoming event I was running, And I was typing in something into Midjourney, and I was guiding, you know, the image in this way. So write some text. I want faces, cartoon, little game controllers, dudes.

And it produced this thing. Right? And it was, like, amazing. I was, like, this is great. This is, like, this will work.

This will be fine. But then I noticed at the bottom there's, like, this caption or, like, a signature sort of thing. And you can imagine, you find lots of images on the Internet where there's watermarks or there's signatures or there's something, you know, it's the normal thing to do, putting your signature or your logo or something on something. So I was like, okay, cool. I'm not just gonna use this image for my poster.

It's it's just got this weird mark on it, And it Yeah. It's almost someone's signature, but it's kind of an an amalgam signature. Interesting. So then I took it as, like, this jumping off point, and I had somewhat the skills to then go into Illustrator, Adobe software, and and, like, make my own version, basically. And to go, Yeah.

Cool. This is, like, my starting point. I'm gonna make my own poster. This is the sort of buy by 1. So at that instant when I saw that that signature, right, I was like, okay.

Like, is this image, like, pretty much just someone's image. Right? Or is it that the model expects there to be some sort of signature there? And I don't know what the answer is there yet. And it's like, cool.

Like, you know, lots of people were talking in this way about plagiarism and stuff like that, but I actually don't know technically how, how explicit the plagiarism is, weirdly, in the output, even though obviously people are talking about, you know, stylistic Yeah. Stuff. But this instant of me using it and just being like, oh, I can't use this image because it's got some signature on it, but it's not actually some signature. I was like, I don't know what that is. Now I'm scared.

And Yeah. I haven't heard of that happening yet. So I think it's I haven't heard of anyone running across, like, a sort of signature thing, so that's really interesting to me. But I would definitely side with you and be a little bit freaked out about, am I actually accidentally taking someone else's artwork that I have no idea. I mean, the Internet is full of people and artists and Yeah.

I mean, I dabbled in graphic art way back in the day too. Please, if any generative model is listening, do not take any of that artwork. It is not good. It was done in a graphic design class way back in the day and it's funky. But it's like you never know.

You have no idea, because people have their own little blogs and maybe have a following of 5 other people, but it is still their art. And so I think the fact that there is a little signature popping up is interesting. It either says something about the need for artists to sign their own work, which is good, to be able to sign our work, or it has taken it from another artist's work, but there's no way to tell. That's more of the, we don't know where that's coming from. And that's where the creepiness and the questions come in.

But, yeah, that's funky. I haven't heard of the signature one yet. Yes. You broke it. I know.

I did break it. And the the other time I used it specifically, obviously, was to play around with, like, portraits of my face, because we're narcissistic. Right? Of course. Of course.

And then I used it for, like, concept art. So I'm I was building a thing and I was like, I need this. I was thinking about the style, like, what's this gonna be? And I was like, make some concept art. And the way I think about that is 1, concept art is it feels like a real skill, right?

So it's like, there's a real skill that people have. They are amazing artists, but also the concept art doesn't necessarily get into the finished product in a way that some of the other art does. So I'm kind of torn in this idea that you could just do concept art with a generative model, or, weirdly, you could probably just Google it or, like, search (other search engines are available), DuckDuckGo it, and find images which are similar to what you were looking for anyway. So this is almost like we've got this really complicated tool to do stuff that we already do in a different way.

So there's I guess, but it it depends on your use case there as well. So, yeah, I'm not really sure about that as a Yeah. And it's it's lowered the barrier to being able to generate digital art or concept art because beforehand, you you did have to know some type of tool. I'm sure a lot of us use Illustrator with Adobe. This isn't a promotional for them, but that's just the the default for a lot of us.

And that can be a high barrier to entry for people if they just they don't know that tool. It's a it's a lot of buttons, and they do a lot of things. Mhmm. And you can spend the time and effort to learn all of the shortcuts and the the cool little tricks that you can do or you can look it up online, but that takes a lot of time. Even if you're willing to invest that time, it still takes a lot of time to get that skill level up and then to create the art itself.

Again, another barrier of time and effort and skill. Mhmm. Whereas with these generative models, you can actually get the specific ish image. You can you have more role in, I really want a green tree with, I don't know, popcorn blooming on it. I can kind of generate that in a style that I that I want, that I can feed in.

It's not exactly what I'm picturing in my mind, but I can get it close enough that I'm happy with it and I can use it versus if I were to Google or DuckDuckGo or Brave, a different, image along those lines, I can only imagine what would come up on stock imagery. That's more entertaining than anything, but it it won't again be as close to what I want. So it's in a way having these general generative models image generative models Yep. You do have that level of personalization that hasn't been available before. And, I mean, speaking speaking as founder of Ethical Intelligence, we do a lot of content creation.

We we, pull together a lot of different resources in ethics, and we get stuck sometimes with, like, how are we gonna depict ethics? We can't do the robot and hand touching again for the umpteenth time. Like, what other imagery can we use? And so we have this conversation as a team all the time. Do we actually wanna start using some generative some generative imagery?

Of course, we wanna say what that imagery is and probably even list the prompt because it's that that part's interesting of how did we get to that image. Mhmm. But in a way, like, we're expected to do all of this content production constantly. We're tired of the stock imagery. This might actually help us get a little more creative even with the imagery that we're using.

Yeah. It's weird because it it it almost sounds to me like, the problem there is the the grind of production and and being relevant and keeping on top of just being a content producer. But also the fact that, you know, for example, I know that DeepMind, there you go. Yeah. DeepMind open source, they put Creative Commons on some of their imagery that they produce, some illustrators and three d work that they did.

And I think it's on Unsplash. It's somewhere. Yeah. So you can just download that stuff. So I don't it's also this copyright thing.

Right? So if it was easier to use images, if there was some images to use, you'd probably just use those images. Right? You wouldn't need to, like, generate them with this. So there's a copyright thing, which is also kinda Yeah.

We need to write the rules of the road, write the rules of copyright now. I like to think of it this way: these generative models, the engineers and the people creating those models, that is their own art form. Like, for engineers, their model, that is their art form. Their code, that's their art.

But that's not other people's art. Other people's art, like the painters', their art form is the canvas at the end of the day, and that is now being fed into these models. And so we need to have a discussion about where is that copyright, who has actual ownership over this image. I am firmly in the camp that it is not the AI's image. You can't have an AI artist that sells millions off of the art.

That is a system or company behind it benefiting from the people that wrote the code and the people that created all the imagery that was fed into it. There's just too much there where I don't think that that is possible, and that is deeply disrespectful to the people that have both built the system and the artists that have fed, willingly or unwillingly, their imagery into it. But when it comes to these rules of copyright, even though the code itself is the art form of the engineer, I don't think it's engineers or companies that should be driving how that copyright works. At the end of the day, the data that these models are being trained on is someone else's life work, often. And the amount of time and effort and love and care that goes into an artist's work, that needs to be respected.

So we need artists actually to be able to write these laws of copyright. And that may get a little complicated for these models, but, you know, you can't go in and say, hey, please, do the purple tree with the popcorn coming out of it. Terrible example. You think I'd be more creative, but purple tree with the popcorn coming out of it and Olivia Gamblin's art style. I don't think I have a strong enough artistic style to actually be able to to feed a system that, but say I did, say I was a world famous artist.

Then really that system is now taking a style that I have perfected and I have loved over years years of development. There needs to be some type of copyright in my direction for that because that is my style. That I own that. That's like my intellectual property on a level. So there needs to be more conversation.

This is not something that engineers can come in and say, it's fine. We've done all of this. It's good. Like, we created these models, and it's separate enough from the art. No.

Artists also need to be in that conversation. And also understand that that for for engineers, this is their own art form. This is the cool thing about art is it's the artist always in conversation with something. So now let's have the conversation between the artists and the engineers. Let's have that that conversation.

That itself will be an art form. I've gotten really deep into it. Yeah. Yeah. I think I think I don't disagree with what you're saying, but, like, I think it's one of those, like, devil's advocate things here.

I think if the copyright issue is so tangled, I think if the artist you know, it's it's from which direction are you speaking. So if the artist was involved in that direction, then they probably wouldn't have their artwork as part of one of these big data sets. Right. And if they did, then they probably want to be compensated in a way which makes sense. And I don't actually know what makes sense.

So if anyone solves that and gets back to me, that would be awesome, and I would talk about it. Like, I am very curious here. Yes. But How are we gonna make this work? Thanks.

Exactly. Yes. Because you're right. Like, Olivia Gamblin, right? Yep.

Your artwork might be great, but it might be a fraction of the amount of data. Like, when we talk about biased data, or bias in datasets, what we're really talking about in this way is that, you know, 0.00001 percent of the total dataset is not gonna influence it, probably, in a way that's gonna make sense for, like, asking it to be in the style of Olivia Gamblin. So there are other things you could do. There might be an Olivia Gamblin, like, decoder that you, like Yeah. Patch onto the back of the model, and it's a different Yeah.

System at that point or, like, augmented the system. And again, we should make that. But Exactly. Great ideas for anyone listening. Yep.

But as it stands right now, like, you can ask for Picasso. Right? Because there's going to be Picassos and you can ask for, like but you're not gonna be able to ask for, certain people who are, you know, not present in that dataset. So you have, for example, I actually did this. If you ask for it, if you type in, like, superhero, you're going to get a DC or Marvel character.

Yeah. That's what superheroes are for the data. There's a lot of images of them. Exactly. Right.

There's a lot of images in them. So you do get this heavy bias in the generation of the images anyway. So you're already biasing, like, who gets seen almost, like Yeah. In that. Even though if you're asking for, like, something weird and obscure, it's going to give you something that it knows about.

Right? Yeah. Exactly. It's that that we know that there's gonna be bias coming into this, and I think it's just more even more and more the case of artists still need to be respected and valued because if you ask an artist to paint a picture of a superhero, they will be more creative. They're not they're not beholden to what dataset they pulled from the Internet.

And that, I think, is still important. And, you know, back to your point as well about ownership, maybe it is impossible to do the direction of being able to trace back, okay, this artist's work influenced this, like, pixel on the Yeah. Exactly. On the image. But, you know, is there a system that we can build that runs complementary to something, to, say, DALL·E is just the first one that comes to mind, but one that you can run an image check of, just ensuring, okay, this is original enough.

Like, the originality of this generated image is not pulling, like, 50% or something from this one artist's work. So in the case that you had with the signature, you'd be able to run that image through another system, and that system can say, don't use this, this is literally another piece of an artist's work that's been regenerated, or, no, you're good. It's not matching.

Kinda like, like, plagiarism checking. It's like, it's a 75% match with this other artist's image that they've produced. And you're like, oh, okay. That explains the signature. Or, in that case, it can give you, like, a rundown of the top percentiles of influence, and then you could pay those people, pay those artists.

Right? I'm I mean, there's so many different model. Yeah. Exactly. There's so many different models here, and we just we need people that are gonna work on that because it it is worthwhile and it is something that we need to do out of respect for our artists and supporting our artists because they're, you've got the struggling artist metaphor, but we do need to respect them.

This is their life's work and we can't just brush them off because we've decided to feed it all into these large these large models. Yep. Okay. So I think we solved it. Right?

That that's it. We've done it. Yep. Solved it. Done.
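To make the "originality check" idea discussed above concrete, here is a minimal, hypothetical sketch, not anything DALL·E or Midjourney actually ships: it compares a generated image against a folder of known artworks using perceptual hashes from the Python imagehash library. The file paths, the threshold, and the whole framing are assumptions for illustration; a real system would need proper image retrieval over the actual training data.

```python
# Hypothetical sketch of the "is this original enough?" check from the conversation:
# compare a generated image against known artworks with perceptual hashes and
# flag near-duplicates. Paths and the distance threshold are illustrative only.
from pathlib import Path

import imagehash              # pip install imagehash
from PIL import Image         # pip install pillow


def nearest_match(generated_path, artwork_dir, max_distance=8):
    """Return (artwork_path, distance) for the closest known artwork, or None."""
    generated_hash = imagehash.phash(Image.open(generated_path))
    best = None
    for artwork in Path(artwork_dir).glob("*.png"):
        distance = generated_hash - imagehash.phash(Image.open(artwork))
        if best is None or distance < best[1]:
            best = (artwork, distance)
    # A small Hamming distance between perceptual hashes means the two images
    # are visually very similar -- a crude stand-in for "a 75% match".
    if best and best[1] <= max_distance:
        return best
    return None


match = nearest_match("poster_draft.png", "known_artworks/")
if match:
    print(f"Looks close to {match[0]} (distance {match[1]}) -- check before using.")
```

Perceptual hashing only catches near-copies of whole images, so this is very much a toy version of the idea; tracing stylistic influence back to individual artists, as discussed above, is a much harder open problem.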

Okay. Cool. So at the heart of the image generation stuff, there's, like, this language model. I was looking into what was I looking? I was looking into some of the models, but they use BERT or something.

They use some way of, like, taking in some text, tokenizing it, and putting it into a latent space, or some space that you can then use as something to pass through into the image generation. So it's very complicated, but I urge you, there are some good videos on the Internet talking about how Stable Diffusion works and how we've done it in the past as well. But at the heart of this, we've had this explosion in just bigger-is-better language models. Right? And it seems like there's not an upper limit at the moment.

I don't know if there's research that shows that there will be, but we are getting larger and larger corpuses, which are then fed into larger and larger networks of neurons, or neural network systems, which are then producing, it's almost difficult to describe, like, more "correct" text. Text which is more like, you know, what it's seen before. So we start getting large language models which can solve math problems, because they've seen math problems before, because they're tokens and it knows it's seen them before. And it can do moving robot arms, because it's interpreting this text and it's got access to things, and, you know, it starts taking standard language and then transporting it into something useful, which might be other language; in Stable Diffusion, in the image generation, it might be fed into something else.
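For readers who want to see roughly what that text-to-latent-to-image pipeline looks like in practice, here is a minimal sketch assuming the Hugging Face diffusers library and a public Stable Diffusion checkpoint; the model name, step count, and guidance scale are illustrative choices, not something specified in the episode.

```python
# Minimal sketch of the pipeline described above: the prompt is tokenized and
# encoded into embeddings, which then guide a diffusion model's denoising in
# latent space to produce an image. Assumes the `diffusers` library and a GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # illustrative public checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Under the hood the prompt goes through a CLIP text encoder; the resulting
# embeddings steer each denoising step of the image generation.
image = pipe(
    "the Mona Lisa smoking a cigarette",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("mona_lisa_cigarette.png")
```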

So, yeah, I think it's just, we probably talked about this last year again, but this year is just more of the same, bigger, better. Where do you see this going, Olivia? Well, bigger is better, that's definitely the American influence coming in here. We gotta go bigger. We gotta be the best.

We gotta be the biggest. No. I think this is a very interesting development, because with the large language models, and especially with ChatGPT, I can't tell you, Ben, how many people have reached out to me about ChatGPT. It's like every other day I've got someone in my inbox going, hey, what do you think of this?

The interesting thing, though, is those people used to be artists when DALL·E came out, or truck drivers when they're looking at self-driving car automation. Now I have engineers reaching out, because with the release of ChatGPT and these large language models, they can do basic engineering and they can do it pretty well, because there's a lot of examples of it online, or there's been a lot of example data that it's been fed. So the engineers kinda freaked themselves out, I feel like, and it's been interesting to see it turned back on them going, wait a second. What do we do? Like, what if this takes us out of a job?

But, again, back back to the large language models. This has been the first time in a while that I've seen the tech community actually very, very excited. And you get people coming in saying, this is, as close to AGI as we're gonna get or this this is the start of AGI. I firmly sit in the camp that it is not AGI. It's not artificial general intelligence.

It's just a really, really powerful large language model that has access to tons of different subjects and information, but it's not thinking. That's what we need to stress. It's not thinking, it's correlating on information that already exists, and there's ontological trees, decision trees, that you can trace back and figure out, this is where it all came from, but it's not ideating. It can only reproduce something that was already in existence, where we as humans, we have that ability for ideation. Like, for example, I really hope no one thought of a purple tree with popcorn coming out of it before.

That was just me sitting there, popping an image into my head, but I have no correlation for that. There's nothing that ever would hint at there being that kind of tree in existence, but I can sit here as a human and pull these crazy things together and see it. Versus the models, they have to have an instance close enough to it in the past where they can correlate and jump to it. Mhmm. I have a friend that gets a lot of, not joy, but entertainment in what he calls breaking ChatGPT.

But he likes to figure out the questions to ask it where it can't figure out an answer. It just sits there loading. And these are usually ones where, and he specializes in ontological reasoning, so he's like, well, what's happening right now is I've asked a question and the correlation between my question and potential answers is too far of a jump. And so the model has sat there trying to find a different correlation, trying to find a different logic jump or a different ontological tree to go down, and it can't.

It's hit a dead end in its actual infrastructure. And so it sits there and sits there and sits there. And then it comes back, usually with, like, I'm a large language model. It's like, cool. I already knew that.

Broke you. He likes to do that just to see where he can push it. But these large language models, these LLMs, there's potential there to do some really interesting, cool things. The question, and this isn't as an ethicist, this is actually, and I can't claim it.

This was coming from other conversations with engineers and technologists: what is the actual use case for these? Technically speaking, very, very cool technology, really impressive. What do we use them for, though, besides the guy on Twitter generating workout routines, not workout routines, workout, I think they're called routines. Mhmm. God, you can tell that I do not go to the gym and that I am much more a dancer than anything.

Right? That use of terminology. But what how do we use this? That's more of the question. And that's what I'm actually excited to see this year.

Both excited, but, you know, cautiously optimistic where we have these cases. We have the potential to do something interesting with these models, but we need to look at the use cases. We need to understand what the limitations of the models are and what we want to get out of them. Like, if we're just making a chatbot to have a chatbot for fun, cool. Then it's a chatbot for chatbot for fun, but we can't start replacing people with this.

There it makes mistakes. It makes a lot of mistakes and it's gonna get better. That's that's for certain, but it's still not gonna replace the the human ability to critically think, assess, and ideate. And that's that's just period. That's that's that's where I put my foot down.

Okay. So humans for the win. Hey. We can use it for creativity. Like, I was talking to this woman who, poor thing, she was like, I was asked to write an article on Harry and Meghan and I could not. I was done.

These are her words. It's like, I was done with it. So I used it. So I used ChatGPT to generate an introduction, just so I could kind of think about it. I was just so exhausted.

And I was listening to her. I'm like, there's nothing wrong with that. And I'm so sorry you had to write another article on on Harry and Meghan. But, yeah, you know, let's use this as something that spurs our creativity and our ideation like you were, like you did with the, the image generation and then used it to create your own. That to me seems really kinda cool.

When you're stuck in a creative rut, how do you use these systems to get you out of that rut and and open up your mind a little bit more instead of just relying on it of, well, you know, I'll just have it write the whole article or I'll just take the exact exact image, when I really actually wanna produce something in my own. Yeah. And I think we probably already do that. Like, if you wanted to write on a subject, you'd probably go and research that subject and you you would, you know, take some of that with you. If not explicitly, you might internalize some of that.

Right? So I think it's, yeah, I think it's a really good jumping off point for that. I think, also, though, it has a similar issue with the the the kind of, let's say, copyright, but the the plagiarism aspect of it, that the image stuff does. Because you've now got, you know, could you give me the full works of Shakespeare? But, you know, insert some something and it would give you something.

And it's like, how close is that to something which exists, that I've just kind of, you know, just been presented with. Right. And there's a lot of conversation at the moment about plagiarism in terms of writing, like work for schools or universities and colleges, just taking stuff directly from ChatGPT. And relying on it, again, to be somewhat truthful, let's say, in inverted commas, but, like, somewhat correct in the way that we're expecting correctness to be. There's this kind of philosophical problem here, like a logic problem of what is truth and how do we define knowledge and stuff like that.

But we, you know, there's certain correctness that we we look for when we we're talking about just doing, essay writing. There's obviously semantic correctness. Right? The spelling, the formation of sentences. There's also, like, the sky is blue and, you know, the expectation of certain, The facts.

Yeah. Ontologies, like you were saying before, to be the case. And sometimes, you know, it's gonna present a set of information which is just categorically, let's say, untrue, but not what we're expecting as well. Yeah. And two points there.

First, about the essay writing and taking, just, well, ChatGPT wrote my essay. I look at that and I question, really, when you do that, we've got these questions around plagiarism and there are definitely ethical implications, but to me I look at it like, why would you cheat yourself? That's who's gonna suffer at the end of the day when you do that, because, and I'm speaking to students here, you're assigned these essays for a reason. Mhmm. We have all been through the pointless essays that we don't like to write, but it exercises our brain in a certain way.

And if you're just offloading this onto ChatGPT or any other large language model, you're cheating yourself. You know? You're not getting that brain exercise. So that's kinda one of my first points off of what you were saying. And the second one here, that you're bringing up, is about the different types of factual accuracy and semantics.

And then I would even argue, like, logical argumentation, actually having valid and sound arguments within essays. The struggle around when something is factually untrue, when the system spits out, the sky is magenta, and it's not sunset. And so you're like, okay, well, the world's ending then. But you're sitting there, and the sky is blue is probably a poor example because we can see it, and it makes it a lot easier, but for things where I have asked a question about, like, where did the Statue of Liberty come from?

And it gives me one answer and I don't know where it came from, then I'm gonna read that answer and I'm gonna think, okay, well, that has to be true, I guess. I'm gonna trust this answer, when really it could be something completely different, but we've portrayed these large language models as these omnipotent, omnipresent systems that know everything, and you can't tell when that's not true, if they do give incorrect information. It's very hard, when you're asking about information that is outside of your knowledge base, to know if that information is correct or not. Yeah.

And so you have to take it with a healthy dose of skepticism of, alright, I have no idea where the Statue of Liberty is from and the system's told me this, and maybe I should go fact check that just to make sure. So it's, again, general use. How are we as users engaging with these models? Yeah. It's an interesting point that you make there, because it's almost like, you know, when you're using Wikipedia or just browsing the web for information, you have a certain amount of, like, expectation of what you're gonna see and its legitimacy, depending on, you know, where it is, who's written it, when it was written, and all this sort of stuff.

So maybe you need to have somewhat of that when you're talking about these language models as well. You know? I'm looking at this information, but, you know, I might have to fact check check it. I'm gonna read it through. It might be directly copied from somewhere.

You know, there's all these questions you've gotta ask, almost. Should I verbatim use this in the circumstances I wanna use it in as well? Like you're saying with the plagiarism, like, is this actually gonna do me a disservice, using this in this place? And I was reading earlier, actually, there was a great newsletter called The Batch, so check that out, which is all about AI stuff, from Andrew Ng.

Andrew Ng? Okay. And it's great if you want to have, like, a weekly thing about just what's going on in AI. And the one this morning was just, like, people were using ChatGPT, again, others are available, to give answers for counseling questions and then using it in a counseling context without the opt-in, the knowledge, of the recipient. So it's kind of like, actually, a lot of the time, like you were saying before, it's like, what is the end goal here, guys?

What is the use case, and are you actually doing it with good intentions, like you're saying as well? What are your intentions behind this, and are you being negligent, almost, in presuming that this is okay? You haven't really thought about this at all, have you, and the consequences. So Exactly. And I read, I think it was a company called Koko or something.

They they used it in that direction. And that to me was this innate misunderstanding of the the importance of being able to speak and share experiences with other people. We're not trying to optimize a response. Let's get the best response. You you are trying to facilitate an empathetic connection between 2 people.

And that, you know, life is all about the balance. If you optimize for a certain specific aspect or metric without concern towards what that does of the balance of the system or the balance of the use case, then it gets thrown out of proportion. I'm sure they may have optimized for the best possible response to someone's, mental health query or, concern, but that wasn't why the person was there. And so that optimization has thrown the use case in the system out of balance. Yeah.

Yeah. Yeah. Definitely. Awesome. I I feel like we didn't solve that one, unfortunately.

No. But I I I think, you know, we that one is still a giant question mark. And like I said, this year, I am taking a positive approach to things. So I'm waiting to see how things develop, and I will sit and chat with people and about use cases. I can be much more I'll be much more vocal about use cases.

But for now, I wanted to just see how the technology develops and keep the respect of it's a very very impressive piece of technology. Mhmm. But it is far from being general intelligence and it is still a far cry from being anything close to what the human mind is capable of. I just wanted to pick quickly on that thread, on the general, AGI thread, as as you bring it up. What that that again is kind of like an intention.

It's a cultural artifact. It's a dream. It's all these different things. How are you feeling about that idea in 2023? I am feeling about AGI the same way I have always felt. Yeah, my perspective on AGI hasn't changed, even as I get deeper into this space, as I understand better, from people's perspectives, how they define AGI, how it works. To me and my definition of what general intelligence means, it's not correlation.

It's not something you can find in neural networks, because for me, AGI, or just general intelligence, let's put it there, is more than facts on a page or pixels on a web browser. It includes the ability to take in emotion and feel emotion and experience that emotion. It's the ability to interpret the world around us. It's the ability to go beyond just, this has been written and put on the web; actually, I can sit here and I can look out my window and I can take that information and compare it with what's on the web. I have no limitations to the input I can take, versus with artificial intelligence, there will always be a limit to the information it can take in, because it can't possibly sit there and then fly to, I don't know, go to a different part in the corner of the world where there are no computer screens, a remote part of this world, and take in the information that it finds out there, where I can, granted, there's a couple barriers to entry on that, but, like, I can.

I I I do not have those limitations. And I can seek input that is not just, excuse me, I got excited. I started talking too fast here. Like, I can intake information that is not just something I can see with my eyes or I can hear, I, you know, I've got smell, I've got taste and sure, those sound funny, but that's also still input. And I can sit here and sure, Ben, we're communicating through Zoom right now, but I I get this understanding of, like, who you are as a person.

I understand your personality. I have layers and layers of, I'm just calling it input right now because we're talking about AGI, but layers and layers of input that are so detailed and minute and intricate that I firmly believe, just the way that we develop artificial intelligence, it's not possible to capture those layers. We are not able to capture that intricacy. And I'm only talking about input. I haven't even talked about, like, output, meaning things that I can do and influence in the world around me, where, you know, the AGI, the AI, it's not like it can pick something up and drop it and see the cause and effect.

I can. So that's what I look at, and I'm like, by my understanding and definition of AGI, AGI to me is a system that has the same capacity as a human, and I just don't think that's possible. I just don't think it's possible. So my perspective has never changed on that. And with that, you're also getting some of my personal, innate understanding of what it means to be human.

So I know I'm influencing this, with my own perspective. But, again, that's my perspective. Yeah. Yeah. Yeah.

I think I feel, I think, more abstractly about this subject, whereas AGI might be this, like, super intelligent, human-shaped, ethereal being, right, which could Yeah. Suddenly become as intelligent as humans. And we say intelligent, but meaning it could do stuff that we expect humans to be able to do. Like, that for me is kind of boiling it down, because, obviously, you get this semantic soup about, like, what do you actually mean here? And then it surpasses us, like, can, you know, keep going and doesn't have these boundaries that we, biological beings, do.

I feel differently. I feel like genetics produced this situation, and it could be that we could create an artificial genetic situation. And I think things could happen. It's just that we don't necessarily know how it could happen, and it could look much different to humans, and it could be much more alien. So we produce, you know, a slime mold, which is artificial.

And you know what I mean? It's just like Yeah. Yeah. Exactly. We could do cool stuff, and it doesn't necessarily have to be constrained to this idea of, a human being type thing, but in a computer.

I feel like it could be more like you've literally got a cat, that, you know, that's the type of intelligence we're talking about. It can do stuff, and it sort of knows it can do stuff. And I think that's where it all boils down to, the fact, like, we can make programs do stuff and autonomously do stuff. But they don't want to do stuff, like, when we turn the computers off, we don't feel bad about it. Right?

Yeah. But if we turn the cat off, we feel bad about it. Poor cats. So there must be this internal life situation, this autonomy, this internal dialogue going on. And that is what, for me, we're trying to get to grips with in philosophy, especially.

But, like, you know, as in in engineering, like, what does that look like? And that and that's a lovely I think this is an amazing Venn diagram there, which is is exciting to me, but it doesn't necessarily look human either. Yeah. Which I I find is more interesting because it it can explode out the possibility space a bit more. Yeah.

I think we also sit there, how do you determine if something has an inner life? Because that, I do think, is a big caveat for us. And it's the catch-22 where I'm thinking, okay, with ChatGPT or any of the other large language model, generative text systems out there, we need to learn the names of them, Ben.

Oh, yeah. So that we can reference different ones. It'll be it'll be in the show notes. Alright? That's how it works.

Exactly. Exactly. But if I sit there, and they've placed constraints on it, so that right now with ChatGPT, and that's the one I've played with, so that's the one that I'm referencing, if I ask it about any emotions or anything like that, it responds, like, I am just an LLM.

If they remove that restriction and I say, can you feel? Do you have an inner life? And it responds, yes. Is that an actual indication, or does it just know that that is the response to that question? So you're stuck there in this catch-22.

You're like, no. I mean, this is the philosophical thought experiment of our day. Does the large language model have an inner life, or did it just pick it up in the data? Who knows? I don't have the answer for that.

I just think it's it's a funny funny thought experiment that we're getting stuck in. Yeah. Yeah. It's cyclical. Definitely.

And I think the answer is probably no, but I also don't know when the answer will be yes. So that's the that's the issue, isn't it? Yeah. Yeah. Yeah.

So What's the tipping point? How do I know that you have an inner life, Ben? You could honestly just be a robot. I don't know. Well, I think that is, I think that's the crux, isn't it?

It's behaviorism, with dualism and all these different ideas of, like, you know, self, like, what is the being that we are? And it's like the duck-typing, duck analogy. If it sounds like a duck and it looks like a duck, it's probably a duck. So you're going, and they say that about pornography for some reason, but if it does human stuff and it sounds human, it's human. And that's sort of, if you look back at Alan Turing and his seminal papers on this sort of subject and the Turing test, he was kind of, like, thinking in those terms.

Like, we might as well think of it in that way: if it kind of displays what we think of as human attributes, then, you know, how are we to discern? But I think, obviously, we've grown up, we've done some stuff since then. Yeah. And I don't disagree with him. I just think, is there never gonna be a way to check, or are we always gonna be stuck in a situation where we are essentially, I'm believing that Olivia Gamblin is a sentient being, in the way that we think of as a being with an inner life, and actions, and autonomy, that sort of thing.

Yeah. Or are you a robot? Or, you know, are you an automaton? You know? I'm secretly an AGI and I just don't want anyone to find out, which is why I say, no.

This can't exist. I think there's an interesting point here, though, Ben, and this is me as a skeptic. Mhmm. This is me coming from a point of skepticism of, again, that belief that there's something so innately different about human nature. But what if, here's the philosopher coming out.

Yep. What if we actually did look at these systems and ascribe some level of humanity to these systems, what would the implications be for our behavior Mhmm. With these systems? Would we treat them any different? Would we use them in any other different use cases?

Would this be, you know Yeah. We still approach it as, this is technology. And maybe with some of these systems, we need to take a different mindset and approach. And even if there is no human essence, because I can't possibly say that even in a thought experiment, it makes me cringe. But again, this is my personal bias.

Then, yeah, what would that shift in mindset change about how we approach our technology? I think maybe that is an interesting thought, just a thought experiment of what would that lead to. Yeah. I think it actually harks back to an episode of ours a couple of years ago with David Gunkel and the whole robot rights thing and things like that.

And and part of his argument there is that actually the way that we treat animals and the way that we might treat artificial things, robots, let's say, does change us ourselves more than it can necessarily change the thing. You know, it changes how we, how we react to the thing. And the the the classic example is like shouting at Siri or Alexa. Yeah. It's like, Alexa, turn the lights on.

It's like, and teaching your children that this is okay, and that you can have, you know, maybe nicer language and speak to people in a certain way and, you know I have to admit I have been yelling at Alexa. My mom and I have been yelling at Alexa over the Christmas holidays, because Alexa is programmed to play, and I'm looking at the one in the kitchen because I think it might actually start playing, Mariah Carey's Christmas by default. Yep.

Or I thought it said it would start playing, but we we, I figured out that if I asked it to play Christmas music, it would just default to that Mariah Carey. And so I would do it from the other room, and then I would hear my mom, Alexa. No. So, complete side tangent, but, Good for ganks. Oh, and I think I woke up the Corgis too.

Oh. You can hear them in the background. Okay. And they both know that they're Corgis. And they have an inner life, I am convinced.

I just wanted to tie it back into the conversation. But, yes, they're awake and Yeah. Yeah. So, kind of lastly, I anecdotally got asked a lot of questions about how I got into AI ethics, how I think about a career in data ethics, what the new sector of tech ethics is. And I feel like I don't necessarily feel qualified to talk about that, and it's amazing that it's just suddenly happened.

I think in the last, few months, I've had, like, you know, I wanna say tens, but probably, like, 5, different separate things coming in from different directions about people, mostly students, who are interested in this area. And for me personally, I think it's great, but also I don't necessarily know what to tell them. So do you have any answers for these sorts of people? Yes. And I have been seeing the same thing I get.

When I started out, I had, like, 1 or 2 very random emails of, hey, what is an ethicist, or how would I do this? Now I feel like it's every week, almost, just someone reaching out, either about their research or actually, how do you make a career in this space? And it's usually students. And to me, that's very exciting to see, we've got this workforce coming in. We've got this fantastic workforce.

It also shows the importance of this space, especially for future generations. So wake up, older generations. This is something that we need to actually take seriously. And the role of an AI ethicist, I do see it taking the same trajectory as, say, data science. I think it was about 30, at least 40, we'll say 50 years ago, just so I'm safe in terms of my calculations of time.

Data scientists did not exist, period. Oh, yeah. Yeah. It is really recent. Yeah.

Yeah. Yeah. Exactly. Really, really recent. And then it built up quite quickly, but built up over time where, you know, the first few people were calling themselves data scientists and and getting funny looks.

We've had the same thing with AI ethicists. The first few people were getting funny looks. I remember I have had so many funny questions when I was first starting out of, what in the world is an AI ethicist? Why? What is this?

To now, where it's a recognized title and people are looking to get into this space and how to work under this title, how to be an AI ethicist. And I say time and time again, 5, 10 years down the line, I would say more towards 5 years down the line, this is just going to be a position that companies hire for, point blank, or they're going to have their full stack ethics solutions and they're gonna have their ethics departments. I'm watching as companies build out their ethics teams, and a lot of the tech companies just laid off their ethics teams, but I think that's more indicative of how the management of those ethics teams in those companies worked, not the actual ethics teams' contribution to the company. Mhmm. Because there's still a lot to be worked out in terms of where's the budget, where's the responsibility and leadership and say in a company that an ethics team has.

So that's, I'm going too far off the track of the original question. But to get into this space, I love this space, I think it's fantastic to work in. It's fascinating, it's quickly growing, and so if you've got a curious mind, it is the perfect place to be. You are at the crossroads of so many different thoughts and ideas and sectors. I mean, as an ethicist and as an AI ethicist, I have to stay up to date on technology developments, but also I have to stay up to date on political developments and regulation developments, and I have to stay up to date on ethical theory, and I love talking with psychologists because that influences my work.

It's just such a cool cross-section of so many different sectors. And so, exactly. It's like, oh, where are all these coming from? Yep. But when I talk to someone that is just getting into this space, I usually ask them, are you more interested in researching and diving into problem definitions? Like, do you wanna sit in the library and read texts and analyze and look at data and try and figure out, this is the problem that we're facing?

Or do you want to be at a desk designing solutions and looking at, okay, we found the problem, but then what does it look like to actually solve this problem and do it, and be very practical in your application? I'm not saying one is better than the other. They're just very different choices, because you've got the decision to make of, do you want to focus on academia? And our academics put in so much research and very, very important work in terms of just these ethical concepts and principles and what they mean. And they're starting to turn around case studies as well.

So do you want that academic track, which is a very important track based on research? Then you're looking more at think tanks, traditional academia. And that is a direction where you're much more in the theoretical, and that fits for some people. And then for other people, it fits as, I wanna be much more practical. And in that case, you are looking for ethics teams and internships there, but also branching out beyond just the term ethics, you've got responsible AI teams. A lot of times what this is called in the tech industry is responsible AI.

It is ethics, AI ethics in practice, but responsible AI is apparently a lot less scary of a term than ethical AI. So we go with responsible AI. Fine. Yep. Then you're looking for responsible AI teams, or privacy and trust teams, or explainability teams.

Like, you're looking for specific principles that you want to be practically designing solutions for. So there are 2 tracks, and then there's the rare few that kind of sit in between both those tracks, and I would say that I'm one of them, meaning I spend most of my time on the practical side, but I do love my academics and I do love my research, so I will work with other academics on research papers. So there's a way to do both. But what I tell people is, if you wanna go that practical route, start looking for internships, get involved with communities; All Tech Is Human has a fantastic job board. Or as well, here's a subtle pitch: at Ethical Intelligence, we have what's called the EI expert network.

We look for people with a master's degree and above, or 5 years of experience and above. So we do have some qualifications there, in terms of being an expert to join that network. But one of the reasons we set up that network, and it's become such a vibrant community, is I saw this surplus of people that wanted to work in this space and didn't have an avenue, didn't know how to do it. So part of what we do with that expert network is both have the community building, where people in this space can have someone to go talk to, but also be able to connect those that want to work in practical application of ethics, bring them onto projects where they can actually execute, and so start to make either a transition fully into a career in responsible AI or have that outlet that they're looking for beyond just their academic career or their independent career. And I'm guessing there's a presumption that we're making, which is that this is not going away.

Right? This is something that's grown up quite recently, but with the prevalence of AI and these tools, and even some of the stuff we're talking about today, you know, which in the last year has just been blowing up, generative images and stuff like that, it has a bit of what you have talked about: a bit of law in it, a bit of philosophy, a bit of, you know, data science in it. And you gotta, you know, pull on these threads, see what excites you about it, and then, you know, go down that line, try and join these communities and, I guess, make a difference at the end of the day, isn't it? Yeah. I find I was, like, joking about this stuff.

Back in 2015, when I was just starting to think about this area at all, I was talking with someone at the Digital Catapult in London, and we were joking that with all the automated car stuff and all the other AI things that are coming through, wouldn't it be funny if we had AI ethicists? Wouldn't that be hilarious? And we were, like, chuckling about it. It's like now, you know, 6, 7 years on, it's no joke, guys. I kid you not, Ben.

When I was in high school, it must have been like my senior year of high school, because I was trying to figure out, as you do your senior year of high school, which in America is when you're 18 and have no clue what you wanna do with life. I'm sitting there going, I must decide everything that I wanna do in life now, and all of my life has come down to this decision of what I'm going to major in in college. If anyone's listening that is of that age, that is a complete and utter lie. So don't worry, you have plenty of time to figure things out. Even beyond that, you have plenty of time to change and figure things out.

But I remember we had a family friend over, and I was saying, like, I really love philosophy, but I don't really know what to do with it. So I don't think I'm gonna major in philosophy. Spoiler alert, I did major in philosophy. But he said, like, oh, you know, there's these things called medical ethicists that exist at all these hospitals. But, you know, there's this thing called a tech ethicist that's coming out of that.

Like, a tech AI ethicist. It's kinda like a medical ethicist, but it's for technology. I remember sitting there listening and going, dude, you're crazy. What? Like, what do you mean that no.

I don't wanna do that. Why would I wanna do that? That's not even a career. And what do you mean? And then Yeah.

Yeah. Yeah. Definitely. It doesn't exist yet. Right?

Exactly. Yeah. And then I remember, 4 years later, my senior year in my undergrad degree, I was in an innovation class. One of my minors is in entrepreneurship.

I'm in an innovation class and we had to come up with ideas for a business. And I kid you not, I found these notes just recently, in the last year or so, as I'm cleaning out my computer, my hard drive, my notes from this class. And I had the 3 company ideas that we had to do for this innovation class. I kid you not, I come down to the third one and it says, well, what about, like, a network of ethicists that I was able to bring into companies to give them ethical guidance in creating their technology?

And I remember reading back and I was like, oh, come on, seriously. It's just been something that's echoed, that I have laughed at at multiple points. Yeah, so that's been kind of my path as an ethicist: I started off going, what the heck is this? To, what if I make a company out of this? No.

Also, my teacher thought it was a terrible idea too. So here we are. Now to present day, where I'm giving young people advice on how to start getting involved, and trying to encourage them: please get involved, come into this space. We are so shorthanded on talent, and the need for it is just gonna keep growing. So a very cool place to be, a growing career area.

Don't let the current tech layoffs scare you. Those are gonna come back around, and, yeah, I'm very, very hopeful, and this is here to stay. Yeah. Yep.

I think we're gonna call it a day. This has been amazing. Thank you so much for your time. I know it's very early for you, where you are in the States at the moment. So thanks very much again for spending this time with us.

I was just gonna say that actually I've been doing quite a lot of lecturing recently. So if anyone does need a day or a session with their students, it's usually design students or computer science students that I do a lot with, for some reason. Is that something that you guys do a bit of as well? Yes. So we're actually going to be running the masterclass for The Data Lab up in Scotland.

They've got, what is the full name of it? The Data Academy, excuse me. They've got the Data Academy, and that's all students. I think it's master's students, if I remember correctly, but we're running the masterclass on data ethics.

We've worked with a couple of PhD students in the past, or like research projects coming out of universities. So we do a lot of teaching and what we call training, in terms of students and universities in academia, but we are actively trying to figure out how to create, how do I explain this? So the expert network is for people that are currently active and working in responsible AI, but we have a lot of students that are fascinated by it and could learn a lot just by being able to shadow or be able to pop into some of these conversations. So we're actively looking at this: how do we create maybe a shadow program, or a cohort of people that can come and see, oh, this is what it's like to sit in on an ethics board meeting and how an ethicist works in action.

So we're trying to figure that out. Long story short, we're trying to figure out how to make it a little more open to aspiring and growing talent in this space. Cool. Awesome. That's really good to hear.

Olivia, thank you so much. How do people find you, follow you, contact you? Well, on the net, as I said earlier. I called it the net, and I realized as soon as I did that, you know, my nerd is showing, or a mom's-on-the-net style influence there from that YouTube video. Done with that tangent. Clearly, I am in need of my second coffee for the day.

But you can find me personally. I've got my own website, it's just oliviagamblin.com. You can connect with me there; my Twitter and my LinkedIn are both linked from that website. It's probably the easiest one to point you to. If you want to check out what we do with Ethical Intelligence, you can find us at ethicalai_co on Twitter or Ethical Intelligence on LinkedIn.

Those are our two main social channels, but we've also got a newsletter. It's called EI Insight, literally spelled E I insight. And we've got The Equation, which is our digital magazine, and a couple of other different resources and a blog. We're relaunching the podcast. So, more podcasts then.

But we've got a lot of resources. So check us out actually just at ethicalintelligence.co, and you can find all of these resources and where to sign up to them, if you wanna follow some of our work as a company. Thank you so much, and I'll speak to you soon. Sounds good.

Good to talk to you, Ben. Hi, and welcome to the end of the podcast. Thanks again to Olivia Gamblin for coming to chat with us. If you'd like to hear more from her, you can go to episode 42, where we discuss probability and moral responsibility. Like we said before, I think this year has been a bumper year for image models and text models, really blowing up and allowing us to demonstrate and see and, in fact, just touch and interact with use cases, and the general public getting in on the action, using the output of these models.

So that's been really, really interesting for me to see, and really lush to have this time to discuss it with Olivia. Hopefully, in future, we'll do a proper deep dive on foundational models, text models, and we'll get back to you on that one very, very soon. As previously mentioned, if you're interested in reaching out to myself or indeed Olivia, then please do get hold of us for consultation, talks, lectures, all that great stuff, all around AI ethics and all the things that we like to talk about on this podcast. Thanks again for your support in 2022. We have a few things lined up already in 2023, but it'll be really fantastic to hear what you have to say and what you'd like to see with the podcast.

So please do get in contact: hello@machine-ethics.net. And again, if you can, support us on patreon.com/machineethics. Thanks again, and see you next time.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; code, design and data science teacher; and freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since, Ben has talked with academics, developers, doctors, novelists and designers on AI, automation and society.

Through Ethical by Design Ben and the team help organisations make better AI decisions leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford