96. 2024 in review with Karin Rudolph and Ben Byford

For our 2024 round-up episode we're again chatting with Karin Rudolph about the AI Ethics, Risks and Safety Conference, the EU AI Act, agent-based AI and advertising, AI search and access to information, conflicting goals of many AI agents, weaponising disinformation, freedom of speech, the LLM plateau, shadow AI, and more...
Date: 19th of December 2024
Podcast authors: Ben Byford with Karin Rudolph
Audio duration: 54:37 | Website plays & downloads: 10
Tags: Legislation, AgenticAI, Disinformation, Machine ethics, Freedoms, LLM, Policy, Shadow AI | Playlists: Generative AI, Review of the year, Legislation, Machine Ethics

Karin Rudolph is the founder of Collective Intelligence, a Bristol-based consultancy specialising in AI ethics and governance. Collective Intelligence provides training and resources to help organisations implement ethical AI practices and robust governance.

Karin also organises the AI Ethics, Risks, and Safety Conference, an annual event taking place in Bristol in May 2025.

Karin is a regular speaker at universities and conferences and an active member of the tech community in the South West of England.


Recording: We were really hoping to create a great video in this studio, but unfortunately we had some audio and video issues. I should have spent some more time and/or money on it ideally. But hey, we've learnt for next time! Please support us on Patreon and help us produce more and better episodes in future!


Transcription:

Ben Byford:[00:00:00]

Okay, so this is weird, isn't it?

Karin Rudolph:[00:00:13]

Yeah, it is a little weird, but it's nice. Yeah, it's good. A little more comfortable.

Ben Byford:[00:00:17]

This is the Machine Ethics podcast. Hello.

Karin Rudolph:[00:00:22]

Hello.

Ben Byford:[00:00:22]

This is the first time we've had a video podcast. So hello, you.

Karin Rudolph:[00:00:28]

Yeah, I'm the guinea pig.

Ben Byford:[00:00:29]

Like this anyway, like in-situ in a studio.

Karin Rudolph:[00:00:33]

It's nice.

Ben Byford:[00:00:34]

And I thought as an experiment that we should... Because we know each other, we should get together, talk about the year.

Karin Rudolph:[00:00:40]

Yeah, yeah. Sounds good.

Ben Byford:[00:00:41]

We did this last year, I believe. Yeah, we did it last year. The audio was terrible last year, so I apologise.

Karin Rudolph:[00:00:47]

Yeah, the content was great, which is the important thing.

Ben Byford:[00:00:50]

Yeah. So hopefully the audio is crystal clear.

Karin Rudolph:[00:00:53]

Yeah, it should be. It should be. It should be fine.

Ben Byford:[00:00:56]

And as a treat to you on YouTube, you can look at our faces for once.

Karin Rudolph:[00:01:03]

Yeah.

Ben Byford:[00:01:03]

Which is interesting. So if you listen to this on the podcast, this is a video on YouTube. Go check it out as well with us lounging around on a sofa. So hi, Karin.

Karin Rudolph:[00:01:14]

How are you doing? Hello, Ben. How are you? I'm fine, thanks.

Ben Byford:[00:01:16]

Yeah?

Karin Rudolph:[00:01:17]

Yeah, all good.

Ben Byford:[00:01:17]

It's a bit cold in here.

Karin Rudolph:[00:01:19]

It's fine. It's good. It's all good.

Ben Byford:[00:01:22]

So this is our end of 2024 roundup episode, which we've done historically... last year we did it with you. Before that, I think, it was with Olivia, who we had on the podcast last episode. Before that, I think we had Ben as well. We've been doing it for a couple of years, looking back at the year. Also, you've got some stuff on in 2025.

Karin Rudolph:[00:01:46]

Yeah, lots of things.

Ben Byford:[00:01:47]

Which we'll talk about as well. Just to start us off, who are you?

Karin Rudolph:[00:01:52]

I am... My name is Karin Rudolph and I'm the founder of Collective Intelligence, which is an AI ethics and governance consultancy based in Bristol, UK. I do lots of things, but I'm not going to give you my whole curriculum because it's boring. I'm also the organiser of the AI Ethics, Risks and Safety Conference, which is happening again in 2025, and I'll tell you all about it. I do lots of things, and I'm really involved in the tech, AI tech scene here in the South West. And yeah, I'll leave it there for now.

Ben Byford:[00:02:26]

Great. And so we... You put on the conference this year and I was there.

Karin Rudolph:[00:02:33]

You were there.

Ben Byford:[00:02:34]

Yeah, you were there. Which was great. And I got to do a roundtable discussion on explainable AI, which was extremely fun.

Karin Rudolph:[00:02:43]

Yeah, that's good. I enjoyed it. That was really good.

Ben Byford:[00:02:45]

With some really good speakers. So how do you think that conference went for you and how was it received and what's going to happen with it in 2025?

Karin Rudolph:[00:02:56]

Yeah, I'm really surprised and extremely pleased with the results. The feedback I received afterwards was incredible and LinkedIn was literally exploding with comments. It was really good. People enjoyed it a lot. It was very successful. Speakers were really happy. Attendees were really happy. People are asking me whether it's going to happen again. So I decided I'm going to do it again because I enjoyed it so much.

Karin Rudolph:[00:03:21]

So it's coming again next year, 15th of May 2025. Same place, Watershed, Bristol. We've got new topics. Well, some of the topics are basically a continuation of this year. AI regulation, which is a big topic for 2025. We've got things around AI governance. Again, another big topic and something we're going to discuss because it's changing quite a lot. The other one, which is a new take or a new angle, is AI applied to industry challenges. I've got two main things I'm going to be exploring. I'm not going to give you more details because I'm at the planning stage, but I think the topics are really important and very critical for the world we're living in today.

Karin Rudolph:[00:04:07]

The final one is looking at the next generation of AI systems. So I've got really good speakers from interesting organisations. It's going to be slightly different, but I think it's going to be great and really interesting. The speakers I've already got confirmed are fantastic. And yeah, I invite everyone to come along.

Ben Byford:[00:04:25]

Sweet. And that'll be in Bristol again.

Karin Rudolph:[00:04:27]

Bristol again, Watershed, yes, the same place.

Ben Byford:[00:04:29]

And you'll be there. I'll be there.

Karin Rudolph:[00:04:30]

You'll be there. Hopefully, we're going to do a podcast.

Ben Byford:[00:04:32]

Hopefully, we do a podcast. Yeah, that'll be great. And I guess it's really important because obviously last year there was a larger focus, let's say, on legislation, standards, how you maybe apply these things into your organisation or get involved with what's going on. Do you think that's still going to be important in 2025, that whole piece? I know the EU Act is coming in. Do you think that's going to just explode this space of people looking into how to utilise AI technologies and be able to the governance side?

Karin Rudolph:[00:05:15]

Yeah, I think it's going to be slightly different because until now we've been doing all those theories around, oh, this is going to happen, oh, when this happens we're doing this and that. Next year, 2025, where the AI Act has already been approved, it's happening. The first set of provisions are going to start from February 2025, which is less than two months away. From that point, I think things are going to start changing, especially in things related to generative or general purpose AI, for which another set of provisions are going to start being applied from August next year. I think the AI governance space is going to change a lot.

Karin Rudolph:[00:06:00]

Until now, it's been full of theories and full of frameworks, hundreds of frameworks and hundreds of papers and all that. But next year, it's going to be, okay, we need to actually implement these things, which is something people have been speaking and writing about for possibly the last five years, with all the responsible AI frameworks and all this proliferation of more and more documents. I think next year is going to be, okay, we have all the documents we need, now we need to implement this. Obviously, we don't know what's going to happen. Lots of things are going to change. We've got all these conversations around AI agents, which is something we can discuss later. It's a really interesting topic. Lots of people disagree and say it's hype. I think they are very impactful technologies, and I wouldn't be surprised if we see things we can't even anticipate.

Ben Byford:[00:06:53]

You mentioned that they've introduced general purpose AI, LLMs or that technology, into the AI Act, because obviously the AI Act was being put together before this massive transformer explosion that's happened. Do you see that happening? Do you think there's a provision that will cover agentic or agent-based AI as well? Or is that something yet to be seen?

Karin Rudolph:[00:07:17]

As far as I can tell, the AI Act doesn't specifically mention the words AI agents. However, obviously, it talks about systemic risk and general purpose AI, which is a really interesting thing, because obviously they are general purpose AI, so they can be applied to different industries and different use cases, and there are lots of unknowns about what's going to happen and also about the risks associated with that. I think that's something the AI Act is going to have to deal with and implement a new set of provisions for, and already, I mean, the AI Act is a beast. It's a huge piece of legislation with lots of different angles and recitals. Then you have the code of practice, which is the how-to for implementing some of these things. So the AI Act tells you this is going to happen, these are the main concerns or the scope of the legislation. And then you have the code of practice, and then you have the harmonised standards. They are not ready yet, so that's something coming in 2025, 2026.

Karin Rudolph:[00:08:32]

I think that's going to be a state when these things are going to be implemented in businesses, workflows. And then I think they are going to have to say, Okay, this is something we need to look at. But it's not there yet.

Ben Byford:[00:08:43]

No... But I guess, like you say, it's going to be interesting how it evolves because of all the collection of that stuff.

Karin Rudolph:[00:08:52]

Yeah, absolutely. I mean, the more people start using this, we're going to have to... Because right now, AI agents, it's not really a thing yet.

Ben Byford:[00:09:02]

We're skirting around it. What is agentic or agent-based AI?

Karin Rudolph:[00:09:06]

Lots of people might disagree. There are lots of conversations around this hype. AI agents, the way the companies are presenting this, specifically Microsoft and Google and other big companies, is: at the moment, you have LLMs and you ask questions, they reply to you, they can do summaries, they follow your instructions. AI agents are supposed to have a higher level of autonomy, in the sense that, for example, you can train your own personal assistant, which is another word for it. Some people say it's a different thing, but they're like advanced assistants. And then you can say, hey, go and book a hotel for me, or go and do whatever. You give the instructions, and then this agent will go out into the world to find solutions for you. Obviously, that will come with a new set of opportunities, but also new risks. And this is something that hasn't been deployed widely. So I don't think we know exactly how that's going to work.

Ben Byford:[00:10:19]

Yeah. And on the technical level, as far as I understand it, you have an LLM and you can ask it stuff. And I think the way it works is it orchestrates several instances of a language model or something. So you can say, I want to book a hotel room, and one of those instances will pop up and go, Okay, this is the plan for booking in your hotel room, and then it makes another instance and goes, this is the one that checks the plan.

Karin Rudolph:[00:10:51]

Yeah, and the other one says it's right or wrong. Yeah, that's really interesting you mentioned that, because that was something I was reading about recently: Microsoft's new toy, a new thing called Magentic AI.

Ben Byford:[00:11:06]

There's always something, isn't there?

Karin Rudolph:[00:11:07]

And they specifically use the word orchestrator. So basically you say, oh, AI agent, I want you to go and book a hotel for me. And then this orchestrator can have, for example, four different agents under it. It's the boss, basically. And these agents will report to the orchestrator, which is just insane. It's kind of crazy. It's like a mini-organisation inside your organisation working with you.

Ben Byford:[00:11:41]

Well, it makes sense if you've ever used ChatGPT and gone, write me this thing, and the first time it comes out, it's not great. And you say, oh, could you write me that again? And you give it more information.

Karin Rudolph:[00:11:59]

Yeah, absolutely. But you can create personas as well, which is something people are doing. And you can see the big difference in results. If you ask ChatGPT or Claude, oh, write that, it's like, whatever. It's something generic. But if you feed it with information about what you're doing, or results, or maybe some documents or links, you get a lot better results. So the agents, I think, are the next step of this: okay, this is all the information I've got for my booking, for my hotel. Or it can be hotels, it's possibly those tasks people are like, oh, I don't want to do it. But then you have all the other issues; I think OpenAI just announced they're going to be adding advertising. So yeah, you're going to encounter another issue. You're going to ask one of these agents, hey, go and look at hotels for me, and actually you're going to get not the best deal for you, but the best deal for the advertising company.

Ben Byford:[00:12:54]

This is exactly what the podcast is all about. This is what we care about.

Karin Rudolph:[00:12:58]

It's going to be an interesting space to watch. Yeah, absolutely.

Ben Byford:[00:13:02]

Yeah, it reminds me of when we first had Siri and Alexa and things like that, we had this mono communication. We've gone from a web page, where if you search for something on Google, you'll have many results. And obviously the top results might be ads.

Karin Rudolph:[00:13:21]

Yeah, it usually says sponsored, or something like that. Exactly.

Ben Byford:[00:13:25]

But hopefully there's an indication there. But if we're being advertised to in line with the results in this text, we might only get one answer again. If it's not explicit, it doesn't have to be. And to be honest, this could already be happening.

Karin Rudolph:[00:13:44]

Yeah. I guess they need to... In the same way that when you have results it has to say sponsored, I guess you need to go through the same route. Otherwise, it's like, book a hotel, and you're only going to get two or three of the hotel chains, and all the other ones are going to be dead, basically. But then you have competition laws, antitrust. You're going to encounter lots of issues if you don't say, this is a sponsored result.

Ben Byford:[00:14:13]

Yes.

Karin Rudolph:[00:14:14]

But potentially, you can just literally get rid of all the small players.

Ben Byford:[00:14:19]

Yeah, exactly. I had on my list AI LLM search. Because we already sort of have an issue like that. If you ask a question in ChatGPT or other ones that are available, Pi, Grok, lots of them.

Karin Rudolph:[00:14:41]

Claude, yeah.

Ben Byford:[00:14:43]

You're going to get an answer, right? And you're going to get something back. Part of how it works is it's not going to say, I don't understand. It's going to give you something.

Karin Rudolph:[00:14:52]

Unless you ask for David Mayer, and then it's not going to reply.

Ben Byford:[00:14:56]

Okay, well, most of the time, it's going to give you something back. And that thing that you get back, it might not be the best thing, right?

Karin Rudolph:[00:15:05]

Oh, yeah.

Ben Byford:[00:15:05]

It's not a search engine.

Karin Rudolph:[00:15:07]

No.

Ben Byford:[00:15:08]

It's not there to give you the best result. It's there to give you...

Karin Rudolph:[00:15:12]

The most likely result.

Ben Byford:[00:15:13]

Exactly. Sometimes It's something that looks correct.

Karin Rudolph:[00:15:16]

Yeah, it looks correct.

Ben Byford:[00:15:17]

And what I find interesting about that is obviously people might just take that as read, and the implications of that. But on a basic level with search, people are now having to manufacture their information in a way that is consumable for LLMs, and you're trying to game the LLMs so that they give out your result now. So it's a bit like search engine optimisation. Everyone's trying to get up high on the results.

Karin Rudolph:[00:15:51]

So you mean prompt? You need to learn how to prompt this thing to get better results.

Ben Byford:[00:15:56]

Yes, exactly. You need to know how to prompt it. You need to have your information in the model. So when they're scraping the web, for example, if you write a blog post which says, I'm looking for the best kitchen tiles, or something like that. And that's written on your website, that could correspond to someone's prompt, right?

Karin Rudolph:[00:16:20]

Yeah.

Ben Byford:[00:16:21]

And you're trying to game the system ...

Karin Rudolph:[00:16:25]

Yeah, but the thing you already have is people giving you prompts to get a specific result, so it's like copy and paste the prompt. So you've got a dictionary of the best prompts to get this and that.

Ben Byford:[00:16:37]

Yes. Yeah. But the information... What I'm saying is the information may or may not be in the model.

Karin Rudolph:[00:16:42]

Yeah, yeah, yeah. Absolutely.

Ben Byford:[00:16:43]

You're trying to make the best information so the model picks out your information when they ask those questions.

Karin Rudolph:[00:16:51]

Yeah, but that's supposed to be a problem, an issue that the agents will solve because they will go literally to go and search for the things.

Ben Byford:[00:16:58]

Okay. Yeah, yeah.

Karin Rudolph:[00:16:59]

But obviously, also, you're going to have some limitations, because you need to search for something. You need some level of parameters. You can't just search whatever, you need to search for something. You need a hotel. Yeah, I want a hotel in Spain, whatever. It's not anywhere in the world. You need to give some level of information. Yeah.

Ben Byford:[00:17:17]

But like we said before, it might be that they go to a specific website every time, or they go instead of going to search to do that.

Karin Rudolph:[00:17:26]

Yeah, that's the risk. If you have this advertising model, which, yeah, absolutely. You're going to get the same results again and again, or it's just going to go for who's paying the money.

Karin Rudolph:[00:17:38]

Which is, yeah. Yeah, exactly. That's an issue. Absolutely.

Karin Rudolph:[00:17:41]

I don't know. I've been looking at some of these videos and they show you how they work. It's quite interesting. It's like a mini-organisation or a mini-society in a way. It'll be interesting to see, if they are adopted by many organisations, which I suppose is the plan, how that's going to work and how we can actually control these things. Because at some point, I think people are already starting to think about how we start losing control of this. Because so far, it's something where you have an almost one-to-one interaction. But these agents are going to interact with other agents, and then potentially with millions of them. It won't be just one-to-one. It's like another layer: we have humans interacting with each other, and then we have these machines, and then you're going to go up into... Yeah, I don't know. I mean, obviously, we're speculating. I don't think anyone can say for certain what's going to happen.

Ben Byford:[00:18:48]

In my head, I can just see someone accidentally programming it badly and it just recursively overloading OpenAI servers and it all breaks.

Karin Rudolph:[00:18:57]

I mean, yeah, those things can happen. Absolutely. It can happen. And maybe some people will think, this is not something I want to use. But you remember with Google, I think it was 2016 when they did this. I don't remember the name, but it was this agent at the time, they didn't call it that, to make a reservation for a hotel booking or something.

Ben Byford:[00:19:21]

It had a funny name, didn't it?

Karin Rudolph:[00:19:23]

I don't remember the name, but it was literally these guys doing the demo. People were like, wow. And then they decided not to release it because it was so scary. Basically, you're going to call someone and you don't know if that person is going to be a real person or one of these agents. So one of the things, I guess, is going to have to be transparency again, which is like, you need to reveal, I'm an agent, which sounds really James Bond. I'm an AI agent and my name is... Hello. Yes. It's not going to sound like that. Yeah, it's not going to sound like that.

Ben Byford:[00:20:00]

Yeah, AI agent.

Karin Rudolph:[00:20:01]

Yeah. And then possibly you're going to have a different level of interactions, because you're going to have your own AI agent with a specific name and personality, potentially. It's going to talk with the other one. So, yeah, I find it fascinating.

Ben Byford:[00:20:16]

I feel like all this stuff is wrong. You know what I mean? With that specific example, it's easier to send API calls over the internet than it is to phone. The technology there is so much simpler if you just sent an API call to book a restaurant, if they had reservation software, instead of the whole rigmarole of going through the telephone network, synthesising a voice to produce... There's just so much more technology there that is so unnecessary. I don't think it's necessary, in that instance specifically. It feels to me a bit like Alexa, to go back to that. Alexa is amazing, right? But it's not necessarily a mass market device.

Karin Rudolph:[00:21:04]

No.

Ben Byford:[00:21:04]

It changes people's life, like massively, but only certain people.

Karin Rudolph:[00:21:10]

Yeah, I think Alexa is an interesting case. And I think when Amazon released that, they were expecting loads of, wow, this is going to be incredible. And it didn't happen? Yeah. Yeah, it didn't happen.

Ben Byford:[00:21:22]

But in a similar way, maybe, like you're saying, people maybe... I feel like we could do that tomorrow with the whole booking-the-restaurant thing. But maybe people will be like, these are cold calls and we're just going to put down the phone on it.

Karin Rudolph:[00:21:37]

Yeah, but then you can have other uses, like computer use by Anthropic. Anthropic released it, which is quite... I remember watching this, when it takes control of your...

Ben Byford:[00:21:48]

Your machine.

Karin Rudolph:[00:21:50]

Your machine, basically, your laptop. And it's like, wow. And lots of people were like... I mean, yeah, that's another use of this as an agent. And you say, hey, go find files for person X, and you just leave it there, and it goes and finds your files and then compiles that into something else and then can give you a summary of all the information about person X. I mean, that's an incredible thing. You know.

Ben Byford:[00:22:15]

See, again, I don't think that's incredible.

Karin Rudolph:[00:22:17]

You don't find it incredible?

Ben Byford:[00:22:18]

No, no. Because the way... You're right. It is magical, let's say.

Karin Rudolph:[00:22:25]

I wouldn't use the word magical.

Ben Byford:[00:22:27]

It feels like that you know.

Karin Rudolph:[00:22:29]

I find that amazing. I was like, wow, this is incredible.

Ben Byford:[00:22:33]

But for me, there's the substrate of computers and then your computer. There's this agent that's using your computer. They're both the same substrate. But this computer over here, sorry, I've got my hands waving around, there's two hands here, is going to this top layer meant for humans and moving things around. And it's like, no, just talk to the computer in its own language, computer. You know what I mean? It feels like, in that instance, we're doing a lot of these things from the wrong direction.

Karin Rudolph:[00:23:08]

Mmm okay.

Ben Byford:[00:23:08]

So I think it would be much more interesting if there was an AI who learnt all about bios and how computers work and just asked the computer to do the thing that you wanted it to do directly and doesn't have to move the mouse around.

Karin Rudolph:[00:23:24]

I think moving the mouse around is quite creepy as well. I found that a little bit creepy. I was like, surely that's not necessary. But I guess that's because it's at the early stages. But I feel the same with autonomous cars, which have a dual purpose anyway. When you see these autonomous cars, if you see a steering wheel moving around, that's creepy. If no one is there, you don't need to do that. But obviously, they do it because sometimes someone might be there, and then you have the dual use of, quote unquote, autonomous... It makes sense. But still, when you see a car moving around with the steering wheel and no one is there, it's like that old film of the piano, someone playing the piano, but they're not there. Who is playing the piano? So yeah, I think it's maybe a little bit unnecessary.

Ben Byford:[00:24:11]

That's why I said magic.

Karin Rudolph:[00:24:11]

Yeah, it's a little bit unnecessary. But I guess with time... I mean, it's impressive when you see that, you see the video and you see it moving around and getting all the information. It's like, wow, and it takes seconds. I think that's the first stage. Obviously, with time, they're going to be a lot more sophisticated. They won't need to do all this moving things around. They will be like, that's the file, that's the summary of all the information about this person or whatever. Yeah, exactly. It saves so much time.

Ben Byford:[00:24:39]

And at that point, if you had this living on your computer and it knew everything about your files and all the stuff that you normally do on your computer, that might be extremely useful.

Karin Rudolph:[00:24:50]

Yeah. I think it's going to be a lot more efficient. I think the problem, one of the things we need to look at, is when these agents go out into the... interact with the world. The Wild West or wild world or whatever you want to call it. People say the Wild West. Because they're going to have competing interests. And this is something people are looking at, the ethics of AI agents. So, yeah, I want to get a booking for my hotel, or I want this from my agent. But there's going to be a conflict between all the people who want exactly the same thing. So we both want to book a room in this place, or we both want to do this. But then you're going to have potentially two million people doing or wanting exactly the same thing.

Ben Byford:[00:25:40]

And then we'll get into the paper clip situation.

Karin Rudolph:[00:25:42]

And then you're going to get, okay, that. I think it was Shannon Vallor who gave an example about this with hospitals. Yeah, I want to book an appointment because I think my situation is an emergency and I want an appointment tomorrow. Other people might think exactly the same, but we don't have enough appointments for everyone to come at the same time, same place.

Ben Byford:[00:26:08]

That's a really interesting point because obviously that's ethically very important.

Karin Rudolph:[00:26:16]

Yeah, it is.

Ben Byford:[00:26:18]

So I wonder where the buck stops with that one. I'm trying to think of the right phrase, but you could see a world where these agents have, like machine ethics, some way of understanding when to push or not.

Karin Rudolph:[00:26:39]

Which should be a priority, for example, for medical reasons. What are the priorities? I mean, a clear emergency, a person is like, yeah.

Ben Byford:[00:26:48]

Whether that lies with these big organisations or are we going to legislate for these? You know what I mean?

Karin Rudolph:[00:26:55]

Yeah. How does that work? That's a really good question. Honestly, someone can say, Hey, my agents are more advanced and can get more information and I pay monthly.

Ben Byford:[00:27:05]

And your agent is not as good and less pushy.

Karin Rudolph:[00:27:09]

I think we're going to enter that type of... I don't know if that might happen, but I think it's very likely. How can we solve that? God, I wish I could give you an answer. I don't think anyone can at the moment. But we're going to encounter conflicts of interest. We're going to encounter all these people wanting the same thing. Someone's going to push something else. People are going to pay for something. Other people won't be able to afford it. So yeah, I think we are going to, in a way, replicate some of the issues we obviously have in society as humans, but we're going to add this on top.

Ben Byford:[00:27:41]

Yeah. If anyone is listening to this, and I feel like we just invented several papers of research just then. If you hadn't already been thinking about it.

Karin Rudolph:[00:27:50]

I don't know what to think. But yeah, I just find it really fascinating. I know. I can't help thinking, oh my God, this is so interesting. It's like, yeah, we've got plenty to do and think about.

Ben Byford:[00:28:02]

Yeah. So we also had the American election this year.

Karin Rudolph:[00:28:07]

Yeah. Surprise, surprise.

Ben Byford:[00:28:09]

Did you see any of this AI stuff coming into that in the run-up? But it also has implications for what happens now, obviously, into 2025, and how America responds to...

Karin Rudolph:[00:28:22]

Yeah, I think it's interesting because a lot of people were concerned about misinformation and disinformation, which are two different things. The impact of, for example, Sora, which was announced back in February, the beginning of 2024, and there was a very limited release for red teaming and small teams trying it... Then they decided not to carry on with the release, not to open it up to a wide population, because of fears of interfering with the elections in America. Now they released it last week, I think. So that's available.

Ben Byford:[00:29:04]

So you can go and use that.

Karin Rudolph:[00:29:05]

Well, not here, not in the UK or the European Union. Okay. It has been restricted here.

Ben Byford:[00:29:11]

But you can use it anywhere else?

Karin Rudolph:[00:29:12]

I think you can use it in America, certainly. And it's full video. Yeah, it's text-to-video. I think you can produce up to a one-minute video. I've seen some examples and they are incredible. Still, you sometimes see hands with, not five fingers, that would be normal, but seven fingers. Seven fingers on a normal hand. You see crazy things, but it's a lot better, and in some cases it's astonishingly good.

Karin Rudolph:[00:29:45]

So, yeah, going back to the election, I don't think that was a main issue. I think lots of people were really nervous about it. The election went the way they went. Some people might dislike the results. I don't think that was mostly something you can say AI or misinformation campaign had a big impact. I don't think you can claim that.

Ben Byford:[00:30:10]

I haven't seen many claims.

Karin Rudolph:[00:30:13]

No, no. Previously, that was very... So I don't think that was necessarily something people can say had a big impact on the elections. Obviously, misinformation campaigns and people spreading...

Ben Byford:[00:30:28]

They don't need AI necessarily.

Karin Rudolph:[00:30:30]

Yeah, you can do it. You can just get fake news or whatever, which is completely nonsensical, and drop it there. And some people say, yeah, that's true. People can believe whatever. That's out of control. In terms of what's coming, certainly there's going to be a big change in the American approach to AI, especially because obviously Elon Musk is going to be heading... It's something called the Department of Government Efficiency.

Ben Byford:[00:30:59]

Yeah, I don't know how I feel about this. How do you feel about this whole thing?

Karin Rudolph:[00:31:09]

I don't know, really. Yeah, I don't know. It took me by surprise.

Ben Byford:[00:31:14]

Before this recording, you were talking about how you like this technology. You like to talk about the good things about the technology.

Karin Rudolph:[00:31:24]

Yeah, I still like it.

Ben Byford:[00:31:26]

There's obviously lots of bad things that we can pick out and talk about, and interesting ethical quandaries and all this stuff. But it could be that Elon Musk, however you think of him personally, there being a department that does efficiency stuff for the most ridiculous bureaucracy in the world, it could be a good thing. I don't know. It could-

Karin Rudolph:[00:31:50]

Yeah. I'm trying to be optimistic with all the changes. I mean, realistic more than optimistic. Not blindly optimistic, realistic. I know it was a shock for many people. I was surprised when I saw the results as well, which also tells you a lot about how we are understanding the world. Just to give you a really quick... I know it's slightly different, but I attended a webinar on AI applied to diplomacy and international relations. There's this entire field of political science with people trying to understand different situations around the world. They use something called public sentiment analysis, which is literally trying to analyse and understand what people post online, and the sentiments, to detect potential civil unrest or potential conflicts or even worse. But one of the things that came up in the conversation in the webinar is, how do you do that when lots of the content is AI generated? You can't really see what people are really thinking. You can't really analyse what people are doing or even saying, because lots of things are AI generated.

Ben Byford:[00:33:21]

It depends on how you feel about that whole surveillance thing, though, doesn't it? Because one side could say that's good, because that whole surveillance thing sounds bad when you put it in that context. If you are interested in sentiment in that way, then obviously this is not a good situation, AI coming in and making that sentiment fuzzy. So are you saying that whole... Do you think it's a good thing that we have that at the moment?

Karin Rudolph:[00:33:54]

I don't know if it's good or bad, but it makes me think, how can you anticipate situations that could potentially be a real danger? I know in social science you tend to analyse what people are saying to try to figure out what's happening, or anticipate potential civil unrest or any situation. Where you don't have that, where you don't have a reflection of reality, you can't do anything. Yeah, you might say it's good because of surveillance, but surveillance is happening anyway, underneath, in ways we can't even-

Ben Byford:[00:34:27]

If we think about it, we've only had that for 10, 15 years anyway. This has never been the case before anyway. So we're just going backwards again, essentially.

Karin Rudolph:[00:34:39]

Yeah. The point I was trying to make with the American election is that lots of people didn't anticipate that the results would be the way they were. And I think that's a lot to do with people not expressing what they believe, or AI-generated content. So all the analysts trying to understand this phenomenon said, oh, the result of the election is going to be this. And then, surprise, that didn't happen. And I found it quite puzzling that not many people saw it before it happened.

Ben Byford:[00:35:13]

Yeah. But it sounds like they're measuring the wrong thing then at that point.

Karin Rudolph:[00:35:18]

Yeah, potentially, or something is not clicking there because it's really strange not to see the things happening.

Ben Byford:[00:35:25]

But the whole AI content thing is going to be a problem for that surveillance state situation. But it's also going to be a problem for information, for looking on the internet and experiencing things which, I know, aren't always factually true, but now they're not even necessarily made by people.

Karin Rudolph:[00:35:45]

Yeah. Then you have another layer of concern. But yeah, disinformation is an interesting topic, which is possibly something to discuss in an entire podcast. I think there's a lot to say about disinformation and misinformation, one being deliberately spreading something and the other a more organised campaign to mislead people. There's also a danger of starting to label things we don't like as misinformation, which I think people don't talk enough about.

Ben Byford:[00:36:18]

It depends if it's me doing it.

Karin Rudolph:[00:36:20]

Well, yes, exactly. It's like, I don't like your opinion, so it's misinformation or disinformation or whatever you want to call it, which is like, okay, this is another way of shutting down people who can and should be expressing their political opinions. We might dislike them. Obviously, there are limits to things.

Ben Byford:[00:36:37]

I feel like that's Trump's line these days with anything.

Karin Rudolph:[00:36:42]

I think it goes beyond conservatives or Republicans. I think this is across the whole political spectrum. It's just disinformation. Yeah, you can't just say it's left or right. It's across the board.

Ben Byford:[00:36:56]

Karin, you're spreading disinformation right now.

Karin Rudolph:[00:36:58]

You think?

Ben Byford:[00:36:59]

No, I'm just joking. Maybe.

Karin Rudolph:[00:37:02]

I am. Yeah. No, I'm not. I'm not doing that.

Ben Byford:[00:37:05]

I think that's called not knowing the full picture rather than disinformation, isn't it?

Karin Rudolph:[00:37:11]

Yeah. God. That is such a complicated thing. Then you got freedom of speech and freedom of expression. It's just super complicated.

Ben Byford:[00:37:20]

So do you think that feeds into... Obviously, in the United States, they care a lot about these...

Karin Rudolph:[00:37:27]

The first amendment.

Ben Byford:[00:37:28]

First amendment, freedoms around speech and the right to bear arms, all these things which are written in the... Written in stone, which they're not, by the way, in the amendments. They're called amendments for a reason, right? You can literally amend them.

Karin Rudolph:[00:37:42]

Yeah, that's going to be a difficult one to try to change in America.

Ben Byford:[00:37:46]

To any Americans listening, I apologise, but that's just the case. So... I'm going to get cancelled.

Karin Rudolph:[00:37:53]

No, no, no.

Ben Byford:[00:37:54]

What do you think about how that feeds into freedom of speech and things like that?

Karin Rudolph:[00:38:02]

Things can get very controversial in these discussions. I think there is a tendency, and again, this is not left or right, this is just a tendency, to want to control speech, basically. You have social media, people saying things we don't like.

Ben Byford:[00:38:20]

Yeah, not we, but-

Karin Rudolph:[00:38:22]

Yeah, people. Not you.

Ben Byford:[00:38:25]

Not me.

Karin Rudolph:[00:38:25]

Yeah, exactly. And then you have all this intention to control what people are saying. Okay, there are legitimate cases where you don't want violence and you don't want things that are, pretty common sense, obviously wrong and bad. But then you have cases where people can express an opinion you just don't like, and you go with all the force into, no, you can't say that. And I think there's a limit, and we shouldn't be encouraging people to self-censor, because I think that's an issue. And I think it's an issue that, again, goes across the entire political spectrum.

Ben Byford:[00:39:09]

And I feel like in the last few years, I've heard a lot of that being the case in universities, where...

Karin Rudolph:[00:39:15]

Yes. Visiting lecturers and talks and stuff. It's a big, big problem. I mean, for me, that's shockingly bad. It's unacceptable, because university is a place in which you should be discussing these things.

Ben Byford:[00:39:28]

Of all the places.

Karin Rudolph:[00:39:30]

Yes. That's when you need to go exercise your muscles, your brain.

Ben Byford:[00:39:35]

You don't have to agree.

Karin Rudolph:[00:39:36]

You don't have to agree. You can disagree. And that's how you learn things. If you go to university... and I'm not going to go into any examples, but I've tried to read... The amount of repetition of things is shocking as well, because people want to repeat something so as not to upset someone else. So you've got all these papers which are basically a repetition of the same things, because nobody wants to say, actually, your conclusions are not really... you can't verify these things. Social sciences, I'm not going to go into social sciences, but basically you have an endless chain of confirmation biases, again and again and again. People just look for the information that's going to affirm what they believe. And that is unacceptable at university level, in my view.

Ben Byford:[00:40:29]

Well, I mean, I don't know if there's specifics around that that you want to dig into, but I feel like you are not going to have many friends in the Humanities Department after this.

Karin Rudolph:[00:40:42]

It's okay. I'm okay with that. I studied sociology, so I remember in my old days, you would go out and say, oh, let's study a phenomenon. And then you would try to understand, oh, this is happening because of this set of reasons. Now you go out and say, oh, this is happening, let's see how evil people are because this is happening. It's like, you shouldn't do that. We just start dismissing people. I think there are lots of patronising people on one side and another. If you disagree with me, it's because you are an evil person. It's like, no, you disagree with me because you disagree with me. It doesn't make you an evil person. Is that more around the political side? Oh, yeah. I mean, yeah. Social science, humanities. It's like, yeah, it's really... I mean, the AI ethics space is not, you know... We are discussing with...

Ben Byford:[00:41:35]

Because I feel like we've gone massively off topic. Because this was supposed to be a ...

Karin Rudolph:[00:41:39]

Okay. Yeah. I know...

Ben Byford:[00:41:40]

But it's fine. Do you have a hot take on the AI ethics scene?

Karin Rudolph:[00:41:45]

Well, I think it has become a place where you have limits already, people telling you the things you should or shouldn't be discussing, which is something that is... We spoke about it last year on the previous podcast, when you have all these people saying, oh, you can only focus on this specific risk, you can't talk about existential risk. We might dislike it, we might disagree with that, but people have every right to research what they want, what they feel is needed. And then you have the other people dismissing all the actual things happening now, because we should only be talking about existential risk. So we have these two things. It's like, okay, we need to find common ground, which is, we can focus on different things.

Ben Byford:[00:42:36]

Yeah, it's weird. I don't think the existential risk safety people are actually the AI ethics people. I think they are working on side projects, and they do overlap, but I don't think they should be warring with each other.

Karin Rudolph:[00:42:55]

No, that is certainly the case. Different things. Yeah, they're looking at different things. Exactly.

Ben Byford:[00:43:01]

Sometimes they're looking at similar technologies. I think that's the only similarity in my mind.

Karin Rudolph:[00:43:06]

Yeah. The problem is when you have limited resources, you want people to focus on what you want.

Ben Byford:[00:43:10]

Why are they getting the money and we're not getting as much money as they are?

Karin Rudolph:[00:43:16]

Yeah, I guess. I guess that's something. But I think it's a lot about who's controlling the discourse as well, the public discourse, which is... Yeah, that can be... Because there's also some moral judgement attached to it as well. We are the rightful people, the righteous, and we are right, and we should be talking about this, and all these things. I think there's a lot happening in that sphere.

Ben Byford:[00:43:42]

Yeah. Well, I mean, if anyone is concerned about how I feel about this, it's that I care about both of those things. It's just that...

Karin Rudolph:[00:43:49]

Yeah, I do. I do the same. And I think we should be focused on a lot of different things. It's not just one thing. Same with technology. I mean, yeah, there are things that are clearly negative, but there are lots of good things, and I just really want to be realistic about it and not get into this dismissive evil-technology-versus-saviour-technology extreme. It should be: this is having an impact on our lives, so let's talk about it.

Ben Byford:[00:44:20]

Yeah. So on that note, how do you feel about... Sorry, this is a very bad segue. So on a completely different note, there's this idea that maybe we've come as far as we can with the current crop of LLMs and that type of technology, hitting some plateau. And obviously, we talked about agentic AI and other ways to exploit what we already have. But what do you see as an outcome if maybe we've come as far as we can with this transformer situation and what it can do?

Karin Rudolph:[00:45:11]

You're talking about scaling laws, whether more compute and data equals better. I don't think I know enough about what's going to happen. I know lots of people are criticising and saying LLMs have hit a wall, and whatever comes next is going to be really small changes or small improvements. I mean, if that happens, if that's the case, maybe we're going to go in a different direction.

Ben Byford:[00:45:43]

Right.

Karin Rudolph:[00:45:44]

Or different use cases. I don't think I've got an answer to that question, to be completely honest.

Ben Byford:[00:45:50]

Yeah. My main thing on all this stuff at the moment is the environmental issues surrounding the technology. And I'm really hoping that it hits a really hard plateau.

Karin Rudolph:[00:46:04]

Yeah. I know what you mean.

Ben Byford:[00:46:05]

And actually the race to the bottom is around environmental stuff.

Karin Rudolph:[00:46:10]

Yeah, I think there's a discussion around large language models and small language models. And the other one, which is quite interesting, is maybe refining the algorithms without necessarily adding more data and more compute power and more and more. I think you're right at some point, even though I don't think anyone can say what's going to happen or not happen. But if we hit a wall, what I don't think is going to happen is, hey, let's forget about AI, we're going to do something else. I don't know, guys, yeah, okay, that was it, it was fun. No, that's not going to happen. So when people say, oh, this is hype. I mean, no, that's not going to happen. We're going to find different ways. Maybe algorithms are going to be more efficient, or we're going to use less compute power. Unlikely, because we have all the things around semiconductors, and they're getting more and more powerful, and there's an entire industry there, which is quite an interesting space to watch as well, because of all the geopolitics with Taiwan and China. That's an entire industry which obviously wants to improve its processes and products.

Karin Rudolph:[00:47:16]

However, even if we hit a wall in the sense of, okay, all the improvements to these current LLMs are going to be small improvements. Okay, that's fine, because people are going to start using this more and more in workplaces, and that's going to produce another type of development, which, I don't know. The way I see it is we've reached a point where people are using this, but not at mass scale. This is not massive adoption of the technology. A lot of people are using it, other people are not using it. Loads of people are using it and not declaring their use, which is interesting. I read this book, which is quite fun, quite interesting, Co-Intelligence, by a guy called Ethan Mollick. It's quite interesting. I like it. He has a really cool Substack called One Useful Thing, I think it's called. He talks a lot about, for example, how people in workplaces are using ChatGPT or other cloud LLMs, and they're not saying they're using it. Some people call this shadow AI. It can be because they're using it on their laptops at home.

Karin Rudolph:[00:48:31]

So you have your work laptop and you have the other one. If you are not authorised by your organisation to use this, you can use your own machine and send the information to your email, which people will do. Or, if you can get away with it, you can use it on your work laptop. Still, let's say you have to summarise a report and it would usually take you two hours to do that. Then you use an LLM, you put the report in, ask it to summarise the main points, and in less than a minute you can have that. So the reasoning is, if I'm a person working on this, this is my job or part of my job, why on earth am I going to tell my boss? That's the reasoning. If I use that, I'm going to have a great report in possibly ten minutes, because I just have to refine it; it's ready, instead of two hours. If I tell everyone this is happening, I've got two risks. One, I can be replaced very easily. Two, my workload is going to go through the roof. So people don't say it. So we actually don't know how many people are using this at work.

Ben Byford:[00:49:42]

Some people are using it to cut down their workload.

Karin Rudolph:[00:49:45]

Yeah, absolutely. But they won't tell anyone.

Ben Byford:[00:49:47]

Are they going to the spa?

Karin Rudolph:[00:49:49]

Yeah? I don't know what people do, but I think it's, again, it's something we're discussing before where people say, Oh, this hype is not changing. I think it's changing massively. The thing is people are not talking about it because they prefer to keep it quiet.

Ben Byford:[00:50:03]

Yes. Well, I've been thinking about this a little bit, not in the shadow AI situation, but in terms of the things that businesses should be thinking about when considering this stuff. And part of that is the AI policy, essentially.

Karin Rudolph:[00:50:17]

Yeah, absolutely.

Ben Byford:[00:50:18]

You could lay it down to the outside, but also internally to your employees: we are aware of the use of these certain technologies, and it's permissible and allowed to do X, Y, and Z.

Karin Rudolph:[00:50:35]

Yeah, it should be. Exactly. It should be.

Ben Byford:[00:50:37]

We're stipulating it.

Karin Rudolph:[00:50:39]

Yeah, it should be.

Ben Byford:[00:50:39]

You don't need to hide anything from us.

Karin Rudolph:[00:50:41]

Well, yeah, that's exactly the point. Absolutely.

Ben Byford:[00:50:42]

And it's not acceptable to do this other thing.

Karin Rudolph:[00:50:45]

Yeah, exactly. I think he also makes that point. It's very important. We need to experiment with things and we need to use them. Obviously, we need to be responsible. You're not going to put in all the health records for your patients.

Ben Byford:[00:50:56]

Please don't put all the health records on.

Karin Rudolph:[00:50:58]

Don't do it. Yeah. But you can do all the other things. We need to treat people like intelligent human beings as well.

Ben Byford:[00:51:06]

I just feel like just because you said that, we need to not treat humans as intelligent beings.

Karin Rudolph:[00:51:10]

Well, I don't know. Because people have done things. I mean, yeah. Okay. You can summarise a report which doesn't contain personal information. That's absolutely fine.

Ben Byford:[00:51:21]

Okay. Yeah.

Karin Rudolph:[00:51:21]

Go and do it. Check it anyway, because obviously... I mean, summarising, the information is there. It should be fine.

Ben Byford:[00:51:28]

Yes.

Karin Rudolph:[00:51:28]

Because basically the information is there. You need to take it anyway. Now, is it sensible to put in all the information when there's a privacy concern? Yeah. That's when you have to say, okay, this is good, this is bad. Don't do this, but you can do that.

Ben Byford:[00:51:47]

And now you know. Or maybe you've had some training.

Karin Rudolph:[00:51:50]

Yeah, but that's interesting, because I was talking about that the other day. To write a policy that's useful, you need to understand what people do. We go back to the chicken-and-egg situation. If you don't know what people are doing, you can't write a policy. So if you have people not telling you, I'm doing this because I'm scared that you're going to send me more reports, I don't want to do it, you can't plan for that. So we have that problem.

Ben Byford:[00:52:16]

I feel like you're already in a problem situation if you don't have any level of trust in your organisation.

Karin Rudolph:[00:52:23]

Yeah, absolutely. And then other people are also scared of losing their job, which again, shouldn't be dismissed, obviously. It's a very important topic.

Ben Byford:[00:52:33]

So we come to the end.

Karin Rudolph:[00:52:36]

Oh, no. Let's carry on.

Ben Byford:[00:52:38]

We could just keep going. But unfortunately, we only have this lovely space for a limited amount of time. So we'll hopefully see what you think, and we'll do this again. So for 2025, before we leave, what are you excited about? What are your hopes? And what would you like to leave people with?

Karin Rudolph:[00:53:00]

Well, I think lots of things are going to change. I mean, obviously the AI Act, which has become a little bit of an obsession in my life. That's going to be a really interesting space, all the changes and the implementation, which is... I mean, people are scratching their heads because it's such a complex piece of legislation to implement, especially for small businesses. They don't really have any help. That's another space we have to look at in more depth, to see how businesses, governments or even consultancies can help these businesses. That's an interesting space. AI agents, that's going to be an interesting one. We'll see, maybe next year we come together again at the end of 2025 and we say, that was nothing.

Ben Byford:[00:53:47]

Hype.

Karin Rudolph:[00:53:47]

Who knows? Hype, who knows? And obviously, the conference, which I'm going to mention again: yeah, come along. If you're in the UK, and even if you are not in the UK, we had people coming from three different countries, which was fantastic. Come along. It's going to be open and affordable, with great speakers and topics. It was a great conference last time. Yeah, it was a great one.

Ben Byford:[00:54:09]

It's always nice to meet and shake people's hands in person.

Karin Rudolph:[00:54:12]

Yeah, absolutely.

Ben Byford:[00:54:14]

Karin, thank you very much for your time.

Karin Rudolph:[00:54:16]

Thank you. Thanks, Ben. That was fantastic.

Ben Byford:[00:54:18]

Have a good end of 2024.

Karin Rudolph:[00:54:20]

Yeah. Merry Christmas also. Enjoy. See you. Bye.

Ben Byford:[00:54:24]

Bye...


Episode host: Ben Byford

Ben Byford is an AI ethics consultant, code, design and data science teacher, and freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design Ben and the team help organisations make better AI decisions leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford