84. Review of 2023 with Karin Rudolph
Karin Rudolph is the Founder of Collective Intelligence, a Bristol-based consultancy that provides resources and training to help startups and SMEs embed ethics into the design and development of technology. She is currently working on the launch of the Ethical Technology Network, a pioneering initiative to help businesses identify, assess, and mitigate the potential ethical and societal risks of emerging technologies. Karin has a degree in Sociology, studies in Philosophy, and is a regular speaker at universities and conferences.
Transcription:
Transcript created using DeepGram.com
Hello, and welcome to episode 84, our special end-of-year 2023 episode. Joining me in person to talk about the year just gone is Karin Rudolph. This episode was recorded on the 29th December 2023. As with my favorite episodes, we recorded in person. However, at this time of year, we really struggle to find somewhere nice and quiet.
So we ended up in a coffee shop, so please do excuse some of the audio. There will be a little bit of clanging of cups and coffee paraphernalia. I've done my best to edit out the general background noise. So if it sounds a bit unusual, that's because of the heavy-handed editing going on. In this episode, we talk about algorithms of oppression, existential risk of AI, that horrible acronym TESCREAL, Geoffrey Hinton's resignation, the AI Safety Summit, the EU's AI Act nearing its publication date, the New York Times' recent lawsuit, neurorights, the AI Ethics, Risk and Safety Conference, and much, much more.
If you'd like to find more episodes, you can go to machine-ethics.net. You can contact us at hello@machine-ethics.net. You can follow us on Twitter at machine_ethics, on Instagram at machineethicspodcast, and on YouTube at machine-ethics.
And if you can, you can support us on Patreon at patreon.com/machineethics. And please rate and share the podcast wherever you listen. Thanks again, and hope you enjoy. Hi. Welcome to the podcast.
Could you introduce yourself, who you are, and what do you do? Yeah. Hello. Hello, everyone. And it's great to be here.
Thanks for inviting me, Ben, after so many years. So, yeah, my name is Karin Rudolph. I'm the founder of Collective Intelligence, which is a Bristol-based consultancy, and we provide training and resources for organizations that want to incorporate ethical thinking and to also understand the potential ethical and societal risks of emerging technologies. I'm also the cofounder of the Tech Ethics Bristol meetup, which has been running for 3 years and has been really successful. And, also, last one, I'm gonna be launching a new project very, very soon.
I'll tell you all about it at the end. So, yeah, it's called the Ethical Technology Network, and it's a new initiative in the southwest of England. Wicked. So thank you for joining me on this yearly kind of roundup, ramble, chat about what's happening in AI, AI ethics. So before we get into that.
Oh, no. Oh, no. I know. I know it's coming. What is AI?
What is it to you? What do you think? God. Okay. I think the best way to define AI is as a work in progress, really.
I don't think anyone can give a definite answer, or a really good, super clever answer, of what AI is. Artificial intelligence has been around for decades; now everyone talks about it. But, I mean, the two things, the way I try to understand it: it's artificial, I mean, it's not natural. And intelligence, obviously, there are lots of definitions of intelligence, and people argue about whether these machines are intelligent or, you know, stupid or whatever.
I think the way to understand AI, maybe from a technical perspective, is basically as machine learning software that uses large language models, or a vast amount of data, to do some type of pattern recognition or pattern analysis and to try to predict what's coming next, which is the case with large language models. That's kind of the boring technical definition, but, obviously, we want something a little more interesting. So there are lots of different definitions, and which one you use is gonna depend on your field of study or your interests. Lots of people say, and I think I agree with this, coming from a social science background, as a social scientist with a background in sociology: these are sociotechnical systems, which means they are embedded in society. We feed the system with our behaviors, our rules, with our society as a whole, and we also interact in ways where they feed us back with some of those results.
So it's kind of a circle; they're constantly feeding each other, basically. So in that way, it's a lot more than just software. The third way to understand AI is as a system, which some people would say is a system of oppression. You know, something that's gonna manipulate people. That's kind of a more critical way to understand AI, or to understand any technology.
So some people are really scared of this level of manipulation that especially big corporations or big enterprises can have over people's behaviors. Other people see the opposite: a kind of system of liberation. You know, techno-optimism. Everything's gonna be great. Society's gonna thrive, and we're gonna be fantastic.
So what is AI? I mean, god, it's so difficult to say, but it depends, obviously, on lots of different questions; those are different takes according to people's fields and interests. But I would say sociotechnical system is possibly one of the best descriptions. Yeah.
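To make that "boring technical definition" concrete: predicting what comes next from patterns in data can be sketched in a few lines. This is a toy bigram counter, purely illustrative and nothing like a real large language model; the tiny corpus and all the names in it are invented for the example.

```python
from collections import Counter, defaultdict

# Toy illustration of "predict what's coming next": count which word follows
# which in a tiny corpus, then pick the most frequent follower. Real large
# language models learn this from vast datasets with neural networks; this
# bigram counter only shows the basic idea of pattern recognition.
corpus = "the cat sat on the mat the cat slept on the sofa".split()

follower_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most common word seen after `word` in the corpus."""
    followers = follower_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict_next("the"))   # 'cat', the most frequent continuation
print(predict_next("sat"))   # 'on'
```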
Do you fall into the kind of pro-AI camp or the negative-aspects-of-AI camp? Because, obviously, you were saying that the systems-of-oppression thing, which is pretty... Yeah. It's pretty, yes. Damning, isn't it? Yeah.
Brutal. Yeah. Yeah. I mean, I don't believe in either of the two extremes. I think, especially when you work on trying to do ethical thinking about these systems, most people are realistic more than anything.
I don't see artificial intelligence as an evil force in any way. I love technology. Technology is amazing. And the reason I, and lots of people like yourself, work in AI ethics or fields related to technology ethics is because we believe these things can be good.
Mhmm. And we want to ensure that we're using all the tools we have to make these things work better. I think the people who don't believe this is good, or think it's completely evil, are the people who say, no, you need to stop it, you need to stop all development.
You need to stop everything because it's so damaging. But you want to intervene in one way or another. Yeah, that's my take on this. I think it can be a really interesting tool, and yes, it's an amazing technology. So, yeah, I'd be pro.
Yeah. In that sense, that's good. Sweet. I'm doing double thumbs up here. It kinda depends, though.
Right? Yeah. Yeah. Of course it depends. But, yeah, I don't see evil forces.
Yeah. No. I don't see Terminators, and I think people get really obsessed with these things. I think it's a good segue, because I was preparing for this conversation, and I felt like we had, in the roundup episodes, different years for things which were big. Right?
So last year, it was generative AI. Like, generative AI was kind of seeping out of the academic sphere into the public sphere, and it was becoming images and texts and all these sorts of things. And so last year really was a big year for generative AI. You can probably say that again for this year because it's just continued. Yeah.
Yeah. Absolutely. But, also, what you were alluding to is that, obviously, as ethicists, we don't talk about evil or good; we talk about the nuance of the situation, what aspects we want to take forward into the future. But I think this is the year of existential risk. It is.
Yeah. It's been coming for a long time. I feel like it's really hit the the public sphere Absolutely. The same way. Yeah.
Yeah. I think that's really interesting. And there are so many aspects. I mean, this could be an entire podcast series just talking about existential risk.
Lots of things are happening. Lots of things also happened, especially from March 2023, when we saw the letter from the Future of Life Institute in America, which is an institution linked, well, it is linked, really, it's not just some people saying it, to the idea of existential risk of artificial intelligence.
We have people like Max Tegmark, who has been famous for being on that side of longtermism and transhumanism. You also have Stuart Russell, who is a lot more moderate, but he still, you know, believes in those things. Yeah. My take on that is, I think it's an important conversation. I don't agree with people who are like, oh, no.
We shouldn't be talking about this. You know, it's like kind of forbidden topic. I don't think that is helpful. We might disagree. Yep.
Absolutely. But it's worth discussing. When it comes to existential risk, some people also link this to something called the TESCREAL bundle. Have you heard about this? TESCREAL is a really weird one; it stands for transhumanism, extropianism...
Yes. No. I have heard it. There are a couple more. I don't remember.
Really long. Yeah. It's a really long thing, which a guy, I don't remember his name, has been talking about for a few months or maybe years. I don't know. Basically, these are the elements behind the existential risk idea, from people who believe that the development of very clever AI can lead to, sorry, human extinction, basically.
Yep. So transhumanism, which is the idea that we will become some type of cyborg, and some people say that this is natural evolution, human evolution. So our next step is to become, you know, some type of mind-machine, your arms robotic. Yes. So we are kind of half human, half robot.
Yeah. Some people really believe that's the next step of evolution. Yep. We can argue in favor or against; I mean, we're already half robots, you know?
So, but some people believe that. But then you start adding things like longtermism, which is different; it's not necessarily a long-term vision. Longtermism is almost religious. It's really strange. I think you've hit on what most people have a problem with in this whole thing: it seems like a pseudo-scientific religion. Yeah.
It's a really strange thing. And I almost think that, in isolation, each of those things per se isn't problematic. They're just a way of viewing the kinds of things that you could put money and effort into. Right? Yeah.
Absolutely. Or, like, beliefs in what we should be doing. But the idea that you have all these different things, and they all contribute to this almost religious, yeah, vision of the world, which you think is correct, is probably problematic and upsetting. Yeah.
I think it's... those people are kind of against the idea. I'm not against the discussion of ideas. Right. Yeah. I think ideas should be free, you know; ideas are important to discuss.
Yeah. Longtermism specifically talks about not the next 500, not even 500 years. We're talking about the future of humanity in the next 5,000 years. I mean... Yeah. Even beyond that, I mean, who knows?
I mean, that's impossible to predict. And the religious component, which I find quite interesting, and which people also criticize a lot, is this sense that we're constantly looking for transcendence, that we're gonna be more than our mortal bodies. We're gonna die, and we're gonna become this kind of singularity, like some type of cosmic travel, which is... I start getting really, like, okay, I don't really understand the cosmic travel part. But then you have things like effective altruism as well. Yes.
Which goes into the mix. And, yeah, for lots of people that is complex, because in one way or another we start to create different tiers of human beings, or categories of human beings. Potentially, someone who's a billionaire has more value than maybe you and me or anyone. Yeah. Well, you're almost bifurcating the species, like... Yeah. Exactly.
There are human beings and then there's whatever the other thing is. Right? Or the other thing and the other thing and the other thing. Yeah. You start to create, yeah, different kinds of degrees.
And now, something that I find interesting: again, the discussion itself, as you said. Obviously, I don't think this should in any way inform public policies, for example. Yeah. I don't think anyone is thinking about using existential risk as a framework to, you know, put money into research. But when people start thinking, okay, this should be the main priority, I agree with the people who disagree with that, because I don't think this should be the main priority.
However, catastrophic risk of AI, that's a different kettle of fish. Yeah. And I think those things should be taken into consideration. Yep. So you've talked about that horrible acronym, which I can't pronounce, but I believe Timnit Gebru also talks about that.
This kind of religious aspect of transhumanism mixed with longtermism, blah blah blah. All these sorts of things which are trying to drive money, investment... Yeah. Yeah. Absolutely. Absolutely.
And it's problematic, as you put it, because you don't want it to drive policy, because there are lots of things that we need to do now. Right? Absolutely. And what's the catastrophic risk, and what do we need to do right now, do you think? I mean, catastrophic risks, the way I see it, are things that can go wrong if we don't take action now.
So I'm talking about potentially 20, 50 years' time, which I think is an interesting discussion that we should all have. Things like, for example, you know, if we start connecting AIs to all our energy supply, and then we have some type of strange emerging behavior we can't even anticipate or understand, and then suddenly, yeah, we have no access to any of the critical infrastructure. I mean, that is pretty bad for society. Yeah.
You know, those things could happen. A really bad cyberattack that, for example, you know, leads to the collapse of financial services. Yeah. It's not an existential risk; one, humanity won't go extinct.
However, it can be pretty catastrophic if, for example, one day you go to a bank and the money has disappeared, and there's no authority to say, hey, don't worry, we took all the safeguards. No, because basically it's gone.
Yeah. I mean, those things can happen. I don't see I I don't think those kind of potential catastrophic risks are science fiction. I think these things can go wrong. Absolutely.
So one of the things I did want to hit on was that we had lots of people seemingly quitting their jobs, or seeing the new possibilities within some of these large language models, or just AI tech as it's progressing, and going, actually, I can't work for Google anymore. Yeah. And suchlike. Because it's too important for me to be in an institution and not be able to talk publicly about what I know. Absolutely.
I see. Geoffrey Hinton. Exactly. Yeah. One of a couple.
Yeah. Do you think that sort of thing is bleeding into the idea that people are starting to think that we need to put the guardrails on, put the brakes on, or just discuss more publicly, Mhmm, without, you know, being sued or whatever, that there are these issues, these larger issues that we need to... Yeah. I think that was one of the good things about this letter, and then we saw lots of other letters. They were very similar.
Some people say, no, we don't need to discuss that; we need to discuss bias and discrimination right now. Yeah. But the good thing about all these letters and all the discussions, and people like Geoffrey Hinton and others, is they came out saying, okay.
I mean, at some point, I think March, possibly June, that was the main topic of discussion. Yeah. Discussion. Mainstream, which is really important. As you said before, it was academics having some discussions, and then everyone was on BBC News.
Everyone. I think that's an important thing because, obviously, we need to discuss this. We also need to put pressure on governments to bring in regulations. It's been 3 years since, you know, the European Commission started talking about regulations. Now... Yeah.
We're gonna... Yeah, we can talk a little bit more about that. But, yeah, I think it's a good amount of pressure, and I think it's just an important thing to do as a society. We also need to understand things.
People quitting their jobs to speak more freely. I mean, yeah, I think that's definitely a good sign. I don't see anything wrong with that. Now, some of them are gonna go and create think tanks, which is like, you have lots of influence already. So, I mean, yeah.
We'll see. I mean, it depends what you're gonna do with that knowledge. Yeah. Are you... you're not interested in the think tanks? It depends.
I mean, they are good as think tanks, but there is loads of lobbying, you know, and there are lobby groups. And, yeah, I mean, I'm not entirely convinced that's the best way to do it. I also secretly think there are a lot of think tanks now. Like... Yeah. A lot of... And institutes and research organizations.
Yeah. Absolutely. Yeah. I mean, to to I mean, again, people are just obviously free to do whatever they they they want. No.
You're not telling them what's legal. No. But this is the new tendency: you can't research that, you can't talk about that.
So I'm not per se against think tanks. Of course not. But when they become lobby groups, then, yeah, you start thinking, I don't know, I'm not so sure. Let's keep going down that line.
And this year we also had the US gazump the UK on the safety summit. So we had the Safety Summit in the UK. Yeah. And the week before, the US put out an announcement. And there's a safety institute; I think that was the night before.
Oh, it's just... I love it. I know. Anyway, so we had these governmental ideas about what we should be doing with AI. And, actually, it's frustrating for people like me because it was closed-door. Right? Everyone wants to go.
I couldn't go either. I would have liked to have been there and stuck my oar in, but, you know, you also have fringe events. Yes. There's a lot of awareness about AI and AI safety. What do you think this was all about, the safety summit?
Yeah. I mean, there's also cynical voices that said it was just a performance. You know, people say, no. We should do this and that. I I kinda disagree with that.
I think the fact we are talking about this, the fact people are more aware of the potential downsides and risks, the mainstream is constantly talking about it, we have the announcement from the government, the AI Taskforce became the AI Safety Institute. Right.
So we're gonna see... look, there's one in America, there's one here, another one in... they're kind of proliferating everywhere. And so everyone's trying to get into the safety of AI, how to create safe systems. So I think it was a good thing to do and to have and to host.
And, obviously, this is all about being the leader of AI. It's it's yes. Obviously, it's a vanity. Always a little bit of vanity behind those things. But I think the discussions were interesting.
It was really good to see China taking part, and I think representatives of twenty-something countries came. I saw some of the discussions online. It was a positive development. We hosted one here in Bristol, and you were there. The... yes.
The fringe event. Yeah. The fringe event. That was really good as well. Lots of interest, lots of questions.
Yeah. I mean, what's gonna happen next? Yeah. Obviously, that's the the big question. Next year, we know in May, I think it's in May, South Korea is gonna host another one.
Right. Yeah. The safety summit. It's gonna be virtual. And then in November, France will do the same in person.
I understand. So, yeah, I think we are following a little bit of the sustainability type of events, like, you know... The COPs. The COPs and, you know, the global ones. Everyone comes together to promise this and that. But, yeah, as you said, we can promise... Yes.
Whatever we want. But Exactly. It's it's mostly like we we've come together. We're having these discussions, and it's a good excuse to Yeah. Talk with partners that you don't necessarily get to talk to face to face all the time, that sort of thing.
Talk to governmental bodies. But the outcomes are often these promises to fulfill certain duties, Yeah, let's say. And they're generally coming from either government level or from big company level. Right?
So we will see the invites going out to, you know, your Facebooks and Googles and... Yeah. Yeah. All the big... All the big players. Yeah. Which I think is... we need to invite these people.
Yeah. We need to have lots of people, academics, members of civil society. Absolutely. We need to have them all. Only 100 people got an invitation, so, you see.
Those people were upset. The golden ticket. Yeah. It's like, I want to go. Yeah.
Everyone wants to go. Wow. But the interesting thing also is that people were thinking, oh, they're gonna be discussing existential risk, and I don't think that's a fair representation, because I don't think they were talking about existential risk. My impression is they talked about long-term risk, not anything like 500 or 5,000 years. Yeah.
But they were talking about what can go wrong. They talked a lot about creating something similar to what they call the international atomic agency. So kind of a global body, you know, with some powers to say, okay, this is a high-level risk of something going very wrong, similar to nuclear disasters, so they can step in and say, okay, we need to stop this at this point.
Like, alert. Alert. I was thinking. That's very reassuring. Yeah.
Yeah. But yeah. And there was a lot of conversation also about bias and discrimination. I have to say, it was more or less balanced, to some extent, obviously. It was the first one; the next one should be better.
But... Yeah. Codes of conduct, those of companies saying, we don't need regulations, we can, you know, set some type of promise, and we promise not to do this. I mean, those things are complete nonsense in my view. Yeah.
These are the things that don't help, because, yeah, I can have a code of conduct, I can have promises to never do this, and then... yeah. Yeah. That's... Yeah. Like, I guess it's something that we've been working on, you know, collectively and separately for ages, like advising what people should be doing.
Yeah. Absolutely. Part of that is publicly saying what you want to how you want to conduct yourself, what your principles or your ethics or whatever it is. You have to follow that up with, like Yeah. We need to yeah.
You need to have some actions, not just codes of conduct, some promises, or more frameworks. We know there are, like, 500-plus frameworks. Yeah. I mean, we have plenty of those things.
We need to start implementing this. Yeah. So, yeah, it's good only if it can lead to regulations. I'm a big fan of regulations. A lot of people say, no.
We shouldn't have them. I think that's nonsense. I was gonna ask you that. So part of this equation is that we've had the EU, a bit like with the GDPR. Yeah. Mhmm.
Which was brought out in 2018. Yeah. I think it was talked about before that. So now we're gonna have an EU-led, I guess, again, gold-standard regulation.
We'll see. Which will influence a lot of different countries, obviously. The UK, where we are, we're no longer in the EU. No. We're gonna have to do something about that.
Right? Then we're gonna have to look at that legislation and go Yeah. Okay. What bits are we gonna take on board? Which are we not?
So you're, like, pro... Oh, yeah. Yeah. Yeah. I am pro regulations. Obviously, you need to have good regulations.
There are lots of things and lots of levels of uncertainty, lots of things you can't really, you know, forecast or predict. But the interesting thing about this EU AI Act, which is... yeah, I saw the draft. The final draft was published. It was Saturday, 8th December.
I woke up and I was like, oh, yeah, it passed. Because it was, like, 3 days of negotiations, and France, Germany, and Italy, I think, were against the regulation of things like generative AI. There were lots of discussions about open-source AI and innovation.
What's gonna happen with these businesses? But then they reached some type of agreement. Now it has to go, next year, well, early 2024, potentially March or May, and that's exactly where it goes: through the European Parliament and the Council. Mhmm.
And then the final agreement, and it will be, like, enacted. And then you have about 18 to 24 months for it to be implemented, so businesses don't have to comply immediately. Yeah. However, things like generative AI, I think, have got a shorter window. I think it's 12 months.
And things related to biometrics, which is very complex, because the risk is rated too high, have a window of 6 months. Now, real-time biometrics is a real headache for everyone, really, because you should never use it, except that you have all these exceptions, like terrorism or, you know, really bad things happening. And, you know, they can say, yes, we are gonna use it because these are exceptional circumstances.
But then that can be abused very easily as well. So it's, yeah, a very, very difficult one. And also they added the new category of systemic risk, which wasn't in the law before. So you have four levels of risk in the EU AI Act, may I add: you have unacceptable risk, you have social scoring, you have things like real-time biometrics with exceptions. Yeah. Exceptions.
Then you have anything that kind of manipulates people's behaviors at scale. So those are things that you can't develop, Yeah, you can't put on the market. Then you have high-risk AI, which is possibly... I mean, some people say it won't be as many. I think lots of things are gonna go into high risk.
Anything that can have an impact on people's life, like access to education Mhmm. Employment, financial services, lots of things that you know Health. Yeah. Computer says no. Yeah.
That's the type of thing that can affect you. So those things are gonna need lots of conformity assessments, loads of documentation. It's gonna be quite heavy in documentation and all these assessments you need to fill in. And lots of companies... I mean, I don't think anyone knows how to do it, to be completely honest, because it's never been done before. So we have examples of other types of, you know, protocols, like biomedicine or bioethics.
They have very strict protocols. Product liability, when you buy something that is faulty, so you can go and say, hey, I need my money back, or something happened. But, yeah, it's a mix of loads of different things they're trying to apply to these AI systems, because we don't really know how to regulate them. It's gonna be trial and error.
Yeah. But that's high risk. Then you have limited risk and minimal risk. The interesting thing about limited risk: that's things like chatbots. And since people are gonna interact with them, you have to follow the transparency requirements, which say, hey, you need to make sure people understand that it's a chatbot.
It's not a person. You need to, you know, ensure you demonstrate that a deepfake is actually a deepfake. So you have watermarking, and some people are already saying, we are creating, you know, systems to fool the watermarking, basically. So you have watermarking, and then you have AIs trying to say, okay.
We're gonna ensure this thing. I don't know how on earth they're doing that. Yeah. But, yeah, you're gonna have one which says, hey, you should be doing that, and then someone is creating something already to try to erase that.
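To keep the tiers described above straight, here is a rough sketch of the categories as discussed in this conversation, written as a small Python structure. It paraphrases the discussion, not the legal text of the Act, so the examples and compliance windows should be read as the speakers' recollection rather than an authoritative summary.

```python
# Rough sketch of the EU AI Act risk tiers as described in this conversation.
# This paraphrases the discussion, not the legal text; examples and compliance
# windows are the speakers' recollection and should not be relied on as legal advice.
EU_AI_ACT_TIERS = {
    "unacceptable": {
        "examples": ["social scoring",
                     "real-time biometrics (with narrow exceptions)",
                     "manipulating people's behaviour at scale"],
        "obligation": "cannot be developed or put on the market",
    },
    "high": {
        "examples": ["access to education", "employment",
                     "financial services", "health"],
        "obligation": "conformity assessments and heavy documentation",
    },
    "limited": {
        "examples": ["chatbots", "deepfakes"],
        "obligation": "transparency: disclose the AI, watermark generated content",
    },
    "minimal": {
        "examples": ["spam filters"],
        "obligation": "essentially no specific requirements",
    },
}

# Compliance windows mentioned in the conversation (approximate, from memory):
# most provisions ~18-24 months after entry into force, generative AI ~12 months,
# prohibited (unacceptable-risk) practices ~6 months.
```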
Yeah. Well, you'd expect that anyway. It feels like one of those things. There are loads of these things where it seems easier for companies, because they have an interest in not paying fines and... Yeah. Yeah.
Operate. Right? And they are very heavy kind of fines as well. Exactly. These are these are substantial fines.
And so they have an interest in not getting fined, making money, and doing that within, you know, the guardrails of the law. And all that, like, the world is a globalized market, you know, with the Internet. So if you sell in the EU, you're gonna have to deal with this stuff. Yeah. But there are people who are gonna be using AI tools for other uses, who aren't necessarily companies, who aren't institutions, who aren't incentivized to abide by these systems, or it won't even occur to them that this exists. Yeah.
Which is gonna be, like, bonkers. Right? Yeah. I mean yeah. That that's, yeah, that's another big problem.
Absolutely. Absolutely. And, I mean, to give an example very quickly: when it comes to things like minimal risk, there are things like spam filters, which are supposed to be minimal risk. And there was a case with that. I was giving a presentation the other day, and everyone laughed, because, apparently, the Republicans in the US were complaining against, I think it was Google, saying the company was sending their political advertisements or campaigns to the spam folder.
Yeah. So, obviously, people laughed because they're Republicans. But what people don't realize is, yeah, you can laugh about that now. Mhmm. But what about when there are only minimal requirements?
It's like, basically, you can develop spam filters and nobody's gonna check them. Yep. But then it's not the political party you disagree with, it's your business. Yeah.
Yeah. Or it's a message you want to send because, yeah, you are launching a new product, whatever. And then company X says, you know, actually, I don't like these people, so I'm gonna send their message into the spam folder.
Yeah. That's less funny when it happens to you. And, yeah, it's a strange one, because it's not really a risk where you'd say, oh, it's, you know... but it can put companies into serious difficulties. Yeah.
Well, I feel like with most of these things, AI is the tool, right, for filtering things which are obviously spam all the way up to maybe less so. I think the problem that you've outlined there is actually the monopoly problem, Yeah, Absolutely, with the mode of communication. Right?
Yeah. It's true. So, like, if you're a Google or a Facebook or a Microsoft, Yeah, you basically control the mode of, like, the means. Yeah.
Which also takes us to another big problem, which is why we're giving these companies so much control and also so much of the ethical decisions. Basically, you say, oh, I don't like your message because your message is whatever. Yeah. Oh, it's not according to my values. Okay.
Yeah. Fair enough. We have legal things. We got legality to say, okay. These things are acceptable.
Or they're illegal and it's terrible. But sometimes it's, you know, I don't like your political party, and there's nothing wrong with the message itself, but the company doesn't like these people. Why do they have the control to do that? Yeah. You know?
So... Yeah. But also the problem is that they have control, and we don't have access to monitor that control. Oh, yeah. Absolutely. In a way that's, like, we don't have a view on those levers. Yeah.
Or that training data. You know? It's all hidden behind. Yeah. Yeah.
Yeah. Yeah. We have no idea what's happening, which is... yeah. It takes us to all these things about data provenance and, Yeah, how we can... yeah.
These are other things that are gonna change as well. You need to start demonstrating where you're getting the data, how you're collecting the data, how and why you're, you know, cleaning the data. I mean, the entire life cycle, which is an extremely complex thing to do. Yeah. And when you start reading all the assessments and documentation, it's like, wow.
This is a lot. You know? It's a lot of processes and things. Right. Yeah. It's quite heavy.
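The "entire life cycle" documentation described here, where the data came from, how it was collected, how it was cleaned, amounts to keeping a provenance record per dataset. A minimal sketch of what such a record might hold is below; the field names are illustrative assumptions, not taken from any standard or from the AI Act itself.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetProvenanceRecord:
    """Illustrative record of a dataset's life cycle for documentation.

    Field names are made up for illustration; they are not drawn from any
    particular standard or from the EU AI Act.
    """
    name: str
    source: str                      # where the data was obtained
    collection_method: str           # how it was collected (scrape, survey, logs...)
    legal_basis: str                 # e.g. consent, contract, licence
    cleaning_steps: List[str] = field(default_factory=list)
    known_limitations: List[str] = field(default_factory=list)

    def summary(self) -> str:
        return (f"{self.name}: from {self.source} via {self.collection_method}; "
                f"{len(self.cleaning_steps)} cleaning steps documented")

record = DatasetProvenanceRecord(
    name="customer_support_logs_2023",
    source="internal ticketing system",
    collection_method="automated export",
    legal_basis="legitimate interest",
    cleaning_steps=["removed personal identifiers", "deduplicated tickets"],
    known_limitations=["English-language tickets only"],
)
print(record.summary())
```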
Yeah. That's why, I guess, the UK is trying to do something different. And some people agree with the UK; I don't know how I feel about the UK's pro-innovation regulations. I can't remember hearing an ounce of regulation coming through the UK. It was mostly, like, we are pro-AI, and we are supportive of innovation, and we are interested in... Yeah.
You know, accelerating our placement in the globe. But sounds good. Yeah. Exactly. So this is all good for productivity, innovation, all that sort of stuff.
So I'm guessing, if the UK wanted to be more productive, Mhmm, they could follow on the heels of the EU, or do something similar but different, or different again, to maybe fill the holes where the EU Act has them. Yeah. Or incentivize that behavior. Yeah.
I think, yeah, one of the issues with the UK regulation... I mean, one of the good things, some people will say these are good things, is that it's decentralized. I mean, in the EU you have this kind of centralized governance, and you have this AI advisory, whatever it's called, council. What's the name? Like, yes, like a council of people always looking at all the potential bad things that might happen.
But they also create a database of incidents, things that have gone wrong, which I think is a good thing, because, you see, in engineering, all the complex engineering projects also have these kinds of safety assessments. They're very heavy on what happened in aviation, for example. They have this very big history of all the things that went wrong 20, 30, 50, 60, whatever years ago, and then, okay, now we have really good documentation to avoid future problems, because we know these things went wrong before. So here, it's a lot more like, yeah, one body doing... you know, the Financial Conduct Authority is gonna be looking at all the financial services, and health is gonna be another, separate body, which is... yeah. People say it's gonna be a lot more flexible, but the problem is, are they gonna communicate with each other?
I don't know. Nobody knows how on earth they're gonna become, like, okay, I know that happened, and that potentially can affect... because, also, we're talking about big things like generative AI, or multimodal systems or models. They can affect lots of different things. They won't affect just financial services.
They can affect lots of things related, or even unrelated, to that field. So we need that level of centralization in a way, even though people are against the idea. You know what I mean? It's like, we need... Yeah. That, okay.
We have all the information in one place, instead of, oh, you know, I couldn't see your report, did you send it to me? Like, yes, I sent it to you. Oh god.
Sorry. But, yeah, that's not helpful. So, yeah, I think we need a little more strict oversight and a more centralized approach than all this. Because otherwise the risk is they never communicate with each other.
Yeah. I mean, these are perennial issues for sure, which we won't continue digging into because it's always frustrating. So I've got noted down here OpenAI's Sam Altman... Oh. Debacle. Oh, yeah.
Yeah. I don't know if you had anything to say about that. It's kind of sort of Yeah. Still ongoing. I don't think I don't think anyone knows.
I mean, yeah, apparently five people know what happened. Yeah. All the rest is speculation. And, apparently, on the existential risk and the potential really bad things that might happen: people who believe in those kinds of risks started getting really unhappy with Sam Altman being, apparently, reckless, thinking, okay, I don't care about the risk, I want to make money.
That's the story. I have no idea if that's the reason, you know, this guy was fired then rehired. Then rehired. Yeah. Just... yeah.
It also makes... seeing all these people with a huge amount of money and budget, their communication is just awful. And you'd think, you know, they have a PR team. Yeah. Yeah. But I guess they're spinning it.
Right? It's like... Yeah. But still, I mean, surely you can communicate these things a lot better, because the only thing you're doing is fueling more and more conspiracy theories, potentially, or kind of crazy theories. So, apparently, that was one of the reasons. And then, as I mentioned, this Q*, supposed to be a very advanced model. Okay.
Yeah. Which is supposed to be, like... well, the advance is, Yeah, it can do kind of basic math, kind of calculation. Yeah. It's supposed to be, like, a really big breakthrough in artificial intelligence.
Yeah. It's called... yeah, I think it's called Q*. And that was apparently one of the reasons: they started developing something that nobody was told about. And Sam Altman was like, yeah.
Let's carry on doing this. And it was like, okay, you need to stop this now. And that was one of the tensions. I have no idea, to be honest.
This is speculation. I don't know if you know anything about what happened. I feel like it will shake out in the news at some point, but it's a combination of NDAs and self-interest and company interests and investor interests that, obviously, Absolutely, make it very hard to work out what's actually going on there. But that was, if anyone's interested, the OpenAI CEO.
Yeah. Yeah. He was fired, rehired, etcetera. Yeah.
Some people think also that people who follow effective altruism were unhappy because, apparently, Sam Altman, Right, doesn't like the effective altruism movement. Yeah. And there were some tensions there. But, yeah, I don't know.
I mean, at the end of the day, it was purely about money and investment and, you know, are you developing this? And... But it's ironic, because the OpenAI organization was originally a not-for-profit organization. Not for profit. Yeah. Yeah.
So the irony here is that it's no longer a not-for-profit organization, or should I say part of it is and part of it isn't, Yeah, I believe. Did I get that right? And now they're making presumably huge amounts of money.
Yeah. I can imagine they they are spending a huge amount of money doing what they do as well. So there is that. I can't imagine they are making loads of money other than taking on other people's money. They're taking on loads of other people's money.
Yeah. Yeah. Probably. Well yeah. No.
You know, talking about that, I was thinking, it was a couple of days ago or possibly a week ago, The New York Times sued OpenAI for copyright infringement, saying, okay, this information you're releasing, you're giving away for free; it's for paid subscribers. And, yeah, obviously, you've got a serious issue there.
Yeah. I don't know what's gonna happen with that, but it's just interesting. And it's one of the first of that kind. Yeah. I think Getty, Getty Images. Yeah. Yeah.
Oh, no. It wasn't them. I don't know which one. Yeah. They also sued another one, kind of, Midjourney?
I don't remember which one. One of the generative image ones. Yeah. One of them. Saying, yeah.
You're using our images, and you can't do that. And then the New York Times... yeah, the New York Times is a big thing, so obviously it's making loads of noise. Potentially, that's gonna open the way to loads of people and organizations saying, okay, you're using my... Yeah.
My content. I you know, you need to pay for it. Yeah. There's this thing called copyright, everyone. Yeah.
This is something. But it's interesting, because I was reading about this. Some people say, okay, maybe they'll reach some type of agreement, Mhmm, with OpenAI and say, okay.
We're gonna release a new type of, you know, exclusive access for New York Times subscribers. Yeah. So you have OpenAI, and you have this link with the New York Times, which is specifically... so if you are a subscriber, sorry, of the New York Times, you can access it through OpenAI instead of going into the New York Times website, which is obviously a crazy idea, some people say. And, yeah, it would be, like, a very personalized news feed. Yeah.
That would link to the New York Times. I don't know how that's gonna work, but that's supposed to be a link between these two things. Okay. So they're trying to undermine the lawsuit by saying, we can work together, guys. Yeah.
I don't think that's gonna happen, but that's potentially something people say that might happen and, you know, I I don't I think New York Times is gonna obviously, they're quite angry about it. Yeah. Yeah. You can see why anyway. I mean, I think, for me, that comes back to the idea of it's it's partly, like, recommendation systems, but it's also partly search.
Right? Yeah. And Google, and to a much lesser extent Microsoft, have shown that if you're the winner in search, that's hugely profitable. Yeah. Yeah. Absolutely.
Absolutely. So if the next winner in search is a big language model, Mhmm, then you almost don't have to do anything else, and you've already got this golden goose situation. You've got this large part of the market without even having to, you know, write poetry or help people with their own stuff. Yeah.
But I'm thinking, how's it gonna work in terms of, for example, you know, lots of these companies, all the marketing goes through search engines, and then you pay for advertisement, and you have all these kinds of journeys, people going from one place to another. Now it will just be the ChatGPT type of all-in... I don't know. All-in-one.
Well, in my dreams, right, this is why I'm lying in bed going, oh, if only they'd made the Internet different back when commerce came to the Internet. Yeah. You know? Like, we don't need to do ads. Right?
Let's just all pretend that the Internet isn't just ads and that we live in a completely different universe. But there are other possible business models. And, for example, if, let's say, Google and Gemini, Mhmm, Gemini, yeah...
It was just, Yeah, released. Announced, like, last week. If they decided that search is now Gemini, whatever the large model is, and you pay them, Mhmm, and it is vastly better than anything else out there...
I can only imagine people would pay. Yeah. Yeah. Absolutely. Absolutely.
And it almost doesn't even have to be a large amount of money, because, you know, instead of all those ad hits, it's millions and millions of people paying. So it really gives them a lot. Yeah. Yeah. Yeah.
It's a bit like that thing where, if you're Facebook, there's a billion... like, a billion people on Facebook or whatever it is. If everyone paid a dollar a month or something like that, that's, you know... Yeah. Yeah. A billion a month. Right?
So you... Yeah. Yeah. Absolutely. Scale, right, is the situation. So I can envision, I can dream about... Yeah.
But you're giving even more control of more things, which is like the idea of super apps, you know, like the Chinese one, where you've got everything in one, Yes, big, huge application. You've got all the financial... you know, all the access to banks, all your communication, health records, everything in one place, which is... Yeah. It makes me nervous too. And the tendency, or the way we're potentially gonna go, will be into smaller language models.
Yes. More symbolic types of AI instead of the thing we have now. So they require a smaller amount of data, and they're supposed to be better at reasoning or logic, instead of this massive pattern-recognition thing we have now.
So the tendency could be the opposite of that. And other people, like, you know, just very quickly, very briefly, like the data pods, the idea that you can have control of your data, you buy and you sell it, which is the Tim Berners-Lee idea. Yep. And what's Brave? Brave is slightly different. Brave does kind of software. Solid.
Yeah. Solid. Yeah. Exactly. Solid.
Berners-Lee. Yeah. So, yeah, what you're saying is we're either going to the all-in-one super app or we're going to the opposite, which is the small kind of systems. Yes.
Yeah. We don't know which one we're going to. I would suggest that the AI companies wouldn't like the small systems, you know, because I guess the success of the current generation of AI is the large amount of data they consume to be trained on. Yeah. I do like the idea of having my own data where I can tell the system what I want done with it.
That's cool, and I love that. But I haven't seen any of those systems take off, and I've actually worked on some of those, Okay, that's cool, in startup land, working on the UI and the user experience of some of those things. Oh, okay.
Oh, cool. But I haven't come across one which has, let's say, nailed it, Mhmm, Yeah, both technologically and for the user experience.
But I guess mostly for, like, just marketing, just getting people to use it, Yeah, and getting that network effect. Yeah. Yeah. Yeah.
I don't know. I quite like the idea, and I would distrust a big super app. It is not something I would sign up for. Yeah. No.
That would be completely against... it's like my nightmare, basically. Like, all my privacy. Like, hey, take control of everything. It's like, no.
Thank you. No. Yeah. I wouldn't go there. Why?
They're nice. They're not... They're convenient. Yeah. They're just too convenient. And they're not interested in anything else but making money, by the way.
Yeah. Yeah. No. It's just too too many things in one place now. It makes me nervous.
So do you have any predictions for this coming year? Oh, god. Predictions. I don't believe in predictions. I think the things we're gonna carry on discussing: definitely the next AI Safety Summit, absolutely the idea of safety, which is interesting because now it's being taken a lot more into the cybersecurity space, and even the language is a lot more like that.
Which is... I don't think it's a bad thing anyway. But it's just taking a different angle now. Societal risk is still very important. We're gonna see a lot more, especially with the elections next year.
I think I heard that more than 70 countries are having elections in 2024, Yeah, including, obviously, the USA. And, well, in the UK, we should have something around November, well, before January 2025. And, obviously, that's just massive.
I mean, we don't know what's gonna happen. We know already the use of these deepfakes is extreme; I mean, you can't really distinguish these things, if anyone can. And, yeah, you can create AIs to distinguish them and to say, hey, this is an AI, but then you create an AI that is gonna corrupt that AI, or it's like, you create something, a safeguard, and then you create the... It's an arms race. Yeah.
You're right. Against that safeguard, just to break it. So, obviously, disinformation, misinformation, deepfakes, elections, those things are gonna be high on the agenda. More AI safety summits everywhere in the world. Neurorights, something I find interesting.
People have been talking about that. Lots of people mock the idea. And, to be honest, I don't mock any of these things. My view on this is, I listen. I don't dismiss people's views; I might disagree with them, absolutely, but I'm curious.
I try to read people I disagree with. I don't just follow one side; I try to get out of my bubble. It seems like there are lots of things happening in that space as well. There's this book by... I don't remember her name.
We talked a lot about it. Yeah. Oh, god, her surname. I guess your point is that we're gonna get more implants.
Yeah. Neuralink, for example, is the other one. And, obviously, the implications of that... I mean, that's just so huge. Yeah. Yeah.
This is absolutely huge. I mean, mind control, basically. I'm just gonna interfere with your thoughts and your opinions. Exactly. And that's huge.
You're essentially recording the data that your brain is producing in order to control these things. Yeah. But then you're learning a lot about... Yeah. And you can influence people's way of thinking, which is... Yeah. You know, if we talk about human rights, part of human rights is to have freedom of thought.
Yeah. So, yeah, it's a lot of these tendencies to control things, which are like, okay, this is... yeah. That's something to be worried about. Absolutely.
But you're right. I think it's gonna be another big discussion. Wow. I would definitely get an episode on that in future. Yeah.
You need to get this lady on. I think this is gonna be one of those things where it will explode onto the scene properly at some point. So... Yeah. I think it's a legitimate discussion. And people say now, oh, we don't have the technology.
Okay. But we might have the technology Yeah. And we need to start thinking about this. Yeah. Some people think these are just complete nonsense Yeah.
Discussions. For me, I'm interested in the discussions. I think in the near future you're gonna have companions, and they're gonna be AIs. There are people against the idea.
Mhmm. Well, I think that's gonna be a reality very soon. I don't know how I feel about that. I I I Yeah. I don't know.
I almost feel like the whole it's it's that classic Iron Man thing. Right? So if you've seen any of Iron Man depictions, there's this AI helper. Right? And I feel like it should be like that.
It'd be nice if you had a helper, which you may get emotionally attached to, and that's bound to happen, almost 100%. But they don't need to be there to, you know, egg you on, or have a romantic relationship. They're just there to kind of, Yeah, smooth over the hard edges of some of, Yeah, modern living, I would say.
Yeah. I mean, a romantic relationship with a robot is not something that's ever crossed my mind. I can't really think about anything like that. But I can imagine... you know, for example, years ago, I went to the Bristol Robotics Lab. Yeah. And they have this kind of research with Pepper, this little robot.
It's a very cute little thing. Yeah. And it was kind of helping people kind of recovery after an injury or something. And I took part of one of the kind of exercises. It was really simple, like, oh, lift your left hand, lift your, you know, whatever arm.
Just doing very simple exercises. But you start looking at this thing, obviously, it's like, it's very cute. It's there, and it's telling you, now lift your right arm. Now I'll lift and then you feel like, oh, it's very cute. And then at the end, the robot said, well done.
And I feel so and it's so stupid because I felt like this well, I did very well. You know? It's completely ridiculous, honestly. I wasn't doing anything. You know?
You did a good job. I did a good job, basically. Yes. And I felt like, oh, that's great. I felt like, really, like, I achieved something, which is but you can't I mean, it's really difficult not to feel that, especially when they're cute and they're kind of really you know?
But some people hate it, yeah, actually. We had an episode on social robots with Bertram Malle. Okay. And I would definitely check that out if you're interested in this space, because it's such an interesting area, which... It is. I don't think we have that much on it, you know.
I feel like there must be a lot of work to do in that social robotics area. And the thing is, the only way we're gonna do it is when we actually have them around us, and then we're gonna say, okay. Because one thing is to have it in a lab, you know, a very controlled situation, and people will get attached, emotionally attached, or they'll hate it, whatever. Yep. But then you release this into society, long term, and they're gonna be your carers, essentially.
Yep. Which is... I see that's gonna happen, you know, especially with an aging population, you know, all the problems with the care system. I think in 20, 30 years' time, we're gonna have, yeah, potentially little things saying, take your medicine. Yeah. Is that really bad?
That's my question, to have a robot and you feel attached to that robot. I guess it's nuanced, like all these things. Yeah. It is. Very nuanced.
And without diving into a number of hours of conversation, Yeah, I think I'll park the nuance there. It is. And I'd like to ask you, what are you doing in 2024? Oh, I've got lots of really interesting things I'm gonna be doing.
Yeah. So, yeah, first of all, I'm launching something called the Ethical Technology Network, which is a new initiative in the southwest. And it's interesting because I keep saying the southwest, but I've got people from other parts of the country already saying, hey, are you gonna do this? Come over here.
Yeah. Or are you doing just the southwest? So, I say southwest because it's based in Bristol mainly. But it's all about providing practical tools, especially for small businesses, SMEs, small and medium-sized businesses, startups, and people who are not big, large corporations, because they have no access to anything. Basically, you have all these regulations.
You can have all these changes. They're gonna be absolutely gigantic, and you have no you have nothing. If you are a small business, you you don't know where to start. You don't know where to go. You have tons and tons of web pages with millions of papers, frameworks, research, but you don't know where to start.
You don't know where to even start looking at anything. So I want to bring all these things into: okay, if you are a small business and you need to comply with this regulation, this is the type of thing you need to follow. Yeah. If you want to adopt one of these AI frameworks, these are potentially the best fit for your organization.
Yep. So these are companies that might be making AI products, but also probably using AI products. Right? Buying as well. Recruitment is another area.
And also investment, which is something that, at some point... I have no idea how the investment system works. But that's why the network is me plus all the people with all this expertise. Part of the network are lawyers, researchers, centers, and universities. I want to gather some people who are experts in investment because of that. I can't provide all the advice, but I can bring the people with the expertise to help these organizations.
So that's why the network, Yeah, is gonna be like a membership type of thing, so businesses can get access to these services. Mhmm. And to launch the network, I'm gonna be hosting an AI Ethics, Risk and Safety Conference. Yep.
And I had AI ethics, and then I started adding risk and then safety. It became too fashionable not to add safety. I feel like AI ethics covers those things as well, but I think... It is. It is, absolutely.
It is. Yeah. But I think risk is another thing. I was, yeah, talking about the other day how risk has become a part of the political discourse, a public discourse. You know, people talk about risks in the ways we we didn't talk about 10 years ago.
So I think risk is also the idea of, okay, we have risks and we need to understand these risks. And safety, obviously, is an approach to this, to realize some type of solution. So it's gonna be the first one in the region, and I'm pretty sure that's the case. So big deal. Yep.
It's gonna happen on the 15th May 2024 at the Watershed in Bristol. It's by the harborside, so it's a really cool venue. And the conference, again, is for businesses. It's very practical. It's not just to have academic discussions or to talk about, you know, conscious machines or whatever, which, again, are really interesting topics, but the conference won't be about that.
So it's about businesses helping other businesses, organizations helping businesses. And it has 4 themes, the first one being all about regulations. So I have speakers coming to talk about, okay, if you're a business, this is coming, this is gonna happen, this is how you can prepare your business for the regulations, so we've got that covered. The second theme is about standards and frameworks. Yep.
And I got people very excited. I I am dying to tell you who's coming, but I won't because it's too I I I gotta keep it. Am I gonna be there just to disagree with everyone? Yeah. Yeah.
You're invited to come. You're invited just to do a podcast if you want, a live podcast. I think I'm gonna come, and I'm just gonna be there, of course. But I like that. I think it's great that people disagree.
Okay. Good. So I've got people coming to talk about standards, people developing standards. That's really good. And the third theme is training.
So, what's happening in the world of, you know, how we're training people to understand this risk. Yep. The fourth one, which is also gonna be quite unique in that sense, is organizations implementing these kinds of frameworks, actually doing the job, not just talking about, oh, we're gonna do all these great things. So I've got organizations coming. Okay.
We started developing our AI framework, our AI ethics and risk framework, and these are all the things that went wrong, all the things that worked, Yeah, and lessons learned. Yep. So it's gonna be all about that kind of practical advice. 15th May.
15th May. All day. At the Watershed, Bristol. Yeah.
And it's gonna be affordable, so people can come from London, Manchester, whatever. Record it, that sort of thing? You gonna film it? No. It's gonna be unique.
No. Just one day. But I'm open to, you know, if you want to do a live podcast Yeah. Yeah. Yeah.
Absolutely. Do, your, greatest hits. Yeah. No. Absolutely.
And, yeah, we're gonna have a panel discussion and all that. So it's something I think... Okay. But, yeah, that's the plan. And we also have... Ben, Yep, do you want to say now?
Yes. Yep. At the end of March, we're gonna be doing training, kind of a half day. Yep. Well, we need to go into more details, but it'll be an AI ethics... I think it'll be AI and ethics training for, again, businesses and organizations.
We're gonna cover things like AI, generative AI tools, applications, something around AI governance and regulations, especially an overview, and, very importantly, a practical exercise. Yeah. So that's gonna happen at the end of March. And that's part of the AI fringe again? Yeah.
It should. Yeah. It should be part of that. We're applying for that. Yeah.
I'm applying for that. So hopefully, that's gonna happen as well. Yep. Super. So, are you feeling positive about 2024 before we go?
Oh, yeah. No. It's gonna be busy and it's gonna be loads of things. I mean, my my pile of books and articles like that now is just out of control. So that's always good.
Yeah. Awesome. Thank you very much for coming on the podcast. I will look forward to working with you, and I'll see you and, everyone else next year. K.
Brilliant. Thank you so much for having me. Bye. Hello, and welcome to the end of the podcast. Thanks again to Karin Rudolph.
I'm especially excited to do a special roundtable podcast at the AI Ethics, Risk and Safety Conference. So check that out in the coming months. And I'll put a link in the website show notes when tickets for the conference are announced. I especially like the idea of neurorights. We talked briefly about social robots and social AI, and I'm sure those are things that will be cropping up in the near future, as well as, obviously, the massive takeover of foundation models, LLMs, and hopefully something new.
We'll see. And I hope you all have a wonderful 2024.