93. Socio-technical systems with Lisa Talia Moretti

In this episode we're chatting to Lisa about: data and AI literacy, data sharing, data governance and data wallets, design values, selling in ethics to organisations, contractual agreements and ethical frameworks, AI unlearning, what organisations need to know about ethics, and an AI ethics consultant directory...
Date: 3rd of October 2024
Podcast authors: Ben Byford with Lisa Talia Moretti
Audio duration: 01:01:49 | Website plays & downloads: 99
Tags: Responsible AI, Business, Values, Sociology, GDPR, Regulation, Personal data, Governance | Playlists: Values, Business, Responsible AI

Lisa Talia Moretti is a Digital Sociologist based in the UK. She holds an MSc in Digital Sociology and has 17 years of experience working at the intersection of design research, social theory and technology. Lisa is the Chair of the AI Council at BIMA and a board member of the Conversation Design Institute Foundation. In 2020, Lisa was named one of Britain’s 100 people who are shaping the digital industry, in the category Champion for Change. Her talk 'Technology is not a product, it's a system' is available for viewing on TED.com.


Links mentioned: Elements of AI https://www.elementsofai.com/


Transcription:

Ben:[00:00:04]: Hello, and welcome to the 93rd episode of the Machine Ethics podcast. This time, we're chatting with Lisa Talia Moretti. This in-person chat was recorded on the sixth of August, 2024. Lisa and I talk about data and AI literacy, data sharing and governance, marketing things as smart and intelligent, GDPR, data wallets, personal information, and data ownership. We also talk about design values, selling in ethics to organisations, procurement and agreeing to ethical frameworks, and what organisations need to know about ethics.

Ben:[00:00:39]: As Lisa was kind enough to come to our office in Bristol, there is some background noise, seagulls, things like that, but hopefully you can hear us both clearly. If you like this episode, you can find more at machine-ethics.net. You can contact us, hello@machine-ethics.net. You can follow us on X: Machine_Ethics, Instagram: Machine Ethics podcast, YouTube: Machine-Ethics. And if you can, you can support us on Patreon.com/machineethics. Thank you very much, and I hope you enjoy.

Ben:[00:01:12]: Hi, Lisa.

Lisa:[00:01:13]: Hi, Ben. So great to be here. Thank you for having me.

Ben:[00:01:17]: Welcome to... We're upstairs in the office, which we were saying looks a bit like a teenager's bedroom.

Lisa:[00:01:24]: The glamour of the behind the scenes.

Ben:[00:01:28]: Yes, we're recording. We've got some soft things up. So welcome. We met at the AI Ethics, Safety, Regulation, Responsibility conference.

Lisa:[00:01:42]: Yes, we did. Here in Bristol.

Ben:[00:01:44]: In Bristol. Not so long ago. Could you briefly introduce yourself and what you do and all that stuff?

Lisa:[00:01:53]: So I am Lisa Talia Moretti. I am South African, been living in the UK for 14 years now, and I'm a digital sociologist. And there's a lot of people who have no idea what digital sociology is and how that relates perhaps to ethics, but we can dive into that in more detail today. But essentially, digital sociologists are really interested in understanding how tech is changing society. And when we talk about how tech is changing society, it's not just around the impacts that it's having, but also how it's changing social norms and how it's changing relationships between people, not only just relationships between people and machines. And yeah, so just in a nutshell, a little bit about me and what digital sociology is.

Ben:[00:02:41]: Great. And you work... You've worked for lots of people, obviously, but you're currently working for And Digital. And what kinds of things are you, as part of that organisation, doing with organisations to help them or guide them in that area?

Lisa:[00:03:00]: Yeah. So a lot of my work is divided into, I would say, three key areas. So the first area is conducting research. So that's around conducting cultural research, looking at how culture and society is changing as a result of technology and how there's new norms and new behaviours and new habits that are shaping within society. The second piece is around then understanding and better mitigating against some of those negative impacts. So looking at as a result of these changes that we can identify, what is the longer term impact here? Are there any ethical issues and scenarios that are emerging, which in the AI space, there are many to discuss. And then the third piece, which is the piece that I'm really probably most new to, but I find incredibly interesting, is how our relationship with data is changing and how data literacy levels within society are changing. And also then thinking as a result of that, how organisations need to think about new data governance models. So I'm really interested in innovations in data governance, data sharing, new ways of structuring data within organisations, sharing data, collaborating with the data. I think this is such an interesting space that is emerging.

Lisa:[00:04:30]: And it's going to be, I think, a whole new industry, I think, within digital and a whole new profession within digital.

Ben:[00:04:39]: Yeah, yeah. If we step back briefly, there's so much to dig into there. Can we first answer the question which we always ask on the podcast, which is, what is AI?

Lisa:[00:04:51]: What is AI? Okay, so AI for me is not a single technology. I think a lot of the clients that I work with think that AI is something that you just get in a box. And in the same way that you would unbox a monitor or a mouse or an iPad, many people think that that's what AI is. But actually, artificial intelligence is very much made of many different components of technologies. When we think about AI, we have to think about, well, what is the application? Some AI is very good at computer vision. Computer vision technology is a whole thing that is very different to a technology that is working on statistically identifying patterns, or voice recognition. So I think, yeah, when we talk about AI, we need to think about the many different components that make a system intelligent. That often involves some machine learning element. There is often some user interface that needs to be there. There are, of course, the algorithms that are there. There's a model that's there. It's made up of many different things. I think one of the key things to always think about when we're talking about AI is that this word intelligent is a wonderful distraction.

Lisa:[00:06:30]: Yeah. Maybe not a wonderful distraction, but it's a distraction. And the reason it is a distraction is because the machine mimics a kind of intelligence, it mimics a smartness. But we need to be very careful that we don't think that that machine has the same type of intelligence as human intelligence. And so for me, when we talk about AI, it's really about thinking about it in those two terms: it's made up of many different types of components that make the system intelligent, and we need to be really careful about the use of this word intelligence, because while it mimics intelligence, it's not the same as human intelligence.

Ben:[00:07:17]: I know. We could spend the whole time just talking about the semantics of AI.

Lisa:[00:07:22]: Totally.

Ben:[00:07:24]: I think at some point we're just going to have to call it and just be like, No, we're not talking about AI anymore. We're going to talk about super fancy statistics. Like fancy statistics or predictive data analytics. That's what we're talking about here.

Lisa:[00:07:44]: I was going to say, I was pretty much going to say very much along the same lines, like advanced predictive analytics. And I think I way prefer talking about competent machines as opposed to artificial intelligence.

Ben:[00:08:00]: There was a while back when everything was smart, right? Yes. So we had smart things. Yeah.

Lisa:[00:08:06]: Everything was smart, including the cities, the ... We had smart appliances, smart fridges, toasters, kettles. It was phenomenal.

Ben:[00:08:18]: I don't think many things are being pushed as smart things so much anymore.

Lisa:[00:08:22]: No, smart has definitely tipped over into intelligence.

Ben:[00:08:25]: Yeah, that's true. Maybe it's just intelligent stuff. Bizarre. Anyway, I wanted to briefly come back to that idea of the data sharing piece, because I think it's one of those things that keeps recurring in my life: this idea of people's data or companies' data, what happens with data, data provenance and data sharing, and the fact that a lot of these systems work on a big amount of data, right?

Ben:[00:08:54]: Big data. It's very difficult to train from scratch on no data. That's just not possible with machine learning. A hundred %. So can you talk more about what's interesting in that area, or what you can see happening at the moment?

Lisa:[00:09:13]: Yeah. So I've been working within UK government now for the last six years and for a number of different departments, the Ministry of Justice and Ministry of Defence being the two major ones. I've also been doing some work with the ONS, the Office for National Statistics, through doing user research with them. And when we talk about doing user research with them, what I mean is we are actually doing user research with the public, with the British public. And so we test different government services, and we inquire about different government services that people use. We identify what some of those issues are. And then through a series of test-and-learn and prototypes, we hopefully make improvements to those services. And so through doing that, we have found that there's a relationship between data literacy levels and the desire to share your data. So as data literacy increases, as you know more about what it means, the value of your data, how your data could possibly be used, all the things that could possibly go wrong, or potentially how your data could be used against you, your desire to share your data decreases.

Lisa:[00:10:34]: As data literacy increases, sharing hesitancy also increases. What we have found is that there is an increasingly eroded relationship between people and organisations that are seen to collect loads of data, because organisations haven't done a very good job of building trust, maintaining trust over time, talking to people about how their data will be used, how it is protected, and also how it will not be used. And so what people do is they educate themselves through headlines, through the media. And there are a lot of scare stories out there. And not just scare stories: a lot of really terrible and negative things have happened as a result of data breaches, so people are really nervous now to share their personal data. And we need personal data in order to deliver services to the public, to determine if you are eligible, to determine that you are actually the person who is receiving this service. So we don't want any fraud to happen; we try to minimise fraud from happening.

Ben:[00:11:48]: So this is coming from the government direction?

Lisa:[00:11:51]: Yeah, absolutely. So as more government services move online, we need to collect data in different ways. So it's not just through you filling out a form anymore, and that form going through to an office. This is all now happening through interactions, right? Through government services and collecting data that way. But people get really nervous about typing their information into the internet, into the digital, because it's a space that they feel they have less control of, and it's probably a space that they feel they understand less. If you give me a paper and a pen, and I'm 50, 60, or 70 years old, I'm used to paper and pen. This is the thing that I understand, and I know, and I somehow trust more than this strange ephemeral amorphous space called the internet.

Ben:[00:12:43]: Do you see the opposite being true for the younger generations?

Lisa:[00:12:49]: It's quite interesting. I think everyone is quite sceptical. I think trust has dipped across the board. I think for many younger people, they accept the risk a lot more easily. They understand that something bad could probably happen to their data. However, they don't have another option: I'm going to have to just trust the service, and I'm going to have to just hope that this works out in my favour. I feel that they're far more used to just sharing things online, so it doesn't feel like such an enormous step change for them, or behavioural change for them. Whereas for those people of an older generation, this is a huge, huge step change going from paper to digital. Yeah.

Ben:[00:13:38]: It feels to me, right, as someone who's interested in UX, and new technologies and the use of technologies, that if we're talking about governmental stuff, they should just know who we are, right? Because we've already got...

Lisa:[00:13:56]: I'm laughing now.

Ben:[00:13:57]: I know, I know. I'm interested in the answer to this. Because we've got, what is it called? Your national insurance number, right? Yes, correct. So you've got some identifiable information that you can give to identify you within the government and as a citizen, right? Yeah. So are we sidestepping some of that stuff because it's... Because things aren't joined up? Or what's going on?

Lisa:[00:14:24]: Yeah, there are so many ways to answer this question. So the first thing is that within the UK, we don't have an identity card, right? Or an identity system in any way.

Ben:[00:14:37]: Okay. Yeah, you know more than I do.

Lisa:[00:14:40]: Say in Europe... So I've got Italian parents, and I was born and raised in South Africa. In Italy, and in most places across Europe, there is a national identity card. That is your official card that says: This is who you are. This is an identity number. If you show this card, this is evidence of your identity, and you can use this card to be able to access different services. And we have the same thing in South Africa. We used to have national identity documents, like little green books, and now we've got cards. In the UK, we don't have that. What people tend to do is they have to use their driving licence or their passport to be able to… Those are the only forms of ID. Within legislation, those aren't identity documents. Those are credentials that allow you to travel and show that you are able to drive. But in the absence of an identity card, we have started to use these credentials, identity credentials, to be able to verify people's information. But now there are millions of people within the UK who don't drive and who don't travel, or have expired documents for reasons like age or changing circumstances.

Lisa:[00:15:57]: So that's your challenge number one: we don't have an actual national identity system or card.

Ben:[00:16:05]: And I think in the US, they have their identity. They have a number as well, don't they?

Lisa:[00:16:08]: They have a very similar thing to us, to a national insurance number; it's called their Social Security number. Yes, exactly. So that number is owned by certain people within government, or certain departments within government. Your national insurance number is issued by DWP. And we call it a Nino, a national insurance number.

Ben:[00:16:33]: I'm glad that you know about this. Yeah. This is not going to be interesting to anyone outside the UK.

Lisa:[00:16:40]: So in order to get access to that data set, you need to get approval from the Nino board. So there's a whole group of people that protect that data set. So what's really interesting is that there are a lot of people who don't work within government who think, the government has all of the information that it possibly needs about me, surely, to be able to verify who I am. And that's not true. There are certain government departments that have information about you, and then there are other government departments that have other information about you. And so if you think of yourself as a puzzle, within government, you are broken down into lots of different pieces, and different government departments hold different information about you, and very seldom ever the two shall meet.

Ben:[00:17:34]: Oh, dear. Okay.

Lisa:[00:17:35]: So data sharing policy is very complex. It's governed by boards and regulation and legislation, and data sharing agreements that can take many, many, many months to put together and agree on. For some people, that is seen as a spectacularly good thing. For other people, that is seen as an inhibitor to innovation, an inhibitor to joined up services. And there's arguments for both cases.

Ben:[00:18:09]: Yes. Good. Well, I'm glad we went on that tangent. There we go. Thank you very much for that. I'm going to keep on data just for a bit, but slightly to the left or the right of that tangent. So there is this idea of citizens' data, right? This idea that you're sharing data with a company or another organisation, a public body, and that you have some ownership of the data. That has become more and more prevalent as an idea: that we should be owning our own data, that data is important and therefore we need to be securing it in some way. Are those the conversations that you're having as well? It's associated with AI stuff, but it's very much high in the mind in technology in general.

Lisa:[00:19:05]: Yeah, absolutely. So in the same way that humans require food in order to function, AI requires data in order to function. And some of that data needs to come from your personal identity. In the case of government, we need to collect certain personally identifiable information about you. Within GDPR, we have certain data rights, and some of those data rights include things like data portability, the right to consent to your data being processed and stored and used, and the right to erasure, all those sorts of things. However, part of the challenge around all of this is that we don't have the right technologies implemented to actualise a lot of those data rights. On the piece around data ownership, there are lots of conversations happening in that space. A lot of it isn't amongst the public; a lot of the conversation around data ownership is amongst data enthusiasts who work in the data and technology industry. I'm not convinced that people want to own their own data. And the reason I say I'm not convinced is because when we've raised this within user research, people don't really want to take responsibility for their own data, mostly because they don't know how.

Lisa:[00:20:36]: What would it mean to take responsibility for my data? What would it mean to own my own data? Where would I put it? How would I protect it? These are not digital skills that we have taught people en masse. People also have a lot of stuff in their lives that they have to think about and do, so owning my own data and protecting my own data is not really high on my to-do list. People are much more worried about protecting their money and making sure that their money is safe and that they don't fall victim or prey to any financial scams. Owning your own data is a whole separate conversation that a very small group of people are really actively pursuing. But I think as a concept within society, that's not really on the cards right now.

Ben:[00:21:35]: It's funny, because there is a Venn diagram of data protection and exploitation of... Well, of companies exploiting your data, but also of individuals defrauding you because they have access to publicly accessible data on you, and they put that together and then they've produced some scheme. So there is a crossover there; maybe more education or data interest is needed.

Lisa:[00:22:07]: Yeah, absolutely. I think there's definitely something around that. And I think it's one of the things that people immediately get nervous about when they think about owning their own data. They're like, am I equipped to even protect that data? And will somebody take advantage of me if I'm then owning my own data? It feels like, if I own my own data, is it easier to access than going through a big security system that is being maintained by a bigger organisation? So I totally like that Venn diagram around data exploitation and data ownership.

Ben:[00:22:42]: It's changing the attack vector, isn't it?

Lisa:[00:22:45]: Totally. Yeah.

Ben:[00:22:46]: So let's move back to the middle of the things that you said you did in your job: that advice piece, and taking these words responsible AI, and ethical AI, and ethical data, and all that stuff. I was wondering how you came to this area in the first place, actually. How did you start there?

Lisa:[00:23:16]: Yeah, absolutely. So I started out studying journalism, and then in my final year, I decided to do... I went from print journalism to digital journalism as my specialism at uni, and then, hilariously, got hired in publishing. And I was then working in magazine companies. So it was really... It was funny. But then when I was there within a month or so, the editor said to me, You're the youngest person in the room. Do you know this thing, MySpace? Do you know what that is? And I was like, Yeah? She's like, We need a MySpace page. Can you do that? And I fell into digital publishing through being the youngest person in the room. And then I ended up working in that same organisation. And I went from being an editorial assistant to managing the digital estate. I was the group digital editor there. And I was fascinated at how people were engaging with our digital platforms and how we were then able to see what people were doing online and then make better decisions or different decisions for the magazine. So I started to see this relationship between these two sets of products, the one printed product and the digital products.

Lisa:[00:24:34]: And I then moved to the UK, did a little bit more of that digital publishing stuff. And then a friend of mine sent me a link to this digital sociology programme that was being started at Goldsmiths. I enrolled. I did my MSc in digital sociology at Goldsmiths, and my whole world really changed. And people talk about life-changing moments. That, for me, was a major life-changing moment, because I felt like that degree allowed me to look behind the curtains, so to speak. It allowed me to see things that I hadn't considered before. So things like the datafication of society, how a lot of this data was coming from us and from human systems. I started to learn about socio-technical systems and not just about technical systems. I started to have a far greater appreciation for the amazing benefits that can come from this new relationship that we have with technology, but also really became quite, I don't want to say fearful, but concerned. A concerned citizen around some of the risks that could happen. And ethics is all about what does it mean to live a good life? And what does it mean then to build good tech?

Lisa:[00:25:54]: What does it mean then to have a good relationship with technology? And if data and technology are increasingly determining what it means to live a good life and to have access to certain things, then we need to have the conversation about impact and ethics. And that is essentially my story of how I got into the space. Yeah.

Ben:[00:26:15]: And you mentioned in your... I saw your TED Talk, and it mentions this dichotomy where people think that technology is neutral, just technology, just code, but actually it has this interaction. It seems obvious, right? But it has this interaction with human beings. It's made for us. It's in this socio-technical relationship where they influence each other. Yeah. And you get this cyclical thing happening.

Lisa:[00:26:47]: Yeah. There's this amazing paper that was written, and I wish I could remember the name of it, but essentially the paper talks about socio-technical blindness. Essentially that phrase refers to the fact that when we see a technological device, we see an iPad or any device, we fail to see all of the social structures that are behind it. And we fail to imagine all of the groups of people who are involved in creating that, programming that, marketing that, updating that. And so it's very easy to see this shiny technology, the gadgets, and to think, Oh, this is just neutral. I can just use it for my own thoughts and wishes and desires. But a lot of that interaction that is happening, you know this as a designer, a lot of the interaction is constrained. It's been designed. It's been designed for you to interact with it in a particular way. You can't just do anything with it. There are certain patterns and limitations to how you use it. And so the technology in itself constrains certain behaviours and stops certain things from taking place. Yeah.

Ben:[00:28:10]: And some of those things are explicit, and they've made a decision to go one road, not the other road. And some of those things have just happened by the process or the people in the room, all the values that the people have in the room and all these things that come into play.

Lisa:[00:28:30]: Exactly. And when you haven't worked in the technology industry before, you perhaps don't see that certain values are centred within a design process, and not all values are equal, right? If a charity is designing an app, there are different values centred within the design of that than if it's a big tech firm based in a completely different part of the world compared to the one where you live; they centre different values. These are different things. But we forget that a lot of design is sensitive to the values within that organisation. But also, sometimes it's luck about who is in the team. How much autonomy does that team have, and how confident do they feel to push back on certain values that the organisation has in order to prioritise certain values that the team has, right? There's always this tension between teams and organisations, people and organisations. Yeah.

Ben:[00:29:40]: Well, I think if I have a message to those teams now, I guess I would say that you don't necessarily know if you're in a place of power; you might think you're just making a product, right? But if you are able to see something happening that you dislike, then push back. You have the opportunity there to change the situation, right? And it might be a little bit horrible for you, but if these things get into people's pockets, then that could be millions of people that you're helping there. So I think there's a real, I'm trying to find the words, opportunity to do some real good there.

Lisa:[00:30:27]: Yeah, completely.

Ben:[00:30:29]: When you're working with these organisations, what's the reception that you get? Because obviously, you really want to talk to the users, potential users, stakeholders, other people who might get affected. You want to build things with users in mind, also better social futures, better ethics in mind. Do you get a really good reception? Do you get pushback? How does that actually happen?

Lisa:[00:30:56]: Yeah. So it's a great question. Because this is England, everyone's very polite, even if they don't really want you to be there.

Ben:[00:31:06]: Lisa, could you just go away for a little bit?

Lisa:[00:31:10]: That's interesting, Lisa. Could you send me that in an email? I'll take a look at that later. No. Usually, there is one person who is very enthusiastic about ethics or is very concerned about the impact that the technology that they're working on could have. That's normally how I get into the room; it's normally one person's doing. So day one is very often about trying to understand how many people are concerned about this or how many people are enthusiastic about it. Is it just this one person? Or is there a group that I can align myself to, that I can work with, to be able to get support, internal support? And very often, unfortunately, it's either a very small group, we're talking two to three people, or it's a lone individual. So day one is usually trying to understand ethics literacy levels. And then a lot of it is about the sell, unfortunately. We're still having to really sell in the concept that data and technology can create real material harms, but also material benefits. And we're here to identify potentially what those harms could be, mitigate against those, identify what the benefits are, and try to amplify some of those positive outcomes.

Lisa:[00:32:34]: The second thing I tend to then work on is trying to understand, as an organisation, why ethics is important. What is it that you're trying to do here? Is this a risk mitigation exercise, or is this an exercise in trying to be more innovative as an organisation? I think ethics as a tool can be used to do both of those things. Another reason that is definitely emerging for people wanting to bring in ethicists is because they are really struggling to do anything to advance the technological capabilities of the organisation, because they feel that they're getting stuck by governance. And so they identify that they want to do something. Then there's loads and loads of fear. And then there's suddenly lots of meetings, and then there's suddenly lots of paperwork, and then there's suddenly lots of risk registers. And all the team are trying to do is run a very small prototype in a very controlled setting that isn't actually going to have any impact whatsoever; they're just trying to see if it's even technically feasible. And through identifying if it's technically feasible, if they can start to test that in a controlled way, they can start to then better understand what the potential risks could be and work in that way.

Lisa:[00:33:54]: But a lot of organisations are just not even set up to have an experimentation function, even if it's in a controlled way. Ethics can also start to be used to allow for improved agile governance within an organisation. If you have an ethics framework and a set of principles that your teams are educated in and know how to use, you should be able to trust those teams of people to work within that framework and to be able to say, okay, now we need to stop, because this is the point where we need to go through the formal governance process, because we've got all of the information we need to be able to say go or no-go. And that's when things with cybersecurity can kick in, things with the data team can kick in, things with third-party access management can start to kick in. But before the team has even started the process, a lot of those gates are already coming down. So that is the big next thing: trying to understand why do you want ethics? What are the challenges you're facing? Is this about risk? Is this about innovation? Is this about governance? And then trying to build things out from there. Yeah.

Ben:[00:35:10]: Yeah. And does some of that fear come from the legislative part, or is it defamation?

Lisa:[00:35:19]: Yeah, a lot of organisations have seen many headlines around, Organisation X's chatbot abused this customer, or Organisation X has accidentally broken the law by giving out the wrong information, or Organisation X has launched an app that doesn't recognise people of colour, and they are very afraid of that happening. I think they are also very aware that a lot of this technology requires huge amounts of data, and we have data legislation within this country, and they need to be very mindful of not crossing those red lines, because the fines are really steep, and a lot of organisations are not in a position to just easily pay those fines without it having significant impact on the business and operations and also on the livelihood of the organisation. Yeah.

Ben:[00:36:21]: Yeah, yeah, yeah. That's interesting. And do you see... Because obviously we're getting more legislation, right? Yes. More things are coming in. So hopefully it'll make it simpler, right? Because there'll be these things that we have to do. And then, presumably, there'll be these structures that we can put in place to help people out. It also depends on what they're doing. Obviously, there's like... But hopefully that picture will become clearer as we have to live more and more with these technologies.

Lisa:[00:36:53]: Yeah. So GDPR is extraterritorial as legislation, and the EU's AI Act is the same. It applies worldwide to organisations who are processing data for EU citizens. And so, yes, I do think that the EU's AI Act is becoming the baseline for what codified ethics looks like. And a lot of these things, if we go through the history of legislation, a lot of laws were once ethical issues and ethical concerns. Slavery, as an example: there was not always a law against it, extraordinary as that may sound. There wasn't. For many people, that was an issue of ethics for a long time, a moral issue. Now that has become legislation. I think more and more what we will start to see is that ethical issues that we are concerned about within the technology industry will start to tip over and become codified, and not necessarily always in legislation, but codified in mission statements, in principles, in the way that partnerships are shaped and formed, in your contracts between partners, etc. So I do think that that will start to happen more and more. Yeah.

Ben:[00:38:16]: Yeah. And I think that's really interesting, that last piece, actually. Are you seeing that? That codification of procurement, right?

Lisa:[00:38:27]: I am starting to see that. So in the last big tech ethics project that I worked on, which was in the aviation industry, we put together an ethics framework that basically consisted of a brand new set of AI principles, ethics principles, and an explanation of what those principles are. Because if you are wanting to put them together, for anyone who wants a great tip, explain your principles to your team. Say we're talking about a principle around consent: you need to explain what consent means, and people need to understand what you actually mean by this principle, because a lot of teams can't just read a set of principles and be like, Oh, magically I have the same understanding of what this means as the person who wrote it. So it was the principles, an explanation of those principles, and then a set of how-to guides. When we were putting all of that together, we then started to work with different teams across the organisation to make sure that we were implementing the ethics principles and frameworks in a way that made sense to other departments and other governance structures.

Lisa:[00:39:38]: So we worked with the data directorate, we worked with cybersecurity teams, and the other team that we worked with was the procurement team, who were really interested in updating contracts to say that we would like all partners, contracted partners, to be open and transparent about the ethics frameworks that they have internally and to share those ethics frameworks with us to make sure that there's an alignment between ours and theirs. If you do not have an ethics framework, then we require you to work to our ethics framework and to be able to meet those standards and meet those principles and guidelines that we've identified. So we actually worked with procurement to co-write some paragraphs that we would then put into third-party supply contracts. So I have first-hand experience of both doing that and seeing that in action.

Ben:[00:40:30]: That's great.

Lisa:[00:40:30]: Yeah, yeah.

Ben:[00:40:31]: I feel like there's so much power in just the purchase. You know what I mean?

Lisa:[00:40:37]: Yes, 100%. And feeling really empowered to just start asking questions, not just blindly accepting this new AI plugin that whatever Company X has implemented and that now suddenly you need to accept. It's like, no, if this is an AI plugin, then surely we can decide if we want the plugin turned on or not. And we will make that decision once we have an idea around what data are you going to be collecting through that plugin? Where will that data live? Does that data get used to train some big robot brain or model that you have on your side? How do we reap the benefits of that? It's really difficult, right?

Ben:[00:41:24]: Or are you paying people in Kenya to... You know what I mean? The training of the system. What are the hidden aspects of what you're doing as well?

Lisa:[00:41:32]: Yeah, absolutely. And one of the things that's really interesting about GDPR, and it just goes to show how quickly legislation can run into issues, is that within GDPR, we all have the right to be forgotten. AI can't forget, though. Once your data is in a model, it's like looking at a baked cake and then being like, I want to remove the eggs from this cake. It's like, that's not possible; I have to make you a whole new cake and leave out the eggs at that stage. Yes.

Ben:[00:42:04]: Yeah, that's a cake now.

Lisa:[00:42:06]: Exactly. Yeah. And so it's a really interesting, enormous challenge. If you do not want Meta to use your data, you need to opt out before they start the programme. Because once the programme is switched on and you've said, yes, you can use my data to train your model, you can't go back.

Ben:[00:42:36]: It's funny because there is progress in selective forgetting.

Lisa:[00:42:41]: I do not know about this. Do tell me.

Ben:[00:42:44]: Which is, because if you imagine the process of training some of these large neural network models, what you're doing, a lot of the time, is trying to make a prediction from an input, right? And you go through the network and then you say, okay, how close was I to that prediction? And I'm going to back-propagate some changes to get ever so slightly closer to that prediction every time. And you do that lots and lots of times. And hopefully, you'll get closer and closer until it's acceptable or it does the right thing. There are some metrics there that let you decide if it's good or not. Massively simplifying this whole process. But in the same way, you could give it an input, and if it gives you the correct output, you can then tell it not to, right? You can back-propagate things so it moves away from that thing. My problem is, I don't know enough about it to know if it's actually breaking the whole thing, smashing the cake and making it useless as a cake at that point. But you could selectively unlearn certain things.
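As a rough illustration of the gradient-based unlearning Ben sketches here: the toy example below (a hypothetical PyTorch snippet; the model, the "forget set" and the learning rate are purely illustrative) flips the sign of the loss on a handful of examples, so back-propagation nudges the model away from reproducing those outputs rather than towards them. Real machine unlearning research is considerably more involved than this sketch.

```python
# A toy sketch of gradient-ascent "unlearning", assuming PyTorch.
# The model, the forget set and the hyperparameters are illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# Hypothetical examples whose learned associations we want to weaken.
forget_inputs = torch.randn(16, 10)
forget_targets = torch.randint(0, 2, (16,))

for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(forget_inputs), forget_targets)
    # Negate the loss so back-propagation pushes the model *away* from
    # these input-output pairs instead of closer to them.
    (-loss).backward()
    optimizer.step()
```

As Lisa notes next, this only steers the model's outputs; it does not surgically remove any individual person's data from the trained weights.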

Lisa:[00:44:01]: Yes, yes. I totally... Yeah. So you can guide the machine to produce different outputs. Yes. But my understanding is that you can't actually individually isolate people's data that has gone into the model and, like some magical thread, pull out all of that data.

Ben:[00:44:21]: Not unless the data was present like that in the first place. But most of the time it's not. Yeah. So I mean, it's just a big word soup. It's a big word soup. We've gone from cake to soup. It's a big number soup, let's say, that it turns into words. Yes. But if anyone knows more about that than we do, then please get in contact. That'd be awesome to hear more about, because it sounds super technical, but it also changes the game a little bit, doesn't it?

Lisa:[00:44:53]: Yeah. And all these technical concepts, the issue of explainability, all these technical concepts need to be explained in plain language, right? In jargon-free language, so people can understand them. And this is how we raise literacy levels. We should be able to explain everything; whether it's 100% understandable is a different thing. But we should be able to broadly explain the concepts.

Ben:[00:45:19]: Which is a lovely segue. Thank you. You're so welcome. On to the next question, which is around... Do you know Elements of AI, right? There's a website. I think it's Norwegian.

Lisa:[00:45:30]: So Finland. So it was put together by the University of Helsinki.

Ben:[00:45:35]: Yes, exactly. Do you think that... We were talking about data literacy earlier. Is there a thing that we need to make more prominent about social media, data, AI technologies, which maybe isn't present enough in school as we grow into the workplace?

Lisa:[00:45:54]: So this is something that I talk about probably the most when I am working with different government clients, because when we're designing a new service, or we are taking a paper-based service and moving it to digital, we call this process digital transformation, we're having to actually do two things. One, we're having to create the service and get it into its new channel format. But through that process, we're also having to educate the public on how to use it. So the service always needs to do two things. It needs to educate the public on how this thing works and all the now digital components that you need to know, like the privacy agreements and how your data will be used and what data we're collecting and all of this, and build trust in that way. And then you also need to make sure that the service actually works, right? That people can go through the screens and there are no major failures, et cetera. And when we're doing this process, I'm always asking the question: whose job is it within society to educate the public on digital, on changes to technology and changes to our relationship with technology?

Lisa:[00:47:09]: Whose job in society is it to train individuals about how AI is being used and what it can be used for? So that there isn't a huge amount of fear attached to it, but more of a critical thought process, an analytical discussion, so we can have a really meaningful, constructive conversation as opposed to everything being so heightened by emotions. Right now, the people who are doing the so-called job of educating the public are Hollywood and the media. Yeah. Hollywood has a huge role to play in telling stories about our futures, our near futures, our far-out futures. A lot of people, as a result of all of these things, and also from the images that they see online, which is another massive bugbear of mine, you type in artificial intelligence and you see an electric blue brain or you see a robot, and you also, as a result, watch a film and people say, Artificial intelligence, and then there are robots walking around. The public associates robots with artificial intelligence very strongly. There's this very strong link between these two things. Then there are also all of the headlines, right? And all the stories that are written in everything from the Economist to the Sun to the Daily Mail and Telegraph.

Lisa:[00:48:45]: Everything you read now deals with or mentions some technology at some point in the news cycle. So now we have all of these strange metaphors and emotions and fears attached to this. There's not a lot of education that's going along with that. There is a role to play within this gap, and a lot of the work that we're doing at the moment is about who is filling in those gaps. Who's saying: when you see this film, when you read this article, these are the important things to pull from it. These are the questions that you should be asking yourself. These are the myths. These are the facts. Yeah. It's a big, big, big ethical issue, really.

Ben:[00:49:36]: I'm just imagining a big Hollywood movie of just people having a really nice conversation about the values and the new service they're making.

Lisa:[00:49:54]: I really wish that that could be profitable and that somebody would fund that. Yeah.

Ben:[00:49:54]: I mean, there are some okay documentaries, right? But then you have to seek those things out and really identify with wanting to take those things on board and to get involved. And likewise with Elements of AI, you have to go to the website. No one's pushing you to do these things. There's no requirement. So it's an interesting problem, isn't it?

Lisa:[00:50:21]: Yeah, it is really tricky. And I think even at a university level, where we're trying to... I was an associate lecturer at Goldsmiths, and I taught digital research methods and design thinking there. And very often students would have come across something to do with the impact of technology by the time they're doing that course. But a lot of it has been done either through one lens or the other, either through a very computer science lens, very technical, or, for the humanities students, maybe through a social-cultural lens. But the joining of these two worlds, the socio-technical, is not often... The education around that is not often very well catered for.

Ben:[00:51:10]: Yeah. So we've talked a little bit about you going into organisations and how they react to you being there and the kinds of work that you do with those organisations. But is there a sense that those organisations themselves are putting certain things in place at an organisational level, or educating, or have this idea of ethics embedded anywhere? Or how does that work?

Lisa:[00:51:38]: So the short answer is no. Most organisations are aware that technology has a range of different impacts on society. Not very many organisations are doing anything about it. Many organisations, most organisations, don't have an ethics team. And a lot of the time, the reason for that is they don't know where ethics lives within the business. They don't know if this needs to sit within cyber-security, or does this sit within legal, or does this sit with design teams? And so that's a huge issue, not knowing where it lives. The result of not knowing where it lives, and maybe not being creative in thinking where it could live or seeing a home for ethics in some way, is that suddenly they then have a panic about who's going to fund this thing. And because of the way most organisations work, funding is distributed to different departments, and marketing gets X and design gets X. And so then it's trying to find a department that is willing to part with their funding and fund the work, because ethics needs to be paid for. It needs to be prioritised and paid for. And then the next thing is, if you can find budgets, people don't often know who to hire.

Lisa:[00:52:52]: So they look at the marketplace and there's a lot of snake oil on sale there, and various merchants of snake oil. But there are lots of really good people who do brilliant work; they are just so few and far between, and quite difficult to come by, that a lot of organisations don't know, so to speak, what to shop for in terms of talent. Am I getting somebody who has a background in philosophy? Am I getting someone who has a background in design? Do I need to get somebody who has a particular type of degree? And so I think that poses another challenge, and there almost needs to be some directory where organisations can say, okay, we're wanting to get ethics in to mitigate against risk; these are some of the people who come really recommended. They've got X number of years of experience. They're at this level. They've worked with these teams. But that thing doesn't exist just yet. And so a lot of it is done on a recommendation-by-recommendation basis.

Ben:[00:53:58]: Yeah. So you might identify that you need this stuff, but maybe you don't necessarily understand what the outcomes look like. Because obviously, it might be that someone has an interest to develop internally, or there is an external partner they can work with. But I guess it's having the idea that we're actually getting something at the end of this, which is not fluff, which is not snake oil. So what kinds of things are they expecting from that?

Lisa:[00:54:28]: Yeah, it's a really, really good question. So a lot of the time when I have those initial conversations and I ask people about why do you want... Why do you want ethics within your organisation? Let me help you really understand this. A lot of them look to me to shape the brief. A lot of them aren't really sure. The most that they associate with ethics in terms of an outcome or an output is a set of principles. And then I will push back and say, yes, it's one thing to have these principles, but then how do you expect people to implement these? And then we have another conversation about, okay, right. I understand now, right? Okay, so who's going to be implementing these? Who's going to be accountable for these principles over time? How do you want to work with the different organisations? I think very often people see ethics as a siloed team, and actually, ethics is a relational discipline. It requires you to build relationships between different people within the organisation, because ethical issues are numerous and pop up everywhere and also are very often connected or related to domain expertise.

Lisa:[00:55:41]: So one such ethical issue might be environmental impact and harms. Lots of organisations have sustainability teams. The ethics professional within the organisation needs to work with the sustainability team to be able to understand: how are we measuring carbon? What are our carbon targets? What is our sustainability policy? Are we making sure that our tech estate is included in the work that we're doing within the sustainability teams? It is not reasonable to expect an AI ethicist to have all the domain expertise that they need to be able to solve all of the issues. Ethics is relational, it's collaborative. You require expertise to be able to support the solutions that need to come out of mitigating some of those risks.

Ben:[00:56:28]: Yeah, I think that's super important. It's almost like you'll have the ethics person in the corner over there, and every now and then they'll stick their hand up and go, oh, actually... Bias. Bias. You can't do that. Or like, be more transparent or something. But it's about incorporating ethical thinking, processes, governance, bringing those legislative things in to support your operation.

Lisa:[00:56:58]: Yeah. And so much of it is around building bridges and allowing for experts to do their jobs once the ethical issues have been identified. It's not all on the design team to do that work. It's really, I think, we need to lift some of that pressure and accountability and responsibility off designers' shoulders, and we need to spread the risk and responsibility across the organisation. And once we identify ethical issues, we need to work with internal colleagues and partners to solve those. You can't expect a UX designer to solve all the problems. It's ridiculous in the same way- That's what UX designers are for, isn't it? In the same way, you can't expect your digital sociologist or digital anthropologist or AI ethicist to solve all of the problems. Very often what they are going to do is shine a spotlight on issues. But you need to then empower them to work with the appropriate team to fix those things. Yeah.

Ben:[00:57:57]: I'd love it if they just sold like that. Those guys, they know what they're doing. Just don't get in their way. Yeah. Yeah. Sweet. So this has been fantastic. Thank you for coming here and doing this in person. We don't get to do enough in person on the podcast. So this is awesome. The final question we always ask is: what excites you and what scares you about our AI-mediated future?

Lisa:[00:58:27]: So what scares me is that we're going to sleepwalk into it, and that we will be so blinded by the gadgets and these whizzy things that they can do that we forget to look at the impacts. I think what excites me about the future, and I actually mentioned this in my talk at the conference that we met at, is that the future is just a prediction. It hasn't happened yet. And because of that, it's actually a really exciting thing to think about, right? Because we get to design the future. So what excites me is a new generation of designers and researchers who want to design a new and better future, and that the future of AI is people-powered, and that the future of AI is people-led, because, dear human, the future really needs you. And that's the thing that excites me most, I think.

Ben:[00:59:32]: Amazing. Sweet. So feel empowered. Yes. Exactly.

Lisa:[00:59:35]: Yeah, totally.

Ben:[00:59:37]: Lisa, thank you very much for your time.

Lisa:[00:59:38]: Thank you so much for having me. It's great to do this in person.

Ben:[00:59:41]: Is there any way people can follow you, find out about you...?

Lisa:[00:59:50]: So I'm one of those bad social media people, but you can find me on LinkedIn: Lisa Talia Moretti. There's only one. And yeah, find me there. Cool. Thank you very much. Thank you so much.

Ben:[00:59:59]: Hi, and welcome to the end of the podcast. Thanks again to Lisa for coming to my office. I much prefer these actual in-person recordings when it's possible; it gives a completely different vibe. It was nice to get an update from Lisa about progress and things that are going on in data governance, personally identifiable information, data vaults, data wallets, and all those things. I feel like these are perennial issues that keep coming up on my radar, so maybe it's something that we can get to solving in the end. I know there are lots of opinions and solutions that people have, and I've personally worked on several of these projects with other people in the past, so I feel like there must be a solution somewhere. Anyway, I also really enjoyed the contractual nature of responsible AI frameworks, or using that procurement process to lay down a foundation where you and the organisation you're operating with are on a level playing field. You have this conduit of: when we're talking about AI stuff, we're going to need these things to be in place. I think that's really useful, and maybe a basic requirement for operating with these technologies. We could also extend that out to environmental frameworks and how we work on the amount of carbon or processing or bandwidth, all these sorts of things as well. It feels like that would be a similar arrangement, which would be great.

Ben:[01:01:28]: If you're still listening, thank you very much. If you can, you can support us on Patreon, patreon.com/machineethics. You can get hold of us at hello@machine-ethics.net. Your feedback, or people you'd recommend, is always welcome. And thank you again for listening, and I'll speak to you next time. Bye.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design Ben and the team help organisations make better AI decisions leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford

Previous podcast: AI Truth with Alex Tsakiris