77. Doing Ethics with Marc Steen

In this episode Marc Steen and I chat about: AI as tools, the ethics of business models, writing Ethics for People Who Work in Tech, the process of ethics, "doing ethics", and his three-step process, misconceptions of ethics as compliance or a roadblock, evaluating ethical theories, universal rights, types of knowledges, and what is the world we're creating with AI?
Date: 23rd of May 2023
Podcast authors: Ben Byford with Marc Steen
Audio duration: 52:24 | Website plays & downloads: 180
Tags: Human centered design, Business models, Compliance, Human rights, Knowledge | Playlists: Design, Business, Philosophy, Rights

Marc Steen works as a senior research scientist at TNO, a research and technology organization in The Netherlands. He earned MSc, PDEng and PhD degrees in Industrial Design Engineering at Delft University of Technology. He worked at Philips and KPN before joining TNO. He is an expert in Human-Centred Design, Value-Sensitive Design, Responsible Innovation, and Applied Ethics of Technology and Innovation.

Marc's first book, Ethics for People Who Work in Tech, was published by Taylor & Francis/CRC Press in October 2022.


Transcription:

Transcript created using DeepGram.com

Hi, and welcome to the 77th episode of the Machine Ethics Podcast. This episode was recorded on the 3rd of May 2023. We're talking with Marc Steen. We chat about AI as tools, the ethics of business models, writing his book Ethics for People Who Work in Tech, the process of ethics or "doing ethics", Marc's three-step process, his experience with misconceptions of ethics as compliance or roadblocks, evaluating ethical theories, and universal rights. If you like this episode, you can find more at machine-ethics.net.

If you'd like to contact us, you can email hello@machine-ethics.net. You can follow us on Twitter. Thanks very much for listening. Hi, Marc. Thanks for joining us on the podcast. If you'd like to, introduce yourself: who you are and what you do.

Yeah. My name is Marc Steen. I live and work in the Netherlands. I work as a senior research scientist at TNO, a research and technology organization in the Netherlands. The last several years, I've specialized in responsible innovation and the ethics involved in the development and deployment of technologies, with very much a focus on data and algorithms and what people today would call AI. Great.

Well, I think that leads us into our first question. So in your mind, obviously we've got the book that we can talk about, which is Ethics for People Who Work in Tech. So what is tech? What is technology in that kind of sphere? And what, by extension, is AI?

Yeah. As you've noticed, I said what people call AI. Frankly, I think AI is a bit of a silly term. I know people have been using it since, what is it, the 1950s, with the Dartmouth conference. Silly in the sense that it draws so much attention to mimicking people: it's intelligence, like human intelligence, the first thing that comes to mind, and then artificial.

So I don't like the term so much. I would rather talk about tools that we can use, or instruments through which we can perceive the world, or machines that help us do things. For example, a spade: you don't call it an artificial hand, do you? Or a bike: you don't call that artificial legs. So why do we call these machines or tools "artificial intelligence"?

I mean, they're machines and tools. Yeah. I guess if you're gonna make a rebuttal to that, you wouldn't have a spade which kind of went off and did its own digging, I guess. You know what I mean? Does the AI do its own thing?

I mean, you give it a prompt. Well, let's talk about ChatGPT. Obviously, it's been in use for a couple of months. You give it a prompt, and it gives back more words. You give it words back, it gives you even more words back, so it's like a word-creation machine, a language-juggling machine.

I mean, it's obviously brilliant at what it does, and quite surprisingly so. Some people say it's dangerous, for various reasons. So there's the AGI fear of it taking over the world. Way, way before that, there are much more real risks, like what has been described for ten or more years: the way that algorithms propagate or exacerbate existing injustices, unfairness, discrimination, etcetera. And just today, I read about hundreds of AI-run "news" sites that spit out thousands of news articles, just to sell ads. And all these articles will have disinformation, misinformation, fake news.

So, yeah, it's not very helpful if you value truth, or news items that are true. So these are real dangers now. Way before we have something like AGI, we need to worry about those risks, I think. Yeah. I think there's a lot that we could pick apart in there, which is always fun.

How much time have you got? You know? You could do parts two and three. Yeah. Yeah.

Yeah. Yeah. Exactly. So I think, I mean, if you go back, way back, the original idea of the Internet, right, was this egalitarian idea that you could spread information and people could share and people could collaborate. And then commerce, you know, came on board.

And, arguably, what you just pointed out there is that the Internet is now less about how you share information and more about how information is monetized or commercialized somehow, and that is leading to this point that you made about misinformation or dummy articles, or just the Internet getting taken over by loads of crap, which arguably has already happened, you know, in my mind, for instance. So I think it's a shame, because it's less about any of the technologies per se, and more about the structural commercial aspect of it. You know? If there wasn't that commercial aspect of it... Yeah.

Totally agree with you. And you already mentioned the book Ethics for People Who Work in Tech, where I consider its audience more broadly: so developers, obviously, but also people who are involved in procurement or policymaking or any application. So, yeah, many, many people nowadays encounter technology in their work. I devote a couple of chapters to what comes before you do ethics, like business models, or understandings of well-being, or policymaking for well-being: like GDP, how good is that? Do we need to go beyond GDP and measure things beyond economic growth?

Yes, of course, according to me and according to many nowadays. Well-being: how do we understand well-being? Is it how much stuff you have, or is it the relationships you have with people, with or without technology? So, yeah, there are lots of things you need to think about, and then I think you can also do ethics, but before that it is the business model, it's policies, and it's what we consider to be normal or desirable.

But I guess you could "do", I think I'm doing air quotes here, you could do ethics with those things as well, like with the business models as well. Oh, yeah. Yeah. Sure.

Sure. Sure. The thing I wanted to point out is that there are some proposals you can encounter in the book in the ethics part; there are three parts, and the middle part is most purely ethics. Some of the suggestions there, for example: a social media app that is different from your normal social media app. So your normal social media, its purpose, coming from its business model based on advertising, is to lure people to it as much as possible, as often as possible, and to keep them there as long as possible.

And you can think of an alternative design where it will ask you: hey, Ben, I see you're using this. What do you want to do with it? How many minutes do you want to do this? And then after those minutes have passed, it prompts you: have you found what you're looking for? Otherwise, you can do something useful now rather than stay here for your next bit of content, scrolling through your timeline.
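(As a rough illustration of the alternative design Marc describes here, a minimal Python sketch of an intent-first session budget; the prompts, function names and flow are illustrative assumptions, not a design taken from the book or any real app.)

import time

def start_session() -> tuple[str, int]:
    # Ask for intent and a time budget up front, instead of optimising for time-on-site.
    intention = input("I see you're opening this app. What do you want to do with it? ")
    minutes = int(input("How many minutes do you want to spend on that? "))
    return intention, minutes

def run_session(intention: str, minutes: int) -> None:
    time.sleep(minutes * 60)  # stand-in for the user actually browsing the feed
    # When the budget is spent, check in rather than serving the next bit of content.
    found = input(f"Time's up. Have you found what you were looking for ({intention})? [y/n] ")
    if found.lower().startswith("y"):
        print("Great. You could do something useful now rather than keep scrolling.")
    else:
        print("You could set a new, smaller budget, or come back later with a clearer goal.")

if __name__ == "__main__":
    run_session(*start_session())

The design choice to notice: the app optimises for the user's stated intention rather than for engagement, which is exactly why, as Marc says next, the business model underneath would have to change.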

And then, of course, somebody will say: hey, but then your business model needs to change. Yes. Then your business model needs to change. So they're tied up, the topics of business model and governance and ethics.

I mean, before we get into the weeds a little bit, let's go way back, because we mentioned the book originally. Why did you write this book? What did you want to achieve with it? And what interests me as well is that you came from industry, essentially. So how did you get excited or interested in ethics to start with anyway?

Yeah. That's a nice question. Yeah. I started with working for Philips Electronics and KPN, the Dutch telecom operator. I have my education, my background, in Delft, in industrial design engineering.

And in that school, it's normal to put people central and to consider technology as a means, and also to have an interest in organizing the process of innovation. So that has always been my sort of interest: people, technology as a tool, and then organizing the development, the design, the invention process. And then, after ten or so years working in industry, I became interested in ethics. I did a PhD at the University of Humanistic Studies in the Netherlands.

I defended it in Delft. And in that, I looked at the ethics that happen sort of implicitly, sort of inherently, in your design process. So the designer will typically, if they follow human-centered design, invite people to the office, have a talk with them, a focus group, a usability test, all these variations that you have. And then I was asking: how open can you be to their experiences? Or is there some philosophical hurdle that will keep you in your own frame, like you have blinkers on?

You can only see what you're interested in, so you have to be really open. And then, how much creativity can you have? If there's a brief, if there's a budget, how much room to maneuver do you have? So I took this on with the philosophy of French philosophers. Yeah.

That gave me a solid basis for further exploration of ethics, so I did that in the years after, and wrote a book. During these years, my coworkers and clients and also partners were asking me: hey, Marc, can you help us with ethics?

And in their mind, they had some idea of ethics as a hurdle, as a roadblock that you need to pass, or maybe even a rubber stamp that you can get if you do it well, and then the ethics is good. Now I was saying to them: yeah, but that's not the kind of ethics that I can do for you. What I can do with you is organize a process. So that's where my Delft education comes through again. A process of reflection and inquiry and deliberation in which you yourself and your team, your project team, put issues on the table, organize dialogues about them, look at them critically from different perspectives. And then, yeah, the third step is that you continue your project.

So the association that some people would have with philosophy is, like: okay, so you're in your armchair, you're doing big things and big ideas in your head only. So my approach, and also why my colleagues and clients find it helpful, is that they can continue their project. That's precisely the idea.

Only with a couple of questions running in parallel to their project, and with a couple of findings along the way, in an iterative process. Hey, we can do this differently. Hey, we can do this better.

Shall we improve this or that? Sorry. Just... Yeah. And then the last bit of your question: so this is how I started developing this method that I've also sometimes, sort of jokingly, called rapid ethical deliberation, as a nod also to agile or other methods. Yeah.

And the book contains much of my experience with doing that in practice with people and projects. Awesome. And I think in the book you mention, like, this three-step situation. So I was wondering if you could just outline the basic vision for that and how it works. Yep.

Yep. Yep. So the first step is identifying issues in your project that may be problematic, or "interesting", let's call them, because sometimes it's not necessarily a problem, but it's interesting in a way. So identify topics you want to study, that you feel you need to study, to pay attention to. The second step is to organize conversations about these topics, and then first within the project team; that's easy enough.

You can integrate it in your project team meetings, but ideally also with your client, with stakeholders, also outside the organization. This borrows also from human-centered design, where you invite in a customer or potential customers, and also from value-sensitive design, where you will invite stakeholders with different perspectives on the topic to express their values and their concerns. So those are steps one and two. And the third step is this: take it to action, do something with it. Ideally in an iterative manner.

Ideally also in a learning manner. So you try out for two months doing this feature differently, doing this application differently, and then you monitor, and then you modify. That's also why I sometimes use the metaphor of the steering wheel. So ethics is then hopefully not a barrier or a roadblock or the stamp, but more like a steering wheel that you can use to bring your project safely from A to B and avoid collisions, stay in the right half of the road, take the right exits. So, steering wheel.
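(For readers who think in code, a minimal, purely illustrative sketch of the three steps as an iterative log a project team might keep; the class and field names are assumptions, not a tool prescribed by the book.)

from dataclasses import dataclass, field

@dataclass
class EthicalIssue:
    description: str                                         # step 1: an issue worth paying attention to
    conversations: list[str] = field(default_factory=list)   # step 2: dialogues with team, client, stakeholders
    actions: list[str] = field(default_factory=list)         # step 3: what the project tries out or changes

    def is_open(self) -> bool:
        # Steering-wheel style: an issue stays open until deliberation has led to an action.
        return not self.actions

# One pass through the three steps; in practice you would iterate, monitor and modify.
issue = EthicalIssue("Feature may exacerbate existing inequalities")
issue.conversations.append("Week 3 team meeting; invited the client and two external stakeholders")
issue.actions.append("Trial an alternative design for two months, monitor, then modify")
assert not issue.is_open()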

Mhmm. Yeah. I mean, to me it's interesting because it sounds a lot like a combination of bits from agile, like you said, and design: value-sensitive design, human-centered design, or, what was popularized by IDEO, design thinking. Mhmm. But I guess you're being more explicit in saying that we are thinking in these terms.

Right? We're... Yeah. Yeah. But, yeah, I'm totally happy that you recognize these, because indeed this has also been my work experience, all the methods that you're naming. Yeah.

Yeah. Broadly design thinking, yes, also. And here, of course, listeners can have different ideas pop up in their heads. Some people will think of design thinking as, yeah, that's in a room with a whiteboard and sticky notes. Mhmm.

That can be a form in which you can do it. But for me, design thinking is more a going back and forth between problem setting and solution finding. It's also a way of systems thinking, where sometimes you think about the problem and try to understand it more deeply, and sometimes you think of a solution and try it out. So for me, design thinking is really at the core of the book and also of my approach, but more in that methodological manner of not so much rushing forward to a solution, but always having some options open to revisit the brief, to rethink the problem, to re-identify issues that you want to pay attention to.

And now, of course, if this is exaggerated, it would mean that in any project team meeting, all the options are open again, and obviously that's not very helpful. But the other caricature would be, like, you can never ask questions and it's always rushing forward, and that's also silly. So, a combination of problem finding, sorry, problem setting, and solution finding. So with that three-step process, there is, in my mind, a lack of external pressures outside of that process.

So, for example, there isn't any depiction of how legislation or governments or kind of external parties can play a role in this, other than maybe being brought in as stakeholders themselves. Like, if you were reaching for more stakeholders, you might pick different parties like that. I was just wondering if you thought that this was enough for an organization, or whether they need to situate themselves within, obviously, the context of business, but also of citizenship, of environment, of all these other, kind of, more external pressures. And how does that figure in? How do you think about those things? Yeah.

It's a very good question. They are there in the book, but you're talking about really, really important topics, like the GDPR that you need to comply with, your code of conduct, either of the IEEE or the ACM or some other professional organization, the people within your organization, the legal people, even the compliance people. So, yeah, they're all there, but I don't give them so much attention in the book. Maybe that's a flaw. On the other hand, I do mention them in places scattered throughout the book.

Possibly that is because my book is a bit of a reaction to a one-dimensional perspective on ethics that I sometimes encounter. Like: hey, Marc, you're doing ethics, can you help us with data protection? I was saying: yeah.

You're looking for somebody with a legal background helping you with compliance with the GDPR, the General Data Protection Regulation of the EU, but that's not ethics. I mean, that's one slice, one piece of the pie of ethics. And similarly for the other things: yeah, you need to comply, you need to not violate the law.

So in the middle part of the book, I discuss at more length four ethical perspectives. One is consequentialism, the pluses and minuses. One is duty ethics. And there, I pay some more explicit attention to duties, compliance, positive duties, negative duties in the sense of human rights, some things you need to do, some things you need to avoid, the rights of citizens, like you were saying. And then the third one is relational ethics.

The fourth one is virtue ethics. So the legislation bit and the compliance bit are discussed in the context of duties and rights. Mhmm. And following on from that, because you had the title in there of universal duties, I just wondered, and it's obviously a pithy question, but is there a universal duty or ethic? Or is that something that you feel will ever happen?

No. I don't think there are universal duties in that sense. No. Because it's interesting, because obviously human rights is a declaration of intent. It's not like a thing in its own sense.

Like, there's not a natural human right. Right? There's no natural scientific law that says that something should be... Yeah. Exactly. It's a social construct.

Yeah. You don't find it coming... I think I write somewhere: it doesn't grow on trees, it doesn't fall from the sky. It's made up by people who were looking for ways to live together well. Exactly.

Exactly. A variation of that. Exactly. So do you consider that human rights are part of that kind of universal ethic, or universal duties, as I said? Yeah.

Yeah. Yeah. There's a big overlap, I think, between concerns for respecting and protecting human rights or fundamental rights... Mhmm.

And many of the ethical concerns that I write about. Yeah. Sure. Yeah. Yeah.

Yeah. Yeah. Okay. Cool. I just, you know, from a pure philosophy point of view, it's just an interesting question anyway.

What would you think? Would you be looking for some universal duty? Because we can just do it like a thought experiment. Yeah. Which rule can you think of that everybody, all the time, in any situation, needs to follow?

Yeah. Well, I think it depends on what you call a rule, essentially, doesn't it? It's like the problem that I always have with human rights: the description of human rights is kind of open-ended. A lot of it depends on your interpretation of the language. Yeah.

Yeah. Yeah. Yeah. I see what you mean. So a couple of years ago, two years ago, I started studying law at the Open University in the Netherlands.

And I wrote a piece, an exercise at the examination of one of these courses, which was on fundamental rights, where a lot of pieces come together nicely, I think. It was about SyRI, which stands for Systeem Risico Indicatie, a system for risk indication that Dutch local governments were using to find fraud among citizens on social welfare, a sort of cousin of the infamous childcare benefits scandal in the Netherlands. Yeah. And the judge, two years ago, three years ago, what was it, decided that this is unlawful, Mhmm, because it violates Article 8 of the European Convention on Human Rights.

And coming back to what you just said: yeah, this right to privacy is open-ended, but it isn't, because there the judge weighs, well, like judges do, on the one hand and on the other hand. So on the one hand, the government needs to put some effort into finding these people, because otherwise taxpayers' money is spent on fraudulent people. Yeah. So we need to do something, but it needs to be proportional. First of all, it needs to be legal, within the law, legality; and then proportionality and subsidiarity are the other two criteria.

Are there better ways? Are there less violent ways, so to say, to do it? And then they decided: well, in this case, for these applications, it's not a good idea. So they forbade it.

So it's a nice example, I think, of how they come together. The realism of tax-inspector people using an algorithm: how far can you go? Well, this was too far, so we stopped that. Yeah.

And hopefully we learned from that. And that's one of the things that I hope to contribute with my work and with my book: making the people who work on these algorithms, or use these algorithms, more aware of these concerns way before it's implemented, way before it's actually utilized in practice. You mentioned that those are all kind of agreed. Right? We're contractually obliged to the European Convention on Human Rights, if you're part of the European Union, for example.

So there are certain things, but they're all social contracts or social constructs. It's just, you know, it's that Kant-like attempt to discover if there's a universal thing or universal method. It's almost like a mathematical equation for ethics, as opposed to this push-and-pull reflection. Like, I think in the book you talk more about this way of thinking about ethics as discourse or reflection or collaboration, and these things, rather than, you know, equations. You know?

Very much so. And this may be a nice stepping stone to one of the chapters that's also in the first part, where I do some groundwork or background. I explain different types of knowledge, and I invite readers to imagine in their head a big tree, the tree of knowledge, with four big branches. One is the natural sciences, one is the social sciences, one is the humanities, and one is technology. So I give technology and design its own branch, which is normally not done.

And then I go on a bit about how they are different, because they are different. The natural sciences are more the domain where you can come to a real equation: F is m times a, force is mass times... what does the a stand for again? Acceleration, in English. Right? Yeah.

Yeah. Anyway, it's physics. Physics, with real formulas and real numbers that always work. Mhmm. In outer space, on Earth, a century ago, nowadays: always the same.

And the social sciences, they're a bit different already, because, I mean, a century ago and now, it's different; on this continent or that continent, people behave differently. So there's already a bit of difference between the natural and social sciences. And the humanities are even more different. I think the definition of the humanities is the study of the products of the human mind, like all the books in theology and in philosophy and in law, and art, and not only books, of course, also art in other forms.

And then technology is different again, because that's not just studying, but changing stuff and creating stuff and seeing what happens, and trying things out to solve more practical problems in the real world. Mhmm. And then I make a sort of parallel that I hope will help people with technology backgrounds feel that ethics can be really helpful. When I say, well, the work of a normative ethicist and of an engineer are very similar: they look at the world, they believe or feel or think that something is not quite right, that we need to do something with it, and then they go about changing it.

So the work of normative ethics and the work of creating and using technology are very similar. And you don't need Kant for that. I like very much your characterization. It's not high up there. It's not complicated.

It's really these three steps. Put your issues on the table, have conversations about them, dare also to ask silly or uneasy questions, dare also to tolerate the unease that comes with it. Like: we're six months into this project, we have two more months to go, why talk now about the project brief and its assumptions?

It's scary sometimes to do the work of ethical reflection and deliberation. Yeah. And who do you see doing this work? Is it people, teams themselves, enacting some of this stuff? Or are there special people that have job titles, or consultants, or other people that can be brought in?

Yeah. Yeah. Actually, I hope that once a reader has gone through the book, and I also add lots of links, there's also a website connected to it through which you can go to lots of other sources and videos and podcasts, they can do it themselves. I mean, yeah, I believe very much in taking small steps first, and then some people will want to get really good, or even better, at it, and of course they'll find ways to do that, but the first steps you can do just by this.

And I've actually got another devil's advocate question for you here. Excellent. This is my favorite thing to do. So there's this idea, that I love, which is kind of at odds, where everyone is a moral agent. Right?

So you have this saying in the book: whether you are a technologist or a developer or a data scientist or a business person, or you are making things or whatever, you're all participating, right, in this sphere. You're all going to be making technology, or helping to make technology, or facilitating the making of technology, that's going to affect people. And you should be an agent in that. You should be aware.

Right? Self-reflecting, presumably. But then you also go on to say that people should be mobilizing their intrinsic motivations in a positive direction. Right? So, using their own motivations, their own intrinsic kind of moral outlook, for the good of the situation.

Moving the steering wheel in the right direction, let's say. But for me, from my experience in my life, I feel like that intrinsic motivation isn't enough. Right? You have all these forces around you. Money is a big one.

But you also have other social forces that play a part in that. And I wonder if you can talk to, you know, how you might kind of break through, or better understand what is, you know, a good intrinsic motivation, rather than maybe a self-serving one. Yeah. I think I remember writing that section, because my book was beginning to be too critical of technology, all the things that can go wrong. And am I fearful of technology?

Well, not really, but sometimes I can be fearful of evil people using technology for evil ends. And then on the other end, I was feeling hopeful, and how can I express that? So that's where, I think, I wrote the bit that you just sort of paraphrased. Like, I hope to speak to the good that I assume is in each of us, to do well, to do good. And then I like very much what you're drawing attention to.

Like, there are external forces. There are social norms: why would you ask a silly question like that? I mean, we have one more month to deliver, so why question it?

And we also spoke about, like, the reality of many companies, most companies actually, of course, making money in some form, so there needs to be a business model that works. All this is there. And still, and still, I write about and I speak about the hope that within each of us, because all these people are moral agents as well, and citizens, and professionals, the hope that they can mobilize within themselves motivations to do good. Now, I spoke about the four ethical perspectives.

One is about pluses and minuses; I could talk more about that. One is duties and rights, which leans towards law and legislation. The third one is relational ethics. The fourth one is virtue ethics.

And virtue ethics is, in a kind of way, special, because it not only talks about the virtues of the people using the technology, like the citizens, and whether the social media app, for example, helps them to cultivate self-control or whether it erodes self-control; that's an easy example, I can talk more about that. But virtue ethics also talks about the virtues that we, let's say we as professionals, as the people involved in developing and deploying technology, the virtues that we need to cultivate. So, yeah, hope is one of these virtues.

Justice is an obvious one. Self-control is another one. Courage, to speak up. I mean, these are some of the cardinal virtues of ancient Greece, of Aristotle. So, coming back to your question: with my expression of hope that people can mobilize within themselves the good, the motivation to do good, comes also the tool of virtue ethics, which can help people to reflect on: hey, what kind of virtues would I need, and how can I cultivate them and bring them into the world, bring them to expression?

And that's very much also a developmental process. Well, actually, I don't know what Aristotle would say. You're born, you grow up, you learn stuff, you unlearn stuff, you get educated, you get your job. Mhmm. And then lots of things happen in lots of directions, but still, you can, at any moment, learn to be a bit more courageous, or learn to have a bit more concern for justice.

Yeah. So it's very much a hopeful book. Mhmm. Yep. And I guess... I'm not answering your question, because I've lost the thread. I don't think questions are here to be answered.

They're just to be ruminated on. Right? Right. Right. No.

We did that. So, you mentioned that you kind of think that virtue ethics is useful in this context. Like, you are really excited about that, as opposed to, maybe less so, utilitarianism and Kantianism and relational ethics. But is it because of that hopeful side of things, or is there a process in which virtue ethics is just more interesting or... Yeah. Illuminating enough?

Yeah. Let's put them in order of appearance. Mhmm. The pluses and minuses of consequentialism are good. I mean, there are pluses and minuses to your project that you can think of: your project will deliver something, and that something goes into the world.

And now let's talk about the pluses and minuses, its impact, its outcomes. Mhmm. It's a very good idea. However, sometimes things are more difficult than they appear at first sight. So, how are the pluses and minuses distributed over different people, or different groups of people?

Some of the pluses and minuses are kept out of the equation, out of the calculation; they're called externalities by economics people. So, yeah, the pluses and minuses are a good idea, a good starting point for sure. The second one: duties and rights. Yeah.

We talked about that. You need to comply, and you need to respect rights. And there's lots more to be said about it. And then, zooming out a bit on these two: they're both products of the European Enlightenment. So, we're objective.

We can calculate. We're autonomous, independent individuals. We're rational. All this. Yep.

And then the third and fourth ones are a reaction to it. Well, obviously, virtue ethics cannot be a reaction to it, but the sort of growing recent attention to virtue ethics can be thought of as one. Anyway, relational ethics looks at care, a combination of justice and care, because all justice needs some care and all care needs some justice. They're like two sides of the same coin. And virtue... sorry.

Relational ethics, also feminist ethics or the ethics of care, draws attention to the status quo, its power, power distributions, inequality. And then you can look, for example, at how a technology, or the introduction of some technology, propagates that power imbalance, or how, in the other direction, you can use technology to rebalance, to empower people who normally wouldn't have power. And now comes the last bit. So, I like relational ethics in the sense that it remedies some of the Enlightenment pitfalls, so to say, because there's more than independence. We are dependent.

There's more than the rational, because there are also relationships and emotions and affects. Also, I recently wrote a bit more about something I write about a bit in the book: learning from indigenous knowledges. So you can argue, in a nutshell, that the climate crisis and lots of other things are a result of Enlightenment ideals derailed, taken too far, like: submit, exploit nature. And then indigenous knowledges would very much stress our relationship to nature, the interconnectedness of the plants, the animals, and us. So I do a bit of a tour of South American, North American, African, Asian, Australian indigenous knowledges, and how they can help us look differently and also remedy some of the pitfalls of the Enlightenment.

Coming back to the fourth one, virtue ethics. Can you repeat the question, Ben? Because it is my favorite. Why again? I think I was asking why you thought virtue ethics was interesting in this context, or... you mentioned at the beginning of the podcast that you would use your favorite one, almost.

Yeah. Yeah. Yeah. It's my favorite one because it's the hopeful one, because it stresses very much the ability of people to learn, and to unlearn bad habits, to learn better habits. This whole habit word is from Aristotle.

So I believe that virtues are nothing really more or less than habits that you cultivate. So if I, all the time, take my mobile phone and scroll through my social media timeline, and I do it often enough, my self-control erodes. It's gone away. On the other end, if I systematically teach myself to put it outside of the bedroom, so that it's not the first thing when I wake up and the last thing before I go to sleep, then I form another habit that is more conducive to, I don't know, whatever else you want to do. Mhmm.

Mhmm. Other stuff. So that's one thing: it's a hopeful one. It's the developmental process of virtue ethics.

And also because it's a direct way into what professionals, the people involved in the development and deployment of technology, can do themselves. And I guess it stresses the autonomy of the individual to, you know, dictate that direction, to make that moral decision, as opposed to it being more like groupthink, or structural. You know, you can all reflect and make a decision, and you can all change how the world looks. And quite often these things come back to the idea of the good life and flourishing and these philosophical concepts. But, really, you are in the driving seat in that equation for working out how we get there, how we create the environment for people's flourishing, instead of, you know, maybe one person's flourishing at the expense of thousands or millions.

So, yeah. I mean... Yeah. And flourishing is one of the central words of virtue ethics. Living the good life; I always add "together" to that: how to live well together. Mhmm.

Because one of the possible misconceptions of virtue ethics is that it's like an individual thing. Well, obviously, the virtues live within people. Mhmm. But the virtues are... Yeah. Ideally, they are directed at living well together in the polis, in the city-state, in Aristotle's case.

For us today, it would be something like the European Union. That's one of the polises, the size of polis, that I live in, and maybe a big polis, because I also live in a city. But, anyway, it helps you to think about how we want to live together well. Mhmm. So, kind of outside of the book, I was wondering: with all your experience with creating, or helping people create, products, innovations, technology products, services and things like that, do you think we are making a world which is primed to keep getting better, promoting wellness, promoting the flourishing of citizens and individuals?

Are we making that future that we want to kind of live in, with these types of technologies, like we were talking about earlier with AI, and briefly large language models? Is that the world that we should be living in? Yeah. I already mentioned the climate crisis, which I think is, for many of us, including me, a big concern. Like, this can go wrong in so many ways, very soon already.

And we, that's governments, industry, consumers, but I'm starting with governments for obvious reasons, legislation and policy, we need to do things to prevent or mitigate the worst effects of it, because it is already happening. There are droughts in Spain, etcetera, all the other examples that you can read about. The connection to technology?

Well, AI was not involved in creating the climate crisis; it has not. Can it help us do things differently? Yeah. I guess we need to use...

No, we don't need to, but we could use technology differently. And I think AI is of special interest; my next book will be about that, I guess, because AI is a machine for thinking, a machine for words, machines for conversation, or tools, instruments, for that; that's the way I like to think about it. So if we create different AI systems, or different algorithms, and use them differently, they can help us also to think differently about manners.

Sorry, about topics that are of public concern, like how to spend the taxpayers' money, what kind of legislation we need. Whereas, on the other hand, the example that we spoke about in the beginning, the hundreds or thousands of fake news and misinformation items that AI bots can release onto the Internet: they're unhelpful in facilitating good conversations about important topics. By the way, I forgot to say, a while back when we talked about virtue ethics: Shannon Vallor, she wrote the book Technology and the Virtues, and that has really inspired me to write my own book. She does a great job of revitalizing virtue ethics in the domain of technology development.

So, yeah. Yeah. So, you were talking about technology. Does it help to improve the world? Or, on the other hand...

Does it work in the other direction? It can work in both directions. If there's no good legislation, if there's not enough wisdom within the industry, then there are huge risks of AIs released into the world that will very much pollute all the conversations that we can have on topics that matter. Yeah. And the link to what I just said about Shannon Vallor and her book Technology and the Virtues: she talks about civility.

And I guess that in British English, civility means something like politeness, but she uses it differently, in the sense of the ability, and indeed the virtue, of people to come together and discuss matters that matter, and then to come to action, to solve real problems. And I like that very much: the idea of using AI to facilitate that kind of civility. Yeah. Yeah. So we could... I mean, this is the fear, right, that we're creating these lonely bubbles for individuals to live in.

But you're stipulating that we could hopefully harness this technology for bringing people together, having direct action, more local communities, that sort of thing. Yep. And there was one thing more that I wanted to say about it, just remembering: it must have been two or three years ago. I can send you the link if I can find it.

It was a podcast interview between Azeem Azhar and David Runciman. They were talking about artificial intelligence, and David Runciman, political science background, I think, was drawing attention not so much to the intelligence bit of AI, but to the artificial part of AI, saying: well, we need to worry more about the artificial. And then he explained what he meant by that. Like, the artificial is there already, has been there for a couple of centuries, because that is, like, the nation state, and it's the corporation with limited liability. And they can do enormous things.

Mhmm. And that's artificial in a sense. It's more than one person, or a normal group of people, can do. It can be multiplied a millionfold. And in that sense, bringing that back to the discussion on AI: let's worry about the artificial bit of it.

I think the example of the AI bots spewing out fake news is already an example of that. And here comes one of my themes that I often want to talk about, making stereotypes. You can say that in the US, the United States, corporations can do anything that is not strictly illegal. And even if it is illegal, they pay lawyers and just do it. So the corporations have, like, big, big power, and the state not so much.

And that leads to AI, yeah, totally promoting consumerism, Mhmm, and neoliberalism. So, ads, goods, well, all the things that you can order at Amazon, etcetera, and all the ads that are sold via Facebook, etcetera. The other stereotype would be, like, China, where the state has, like, too much power and can do anything.

Well, yeah. Yeah. Very much anything, to monitor, to control its people, its citizens. And their AI takes the form, for example, of cameras everywhere, a social credit system, etcetera. And then, to make the story complete, I'm imagining something else, where the corporations don't have too much power and also the state doesn't have too much power.

But the citizens, or civic stakeholders, and society, have power as well. And I think the European Union is making plans for such things that are in between corporations with too much power and a state with too much power. But then, we'll see. That's one of the directions in which, I guess, I have hope for a more positive, more helpful deployment of technologies. So, Marc, we've already answered some of this in the previous question, I think.

But the last question we always ask on the podcast is: what scares you and what excites you about living in this, you know, this technologically mediated future? Yeah. What scares me is many of these big industries that exacerbate the climate crisis. I don't have to spell them out, but you can think of the fossil fuel industry, etcetera, etcetera. Yeah.

That reasonably scares me. What excites me is finding ways out of what I politely call a derailed neoliberalism. Are there alternatives? So, Kate Soper wrote a book. What is it called?

Kate Soper's book, Post-Growth Living: For an Alternative Hedonism. There's post-growth, this paradigm of: can we do the same, or even better, Mhmm, with less production, with less consumption? So, yeah, I find it exciting to help think of alternatives that consume less, that produce less, that pollute less, and that are at least as much fun.

Yeah. Because, yeah, if you were to choose between ordering more and more packages from Amazon or having a picnic with friends in the park, you would... I don't know what you would choose, but, yeah, a picnic in the park sounds nice. And then, yeah, then we need to protect the trees that we have in these parks. You know, we need to find ways to make time for friendships, for relationships. No.

We'll just put another Amazon fulfillment center on the park. Oh, yeah. Yeah. Actually, that is being discussed now on the outskirts of Amsterdam, where there's agriculture. Mhmm.

The last bit of agriculture that is there, and it's even bio-organic, and people have small plots of land there, and they come there from out of the city. And, yes, a distribution center there is currently being discussed. So, yeah. It's these questions. And what excites me is, also with colleagues of mine at TNO, to work on alternatives for some of the... how would I say it? To work on positive projects and positive outcomes.

Awesome. Thank you, Marc. Thanks so much for joining us on the podcast. If people wanna find out about you, find you, buy your book, how do they do that? They can go to ethicsforpeoplewhoworkintech.com.

That's the book. No, that's not the book; that's the website that accompanies the book, but there you can find the book. Or they can go to my personal page, which is marcsteen.nl, and Marc is spelled with a c at the end. And if they're interested in the work of the organization where I work, TNO, it's tno.nl.

Wicked. Thanks for your time. Thank you. Hi. And welcome to the end of the podcast.

Thanks again to Marc for spending his time and energy with us. I'm still in the process of finishing off his book, so I'll put up a review on the Patreon as soon as that's ready. One of the things that has resonated with me so far in the book is this idea of types of knowledges, which we touched on in the podcast, and how to think about those things in terms of their effect on society, and therefore how they are related to ethics in that way. If you'd like to hear more about things like this, check out our episodes on the podcast at machine-ethics.net. And if you can, you can support us at patreon.com/machineethics.

One of the things I'm appreciating at the moment is the absolute acceleration of some of the LLM, large language model, stuff that's happening in the media, along with some of the previous things around Stable Diffusion and the image models, and how they are going through the courts and all that sort of thing. Although this podcast isn't necessarily a news podcast or a news show specifically, we will endeavor to cover some of that stuff with our interviewees in future and, hopefully, keep you up to date with some of that bleeding-edge stuff as well. Thanks for bearing with us, and hope you enjoyed.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design Ben and the team help organisations make better AI decisions leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford