97. Running faster with Enrico Panai
Enrico Panai is an AI ethicist with a background in philosophy and extensive consulting experience in Italy. He spent seven years as an adjunct professor of Digital Humanities at the University of Sassari. Since moving to France in 2007, he has continued his work as a consultant. In 2017, he studied Strategies for Cyber Security Awareness at the Institut National de Hautes Études de la Sécurité et de la Justice in Paris. Holding a PhD in Cybergeography and AI Ethics, he is the founder of the consultancy BeEthical.be. He serves as a professor of Responsible AI at EMlyon Business School, ISEP in Paris, and La Cattolica in Milan. Additionally, he is the president of the Association of AI Ethicists.
Currently, his main role is as an officer of the French Standardization Committee for AI and convenor of the working group on fundamental and societal aspects of AI at the European CEN-CENELEC JTC21, the European standardization body focused on producing deliverables that address European market and societal needs. Among the core standards managed are Trustworthiness of AI, Competences of professional AI ethicists, and Sustainable AI. His main research interests concern cybergeography, human-information interaction, the philosophy and ethics of information, and semantic capital.
Transcription:
Ben Byford:[00:00:03]
This was recorded on the 13th of January, 2025. We chat about elements of the digital revolution and the importance of knowing tech as a tech philosopher. We talk about what it means to be an ethicist, and why ethicists should diagnose but not judge. We also talk about the idea of ethics being seen as a burden in corporations, when in reality ethics can make you run faster without breaking things. We also talk about quality and pasta, as well as finding a new Marx for this digital world.
Ben Byford:[00:00:44]
If you like this episode, you can find more at machine-ethics.net. You can contact us, hello@machine-ethics.net. You can find us on Instagram, Machine Ethics podcast, YouTube at machine-ethics. And if you can, you can support us on Patreon, patreon.com/machineethics. Thanks very much and hope you enjoy.
Ben Byford:[00:01:08]
Hope you're well. Welcome to the podcast. I'm just warming up, obviously. We are really happy to have you on the show. Could you introduce yourself: who you are and what you do?
Enrico Panai:[00:01:26]
Hi, Ben. Thanks for inviting me. I'm an AI ethicist. That means I generally give advice to companies on their AI ethics strategies, what tools they have to use, or what standards they have to comply with, that kind of thing.
Ben Byford:[00:01:52]
You're currently based in France. Is that right?
Enrico Panai:[00:01:55]
Yes. I'm a European guy: I'm Italian, living in France.
Ben Byford:[00:01:59]
At the top of the show, we always ask quite a simple question, which always gets different answers, which I enjoy. To you, Enrico, what is AI?
Enrico Panai:[00:02:15]
Yes, it's a simple question, but quite a complex one. I think there are many different ways to approach it. At the moment, AI is a good label for the market. It's more marketable than other titles, and I admit I've done the same on a small scale. However, if we stick to the common definition of AI, we refer to all those sets of technologies that attempt to solve problems like, or similarly to, a human being. But some people mistakenly think that artificial intelligence, because there is the word intelligence, is related to the brain, and that we want to duplicate the brain in an artificial way. In reality, it's more related to the functionality we can get from this technology. Basically, you have the classical distinctions: from logical AI to data-driven AI, or by the methods used for creating AI, or finally by functional use, meaning classification or creation.
Enrico Panai:[00:03:38]
I would say that AI is the normal evolution of the digital revolution. We had a big revolution, the digital one, that is still misunderstood, I think, by many. And AI is the only sensible way to treat the huge amount of data we are producing every day; it helps us to handle those data.
Ben Byford:[00:04:09]
I feel like there's a load of information in there. What do you think is misunderstood about the digital revolution or the digital informational revolution in that trajectory?
Enrico Panai:[00:04:22]
Yeah, that's the very nature of this revolution. There have been other revolutions like the digital one that had an impact on the world. Take writing. Writing is a technology. We generally forget it, but writing is not something natural. It's a technology that human beings have used to collect information, to transmit information, and to save information. Now, the digital is at the same level, and it has changed the world completely. The bad point is that most of the time we are still using physical paradigms to treat the digital.
Enrico Panai:[00:05:15]
A simple example is when you make or receive an invoice from a company. It is conceived as an A4 sheet, a classical physical paper. But in reality, if you reason like somebody from the digital world, it should be understood as a collection of data: a small database and several records. We shouldn't need a printed version of an invoice, but we are still so attached to the printed form that we are creating software to print it, and then other software to scan it and reinterpret what was already digital. So we are missing something.
Ben Byford:[00:06:09]
Yeah. You're describing this digital, physical, back-to-digital thing. Some listeners will remember the fax machine. I don't know if you remember the fax machine. You had this physical paper that goes through, they digitise it to go through the phone lines, and then it comes back out of the fax machine at the other end. It feels a bit like that. And I remember when I first got a fax via email and it was like... boom. Yeah.
Enrico Panai:[00:06:37]
It's like what is... Exactly.
Ben Byford:[00:06:39]
We've mixed metaphors too much here, guys. So in that way, you think we're still grappling with being digital in a truly digital way rather than a physical way, and that these things haven't quite come together: the iconography, the human interface through which we interact with them, is still quite juvenile.
Enrico Panai:[00:07:08]
Yeah, it's not yet very well understood. But think of when we started to democratise the digital: it started in the '90s, because before that, only specialists used computers. I still remember, at the end of the '90s, helping friends to create their first email accounts. That's not long ago for such a big revolution. Just to give an example: writing is generally dated to about 2,000 years before Christ, and it wasn't democratised until the last century, when schools were created and a lot of people started to read. It took a long time. In only 30 years, we have wanted to understand a revolution that is deeply changing our ways of thinking, organising, and so on. We are still thinking in an industrial revolution, but working and living in a digital revolution.
Ben Byford:[00:08:26]
Right. Do you think, coming back to your point, that the AI part of that is part of that informational revolution, the digital revolution, or is it a sea change again?
Enrico Panai:[00:08:39]
Absolutely, I think this is the completion of the digital revolution. Basically, what AI is for is to transform data into information. Before, we used software to do it, deterministic software with logic that we decided, but always to transform data into information. AI takes a lot of data and creates models just to produce new information that makes sense for what we have to do. This is the big difference, because in the past, in the physical world, information was never separated from the data or from its support. In a poem or in any old document, you didn't see data; you always saw information. Without this differentiation, it's harder to understand what data are and what information is. Let me give an example that shows how important this is. We were trained to do addition using the positional system of numbers, so at primary school we learn which columns to add to make an addition. Now, the positional system of numbers is so embedded in our thinking that when we are making an Excel table, we put the sum at the end of the table, because our thinking is tied to the physical way of thinking.
Enrico Panai:[00:10:32]
In reality, if you think in an informational way, data should be completely separate from the total. The total should be in another sheet, not even in the same one, because totals are information. What is information used for? To make a decision, to make a choice. Data, generally, are used to collect basic, atomic information that we can then re-elaborate. In everyday life, we still keep the connection with the physical world that we are used to coping with. Even new generations have exactly the same connection.
Ben Byford:[00:11:22]
You're making this distinction between data and information. It seems to me that people are using these technologies to do exactly what you said: transform data into information, into actionable information, things that we can then do something with. We've got hundreds of thousands of pictures of dogs, and now we can make assumptions that this new picture is a dog, or whatever. It turns data into action, almost. But then you start getting things which generate more data, almost, or it's going in all different directions. Do you see that there are less useful ways of using the technology, or that people are maybe spending more time in certain directions when the tooling could just be used differently?
Enrico Panai:[00:12:19]
Yes. I think the answer lies in how we design the technologies we are using at the moment. Whenever one technology is needed to handle another technology, there is a problem in the design at the base. Let me hyper-simplify: if we are creating a technology just to click on a button, there is something missing. And we are doing exactly that. We are fed up with clicking on buttons, so we are creating technologies to do it for us. The example of the invoices I gave before is exactly like that. Any time I'm travelling or going to a restaurant, I get a ticket. The ticket was produced by a digital machine and then printed, so it became analogue. Then I take my mobile, I take a picture, and an AI system reads what was already digital. We are wasting time, just because we didn't create the right bridge for this information.
Enrico Panai:[00:13:39]
Paradoxically, today we have public administrations or universities that ask for printed tickets. They just do not trust the information. The real information is the record in my bank account. The other one can be fake, created by AI; I could even use some classical graphics tool to remake it. Everything else is fake. Still, we trust something that is possibly fake more than the real information recorded in an official database.
Ben Byford:[00:14:22]
That's really interesting. It really resonates with me, because I feel like a lot of the time people are using digital tools, and also AI, in places where it's really a system problem. It's like: well, you just need to change the way the system works, and then most of what you've just done there will go away, and we can build this new functionality on top. That'd be great. I need to think of some good analogies for that, maybe. So I was hoping we could come back to what your day job is. You said you're an AI ethicist, and I was wondering, just before we get to that, what brought you to the interest in philosophy, and then the philosophy of technology, to start with?
Enrico Panai:[00:15:10]
Yes. So I finished high school and I chose my university course. I don't know why, but the natural choice for me at the time was philosophy, maybe because I had a wonderful professor who helped me make that choice. But at the same time, I was a bit of a geek. My generation started with home computing, and I had my first algorithms book when I was 10. It was really interesting, and it felt really natural to move from natural languages to formal languages. So when I was studying at university, I felt that the digital in general was the real world that was coming. Now, in my faculty, I was the only one doing that. I had to go around other faculties to study IT, databases, networks, you name it. I sometimes had to fight with my professors to make them accept that I was taking exams that were not related to philosophy. But what I felt at the time was that philosophers had to know IT to cope with the new phenomenon, because I don't like to talk about something without knowing it. So I studied a lot of IT at the time.
Enrico Panai:[00:17:00]
My final thesis was dedicated to the alienation of human rights in communication, where I already talked about AI. I also had the opportunity to have another professor, a mathematician and a friend, who was working on AI at the time. I'm talking about the '90s. I already had a very clear idea of the difference between model-driven AI and data-driven AI, and how it could impact society in the future. But that wasn't my job for years, because, as I generally say, companies weren't ready to pay an ethicist of the digital. They became ready with the new summer of AI. That's because AI raised a big problem: the problem of the allocation of responsibility, which before was easy. You are a developer, you are responsible for the software you are developing. It's as simple as that. Now we have AI systems that have a capability to act, an agency. It's more difficult to say an AI system is responsible. We do not accept it, because as humans we like to punish responsible people, or to reward responsible people, for what they are doing. It's difficult to accept, morally, that a machine is doing something for us and we cannot give it the blame for what it is doing.
Enrico Panai:[00:18:49]
Ethics rose again 10 years ago, more or less, and companies started to pay ethicists to do their job. I would say that before my profession really became that of AI ethicist, I was working more as an ICT consultant than as an AI ethicist. I was doing ethics anyway, digital ethics, because I was designing software with my ethical background, but I wasn't paid for doing it. Then I started being paid. There are still people who do this work but rely on academia for sustainability in their job. I think there is still a path ahead to arrive at a real profession that is sustainable.
Ben Byford:[00:19:48]
Do you think we'll get to a point where that area, AI ethicist or ethicist of technology or whatever you want to call it, is more professionalised: where you have some stamp of approval, qualification, society, or standard, and everyone has agreed, that's what we're doing now, guys?
Enrico Panai:[00:20:12]
Yes. When I started to use my title as an AI ethicist... again, if I were to use a very philosophical approach, I would call myself an informational ethicist, or a digital ethicist, but AI ethicist is more sellable, more marketable.
Ben Byford:[00:20:36]
When did you start calling yourself an AI ethicist? Out of interest.
Enrico Panai:[00:20:40]
I think six years ago, something like that. Before that, I was an informational ethicist, more related to the philosophy of information field. When I started, I already had a background in consulting and advisory. I realised that in the field, as always happens, a lot of people were simply choosing to call themselves AI ethicists. In some ways that was good for AI ethics, because it showed that we needed something around AI ethics. But at the same time, companies couldn't distinguish the professional AI ethicists from the activists. Activists are very important for society; they advise companies from outside the company. But AI ethicists are more than that. I started to think about it with some colleagues, and we began to build a strategy: we have this opportunity to be AI ethicists, so we must be humble enough, and rigorous enough with ourselves, to create a profession that is recognised by companies and public administration. Because at the beginning, the majority of companies told me: we are afraid to employ AI ethicists, because they are going to judge us. They saw them as moralists.
Enrico Panai:[00:22:29]
In reality, I prefer the analogy of a psychologist. If you are a psychologist talking with a patient, you don't tell anyone outside the room what the patient's problems are. Your aim is not to be visible to the world, but to help the patient. This is what we are doing, and it's very hard. We have to understand what our role is in industry, in academia, and in public administration.
Ben Byford:[00:23:07]
I think that's a really nice way of phrasing it as well. I think we need T-shirts or something that say: we're here to help company patients succeed, or whatever, as AI ethicist people. When you go into these companies, public or private, do you find that they have this specific remit now? What I was discussing in the previous couple of episodes is that maybe some of that remit is changing, because I feel like we had maybe 10 years of things getting made by people, and now companies are procuring AI, basically. We've had this massive shift from creating machine learning products and services to having these AI tools and using them in all sorts of places. I don't know how you feel about that, and whether your work has changed because of that in the last year or so.
Enrico Panai:[00:24:19]
It depends on the company you are working with, first of all. I have a personal strategy, but everybody can have a different one. When I started to be paid as an AI ethicist, I realised that the majority of companies put ethics and compliance together. They were saying: we are ethical because we are compliant. And this is not exactly the same. We must be compliant with the law and with standards, and we may still be ethically unacceptable in what we are doing. There is a big difference between the two. Because I like to know what I'm talking about when I talk to people, I started to work on standards, to make standards, and to participate in standards organisations. At the moment, I'm in the European body for standardisation, and I'm leading a group that is developing standards, for example, for the EU AI Act, for sustainable AI, for the competences of developers: different societal and fundamental aspects of AI. I'm doing exactly the same at the international level in ISO. I learned how standards work. So now I can talk to people who do compliance and say: okay, you are compliant, but you are still not doing something ethically acceptable.
Enrico Panai:[00:26:07]
Why is that? Because, paradoxically, you have ethics in adjectives and adverbs. What does that mean? Take the EU AI Act, very famous in Europe and around the world at the moment: you have adjectives like acceptable, sufficient, preferable. How do you treat those adjectives? You can only do it with a moral or ethical approach. You have to create a board of competent people who will assess the threshold of acceptability. You cannot have a number for that, or at least you will never have a universal number for it. Any time you try to fix a universal number, you risk making a big mistake. So we have ethics in every interpretation of those rules. I like to quote a PhD student of mine, because I work with several PhD students in my company. He paraphrases Kant and says: "Ethicists without procedures are blind. But procedures without ethicists are mere intellectual play." And we have a lot of procedures at the moment, guidelines, you name it. This is what is important to know. Yes, you can say what should be transparent, but if you do not have people who understand what transparency is at these different levels of abstraction, then you are just doing ethics washing, because I'm not trusting that you are doing anything good.
Ben Byford:[00:28:14]
That's part of what you were saying earlier about buyer's remorse for companies: the difficulty of having lots of players pop up in this space over the last six years, calling themselves AI ethicists or being interested in AI ethics, the ethics of technology, or the ethics of information, and then delivering wildly different things. Some of those things might have been okay for companies, and some were just: we had an interesting time and delivered nothing useful, basically. You're trying to put together those standards and interact with all the different policies coming in, so that you can be compliant and also take compliance to a higher echelon, and be confident that you're both compliant and standing on the right side of what that compliance will mean in the future. I feel like the goalposts will keep changing, and we need to make sure we're ahead of them, especially with technology, because it keeps changing. You're creating the future, and you need to be ahead of what is acceptable in the future, or have knowledge about what might come, because otherwise you're going to shoot yourself in the foot. I just rambled at you there.
Enrico Panai:[00:29:54]
I have exactly the same moral values as anybody else in the world. It's not because I studied philosophy that my judgement is better than others'. Even more so in my profession: I avoid giving judgements, because they would put me in a very sensitive situation. It would mean I had a conflict of interest.
Enrico Panai:[00:32:01]
If I make a moral judgement, I will defend my moral judgement, so I will stop listening to everybody else. What I have to do is create the environment where the company can arrive at a moral decision with the right information, with a good rationale, avoiding any internal conflict of interest. That is my process: not giving a judgement. So the point is that, unlike what people may think, we are not judging. When it comes to judging, we are worth exactly as much as everybody else. I tell philosophers, okay, ethicists: we are not better than others at judging. At the same time, we need to be humble enough to understand that we need other observations to arrive at a good solution for a company. This is the big process we go through when we are working with companies and helping them to make a judgement.
Ben Byford:[00:33:22]
I'm interested in your organisation. Do you work with other people? Is it just you, or do you work together with universities? How does that work?
Enrico Panai:[00:33:35]
Yes, I do work with some universities. I'm teaching and doing research, for two reasons. Doing research is fundamental in our field, because so many new approaches come up every day that it's important to stay updated. Teaching is quite interesting because it obliges you to consolidate your practical knowledge for an audience; you have to explain it. This has the double effect of consolidating your knowledge and making it easy for people to understand, and then you can reuse it when you go to sell your knowledge to companies. It's a practical approach. So yes, I work with universities for that. In my personal business model, I work, as I said, with industrial PhD candidates. So far, I have had a wonderful experience with them: wonderful people, people who don't want to become researchers or professors at university, very realistic people, but who at the same time would like a higher level of competence. We work together and try to push them to take on responsibilities, to meet people around the world, to speak at conferences, or to publish papers. So far, I have had the opportunity to have really great people around me who help me every day.
Enrico Panai:[00:35:32]
They are getting wonderful offers. For example, a few of them had to stop the industrial part of the PhD while still carrying on the PhD itself, and I'm helping them with the PhD but not the industrial part, because they had wonderful offers from consulting companies. I'm so glad, for that and for them, that the work we did together could produce this result.
Ben Byford:[00:36:01]
Awesome. Great. So you've also... obviously, you do research and you write papers, you're an author. So, and this is a funny question, isn't it: is it required to make pasta while you're explaining ethics?
Enrico Panai:[00:36:21]
That's a wonderful one. Yes. I published a book, in Italian and French, explaining AI ethics to my son. I didn't want to write an essay; I wanted to make a dialogue. In philosophical culture, that's nothing new. We lost a bit of this capacity over the last centuries, but in the past it was quite common to write dialogues. Plato used to write dialogues, though mine is far away from his. It's just a small exercise in which I'm talking with my son while we are cooking pasta. Now, as you can maybe guess from my accent, I am Italian, and this has a big impact on my axiological approach, just to use a difficult word: on the values that shape my decision process.
Enrico Panai:[00:37:41]
So why is pasta important? Let's start from another point of view. If you try to offend the Pope in Italy, we'll accept it better than if you try to add cream to pasta alla carbonara. Then we have a real crisis.
Ben Byford:[00:38:05]
I think I've done that in the past. I'm sorry.
Enrico Panai:[00:38:10]
Yeah, that's quite unacceptable for Italians. Yes, we are quite strict about food, but this is interesting for a different reason. I'm a really big fan of a philosopher who is still not considered a philosopher by official academics: Robert Pirsig. Pirsig wrote two books, and in the book I wrote I was inspired by him. The first is Zen and the Art of Motorcycle Maintenance, which is the most famous. The second is Lila, which is less famous but quite interesting. Now, his lifelong research was about quality: moral quality, metaphysical quality. He did wonderful research on it. And he wrote not essays but journeys: while I was cooking with my son, he was travelling with his son in the first book. He wrote about quality. Why am I talking about this? Because there is a connection between quality and pasta in Italy. The most difficult dish we generally make with pasta in Italy is called cacio e pepe, and it's basically cheese and pepper. You don't have a lot of ingredients: water, salt in the water to cook the pasta, and then a particular cheese, and pepper.
Enrico Panai:[00:40:04]
Still, it's considered one of the most difficult dishes to prepare. That means you can reach quality with a few elements, as long as they are perfectly balanced. When it comes to ethics in my daily work, I take more or less the same approach: a few well-balanced ingredients. It might sound easy, but you arrive at the result using very complex knowledge in the background to balance those ingredients. I think we should do ethics using a few elements in a good way, not just throwing in a lot of elements at the same time.
Ben Byford:[00:40:56]
Yeah. I'm interested: was it a fictional dialogue, or did you actually talk to your son as you went along?
Enrico Panai:[00:41:07]
The starting point was a real-life question. One day, when he was 17, he asked me, "Hey, Dad, I don't know how to explain to my friends what you do." I started to explain to him what I was doing, and it began with a few evenings discussing it. Then I transposed it into something we generally do together: we cook together, we are passionate about it. I made it a dialogue around that, with another character who comes from Pirsig, Uncle Phaedrus, a character Pirsig used in his books.
Enrico Panai:[00:42:11]
Maybe one more thing about how ethics is perceived by companies at the moment. We had business ethics in the past, and it was generally more related to legal aspects than philosophical ones. Today we have AI ethics, and in the future maybe we'll have more kinds of ethics related to the digital as they arise. Generally, ethics is felt by companies as a burden: we have the ethics guy, okay, a cost, and he's slowing down our innovation and our processes. That is how it is felt. But because I have long experience in designing software, I think ethics is something different.
Enrico Panai:[00:43:04]
I like to use an example. In Italy we had a champion in archery, Marco Galiazzo. He won several medals. In an interview a few years ago, he said, "Sometimes you need a psychologist more than a trainer." This statement surprised me at first, but it makes sense, because precision is achieved through calmness. They needed psychological sessions to reach the right calmness and be more precise. I like to see ethics in the same way: as a discipline that helps a company stay on target. When it comes to AI, the complexity matters a lot. An ethicist must possess a blend of different knowledge: technical, legal, and ethical skills. But what we must aim to do is make the company run faster and still be ethical. Any time we are slowing companies down, we are maybe not using all our ethical knowledge in the right way. I would really love CEOs and project managers to understand that: okay, you have a well-prepared ethicist in the room, and you'll do your job better. You'll also earn more. Ethics is not just a cost. It's something that can help companies.
Enrico Panai:[00:44:56]
That's why we created an association to bring together prepared, professional ethicists: the Association of AI Ethicists, where we are trying to promote the right competences for them.
Ben Byford:[00:45:12]
I think that's something several people have repeated, like Olivia and, I think, Alice as well. They say ethics is part of innovation: when you're doing innovation, you can do it ethically, using ethics as innovation, basically. That's what they would say, which chimes with your point: it helps you to move faster and to understand better where you're going.
Enrico Panai:[00:45:46]
But, by the way, just one point about that. Ethics is not something that you add somewhere. It's like language: it's there. Any time you take a decision, you have an ethical approach. The point is that your ethical approach is not explicit, it's implicit. When I wrote code in the past and chose a library, or chose to collect some data, I was making decisions that had an ethical impact. The point is to be aware of the ethical impact. If you are aware, it's there while you are creating innovation. It's not something to add. It's not like salt. It's like language. It's there. You are using it.
Ben Byford:[00:46:42]
Perfect. Well, thank you, Enrico. So my final question, which might lead us somewhere else: what scares you about our AI-mediated future, and what is exciting you at the moment? I'm interested in what you're looking forward to in 2025.
Enrico Panai:[00:47:04]
Okay. I don't know. I have more the approach of a sailor: I try to adapt to the winds. Honestly, I fear the misuse of AI, even at the political level, because we are seeing it when we read the news nowadays. A bad use of AI can be very dangerous. But I'm also afraid of the overuse of AI: using it where it's not necessary. I've talked with a lot of developers who say: okay, I went through the whole process of using AI to do something, then I went back to another approach, and it was even better. It's not because it's more sellable that you have to use AI. Marketers should stop putting AI everywhere when a classical piece of software does the job better. And then, at the same time, there is the underuse of AI: we are afraid of AI, so we don't use it. Okay, those are my two main fears. And there is something I'd like to reduce all the principles to. You have some very metaphysical principles about AI in the philosophy of information, about the entropy that should be reduced or avoided in the digital world. Then you have characteristics related to AI, and everybody cites some proxy characteristics to say that a system is ethical: transparency, human oversight, robustness, accuracy.
Enrico Panai:[00:49:01]
You know them. They are always the same. I will take just one principle, only one. The principle is that any process, AI system, or even any old piece of software that you are going to put on the market, or that we have to develop, should never, ever steal even a single second of someone's life. This is my only principle. If you manage to release a new system that doesn't ask me to spend my time, that is good design.
Ben Byford:[00:49:39]
Yeah. Amen.
Enrico Panai:[00:49:41]
Thank you.
Ben Byford:[00:49:43]
Drop the mic. For me, it's like companies are in this very precarious situation: they want to do good, because they want to make a good product that people will buy, but they can go too far the other way. Well, we can create the environment where people want our product, and create the environment where people are unable to buy other people's products, and so on. You know what I mean? They can exert their power to create wealth or power for themselves over the actual needs, wants, or requirements of society. And it's weird, because sometimes you have a really different situation when you're working with the public sector or with research bodies, where that pressure is diminished. So the implicit values in that system are really different from what I see in the private sector. I don't know if that's a much simplified version of what's going on, but I think capitalism has a lot to answer for in what we're doing at the moment and how we're treating technology.
Ben Byford:[00:51:11]
I think a lot of the stuff that we talk about in AI ethics would probably disappear if the incentive structures were different. But that might be a different conversation.
Enrico Panai:[00:51:25]
But it's a good point you're raising, because with the industrial revolution we had to develop a new philosophical and political structure to cope with that revolution. We haven't developed the right structure to cope with the digital from a political point of view. Yes, there is this gap, I would say. We may be missing a Marx of the digital, let's put it like that. Not to take the communist approach, but to have a different economic approach to what we are doing. For example, we are still making contracts that say people should work eight hours per day. This is something completely tied to the industrial world. I remember a former presidential candidate in France, who also became president, who used to say that we have to work more to earn more. This is an industrial approach, because if I work better and faster, I should be paid more. The idea is that we should shift our way of thinking, but we are far away from it. That's why, for example, with my collaborators, we are working a lot, but there is no time.
Enrico Panai:[00:53:21]
This means that we are trying to reduce all possible activities to what is really necessary. Sometimes we work a lot because, happily or sadly, we have a lot of demand for our company at the moment, and I would like to have more time to do other things. So yes, we have to create new incentives for that.
Ben Byford:[00:53:53]
That's nice. It sounds like the cultural vision we were sold: less work, more play, more leisure time, better work. That would be ideal, right? That's where we want to get to, not...
Enrico Panai:[00:54:11]
It is related to my biblical, I don't know, my very religious principle of not wasting people's time.
Ben Byford:[00:54:22]
Right. Yeah. Is that your main value?
Enrico Panai:[00:54:26]
Yeah, it's always been the main value. From the very beginning, when I developed software, it was to improve people's lives. Now, you were talking about markets, but go into a university today: you have a stratification of different software, and everybody you talk to says, we are spending a lot of time just filling in forms and doing stuff that is not our work. The work of a professor is teaching and doing research, not demonstrating that they are teaching and doing research. You spend more time demonstrating than doing. There is still a big problem around that.
Ben Byford:[00:55:18]
Yeah. Well, thank you very much. With that last thing, I don't want to take up any more of your time. Thank you very much for coming on the show. How do people find you, follow you, all that sort of thing?
Enrico Panai:[00:55:33]
On LinkedIn. I'm very bad at it; I try to communicate what we are doing, but I'm very bad at doing it. I generally publish something on LinkedIn, but it's better just to get in contact with me. By the way, thank you. You have a very kind approach to the subject, and an easy one, and I like your way of asking questions.
Ben Byford:[00:56:03]
Thank you. Appreciate it. Thanks very much, and I will speak to you again.
Enrico Panai:[00:56:08]
Thank you. Bye, everybody.
Ben Byford:[00:56:09]
I was wondering if it would be useful to have a Machine Ethics BlueSky account, and whether I should be talking about our LinkedIn account, as we do post on LinkedIn as well. So if you have an opinion, please let us know: where do you find out about things? How do you interact with and comment on things these days? That would be really useful to know and would help me out. So thank you very much.
Ben Byford:[00:57:11]
Again, if you want to help the podcast, you can tell your friends, and you can support us on Patreon at patreon.com/machineethics. And if you can, leave a comment or a review wherever you get your podcasts. Thanks again, and I'll speak to you soon.