36. Metrics for wellbeing with John C. Havens

This month we're talking to John C. Havens about his work on IEEE's Ethically Aligned Design, human rights and access to data, data agency, signalling a person's values in respect of their personal data, GDP being an insufficient metric for our future, making sure no one is left out of the room when designing technology, and more...
Date: 15th of November 2019
Podcast authors: Ben Byford with John C. Havens
Audio duration: 55:01 | Website plays & downloads: 299
Tags: IEEE, Wellness, GDP, Standards, Author | Playlists: Legislation, Standards

John C. Havens is Executive Director of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems that has two primary outputs – the creation and iteration of a body of work known as Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems and the recommendation of ideas for Standards Projects focused on prioritizing ethical considerations in A/IS. Currently there are fifteen approved Standards Working Groups in the IEEE P7000™ series.

Previously, John was an EVP of Social Media at the PR firm Porter Novelli and a professional actor for over 15 years. John has written for Mashable and The Guardian and is the author of the books Heartificial Intelligence: Embracing Our Humanity To Maximize Machines and Hacking Happiness: Why Your Personal Data Counts and How Tracking it Can Change the World. For more information, visit John's site or follow him @johnchavens.


Transcription:

Ben:[00:00:03] Hi and welcome to the 36th episode of the Machine Ethics Podcast. I'm Ben Byford and this month I'm joined by John C. Havens, the author and director of IEEE's Ethically Aligned Design. We have a chat about human rights, access to data and people's data agency. Is privacy dead? Signalling people's values in respect to their personal data. And we talk about making sure that no one's left outside of the room while designing technology.

[00:00:28] Check out more episodes from us at machine-ethics.net or support us on patreon.com/machineethics. Thanks very much. I hope you enjoy.

Ben:[00:00:41] Thanks for joining me on the podcast, John. If you could just start by introducing yourself and what you do.

John:[00:00:46] Sure. And thank you, Ben, for having me, I really appreciate it. I work with IEEE, which is the world's largest technology professional organization; it's been around for about 100 years, and there's about half a million members in more than 160 countries. I should say that today I'm speaking as John, meaning not everything I say is formally board-approved stuff. Disclaimer! And since 2015, I've had the honour of leading a large A.I. ethics program called the IEEE Global Initiative on the Ethics of Autonomous and Intelligent Systems. And I'm happy to tell you more about the work that we do.

Ben:[00:01:20] Great. Thank you so much. So I was briefly introduced to you through the IEEE, and I came aboard and had a peruse of all the great stuff that you and IEEE are doing. So thank you for coming on the podcast and speaking to us. I realize that you've also got some books that you didn't push on us just then. So tell us a bit about the books that you've written as well.

John:[00:01:45] Sure. Thanks for asking. Heartificial Intelligence. The subtitle is Embracing Our Humanity to Maximize Machines. And besides just being excited to get the second book deal...

[00:01:57] It's actually my third book, but my second book deal, with Penguin. I wrote it after I wrote a series of articles for Mashable that initially started from a place of fear. Not really fearing getting killed by robots or any of that silliness, but more about how we can lose our humanity by sort-of assuming it without realizing it, kind of delegating things away. And then also really to look for a code of ethics for A.I. Because I'm not an engineer or an ethicist, naively, when I started the articles for Mashable, I was like: somebody's going to point me to an established code of ethics. And pretty much everyone kept saying the laws of robotics from Asimov is what we use. And I was like, OK, I know that's not really enough of a basis to build on, but a good place to start. So that's what inspired the book.

Ben:[00:02:49] Yeah. And what kind of time was that? The 2010 sort of time, or before that?

John:[00:02:55] No, I think it was 2012 and '13. I started doing the interviews and then the book came out in February of 2016.

Ben:[00:03:02] Right. Yeah. If anyone's ever listened to this podcast at all before, then they'll know that the Asimov laws are awesome fiction, but not necessarily practical, usable laws that we can use and want to use now, which is hopefully apparent. I feel like it's one of those things I keep hitting people over the head with on this podcast. But anyway, we'll dive into the first question, which is always: what is A.I. to you, John?

John:[00:03:31] Well, I'm guessing you probably want something like, you know, A.I. is a form of automated system...

[00:03:42] For the applications of deep learning and all the different sorts of machine learning, or inverse learning, what we normally say in our work is we talk about either A.I. systems, or we delineate by saying automated and intelligent systems. Automated is the logic that there's something that could be repeating something, that does not have a sort of self-generating algorithmic kind of basis. So that can be like, you know, setting your speed in your car is automated because you take your foot off the pedal. But no one would really call that intelligent.

[00:04:17] The other thing about A.I., and this may not be your question, but I'll take it as an opportunity: I also work as executive director for this thing called the Council on Extended Intelligence with the M.I.T. Media Lab. And the phrase artificial intelligence, we feel, has really been bogarted, as it were, by a lot of the media. And, maybe this has also inspired my book...

[00:04:43] It's either dystopian or utopian. It's going to save us or kill us. And with the narrative around that phrase, artificial intelligence, it feels like a wonderful opportunity to honour all the amazing technology, but to really start to be more careful with our language, which ironically is really a lot of what A.I. and machine learning is about: understanding and pulling apart, you know, with natural language processing, etc. So anyway, longer answer, but I wanted to give both the technical-ish and then the kind of metaphorical-ish response.

Ben:[00:05:13] Yeah, I think some of this came out when I was having a conversation on the last podcast with Maria, actually, about this idea that maybe we haven't got a very suitable language for intelligent systems. You talked about autonomous systems not necessarily being intelligent, but being part of this thing which might have a little bit of intelligence, with all this other stuff going on. Is that the kind of thing we were talking about?

John:[00:05:41] Yeah. And the word intelligence, at least with the Council on Extended Intelligence... Now, granted, we still use the word intelligence, but...

[00:05:49] Maybe this is more in the States than other places, but even using the word intelligence, versus cognition or sort of empirical data, can to a certain degree be anthropomorphic depending on the context. And that is dangerous, or risky, or at least not specific, where, you know, no one would necessarily say even certain, quote, intelligent machines have sentience or consciousness. So that's where I think it's really critical to be specific about the functionality, because saying things like the algorithm is self-generating, or whatever the best terms are there, that's just sort of the definition: cool, that's what it does. Versus once you attribute a word like intelligence, people in the room who may not know the space equate that with being the same type of modality as human intelligence. And then again we go back to the really messy kind of phrase, artificial intelligence, which is a lot of what we're trying to pull apart and delineate in our work.

Ben:[00:06:51] Yes, it's almost better to communicate, like you were saying before, the capabilities of the system you're talking about.

John:[00:07:00] Yep.

Ben: Sweet. With the research that you did for Heartificial Intelligence, was that a lot to do with roboethics and this idea of where we should and shouldn't use this kind of technology?

[00:07:17] And where is it going? I know that you kind of had ideas about, you know, possible future scenarios and that sort of thing. Can you say what the extremes are there, I guess?

John:[00:07:28] Sure. And thanks for mentioning that. I opened each chapter up with fictional scenarios, largely because that was a way for me to work out on a personal level how I would deal with stuff. So, for instance, in real life I have an actual daughter, and in the book the family is fictional, even though, like mine, it has a son and a daughter and a wife. So those are the characters; however, that's the only thing that's similar to my actual family. FYI, that's not just a disclaimer, it's true. But my dad passed away from Parkinson's in 2011. There is a pretty simplistic, I guess in one sense, I forget what you call it, metallic node or something, and a friend of mine had this put in herself: a little circuit that can go into a human brain and sort of creates an electrical synapse, which apparently can slow or diminish the onset of dementia. And that's not that new a technology, it's from like 2008. If my dad had been able to physically handle it, and the technology had been available before he passed, even though he would have had, like my friend did, a couple of wires coming out of the back of his head attached to batteries, which is fairly cyborgian, if someone had said this will mean you get your dad for a couple more years, or he won't suffer as much, or it might improve things, whatever, my answer would have been: put it in, period, if he wanted it. So the opening vignette of the book is a doctor saying to the fictional version of me: if your daughter doesn't have this, she'll die. I wanted to push myself beyond a lot of my comfort levels and ask: with aspects of technology literally becoming part of us, whether it's cyborgian or transhuman or whatever, how do I feel about these things, and what bothers me, and why? And that doesn't mean I've by any means made all those final choices about those things, but I also wanted to give credit to the idea of asking: what do we want?

[00:09:31] We fear death. We fear losing our loved ones. And a lot of the book, too, is me trying to deal with that stuff on a personal level. Then there's also things at the where-we-are-now level, meaning beyond talking too much about A.G.I. or A.S.I., which I'm happy to do.

[00:09:51] But I just mean right now, today, there are no experts that I know of who would say, look, there's artificial general intelligence. Certainly they wouldn't be able to say there's artificial superintelligence. So where we are today, mostly what my focus is on, in a lot of our work with IEEE, is to say: outside of what may come, what is here now are things like lack of human access to data. A lot of the conversations about whether robots should have rights, etc., ignore the fact that humans can't access their data, which means the human rights aspect of people not having that access is, to me, the first discussion that has to happen before, you know, those potential things. And then the other aspect is the financial underpinnings of how data is collected and, frankly, how technology is done in general. In my book, I talk a great deal about data rights and what I call data agency, giving people, like, blockchain-type things.

[00:10:50] So it's not just GDPR: they can exchange their own data, with their own terms and conditions. And then the other thing is wellbeing economics, which, much like New Zealand is doing now and Bhutan has done for years, is saying we can't only use exponential growth in financial productivity as the one metric of societal success, or of any of the tools we've built, including A.I. If the single key performance indicator of success is going to be exponential growth, then with these amazing tools, in one sense, exponential speed and growth: done! But the unfortunate thing is that that's really only great for a single bottom line. When it comes to the planet and people, where we're really screwed is if we don't recognize that, you know, the finances are working for four or five percent of the planet only. GDP where it stands now has to be extended for A.I., or A.I. will only serve a few.

Ben:[00:11:44] So we've got these ideas of kind of alternative metrics to GDP. And this idea that there is kind of low-hanging fruit, almost, where we need to do something about the public's access to data, and maybe something to do with literacy around digital stuff, so people can actually do something with that data, or government platforms. Like some way of saying: you've got the use of this data, and that's going to be useful, and you can actually use it in some way. Is there work going on in these two areas, either at IEEE or projects that you've seen?

John:[00:12:26] Yeah, great questions. I'll start with the data. The thing about data, and I know it's minor to say it's confusing, but even the word privacy, there's a whole seven hours of show we could do, right? What does it mean in America? What does it mean in the EU with GDPR? And it has a lot to do with values, right? Meaning how Ben shares his data and how I share my data may be two different things. Data agency, I'll use that term here, is what we talk a lot about in our paper, Ethically Aligned Design, which is free online.

[00:13:01] It's Creative Commons, and that chapter I would recommend; I'm very proud of it. I was co-editor of it, but mainly because Katryna Dow, who I was co-editor with, is fantastic. To answer your question: there's a massive ecosystem of companies that are creating the technologies, and then there's a lot of policy being done, around this idea of giving clarity and parity with regards to data exchange. So I say clarity and parity because, let's put aside privacy for a second. Ben and John are two people, we're in the UK and the States, and we could get 10 other people here, and we may all have very different thoughts about sharing our data. Someone may be like: ah, privacy, privacy is dead, I want to share my data with anyone. Someone else may be like: well, I'm a mom, I'm very conscious of my data choices. That speaks to 10 different opportunities to have an infrastructure which lets each of them share their data how they would like.

[00:13:55] However, the massively obfuscationist, and largely erroneous, message is that privacy is dead, the horse has left the barn, so don't worry about it. Whereas the answer is: for 10 years or more, and I used to work in this, I was an EVP at a top 10 PR firm, the infrastructure for the advertising industry and for any technology firm to track our data from the outside in has already existed.

[00:14:19] It's there. It's not like, oh, we can't build it. It's done, right? So the issue is that a lot of people don't want us to be able to have a secure sort of data store, which means we'd have not just PII, our most precious data, but, like, our own terms and conditions. If we have our own terms and conditions tied to our identity, then that means Ben can share his terms and conditions at an algorithmic level at all times. And by the way, that's a better opportunity for business than what exists now. Why? And I say this from experience working with huge brands like P&G and HP, etc. Except for the people controlling the pipes, right, telecoms and Facebook and whatever, everybody else wants to reach consumers or customers or stakeholders directly, through surveys, whatever else. Tracking is now a lowest-common-denominator thing, where I can get Ben's data about your actions from about 14 different places. There's no opportunity for competition there anymore. There's no opportunity for innovation. Saying I can track Ben faster and deeper, that's all there is. So there is maybe some. But the real winning opportunity is to say: OK, we have people we can track Ben's actions with, but how do we hear from Ben? And the only way to actually hear from you at an algorithmic level is to encourage you to create. And by the way, we do have standards focused on this at IEEE; we have one about creating universally used personalized terms and conditions. Now, the thing about terms and conditions, the reason, is clarity and parity. Clarity means you get to sort of figure out at a values level: yeah, I don't care as much about this, but with medical data I think this, and with these things... And I've envisioned it as being like 10 sort of pull-down menus, similar to the work that Mozilla has done with their ad blocking, which is about signalling. Signalling doesn't mean you don't want to work with brands.

[00:16:17] Signalling doesn't mean you aren't going to be available for government surveillance issues. It simply means that you are now given the clarity and the truth. The truth is, if you want to have agency and to have your words spread the way you want them, especially when we get to the time of full-on, you know, augmented and virtual reality environments, you have to be able to enter into those environments saying: hey, I signed your terms and conditions, I get what I'm going to do here, but here are mine. Now, I've been saying this for years, and people are like, well, the big companies aren't gonna honour these things. My first answer is: that's not a reason to not do it. That's like saying, let's not honour human rights because that country's not going to honour them. Ah, the horse has left the barn for human rights; ah, human trafficking, it's really extensive. No. It's a right.

[00:17:07] Right. It's not just democracy. It's not just a Western ideal. The logic is that we as humans are moving into a place where we're trapped by thousands of algorithms. We simply need to be able to speak from the inside out. Now, brands: there's cost-of-acquisition savings, there's massive savings on advertising spend. And more importantly, I'm not saying eradicate the outside-in; I'm saying add the inside-out. The other thing we have is a standard called IEEE P7006, which is about creating this whole new paradigm of algorithmic agents. What that means is, in the same way that, if you're blessed to have enough money for this, one has a financial advisor that says put this money here, put this money there, an algorithmic agent is essentially that. It's not a doppelganger of Ben or John, right? It's not an actual, like, avatar. It is a functional agent, like a legal representative in one sense, where if an algorithm contacts Ben's algorithmic agent, then you're notified the same way your lawyer would call you and say: hey, something happened that you asked me to let you know about. This is simply how we must move into the future. It's quite binary: either we have these things or we don't, and where we are now, we don't. And it's critical. I know Paul Nemitz, who's a friend, and Sandy Pentland at M.I.T., two of the primary makers of GDPR, right. GDPR, privacy by design: incredible. But all of those things are us as people saying we hope and trust that government and business will protect our data.

[00:18:45] Right. The entire other side of the equation is us having a tool to speak. And right now, ostensibly we can go to a voting booth if we have that privilege. We don't have that in augmented and virtual reality. Anyway, I'll pause there before I get into the economic stuff. Did that make sense?
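To make the algorithmic-agent idea above concrete, here is a minimal Python sketch of a personal agent evaluating incoming data requests against someone's own machine-readable terms and conditions. It is only an illustration of the concept John describes, not the IEEE P7006 specification; every name, class and policy in it is hypothetical.

```python
# Illustrative sketch only -- NOT the IEEE P7006 specification.
# All names, classes and policies here are hypothetical.
from dataclasses import dataclass, field
from enum import Enum


class Policy(Enum):
    ALLOW = "allow"    # share freely
    NOTIFY = "notify"  # share, but tell me (the lawyer-call analogy)
    DENY = "deny"      # never share


@dataclass
class PersonalTerms:
    """One person's terms and conditions, e.g. set via a handful
    of pull-down menus at a values level."""
    owner: str
    rules: dict = field(default_factory=dict)  # data category -> Policy
    default: Policy = Policy.NOTIFY


@dataclass
class DataRequest:
    requester: str
    category: str  # e.g. "purchase_history", "medical"
    purpose: str


class AlgorithmicAgent:
    """Functional agent (a representative, not an avatar) that
    answers requests on the owner's behalf."""

    def __init__(self, terms: PersonalTerms):
        self.terms = terms

    def handle(self, req: DataRequest) -> Policy:
        policy = self.terms.rules.get(req.category, self.terms.default)
        if policy is Policy.NOTIFY:
            # A real agent would ping the owner's device here.
            print(f"[agent] notifying {self.terms.owner}: {req.requester} "
                  f"wants {req.category} for {req.purpose}")
        return policy


if __name__ == "__main__":
    ben = PersonalTerms(owner="Ben", rules={
        "purchase_history": Policy.ALLOW,  # "I don't care as much about this"
        "medical": Policy.DENY,            # "with medical data: I think this"
    })
    agent = AlgorithmicAgent(ben)
    print(agent.handle(DataRequest("BrandCo", "medical", "ad targeting")))
    print(agent.handle(DataRequest("BrandCo", "location", "store offers")))
```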

Ben:[00:19:04] Yeah. Yeah, that's great. So this is the machine-ethics.net podcast; we like to talk about all these subjects, and we've touched on this personal data kind of thing which is emerging, and, like you're saying, the two sides of that coin. And I guess we're really doing this because we live in this digital age, with all these digital systems that we want to interact with, and we want to hopefully interact with them in a way that is reflective of our terms, like you said, our beliefs. And, you know, maybe we just don't want to interact in a certain way, and having the keys to the castle to be able to do that is useful. But also, we have these algorithms which are somewhat semi-autonomous, somewhat intelligent, you might say, although we might have different language for that. And then we get into this model of, we've got all this data and we're making kind of digital versions of people. Is this kind of activity hopefully going to mitigate some of that almost worrying trajectory of marketing companies trying to turn us into homogenous, like, digital-artefact humans, almost? Is that something that you feel will dissipate if we have some of these things introduced?

John:[00:20:25] Oh, very much so. And I give the example a lot of tracking one's actions: you can probably know who someone's talking to, what they eat, what brands they like, etc. But when I speak, for about the past year I've been saying: what about if someone is gay?

[00:20:41] What about if someone is Jewish? What about if someone's a Democrat? There's a good chance that you could say, well, tracking actions and what people buy: this person is gay. But I say this a lot: instead of the question 'what have you got to hide?', which really irritates me, because everyone has things to hide, right, like the pictures of my wife, I don't want you looking at my kids in my house through the cam; it's not about hiding, it's about protecting, right? I like asking the question: what is ours to reveal? And in, quote, 'real life', if someone is gay, the decision about when they come out is up to them.

[00:21:19] Right. As a parent, I feel that way.

[00:21:23] And I think about an app or something, because kids, you know, pardon my language, but we're real dumbasses when we're younger, especially guys, you know, could point an app at someone to go: whoa, you're gay. And especially with suicide and young people these days, think of the amount of power that is given to not just social networks, but the algorithms behind this: well, you must actually be, insert subjective truth here.

[00:21:50] That's not for someone else to say. And even if it is, what happens if someone says: well, this app is 99 percent accurate, so you must be lying? What does that do to someone? And on top of that, it's like: what if you say, I was Catholic and I've now made a decision to become Jewish? If I told that to you, Ben...

[00:22:12] I mean, we're just getting to know each other, but you'd be like: oh, if you were interested, tell me about that. What made you make that choice? What does that mean? But you wouldn't go: no, no, no, I've been observing your actions and there's no way you're not Catholic anymore. And I'm like: no, I know, because now I'm converted. And you'd be like: no, no, no.

[00:22:29] Because all the sort of things to sell me, the things to impress on me, or to manipulate, which is the negative side, or to nudge, have already been based on one thing, and there's the sense of: my signal is mine. But from a brand standpoint, you know, I used to work with Gillette and stuff. And you're a well-groomed guy, and I mean that literally, you have a beard and moustache, well done. I know more about male grooming than anyone should, but the cost of acquisition, in terms of finding someone online, which is actually the cost of CRM, right, cost per thousand, is so high that things change if I have a good connection with you and you have these terms and conditions. The thing about terms and conditions is it's a handshake; that's the power behind it. The reason that it will work, or can, is because it's not just you saying, like, I don't want to get surveilled in airports, right. For those things you might get a message back that says: sorry, man, if you want to get in this airport, you're going to be surveilled. But you can put an actual marker there through augmented reality, AR, meaning it's invisible, but a marker saying: I protest. You can have your own set of documentation saying here are all these places, and you can raise your hand, in the sense of being a citizen of the world, and say: FYI, as Ben, as a citizen of the UK or whatever, these are my thoughts. And on your own, is it going to make a difference? I think so, for you. But then in aggregate, will it start to make a difference? Of course it will.

[00:23:59] But the brand, they're gonna go: wait a second. The first point of contact with Ben is an ad or whatever else, and he says, I don't want this type of thing, and they honour it. Think about that. They honour it: OK, Ben doesn't want that, ever. And you know that on some level. But their message still isn't going to change.

[00:24:17] They're gonna go: we want to sell you a razor or whatever. Right. Which is fine, and you may like the razor. So now, if you say 'I'm ready for that handshake', they could do some kind of peer-to-peer blockchain exchange of data. Where, by the way, a lot of brands are also moving forward in this space, financial, insurance and other companies: they don't want tons of data about you, because of GDPR. They want to delete all that stuff. But now that handshake, and that building of trust, means their cost of acquisition for you goes way down; in aggregate they start to save and make money. It's a business decision as well as a way to move forward. And I talk about virtual reality a lot, because I think it's in the billions now, people gaming, especially, you know, multiplayer gaming. This is the future. Some of the future; the future for some people, maybe not the other four fifths of the planet, but the first fifth that's doing it. They're used to taking an active, avataristic self and being in a gaming environment for a number of hours. For other people, who are not used to AR and VR, etc., the next three or four years are just going to be crazy. And after you play a game for like 17 hours, and you're only kind of getting out of the game to use the bathroom or eat, the questions of agency and emotion and identity become a very different thing at a therapeutic level. And these are also sort of tethers, you know, back to: well, at least in the real world, I'm John, I have kids, and this is my identity. So then those terms and conditions are protection, right? Even in a gaming environment. And there's so many unfortunate examples of especially women being abused in games, where, we have these protections in, quote, real life, but they don't exist yet in this aether that we're creating. So that's the other real wonderful opportunity. And again, all these things I'm saying are not just, like, the ethics guy going 'oh, I'm so worried'. It's a wonderful opportunity to not just protect people, but to provide communication channels where humans have the same level of parity that the technology does.
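As a rough sketch of the 'handshake' John describes, where a brand drops outside-in tracking data once a person opts into a direct, consented exchange, here is a toy Python illustration. The brand name, dictionaries and method names are invented; a real system would involve identity, consent receipts and possibly the peer-to-peer exchange he mentions.

```python
# Toy illustration of the handshake idea -- all names invented.
from dataclasses import dataclass, field


@dataclass
class Brand:
    name: str
    tracked: dict = field(default_factory=dict)    # person -> bought 3rd-party data
    consented: dict = field(default_factory=dict)  # person -> volunteered data

    def buy_tracking_data(self, person: str, data: list) -> None:
        """Outside-in: the status quo -- data 14 vendors can all sell."""
        self.tracked.setdefault(person, []).extend(data)

    def handshake(self, person: str, shared: list) -> None:
        """Inside-out: the person volunteers data on their own terms;
        the brand deletes what it hoarded (less GDPR liability)."""
        self.consented[person] = shared
        self.tracked.pop(person, None)


if __name__ == "__main__":
    brand = Brand("RazorCo")
    brand.buy_tracking_data("Ben", ["visited grooming blog", "bought razor"])
    brand.handshake("Ben", ["interested in beard care", "no surveillance ads"])
    print(brand.tracked)    # {} -- hoarded tracking data gone
    print(brand.consented)  # Ben's own signal, on his terms
```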

Ben:[00:26:24] Great. Well, I feel like you're projecting... it's almost a pseudo-future, where we're already doing some of this stuff, right, and hopefully with this other stuff we can make what we're doing better. Do you have an idea of what the opposite side of the coin to that would be? Maybe we don't do some of this stuff and we carry on; what does that darker side look like?

John:[00:26:49] Sure, unfortunately. The darker side is sort of where we are now. Like, you know, we're talking over Skype, and your listeners won't know that, but I see, you know, a well-groomed young guy from the UK. But you could be a deepfake, right? That technology has gotten so fast, so good, and I've seen quite a few services that are offering the aforementioned avataristic versions of ourselves. Most of them are still kind of at the cartoon level, but it's only a matter of time before people take on other people's identities and start making Skype calls. And then again, without a trusted identity structure, where if we were talking I'd get some kind of text saying 'this is not Ben, this is not the guy who actually hosts this podcast', we don't have those protections, which in other parts of my life we already have in different ways. So there's that. And then the economics, which I wanted to touch on because I didn't before. The economics are not built to help people, period. They're not built to help individuals. They're built, by and large, where it's a legal mandate, to increase exponential shareholder growth. Now, I always use the word exponential because there's nothing wrong with paying shareholders; these are the people that create innovation, you know, get companies going. And also, it can be a hard line to know what is, quote, 'good growth' and what is 'I have to maximize shareholder growth', right. But GDP, when you really get to know it, which I have over the past year: what it doesn't measure is critical, and that's things like caregiving, or the environment, in the same way it measures financial returns. There are a lot of shortcomings to GDP, and it's not about saying GDP is horrible; it's saying it is insufficient in and of itself. I tell this story a lot:

[00:28:48] When I was 16, I worked at a record store, which dates me, but my boss was like: how much money did we make this week? And I'm like: two thousand dollars, we sold two thousand dollars' worth of records. He said: nope, that's gross profit. He said: now take out what I pay you, and take out what we pay for rent, and these other things. And I was like: damn, I guess we made six hundred bucks. And he's like: right, that's net profit.

[00:29:10] Well, that's like double bottom line accounting, I forget the exact term; it's an accounting term which means you're factoring in both sides of that coin. GDP oftentimes doesn't. So if an ocean liner crashes into an island and oil spills out, well, what GDP measures is only the jobs of the people who were hired, and the, quote, increase in productivity because there were more jobs that year. Or, in a traffic jam, oil consumption goes up and GDP goes up, but no one's going to say that human wellbeing increased during a traffic jam. So rather than trying to pick on GDP in itself, it's critical to point out that if that's the single key performance indicator that both governments and businesses are held to, we are screwed.

[00:30:01] Period. And it can't just be like, well, let's do triple bottom line, where really what we mean is every quarter it's still maximizing shareholder growth from the money and fiscal profits, and we'll try to do cool things for the climate and be nice to people. If the three of them aren't equal, and there aren't metrics to point to, to say we've hit these numbers, then businesses also aren't given a solution, an opportunity to say: corporate social responsibility-wise, we can say we did whatever we're supposed to legally do to keep the planet safe, and that's not all we're doing, we're doing X amount more. And this is companies like Patagonia, companies like Danone. These are companies, in the States there's something called B, as in Boy, B Corp, where people are changing their legal structure to say: we want the world to know that these things are so important to us, we are changing our financials to have this triple bottom line mentality. That is the only way we're gonna get to any place where these technologies are actually going to help humans holistically, long term.
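The gross-versus-net point from the record store story, and GDP's one-sided accounting in the oil-spill and traffic-jam examples, come down to a few lines of arithmetic. The record store figures are from the story; every other number below is invented purely for illustration.

```python
# Gross receipts minus costs = net profit (the record store story).
receipts = 2000
wages, rent, other = 900, 300, 200  # illustrative split of the $1400 costs
net = receipts - (wages + rent + other)
print(f"gross ${receipts}, net ${net}")  # gross $2000, net $600

# GDP-style accounting after an oil spill (figures invented).
cleanup_wages = 50          # counted by GDP as extra jobs / "productivity"
environmental_damage = 400  # not counted by GDP at all

gdp_change = +cleanup_wages                        # GDP goes up...
net_change = cleanup_wages - environmental_damage  # ...wellbeing goes down
print(f"GDP change: {gdp_change:+}, net change: {net_change:+}")
```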

[00:31:08] A good example, and I may have mentioned this here: in New Zealand, they have this wonderful new wellbeing budget. And when you look at the word wellbeing, it can be confusing; people think it means mood or happiness. It doesn't, at least in these economic indicator terms. It means that along with the financial terms, right, the GDP doesn't go away, you're also asking things like: what about caregiving? And caregiving is largely about women, because women around the world are the main caregivers.

[00:31:35] But when you think about it, it just doesn't make sense not to count that. People who raise consumers, and I hate that term, but if you raise a consumer, as a woman or a man, for 15, 18 years, you are creating a new revenue stream, in one sense, for that country. So there's something called the genuine progress indicator in the States that just has a simple formula that says: OK, for the person living at home, X amount of money, they did this, and they add it in. It's not rocket science. It's not like changing the nature of capitalism. But once you recognize those caregivers, you also recognize they take on work that speaks to a lot of sense of purpose, et cetera. Anyway, the New Zealand budget has things like caregiving, the environment, climate, education, all these things along with the financials. But the budget, right, the money, is stemming from, and they're keeping performance indicators for, all those different things. Much like the SDGs: there's 17 of them, right, it's not just three; we have to hit all 17 of those by 2030.
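Here is a sketch of the shape of a GPI-style adjustment. The real Genuine Progress Indicator combines a couple of dozen carefully defined components; this toy version, with invented numbers, only shows the principle: unpaid caregiving is added in, environmental and social costs are subtracted.

```python
# Toy GPI-flavoured index -- numbers and component choice invented;
# the real indicator uses many more, carefully defined components.
def gpi_like(consumption, caregiving_hours, hourly_value,
             environmental_cost, inequality_penalty):
    """GDP-adjacent index that ADDS unpaid caregiving and
    SUBTRACTS environmental and social costs."""
    nonmarket_benefit = caregiving_hours * hourly_value
    return consumption + nonmarket_benefit - environmental_cost - inequality_penalty


gdp_style = 1000  # consumption only, nothing added or subtracted
adjusted = gpi_like(consumption=1000,
                    caregiving_hours=20, hourly_value=15,  # +300 of unpaid care
                    environmental_cost=250,
                    inequality_penalty=100)
print(gdp_style, adjusted)  # 1000 950
```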

[00:32:38] So the other thing about New Zealand, and I will keep talking about them, is they've taken their wellbeing budget and now they're tying their A.I. roadmap to that budget. In our case, with Ethically Aligned Design, we have a chapter on wellbeing, and we have a standard called IEEE 7010 which is advocating the same thing. Which is: look, what a wonderful opportunity for people creating these amazing technologies to know that, along with keeping people safe, avoiding risk and bringing value fiscally, say with an autonomous vehicle, when you build it, let's look to the Happy Planet Index, let's look to certain SDGs, and not just say 'A.I. for good' in a general sense, which is fine, that's better than A.I. for evil. It's to say: here are the numbers that can show how much we will decrease carbon emissions by having AVs in X city, because we remove all these parking lots and have more green space where we can plant trees and eradicate carbon, and on top of that we'll have that many fewer cars on the road. There have also been a few studies about AVs where, when families can turn all the seats to face each other, it's almost like the reoccurrence of the dinner table. In the U.S. there are many reports showing that, ever since the dinner table hasn't been as much of a priority, the lack of families sitting around a dinner table together can apparently be directly correlated, in different reports, to an increase in drug use, even suicide, for kids. So that might sound somewhat fanciful, like, I don't know. But it's also spreadsheet-able: people not hurting themselves, etc., saves money, and I hate talking about it that way.

[00:34:17] But all this is to say: if we're able to have discussions about systems maybe becoming sentient, then by God, and I mean that as a prayer, not a swear, we certainly need to be able to say: why can't we innovate on a metric that was created 70 years ago and was never intended to be the only primary metric of success for society?
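Putting the last few points together, one hedged sketch of what 'tying a technology roadmap to wellbeing indicators' could look like is a simple scorecard, where a project passes only if it clears thresholds on all three bottom lines, not just the fiscal one. The indicator names and thresholds below are invented; standards like IEEE 7010 define real, far more careful measures.

```python
# Toy triple-bottom-line scorecard -- indicators and thresholds invented.
THRESHOLDS = {"profit": 0.0, "people": 0.0, "planet": 0.0}


def approve(project: dict) -> bool:
    """Pass only if ALL three bottom lines clear their thresholds."""
    return all(project[k] >= THRESHOLDS[k] for k in THRESHOLDS)


av_rollout = {
    "profit": 1.2,  # fiscal return
    "people": 0.4,  # e.g. commute time reclaimed, safety gains
    "planet": 0.8,  # e.g. parking lots converted to green space
}
print(approve(av_rollout))                      # True: all three clear
print(approve({**av_rollout, "planet": -0.5}))  # False: fiscal-only thinking fails
```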

Ben:[00:34:39] I wholeheartedly agree with all of those things. Do you think, and this is kind of a pessimistic view maybe, and I'll just latch onto one of the things you said, which is to do with automated vehicles, or this idea that if we use this technology we can create some good, to do with wellness, not just economic growth. You gave the example of autonomous vehicles that could free up green space and parking lots and stuff. There would be a pessimistic view that that kind of space would get turned into housing or commercial use, and we actually wouldn't end up with some of that new green space. Is that down to, again, those same structural metrics that are in play, and whether they will enable us to get to that sort of future? Or do you think, as I do, and here's a sneaky opinion snuck in, that maybe there's something more that we can do with capitalism itself, or the way that we think about commerce? Go!

John:[00:35:44] That's great stuff. Me personally, again, I'm speaking as John not as IEEE or others.

[00:35:49] But, you know, the words capitalism and socialism understandably have such emotional resonance. In the West, I think, or maybe it's my experience in the States, you say certain things and the word socialism comes up, and then you either think of Stalin and regimes, or China. A lot of these terms just don't serve us anymore in the current zeitgeist, except to make us angry at each other. Whereas there's this idea I love called Ubuntu ethics, which we talk about in our chapter on classical ethics in A/IS in the book Ethically Aligned Design. Yes, I'm pitching it, but it's free and it's Creative Commons. So, Ubuntu ethics. I'm not going to do it justice, but it's this wonderful idea that my wellbeing doesn't begin until yours starts. Instead of being so focussed on the kind of monetary aspects of capitalism, etc., it's more a sense of, I think, just common sense, right? So much of modern Americanism is about isolationism, about individuality. It's about: my power comes from me and from within. And I'm not putting that down, but in and of itself, that's sort of, like, you know...

[00:36:56] I'm 50, so for most of my adulthood I grew up with, like, Clint Eastwood, you know, resetting his own broken arm against a tree and screaming on his own, and the sort of myth of the power of the individual. And frankly, if you study positive psychology, which I have and talk about a lot in my last couple of books, and just common sense: of course personal empowerment, self-improvement, the power of the individual, these are critical things. But doing it in isolation, positive psychology shows, leads to a lack of ongoing wellbeing, or what's called flourishing. And, I hate the terms developed and not-developed countries, I think they're frankly horrible terms that should be changed.

[00:37:38] But you see, quote, 'developing countries' where the family structure is still not just maybe a nuclear family, but aunts and uncles too, like Costa Rica and different places around the world. In Denmark, you know, they have these amazing housing set-ups where people have individual apartments, but they're linked to a communal kitchen. In the States, what's interesting is churches are not looked at as socialist, because they're framed as religious institutions. But you go to a good church, or a temple, or, you know, a Buddhist temple, it's a lot of people in a neighbourhood or a community coming together saying: let's support each other. Now, for me, at least, to bring that back to the economic stuff: I think of the word commons, I love the phrase 'the commons'. And there's this idea: is there the possibility of sharing, of taking the abundance of the few and giving it to the many? When you say that, then immediately: oh, you're a socialist, and it means, you know, people parading into rich people's homes and stealing their stuff. Well, my answer is no; it's whether it's Social Security or whatever else. And this is where it starts to get political, of course.

[00:38:53] Right now, it's just, you know, not to sound maudlin, but it's the 4 or 5 percent of people owning more than half the world's wealth.

[00:39:02] Where has that led us? Right? Is trickle-down true? My answer is: of course it's not. There's nothing true about trickle-down. People can want it to be true, that's their choice, and they can say it is, and then politically, whatever. But the climate is where it is now. And just yesterday, The Washington Post had a new study about suicide increasing; I forget, it's 40 percent since last year. I think anyone who wants to say a society can be measured in any way that doesn't take into account a massive increase in suicide, especially among our young people... This is for the States; I can't speak for the rest of the world, but I know suicide is pretty common in Japan, right, another quote 'developed nation', using an air quote for your listeners. Why are we saying that we're developed?

[00:39:51] Why? Because of money. Period.

[00:39:54] Or all the other financial framings, like 'more economically developed'. But yet, take a family who may be living in what we would call squalor. And I'm not saying that that's OK; I'm not saying that because they have a family structure and they love each other, it's OK that they're living in a high-risk area and don't have potable water. Not at all. Maslow is the floor; human rights are the floor. But if we can look to them and say: why is it that their long-term flourishing is so high? Why is it that they seem to have joy every day, doing work that is, quote, 'simple', that apparently automation is going to automate away? Why are we wildly messing with something that doesn't seem like it's broken? And why can't we have the humility to ask why we are not striving for the emotional and familial connections that bring us back to a commons of loving each other first? Why not look to these people, whoever they are? And I'm also not trying to sound wildly condescending or simplistic, like everyone in Costa Rica's happy; no, that's not my point. My point is that there you have metrics, whether they're written down or understood as part of a tradition, that say: this is what brings long-term joy, and this is the priority, like with Ubuntu ethics. And that is at least... it's not perfect, it's not like people don't have wars, there's no utopia. But what we have, certainly in the States: no health insurance. College grads graduate with thirty thousand dollars in debt. The education system is largely about teaching to tests. It also does a lot of wonderful things; I'm not saying the opposite of those things, too, I want to be fair. But what a wonderful opportunity, especially with these amazing tools, autonomous and intelligent systems, machines.

[00:41:39] What about, instead of just kind of running in there and saying, well, here's how we can do better than humans? Because that's the big part of the narrative that I, and we, hate a lot: the us-versus-them, or even the us-complemented-by-them, air quote 'them', in the sense of helping and being complemented. Great. But it's not immediately understood that we could be complemented to death, literally, or complemented to the point where we have no worth. If the idea is that anything that can be automated will be, because there's money involved, then those other two things, the planet and people, are not prioritized at the same level. If they were, then on the automation side of things people might more often say: look, to your point, with housing in this area, maybe we should keep fewer, or not have as many houses, because we really need the green space. Or the answer might be: what a wonderful opportunity to use some automation, but give humans jobs and really now eradicate fossil fuels and bring in green tech. So all these new apartments, although we're losing green space and maybe a park, these hundred units will be LEED certified, you know, solar powered, wind powered, and now they'll actually power not just their own neighbourhood but the entire city. And so it just means more urban planning; it means more foresight, where the short-termism is what is killing us and will kill us. And the A.I. and all the technology... we're very positive about technology; the tagline for IEEE is 'Advancing Technology for Humanity', which to me also includes the environment. That part of the tagline I love and adore. The tech is awesome; I'm a geek. But to your point: just to build stuff on green space without analyzing, should we, or can we? And is it just about short-termism and exponential growth? If those questions aren't asked, then things will always revert back to short-termism. And that, again, is what will kill us.

Ben:[00:43:48] So I feel like you've paved the road forward for us. If there were a suite of people listening to this conversation, let's say policymakers and technologists, what would you say to those different camps of people that you haven't already said?

John:[00:44:08] Sure, that's a good question. It's also very hard.

[00:44:13] I think the first thing, for policymakers, is to say, respectfully, because I'm not a policymaker: what a wonderful opportunity to get real, and to address some of the elephants in the room. I first heard this idea of corporations avoiding paying corporate taxes when I was in South Korea at a wellbeing event with the OECD, and Jeffrey Sachs gave this amazing, very intense talk about corporations not paying their corporate taxes and avoiding taxes by going offshore, etc. So here's the question.

[00:44:53] I just don't know, in one sense. It's like, if everybody knows something is illegal, right? Offshore tax havens, like, you know, everyone sort of knows that they're there, right? And then I realize how wildly ignorant and naive I am by saying this, in one sense. But it's also like: why can we talk about robots becoming real in one conversation, and that's innovation, don't hinder innovation, that's going to happen; and then over here there's something where everybody knows everyone's breaking the law?

[00:45:22] And rather than be mad about it, it's easy to be mad.

[00:45:25] My question is just: can we change that? Like, how about this: you're paying five million dollars to do all that moving of stuff around, so maybe you pay three, and that two million goes to the planet. I realize how wildly naive that is of me to say, right. But I also feel like: your system means that all that capital doesn't go back into the system.

[00:45:51] I'm not an economist, but I get it enough to know, especially after 2008, living in the States, that all that money being hoarded means it doesn't go into the system and help anybody, except, again, that four or five percent. So that's the 'why it won't happen', or it may not happen; but my answer is, why couldn't it happen? And that's where it's like: can we at least talk about what might sound like ludicrous solutions? Because the real ludicrousness is the status quo. The status quo is trickle-down, or we want to call it older capitalism, whatever, GDP, whatever we want to call it; it's not working for the majority of the planet. So it's going to get to a point, whether it's something like Extinction Rebellion, you know, the peaceful protest situation up until a couple of days ago, or whether it's going to be the immigration issue, of people saying: what choice do I have? Me and my family will die. And I'm not talking about my family; any particular family may die if certain structures don't change. How long can that be endured? And I'm not advocating, I would never advocate, violence. This is why Extinction Rebellion, and Martin Luther King, Gandhi: peaceful things.

[00:47:05] Anyway, there's that. Then climate is a little bit easier. I mean, it can get very political in the States, but climate denial is also bad business. It's not smart, sustainable business, whether someone's Green, Republican or Democrat, whether or not they care, right, even if they kind of don't believe it. If you actually just start to look at the numbers about how much it will cost in, say, five years by not paying the big numbers now... Like the Green New Deal: I get it, I understand why people are so upset about it, seventy-six trillion dollars. But if you dig into it, then you're like: if we don't replace certain infrastructure, it's gonna be like five hundred trillion dollars in a couple of years, and that many more deaths. So why people don't want to do it is short-termism. They don't want to do it because then they might have to admit certain things, and climate is very scary, I understand that. But those are the other questions, because, you know, this is not like 10 generations from now. These are my kids. I hope it's not because of me, but my kids will now often say: I hope I have grandkids to even think about this with, when it comes to the climate. Because seven million people died last year from pollution alone, according to the World Health Organization. This is not myth. So to policymakers I'd also say: what a wonderful opportunity to prioritize climate, and things like mental health, over and above fiscal things. And everyone's going to say: no, the economy is the main thing, GDP is this. And my answer is: no, we've been doing that for seven decades; that's why we're where we are now. The triple bottom line mentality is where there's a healthy balance. We're not there. So that's policymakers.

[00:48:53] For technologists, it's the same message, which is: look at all these beautiful things that you're creating, but think triple bottom line. And then the other side, which is a core of our work, is to really consider the ethical or values-based aspects of what you're building before you get to the blueprint design, which is the opposite of how a lot of things are done now, as you know.

[00:49:16] Thanks for the thumbs up.

[00:49:18] And whether you call it responsible research and innovation, which I think came out of the UK, which is amazing, or you call it values-driven design, with people like Batya Friedman and Sarah Spiekermann, it simply means asking the hard applied ethics questions while you're building something. And with this, it's critical to have not just technologists in the room.

[00:49:38] A lot of our work creating Ethically Aligned Design, and at IEEE in general, is about saying: engineers are our core demographic, but with these questions we must have, certainly, data scientists, certainly anthropologists, therapists, marketers, and hopefully the general public as much as possible. Because when you're asking about end-user values, you can't just have one set of stakeholders, policymakers or whatever, in the room. You have to have as many examples as possible of the humans who are going to be touched by the technology. And by the way, there's so much innovation when you ask these questions, like we already talked about with the green example with the autonomous vehicle. These are also things that give you a market differentiation. People will buy your car, or they'll use the autonomous vehicle, rent it or whatever, or if it's a service, hail it like an Uber, because they respect that you've thought ahead about these sustainable things, and you'll win in the marketplace.

[00:50:37] And again, I'm not anti-profit; I'm anti-exponential-profit. So the other thing to say to technologists is: the unintended consequences of non-autonomous or intelligent systems will always be there, and unfortunately people might lose lives and whatever else. But when it comes to algorithms and human agency, identity and emotion, if you don't have therapists, anthropologists, psychologists on your teams, emotion experts, then you're going to be building stuff where, I don't care if your intentions are good. I mean, I care, but your intentions being good, and avoiding risk: having up-to-date, fantastic engineers and avoiding risk, those are the things that keep us safe from elevators dropping, right. But if you don't know what you don't know, then you have to invite the people onto your teams that do. And the fact that you don't have them on your team does not mean you get a get-out-of-jail-free pass; you now know, you've been instructed and entreated. But more importantly, and not in a castigating way, it's an invitation to say: you should not have to do these things alone, technologists. You deserve to be complemented by all these policymakers and anthropologists, so that you on your own don't have to be the determinants of safety and risk anymore. It's not fair to you. So let's work on this together.

Ben:[00:52:02] Sweet. So, we're getting towards the end of the podcast. As well as checking out the IEEE Ethically Aligned Design document, what else can you recommend people do, other than obviously talking to people like us? Anything else, as a person looking to create something, that you could do right now?

John:[00:52:26] Sure. Well, thank you again, first of all, for having me on the show. Yeah, if you Google IEEE and 'ethically aligned design', you can get the paper. And there are also 15 standards working groups, which are open to anybody to join, that have stemmed from Ethically Aligned Design, on things like facial recognition.

[00:52:46] There's a new one on emotion A.I., and the terms and conditions and data ones I talked about. And I love inviting people to those, because you don't have to be an expert in a certain category. The more people involved, especially those who are not necessarily experts, the more these standards will be released, be used by policymakers and businesses, and be really effective. So those are open to anybody to join. Certainly, if people want to read my book, I always appreciate that; Heartificial Intelligence is on Amazon, and I always welcome comments and ratings, that really helps an author. I worked really hard on that book; it's a real labour of love. And then beyond that, people are welcome to follow me on Twitter and be in touch: I'm @johnchavens. And, you know, if anyone sends you questions, then let me know and I'm happy to answer individually too, if that's useful.

Ben:[00:53:40] Awesome. Thank you so much for your time. It's been really, really interesting, and I appreciate what I would see as a really, really positive look at our kind of technological future. So thanks very much, John.

John:[00:53:52] My pleasure. Thanks again for having me.

Ben:[00:53:56] Hi, and welcome to the end of the podcast. Thanks again to John for spending time with us. We dug into lots of different topics there.

[00:54:01] I think we hung around the idea of these metrics for wellbeing, and the idea that we could use technology, and ways of thinking about technology and data, in redesigning what the future might look like: keeping profit there, maybe, but incorporating some of these other social goods, like environmental aspects and people's wellbeing and their mental health, etc. All these types of things which come into people living in the world and not just destroying it. So a really, really interesting talk. Thanks again.

[00:54:39] For more thoughts from me, check out the Patreon at patreon.com/machineethics. Thanks very much, and hopefully we'll see you next time. Bye.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design Ben and the team help organisations make better AI decisions leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford