46. Belief Systems and AI with Dylan Doyle-Burke

This month we're chatting with Dylan Doyle-Burke of the Radical AI podcast about starting the podcast, new religions and how systems of belief relate to AI, faith and digital participation, digital death and memorial, what it means to be human, and much more...
Date: 4th of October 2020
Podcast authors: Ben Byford with Dylan Doyle-Burke
Audio duration: 54:09 | Website plays & downloads: 215
Tags: Humanities, Transhumanism, Faith, Belief, Podcast, Posthuman, Education, RadicalAI | Playlists: Philosophy

Dylan Doyle-Burke is currently a PhD student at the University of Denver studying Human-Computer Interaction, Artificial Intelligence Ethics, Public Policy and Religious Studies. His research focus is on creating a Theory of Mind for Artificial Intelligence and creating equal representation at every level of AI product development and implementation. Dylan holds a Bachelor of Arts from Sarah Lawrence College and a Master of Divinity from Union Theological Seminary at Columbia University.

Dylan is an experienced keynote speaker and consultant and has presented at and worked alongside multi-national corporations, the United Nations, world-renowned hospital systems, and many other conferences and institutions to provide insight, consultation, and engaging talks focused on Artificial Intelligence Ethics, responsible technology, and more.

Dylan co-hosts the RadicalAI podcast with Jessie Smith.


Transcription:

Ben Byford[00:00:04] Hi and welcome to the 46th episode of the Machine Ethics podcast. This month we're talking with Dylan Doyle-Burke, PhD student in Human-Computer Interaction and Religious Studies at the University of Denver. Dylan is also co-host of the Radical AI podcast, and this is part one of our two-part episodes with Dylan and Jessie, who host the show. Dylan and I chat about starting the Radical AI podcast, new religions and how systems of belief relate to AI, faith and digital participation, digital death and memorial, what it means to be human, what it means to have a good life, computer science education and critical thinking, and much, much more. You can find more episodes from us at machine-ethics.net, you can follow us on Twitter at machine_ethics, or on Instagram at MachineEthicsPodcast. You can support our work at Patreon.com/machineethics. Thanks very much, and I hope you enjoy.

Ben Byford[00:01:03] Hi Dylan, thanks for joining us on the podcast. Could you quickly introduce yourself – who you are and what you do?

Dylan Doyle-Burke[00:01:10] Absolutely, well first thank you Ben for having me on, it's an honour, especially as a fellow podcaster in this machine learning space. So, my name is Dylan Doyle-Burke. I guess I wear multiple hats; I am the executive director of Radical AI, which began as a podcast – and I’m sure we'll talk a little bit more about that – but has moved on to be a non-profit. We're based out of Denver, Colorado in the United States. I am also a PhD student specialising in religious studies at the University of Denver.

Ben Byford[00:01:48] Awesome. So before we dig into some of the stuff on the podcast – it was really, really nice to have met both you and your co-host Jess previously; we had a conversation, and that kind of led into this conversation we're having now, about getting together and sharing some of that on the podcast. So before we get into some of the podcast stuff and your background, can you try to give me your definition of what AI is?

Dylan Doyle-Burke[00:02:12] Yes, yes I can. That’s always the toughest question, and this is the fun part, right? Being on the podcast, as opposed to interviewing on the podcast – because I like to be the one on the other side asking that question. So I’m also trying to put in my head the different things that people have told me, but I think for me – the way that I see myself in my discipline, which is religious studies, is that I study the people who are studying the thing. So I'm not necessarily building the machine learning, I’m not necessarily doing the coding; even when I do consulting work, I’m not necessarily as interested in the thing itself. What I’m interested in is the people who are doing that designing work. So I’m interested in questions of representation, and questions of why are we designing things? For what purpose? Are we even asking why?

So when I think about artificial intelligence, the first thing that I think about are the narratives and stories that we bring to talking about artificial intelligence. One of the reasons why I got involved in this tech ethics arena is because I’m so curious about these narratives that we have – whether it's out in pop culture, or actually in the stuff that we're building in terms of robotics (some of the work that I do is in social robotics) – of this utopia and dystopia world. So it's different, right, when I say machine learning – it's a different thing. Machine learning you might think of as applied statistics, but I think when you say artificial intelligence, it brings all of this baggage with it, and that's kind of where I’m at. That's what I want to study, that's what's really energising to me. What is that baggage that we have around this concept of artificial intelligence, and then also machine learning? I think those are different types of baggage. So that didn't necessarily give you a definition, but I’m in the social sciences, so I think my job is to problematise the definition before actually giving you one. For me, at the core, it's that narrative element of artificial intelligence.

Ben Byford[00:04:11] Yeah, I was going to say that was a very political way of putting it, but I guess what you're saying – tell me if I’m wrong – is that it's somewhat cultural. So we have this kind of cultural baggage, and we also have the kind of technological implementation which is over here somewhere, and then our minds expand into the possibility space, which is to do with our own physicality, but also the philosophy of being human. Like, what does it mean to be human if there's this thing to which we can attribute some sort of artificial intelligence, whatever that means?

Dylan Doyle-Burke[00:04:48] Yeah, absolutely. So I guess what I would add to that is, when you bring in this concept of intelligence – when you say it's artificial intelligence – you're making some sort of theoretical claim about intelligence. If you look at the history of intelligence – this term, this concept – and you go back to, say, the Enlightenment and things like that, it's coming from a very particular place, and understanding, and set of value systems. So my kind of invitation in general is to interrogate those a little bit more. When I’m talking about artificial intelligence, I think there is some level of machine learning that's happening out in the engineering space, and one of the reasons why we created Radical AI was because there's a lot of hype out there, and there's a continued need to separate that hype from the reality of what's capable. Especially when you look at things like The Terminator, and these ideas of killer robots – that's not where we're at right now, and yet it's interesting to me that we tell those stories. But I think you're absolutely right that part of artificial intelligence is that intercultural element, and the economic element, and there's so much that's wrapped up in these narratives, and also the realities of these technologies being deployed.

Ben Byford[00:06:06] Yeah, yeah, yeah. So, let's briefly go back and talk about Radical AI – it's radicalai.org, check it out, it's a really great podcast. Why did you and Jess get together? How did that happen, and why did Radical AI come out of that relationship?

Dylan Doyle-Burke[00:06:26] I tell this story differently every time, which always surprises me – the different ways that I tell this story – so we'll see how it comes out right now. So, I have been in the podcasting world, and also the entrepreneurial world, for a long time – for a decade. I should say that my first career, before I started doing doctoral work, was as a minister, and one way you can think of ministry – especially in small congregations – is really as non-profit management. I also worked in the faith sector, and the UN, and these other areas, and learned a lot about how to build a business, and also how to ask a lot of questions I didn't know the answer to.

So I arrived in this PhD program, again, with these different skills of how to build things and how to ask questions, but not entirely sure what to do with them. I actually started out not looking at AI ethics at all in my PhD work; I was looking at religious studies in the hospital, because I was also a hospital chaplain for a number of years, looking especially at questions of death and grief, and how the hospital system interplays with that. But I started being interested in this AI ethics space, and didn't know anything about it – I was super-ignorant about everything about it – so I started listening to podcasts like yours, and like TWIML, and the podcast with Lex Fridman. There are these great podcasts out there, but a lot of them were still really hard for me to get the meat of. Like, if I just want an AI ethics 101 thing, or if I want the stories from all sorts of different cultures, where can I go for that?

So as I was doing this process of just being really ignorant, trying to learn as much as I could and soak up as much as I could like a sponge, I ended up in Barcelona for what was called the FAT* Conference – the Fairness, Accountability and Transparency Conference, it's now called FAccT – and there was a person there who was presenting, and her name was Jess Smith, who's now my business partner. She basically introduced herself, saying, “Oh yeah, I go to school at CU Boulder – at the University of Colorado in Boulder,” and I was like, “Oh, that's right up the road for me.”

So when we got back, we went out and got a beer, and I just started complaining about various things that I didn't understand. She’s an engineer and an information scientist by training, and so there were so many things that I didn't understand, and also so many things that I was frustrated about in my own academic journey – in terms of AI ethics – because what I had found was that there were so many stories out there that were not being told. Maybe some of your listeners have had this experience; I know I did anecdotally, where I would go to conferences in tech ethics, or in technology, and the people on stage would all be straight white men. As a straight white man myself – you know, it's like, okay, I’ve mixed feelings about that. I guess that's nice, but there's so much more that's happening out there. And I won't say too much about this, but I was kind of squabbling with my own university about that same issue – about who we were representing on a podcast that I was trying to run there. And Jess was like, “Yeah, you know, I wish that we could just do something more radical.”

And so from there, we just started this Radical AI podcast, which has now become this organisation, and the goal of the organisation is to define and co-create what this Radical AI thing is, so we enter this conversation with a lot of humility. There are a lot of different definitions of “radical”, not just in the tech world, but also in general. A lot of them – say, in the ‘70s in the United States – came out of, like, a Black radical tradition, and so we're really intentional not to try to co-opt any of these traditions that have used this terminology in the past. But really, we're trying to centre voices that have been historically marginalised in this AI ethics and tech ethics space, because these stories and these histories have been a part of this conversation the entire time – it's just they haven't always been given that platform. So the story is just us starting this thing – we thought we were going to get, you know, like ten people listening to this in the first month – and it's just really exploded, and people have come in from all over the world to listen, and to be a part of this community. It’s been really amazing for us, not just as public scholars, but also just in our own personal development, as PhD students, and as people curious about the world, that people have been so supportive of the work that we're doing.

Ben Byford[00:11:17] Awesome, so again check that out – radicalai.org. What I liked about it is that you have these disparate personalities who are talking about their own relationship with technology, but also their own experience. A lot of them are academics who have skin in the game, essentially, or business people. And then you're prolific, right? You’ve done a lot in a very small amount of time. Are you okay? Do you need to lie down?

Dylan Doyle-Burke[00:11:45] I really wish that I could, but no, we're doing well. We're doing well. We're recording this in August, and we're actually taking some time off, but we have enough recorded that we can continue to release. We started this project – we launched on April 10th of 2020, and now it's August of 2020 and we've released almost 30 episodes, and like 26 of those have been interviews with some of the – and I hate to use the word “expert”, but it's a word – some of the biggest experts in this AI ethics space from across the board. So you know, Dr Ruha Benjamin from Princeton, who wrote Race After Technology, Timnit Gebru over in the AI ethics world at Google, and John C Havens from IEEE, and the list goes on. And still, every time someone says yes we're like, “Why, why? Are you sick? Why are you saying yes to us, to come on this podcast? But thank you so much for your support.” So we've just been so humbled by that, and yes, I will take time to lie down very, very soon.

Ben Byford[00:12:48] Great, we don't want to burn you two out, that's all I’m saying. So, I was really interested – obviously in the podcast – but I was interested in getting you and Jess on because you have these very different backgrounds, and you're coming together to have these conversations. And I’m really interested in talking to you about these intersections. For me, I’m not totally aware of this area, other than through some of the cultural artifacts – maybe Black Mirror episodes and things like that, which explore some of the ideas behind it – maybe something you would like to talk more about – to do with belief systems and artifice, or artificial objects. So, where is that intersection for you between AI and belief, and theology, and that sort of area?

Dylan Doyle-Burke[00:13:34] So, I think the first thing that I want to say as a preface is that I am not the first person who has walked this path. So credit to Genevieve Bell, who now runs the 3AI Institute down in Australia – at, I believe, the Australian National University – and she did a lot of work on techno-spirituality, and we can talk a little bit about what that is. Then also Beth Singler is another one of my mentors. She's over at Cambridge – a professor over there – who does a lot of work on new religion, so if you think about, say, Scientology, or even Mormonism – religions that have cropped up in the last few hundred years – and how they are interacting with technology. Then Dr Ted Vial is another one, at my institution, the University of Denver, who's looking at this intersection between theology and religious studies and AI. So I just first want to name that there are a lot of folks out there doing really incredible work, and I’m kind of following in their footsteps.

But for me, that intersection point is exactly what you mentioned earlier. My research question is: what does it mean to be human, and how do these technologies impact, influence, and shape our concept of what it means to be human? Because my thesis is that although technology has existed for thousands of years – you know, fire, it's been around for a while – the technology that we are interacting with right now has a different substance to it. There's a different core there when we're talking about artificial intelligence; it is impacting us, I think, in ways that we're not even fully aware of, on that really deep level of being – the epistemological, how do we know things? And the ontological, how are we human in this world? So that's my core question: what it means to be human, and how these things interact with how we connect with one another.

Ben Byford[00:15:37] Yeah, so obviously these are questions that we've been grappling with throughout history, right? That is a core question, and a core belief, around the ontology of what that means. So you're basically taking some of this work, reflecting on some of those answers and going, “But this thing of AI, right guys?” – and associated technologies, presumably. So, is there kind of a headline thing within that at the moment? Because there's obviously a lot to explore there, but what are you excited about in that?

Dylan Doyle-Burke[00:16:17] So I straddle – I’m still in the early years of my PhD, right? I’m still deciding what that dissertation is going to look like, as it were. Because of that, one of the blessings of being newer to the space is that I come in with fresh eyes, and also I can study kind of what I want right now, so I’m straddling multiple different disciplines in the world of religious studies and AI. Part of the research that I’m doing is on social robotics and different moral models that we can apply to this human-robot interaction space, which is one discipline. Part of what I’m looking at is computer science education, and how we're training folks to think about artificial intelligence ethics, so there's that AI ethics space specifically. Then part of what I’m looking at are these broader philosophical questions of what it means to be human, and more of the anthropology of the space, and the history of the space. Part of the work I've done with that is about white accountability as well – following on the work of Timnit Gebru and Black in AI – of, well, what does it mean to be a white dude in the space? And is there a way for there to be some level of greater accountability, even in our design, and how we're representing our teams in industry? So the answer to your question is there are a lot of headlines in each area.

Ben Byford[00:17:42] It’s a spread of stuff. I know that we were talking before, and Jess was saying that she was also doing some teaching as well – so what makes up the teaching for you both? Well, I’ll be talking to Jess sometime in the future, hopefully. It's really great that you're able to step in and do some of that teaching. I know that lots of good work has happened over the last couple of years to bring that to computer science programmes. Are you enjoying that? What kinds of things are happening there?

Dylan Doyle-Burke[00:18:10] Yeah, so to that question, there's a lot happening there. But I do actually want to put a finer point on the earlier question first, because I felt like I gave you an unsatisfactory answer about what's happening in this religious studies space, and theology space, because I think it is important even for my research – if you're all right with me going on. So religious studies you can think of as like anthropology, essentially, but looking specifically at how groups of people construct meaning around things; and then you have this theology thing, which is more confessional, and about what people believe.

Both of those places, both of those disciplines, are currently trying to figure out, “Well, what is this AI thing, what does this technology thing mean to us?” and no one really knows. So the reason why I don't have a specific answer for you is not that people aren't asking the questions, it's that everything is still so much in process. You have people asking these big philosophical questions of, “Okay, so we have this category of human, what does it mean? What do all these new technologies do with that?” Then you have people who are really looking at the technology itself – so you can think of, say, Buddhist priests in Japan, or online communities in India, in Hinduism, where they'll pray for you if you pay them, and it's just like a natural part of what's going on in that world. Then you also have these historians – people who study the Bible, and people who study Ancient Greek philosophy and history – who are still in that religious studies space, but they're looking at things like, you know, Pandora, who is a woman – at least is seen as a woman – but was created in some way, so they are both human and not human. And then you also have, in that same idea, these gender studies people who are saying, “Well okay, what does it mean for Pandora to be a ‘they’, as opposed to a ‘she’?” and all these things are kind of in conversation with each other. My role right now – and one of the things that I’m most passionate about – is a project that I’m working on, trying to lay out the landscape for all of those things at once.

Then this goes into your question about computer science education. One of the real blessings and wonderful parts of Jess's and my relationship and partnership is that we think about things so incredibly differently. She is absolutely a STEM person – she's absolutely a math and science person – and I am absolutely not. I can maybe write a small program in Python, and that's as far as you're going to get me. I can do some qualitative research, but even there, you should trust someone else to look at my work. But Jess, that’s her place, and so the questions that I bring to this CS education element are very different from the questions that she's bringing. I’ll leave it to her to talk about the questions she's asking, but for me, I’m very much interested in these, you know, classical questions of social science. Like, “Okay, so who's in the room? How are we designing this curriculum? And then, more than anything, why?”

I think what religious studies and the social sciences in general have to add to these questions of engineering and technological design – conversations which historically the social sciences have not necessarily been part of – is that question of “why?” and the question of purpose, because I think the engineers already have the question of “how?” pretty well covered, and I don't want to step on their toes. But that question of “why?” I think is so important to implement at every level of our training, and then also implementation. So when I’m looking at CS education, for me it's: what are we including, and why? Are we thinking critically about what we're including, or are we just doing it because it's the way we've always done things? Because for me, that's not always a good strategy, and how we're teaching our students is going to have far-ranging implications for decades as this technology continues to develop. So, I think there's a lot at stake in that conversation.

Ben Byford[00:22:22] I fully agree. I think the question of “why?” is paramount – it's the starting gun for any process that leads to a product, or a service, or a design. It's not just: why is this going to be good for the company? Why is this going to be profitable? How is it going to be profitable? Those are inherent questions when you're working within the systems of capitalism, and things like that. Of course we're going to ask, you know, is this viable, is this going to make us money? Or, if you're a non-profit, is this going to be useful for our purpose? But why are we even doing this? Why are we spending our lives on it? When I think about anything that I’m spending a lot of time on: am I going to spend my life, my finite resource, on this thing, or am I going to do something else, because this thing is not going to be as beneficial to me, the people around me, and therefore society? You know, what benefit is it going to hold as a technology that can affect lots of people? So, like you were saying just now, we are going to be affected by these decisions for many years into the future, and we'd better think about it right.

Dylan Doyle-Burke[00:23:41] For me also, there's this – recently Jess and I were interviewing this man from the Center…I always get the acronym wrong – CAIS, the Center for Artificial Intelligence in Society at USC – and his name's Dr Eric Rice. His research is all about AI for social good, and we got into this conversation with him about, “Well, who gets to determine social good?” Like, what is it, right? Is it the people at the top of Google who get to determine it? Is it the social workers? Is it the Government? Who is it? Because at some level we're giving a definition of social good that has long-ranging implications. Another way that religious studies – but also the social sciences specifically – can really help this conversation is in not just assuming that we know what we mean, or that what we're saying is the same thing that people are hearing when we say “social good”. Which is also where that intercultural conversation is so important, because these technologies – even if they're being designed in a particular place, it doesn't mean they're going to stay in that particular place, right? So when we hold our academic conferences only in the US and Canada, or when we have design teams that are only people, you know, who look like me, as a white man, we're not doing our job ethically. I guess what I would say is, I think there's a real ethical need to ask those questions of representation across the board, and in all these conversations while we're designing technologies.

Ben Byford[00:25:24] And do you think the people at Google are the right people? I mean, there's a certain aspect to the situation we have at the moment, where we can try to build diverse teams, have diverse voices, do our research, and bring in anthropologists and sociologists to help us with that process – and I totally advocate that – but we still have these large hierarchical organisations doing a lot of the money-based work, a lot of the large-scale AI work. Is that something that worries you?

Dylan Doyle-Burke[00:26:02] So Ben, 20-year-old Dylan is going to hate what I’m about to say right now, because 20-year-old Dylan was very much an idealist, and very much hated capitalism, and right now I’m much more of a pragmatist. So – I know, I feel shame saying that out loud – but I think that we have to work in the systems that we have. I think that it's not going to be possible for us to dismantle every system all at once, and that it needs to be an iterative process. But I think what's really important is that we have a clear goal – or at least a vision for where we're trying to head – so that we know what those processes are, and we know what the landmarks are that we're trying to hit along the way in this road trip. And it doesn't mean that this can't change in the future, right? What we start with doesn't have to be what we end with, but I’m a strong believer that we do the best that we can with what we have, and this comes from my ministry too. People make mistakes, people screw up; the best-laid plans always go awry, and also the best intentions are sometimes the ways that we hurt people the most – even when we're designing technology. So I don't think the people at the top of Google are bad people, necessarily, just like I wouldn't want anyone to assume that I’m a bad person. But I think that there are these really deeply ingrained systems that we need to look at really critically, and take our ego out of it as much as we can, in order to try to create a more just system and a more equitable system out there. And representation is just a part of that, I think.

Ben Byford[00:27:57] Yeah, I would like to go back quickly, if I may. So you were talking about the system of relationships, the intersections, between what you're studying in your PhD, and I’m just really interested to know if you have any thoughts about how those changes in belief systems are going to work into the future with AI? I’ll give you some examples. Maybe, how is our relationship with death going to change? How is our relationship to being a citizen going to change, when, you know, we have this kind of Nineteen Eighty-Four-type situation, and not only can we be somewhat tracked, but also from birth we're going to have this data footprint which follows us through our lives, which can augment us as human beings? How does that change our situation? And also the transhumanist movement – how can we feel about being human beings ourselves when we start more and more – I mean, we're already kind of augmenting ourselves with phones and computers and all this sort of stuff, and outsourcing – there's a trope about outsourcing our memory and our factual information to these devices. So what kind of futures are we going to be living in, basically? Go!

Dylan Doyle-Burke[00:29:19] I will tell you the future. That’s right. So I would love to solve all of that for you. I can give you a little bit of my sense, based on my research. So, I think the first thing to say is that I don't see what's happening in technology as completely distinct from what's happening in religion, and what's happening in society, and what's happening in our government spaces. I see it as all very interrelated, both practically – like, you know, government regulations impact technology – but also in this greater, philosophical sense that there's always something happening. There are always trends out in the world, right, and it's not just one trend, there are millions.

So I don't want to universalise, but there's something happening that's specifically being impacted by technology right now that is distinct from anything that's happened previously in the world. To use your death example – one of my mentors at CU Boulder, Dr Jed Brubaker, has done lots of work on Facebook and mortality. He was one of the people who worked with Facebook to create this memorial feature, where you can make a loved one's page, after they pass away, into a memorial. So people can go and leave comments, and there are all these different possibilities. And there are so many different ways that that changes our experience of death. How old was I? I was 17 or whatever, and Facebook had just come out, and it was only for college students, and I was lucky because I was 17 and I could get a Facebook, or whatever. I would never have thought that that would be a space where I would go and be able to, you know, mourn a friend of mine who had completed suicide, right? I never would have thought about that. And I think there are a lot of unseen consequences of that, including in our technological design, and that's going to keep happening.

So you get this interesting trend of, “Well, what do we do with embodiment, right?” Also with faith – like the Hindu example I talked about earlier – before, you would always have to go to a Hindu temple in order to get absolved, and light incense, and be able to commune with the divine in that space. Now that's no longer the case. Now it's more – I don't want to say “practical”, but it's like, “Oh, you can just pay and then you're absolved.” And there are some Catholic groups – I don't think the Pope likes it so much – but there are some Catholic groups that have leaned into this as well, where you don't have to go do confession with a priest, you can do it on Zoom, or something like that. We're going to continue to see that trend occur, where it's going to become more and more common – especially with what's happening with Covid and the pandemic right now, things have been forced, in academia but also in faith communities, to become more and more virtual. That's going to have really long-lasting effects for us, and the question is: is that changing something fundamental about our humanity, and how we relate to these things that have always been a part of our humanity – like death, and citizenship, and faith? My thought is that yes, it is, and it's still unclear how deeply it's really going to play out into the future.

Ben Byford[00:32:45] You reminded me distinctly of a section from The Handmaid’s Tale – I’m not sure if you've read it – where there are, like, prayer shops, essentially. You put a coin in or something like that, and there's a machine you can use to absolve you of your sins, or whatever – I can't remember exactly – but it really struck me that you're painting a picture of some of these trends which are happening around the way that we think about faith, and think about participating in faith, I guess, and how that's changing. I imagine, if there isn't already, there must be a lot of potential in the idea of chatbots and GPT-3 participating in that world. And in the future, how would we feel about, you know, a pastor, or a vicar, or a cleric of some shape or form being somewhat artificial, right?

Dylan Doyle-Burke[00:33:46] So then it's a question of what artificial means, too. You're using it to mean not human in that case, but then what if it's human-like? There's this – I don't know if, on your side of the ocean, you have Stop & Shop at all? It's a decent-size grocery store, basically, and they have this robot they've introduced – I guess maybe a year ago – called Marty. It's about five foot three, and it kind of looks like a tube, and it cleans up spills so that employees don't have to do it. But what they did is they put these two massive eyeballs on it, and a big smiley face, and people hate it. People have penned op-eds saying, “I will not go to this grocery store because of this robot, because it scares the heck out of me,” and I think that's a microcosm of this really interesting space that we're finding ourselves in right now, at this point in humanity, where we're trying to figure out what this is.

The question for me is what it means to be human, because I think that if that robot didn't have those eyes, and didn't have that mouth – and research backs me up on this – people would feel very differently. There might be more trust, there might be less trust. But in that robot space, it puts all of these things so plainly, and forces us to ask – it's no longer just a thing that we can ask in the ivory tower – it's like, “Oh, oh, this makes me feel weird, there's something threatening about this other being.” So is it another being, or is it just what's going on in me? There's a psychological aspect to this, but also an ethical aspect of, how are we designing these spaces? Whether it's about faith, citizenship, death, or this Marty at Stop & Shop, there are real consequences to this. Which is why I keep saying it – because I think it's really important – there are real consequences to how we're designing this technology, philosophically as well.

Ben Byford[00:35:48] Yeah, and I guess as these technologies become more proliferated, and Marty becomes part of the fabric of our normal life, I imagine that sort of reaction will change. We'll have a different set of reactions to, maybe, a Marty 3.0 or something that can actually talk back to us and assist us in different ways. Maybe that becomes the new uncanny valley, or some sort of situation like that. It’s really interesting to see how this is going to change over the next couple of decades, and how this technology – both embodied and disembodied – becomes more a part of our life, and less freaky, I guess, in that way.

Dylan Doyle-Burke[00:36:36] Or more freaky. Or more freaky. That's what I’m interested in. So I was thinking while we were talking that back in the earlier 2000s, when you and I were both young, Ben – I don't know if you had this experience; I know, maybe you're still young – I had a good amount of friends who were in their early 20s who just didn't want to live in cities anymore. They were all pretty hip – you know, they liked to drink out of Mason jars, and stuff like that – and they all moved into tiny homes, or farming communities, right? In some ways it's kind of funny, but in other ways, they were trying to get back to the roots of what they thought of as human, and what was important. They wanted to get their hands in the dirt, and that was because they had spent so much of their lives in these urban spaces that weren't giving them everything that they needed – weren't giving them sunlight, etc. And my sense – and this is totally speculation – is that we're going to see a very similar pushback to technology, in the same way. I don't want to say a conservative pushback, but a pushback, or at least people trying to reclaim that embodiment, especially with what's happening with Covid and these virtual spaces. I think we're going to see a real need – you know, if and when this pandemic lifts, the spaces are going to be different – but I think we're going to see this pushback of, “No, we need to be in person, I need to meet with my community in person,” and I think we're actually going to see an uptick in people going to in-person churches and things like that, following this. But time will tell.

Ben Byford[00:38:15] Yeah, it will. I’d like to say I’m not a bad man, but I am a betting man, so I’ll bet against you on this one and see what happens. Though I’m hoping you're right, because I like the in-person meeting and brainstorming, and my work environment is much more fulfilling that way – and my social environment, I guess, as well. I’m sat in my home at the moment, and I’m bored of the four walls I see every day, and, you know, I have a lovely family, but it's nice to see other people. But anyway, I digress. Yeah, so, just falling off track.

Dylan Doyle-Burke[00:38:57] Yeah, I don't know. I think that question of fulfilment that you just raised is key here, and for me the question of what it means to be human is also a question of what it means to live a good life. And in that question you also get into these justice issues, and these issues of privilege, and all of that, which I don't know if we have time to talk about. But that's a religious question for me; that's a spiritual question. I think that as technology continues to evolve and change, and our relationship to it continues to evolve and change, perhaps we'll see a difference in what that fulfilment might look like, and that's something I’m really curious to see: how that good life alters as we get more and more used to different technologies going forward.

Ben Byford[00:39:52] I think for me, the good life is a physiological-psychological-neurological-spiritual thing, which we've been grappling with for a long time, but I think there are fundamental things that we can say about being human. A lot of the physiological stuff is a quite well-trodden path, you know – it's well known – and the neurological stuff is coming; our knowledge of the inner workings of the brain is just becoming more and more illuminated. But it still comes back to, you know, what are we going to do with that knowledge? It won't tell us exactly that we should live in this perfect environment, and that we can all do that. And, you know, within society, how are we going to make that work? As we continuously learn more about ourselves, are we going to be able to push for better equity in that situation? Obviously we could spend a lot of time talking about that, but, you know, how are we going to shape our society so that hopefully the products of our tools, and the AI, the robots, and all this sort of stuff, become fruitful for the common good of us as humanity, rather than the few? That's a big problem in my mind.

Dylan Doyle-Burke[00:41:19] Yeah, I think that's the key problem, and again one of the reasons why we started Radical AI was that I get frustrated with academia, because we publish so many papers and then three people read them, or three people cite them – and that's wonderful, right? But I think it's important to be able to bring something out into the world that changes something for the better, and so I think the question that you're raising – well, we might have all this knowledge, but then how are we actually going to implement it? – is so critical. And I think there are at least two things that religion…and some people have a very allergic relationship with spirituality and religion, which is probably another podcast, because I also struggle with that, right? Even as a former minister.

But I think the things that religion, or spirituality, can bring to this conversation are, first, a sense of awe, and wonder, and mystery. So being able to say, to your question of the future, “Well, we don't know,” and that's kind of cool, too, because there are limitless possibilities, to a certain degree. Then the other thing that I think spirituality can bring is humility. Any time you're in a boardroom and someone says, “I have the answer to this societal problem, and this is the product that will solve it,” I think you should be really sceptical, because if we take some of that ego out of it, and are able to say, “Okay, well, I don’t know, but we're going to do our best with this,” then we're able to implement that design a little bit more effectively. And I see that as part of my role as a consultant. I do some consulting work with a group out of Brussels called Ethical Intelligence, who I wanted to give a shout-out to. Olivia Gambelin is the CEO over there, and some of the work that they do – and that I do with them – is asking those questions of, you know, how do you keep options open, as opposed to thinking that you have the answer, or the only option?

Ben Byford[00:43:21] Right, yeah, yeah. We’ve spoken to Olivia on the podcast previously, and she put us in contact, so thanks Olivia – shout-out. Moving forward a little bit, and touching on something that we spoke about briefly: do you have any thoughts or specific concerns about things like the transhumanist movement, and things concerning the augmentation of the human body – but also maybe the human mind as well?

Dylan Doyle-Burke[00:43:53] One thing I think it's important to point out is that although there is a greater transhumanist and posthumanist movement that has existed, especially in sci-fi, for a long time, there are also a lot of transhumanist and posthumanist movements. So there's kind of the academic, overarching version, and then there are also a lot of people living out these things in everyday life. I can give a very brief definition, but I’m going to do it wrong, because there are just so many different definitions out there right now. So, transhumanism: my understanding is you can think about that in terms of augmentation, so that technology is relating to our body in a different way. Some people even argue that, say, if I wear my Apple Watch, I’m already transhuman in a certain way, because I’m constantly looking at it, it's measuring all of my biometrics, things like that.

Whereas posthumanism takes that to a little bit more of an extreme conclusion: okay, what does it look like in a posthuman world? So there's a lot of great Black posthuman hip-hop that's really awesome, talking about, like, you know, when the aliens come, what does this mean? And the reason why it's important to name that there's a Black tradition in that is because Black folks have so often not been seen as human – in the Colonial context, and in the American context specifically – and so there's been this pushback of, okay, well what does it mean, again? What does it mean to be human in an inequitable world? Looking through this posthuman lens, there's almost a way to understand that Black experience of never being seen as human – being in chattel slavery, and things like that – in a different way.

So my sense of transhumanism is that it'll be really interesting to see what happens with those transhumanisms. I believe the argument that we're already transhuman, but also that we've always been transhuman – again, ever since we invented the wheel, we've always been interacting with tools. It’s never just been humanity qua humanity, only us; it's always been us working in some sort of greater context. So again, the question is whether this technology that we have now is different, which I believe it is. Then the posthuman – I think I’m going to leave that to scholars who are better equipped to handle that question. I don't believe that we're in a posthuman world right now, but I do believe that the question of what it means to be posthuman – especially in the stories that we're telling ourselves about the utopia and dystopia of AI – is a really important question, even as we think about justice in our world right now.

Ben Byford[00:46:48] Great, and I guess the popular idea from maybe four or five years ago: this idea of the singularity. Dylan, are you going to give yourself over to the computer overlords and blend with the AI, or are you going to elevate yourself and become a new entity?

Dylan Doyle-Burke[00:47:11] Yes. We're going to have to see. In some of my HRI work – my human-robot interaction work – I’ve been trying to bring in a theory by Jonathan Haidt; he talks about moral foundations theory. His whole thing is like the tail wagging the dog: we think that we're trying to be moral, and that we're intentional about being moral, but really it's just whatever our instinct is, or whatever we're trained to do, that makes us take moral actions. So when the robot overlords come, when the computer overlords come, I think I’m just going to have to trust my instinct and see if it's fight or flight, I guess. Or assimilation, if that's the other option.

Ben Byford[00:47:55] Yeah, so you're gonna lean into whatever your first reaction is?

Dylan Doyle-Burke[00:48:00] I don't know if I’m gonna have a choice, depending on if they're benevolent or not.

Ben Byford[00:48:03] Yeah, that's true. I think, from what I’ve read – I’m still on the fence, you know, about the cultural stories and things like that, about the different ways these things can go – but like you, I think it'll be interesting to see how these sorts of technologies play out on the horizon, right? This is kind of horizonal stuff, whether it's even possible, so yeah. But still fun to think about, right?

Dylan Doyle-Burke[00:48:29] Right. Absolutely. And I think this is the point that I’ve probably been obnoxious in trying to hammer home in this interview – and I’m not exactly sure why I feel so strongly about it today – but I think that these stories are not just stories, right? These stories matter; the stories that we tell ourselves about the singularity matter, in terms of how we treat one another, and also how we treat the development and the design of our technology. So let's continue to be really intentional about those stories, which is why I think the question you just asked is a really good one. I think we should think about it even in the hypothetical, because it tells us something about where we are right now, even individually.

Ben Byford[00:49:06] Yeah, that's really interesting. Maybe we can go away and write some short stories, and come back and discuss – I think it'd probably be useful. I think a lot of the science fiction writers have got a lot to say for themselves, haven't they, to do with, you know, our current thinking on this subject.

Dylan Doyle-Burke[00:49:26] Absolutely. I mean, I think we're all storytellers, no matter where we enter this conversation – whether we're writing code, or whether we're writing stories, or whatever – I think we're all storytellers to a certain degree, and there's a lot of power in stories.

Ben Byford[00:49:40] Sweet. Well, we're coming up to the end now Dylan, so thanks very much for spending this time with us. The last question I always ask is: what scares you, but also what really excites you, about AI and our future?

Dylan Doyle-Burke[00:49:56] I think my answer is the same for both. What scares me – and what excites me – is the potential. The potential of this technology, whether it's just applied statistics or whether it's, you know, the eventual singularity, I think is incredibly exciting. The possibility to make our world a better place for people – and maybe also for robots – is, I think, wonderful, and really almost overwhelming, in just the raw possibility that we've been seeing as this technology has evolved so quickly in such a short amount of time. I think the scariest part is also the possibility – I mean, when you look at facial recognition tech, when you look at what happens when you have a bunch of well-intentioned people who are not necessarily looking at their context, and then the technology ends up…you know, I’m thinking about the ProPublica article analysing recidivism rates, right, and how racially biased that was.

There are just ways, when this technology becomes unchecked – I mean, it's still powerful, right? It's like a powerful tool, but just like any other tool, it can be terrifying how you use it. I can use an axe to chop a piece of wood and heat my house for my family, and I can use it to kill someone. I think it's important for us not to treat this technology any differently – even if it might be a little different, maybe more powerful in certain ways, at least in terms of scope. But that potential and that possibility to impact the world is, I think, so exciting and so amazing in theory; but in practice, as you said earlier, it's a question of what we do with it. Do we make the world a better place, or do we not?

Ben Byford[00:51:54] Yeah, so hopefully, Dylan, let's make the world a better place. You're certainly doing a good job at the moment with your podcast series, and all the work that you've been doing – looking into all these really disparate – well, not disparate, but really interesting – threads, bringing them all together and making sense out of what is interesting there. I find it all interesting, which is why I’m not an academic and I never get to write anything, but anyway. Thanks again for coming on the podcast Dylan. If people want to follow you and find your work, how can they do that?

Dylan Doyle-Burke[00:52:32] So if people want to follow up with me, you can always send me an email – you can find my email at my website, which is dylandoyleburke.com. I also invite you to check out the project Jess and I run, which is called Radical AI – again, you can find it at radicalai.org, or you can find both the project and myself on Twitter. The easiest way to find us would probably be through the Radical AI Twitter, which is just @RadicalAIPod. And Ben, thank you so much for having me on today, it's been a pleasure.

Ben Byford[00:53:05] Hi, and welcome to the end of the podcast. Thanks again to Dylan for his time and knowledge. Obviously, for more from Dylan and Jessie, check out the Radical AI podcast. I was really excited about getting them both on the podcast, because they come from really disparate disciplines, and I was really interested to talk more about faith and religion, and how these things intersect, and Dylan seemed like the perfect person to come and explain some of those aspects. I hope you enjoyed it, and I look forward to talking to Jessie in a couple of months’ time. Obviously, if you'd like more episodes, then check out the podcast at machine-ethics.net. This month we're actually supporting Manning's Women in Tech conference on 13th October. It's an online conference, and there are talks on careers, big data, VR, and loads of tech subjects. You can find more information on the episode page on the website. Thanks again to all my Patreon supporters; to find out more, you can go to Patreon.com/machineethics. Thanks again, and I’ll see you next time.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began speaking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design, Ben and the team help organisations make better AI decisions, leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford