52. Algorithmic discrimination with Damien Williams

This episode we chat with Damien Williams about types of human and algorithmic discrimination, human-technology expectations and norms, algorithms and benefit services, the contextual nature of sample data, whether facial recognition is even a good idea, whether we should be scared that GPT-3 will take our jobs, the cultural value of jobs, encoding values into autonomous beings, culture and mothering AI, AI and dogma, and more...
Date: 1st of March 2021
Podcast authors: Ben Byford with Damien Williams
Audio duration: 57:21 | Website plays & downloads: 232
Tags: Academic, Discrimination, GPT-3, Government, Dogma, Job displacement | Playlists: Philosophy, Rights

Damien Patrick Williams (@Wolven) researches how technologies such as algorithms, machine intelligence, and biotechnological interventions are impacted by the values, knowledge systems, philosophical explorations, social structures, and even religious beliefs of human beings. Damien is especially concerned with how the consideration and treatment of marginalized peoples will affect the creation of so-called artificially intelligent systems and other technosocial structures of human societies. More on Damien's research can be found at AFutureWorthThinkingAbout.com


Transcription:

Ben Byford[00:00:05] Hi and welcome to the 52nd episode of the Machine Ethics Podcast, this month we’re talking to Damien Williams. We made this recording on 3rd February 2021 and we chat about types of human and algorithmic discrimination. Human-technology expectations and norms. Problems with government benefit services that use algorithms. The contextual nature of sample data. Is facial recognition even a good idea at all? Being honest about problems in your system. Should we be scared about GPT-3 taking all our jobs? Encoding values into autonomous beings, and AI and dogma. You can find more episodes from us at Machine-Ethics.net

Ben Byford[00:00:41] You can contact us at hello@Machine-ethics.net. You can follow us on Twitter at Machine_Ethics, or on Instagram at Machine Ethics podcast. If you can support the podcast, then go to patreon.com/machineethics. And thanks again for listening.

Damien, thanks for joining me on the podcast. It’s awesome to see you again and have this conversation, so if you could quickly introduce yourself: who you are and what you do?

Damien Williams[00:01:15] My name is Damien Patrick Williams, I’m a PhD candidate at Virginia Tech. My research is in the area of values, bias, algorithms, AI, and generally how human values intersect with, influence, and are reflected in the technologies that we create.

Ben Byford[00:01:33] Thank you very much, and there’s a question that we always ask at the head of the podcast, which I’m pretty sure you actually might have previously answered in another podcast, because we met at an AI retreat – which sounds a bit fancy now, and to be honest it was fancy –

Damien Williams[00:01:52] It was very fancy.

Ben Byford[00:01:54] I’m missing it now. We were just chatting about it, briefly. Yeah, so the brilliant Andy Budd put us and lots of other people together to talk about AI and ethics and stuff, in a lovely hotel in Norway, and that was maybe two and a half years ago, or something like that? A while ago.

Damien Williams[00:02:15] Yeah, yeah.

Ben Byford[00:02:16] So, there is a podcast episode on that, which I will refer you to at the end, because I can’t remember which one it is. What is AI, Damien, to you? What is this stuff that we talk about, AI?

Damien Williams[00:02:32] To me, the question of what is AI is always really interesting, because a lot of what I’m very interested in is exactly that kind of weird untangling of what is AI, right? Because everybody I talk to uses it differently. So, for me, AI is both the automated algorithmic applications that we use to supplement and enhance and otherwise support the activities of human intelligence in everyday life, but it’s also that kind of bigger, broader, large-scale dream and aim of AI. That kind of artificial general intelligence, or as I like to think of it, that automated generative intelligence. That machine consciousness, that machine intelligence, that is itself capable of independent, autonomous, generative, intelligent thought. And that is a dream that I think we still have not yet fully reached, it’s a dream that I think is starting to fade into the background of a lot of AI discussions, as that kind of first definition is more and more attainable. And more people kind of go with it and they’re like, “Yeah, we can make supplements, we can make patches that help other people do their day-to-day stuff, so let’s just keep doing that,” and that idea of, “What about a conscious machine?”, has faded to the background.

Ben Byford[00:03:57] Yeah, awesome. And I believe when we last spoke – there’s so much passion coming out from you – obviously, the setting was very different. We had some really good conversations there, some of which I was able to capture, which was great. I’m really interested in talking to you about lots of things. And some of those are about the cultural artifacts, you know, you were talking about, how do we untangle these things? What are the values that we put on these types of technologies, but also what are the things that we want to get out of this? And AGI, you know, is one of those dreams that we have – general intelligence. But also, there’s this other stuff, to do with a lot of the uprisings that happened last year around black and other minority communities across the world. And how technologies can be discriminatory and oppressive in some ways. So, I was wondering if we could start with that one, and then bridge over to the cultural stuff?

Damien Williams[00:05:01] Yes.

Ben Byford[00:05:02] So, what are the forms of discrimination that AI can take?

Damien Williams[00:05:07] So, that’s interesting, because the forms of discrimination that AI can take are pretty much all the forms of discrimination that humans can do. Plus some more that we never really anticipated. And so what ends up happening in most algorithmic and AI applications is that ultimately, we find ourselves in a place where the idea of what we think we’re trying to do – the application that we’re trying to figure out – takes precedence, and we don’t really think about what it is that we’re telling the programme to do. And so if we have in the design phase, and the coding phase, and the programming, and then the implementation – if we have certain biases that we’re unaware of, or that just go uninterrogated for a number of reasons – then they will make their way into the actual implementation of these programs. This is one of the things that we found. So we can see gender discrimination, in terms of gender discrimination based on names, based on the perception of what types of words are associated with what types of genders. And then we have racial discrimination. We see that most often played out in facial recognition, automated camera technologies, any type of – even just basic light readers – like, we have photo measurement systems that are automated, that don’t read darker skin well, and that’s been true for decades now.

We also have, in terms of, certain applications take those types of applications and play them out further when we have problems with, say, AI applications of facial recognition on the transgender community, or of body type readers – you know, body scanners – on the transgender and the disabled community. Body scanners – the big stand-up body scanners at the airport – if they happen to read something that they deem as non-standard, or anomalous, then they will flag that person for extra pat-downs, and extra searching. And one of the ways that that happens is if the human on the other side of the screen presses a button that says that the gender that they perceive that person to be is something, and what the scanner reads is for any number of reasons anomalous, then that person will get flagged. Transgender individuals are harmed by this, but also, not even just in terms of the gender aspect, but also in terms of just what the body is quote-unquote “supposed” to look like aspect. If someone has, say, an ostomy bag – you know a colostomy bag, or a urinalysis bag, that allows them to use the rest room, because they’ve had surgery on their lower intestine, or their kidneys, that is not a thing that a body scanner knows how to understand. And if the person who sees that shape – which is flagged as anomalous and rests right at about hip level, where a weapon might be - they’re going to, perhaps a bit more aggressively, pat that person down, and they are going to perhaps cause the contents of that bag to expel themselves onto the person being patted down. And I say perhaps, and what I mean is, that’s actually happened several times.

So, in all of these places, we have certain expectations about what a body should look like, what a person should be, how a person should behave – normative types of expectations, of “should” and “right kind”, right? And all of these things have been encoded into the systems, both the actual technological systems – the artefacts themselves – but also into the techno-system relationship between humans and those systems. And how humans are expected to relate to those systems, how they’re expected to use those systems. The expectations that the humans have in mind when they’re using it – the expectations that the systems have encoded into them – then create that kind of feedback loop where they go, “Oh, well this is what I’m supposed to expect here, and this is how I’m supposed to behave when that expectation is met, and so this is what I’m supposed to do in this situation,” without ever really interrogating, “Why might this be? What might be happening here? What haven’t we been told to expect? What might this mean about a human being’s way of living in the world that I didn’t get told to expect, and this machine wasn’t programmed to expect?” And so those are just a few – a smattering – of the ways that these things can go very, very wrong.

The long term, big picture stuff, one of the things that I’ve been really invested in recently is disability rights benefits, here in the US. Often, in many localities and states, they are administered by algorithm, and those algorithms are often black-boxed – their internal functions are hidden from the people whose lives their decisions are affecting. And many at the ACLU, the American Civil Liberties Union, here in the US, have had to sue on behalf of disabled patients – to say, “We need to be able to see what’s going on inside of this, because it made a decision about my benefits that reduced my benefits drastically, year on year, for reasons that I fundamentally don’t understand, when my situation hasn’t changed.” So, yeah. These are life and death kind of things that happen every day, and all of these systems have proliferated widely from this kind of base, uninterrogated state, right? And so all of those perceptions, all of those assumptions, all of those potential biases, have made their way into all those systems.

Ben Byford[00:10:53] Mm hmm, and it seems odd to me, because obviously you and I, and lots of people that we have on this podcast, we think a lot about this kind of stuff. But it seems kind of antithetical that people would just put in an algorithm and go, “Okay, well your benefit payments have changed and there’s nothing you can do about it,” right? That sounds like a bad situation from the get-go, right?

Damien Williams[00:11:23] Yeah. It is.

Ben Byford[00:11:25] So, I mean, I hope that people are waking up to the stupidity of this fact. And obviously we can’t always account for all the unintended consequences, but we’d better account for as many of the consequences and the impacts and the harms that these systems produce as we can, because that instance that you described is quite a high-impact thing. And you’re not going to impact the normal people, like in the way that you were talking about the scanners, as well, like, if you look normal to the scanner, or if the distribution of your body shape is like, correct –

Damien Williams[00:12:08] Yeah, if the scanner sees that as quote-unquote, “that’s A-OK”. That is the normative frame, that is the thing it’s been told to expect.

Ben Byford[00:12:17] So, you’re like targeting the people that are non-conforming to whatever this dataset is, right?

Damien Williams[00:12:23] Precisely that. And that’s the thing that in a number of ways, when this is talked about, when defences are made for it, that’s the kind of answer that gets given. People are saying, “You know, we’re just trying to find the normal distribution of what people look like,” and that quote-unquote “normal distribution” of what people look like, that is very contextual. What is your sample size, what is the distribution of that sample? What is the context in which you are sampling? How are you pulling all these things together, and making what you then average out to be your normal distribution? Because what that looks like, and what you’ve taken into account – as you said at the outset – what you’ve tried to account for in the first place, is going to change what your actual end set looks like. And if that dataset has that lacuna, that occultation of, you know, I didn’t think to think about this – I didn’t think to think about people who exist in the world in x, y and z kinds of ways – then the dataset will in fact be pre-emptively discriminatory against those people. And it will in fact punish people who are not conforming to what you, probably unconsciously as a programmer, decided was quote-unquote “normal”. And that’s going to be a thing that, again, has wide-ranging implications for a lot of people down the line. But again, if you’re not thinking about that at the outset, if you’re not conscious of, “Oh maybe I should be thinking about these other cases, or maybe I should be asking what I haven’t thought about,” then you’re not going to have that moment of taking a step back, and trying to refigure what it is you’ve done.

And in some cases, there’s just no good way to fix that. There’s no good way to come in and re-tag that, and with things like facial recognition software and how it gets used for police actions, I can’t think of good ways for that to turn out – even if you were to, say, train facial recognition to do a better job of tagging people’s faces with darker skin tones, you’re still dealing with the fact that facial recognition – in the over-arching uses of pretty much all of it – that facial recognition is most often used against people in minority groups. It’s used against black and brown individuals at a higher rate in Western society, because those communities have been pre-emptively deemed not by the technology itself, but by the people using the technology, as criminal. So, these are all the things that go into exactly what you were talking about.

Ben Byford[00:15:29] I wonder, I’ve got a devil’s advocate question here, for the facial recognition stuff. Because you mentioned – just before we went on air – that Amazon have made an announcement about this too, but if the distribution was more sympathetic to a larger group of people, and maybe the environment of criminal activity wasn’t so biased in the States, as it has been historically, is facial recognition still a good idea?

Damien Williams[00:16:05] That’s the fundamental – for facial recognition to be a good idea, the antecedent conditions have to all be drastically different. It has to be the case that you’re not in a scenario where minority communities are primarily targeted by police action at the outset, where they’re primarily assumed to be criminal, where they’re primarily assumed to be criminal because they are by history immigrant populations, or enslaved populations that were targeted as criminal and propagandised as criminal from inclination from birth. This was the propaganda that got wheeled out in the States about why black people needed to remain enslaved, because they couldn’t be trusted on their own, why Mexican immigrants needed to be ostracised to the edges of society because they were by definition endemically lazy, endemically criminal. Why Chinese populations couldn’t be trusted, because they were inscrutable – and this whole sneaky Asian stereotype that gets pushed into all of that. All of these things are so thoroughly woven into the fundamental fabric of facial recognition, of policing in general which uses facial recognition, that to make facial recognition a good idea is to say that we need to go back in time and undo that vast history of prejudice that undergirds the policing in the context that uses it.

Ben Byford[00:17:50] Yep.

Damien Williams[00:17:51] As it stands, another potential way that we could go forward with it is to say, we would have to, culture-wide, interrogate all of those things. Confront them full on, and say, “Here’s what’s happening. Here’s what has happened for centuries. Now let’s take a shot at undoing that, of repairing that damage, of setting to rights the vast systemic harms that are caused by it.” And then, once that project is complete, we can have that framework of going, “Okay, now what do we do? Literally, what are we doing when we programme a facial recognition AI algorithm, to say, tag a person in a crowd for quote-unquote ‘suspicious behaviour’?”

Ben Byford[00:18:52] Yup.

Damien Williams[00:18:53] Once that tag of suspicious behaviour doesn’t then carry the full weight of, “Well, we’ve already determined that certain people, from certain parts of the world or with certain ethnicities, are de facto suspicious,” then, at that point, once it doesn’t have that woven into it anymore, then it can maybe be used in a way that isn’t going to do vast harms to justice.

Ben Byford[00:19:19] Yeah.

Damien Williams[00:19:20] But it’s a much bigger project than just, you know, a lot of people do want to say that if we just include more diverse datasets, then we can get a fairer output. The output may be fairer, but at the end of the day, what’s going to end up happening is, you’re going to get a facial recognition camera that sees my face better and can definitely distinguish me from a crowd better, but it’s still going to be used on populations that look like me more often, regardless of what those populations are actually up to in their daily lives.

Ben Byford[00:19:57] Yes, it’s almost like you’ve got this really big lens, and you’re pointing it at the people you expect to be –

Damien Williams[00:20:05] Yep, that’s exactly it. And so, this is like the lamp laws, like streetlight theory; it’s like you’re going to find crime where you look for it, and if you point that – you only look for it in one space, and you don’t find the crime you consciously expect to find, you’re going to engage as criminal behaviour any behaviour that you find. So as that gets reinforced and reinscribed over decades and centuries, then you get to this point where a certain population is always perceived as criminal, always perceived as a threat, always perceived as dangerous. At the outset.

Ben Byford[00:20:45] Yep. I feel like we’ve painted a really bleak picture there, but I guess –

Damien Williams[00:20:54] There are some bleak elements, definitely. There’s some unfortunately damaging and potentially some really big downsides. But there are people who are working to do AI work, working to do algorithmic work, from a perspective that specifically interrogates all of the questions that we’ve been talking about. People who are trying to do work that says, “Okay let’s encode from the perspective of marginalised people. At the outset.” Let’s – Kanta Dihal does work on decolonising AI, and her work is fantastic, it’s really interesting stuff on how do we take the cultural perceptions of what AI is and dig down on those, deconstruct those and say, “What are we doing, as a culture, when we make these kinds of assumptions, and how do we make different kinds of assumptions?” and there are people who are doing work on the implications of autism, and neurodivergent populations, and building from that perspective when building out automated systems, and building out algorithmic implications, or instantiations. So you’ve got people working, people like Os Keyes and their work on specifically interrogating fairness versus justice, in the face of neurodivergent and disabled communities.

And so, we have people who are specifically trying to do this work, and say it’s not like the whole project of machine learning, the whole project of AI, the whole project of algorithmic systems is toast. It’s not that. The answer is, we have to do better. And in order to do better, we have to be honest about what we’ve done poorly so far. We have to be real about what’s gone wrong. And then once we do that, we can unpack it all, and say, “Okay what do we want to do different?” How do we move forward in a different way, in a way that doesn’t continue to do active, real harm to the people that it has traditionally done active harm to? How do we do this in a way that allows us to not just include people, but really, truly represent them? To really include them in a way that is systemic, foundational and fundamental, and really genuinely honours who people are in the world, and doesn’t just try to appropriate a perspective from over here and slap it into the coding of an algorithm, and say, “Okay well we’ve included that perspective, and now we move on.” But how do we actually, truly have something that is encoded from deeper principles, that include and account for and seek to do real justice for people who have traditionally been harmed by this stuff.

Ben Byford[00:23:54] So, let’s do that. I think from my perspective, this area has really blown up. So, I feel like maybe five years ago, it was completely unknown. A burgeoning thing, like automation was problematic, and we started to see some things happening in industry that made it a bit more obvious that that could be the case. And then a few years after – well, actually from then on, we had questions, we just had lots of questions. How can this go wrong? How is this possible to go wrong? What kinds of things go wrong? That sort of thing. Then we had all these principles. We spent about two years across the whole world making principles, for better or worse. And I’d like to talk to you about that. And I feel like now, again from my point of view, that we’re getting to the point where we’re having to make this practical implementation side, like okay well maybe we should think about this. In this new implementation, apply some machine learning over here, and it’s going to be for this thing, and these people are going to be impacted by this thing. Hopefully in a positive way, because, you know, why are we doing this thing in the first place? But maybe we haven’t thought about the impact on this other group over here, who you’re not… and all this sort of stuff, and I feel like… is that the kind of perspective that you see?

Damien Williams[00:25:32] Yes. Definitely the fact that, yes, more and more people have really started to dig down on this idea of how exactly do we deal with all the unintended consequences of the things that we thought we were doing right, and then we realised that we hadn’t accounted for, and we had to rethink and we had to redo. And that constant back and forth has definitely been the scope over the past few years, where people have built this greater awareness that isn’t just within the community, or just from people outside of the community looking in, going, “Hey what about this stuff over here?”, but the community and researchers external to the community, and the policy community. Politicians are starting to really get a sense of how all of this works, and what needs to be accounted for, what needs to be thought about, in the process of doing this work. And so, I do definitely think that this wider awareness is the constant thing right now. And I’m glad of it. Very, very glad of it. But at the same time, one of the problems of that wider awareness is the misperceptions that can come along with that, and the expectations that the fixes can be simple. So, you and I, and the people that you’ve talked to on this podcast, and the people that we’ve worked with, people who research this, people who’ve been in this community for mumbledy numbers of years, we know that these things, even when we’ve recognised the problem, that they require real, intense work. And it’s rarely just as simple as, “Whoops I forgot to code this thing over here, let me go plug that in and everything will be fine.” But, when you start to bring in perspectives of politicians who have no familiarity with this space, while I’m very glad that the politicians are interested in what’s happening in this space, because the implications of this work are definitely political implications, I am oftentimes dismayed at their level of unfamiliarity with what it actually takes to do this work well. Because it means that they’re asking the wrong questions. It means that when they go into a Senate hearing or when they go into a Parliamentary study or they go in to commission a group of people to do this work, that the questions they’re asking them to answer at the outset don’t fully encompass the nature of the problems that we’re facing.

And so, I am extraordinarily glad that more people are interested. I want for the wider population what I have always wanted for those of us more directly connected to this work, is for us to interrogate our assumptions. Ask what it is that we think we know, and what it is we do not know. And to then move forward and say, “Okay, with what I don’t know, who knows that? Who knows what I don’t? What perspectives do I need to bring in that I can’t account for? What kinds of questions should I be asking, and who’s living those questions right now? Who doesn’t need to ask those questions because they’re living them? Immediately, right, right now?” When I looked at that headline from Amazon, and I saw the new CEO going, we’re going to continue to work with law enforcement on facial recognition, so we can see whether they’re abusing it. The answer is, not whether they’re abusing it, the answer is reach out to the communities who have been actively telling you for the past ten years that they are definitely abusing it. It is happening. And people exist who can tell you exactly how, and to some great extent, why, it is happening. So, heed them. Actually take those perspectives in, so that you can really dig down on what it is you don’t know.

Ben Byford[00:29:57] Yeah, that sounds like the language of what I used to see a lot more of. And it used to be part of my talks on AI ethics and this area, where at the end of the talk I’d be like, it’s not good enough to go, “This is someone else’s problem,” or “I’m just making the thing, it’s not really my responsibility to use the thing, right?” And that’s never been a good enough argument. And this goes right the way back to the Manhattan Project, and all these man-made, big things that we’ve strived for and been able to achieve, and you are responsible for your own actions. And there are very few instances where you can get around that, and I think, as an organisation, it’s just vulgar to say that.

Damien Williams[00:30:51] Yes. Agreed. That’s absolutely right. I 100% agree with that, and you’re right. We used to hear a lot more of the, “Oh I’m just a programmer, I’m just a dev, I’m just a designer, it’s not my job to think about how people are going to use it on the back end.” But in that same way that you’re talking about, in the last five years or so people have started to realise that that’s not – that’s never been – a good answer. Like, the implications of what we do matter. And we need to interrogate as many of those as we can, before they come to pass. We need to show that we are at least willing to think about them. I think the last time we talked, one of the phrases I used was “the lag time”, between asking the question, realising we needed to do something, seeing the implications of that thing, and then turning around to fix it. There’s that gap in reaction time between each of those stages. And one of the things that I’m still very concerned about is shortening the lag time between those things. It has to – you can’t be fully prescient about any of this stuff. Stuff happens that we can’t foresee, because we’re doing so many things, round the world, as a species. So much is happening all the time, that they’re going to interact with each other in weird ways that we can’t expect. That’s fine. But we can reduce the amount of time it takes us to adapt to what comes out the other side. And in order to do that, we have to be thinking about what might go wrong, what haven’t I thought about. Right? We have to be asking what I haven’t considered. Because once that pops up, you’re not going to have a whole heck of a lot of time to really think about it. So you have to already be in a mode of, you know, something is going to be weird here. Something weird could always happen on the other side, and I have to be ready to think about that, to work with that, to adapt to that. And to possibly correct for it. If it goes very wrong. Again, we can never be prescient, we can’t ever have perfect knowledge of exactly what the work we do will look like on the other end, and no, that doesn’t mean we should just stop doing that work. It means we have to do what we can, as much as we can at the front end. And then be prepared to do more, at any point after we’ve put it out in the world.

Ben Byford[00:33:13] So on the cultural side, you do a lot of writing around this area of how we think about these sorts of technologies: robots, and AI, and consciousness, and how these things play out in the media, and this sort of stuff. So, I was wondering if you had a kind of headline thing that you’re working on at the moment? In that space? Because there’s a lot that we could talk about there.

Damien Williams[00:33:48] There’s always more. One of the major things that’s been really interesting to watch happen is the GPT-3 conversation, and talking about how we think about the kinds of outputs these systems can be trained to produce, and the larger ideas of language, and natural language. How do we think about AI and automated processes that work with that language? One of the things that I think about in that space is that people get very worried at that societal, cultural level about, “Robots are going to steal our jobs!” and the idea of what jobs are quote-unquote “worth having” for humans. And this idea that there’s always going to be some automated system that comes in from the background, and snatches up what is supposed to be, by rights, a human field of endeavour. That’s always been something that has animated our fears about AI, robotics, everything in that space. But one of the things that we’ve also seen, as that has started to happen, is that some fields are still capable of being interrogated, still capable of being discerned, as the work of an automated – robotic or otherwise – programmed system. We can see when something is too perfect in a particular way, machined too well, on the mechanical side of things. But also, we can see when it’s trying to mimic something that it gets the actual formal structure of, but it doesn’t get the full context of. And so we can still see in these places where human endeavour, the output of humans, is not really in any way endangered by this. And that we have this sense of real understanding of something that makes a piece of writing or a piece of craft interesting to us, but that animation of this fear is always still there. That this thing is going to be able to write a paper, or write an article, or write a book better than a human.

And it’s like, I don’t know that we’re anywhere near there yet, because it’s still always about what you train it on. You want to write what kind of book? Would you be able to train GPT-3 to write a better Lovecraft book than H.P. Lovecraft ever wrote? Maybe? It would be a weird process. You know, give it all the Aldous Huxley writings that ever existed, and have it write like Aldous Huxley? You might be able to do that, but is it going to be able to – at this stage of what it is, of functionally what it can do – be trained on the full breadth and depth of all of literature, of all of prose that humans have written in any language, and then synthesise that into something that is interesting, that is functional, that is compelling? We’re not there yet. We’re still at a place where a text that does that, if you go and you do just a simple distribution of terms and associations, you’re still in the place where it’s associating words on a gender-biased, normative scale. It still has that basic level of word2vec associations going on in the background that we’ve already seen very easily get gender and race encoded into them in a fundamental way. It still hasn’t solved for that. And if it still can’t solve for that, it still can’t self-interrogate to ask whether it’s doing that. Obviously a lot of humans still can’t self-interrogate to see whether they’re doing that – but I still can’t, as it exists, put it to the system, to say, “Hey you’re doing some weird misogyny stuff over here. Maybe don’t do that. Maybe don’t associate those terms in that way all the time. Maybe don’t think of the idea of what philosophy is conceptually as this sort of nonsense practice.” Because that’s the other thing that’s happened. It’s that when it’s told to write about certain things, it writes in a way that is pejorative to some disciplines, because the things that it has been trained on are pejorative to those disciplines. So, it still can’t fully be made to interrogate itself on those levels, yet.

So that kind of cultural animation of, “I’m afraid that we’re going to lose our jobs to these things,” it’s like you might lose some kinds of jobs, but those jobs are all going to be real rote, and anybody could have always done those jobs. And those are not the jobs – those are middle management jobs, those are the jobs that, you know, you’ve got somebody who’s worried about being a junior executive kind of job. That’s fine. But it’s not necessarily the kind of thing that we’re deeply concerned about. My overarching thing has always been that at a certain level of automation, we should be able to just let certain things be done in the background by any system that can and wants to do them, and then free up people to do the things that people actually can and want to do, more broadly. We have things that are just interesting to us as a species. I don’t mean things that apply to all of us as a species, I just mean things that we think are cool, and we want to be able to do. And as a species we should be able to do those things. And we’re tied to these ideas of the value of jobs, the worth of jobs. Work is interesting and fun, and keeps our lives meaningful, and being able to do stuff, but that stuff doesn’t necessarily need to be a job, it doesn’t necessarily need to be the thing that we literally depend on to live. It currently is, for all of us, but there’s still so much that happens in the space of automation, in what we fear AI might do, we’re still not talking about what it maybe could, what we hope it might help us do.

Why not take that space of AI being here to augment our lives? AI can help us think through tasks, can help us complete things, and then expand on that. And the answer is that currently most of the AI applications are controlled by people who want money. They want money and they want power, and so they keep those systems functional in those ways with those tools. But it doesn’t necessarily have to be that way. And this gets to that broader question of if we did ever get to that dream of an autonomous generative intelligence. A real conscious machine. If we came back around to that, what would we want it to be like? What would we want it to think like? What are the founding principles we would want it to have been built on, because if we build it on the founding principles of Amazon, Microsoft, IBM, Facebook, Google, those are principles that are going to very possibly paperclip-maximise. They’re going to try to build profit at any cost, because that’s what those founding principles are. But if they’re built on those marginalised perspectives that we were talking about earlier, if we think instead, not about that fear, but about that possibility, not about a kind of diversity-inclusion pastiche, but that real marginalised representation, and we build an AGI from that perspective, we say, “What lived experiences need consideration, need justice, need representation, need understanding in a way that hasn’t been possible before now?” That foundation, that scaffolding for a potentially conscious mind is a drastically different thing than something that’s built on the idea of maximise profit at all costs. And while again, I still think we’re not quite there yet – we’re not at the AGI place yet, and a lot of people as I said have unfortunately abandoned that goal – the landscape of the process is such that whatever does get built, if we come back to the training, it’s going to be built off the work that we’re doing now. The kinds of processes that we’re thinking about now. And so I would very much like for us to think about what values we as a culture are encoding into the automated applications that we are doing right now. What it is we are afraid of, and what it is we can hope for in that space.


Ben Byford[00:43:43] Yeah. I’m trying to think of a way. There’s quite a lot of stuff in there. I’ve got this vision in my mind that we’re all doing things now which are in a positive direction, more inclusive, and more thoughtful. Respecting human dignity and all this sort of stuff when we’re creating these things. And in the future it would actually be hard to think about making a system that wasn’t like that – that would be nice. “Oh, yeah, we made it like that because that’s how we make stuff, right? We don’t make stuff to optimise this one metric which is cash, right? That would be silly, why would we be doing that?”

Damien Williams[00:44:32] Why would we do that? Yeah, it would be nice to get to the place where that is the forerunner. And I do think that more people are actually starting to think in terms of that kind of wider and deeper, kind of values level. Rather than just, maximise efficiency, maximise output of cash. But I still think that so many of the corporations that do fund this work, so many of the groups that are at the back end that commission the designs of these tools, are still putting them to the purpose of maximisation of cash and efficiency. And I think that that’s a problem, because one of the things that I’ve struggled with and one of the things that I really try to hammer home is the way that we then implement these tools – if they can learn from themselves, from their environment, from the world, if they are in fact self-adapting, if they’re capable of improving themselves as they function, based on what they take in, what they can discern, what they can then eliminate, if they can move through this discriminative process – then they are going to learn, not just from what they’re encoded with, they’re going to learn not just from their initial dataset, but they’re going to also learn from how they’re implemented, how they’re put out into the world. How they’re made to function once they are in the world. And once they are in the world and they are functioning in a particular way, then it comes back to that facial recognition thing, right? If I point that facial recognition software at particular communities and tell that facial recognition programme, “Hey, these particular communities tend to be more criminal,” then if the facial recognition software is truly autonomous, if it gets to a place of truly being able to learn, the assumptions that are encoded into it are going to meet up with the implications of its implementation – how it has been told to function, where it has been told to operate, what kind of things it will tag as suspicious – it gets to the point where any black or brown body, regardless of what it is actually doing, is automatically tagged as a potential criminal.

And that kind of foundational problem, that baseline issue, is still there. And again, if we come back to that place, if we’re going to make a mind, we’re going to make an intelligent machine, a conscious machine, or even if we accident into it, even if we don’t intentionally do it – we create something that is complex enough and capable of self-reflection enough, and capable of self-determination enough, that it has to be considered conscious – then ultimately what it has been trained on, what it has been predisposed towards, what it has been given to think like, and to think about, to work with, is going to form the basis of how it understands the world. And we’ve got to be really careful about what it is we have given it as tools for understanding the world.

Ben Byford[00:48:03] That’s a really good point. There’s an episode I did with a wonderful lady – I can’t remember her name, either, because I’m terrible with names – Julia Mossbridge.

Damien Williams[00:48:20] I love Julia. Julia’s fantastic.

Ben Byford[00:48:22] Julia is great.

Damien Williams[00:48:23] Julia and I know each other very well. We’ve worked together on a number of things.

Ben Byford[00:48:28] And she was talking about this idea that there’s this mothership kind of thing, not a spaceship, but you know, you’re mothering. You’re teaching it and, like you said, you can teach it in a way that you’re directly giving it the capabilities, right, which is a technical thing. But then you’re also showing it the world, you’re saying, “Let’s go, let’s do stuff, let’s do the useful stuff, let’s not like explode the world,” or whatever.

Damien Williams[00:48:59] Yeah, exactly that. And it’s the difference between giving it the capabilities to perform versus showing it how to then use those capabilities – how to actually engage the world with its talents. And that’s different. Those things are both very drastically important, because if you give me the innate skills to understand math, chemistry and physics, and I then decide that I want to put those towards weapons-making, that’s very different to if I decide I want to put those towards free energy and the most efficient solar panel that I can create. Right? Those are two very different implementations of the same basic skillsets.

So, I have to be shown an example that I want to follow, that I’m given to understand is good. And the implementation of those things, the society in which I’m steeped, the culture in which I’m steeped, is in many ways that example. We can push against the culture that we’re born into, obviously, but the more we’re steeped in it, without ever being given cause to interrogate it, without ever being given cause to question how we’re raised, the harder it is for us to push back against it. And we see the implications of that in humans all the time. The older people get without ever having had their worldview questioned, the harder it is for them to break out of that worldview if they are pushed against something that says, “Maybe don’t work that way, maybe don’t hold with the ideas that say that women are less than, or people of certain genders are less than, or people of certain ethnicities are less than. Maybe don’t react that way. Maybe that’s a bad way to be in the world.” The older someone gets, the more steeped in their environment they get, the more – on all sides, even at a young age – that is simply their world, and the harder it is, when some countervailing evidence comes along, to include that evidence, to consider that evidence. And in fact, studies have shown that what happens is that people actually close themselves off to any countervailing evidence, the longer it goes on. They say, “No that can’t be real,” they ignore the reality of it. And we’ve seen very recently, in the United States, what that looks like when that happens on a wide scale: when countervailing evidence says, your position has no tenability, no basis, it’s just a string of cards that were holding each other up by their own weight, there’s no underneath here, it’s a system that hangs itself on itself, here’s some countervailing evidence that maybe can help you reframe that, the response is, “No, and not only are you a liar, but you are a conspirator, in this vast conspiracy against me and mine.” And then once they’ve done that, once they’re in that space, the possibility of getting them out of it gets ever harder. It’s not impossible, but it is ever harder. Because any evidence against the conspiracy is immediately subsumed by the conspiracy.

I don’t know about you, but I don’t want AI that thinks like that. I don’t want a conscious machine that refuses to consider countervailing evidence, and reads any and all evidence against its position, against its understanding that paperclip maximisation is the best thing for humans, bar none. It says, “Any evidence against my position is automatically wrong, it’s part of the conspiracy against me, and my paperclip-maximised heart.” That’s not how I want it to live, but currently the culture in which it will be implemented is a culture that very easily tends in that direction.

Ben Byford[00:53:16] Yup. So, Damien, thank you very much for your time. We’re getting towards the end now, because I can hear little feet and things in the background. That’s kind of my alarm bell. If you could answer as succinctly as possible, the last question we have on the podcast is: what scares you about this technology, and what excites you about this automated future that we have?

Damien Williams[00:53:44] Okay, what scares me about this is all of the potential downsides that we’ve talked about so far. The exacerbation of bias, the exacerbation of injustice, the continual, uninterrogated assumptions about what the quote-unquote “right way” for a person to live is, that’s being given to all of these systems. And then those systems iterate on all of those assumptions and expand them out, and make them something that we never really anticipated. That scares me. And it’s happening now, and it’s only likely to get worse if we don’t really dig down and try to fix it. And understand what it is we need to fix.

What excites me about it, what gives me hope, is the fact that there are people who recognise this, and want to do that work of digging down on this. The fact that there are people in high levels of the United States government, as of this year, who understand these things, like fundamentally recognise the implications on society of technology writ large. Now that’s something that we haven’t ever really had before. In the Office of Science and Technology, that was always something that was focused towards STEM; it was never really focused towards social sciences, so it never really took the perspective of, “Okay, how do we talk about sociology in this framework?” But we have that now. We have people who think about how human life, human values, human society impacts the technology that we make and vice versa, and that gives me great hope. And I’m hopeful that we will be able to take that, that beginning crack, and widen it and build on it, and build something that allows us to really open up this conversation, and do work differently in this space.

Ben Byford[00:55:38] Wicked. Thank you very much for your time again. It’s been a joy to speak to you as always. And if people want to find out more about your work, follow you, contact you, how do they do that?

Damien Williams[00:55:53] The best way to find me online is either on Twitter – I’m on Twitter as @Wolven – or at my website afutureworththinkingabout.com, and from either of those places you can find links to contact me, you can find a newsletter, all kinds of stuff that hopefully you can take some time to read.

Ben Byford[00:56:14] Wicked. Thanks for your time.

Damien Williams[00:56:15] Thank you Ben, you’re welcome.

Ben Byford[00:56:18] Hello, and welcome to the end of the podcast. Thanks again to Damien. It was really lovely talking again. And if you want to check out our first chat, please go to Episode 24, which was the AI Retreat, back in 2018, I think. Also, we mentioned Julia Mossbridge. You can check out her interview on Episode 30. We’re looking at more stats at the moment, so as the podcast is completely self-hosted, I’m having to do all that sort of stuff, and all the different stats are very convoluted and different from different platforms. I’m trying to get a handle on all the things, and do some programming myself on the server side, to work out what’s going on. On the basic level, we know that we get around about 1,000 subscribers, or we have around 1,000 subscribers, which is great. So, hello to you. If you’d like to support us some more, go to patreon.com/machineethics, and thanks again for listening, and obviously like, subscribe, all that sort of jazz. Thanks again and I’ll speak to you soon.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design, Ben and the team help organisations make better AI decisions, leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford