91. What scares you about AI? Vol.2

This is a bonus episode in which we look back over answers to our question: what scares you about AI?
Date: 15th of July 2024
Podcast authors: Ben Byford and friends
Audio duration: 09:11 | Website plays & downloads: 89
Tags: Bonus episode | Playlists: Special edition

Sarah Brin worries about economic growth being prioritized over labor rights. Ryan Carrier fears the concentration of power in the hands of a few. Roger Spitz highlights rapid technological progress outpacing our awareness of it. Ricardo Baeza-Yates discusses claims of machine consciousness and the lack of emotional intelligence in AI. Nadia Piet warns about big tech driving the narratives around AI. Mark Coeckelbergh fears totalitarian uses of AI, while Marc Steen focuses on the climate crisis. Wendell Wallach is concerned about poor decisions by a few and our misattribution of intelligence to AI systems. Alex Schwarzman points to copyright issues in generative AI. Lastly, Dr. Marie Oldfield sees AI as a challenge with potential benefits, advocating for its ethical and political management.

Thanks to our guests who made this episode possible.

Find more special edition episodes here.


Transcription:

Transcript created using DeepGram.com

Hi, and welcome to the 91st episode of the Machine Ethics Podcast. In this episode, we have another bonus volume of answers to one of the questions we always ask our interviewees: what scares you about AI? You'll hear from 10 of our interviewees recorded over the last year or two.

And in this order, you'll hear from Sarah Brin on companies' economic growth over everything else and labor rights. Ryan Carrier on power in the hands of the very few. Roger Spitz on the velocity of technological progress and our awareness of it. Ricardo Baeza-Yates on consciousness and the lack of emotional intelligence. Nadia Piet on narratives of AI.

Mark Coeckelbergh on totalitarianism. Marc Steen on the climate crisis. Wendell Wallach on a small number of people doing stupid things and mistakes of cognitive attribution. Alex Schwarzman on copyright issues with generative AI. Rounded off by Dr. Marie Oldfield on making a better world.

If you enjoy this episode, you can find more at machine-ethics.net. You can contact us at hello@machine-ethics.net. You can follow us on X/Twitter @machine_ethics, on Instagram at machine ethics podcast, and on YouTube at machine-ethics.

And if you can, you can support us on Patreon at patreon.com/machineethics. Hope you enjoy.

And, yeah, the stuff that scares me is, you know, we've seen a lot of tech companies that prioritize economic growth above everything else. And so why would the approach to AI be different?

Which is why I've been outspoken about labor rights and protections for individuals whose work might be impacted by AI itself.

Well, I'm still scared about the power and control being wielded by some. You just mentioned GDPR, and GDPR is an excellent law for protecting data. Here's my problem with it: I don't think anyone in the world is compliant with GDPR.

Not one entity. So it's great to have the law, but until we're all complying with that law in a robust fashion, I still have fear. Right? I still have concern about how data is being used and appropriately protected.

What scares me is indeed, as you say, things we've touched upon, which is that we can't ignore the trajectory and velocity of machines learning fast and what that means.

And we can't ignore the way our current educational systems, government structures, and incentives are driving the outcomes of the world, and are relying on a stable, predictable, and linear world, which is not the one we live in. If we add to that some of the drivers of change, which are self-reinforcing and which are kind of more on the dangerous side of the spectrum of neutrality, you're moving towards a humanity which is potentially less and less able to keep the clock from midnight, if you take, you know, the midnight clock kept by our friends at the atomic agency or whatever, and it's currently very close to midnight. And, you know, what scares me is how little it can take, especially in unpredictable, complex, nonlinear environments, for anything to overflow. And I'm including in that even society, even the US. You know?

The difference between the US and Venezuela, I don't know how much, or the UK, or anything. Or nuclear bombs or other events. So really, it's that duality, the paradoxes, the tensions, and all that. And the starting point is kind of the awareness of some of the things we're doing; then we need to have the agency to do something about it. But one of the reasons we're writing what we're writing and doing the programs we're doing, and why I got interested in these topics, is also because I feel that there's not sufficient awareness around these topics, whether it's decision makers or simply the 8 billion people on earth.

And if you're not aware of them, you're that much less likely to do something about it.

Now, what scares me is exactly the opposite. It's having very good technical people that don't have too much emotional intelligence. And they're saying, for example, I think it was in the Economist, a very senior Google fellow, that you can see steps to consciousness in language models. And language models don't understand what they read, and they don't understand what they write.

So this is, like, I don't know, wishful thinking. I don't know exactly what the difference is with the human brain, what consciousness is about, what is real about consciousness. Not even neuroscientists know what consciousness is. And yet we will try to decide if machines can be conscious or not.

So what I hope will not happen is this very big, like, monopoly that some big tech companies are having on AI, and that being the main driver and narrative.

Because from that point of view, the systems that do come into our lives, even if they work flawlessly, like, computationally, they will just have values embedded that will drive us, like, more apart and more out of ourselves and out of each other. Like, not in a, you know, a future I want.

What I'm really afraid of is that these technologies will be combined with the worst of human tendencies and risks. And, you know, politically, that is, I think, authoritarianism and totalitarianism. And so if that combination happens, then I'm really worried, because then we have these powerful tools in the hands of that kind of regimes, and the kind of people who also want to turn our liberal democracies into that kind of systems.

So that I'm really worried about, but I think we don't need to be hopeless or desperate about this, and just, you know, work towards strengthening our democracies and also making sure that the technologies like AI and robotics we develop, yeah, are developed in a way that's both ethically and politically sustainable.

Yeah. What scares me is many of these big industries that exacerbate the climate crisis. I don't have to spell them out, but you can think of the fossil fuel industry, etcetera, etcetera. Yeah.

That reasonably scares me.

My concern is that we will never get a handle on managing these technologies, and they're gonna be applied willy-nilly. And they're gonna be applied in ways that serve the interests of small elites at the expense of humanity as a whole. So it's not like I sit here fearful of artificial superintelligence. I'm less concerned with artificial superintelligence than with forms of intelligence that humans attribute more intelligence to than the systems really have.

And therefore, they put them in positions of making critical decisions.

So I think that there's a tremendous amount of potential in AI technology that is outside of the realm of art. Right? Like, the problems that we're having with it kind of encroaching on the art world are primarily ethical problems. They're problems with how the models were trained, and that, you know, there's a strong argument to be made that anything that it spits out is, in fact, not copyrightable anyway.

I'm not scared, because I see it as a really good challenge. I see it as something where there's a lot to be done in terms of technology, but I think that there are a lot of benefits from it as well. So we shouldn't be scared of it. We should move forward and see it as something that we can use as humans to, not necessarily improve our lives, but to our benefit. And we should look at these challenges and we should address them, and then be able to move forward into a world that is slightly better than the one that we're in now.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design, Ben and the team help organisations make better AI decisions, leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford