88. AI fictions with Alex Shvartsman

This episode we're chatting with Alex Shvartsman about what our AI future might be, human-crafted storytelling, the backlash against generative AI use, disclaimers for generated text, human vs AI authorship, practical or functional goals of LLMs, changing themes in science fiction, a diversity of international perspectives and more...
Date: 14th of May 2024
Podcast authors: Ben Byford with Alex Shvartsman
Audio duration: 35:22 | Website plays & downloads: 83
Tags: Cultural Industries, Author, Stories, Science Fiction, Sci-fi | Playlists: Science Fiction, Creativity

Alex Shvartsman is the author of the fantasy novels Kakistocracy (2023), The Middling Affliction (2022), and Eridani’s Crown (2019). Over 120 of his stories have appeared in Analog, Nature, Strange Horizons, etc. He won the WSFA Small Press Award for Short Fiction and was a three-time finalist for the Canopus Award for Excellence in Interstellar Fiction.

His translations from Russian have appeared in F&SF, Clarkesworld, Tor.com, Analog, Asimov’s, etc. Alex has edited over a dozen anthologies, including the long-running Unidentified Funny Objects series.

Alex resides in Brooklyn, NY. His website is http://www.alexshvartsman.com.


Transcription:

Transcript created using DeepGram.com

Hello, and welcome to the 88th episode of the Machine Ethics Podcast. This episode, we're talking with Alex Shvartsman. This episode was recorded on the 9th of April 2024. We chat about what our AI future might be, disclaimers for generated text and the backlash against generative AI use, human versus AI authorship, practical or functional goals of LLMs, changing themes in science fiction, and much, much more. If you like this episode, you can find more at

machine-ethics.net. You can contact us at hello@machine-ethics.net. You can follow us on Twitter @machine_ethics, Instagram @machineethicspodcast, and YouTube @machine-ethics.

And if you can, you can support us on Patreon at patreon.com/machineethics. Thanks very much, and I hope you enjoy. Hi, Alex. Thanks very much for joining us on the podcast. If you could please introduce yourself: who you are and what you do.

Hi. Thank you so much for having me. My name is Alex Shvartsman. I am a writer, translator, anthologist, and editor from Brooklyn, New York, and I am here to talk about an anthology that I recently edited and published called The Digital Aesthete. It's human musings on the interaction of AI and art.

Awesome. So the first question we always ask on the podcast, before we dig into your book and some of the different ideas that people have run with in the anthology: for you, and I guess for some of their stories, what is this notion of AI and artificial intelligence? What is that for you? So, like most people who are involved in creative endeavors — artists, writers, etcetera — I became primarily aware of the leap in large language models about a year and a half ago, when Midjourney launched a much, much better image generator than what was previously available through DALL-E and other sources. And their images actually looked good.

And so there was this moment of initial excitement very quickly followed by, oh my god, what's going to happen next? Are we all going to be out of a job? We've all envisioned this wonderful future where robots would sweep our floors and cook our meals, and we would sit there and paint and draw and write. But now the robots are painting and drawing and writing, and we're still sweeping and cooking our meals.

Right? So there was a lot of anxiety stemming from that. Mhmm. And there were also a lot of people not really understanding the technology and, in some ways, misdirecting their anxiety. So I thought it would be really interesting to ask some of the best writers in the industry to provide their takes on what the interaction between art and AI might look like.

And I wasn't limiting it to current technology, although some writers did write about generative models and things that we're more or less seeing today, with very near-future implications. Others were envisioning true AIs — truly intelligent artificial minds — and how they would interact with art. So it was a really interesting experiment, and it not only allowed me to learn a lot more about the technology and gave me incentive to stay updated and follow all the developments, legal and technological, but also to get all of these perspectives and put them together and share them with interested readers. So it was a really rewarding project in many ways. I was just looking at the book, and I was interested to see whether the cover was AI.

And I just looked, and it isn't. This is an illustration. Right? Oh, it is definitely an illustration. That was one of the very important things to us.

So there's a tremendous amount of pushback in publishing against any version of AI being used in pretty much any way whatsoever, and especially covers, because of the ethical concerns of how those models are trained. So this is actually a cover that is digitally drawn by an artist in Russia named K.A. Teryna, who is a writer in her own right. She was one of the people I asked for a story for this, but, unfortunately, she was not able to fit that into her schedule; she did, however, produce wonderful cover art for us. So all the stories and the artwork and everything else in this book is human crafted.

Wicked. And there are a few stories that have a little bit of a strapline at the end to say, this is all human created, but there are snippets of ChatGPT or whatever in there. And they're specifically calling those things out when they're used, so that they don't take the copy— I think one of them says, like, I don't want to take the copyright for that section of the text specifically. So, that actually — there's only one story like that in the book.

And this is a story by Ken Liu, who is a wonderful writer and technologist who works a lot on the interaction of tech and art. So he was the perfect person to write a story for this. What he did is, in his story, he needed a very small snippet — maybe a couple of hundred words — of generated text, because it is literally generated text as part of the story. And because of ethical concerns and other concerns, (a) he insisted that he not be paid for that part of the word count, and (b) he insisted that we put in the disclaimer, which I certainly had no issue with.

So it's just that one story. And in that case, it's not a matter of a writer trying to cut corners and fool the readership by having the chatbot do some of their work, but an intentional choice, because in the story there was text being generated by a chatbot — essentially, by a large language model. So, yeah, I thought it was an interesting way for him to handle that and to call out exactly how this would be handled: to say, no, I don't own the copyright on the text being generated by this nebulous thing. Mhmm.

So, yeah, I think he handled it very, very well, and I'm glad that you kind of zeroed in on that. Yeah. Yeah. As a reader, it was interesting to come across that. And it's like, oh yeah, that's a nice aside, because it obviously interacts with the story you were just reading.

You mentioned that the publishing industry is having this kind of reflective look at how it interprets these sorts of technologies at the moment. Do you get a sense that anyone has got a handle on what the route forward is? Or is this anthology kind of a look at how we could explore that, in a way? So, in a way, it turned out to be more about writers exploring their feelings on the matter. I don't think that they have the arrogance to say, these are the solutions.

This is what we should do going forward. They just explore different angles and different scenarios where technology like that is interacting with art. Mhmm. As far as the publishing industry is concerned, like all human beings, we're very, very, very bad at adapting to new technologies. I mean, look at social media.

I mean, it's 20 years old now, and we're still very bad at it as a society. You know? It's really shaking us up as a society, and we're having a lot of trouble figuring out what the proper and healthy way is to interact and deal with it. Now these LLMs, which we're kind of incorrectly calling AI — but whatever label we want to put on them — those technologies are very new. And so I would expect that it's gonna take us decades again to sort out the ethical implications and the procedures.

And, of course, we're gonna have so many different jurisdictions with different laws, where in some places it's gonna be illegal to train models on copyrighted materials, and in some places they'll say, well, you know, you don't break copyright by training these things. So all the tech companies are gonna have to do is just go to the locations where these things are legal, which is actually something that was extensively covered in the 1980s cyberpunk books. If you read some of them, like Islands in the Net and some of the other books from that era, this is exactly the sort of thing they were talking about: how these companies would go and find digital havens where they would be able to do whatever may be considered illegal in the United States or the United Kingdom and in other places. So we are actually seeing this being developed now in real time, as people in different countries — and even different states and cities — try to wrap their minds around what should be done with all of this.

Yeah. Yeah. And I guess it's less kind of punky and more like just big industry. Right? Like, big players having to do these things.

Well, cyberpunk has always been about how the little guy is interacting with these giant corporations. Right? This is the game, and it's exactly what we're seeing. It's the little guys — the artists, the writers — who are seeing their work getting scraped off the Internet, scraped from books, etcetera, and these models are being trained on it, and nobody asked their permission or whether they were at all happy with it. But now it's kind of a done deal, and you have lawsuits and you have various concerns.

I don't think the lawsuits are gonna go very far personally, but I do think that public opinion is gonna have a lot to say about it. A few very big publishers got, quote unquote, caught with book covers that were either created by AI or significantly manipulated by AI. And there was so much backlash against them, even though legally they may not have done anything wrong by current laws. Yep. They substituted those covers for real art, because the publicity was bad enough among the potential readership that they felt the need to act.

Yeah. Yeah. But it feels like one of those things where it's almost a "good or bad, publicity is still publicity" sort of situation. So I don't know if that negatively affected their sales at that point, because... It may not have.

It may not have, but it certainly did push them to make the decision that the community at large wanted. Yes. Right. Right. Market forces definitely do work in that regard, because if enough people have a strong opinion about this thing, then big companies will listen, because at the end of the day, they're profit driven.

They want their customers to consume their products. And if people are saying, I'm not going to buy a book that has a cover that's generated by a language model — Yeah — then those publishers will listen. And at the end of the day, the $500 or so that they're saving is probably not gonna be worth it. Yeah. Yeah.

100%. And in the introduction to the book, you mentioned that the LLMs, the large language models, are really good, but also that they're not quite there yet. Do you get a sense that at some point they may be there? And I guess at that point, are we going to be, you know, subsumed? Are we gonna consume that sort of media?

Or do you think that it's more important that we are the authors of the things that we consume, or that there is this kind of human authorship? It feels like a big question, but you kind of start down that road in the introduction, and I just wondered if your thoughts have changed on that one. So, it's a very complex question, and I think, undoubtedly, these technologies are going to get better, and you're gonna be able to do more with them. Right?

There's no chance whatsoever that we're gonna put the genie back in the bottle; just like any other new technology, it's gonna get progressively more powerful and stronger. Having said that, at the very basic level, this is still a souped-up database. It takes a whole bunch of data points, and it spits out a paste generated from them. And so there is no creative element to it. And I feel like for people who are artists and writers, who are trained — by many years of experience, or sometimes by school — to think about storytelling, to think about creating artwork and positioning everything in their piece, etcetera, it's pretty easy for them to identify when a piece is artificially generated.

So if I read a bunch of text, if somebody sends me a submission for one of my anthologies and it's chat-generated nonsense, I can tell pretty quickly. Right. And I don't know that even with a lot more computing power and a lot more data to pull from, it is actually possible to make that leap. I think that you may need, like, a true AI — if we ever get to that point, which we very well might, and maybe even soon.

Right? If we truly get to the point where we have an AI that's capable of its own original thought, that AI should be able to produce real artwork and real stories, and it may write better stories than I can. And if it does, then I certainly wouldn't blame the readers for choosing to buy the books that it created rather than books that I've written. Right? If they're better, if they're more interesting.

Yeah. But I think that that point is pretty far away. And I think that at the end of the day, art — and again, when I say art, I mean everything: books, music, painting, all of it — is so subjective that you may absolutely love a book that I will hate, and vice versa. Mhmm.

And so even if you have this powerful AI mind arise and produce a thousand books a minute, and they're all good, there are still going to be people who won't like them, and there are still going to be people who may prefer books that are written by their favorite author, or by a new author they discovered and just happen to dig because, stylistically, they happen to match the reader. So I don't think that true creatives are going to go away. Many years ago, when the camera was first created, it was sort of an apocalyptic event for artists, because most painters at the time made their bread by painting portraits of families and individuals for people who had enough money to commission such things. And that was a tremendous, tremendous percentage of how artists derived their income.

Now very few people these days have their portraits drawn. Right? Like, I mean, it's either presidents, or people of enormous means, or just people that an artist chooses to paint because they want to. Right? Yep.

But art certainly has not gone away. And so I think that art in general has survived the invention of the camera, it has survived the invention of Photoshop, and it will survive the invention of AI. But there will be pain, because a lot of people will have to retrain, just in the same way as the artists who drew portraits probably had to retrain and do other things and find other ways to earn a living.

But as an art form, I don't think it's going away. Right. So it's maybe taking away some of that work for the people who are doing perfunctory writing — the writers of car manuals, for example. Their work is going to be consumed by the next LLM, stuff like that. Whereas the novelists, or I guess the more creative products, are gonna stick around.

In a way. Certainly, it's much better at writing nonfiction. So if you want to write a form letter, you can certainly have a language model spit it out, because this is the kind of thing they're very good at — they've been trained by having read a hundred thousand form letters just like it, and so they can generate one. And then you'll still need to modify it. Right? Because, as I wrote in my introduction, one of the worst things about these models is that their goal is not to be accurate.

Their goal is to approximate what you're looking for, and so they will lie very confidently. And you have to be able to then go and edit out those lies and put in the information you actually want in there. But, yes, writers of technical manuals and translators of technical texts are in trouble. People who translate fiction are not in trouble at all, because let me tell you, these models are terrible at subtext. They're terrible at anything where the meaning is not crystal clear, where the meaning may be intentionally or unintentionally hidden by the text. These models fail really, really hard at that, and that's where you need a human translator to actually look at the nuance.

And I don't think that's going away anytime soon. I feel like I have a lot of job security. Right. You've been reading, editing, and writing science fiction for, like, decades at this point. And I wonder — because there's a lot of AI and that type of thing in science fiction, from all the way back; even aliens and that sort of thing could be seen as this other intelligence as well, if you wanna put that into the ring.

But have you seen a change in how people are writing about things over the last, maybe, 10 years, as we've had this kind of new wave of AI — the kinds of things that are coming in and the types of worries that people were having when they're doing their writing? Absolutely. So science fiction, at its core, is not about imagining the new technology. It's about imagining how that technology is going to influence the life of human beings, or alien beings, or whoever the characters in your story are; it's a commentary on what this technology will do to us or for us. Right?

And so we absolutely write our own insecurities and fears and concerns into the story. One of the very easy things to see right now — because, of course, books and films and TV shows are a few years behind the actual technology because of the production cycle; it takes time to conceive of the story, write it, and then get it published, and that's a very slow-moving machine — is that if you look at stuff that's being put out today, invariably one of the most common villains in the story is a billionaire. Right?

I mean, if you read science fiction from kind of the golden age, a lot of it was, here's a plucky industrialist who is bucking the establishment and doing all the good things, while the stupid bureaucrats are just meddling and failing to save humanity or advance humanity while this capitalist is able to do it. So that narrative has changed a lot because of how we perceive the richest among us. And so we may be seeing the same thing. Not that we've always seen our AI as positive or safe characters. I mean, it's always been the enemy of humanity in a way, from 2001: A Space Odyssey to the Terminator.

Right, to The Matrix. So that trope has always been with us. But we're certainly going to see, I think, a more nuanced exploration of just how exactly these technologies are influencing us today. And because short stories are so much easier and faster to write than novels, and because the production cycle for anthologies tends to be quicker, a book like The Digital Aesthete may actually give you a preview of what sort of novels and what sort of characters and interactions between human and artificial minds you're going to be seeing in stories going forward. Mhmm.

I think what struck me is that they're not completely — well, there are some very negative ones in there, right? But most of them are kind of gray in their exploration. You know, here's the future, and this thing happened. And it might be kind of amusing, or it could be very horrific, or it has consciousness and it's kind of fine.

There are quite a lot of different takes in there. Do you have an opinion about — I don't know — your own personal feeling about how this will go? I'm a tech optimist. I'm not one of the people that thinks that AI is going to, you know, take over the world, kill us all, etcetera, etcetera.

I think that we, the human beings, are the greatest danger to human society that exists, because we are the ones with our fingers on the proverbial button. Right? Like, I mean, we can use technology to, unfortunately, do a lot more devastation today than was ever possible in the past. So that's where the danger stems from. I think that with technologies like this, there's going to be a lot of friction and strife as we adapt to them.

Like, again, look at social media. I think that in the long run, we will either grow past it or, more likely, adapt it into our lives in a way that is considered by future generations to be healthy and reasonable, and not causing high rates of suicide in teenagers and all sorts of problems that we're seeing today, whether real or perceived. Yeah. So I think that this technology, like any other, in the long run is going to be a net positive for us. As far as the selection of the stories in this particular book, as an editor, one of my goals was not to become too much of a — yeah, I'm just gonna drill my personal viewpoint, or any one particular viewpoint, into the reader.

I wanted stories that approach this from every possible angle. I wanted some funny stories. I wanted some positive outlooks for the future. And, certainly, I was not going to say no to some very grim and negative ones as well. And I think that's the best way to look at any problem: to get a variety of viewpoints that compare and contrast.

And most writers are not very happy writing a story that is just really, really one point of view, where it's clearly, this is the right way. They want those shades of gray that you mentioned, because that's what makes an interesting story. Otherwise, you're just gonna end up writing something that's kind of boring. Yeah. Yeah.

Yeah. Yeah. Or, like, too tropey. Right? Yeah.

Something that we've all seen before. You know, I always complain — when we talk about storytelling, right? — I always complain about Superman as a character, because he's the most boring character ever invented.

And the reason is, at any given point, in any given situation, you know exactly how he's going to react and what he's going to do, because there is no shade of gray to that character. He's just a good guy. And so I, for one, am not very interested in characters like that. I want characters that are complex, that will struggle to make decisions, and that will grow over the course of the story. And so you don't see as much character growth in short stories because, by nature, they usually cover a much briefer period of time — sort of a defining moment for a character rather than a lifetime, or at least a significant period of time.

You see more of it in novels. But I certainly want that complexity. Yeah. And do you see this turning into the next anthology? I mean, it feels like there's so much creative possibility in this AI space.

Do you think it's something that you will revisit or look at again in your own work? Absolutely. I think there's a strong possibility of revisiting it. For a number of years, I've edited a magazine titled Future Science Fiction Digest, and I certainly noticed that I published a disproportionate amount of stories that dealt with artificial intelligence. Not necessarily art, but stories about AIs and robots and things like that, more so than aliens in space.

So I certainly have a weakness for it, which probably made me the appropriate editor for this anthology. And so, yes, I absolutely see the possibility of doing more with that in the future. Mhmm. But, of course, you never want to do too much of the same thing. So I am working on other projects for the moment, but I may well revisit it.

And, you know, there's also a business element to it as well: the better the book sells, the more likely you are to do some sort of a sequel to it. So I noticed that there are lots of different international authors in the book. How did that work, and how did you get in contact with all those people and ideas? So, to me, you want perspectives that are different, and there's so much wonderful science fiction being written around the globe.

I mean, there are thriving science fiction writer communities in China and Russia — wonderful, wonderful writers across the globe. And so whenever I'm working on anthology projects, I try to make sure that I don't just publish the same 20 North American or British authors. I want to reach out to those people, and there's a lot of extra work, because often you are working with translators. Often you are working with people who cover science fiction in those languages — critics and award administrators who are reading all the works — and just kind of having conversations. And I'm asking, well, who are some of the most exciting, interesting people writing short fiction in your language?

What works of theirs have been translated? What can I look at? But, you know, I don't speak all the languages. Right? So it's much harder for me to source a Chinese story than, let's say, a Ukrainian story.

So for this book, I made sure, first of all, that we published several Ukrainian authors, because I was born in Ukraine, and so I support them. I want to support the creators and get them paid for their work and support their message. So it was very exciting. We were able to publish three stories from Ukrainian authors in the book. But we also included fiction from places like Sri Lanka and Argentina and Madagascar and China — all sorts of places around the world.

And I think that the book is much richer for it, because you don't want to just see the world through the prism of, sort of, first-world North American Western culture. You really want those different perspectives. And anytime we can get them, it almost invariably enhances the overall book that you end up with. Thank you. And I think it really benefits from that.

I think part of our collective work is, obviously, to have more people in the room when we're creating these things, when we're using them, when we're making decisions about how we use these tools and get excited about them. So it's only right that we also include everyone in the conversation of imagining these different futures and reacting to what's going on, whether it's in art or, to be honest, in business. I think there's been a big push for that to be the case, definitely in my world anyway. So, one of the last questions we always ask on the podcast — I think you already said you're a technology optimist.

But what are the things that excite you about AI and our technology-mediated future? And what are the kinds of things that are negative and scare you about that? So, I think that there's a tremendous amount of potential in AI technology that is outside of the realm of art. Right? The problems that we're having with it kind of encroaching on the art world are primarily ethical problems.

They're problems with how the models were trained, and, you know, there's a strong argument to be made that anything it spits out is, in fact, not copyrightable anyway. So if you move to other parts of life, to other areas of exploration, then this technology could be truly, truly life changing. You know, you have AIs now scanning test results. Instead of a human eye, they're potentially better at detecting cancerous growths on MRIs and scans than the humans are. So those are the kinds of technologies that we can really use the computing power for.

Right? Like, this is the kind of use that I think all of us can agree is a net positive. Right? It's still gonna cost jobs. Right?

Because if we determine that it's doing a better job than the techs, then fewer techs will be hired for that particular part of the process. But at the end of the day, if it saves human lives and expands human lifespans, then who can argue with the outcome? Right? So that's the primary argument for me: we have lots and lots and lots of potential uses for this technology. Now, unfortunately, very early on, it's gotten hijacked.

Right? By the same people that were so into digital currency and the blockchain. In the same way that they were slapping the word blockchain onto everything that it has nothing to do with, they're doing the exact same thing with AI. Right? Like, they're saying, oh, this is an AI-powered technology.

Well, no, it isn't. It's the same technology that you were marketing differently two years ago. So I think that they're kind of poisoning people's perception of what's happening, but the technology itself has legs. I mean, it's definitely viable, and we're going to see some really life-changing things in the coming decades that stem from it.

And we're just in the very, very early stages of it. And so we're seeing all these kind of weird charlatans and weird applications. But as the technology matures, I think that the future for it is a positive one. Mhmm. And I think that we're gonna see those impacts in many areas of our lives that are not art.

Right. Yeah. And I feel like we're feeling some of them already, and it's just gonna become more so, I guess. Right. Yeah.

Cool. Well, I certainly appreciate the opportunity. And now, a very important thing is that the stories are all actually available online for free, so people don't even have to buy the book if they don't want to. They can go to future-sf.com and read all the stories individually. And, of course, if it's more convenient for them, it is available as a print book, as an ebook, and soon as an audiobook as well.

So I do encourage people to check it out. And, also, my own novels and short stories and body of work, including all of my anthologies, are linked at alexshvartsman.com. So it's just my name — the one that's going to appear on the podcast — dot com. And so, yeah, if you want fiction by an author who is new to you and who is definitely not a bunch of zeros and ones in a trench coat, check my work out. And if you do like all the zeros and ones, then you know where to go.

You can go see those types of guys. Thank you so much for your time, Alex. So, go check out Alex's website and buy the book. Check out the anthology. Check out the Future Science Fiction Digest.

And, yeah. Thanks for your time. Thank you. Hello, and welcome to the end of the podcast. Thanks again to Alex for coming on the show.

I'm currently in the process of finishing reading the book, and I'll have a review up on the Patreon very soon. I feel like science fiction is very much a recurring theme on the podcast. In fact, recently I put together some episode playlists. So if you go to machine-ethics.net/playlists, you can select a term or category — you can select science fiction, for example — and find tagged episodes, like my conversation with IBM's Christopher Noessel, or another author, Calum Chace — one of our very earliest episodes, in fact.

I'm recording this outro on the 14th of May 2024. And tomorrow, I'll be at the AI Ethics, Regulation and Safety Conference here in Bristol. And, hopefully, we'll have time to create a podcast of vox pops and things that are going on at that conference. So check it out next time. Thanks very much, and I'll speak to you soon.


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a coding, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design Ben and the team help organisations make better AI decisions leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford