65. DeepDive: AI and Games

In this first DeepDive episode we talk to Amandine Flachs, Tommy Thompson and Richard Bartle about AI in games: its history, its uses and where it's going. We discover NPCs, games as a test bed for AI research, different game AI techniques, back-office uses of AI, job displacement, bad actors and possible futures...
Date: 10th of December 2021
Podcast authors: Ben Byford with Amandine Flachs, Tommy Thompson and Richard Bartle
Audio duration: 35:16 | Website plays & downloads: 236
Tags: Games, NPCs, QA, Machine Learning | Playlists: Special edition, Games

Amandine Flachs is the CEO & co-founder of WildMeta. After supporting startup founders for more than 10 years, she is now looking to help game developers create smarter and more human-like game AIs using machine learning. Amandine is still involved in the startup ecosystem as a mentor, venture scout and through her series of live AMAs with early-stage entrepreneurs. She can be found on Twitter @AmandineFlachs.


Dr Richard A. Bartle is Honorary Professor of Computer Game Design at the University of Essex, UK. He is best known for having co-written in 1978 the first virtual world, MUD, the progenitor of the £30bn Massively-Multiplayer Online Role-Playing Game industry. His 1996 Player Types model has seen widespread adoption by MMO developers and the games industry in general. His 2003 book, Designing Virtual Worlds, is the standard text on the subject, and he is an influential writer on all aspects of MMO design and development. In 2010, he was the first recipient of the prestigious Game Developers' Conference Online Game Legend award. https://mud.co.uk


Dr Tommy Thompson has over 15 years' experience in artificial intelligence research in games. He sought to provide a more accessible format for his area of expertise to those without the same scholarly background.

Releasing the first AI and Games YouTube episode in 2014, Tommy has continued to build upon this small platform to form a company around it. With the YouTube channel amassing over 5 million views and 100,000 subscribers, the fundamentals of what AI and Games has sought to do have never changed: educate developers and students on how best to utilise AI in their games.


Transcription:

Tommy Thompson[00:00:03] Video games and AI: there's actually been a very heavy intersection since almost the birth of the medium. As soon as we started having anything that was an autonomous system that was reacting to the player's behaviour – and you know you can see this as early as Space Invaders, and things like that – but fundamentally these aren't what I would deem to be intelligent, rational, autonomous systems; they are heavily scripted behaviours that you're putting into this world.

Ben Byford[00:00:30] That was Tommy Thompson from AI and Games, talking about the intersection of AI technologies and video games. Hi and welcome to a new deep dive episode of the Machine Ethics podcast. In this new series we'll be talking to experts in industry and beyond to explore a specific aspect of artificial intelligence. In this first episode, we're going to be exploring AI and games. We'll be looking at how the technology is used; what issues there may be; the past, present and future of the technology; and why we might be using it at all. Where better to start than with the pioneering games designer Richard Bartle?

Richard Bartle[00:01:12] The first of these online virtual worlds were text based, because graphics weren't good enough to be sent down bandwidth back then, and we didn't have any graphics on the computers anyway. So I co-wrote the first one, it was called MUD – Multi-User Dungeon – back in 1978. The reason that we created the virtual worlds is essentially because we didn't like the real one, so I wanted to create something better. When you create a world, you want people to visit it from your world – from reality – but you also want the world to have denizens, things to live in the world. And the controllers of those denizens are obviously going to be using artificial intelligence. I did my PhD in AI, and the reason I did it in AI was because I wanted my non-player characters to have more wit about them than normal, so I did my research in that area in order to try and find out the ways that I could make them behave in intelligent-looking manners.

Ben Byford[00:02:26] That was Richard Bartle, academic and consultant, whose early multiplayer text adventures continue to inspire online games and NPC AI today. Richard talked there about denizens – game-world non-player characters or NPCs. Here's Tommy Thompson again telling me why games use NPCs.

Tommy Thompson[00:02:47] Historically, when we think of AI, we think of non-player characters or NPCs; this idea of a character or some sort of avatar that exists in the world and it is operating under some set of procedures. Sometimes, those procedures will actually react more to what the player is doing in the world and this is something that's becoming increasingly more prevalent over the last 30 years, 40 years, as the medium has really blown up. Now, I often think of this entire process as sort of akin to theatrical blocking, because you have this notion that the audience is the player – that they are sitting and they are trying to engage in this experience that you're crafting – so there's a real emphasis to communicate to the player that this world is much richer than just their involvement, and you want to craft that in a way that is as airtight as possible, although you have to accept that there are trappings of working in a video game environment.

Ben Byford[00:03:44] I love the way that Tommy talks about the theatrical nature of NPCs and of the game AI as if they're there in service of the player, and the overall experience and the feeling that the designer might be trying to convey. Here is Amandine Flachs, who runs a games AI consultancy, to tell us more about how they are controlled.

Amandine Flachs[00:04:04] So, the kind of techniques used to create the behaviour of these bots or NPCs are what we call “game AI”. I know that can be confusing, because today when we talk AI, I personally think about machine learning, but game AI has been around for a very long time and has nothing to do, necessarily, with machine learning. So what we call game AI is actually what controls the behaviour of the bots and NPCs – all these different entities within the game that are not the character you're playing – and game AIs are actually a bunch of well-known techniques. For example, an algorithm for pathfinding, which is what enables a character to find its way on the map. Or if you look into game AI 101 you're also going to come across decision trees, which give you a lot of possible choices of actions and their consequences. All of these are scripted, and what that means is that it requires a lot of work for developers to make a good game AI.
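
Pathfinding is the canonical example of this kind of scripted game AI. As a concrete illustration – not taken from any particular engine – here is a minimal A* grid search in Python; real games typically run the same idea over a navigation mesh rather than raw tiles.

```python
# A minimal A* pathfinding sketch on a 2D grid (illustrative only).
import heapq

def astar(grid, start, goal):
    """grid: set of walkable (x, y) cells; start/goal: (x, y) tuples."""
    def h(a, b):  # Manhattan-distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    frontier = [(h(start, goal), 0, start)]  # (estimated total, cost so far, cell)
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, g, current = heapq.heappop(frontier)
        if current == goal:
            path = []  # walk back through predecessors to rebuild the route
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        x, y = current
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in grid and g + 1 < cost.get(nxt, float("inf")):
                cost[nxt] = g + 1
                came_from[nxt] = current
                heapq.heappush(frontier, (g + 1 + h(nxt, goal), g + 1, nxt))
    return None  # no route between start and goal

# Usage: a 5x5 open grid with a wall; the NPC routes around it.
walkable = {(x, y) for x in range(5) for y in range(5)} - {(2, 1), (2, 2), (2, 3)}
print(astar(walkable, (0, 2), (4, 2)))
```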

Tommy Thompson[00:05:20] One of the big reasons for this is that there are a lot of traditional AI algorithms that can be used to solve very complex problems. That's great, that's what we want. We want an autonomous system that can ideally come up with near-optimal solutions to complex problems. In video games, though, optimality is not often what you want. You want something that is malleable, and is reactive. So you start seeing a lot of things such as navigation meshes – which became very prevalent, particularly as games moved into 3D, and which Quake III really popularised – and they're based on notions from robotics, of path planning.

You have things like finite state machines being used in GoldSrc-era Half-Life, which is a finite state automaton. We already know what that is, that's an idea that's been around in mathematics and in computing for many, many years. In the early 2000s, you have Halo 2 really proliferating the notion of behaviour trees, and First Encounter Assault Recon, or F.E.A.R., using what was called goal-oriented action planning. So behaviour trees are directed acyclic graphs, they are just something that already exists in broader computing. And the goal-oriented action planning system is based on the Stanford Research Institute problem solver, which was developed in 1976, if I remember correctly off the top of my head.

So you have a lot of these elements where this works really well in this real-world context, but we need something that really is built, that enables this kind of intelligent design that is becoming increasingly more pervasive. And what we're seeing now, with things like procedural content generation, and even narrative generation technologies, which are often built off planning, is that you're continuing to see this idea of, “Let's take something that we already know works in the more traditional AI space,” and then getting that to work in a space that would actually be really, really practical for game development.
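
To make the finite-state-machine idea concrete, here is a minimal sketch in Python. The guard's states and transitions are invented for illustration and not taken from any shipped game; production machines are far larger, but the pattern is the same: each state holds only its local transition rules.

```python
# A minimal finite state machine sketch for a hypothetical NPC guard.
class GuardFSM:
    def __init__(self):
        self.state = "patrol"

    def update(self, sees_player, low_health):
        # Each state only needs its local transition rules, which is
        # what made FSMs so easy for designers to reason about.
        if self.state == "patrol" and sees_player:
            self.state = "attack"
        elif self.state == "attack":
            if low_health:
                self.state = "flee"
            elif not sees_player:
                self.state = "search"
        elif self.state == "search" and sees_player:
            self.state = "attack"
        elif self.state == "flee" and not sees_player:
            self.state = "patrol"
        return self.state

fsm = GuardFSM()
print(fsm.update(sees_player=True, low_health=False))  # patrol -> attack
print(fsm.update(sees_player=True, low_health=True))   # attack -> flee
```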

Ben Byford[00:07:13] Here's Richard again on the connection between games and AI research.

Richard Bartle[00:07:13] A lot of the work in AI and games is trying to make AI better using games as a test bed, whereas I prefer the other direction: trying to make games better using AI as a technique. So all the academic publications will be about games, because games provide useful environments – interesting environments, challenging environments, ones that you can change – so they're very good for testing new AI techniques, and for measuring them. And this has been the case since AI came out. People were writing programs to play chess, not because they wanted to understand chess, but because they wanted to understand AI.

The games people will look at AI, but they've also got an advantage in that they don't have to justify themselves or publish any of the results, or anything. They don't have to run tests. One of the things that AI people have trouble with is the concept of believability. So you want to make your AI non-player characters or your AI opponents act in ways which are believable, so that they're like a human. And as soon as you start off on that, you end up with big problems, because how do you measure believability when you've basically got to ask your players? So every PhD student has got these wonderful ideas about how they want to make games where characters act smarter, and they implement all these things, but ultimately they're going to have to do some kind of user test – see if they can scrape together 25 people, make them play the game, and have them say, “Well, I thought this was believable and I thought that wasn't believable.” Now in the games industry, though, they don't have to do that. They just think, “If we do that, that will make them more believable,” so they just do it.

That means that in some areas, the games industry is ahead of academia, because they're not bound by academic restrictions. I mean you will obviously be having play testing, but you wouldn't be having to run all kinds of stats and publish papers and everything about it. You’d just be – the designers would be getting a feel for it, “Is this having the effect on the user that I want it to have, without me having to quantify what that effect is so I can measure it?”

Ben Byford[00:09:53] While game AI has borrowed a lot from traditional AI and search algorithms, it hasn't used machine learning techniques as much. Amandine is the co-founder of WildMeta; they're at the forefront of using AI technologies, including machine learning, for games. So how can we use machine learning?

Amandine Flachs[00:10:13] Well I guess the question is, what is a good game AI? So the players have some expectations, but I think the main aspect of a good game AI is creating a character that is believable – you know, there is nothing that really gives it away. And so the problem is, because it's scripted, with the complexity of modern video games it's very hard to create these good game AIs. Game technology is constantly evolving, but the way game AIs are made hasn't really changed much, and so you see a gap there where it's getting more and more difficult. And it's funny, because when you ask players about good game AIs, they are usually going to mention games from ten years ago. And that's very interesting, because today you have very complex games, very interesting games; it's just that the game AIs themselves haven't really been keeping up. So even doing something simple like having your character move from point A to point B on the map, today you find very modern and big games that are struggling with that, just because there is more and more complexity in the way the games are made. One example that comes to mind is procedural generation, where elements of your environment are going to be generated, well, procedurally, as the name suggests. Making game AIs – making bots – that are actually going to be able to work with that and integrate this concept is actually hard, which is, I would say, slightly easier when you use machine learning like we do. So it's a very different way of doing game AIs.

Tommy Thompson[00:12:08] Interestingly, in the context of character behaviour and character design, and also building gameplay sequences and stuff like that, machine learning's never been something that's been that pervasive. So, if you go back to the late 1990s, you can find a lot of games that experimented with this. Creatures is a very well-known example of artificial life systems with neural networks for creating these “norns” – they were sort of virtual pets that you're looking after, but you're also training them. It's not as rich or as complex as I would imagine if you did something of this nature today, but I think it captured the imagination of a lot of people. You also have things like the original Total War experimenting with neural nets for troop deployment, and Black and White by Lionhead, which is a very famous example of trained neural nets. Often, in a lot of these cases, the neural networks were so small they were handwritten and hand-tweaked. I do have a record somewhere of people saying, “Oh yeah, some of the stuff that we were using in Total War and Black and White” – they actually went in and tweaked the weights manually because there were only like 50 of them, or something like that. But it was a notion that really died off, and I think a lot of it comes down to a lack of trust and reliability in what was being crafted. As we said there, if you wanted to tweak it you had to go in and tweak the weights, and what are we going to get out of this? As game productions have become bigger, you have a bigger disconnect between the programmer who writes the AI systems and the designer who facilitates the gameplay.
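
For a sense of just how small those early networks were, here is a toy single-neuron deployment decision of the kind Tommy alludes to. Every weight and input here is made up for illustration, but a net of this size really could be tweaked by hand.

```python
# A tiny hand-tuned "neural net": one neuron, three hypothetical weights.
import math

def deploy_archers(enemy_cavalry, enemy_infantry, open_ground):
    # Hand-tweaked weights, as described for early commercial games;
    # these particular values are invented for this sketch.
    weights = [-1.5, 0.8, 1.2]
    bias = -0.2
    z = (weights[0] * enemy_cavalry + weights[1] * enemy_infantry
         + weights[2] * open_ground + bias)
    return 1 / (1 + math.exp(-z)) > 0.5  # sigmoid activation, then threshold

# Cavalry-heavy field: the neuron votes against deploying (False).
print(deploy_archers(enemy_cavalry=0.9, enemy_infantry=0.3, open_ground=0.4))
```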

Amandine Flachs[00:13:41] So you can use machine learning-based bots for the opponent AI. So if you have a multiplayer game, you want to have enemies that are always available, or you want to populate the game with more players. I mentioned multiplayer games and battle royale games before; if you take PUBG, which is a quite well-known battle royale game, they can have between 20 and 95 percent bots per game – that's huge. But the problem is they're not really good, and they are obviously target practice in the game. But if you can't tell whether that's a bot or a human playing, then that wouldn't be a problem. And another thing for multiplayer games: if you have players that get disconnected, you don't want the rest of the players to be affected, so you could also replace these players with your bots if they are good enough – you know, if they are as good as another player.

Ben Byford[00:14:47] So machine learning techniques can be very useful for making convincing bots, but they may not always have the desired outcome. Something that learns to be good in a certain sphere might not be fun to play against. Indeed, how do we optimise for fun? Richard explains:

Richard Bartle[00:15:04] Well, machine learning is a tricky thing for games, because if you're playing a game and you develop a tactic and it's working, and then suddenly the game learns that and then counters it, some people will think, “Well that's good – that's just like playing against a person.” But other people will think, “Oh god, now I've got to do this differently now, haven't I?” Because dynamic difficulty adjustment – which is a consequence of this – and other areas where the game improves from watching you, just mean that in the end you end up gaming the AI. Where’s the fun in that? In some cases it makes sense, in others it doesn't. But in general, players don't like it when the game learns from them because eventually it will outlearn them, or they'll have to metagame it. You can use AI to train your monsters and so on, so that during play testing before the game's release, you build your neural network and you decide how they should respond to things. But the best thing to do then is to fix it in place so that the players have something that they can try and figure out how it works, and how to beat.
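
Dynamic difficulty adjustment, which Richard mentions, can be as simple as a feedback loop. This toy Python sketch (all names and constants are hypothetical) nudges enemy accuracy toward a target player win rate.

```python
# A toy dynamic difficulty adjustment sketch: enemy accuracy is nudged
# so that the player's long-run win rate drifts toward a target.
def adjust_difficulty(enemy_accuracy, player_won, target_win_rate=0.5, step=0.05):
    # If the player is winning more than intended, make enemies sharper;
    # if they are losing, ease off. Clamp to sane bounds.
    if player_won:
        enemy_accuracy += step * (1 - target_win_rate)
    else:
        enemy_accuracy -= step * target_win_rate
    return max(0.1, min(0.9, enemy_accuracy))

acc = 0.5
for won in [True, True, False, True]:
    acc = adjust_difficulty(acc, won)
    print(round(acc, 3))  # 0.525, 0.55, 0.525, 0.55
```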

Ben Byford[00:16:24] So AI can be used for bots or NPCs. Where else are we seeing AI utilised?

Amandine Flachs[00:16:30] When you're thinking about adopting a new technology in a game, for studios it has to save them time or money, or bring in more money – there is always this aspect to keep in mind. So, new technologies can be applied in many areas of game development – and here we're talking about machine learning specifically – and we've seen a lot of very interesting applications from game developers trying to experiment with different ways of using machine learning. For example, for creating animation that is more realistic, but that's just one example. I've seen companies also trying to balance the difficulty of the game using machine learning. So new technologies applied to video game development can have so many different areas of application.

Tommy Thompson[00:17:30] Typically, even if you look at something like the Unity game engine, it's built using this state-machine-style technology, so it's all set logic where the human designer is saying, “Oh, in these particular sets of situations, this is where the animation blends.” But there are millions of different possibilities based on what's happening in the world, so I know IO Interactive were very big on pushing this technology, and Ubisoft have taken this even further with their motion-matching tech. But it's this idea that you are getting the machine learning system to figure out the best points to let these animations blend. That's just one area where that's really taken hold, but we're seeing it in cheat detection, we're seeing it in physics replication, and in texture synthesis, where right now there's a lot of talk about AI upscaling and how that's being used. Even, most recently, the remastered Mass Effect trilogy, where they used AI to help them upscale all the textures. So there are a lot of different spaces outside of traditional gameplay and non-player characters where now we're realizing, hey, this is a great space within which we can really utilise – particularly machine learning technology – to make the developers' lives a lot easier and streamline production.

Amandine Flachs[00:18:38] It can be about helping the development itself, or indeed helping monitor the game. Helping with the release of the game, helping features within the game, but also after launch. After launch you have a lot of work to do to keep players interested in the game, so I've also seen new technologies and machine learning being used for moderation. For example, it's well known that some games have rather toxic communities, so it can be used for that too – you know, to support the players facing these problems.

Tommy Thompson[00:19:20] This is the Machine Ethics podcast, so you might not all be video game players, but you might have heard that online video gaming has a bit of a bad reputation, particularly around toxic behaviour among players. This comes in two forms: one, you have cheats and people who are misbehaving in the game; but you also have toxic spaces, and particularly the treatment of anyone who is not a white, cisgender male is an ongoing concern. So these are two things that are very relevant, and two things where again you're dealing with very specific pieces of behaviour that can be extrapolated and analysed, and so machine learning is actually providing a lot of gains in this space.

The notion of using deep learning to detect cheats and hackers in games has increased quite a lot. In fact, one of the biggest advocates of this – despite no one ever knowing about it – was Valve. Valve actually implemented their system in Counter-Strike: GO, CS:GO, which is a very popular online shooter. What they did was feed in player data, because every match of CS:GO has a full log that you can go back and look at, which is useful for replays and for e-sports purposes. So they were able to train a network that could begin to identify irregular play and say, “Hey, this doesn't look right.” What it then does is file a cheat report and send it to a human to analyse, and they can then decide on it. And so this is something they implemented about three or four years ago, and they noticed that the number of accurate reports from the system was somewhere in the region of 85%, whereas on average – because humans do this too, there's a report system built into the game where you're like, “Oh god, they were hacking, they're cheating,” and you report them; I do it all the time – I think they said something like less than 20% of reports by humans are accurate, whereas they got to somewhere between 60 and 80% with the Overwatch system. But both of them tie into the same report framework, so it's okay if just the AI detects it – it sends the same type of report that a human would. It allows the assessors to not have to discriminate between “that's what a human said” versus “that's what an AI said”, and they're seeing that performance is improving.

So that's one thing, but there's also a greater effort – companies like Spirit AI were doing this a few years ago, and Microsoft are a big advocate of it – to build AI systems that can pay attention to in-game chat logs: text chat, but also listening to audio chat, and being able to filter that out and report players automatically for toxic behaviour of some sort. It's like, “Okay, well, you're being overly aggressive towards this player.” You then offer to that player, “Do you want to block or ignore them, or mute them, because you don't want to listen to them?” Those are areas that are already actively in development, and I think they're really interesting to see continue to grow, because as we said, a lot of gaming – particularly online gaming – has a toxic culture. Being able to address that in a way that enables broader – and certainly existing – diverse audiences to feel safer in that space, and to express themselves in that space, is what's going to enable games to continue to grow.
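
As a rough illustration of the cheat-reporting pipeline Tommy describes – and emphatically not Valve's actual system – here is a toy Python sketch that flags statistical outliers in match data and queues them for human review rather than auto-banning.

```python
# A toy cheat-report sketch: flag matches whose stats sit far outside
# the population distribution, then send them for human review.
from statistics import mean, stdev

def flag_suspicious(matches, threshold=3.0):
    """matches: list of dicts with 'player' and 'headshot_rate' keys."""
    rates = [m["headshot_rate"] for m in matches]
    mu, sigma = mean(rates), stdev(rates)
    reports = []
    for m in matches:
        z = (m["headshot_rate"] - mu) / sigma  # standard score
        if z > threshold:
            # Same report format a human reviewer would file.
            reports.append({"player": m["player"], "score": round(z, 2)})
    return reports

# Fifty ordinary players plus one implausible outlier.
population = [{"player": f"p{i}", "headshot_rate": 0.2 + 0.01 * (i % 5)}
              for i in range(50)]
population.append({"player": "sus", "headshot_rate": 0.95})
print(flag_suspicious(population))  # only "sus" gets reported
```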

Ben Byford[00:22:37] As well as moderating cheating and detecting anomalies, AI can also be found in sometimes-overlooked areas like QA. Tommy explains:

Tommy Thompson[00:22:49] For anyone who's not familiar, quality assurance is the process of having people come in and test the game during production, in order to reach a certain quality standard before it's released. Anybody who's ever worked in quality assurance will tell you it is a long, difficult and complex process, because games – like any piece of software – are inherently broken almost up until release, and this is often a very difficult problem. You're trying to extract all these bugs, and you're dealing with games that are becoming bigger and more complex. So how do we facilitate that in a way that is not going to kill your QA team? As I think Jason Schreier and many other journalists have pointed out, quality assurance is often one of the poorest-treated sectors of the video game production pipeline – so how can we make their lives easier?

So there's been a big push, I would say, over the past five, six years, to start automating testing in a really exciting way. You saw studios like Rare really pushing for this during the development of Sea of Thieves: they moved to a test-driven development framework, but they were also using traditional AI to test parts of the game whenever builds were being produced on their servers. You have Massive Entertainment at Ubisoft, who have an entire test-driven framework with bots that play the game – two different types of bots that can play the game and detect different types of bugs. Even more recently, you have external companies like modl.ai, who are based in Denmark, selling their glitch-finder software, which is the idea of, “You give us your game, we have an AI toolkit that we can plug into your game, in Unity or Unreal, and we will find the bugs for you.” And you can tell them what type of bugs you want to look for, and they send AI players out to try and find all the bugs and report them back.

So I think this is really exciting, for two reasons. One, it changes how quality assurance works, because we're moving away from the functional problems of quality assurance – the idea that, all right, these are human testers sitting playing this game finding all these bugs, but the scope of the game is becoming so big that finding all these bugs is nearly impossible for them. So if we can find a tool that can alleviate that to some extent, it allows them to use their human knowledge and expertise to really zone in: “Oh, right, these are all the bugs that we found in this part of the map, and we've figured out why.” Or on the more subjective bugs in a game – for example, a texture isn't loading correctly, or an animation doesn't look right. It's much faster for a human to spot that and go, “Oh, that animation isn't working right – someone report that to tech and they can fix it,” versus trying to get a system to do that, which would be much harder. So it's an effective mechanism of using the tool to make their lives easier. I think that's great, and I think it should have a knock-on effect in actually improving quality assurance overall within the industry.

I am a little worried, of course, because video game companies – particularly very big ones – have a habit. Like I said, quality assurance isn't treated the best: testers are often zero-hour contractors brought in from external companies. So does this mean we give them even less resource, less time? That maybe the pay and conditions are reduced even more, as a result of, “Well, we can automate a lot of this and alleviate the need for these humans,” when really we should be using these people for the skills they have that a machine doesn't?
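
As a toy illustration of the automated-testing idea – far simpler than the systems Rare, Ubisoft or modl.ai have built – here is a random-walk bot in Python that explores a level and files a bug report whenever it reaches geometry it should never be able to reach.

```python
# A toy automated-QA sketch: a random-walk bot explores the level and
# files a report whenever it escapes into broken geometry, the way a
# human tester would hunt for collision bugs.
import random

def smoke_test(level_bounds, holes, steps=10_000, seed=42):
    rng = random.Random(seed)  # fixed seed: reproducible test runs
    x, y = 0, 0
    bugs = []
    for step in range(steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        # Clamp to the playable area, as the game's collision should.
        x = max(level_bounds[0], min(level_bounds[1], x))
        y = max(level_bounds[0], min(level_bounds[1], y))
        if (x, y) in holes:  # geometry the bot should never reach
            bugs.append({"step": step, "pos": (x, y)})
            x, y = 0, 0  # respawn and keep exploring
    return bugs

# 'holes' stands in for broken collision geometry in a hypothetical level.
print(smoke_test(level_bounds=(-5, 5), holes={(3, 3), (-4, 2)})[:3])
```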

Ben Byford[00:26:18] Tommy touched there on the human impact of these technologies, and indeed on the question of ethics and AI. I also asked Amandine and Richard whether they think about the impact AI might be having in the industry.

Amandine Flachs[00:26:33] Misuse of machine learning is a big topic, and I think for everyone – not just with machine learning, but as soon as you're building something new using new technologies – you need to think about the impact it can have and how it could be used in the future, even if you don't intend to use it that way. It's important just to understand the potential – you know, what people could use it for, outside your own control. So, I would say machine learning in games is too vast to give a precise answer on the potential misuse of the technology, because yes, we all know about the problem of loot boxes, and the addiction some games can create when it comes to spending money, or pushing people to spend money. These are all problems, but as I said before, there are so many ways to use machine learning in games; you have to ask yourself these questions depending on how you're using machine learning, and how you can protect your players from it. It's your responsibility to think about all these different aspects. What if someone wants to use it this way? What if someone has this kind of concern? You have to think about everything, and so ethics can be a big word, but just thinking about the different ways it can be used and potentially misused is kind of your responsibility.

Richard Bartle[00:28:04] How can you weaponize games? It's really, really easy. If you wanted to, you could do absolutely terrible damage to people through games. If you just flick the switch, then with the two million people playing your game – well, you could blind them, you could give them repetitive strain injury, you could empty their bank accounts. You could change their political views, you could give them epileptic fits; 25% of them you could put into a hypnotic state. There are all sorts of things you can do through games. The only reason we don't is because game designers are nice and don't, and also they're afraid of being punished by the law. That's the sort of thing where I think, you know, if a bad actor really wanted to do something – and there are plenty of state-funded bad actors out there – then you could do some serious hurt with this technology.

At the moment, the worst that we get is trying to get people into a gambling state of mind, with the loot boxes and Skinner boxes – Skinner loot boxes. There are things that people do which are unpleasant and which they shouldn't be doing, but that's nothing compared to what you could do. People complain that the worst things about the games industry are, you know, toxicity – attitudes towards minorities and things like that. Yeah, those are really bad, but actually things could be far worse. That's not to excuse them; that's just to say that if a game was making people do that, then it would be worse. If a game was causing people to go out and commit sexual offences – and again, it could do that – those are the kind of things that frighten me. You've seen how the different American television channels have caused different views of politics in their audiences, some of which you may agree with and some of which you won't. And that's just television – people watch their news programmes for 15 minutes, and that's that. If you're playing a game for several hours a night – and for MMOs people do do that – there's a lot more you can be telling people, subtly or otherwise. You can be flashing things onto their screen that they may not consciously see: subliminal images.

There are all sorts of things you can do. You can tie people's views, so every time they see this, something good happens, and every time they see that, something bad happens. Once they've made that association, then you have some non-player character who looks like a politician you want to promote appear with the thing that looks good, and you've suddenly… There's so much you can do, and it's unpleasant, and I really don't want to see that. Add AI to the mix and it's even worse.

Ben Byford[00:31:02] Wow! Scary stuff there from Richard: bad actors utilising the medium for their own purposes. Amandine, however, has a more upbeat view about their work at WildMeta.

Amandine Flachs[00:31:14] In our case, we spent quite some time thinking about machine learning-based bots and how they could potentially be misused, and the thing is – it's just bots in a game, so I would say the worst-case scenario for us would be that players enjoy playing with bots so much that they don't spend time playing with humans anymore. I would say that's not actually a misuse; that wouldn't even be a negative, really. But we really struggle to find a misuse in the way we work, because we also don't have access to any data, and we make sure that we just build bots that play the game without accessing anything sensitive.

Ben Byford[00:32:07] Thank you Richard, Tommy, and Amandine. We discovered the integral part that AI plays in the games industry: controlling NPCs, catching cheaters, keeping voice and text chat safe for little ears, upscaling graphics, and quality assurance. If you liked this new deep dive episode, then let us know, and you can find more episodes at MachineEthics.net. If you can, please also support us at Patreon.com/MachineEthics. For me, finding fun in video games is still important, and it's a place where the human designer is still king, but perhaps in the future we'll play a different role altogether in our interactions with games. Please join Richard and myself to discuss some of this further in a future episode, but for now, here's Richard with the last word.

Richard Bartle[00:33:04] What I'm working on at the moment is AI in games centuries from now. So, I want intelligent non-player characters. Let's suppose I have intelligent non-player characters, because it's a thousand years from now, or five thousand, or a million, or 20 million – take as many years as you want, we've got eternity. We've got planet-sized computers and everything. So we've got intelligent non-player characters who are as intelligent as or more intelligent than we are, and then the question is: what can we do to them? What are we allowed to do? What should we allow ourselves to do? Can we switch off the virtual world that they're in and kill billions of non-player characters? So we have to ask ourselves questions. Should we implement concepts such as suffering? We don't have to implement suffering. These non-player characters can just sit around all day quite happy. If they do break a finger then they think, “Oh yeah, that was quite bad, I should get that fixed,” and it's mildly irritating, but it's not suffering. But on the other hand we could say, “Well, let's liven things up. Let's create some kind of natural disaster which kills a whole load of them. Let's have some creatures that burrow into their brains and take them over.” We can do all these things, but should we? Is it ethically right?


Episode host: Ben Byford

Ben Byford is an AI ethics consultant; a code, design and data science teacher; and a freelance games designer with years of design and coding experience building websites, apps, and games.

In 2015 he began talking on AI ethics and started the Machine Ethics podcast. Since then, Ben has talked with academics, developers, doctors, novelists and designers about AI, automation and society.

Through Ethical by Design Ben and the team help organisations make better AI decisions leveraging their experience in design, technology, business, data, sociology and philosophy.

@BenByford