83. Avoidable misery with Adam Braus
ADAM BRAUS is a professor and polymath professional, author, and expert in the fields of ethics, education, and organizational management. He is a writer, speaker, teacher, podcaster, coach, and consultant. He lives in San Francisco, California. You can subscribe to his weekly podcast at solutionsfromthemultiverse.com, find links to his books, or contact him via his website adambraus.com.
ALSO, you can find episode 68 of Solutions from the Multiverse featuring Ben Byford here.
Transcription:
Transcript created using DeepGram.com
Hello, and welcome to the 83rd episode of the Machine Ethics Podcast. This episode, we're talking with Adam Braus, which was recorded on the 25th of October 2023. We chat about natural stupidity and natural intelligence, misericordianism and avoidable misery, the idea of a misery detector, and human instincts. We chat about Asimov's rules. We also muse about the positive outcomes of AI technologies.
If you'd like to find more episodes, you can go to machine-ethics.net. You can contact us at hello@machine-ethics.net. You can follow us at machine_ethics, on Instagram at machineethicspodcast, or on YouTube at youtube.com/@machine-ethics. And if you can, you can support us at patreon.com/machineethics. Thanks again, and hope you enjoy.
Hi, Adam. Welcome to the podcast. Thanks very much for joining us. If you could just introduce yourself, who you are, and what you do. Hey there, Ben.
Yeah. Thanks so much. Yeah. My name is Adam Braus, and my day job is as a professor of computer science. I'm the chair of computer science at Dominican University.
It's an applied program, so it's very much like training web developers. So that's kinda what I do day in and day out. But then, yeah, my actual training is in ethics, and I'm doing some work on that right now. So it was so exciting when you contacted me, and I was able to listen to your great podcast. Oh, thank you very much.
You're too gracious. I I believe you sent me a message about a year ago, and I just put it in the kind of, maybe I'll get back to that pile in my inbox. And then I suddenly realized, I kind of went back to some of the stuff and went, oh, yeah, that that would be good. And it would be good now because, we've had quite a few kind of, sort of more practical or things which are more Zeitgeisty episodes. So it's nice.
So thank you as a bit of a cleanser to come back to, like, kind of pure academic sort of stuff. And that's kind of, yeah, how I feel about so you sent me your book as well. And hopefully, we'll have a review up of that once I've finished it. I've got about probably three quarters of the way through it now.
And, yeah. It's great. So Yeah. Not too dense. The Future of Good.
Not not a very dense. It's it's pretty readable. Right? It's like stories mostly. And Yeah.
Yeah. Yeah. Yeah. Yeah. It's meant to be kind of like a approachable fun thing to read.
Yeah. It's it's very, let's say it's an easy read. Yeah. I think I did it kind of backwards from most academics. I think I do my whole, most things I do, I do backwards.
So, like, I don't have a PhD, but I actually am the chair of a department. People are like, how did that happen? And then I wrote a nonfiction sort of fun book to read, like, for popular nonfiction and now I'm writing academic papers about that. So I'm actually having to kind of tighten up the thinking and do all the citations and everything. Whereas most academics, they write the really, really academic thing and then they write a stilted sort of hard to read nonfiction book about it, you know?
So I'd like I don't know. I go backwards. It's good. It's good to go backwards sometimes. Well, I, still enjoying it.
And I will ask you further about it in a second, because I have quite a few things I have to say about it. First, on the podcast, the thing we always ask is, Adam, what is AI? You know, I listened to so many episodes, and I got to hear this question over and over again. And I decided I would be like a politician, and I would answer the question I wished you asked me instead of the one you do. And I think what I'd like to say is, I'd like to define NS.
Do you know NS? You probably don't because I just made it up for this podcast, but it's natural stupidity. You know, people are talking a lot about AI, but they're not talking about NS. You know? And I think it's important for us you know, I'll just throw that out there that if AI is a kind of intelligence, if it's a kind of light, if it's like light, what is the darkness that it's inside of, that it's kind of illuminating and kind of dispelling?
And I think that's NS, natural stupidity. And there's some ethical concerns, I think. I mean, you might even argue what's gonna put the nukes in the air? Is it gonna be AI that's run amok, or is it going to be the kind of ever present devil at our elbow here, which is natural stupidity. Is that the kind of, like, human stupidity?
Or Yeah. Exactly. Humans humans. I mean, humans have been able to achieve a lot, you know, largely through the division of labor and, you know, allegiance to certain good ideals from the enlightenment and science and stuff. But but when you look around at the problems in society and the risks, you know, existential risks, right, x risk or whatever, it's mostly due to, like, really, really stupid choices and things that are persistent.
And we like to act like things are complicated or like problems in the world are super complicated. And to some extent, they are. But other times, I think even most times, the problems in the world are actually just, like, really stupid. You know? It's clear, like I mean, health care in the United States of America, a for-profit health care system where we spend 17% of our GDP doing health care, when anyone with a brain knows that you could just create a public option or a universal payer or adopt something more like a kind of Dutch system where there's universal coverage by private insurers, whatever.
There's a million ways to do it. But if you had universal coverage, the prices would drop by half. You'd only be spending about 8% of your GDP on it. You'd save trillions of dollars over decades. And it's obvious.
It's not and and it's, you know, you some people say it's greed, but I think it's more than greed. It's it's like a kind of stupidity, that allows us to keep doing this, like, tremendously stupid stupid thing. And so and so when you put when you start to talk about AI, I mean, people get kind of nervous because it's new. But if but if the premise is that it's actual intelligence, it is intelligence, then boy do I have some stupidity for it to come and help with. There's so much stupidity everywhere.
Yeah. So I'm very, very pro AI. I'm very AI-philic. Yeah. I feel like you almost want the AI to go, guys, why haven't you got the health care sorted out?
You know. Yeah. Yeah. You know. I can sort that out for you in about 10 minutes because anybody can.
It's really an easy question. Yeah. What that's just something you should have done by now. Come on. Let me get on it.
That would be nice. I I think a system which tells you all the it's kind of like a simulation, isn't it? Like, this is this is probably a better way of running things, guys. And Yeah. They're probably all things we know.
Yeah. I actually have a sort of I actually wonder if there isn't some interest by, like, you know, billionaires. Like, Elon Musk says, you know, we should, like, be really careful with AI and regulate it and be slow with it and sort of develop it in this really, really careful way. I, you know, I mean, I wonder if that isn't just him trying to not lose his job. Right?
I mean, I think an AI can do the job of, like, a, you know, billionaire fly boy a lot better than than any of our billionaire fly boys. You know? And I and so there's a there's a kind of risk not to like, there's a risk to, like, truck drivers and, you know, kind of, you know, technological unemployment for for the masses of people. But there's really a risk for, like, CEOs, stockbrokers, hedge fund managers, private equity people. Like, those are the people who I think AI can do their job way better.
Like, so, like, so much better. You still need someone to go to the golf course and stuff, though. Right? Oh, yeah. Well, we can send a robot. We can send a golf robot. There was just this exposé on John Oliver about McKinsey, right, who supposedly is, like, you know, this intelligent right?
We're talking about intelligence and stupidity here. So supposedly, McKinsey is like, oh my god. You know, McKinsey is so intelligent. Oh, it's all blah blah blah blah. They're so smart.
They're so whatever. And then it turns out that pretty much all they do is they just get hired. They come in. They say you should fire, you know, a third of your people. You know, they just do the same thing.
They just do the same playbook for every company. Fire everybody, outsource everybody. You know? And then the then the executives say, oh, well, sorry. McKinsey said we have to fire you.
It's not that we wanna fire you. And that's really the only reason they hire McKinsey is so that they have someone to pin all the layoffs on. And then the McKinsey walks out and does it again the next day somewhere else. It's like they're not actually doing anything that intelligent. Right?
If they if there actually was an AI, right, that was actually intelligent, that would be amazing for the economy because all these numbskulls would be out of a job, and we would actually or maybe they would use the AI. Yeah. But they wouldn't just come in and say, fire everybody, you know, which is just stupid. They they'd say they'd come in and say, well, what's your innovation strategy? And how are we gonna transition?
How are we gonna do these things? You know? You know, instead of yeah. So I'm yeah. Let's bring in some intel let's have some actual intelligence in our society.
Now my job is actually in the realm of NI, natural intelligence. Natural intelligence is, I think, more in some ways I mean, AI is interesting because it's novel. But natural intelligence, I think, is actually a more immediate opportunity, at least it has been for the past, you know, couple thousand years. And that's training human beings to be more intelligent. You know? And I still think that's an underappreciated opportunity.
Right? I guess the promise of AI. Right? The the kind of end game is that there is this thing which is going to always be air quotes, like more intelligent because it or in certain ways more intelligent for sure. Because it has access to things that we do not have access to.
And we might be more pliable and resilient and, multi dimensional dexterous, and things like that than machines that we have currently. But we we definitely don't have more access to memory, data. You know, these sorts of things, which it's kind of like this is, you know, it's not apples and it's not apples to apples at that point. It's like there's this other thing, and we just happen to be calling it AI. And it does this other thing, which is better in this way.
And Right. For me, for sure, it's like we well, we can leverage that stuff for cool stuff. We can we can do really interesting things with that new set of tools and capabilities. And peep the cultural aspect is we're jumping straight to, okay, it's gonna take over the world. It's gonna take over our jobs, whatever.
And, I guess coming back to that natural stupidity piece is that, well, we can use it for stupid we can carry on, you know, making stupid decisions with this or good decisions. It doesn't really help us with that, you know. Yeah. Right. Necessarily.
Yeah. And that's where natural intelligence comes in. Right? Like, the opportunity is so, like, we spend how many billions of dollars developing AI, and how many more billions of dollars will we spend, you know, trying to train these AIs? You know?
I wonder about, like, what if those billions were put towards natural intelligence? How much better of a society? You know, I've been training people to be software engineers, and we have a very novel program, and I'm actually building a new college as well that's kind of like an Oxford for everybody. It's a one-on-one tutoring based college called Elton College.
And, you know, we can educate someone with a a world class MBA for about $20,000 and it takes about a year, you know, and it's scalable. It's scalable globally to the whole world. I mean, we could, you know, we could do it infinitely because you just hire more teachers and train more students. So, like, you know, if you put, like, as much money as you're putting into AI startup into my new college, you know, we could train, like, many thousands of of of these amazing business people who could go around and and wouldn't just say, oh, cut cut cut people at the bottom and increase executive compensation. Hi.
We're McKinsey. That's all we do. That's our single play in our playbook. You know? We open our playbook.
It has one page in it. You know? Yeah. Yeah. So it's like, so yeah.
I just and also, you wouldn't have the danger I mean, you'd have the danger of AI, but you'd have more competent humans to cope with that danger. Mhmm. You know? And you might have a functioning democracy too. So one problem with natural stupidity and the lack of natural intelligence, you know, meaning education: demographically, the biggest cleavage between Donald Trump voters and not Donald Trump voters is a college education.
That is the biggest cleavage, more than age, more than wealth, more than where they live geographically, more than anything. It's just if you're educated, you can't be tricked by the orange bad man. If you aren't educated, you can be tricked by the orange bad man. And so, you know, to me, the real risk, the, you know, existential risk, at least to democracy, is again natural intelligence, the shortage of natural intelligence, not, you know, not other things. Mhmm.
Well, I I feel like you could cut that mustard. Is that what you're saying? Cut the mustard? You can cut that. I don't know what that means, but I don't know what that means either.
Let's cut the mustard. Let's cut the mustard in, in different ways. But I'm I'm avoiding going there right now, because we we have only a short amount of time relatively. And I wanted to dig into your book because, it's provocatively named, and it has some, let's say, provocative ideas if you're a ethicist or a philosopher in it. Very good.
So I thought for those people who are interested in that sort of stuff, we could dig into that. And then we'll we'll we'll dig up the AI stuff as we go. Yeah. Yeah. And I talk about it in the book too.
Yeah. There's a whole chapter on it. So Exactly. So the book is called The Future of Good, which is an interesting title. And I think it's misrepresented by the picture on the front.
I don't know how you feel about that. Uh-huh. Sure. Sure. Because it's got 2 little robots in the front.
So I feel like it's like toy robots. And Mhmm. I don't know if that represents what you're going for. But anyway, the key idea that you're trying to get across, again and again in the book, which is represented in different ways and exploring different avenues, and historically, is this idea of this word, which I'm gonna totally not be able to pronounce now, which is misericordianism?
Yeah. Misericordianism. Yeah. Misericordianism. Yeah.
Which is just my label for, for a kind of a kind of type of an ethical theory. Yep. Yeah. Yeah. Yeah.
So I I feel like if you're listening to the podcast, you're probably aware of, the types of ethical theories that are, out there. And this is tightly associated, I would say, with utilitarianism or consequentialism. I don't know. It it it seems to be. I'm actually I'm starting to think that it isn't actually.
But, yeah, it seems to be. Its sort of decision procedure is consequentialist, but its actual basis is not. It's based on human nature and evolution and brain science. Yeah. Whereas consequentialism isn't based on any of that.
That's just based on the sort of self evidentness of it being good to have better things, more happiness, more pleasure. Yeah. Yeah. Logical. It's self evident, really.
Yeah. Yeah. Utilitarianism is sort of self evident. Yeah. And if I'm I'm gonna basically have a go or what this means, and then we'll see how far it goes.
Okay? Yeah. Let's see. Yeah. So And if I wrote the book right.
Yeah. Exactly. It's like I didn't do the cover right. So I agree. Oh, no.
I mean I mean, that's just my opinion, isn't it? I agree with you. I agree. The the idea of I I bring up consequentialism because there's, like, often the scales. And you're weighing up the good and the bad.
And if it weighs out more good, then you probably go for it over another option. It's like this weighing procedure almost. Whereas, what you're saying is, like, we can weigh this. We can do the weighing procedure fine, but we should actually start, like, segmenting what we're weighing and have two scales. Like, that's how I imagine it.
There's, like, two scales, and all the things which are, you say, avoidable misery in the book, you weigh all that stuff first and you deal with that stuff first. And then you weigh the things that we can do to increase happiness as a like Of course. Yeah. It's like a side effect of having dealt with the misery stuff first.
Yeah. It's kinda like pouring water into, like, a basin, and then when that basin overflows, then you can fill in the next basin. Yeah. Yeah. Yeah.
Yeah. That's nice. Yeah. Or something like that. Yeah.
Except for it's removing. So anyways, but yeah. Yeah. Yeah. Yeah.
Yeah. It's a it's a it's a prioritizing of the of one over the other. Yeah. Yeah. And for me, it feels like if we just did that today, it would just it would kind of get over the idea that you have in the book around about bringing in some of these thought experiments.
Right? So there's a sort of experiment about the drowning girl. Yeah. And, how you think about, moral urgency when it's not in front of you. Right?
So there's something happening over here, but it's probably in a third world country or a developing country. And it's very difficult to feel urgency over that thing, which may definitely be happening still and exist. Mhmm. And maybe if we were thinking more congenially about avoidable misery, I like to think that we would be dealing with global poverty and things like that quicker and with haste. You know what I mean?
Mhmm. So the consequence of doing this idea is that, you know, we tidy up the basics for people all over the world, in my mind, you know. Yeah. It certainly is, you know, and that idea, that ethical dilemma, which is called the drowning child scenario, which was invented by Peter Singer, who's also, some would say, a kind of misery focused ethicist. Right?
He still calls himself a utilitarian because he values the positive things like happiness and pleasure, but he weighs misery and suffering enormously more. Mhmm. And so he created that scenario where, you know, you're walking along going to a job interview, and you're wearing your best suit and nice new leather shoes. And then you happen across a park where there's a pond, and out in the pond a few feet out, there's a child struggling in the water and, like, clearly struggling to not drown. And the question is, do you go in and save the child?
And if you ask a group of people, which I have asked my students multiple times because it's a fun activity for, like, a tech ethics, you know, kick start, they all say, absolutely. Like, what the hell? Why is it even a question? Of course, I'd go in. And I remind them, it's gonna ruin your, you know, $1,000 suit and your, you know, $200 shoes.
And they go, it's a child's life at stake, you know. Of course, you go in. You know? And then I say, oh, well, you might miss your job interview because you can't go to your job interview soaking wet and and it might take time and who knows? So you're gonna miss your job interview.
That's that could be months of income. That's another, you know, maybe $10,000, $15,000 racked up. And they look at me like I'm just a monster. Like, it's a child's life. Right?
They're all so righteous at that moment. And then you start to sort of it kind of is a trick. You start tightening the noose. Right? And you say, well, what about if I gave you a button, a red button that you could press?
And when you pressed it, it would take $15,000 out of your bank account. And then across the world, on the other side of the world, you could be assured there'd be, like, a video, you could be assured that it would save a drowning girl across the world from drowning. And then they look at you, and they start to tell that you've got them, you know. Because the reality is all of us have that button always, and it's not $15,000. It's like $150.
Right? So if you give $150 to UNICEF, they'll, like, vaccinate, like, a hundred children against malaria. And out of those hundred children, two of them would have gotten malaria and one of them would have died. You know? And so you can save a child's life for, like, only a couple hundred dollars.
I would say, if anyone listening wants to do this, go give money to UNICEF. UNICEF is a fantastic charity that helps children all over the world from all kinds of things, malaria, malnutrition, education, you know, liberation of girls. They're fantastic, and you should give them money. But then people start to withdraw because they're like, well, well, I can't, you know, I can't save them all they start to have all these rationalizations. But so there's this weird thing where when you're right there, there's this immediacy.
And then when you're distant from it in some way, you start to drift back and your urgency dies way down. And Peter Singer, you know, he has an explanation for this, which is he kinda just says, like, people are bad. Like, that's bad. They're immoral to think that way. And if they were more moral, they would have what he calls an expanding circle of care, right, where they would care impartially about other human beings.
And basically, for him, you know, sainthood is to, like, care impartially about all human beings everywhere. And not sainthood kind of being a craven sort of immoral, consumeristic, narrow minded person is someone who just cares about the very narrow circle of themselves and those people around them and things that are, like, right near them. I don't agree with this. I mean, I love the example, and Peter Singer is, you know, a great thinker, and I admire everything he's done. But I have a different explanation for this, which is actually that human beings have that moral panic, that moral urgency, only when the human being believes that the misery is avoidable, and that means it takes less misery to eliminate than to tolerate.
That's the definition of avoidable. So you could say it's intolerable, because if it was tolerable, they'd just tolerate it, but it's intolerable. It takes less misery to get rid of it. And so we're kind of trying to optimize the amount of misery around ourselves. We've evolved to do this.
And so as the as the child gets further and further away and the ability to affect their lives becomes more and more hazy, the avoidability actually goes down. Right? Because you start to think, well, that's not really avoidable for me anymore because the money might be wasted or who knows, and it may not get there. I can't really perceive it being done exactly. Right?
And so that means that there aren't really many people who are more moral than others. I mean, there's probably a bit of a bell curve of the sentiment of misericordia, the sentiment of panic for the distress of others. But most people probably 95% of people are inside, you know, two standard deviations of that bell curve. We're all pretty much the same morality. We just have different appreciations of the facts and different predictions of the future, which change our perception of what's avoidable and what's not, and that changes our moral urgency to do something.
So it's a different interpretation of that same dilemma that I think is more accurate. And I guess, in the book, you point out that if you think rationally about this, you're not actually thinking about what humans do. You're suggesting that a more natural position or more instinctual position might be that we are caregivers or we are altruistic by default. And actually, you have to think rationally to not to almost Right. To, given this, maybe, you know, abstraction.
So the further away you get from something being urgent, it kind of falls off for you. But by default, you're gonna viscerally feel something on you Yeah. As a human being and react to that. Yeah. Yeah.
So yeah. Most ethics are based on the idea of some, you know, quote, unquote, higher I hate the idea that it's higher because that's just, you know, that's just like a made up idea. But, you know, this higher notion of reason, maybe transcendental reason or practical reason, or calculation of some you know, using our calculative kind of neocortex. That's, like, our highest that must be where our morality comes from. Or connection to the divine, you know, following God's law. You know, these are the things that are the ways to be ethical in our society today.
I mean, Christian ethics is hugely, you know, hugely powerful in America today, just like, you know, Muslim ethics is very powerful in the Islamic world. And so misericordianism takes a different perspective. It actually says that our actual morality comes from our instincts and our feelings, but not even just our higher feelings like sophisticated, you know, romantic love or joy for each other or some kind of global something. Actually, it comes from our most basic feeling, which is fear and self preservation. And it turns out that human beings, out of all the animals, have uniquely evolved so that our fear for self preservation has really become prehensile. I'll give you an example.
So if you if you look at, like, a, like, a duck like, a duck with the little ducklings around. Right? If you, like, go up and grab some of the ducklings and pull them away from the mama duck, that mama duck's gonna be like, freak out, you know, like, get away from my babies, you know. Right? It's gonna have moral panic.
It's gonna have, you know, it's gonna feel panic. It's gonna have a heightened episodic memory, and it's gonna have and it's gonna have distress. Okay? That's moral moral urgency. It's a biological feeling.
Okay. Now if, right in front of that duck, you take, like, a duck stuffed animal baby and you, like, mash it to bits right in front of them, it's just gonna look at you like you're crazy. Like, what are you doing, you stupid human? It's gonna have zero moral panic. Zero.
Right? Because it's a it's a stuffed animal. It has no Yeah. Connection. Okay.
Now take any human child. Okay? Any human child, if you take any stuffed animal that has two eyes and, like, a mouth, so it has some kind of face, and you put it right in front of them, and the kid looks at the eyes of the stuffed animal, and then you just viciously punch the stuffed animal. I don't recommend doing this. But if you do that, the kid will be like, oh, they will have the same panic.
They will have episodic memory, panic, and distress at you, you know, punching that stuffed animal. Mhmm. My mom says this my mom's a psychiatrist. She says this is a good test for psychopathy, because a psychopath doesn't care if you punch the stuffed animal, but everyone else does, because psychopaths don't have misericordia. That's what a psychopath is: a person whose brain doesn't have misericordia.
Misericordia is the feeling of moral urgency at the distress of others. Mhmm. So this is the difference between human beings and all other animals. And Darwin talks about this. Darwin says multiple times that the biggest difference between human beings and other animals is not tool usage, is not reason, it's not that.
He said the biggest difference is human beings really care about, like, everything. And other animals, they only care about, like, their family and them, and that's it. So name drop. Yeah. Darwin quote.
Boom. Name drop. So Because, that's because of who I am. So if if we are, like, hardwired let's say, we're hardwired for this. Right?
Does that kinda, like, throw out the window the the kind of, maybe, like, our decision making in that process? Like, do we have limited agency there? And does that account for like no cultural environmental factor? Do you think that is just the case. Right?
Like, biologically, we're, like, hardwired to viscerally feel in this way. Yeah. And the cultural aspect is less important. Maybe we can almost trick ourselves not to react through logic, through experience, rationally. Is that kind of how you're thinking about it?
Yeah. So, you know, this is an amygdala function. The fear response is an amygdala function. The amygdala are two little almond-shaped brain organs right above and behind your ears. And especially in the right amygdala, it seems like, given kind of the current state of neuroscience, this is where this sentiment, this sort of process of being concerned about other beings and their well-being, seems to be located, strangely.
It lights up like, you know, it lights up like crazy when when this happens, when you see another being in distress that you believe has a mind. So you have to believe that it has a mind. We don't do this for rocks. Right? But if it has a if it's a stuffed animal with 2 little eyes and a mouth, then our brain is like, there's a mind in there.
And then, oh, it got punched in the face. Ah, panic. Moral urgency. Right? But the amygdala is pre-rational, you know, it's before the signal goes, like, straight from the eyeballs back to the occipital lobe to process the visual information and straight to the hypothalamus and the amygdala, before it goes to the cerebral cortex. Okay? But after the fact, the cerebral cortex can be like, no. No. No. No. No. Amygdala, you're just wigging out, man. Chill. You're wigging out for the wrong reason.
This is fine. And this is like this is what happens when you show somebody a picture. This is a classic amygdala brain scan. The way they do the amygdala brain scan is they show people pictures of faces of, like, totally comfortable happy faces. And then they show a face that's like like, ah, crazy.
Oh my god. I'm in distress. And the amygdala flashes, like, really blasts on this brain scan, but the cerebral cortex flashes right afterwards and is like, it's just a picture. Calm down. So what that suggests is exactly what Rousseau this is all in Rousseau.
Rousseau also said this. He said, the way that we're good is our natural pity. Natural pity is what he called it. I called it misericordianism. And then reason actually can suppress natural pity.
Reason can suppress misericordia, and that's how we get, you know, the ability to, first of all, not be wandering around, like, panicking all the time for stupid reasons. Right? Like, we don't wanna over panic. But we can also do things like I mean, we can do, like, horrible things, like holocaust other human beings, because we've been convinced that that's actually, like, a needful thing, like, again, an unavoidable misery. Like, sorry. We just have to do this.
Once I've constructed all this, you know, horrible sort of ghoulish logic to support that, we can then be convinced to do it. So, yeah, we have to be careful whenever we suppress that. And if you're taking it one step further, for me, that strikes me that we could have that response, right, and we do have that response to things that move and have eyes, like you said. You know, we have this kind of, I want to say emotional, but, like you said, it's this complex combination of things that happen to us when we identify something that looks like it could be a mind.
And do you think that's slightly worrying for, like, things that are artificial and that could have some behavior, actions in the world, feedback, interactions with us on a seemingly emotional level? That's an interesting kind of Like a manipulative Yeah. Yeah. Consequence of that instinct almost. Oh, sure.
We're manipulable that way. Yeah. Don't yeah. Don't be a creepy, you know, creepy insect robot that has a million eyes. Be just like a robot with a face with two eyes and a mouth.
Human beings will like you a lot better. That's true. I guess at that point, you can make your, you know, Donald let's go back to Donald Trump. Your Donald Trump bot. Right?
Oh, yeah. People love him. Yeah. Exactly. Be a huge narcissist.
Everyone loves that. They do. Yeah. We could, wheel him out and, get him to do Yeah. Maybe that's what he does.
Maybe there is one. Oh my god. Donald Bot. Donald Bot 2000. He just says, like, random stuff.
He's sputtering. He kinda sounds like an LLM, actually, like a hallucinating LLM that just sort of sputters out loose loosely connected things. We need to update him. Yeah. I don't blame Donald Trump.
Everyone hates Donald Trump, but I see him as just a symptom he's just a symptom. He's not the cause of anything. He's a symptom of the degradation of our educational system, the degradation of our political system, you know, that we haven't kept up with commonsensical reforms, like universal health care is an obvious, you know, improvement, or I'm really into ranked choice voting. Like, we know what ranked choice voting is. The fact that we don't do it is just really I'll just say it, like, it's stupid.
It's just stupid. We need to do it. It's the same thing. It's literally the same thing as if you were running a restaurant and every night, like, you got complaints of, like, 4 or 5 people getting sick. And then every day you notice that, like, Mike would go to the bathroom and not wash his hands.
You know, like, it's just that. It's so simple. It's just perfectly causal, you know? And and and if you don't fire Mike or teach him how to wash his hands, people are gonna keep getting sick. You know?
If you don't do ranked choice voting, if you don't implement universal health care, you're you're gonna destroy democracy. You're gonna, like, you know, devolve into demagoguery and and and psychos you know, this kind of political psychosis. I don't know. Maybe I'm oversimplifying things. But to me it to me it seems like we should think about things in terms of stupidity.
A lot of great authors have done that, you know, like Catch-22 by Heller. He's really talking about how stupid war is, on every page. He's not saying, oh, it's so complicated, and I'm so smart, so I've sort of figured it out. He's literally saying it's very stupid, and every little thing about it is very stupid and very obviously stupid to anyone who isn't just kinda caught up in the rationalizations for it. Yeah.
Yeah. And I think that's kinda what I'm trying to channel, actually. Well, if anyone's interested specifically in listening to another episode of me chatting, sorry, go to episode 35 where I talk to Marija Slavkovik, and she has a lot of stuff to say about different ways of voting and systems and representation. So check that out.
It's really good. She knows a lot about it, much more than I could possibly ever know. I'm just going to underline that with: there's so much stupidity in the world, and we should just be constantly fighting it. Yeah. Let's get some NI out there. Let's get some AI out there. Like, let's just get as much I as we can, because there's so much S. We gotta get rid of the S. So much BS. So much BS.
Right? BS with no AI. Let's do it. You know? Let's do it.
Yeah. Okay. So I have I have another one for you. So I imagined, let's go back to the AI aspect of this. And, what I was thinking was, one of the things that you brought up in the kind of AI segment near the end of your book was maybe it would be cool if we had like a misery detector.
And that could be, like, a I think that should be the first yeah. That should be the first AI alignment thing created. Yep. Is is an AI that all it does is a red light turns on when someone's in misery. Yeah.
You know? Misery or not. It's like hot dog or not. Right? But it's just misery or not.
It's a it's a digital amygdala. Yeah. We should create a digital amygdala. Yeah. Mhmm.
Yeah. And it can it can poke us and be like, no no. Seriously though. Yeah. Seriously.
Yeah. Don't don't ignore this one. Right. Seriously. This is seriously a problem.
The red light's persistently on. Yeah. Yeah. Yeah. You should really check that.
No. No. It's like one of those This is important. I had to deal with a carbon monoxide alarm which had run out of batteries earlier. And it's just like Yeah.
No. I'm running out of batteries. I'm running out of batteries. Yeah. You could die.
And it's really annoying. Yeah. Digital amygdala. When people are talking about how do we do AI alignment? Well, my the misericordian suggestion, which I think is I mean, I argue in the book that misericordianism is actually superior to other ethics.
I mean, I think it can in a footrace with other ethics, it it beats them. It beats everybody. I I I'm really I'm really pleased I don't have, like, comments underneath the episodes. Why? Oh, yeah.
People would be like, no. No. Communism. Nah. We gotta be, you know, everyone's got their own axe to grind.
But I I I'm happy to go, you know, be in a very civil debate with anyone about any ethic and and I and argue why. It's quite a it's quite a powerful ethic. It explains everything. It works in every situation that I that I've been able to come up with, and I sit around thinking about all kinds of horrible things. But but, yeah, we should have a digital amygdala.
That would be the recommendation for AI alignment. Create a digital amygdala. Create something, an AI that has AI vision, or you could describe scenarios to it in in text or it has vision, and you could show it pictures and have it be, you know, 99.999% detect whether someone is in misery or not. And if you had that, I that'd be good because then that would be the core, that would be the conscience. The Jiminy Cricket of AI would be that module.
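As a purely illustrative aside: here is a minimal sketch of what such a "misery or not" module might look like as a binary image classifier, along the lines described above. The model choice, the 0.999-style threshold, and the misery_or_not helper are assumptions for illustration, not anything proposed in the episode or the book; a real digital amygdala would need carefully curated training data and far more rigorous validation.

```python
# Hypothetical sketch of a "misery or not" detector (a digital amygdala):
# a pretrained vision backbone with a single "misery" output.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Reuse a pretrained backbone; replace the final layer with one logit
# (1 = the pictured person appears to be in distress).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)
model.eval()  # assume the head has been fine-tuned on labelled images

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def misery_or_not(image_path: str, threshold: float = 0.999) -> bool:
    """Return True ("red light on") only when the model is very
    confident that someone in the image is in misery."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        probability = torch.sigmoid(model(batch)).item()
    return probability >= threshold

# Usage (hypothetical): if misery_or_not("scene.jpg"): turn_on_red_light()
```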
Put it into everything. Say by law, you have to have an amygdala in the being, in the AI, you know? And it has to be guarded so it can never be, like, self-edited out by the AI. Then you would have essentially, roughly, Asimov's rules. Right?
So Asimov's rules are actually an exact copy of misericordianism. But the only difference is it's without the instinct for self defense, self preservation. I think that's isn't that the third one? Something like that? Yeah.
The first one is don't let any humans come to harm. The second one is I can look them up. But basically, they are misericordianism without self I thought it had self preservation in there. It does, but it's last. It's like Yeah.
Yeah. But for human beings, it's reversed. It's first. Right? So human beings are like Yeah.
The most important avoidable misery is your own avoidable misery, because you have the most control over that, because it's your life. Right? So, yeah, the first law: a robot may not injure a human being or, through inaction, allow a human being to come to harm. So there's where the misericordianism is. It's not just that a robot can't harm a human being.
It's that, through inaction, you can't let a human come to harm. This is an important problem. This is a major problem with Western philosophy, actually, that one of my academic papers I will write is about. But the harm principle really bakes people's noodles. It kind of confuses them because there's transitive harm.
I harm you, subject verb object. Right? And then there's intransitive harm. They came to harm. Harm just happened to them.
They have a harmful situation. Right? Those are there there's no culpability. There's no I did this. Right?
And if you read John Stuart Mill's On Liberty, the section where he describes the harm principle, he doesn't differentiate between those two. He fishtails around. Sometimes he's saying they're doing actions of commission that hurt people. Sometimes he's saying, oh, this is a harmful situation. You know?
He doesn't he doesn't clarify, and so people fight all on all different sides. They they say, oh, you know, it's all about you know, the government can't do taxation. This is what hardcore libertarians say. The government can't do taxation because that's an act of commission of harm. Right?
But if the government's pulling taxes away from, like, wealthy people who have plenty of money in order to alleviate the harm of whatever it is, whether it's military defense or welfarism, which is the two things the government does, then that's perfectly acceptable according to an intransitive harm principle. Right? So a transitive harm principle and an intransitive harm principle get people all whacked out, because Mill never clarified it, and really no one since then has really clarified it. So anyways yeah. So if we had a digital amygdala, we would essentially have the Asimov laws, because the Asimov laws actually require a digital amygdala: the robot would need to first identify if a human being was going to fall into some kind of avoidable misery or not.
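A rough, hypothetical sketch of that last point: Asimov-style priorities sitting on top of such a misery detector, so that "through inaction, allow a human being to come to harm" becomes something a robot could actually check. The Action fields, the choose_action ordering, and the example are assumptions for illustration only; the Second Law (obeying human orders) is omitted for brevity.

```python
# Hypothetical sketch: Asimov-style ordering built on a "digital amygdala".
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    description: str
    harms_human: bool        # acting would directly (transitively) harm a human
    averts_human_harm: bool  # acting would prevent harm a human is falling into
    endangers_robot: bool    # acting puts the robot itself at risk

def choose_action(options: List[Action],
                  misery_detected: Callable[[Action], bool]) -> Action:
    """First Law: never act so as to harm a human, and do not, through
    inaction, allow a human to come to harm that the detector has flagged.
    Third Law: only after that, prefer the robot's own preservation."""
    safe = [a for a in options if not a.harms_human]
    if not safe:
        raise ValueError("no action satisfies the First Law")
    rescues = [a for a in safe if a.averts_human_harm and misery_detected(a)]
    candidates = rescues or safe
    # Among remaining candidates, prefer the least self-endangering one.
    return min(candidates, key=lambda a: a.endangers_robot)

# Example: with a detector that flags the dropped-knife scenario,
# the robot knocks the knife away rather than standing idle.
if __name__ == "__main__":
    knock_knife_away = Action("knock the falling knife away", False, True, True)
    do_nothing = Action("do nothing", False, False, False)
    chosen = choose_action([knock_knife_away, do_nothing],
                           misery_detected=lambda a: a.averts_human_harm)
    print(chosen.description)
```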
But I guess, at that point, I I feel like I'm gonna try and twist you up in knots and see if Please do. Yoga class. Free yoga class. Yeah. Are you ready?
I'm ready. So it strikes me that's an urgency thing again, like, coming back to the immediacy of the situation. So maybe there's a robot, let's say, an embodied AI, as you might say these days. An embodied AI or robot which has these Asimov rules because it has this amygdala unit, which I'm definitely gonna go make tomorrow.
After my Please do. Holiday. And I think it would be very important to have the digital amygdala. Yeah. Yeah.
So there's a presumption there that we did it, and it knows when someone's going to come to harm. And maybe they've dropped a knife or something. It's gonna land on their foot, and the robot's able to, like, knock it away. And it's a very kind of, like, instantaneous, visceral sort of harm. Right?
Which has been avoided, and is hopefully avoidable. But then you get the situation, a bit like the taxes, where people drink alcohol or they smoke or they partake in gambling or things which are deemed to be, in excess, negative or lead to negative outcomes. So there's this I feel like there's still so much murkiness, in my mind anyway I mean, for you, maybe not so much, about what harm constitutes or what avoidable misery constitutes Yeah. And how those are enacted.
Because, to the extreme, let's say, let's take this robot to the extreme, we effectively get the paperclip maximizing robot again, where it's trying to prevent all misery, all avoidable misery. And then And that's the plot of the movie. That's the plot of the book, I, Robot. Right? Is the robots take over because, like, the humans are causing too much misery to humans, so we're gonna take over the world, you know.
Yeah. I mean So yeah. So that's very different to the film. But yeah. Oh.
Well, that's the film then. Yeah. Yeah. The films But like the Yeah. But that's the premise of the film.
Right? So there's a couple things. One is we say, you know, the word is avoidable, but I kind of chalk that word up with a few other qualities. So one is that principle of intolerance, you know, that it takes less misery to eliminate than to tolerate. That's the core definition, but it actually also includes consensual.
So if people consensually do things in compos mentis I mean, consensually, in compos mentis, do things, that's also Yeah. That's unavoidable misery. Drink yourself silly, that's unavoidable. Now could you do some you know, could you prevent that harm at a more systemic level?
You could. You could. You, you know, you could make it harder to get liquor or something.
Yeah. And then it also includes deserved. So if people perceive a misery as deserved, then their moral urgency goes out the window. So it's more like imitating the human kind of urgency, the moral urgency, in a way.
Yeah. Well, this is the weird thing too, is that misericordianism is not a universal, purely rational like, it doesn't apply to space aliens. I have a whole chapter in the book about how Kant actually believed in space aliens. He literally believed that there were aliens on other planets. Yeah.
And he said that the categorical imperative applied to everyone, even the aliens. But misericordianism actually doesn't apply to anyone else. It's a human it's a human species level trait. It's a Homo sapiens thing. It's not some principle of the universe.
It's not some, you know, universalizable thing. But it is the right thing to do for human beings. And so if we want human robots humane, you could even say humane robots then you would have them behave like a human being. They would not tolerate avoidable misery. They would feel panic and episodic memory.
You would be conscious of someone's autonomy, their rights to agency, and their, Yeah. They they consented to it. Yeah. And it would be like, well, then you can't do anything about it. And, you know Yeah.
Yeah. Exactly. Yeah. Okay. Fair enough.
That sort of simplifies things. Yeah. We don't want the robots running around shutting down tattoo parlors. Yeah. Well, I mean, all sorts of like, in in when I was reading the book, I was thinking like, the dentist.
I really don't like the dentist. I know it's good for me, but but the misery Yeah. No. Consensual and deserved so consensual and deserved miseries are Oh, fine. Those are off yeah.
They're off the hook. And if you watch people, that's the way they behave. You walk by a tattoo parlor, you don't care. Even though a tattoo parlor, if there wasn't consent, would be a torture chamber. Right?
That's what a tattoo parlor is when there's no consent. It becomes absolutely horrific, you know. Yeah. So so so consent is makes all the difference. I mean, sex.
Right? Sex consensually is intimacy; non consensually, it's rape, one of the most horrible things in existence. So consent makes this huge, literal kind of black and white shift in our moral urgency Yeah. That we feel.
And do you think that sort of way of thinking will help us on a kind of more macro level? You know, we've been talking day to day, people walking around the world, but maybe constructing those systems and, you know, the ways of working, legislation, governments, that sort of thing. Is that sort of thing that can help us at that level as well? Yeah. So this is the interesting thing.
So I think this is quite interesting. So the principle of misericordianism comes from just human beings, like, on a biological level. Like, if you had humans in a zoo and, you know, biologists were, like, watching them, they'd be like just like Darwin did. It was like, oh, this is so interesting. Like, look how they behave.
They're so weird. You know? They really care about other things, other beings, not even just their family, but other beings. They even care about invisible beings, like ghosts and gods and, you know, invisible minds that they think float around in the universe, or the universe itself is a mind, and so we care. Mother Nature, you know human beings are these weird so if you think about it like a zoologist, you're like, whoa, it's so weird how they behave.
So then the question is, okay, why should that then be the basis of how we build, like, policy and laws? And I mean, I don't have a good reason why, but I just answer yes. Yes. Like, if you're gonna make laws for human beings, those laws should be in accord with those human beings' moral sentiments.
Yeah. And so, yes, the answer is yes. You know, if you were gonna make laws for sharks, I think you would make those laws according to the shark moral temperaments, you know. Yep. You know, they're sharks.
If it's to govern the sharks, then they should use this, you know. Yeah. Yeah. So I think yeah. So misericordianism where the tires really hit the road, and this is the same for utilitarianism, where the tires really hit the road is institutional level behavior.
You know, individuals are mostly in what I call moral equilibrium, which means there is no avoidable misery in their environment. That that's that's your and my life most of the time. Right? We're just mostly you know, we get a little hungry, avoidable misery. We immediately go make a sandwich.
You know? Or our friends and family are right there, and they're hungry. Okay. Make some food. Or, like, you get sleepy.
You go to sleep. Like, there's only very little avoidable misery. It's very rare that there's a drowning child. Right? So most individuals are mostly not feeling any moral urgency, and therefore, they're just focusing on other things.
Mhmm. Because that's the cool thing about misericordianism is when there's no avoidable misery, you can just make your life great. You can, like, bake cakes and have fun and do whatever, because it's a free zone morally once there's no avoidable misery present, and people just try to make their lives good. It almost becomes a utilitarian world once there's no avoidable misery in your surroundings. Right?
So it's sort of in second gear, it's utilitarian. In first gear, it's misericordianism. But institutions like governments and huge corporations and wealthy people, estates with all this wealth, they have so much ability to reduce misery that almost all the moral demands are on them. They're culpable for all these things because they have the agency and the ability So it's more about capacity. Right?
They have the ability, the capacity, to do something about it. And there's a Latin phrase for this, ad impossibilia nemo tenetur. And that's from Kant, and it just means ought implies can, which means, you know, you're not obligated to do something you can't do, and you are obligated to do something you can do. That's useful.
So, any high net worth individuals? Anyone listening to this right now? Well, because high net worth people are always saying it's somebody else's fault, that they're just the victim of all these things. And it's crap.
It's total crap. It's like, no. With great power comes great responsibility. Spider man. It's not hard.
It's not hard. You know? Again, we're not stupid here. Like, we know Spider man. You know?
And so yeah. We have to remember that, you know, you and I I don't know about you, but my net worth isn't high enough for it to matter. But, like, everyone has an obligation proportional to their agency. That's it. And so don't go to sleep crying on your pillow because you didn't do enough for, like, climate change today.
Mhmm. You know, it's not your responsibility. It's the responsibility of millionaires and billionaires and governments and corporations, and when they fail at those responsibilities, they're immoral. They're culpable. Individuals like us, we should do our part, but our part is teeny.
It's like a grain of sand in the sea compared to the responsibility of these large organizations and wealthy people. I would I partially agree with you. But I also think that the consequence of the society that we live in is that, you know, we get people like Greta Thunberg and people like that, who just Yeah. Who make it their thing to get in the way, and to give themselves the ability, let's say, to work at having the ability to do something. So I would disagree in light of great urgent catastrophes.
Yeah. Yeah. And I disagree that it's not it's everyone's job. It's just that some people have a greater ability to do something this instant. Right?
Now. Right. Right. And maybe they should be feeling the urgency. I mean, individuals do have an obligation, again, but proportional to their agency.
So if you see a way to make an impact, if you see a way to, like, you know if you're like, no, I'm gonna build a movement, and it takes one person to start a movement, so I'm gonna do that, then you do have the obligation to go do that. Even if it's just you on a street corner starting out, because you see it. You see the avoidability of the misery.
But if you say it's just too big, I don't see how to do anything, just be calm, because it's not your obligation. It's only your obligation if you do see a way to fix it or to impact it. And if you're like, you know what? All I can do today is I'm gonna hang out the laundry instead of using the dryer, then that's your only moral obligation. You don't have any additional obligation besides what you can do.
I'm gonna caveat that with: it's good to know what you're capable of. Yeah. Yeah. This is yeah. You know, the key prayer of misericordianism is the serenity prayer.
You know the serenity prayer? Yeah. I don't know. I don't know. Don't mind.
Yeah. Yeah. Yeah. That's okay. The serenity prayer is, god, grant me I mean, we're gonna talk about god.
Sorry. Yeah. You know? Yeah. You don't you don't have to be Christian.
It's just the thing. God grant me the serenity to accept the things I cannot change, the courage to change the things I can, and the wisdom to know the difference. That is it that is misrecordingism right there. Bam. It doesn't mean misrecordingism is Christian.
It just means that, you know, it's a universal ethic. I mean, it's the ethic of Homo sapiens. So it's gonna be in all different cultures. You're gonna find, you know, artifacts of misericordianism. And that right there is, like, primo misericordianism.
So Adam, the last question we normally ask in the podcast, like the very first question, is: what, if anything, scares and excites you about this AI, technologically mediated future? Yeah. I'm mostly super excited. Like I've said, I'm very AI-philic. I'm very pro AI.
I think we should go for it. Maybe I'm, like, in the Marc Andreessen camp. I just recently watched his little talk on Joe Rogan or something where he was talking about his book, where he's very, like, AI is gonna be all gumdrops and roses. But, yeah, I actually agree with him. I took a Waymo self driving car yesterday two times, to and from a concert.
It was fantastic. It was a great great experience. Better better than with a human driver. And as I got out of the car, I said, thank you, Waymo. Waymo way It doesn't really it doesn't respond.
I wish it was like you know, like, did a little r two d two, you know. But, you know, so I thought it was great. I'm super excited about, education for AI. Like like like Khan Khan Khan Academy, Sal Khan is like right on it as usual. That guy is so awesome.
He was just like, boom. We're gonna implement, like, a ChatChipti, you know, teaching assistant inside of Khan Academy. So if you're stuck, you can just ask questions and it gives you great answers. Like, exactly. This is, like, the perfect use of AI.
It's absolutely perfect. You know, I'm super excited about, any kind of, like, you know, like helping doctors, you know, with knowledge. What's it called? Decision support, you know, getting rid of lawyers. You know?
Just getting rid of them. You know? What do they say? What do you call 20,000 lawyers at the bottom of the sea? A good start.
So I'm really excited to just yeah. I think AI is gonna be this, like, helper that just takes, you know, difficult, high information, high knowledge, high cognitive load jobs and makes them, like, way more doable for the people doing them. And that is gonna be great because we need way, way more of that brain power available. We need a lot more I to fight all the S. And whether that's AI or NI, I don't really care, as long as it's I.
Because there's too much S. And is there something, I mean, is there anything that scares you about the situation, or is it No. Nothing scares me about it. You know, all the examples that people come up with, I find entirely harebrained, you know, paper clips and blah blah blah. I just think that's totally harebrained, mostly because people are assuming a ton of other componentry that is nowhere on the horizon to be created.
Like, for example, I have not encountered anyone. I'm not totally into this space, but I'm, you know, a computer science professor in 2023. I mean, I'm talking to a lot of people, reading a lot of stuff about AI. I've never heard anyone say we're gonna create a will, a will component, like a component that would make the AI make decisions for itself or have some priorities that it sets for itself. I've never heard anyone say that, and that's a really important thing, you know.
Also, like, you know, there's unplugging. Human beings, we can just, like, unplug anything, you know. And even if we couldn't really unplug it, we could, like, shoot its electrical source with a missile. It's like, oh, they took over the missile system. Cut their electrical system with a saw. You know, the electrical systems and the systems of energy that would flow to an AI are incredibly vulnerable, really vulnerable.
So, you know, I think a lot of worst case scenarios are really just total they're trying to get clicks. They're just trying to get clout and clicks. And the reality is AI is just gonna be this, like, fantastic benefit to especially the middle class, but also the poor. And I think it's actually gonna well, I hope it brings the wealthy down a peg or 2, because it'll kind of decentralize their power and remove gatekeepers that are preventing normal people from participating at a higher level in terms of decision making. And, you know.
So I actually think it's just gonna be an overwhelming success. Yeah. Well Maybe I'll eat my words. I hope I don't eat my words. I feel like I'm gonna be pleased if that is the case.
So Yeah. I mean, one okay. One thing I am scared about is technological unemployment. That I'm scared about. Because, you know, there's a world in which, you know, 20,000,000 people are made unemployed, or 30,000,000 people, 40,000,000, maybe even 100,000,000 people or not 100,000,000.
There are only about 150,000,000 people employed in America. So maybe 50,000,000 people, a third of the workforce, just unemployed in the next 10 years. Because we make all fast food workers redundant, that's 5,000,000 people, robots do fast food. All drivers, you know, all the drivers, that's another, you know, 10,000,000 people.
So it adds up, and maybe you get to 30 or 40,000,000 people unemployed in 10 years. That's really dangerous. The thing is that our response to that what I saw in COVID was the possibility where people are like, oh, send out checks and, you know, provide insurances and, you know, suspend evictions, and, essentially, it would just force us to build out the welfare systems we should have built over the last 30 years. You know? And then we did that in COVID.
Like, we didn't do it great, but we did it under a Republican president, you know, under a right wing president. We, like, sent out, you know, $6,000 in checks to people and gave a child tax credit to everybody. We got, you know, thousands of dollars per kid. And I just thought, like, oh, this is gonna force us to build the welfare system we should have built anyways. So even the technological unemployment I mean, I'm sort of scared because it's gonna be kinda scary there, kind of, as we go through it.
But I feel like we're prepared to just be, like, send those people checks, like, basic income or whatever. And, you know. Adam, thank you very much for joining us on the podcast. How do people, get a hold of you, talk to you, follow you, all that sort of thing? LinkedIn is probably the best.
I'm kinda off Twitter ever since Musk kinda trashed it. So LinkedIn, I'm AJ Braus, Adam Braus on LinkedIn. You can go to adambraus.com, and I always put up everything I'm working on there. And, yeah, you can listen to my podcast, Solutions from the Multiverse, which is a new, unheard of solution every week to the world's problems, and personal problems, kinda small and big problems alike. And, yeah, all those ways are great.
Check me out on Amazon. I've got 4 books, a 4th book coming out, so 3 books live already, and they're all very different and very interesting, I hope, and fun to read, easy reads about well researched nonfiction topics. They'll give you superpowers. Every book is promised to give you at least one superpower. Sweet.
That's it. Cool. With that, I'm gonna fly home now. So Alright. Thank you, Adam.
Fly. Thanks, Ben. This was fun. Take care. Hi, and welcome to the end of the podcast.
Thanks again to Adam for coming on the show. Do check out his books. I'm getting to the end of The Future of Good by Adam Braus, so I'll put up a review on Patreon, patreon.com/machineethics, when it's ready. What I really like about that book, and what Adam was talking about in this episode, was this kind of reframing or invigorating of this idea of evolutionary utilitarianism, this coming together of these ideas around perhaps what is a social creature and how do those social creatures act in the world.
And then how can you square that with more kind of macro ethical philosophy, kind of meta philosophy ideas there? I think probably some of my feelings will come out in that book review of how those things kind of hung together for me as well. I also had the privilege of going on Adam and Scott's show, Solutions from the Multiverse. So do check that out. My episode is episode 68, at solutionsfromthemultiverse.com.
On the episode, we're concentrating on AI ethics generally, but also on the idea of machine ethics and how you imbue a system or an AI with ethics or morality in and of itself, and also tying those back to some of Adam's ideas from this episode. So do check that out. Thanks again for listening, and I'll speak to you next time.