#56 — Abusing Dolores
Episode Stats
Length
1 hour and 9 minutes
Words per Minute
175.4
Summary
In this episode, Dr. Paul Bloom joins Sam Harris to discuss his new book, "Against Empathy: The Case for Rational Compassion." They talk about the difference between empathy and compassion, why empathy is biased, parochial, and prone to leading our moral judgment astray, the neuroscience and meditation research of Tania Singer and Matthieu Ricard, Paul Slovic's work on moral illusions, Adam Smith's famous thought experiment about a catastrophe in distant China, whether we are right to love our own children more than strangers, effective altruism and automating charitable giving, empathy and identity politics, whether we should build empathy into artificial intelligence, the ethics of self-driving cars, and what shows like Westworld and Ex Machina reveal about how we treat seemingly conscious machines. This is Paul's third appearance on the podcast. Note that if you are not a subscriber, you will hear only the first part of this conversation. We don't run ads on the podcast, and it's made possible entirely through the support of our subscribers, so if you enjoy what we're doing here, please consider becoming one.
Transcript
00:00:10.880
Just a note to say that if you're hearing this, you are not currently on our subscriber
00:00:14.680
feed and will only be hearing the first part of this conversation.
00:00:18.440
In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at
00:00:24.140
There you'll find our private RSS feed to add to your favorite podcatcher, along with
00:00:30.260
We don't run ads on the podcast, and therefore it's made possible entirely through the support
00:00:35.880
So if you enjoy what we're doing here, please consider becoming one.
00:00:51.860
You are now officially my, well, I have, I think, only two return guests.
00:00:57.620
But you have just edged out David Deutsch, who has two appearances.
00:01:02.080
So yours is the only third appearance on this podcast.
00:01:06.500
It's not exactly like a 20th appearance on The Tonight Show, but it is a measure of how
00:01:16.760
Well, after we did our second show, people just emailed me saying, just have Paul on the
00:01:28.300
I think a little bit of what makes for a good discussion, which is you and I agree on a
00:01:32.740
We have a lot of common ground, but there's enough tension and enough things to rub against
00:01:39.120
We will see if we can steer ourselves in the direction of controversy, perhaps.
00:01:43.320
But you have just released a book, which we talked about to a significant degree, I think,
00:01:51.600
And we would be remiss not to talk about it some.
00:01:56.660
But people should just know that if they find what we're about to say about empathy intriguing,
00:02:02.760
our first podcast has a full hour or more on it, and it is an incredibly interesting and
00:02:10.020
consequential issue, which we will be giving short shrift here because we've already done
00:02:15.100
But the proper intro to this topic is that you have just released a book entitled Against
00:02:19.940
Empathy, which is, I think I told you at the time, a fantastic title.
00:02:25.260
You seem to steer yourself out of a full collision with the outrage of your colleagues in your
00:02:31.300
You have, as a subtitle, The Case for Rational Compassion.
00:02:38.060
Tell us about your position on empathy and how it's different from compassion.
00:02:45.220
So the distinction is super important because if you just hear the title Against Empathy, it'd be
00:02:49.580
fair enough to assume I'm some sort of monster, some sort of person arguing for pure selfishness
00:02:54.900
or, you know, entire lack of warmth or caring for others.
00:03:00.840
And it's actually not what psychologists and philosophers mean by empathy either.
00:03:05.120
What I'm against is putting yourself in other people's shoes, feeling their pain, feeling their
00:03:14.080
I think empathy is a wonderful source of pleasure.
00:03:20.020
It's central to the pleasure we get from literature and movies and all sorts of fictional entertainments.
00:03:25.780
But what I argue is in the moral realm, when it comes to being good people, it steers us
00:03:35.560
And the reason why is that it zooms us in on individuals like a spotlight.
00:03:39.720
And in fact, the fans of empathy describe it as a spotlight.
00:03:47.360
I'll be more empathic towards somebody who is my skin color than of a different skin
00:03:51.300
color, towards somebody I know versus a stranger.
00:03:53.740
It's difficult to be empathic at all to somebody who you view as disgusting or unattractive or
00:04:02.760
And in fact, there's a lot of neuroscience studies we can get into that get at this not
00:04:07.080
only through self-report, which is kind of unreliable, but actually looking at the correlates
00:04:12.960
You know, for instance, one of my favorite studies tested male soccer
00:04:18.260
And they watch somebody who's been described as a fan of their same team receive electric
00:04:27.040
In fact, the same parts of their brain that would be active if they themselves were being
00:04:31.000
shocked light up when they see this other person being shocked.
00:04:35.100
But then in another condition, they observe somebody who's described as not being of the
00:04:44.100
And in fact, what you get is kind of a blast of pleasure circuitry when they watch the other
00:04:50.640
And so empathy is biased and narrow and parochial and I think leads us astray in a million ways,
00:04:56.880
much of which we discussed the last time we talked about this.
00:05:01.380
So my argument is what we should replace empathy with for decision making is cold-blooded reasoning
00:05:08.300
of a more or less utilitarian sort where you judge costs and benefits.
00:05:13.180
You ask yourself, what can I do to make the world a better place?
00:05:15.920
What could I do to increase happiness, to reduce suffering?
00:05:18.920
And maybe you could view that in a utilitarian way.
00:05:21.380
You could do it in terms of Kantian moral principles.
00:05:28.000
What's missing in that, and that's the rational part of my subtitle,
00:05:30.880
what's missing in that is everybody from David Hume on down has pointed out you need some sort of
00:05:41.160
So many people blur empathy and compassion together.
00:05:43.800
And I don't actually care how people use the terminology.
00:05:46.520
But what's important is they're really different.
00:05:51.460
I see you suffer and I feel your pain and I zoom in on that.
00:05:55.060
But you could also feel compassion, which is you care for somebody.
00:06:00.320
You want them to be happy, but you don't feel their pain.
00:06:03.720
And some really cool experiments on this, for instance, were done by – and this is going to
00:06:07.440
connect to one of your deep interests, that of meditation – were done by Tania Singer,
00:06:12.540
who's a German neuroscientist, and Matthieu Ricard, who's a Buddhist monk and so-called
00:06:18.180
And they did these studies where they trained people to feel empathy, to experience the suffering
00:06:25.600
And then they trained another group to feel compassion.
00:06:28.240
And the way they do it is through loving kindness meditation, where you care about others,
00:06:34.100
Now, it turns out these activate entirely different parts of the brain.
00:06:36.940
There's always some overlap, but there's distinct parts of the brain.
00:06:39.220
But more to the point, they have different effects.
00:06:52.720
They enjoy the feeling of kindness towards other people, and it makes them nicer.
00:06:56.080
And recent studies, like very recent studies, by the psychologist David DeSteno at Northeastern,
00:07:02.760
back this up by finding that meditation training actually increases people's kindness.
00:07:07.900
And the explanation that they give – and it's an open question why it does so – the
00:07:11.460
explanation they give is, it ignites compassion but shuts down empathy circuitry.
00:07:15.880
That is, you deal with suffering, and you could deal with it better because you don't
00:07:21.260
So, this is one way I'd make the distinction between empathy and compassion.
00:07:24.340
Yeah, I think we probably raised this last time, but it's difficult to exaggerate how
00:07:30.220
fully our moral intuitions can misfire when guided by empathy as opposed to some kind of
00:07:35.620
rational understanding of what will positively affect the world.
00:07:40.220
The research done by Paul Slovic on moral illusions is fascinating here.
00:07:46.620
When you show someone a picture of a single little girl who's in need, they are maximally
00:07:54.340
But if you show them a picture of the same little girl and her brother, their altruistic
00:08:02.820
And if you show them 10 kids, it's reduced further.
00:08:05.620
And then if you give them statistics about hundreds of thousands of kids in need of this
00:08:14.960
And that, I think, relates to this issue of empathy as opposed to what is a higher cognitive act of
00:08:23.400
just assessing where the needs are greatest in the world.
00:08:27.100
One could argue that we are not evolutionarily well designed to do that.
00:08:32.800
I mean, I remember you cited the Slovic findings.
00:08:35.000
I think it was in The Moral Landscape where you say something to the effect that there's
00:08:40.060
never been a psychological finding that so blatantly shows a moral error.
00:08:44.340
Whatever your moral philosophy is, you shouldn't think that one life is worth more than eight,
00:08:51.860
Especially when the eight contain the one life you're concerned about.
00:08:58.160
And I mean, the cool thing is that upon reflection, we could realize this.
00:09:01.980
So I'm not one of these psychologists who go on about how stupid we are, because I think
00:09:05.660
every demonstration of human stupidity or irrationality has contained with it a demonstration
00:09:11.400
of our intelligence, because we know it's irrational.
00:09:14.180
We could point it out and say, God, that's silly.
00:09:16.500
I mean, and we have a lot of, my book cites a lot of research demonstrating the sort of
00:09:21.520
phenomena you're talking about, but it's an old observation.
00:09:24.320
I mean, Adam Smith, like 300 years ago, gave the
00:09:30.020
example of an educated man of Europe hearing that the country of China was destroyed.
00:09:36.400
At a time when they would have never known somebody from China.
00:09:39.140
And Smith says, basically, your average European man would say, well, that's a shame.
00:09:45.720
But if he was to learn that tomorrow, he would lose his little finger.
00:09:56.580
And he uses this example to show that our feelings are skewed in bizarre ways.
00:10:01.660
But then he goes on to point out that we can step back and recognize that the death of thousands
00:10:07.000
is far greater tragedy than the loss of our finger.
00:10:10.740
And it's this dualism, this duality that fascinates me between what our gut tells us and what our
00:10:17.660
I believe he also goes on to say that any man who would weigh the loss of his finger over
00:10:22.820
the lives of thousands or millions in some distant country, we would consider a moral
00:10:28.300
Yes, he says that human nature shudders at the thought.
00:10:32.620
It's one of the great passages in all of literature, really.
00:10:35.840
I think I quote the whole thing in The Moral Landscape.
00:10:38.660
So just a few points to pick up on what you just said about the neuroimaging research done
00:10:47.940
It's something that people don't tend to know about the meditative side of this.
00:10:52.920
But compassion as a response to suffering from a meditative first person, and certainly
00:11:00.480
from the view of Buddhist psychology, is a highly positive emotion.
00:11:07.380
You're not diminished by the feeling of compassion.
00:11:10.180
The feeling of compassion is really exquisitely pleasurable.
00:11:14.260
It is what love feels like in the presence of suffering.
00:11:18.100
The Buddhists have various modes of what is called loving-kindness, and loving-kindness
00:11:24.900
is the generic feeling of wishing others happiness.
00:11:29.680
And you can actually form this wish with an intensity that is really psychologically overwhelming,
00:11:37.160
which is to say it just drowns out every other attitude you would have toward friends
00:11:44.960
You can just get this humming even directed at a person who has done you harm or who is
00:11:54.040
You wish this person was no longer suffering in all the ways that they are and will,
00:12:00.500
being the kind of evil person they are, and you wish you could improve them.
00:12:04.640
And so Buddhist meditators acquire these states of mind, and it's the antithesis of merely being
00:12:12.900
made to suffer by witnessing the suffering of others.
00:12:17.120
It's the antithesis of being made depressed when you are in the presence of a depressed
00:12:23.420
And so the fact that empathy and compassion are used, for the most part, as
00:12:29.260
synonyms in our culture is deeply confusing about what normative human psychology promises
00:12:38.780
and just what is on the menu as far as conscious attitudes one can take toward the suffering
00:12:47.560
I think I'm now getting into a sort of debate in the journal Trends in Cognitive
00:12:53.180
Sciences with an excellent neuroscientist who disagrees with me.
00:12:57.700
And there's all sorts of interesting points to go back and forth.
00:13:00.240
But at one point, he complains about the terminology, and he says, compassion isn't opposed to empathy.
00:13:11.540
I'm totally comfortable calling them different types of empathy, in which case I'm against
00:13:17.620
But the distinction itself is absolutely critical.
00:13:20.980
And it's so often missed, not only in the scientific field, but also in everyday life.
00:13:26.480
I published an article on empathy in the Boston Review, and I got a wonderful letter, which
00:13:30.680
I quote in my book, with permission of the writer, by this woman who worked as a first
00:13:39.140
And after doing this for about a week, she couldn't take it anymore.
00:13:44.700
While her husband happily and cheerfully continued his work, and it didn't seem to harm him at
00:13:55.940
And I think we make sense of this by saying that there's at least two processes that lead
00:14:04.880
And one of them, empathy, has some serious problems.
00:14:07.520
And if we could nurture compassion, we not only can make the world a better place, but
00:14:13.960
To be clear, you also differentiate two versions of empathy, because there is the cognitive
00:14:18.940
empathy of simply understanding another person's experience.
00:14:23.400
And then there's the emotional contagion version, which we're talking about, which is you are permeable
00:14:29.080
to their suffering in a way that makes you suffer also.
00:14:33.220
The cognitive empathy is kind of a different bag, and it's very interesting.
00:14:36.680
And we might turn to this later if we talk about Trump, but it's an understanding of what
00:14:43.720
And sometimes we call this mind reading or theory of mind or social intelligence.
00:14:50.440
If you, Sam, want to make the world a better place and help people, help your family, help
00:14:55.700
others, you can't do it unless you understand what people want, what affects people, what
00:15:03.460
Any good person, any good policymaker needs to have high cognitive empathy.
00:15:07.800
On the other hand, suppose you wanted to bully and humiliate people, to seduce them against
00:15:18.380
If you want to make me miserable, it really helped to know how I work and how my mind works.
00:15:23.960
So cognitive empathy is a form of intelligence, like any sort of intelligence can be used in
00:15:30.400
So to say that someone is highly empathic in that way is to simply say that they can take
00:15:35.600
another person's point of view, but that can be used for good or evil.
00:15:40.740
The worst people in the world have high cognitive empathy.
00:15:46.660
I wanted to step back to something you said about meditative practice and Buddhism, because
00:15:50.660
there were two things you said, and one is easy really to get behind, which is the pleasure
00:15:54.940
that comes through this sort of practice in doing good, in loving people, in caring about
00:16:00.920
But one thing I struggle with, and I don't know whether we have different views on this, is
00:16:06.160
over the blurring of distinctions that comes through Buddhism in this meditative practice.
00:16:16.360
Have you heard about the Buddhist vacuum cleaner?
00:16:20.540
And so one of the criticisms of Buddhist practice, and to some extent a criticism of some of my
00:16:26.800
positions, is that there's some partiality we do want to have.
00:16:31.620
Not only do I love my children more than, well, more than I love you, but I think
00:16:37.060
I'm right to love my children more than I love you.
00:16:43.200
One of the requirements of my podcast guests is that they love me as much as their own children.
00:16:57.280
I think I'm agnostic as to whether one or the other answers is normative here, or whether
00:17:03.200
there are equivalent norms, which are just mutually incompatible, but you could create
00:17:12.420
But I share your skepticism, or at least it's not intuitively obvious to me, that if you could
00:17:20.260
love everyone equally, that would be better than having some gradations of moral concern.
00:17:29.380
When we extend the circle of our moral concern in the way that Peter Singer talks about,
00:17:35.600
We want to overcome our selfishness, our egocentricity, our clannishness, our tribalism, our nationalism,
00:17:44.720
all of those things, all of those boundaries we erect where we care more about what's inside
00:17:53.120
Those all seem, at least they tend to be pathological, and they tend to be sources of conflict, and they
00:17:59.340
tend to explain the inequities in our world that are just, on their face, unfair, and in
00:18:11.100
But whether you want to just level all of those distinctions and love all homo sapiens
00:18:21.000
And I'm not actually skeptical that it is a state of mind that's achievable.
00:18:27.400
I've met enough long-term meditators, and I've had enough experience in meditation and with
00:18:34.280
psychedelics and just changing the dials on conscious states to believe that it's possible
00:18:42.540
to actually obviate all of those distinctions and to just feel that love is nothing more
00:18:50.980
than a state of virtually limitless aspiration for the happiness of other conscious creatures
00:18:59.160
and that it need not be any more preferential or directed than that.
00:19:06.520
When you're talking about a monk who has come out of a cave doing nothing but compassion meditation
00:19:11.600
for a decade, you're talking about somebody who, in most cases, has no kids, doesn't have
00:19:17.880
to function in the world the way we have to function.
00:19:21.160
Certainly, civilization doesn't depend on people like that forming institutions and
00:19:29.420
And so, you know, I don't know what to make of the fact that let's just grant that it's
00:19:33.320
possible to change your attitude in such a way that you really just feel equivalent love
00:19:39.140
for everybody, and there's no obvious cost to you for doing that.
00:19:44.080
I don't know what the cost would be to the species or to society if everyone was like
00:19:49.880
And intuitively, I feel like it makes sense for me to be more concerned and therefore much
00:19:55.560
more responsible for and to my kids than for yours.
00:20:00.820
But at a greater level of abstraction, when I talk about how I want society to be run, I
00:20:09.540
I just have to understand that at the level of laws and institutions, fairness is a value
00:20:16.440
that more often than not conserves everyone's interests better than successfully gaming a
00:20:26.040
I mean, I want to zoom in on the last thing you said because it was astonishing to me.
00:20:29.680
But for most of what you're saying, I'm nodding in agreement.
00:20:32.980
Certainly, the world would be much better if our moral circle was expanded.
00:20:38.600
And certainly, the world would be much better if we cared a little bit more for people outside
00:20:44.480
of our group and correspondingly, relatively less for people inside of our group.
00:20:49.080
It's not that we don't care for our own enough.
00:20:52.000
The problem is we don't care for others enough.
00:20:54.380
And I love your distinction as well, which is a way I kind of think about it now is, yeah,
00:20:59.640
I love my children more than I love your children.
00:21:02.920
But I understand stepping back that a just society should treat them the same.
00:21:10.060
So if I have a summer job opening, I understand my university regulations say I can't hire
00:21:18.680
And, you know, I actually think that's a good rule.
00:21:21.520
I'd like to hire them, get a job for them and everything.
00:21:24.000
But I could step back and say, yeah, we shouldn't be allowed to let our own personal preferences,
00:21:31.060
our own emotional family ties distort systems that should be just and fair.
00:21:36.640
The part of what you said, which I just got to zoom in on, is do you really think it's
00:21:41.500
possible, put aside somebody who has no prior attachments at all, some monk living in a
00:21:46.840
cave, have you met or do you think you will ever meet people who have had children and
00:21:52.540
raised them who would treat the death of their child no differently than the death of a strange
00:22:05.540
I can tell you these are extraordinarily happy people.
00:22:09.180
So what you get from them is not a perceived deficit of compassion or love or engagement
00:22:16.540
with the welfare of other people, but you get a kind of obliteration of preference.
00:22:24.920
The problem in their case is it's a surfeit of compassion and love and engagement so that
00:22:31.900
they don't honor the kinds of normal family ties or preferences that we consider normative
00:22:40.220
and that we would be personally scandalized to not honor ourselves.
00:22:44.700
The norms of preference, which seem good to us and we would feel that we have a duty to
00:22:51.460
enforce in our own lives and we would be wracked by guilt if we noticed a lapse in honoring
00:22:57.260
those duties, these are people who have just blown past all of that because they have used
00:23:02.540
their attention in such an unconventional and in some ways impersonal way, but it's an
00:23:08.360
impersonal way that becomes highly personal or at least highly intimate in their relations
00:23:14.320
So for instance, I studied with one teacher in India, a man named Poonjaji.
00:23:21.900
He was Hindu, but he was not teaching anything especially Hindu.
00:23:26.840
I mean, he was talking very much in the tradition of, if people are aware of these terms and I'll
00:23:32.060
get them from my book, Waking Up, the tradition of Advaita Vedanta, the non-dual teachings of
00:23:39.000
They're really just Indian and there's nothing about gods or goddesses or any of the garish
00:23:46.100
He was a really, I mean, there was a lot that he taught that I disagreed with and, or at least
00:23:52.180
there were some crucial bits that he taught that I disagreed with.
00:23:55.000
And again, you can find that in Waking Up if you're interested, but he was a really shockingly
00:24:02.220
charismatic and wise person to be in the presence of.
00:24:08.160
He was really somebody who could just bowl you over with his compassion and his, the force
00:24:18.360
If I were not as scrupulous as I am about attributing, you know, causality here, 90% of the people
00:24:26.840
who spent any significant time around this guy thought he had, you know, magic powers.
00:24:33.000
This is a highly unusual experience of being in a person's presence.
00:24:38.080
Part of what made him so powerful was that actually, ironically, he had extraordinarily high
00:24:45.000
empathy of the unproductive kind, but it was kind of anchored to nothing in his mind.
00:24:52.060
So, for instance, if someone would have a powerful experience in his presence and, you know,
00:24:57.680
start to cry, you know, tears would just pour out of his eyes.
00:25:01.940
You know, he would just immediately start crying with the person.
00:25:05.160
And when somebody would laugh, he would laugh, you know, twice as hard.
00:25:08.720
It was like he was an amplifier of the states of consciousness of the people around him in
00:25:17.020
And, again, there was, you know, a feedback mechanism here where, you know, people would
00:25:21.080
just have a bigger experience because of the ways in which he was mirroring their experience.
00:25:25.520
And there was no sense at all that this was an act.
00:25:28.220
I mean, he would have to have been the greatest actor on earth for this to be brought off.
00:25:32.960
But, yeah, he's, I think, I forget the details of the story, but the story about, you know,
00:25:37.700
how he behaved when his own son died would shock you with its apparent aloofness, right?
00:25:44.900
I mean, this is a person for whom a central part of his teaching was that death is not
00:25:49.420
a problem and he's not hanging on to his own life or the lives of those he loves with any
00:25:56.220
And he was advertising the benefits of this attitude all the time because he was the happiest
00:26:03.260
But I think when push came to shove and he had to react to the death of his son, he wouldn't
00:26:08.980
react the way you or I would or the way you or I would want to react given how we view
00:26:16.960
I mean, I, you have a lot of stories like that in Waking Up, of people like that.
00:26:23.960
I met Matthieu Ricard once and it was a profoundly moving experience for me.
00:26:30.700
I'm sort of, I tend to be cynical about people.
00:26:32.640
I tend to be really cynical about people who claim to have certain abilities and the
00:26:36.780
like, but I simply had a short meeting with the man, out for tea, and we just talked and there's
00:26:42.100
something about people who have attained a certain mental capacity or set of capacities
00:26:47.660
that you can tell by being with them that they have it, their bodies afford it.
00:26:52.460
They just, they just give it off from a mile away.
00:26:56.020
It's like, um, it's, it's, it's analogous to charisma, which some people have apparently.
00:27:02.660
Um, Bill Clinton is supposed to be able to walk into like a, a, a large room and people
00:27:10.700
And whatever it is that someone like Matthieu Ricard has is, is extraordinary in a different
00:27:15.600
way, which is he in some literal sense exudes peace and compassion.
00:27:20.540
Having said that, um, some of it freaks me out and some of it morally troubles me.
00:27:25.300
I mean, we talked about the bonds of family, but I can't imagine any such people having
00:27:31.260
Um, you, I would imagine you get a lot of email, Sam.
00:27:34.740
I imagine you get a lot of email asking you for favors.
00:27:37.420
So when I email you and say, Hey, you know, I have a book coming out.
00:27:42.060
You, you, um, because we're friends, you respond to me different than if I were a total
00:27:47.500
Suppose you didn't. Suppose you treated everything on its merits with no bonds, no, no connectedness.
00:28:00.180
If you knew more about the details of his life, you might find that it's not aligned
00:28:08.840
For instance, the example you just gave, he might be less preferential toward friends or
00:28:15.500
I don't often see him, but I've spent a fair amount of time with him.
00:28:21.740
He's just like the most decent guy you're going to meet all year.
00:28:25.980
He's just a, he's just a wonderful person, but he's a, I studied with his teacher, Khyentse
00:28:31.780
Rinpoche, who was a very famous lama and who many, you know, Tibetan lamas thought was,
00:28:37.540
you know, one of the greatest meditation masters of his generation.
00:28:40.440
He died, unfortunately, about 20 years ago, but maybe it's more than that.
00:28:44.300
And I now notice as I get older, whenever I estimate how much time has passed, I'm off
00:28:53.260
Self-deception, I think, has something to do with it.
00:28:55.180
So anyway, Khyentse Rinpoche was just this 800-pound gorilla of meditation.
00:29:00.840
He'd spent more than 20 years practicing in solitude.
00:29:04.660
And Matthieu was his closest attendant for years and years.
00:29:10.920
I think just to give you kind of a rank ordering of what's possible here, Matthieu certainly wouldn't
00:29:15.600
put himself anywhere near any of his teachers on the hierarchy of what's possible in terms
00:29:21.660
of, you know, transforming your moment-to-moment conscious experience and therefore the likelihood
00:29:29.300
Matthieu's great because, as you know, he's got this, he was a scientist before he became
00:29:36.240
And the work he's done in collaborating with neuroscientists who do neuroimaging work on
00:29:45.300
And he's, you know, he's a real meditator, so he can honestly talk about what he's doing
00:29:53.120
But again, even in his case, he's made a very strange life decision, certainly from your
00:30:01.000
He's decided to be a monk and to not have kids, to not have a career in science, to
00:30:06.640
not, it's in some ways an accident that he, that you even know about him because he could
00:30:12.320
just be, and for the longest time he was just sitting in a tiny little monk cell in
00:30:21.620
And when I met him, he was spending six months of each year in total solitude, which again,
00:30:26.380
boggles my mind, because if I spend a half hour by myself, I start to want to check my
00:30:31.640
And I accept your point, which is, I need to sort of work to become more open-minded about
00:30:41.380
what the world would be like if certain things which I hold dear were taken away.
00:30:45.420
There's a story I like of why economics got called the dismal science.
00:30:50.700
And it's because the term was given by Carlyle, and Carlyle was enraged at the economists who
00:30:57.360
were dismissing an institution that Carlyle took very seriously.
00:31:02.100
And the economists said, this is an immoral institution, and Carlyle says, you have no
00:31:05.840
sense of feeling, you have no sense of tradition.
00:31:09.640
And so, you know, he was blasting the economists for being so cold-blooded, they couldn't appreciate
00:31:16.580
And sometimes when I feel my own emotional pulls towards certain things, and I feel like,
00:31:21.900
I feel confident that whatever pulls I have along, say, racial lines are immoral, but
00:31:27.440
I'm less certain about family lines or friendship lines, I think I need to be reminded, we all
00:31:32.340
need to be reminded, well, we need to step back and look, what will future generations say?
00:31:36.860
What will we say when we're at our best selves?
00:31:39.640
It's going to take more than that for me to give up the idea that I should love my children
00:31:43.520
more than I love your children, but it is worth thinking about.
00:31:47.080
And it's interesting to consider moral emergencies and how people respond in them and how we would
00:31:55.040
So just imagine if, you know, you had a burning building and our children were in there and
00:32:02.000
I could run in to save them, say, I'm on site and I can run in and save whoever I can
00:32:09.300
save, but because I know my child's in there, my priority is to get my child and who could
00:32:17.380
So I run in there and I see your child who I can quickly save, but I need to look for
00:32:23.620
So I just run past your child and go looking for mine.
00:32:27.940
And at the, you know, the end of the day, I save no one, say, or I only save mine.
00:32:33.760
It really, really was a zero sum contest between yours and mine.
00:32:37.240
You know, if you could watch that play out, if you had a video of what I did in that house,
00:32:41.580
And you saw me run past your child and just go looking for mine, I think it's just hard
00:32:52.580
A certain degree of searching and a certain degree of disinterest with respect to the fate
00:32:58.880
of your child begins to look totally pathological.
00:33:04.680
But some bias seems only natural and we might view me strangely if I showed no bias at all.
00:33:12.740
Again, I don't know what the right answer is there.
00:33:15.280
We're living as though an almost total detachment from other people's fates, apart from the fates
00:33:25.600
And when push comes to shove, I think that is clearly revealed to not be healthy.
00:33:33.960
Imagine you weren't looking for your child, but your child's favorite teddy bear.
00:33:38.300
Well, then you're kind of a monster, you know, searching around for that while my child
00:33:43.800
I mean, to make matters worse, Peter Singer has famously, and I think very convincingly,
00:33:48.140
pointed out that the example you're giving is a sort of weird science fiction example and
00:33:52.740
you might reassure yourself, we might reassure ourselves, and say, well, that'll never happen.
00:33:56.000
But Singer points out we're stuck in this dilemma every day of our lives.
00:34:00.520
As we devote resources, you know, I, like a lot of parents, spend a lot of money on my
00:34:06.020
kids, including things that they don't, you know, things that make their lives better but
00:34:11.160
And things that are just fun, expensive toys and vacations and so on, while other children
00:34:16.300
And, and Singer points out that, um, I really am in that burning building.
00:34:21.920
I am in that burning building buying my son an Xbox while kids from Africa die in the
00:34:28.760
And, and it's, it's difficult to confront this.
00:34:31.300
And I think people get very upset when Peter Singer brings it up.
00:34:34.980
But it is a moral dilemma that we are continually living with and continually struggling with.
00:34:41.140
And I don't know what the right answer is, but I do have a sense that the way we're doing it
00:34:47.980
We are not devoting enough attention to those in need.
00:34:50.440
We're devoting too much attention to those we love.
00:34:55.440
I also had Will MacAskill on the podcast, who very much argues along the same lines.
00:35:02.460
I think one thing I did as a result of my conversation with Will was I realized that I
00:35:08.240
just, I needed to kind of automate this insight.
00:35:11.420
So Will is very involved in the, the effective altruism community.
00:35:15.860
And he arguably, I think, started the movement, and there are websites like givewell.org that rate
00:35:23.100
And they've quantified that to save an individual human life now costs $3,500.
00:35:29.240
I mean, that's, that's the amount of money you have to allocate where you can say as,
00:35:33.860
as a matter of likely math, you have saved one human life.
00:35:38.200
And the calculation there is, is with reference to the work of the Against Malaria Foundation.
00:35:43.260
They, they put up these insecticide-treated bed nets and malaria deaths have come down by
00:35:49.820
It's still close to a million people dying every year, but it was very recently 2 million people
00:35:54.000
dying a year from, not all mosquito-borne illness, just malaria,
00:35:59.240
So in response to my conversation with Will, I just decided, well, I'm still going to buy
00:36:04.800
I know, I know that I can't conform my life and my, you know, the fun I have with my kids
00:36:15.200
So that I, you know, strip all the fun out of life and just give everything to the Against
00:36:20.800
But I decided that the first $3,500 that comes into the podcast every month will just by definition
00:36:32.020
I would have to decide to stop it from happening.
00:36:37.200
I mean, so what Will does is there's actually a giving pledge where people decide to give
00:36:41.880
10% of their, I think it's at least 10% of their income to charity and to these most effective
00:36:48.200
charities each year. Any kind of change you want to see in the world that you want to
00:36:53.500
be effective, automating it and taking it out of the cognitive overhead of having to
00:36:59.940
be re-inspired to do it each day or each year or each period, that's an important thing
00:37:06.240
That's why I think the greatest changes in human well-being and in human morality will
00:37:11.860
come not from each of us individually refining our ethical code to the point where we are
00:37:20.980
So that every time Paul Slovic shows us a picture of a little girl, we have the exact
00:37:26.880
And when we see eight kids, we have, you know, we have eightfold more or whatever it would
00:37:31.360
But to change our laws and institutions and tax codes and everything else so that more good
00:37:37.900
is getting done without us having to be saints in the meantime.
00:37:42.040
I think that this comes up a lot in discussions of empathy.
00:37:45.020
So I, you know, I talk about the failings of empathy in our personal lives, particularly
00:37:49.580
say giving to charity or deciding how to treat other people.
00:37:52.680
And a perfectly good response I sometimes get is, well, okay, I'm a high empathy person.
00:37:58.760
And, you know, one answer concerns activities like meditative practice.
00:38:02.900
But, you know, you could be skeptical over how well that works for many people.
00:38:06.760
Um, I mean, I think your answer is best, which is in a good society and actually as good
00:38:12.020
individuals, we're smart enough to develop procedures, uh, mechanisms that take things
00:38:22.060
Uh, the political theorist Jon Elster points out, that's what a constitution is.
00:38:26.320
A constitution is a bunch of people saying, look, we are irrational people.
00:38:30.440
And sometime in the future, we're going to be tempted to make dumb, irrational choices.
00:38:37.780
And let's, let's set up something, um, to override our base instincts.
00:38:42.940
We can change this, this stopping mechanism, but let's make it difficult to change.
00:38:49.380
So no matter how much Americans might choose, they want to reelect a popular president for
00:38:56.440
If all the white Americans decide they want to re-instantiate the institution of slavery,
00:39:07.700
And, and charitable giving could work that way, um, in that you have, uh, automatic withdrawal
00:39:13.620
So you, in an enlightened moment, you say, this is the kind of person I want to be.
00:39:17.500
And you don't wait for your gut feelings all the time.
00:39:20.880
I think, um, overriding other, uh, disruptive sentiments works the same way.
00:39:26.820
Like, um, suppose I have to choose somebody to be a graduate student or, or something like
00:39:32.440
And I know full well that there are all sorts of biases having to do with physical attractiveness,
00:39:39.820
And suppose I believe, upon contemplation, that it shouldn't matter.
00:39:44.200
It shouldn't matter how good looking the person is.
00:39:45.920
It shouldn't matter whether they were from the same country as me.
00:39:48.860
Well, one thing I could try to do is say, okay, I'm going to really try very hard.
00:39:59.740
So what we, what we do when we're at our best is develop some systems.
00:40:04.760
Like, for instance, you, um, you don't look at the pictures.
00:40:12.600
Now it's harder to see how this is done when it comes to broader policy decisions, but
00:40:19.520
Paul Slovic actually, who we've referenced a few times, talks about this a lot.
00:40:23.180
So right now, for instance, government's decisions over where to send aid or where to go to war
00:40:32.480
And they're basically based on sad stories and photographs of children washed ashore and
00:40:41.080
And people like Slovic wonder, can we set up some fairly neutral triggering procedures
00:40:46.360
that say in America, when a situation gets this bad, according to some numbers and some
00:40:51.780
objective judgments, it's a national emergency.
00:40:55.940
If this many people die under such and so circumstances, we initiate some sort of investigative
00:41:01.520
It sounds cold and bureaucratic, but I think cold and bureaucratic is much better than hot
00:41:09.180
There was something you said when referencing the soccer study about in-group empathy and out-group
00:41:17.280
And this was a, this reminded me of a question we got on Twitter.
00:41:20.200
Someone was asking about the relationship between empathy and identity politics.
00:41:24.920
I guess, I guess based on the research you just cited, there's a pretty straightforward
00:41:33.180
We're, we're very empathic creatures, but it always works out that the empathy tends to,
00:41:39.720
to focus on those from within our group and not on the out-group.
00:41:43.280
I got into a good discussion once with Simon Baron-Cohen, the psychologist who's very pro
00:41:47.500
And he said, um, this was at the time of the, uh, war in
00:41:53.960
And, um, he said if only the Palestinians and Israelis had more empathy.
00:41:59.160
The Israelis would realize that, uh, the suffering of the Palestinians and vice versa, and there'd
00:42:04.940
And, and my feeling here is that that's exactly, it's exactly the opposite.
00:42:09.160
That, that conflict in particular suffered from an abundance of empathy.
00:42:14.100
The Israelis at the time felt huge empathy for the suffering of teenagers who were kidnapped
00:42:20.960
The Palestinians felt tremendous empathy for their countrymen who were imprisoned and tortured.
00:42:26.020
Um, there were abundant empathy and there's always abundant empathy at the core of any
00:42:31.580
And the reason why it drives conflict is I feel tremendous empathy for the American who is
00:42:38.240
And as a rule, it's very hard for me to feel empathy for the Syrian or for the Iraqi and
00:42:45.200
And, and, you know, we could, we could now pull it down a little bit in the aftermath of
00:42:51.760
I think, um, I think Clinton voters are exquisitely good at empathy towards other Clinton voters
00:42:58.680
Having empathy for your political enemies is, is difficult.
00:43:02.240
And I think actually, and for the most part, um, so hard that it's not worth attaining, we
00:43:08.980
I think we certainly want the other form of empathy.
00:43:11.880
I mean, we want to be able to understand why people decided what they decided.
00:43:16.460
And we don't want to be just imagining motives that don't exist or, or weighting them in ways
00:43:24.180
We inevitably will say something about politics.
00:43:28.320
By, by law, there could be no discussion of over 30 minutes that doesn't, uh, mention
00:43:32.920
I'm going to steer you toward Trump, but before we go there, as you may or may not know, I've
00:43:37.600
been fairly obsessed with artificial intelligence over the last, I don't know, 18 months or so.
00:43:43.760
And we solicited some questions from Twitter and many people asked about this.
00:43:53.460
And there was one question I saw here, which was given your research on empathy, how should
00:44:03.080
So I actually hadn't taken seriously the AI worries.
00:44:06.200
Um, and honestly, I'll be honest, I dismissed them as somewhat crackpot until I listened to
00:44:18.540
Um, and I, I found it fairly persuasive that there's an issue here.
00:44:22.440
We should be devoting a lot more thought to, um, the question of putting empathy into machines,
00:44:29.280
which is, is, um, is I think in some way morally fraught because, um, if I'm right that
00:44:36.680
empathy leads to capricious and arbitrary decisions, then if we put empathy into computers
00:44:42.180
or robots, we end up with capricious and arbitrary computers and robots.
00:44:45.660
I think when people think about putting empathy into machines, they often think about it from
00:44:49.620
a marketing point of view, such that, um, uh, you know, a household robot or even an interface
00:44:56.200
on a Mac computer that is somewhat empathic will, um, will be more pleasurable to interact
00:45:02.620
with more humanoid, more human-like, and we'll get more pleasure, uh, dealing with it.
00:45:08.880
I, I've actually heard a contrary view, uh, from my friend, David Pizarro, who points out
00:45:13.900
that when dealing with a lot of, uh, interactions, we actually don't want empathy.
00:45:20.260
We want a sort of, uh, cold-blooded interaction that we don't have to become emotionally invested
00:45:27.280
I think, I think for our super intelligent AI, we want professionalism more than
00:45:35.120
You don't want, if you're anxious and consulting your robot doctor, you don't want that anxiety
00:45:44.860
You want, you want as stately a physician as you ever met in, in the living world now embodied
00:45:52.680
So I'm, I'm very happy if I have a home blood pressure cuff, which just gives me the numbers
00:45:57.080
and doesn't say, oh man, I feel terrible for you.
00:46:00.840
Yeah, yeah, dude, dude, I'm, I'm holding back here.
00:46:05.180
It's, it's, you know, the machine starts to cry, little graphical tears trickle
00:46:10.440
I, I'm sure people involved in marketing these devices think that they're appealing.
00:46:16.040
And we're going to discover that for a lot of interfaces, we just want, uh, a sort of an,
00:46:26.400
And, um, and I think we find, um, as I find, for instance, with, with, uh, uh, interfaces
00:46:33.060
where you have to call the airport or something, when it reassures me that, that it's worried
00:46:37.380
about me and so on, I find it cloying and annoying and intrusive.
00:46:43.020
Um, I, I want to, I want to save my empathy for real people.
00:46:46.320
But I think the question goes to what will be normative on the side of the AI?
00:46:52.660
So do we want AI, I guess, let's leave consciousness aside for the moment.
00:46:59.120
But do we want an AI that actually has more than just factual knowledge of our preferences
00:47:08.420
insofar as it could emulate our emotional experience?
00:47:16.120
That, that we, we want it to have so as to better conserve our interests.
00:47:19.840
So here, here's what I would, here's my take on it.
00:47:25.280
I think we particularly want AI of compassion towards us.
00:47:28.680
Um, I'm not sure whether this came from you or somebody else, but somebody gave the following
00:47:35.340
The world is going to end when someone programs a powerful computer that interfaces with other
00:47:40.060
things, um, to, um, to get rid of spam on email.
00:47:44.880
And then the computer will promptly destroy the world as a suitable way to do this.
00:47:48.880
Um, we want machines to have a guard against doing that, where they say, well, human life
00:47:55.320
Human flourishing and animal flourishing is valuable.
00:47:58.100
So if I want, if I want AI that is involved in making significant decisions, I want it to
00:48:04.500
I don't, however, want it to have, uh, empathy.
00:48:08.360
I think empathy makes us, it makes us among other things, racist.
00:48:11.620
Uh, the last thing in the world we need is racist AI.
00:48:15.220
There's been some concern that we already have racist AI.
00:48:21.500
If I recall correctly, there, there are algorithms that decide on the paroling of prisoners and,
00:48:31.140
And there's some evidence, I could be making a, a bit of a hash of this, but there was some
00:48:36.420
evidence in, in one or both of these categories that the AI was taking racial characteristics
00:48:44.140
And then that wasn't, that hadn't been programmed in, that was just an emergent property of it
00:48:51.900
This data was, was relevant in the case of prisoners, the, the recidivism rate.
00:48:57.200
You know, if it's just a fact that black parolees recidivate more, reoffend more, I don't know
00:49:04.280
for a fact that it is, but let's just say that it is.
00:49:06.460
And an AI notices that, well then of course the AI, if you're going to be predicting whether
00:49:11.740
a person is likely to violate their parole, you are going to take race into account if
00:49:16.680
it's actually descriptively true of the data, that it's a variable.
00:49:19.740
And so I think there, there was at least one story I saw where you had people scandalized
00:49:26.700
When I was, was young and very nerdy, more nerdy than I am now, I like gobbled up all
00:49:32.440
science fiction and Isaac Asimov had a tremendous influence on me and he had all of his work
00:49:37.560
on robots and he had these three laws of robotics.
00:49:41.060
And, and, you know, if you think about it as a, you know, from a more sophisticated view,
00:49:45.180
the laws of robotics weren't particularly morally coherent.
00:49:49.100
Like one of them is you should never harm a human or, through inaction, allow a human to
00:49:54.920
But does that mean the robot's going to run around trying to save people's lives all the
00:49:59.260
Because we're, we're continually allowing people to come to harm.
00:50:06.380
Which is, I would wire up, I think, and in fact, I think in some way as, as robots now
00:50:11.440
become more powerful, you could imagine it becoming compulsory to wire up these machines with some
00:50:18.560
This comes up with driving cars, sorry, with the, with the automatic, right?
00:50:22.660
The, the, the computer driven cars where, you know, are they going to be utilitarian?
00:50:27.620
And there's a lot of good debates on that, but they have to be something and they have
00:50:32.740
to have some consistent moral principles that take into account human life and human flourishing.
00:50:38.200
And the last thing you want to stick in there is, is something that says, well, if someone
00:50:45.140
Always count a single life as more than a hundred lives.
00:50:48.140
There's no justification for putting the sort of stupidities of empathy that we're often
00:50:52.060
stuck with to putting them into the machines that we create.
00:50:55.040
That's one thing I love about this moment of having to think about super intelligent AIs:
00:51:00.140
it's amazingly clarifying of moral priorities.
00:51:05.740
And all these people who, until yesterday said, well, you know, who's to say what's true
00:51:12.140
Once you force them to weigh in on how should we program the algorithm for self-driving cars,
00:51:18.980
they immediately see that, okay, you have to solve these problems one way or another.
00:51:25.700
Do you want them to preferentially drive over old people as opposed to children?
00:51:31.220
And to say that there is no norm there to be followed is to just, you're going to be
00:51:39.760
If you make a car that is totally unbiased with respect to the lives it saves, well, then
00:51:48.000
you've made this kind of this Buddhist car, right?
00:51:50.320
You've made this, you've made the Matthieu Ricard car, say.
00:51:53.380
That may be the right answer, but you have taken a position just by default.
00:51:57.900
And the moment you begin to design away from that kind of pure equality, you are forced
00:52:07.640
And I think it's pretty clear that we have trolley problems that we have to solve.
00:52:12.160
And we have, at a minimum, we have to admit that killing one person is better than killing
00:52:18.020
And we have to design our cars to have that preference.
00:52:21.520
When you put morality in the hands of the engineers, you see that you can't take refuge in any
00:52:30.420
You actually have to answer these questions for yourself.
00:52:33.180
I envision this future where, you know, you walk into a car dealership and you order one
00:52:37.900
of these cars and you're sitting back and you're paying for it.
00:52:40.320
And then you're asked, what kind of setting do you want?
00:52:42.260
Do you want a racist, Buddhist, radical feminist, religious fundamentalist?
00:52:46.240
I don't know if you've heard this research, but when they ask people what the cars should
00:52:50.860
do on the question of, you know, how biased should it be to save the driver over the pedestrian,
00:52:57.780
So if it's a choice between avoiding a pedestrian and killing the driver or killing the pedestrian,
00:53:04.640
Most people say in the abstract, it should just be unbiased.
00:53:10.640
But when you ask people, would you buy a car that was indifferent between the driver's
00:53:19.020
They want a car that's going to protect their lives.
00:53:21.180
So it's hard to adhere to the thing you think is the right answer, it seems.
00:53:26.860
And there, I actually don't know how you solve that problem.
00:53:30.260
I think probably the best solution is to not advertise how you've solved it.
00:53:37.400
I think if you make it totally transparent, it will be a barrier to the adoption of technology
00:53:43.680
that will be, on balance, immensely life-saving for everyone, you know, drivers and pedestrians
00:53:50.600
We now have tens of thousands of people every year reliably being killed by cars.
00:53:55.640
We could bring that down by a factor of 10 or 100, and then the deaths that would remain
00:54:01.200
would still be these tragedies that we would have to think long and hard about whether the
00:54:07.120
But still, we have to adopt this technology as quickly as is feasible.
00:54:12.320
So I think transparency here could be a bad idea.
00:54:17.220
I mean, I find that I know people have insisted they would never go into a self-driving car.
00:54:22.260
And I find this bizarre because the alternative is far more dangerous.
00:54:27.920
And I think there's also this fear of new technology where there'll be a reluctance to use them.
00:54:33.500
Apparently, there was a reluctance to use elevators that didn't have an elevator operator for a
00:54:38.820
So they had to have some schnook stand there just so people would feel calm enough to
00:54:44.740
But I agree with the general point, which is a more general one, which is there's no opting
00:54:52.720
Failing to make a moral choice over, say, giving to charity or what your car should do is itself
00:54:58.800
a moral choice and driven by a moral philosophy.
00:55:03.260
I also just can't resist adding, and I think this is from the Very Bad Wizards group, but
00:55:08.180
you can imagine a car that had a certain morality and then you got into it and it automatically
00:55:12.300
drove you to like Oxfam and refused to let you move until you gave them a lot of money.
00:55:19.920
You want a car sort of just moral enough to do your bidding, but not much more.
00:55:25.460
Have you been watching any of these shows or films that deal with AI, like Ex Machina
00:55:31.640
I've been watching all of these shows that deal with AI.
00:55:35.460
And they all deal with, Ex Machina and Westworld all deal with the struggle we have when something
00:55:45.040
looks human enough, acts human enough, it is irresistible to treat it as a person.
00:55:54.380
And there philosophers and psychologists and lay people might split.
00:55:58.620
They might say, look, if it looks like a person and talks like a person, then it has a consciousness
00:56:07.720
And it's interesting, different movies and different TV shows, and I actually think movies
00:56:11.580
and TV shows are often instruments of some very good philosophy.
00:56:17.180
So Ex Machina, I hate to spoil it, but so viewers should turn down the sound for the next
00:56:25.080
But there's a robot who is entirely convincing; you feel tremendous empathy and caring for her.
00:56:29.980
The main character trusts her, and then she cold-bloodedly betrays him and locks him in a room.
00:56:39.260
And it becomes entirely clear that all of this was simply a mechanism that she used to win his trust and escape.
00:56:47.300
While Westworld is more the opposite, where the hosts, Dolores and the others,
00:56:55.220
are really people, or at least we as viewers are supposed to see them as people.
00:57:00.380
And the guests who forget about this, who brutalize them, they're the monsters.
00:57:08.480
I think all of these films and shows are worth watching.
00:57:12.100
I mean, they're all a little uneven from my point of view.
00:57:14.780
There are moments where you think, this isn't the greatest film or the greatest television show.
00:57:18.660
But they all have their moments where they, as you said, they're really doing some very
00:57:24.580
powerful philosophy by forcing you to have this vicarious experience of being in the presence
00:57:31.300
of something that is passing the Turing test in a totally compelling way, and not the way any current technology does.
00:57:40.400
I mean, we're talking about robots that are no longer in the uncanny valley and looking weird.
00:57:49.180
They're as human as human, and they are, in certain of these cases, much smarter than we are.
00:57:55.240
And this reveals a few things to me that are probably not surprising.
00:58:00.880
But again, to experience it vicariously, just hour by hour watching these things, is
00:58:05.640
different than just knowing it in the abstract.
00:58:08.500
The best films and TV shows and books often take a philosophical
00:58:15.260
thought experiment, and they make it vivid in such a way that you can really appreciate it.
00:58:20.760
And I think that sentient, realistic humanoid AI is a perfect example of these shows confronting us with exactly that kind of thought experiment.
00:58:33.760
Once something looks like a human and talks like a human and demonstrates intelligence
00:58:41.360
that is at least at human level... And I think, for reasons I gave somewhere on this podcast
00:58:48.040
and elsewhere when I've talked about AI, that human-level AI is a mirage.
00:58:54.260
I think the moment we have anything like human-level AI, we will have superhuman AI.
00:58:59.300
We're not going to make our AI that passes the Turing test less good at math than the narrow AI we already have.
00:59:07.940
It'll be superhuman at everything that narrow AI does now.
00:59:12.580
So once this all gets knit together in a humanoid form that passes the Turing test and shows
00:59:18.780
general intelligence and looks as good as we look, which is to say it looks as much like
00:59:25.300
a locus of consciousness as we do, then I think a few things will happen very quickly.
00:59:30.800
One is that we will lose sight of whether or not it's philosophically or scientifically
00:59:36.860
interesting to wonder whether this thing is conscious.
00:59:39.740
I think some people like me, you know, who are convinced that the hard problem of consciousness is a real problem, will want to remain agnostic here.
00:59:47.400
But every intuition we have of something being conscious, every intuition we have that other
00:59:52.740
people are conscious, will be driven hard in the presence of these artifacts.
00:59:59.080
And it will be true to say that we won't know whether they're conscious unless we understand
01:00:03.940
how consciousness emerges from the physical world.
01:00:07.200
But we will follow Dan Dennett in feeling that it's no longer an interesting question because
01:00:13.560
we find we actually can't stay interested in it in the presence of machines that are functioning
01:00:18.720
at least as well as, if not better than, we are, and will almost certainly be designed to talk
01:00:25.860
about their experience in ways that suggest that they're having an experience.
01:00:30.200
And so that's one part: we will grant them consciousness by default, even
01:00:35.740
though we may have no deep reason to believe that they're conscious.
01:00:38.500
And the other thing that is brought up by Westworld to a unique degree, and I guess by Humans also, is
01:00:46.360
that many of the ways in which people imagine using robots of this sort, we would use them
01:00:51.800
in ways that, at least we imagine, we wouldn't use other human beings, on the assumption
01:00:56.960
that they're just computers that really can't suffer.
01:00:59.660
But I think this is the other side of this coin.
01:01:02.320
Once we helplessly attribute states of consciousness to these machines, it will be damaging to our own moral lives.
01:01:12.800
We're going to be in the presence of digital slaves, and just how well do you need to treat your slaves?
01:01:18.420
And what does it mean to have a superhumanly intelligent slave?
01:01:24.380
How do you maintain a master-servant relationship with something that's smarter than you are?
01:01:30.620
But part of what Westworld brings up is that you are damaging your own conscience by letting
01:01:37.480
yourself act out all of your baser impulses on robots, on the assumption that they can't
01:01:43.100
suffer, because the acting out is part of the problem.
01:01:46.820
It actually diminishes your own moral worth, whether or not these robots are conscious.
01:01:54.680
One is that when it starts to look like a person and talk like a person, it'll be irresistible to see it as one.
01:02:00.420
You know, you could walk around and you could talk to me and doubt that I'm conscious, but you can't really sustain that doubt.
01:02:08.000
It's irresistible to treat other people as having feelings, emotions, consciousness, and
01:02:15.600
and it'll be irresistible to treat these machines that way as well.
01:02:20.240
And Westworld is a particularly dramatic example of this, where characters are meant to
01:02:25.700
be raped and assaulted and shot, and it's supposed to be, you know, fun and games.
01:02:31.900
But the reality of it is these two things are in tension.
01:02:35.340
Anybody who were to assault the character Dolores, the young woman who's a robot, would be seen
01:02:44.900
as morally indistinguishable from someone who would assault any person.
01:02:49.260
And so we are at risk, for the first time in human civilization, of, in some sense, building
01:02:56.740
machines that are morally repugnant to use, in the sense that they're built to be our slaves.
01:03:04.080
It would be like genetically engineering a race of people, but wiring up their brains so
01:03:09.680
that they're utterly subservient and enjoy performing at our will.
01:03:15.220
And I think we're very quickly going to reach a point where we'll see the problem with this.
01:03:22.880
And then what I would imagine, and this goes back to building machines without
01:03:26.700
empathy or perhaps without compassion, is that there may be a business in building machines to do this kind of work.
01:03:34.720
I'd rather have my floor vacuumed by a Roomba than by somebody who has an IQ of 140 but is enslaved to the task.
01:03:42.440
I think the humanoid component here is the main variable.
01:03:47.960
If it looks like a Roomba, you know, it actually doesn't matter how smart it is.
01:03:52.020
You won't feel that you're enslaving a conscious creature.
01:03:59.180
And insofar as you humanize the interface, you drive the intuition that now you're in the presence of another conscious being.
01:04:05.880
But if you make it look like a Roomba and sound like a Roomba, it doesn't really matter
01:04:11.240
what its capacities are as long as it still seems mechanical.
01:04:16.600
I mean, the interesting wrinkle there, of course, is that ethically speaking, what really
01:04:21.460
should matter is what's true on the side of the Roomba, right?
01:04:25.000
So if the Roomba can suffer, if you've built a mechanical slave that you can't possibly empathize
01:04:32.020
with because it doesn't have any of the user interface components that would allow you to
01:04:37.100
do it, but it's actually having an experience of the world that is vastly deeper and richer than our own...
01:04:47.800
Well, then you have just committed what there's now a term of jargon for in the AI community, I think probably
01:04:52.980
due to Nick Bostrom's book, but maybe he got it from somewhere else: the term is mind crime.
01:04:58.200
You're creating minds that can suffer, whether in simulation or in individual, you know, robots.
01:05:08.840
I mean, you would be on par with Yahweh, you know, creating a hell and populating it.
01:05:14.560
If there's more evil to be found in the universe than that, I don't know where to look for it.
01:05:19.840
But that's something we're in danger of doing insofar as we're rolling the dice with some
01:05:25.580
form of information processing being the basis of consciousness.
01:05:29.360
If consciousness is just some version of information processing, well, then if we begin to do that
01:05:35.240
well enough, it won't matter whether we can tell from the outside.
01:05:38.620
We may just create it inside something we can't feel compassion for.
01:05:44.980
One point is your moral one, which is, whether or not we know it, we may be doing terrible things.
01:05:52.180
We may be constructing conscious creatures and then tormenting them.
01:05:55.700
Or alternatively, we may be creating machines that just do our bidding and have no inner lives at all.
01:06:02.860
Well, it's no worse to assault the robot in Westworld than it is to, you know, bang on a toaster.
01:06:13.620
But it still could diminish you as a person to treat her like a toaster.
01:06:19.380
And, I mean, raping Dolores on some level turns you into a rapist, whether
01:06:24.360
she's more like a woman or more like a toaster.
01:06:27.880
So this treatment of robots is akin to an old view about animals; I forget the philosopher.
01:06:36.040
But the claim was that animals have no moral status at all.
01:06:40.500
However, you shouldn't torment animals, because it will make you a bad person with regard to other people.
01:06:48.340
I mean, one wonders, though. After all, we do all sorts of killing and harming in video games.
01:07:03.380
If there is an effect of increasing our violence towards real humans, it hasn't
01:07:08.700
shown up in any of the homicide statistics, and the studies are a mess.
01:07:12.860
But I would agree with you that there's a world of difference between sitting at my Xbox
01:07:17.380
and shooting, you know, aliens, as opposed to the real physical feeling, say, of strangling something that looks and feels like a person.
01:07:26.240
And that's the second point, which is even if they aren't conscious, even if as a matter
01:07:31.820
of fact, from a God's eye view, they're just things, it will seem to us as if they're conscious.
01:07:40.000
And then the act of tormenting these seemingly conscious beings will either be repugnant to us, or, if
01:07:46.600
it isn't, it will lead us to be worse moral beings.
01:07:51.200
So those are the dilemmas we're going to run into probably within our lifetimes.
01:07:55.400
Yeah, actually, there's somebody coming on the podcast in probably a month who can answer some of these questions.
01:08:01.960
Realistically, machine intelligence that passes the Turing test, or a humanoid robot interface, you know,
01:08:14.620
I don't know which will be built first, but it is interesting to consider the association...
01:08:22.160
If you'd like to continue listening to this conversation, you'll need to subscribe at
01:08:29.660
Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along
01:08:34.160
with other subscriber-only content, including bonus episodes and AMAs and the conversations
01:08:41.800
The Making Sense podcast is ad-free and relies entirely on listener support.