#52 — Finding Our Way in the Cosmos
Episode Stats
Words per Minute
154.2
Summary
In this episode, I speak with the physicist David Deutsch about my book, The Moral Landscape, and its claim that questions of morality have objectively better and worse answers. We discuss the nature of knowledge and its growth through conjecture and criticism, the substrate independence of information, objectivity and subjectivity, consent and coercion, and the error-correcting institutions that allow knowledge to accumulate. This conversation came about in an unusual way: David wanted to discuss his disagreements with the book privately, and I urged him to let me record the exchange. If you haven't listened to my first conversation with David, episode 22, "Surviving the Cosmos," you should do so before listening to this one. If you're not familiar with the way David thinks, many of his statements will blow by you without your realizing that something fairly revolutionary has just been said. And if you want to get more deeply into his ideas about knowledge and creativity, you should read his book, "The Beginning of Infinity." We don't run ads on the podcast; it's made possible entirely through the support of our listeners. So if you enjoy what we're doing here, please consider becoming a subscriber at samharris.org, where you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content, including bonus episodes and AMAs.
Transcript
00:00:00.000
Welcome to the Making Sense podcast. This is Sam Harris. Just a note to say that if
00:00:12.120
you're hearing this, you are not currently on our subscriber feed and will only be hearing
00:00:16.260
the first part of this conversation. In order to access full episodes of the Making Sense
00:00:20.760
podcast, you'll need to subscribe at samharris.org. There you'll find our private RSS feed to
00:00:26.360
add to your favorite podcatcher, along with other subscriber-only content. We don't run ads on the
00:00:31.520
podcast, and therefore it's made possible entirely through the support of our subscribers. So if you
00:00:36.300
enjoy what we're doing here, please consider becoming one. Today I'll be speaking with the
00:00:48.380
physicist David Deutsch once again about the foundations of morality. And this podcast came
00:00:55.840
about in a slightly unusual way. Since we did our first podcast, David read my book, The Moral
00:01:03.220
Landscape, and he wanted to talk to me about it. And he wanted to do this privately, I think because
00:01:09.680
there were some fundamental things he disagreed with and he didn't want to break the news to me
00:01:13.280
on my own podcast. But I urged him to let me record the conversation so that we could release it if we
00:01:19.300
wanted to. Because if he was going to dismantle my cherished thesis, I actually wanted you all to hear
00:01:25.600
that. And I also wanted you to hear anything else he had to say, because he's just so interesting.
00:01:31.220
The problem, however, is that I ran into some equipment issues at the time and could only record
00:01:36.780
the raw Skype call. So the audio leaves a lot to be desired. And David's audio is actually better than
00:01:43.980
mine. So it actually sounds like I'm on his podcast. And because we weren't totally clear that we were
00:01:49.640
doing a podcast, there were parts of the conversation that needed to be cut out. And these cuts leave the
00:01:55.640
resulting exchange slightly free-associative. We put in a few music cues to signal those cuts.
00:02:03.720
In any case, David is such an interesting person. And many of you, I know, are interested in the
00:02:10.000
thesis I put forward in The Moral Landscape. So I decided the best thing to do is release the
00:02:14.580
recording, warts and all. I certainly hope to have David back on the podcast again, but I doubt
00:02:20.120
we'll cover this territory again, or cover it in the same way. So that is why I'm bringing you this
00:02:26.740
conversation now. One major caveat, however, is that I don't recommend you listen to this podcast
00:02:34.480
without first listening to my first conversation with David, episode 22, entitled Surviving the
00:02:41.640
Cosmos. Because we really just hit the ground running here. And if you're not familiar with David
00:02:47.760
or his way of thinking about knowledge and creativity, you really might get lost, or at least you won't
00:02:54.440
appreciate how interesting some of his seemingly prosaic comments are. David Deutsch is a physicist at
00:03:01.020
Oxford. He's best known as the founding father of quantum computation, and for his work on the
00:03:06.940
multiverse interpretation of quantum mechanics. His main area of focus is now something he has called
00:03:12.180
constructor theory, where he's developing a new way to connect information and knowledge to the language
00:03:19.820
of physics. And as with our last podcast, the irony is we don't discuss any of these things.
00:03:25.420
Though his views about knowledge and the implications of its being independent of any given physical
00:03:32.500
embodiment, the fact that you can have the same information in a molecule of DNA, or on a computer disk, or
00:03:39.060
chiseled into a piece of granite, this problem of understanding the substrate independence of information
00:03:46.260
and knowledge in the context of a physical world, that is occasionally working in the background.
00:03:53.600
And it's one of the things that makes David's take on more ordinary questions so interesting.
00:03:59.880
For instance, his view about something as pedestrian as why it's wrong to coerce people to do things
00:04:06.320
connects directly to his view about what it means for knowledge to accumulate in the physical universe,
00:04:12.380
and the error-correcting mechanisms that allow it to accumulate. And if you're not familiar with the
00:04:18.300
way David thinks, many of his statements will probably just blow by you without your realizing
00:04:22.760
that something fairly revolutionary has just been said. So again, please listen to that first podcast
00:04:29.420
if you haven't, and then maybe listen to it again. And you should read his book, The Beginning of
00:04:35.060
Infinity, if you want to get more deeply into his ideas. And now I bring you David Deutsch.
00:04:41.860
Knowledge is basically critical. So this is actually the connection with what I want to say about your
00:04:55.920
book, that the foundational idea of knowledge, that traditionally, the idea of knowledge has been
00:05:04.680
that we build it up. We build it up, you know, either from nothing like Descartes, or from the senses,
00:05:10.780
or from God, or what have you, or from our genes. And thinking consists of building brick upon brick,
00:05:21.320
and from our senses, of course. But Popper's view of science, which I want to extend to all thinking and
00:05:32.900
all ideas, is that our knowledge isn't like that. It consists of a great slew of not very consistent
00:05:42.600
ideas. And thinking consists of wandering about in this slew, trying to make consistent the ideas that
00:05:53.560
seem to be the worst offenders of being inconsistent with each other by modifying them. And we modify them
00:06:01.460
just by conjecture. We guess that something might cure the various inconsistencies we see. And if it does,
00:06:11.560
then we move to that. And to get to your book, I'm interested to see what you think of this take on your book.
00:06:20.560
Um, we're so coming from the same place in some respects, and so coming from opposite incompatible
00:06:28.920
places in other respects, that it's hard to even express to each other what we mean exactly. And we've
00:06:36.220
just just, I think, the reason, correct me if I'm wrong, or if I'm, if I'm, if I'm seeing this entirely the
00:06:42.840
wrong way, I think the reason you developed a theory of morality, and, and took the trouble to write this
00:06:51.320
book about it, is, is not an intellectual reason, or at least not primarily intellectual, that
00:06:58.200
it's not that you wanted to tweak the best existing theories, and improve them or to contradict some,
00:07:06.020
some prevalent erroneous theories, because there are a lot of true and false theories out there. And,
00:07:12.900
and usually, we don't write about them, you know, life is too short. So I think that the, the reason you
00:07:18.820
wrote this particular book, and developed this particular theory, is, as I said, not intellectual,
00:07:25.540
it's for a particular purpose in the, um, in the world, namely, um, to, to defend civilization, you
00:07:36.260
might say, in a grandiose term. Yeah. To defend it against, um, it, it, it's not really too much
00:07:45.500
hyperbole to say, it's an existential danger, from, or two existential dangers. One is moral relativism,
00:07:55.540
and the other is religious dogmatism. Yes, that's, that's very fair, and, and imputations of
00:08:02.500
grandiosity are, are, are also fair, because I really, I feel like what I was doing in that book
00:08:08.700
is attempting to draw a line in the sand to defend the claim that the most important questions in human
00:08:17.580
life, and that the questions that are, are by definition, the most important questions, and the
00:08:21.920
questions that, where the greatest swings in, in value are to be found, that, that answers to those
00:08:30.640
questions exist, whether or not we can ever get them in hand, and certainly better and worse answers
00:08:35.000
exist, and that it's possible to be right and wrong, or more right and more wrong about those questions,
00:08:41.900
and so, yes, it's, it's, it's very much a, I wanted to, to carve out the intellectual space
00:08:48.420
where we could unabashedly defend the intuition that moral truths exist, and that it's, that morality is
00:09:00.280
not completely different, morality and values altogether, you know, claims about right and wrong
00:09:06.300
and good and evil are not on some completely different footing from the rest of the truth
00:09:11.520
claims, and claims of fact that we, that we want to make about the universe. Okay, well, um, so I agree
00:09:19.120
that there's an existential danger, so I wasn't using the word grandiose pejoratively, I, I think there is
00:09:24.800
that danger, and those, those, whether they're the biggest dangers, I'm not entirely sure, but they are
00:09:30.740
existential dangers, which is bad enough, and, and I agree with, with what you just said about morality,
00:09:37.500
uh, there is true and false in morality, or right and wrong, they are objective, they can be discovered
00:09:43.600
by the usual methods of reason, which, um, are essentially the same as those of science, although there are
00:09:51.580
important differences, as I, as I said when we last spoke. Okay, so this was your purpose, you had an intellectual
00:09:59.780
purpose that was morally driven in developing this moral theory, and therefore you had this moral
00:10:08.180
purpose before you had the details of the moral theory, so you, you wanted in advance your theory to
00:10:17.600
have certain properties, um, as, as you just said, to create an intellectual space in which one could
00:10:24.540
assert and defend the, the, the proposition that there's objective right and wrong, and, and so
00:10:30.860
these properties that you wanted the theory to have in advance weren't just expressions of your
00:10:37.020
personality or something, they were the fact that you thought that the moral values that made you want
00:10:44.080
to write the book are true, objectively true. Well, um, forgive me, I'm starting to, I'm smiling now,
00:10:52.600
if you could see me, you'd see how much I'm smiling because I'm just amused at how tenderly you're
00:10:58.000
leading me by the hand down the slippery slope to the dissolution of my theory. I think
00:11:03.000
theory is too big a word for what I thought I was putting forward. I think I'm, my theory, such as it
00:11:10.100
is, contains explicitly the, the assumption that there's, there are many things I can be wrong about
00:11:17.080
right now with the morality that I have in hand, right? So like, I'm not, my theory isn't based on
00:11:23.160
my current moral intuitions. It's based on some of them. It's based on the intuition of, of what I,
00:11:30.760
what I call in various places, moral realism, which is just the claim that it's possible to be wrong.
00:11:36.600
It's possible not to know what you're missing. It's possible to be cognitively closed to,
00:11:41.800
to true facts about wellbeing in this universe, about how good life could be if only you could
00:11:48.900
live it or could discover it. If only you had the right sort of mind that would give you access to
00:11:53.620
these states of consciousness. So it's, so that's, it's not so much that I think, well, my intuition
00:11:59.400
that gay marriage should be legal is so foundational that I know there's no state of the universe that
00:12:07.860
could disconfirm it. That's not, that's not where I'm standing. It's just, you know, it's the intuition
00:12:12.440
about realism and about, about the horizons. I wasn't making that sort of allegation. In fact, I think
00:12:18.620
I agree with everything you've just said about morality. You see, the thing is the ideas, the theory,
00:12:26.600
if you want to call it that, or don't want to call it a theory, whatever it is that you express in the book
00:12:30.340
contains that, but it also contains something else. It contains the something else that I disagree with.
00:12:36.020
There must be something else because I've, I've agreed with everything you've just said.
00:12:42.040
The thing, I suppose the basic thing I disagree with, uh, and this disagreement is probably deeper
00:12:50.440
than it sounds. Um, uh, that you, you, one of the properties you wanted to create this space
00:12:57.940
is that the, that this theory of morality or whatever you call it should be based on a secure
00:13:05.140
foundation, um, namely science. Well, and in particular, especially neuroscience. Well, actually,
00:13:13.480
well, that, that may be, I mean, the, the fault is certainly mine, from the subtitle onward
00:13:20.500
and, and the subtitle, you know, the way subtitles of books get, get fashioned, you, as you probably
00:13:26.620
know, that's sometimes outside the author's control as it was in this case. But I wouldn't put it that
00:13:33.060
way. I would say that it doesn't, it's not that morality has to be founded, uh, on the secure
00:13:38.340
foundations of science. It's that the truth claims we want to make about morality are just as well
00:13:46.920
founded, however well-founded that turns out to be, as the truth claims we make in science. And that,
00:13:53.200
that really, I'm talking about this larger cognitive space in which we make truth claims. And some of it
00:14:00.620
for bureaucratic reasons or methodological reasons, we call these, these scientific claims. Some we
00:14:06.160
call historical, some we call merely factual, some sciences are not, are still struggling to be as
00:14:11.960
scientific as other sciences, but we still call them sciences. But there is just this claim, the claims
00:14:18.120
about subjectivity and in particular about wellbeing and what, what influences it. And those claims I think
00:14:27.060
are true, whether or not we can, or true or false, whether or not we can ever get the data in hand at
00:14:34.340
any moment in history. And I just want to say, I mean, the example I, I may have used this last time
00:14:39.980
with you, but the example I often use is there is a fact of the matter about what John F. Kennedy was
00:14:46.720
thinking the moment before he got shot. And we won't know what he was thinking. We don't, we don't
00:14:52.780
actually know what it was like to be him. In fact, we know there's no way we could get access to the
00:14:58.260
data at this point. And yet we, there's, there's an infinite number of things we could say about
00:15:03.660
that, that we would know were wrong. I mean, I know he wasn't thinking about string theory. I know
00:15:09.500
he wasn't, you know, trying to, I know he wasn't, you know, reiterating the, the largest prime number
00:15:17.120
that we discovered a year after he died again and again in his mind. You can, you can go on like
00:15:21.880
that till the end of time, knowing what, what his state of consciousness excluded.
00:15:26.580
And that's, that's a fact, that's as factual a claim as we ever make in science. And so I,
00:15:31.520
what I was trying to argue is that, that morality, you know, rightly considered is a space of truth
00:15:38.280
claims that is on all fours with all the other kinds of truth claims we make, differences of methodology
00:15:43.720
aside. Yeah. Well, there are two ways that, that something can be objective. Um, it, and I think
00:15:51.780
you are in favor of one of them and I'm in favor of the other. That is, um, things can be objective
00:15:58.120
in the sense that, um, the truths about them just are truths about the other thing. Like for example,
00:16:05.260
chemistry, the truths of chemistry just are truths about physics. Um, and that maybe wasn't obvious
00:16:13.080
when chemistry started, but it is obvious now that some of the truths are emergent truths, but still
00:16:19.320
in principle, every, everything, every, every law of chemistry, everything you can say about chemical
00:16:25.120
reactions and so on, uh, they are all statements about physics, and chemistry then is, is objective
00:16:31.520
because physics is objective. Then there's a different way of being objective. The way in which,
00:16:38.900
um, the integers exist objectively. They exist objectively not because, uh, and again,
00:16:47.880
in the history of this, um, there were different theories about the integers that, that took different
00:16:53.100
positions about whether they're real and if they're real in what sense they're real. I think that they
00:17:01.040
are real in a separate sense from physics, that the truths about them are independent of the truths of
00:17:07.540
physics, not, not, not that integers are objective because they are some aspects of physical objects,
00:17:14.880
but, um, they, uh, they're objective because integers exist in some sense that is not the same as existing
00:17:22.620
physically. And, uh, although they, they, you know, they have, they have an influence, uh, in that truths about
00:17:30.400
them are reflected in truths about physical objects, but they're not identical with them. And
00:17:38.100
nothing we could discover about the laws of physics could possibly change the truth of, um,
00:17:46.500
theorems about prime numbers. And that, that is the kind of truth. Uh, I mean, sorry, that's the kind of
00:17:55.300
independence that I think truths of morality have, um, the, um, you know, you, uh, actually, David,
00:18:05.020
can I interrupt you there and just, just explore this a little bit because, so I think I talk about
00:18:09.440
this in the book at some point. I follow the philosopher John Searle here. I don't follow him
00:18:14.300
in that many things, but he made a distinction between the ontological and the epistemological
00:18:20.880
sense in which we can use this word objective. And I think that that's a useful one that at least I've
00:18:27.880
been pressing into service a fair amount. So if something's ontologically objective, it exists
00:18:34.960
quote, you know, in the real world, whether or not anyone knows about it, it's independent of human
00:18:39.360
minds. It is the kinds of facts you just described with, you know, chemistry and physics. And we can
00:18:45.880
imagine a universe without any conscious creatures and those facts would still be the case, even though
00:18:53.240
there's no one around to know them. And so that's ontological objectivity. And then there's epistemological
00:18:59.640
objectivity, which is to say that there's the spirit in which we make various claims about facts of all
00:19:07.180
kinds, which is to say that, so to be objective in the epistemological sense, you're not being misled by
00:19:13.480
your own confirmation bias or wishful thinking, or you're making honest claims about data and the
00:19:22.100
consequences of logical arguments and all the rest. And what most people worry about with respect to
00:19:29.800
objectivity versus subjectivity, I guess I should talk about the subjective side of those two things.
00:19:35.060
So something can be ontologically subjective, which is to say it doesn't exist independent of human
00:19:41.860
minds or conscious minds. It is a fact that is only a fact given the existence of minds. So when I'm
00:19:49.300
talking about what JFK experienced the moment he got shot or prior to that moment, I'm making a claim
00:19:56.760
about his subjectivity, but I can make that claim in the sense of it being epistemologically objective,
00:20:04.380
which is to say it's not subjective epistemologically. I'm not being merely misled by my bias and
00:20:11.140
my dogmatic commitments. I can objectively say, that is epistemologically, about JFK's
00:20:20.320
subjectivity, that it was not characterized by him meditating on the truth of string theory
00:20:28.660
at that moment. And so the ontological difference between objective and
00:20:36.120
subjective doesn't really interest me. It's useful for certain conversations and I think not useful
00:20:41.920
for others. And I think in the case of morality, what we're talking about is how experience arises in
00:20:51.400
this universe and what its character can be and the extremes of happiness and suffering that conscious
00:21:00.340
minds are susceptible to and what are the material and social and every other kind of requirements to
00:21:10.740
influence those experiences. And so part of that conversation takes us into the classically objective
00:21:18.620
world of, in our case, talking about neurotransmitters and neurons and economic systems and, quote,
00:21:27.380
objective reality at every scale that in any given instance may not actually require a human mind to be
00:21:36.160
talked about. But the cash value of all that, if you're talking about morality from my point of view,
00:21:41.020
is conscious states of conscious creatures and whether they're being made more or less happy
00:21:49.060
in as capacious a definition of happiness or well-being as possible. And as you know, that's a
00:21:54.900
kind of a suitcase word I use to incorporate the range of positive experience,
00:22:01.980
the horizon beyond which we can't currently imagine, and the opposite being the worst possible misery for
00:22:07.800
everyone. So the status of integers, whether they occupy some kind of platonic zone of existence that
00:22:17.340
is not, in fact, linked to material reality in any way, but we still have to talk about as being real
00:22:23.880
whether or not anyone has discovered it. I actually don't, I don't have strong intuitions about that at
00:22:30.400
all. I mean, that seems like we, I feel like we touched that in our last conversation. And I think you could
00:22:36.260
probably argue that one way or the other, but to bring it back to what you were just saying, I guess
00:22:42.340
there's the physical reality, which is often called objective ontologically of chemistry and physics.
00:22:49.760
There are things like integers, which are not, as you just said, dependent on what we know about atoms.
00:22:56.800
But then there are the experiences of conscious systems, whether or not we can ever understand what
00:23:04.160
those experiences are, they have a certain character. And that character depends upon the, whatever
00:23:10.380
material requisites exist for those conscious systems, but that hasn't been worked out. And it's also,
00:23:18.180
it may, and even if you work that out perfectly, it's still, it's the subjective side of that coin
00:23:26.860
that is of interest. So, so yes. Um, uh, it's funny, just at the end, you, you said what I was
00:23:37.880
about to say. It took me a while. So I know that you use the term science, for example, more
00:23:45.740
broadly than some people. And, and, and I think that's quite right. So do I. Uh, and so you and I both
00:23:52.880
use it to encroach on things that some people who think they're purists, uh, would like to exclude
00:24:00.720
from science. Um, but if you expand science, you know, then part of philosophy you can call
00:24:08.560
part of science. And the criterion, Popper's criterion of demarcation, is not intended to be
00:24:14.600
either, uh, sharp, uh, or, um, uh, pejorative, you know, or criterion of meaning or worthwhileness
00:24:24.160
or anything like that. It's just a matter of convenience, a matter of convenient classification
00:24:29.700
of subject matter. If you want to extend the term science to cover certain things that are
00:24:36.080
traditionally considered, um, uh, philosophy, like, like the interpretation of quantum theory,
00:24:42.480
for example, which I think is definitely part of science. And, uh, but then if you want to sort of
00:24:49.260
make the connection between human wellbeing and neuroscience, then you, you know, you're, you're
00:24:57.900
trying to encroach on neurophilosophy, as it were. And neurophilosophy is epistemology.
00:25:06.320
And the thing is, once you've extended it to neurophilosophy and into epistemology, you run into
00:25:13.500
a deep fact about the physical world, which is that epistemology is substrate independent. It is,
00:25:24.240
it, it, once you have, once knowledge or, or, uh, feelings or consciousness or any kind of information
00:25:32.840
or computation is instantiated in a universal device, then the laws it obeys are completely
00:25:41.200
independent of the physics and of the neurology and, and every kind of physical attribute of the
00:25:51.120
device falls away. And you can talk about the properties of those things as abstract things
00:25:57.900
or, well, perhaps abstract is the wrong word, because they're perfectly objective.
00:26:02.260
It's just that they're not atoms, right? They're not neurons.
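To make that concrete, here is a minimal sketch of substrate independence as a computational claim. Nothing in it comes from the conversation itself; the encodings are invented stand-ins for "granite" and "disk." The same abstract computation, the parity of a collection of bits, is run over two physically different representations of the same information, and the information-level result is identical.

```python
# Illustrative sketch only (not from the episode): one abstract computation,
# two different "substrates" carrying the same information.

def parity(bits):
    """The substrate-independent 'law': XOR together all the bits."""
    result = 0
    for b in bits:
        result ^= b
    return result

# Substrate 1: bits encoded as characters in a string
# (think: symbols chiseled into granite).
def bits_from_string(s):
    return [1 if ch == "1" else 0 for ch in s]

# Substrate 2: the same bits encoded in the binary digits of an integer
# (think: magnetized regions on a computer disk).
def bits_from_int(n, width):
    return [(n >> i) & 1 for i in range(width)]

# The result depends only on the information, not on the medium.
print(parity(bits_from_string("1011")))  # -> 1
print(parity(bits_from_int(0b1011, 4)))  # -> 1
```

The rule the computation obeys makes no reference to characters, integers, neurons, or atoms; that is the sense in which every physical attribute of the device falls away once the information is instantiated.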
00:26:07.100
I would just say that I think at this point, I'll go with you there. I think, I think that's probably
00:26:11.600
true, but what you seem to be smuggling in there, in the, in the leap from atoms, is a kind of
00:26:20.900
information-based functionalism, where we just, we're assuming for the purposes of this
00:26:27.340
conversation that we know consciousness to be an emergent property of information processing.
00:26:34.320
And it's not some other constituent of physical reality that isn't based on bits. But if we,
00:26:43.100
if we assume that it is, if it's, if it is, if it is just something that computers,
00:26:47.560
non-biological computers can, can one day have, yeah, well then I'm with you.
00:26:51.980
This is something that, that is generally true of morality, that, that morality has reach.
00:27:03.120
If you don't steal a book from a library, when you, you realize that you easily could do so without
00:27:12.060
getting caught, this doesn't just affect you and the library. This, this, because this comes from
00:27:20.840
a universal machine, which is you. This, this machine has universal theories or theories which
00:27:30.540
try to be universal theories or are universal in some domain or other. And when you commit the crime,
00:27:37.760
for instance, you're changing the facts, you're changing something that you can't change back.
00:27:44.440
Isn't that change occurring in you, assuming that there's no one else who will ever discover
00:27:49.720
your act? I mean, where else would the change occur but in you?
00:27:54.040
It's, well, for example, suppose you're telling your children about morality. Do you say,
00:28:03.120
okay, well, when you're in that library situation, it's okay to steal the book because no one will ever
00:28:07.180
find out. Right. Or do you say, no, you shouldn't even in that situation? If the, if the first,
00:28:14.780
then it's affecting your child as well. Yeah. And if the second, then you are lying to a child.
00:28:20.940
Right. Which itself has vast implications. Yeah, no, I'm, I'm, I'm totally with you there. I just,
00:28:26.320
let's linger on this one point that, again, it's, I understand it's disconcertingly far afield,
00:28:30.480
but I just think it's interesting. So if you could apply a painless local anesthetic to the child
00:28:36.760
for the purposes of, of receiving a vaccine, that would be a better thing to do. And its being
00:28:43.260
better is the measure of it, or, or the claim that it's better is synonymous with the claim
00:28:52.620
that it's good to reduce needless suffering. And that the, and the suffering is, is both
00:28:59.360
needless and, and, and in fact, probably harmful for the child to whatever degree.
00:29:04.660
Well, yes, I'd say that the first line of my critique would be that it violates the
00:29:10.780
human rights of the child, but, but okay, there are, there are all these other things which are
00:29:14.900
related. I think that the way we interpret and value very powerful stimuli is remarkably susceptible
00:29:26.600
to the conceptual frame within which that experience is held.
00:29:35.020
Which is to say, your thoughts about your experience and
00:29:39.660
your thoughts about reality are in many cases, constitutive of, of the, the sum total of the
00:29:46.580
experience. And there are many things, but this does connect back to, I agree with you about, about
00:29:52.440
human rights and consent to a large degree. I think, I think we want, certainly when you're talking about
00:29:56.860
adults who can consent, you want them to be able to consent to various experiences, but I can still
00:30:04.660
imagine experiences that are unpleasant that it turns out are very good for a person and you have
00:30:13.140
done them a great favor if you subject them to these experiences. And you may, in fact, I mean,
00:30:21.920
this is just kind of a paternalistic claim of a possibility. You may, in fact, be doing someone
00:30:28.360
a favor to subject them to these experiences without their explicit consent. If in fact,
00:30:34.720
the benefits are so great. Now, I don't, I don't know what those experiences are, but let's just say
00:30:39.320
it's true that, you know, a culture finds that there's a certain ordeal that you can put teenagers
00:30:44.760
through and many of the teenagers don't want to do it, but it is just so good for you as a human being.
00:30:51.620
That strikes me as possible. I just don't have an example, but I do see, I see people who do consent
00:30:57.380
to do things which are really incredibly difficult, you know, like people become Navy SEALs. You know,
00:31:06.160
I've met some of these guys and, you know, they've got, they, in many cases, literally went through hell
00:31:12.820
to equip themselves with the skills they've got. And part of the training is a kind of culling of all
00:31:21.480
the people who are not fit to go through the training in the first place. And so it is a
00:31:27.180
selection procedure, but these guys go through an intense ordeal and come out in many ways enviably
00:31:34.420
strong psychologically and physically as a result. And I can see that there are extreme experiences that
00:31:43.640
we, we might not want to rule out just in principle as being bad for us.
00:31:48.960
As I, as I said, if it's a matter of knowledge, if we know this, then, um, we have an
00:31:58.520
explanation. If we have an explanation, we can give it to the people. Uh, if, if we have a machine that
00:32:05.600
can detect whether somebody would benefit from Navy SEAL training and it can just detect this by putting it on
00:32:12.060
their head and pressing a button, then you would probably find that a lot of people who aren't Navy SEALs
00:32:17.500
would benefit from it. And if it's true, if the theory on which this machine is based has a good
00:32:24.300
explanation, then you should be able to persuade those people to take the training.
00:32:31.720
So what if you can't, or what if the benefit you're conferring on someone is out
00:32:38.120
of reach to them. So let's say, let's say you have people with severe autism who really can't consent
00:32:43.320
to much of anything and you can't really explain the benefits you're about to give them, but the
00:32:47.980
benefit you're about to give them is a cure for autism. Yes. Well, this reminds me of a, you know,
00:32:53.420
a cure for lesbianism or something. I mean, there are people who think that raping somebody will do
00:33:00.460
them good under various circumstances, but you can't base either a legal system or a moral system
00:33:08.440
on saying that if one thinks that that's true, one should do it. Well, no, but clearly in that case,
00:33:14.380
it certainly sounds on its face like a, a delusional and unethical claim. Yes. We're
00:33:23.100
considering all sorts of, all sorts of implausible things here. What I hear you doing is using the
00:33:29.920
principle of consent and human rights to trump everything else that might- It's more epistemology
00:33:37.960
because I don't think the human rights are fundamental either. They are, they are just a
00:33:42.260
way of implementing, um, institutions that, uh, promote the growth of knowledge. And the reason
00:33:49.980
why knowledge trumps everything else here is fallibilism. Uh, in all these cases where we
00:33:56.540
have a theory that something is better, uh, we're, we're implementing a moral theory and we might be
00:34:03.420
mistaken about that. And, uh, it must be a fundamental fact of morality, of an objectively true
00:34:13.120
morality that it's immoral to close off the paths to correction of a theory if it turns out to be
00:34:21.380
false. Oh yeah. I'm totally, I'm totally with you there. But so, so, but that seems to be asserting
00:34:26.920
my, you know, underlying claim, which is human flourishing conceived as broadly as you want.
00:34:34.180
And I mean, it's a definition that is continually open in the manner you just described for refinement
00:34:39.800
and, and, and, you know, fallibilism. That is the point. And the, you know, we want to move in the
00:34:46.380
direction of better and better worlds with better and better experiences and who knows how far that
00:34:53.200
can go, but we know it's possible to move in the wrong direction. And we never want to, we never want
00:34:59.000
to tie our hands and make it impossible to correct course. Yes. So once you have an institution that
00:35:08.520
allows that, this is, this is why consent isn't just a, you know, a nice thing to have, it's, it's,
00:35:15.440
it's a fundamental feature of the way we handle ideas. Um, if you have a system that allows people
00:35:22.960
to enforce an idea on another person who disagrees with the idea, then the, the means of correcting
00:35:30.600
errors are, are closed off. Uh, you know, you, you imagined people who, who had a disability or
00:35:37.420
something and couldn't, but, but could be cured of that disability, but it couldn't be explained to
00:35:41.280
them and so on. Well, the thing is either those people are in a constant state of suffering in,
00:35:47.780
in which case applying the thing to them won't change that. Or there is a thing that they prefer
00:35:54.140
to some other thing. And then there will be a path towards the better state that involves just
00:36:01.400
doing things that they prefer. Like if it involves an injection, then it might involve, um, either
00:36:08.620
an anesthetic or getting into a certain mood in which an injection, uh, doesn't matter.
00:36:14.960
Let me give you an example. Again, I, I want to get back to these core issues, but all of this is just,
00:36:19.720
I find, too interesting. I think this is an example that I mentioned somewhere in The Moral Landscape,
00:36:24.980
but I'm not sure. The Nobel laureate in economics, Danny Kahneman, did some research. I think he was
00:36:31.760
just associated with this research. I don't think he was the main author on this paper, but they did
00:36:35.800
some fascinating research on people receiving colonoscopies. And this wasn't at a point where
00:36:43.220
there was no, like there was no twilight anesthesia associated with colonoscopy. So people really had to
00:36:49.100
suffer the full ordeal and they discovered they were, they were trying to figure out what accounted
00:36:56.700
for the subjective measures of, of suffering associated with this procedure. And also what
00:37:04.760
would positively influence the compliance of patients to come back and get another one on schedule
00:37:11.680
five years later or 10 years later or whatever it was. So they found that the, this confirmed,
00:37:18.480
I don't know if this was the first instance, but that, but there's something in psychology called the
00:37:22.840
peak-end rule, which is that your, your judgment about the character of an experience is largely determined
00:37:31.040
by the peak intensity of the experience, whether that was good or bad and what the character of the
00:37:37.280
experience was at the end of the episode. So those are the two, the real levers you can, you can
00:37:43.960
pull to, to influence whether someone thought they had a good time or a bad time.
00:37:48.040
And to test this, they, they gave, you know, the control group, they gave these ordinary
00:37:53.360
colonoscopies and, you know, took the appliance out at the first moment
00:38:00.400
where the procedure was over. But in the experimental condition, they did everything the same, except
00:38:06.700
they left the apparatus in quite unnecessarily for some minutes at the end, providing a
00:38:17.880
comparatively low-intensity, but decidedly negative and, again, unnecessary stimulus
00:38:24.880
to the subjects. And the result was that their impression of how much they had suffered was
00:38:32.440
significantly reduced and their willingness to come back and get a potentially life-saving
00:38:38.600
colonoscopy in the future was increased. So the greater percentage of them showed up in five years
00:38:44.280
for the next colonoscopy. And so this was a, you know, by, by any real measure, this was a, a good
00:38:50.960
thing to have done to these people, except what in fact it was, if you just take the, the window of
00:38:58.180
time around the procedure, it was prolonging an unpleasant experience without any medical necessity.
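For what it's worth, the peak-end rule Sam describes can be stated quantitatively: in Kahneman's work, the remembered intensity of an episode is approximated by the average of its peak intensity and its final intensity, largely ignoring duration. A minimal sketch, with invented pain ratings, shows why the extended procedure is remembered as milder even though total discomfort is greater:

```python
# A sketch of the peak-end rule. The minute-by-minute pain ratings below are
# invented for illustration; the rule itself is Kahneman's approximation that
# remembered intensity ~ (peak intensity + final intensity) / 2,
# largely independent of duration.

def remembered_pain(ratings):
    """Retrospective evaluation under the peak-end approximation."""
    return (max(ratings) + ratings[-1]) / 2

def total_pain(ratings):
    """Moment-by-moment suffering simply summed over the episode."""
    return sum(ratings)

standard = [4, 6, 8, 7, 8]        # procedure ends at high intensity
extended = [4, 6, 8, 7, 8, 2, 1]  # same procedure plus a mild, unnecessary tail

print(remembered_pain(standard))  # (8 + 8) / 2 = 8.0
print(remembered_pain(extended))  # (8 + 1) / 2 = 4.5 -> remembered as milder
print(total_pain(standard))       # 33
print(total_pain(extended))       # 36 -> yet total suffering is greater
```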
00:39:05.340
Right. And so that's, so I just want you to, there's got to be a way of telling them that
00:39:11.800
you're doing this and it's still working. Presumably. But what if, what if in fact is true
00:39:17.160
that the placebo effect is ruined? If you, if you tell someone that, that, that might be what's
00:39:25.400
happening to them or that you've done, you've done, you've done this thing. It's not medically
00:39:29.660
necessary, but we're going to leave this tube in for a few minutes because you're going to feel
00:39:32.520
better about it afterwards. What if that actually cancels the effect? Again, the universe hasn't
00:39:39.280
got it in for us. It doesn't like us at all. It doesn't care about us, but it hasn't got it in for
00:39:45.040
us. If what you just said is the case, then you could, for example, there'll be a way of getting
00:39:52.540
round it. For example, you could say to them, you could say to the patient, look, there is a way of
00:39:59.680
reducing the amount of perceived suffering of this procedure, but it involves a placebo.
00:40:07.220
But it won't work if we tell you what the placebo is. So, you know, do you give us permission to use
00:40:14.980
this placebo? And of course the vast majority will say yes. And if that doesn't work, there'll be some
00:40:21.280
other way. But, but is that really consent? Because what if we just, we'll run the alternate
00:40:27.620
experiment. What if we say, we pose it like that to people and then, you know, 99% say, sure, you know,
00:40:34.400
sign me up. But we, we have a, another condition where we just, now we're just doing research on
00:40:40.020
compliance and we say, we tell them exactly what the placebo is in this case. We're going to leave the
00:40:44.800
tube in you for five minutes, not doing anything. And you're, you're going to, for that, for those
00:40:50.060
full five minutes, those will be five minutes where you would have been saying, when's this going to be
00:40:54.760
over already? And you could have been off the table and driving home, but you know, now you're still on
00:40:59.500
the table with this tube in you, but that's the placebo. Let's say the people who sign up for that
00:41:04.000
drops down to 17%. So now we know that there's all these people in the first condition who are only
00:41:09.500
consenting because you have masked what the placebo is. And so in fact, they're not really
00:41:15.980
consenting to the thing you're doing. I think that's still consent rather like, you know, if you,
00:41:22.680
if you, um, you don't have to, you don't have to be a doctor and, and know exactly what the heart
00:41:32.240
surgeon is going to do to your heart in order to, to, um, um, consent, validly consent to heart
00:41:40.120
surgery. And it's the, the, the same with the placebo. You know, if you're told that it won't
00:41:46.460
work if you know what the placebo is, but there will be one, then you're consenting. And the
00:41:53.560
1% who still say no, um, those people are, supposing the theory is true,
00:42:04.040
simply making a mistake, the same kind of mistake as you would be making if this whole theory wasn't
00:42:08.460
true. You know, we, we, you can't, you can't, uh, um, bias the rules under which people interact
00:42:17.800
towards a particular theory that they disagree about. But there are people who have ideas about
00:42:26.380
reality and ideas about how we should all live within it, which are so perverse and incompatible
00:42:35.760
with everything we could reasonably want to do in this world that we, we do have to, we have to
00:42:43.540
wall off their aspirations from the rest of what's going on. I mean, whether, whether that's locking
00:42:47.840
them in prison because they just are so badly behaved or just exiling them in some way from
00:42:54.420
the conversation, you know, so the, again, the people I use are like, you know, the Taliban or ISIS,
00:42:59.680
you know, they don't, they don't get to vote on our public policy and for good reason, because
00:43:04.940
their votes would be crazy. Um, yes. Well, we, again, we, we have institutions. We try to, uh,
00:43:11.920
tune the institutions to have the property that, um, uh, the political institution should have the
00:43:18.840
property that disputes between people are resolved without violence. Uh, and the, the moral institutions
00:43:26.500
include the idea that participating in and obeying such institutions is morally right. Uh, and also in,
00:43:35.420
in, in interpersonal relationships that don't involve the law, uh, we still want, we want to be,
00:43:41.440
better than that. We want, we want, uh, interpersonal relationships, not only to resolve disputes without
00:43:47.800
violence, but we want them to resolve disputes without any kind of coercion. An institution that
00:43:55.120
institutionalizes coercion about something is ipso facto irrational. Now I'm not saying that I know of
00:44:03.740
institutions that achieve this perfectly, any more than I do in the
00:44:08.500
political case. I'm saying that that's the criterion by which institutions should be judged: by how
00:44:15.260
good they are at resolving disputes between people.
00:44:20.260
If you'd like to continue listening to this conversation, you'll need to subscribe at
00:44:27.400
samharris.org. Once you do, you'll get access to all full-length episodes of the Making Sense
00:44:32.060
podcast, along with other subscriber-only content, including bonus episodes and AMAs and the conversations
00:44:38.820
I've been having on the Waking Up app. The Making Sense podcast is ad-free and relies entirely on
00:44:44.360
listener support. And you can subscribe now at samharris.org.