#374 — Consciousness and the Physical World
Episode Stats
Words per Minute
170.3
Summary
Christoph Koch is a neuroscientist, former president of the Allen Institute for Brain Science, and a former professor at Caltech. He is the author of five books, including "Then I Am Myself the World: What Consciousness Is and How to Expand It," which is the topic of today's conversation. In this episode, Christoph and I speak about his development as a scientist, his collaboration with Francis Crick, their studies of the visual system, change blindness, and binocular rivalry, the significance of sleep and anesthesia for consciousness studies, the limits of physicalism, non-locality and other quantum mechanical phenomena, brains as classical systems, the possibility of conscious AI, idealism and panpsychism, integrated information theory, what it means to say that something exists, the illusion of the self, and the possibility of brain bridging. We also discuss Christoph's recent experience with psychedelics, which has put pressure on his understanding of the nature of consciousness. We don't run ads on the podcast, and it's therefore made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one by subscribing at samharris.org, where you'll also find our scholarship program, through which we offer free accounts to anyone who can't afford one. -Sam Harris, Making Sense
Transcript
00:00:00.000
Welcome to the Making Sense Podcast. This is Sam Harris. Just a note to say that if
00:00:11.640
you're hearing this, you're not currently on our subscriber feed, and will only be
00:00:15.580
hearing the first part of this conversation. In order to access full episodes of the Making
00:00:19.840
Sense Podcast, you'll need to subscribe at samharris.org. There you'll also find our
00:00:24.960
scholarship program, where we offer free accounts to anyone who can't afford one.
00:00:28.340
We don't run ads on the podcast, and therefore it's made possible entirely through the support
00:00:32.880
of our subscribers. So if you enjoy what we're doing here, please consider becoming one.
00:00:45.460
Today I'm speaking with Christoph Koch. Christoph is a neuroscientist at the Allen Institute
00:00:51.220
and at the Tiny Blue Dot Foundation. He's the former president of the Allen Institute for
00:00:56.520
Brain Science, and a former professor at Caltech. He's the author of five books. Most recently,
00:01:03.920
Then I Am Myself the World: What Consciousness Is and How to Expand It, which is the topic
00:01:10.300
of today's conversation. Christoph and I speak about his development as a neuroscientist,
00:01:15.900
his collaboration with Francis Crick, their studies of the visual system, change blindness,
00:01:21.960
and binocular rivalry, the significance of sleep and anesthesia for consciousness studies,
00:01:28.240
the limits of physicalism, non-locality, and other quantum mechanical phenomena, brains as
00:01:34.960
classical systems, the possibility of conscious AI, idealism and panpsychism, integrated information
00:01:43.440
theory, also known as IIT, what it means to say that something exists, the illusion of the self,
00:01:51.300
the possibility of brain bridging, that is connecting two human brains in a shared experience,
00:01:57.340
Christoph's recent experience with psychedelics, and other topics. Anyway, this is the deep end of
00:02:04.120
the pool with respect to the scientific understanding of consciousness, and I certainly enjoyed it.
00:02:09.160
And now I bring you Christoph Koch. I am here with Christoph Koch. Christoph, thanks for joining me.
00:02:23.580
So you've written a wonderful book. You've written several books, but your most recent is
00:02:28.320
Then I Am Myself the World: What Consciousness Is and How to Expand It. And I just really,
00:02:34.440
I want to follow the line you took in your book here, which traces the evolution of your thought as
00:02:41.940
a scientist focusing on the nature of consciousness. And I mean, you've had a very productive career as
00:02:48.900
a neuroscientist, and you had a very fruitful collaboration with Francis Crick, who we'll talk
00:02:55.700
about. But then you, you know, kind of late in your career, you took the first person side of
00:03:01.860
things with both hands and have had some experiences with psychedelics that have put pressure on your
00:03:09.740
ontology, one might say. So I would just love to talk about all of this. Perhaps you can start
00:03:15.420
somewhere near the beginning. How is it that you came to focus on consciousness? And you started out
00:03:22.620
more as a physicist, you know, that was the side of science you came in, I believe, as an undergraduate.
00:03:28.480
What led you to the study of consciousness? Well, so I did get a minor in philosophy in
00:03:35.240
Tübingen, which is a 550-year-old university in Germany. And I did grow up reading Schopenhauer
00:03:43.840
and Immanuel Kant. It was that sort of household. So I've always been interested in consciousness,
00:03:49.740
that voice inside the head, right? How is that voice compatible with everything else we know?
00:03:57.380
And I grew up reading physicists, including Ernst Mach and people like Schrödinger. And they all
00:04:04.760
wondered, they also had similar questions. And in fact, both Schrödinger and Ernst Mach were very
00:04:10.860
explicit. Before I can be a physicist, I am a conscious being that sees, that thinks, you know,
00:04:18.980
going back all the way to your René Descartes. And so before I can even read an oscilloscope or read
00:04:26.620
an instrument as a physicist, I depend on my conscious sensation. And so consciousness has to be at the
00:04:33.900
center of our explanation of everything in the world. And I was puzzled when I first came from
00:04:41.740
physics into neuroscience. So my PhD was in the 20th century in what we now would call computational
00:04:48.320
neuroscience. I was puzzled when I really went deep and became a full-time professional neuroscientist
00:04:54.480
that consciousness at the time was simply not discussed. It didn't figure in the index. You know,
00:05:00.280
if you got the standard textbook, you went to the index, and under consciousness, nothing. That was simply
00:05:06.060
ignored. It was all about behavior and neurons, which is fine, which is what I dedicated most of
00:05:11.840
my life to. But ultimately, we also have to explain the puzzle at the heart of our existence,
00:05:16.840
the fact that I'm not just a behaving thing. I actually see, I hear, I dread, I fear, I dream,
00:05:22.900
I desire, I want, right? So how do all of these things get into the world?
00:05:28.840
And then I met Francis Crick. So I did my PhD in Germany, then went to do a postdoc at the
00:05:34.980
Artificial Intelligence Lab at MIT. And previously, I'd encountered Francis Crick, you know, who at the
00:05:41.480
time had shifted from molecular biology, where, you know, with Jim Watson, he decoded the molecular
00:05:48.520
code for life and deciphered the double helical shape of the DNA molecule. But then he shifted because
00:05:54.980
he was also interested in consciousness. And the two of us got together and then wrote roughly
00:06:01.120
20 papers over the next 14 years, where we really initiated sort of a modern empirical
00:06:07.500
program to discover the footprints of consciousness in the brain. And the argument was very simple,
00:06:14.960
never mind about all the philosophy, we aren't going to converge anytime soon. You know, are we a
00:06:20.000
physicalist, an idealist, a dualist, a panpsychist, whatever? Let's just focus, we all agree today,
00:06:25.920
it's not the heart, as most people thought, it's your brain, that's sort of the organ of consciousness.
00:06:32.400
So which particular bits and pieces of the brain? Is it the spinal cord? Is it the cerebellum? Is it the
00:06:38.140
retina? Is it the thalamus? Is it the cortex? And does this bit of the brain that sort of
00:06:45.780
analyzes or is the substrate of consciousness, does it have to oscillate? Does it have to buzz at a particular
00:06:50.780
frequency? Which genes are involved? Which cell types are involved? Is there sort of a conscious mode and an
00:06:57.100
unconscious mode? These are all questions that we felt will have an answer. No matter what your
00:07:03.560
philosophical predilection is, there will be an answer of the sort. These neurons in this state, in that part of
00:07:10.440
the brain at this time, express the fact that I see or that I hear. And it's causal. In other words,
00:07:18.460
if I can activate those neurons, if I can put these neurons into this particular buzzing state, for
00:07:24.860
instance, then you should have a conscious experience. Conversely, if I can somehow interfere
00:07:30.720
with this neural mechanism by a drug, by a neurosurgeon's electrode, by some external magnetic device, then you
00:07:40.640
shouldn't have the experience. And so this is sort of the modern program to study the footprints of
00:07:46.560
consciousness now called the Neuronal Correlates of Consciousness or the NCC. In patients, in neurotypical
00:07:53.840
volunteers, and in animals like mice and monkeys and rats and other animals.
00:08:02.860
But when I read you now, I recognize that you are someone who always took the so-called hard
00:08:09.760
problem of consciousness seriously, which is to say you thought that it was a non-trivial mystery, that it
00:08:16.320
should be like anything at all to be associated with a certain pattern of neural firing. Whereas
00:08:22.720
in my memory of Francis Crick's approach to this, he wrote a book, The Astonishing Hypothesis, way back
00:08:31.520
when, I remember him being more of an arch materialist of the sort that one met quite directly in my
00:08:42.980
friend Dan Dennett or in the Churchlands, right? I mean, somebody who just simply tried to ram past
00:08:49.380
the hard problem with the declarative statement that, you know, the mind is simply what the brain
00:08:54.660
is doing, right? You're just a pack of neurons. And there is no, you know, it seems like there's
00:08:59.380
something left out of that. That's just a symptom of what a bad view we have of the brain's role in
00:09:06.900
producing subjectivity. But so how did, from what you just said, it sounds like you sort of tabled
00:09:14.420
your philosophical differences of intuition there and just decided to go look for the neural correlates
00:09:20.020
of consciousness. It must be somewhere in the brain. We agree about that, whether we agree about anything
00:09:24.180
else in philosophy. But was that an impediment to your collaboration with Crick at all? Or do I have
00:09:29.700
Crick wrong? No. No, you don't have him wrong. That was the starting point. That was the explicit
00:09:35.140
starting point that once we understand neurons and their vast complexity, you know, of an untamed
00:09:41.780
complexity that's incomprehensible for us. You know, humans, we deal with simple networks that are,
00:09:47.620
that are, you know, that we can understand, but we've never been faced with the vast complexity
00:09:52.820
that we find in evolved systems like biology, whether that's a brain of a simple creature like a
00:09:59.220
worm that has 302 neurons, or the brain of a human that has on the order of 100
00:10:05.380
billion neurons. But he was also, Francis was very clear. He said, this hypothesis may not be true.
00:10:11.380
There might be other ways we have to think about it. And he was sympathetic. So, for instance,
00:10:17.300
very early on, we encountered Jerry Edelman, another Nobel laureate who had also moved from Manhattan,
00:10:23.300
from the, you know, from the Rockefeller Institute to the, to the Scripps Institute in, in La Jolla.
00:10:28.660
And he worked with a person at the time, Giulio Tononi is now also a very well-known
00:10:34.340
consciousness researcher. And he explored the possibility that maybe it has to do with the
00:10:39.060
complexity. They wrote, for example, an early influential paper called consciousness and complexity,
00:10:44.340
arguing that complexity had to be, had to be involved, which is a little bit more than just saying,
00:10:49.940
it's just a bunch of neurons, you know, because the, the, the, the most widespread belief among
00:10:55.140
neuroscientists is, well, it's an emergent property, just like wetness emerges from water.
00:11:00.900
You don't get, if you have two H2O molecules, they're not wet, but if you got, you know,
00:11:05.700
10 to the 23, like a, a liter of, of, uh, H2O molecules, then it gets wet. And similar,
00:11:12.980
if you have a few neurons, they're not conscious, but you got a hundred billion of them,
00:11:16.900
then somehow they're conscious. But then we also, Francis and I, realized that's inadequate
00:11:21.540
because you have some structures like the cerebellum. Okay. So you have this little brain
00:11:26.740
tucked underneath your big brain, right? At the back of your head, it contains in fact,
00:11:31.380
80%, four out of five neurons in your head, are in the cerebellum. You can lose these neurons, let's say
00:11:37.300
due to a stroke or due to a tumor, you will be impaired. You can't do fast speed typing on your
00:11:42.900
phone anymore. You can't play violin or piano anymore. You have a few other issues like that,
00:11:48.180
but basically you, you stagger about, you look like you're always drunk, but basically all these
00:11:54.420
people, these patients who have lost part or whole of their cerebellum, they see, they hear, they dread,
00:12:01.060
they fear, they imagine, they, their consciousness is essentially to first order unchanged. And so that
00:12:07.780
tells you that it can't just be the number of neurons. It has to do, at least, with the way they're
00:12:12.900
organized. Same thing with the spinal cord, right? You can be quadriplegic. You've just lost all your
00:12:18.100
spinal cord, 200 million neurons, right? So you can't move. But again, your consciousness hasn't really
00:12:23.620
changed that dramatically. So it can't just be the number of neurons. It has to be the way they're
00:12:29.700
organized. And for Francis, the hope was similar to what he had accomplished in molecular biology,
00:12:36.260
that if we look at the right neural mechanism in the right way, then suddenly it'll become apparent,
00:12:42.500
just like it was apparent when you look at the double helical structure of DNA, that that's a
00:12:46.740
natural way to copy genetic information. But we, we, we, of course, we didn't find such a simple
00:12:53.620
explanation and he always was open to the possibility that other ways of conceiving of it may be necessary
00:13:00.820
to finally understand it. Hmm. Well, I, I should say that anyone who has seen me try to type on my phone
00:13:07.220
will wonder whether I have lost my cerebellum in some, some terrific accident. Uh, it's, uh,
00:13:13.140
Well, you know, a small number of people are born without a
00:13:17.700
cerebellum altogether, you know, so-called agenesis of the cerebellum. Well, I can confirm I do have one,
00:13:24.180
having been, having done my fair share of, uh, MRI experiments. Oh, there you go. But, um, actually,
00:13:30.020
I don't know if you're aware of it. You and I once met, you, you, when I was doing my PhD at the
00:13:34.580
Brain Mapping Center at UCLA, you came and did a seminar in Mark Cohen's lab, so. Yes, a long time
00:13:41.700
ago with Mark. Yes. Yeah. I don't think our paths have crossed since, but. No, no. Okay. So now how has
00:13:47.380
your, you know, your, your evolution as a, um, a neuroscientist in, in trying to understand
00:13:54.420
consciousness as an emergent property of brain activity, you know, I remember the work you did
00:14:00.180
with you, you and, and Crick were, were focusing on the visual system if, if memory serves and you
00:14:06.900
were, you were looking at, um, things like bistable percepts and, and change blindness and
00:14:13.220
binocular rivalry. Right. Right. Binocular rivalry. That's, that's a fascinating phenomenon. Maybe you
00:14:18.980
could describe that and, and, and why that seemed like such a promising avenue, because it's, it's,
00:14:24.500
it's a, to experience it is, um, striking. Both binocular rivalry and, and change blindness are
00:14:30.420
quite wonderful to experience as a subject in an experiment. Yeah. So the general philosophy here in
00:14:35.460
these experiments is to create conditions where, on the one hand, you're looking at the stimulus and
00:14:41.940
see it. And then when the same stimulus is present, you may not see it, or you may see it differently.
00:14:47.940
In other words, in the two conditions, you're always looking at the same thing, but sometimes you see
00:14:53.060
it one way and sometimes you see it the other way. So in change blindness, you know, you, you have an image
00:14:58.900
like there's a famous image of soldiers, UN soldiers boarding a plane. And then you see an image that's
00:15:06.340
slightly changed, uh, with a blank in between where, for example, the photographer, using
00:15:12.180
Photoshop, has removed the engine of the airplane. And you go back and forth between the
00:15:17.620
original image, a blank image and the changed image and blank original image, blank changed image.
00:15:25.540
And you may be staring at this for 10 or 20 or 30 seconds and your eyes move about, but you
00:15:32.260
don't see the actual change until, you know, suddenly you get, ah, of course it's the engine
00:15:38.180
that's changed. And then of course you cannot not see it. So there you have a situation where for 10
00:15:42.980
or 20 seconds, you're looking at the same thing, but you're simply not seeing what's in front of
00:15:47.380
your eyes. So you can track down where are the mechanisms in your brain that respond only to the
00:15:52.660
retinal input compared to where are the neurons that respond when, when you actually see it.
00:15:58.900
A separate case is this binocular rivalry. So there you, you have two different images,
00:16:03.860
let's say face one and face two. Let's say you're projecting face one into one eye. To make
00:16:09.060
it concrete, we can think about, for instance, an image of a face. And we did it
00:16:14.660
20 years ago with the two presidents at the time. You can imagine in your left image, you have Biden
00:16:20.660
on the right, in your right eye, you project the image of Donald Trump. Now, if you don't do things
00:16:26.500
correctly, you can see sort of a juxtaposition of, of the both of the two. But if you do things
00:16:32.020
correctly for a while, you will only see President Biden and then one or two seconds or three seconds,
00:16:40.900
and then Biden will fade and Trump will come in. And then Trump will stay for several seconds,
00:16:46.260
and then Biden will come in. It's sort of this never ending dance.
00:16:49.780
So just to clarify for listeners who haven't, who may not be able to visualize this, you have
00:16:56.180
one image projected to one eye and, and a different image to the other eye. And your conscious
00:17:02.980
experience, if it's set up correctly, is that you're simply seeing one of the
00:17:07.140
images, and then it just randomly shifts to the other image. And the shift is something
00:17:13.780
you can't control. And what's interesting about this experimentally is that
00:17:18.820
the inputs to your nervous system are completely stable. I mean, all you have are these,
00:17:23.300
these two inputs to, you know, one to each eye and nothing is changing. And yet at the level of your
00:17:28.500
conscious, you know, phenomenology, there is this very bizarre shifting back and forth between
00:17:35.140
seeing one image and then seeing the other and then seeing one image and then seeing the other.
00:17:39.060
And it's, um, this was a wholesale change in the contents of visual consciousness. And yet there's
00:17:44.420
zero change happening at the level of the bottom of your nervous system from the, you know, the
00:17:50.340
retina onward. That's correct. Yeah, that's correct. So something is changing. They're sort of this
00:17:54.820
bi-stable. It's like the nervous system wants to see what's in the left eye. And then it wants to
00:18:00.100
sort of check what's on the right eye and it goes back and forth. And this can go on for, you know,
00:18:05.380
many minutes. So now, in principle, you can put
00:18:10.740
people in a magnet, or you can record from neurons directly, like we've done at the time at UCLA
00:18:16.660
with Itzhak Fried. These are neurosurgical patients
00:18:20.660
where, for other reasons, because the doctor has to know where the epileptic seizure
00:18:24.980
originates, you put electrodes into the brain. You can see the same thing, or you can do it in a,
00:18:29.940
in a monkey, where you can track, throughout the monkey's visual system, where are the
00:18:34.980
the first stages of the neurons that actually respond to what you see? Not just what the left
00:18:40.100
eye, quote, sees, or the input received by the right eye, but what the animal sees or what the
00:18:45.700
person sees, which typically tends to be higher up in the visual hierarchy.
00:18:51.060
And you can ask, where are those neurons? Are they of a particular type? Are they of a particular layer?
00:18:56.580
And then of course, what happens if I, if I can begin to manipulate those neurons, which at the
00:19:01.540
time was, wasn't really possible, but today you can do using fancy optogenetics. So this was one
00:19:07.860
of a number of ideas that we exploited to track, or that people exploit to track the, the neural
00:19:15.140
footprints of consciousness, in this case, visual consciousness throughout the brain. And people
00:19:19.460
continue to do that in all sorts of different ways. Is there a problem here conceptually because you're,
00:19:25.620
so you're, you're obviously conscious all the while there's a, there's a distinction we could
00:19:31.860
make between consciousness itself. I mean, the fact that it's like something to be you to use Nagel's
00:19:38.260
definition, of which I've always been a fan, and any specific contents of consciousness,
00:19:45.540
right? So you're like there, you're really interrogating specific contents, you know, and, and,
00:19:50.580
but one could argue that, I mean, you could completely lose, you know, a perception of
00:19:55.140
the visual field as well, and, and yet be just as conscious, right? I mean, now you're no longer
00:19:58.900
conscious of vision, but you're, you're conscious in other sensory channels and of just, you know,
00:20:06.820
It's a different way. It's a different way to study. So the other way to study, as you point
00:20:11.380
out, is the distinction between consciousness tout court and not being conscious, like in sleep
00:20:18.820
or in anesthesia. So you can take a subject, a normal subject, who hears, sees, et cetera, then you anesthetize
00:20:25.140
them and you can see what changes; it's called LOC, loss of consciousness.
00:20:30.820
And then you can see what happens when they come out of anesthesia. In each case you're
00:20:36.500
looking at different things, you know, they're not similar, and there are different concerns
00:20:42.100
you have to have. So for instance, when you go to sleep or when you are anesthetized, all sorts of
00:20:46.980
other things change as well. For instance, you don't have memory anymore. You can't move
00:20:52.100
anymore, right? Because you become paralyzed in deep sleep, and in anesthesia, you also become paralyzed,
00:20:57.940
but it's much more dramatic compared to looking at something. And I'm, as you point out, I'm still
00:21:02.340
conscious. It's just the content of consciousness changes. So they're just different experimental
00:21:07.860
techniques to track different things. In one case, you're specifically tracking the content of my visual
00:21:13.060
consciousness. In the other case, you're tracking the entire physical substrate of being
00:21:18.260
conscious in the Nagel sense: it feels like something to be me.
00:21:22.260
Yeah. The, the problem you always run into there, which you just alluded to is that it's hard to
00:21:28.580
differentiate and perhaps in principle, it could be impossible to differentiate a true cessation of
00:21:34.660
consciousness from merely a loss of memory, right? So I, you know, I, I've, I disagree.
00:21:40.660
Yeah. Well, it was, yeah. So let's, let's talk about that. I mean, I think it's, it's harder to make
00:21:44.180
this case for, for general anesthesia, given what we think about just the, the underlying state of the
00:21:50.180
brain. But for deep sleep, it's often, I've often felt that we're too quick to allege that it
00:21:58.740
interrupts conscious experience because many people, I count myself among them, at least believe,
00:22:06.340
imagine that they've had an experience of, of dreamless sleep. Now, whether that's, you know,
00:22:13.780
stage four delta sleep or not, I don't know, but there's a very common experience of being asleep and
00:22:18.660
dreaming and, and there's something that it's like to be doing that, but there's a more esoteric
00:22:23.080
experience that people have had of being asleep and experiencing just a, I mean, it's very much like
00:22:30.260
a, a meditative, you know, samadhi-like experience. I mean, it's just, you know, a vast experience of,
00:22:36.280
of contentless consciousness while asleep, right? And that's, that's certainly reported in,
00:22:42.300
among yogis who claim to, to have been able to train that. So, so I, I just think, you know,
00:22:48.360
that's the, the fact that most of us, most of the time don't remember a thing about what it was like
00:22:55.920
to be deeply asleep could be just analogous to, to most of us not remembering our dreams. You know,
00:23:02.020
I mean, I, you know, I presume I dream every night, but at this point in my life, I rarely remember
00:23:05.980
anything from my period of, of sleep. Well, so there is one technique at least that tries to get,
00:23:12.380
I can think of two techniques to try to get at that. One is to put you in a magnet or probably
00:23:18.320
more plausibly to put EEG electrodes on you as you sleep. And people have done this. I'm thinking of
00:23:24.300
one study by Siclari et al from the Tononi lab, in fact, where they did exactly that. They had high
00:23:30.260
density EEG on these regular people, but they're trained observers. So they've done
00:23:35.820
this for a while now, and you randomly wake them up, right? So randomly I look at your EEG and I wake
00:23:43.140
you up now and I ask you, you know, tell me, did anything, was anything in your mind just the moment
00:23:51.640
before you woke up? And I look at the 10 seconds before I wake you up and then I can correlate. And
00:23:57.460
of course, what they found, which is, if you can go back to the original report back in 53,
00:24:02.760
defining REM and non-REM sleep, it is true. 30% of the time, even in deep sleep, people do report
00:24:10.340
dreamlike experiences. Okay. So even in deep sleep, you can have dreamlike experiences. However,
00:24:17.960
they typically don't have this sort of elaborate narrative structure. You know, they're more sort of episodic:
00:24:23.160
oh, I just saw a face, or I just had this feeling of dread, something bad was
00:24:27.460
going to happen. And conversely, in REM sleep, only about 80% of the time do people actually
00:24:33.940
report, you know, have dream reports. So that's one way to get around this fact that most people
00:24:39.380
forget most of the dream. You wake them up spontaneously during the night and you ask them,
00:24:44.460
do you remember anything over the last 10 seconds? Now, of course, it's also possible
00:24:47.980
that if they have lost short-term memory as well, then of course you wouldn't know
00:24:54.060
at all. But most people can report that they had dream experiences, or that they
00:24:59.180
didn't have dream experiences. And here, by the way, so you can, again, track, people have used
00:25:04.340
this technique to track the footprints. It is true when you have dream experiences, it's typically
00:25:09.860
a particular part of the back of the brain, the posterior hot zone, that is activated in the
00:25:18.740
But when you're talking about the posterior hot zone, you're not talking about visual cortex, you're
00:25:25.420
Yeah, exactly. Precuneus, posterior cingulate, sort of parietal areas in some higher order
00:25:32.280
temporal area. So behind the central fissure that goes across the brain in the back.
00:25:38.900
So, let's step back for a moment and review some of the underlying philosophical controversy
00:25:47.860
because there are many ways in which people think about what is going on here. And I want us to
00:25:55.580
get to integrated information theory, IIT, which I believe you still are a supporter of
00:26:03.100
and contributor to. I don't know if it fully captures your current view of things. But
00:26:08.140
we'll get there. But I mean, you express in the book a similar astonishment and frustration
00:26:14.120
that I've always felt with the eliminative materialists of the Dan Dennett and especially
00:26:20.520
Paul and Patricia Churchland sort, who just simply say that consciousness itself is an illusion.
00:26:28.440
It seems to me, it's always seemed to me to be absolutely obvious that consciousness is the
00:26:33.400
one thing about which you cannot say that. Because any illusion is as much a demonstration
00:26:38.780
of consciousness as any veridical perception of anything, right? I mean, it's the one thing.
00:26:43.300
The fact that something seems to be happening is the one thing we can be sure of, even if we're
00:26:49.880
René Descartes. I couldn't agree more. It's the one thing I cannot doubt, because in doubting,
00:26:55.160
I affirm it. And if I call it an illusion, well, then everything is an illusion and the word becomes
00:27:00.420
meaningless. Yes. So I think this is the biggest failing by far of physicalism, right? So physicalism,
00:27:07.220
you know, it's a metaphysical idea that the only thing that exists is the physical. You know,
00:27:12.840
I think we have to discuss what is meant by that. But sort of most people have an intuition,
00:27:16.860
you know, it's matter and energy, right? Good old materialism. And then the challenge is,
00:27:23.040
well, if you believe that, fine. But then how, how does, how do feelings emerge? And, you know,
00:27:29.640
because, you know, philosophy has been utterly unable, and science has been utterly
00:27:35.180
unable, to explain how any sort of feeling, the fact that it feels like something to be me, emerges out of
00:27:41.100
atoms and the void. That's the biggest challenge that physicalism has utterly failed to meet, number one.
00:27:48.440
Yeah. Which is to say, so again, I've spoken about the hard problem of consciousness,
00:27:53.040
a lot on my podcast and over at Waking Up in the app. And I've interviewed Chalmers and
00:28:00.300
Anil Seth. And I mean, we've covered this ground before, but the thing to recognize is that the fact
00:28:06.380
that there is this explanatory gap, the fact that our intuitions that seem to anchor every other type
00:28:16.080
of scientific explanation, you know, the wetness of water, the brittleness of, you know, any higher
00:28:22.320
level material, all of those physicalist reductions run through. And yet with the hard problem, with the
00:28:30.060
fact that the lights are on subjectively, something seems to be left out. But the fact that
00:28:35.840
we have this intuitive impasse doesn't suggest, much less prove, that it isn't
00:28:44.980
simply so, right? We just may be in a bad position to think
00:28:50.380
about how consciousness emerges. And it may always seem like a miracle, even if we had, even if we had
00:28:56.760
the answer in hand and it just was whatever, 40 hertz oscillations in thalamocortical loops,
00:29:02.760
which I think was once a pet thesis of yours. Yes. If that's just the answer, if God told us
00:29:08.160
that's how it happens, well, it may always just seem like a brute fact that doesn't
00:29:14.280
actually explain anything. You're entirely right. My dog doesn't understand, you know,
00:29:20.800
general relativity or the stock market or elections. My dog is a perfectly intelligent member of a species
00:29:27.720
that has survived for millions of years, but certain things are cognitively beyond it. And maybe this is
00:29:33.940
also beyond us. But now there's additional evidence, right, for physicalism being
00:29:39.380
inadequate. Namely, at the rock bottom of physicalism is the question: how do we define the
00:29:46.180
physical? And if you listen to anything in the science of quantum mechanics of the last 30
00:29:52.160
years, like we all know, it's deeply troubling and it's very difficult to define what is the physical.
00:29:59.060
And the fact that the physical includes such bizarre things as two entangled particles that
00:30:03.740
are at opposite ends of the universe. If you observe one and determine its state,
00:30:09.320
instantaneously the state of the other one is determined, right?
00:30:16.500
Non-locality. So what sort of, I mean, what sort of physicalism is that if things are entangled
00:30:22.100
across the universe, right? That's certainly not my grandfather's physicalism. That's certainly
00:30:26.540
not Democritus, you know, atoms in the void. And then it now turns out that there's an entire
00:30:32.180
school of physics that does what's called contextuality, or first-person physics,
00:30:38.460
right? Where they accept as an observational,
00:30:43.580
as an empirical fact, that what exists really depends on what you measure. And if you have
00:30:49.560
different measurement protocols, you sort of measure different things
00:30:54.300
that weren't there before. So the mere act of observing, you know, it's a participatory universe,
00:31:00.180
the mere act of observing creates reality. Well, how does that sit with standard physicalism?
00:31:06.980
Let me put on my arch-materialist hat, which I rarely wear. But,
00:31:13.600
what about the claim that however strange quantum mechanics may be, and, and I guess there's two
00:31:20.920
things to acknowledge here. One is that there, there are very strange versions of it that seem
00:31:25.760
to move consciousness and the observer out of the, out of any part of the, the significant picture
00:31:32.320
and give us something super determinism or, or, or, or, um, you know, many worlds theory,
00:31:37.960
right? Where like, you know, the, the universe is splitting, you know, at every, at every possible
00:31:44.740
But how do we know this is exactly. So in fact, I've just written a paper with Hartmut Neven who runs
00:31:50.160
Google's Quantum AI lab. It may be precisely consciousness that's responsible: at the moment you create a
00:31:59.440
superposition, there are two or a multitude of different universes that split off. So how do
00:32:07.340
we know that consciousness isn't in fact responsible for that splitting off?
00:32:11.700
Well, I'm going to, I'm going to have you fight with David Deutsch about that, but leaving that aside,
00:32:16.080
what about the claim that at the scale of brains, and even at the scale of neurons, kind of the
00:32:23.540
wet and hot mess of goo that is in our heads, a perfectly
00:32:29.940
classical view of physics is good enough? Because everything decoheres, and
00:32:36.260
at the scale of brains, none of that highfalutin physics need apply.
00:32:42.640
Yeah. In short, brains are warm and wet. So this is the argument I've always made in all my previous
00:32:47.340
books. The brain is warm and wet. So if you look at Google's quantum computers, they operate at 23
00:32:55.180
millikelvin. That's about a hundred times colder than the temperature of outer space,
00:33:01.180
right? Which is the temperature of the background radiation, a couple of degrees kelvin. And
00:33:05.580
of course, more than ten thousand times colder than what you and I experience right now here on the
00:33:10.940
West Coast, right? So my belief has always been, yeah, the brain, as you said,
00:33:16.460
is warm and wet. And so it's irrelevant. But now we've learned over the last 10 years, well,
00:33:21.660
they can find entanglement in all sorts of soft-matter systems, in some gases. There was a paper recently
00:33:27.340
out of Barcelona, I think, that measured it in a gas at around minus 50 Celsius. So people are
00:33:36.220
now looking, as we better understand this, in what some people call the second revolution
00:33:41.340
in quantum mechanics, and it may not be true that it's only in these extremely cold systems that
00:33:48.460
entanglement and superposition occur; they may occur even in what physicists think of as soft matter, i.e., you know,
00:33:55.340
brains or bodies. This is an empirical question. In fact, together with some biologists and with the
00:34:03.180
previously mentioned Hartmut Neven, we're now doing some experiments in organoids and in flies to
00:34:10.780
test exactly this. There's this interesting phenomenon with xenon. So xenon is a rare gas, and it acts as
00:34:17.740
an anesthetic. This is well known. In fact, it's a pretty good anesthetic, except it's expensive. But
00:34:23.020
there's this one paper many years ago in mice that claimed there's this differential isotope effect of
00:34:31.340
anesthetic potency. In other words, different isotopes of xenon, so 129, 131, 132, they all have
00:34:42.140
the same chemistry. Chemically, they're all the same. They all have eight outer electrons, so none of them
00:34:48.060
interact. They're called noble gases because they're so noble, they don't interact with anyone. Okay.
00:34:53.820
So the only thing that differs is the nucleus inside, right? They have an additional one, two, or
00:34:59.500
three neutrons. And so their nuclear spin differs. Some of them have nuclear spin zero;
00:35:04.300
some of them have nuclear spin one half. And it turns out those with nuclear spin one half have
00:35:09.580
a different anesthetic potency in mice (this hasn't been done in people yet) than the
00:35:16.300
ones with spin zero. Now, this has to be replicated, right? Two out of three experiments in
00:35:21.980
biology can't be replicated. So we have to repeat this, which is what we're doing in flies,
00:35:27.500
in cerebral organoids, and in primary cultures, with Ken Kosik, who's a professor at UCSB,
00:35:34.140
if this is replicated, then it would seem to indicate that something as subtle as nuclear spin
00:35:41.180
actually makes a difference at room temperature in these organisms. That
00:35:50.300
would be weird. And that's the great thing about science, right? If you
00:35:54.860
ask nature the right question and you, you know, listen carefully,
00:36:00.300
you can get an answer, even if the answer may be extremely unexpected. So that would be cool. We'll see.
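As a rough sanity check on the quantitative claims in this exchange, here is a small illustrative sketch of my own (not from the paper or the speakers): it computes the temperature ratios for 23-millikelvin quantum hardware, and groups the xenon isotopes mentioned by nuclear spin. The spin values are standard nuclear data; note that xenon-131 actually carries spin 3/2 rather than 1/2, though it still falls on the spin-carrying side of the comparison. The anesthetic-potency effect itself is not modeled here.

```python
# --- Temperature ratios: quantum hardware vs. space vs. a living room ---
CHIP_TEMP_K = 0.023   # Google's quantum processors run near 23 millikelvin
CMB_TEMP_K = 2.7      # cosmic microwave background ("temperature of outer space")
ROOM_TEMP_K = 293.0   # roughly 20 degrees Celsius, in kelvin

print(round(CMB_TEMP_K / CHIP_TEMP_K))   # ~117: about a hundred times colder than deep space
print(round(ROOM_TEMP_K / CHIP_TEMP_K))  # ~12739: on the order of ten thousand times colder than a room

# --- Nuclear spins of the xenon isotopes mentioned ---
# Chemically identical (same 54 electrons, same closed outer shell);
# only the neutron count, and hence the nuclear spin, differs.
XENON_SPIN = {
    129: 0.5,   # xenon-129: spin 1/2
    131: 1.5,   # xenon-131: spin 3/2 (not 1/2, but still nonzero)
    132: 0.0,   # xenon-132: spin 0
}

def split_by_spin(spins):
    """Partition isotopes into spin-carrying vs. spin-zero groups --
    the comparison the mouse anesthesia experiments turn on."""
    with_spin = sorted(a for a, s in spins.items() if s > 0)
    spin_zero = sorted(a for a, s in spins.items() if s == 0)
    return with_spin, spin_zero

print(split_by_spin(XENON_SPIN))  # ([129, 131], [132])
```

The point of the grouping is only that the two experimental arms differ in nothing but nuclear spin, which is why a replicated potency difference would be so surprising for a warm, wet brain.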
00:36:06.060
Yeah. Well, keep me apprised of that, because that would shake some
00:36:11.900
assumptions I have working in the background here. Me too. So let's talk about, again,
00:36:17.980
I want us to get to IIT, but how would you characterize the most common ontology of
00:36:27.500
mind in neuroscience and the sciences of mind generally now? I think in your book
00:36:32.940
you described it as computational functionalism. It's certainly
00:36:37.420
the view that is causing people to imagine that developments in AI have some
00:36:45.020
future implication for how we understand minds in their totality, and that we may build
00:36:51.100
conscious machines, that we may be able to upload ourselves into the matrix. All of this seems to
00:36:56.140
suppose that mind, you know, we already know this is true of intelligence, I mean,
00:37:00.700
intelligence is clearly on some level a matter of information processing, but the underlying thesis
00:37:04.620
seems to be that consciousness itself is an emergent property of information processing and
00:37:10.140
therefore really by definition is substrate independent. Is that an accurate view of what you
00:37:15.420
think most people think in the sciences of mind at the moment? Yeah. It's based on some version of
00:37:21.180
functionalism, i.e., that everything has a function. Clearly that's true for intelligence.
00:37:26.620
Consciousness also has a function, you know, enabling planning, long-term planning,
00:37:30.940
short-term planning, a summary of the current situation that's ongoing in my body, whatever
00:37:37.100
it may be, it has one or more functions. And once you replicate those functions, in particular
00:37:41.980
on a Turing machine, right, this is called computational
00:37:47.180
functionalism, then that machine will have all of the properties, including consciousness. So that's a deep
00:37:53.260
belief, particularly in the tech industry and among computational neuroscientists. And now, of course,
00:37:58.300
with LLMs, it's present at the level of the general public: people assume, yeah,
00:38:04.940
these things sooner or later will be sentient, will be conscious. Right. Here is what I predict
00:38:11.100
is going to happen. And, um, it worries me on some level. I think we will probably
00:38:17.020
build, you know, humanoid robots that fully pass the Turing test, and better,
00:38:24.060
because they'll be superhuman in virtually every respect. And once we get out of
00:38:29.900
the uncanny valley with them, they'll certainly seem conscious to us because we'll build them to
00:38:34.300
seem conscious to us. And they'll talk about their experience and their emotions,
00:38:38.860
et cetera. They'll certainly be very attentive to our emotions, and better judges of them than
00:38:43.660
probably any person we've met. And we will just, you know, effortlessly slide into this sense of
00:38:49.740
being in relationship to these entities. And we will still not understand the neural basis of
00:38:55.820
consciousness or the computational basis of consciousness or any other basis of
00:38:59.900
consciousness. And we'll be in the presence of these artifacts, which seem conscious to us. And we'll
00:39:05.980
simply just lose sight of whether it's an interesting problem to wonder whether or not they are conscious.
00:39:11.020
We will helplessly perceive them to be conscious because they will, they will seem that way.
00:39:16.140
Do you, can you imagine that we're going to stumble into that sort of Westworldian future,
00:39:21.820
where you're certainly not going to be able to mistreat
00:39:26.060
these robots, because you'll feel like a psychopath; you know, only a psychopath could
00:39:30.300
want to mistreat something that so perfectly seems to be a locus of experience. It just seems like
00:39:37.900
the problem may evaporate for most people. I mean, obviously, some people will still hold on to it
00:39:41.820
and wonder whether these machines are conscious, because obviously it'll be very important
00:39:46.620
ethically to understand whether we have built machines that can suffer. You know, if the lights
00:39:51.340
are on over there and, you know, we're basically murdering our robots every time we turn them off,
00:39:57.260
that will be an interesting problem to have created for ourselves. But I just think if, if the robots,
00:40:02.620
if we're out of the uncanny valley before we actually understand how consciousness is integrated
00:40:10.060
with the physics of things, we just might lose sight of the problem.
00:40:13.740
For a short period, this may happen, yes. But I don't
00:40:21.100
think it's an inherently stable situation. Because these intelligences will
00:40:28.620
evolve at a timescale that simply doesn't match the timescale at which human society, or humans as
00:40:35.020
individuals, evolve. And so they will rapidly surpass us. And of course, if we believe
00:40:40.780
they're conscious and, as you point out, they have all the attendant moral responsibilities and
00:40:46.060
rights, it'll further dehumanize us, and further dehumanize nature, and we'll become less and less
00:40:52.300
relevant. Because, you know, we're building these other guys; they are our successors. It's
00:40:56.460
obviously the next step beyond humanity. They're smarter than us. They're more robust than
00:41:01.580
us. Ultimately, they're more aggressive than us because we'll build them for warfare. You can see
00:41:06.460
that already, you know, beginning to happen in Ukraine and Russia. And where does this leave
00:41:13.660
Homo sapiens sapiens? Yeah, but for most people, this problem will go away, because
00:41:20.380
of course they're conscious. How can you not believe it? You can talk to them. People will have intimate
00:41:24.540
relationships already now, right? You can talk to these companions and more.
00:41:29.420
If you'd like to continue listening to this conversation, you'll need to subscribe at
00:41:33.900
samharris.org. Once you do, you'll get access to all full length episodes of the Making Sense podcast.
00:41:40.140
The podcast is available to everyone through our scholarship program. So if you can't afford a
00:41:44.780
subscription, please request a free account on the website. The Making Sense podcast is ad free
00:41:50.540
and relies entirely on listener support. And you can subscribe now at samharris.org.