#113 — Consciousness and the Self
Episode Stats
Length
1 hour and 43 minutes
Words per Minute
157.1
Summary
Anil Seth is a professor of cognitive and computational neuroscience at the University of Sussex and founding co-director of the Sackler Center for Consciousness Science. He's focused on the biological basis of consciousness and studies it in a highly multidisciplinary way, bringing neuroscience, mathematics, artificial intelligence, computer science, psychology, philosophy, and psychiatry together in his lab. He is the editor-in-chief of the academic journal Neuroscience of Consciousness, published by Oxford University Press, and he has published more than 100 research papers in a variety of fields. His background is in natural sciences, computer science, and AI, and he did postdoctoral research for five years at the Neurosciences Institute in San Diego under the Nobel laureate Gerald Edelman. The conversation covers a lot of ground: the hard problem, where consciousness might emerge in nature, levels of consciousness, anesthesia, sleep, dreams, the waking state, perception as a controlled hallucination, different notions of the self, and conscious AI.
Transcript
00:00:00.000
Today I spoke with Anil Seth. He is a professor of cognitive and computational neuroscience at
00:00:22.940
the University of Sussex and founding co-director of the Sackler Center for Consciousness Science.
00:00:30.000
And he's focused on the biological basis of consciousness and is studying it in a very
00:00:35.860
multidisciplinary way, bringing neuroscience and mathematics and artificial intelligence
00:00:41.680
and computer science, psychology, philosophy, psychiatry, all these disciplines together
00:00:47.300
in his lab. He is the editor-in-chief of the academic journal Neuroscience of Consciousness,
00:00:54.180
published by Oxford University Press, and he has published more than a hundred research papers
00:01:00.660
in a variety of fields. His background is in natural sciences and computer science and AI,
00:01:09.100
and he also did postdoctoral research for five years at the Neurosciences Institute in San Diego
00:01:15.160
under Gerald Edelman, the Nobel laureate. And we cover a lot of ground here. We really get into
00:01:22.200
consciousness in all its aspects. We start with the hard problem, then talk about where consciousness
00:01:28.280
might emerge in nature, talk about levels of consciousness, anesthesia, sleep, dreams,
00:01:35.040
the waking state. We talk about perception as a controlled hallucination, different notions of the
00:01:41.900
self, conscious AI, many things here. I found it all fascinating, and I hope you do as well.
00:01:50.820
And so, without further delay, I bring you Anil Seth.
00:01:57.640
I am here with Anil Seth. Anil, thanks for coming on the podcast.
00:02:03.940
So, I think I first discovered you, I believe I'd seen your name associated with various papers,
00:02:10.460
but I think I first discovered you the way many people had after your TED Talk. You gave a much-loved
00:02:16.880
TED Talk. Perhaps you can briefly describe your scientific and intellectual background.
00:02:22.520
It's quite a varied background, actually. I mean, I think my intellectual interest has always been in
00:02:27.440
understanding the physical and biological basis of consciousness and what practical implications
00:02:33.520
that might have in neurology and psychiatry. But, you know, when I was an undergrad student at
00:02:40.200
Cambridge in the early 1990s, consciousness was, certainly for a student then, and in a place
00:02:46.620
like Cambridge, not a thing you could study scientifically. It was still very much a domain
00:02:51.560
of philosophy. And at that time, I still had this kind of idea that physics was going
00:02:58.440
to be the way to solve every difficult problem in science and philosophy. So, I started off
00:03:04.480
studying physics. But then, through the undergrad years, I got diverted towards psychology as more
00:03:11.080
of a direct route to these issues of great interest and ended up graduating with a degree in experimental
00:03:17.020
psychology. After that, I moved to Sussex University, where I am now, actually, again, to do a
00:03:23.880
master's and a PhD in computer science and AI. And this was partly because of the need I felt at
00:03:31.200
the time to move beyond these box and arrow models of cognition that were so dominating psychology and
00:03:38.020
cognitive science in the 90s towards something that had more explanatory power. And the rise of
00:03:44.720
connectionism and all these new methods and tools in AI seemed to provide that. So, I stayed at Sussex and
00:03:53.320
did a PhD, actually, in an area which is now called artificial life. And I became quite diverted,
00:03:58.960
actually, ended up doing a lot of stuff in ecological modelling and thinking a lot more here about how
00:04:04.180
brains, bodies and environments interact and co-construct cognitive processes. But I'd sort of
00:04:10.300
left consciousness behind a little bit then. And so, when I finished my PhD in 2000, I went to San Diego to
00:04:18.080
the Neurosciences Institute to work with Gerald Edelman. Because certainly then, San Diego was
00:04:23.340
one of the few places, certainly that I knew of at the time, that you could legitimately study
00:04:29.180
consciousness and work on the neural basis of consciousness. Edelman was there, Francis Crick
00:04:33.520
was across the road at the Salk Institute. People were really doing this stuff there. So, I stayed there
00:04:38.600
for about six years and finally started working on consciousness, but bringing together all these
00:04:44.220
different, you know, different traditions of math, physics, computer science, as well as the tools of
00:04:50.080
cognitive neuroscience. And then, for the last 10 years, I've been back at Sussex, where I've been
00:04:54.660
running a lab. And it's called the Sackler Center for Consciousness Science. And it's one of the growing
00:05:01.040
number of labs that are explicitly dedicated to solving or studying, at least, the brain and biological
00:05:07.660
basis of consciousness. Yeah, well, that's a wonderful pedigree. I've heard stories, and I never
00:05:13.440
met Edelman. I've read his books, and I'm familiar with his work on consciousness. But he was famously
00:05:20.080
a titanic ego, if I'm not mistaken. I don't want you to say anything you're not comfortable with. But
00:05:25.700
everyone who I've ever heard have an encounter with Edelman was just amazed at how much space he
00:05:32.580
personally took up in the conversation. I've heard that too. And I think, you know, there's some truth to that.
00:05:37.500
What I can say from the other side is that when I worked for him and with him, you know, firstly,
00:05:43.220
it was an incredible experience. And I felt very lucky to have that experience, because he had a
00:05:48.220
large ego, but he also knew a lot too. I mean, he really had been around and had contributed to
00:05:53.160
major revolutions in biology and in neuroscience. But he treated the people he worked with, I think,
00:05:59.320
often very kindly. And one of the things that was very clear in San Diego at the time,
00:06:05.120
he didn't go outside of the Neurosciences Institute that much. It was very much his empire. But when
00:06:11.420
you were within it, you got a lot of his time. So, you know, I remember many occasions just being in
00:06:16.360
the office. And, you know, most days, I would be called down for a discussion with Edelman about
00:06:21.360
this subject or that subject or this new paper or that new paper. And that was a very instructive
00:06:26.840
experience for me. I know he was quite difficult in many interviews and conversations outside
00:06:32.960
the NSI, which is a shame, I think, because his legacy really is pretty extraordinary. I'm sure
00:06:38.780
we'll get onto this later. But one of the other reasons I went there was one of the main reasons
00:06:43.360
I went there was because I'd read some of the early work on dynamic core theory, which has later
00:06:48.960
become Giulio Tononi's very prominent integrated information theory. And I was under the impression
00:06:55.140
that Giulio Tononi was still going to be there when I got there in 2001, but he had already left.
00:07:01.380
And he wasn't really speaking much with Edelman at the time. And it was a shame that they didn't
00:07:07.000
continue their interaction. And, you know, when we tried to organize a festschrift, a few of us,
00:07:13.100
for Edelman some years ago now, it was quite difficult to get the people together that had
00:07:19.600
really been there and worked with him at various times of his career. I think of the people that
00:07:26.780
have gone through the NSI and worked with Edelman, there's an extraordinary range of people who've
00:07:30.320
contributed huge amounts, not just in consciousness research, but in neuroscience generally, and of
00:07:34.500
course, in molecular biology before that. So it was a great, great experience for me. But yeah,
00:07:39.560
I know he could also be pretty difficult at times too. You had to have a pretty thick skin.
00:07:44.740
So, well, we have a massive interest in common. No doubt we have many others, but consciousness is
00:07:50.260
really the center of the bullseye as far as my interests go. And really, as far as anyone's
00:07:56.860
interests go, if they actually think about it, it really is the most important thing in the universe
00:08:01.820
because it's the basis of all of our happiness and suffering and everything we value. It's the space
00:08:08.940
in which anything that matters can matter. So the fact that you are studying it and thinking about
00:08:15.080
it as much as you are just makes you the perfect person to talk to. I think we should start with
00:08:21.560
many of the usual starting points here because I think they're the usual starting points for a reason.
00:08:26.980
Let's start with a definition of consciousness. How do you define it now?
00:08:31.360
I think it's kind of a challenge to define consciousness. There's a sort of easy folk definition,
00:08:36.300
which is that consciousness is the presence of any kind of subjective experience whatsoever.
00:08:42.560
For a conscious organism, there is a phenomenal world of subjective experience that has the
00:08:49.220
character of being private, that's full of perceptual qualia or content, colors, shapes, beliefs,
00:08:56.820
emotions, other kinds of feeling states. There is a world of experience that can go away
00:09:02.580
completely in states like general anesthesia or dreamless sleep. It's very easy to define it that
00:09:08.060
way. To define it more technically is always going to be a bit of a challenge. And I think sometimes
00:09:14.360
there's too much emphasis put on having a consensus technical definition of something like consciousness
00:09:21.260
because history of science has shown us many times that definitions evolve along with our scientific
00:09:27.080
understanding of a phenomenon. We don't sort of take the definition and then transcribe it into
00:09:31.020
scientific knowledge in a unidirectional way. So, so long as we're not talking past each other and
00:09:36.420
we agree that consciousness picks out a very significant phenomenon in nature, which is the
00:09:43.840
presence of subjective experience, then I think we're on reasonably safe terrain.
00:09:48.980
Many of these definitions of consciousness are circular. We're just substituting another word for
00:09:55.100
consciousness in the definition, like sentience or awareness or subjectivity or even something like
00:10:01.520
qualia, I think is parasitic on the undefined concept of consciousness.
00:10:06.180
Sure, I think that's right. But then there's also a lot of confusions people make too. So,
00:10:09.740
I'm always surprised by how often people confuse consciousness with self-consciousness.
00:10:14.580
And I think our conscious experiences of selfhood are part of conscious experience as a whole,
00:10:21.200
but only a subset of those experiences. And then there are arguments about whether there's such
00:10:29.740
a thing as phenomenal consciousness that's different from access consciousness, where
00:10:33.460
phenomenal consciousness refers to, you know, this impression that we have of a very rich
00:10:38.960
conscious scene, perhaps envisioned before us now, that might exceed what we have cognitive access
00:10:45.120
to. And other people will say, well, no, there's no such thing as phenomenal consciousness beyond
00:10:49.600
access consciousness. So, there's a certain circularity, I agree with you there, but there
00:10:54.500
are also these important distinctions that can lead to a lot of confusion when we're discussing consciousness.
00:11:01.980
And I want to just revisit the point you just made about not transcribing a definition of a
00:11:08.380
concept that we have into our science as a way of capturing reality. And there are things about
00:11:14.080
which we have a folk psychological sense, which completely break apart once you start studying
00:11:20.160
them at the level of the brain. So, something like memory, for instance, we have the sense that it's
00:11:25.060
one thing intuitively, you know, pre-scientifically. We have the sense that to remember something,
00:11:32.220
whatever it is, is more or less the same operation, regardless of what it is. Remembering what you
00:11:39.020
ate for dinner last night, remembering your name, remembering who the first president of the United
00:11:43.840
States was, remembering how to swing a tennis racket. These are things that we have this one word for,
00:11:51.880
but we know neurologically that they're quite distinct operations, and you can disrupt one and
00:11:57.280
have the other intact. The promise has been that consciousness may be something like that,
00:12:02.900
that we could be similarly confused about it, although I don't think we can be. I think consciousness
00:12:08.560
is unique as a concept in this sense, and this is why I'm taken in more by the so-called hard
00:12:16.100
problem of consciousness than I think you are. I think we should talk about that, but before we do,
00:12:22.060
I think the definition that I want to put in play, which I know you're quite familiar with,
00:12:27.060
is the one that the philosopher Thomas Nagel put forward, which is that consciousness is the fact
00:12:33.520
that it's like something to be a system, whatever that system is. So if a bat is conscious, this comes
00:12:39.880
from his famous essay, What Is It Like To Be A Bat? If a bat is conscious, whether or not we can
00:12:45.660
understand what it's like to be a bat, if it is like something to be a bat, that is consciousness in
00:12:52.560
the case of a bat. However inscrutable it might be, however impossible it might be to map that experience
00:12:58.160
onto our own. If we were to trade places with a bat, that would not be synonymous with the lights going
00:13:04.440
out. There is something that it's like to be a bat if a bat is conscious. That definition, though, it's
00:13:10.260
really not one that is easy to operationalize, and it's not a technical definition. There's something
00:13:16.480
sufficiently rudimentary about that that it has always worked for me, and when we begin to move away
00:13:23.900
from that definition into something more technical, my experience has been, and we'll get to this as
00:13:30.940
we go into the details, that the danger is always that we wind up changing the subject to something
00:13:36.240
else that seems more tractable. We're no longer talking about consciousness in Nagel's sense, we're
00:13:42.280
talking about attention, or we're talking about reportability, or mere access, or something. So how do
00:13:49.880
you feel about Nagel's definition as a starting point? I like it very much as a starting point. I
00:13:54.860
think it's pretty difficult to argue with that as a very basic fundamental expression of what we mean
00:14:03.940
by consciousness in the round. So I think that's fine. I partly disagree
00:14:11.640
with you, I think, when we think about the idea that consciousness might be more than one thing. And here,
00:14:19.120
I'm much more sympathetic to the view that, heuristically at least, the best way to scientifically
00:14:24.420
study consciousness, and philosophically to think about it as well, is to recognize that we might be
00:14:30.240
misled about the extent to which we experience consciousness as a unified phenomenon. And there's
00:14:36.940
a lot of mileage in recognizing how, just like the example for memory, recognizing how conscious
00:14:42.860
experiences of the world and of the self can come apart in various different ways.
00:14:47.320
Just to be clear, actually, I agree with you there. We'll get into that. But I completely agree with
00:14:52.040
you there that we could be misled about how unified consciousness is. The thing that's irreducible to
00:14:57.980
me is this difference between there being something that it's like and not. You know, the lights are on or
00:15:05.540
they're not. There are many different ways in which the lights can be on in ways that would surprise us.
00:15:11.320
For instance, it's quite possible that the lights are on in our brains in more than one spot. We'll
00:15:19.120
talk about split brain research, perhaps. But there are very counterintuitive ways the lights could be
00:15:23.680
on. But just the question is always, is there something that it's like to be that bit of
00:15:29.140
information processing or that bit of matter? And that is always the cash value of a claim for
00:15:35.660
consciousness. Yeah, I'd agree with that. I think that it's perfectly reasonable to put the question
00:15:40.380
in this way, that for a conscious organism, there is something it is like to be that organism. And
00:15:46.600
the thought is that there's going to be some physical, biological, informational basis to that
00:15:53.940
distinction. Now, you've written about why we really don't need to waste much time on the hard
00:16:01.200
problem. Let's remind people what the hard problem is. David Chalmers has been on the podcast,
00:16:06.780
and I've spoken about it with other people. But perhaps you want to introduce us to the hard
00:16:11.140
problem briefly. The hard problem has been, rightly so, one of the most influential philosophical
00:16:17.760
contributions to the consciousness debate for the last 20 years or so. And it goes right back to
00:16:24.920
Descartes. And I think it encapsulates this fundamental mystery that we've started talking about now,
00:16:30.600
that for some physical systems, there is also this inner universe, there is the presence of
00:16:39.300
conscious experience, there is something it is like to be that system. But for other systems,
00:16:43.080
tables, chairs, probably most computers, probably all computers these days, there is nothing it is
00:16:48.760
like to be that system. And what the hard problem does, it pushes that intuition a bit further, and
00:16:54.740
it distinguishes itself from the easy problem in neuroscience. And the easy problem, according to
00:16:59.620
Chalmers, is to figure out how the brain works in all its functions, in all its detail. So to figure
00:17:07.020
out how we do perception, how we utter certain linguistic phrases, how we move around the world
00:17:12.280
adaptively, how the brain supports perception, cognition, behavior in all its richness, in a way
00:17:17.780
that would be indistinguishable from, and here's the key really, in a way that would be indistinguishable
00:17:23.820
from an equivalent that had no phenomenal properties at all, that completely lacked
00:17:29.600
conscious experience. The hard problem is understanding how and why any solution to the
00:17:35.760
easy problem, any explanation of how the brain does what it does in terms of behavior, perception,
00:17:40.640
and so on, how and why any of this should have anything to do with conscious experiences at all.
00:17:45.980
And it rests on this idea of the conceivability of zombies, and this is one reason I don't really
00:17:52.180
like it very much. I mean, the hard problem has its conceptual power over us because it asks us to
00:17:58.480
imagine systems, philosophical zombies, that are completely equivalent in terms of their function
00:18:05.540
and behavior to you or to me or to a conscious bat, but that instantiate no phenomenal
00:18:13.360
properties at all, the lights are completely off for these philosophical zombies. And if we can
00:18:19.740
imagine such a system, if we can imagine such a thing, a philosophical zombie, you or me,
00:18:24.660
then it does become this enormous challenge. You think, well, then what is it or what could it be
00:18:30.600
about real me, real you, a real conscious bat, that gives rise to, that requires or entails, that there are
00:18:39.360
also these phenomenal properties, that there is something it is like to be you or me or the bat.
00:18:46.280
And it's because Chalmers would argue that such things are conceivable, that the hard problem seems
00:18:53.140
like a really huge problem. Now, I think this is a little bit of a, I think we've moved on a little
00:19:00.100
bit from these conceivability arguments. Firstly, I just think that they're pretty weak. And the more you
00:19:06.540
know about a system, the more we know about the easy problem, the less convincing it is to imagine
00:19:13.840
a zombie alternative. Think about, you know, you're a kid, you look up at the sky and you see a 747
00:19:21.460
flying overhead. And somebody asks you to imagine a 747 flying backwards. Well, you can imagine a 747
00:19:27.420
flying backwards. But the more you learn about aerodynamics, about engineering, the harder it is
00:19:32.760
to conceive of a 747 flying backwards. You know, you simply can't build one that way. And that's my worry
00:19:38.900
about this kind of conceivability argument, that to me, I really don't think I can imagine in a serious
00:19:44.860
way the existence of a philosophical zombie. And if I can't imagine a zombie, then the hard problem loses its force.
00:19:53.480
That's interesting. I don't think it loses all of its force, or at least it doesn't for me. For me, the hard
00:19:58.740
problem has never really rested on the zombie argument, although I know Chalmers did a lot with
00:20:04.800
the zombie argument. I mean, so let's just stipulate that philosophical zombies are impossible. They're at
00:20:12.180
least, you know, what's called in the jargon, nomologically impossible. It's just a fact that we
00:20:17.500
live in a universe where if you built something that could do what I can do, that something would
00:20:22.700
be conscious. So there is no zombie Sam that's possible. And let's just also add what you just
00:20:29.260
said, that really, when you get to the details, you're not even conceiving of it being possible.
00:20:36.000
It's not even conceptually possible. You're not thinking it through enough. And if you did, you would
00:20:40.740
notice it break apart. But for me, the hard problem is really that with consciousness, any explanation
00:20:48.560
doesn't seem to promise the same sort of intuitive closure that other scientific explanations do.
00:20:59.420
Whatever it is, and we'll get to some of the possible explanations, it's not analogous to
00:21:06.000
something like life, which is an analogy that you draw and that many scientists have drawn to how we can
00:21:13.300
make a breakthrough here. It used to be that people thought life could never be explained in mechanistic
00:21:18.920
terms. There was a philosophical point of view called vitalism here, which suggested that you
00:21:25.620
needed some animating spirit, some élan vital in the wheelworks to make sense of the fact that living
00:21:33.180
systems are different from dead ones, the fact that they can reproduce and repair themselves from injury
00:21:39.000
and metabolize, and all the functions we see a living system engage in, which define what it is to be
00:21:45.940
alive. It was thought very difficult to understand any of that in mechanistic terms, and then lo and
00:21:52.220
behold, we managed to do that. The difference for me is, and I'm happy to have you prop up this analogy
00:21:59.820
more than I have, but the difference for me is that everything you want to say about life, with the
00:22:05.360
exception of conscious life, we have to leave consciousness off the table here, everything
00:22:10.580
else you want to say about life can be defined in terms of extrinsic functional relationships among
00:22:18.140
material parts. So, you know, reproduction and growth and healing and metabolism and homeostasis, all of this
00:22:25.460
is physics and need not be described in any other way. And even something like perception, you know,
00:22:33.580
the transduction of energy, let's say in vision, light energy into electrical and
00:22:40.140
chemical energy in the brain, and then the mapping of a visual space onto a visual cortex, all of that
00:22:46.220
makes sense in mechanistic physical terms until you add this piece of, oh, but for some of these
00:22:53.480
processes, there's something that it's like to be that process. For me, it just strikes me as a false
00:22:59.340
analogy, and with or without zombies, the hard problem still stays hard.
00:23:06.120
I think it's an open question whether the analogy will turn out to be false or not. It's difficult
00:23:11.060
for us now to put ourselves back in the mindset of somebody 80 years ago, 100 years ago, when vitalism
00:23:17.400
was quite prominent, and whether the sense of mystery surrounding something that was alive
00:23:24.540
seemed to be as inexplicable as consciousness seems to us today. So it's easy to say with hindsight,
00:23:33.180
I think, that life is something different. But, you know, we've encountered, or rather,
00:23:38.220
scientists and philosophers over centuries have encountered things that have seemed to be
00:23:43.240
inexplicable, that have turned out to be explicable. So I don't think we should rule out
00:23:49.520
a priori that there's going to be something really different this time about consciousness.
00:23:58.060
There's, I think, a more heuristic aspect to this, which is that if we run with the analogy of life,
00:24:04.380
what that leads us to do is to isolate the different phenomenal properties that co-constitute
00:24:12.380
what it is for us to be conscious. We can think, and we'll come to this, I'm sure,
00:24:17.000
about conscious selfhood as distinct from conscious perception of the outside world. We can think
00:24:21.360
about conscious experiences of volition and of agency that are also very sort of central to our,
00:24:28.300
certainly our experience of self. These give us phenomenological explanatory targets
00:24:34.040
that we can then try to account for with particular kinds of mechanisms.
00:24:39.200
It may turn out at the end of doing this that there's some residue. There is still something
00:24:45.260
that is fundamentally puzzling, which is this hard problem residue. Why are there any lights on for
00:24:53.720
any of these kinds of things? Isn't it all just perception? But maybe it won't turn out like that.
00:24:59.900
And I think to give us the best chance of it not turning out like that, there's a positive and a
00:25:06.300
negative aspect. The positive aspect is that we need to retain a focus on phenomenology.
00:25:13.620
And this is another reason why I think the hard/easy problem distinction can be a little bit
00:25:19.820
unhelpful because in addressing the easy problem, we are basically instructed to not worry about
00:25:27.080
phenomenology. All we should worry about is function and behavior. And then the hard problem
00:25:31.740
kind of gathers within its remit everything to do with phenomenology in this central mystery of why
00:25:37.500
there is some experience rather than no experience. The alternative approach, and this is something I've
00:25:42.480
kind of caricatured as the real problem, but David Chalmers himself has called it the mapping problem
00:25:47.140
and Varela, Francisco Varela talks about a similar set of ideas with his neurophenomenology,
00:25:54.540
is to not try to solve the hard problem tout court, not try to explain how it is possible that
00:26:01.520
consciousness comes to be part of the universe, but rather to individuate different kinds of
00:26:06.920
phenomenological properties and draw some explanatory mapping between neural, biological, physical
00:26:13.840
mechanisms and these phenomenological properties. Now, once we've done that and we can begin to explain
00:26:19.400
not why is there experience at all, but why are certain experiences the way they are and not other
00:26:24.900
ways? And we can predict when certain experiences will have particular phenomenal characters and so
00:26:32.800
on. Then we'll have done a lot more than we can currently do. And we may have to make use of
00:26:39.100
novel kinds of conceptual frameworks, maybe frameworks like information processing will run their course and
00:26:45.040
will require other more sophisticated kinds of descriptions of dynamics and probability in
00:26:50.260
order to build these explanatory bridges. So I think we can get a lot closer. And the negative aspect is
00:26:56.380
why should we ask more of a theory of consciousness than we should ask of other kinds of scientific
00:27:03.140
theories? And I know people have talked about this on your podcast before as well, but we do seem to
00:27:10.540
want more of an explanation of consciousness than we would do of an explanation in biology or physics,
00:27:16.600
that it somehow should feel intuitively right to us. And I wonder why this is such a big deal when it
00:27:25.600
comes to consciousness. Just because we're trying to explain something fundamental about ourselves doesn't
00:27:32.420
necessarily mean that we should apply different kinds of standards to an explanation that we would
00:27:37.040
apply in other fields of science. It just may not be that we get this feeling that something is
00:27:44.880
intuitively correct when it is in fact a very good scientific account of the origin of phenomenal
00:27:52.260
properties. Certainly, scientific explanations are not instantiations. There's no sense in which
00:27:58.420
a good theory of consciousness should be expected to suddenly realize the phenomenal properties
00:28:03.160
that it's explaining. But also, yeah, I do worry that we ask too much of theories of
00:28:08.940
consciousness this way. Yeah, well, we'll move forward into the details. And I'll just flag moments
00:28:13.740
where I feel like the hard problem should be causing problems for us. I do think it's not a matter of
00:28:20.820
asking too much of a theory of consciousness here. I think there are very few areas in science where
00:28:26.060
the accepted explanation is totally a brute fact, which just has to be accepted because it is the
00:28:34.860
only explanation that works, but it's not something that actually illuminates the transition from,
00:28:41.620
you know, atoms to some higher level phenomenon. So again, for everything we could say about life,
00:28:48.280
even the very strange details of molecular biology, just how information in the genome gets out,
00:28:55.740
and creates the rest of a human body, it still runs through when you look at the details. It's
00:29:04.380
surprising, it's in parts difficult to visualize, but the more we visualize it, the more we describe it,
00:29:11.260
the closer we get to something that is highly intuitive, even something like, you know, the flow of
00:29:17.540
water. The fact that water molecules in the liquid state are loosely bound and move past one another,
00:29:24.520
well, that seems exactly like what should be happening at the micro level, so as to explain
00:29:30.300
the macro level property of the wetness of water and the fact that it has characteristics, higher level
00:29:36.400
characteristics that you can't attribute to atoms, but you can attribute to collections of atoms, like
00:29:41.600
turbulence, say. Whereas with, you know, if consciousness just happens to require some minimum number of
00:29:48.280
information processing units knit together in a certain configuration, firing at a certain
00:29:54.720
hertz, and you change any of those parameters and the lights go out, that, for me, still seems like
00:30:02.540
a mere brute fact that doesn't explain consciousness. It's just a correlation that we decide is the crucial
00:30:11.060
one. And I've never heard a description of consciousness, you know, of the sort that we will get to, like,
00:30:15.840
you know, integrated information, you know, Tononi's phrase, that unpacks it any more than that. And
00:30:23.360
you can react to that, but then I think we should just get into the details and see how it all sounds.
00:30:28.120
Sure. I'll just react very briefly, which is that I think I'd also be terribly disappointed if
00:30:33.240
the, you know, you look at the answer in the Book of Nature and it turned out to be, yes, you need
00:30:37.640
612,000 neurons wired up in a small-world network and, you know, that's it. You know, the hope is,
00:30:45.240
that does seem, of course, ridiculous and arbitrary and unsatisfying. I mean, the hope is that as we
00:30:50.780
progress beyond, if you like, just brute correlates of conscious states towards accounts that provide
00:30:59.760
more satisfying bridges between mechanism and phenomenology that explain, for instance,
00:31:05.380
why a visual experience has the phenomenal character that it has and not some other
00:31:10.700
kind of phenomenal character like an emotion, that it won't seem so arbitrary. And that as we follow
00:31:17.780
this route, which is an empirically productive route, and I think that's important that if we can
00:31:23.900
actually do science with this route, we can try to think about how to operationalize phenomenology in
00:31:28.420
various different ways. Very difficult to think how to do science and just solve the hard problem
00:31:33.420
head on. At the end of that, I completely agree, there might be still this residue of mystery,
00:31:40.780
this kernel of something fundamental left unexplained. But I don't think we can take that
00:31:47.620
as a given because we can't, well, I certainly can't predict what I would feel as intuitively satisfying
00:31:54.740
when I don't know what the explanations that bridge mechanism and phenomenology are going to look
00:32:00.140
like in 10 or 20 years' time. We've already moved further from just saying it's this area or that
00:32:05.900
area, to synchrony, which is still kind of unsatisfying, to now, I think, some emerging
00:32:11.400
frameworks like predictive processing and integrated information, which aren't completely satisfying
00:32:18.380
either. But they hint at a trajectory where we're beginning to draw closer connections between
00:32:24.720
mechanism and phenomenology. Okay, well, let's dive into those hints. But before we do, I'm just
00:32:30.280
wondering, phylogenetically, in terms of comparing ourselves to so-called lower animals, where do you
00:32:37.580
think consciousness emerges? Do you think there's something that it's like to be a fly, say?
00:32:43.440
That's a really hard problem. I mean, I have to be agnostic about this. And again, it's just striking how
00:32:51.140
people's views on these things seem to have changed over recent decades. It seems
00:32:59.360
completely unarguable to me that other mammals, all other mammals, have conscious experiences of one
00:33:08.460
sort or another. I mean, we share so much in the way of the relevant neuroanatomy and neurophysiology, and
00:33:13.800
exhibit so many of the same behaviours that it would be remarkable to claim otherwise.
00:33:20.440
It actually wasn't that long ago that you could still hear people say that consciousness was so
00:33:26.620
dependent on language that they wondered whether human infants were conscious, to say nothing of
00:33:32.960
dogs and anything else that's not human. Yeah, that's absolutely right. I mean, that's a terrific
00:33:37.840
point. And this idea that consciousness was intimately and constitutively bound up with language or with
00:33:45.300
higher order executive processing of one sort or another, I think just exemplifies this really
00:33:52.060
pernicious anthropocentrism that we tend to bring to bear sometimes without realising it. We think
00:33:58.480
we're super intelligent, we think we're conscious, we're smart, and we need to judge everything by
00:34:02.740
that benchmark. And what's the most advanced thing about humans? Well, if you're gifted with language,
00:34:09.400
you're going to say language. And now, with a bit of hindsight, it already seems to me
00:34:15.620
rather remarkable that people should make these, I can only think of them as just quite naive errors
00:34:23.280
to associate consciousness with language. It's not to say that consciousness and language don't have any
00:34:28.620
intimate relation. I think they do. Language shapes a lot of our conscious experiences. But certainly,
00:34:33.500
it's a very, very poor criterion with which to attribute subjective states to other creatures.
00:34:40.860
So mammals, for sure. I mean, mammals, for sure. But that's easy, because they're pretty similar
00:34:46.000
to humans, primates being mammals. But then it gets more complicated. And then you think about birds, which
00:34:54.040
diverged a reasonable amount of time ago, but still have brain structures that one can establish
00:35:02.120
analogies, in some cases, homologies with mammalian brain structures. And in some species, scrub jays and
00:35:09.580
corvids generally, pretty sophisticated behavior too. It seems very possible to me that birds have
00:35:19.400
conscious experiences. And I'm aware, underlying all this, the only basis to make these judgments is in
00:35:25.340
light of what we know about the neural mechanisms underlying consciousness and the functional and behavioral
00:35:30.340
properties of consciousness in mammals. It has to be this kind of slow extrapolation, because we lack
00:35:34.800
the mechanistic answer, and we can't look for it in another species. But then you get beyond birds,
00:35:40.940
and you get out to, you know, I then like to go way out on a phylogenetic branch to the octopus,
00:35:49.680
which I think is an extraordinary example of convergent evolution. I mean, they're very smart,
00:35:54.820
they have a lot of neurons, but they diverged from the human line, I think, as long ago as sponges or
00:36:01.880
something like that. I mean, really, very little in common. But they have incredible differences
00:36:06.480
too. Three hearts; eight legs, or arms, I'm never sure whether it's a leg or an arm, that behave semi-autonomously.
00:36:16.920
And one is left, you know, when you spend time with these creatures, I've been lucky enough to spend a
00:36:22.460
week with them in a lab in Naples. You certainly get the impression of another conscious presence
00:36:27.940
there, but of a very different one. And this is also instructive, because it brings us a little
00:36:34.500
bit out of this assumption that we can fall into, that there is one way of being conscious, and that's
00:36:41.240
our way. There's, you know, there is a huge space of possible minds out there. And the octopus is a very
00:36:49.120
definite example of a very different mind and very likely conscious mind too. Now, when we get down to,
00:36:59.040
yeah, not really down, I don't like this idea of organisms being arranged on a single scale like
00:37:05.300
this, but certainly creatures like fish, insects are simpler in all sorts of ways than mammals.
00:37:12.400
And here it's really very difficult to know where to draw the line, if indeed there is a line to be
00:37:18.000
drawn, if it's not just a gradual shading out of consciousness, with gray areas in between,
00:37:25.120
and no categorical divide, which I think is equally possible. And fish, many fish display
00:37:31.640
behaviors which seem suggestive of consciousness: they will self-administer analgesia when they're
00:37:37.600
given painful stimulation. They will avoid places that have been associated with painful stimulation,
00:37:42.900
and so on. You hear things like the precautionary principle come into play, that given that
00:37:49.880
suffering, if it exists, conscious suffering is a very aversive state and it's ethically wrong to
00:37:55.820
impose that state on other creatures, we should tend to assume that creatures are conscious unless we have
00:38:04.740
good evidence that they're not. So we should put the bar a little bit lower in most cases.
00:38:12.480
Let's talk about some of the aspects of consciousness that you have identified as being distinct.
00:38:18.600
There are at least three. You've spoken about the level of consciousness, the contents of consciousness,
00:38:24.420
and the experience of having a conscious self that many people, as you said, conflate with consciousness
00:38:32.480
as a mental property. There's obviously a relationship between these things, but they're not the same.
00:38:38.440
Let's start with this notion of the level of consciousness, which really isn't the same thing
00:38:43.420
as wakefulness. Can you break those apart for me? How is being conscious non-synonymous with being
00:38:50.160
awake in the human sense? Sure. Let me just first amplify what you said, that in making these
00:38:57.500
distinctions, I'm certainly not claiming, pretending, that these dimensions of level, content, and self
00:39:05.000
pick out completely independent aspects of conscious experiences. There are lots of interdependencies.
00:39:11.260
I just think they're heuristically useful ways to address the issue. We can do different kinds of
00:39:17.520
experiments and try to isolate distinct phenomenal properties and their mechanistic basis by making
00:39:23.080
these distinctions. Now, when it comes to conscious level, I think that the simplest way to think of this
00:39:28.220
is more or less as a scale. In this case, it's from when the lights are completely out, when you're dead,
00:39:36.020
brain death, or under general anesthesia, or perhaps in very, very deep states of sleep,
00:39:41.820
all the way up through vague levels of awareness, which correlate with wakefulness, so when you're very
00:39:50.620
drowsy, to vivid, awake, alert, full conscious experience that I'm certainly having now, I feel
00:39:59.000
very awake and alert, and my conscious level is kind of up there. Now, in most cases, the level of
00:40:07.400
consciousness articulated this way will go along with wakefulness or physiological arousal. When you fall
00:40:15.140
asleep, you lose consciousness, at least in early stages. But there are certain cases that exist which
00:40:24.200
show that they're not completely the same thing on both sides. So you can be conscious when you're
00:40:32.440
asleep. Of course, we know this. This is called dreaming. So you're physiologically asleep, but you're
00:40:36.940
having a vivid inner life there. And on the other side, and this is where consciousness science,
00:40:43.740
the rubber of consciousness science hits the road of neurology, you have states where behaviorally,
00:40:49.020
you have what looks like arousal. This used to be called the vegetative state. It's been kind of
00:40:57.020
renamed several times now, the wakeful unawareness state, where the idea is that the body is still
00:41:03.140
going through physiological cycles of arousal from sleep to wake, but there is no consciousness
00:41:09.780
happening at all. The lights are not on. So these two things can be separated. And it's a very
00:41:20.980
productive and very important line of work to try to isolate what's the mechanistic basis of conscious
00:41:27.660
level independently from the mechanistic basis of physiological arousal.
00:41:32.960
Yeah. And a few other distinctions to make here. Also, general anesthesia is quite distinct from
00:41:39.500
deep sleep, just as a matter of neurophysiology.
00:41:43.040
Certainly, general anesthesia is nothing like sleep, certainly at deep levels of general anesthesia. So
00:41:49.280
whenever you go for an operation and the anesthesiologist is trying to make you feel
00:41:55.420
more comfortable by just saying something like, yeah, we'll just put you to sleep for a while and
00:41:59.320
then you'll wake up and it will be done. They are lying to you for good reason. It's kind of nice
00:42:05.180
just to feel that you're going to sleep for a bit. But the state of general anesthesia is very different.
00:42:09.580
And for very good reason, if you were just put into a state of sleep, you would wake up as soon as the
00:42:13.400
operation started and that wouldn't be very pleasant. It's surprising how far down you can take
00:42:19.320
people in general anesthesia, almost to a level of isoelectric brain activity where there is pretty much
00:42:24.880
nothing going on at all and still bring them back. And many people now have had the non-experience of
00:42:35.620
general anesthesia. And in some weird way, I now look forward to it the next time I get to have this
00:42:41.560
because it's almost a reassuring experience, because there is absolutely
00:42:48.020
nothing. It's complete oblivion. It's not, you know, when you go to sleep as well,
00:42:51.380
you can sleep for a while and you'll wake up and you might be confused about what, how much time has
00:42:57.620
passed, especially if you've just flown across some time zones or stayed up too late, something
00:43:02.920
like that. You know, you might not be sure what time it is, but you'll still have this sense of some time
00:43:07.600
having passed. Except we have this problem or some people have this problem of anesthesia awareness,
00:43:13.260
which is every person's worst nightmare if they care to think about it, where people have the
00:43:19.760
experience of the surgery because for whatever reason, the anesthesia hasn't taken them deep
00:43:26.260
enough and yet they're immobilized and can't signal that they're not deep enough.
00:43:30.700
I know, absolutely. But I mean, that's a failure of anesthesia. It's not a characteristic of the state itself.
00:43:35.760
Do you know who had that experience? You've mentioned him on the podcast.
00:43:41.580
Oh, really? I didn't know that. I did not know that.
00:43:44.280
Yeah. Francisco was getting a liver transplant and experienced some part of it.
00:43:54.600
Yeah. I mean, of course, because the thing there is, you know, under most
00:43:57.300
serious operations, you're also given a muscle paralytic so that you don't jerk around
00:44:03.560
when you're being operated on. And that's why it's particularly a nightmare scenario.
00:44:09.140
But, you know, if anesthesia is working properly, certainly the times I've had general anesthesia,
00:44:15.500
you start counting to 10 or start counting backwards from 10. You get to about eight and
00:44:21.280
then instantly you're back somewhere else, very confused, very disoriented. But there is
00:44:27.220
no sense of time having passed. It's just complete oblivion. And that, I found that really reassuring
00:44:34.580
because we can think conceptually about not being bothered about all the times we were not
00:44:40.320
conscious before we were born. And therefore we shouldn't worry too much about all the times we're
00:44:45.560
not going to be conscious after we die. But to experience these moments of complete oblivion
00:44:50.580
during a lifetime, or rather, you know, the edges of them, I think is a very enlightening kind
00:44:58.260
of experience to have. Although there's a place here where the hard problem does emerge because
00:45:03.720
it's very difficult, perhaps impossible, to distinguish between a failure of memory and
00:45:09.780
oblivion. Has consciousness really been interrupted? Take anesthesia and deep sleep as separate but
00:45:17.100
similar in the sense that most people think there was a hiatus in consciousness. I'm prepared to believe
00:45:23.060
that that's not true of deep sleep, but we just don't remember what it's like to be deeply
00:45:28.360
asleep. I'm someone who often doesn't remember his dreams and I'm prepared to believe that I dream
00:45:33.960
every night. And we know even in the case with general anesthesia, they give amnesic drugs so that
00:45:42.660
you won't remember whatever they don't want you to remember. And I recently had the experience of
00:45:47.940
not going under a full anesthesia, but having, you know, what's called a twilight sleep for a
00:45:54.840
procedure. And there was a whole period afterwards, about a half hour while I was coming to, that I
00:46:01.880
don't remember. And it was clear to my wife that I wasn't going to remember it, but she and I were
00:46:06.760
having a conversation. I was talking to her about something. I was saying how, you know, perfectly
00:46:11.720
recovered I was and how miraculous it was to be back. And she said, yeah, but you're not going to
00:46:16.960
remember any of this. You're not going to remember this conversation. And I said, okay, well, let's
00:46:20.660
test it. You know, you say something now and we'll see if I remember it. And she said, this
00:46:26.760
is the test, dummy. You're not going to remember this part of the conversation. And I have no memory
00:46:31.440
of that part of the conversation. It's a good test. Yeah. You're right, of course, that even in stages
00:46:39.500
of deep sleep, people underestimate the presence of conscious experiences. And this has been
00:46:44.500
demonstrated by experiments called serial awakening experiments, where you'll just wake somebody
00:46:50.160
up various times during sleep cycles and ask them straight away, you know, what was in your
00:46:55.000
mind? And quite often people will report very simple sorts of experiences, static images
00:47:02.060
and so on in stages of non-REM, non-dreaming sleep. And I concede that there may be a contribution
00:47:10.620
of amnesia to the post hoc impression of what general anesthesia was like. But at the same
00:47:18.460
time, there's all the difference in the world between the twilight zone and full-on general
00:47:22.480
anesthesia, where it's not just that I don't remember anything. It's the real sense of a hiatus
00:47:28.560
of consciousness, of a complete interruption and a complete instantaneous resumption of that experience.
00:47:34.080
Yeah, yeah. No, I've had a general anesthetic as well. And there is something quite uncanny
00:47:40.800
about disappearing and being brought back without a sense of any intervening time. Because you're
00:47:48.720
not aware of the time signature of having been in deep sleep, but there clearly is one. And
00:47:54.780
the fact that many people can go to sleep and kind of set an intention to wake up at a certain
00:48:00.180
time and they wake up at that time, often to the minute. It's clear there's some timekeeping
00:48:04.720
function happening in our brains all the while, but there's something about a general anesthetic
00:48:10.000
which just seems like, okay, the hard drive just got rebooted and who knows how long the computer was off.
00:48:18.060
Yeah. Okay, so let's talk about these other features. We've just dealt with the level of
00:48:22.100
consciousness. Talk to me about the contents of consciousness. How do you think about that?
00:48:26.820
When we are conscious, then we're conscious of something. And I think this is what the
00:48:33.240
large majority of consciousness research empirically focuses on. You take somebody who is conscious
00:48:40.300
at a particular time and you can ask a few different questions. You can ask what
00:48:47.240
aspects of their perception are unconscious and not reflected in any phenomenal properties and
00:48:54.520
what aspects of their perception are reflected in their phenomenal properties. What's the
00:48:59.420
difference between conscious and unconscious processing, if you like? What's the difference
00:49:03.880
between different modalities of conscious perception? So, certainly
00:49:11.780
outside of the lab, our conscious scene at any one time will have a very multimodal character.
00:49:17.580
So there'll be sound, sight, experiences of touch, maybe if you're sitting down or holding
00:49:23.440
something. And then a whole range of more self-related experiences too, of body ownership,
00:49:30.360
of all the signals coming from deep inside the body, which are more relevant to self.
00:49:33.960
But the basic idea of conscious content is to study what the mechanisms are that give rise
00:49:40.020
to the particular content of a conscious scene at any one time. And here, the reason it's useful
00:49:49.760
to think of this as separate from conscious level is partly that we can appeal to different kinds
00:49:56.600
of theories, different kinds of theoretical and empirical frameworks. So the way I like to think
00:50:04.740
about conscious perception is in terms of prediction, in terms of what's often been called the Bayesian
00:50:12.020
brain or unconscious inference from Helmholtz and so on. And the idea that perception in general
00:50:20.020
works more from the top down or from the outside in than from the... Sorry, I got that wrong.
00:50:27.740
Perception works more from the top down or the inside out rather than from the bottom up or the outside in.
00:50:34.740
And this has a long history in philosophy as well, back to Kant and long before that too. I mean,
00:50:41.000
the straw man, the kind of easily defeated idea about perception, is that sensory signals impinge
00:50:48.720
upon receptors and they percolate deeper and deeper into the brain. And at each stage of processing,
00:50:56.080
more complex operations are brought to bear. And at some point, ignition happens or something happens
00:51:02.580
and you're conscious of those sensory signals at that point. And I think this is kind of the wrong
00:51:09.420
way to think about it, that if you look at the problem of perception that brains face, and let's
00:51:16.820
simplify it a lot now and just assume the problem is something like the following, that the brain is
00:51:22.440
locked inside a bony skull. And let's assume for the sake of this argument that perception is the
00:51:29.020
problem of figuring out what's out there in the world that's giving rise to sensory signals that
00:51:33.720
impinge on our sensory surfaces, eyes and ears. Now, these sensory signals are going to be noisy
00:51:39.700
and ambiguous. They're not going to have a one-to-one mapping with things out there in the world,
00:51:43.540
whatever they may be. So perception has to involve this process of inference, of best guessing,
00:51:49.860
in which the brain combines prior expectations or beliefs about the way the world is with the sensory
00:51:55.380
data to come up with its best guess about the causes of that sensory data. And in this view,
00:52:01.480
what we perceive is constituted by these multi-level predictions that try to explain away or account for
00:52:10.880
the sensory signals. We perceive what the brain infers to have caused those signals,
00:52:16.360
not the sensory signals themselves. In this view, there is no such thing as
00:52:21.960
raw sensory experience of any kind. All perceptual experience is an inference of one sort or another.
00:52:28.300
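(To make this best-guessing concrete, here is a minimal sketch with invented Gaussian numbers; it only illustrates the computation being described, it is not from the conversation. The prior expectation and the noisy sensory signal are each weighted by their precision, and the percept corresponds to the posterior.)

```python
# Minimal sketch of perception as precision-weighted Bayesian inference.
# All numbers are invented for illustration.

prior_mean, prior_var = 0.0, 4.0      # prior belief about the hidden cause
sensed_value, sensory_var = 2.0, 1.0  # noisy, ambiguous sensory evidence

prior_precision = 1.0 / prior_var
sensory_precision = 1.0 / sensory_var

posterior_var = 1.0 / (prior_precision + sensory_precision)
posterior_mean = posterior_var * (prior_precision * prior_mean
                                  + sensory_precision * sensed_value)

# The "percept" (posterior mean, 1.6 here) lies between prior and data;
# noisier data would pull it back toward the prior expectation.
print(posterior_mean, posterior_var)
```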
And given that view, one can then start to ask all sorts of interesting experimental questions like,
00:52:35.020
well, what kinds of predictions? How do predictions or expectations affect what we consciously perceive,
00:52:39.940
consciously report? What kinds of predictions may still go on under the hood and not instantiate any
00:52:46.780
phenomenal properties? But it gives us this set of tools that we can use to build bridges between
00:52:53.020
phenomenology and mechanism again. In this case, the bridges are made up of the computational
00:52:58.380
mechanisms of Bayesian inference as they might be implemented in neuronal circuitry. And so instead of
00:53:05.340
looking for, you know, asking questions like, is V1, is early visual cortex associated with visual
00:53:12.780
experience? We might ask questions like, are Bayesian priors or posteriors associated with
00:53:19.120
conscious phenomenology? Or are prediction errors associated with conscious phenomenology? We can
00:53:23.480
start to ask slightly, I think, more sophisticated bridging questions like that.
00:53:27.740
Well, yeah, in your TED Talk, you talk about consciousness as a controlled hallucination.
00:53:32.520
And I think Chris Frith has called it a fantasy that coincides with reality. Can you say a little
00:53:39.380
more about that and how that relates to the role of top-down prediction in perception?
00:53:46.600
Yeah, I think they're both very nice phrases. And I think the phrase controlled hallucination actually
00:53:53.120
has been very difficult to pin down where it came from. I heard it from Chris Frith as well,
00:53:58.000
originally. And I've asked him and others where originally it came from. And we can trace it to
00:54:03.440
a seminar given by Ramesh Jain at UCSD sometime in the 90s. But it was verbal, and there the trail goes
00:54:11.920
cold. But anyway, the idea is sort of the following, that we can bring to bear a naive realism about
00:54:18.220
perception where we assume that what we visually perceive is the way things actually are in the real
00:54:25.540
world. That there is a table in front of me that has a particular color that has a piece of paper on it
00:54:30.780
and so on. And that's veridical perception, as distinct from hallucination, where we have a
00:54:36.160
perceptual experience that has no corresponding referent in the real world. And the idea of
00:54:43.720
controlled hallucination or fantasy that coincides with reality is simply to say that normal perception
00:54:51.780
is always a balance of sensory signals coming from the world and the interpretations,
00:54:59.780
predictions that we bring to bear about the causes of those sensations. So we are always seeing what we
00:55:07.280
expect to see in this Bayesian sense. We never just see the sensory data. Now, normally, we can see
00:55:15.840
this all the time. It's built into our visual systems that light is expected to come from above because
00:55:22.120
our visual systems have evolved in a situation where the sun is never below us. So that causes us to
00:55:27.580
perceive shadows in a particular way, and to perceive curved surfaces as being curved one way or
00:55:33.640
another under the assumption that light comes from above. We're not aware of having that constraint
00:55:40.160
built deep into our visual system, but it's there. And the idea is that every perception that we have
00:55:48.180
is constituted, partly constituted by these predictions, these interpretive powers that the
00:55:56.580
brain brings to bear onto perceptual content. And that what we call hallucinations is just the tipping
00:56:02.980
of the balance slightly more towards the brain's own internal predictions. And another good everyday
00:56:09.560
example of this is if you go out on a day where there's lots of white fluffy clouds, and you can see
00:56:14.660
faces in clouds, if you choose, if you look for them. It's pareidolia: you can see patterns,
00:56:18.780
you can see patterns in noise. Now, that's a kind of hallucination there. You're seeing something
00:56:23.520
that other people might not see. And it's not accompanied by delusion. You know, it's a
00:56:30.840
hallucination. But it's still, it just shows how our perceptual content is always framed by our
00:56:37.060
interpretation. Another good everyday example is dreams, because dreams, we know, are a situation
00:56:44.120
where our brain is doing something very similar to what it's doing in the waking state, except the
00:56:49.700
frontal lobes have come offline enough so that there's just not the same kind of reality testing
00:56:56.260
going on. And our perception in this case is not being constrained by outer stimuli. It's just,
00:57:03.680
it's being generated from within. But would this be an analogous situation where our top-down
00:57:09.460
prediction mechanisms are roving, unconstrained by sensory data?
00:57:15.020
I think, yeah, dreams certainly show that you don't need sensory data to have vivid conscious
00:57:21.120
perception, because you don't have any sensory input, apart from a bit of auditory input when
00:57:25.900
you're dreaming. I think the phenomenology of dreams is interestingly different. Yeah, dream content is
00:57:32.660
very much less constrained. This naive realism just goes nuts in dreams, doesn't it? I mean,
00:57:38.480
things can change, people can change, identities and locations can change, weird things happen all the
00:57:42.320
time you don't experience them as being weird. That's the weirdest part of dreams, the fact that
00:57:47.440
it's not that they're so weird, it's that the weirdness is not detected. We don't care that
00:57:53.140
they're so weird. Yeah, which is, I think, a great example of how we often overestimate the insight we
00:58:00.520
have about what our conscious experiences are like. We tend to assume that we know exactly what's
00:58:06.600
happening in all our conscious experiences all the time, whether it's weird or not. Dreams show that
00:58:10.480
that's not always the case. But I think the idea of controlled hallucination is as present
00:58:17.200
in normal, non-dreaming perception as it is in dreaming. And it really is this idea that
00:58:24.520
all our perception is constituted by our brain's predictions of the causes of sensory input.
00:58:31.260
And most of the time, walking around the world, we will agree about this perceptual content. If I see a
00:58:37.900
table and claim it's this colour, you'll probably agree with me. And we don't have to go into the
00:58:42.640
philosophical inverted spectra thing here. It's just a case of we tend to report the same sorts of
00:58:47.340
things when faced with the same sorts of sensory inputs. So we don't think there's anything
00:58:54.100
particularly constructed about the way we perceive things, because we all agree on it. But then when
00:59:01.000
something tips the balance, maybe it's under certain pharmacological stimulus, maybe it's
00:59:05.740
in dreams, maybe it's in certain states of psychosis and mental illness, then people's
00:59:12.380
predictions about the causes of sensory information will differ from one another. And if you're an
00:59:17.960
outlier, then people will say, oh, now you're hallucinating because you're reporting something that
00:59:22.860
isn't there. And my friend, the musician Baba Brinkman put it beautifully. He said, you know,
00:59:29.360
what we call reality is just when we all agree about our hallucinations, which I think is a really
00:59:34.620
nice way to put that. This leaves open the question, what is happening when we experience
00:59:40.500
something fundamentally new, or have an experience where our expectations are violated? So we're using
00:59:47.340
terms like predictions or expectations or models of the world. But I think there's scope for some
00:59:55.400
confusion here. Just imagine, for instance, that some malicious zookeeper put a fully grown tiger in
01:00:03.280
your kitchen while you were sleeping tonight. I presume that when you come down for your morning
01:00:08.340
coffee, you will see this tiger in the kitchen, even though you have no reasonable expectation to
01:00:15.780
be met by a tiger in the morning. I think it's safe to assume you'll see it even before you've had
01:00:20.140
your cup of coffee. So given this, what do we mean by expectations at the level of the brain?
01:00:27.360
That's a very, very important point. This whole language of the Bayesian brain and predictive
01:00:33.760
processing bandies around terms like prediction, expectation, prediction error, surprise, and all these
01:00:40.140
things. It's very, very important to recognize that these terms don't only mean or don't really mean at
01:00:48.180
all psychological surprise or explicit beliefs and expectations that I might hold. So certainly,
01:00:54.800
if I go down in the morning, I am not expecting to see a tiger. However, my visual system, when it encounters
01:01:04.480
a particular kind of input, is still expecting. You know, if there are sensory inputs that pick out
01:01:11.760
things like edges, it will best interpret those as edges. And if it picks out stripes, it will
01:01:17.680
interpret those as stripes. It's not unexpected to see something with an edge, and it may not be
01:01:23.780
unexpected to see something with a stripe. It may not even be unexpected from my brain's point of view
01:01:28.700
to see something that looks a bit like a face. And those become low-level best guesses about the
01:01:35.900
causes of sensory input, which then give rise to higher-level predictions about those causes.
01:01:42.000
And ultimately, the best guess is that there's some kind of animal there, and indeed, that it's a tiger.
01:01:49.020
So I don't think there's a conflict here. We can see new things, because new things are built up
01:01:54.960
from simpler elements for which we will have adequate predictions, built up over evolution
01:02:01.820
and over development and over prior experience.
01:02:04.700
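(A toy numerical gloss on this point, with every prior and likelihood invented: a vanishingly small prior on "tiger in my kitchen" is still overwhelmed when the low-level evidence, stripes, edges, animal shape, is vastly more likely under "tiger" than under any rival cause.)

```python
# Toy illustration of seeing the unexpected tiger (all numbers invented).
prior      = {"tiger": 1e-6, "housecat": 1e-2, "nothing": 0.99}
likelihood = {"tiger": 0.9,  "housecat": 1e-7, "nothing": 1e-9}  # P(features | cause)

unnormalised = {h: prior[h] * likelihood[h] for h in prior}
z = sum(unnormalised.values())
posterior = {h: p / z for h, p in unnormalised.items()}

print(posterior)  # "tiger" comes out near 1.0: the best guess wins despite the tiny prior
```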
And one thing you point out, at least in one of your papers, maybe you did this in the TED Talk,
01:02:08.400
that different contents of consciousness have different characters, so that visual perception
01:02:15.880
is object-based in a way that interior perception is not. The sensing of an experience like nausea,
01:02:23.220
say, or even of an emotion, like sadness, does not have all of the features of perceiving an object
01:02:31.860
in visual space. You're looking at an object in visual space, there's this sense of location,
01:02:38.020
there's the sense that anything that has a front will also have a back, that if you walked around it,
01:02:42.520
you would be given different views of it, none of which may ever repeat exactly.
01:02:47.580
You know, I'm looking at my computer now, yeah, I've probably never seen my computer from precisely
01:02:52.740
this angle, and if I walked around it, I would see, you know, thousands of different slices of this
01:02:58.140
thing in, you know, the movie of my life, and yet there's this unitary sense of an object in space
01:03:04.900
that has a front and back and sides. And of course, none of this applies when we're thinking about
01:03:09.940
our internal experience. Do you have any more you want to say about that? Because that's a very
01:03:15.820
interesting distinction, which, again, is one of these places where the terminology we use for
01:03:21.780
being aware of things, or being conscious of things, or perceiving things, doesn't really get
01:03:26.580
at the phenomenology very well. Yeah, thank you for raising that. I think this is a great point,
01:03:31.880
and something I've thought quite a lot about. And there's a couple of elements here. So I'll start
01:03:36.920
by talking about this phenomenology of objecthood that you beautifully described for vision there,
01:03:42.460
and then get on to this case of interoception and perception of the internal state of the body.
01:03:48.380
So indeed, for most of us, most of the time, visual experience is characterized by there being a world
01:03:54.540
of objects around us. I see coffee cups on the table, computers in front of me, and so on.
01:03:59.900
Actually, that's not always the case. If I'm, for instance, trying to catch a cricket ball,
01:04:05.420
or a softball, or something someone's thrown to me, what my perceptual system is doing there is not
01:04:10.680
so much trying to figure out what's out there in the world, it's all geared towards the goal of
01:04:16.640
catching the cricket ball. And there's a whole branch of psychology, it has roots in Gibsonian
01:04:23.380
ecological psychology, and William Powers's perceptual control theory, that sort of inverts things. It
01:04:30.400
inverts this whole tradition in thinking about perception and its interaction with behavior.
01:04:38.180
I mean, we like to think that we perceive the world, and then we behave. So we have perception
01:04:42.320
controlling behavior. But we can also think of it the other way around,
01:04:46.860
and think of behavior controlling perception, so that when we catch a cricket ball, what we're really
01:04:54.160
doing is maintaining a perceptual variable to be a constant. In this case, it would be the acceleration
01:05:00.340
of the angle of the ball to the horizon. If we keep that constant, we will catch the cricket
01:05:06.160
ball. And if you reflect on the phenomenology of these things, if I'm engaged in an act like
01:05:11.180
that, I'm not so much perceiving the world as distinct objects arranged in particular
01:05:14.800
ways, I'm perceiving how well my catching the cricket ball is happening. Am I likely to
01:05:21.740
catch it? Is it going well or not? That's a different kind of description of visual phenomenology.
01:05:27.700
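(A toy sketch of that control loop, with an invented gain and invented perceived accelerations: the fielder adjusts running speed so as to hold a perceived variable, the optical acceleration of the ball's angle, at zero, rather than computing where the ball is.)

```python
# Behavior controlling perception (values invented): keep the perceived
# angular acceleration of the ball pinned at a reference value of zero.

def adjust_speed(speed, perceived_accel, gain=0.5):
    error = 0.0 - perceived_accel  # deviation from the reference
    return speed + gain * error    # act to cancel the deviation

speed = 0.0
for accel in (0.8, 0.5, 0.2, -0.1):   # hypothetical perceived accelerations
    speed = adjust_speed(speed, accel)
    print(f"perceived accel {accel:+.1f} -> running speed {speed:.2f}")
```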
But this will become important a bit later when we talk about why our experience
01:05:31.980
of the inside of our bodies, of being a body, has the character that it has. I think it's
01:05:37.300
more like catching a cricket ball, but we'll get to that in a second. But if we think now
01:05:41.800
just back to when we're not catching things, we're just looking around and we see this visual
01:05:47.440
scene populated by objects. And you're absolutely right that one way to think of that is that
01:05:52.660
when I perceive an object to have a volumetric extension, to be a three-dimensional
01:05:59.980
thing in the world occupying a particular location, what that means is that I'm perceiving how that
01:06:08.200
object would behave if I were to interact with it in different ways. This has another tradition,
01:06:14.660
well, it's back to Gibson again and ecological psychology, but also the sensorimotor theory of
01:06:19.840
Alva Noë and Kevin O'Regan, that what I perceive is how I can interact with an object. I perceive an
01:06:26.500
object as having a back, not because I can see the back, but because my brain is encoding somehow
01:06:32.220
how different actions would reveal that surface, the back of that object. And that's a distinctive
01:06:40.100
kind of phenomenology. In the language of predictive processing of the Bayesian brain, one thing I've
01:06:46.800
been trying to do is cash out that account of the phenomenology of objecthood in terms of the
01:06:53.440
kinds of predictions that might underlie it. And these turn out to be conditional or counterfactual
01:07:00.500
predictions about the sensory consequences of action. So in order to perceive something as having
01:07:07.660
objecthood, the thought is that my brain is encoding how sensory data would change if I were to move around
01:07:15.500
it, if I were to pick it up, and so on and so forth. And if we think about the mechanics that might
01:07:22.360
underlie that, they fall out quite naturally from this Bayesian brain perspective, because
01:07:27.040
to engage in predictive perception, to bring perceptual interpretations to bear on sensory data,
01:07:36.320
our brain needs to encode something like a generative model. It needs to be able to have a model of the
01:07:43.380
mapping from things in the world to sensory data, and be
01:07:50.140
able to invert that mapping. That's how you do Bayesian inference in the brain. And if you've got a
01:07:55.760
generative model that can invert that mapping, then that's capable of predicting what sensory signals
01:08:00.960
would happen conditional on different kinds of actions. This brings in an extension of
01:08:08.880
predictive processing that's technically called active inference, where we start to think about reducing
01:08:14.520
prediction errors, not only by updating one's predictions, but also by making actions to sort
01:08:20.560
of make our predictions come true.
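(A toy sketch of active inference, with invented numbers and a deliberately transparent sensory mapping: the same prediction error can be reduced by perception, revising the belief, or by action, changing the world so the prediction comes true.)

```python
# Active inference in miniature (all values hypothetical).
belief = 1.0   # the brain's predicted cause of sensation
world = 3.0    # the actual state generating sensations

for step in range(5):
    error = world - belief     # prediction error (sensation taken as the state itself)
    belief += 0.3 * error      # perceptual inference: update the prediction
    world  -= 0.3 * error      # action: make the prediction come true
    print(f"step {step}: belief={belief:.2f}, world={world:.2f}")
```

But in any case, you can make some interesting empirical predictions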
01:08:25.480
about how our experience of something as an object depends on what the brain learns about ways of
01:08:34.740
interacting with these objects. And we started to test some of these ideas in the lab, because
01:08:39.420
you can now use clever things like virtual reality and augmented reality to generate objects that will
01:08:47.440
be initially unfamiliar, but that behave in weird ways when you try to interact with them. So you can
01:08:53.440
either support or confound these kinds of conditional expectations, and then try to understand what
01:09:00.620
the phenomenological consequences of doing so are. And you can also account for situations where this
01:09:09.160
phenomenology of objecthood seems to be lacking. So for instance, in synesthesia, which is a very
01:09:15.660
interesting phenomenon in consciousness research, and yeah, I'm sure you know this, Sam, but a very
01:09:23.600
canonical example of synesthesia is grapheme-color synesthesia, where people may look at a black letter or
01:09:30.960
number, a grapheme, and they will experience a color along with it. They will have a color
01:09:38.640
experience, a concurrent color experience. This is very, very well established. What's often not focused
01:09:44.640
on is that pretty much across the board in grapheme-color synesthesia, synesthetes don't make any
01:09:52.440
confusion that the letter is actually red or actually green. They still experience the letter as black, they're
01:09:59.440
just having an additional experience of color along with it. They don't mistake it for a property of the letter. So this is why
01:10:05.760
whenever you see a kind of illustration of synesthesia with the letters colored in, it's a very, very poor
01:10:10.240
illustration. I'm guilty of using those kinds of poor illustrations in the past. But this color
01:10:16.760
experience does not have the phenomenology of objecthood. It lacks it. It doesn't appear to be
01:10:22.960
part of an object in the outside world. Why not? Well, it doesn't exhibit the same kinds of sensory
01:10:29.760
motor contingencies that an object that has a particular color does. So if I'm synesthetic
01:10:37.520
and I'm looking at the letter F, and I change the lighting conditions somewhat or move around it,
01:10:42.720
then a really red F will change its luminance and reflectance properties in subtle but significant
01:10:49.760
ways. But for my synesthetic experience, it's still just an F. So my experience of red doesn't
01:10:57.840
change. So I think this is just a promising example of how concepts and mechanisms from within predictive
01:11:07.600
perception can start to unravel some pervasive and modality-specific phenomenological properties
01:11:16.560
of consciousness. I think it's worth emphasizing the connection between perception and action because
01:11:22.400
it's one thing to talk about it in the context of catching a cricket ball. But when you talk about
01:11:29.280
the evolutionary logic of having developed perceptual capacities in the first place, the link to action
01:11:36.000
becomes quite explicit. We have not evolved to perceive the world as it is for some abstract
01:11:44.000
epistemological reason. We've evolved to perceive what's biologically useful. And what's biologically
01:11:50.160
useful is always connected, at least when you're talking about the outside world, to actions. If you
01:11:58.000
can't move, if you can't act in any way, there would have been very little reason to evolve a capacity for perception at all.
01:12:04.880
Absolutely. I mean, there's that beautiful story I think of, is it the sea slug or the sea snail or
01:12:10.560
something of that sort? Some very simple marine creature that swims about during its juvenile
01:12:17.600
phase looking for a place to settle. And once it's settled and it just starts filter feeding,
01:12:24.000
it digests its own brain because it no longer has any need for perceptual competence now that it's not
01:12:30.720
going to move anymore. And this is often used as a slightly unkind analogy for getting tenure in
01:12:36.400
academia. But you're absolutely right that perception is not about figuring out really what's
01:12:43.600
there. We perceive the world as it's useful for us to do so. And I think this is particularly important
01:12:49.280
when we think about perception of the internal state of the body, which you mentioned earlier,
01:12:55.200
this whole domain of interoception. Because if you think, what are brains for fundamentally?
01:13:04.160
Right? They're not for perceiving the world as it is. They certainly
01:13:08.080
didn't evolve for doing philosophy or complex language. They evolved to guide action.
01:13:13.920
But even more fundamentally than that, brains evolved to keep themselves and bodies alive.
01:13:21.120
They evolved to engage in homeostatic regulation of the body so that it remains within viable
01:13:32.880
physiological bounds. That's fundamentally what brains are for. They're for helping creatures
01:13:39.760
stay alive. And so the most basic cycle of perception and action doesn't involve the outside world at all.
01:13:48.960
It doesn't involve the exterior surfaces of the body at all. It's only about regulating the internal
01:13:57.280
milieu, the internal physiology of the body, and keeping it within the bounds that are compatible with
01:14:04.160
survival. And I think this gives us a clue here about why experiences of mood and emotion and of,
01:14:13.600
if you like, the most basic essence of selfhood have this non-object-like character.
01:14:22.800
So I think the way to approach this is to first realize that just as we perceive the outside world on
01:14:30.720
the basis of sensory signals that are met with a top-down flow of perceptual expectations and predictions,
01:14:37.200
the very same applies to perception of the internal state of the body. The brain has to know what the
01:14:45.760
internal state of the body is like. It doesn't have any direct access to it just because it's wrapped
01:14:49.600
within a single layer of skin. I mean, the brain is the brain. All it gets are noisy and ambiguous
01:14:55.520
electrical signals. So it still has to interpret and bring to bear predictions and expectations in
01:15:04.080
order to make sense of the barrage of sensory signals coming from inside the body. And this
01:15:08.320
is what's collectively called interoception, perception of the body from within. Just as a
01:15:13.520
side note, it's very important to distinguish this from introspection, which could hardly be more
01:15:18.000
different. Introspection is consciously reflecting on the content of our experience; this is not that.
01:15:22.560
This is interoception, perception of the body from within. So the same computational principles
01:15:29.040
apply. We have to bring to bear, our brain has to bring to bear predictions and expectations.
01:15:34.640
So in this view, we can immediately think of emotional conscious experiences, emotional feeling states
01:15:44.640
in this same inferential framework. And I've written about this for a few years now that we can think of
01:15:50.800
interoceptive inference. So emotions become predictions about the causes of interoceptive signals in just
01:15:57.920
the same way that experiences of the outside world are constituted by predictions of the causes of sensory
01:16:02.800
signals. And this, I think, gives a nice computational and mechanistic gloss on pretty old theories of
01:16:12.320
emotion that originate with William James and Carl Lange: that emotion has to do with perception of
01:16:19.120
physiological change in the body. These ideas have been repeatedly elaborated. So people ask about
01:16:26.240
the relationship between cognitive interpretation and perception of physiological change.
01:16:31.920
This predictive processing view just dissolves all those distinctions and says that
01:16:36.320
emotional experience is the joint content of predictions about the causes of interoceptive signals
01:16:43.440
at all levels, at all low and high levels of abstraction. And the other aspect of this that becomes important
01:16:53.120
is that the purpose of perceiving the body from within
01:16:57.040
is really not at all to do with figuring out what's there. My brain couldn't care less that
01:17:06.000
my internal organs are objects and they have particular locations within
01:17:11.680
my body. Couldn't care less about that. It's not important. The only thing that's important about
01:17:17.280
my internal physiology is that it works. If you imagine the inside of my body as a cricket ball,
01:17:25.040
the brain really doesn't care where the cricket ball is or that it's a ball. All it cares about is that I'm going to
01:17:30.800
catch the ball. It only cares about control and regulation of the internal state of the body.
01:17:36.720
So predictions, perceptual predictions for the interior of the body are of a very different kind.
01:17:43.840
They're instrumental. They're control oriented. They're not epistemic. They're not to do with finding out.
01:17:48.960
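(A minimal sketch of a control-oriented rather than epistemic prediction, with invented values: the "prediction" is a physiological set point, and the interoceptive prediction error is reduced by acting, not by finding anything out.)

```python
# Homeostatic regulation as control-oriented prediction (values invented).
SET_POINT = 37.0   # predicted/preferred core temperature, degrees C

def regulate(temp, gain=0.4):
    error = temp - SET_POINT    # interoceptive prediction error
    return temp - gain * error  # act (e.g. sweat, shiver) to cancel it

temp = 39.0                     # hypothetical perturbation
for _ in range(6):
    temp = regulate(temp)
    print(f"core temperature: {temp:.2f}")  # converges back toward the set point
```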
And I think that gets at it. For me anyway, it's very suggestive of why
01:17:56.240
our experiences of just being a body have this very sort of non-object-based,
01:18:03.520
inchoate, phenomenological character compared to our experiences of the outside world.
01:18:08.880
But it also suggests that everything can be derived from that. That if we understand the original
01:18:14.880
purpose of predictive perception was to control and regulate the internal state of the body,
01:18:22.560
then all the other kinds of perceptual prediction are built upon that evolutionary imperative,
01:18:28.640
so that ultimately the way we perceive the outside world is predicated on these mechanisms that have
01:18:35.600
their fundamental objective in the regulation of our internal bodily state.
01:18:40.400
And I think this is really important for me because it gets away from these pre-theoretical
01:18:49.600
associations of consciousness and perception with cognition, with language, with all these higher
01:18:53.840
order things, maybe social interaction, and it grounds them much more in the basic mechanisms of life.
01:19:01.760
So here we have a nice thing that it might not just be that life provides a nice analogy
01:19:07.280
with consciousness in terms of hard problems and mysteries and so on, but that there are actually
01:19:12.320
very deep obligate connections between mechanisms of life and the way we perceive consciously and
01:19:19.840
Well, so now, if interoception is purposed toward what is sometimes called allostatic control,
01:19:27.760
so the regulation of internal states on the basis of essentially homeostasis as governed by behavior and
01:19:35.360
action. If that's the purpose and emotion is essentially parasitic on these processes,
01:19:43.360
an emotion like disgust, say, or fear or anger, much of the same neural machinery is giving rise to
01:19:51.440
these kinds of emotions. How do you think about emotion by this logic? What precipitates an emotion is
01:20:00.160
most often, I mean, it can just be a thought, right, or a memory of something that's happened,
01:20:04.320
but its referent is usually out in the world, very likely in some social circumstance. What is the
01:20:10.960
logic of emotion in terms of this picture of prediction and control in, you know, our internal
01:20:19.440
system? It's very interesting. I think it's more of a research program than a question that's easy to
01:20:23.760
answer in the here and now, but I think the idea would be that emotions, emotional content of any
01:20:32.880
sort is ultimately marking out in our conscious experience the allostatic relevance of something
01:20:43.280
in the world, an object or a social situation or a course of action, so that our brain needs to be
01:20:51.360
able to predict the allostatic consequences. And here, you're absolutely right, allostasis is sort
01:20:56.720
of the behavioral process of maintaining homeostasis. So our brain needs to be able to
01:21:03.520
predict the allostatic consequences of every action that the body produces,
01:21:11.520
whether it's an internal action of autonomic regulation, whether it's an external
01:21:16.400
action, a speech act or just a behavioral act. What are the consequences of that for
01:21:22.480
our physiological condition and the maintenance of viability? And I think emotional content is a way
01:21:30.480
in which those consequences become represented in conscious experience. And they can be quite simple.
01:21:37.840
So if you think of, you know, probably primordial emotions like disgust have to do with a rejection
01:21:43.920
of something that you try to put inside your body that shouldn't be there, because the consequence
01:21:47.440
is going to be pretty bad. And that's a very non-social kind of emotion, at least
01:21:53.280
certain forms of disgust that have to do with eating bad things don't depend so much on social
01:21:59.520
context, though they can be invoked by social context later on. But then other more sophisticated or
01:22:06.560
more ramified emotions like regret, think about regret. It's not the same as disappointment.
01:22:15.920
And disappointment is, I was expecting X and I got Y, you know, as a lot of people might have done at
01:22:21.680
Christmas last week. You can be disappointed, but regret has an essential counterfactual element that,
01:22:28.000
oh, I could have done this instead. And then I would have got X if I'd done this. And I think
01:22:37.680
certainly my own personal emotional life involves many experiences of regret, and even anticipatory
01:22:43.680
regret, where I regret things I haven't even done yet because I kind of assume they're going to turn out
01:22:47.920
badly. And the relevance of that is that these sorts of emotional experiences depend on quite
01:22:55.120
high-level predictions about counterfactual situations, about social consequences, about
01:23:01.920
what other people might think or believe about me. So we can have an ordering of the richness of
01:23:08.000
emotional experience, I think, that is defined by the kinds of predictions that are brought to bear.
01:23:13.360
But they're all ultimately rooted in their relevance for physiological viability.
01:23:19.600
Well, so we've been talking about the contents of consciousness and how varied they are and
01:23:25.120
how they're shaped by top-down predictive processes, perhaps even more than bottom-up processes.
01:23:34.240
But what do you think about the experience of, it's often described as being of pure consciousness,
01:23:41.760
consciousness without content or without obvious content? Is this something that you are
01:23:46.640
skeptical exists, or do you have a place on the shelf for it?
01:23:52.560
I think it probably does exist. I don't know. I mean, unlike you, I've not been a very disciplined
01:24:00.720
meditator. I've tried it a little bit, but it's not something that you probably gain very much from
01:24:06.960
dabbling in. I think it seems to me conceivable that there's a phenomenal state which is characterized
01:24:16.480
by the absence of specific contents. I'm happy with the idea that that state
01:24:23.120
exists. I'm somehow skeptical of people's reports of these states. And this gets back to what we were
01:24:30.080
talking about earlier, that we tend to somehow overestimate our ability to have insight into
01:24:38.880
our phenomenal content at any particular time. But yeah, I mean, the interesting question there,
01:24:44.320
which I haven't thought about a lot, is what would the computational vehicles of such a state be in
01:24:49.440
terms of predictive perception? Is it the absence of predictions, or is it the prediction that nothing
01:24:55.520
is causing my sensory input at that particular time? I don't know. I don't know. I have to think
01:25:01.600
about that some more. Yeah, I mean, it's an experience that I believe I've had. And again,
01:25:06.720
I agree with you that we're not subjectively incorrigible, which is to say we can be wrong about
01:25:12.480
how things seem to us. We can certainly be wrong about what's actually going on to explain the
01:25:19.120
character of our experience. But I would say we can be wrong about the character of our experience
01:25:23.280
in important ways. Just to say that if we become more sensitive to what an experience is like,
01:25:29.360
we can notice things that we weren't first noticing about it. And it's not always a matter
01:25:34.480
of actually changing the experience. Obviously, there's conceptual questions here about whether or
01:25:39.120
not being able to discriminate more is actually finding qualia that were there all the time that you
01:25:45.840
weren't noticing, or you're actually just changing the experience. When you learn how to taste wine,
01:25:51.280
are you having a fundamentally different experience? Or are you actually noticing things that you
01:25:56.480
might have noticed before? Or are both processes operating simultaneously? I think it's probably
01:26:01.760
both. Yeah, I mean, I think this whole predictive perception view would come down pretty firmly that
01:26:07.200
at least to some extent, your experience is actually changing because you're developing a
01:26:11.040
different set of predictions. Your predictions are better able to distinguish
01:26:15.760
initially similar sets of sensory signals. So I think, yeah, it's not just that you're noticing
01:26:20.400
different things. Your experiences are changing as well.
01:26:23.360
I mean, to take the experience of pure consciousness that many meditators believe they've had,
01:26:29.120
people have had it on psychedelics as well, and perhaps we'll touch the topic of psychedelics,
01:26:34.560
because I know you've done some research there. But the question is, what I'm calling pure consciousness,
01:26:40.160
was there something there that I could have noticed that was the contents of consciousness that I wasn't
01:26:45.760
noticing there? But the importance of the experience doesn't so much hinge for me on whether or not
01:26:52.960
consciousness is really pure there or really without any contents. It's more that it's clearly
01:27:00.720
without any of the usual gross contents. It's quite possible to have an experience where you're no longer
01:27:08.640
obviously feeling your body. There's no sensation that you are noticing. There's no sense of, you know,
01:27:15.840
proprioception. There's no sense of being located in space. In fact, the experience you're having
01:27:21.280
is a consciousness denuded of those usual reference points, and that's what's so interesting about it.
01:27:27.920
That's what's so expansive about it. That's why it suddenly seems so unusual to be you in that moment,
01:27:34.000
because all of the normal experiences have dropped away. So seeing, hearing, smelling,
01:27:39.760
tasting, touching, and even thinking have dropped away. This is where, for me, the hard problem does
01:27:46.160
kind of come screaming back into the conversation. On many of these accounts of what consciousness is,
01:27:51.600
we should probably move to Tononi's notion of integrated information. On his account,
01:27:57.040
and this is a very celebrated thesis in neuroscience and philosophy, on his account, consciousness
01:28:03.840
simply is a matter of integrated information. And the more information and the more integrated,
01:28:10.560
the more consciousness, presumably. But an experience of the sort that I'm describing of
01:28:17.120
pure consciousness, consciousness, you know, whether pure or not, consciousness stripped of its usual
01:28:22.560
informational reference points is not the experience of diminished consciousness. In fact, the people who
01:28:30.560
have this experience tend to celebrate it as more the quintessence of being conscious. I mean,
01:28:36.720
it's really some kind of height of consciousness as opposed to its loss. And yet, the information component
01:28:45.840
is certainly dialed down by any normal sense in which we use the term information. There are not
01:28:53.200
things being discriminated from one another. And I guess you could say it's integrated, but there are
01:28:58.720
other experiences that I could describe to you where the criterion of integration also seems to fall apart,
01:29:04.240
and yet consciousness remains. So again, this is one of those definitional problems. But if we're going to
01:29:10.480
call consciousness a matter of integrated information, if we find an example of there's something that it's
01:29:17.520
like to be you, and yet information and integration are not its hallmarks, well then it's kind of like
01:29:25.360
defining all ravens as being black and then we find a white one. What do we call it? A white raven or
01:29:31.200
some other bird. Do you have any intuitions on this front?
01:29:35.440
There's an awful lot in what you said just there. I think if we just put aside for a second
01:29:43.280
trying to isolate what we might call the minimal experience of selfhood. Is there anything left
01:29:50.000
after you've got rid of experiences of body and of volition and of internal narratives and so on and so
01:29:56.240
on? Have a thought about that. Just for one point of clarification, I would distinguish this from
01:30:01.920
the loss of self, which I hope we come to. I think you can lose your sense of self with all of the
01:30:08.800
normal phenomenology preserved. So you can be seeing and hearing and tasting and even thinking just as
01:30:15.680
vividly, and yet the sense of self, or at least one sense of self, can drop away completely. This is
01:30:24.480
Yes, I mean that sounds like flow state type experiences in some way. But maybe we can get onto that
01:30:31.200
but if we move indeed to IIT and think about how that might speak to these issues of pure
01:30:40.160
consciousness and whether these experiences serve as some kind of counter example, some
01:30:45.920
phenomenological counterexample to IIT. I think that's very interesting to think about. And it gets
01:30:51.920
at whether we consider IIT, integrated information theory, to be primarily a theory of conscious level,
01:31:00.480
of how conscious a system is, or of conscious content, or of their interaction. Perhaps it's
01:31:06.880
best to start just by summarizing in a couple of sentences the claims of IIT, because you're absolutely
01:31:13.200
right. It's come to occupy a very interesting position in the academic landscape of
01:31:19.200
consciousness research. A lot of people talk about it, although in the last couple of meetings of the
01:31:26.080
Association for the Scientific Study of Consciousness, certainly the last one, there was surprisingly
01:31:30.480
little about it. And I have a thought why that might be, which we can come on to. It's probably
01:31:36.320
worth trying to explain just very briefly what integrated information theory, IIT, tries to do. And what it
01:31:45.040
tries to do, it starts with a bunch of phenomenological axioms. So it doesn't start by asking the question,
01:31:51.920
what's in the brain and how does that go along with consciousness? It tries to identify
01:31:57.360
axiomatic features of conscious experience, things that should be self-evident, and then,
01:32:04.000
from there, derive the necessary and sufficient
01:32:08.880
mechanisms, or really what's a sufficient mechanistic basis given these axioms. IIT will call these
01:32:15.280
postulates. There are actually, in the current version of IIT, five of these axioms. But I think
01:32:21.840
we can just consider a couple of them, and these are the fundamental ones, information and integration.
01:32:26.400
And these you can call axioms, or you can call them just generalizations of what
01:32:35.120
all conscious experiences seem to have in common: information and integration. So the
01:32:41.440
axiom of information is that every conscious experience is highly informative for the organism,
01:32:48.320
in the specific sense of ruling out a vast repertoire of alternative experiences. You're having
01:32:55.040
this experience right now, instead of all the other experiences you could have, you could have had,
01:33:01.360
you have had, you will have, you're having this particular experience. And the occurrence of that
01:33:06.720
experience is generating an enormous amount of information because it's ruling out so many
01:33:11.920
alternatives. As you go through this, I think it will be useful for me to just flag a few points where
01:33:18.640
this phenomenologically breaks down for me. So again, the reference here is to kind of non-ordinary
01:33:24.640
experiences in meditation and with psychedelics. But the meditative experiences for me, at least,
01:33:31.200
have become quite ordinary. I can really talk about them in real time. So the uniqueness of each
01:33:38.160
conscious experience as being highly informative because it rules out so many other conscious
01:33:45.040
experiences. In meditation, in many respects, that begins to break down because what you're noticing
01:33:53.920
is a core of sameness to every experience. What you're focusing on is the qualitative character of
01:34:01.920
consciousness that is unchanged by experience. And so the distinctness of an experience isn't what is so
01:34:11.040
salient. What is salient is the unchanging quality of consciousness in its openness, its centerlessness,
01:34:19.360
its vividness. And one analogy I've used here, and if you've ever been in a restaurant which has had a
01:34:25.760
full-length mirror across one of the walls and you haven't noticed that the mirror was a mirror and you
01:34:31.200
just assume that the restaurant was twice as big as it in fact is, the moment you notice it's a mirror,
01:34:37.440
you notice that everything you thought was the world is just a pane of glass. It's just a play
01:34:43.200
of light on a wall. And so all those people aren't really people or they're not extra people. They're
01:34:48.560
in the room just being reflected over there. And one way to describe that shift is almost
01:34:53.680
a kind of loss of information, right? It's just like there's no depth to what's happening in the
01:34:58.240
glass. Nothing's really happening in the glass. And meditation does begin to converge on that kind of
01:35:05.840
experience with everything. The Tibetan Buddhists talk about one taste, meaning that basically
01:35:11.600
there's a single taste to everything when you really pay attention. And it is because these
01:35:17.600
intrinsic properties of consciousness are what have become salient, not the differences between
01:35:22.480
experiences. So I don't know if that just sounds like an explosion of gibberish to you, but it's a way
01:35:28.320
in which when I begin to hear this first criterion of Tononi's stated as you have, it begins to not map
01:35:36.880
on to what I'm describing as some of the clearest moments of consciousness. Again, not a diminishment of
01:35:44.320
consciousness. That's very interesting. And those states of being aware of the unchanging nature of
01:35:53.600
consciousness, I think that that's really very important. I'm not sure it's misaligned with
01:36:02.640
Tononi's intuition here, because of what the idea of informativeness really means. One way to think about it
01:36:08.160
is that the specific experience that you're having in
01:36:12.240
that meditative state of becoming aware of one taste or of the unchanging nature that underlies all
01:36:20.000
experiences. That itself is a specific experience. It's a very specific experience. You have to have
01:36:25.760
trained in meditation for a long time to have that experience. And the having of that experience
01:36:32.560
is equally distinctive. It's ruling out all the other experiences when you're not having that
01:36:36.960
experience. So it's not so much how informative it is for you at the psychological level.
01:36:43.920
It's a much more reductionist interpretation of information. I think the other way to get at that
01:36:52.000
is to think of it from the bottom up, from the simple systems upwards, and Tononi uses an analogy,
01:36:58.320
which I think has got some value. Why is a photodiode not conscious? Well, for a photodiode,
01:37:04.000
the whole world, in the world, outer world, it's either dark or light. The photodiode doesn't have
01:37:10.080
an experience of darkness and lightness. It's just, you know, on or off, one or zero.
01:37:16.000
And generalizing that, that a particular state has the informational content it has
01:37:24.400
in virtue of all the things it isn't, rather than the specific thing that it is. So we can think
01:37:30.400
about this in terms of color. You know, red is red, not because of any intrinsic redness
01:37:36.400
to a combination of wavelengths, but because of all the other combinations of wavelengths
01:37:42.320
that are excluded by that particular combination of wavelengths. And I think this is really interesting.
01:37:48.800
This point actually precedes integrated information theory, goes right back to the
01:37:54.080
dynamic core ideas of Tononi and Edelman, which was the thing that first attracted me to go and work
01:37:59.600
in San Diego, nearly 20 years ago. And even then, the point was made that an experience of pure darkness,
01:38:07.280
or, you know, complete sensory deprivation, where there's no sensory input, no perceptual content,
01:38:13.600
call this a hypothetical conscious state for now. I don't know how, to what extent, you know,
01:38:18.320
it's approximated by any meditative states. That has exactly the same informational content,
01:38:24.640
as does a very vivid, busy, conscious scene walking down the high street, because it's ruling out
01:38:31.280
the same number of alternatives. And it may seem subjectively different, less informative,
01:38:37.360
because there's nothing going on. But in terms of the number of alternative states
01:38:42.960
that it's ruling out, it's the same. So I think there's a sense in which we can interpret
01:38:48.480
information, this axiom of informativeness, as applying to a whole range of different kinds of
01:38:59.440
conscious contents. Of course, this does get us onto tricky territory about whether we're talking
01:39:04.720
about a theory of level, or a theory of content. But this idea is, yeah, I think it can account for
01:39:13.120
your situation, though it does raise the question of whether we can really get at content specifically
01:39:19.360
there? So the number of states over which you can range as a conscious mind defines how much
01:39:26.640
information is encoded when you're in one of those states? That's right. That would be the claim. And
01:39:33.520
you know, you can think of it in terms of one of the quantities associated with this technical
01:39:40.480
definition of information is entropy. And entropy simply measures the range of possible
01:39:47.920
options and the likelihood of being in any particular one of those options. So entropy
01:39:55.760
is a measure of the kind of uncertainty associated with a system's state. And so, you know,
01:40:03.920
a photodiode can only be in one of two possible states. A single die can be in six possible states,
01:40:10.320
a combination of two dice can be in 36 possible states.
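(A worked version of this counting, under the simplifying assumption that all states are equally likely, in which case entropy in bits is just the logarithm of the number of alternatives a state rules out.)

```python
import math

# Entropy of a uniform distribution over n states: log2(n) bits.
def entropy_bits(n_states):
    return math.log2(n_states)

print(entropy_bits(2))   # photodiode (dark or light): 1.0 bit
print(entropy_bits(6))   # a single die: ~2.58 bits
print(entropy_bits(36))  # two dice jointly (6 x 6): ~5.17 bits
```

And there's actually, I want to linger here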
01:40:18.480
slightly longer because it's in these technical details about information theory that IIT, I think,
01:40:27.520
runs aground because it's trying to address the hard problem. It's because of this identity relationship
01:40:35.280
that Tononi argues for between integrated information. We'll get onto integration in a second,
01:40:40.640
but let's just think about information. It's because of this identity relationship in which he says
01:40:46.640
consciousness simply is integrated information measured the right way. That the whole theory becomes
01:40:54.560
empirically untestable and lame. Because if we're to make the claim that the content and level of
01:41:05.280
consciousness that a system has is identical to the amount of integrated information that it has,
01:41:12.480
that means in order to assess this, I need to know not only what state the system is in and what state
01:41:21.280
it was in previous time steps, let's say we measure it over time, but I also need to know
01:41:29.120
all the states the system could be in, but hasn't been in. I need to know all its possible combinations.
01:41:36.480
And that's just impossible for anything other than really simple toy systems. There's a metaphysical
01:41:42.880
claim which goes along with this too, which is that information has ontological status.
01:41:48.000
This goes back to John Wheeler and It From Bit and so on, that the fact that a system could occupy a
01:41:56.400
certain state but hasn't is still causally contributing to the conscious level and state of the system
01:42:05.440
now. And that's a very strong claim, and it's a very interesting claim. I mean, who knows
01:42:11.200
what the ontological status of information in the universe will turn out to be.
01:42:15.680
But you also have an added problem of how you bound this possibility. So for instance,
01:42:22.320
so not only can you not know all the possible states my conscious mind could be in so as to
01:42:29.600
determine the information density of the current state, but what counts as possible? If, in fact,
01:42:37.600
it is possible to augment my brain even now, I just don't happen to know how to do that,
01:42:42.960
but it's possible to do that, or it'll be possible next week. Do we have to incorporate those
01:42:47.840
possibilities into the definition of my consciousness? If you'd like to continue
01:42:53.280
listening to this podcast, you'll need to subscribe at SamHarris.org. You'll get access
01:42:57.920
to all full-length episodes of the Making Sense Podcast and to other subscriber-only content,
01:43:02.960
including bonus episodes and AMAs and the conversations I've been having on the Waking Up app.
01:43:08.640
The Making Sense Podcast is ad-free and relies entirely on listener support,