#178 — The Reality Illusion
Episode Stats
Length
1 hour and 7 minutes
Words per Minute
167
Summary
Donald Hoffman is a professor of cognitive science at the University of California, Irvine. His writing has appeared in Scientific American and on edge.org, and his work has been featured in The Atlantic, Wired, and Quanta. His new book is The Case Against Reality: Why Evolution Hid the Truth From Our Eyes. In this episode, Sam and Annaka Harris talk with Hoffman about how evolution has failed to select for true perceptions of reality, his interface theory of perception, the primacy of math and logic and what justifies our conviction there, how space and time cannot be fundamental to our framework, the threat of epistemological skepticism, causality as a useful fiction, the hard problem of consciousness, agency and free will, panpsychism, what Hoffman calls the mathematics of conscious agents, philosophical idealism, death, psychedelics, the relationship between consciousness and mathematics, and many other topics. This is the first time Sam and Annaka have jointly interviewed a guest on the podcast. It is a fairly steep conversation, but terms are defined along the way, and for those for whom this is their sort of thing, there is much to love.
Transcript
00:00:00.000
Welcome to the Making Sense Podcast. This is Sam Harris. Okay, housekeeping. Well,
00:00:24.700
the last housekeeping was intense. Got some new music, which all of you are dealing with emotionally.
00:00:33.180
Got some grief over the new music. Let's just hang out with it for a while. See how we feel
00:00:38.860
in the new year. Also dropped a paywall on the podcast. For those who need my rationale around
00:00:48.220
all that, you can listen to the last housekeeping in the public feed. Those of you who are
00:00:54.340
subscribers never even heard it. Anyway, to make a long story short, unless you subscribe to the
00:01:00.280
podcast through samharris.org, you will only be getting partial episodes now. For instance,
00:01:08.700
today's podcast is around three hours long, but if you're listening on the public feed,
00:01:15.120
you'll get the first hour, merely. So if you care about the conversations I'm having here and want
00:01:21.600
to hear them in their entirety, subscribing through samharris.org is the only option.
00:01:28.040
I'm clearly at odds with the trend here of all podcasts being free and ad-supported, but all I can
00:01:35.820
say is that the response has been fantastic, and the podcast is on much better footing even after
00:01:44.780
only a week. So thank you for that. As always, if you actually can't afford a subscription,
00:01:51.720
I don't want money to be the reason why you don't get access to my digital content, whether that's the
00:01:58.460
Making Sense podcast or the Waking Up app or anything else that I might produce in this space.
00:02:05.180
And the solution for that is, again, if you can't afford it, simply send an email to support
00:02:13.360
at samharris.org for the podcast and support at wakingup.com for the app, and you'll get a free
00:02:21.060
year. And you can do that as many times as you need. We don't means test these things. There are no
00:02:27.920
follow-up questions. This is based on your definition of whether you need this for free,
00:02:34.260
and that's as it should be. So anyway, this is the business model. The podcast is now a subscription,
00:02:42.000
just like the app, and if you can't afford it, you can have it for free.
00:02:48.460
Okay. So today I'm speaking with Donald Hoffman, and I'm joined by my wife, Annaka. This is the first
00:02:59.900
time we have jointly interviewed a guest, and I'm sure it won't be the last. Annaka's interest in this
00:03:06.960
topic definitely helped us get deeper into it. Donald Hoffman is a professor of cognitive science
00:03:13.120
at the University of California, Irvine. His writing has appeared in Scientific American,
00:03:18.460
and on edge.org, and his work has been featured in The Atlantic, Wired, and Quanta. And his new book
00:03:26.120
is The Case Against Reality, Why Evolution Hid the Truth From Our Eyes. And there was an article in
00:03:33.480
The Atlantic profiling him that made the rounds. He also had a TED Talk that many found bewildering.
00:03:42.520
As you'll hear, he has what he calls a user interface theory of perception,
00:03:47.160
and many people find this totally confounding, and it can seem crazy at first glance, and even at
00:03:56.840
second glance. And I must say, when I first read The Atlantic article and watched his TED Talk,
00:04:03.500
I wasn't entirely sure what Hoffman was claiming. As you'll hear, Annaka got very interested in his work
00:04:09.340
and had several meetings with him, and then we finally decided to do this podcast. And it is a fairly
00:04:16.760
steep conversation. I do my best to define terms as we go along. But for those of you for whom this is
00:04:27.260
your sort of thing, I think you'll love it. Over the course of three hours, we really leave virtually no
00:04:35.680
stone unturned. In this area, we talk about how evolution has failed to select for true perceptions
00:04:43.040
of reality. We talk about Hoffman's interface theory of perception. We talk about the primacy of math and
00:04:50.800
logic, and what justifies our conviction there. We talk about how space and time cannot be fundamental
00:04:57.820
to our framework. We talk about the threat of epistemological skepticism. Causality is a useful
00:05:06.520
fiction. The hard problem of consciousness. Agency, free will, panpsychism, what Hoffman calls the
00:05:15.020
mathematics of conscious agents. Philosophical idealism, death, psychedelics, the relationship between
00:05:23.700
consciousness and mathematics, and many other topics. And now Annaka and I bring you Donald Hoffman.
00:05:35.780
We are here with Donald Hoffman. Donald, thanks for joining us.
00:05:40.700
So this is unusual. This is the first time that Annaka, my wife, has joined me here; she's only been on the podcast once, and
00:05:47.400
many of our listeners will remember that podcast. It's the first time anyone has heard me laugh out loud
00:05:52.060
in a decade. So you came to my attention on the basis of an Atlantic article, I think, that was
00:06:01.180
making the rounds. And you also had a TED Talk. I don't know which preceded the other. But then Annaka
00:06:06.940
just got completely obsessed with what you were doing. And maybe once a month or so, I would hear that
00:06:13.840
there was some export from a conversation she was having with you. So it just seemed like it would be
00:06:18.740
professional malfeasance for her not to really anchor this conversation. So, Annaka.
00:06:25.600
That was all in the context of my writing my book. I was doing research for my book. And
00:06:29.900
Don was working on a book on a similar topic, or really on the same topic, with a different
00:06:34.980
perspective. And so I had wanted his input on my manuscript and was honored that he trusted me with
00:06:43.860
his manuscript. We were actually in the writing process
00:06:47.060
together, so we gave each other notes. And then Don was extremely generous with his time and continued
00:06:53.400
to meet with me as I had many follow-up questions, and put up with my curiosity,
00:07:01.300
even though I'm not sure any of it was helpful to you. But it was great for me;
00:07:07.020
it was very much fun for me and very, very helpful, because you also gave me feedback on my book and
00:07:11.500
really helped bring my book to a broader audience as well. So I was grateful. And I was really
00:07:15.740
grateful that you did all the driving. Yeah. Right.
00:07:19.460
So before we jump into your thesis, which, I mean, has the virtue of being on what I think is
00:07:27.940
perhaps the most interesting topic of all. And some of the points you make are so counterintuitive
00:07:33.100
as to seem crazy on their face. So it's going to be fantastic to wade into this with you. But
00:07:45.100
how do you summarize your academic and intellectual background before we get started?
00:07:45.100
Well, so I did my undergraduate bachelor's at UCLA in what was called quantitative psychology. It was
00:07:52.160
like a major in psychology and a minor that had like computer science and math courses in it.
00:07:58.780
And while I was doing that, I took a graduate class with Professor Ed Carterette, in which we were
00:08:04.420
looking at artificial intelligence and ran across the papers of David Marr. This is like around 77,
00:08:10.580
78. And his papers just really grabbed my attention. Here was a guy that was trying to build visual
00:08:17.240
systems that worked with mathematical precision, not just waving your hands, but actually writing
00:08:21.940
down mathematics and something that you could actually build eventually into a robotic vision
00:08:26.180
system. So I found out he was at MIT in the AI lab and what's now the brain and cognitive sciences
00:08:31.780
department. And I was lucky enough to get to go there and work with him. He died a little over
00:08:38.900
a year after I was there. So I only got to work with him for 14 or 15 months.
00:08:44.660
He was 35. He had leukemia. But I did get to work with him and see how his mind worked. It was revolutionary.
00:08:51.940
It was a wonderful time there at MIT. And then my other advisor was Whitman Richards. David Marr and
00:08:59.700
Whitman Richards were my joint advisors. And then Whitman was my sole advisor after Marr died. And
00:09:05.300
so I was very interested in going there in the problem of, you know, are we machines? And I figured
00:09:12.580
what better way to get at that question than doing something in an artificial intelligence lab where we
00:09:17.860
try to build machines and understand the scope and limits of what machines could do. So I was always
00:09:22.740
very interested in human nature and how, you know, artificial intelligence is related to humans. Are
00:09:28.420
we just artificial intelligence as ourselves, just machines, or is there something more? And I didn't
00:09:33.380
want to hand wave. I really wanted to understand what it means to be a machine and what might be
00:09:38.180
different or not about humans. And so that's sort of my intellectual background. And what I focused on
00:09:44.420
because, you know, of Marr was perception, visual perception.
00:09:48.100
Yeah. So he wrote a book that was quite celebrated, a very, you know, early detailed look at visual
00:09:54.580
perception. It's amazing what a contribution he made in such a short time. Decades after his death,
00:10:02.180
you know, his book is still recommended as a must read book in cognitive science and neuroscience.
00:10:07.700
Absolutely. It was brilliant. And he was brilliant in person. The lab meetings
00:10:13.220
were electric. He had assembled this world-class group of scientists around him. They
00:10:21.140
congregated around him. And I was just so lucky to be watching this new science being revolutionized
00:10:27.700
by this young man. Yeah. At 35, he did all this and died. It was truly stunning.
00:10:34.820
Yeah. You're now at Irvine as a professor, right?
00:10:37.540
That's right. University of California at Irvine.
00:10:39.140
Yeah. And you have been meeting over the years with some of the great lights in
00:10:46.500
consciousness studies, for lack of a better term. There were these meetings of the
00:10:58.900
Helmholtz Club. Yeah. And that had Francis Crick in it. And I never met Francis,
00:10:58.900
but Joe Bogen, who you write about in your book, is somebody who I did meet, and he was
00:11:12.340
Yeah. He was the neurosurgeon who did the bulk of the split-brain procedures for which
00:11:16.020
Roger Sperry won the Nobel Prize. That's right.
00:11:20.100
And Eran Zaidel at UCLA was involved in that work, and Michael Gazzaniga.
00:11:20.100
Yeah. Before we jump in, I want our listeners to be sensitized to how seemingly preposterous
00:11:30.500
some of your initial claims will be. And I can guarantee you that on certain of these
00:11:36.580
points, the sense of their counterintuitiveness will wear off. And there's something thrilling
00:11:42.500
about this. I mean, this is the thrill that was exemplified by Annaka's obsession with your work, which
00:11:54.020
I know has spread to other people. We have a friend, who perhaps I shouldn't name, who claimed that
00:11:59.060
she accosted you at some function and just completely fangirled you as a groupie.
00:11:59.060
So we know, I think, once you start wearing sunglasses indoors, you will have
00:12:11.380
started a cult, and then we will put the word out against you. But in the meantime, perhaps
00:12:11.380
the best place to start, I mean, I would imagine we should just track through it the way you do it
00:12:15.380
in your book, starting with the interface theory of perception, but you can start wherever you want.
00:12:20.100
And we, we just want to go through it all and we'll have questions throughout.
00:12:23.140
Right. So most of my colleagues who study perception assume that evolution by natural
00:12:33.220
selection has shaped us to see truths about the world. None of my colleagues think that we see
00:12:40.260
all of reality as it is, but most of my colleagues would argue that accurate perceptions, what we call
00:12:47.060
veridical perceptions, perceptions that tell us truths about the world, will make us more fit.
00:12:52.020
So accurate perceptions, veridical perceptions are fitter perceptions. And the argument that's
00:12:58.740
classically given is actually quite intuitive. So the idea is that those of our ancestors who
00:13:04.740
actually were better at feeding, fighting, fleeing and mating because they could see reality as it is,
00:13:09.620
were more likely to pass on their genes, which coded for the more accurate perceptions. And so after
00:13:15.860
thousands of generations of this process, we can be quite secure that our perceptions are telling
00:13:21.780
us truths about the world. Of course, not exhaustive truths, but the truths that we need. We see
00:13:27.460
those aspects of reality that we need to stay alive and reproduce. And that seems like a really
00:13:33.380
compelling argument. It seems very, very intuitive. How could it go wrong?
00:13:36.420
So at first glance, it seems some measure of veridicality, some measure of being in touch with
00:13:43.380
reality as it is, would increase an organism's fitness. There must be a fit between tracking...
00:13:59.940
Exactly. That's the standard intuition for most of my colleagues.
00:14:06.500
Stephen Pinker has actually published papers where he points out some contradictions to that idea.
00:14:11.380
But most of my colleagues would go with the idea that, yeah, it's better. It's more fit to
00:14:11.380
see reality as it is, at least part of reality. Well, I began to think that that might not be true
00:14:18.740
because my initial intuition was that maybe it would just take too much time and too much energy
00:14:24.740
to see reality as it is. So evolution tries to do things on the cheap. So maybe
00:14:29.620
the pressures to do things quickly and cheaply would maybe compromise our ability to see the
00:14:40.180
truth. And so I began to work with my graduate students, Justin Mark and Brian Marion, around
00:14:40.180
2008 or so, 2009. And I had them write some simulations where we would simulate foraging
00:14:48.500
games where we could create worlds with resources and put creatures in those worlds that could roam
00:14:52.900
around and compete for resources. And some of the creatures we let see all the truth. So they were
00:14:58.820
the veridical creatures, and others I didn't let see the truth at all. We had them only see the
00:15:04.420
fitness payoffs and we can talk about what fitness payoffs mean. That's an important concept. But
00:15:08.980
what we found in these simulations was that the creatures that saw reality as it is
00:15:16.180
couldn't out-compete the creatures of equal complexity that saw none of reality and were just tuned
00:15:22.420
to the fitness payoffs. And so that began to make me think there was something real here. So now I should
00:15:28.420
say what fitness payoffs are. So you can think of evolution by natural selection
00:15:35.540
much like a video game. So in a video game, your focus is to collect points as quickly as you can
00:15:44.340
without being distracted by other things. And if you get enough points in a short enough time,
00:15:49.380
you then might get to go to the next level. Otherwise you die. And in evolution by natural selection,
00:15:54.500
it's very, very similar. Instead of the game points, you have fitness payoffs, and you
00:16:01.540
go around collecting them as quickly as you can. And if you get enough, you don't go to the next
00:16:06.740
generation, but your genes get passed to the next generation. And so, to be a little bit more
00:16:11.940
specific, think about the fitness payoff that say a T-bone steak might offer. So that if you're a hungry
00:16:20.660
lion looking to eat, that T-bone steak offers lots of fitness payoffs. But if you're that same
00:16:26.740
lion and you're full and you're looking to mate, all of a sudden that T-bone steak offers you no
00:16:31.300
fitness payoffs whatsoever. And if you're a cow in any state and for any activity, that T-bone steak
00:16:36.900
is not a fit thing for you whatsoever. And so that gives you an intuition
00:16:43.700
about what we mean by fitness payoffs in evolutionary theory. Fitness payoffs do depend on the state of
00:16:51.700
the world, whatever the objective reality might be. They do depend on the state of that world,
00:16:55.540
but also, and importantly, on the organism, its state and the action. And so fitness payoff functions
00:17:04.820
are very complicated functions. And the state of the world is only one of the parts of the domain of
00:17:11.220
that function. There's lots of other aspects to it. And so they're really, really complicated
00:17:15.300
functions of the state of the world and the organism, its state and its action.
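The two ideas Hoffman has just laid out, a payoff that depends jointly on the world, the organism, its state, and its action, and a foraging competition in which payoff-tuned perception beats truth-tuned perception, can be sketched as a toy simulation. To be clear, everything below is invented for illustration: the payoff curve, the patch quantities, and the two strategies are a minimal sketch of the "fitness beats truth" idea, not Hoffman's actual simulations.

```python
import random

# Toy fitness-payoff function: payoff depends on the state of the world
# (how much resource is present) AND on the organism, its state, and its
# action -- the T-bone steak example in miniature.
def fitness_payoff(resource, organism, state, action):
    if organism == "lion" and state == "hungry" and action == "eat":
        return resource        # plenty of payoff for a hungry lion
    return 0.0                 # a sated lion looking to mate, or a cow: none

print(fitness_payoff(8.0, "lion", "hungry", "eat"))   # 8.0
print(fitness_payoff(8.0, "lion", "sated", "mate"))   # 0.0
print(fitness_payoff(8.0, "cow", "hungry", "eat"))    # 0.0

# Payoff as a non-monotonic function of resource quantity: too little is
# useless, too much is harmful. The curve peaks at quantity 5.
def payoff_curve(quantity):
    return quantity * (10.0 - quantity)

def choose_patch(strategy, patches):
    if strategy == "truth":
        return max(patches)                     # perceives raw quantity
    return max(patches, key=payoff_curve)       # perceives payoff only

random.seed(0)
truth_total = fitness_total = 0.0
for _ in range(1000):
    patches = [random.uniform(0.0, 10.0) for _ in range(3)]
    truth_total += payoff_curve(choose_patch("truth", patches))
    fitness_total += payoff_curve(choose_patch("fitness", patches))

# The payoff-tuned forager earns at least as much payoff on every choice,
# so over many rounds it out-competes the truth-tuned forager.
print(fitness_total > truth_total)  # prints True
```

The design choice that drives the result is the non-monotonic payoff curve: once payoff is not a simple increasing function of the true quantity, ranking patches by truth and ranking them by payoff come apart, and selection rewards only the latter.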
00:17:18.740
Right. Well, so now I think you should introduce the desktop analogy, because again,
00:17:24.420
what you just said can sound suspiciously similar to more or less what every life scientist and
00:17:32.340
certainly neuroscientists would agree is true, which is whatever reality is, we see some
00:17:41.140
simulacrum of it that is, you know, broadcast to us by the way our nervous system sections up the
00:17:47.540
world. So, you know, we see within a certain bandwidth of light, and bees detect
00:17:52.820
another bandwidth. And we, by the very nature of this, don't get all the information that's available to
00:18:01.460
be gotten. So we don't have a complete picture of the thing in itself or the reality that's behind
00:18:08.180
appearances. But implicit in that kind of status quo assumption is that the things we do see
00:18:17.940
really exist out there in the real world in some basic sense in space and time. Again, it's not clear
00:18:26.740
how much gets lost in translation, but there is some conformity between what we see as a glass of
00:18:32.580
water on the table and a real object in the world in, you know, third-person space. How is your vision
00:18:41.540
of things departing from what is now scientific common sense? Yeah, it does depart dramatically from
00:18:46.500
that standard view. The standard view, as you said, is that we may not see all of the truth, but we do see
00:18:51.940
some aspects of reality accurately. And what the evolutionary simulations and then later theorems that
00:18:58.340
my colleague Chetan Prakash proved indicate is that our perceptions were shaped by natural selection
00:19:06.100
not to show us just the little bits of truth we need to see, but rather to hide truth altogether and to
00:19:12.340
give us instead a user interface. So, you know, a metaphor I like to use is if you're writing a book
00:19:19.140
and the icon for the book is blue and rectangular in the middle of your screen, does that mean that
00:19:25.540
the book itself in your computer is blue, rectangular, and in the middle of the computer? Well, of course not.
00:19:30.580
Anybody who thought that really misunderstands the point of the user interface. It's not there
00:19:35.860
to show you the truth, which in this metaphor would be the circuits and software and voltages
00:19:41.300
in the computer. The interface is there explicitly to hide the truth. If you had to toggle voltages to
00:19:50.740
write a book, you'd never get done. And if you had to toggle voltages to send an email, people would
00:19:54.980
never hear from you. So the point of a user interface is to completely hide the reality and to give you
00:20:04.580
a very, very simplified user interface to let you control the reality as much as you need to control
00:20:10.020
it while being utterly ignorant about the nature of that reality. And that's what the simulations that
00:20:16.820
I've done with my students and the theorems that I've done with Chetan Prakash indicate is that
00:20:22.340
natural selection will favor organisms that see none of the truth and just have this simplified
00:20:30.820
user interface. So to be very explicit, three-dimensional space, as we perceive it,
00:20:36.980
is just a three-dimensional desktop. It's not an objective reality independent of us.
00:20:41.860
It's just a data structure that our sensory systems use to represent fitness payoffs and how to get them.
00:20:49.700
And three-dimensional objects like tables and chairs, even the moon, are just three-dimensional icons
00:20:55.140
in that interface. So once again, they're not our species' representations of a true glass that's really
00:21:04.340
out there or a true table that's out there. They are merely data structures that we're using to represent fitness payoffs.
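The icon picture can be made concrete as a many-to-one map: perception takes hidden world states to interface symbols, preserving payoff information while discarding the structure of the world. The world states, the payoff function, and the icon labels below are all invented for illustration; this is a sketch of the idea, not Hoffman's formalism.

```python
# Toy interface: perception maps world states to icons by payoff,
# not by resemblance. Many distinct world states share one icon.
def payoff(world_state):
    # invented payoff function over hidden world states 0..99
    return world_state % 4              # only four payoff levels exist

def perceive(world_state):
    # The 'icon' encodes the payoff level and nothing else.
    return f"icon-{payoff(world_state)}"

# Distinct world states become indistinguishable in perception...
print(perceive(3) == perceive(7))       # True: same icon, different worlds
# ...so the interface still guides action, because payoff is preserved...
print(payoff(3) == payoff(7))           # True
# ...but the true world state cannot be recovered from the icon.
icons = {perceive(w) for w in range(100)}
print(len(icons))                       # 4 icons compress 100 world states
```

Because the map is many-to-one, an observer equipped only with icons can collect payoffs perfectly well while remaining, in Hoffman's phrase, utterly ignorant of the reality behind them.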
00:21:11.940
So yes, in this first description of this wonderful analogy you use with the desktop and also of how
00:21:21.300
evolution gives us this false picture of what the deeper reality actually is. I have a few questions
00:21:27.700
here. I'm going to start, I'm not quite sure where it will go, but there are at least three things that
00:21:32.580
have been brought up so far that I feel like it's important for us to get clear on terminology and
00:21:38.580
framework before I start really disagreeing. And I should say that you and I have now had
00:21:45.860
many meetings together. I spend a lot of time challenging you, mostly because I actually think
00:21:50.580
there's something very interesting that you're doing and I think you're onto something. And so,
00:21:54.820
you know, in the same way that in my editing work, I give the most notes to the books I'm most
00:22:00.100
passionate about. It's in that spirit. So beginning with evolution, I've actually said to you many times
00:22:06.660
that I don't actually think you need the evolution argument to make your case for your theory.
00:22:12.420
So some of this pushback is actually moot, but I still think it's interesting. And I think
00:22:17.460
I agree with this evolution argument up to a point. So my first question is really to
00:22:24.420
just get us on the same page, or see if we are on the same page, as a starting point.
00:22:29.020
I know that you believe that, or you're hopeful, you're optimistic about the fact that we can
00:22:37.020
ultimately understand what that deeper reality is. And so there must be boundaries
00:22:44.940
to the systems that we're using, our brains, which have evolved, where we can actually get access
00:22:52.620
to the truth. So up to a point, our brains are giving us all of this false information,
00:22:58.860
but there's some sense in which we can actually get access to things that are true about the nature
00:23:04.220
of reality. So my question is, where do you draw the boundary of an evolved system that by definition
00:23:11.980
gives us false information about the nature of reality? So that outside that boundary is where we
00:23:18.160
might be able to gain access to information that delivers us the truth. And there's kind of a
00:23:23.260
second part to that, which is where we might disagree. I believe we've already begun to cross
00:23:29.180
that boundary with science. And so the way I follow your evolution argument is simply about direct
00:23:36.480
perceptual information that we get, rather than ideas and scientific experiments. So if you just take
00:23:51.460
light, which I think is always the simplest example: we have not evolved perceptual systems
00:23:56.040
to really understand what light is, right? Like everything we've learned about light
00:23:56.040
through the sciences, up to quantum mechanics, where it gets completely mysterious, and we really
00:24:01.600
don't actually know what light is. So we can kind of all agree, and not just the three of us in
00:24:08.180
this room, but all of us, most scientists would agree, that ultimately
00:24:13.560
we still don't have this information about what the fundamental nature of reality is.
00:24:19.740
We're still stuck there. But I would say that we've gotten much closer to that
00:24:26.120
by these processes that I think are outside the boundary of this evolved system that is by definition
00:24:35.880
delivering us false information. Right. Great question. And there are a couple of points about it. First,
00:24:43.560
the arguments that I've given from evolution by natural selection against veridical perceptions
00:24:49.520
do not hold against math and logic. So that's very, very different from some others, like the Christian
00:25:04.600
apologist Alvin Plantinga, who has made an argument that sounds very similar to mine,
00:25:04.600
that they say that if our senses, if our cognitive capacities evolved, they would be unreliable.
00:25:10.460
That includes our theory building capacity, and therefore the theory of evolution is unreliable,
00:25:14.820
and therefore evolution is false. I'm making no such argument. Right. It's the
00:25:19.760
furthest thing from my mind. I'm focused only on the senses. And the reason why the argument
00:25:25.760
that says our senses are not veridical doesn't hold for math and logic is that
00:25:29.880
there are evolutionary pressures for us to reason about fitness payoffs. Two bites of an apple give
00:25:39.500
you roughly twice the fitness payoff of one bite of an apple. Whatever objective reality might be,
00:25:44.840
we need to be able to reason about fitness payoffs. And so, whereas the selection pressures are
00:25:50.240
uniformly against veridical perceptions, they're not uniformly against some elementary competence in
00:25:57.840
math and logic. Now, I'm not, of course, arguing that natural selection is shaping us to be geniuses
00:26:02.900
of math and logic. Far from it. It's just that the selection pressures are not uniformly against
00:26:07.420
ability. And every once in a while you get a, you know, a genius coming up.
00:26:11.340
But don't we think the math and logic are giving us space-time? I mean, this can get into a deeper
00:26:17.900
question because, of course, we now have quantum mechanics, which is putting all of this into
00:26:22.220
question. And many physicists, if not most, are talking about space-time being something that
00:26:28.360
emerges out of something more fundamental. But they would still say that it emerges. And so,
00:26:33.800
it seems that it's hard to take. So, I guess my argument with where you take this evolution
00:26:43.020
argument is as far as space-time itself. Because it seems that we don't yet know whether or not
00:26:52.220
space-time is a true illusion in some sense. But I would say our math and logic has taken us...
00:27:04.320
Actually, let me see if I can add to this point, because this is something that came up for me as
00:27:09.540
well. So, if we confine this to perception, for me, it's no longer counterintuitive. But again,
00:27:16.220
this will be counterintuitive for many, many people. But so, the claim is that
00:27:22.220
fitness trumps truth so fully that apprehending the truth, perceptually, is just not an evolutionarily
00:27:31.180
stable strategy. You're going to be driven to extinction among creatures that are optimized
00:27:35.520
for fitness. And that sounds a little crazy. But when you think of what fitness means,
00:27:41.020
fitness means simply being optimized for survival and procreation, right? So, as long as you're
00:27:47.920
optimizing for that, it's easy to see that you successfully out-compete anything that isn't
00:27:52.900
optimized for that. And there's also this additional piece, which you mentioned, which is that there's
00:27:58.380
clearly fitness value, i.e. survival value, in throwing away information that isn't related to
00:28:06.620
fitness, right? So that every organism is going to have some bandwidth limits and metabolic limits,
00:28:12.340
and tracking every fact that's out there to track can't be a priority. And then there's this additional
00:28:20.360
component, which is if the ability to make certain distinctions doesn't relate to increased
00:28:25.420
fitness, evolution would not have selected for that ability to make those distinctions, right? So you'll
00:28:30.540
expect organisms to be blind to certain features of reality just in principle. But there is a sense in
00:28:38.220
which your thesis does bite its own tail and seems to at least potentially subvert itself in that
00:28:47.860
the moment you start to say that, okay, space and time, they don't exist, they're data structures,
00:28:54.100
therefore our notion of objects is a pure interface issue. It's just, it's like a trash can on the
00:29:00.240
desktop. It doesn't really map onto reality as it is. You just bracketed logic and rationality.
00:29:07.160
Mm-hmm. Which may be defensible, but evolution itself, the very notion of natural selection
00:29:13.920
is more than just rationality. It is a causal picture. And we might say that causes, and the
00:29:20.960
notion of cause and effect, right, or the notion that causes precede their effects rather than some
00:29:26.540
notion of teleology, these things are also just data structures. So that like every piece you want
00:29:34.040
to put on the board to give a Darwinian account of anything does sort of fall in the bin of more
00:29:40.120
space and time, more objects. And so how doesn't this thing completely subvert itself and land you
00:29:45.600
in something like just a global skepticism, which says, you know, we're in touch with some seeming
00:29:51.940
reality which we really can't ever know anything fundamental about?
00:29:57.120
Yeah, great question, both of you. So the idea first that evolution by natural selection,
00:30:03.660
as we all know and love it, involves things like DNA and organisms in space and time and so forth.
00:30:10.880
So how could I ever use the theory of evolution to show, and claim to show, that things like DNA
00:30:15.860
are just data structures? They're just interface symbols. And the reason I can do that is because
00:30:22.800
John Maynard Smith actually took the theory of evolution by natural selection and mathematized
00:30:29.320
it. He realized that we could abstract away from all of the extraneous empirical
00:30:33.940
assumptions of space and time and DNA and so forth. And we could look at what he calls just
00:30:38.980
evolutionary game theory. And so the logic of natural selection itself can be reduced to
00:30:44.780
competing strategies, where you make no ontological assumptions whatsoever about the world in which
00:30:50.600
those strategies are playing. So when someone says natural selection favors true
00:30:58.840
perceptions, evolutionary game theory provides you precisely the tool you need to assess
00:31:05.900
that question independent of all these other empirical assumptions that are standard in biological
00:31:11.300
evolutionary theories. And so that allowed me to do this. Now, there's another aspect to the
00:31:17.660
argumentative strategy that I'm taking here. And that is that one reason that I went after the
00:31:23.540
evolutionary argument was that I actually announced the interface theory in my 1998 book, Visual
00:31:28.780
Intelligence. And people liked the book except for the chapter on the interface theory. And I thought
00:31:33.040
that was nuts. And I realized I wasn't going to get my colleagues to pay attention to that idea
00:31:38.200
unless I talked to them in a language that they really understood. And it was that that motivated me
00:31:43.400
to go after the evolutionary argument a few years later. So the reason I use evolution is not because
00:31:49.100
it's necessarily the best argument; it's because it's the argument that I knew my colleagues would listen
00:31:53.940
to. So first, I'm abstracting away from the whole apparatus of biological evolution to just the
00:32:00.220
nuts and bolts of evolutionary game theory, which doesn't bring in the ontological assumptions.
00:32:05.200
And second, my attitude as a scientist toward any scientific theory is that they're just the best tools we have
00:32:10.980
so far. I don't believe any scientific theories, including my own. I think belief is not a helpful
00:32:16.100
attitude. This is the best tool we have so far. Let's look at what this tool says about the claim
00:32:22.560
that natural selection favors veridical perceptions. And what that tool is saying to
00:32:29.340
me is there's just no grounds for thinking that any of our perceptions of space and time and objects
00:32:34.940
in any way capture the structure of whatever objective reality might be.
00:32:41.540
And one thing that's nice about this mathematics as well is you might say, well, how in the world
00:32:45.880
could you possibly show that the structure of our perceptions doesn't capture the structure of the
00:32:52.020
world unless you knew already what the structure of the world is? I mean, aren't you shooting yourself in the
00:32:56.640
foot there? And it turns out you don't have to. It's really quite wonderful in the mathematics that you can
00:33:02.680
show that whatever the structure of the world might be, the probability is zero that our perceptions preserve that structure.
00:33:08.560
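The flavor of that claim can be shown with a toy simulation. Everything here (the payoff curve, the two strategies, the numbers) is invented for illustration and is not taken from Hoffman's actual theorems; it just shows how a perception tuned to fitness payoffs can outscore a perception tuned to the true quantities.

```python
import random

# Hypothetical setup: a resource quantity r in [0, 10] whose fitness payoff
# peaks in the middle -- too little is useless, too much is harmful.
def payoff(r):
    return max(0.0, 10.0 - 3.0 * abs(r - 5.0))

random.seed(0)
truth_total = fitness_total = 0.0
for _ in range(10_000):
    a, b = random.uniform(0, 10), random.uniform(0, 10)
    # "Truth" strategy perceives the actual quantities and takes the larger one.
    truth_total += payoff(max(a, b))
    # "Fitness" strategy perceives only the payoffs and takes the higher one.
    fitness_total += max(payoff(a), payoff(b))

print(fitness_total > truth_total)  # prints True: fitness beats truth here
```

Seeing the true quantity is actively misleading whenever more of the resource is worse, which is why the fitness-tuned strategy can never do worse here and usually does better.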
Right. And that makes sense to me too. I'm still stuck on how it extends all the way to space and time.
00:33:17.800
And I think we shouldn't spend too much time on the evolution piece, mostly because I actually think
00:33:23.940
you don't need it. But just from a philosophical perspective, I think it's very interesting. And I'm still
00:33:30.340
curious myself kind of how far this goes, because it's clearly true up to a point at least. So if
00:33:38.040
Darwinian evolution by natural selection is a theory about objects in space and time, I mean, this is
00:33:44.860
just a question for you about how you view this. Where can you stand outside of space, time, and matter
00:33:52.000
to talk about evolved perceptual systems? But more specifically, what does evolution look like? Or
00:34:01.740
how do you even talk about evolution outside of space and time? So what are we saying is evolving?
00:34:07.400
What are we saying is surviving? What do evolution and survival even mean in a context outside of space
00:34:14.600
and time? Or is that just an abstract idea?
00:34:17.780
No, that's the right question. And that's the power of evolutionary game theory. What John
00:34:22.860
Maynard Smith was able to do was to show we could talk about abstract strategies competing,
00:34:29.560
not in any particular assumption about space and time. He was able to abstract away from all the
00:34:35.940
details of biological evolution in space and time and organisms and say the essence of Darwin's idea
00:34:43.380
are these abstract strategies. And we can look at how these strategies compete in an abstract space.
00:34:51.440
So what is it that's surviving? Is it an idea? Is it a meme? What survives?
00:34:56.520
So what you do is you imagine that there's a population of entities that are
00:35:04.640
competing using these strategies. So they're abstract entities in an abstract space with these strategies.
00:35:10.420
And then there's something called the replicator equation. And what you find
00:35:16.580
in the replicator equation is that the number of entities that have a good fitness strategy will
00:35:22.840
start to increase. Their proportion will increase. The strategies that have worse fitness payoffs
00:35:27.680
will decrease in proportion. And so what you track is how the proportions in the population shift over time.
00:35:37.280
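The replicator dynamics Hoffman describes can be sketched in a few lines. The two-strategy payoff matrix below is a made-up example, not from any particular model; it just exhibits the behavior he mentions, where the proportion playing the higher-payoff strategy grows.

```python
# Discrete-time replicator dynamics over two abstract strategies.
# payoff[i][j] is the (hypothetical) fitness payoff to strategy i
# when it meets strategy j; strategy 0 dominates strategy 1 here.
payoff = [[2.0, 1.5],
          [1.0, 1.0]]

x = [0.5, 0.5]  # proportion of the population playing each strategy
dt = 0.01
for _ in range(2000):
    fitness = [sum(payoff[i][j] * x[j] for j in range(2)) for i in range(2)]
    avg = sum(x[i] * fitness[i] for i in range(2))  # population-average payoff
    # Replicator equation: a strategy's share grows in proportion to how
    # far its expected payoff exceeds the population average.
    x = [x[i] + dt * x[i] * (fitness[i] - avg) for i in range(2)]
    total = sum(x)
    x = [xi / total for xi in x]  # renormalize against numerical drift

print(x)  # strategy 0's proportion has climbed toward 1
```

Note that nothing in the update rule says what the entities are, only how their proportions change, which is the sense in which no ontological assumptions are being made.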
Well, then I guess my question goes back to what, what do you mean by entity?
00:35:41.380
So these are just abstract entities. In evolutionary game theory,
00:35:45.480
you don't need to know what the entities are. They're just place markers.
00:35:48.440
You're imagining they're entities outside of space and time.
00:35:52.640
And that's what the mathematics allows you to do.
00:35:54.960
Well, let me just piggyback on this. Now you're getting tag-teamed.
00:35:59.580
So, I apologize in advance, but isn't the very notion of competition and differential
00:36:07.020
success parasitic on the notion of time, parasitic on the notion of causes preceding
00:36:14.680
their effects? And entities, you know, I think what Annaka is fishing for there is that entities
00:36:20.000
seem somehow derivative of objects, at least the concept of an object. I mean, we're talking
00:36:26.260
about something that's discrete, that's not merely a continuous reality, right? Things
00:36:32.220
can be differentiated. So how are we not using the same cognitive tools that have got hammered
00:36:37.360
into us by evolution, whose process only selected for fitness and therefore left us
00:36:43.400
epistemologically closed to the nature of reality?
00:36:46.420
Absolutely. So you're right that the replicator equation itself does have a time
00:36:52.260
parameter, right? Or at least a sequence parameter. It depends on whether you do it discretely
00:36:56.240
or continuously. And so that's going to be built into it. Absolutely. So by the way, as I said,
00:37:02.340
I'm not committed to the truth of evolution by natural selection. I'm just using that theory
00:37:07.580
itself to say that whatever the structure of the world is, that that theory says the chance is zero
00:37:14.680
that our perceptions actually have captured that structure. It leaves it open to ask, is there a
00:37:21.080
deeper theory of objective reality that will give back evolution by natural selection as a special
00:37:26.680
case within what I call our space-time interface? And that's actually what I'm hoping for: to have
00:37:32.220
a deeper theory that'll go beyond space and time. It'll go beyond time in the sense
00:37:37.840
that there will be sequence and there will be perhaps a notion of effect following cause, but not in a
00:37:44.240
global space-time temporal framework. It'll be completely asynchronous and so forth. And we'll get
00:37:50.660
what we call causality in, say, Minkowski space, Einstein's Minkowski space, or a general relativistic
00:37:56.200
curved space-time, as a projection of a much deeper theory of reality in which the very notion of
00:38:05.260
dimension doesn't hold and in which time doesn't hold. We can show that. So far, I'm thinking
00:38:10.420
about dynamics on abstract graphs and asynchronous dynamics, but that can be projected
00:38:16.860
and simplified into what we call space-time and its causality, say in Minkowski space.
00:38:21.600
Mm-hmm. I think it's just useful as a launching-off point for every place we'll go from here
00:38:28.140
to just say that at the very least, I think this evolution argument is very useful in terms of
00:38:35.600
opening our eyes to something that I actually think, in some sense, we already know. And,
00:38:41.380
and again, you know, looking at something like light is a good example, where we clearly have not been
00:38:46.800
given any perceptual tools to understand how electrons operate, or what is actually
00:38:55.860
happening at a fundamental level. And of course there are all these theories now, from
00:39:00.420
string theory to many worlds, trying to sort out all of these things that we see
00:39:06.300
through our science that we have absolutely no intuitions for, we have no insight into, we're just
00:39:10.720
getting at through math and logic. And so clearly we haven't evolved systems that help us
00:39:16.360
here. And so I feel like we can agree on two points and move onward from there. The first is that
00:39:25.860
we can all agree, and, you know, scientists in general agree, that we don't know what's fundamental,
00:39:31.020
nor do we perceive the truth about the fundamental building blocks of reality. And two, and this is
00:39:37.880
where I'd like to set this up for where consciousness is about to come in:
00:39:43.280
we can agree that physical science has not given us an explanation for consciousness. We have no
00:39:49.240
understanding of how consciousness arises out of physical processes. And so it seems that we can at least
00:39:54.520
agree that it's a legitimate question or it's a legitimate project to wonder if consciousness is
00:40:00.360
something that's more fundamental, and that we're missing that piece, and that we've thought
00:40:04.420
about it backwards all this time. And that's one of the things that I think is so great about your
00:40:08.600
work. And I think this is a very important project.
00:40:11.920
Okay. So before we get to consciousness, which is central to our interest and where there's more
00:40:18.060
controversy, at least in my mind, I want to anchor what you've said to a very straightforward perception
00:40:24.900
so that our listeners can get in touch with how counterintuitive your thesis is. So when, you know,
00:40:32.980
the three of us are in a room together, apparently there are objects we can see, what is the status of
00:40:39.980
those objects, like a glass of water, when none of us is looking at it? And what is its status given
00:40:51.000
the fact that it apparently is always there for any one of us to look at? We have some kind of
00:40:57.360
consensus, intersubjective language game we can play here that can reference the glass of water,
00:41:04.260
water, you know, at will. How does that map onto your theory of non-veridical perception?
00:41:10.300
Right. So I think a good way to see what I'm saying and how counterintuitive it is,
00:41:14.820
is to think about, say, playing a game like Grand Theft Auto, but with a virtual reality add-on.
00:41:21.500
So you have a headset and you're seeing a three-dimensional world of cars and your own
00:41:21.500
steering wheel and so forth. And it's a multiplayer game. So there are people around
00:41:26.100
the world that see the same car that you're driving and see all the other cars that you see.
00:41:35.360
And in that case, there is, of course, no real car that anybody's seeing. There's just,
00:41:42.680
in this metaphor, a bunch of circuits and software and so forth. That's the objective reality
00:41:47.820
in this metaphor. But all the players will agree that they see a red Corvette chasing,
00:41:53.520
you know, a green Mustang down the highway at 70 miles an hour. They all agree,
00:41:58.260
not because there's literally a red Corvette chasing a green Mustang. There is some objective
00:42:02.620
reality, but it's not Corvettes and Mustangs. That's just what we each see. And each
00:42:08.840
person with their own headset is getting, in this example, photons, you know, thrown to their eyes
00:42:15.020
and they're rendering in their own minds the Corvette chasing the Mustang. So there are as many
00:42:21.320
Corvettes and Mustangs as there are people playing the game because they each see the one that they
00:42:26.640
render. And I might be looking at the Corvette, and then I look away and I'm now
00:42:33.000
looking at my steering wheel. I no longer see the Corvette. I have garbage collected the
00:42:37.560
Corvette. I'm not making that data structure anymore. Now I'm rendering a steering wheel.
00:42:42.680
And now I look back over at the Corvette. Now I'm re-rendering the Corvette. So it looks like
00:42:47.940
the Corvette was always there because, you know, when I look away and look back, it's right where I
00:42:52.180
expect it to be. But in fact, there is a reality. It's not Corvettes, it's not Mustangs,
00:42:56.780
it's not steering wheels. So here's the counterintuitive claim. I'm claiming
00:43:01.960
we all have a headset on. Yeah. All of us. Yeah. And we all have this space, time, physical objects,
00:43:08.660
the glass of water. Those are all things that I render on the fly when I look at them. And then I
00:43:14.260
garbage collect them. And that's part of the evolutionary argument. I garbage collect them
00:43:18.980
because I'm trying to save energy and time and memory. So I render it only as I need it. And
00:43:25.000
the glass I'm seeing is really just a representation of fitness payoffs. Those are
00:43:29.720
the fitness payoffs I need to pay attention to now. Now I'm throwing that fitness payoff description
00:43:33.900
away. Now I'm looking at fitness payoffs over here. So it's a rapid rendering of fitness
00:43:38.460
payoffs in real time. So here's one of the areas where I worry that the language that you're using,
00:43:45.900
the terminology you're using may actually give a false impression of what you're saying. This
00:43:50.420
is where some of my notes came in. I don't know how many of these notes you have taken
00:43:55.740
or will take, but I actually think I agree with you there, but there's
00:44:00.000
something about the way you're saying it that I think gives a false impression of what you're saying.
00:44:04.380
So if you say, you know, the race car isn't there, or the moon, which is an example you give
00:44:09.300
often. I mean, you also will say, and I think this is more accurate and closer to what
00:44:14.760
you're saying, that something exists. Something is there in reality that my perceptual systems are
00:44:21.040
kind of turning into this sight of a moon. And I think it's confusing to readers and listeners
00:44:28.120
when you say it doesn't exist as if the fundamental nature of reality behind whatever that moon is
00:44:36.040
doesn't exist, as if there's nothing there.
00:44:39.040
Fair point. I agree. So it seems more accurate to say, we simply don't understand the deeper
00:44:44.460
reality behind the moon and behind apples. And that this is something in a way, like it's less
00:44:51.300
controversial. This is something we can all admit given our, our current understanding of the physics.
00:44:56.200
And so part of my gripe there, I think, is just with the language that you're using.
00:45:01.880
And there's something incredibly interesting about that, that something is there.
00:45:05.560
There's something I'm interacting with. The example I often like to use with you when
00:45:09.040
we meet is a tree: if we plant a tree and leave it, it is out of our
00:45:14.140
conscious experience. There are all these processes that will be taking place in, you know,
00:45:18.400
whatever we call them, that we view as water and nutrients being sucked up from the earth, and
00:45:22.980
it will grow and we'll come back in a year. And all of those processes would have taken place,
00:45:27.340
whatever they are at bottom, which we may not understand, but something is going on in the universe
00:45:33.640
that we have some access to. However far from the truth that access is, there is something taking place there.
00:45:41.240
And so to explain it as when I leave, there's absolutely nothing there and there's no tree.
00:45:47.640
And then I come back and somehow I create this as if it's...
00:45:51.640
Yeah, I think that's a very important clarification. So I agree with you completely that I'm not saying
00:45:56.320
that there isn't an objective reality that would exist even if I don't look at it. There is an
00:46:00.480
objective reality. It's just that what I see is utterly unlike that objective reality. And in
00:46:06.980
the metaphor that I was giving of virtual reality, I might see a red Corvette. The reality
00:46:15.300
in that metaphor would be circuits and software that aren't red, that don't have the shape of a
00:46:19.540
Corvette. They're utterly unlike a Corvette. But when I interact with that objective reality
00:46:23.940
that's there, even if I don't see the Corvette, I then will see the Corvette. So that's how
00:46:28.560
different. I think it's potentially confusing as an analogy only because as a user of video games,
00:46:34.800
you can turn the video game off. It's not a self-sufficient world. It's not reality
00:46:41.180
that continues on and does its thing. I agree with you on that. Or at least,
00:46:45.740
yeah, it gives a slightly false impression. So. Right. I agree that the reality is continuing on
00:46:50.820
regardless of what... I have life insurance. Right, right. And the reason I have life
00:46:55.320
insurance is because I agree with you that there is some, some reality that will continue to go on
00:46:59.100
even if I'm not here. Right, right. Okay. So let me make that point with a slightly
00:47:05.500
different topspin because those concessions seem to bring us back to the standard consensus view of
00:47:12.800
science in some way. So there's this appearance reality distinction. There's our sensory experience,
00:47:19.140
which is our interface, which, you know, everyone agrees does not put us in direct contact with the
00:47:25.660
thing in itself or underlying reality. But you're conceding that there is an underlying reality
00:47:32.420
and there must be some lawful mapping between what we see on the interface and that underlying reality,
00:47:41.360
which actually renders our mutual perceptions of things like trees and glasses and cars predictable,
00:47:49.140
where we can both agree that if we go to look for the same object, each one of us is likely to
00:47:54.540
independently find it, whatever the relationship is between that interface data structure and reality
00:48:01.220
itself. So there has to be some kind of isomorphism between our virtual reality experience
00:48:07.380
and reality itself, even though we don't have, by virtue of evolution, all of the right conceptual
00:48:15.080
tools so as to say what it is. There is going to be a mapping between objective reality and
00:48:22.220
our perceptions. And that mapping will be as complicated as, or more complicated than, the mapping
00:48:27.560
between all the circuits and software in a virtual reality machine and the actual like Grand Theft Auto
00:48:34.840
world that I perceive. And if you think about it, there are going to be hundreds of megabytes of
00:48:40.660
software and all these complicated circuits, while all I'm seeing is simple cars and so forth. So there's
00:48:46.560
going to be, in computer science terms, all these virtual machines that you create, many,
00:48:49.860
many levels of virtual machines between what you see in the Grand Theft Auto game and the actual
00:48:55.600
objective reality, in this metaphor, that's going on there. And so I'm saying that the idea
00:49:00.380
that the reality is going to be isomorphic to space-time is too simplistic, right? There's going to be
00:49:08.260
some systematic mapping, I agree, but that mapping is going to be quite complicated. So
00:49:13.880
another way to put it is this, if I said to you, I want you to use the language of what you can see
00:49:19.460
in your interface in the virtual reality. So the pixels that you can see, the colors and pixels,
00:49:24.280
that's the only language you can use. And I want you to tell me how this virtual world works.
00:49:28.840
You can't do it because the language of pixels is an inadequate set of predicates to actually
00:49:34.020
describe that world. And I'm making the very strong claim that whatever objective reality is,
00:49:39.320
the language of space and time and physical objects in space and time is simply the wrong
00:49:44.620
language. There is a systematic mapping, but the language of objects in space and time could not
00:49:51.060
possibly frame a true description of that objective reality. That's the strong claim.
00:49:56.560
So it's similar to J.B.S. Haldane. The famous physiologist gave us an aphorism that almost
00:50:04.800
contains this thesis in seed form, which is not only is reality stranger than we suppose,
00:50:11.200
it's stranger than we can suppose. By giving a deflationary account of our notion of space and time,
00:50:18.280
you are saying, whatever this mapping is between appearance and reality,
00:50:22.460
we are so ill-equipped to talk about it based on this interface analogy that it is, on some level,
00:50:32.600
far stranger and far more foreign to the way in which we're thinking about things than anyone has supposed.
00:50:41.140
So I'm just trying to get at what is truly novel about your claim.
00:50:47.700
One thing that's novel is that the expectation that evolution has selected for some approximation to what is true
00:50:56.880
seems false, right? So fitness trumps truth. And as a result, whatever this mapping is to underlying reality,
00:51:08.340
we are in a far greater state of ignorance about it than most people expect.
00:51:13.440
That's right. Absolutely. You've hit the nail on the head. And I would say this, that it's the relationship
00:51:19.600
between a visualization tool and whatever it is that we're visualizing, right? So there's going to be
00:51:26.680
this objective reality that's out there. And evolution just gave us this very, very dumbed down,
00:51:32.860
species-specific visualization tool. The very language of that tool is probably wrong. I mean,
00:51:37.580
the whole point of a visualization tool is to hide the complexity of the objective reality and just
00:51:43.520
give you, you know, a dumbed down tool that you can use. And so the very language of space and time
00:51:50.460
and objects is just the wrong language for whatever the thing is. Just like-
00:51:53.520
I would say, though, that, you know, as far as I understand, up to this point,
00:52:00.080
I know we're going to talk about consciousness soon and then we'll get into a different realm,
00:52:03.340
but up to this point, everything that you've just said, I think most physicists would agree with and
00:52:09.540
is part of the conversation in quantum mechanics right now. And many physicists are talking about
00:52:17.400
this problem of space-time and of space and time independently as well, clearly not being
00:52:26.340
the final answer to what is fundamental. And everything we see out of quantum mechanics
00:52:34.420
gives us a real philosophical problem similar to the one you're describing, which is it seems that
00:52:42.200
the fundamental nature of the universe, what the universe is actually made of, is not anything
00:52:48.560
like what we experience, all the way to the point of space and time.
00:52:52.940
That's right. And so it's really interesting because if you look at our biggest scientific
00:53:00.160
theories in physics, general relativity and also special relativity are about space-time, right?
00:53:09.420
Space-time is assumed to be an objective reality and a fundamental one. In quantum field theory
00:53:15.140
as well, the fields are defined over space-time. And so physics, as Nima Arkani-Hamed has put it,
00:53:25.740
and he's a professor at the Institute for Advanced Study at Princeton, he's pointed out that for the
00:53:30.580
last few centuries, physics has been about what happens in space-time. But now they're realizing
00:53:39.720
that to get general relativity and the standard model of physics to play well together, they're
00:53:45.820
going to have to let go of space-time. It cannot be fundamental. And he's not worried about it.
00:53:50.560
And in fact, he says most of his colleagues agree that space-time is doomed and there's going to be
00:53:55.480
something deeper. And that's wonderful because we're about to learn something new. There's a deeper
00:53:59.800
framework for us to be thinking about physics, and space-time will have to be emergent from that deeper framework.
00:54:07.560
I actually, I watched a lecture of his recently and I wrote down this short quote. He says,
00:54:12.500
all these things are converging on some completely new formulation of standard physics where space-time
00:54:17.640
and quantum mechanics are not our inputs but our outputs. And I thought that was very well said.
00:54:23.180
But, as far as I understand where physics is at at this point, I think all of
00:54:30.600
these physicists would agree with you up until this point. And I think now we can probably cross over.
00:54:37.040
Although I would just point out that they might agree for different reasons, right? They're not
00:54:44.420
But that there's nothing intrinsic in what Don is saying about how false our view of the fundamental
00:54:52.460
nature of reality is. That it is that false. That you can actually take it all the way to space-time,
00:54:58.300
and that we're probably wrong in all of those assumptions about what we think is fundamental.
00:55:03.520
I agree. And I think it's really interesting that the pillars of science are all saying the
00:55:09.440
same thing. Evolution by natural selection is saying you need to let go of space-time. And then
00:55:14.320
the physicists trying to get general relativity and quantum field theory to play well together, right, they're
00:55:18.180
saying you have to let go of space-time. When our best science is saying that, it's time for an
00:55:22.300
interesting revolution. That's going to be fun. I mean, it's going to be very exciting to see what happens
00:55:26.340
when we go behind space-time. It's so counterintuitive though, right? We've just assumed
00:55:31.320
that, I mean, our story is space-time came into existence 13.8 billion years ago at the
00:55:37.080
Big Bang. It was the fundamental reality. We're saying there's a deeper story. That story
00:55:42.680
is only true up to a point. There's a much, much deeper story. And that's more like an interface
00:55:50.380
story. That's the projection of a much deeper story that we're going to have to find.
00:55:57.200
Yeah, well, so we're now going to move on to consciousness, which will be interesting. I
00:56:01.700
just, I guess, I want to flag my lingering concern that your rationale, if taken in deadly
00:56:10.440
earnest, may still kick open the door to epistemological skepticism, for me at least. Because
00:56:16.940
I think, you know, once space and time are dispensed with, causality goes with them, and the kind of evolutionary
00:56:24.080
rationale we've been using does too. I mean, this is kind of the Plantinga argument you referred to. It's just
00:56:30.340
once you start pulling hard at those threads, I'm not sure how much of the fabric of epistemology survives.
00:56:37.680
I agree with you, Sam, in the following sense. I think that it might actually go that way
00:56:43.160
just on the evolutionary arguments alone. So what I'm going to want to do is to, whatever
00:56:48.120
the deeper theory of reality that I propose, it needs to be such that it will not fall
00:56:52.280
into the epistemological problems that you're raising. So the deeper theory needs to avoid
00:56:57.480
those epistemological problems and show why that deeper theory looks like evolution by
00:57:03.600
natural selection when we project it into our space-time interface. In other words, so that
00:57:08.260
these kinds of problems might arise because evolution by natural selection itself is not
00:57:12.640
the deepest theory. It's just an interface version of a deeper theory.
00:57:17.800
Yeah. So on this topic of causality in time and whether this project even makes sense, which
00:57:24.400
as I know is a place you and I have gotten to before in our conversations, when you say things
00:57:29.320
like the brain and neurons are not the source of causal powers and that we need to find another
00:57:35.520
source. My question is, why would you assume that there are causal powers at all in the fundamental
00:57:43.060
nature of reality? So it's not clear to me why we include causal powers as part of a fundamental
00:57:53.240
I don't quite see how there is causality without time, at least in the way that we typically think about it.
00:58:00.360
I mean, just to take an example, which is kind of standard physics, although often neglected,
00:58:05.900
the notion of a block universe, right? The notion that the future exists just as much as the present
00:58:11.320
and the past. And so there really are no events. There's just a single datum, which is the entire four-dimensional block.
00:58:21.240
So causality under that construal is really an illusion.
00:58:26.880
That's right. And without endorsing the block universe view, I would say that causality in space
00:58:35.460
and time is a fiction. It's a useful fiction that we've evolved in our interface. But that strictly
00:58:42.880
speaking, causality in space and time is not real, because space and time is not the fundamental
00:58:49.360
reality, the appearance of causality, like my hand pushing this glass and moving it, it gives the
00:58:54.860
appearance that my hand has causal powers and is causing the glass to move. But in fact, that's just a
00:59:01.980
useful fiction. It's like if I drag an icon on my desktop to the trash can and delete the file. It looks
00:59:08.540
like the movement of the icon on the desktop to the trash can caused the file to be deleted. Well,
00:59:14.140
for the casual user, that's a perfectly harmless fiction to believe. If you move the icon to the
00:59:19.960
trash can, it causes the file to be deleted. It's perfectly harmless. But for the engineer who actually
00:59:25.100
wants to build the software interface, who has to go under the hood, that fiction has to be let go.
00:59:30.580
So I'm claiming that within space and time, all the appearance of causality is a fiction. Now,
00:59:38.820
in terms of a deeper theory, because you're asking in a deeper theory, what about causality there?
00:59:42.420
Well, my argument is that causality is part of the illusion of time. Assuming time is some sort
00:59:48.640
of illusion and time is not fundamental, at least as we usually talk about it. I mean, this is another
00:59:54.160
conversation about how we might redefine causality, which, in my view, I have done.
01:00:01.380
I think there's a way to talk about different things being connected. But in terms of the way
01:00:06.640
we define causality and how we use it, it is dependent on time. It is a part of things that
01:00:12.300
play out in time. You need something to happen in the past to cause something
01:00:18.240
to happen in the future. It is this direct relationship in time. And so I don't even know
01:00:25.460
how you would talk about causality without time. It needs time for its own definition.
01:00:32.260
So I think if we're redefining causality, which I think is kosher, actually, I think that's something
01:00:36.540
we can talk about. I've never been clear whether that is what you mean. Are we kind of redefining
01:00:41.880
what causality is? And is it more like connections between things rather than one thing happens and
01:00:50.080
then another thing happens in response? Yeah. I would also add another aspect here,
01:00:57.080
which is that the notion of possibility may be spurious, right? So that it may in fact be that
01:01:03.980
nothing is ever possible. There's only what is actual, right? There's only what happens.
01:01:10.360
And our sense that something else might have happened in any circumstance, that just might
01:01:16.380
be a, again, part of this user interface that has seemed useful because it is useful to try,
01:01:23.300
like when we're apparently making decisions between two possibilities and we need to model
01:01:28.200
counterfactuals, right? Counterfactual thinking is incredibly useful. And yet, what if it is simply
01:01:33.340
the case as it, you know, as it would be in a block universe that there's just, you know,
01:01:38.180
the novel is already written and you're on page 75, but page 168 exists already in some sense.
01:01:46.200
And I also, I don't think you need the block universe, though; it's just
01:01:49.700
one way of getting at the point. Yeah. I mean, it's a good visualization,
01:01:52.260
but I think most physicists will have some argument about it being described that way. But I think the
01:01:57.660
analogy holds, and I was just reading Carlo Rovelli's book on time, and he makes this point as well, that at a
01:02:08.420
certain level, there is no difference between past and future. And essentially, I mean, his thesis in the
01:02:14.640
book is that time is an illusion. It is not something... So, sorry, go ahead.
01:02:21.680
So, yeah, I think that we'll need a notion of causality that's outside of space-time,
01:02:27.020
that is not going to be dependent on time. It'll be more like relationship, as you talked about.
01:02:33.020
And in terms of the counterfactuals and possibilities, I think we'll want to have a
01:02:39.700
conversation about probability and how we interpret probabilities in scientific theories.
01:02:45.120
whether they're... I mean, so there are probabilities that are epistemic in the sense that maybe there's
01:02:51.200
a deterministic reality out there, and I just don't know enough about it. So, the probabilities...
01:02:59.480
There's frequency, but our sense of probability may be spurious.
01:03:04.180
That's right. But then there are probabilities that, no matter how much my knowledge increases,
01:03:11.380
will not disappear. And so, in science we often call those objective chance.
01:03:18.520
And I think we'll want to have a conversation about how we think about probabilities and objective
01:03:23.940
chance. It will actually take us into questions about free will, my version of the notions
01:03:30.540
of free will versus determinism. So, I think that's going to be an interesting conversation.
01:03:35.080
So, I agree that we need a notion of causality that transcends time, and I'm proposing one.
01:03:44.940
You talked with Judea Pearl, and he's got, of course, these directed acyclic graphs,
01:03:50.660
models of causal reasoning, which are brilliant, and they've actually given us a mathematical science of cause and effect.
01:03:57.700
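[Editor's aside: for readers unfamiliar with Pearl's framework, the key move in his directed-acyclic-graph models is the distinction between observing a variable and intervening on it with the do-operator. The toy rain/sprinkler/wet-grass model below, including all its probabilities, is an illustrative assumption of this note, not something discussed in the conversation; it is a minimal sketch of how intervention differs from observation.]

```python
import random

random.seed(0)

def sample(do_sprinkler=None):
    """One draw from a toy structural causal model: rain -> sprinkler, (rain, sprinkler) -> wet."""
    rain = random.random() < 0.2
    if do_sprinkler is None:
        # Observational regime: sprinkler depends on rain (rarely on when it rains).
        sprinkler = random.random() < (0.01 if rain else 0.4)
    else:
        # do(sprinkler): the intervention cuts the incoming edge rain -> sprinkler.
        sprinkler = do_sprinkler
    wet = rain or sprinkler
    return rain, sprinkler, wet

# Conditioning: among draws where the sprinkler happens to be on,
# rain is rare, because the two are negatively associated in the graph.
obs = [r for r, s, w in (sample() for _ in range(100_000)) if s]
p_rain_given_obs = sum(obs) / len(obs)

# Intervening: forcing the sprinkler on tells us nothing about rain,
# so rain keeps its base rate of roughly 0.2.
interv = [sample(do_sprinkler=True)[0] for _ in range(100_000)]
p_rain_given_do = sum(interv) / len(interv)
```

The two estimates diverge sharply, which is exactly the point Pearl's graphs make formal: seeing the sprinkler on and turning the sprinkler on are different operations, and only the graph structure tells you how they differ.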
But, you know, in his book, Pearl doesn't define causality. He refuses to define the notion
01:04:05.900
of causality. In some sense, what we're facing here is that every scientific theory, and this is
01:04:12.920
a really important idea, I think. No scientific theory is a theory of everything. There's no such
01:04:18.840
thing. Every scientific theory makes certain assumptions. We call them the premises or the
01:04:24.680
assumptions of the theory. And only if you grant the theory those assumptions can it go and explain
01:04:30.600
everything else. And so, we're going to have, in every scientific theory, certain primitives that
01:04:37.380
are unexplained. They are the miracles vis-a-vis that theory. Now, you may say, well, I can get you
01:04:44.560
a deeper theory for which those assumptions come out as consequences, but you will have a deeper set of assumptions.
01:04:50.640
There's going to be an axiom somewhere at the bottom.
01:04:52.460
Absolutely. And that's a humbling recognition for a scientist, to realize that we will never
01:04:59.220
have a theory of everything. We will always have a miracle or a few miracles. We want to keep them...
01:05:06.100
I don't like that you call them miracles. I would like to have the record show.
01:05:09.860
I understand that. But we call them assumptions.
01:05:14.660
Well, because I want to really make the point...
01:05:16.540
It's another place where I think people might actually be confused about what you mean.
01:05:23.440
Right. So, I'll just say that there are things that the theory cannot explain.
01:05:28.660
And there will always be things that every scientific theory cannot explain, and it's
01:05:33.260
a problem in principle. So, the interesting thing will be, in a deeper theory, will we have
01:05:38.900
something that's like a causal notion that will be a primitive of the theory? And it may not
01:05:43.780
be dependent on time, but it'll be... There will be primitives, and an explanation will...
01:05:49.400
I guess. So, my question, my issue really is, why use the word causality when you're speaking
01:05:56.340
in more fundamental terms? Why not say something like connections or relationships, which,
01:06:01.840
to me, seem much closer analogies? And so, to say, what we view as causality is,
01:06:07.960
in fact, something more like a connection or a relationship at a more fundamental level.
01:06:12.900
I'm completely on board. I agree with you completely. I think in a deeper theory, we may
01:06:15.620
find that the term causality is just not a very useful term anymore. It was useful in space and
01:06:21.000
time. And connection or influence is a better term at a deeper level.
01:06:26.320
Okay. So, on to consciousness and free will and other dangerous topics.
01:06:31.260
What, in your view, is the connection between consciousness and the base layers?
01:06:38.920
If you'd like to continue listening to this podcast, you'll need to subscribe at samharris.org.
01:06:44.140
You'll get access to all full-length episodes of the Making Sense Podcast and to other subscriber-only
01:06:49.140
content, including bonus episodes and AMAs and the conversations I've been having on the Waking Up app.
01:06:55.700
The Making Sense Podcast is ad-free and relies entirely on listener support.