The Jordan B. Peterson Podcast - October 12, 2023


387. What You See and Feel is Not Reality | Dr. Donald Hoffman


Episode Stats

Length: 1 hour and 37 minutes
Words per Minute: 162.8927
Word Count: 15,958
Sentence Count: 988
Misogynist Sentences: 6
Hate Speech Sentences: 8


Summary

In this episode, cognitive scientist Dr. Donald Hoffman discusses his research on the nature of reality and consciousness, and how consciousness might be best understood as a vast probability space within which we orient ourselves. Dr. Hoffman is a professor of cognitive sciences at the University of California, Irvine, and the author of several books, including The Case Against Reality: Why Evolution Hid the Truth from Our Eyes. His work on perception and consciousness has been widely covered in the scientific and popular press.
Also in this episode: Dr. Jordan B. Peterson's new Daily Wire Plus series on depression and anxiety. We know how isolating and overwhelming these conditions can be. With decades of experience helping patients, Dr. Peterson offers a unique understanding of why you might be feeling this way, and provides a roadmap toward healing, showing that while the journey isn't easy, it's absolutely possible to find your way forward. If you're suffering, please know you are not alone, and there is hope. Subscribe to Daily Wire Plus for immediate, ad-free access to the series and all the latest episodes, and let this be the first step towards the brighter future you deserve.


Transcript

00:00:00.940 Hey everyone, real quick before you skip, I want to talk to you about something serious and important.
00:00:06.480 Dr. Jordan Peterson has created a new series that could be a lifeline for those battling depression and anxiety.
00:00:12.740 We know how isolating and overwhelming these conditions can be, and we wanted to take a moment to reach out to those listening who may be struggling.
00:00:20.100 With decades of experience helping patients, Dr. Peterson offers a unique understanding of why you might be feeling this way in his new series.
00:00:27.420 He provides a roadmap towards healing, showing that while the journey isn't easy, it's absolutely possible to find your way forward.
00:00:35.360 If you're suffering, please know you are not alone. There's hope, and there's a path to feeling better.
00:00:41.780 Go to Daily Wire Plus now and start watching Dr. Jordan B. Peterson on depression and anxiety.
00:00:47.460 Let this be the first step towards the brighter future you deserve.
00:00:57.420 Hello everyone watching and listening.
00:01:11.660 Today I'm speaking with author and cognitive neuroscientist Dr. Donald Hoffman.
00:01:16.460 We discuss Dr. Hoffman's research on what we know as reality.
00:01:20.700 Why space-time itself is now considered by many a doomed framework of interpretation,
00:01:27.920 and how consciousness might be best understood as a vast probability space within which we orient ourselves.
00:01:35.980 Hello Dr. Hoffman. It's very good to see you.
00:01:39.040 I've been interested in your theory for a long time, partly because I'm quite attracted by the doctrine of pragmatism,
00:01:47.580 which was really part of what I tried to discuss with Sam Harris many, many times.
00:01:53.080 And it seems that your work bears, well, it's a broad general interest, but it also bears on specific interests of mine,
00:02:00.180 because I've always been curious about the relationship between Darwinian concepts of truth,
00:02:06.380 and let's say the concepts of truth put out by the more Newtonian, say, objective materialists.
00:02:13.560 They don't seem commensurate to me, and so would you start by explaining your theory, your broad theory of perception?
00:02:22.140 I know that'll take a while, but it's a tricky theory.
00:02:26.860 So do you want to lay it out for us to begin with?
00:02:29.560 Most Darwinian scholars would agree that evolution shapes sensory systems to guide adaptive behavior,
00:02:36.820 that is, to keep organisms alive long enough to reproduce.
00:02:42.380 But many also believe that, in addition, evolution shapes us to see reality as it is,
00:02:51.440 at least some aspects of reality that we need for survival.
00:02:55.500 So that's a common view among my colleagues studying evolution by natural selection.
00:03:01.300 They'll say, yeah, seeing the truth will make you more fit in many cases.
00:03:06.200 And so even though Darwin says it's, you know, evolution shapes sensory systems just to keep you alive long enough to reproduce,
00:03:14.620 many people think that seeing aspects of reality as it is will also make you more fit and make you more likely to reproduce.
00:03:23.020 So I decided with my graduate students a few years ago to look into this.
00:03:30.040 There are tools.
00:03:32.140 Darwin's theory is now a mathematical theory.
00:03:34.360 We have the tools of evolutionary game theory that John Maynard Smith and others invented in the 1970s.
00:03:41.400 And so it's a wonderful theory.
00:03:42.880 So Darwin's ideas can now be tested with mathematical precision.
00:03:46.420 And I thought that maybe what we would find is that, you know, evolution tries to do things on the cheap.
00:03:54.660 It doesn't, you know, if you have to spend more calories, then you have to go out and kill something to get those calories.
00:04:01.440 And so there are selection pressures to do things cheaply and quickly, heuristics.
00:04:07.300 And so I went into it thinking that maybe that would make it so that many sensory systems didn't see all of the truth.
00:04:16.860 But I just wanted to check and see what would happen.
00:04:19.060 To my surprise, when we actually started studying this, there came up principles that made me realize that the chance that we see reality as it is on Darwinian principles is essentially zero.
00:04:34.200 And that was a stunning result for me.
00:04:36.520 Why zero?
00:04:38.020 Zero is a very low number.
00:04:39.540 So why zero?
00:04:41.120 That's right.
00:04:42.200 So, and I can, it's a bit technical, but in evolutionary theory, there are, in the evolutionary game presentation of it, you think of evolution as like a game.
00:04:54.520 And in a game, you're competing with other players and you're trying to get points.
00:04:59.160 Now, in the game of evolution, the way it's modeled is there are these fitness payoff functions.
00:05:03.760 And those are sort of the points that you can get for being in certain states and taking certain actions.
00:05:08.580 And so these fitness payoffs are what guides the selection.
00:05:16.180 They guide the evolution.
00:05:17.480 And so we began to analyze those fitness payoffs, right?
00:05:22.500 The fitness payoffs, to be very, you know, concrete about a fitness payoff, suppose that you're a lion and you want to mate.
00:05:31.020 Well, a steak won't be very useful for you for that process, right?
00:05:37.060 You'll have very little fitness payoff for a steak if you're a lion looking to mate.
00:05:41.260 If you're a lion that's looking to eat and you're hungry, then, of course, the steak will have high fitness payoffs for you.
00:05:46.220 So fitness payoff depends on the organism, like a lion versus, say, a cow.
00:05:52.700 A steak is of no fitness payoff for any cow for any purposes.
00:05:56.520 Quite the contrary.
00:05:57.000 Whereas it could be quite the contrary.
00:05:59.420 That's right.
00:06:00.240 So the fitness payoff depends on the organism, its state, I mean, hungry versus sated, for example, and the action, feeding, fighting, fleeing, and mating, for example.
00:06:09.720 So these fitness payoffs are functions of the world.
00:06:13.600 They depend on the state of the world and its structure and the organism, its state, and its actions.
00:06:18.760 So they're complicated functions.
00:06:20.780 And in some sense, you could think that there's just effectively one fitness payoff function.
00:06:24.880 There's this one big fitness payoff function which handles the world and all possible organisms, all possible states and actions.
00:06:32.020 So there's a big fitness payoff.
00:06:34.640 The question is, but we can think about it as many fitness payoffs if we want to as well.
00:06:39.240 The question is, suppose then, so this fitness payoff function, it takes as its starting point the state of the world, right?
00:06:48.540 That's the domain of the function.
00:06:51.340 And the range of the function might be the fitness payoff value, say, from zero to 100.
00:06:56.280 Zero means you lose.
00:06:57.820 100 means you did as good as you could possibly do.
00:07:00.620 So zero to 100, say.
00:07:01.660 So it's a function from the state of the world, cross-organism, into state and action, into this number, from zero to 100, to zero to 1,000, whatever you want to use.
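The structure Dr. Hoffman describes here can be sketched in code. The toy payoff function below is purely illustrative (the names, states, and payoff values are assumptions, not from the episode): it maps a world resource, an organism, its internal state, and an action to a payoff between 0 and 100, mirroring the lion/steak example.

```python
def fitness_payoff(resource, organism, state, action):
    """A toy fitness payoff function: payoff in [0, 100] depends jointly on
    the world (the resource), the organism, its state, and its action.
    All values are illustrative."""
    if organism == "lion" and state == "hungry" and action == "eat" and resource == "steak":
        return 90   # high payoff: food when hungry
    if organism == "lion" and action == "mate":
        return 0 if resource == "steak" else 50  # a steak doesn't help mating
    if organism == "cow" and resource == "steak":
        return 0    # a steak has no fitness payoff for a cow, for any purpose
    return 10       # small default payoff

print(fitness_payoff("steak", "lion", "hungry", "eat"))  # 90
print(fitness_payoff("steak", "lion", "sated", "mate"))  # 0
```

The point of the sketch is that the same world state (a steak) yields entirely different payoffs depending on the organism, its state, and its action, which is why the payoff function need not mirror the structure of the world itself.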
00:07:13.220 So the question then is, does this function preserve information about the structure of the world?
00:07:22.420 This is the function that's guiding the evolution of our sensory systems.
00:07:26.900 So does this function, if the function is what mathematicians call a homomorphism, a structure-preserving map.
00:07:35.820 So, for example, the world might have an order relationship, like one is less than two is less than three, like a distance or a distance metric or something like that.
00:07:43.800 Then, to be a homomorphism would mean that if things were in a certain order in the world, the function would take them into that same order or some homomorphism of that order onto the states of the payoffs.
00:08:01.640 So that's a technical question.
00:08:05.420 What is the probability that a generically chosen payoff function will be a homomorphism of a metric or a total order or a partial order or a topology or a measurable structure?
00:08:20.600 Any structure that you can imagine the world might have, you can ask, what is the probability that a generically chosen payoff function will preserve it?
00:08:31.280 If it doesn't preserve it, there's no information in the payoff function to shape sensory systems to see that truth, to see that structure of the world.
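The order-preservation condition being described can be stated concretely. A minimal sketch (the function name and example values are hypothetical illustrations): a payoff assignment is a homomorphism of the world's total order only if states that come earlier in the world's order never receive larger payoffs than states that come later.

```python
def is_order_homomorphism(world, payoff):
    """True iff the payoff assignment preserves the world's total order:
    whenever state x precedes state y in the world, payoff[x] <= payoff[y]."""
    return all(payoff[x] <= payoff[y] for x, y in zip(world, world[1:]))

world = [1, 2, 3]  # world states in their natural order
print(is_order_homomorphism(world, {1: 10, 2: 40, 3: 90}))  # True: order preserved
print(is_order_homomorphism(world, {1: 90, 2: 10, 3: 40}))  # False: order scrambled
```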
00:08:40.120 So, what's remarkable is that evolutionary theory is indifferent about the payoff functions.
00:08:49.500 They don't say they have to be a certain shape.
00:08:51.360 In other words, every fitness payoff function that you could imagine is on equal footing on current evolutionary theory to every other one.
00:08:59.600 There's nothing in Darwin's theory that says these are the fitness payoff functions and this is their structure.
00:09:04.200 So, what we had to do then is to say, okay, we have to just look at all possible fitness payoff functions and ask how many of them, what fraction of these payoff functions would preserve a total order or a metric or a measurable structure or whatever it might be?
00:09:21.360 And here's the remarkable and in retrospect obvious thing.
00:09:25.720 For a payoff function, to preserve a structure like a metric or a total order, it must satisfy certain equations.
00:09:35.280 So, you have to write down these equations that the homomorphism must satisfy, that the function, the fitness payoff function must satisfy to be a homomorphism.
00:09:44.360 Well, once you write down an equation, most payoff functions simply aren't going to satisfy it.
00:09:50.120 I mean, the equations are quite restrictive.
00:09:51.760 And in fact, in the limit, as you look at, you know, a world that has an infinite number of states and payoff values that go from zero to infinity,
00:10:01.700 the fraction of payoff functions that actually are homomorphic goes to zero, precisely.
00:10:10.180 All right.
00:10:10.560 So, this is going to be a somewhat meandering question because it's a very complicated thing to get right.
00:10:16.020 So, people who think that the world is made out of self-evident facts underestimate the complexity of perception.
00:10:27.600 And so, here's how I'll make that case and you can tell me what you think.
00:10:31.620 You can imagine, you could ask an engineer a simple question.
00:10:36.460 Can you build a bridge?
00:10:37.960 And you might think, the fact of the bridge will be a fact and the answer to the question, which would be yes or no, will be a fact.
00:10:45.860 And that's that.
00:10:46.660 It's all self-evident.
00:10:48.140 It's sort of like the behaviorists assuming that the stimulus was self-evident.
00:10:52.840 It's very much analogous to that.
00:10:54.920 Okay.
00:10:55.100 But here's the problem.
00:10:56.900 There's a whole set of assumptions built into that question that people don't even notice.
00:11:02.380 And so, let me walk through some of the assumptions.
00:11:04.980 It's like, well, I can't build a bridge if you want it to last 50 million years.
00:11:12.120 So, I could build a bridge that would last a century or two centuries.
00:11:17.180 I can't build a bridge for no money with no labor, with materials that are just at hand.
00:11:24.740 So, the thing you define as a bridge is already subject to all sorts of constraints.
00:11:31.020 Now, you and I mutually understand those constraints without even having to speak about them.
00:11:36.080 So, I'm also going to assume that if you say, if I ask you, can you build a bridge?
00:11:40.140 And you say yes, you're also saying, I'm willing to work with you.
00:11:43.300 I'm willing to work honestly.
00:11:44.480 I'm willing to hire the right number of people.
00:11:46.700 I'm not going to screw you during the construction.
00:11:49.560 The bridge that we build, we both understand that human beings will be able to walk across it.
00:11:54.300 And as many as will fit on the bridge without the bridge falling down.
00:11:57.400 And also cars, and that means it'll have to be about the same width as a car.
00:12:01.020 Or a truck.
00:12:01.800 Or four lanes of cars or trucks.
00:12:03.580 And it'll have to abide by all the building codes and so forth.
00:12:07.220 There's so many constraints in that question.
00:12:10.480 That it would take you an unlimited amount of time to list them all.
00:12:15.040 And you don't because you're talking to an engineer.
00:12:18.640 And he's a human being like you.
00:12:20.420 Enculturated like you.
00:12:21.440 And so, he understands the world like you do.
00:12:23.500 And so, there's a hundred million things you don't have to talk about.
00:12:27.940 But they're there.
00:12:29.100 They're constraining the set of facts that's relevant to the issue.
00:12:34.600 And they're constraining them seriously.
00:12:37.220 Okay.
00:12:37.380 So, now those constraints, those are nested in an even higher order set of constraints, which are Darwinian, right?
00:12:47.440 It's like, well, the axiomatic agreements that you and I come to as a consequence of our shared perceptions, our shared embodiment, and our shared enculturation, are a consequence of a broader process, which is essentially Darwinian.
00:13:02.480 Now, that Darwinian set of constraints is instantiated in motivational systems, in part.
00:13:11.180 So, we might say, well, anything that you and I do together will have to be done while taking into account hunger and anger and fear and pain, the whole emotional potentiality of people, plus our fundamental motivational systems.
00:13:30.360 The manner in which we lay out this particular task will have to satisfy all that.
00:13:34.940 Now, that's also unspoken.
00:13:36.900 Now, when you talk about evolutionary game theory and pragmatic constraints, let's say you talked about the lion who wants to mate and not eat.
00:13:47.260 You're referring to one motivational system or another, one governing sex, per se, and the other governing hunger.
00:13:54.020 And then the manner in which the lion is going to perceive the world, or the manner in which we're going to perceive the world, is going to be bounded by the operation of that motivational system.
00:14:05.840 And the perception is going to be deemed sufficient if, when we enact it, the motivational system is satiated.
00:14:13.660 Fair enough?
00:14:15.220 Okay.
00:14:15.680 Okay, now, but then there's a more interesting issue that pertains to the big fitness payoff.
00:14:22.180 So, if you look at how the nervous system is structured, you have these underlying motivational systems, which are goal-setting machines that define the parameters within which a perception is valid.
00:14:35.900 But all those systems have to interact together, and they cause conflict, right?
00:14:40.900 So, if you're hungry and tired, you don't know whether you should get up and make a peanut butter sandwich, or if you should just go to sleep and leave it till the morning.
00:14:47.720 Like, there's inbuilt conflict.
00:14:49.500 And part of the reason that the cortex evolved was to mediate subcortical conflicts.
00:14:56.480 And then, even at the cortical level, the manner in which you integrate your fundamental motivations, and the manner in which I integrate mine, have to be integrated or will fight.
00:15:09.060 And so, I would say, and I don't know if evolutionary theorists have dealt with this, and it's relevant to your theory that perception doesn't map the real world.
00:15:20.760 Is there a higher order set of integrated constraints that serves reproduction over the long run that all the lower order fitness payoffs are necessarily subordinate to?
00:15:33.820 And I know this is a terribly complicated question.
00:15:37.840 Is that the reality that perception serves?
00:15:41.680 You know, you made the case that perceptions will not map one-to-one on reality, and I suppose that's partly because reality is infinitely complex, right?
00:15:51.100 I mean, you can fragment it infinitely, and you can contextualize it infinitely.
00:15:56.700 So, it's very hard to calibrate.
00:15:58.140 All right, so we got to put that aside, but then I would say, well, maybe there's another transcendent fundamental reality that's Darwinian in nature that integrates everything with regards to optimized long-term survival, and perceptions are optimized to suit that.
00:16:20.620 So, I know that's a terribly complicated question, but this is a terribly complicated subject.
00:16:24.840 Going online without ExpressVPN is like not paying attention to the safety demonstration on a flight.
00:16:31.380 Most of the time, you'll probably be fine, but what if one day that weird yellow mask drops down from overhead and you have no idea what to do?
00:16:39.120 In our hyper-connected world, your digital privacy isn't just a luxury.
00:16:42.940 It's a fundamental right.
00:16:44.240 Every time you connect to an unsecured network in a cafe, hotel, or airport, you're essentially broadcasting your personal information to anyone with the technical know-how to intercept it.
00:16:53.460 And let's be clear, it doesn't take a genius hacker to do this.
00:16:56.440 With some off-the-shelf hardware, even a tech-savvy teenager could potentially access your passwords, bank logins, and credit card details.
00:17:04.200 Now, you might think, what's the big deal?
00:17:06.280 Who'd want my data anyway?
00:17:07.820 Well, on the dark web, your personal information could fetch up to $1,000.
00:17:12.220 That's right, there's a whole underground economy built on stolen identities.
00:17:16.480 Enter ExpressVPN.
00:17:18.220 It's like a digital fortress, creating an encrypted tunnel between your device and the internet.
00:17:22.500 Their encryption is so robust that it would take a hacker with a supercomputer over a billion years to crack it.
00:17:28.580 But don't let its power fool you.
00:17:30.400 ExpressVPN is incredibly user-friendly.
00:17:32.740 With just one click, you're protected across all your devices.
00:17:35.760 Phones, laptops, tablets, you name it.
00:17:37.940 That's why I use ExpressVPN whenever I'm traveling or working from a coffee shop.
00:17:42.080 It gives me peace of mind knowing that my research, communications, and personal data are shielded from prying eyes.
00:17:47.800 Secure your online data today by visiting expressvpn.com slash jordan.
00:17:52.780 That's E-X-P-R-E-S-S-V-P-N dot com slash jordan, and you can get an extra three months free.
00:17:59.180 Expressvpn.com slash jordan.
00:18:01.080 Well, so I think we have to think a little out of the box on this question, because when we conclude that evolution shapes us not to see reality as it is, then the question is, well, what is it shaping our sensory systems to give us?
00:18:21.420 As well as what is reality, right?
00:18:24.480 That question also comes up, yeah.
00:18:26.240 Absolutely, and so the way I like to think about it is that evolution shapes sensory systems to serve as a user interface.
00:18:37.460 So like the desktop on your computer, for example.
00:18:41.140 So when you're actually working on a computer, you're, in this metaphor, what you're literally doing is toggling millions of voltages in a computer, in circuits.
00:18:52.220 And you're having to toggle them in very specific patterns, millions of them in exactly the right pattern.
00:18:58.060 Well, if you had to do that by hand, if you had to deal with that reality and interface with that reality, one voltage at a time, well, it'd take you forever, and you probably wouldn't get it right, and you wouldn't be able to write your email or edit your picture, whatever you're doing on your computer.
00:19:11.160 So we spend good money, and people spend a lot of time building interfaces that allow you to be ignorant, completely ignorant.
00:19:20.120 Most of us have no idea what's under the hood in our laptops.
00:19:23.800 We have no idea.
00:19:24.620 We know that there's circuits and software, but most of us have never studied it.
00:19:28.540 And yet we're able to very swiftly and expertly edit our images and send texts and emails and so forth without having any clue, literally no clue, what's under the hood, what's the reality that we're actually toggling.
00:19:43.620 And so it seems that that's what evolution has done for us, has given us an incredibly dumbed-down interface.
00:19:51.140 We call it space and time and physical objects.
00:19:54.000 So we think of space and time as the fundamental reality and physical objects as truly existing in that objective reality.
00:20:01.260 But it's really just, in this metaphor, a virtual reality headset.
00:20:06.000 We've evolved a virtual reality headset that utterly hides the very nature of reality and on purpose, quote-unquote, on purpose, so to speak.
00:20:16.980 Right.
00:20:17.220 Because it would be not functional.
00:20:19.720 We'd drown in the complexity.
00:20:21.760 Right.
00:20:21.920 You'd drown in the complexity.
00:20:23.520 Okay.
00:20:24.100 So some evidence for that, as far as I'm concerned, is the following.
00:20:27.640 I mean, first of all, if you look at a desktop, it consists, let's say, in part of folders.
00:20:34.000 Now, folders are actually something in the real world that you can pick up, and we understand them.
00:20:38.660 You can manipulate them.
00:20:39.780 You can see how they operate by using your – as a consequence of your embodiment.
00:20:45.920 And so that embodiment gives you a deep understanding of the function of a folder, and then you can represent it abstractly, and you can put it on a desktop, and everyone understands what it means.
00:20:56.220 And that understanding is something like, able to map a certain set of functions for a certain set of purposes.
00:21:03.400 That's what – and it's a constrained set of purposes.
00:21:06.260 This is what really struck me about reading the pragmatists.
00:21:08.740 They said – and Peirce and James studied Darwin deeply, and they were the first philosophers to realize exactly what implications Darwinian theory had for both ontology and epistemology.
00:21:22.460 And ontology, which is the study of reality, for everyone listening, that was a real surprise.
00:21:28.360 You could understand that, you know, Darwin's theory might have epistemological implications, implications for the theory of knowledge.
00:21:35.280 But the fact that it had implications for what reality is, per se, is something that very few scientists have yet grappled with.
00:21:42.900 And the pragmatists always said, look, when you accept something as a fact, one of the things you don't notice is that you set up conditions for that to be factual.
00:21:53.980 And the fact is something like, this definition will do, during this time span, for this very constrained set of operations.
00:22:05.080 Fact.
00:22:05.560 Okay, but the problem with that is that's not a dead objective fact just lying on the ground.
00:22:10.600 That's a fact, by necessity, nested inside a motivational system.
00:22:15.140 So facts now all of a sudden become motivated facts, and that just wreaks havoc with the notion of objective – like of a distant objective materialism.
00:22:24.160 Because the facts are supposed to be separate from motivation.
00:22:27.620 And the pragmatists, as far as I'm concerned, following Darwin, demonstrated incontrovertibly that that's like you pointed to.
00:22:35.800 I think it's analogous.
00:22:38.120 That's actually impossible.
00:22:40.320 Now, because you have to constrain reality in order to perceive it, because it's too complex.
00:22:46.200 You drown in the details, otherwise.
00:22:47.940 You drown in the complexity.
00:22:49.020 Now, you made the claim, and I want to interrogate this a bit, that there's really no direct relationship, let's say, between the desktop icon that you think is an object when you look at the world and the actual world.
00:23:06.980 But let me offer you an alternative and tell me what you think about this.
00:23:11.520 So, there's this idea.
00:23:15.400 This is a weird way of approaching this, but I'm going to do it anyways.
00:23:19.020 There is a very strange stream of primarily Catholic thought, I believe, that tried to wrestle with the idea of how God could become man.
00:23:29.260 So, because God, of course, is infinite and everywhere, and man is finite and bounded.
00:23:33.980 And so, the question is, well, how do you establish a relationship between the infinite and the bounded?
00:23:38.760 And that's analogous to the same problem that we're trying to solve.
00:23:42.360 And they came up with this hypothesis of kenosis, which means emptying.
00:23:47.200 And their notion was, well, Christ was God, but in some ways like a low-resolution representation of God, an image of God, right?
00:23:56.060 So, there was a correspondence, but not a totality, at least not in any one instance.
00:24:02.600 Now, the reason I'm bringing that up is because it seems to me that when we perceive an object, that it isn't completely without, you call it homomorphism, I believe, with the underlying world.
00:24:19.320 It's just extremely low-resolution.
00:24:21.240 Like, it's a low-resolution functional tool.
00:24:25.260 That's what an object is.
00:24:27.040 But, and it's, now, and I would say, I would advance in support of that, for example, obviously, the icons that we have on a computer screen, we can use, and we treat them like they're real, and clearly, they're low-resolution.
00:24:39.540 But also, when we watch an animated show, for example, like The Simpsons, we're looking at cartoon-like icons, right?
00:24:49.560 They're emptied even further than, like, if I saw a Simpson cartoon of you, it would be like a very low-resolution representation of the you I see, which is a very low-resolution representation of whatever the hell you are in actuality.
00:25:06.200 Like, it's a secret, but I think the, there's an element of that perception that's an unbiased sampling of the underlying reality, although it's bent to pragmatic ends, pragmatic motivational ends.
00:25:21.800 Now, I don't know what you think about that.
00:25:23.500 I've thought about it for a long time.
00:25:24.820 I can't find a hole in it, but I'm wondering what you think.
00:25:27.880 Well, I think here's an analogy that might help explain the way I see it.
00:25:32.780 And suppose you're playing a VR version of Grand Theft Auto.
00:25:36.880 So, you have a headset and bodysuit on, and you're playing a multiplayer Grand Theft Auto.
00:25:40.960 You're playing with someone in China and England and so forth.
00:25:43.940 And I'm sitting there in my ride.
00:25:46.220 I've got a steering wheel and gas pedal and dashboard, and I'm looking out, and I see, to my right, I can see a red Ferrari.
00:25:52.140 And to my left, I see a green Mustang.
00:25:53.800 Well, now, of course, what I'm really interacting with in this analogy is some supercomputer somewhere, right?
00:26:01.360 And if I looked inside that supercomputer and looked for a red Ferrari, I would find no red Ferraris anywhere inside that supercomputer.
00:26:07.640 I would find voltages.
00:26:08.320 So, in that sense, the red Ferrari is a symbol in my headset, in the game, and there's nothing in the objective reality, in this metaphor, that it's a low-resolution version of.
00:26:21.700 It's just literally a completely different kind of beast.
00:26:25.620 Okay, okay.
00:26:26.060 There are no red Ferraris.
00:26:26.920 Okay, so let me ask you about that.
00:26:28.600 So, I get your point, which is especially germane with regards to the online game.
00:26:33.040 But is it not the case that in that supercomputer architecture, there's a pattern that is analogous to the red Ferrari pattern that's the externalized representation of the pattern, let's say, on your retina, and then that propagates into your brain?
00:26:51.800 Like, there is a conservation of pattern.
00:26:56.560 Now, that Ferrari pattern in the supercomputer would be a very tiny element of an infinite landscape of patterns in the computer, but it's not, and it's definitely not a pattern of a car, per se, right?
00:27:11.940 It's a pattern of a representation of a car.
00:27:14.380 But it's still got some correspondence with a pattern of voltages, let's say, that does have some existence within the supercomputer architecture.
00:27:27.440 Well, so, in that case, I would say that there is a causal connection, that what's going on inside the supercomputer has a causal connection with the sequence of pixels that are being illuminated in my headset so that I see a red Ferrari.
00:27:43.360 So, there's a causal connection.
00:27:45.380 But if I asked, is there some sense in which there's a homomorphism of structure between what's going on inside the computer and what I'm seeing on the screen as a red Ferrari, I would say there's probably no homomorphism at all.
00:28:00.100 And in that sense, we can't think about it as like a low-resolution version of something.
00:28:04.340 So, to be specific, the electrons in the computer have no color.
00:28:12.320 My Ferrari is red.
00:28:14.000 The shape of the Ferrari and the shapes of the electrons, or even the pattern of motion of the electrons, are independent.
00:28:20.780 And what's going on in part is that the pattern of electrons in the supercomputer, they're programmed to operate in a certain way to cause certain other things to happen in my headset, to trigger voltages that trigger pixels to have certain colors.
00:28:39.520 And so, there's a whole sequence, a whole cascade of events that are going on there.
00:28:45.960 And so, to say that there's a homomorphism, I think it's just barking up the wrong tree.
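A homomorphism, in the sense being debated here, is a structure-preserving map between two systems. A minimal sketch in Python, using illustrative maps of my own choosing (nothing from the conversation itself): one map preserves additive structure, while the other is a mere relabeling, a symbol with a causal link but no preserved structure.

```python
def is_homomorphism(f, op_a, op_b, samples):
    """Spot-check f(op_a(x, y)) == op_b(f(x), f(y)) on sample pairs."""
    return all(f(op_a(x, y)) == op_b(f(x), f(y))
               for x in samples for y in samples)

def add(x, y):
    return x + y

def add_mod2(x, y):
    return (x + y) % 2

def parity(x):
    # Maps integers to {0, 1} while preserving addition (mod 2).
    return x % 2

def relabel(x):
    # Also maps integers to {0, 1}, but as an arbitrary relabeling.
    return (x + 1) % 2

samples = range(10)
print(is_homomorphism(parity, add, add_mod2, samples))   # True
print(is_homomorphism(relabel, add, add_mod2, samples))  # False
```

The relabeling still carries information (it is invertible), so a causal or symbolic story survives even when no homomorphism does.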
00:28:53.780 Okay.
00:28:54.340 So, I want to push on this a bit more because I want to understand it.
00:28:57.640 All right.
00:28:59.400 So, I'm going to do that from two angles.
00:29:02.100 The first is that in the supercomputer architecture, let's say, there are levels of potential patterning, ranging from quantum, subatomic, atomic, molecular, etc., all the way up to the apprehensible phenomenological world.
00:29:21.180 Multiple layers of potential patterning.
00:29:24.340 So, I would say, in response to your objection that if you looked at the electrons, for example, they have no color, that color is only a pattern that can even be replicated analogously at certain levels of that multilevel patterning.
00:29:43.320 So, you won't detect it in the quantum realm.
00:29:46.740 You won't detect it at the subatomic realm, maybe not even at the atomic realm.
00:29:50.000 So, you'd detect it at the level of patternings of molecules at one level and then not above that.
00:29:57.220 It'd be a very specific level.
00:29:58.520 So, it could still be there even though it wasn't propagating through the entire system.
00:30:02.300 And then I want to add another twist to that that I think is relevant.
00:30:06.980 So, I was talking to a biologist last week about how the immune system functions.
00:30:11.600 And basically, the way that it functions, you imagine there's a foreign molecule in your bloodstream and it's got a shape.
00:30:20.160 Well, it has an endless number of very complex shapes that make up its surface.
00:30:26.660 And the complexity of that shape would be dependent on the resolution of analysis, right?
00:30:31.580 Because the subatomic contours would be different than the atomic contours and different than the molecular contours.
00:30:37.940 Okay. Now, what the immune system wants to do is get a grip on that molecule.
00:30:43.940 And it just has to get enough of a grip so that it can register the pattern, replicate the pattern, and get rid of the molecule.
00:30:54.200 So, that's its goal. You could say that's its motivational frame.
00:30:57.440 Now, the way it does that is sort of the way your arm works.
00:31:00.520 Imagine you were trying to figure out how to pick up a basketball.
00:31:04.240 Now, a baby will do that in the crib. The first thing a baby will do when it's trying to figure out how to use its arms is it uses them very non-specifically.
00:31:12.960 It'll flail about. Maybe it'll hit the ball.
00:31:15.700 Now, hitting the ball isn't throwing the ball, but it's more like throwing the ball than not hitting the ball, right?
00:31:22.300 And then the baby does this, and then that works, and then it gets a little bit more sophisticated and does this, and then it gets a little more sophisticated and it does this, and then finally it can manipulate its fingers.
00:31:34.600 So, it's specifying the grip. At some point, the baby can grab the ball and throw it, and that's kind of what the immune system does.
00:31:41.800 It makes the molecules that kind of stick to the surface, and then those modify so they stick even better, and then the sticky molecules modify so it sticks even better.
00:31:54.000 But the point I'm making is that the immune system appears to generate a sufficient homologue of the molecule to grab it and get it out.
00:32:05.040 Now, you could say that that homologue that it generates, there's many levels of reality that the foreign body participates in that aren't being modeled by the immune system homologue.
00:32:19.600 But I would say, yeah, but there's enough of a homology so that the immune system can get a grip and get rid of the molecule.
00:32:29.240 Now, and we're running around the world, this is a very good analogy, because we're running around the world trying to get a grip all the time, and we presume that the map that we've made of the world is sufficiently real if we get a good enough grip to perform the operation that we're intending to perform.
00:32:47.940 But that still, to me, that still implies that there's some level of representation that has at least the echo of a genuine homology.
00:32:59.420 So I'm wondering, you know, if you have objections to that or what you think about that.
00:34:13.920 I think that we can't count on any kind of homology or homomorphism.
00:34:20.960 I think that, for example, the way I think about it now is that space-time itself and all the particles that we see at the subatomic level and the whole bit, that's all just a headset.
00:34:35.740 And physicists actually agree.
00:34:38.680 They say space-time is doomed.
00:34:40.140 So, Nima Arkani-Hamed, David Gross, and many others are saying that we need a new framework for physics that's utterly outside of space-time and quantum theory.
00:34:51.740 And they're finding structures like decorated permutations and so forth.
00:34:55.000 These are structures not sort of curled up inside of space-time, but utterly outside of space-time.
00:35:00.620 And so, I think science is telling us.
00:35:05.800 Darwin's theory, I think, is agreeing.
00:35:07.460 It's saying that space-time is not fundamental.
00:35:09.580 It's just a headset.
00:35:11.060 Okay, okay.
00:35:11.860 So, if I said there's no ultimate homology, but there are proximal local homologies, would that do the trick?
00:35:18.100 I have a reason for torturing you about this, and I'll leave it soon.
00:35:22.140 Sure, sure.
00:35:22.220 But I'm…
00:35:24.100 Because the issue of grip really makes a difference, as far as I'm concerned,
00:35:27.600 because getting a grip is very…
It's sort of the basis of understanding: all of our cognitive enterprises, you could think, in some real sense,
00:35:36.120 are extensions of our ability to manipulate the world with our hands.
00:35:40.080 I mean, the fact that our left hemisphere is linguistically specialized
00:35:44.240 looks like it's a consequence of its specialization for articulation at the level of the hand.
00:35:51.740 And so, getting a grip is crucial here.
00:35:53.660 And the homology seems to me to be demonstrated in the fact that, like, if you pick up a hammer,
00:35:59.700 it actually comes off the ground.
00:36:02.980 Now, I think you could reasonably object that that homology is tremendously limited,
00:36:10.680 but it's hard for me to accede to the notion that it's absent.
00:36:16.620 Now, having said that, I don't want to push that point to stop you, let's say, from questioning
00:36:25.740 something as fundamental as the objective reality of space and time.
00:36:30.620 I think you can have your cake and eat it, too, in that regard.
00:36:34.080 And I want to turn to those more radical claims right away.
00:36:37.300 But if I said, well, if I pick up a hammer and it does, in fact, raise off the floor,
00:36:44.880 how is that not an indication of a homology?
00:36:49.000 Would you just, you would reduce that again to mere function?
00:36:52.380 Like, it's merely the case that it worked, and that's not demonstration of anything beyond,
00:36:58.560 the thing is, it worked.
00:36:59.720 That's the thing.
00:37:00.580 That's why I can't shake the notion of some homology.
00:37:03.340 Well, there's, I would again say that there's a causal connection.
00:37:09.360 You could talk about, you know, a causal connection between the reality behind your headset
00:37:14.460 and what you're seeing in the headset.
00:37:17.080 But I think it would be a stretch to talk about some kind of homology of structure.
00:37:23.780 It's not, it's actually not necessary, right?
00:37:26.800 To be successful is not necessary.
00:37:29.220 Well, and as you pointed out very early in this discussion,
00:37:32.280 it also might be hyper expensive, right?
00:37:34.540 You actually don't want to know more about something than you need to know
00:37:38.340 in order to perform the requisite action.
00:37:40.400 That's part of efficiency, right?
00:37:42.680 So, okay.
00:37:43.900 So, all right.
00:37:44.440 So, let's leave that aside.
00:37:45.800 Let me, let me grind away on that.
00:37:48.060 I'll just say one little, if you have like a desktop folder on your laptop
00:37:52.560 and for a file and it's blue and rectangular in the middle of your screen,
00:37:57.340 well, the file is not blue.
00:37:59.300 It's not rectangular and it's not in the middle of the computer.
00:38:01.340 There's literally no homology for anything that you can see in the symbol on the screen
00:38:07.060 and the file itself.
00:38:08.540 It's just a useful symbol without homology,
00:38:12.960 but there is a causal connection between the voltages, but no homology.
00:38:18.140 So, then what do you, okay, so, okay.
00:38:19.820 So, let, let, let, maybe we can go down that route.
00:38:22.100 Sure.
00:38:22.740 I guess I'm then unclear about what you mean.
00:38:26.180 What exactly do you mean by causal then?
00:38:28.820 Right.
00:38:30.540 Right.
00:38:31.080 So, that's already sort of smuggling in a space-time kind of analogy.
00:38:34.740 Right, right, right.
00:38:35.960 Exactly.
00:38:36.640 Exactly.
So, I'll just say that there is a mathematical connection, maybe not causal,
but some kind of mathematical connection. But the mathematics need not be a kind of
mathematics that preserves, you know, structure, for example.
00:38:52.220 Right.
00:38:52.820 So, there's a mathematical connection.
00:38:54.100 Okay, I'm going to have to grind away on that for a bit.
00:38:57.320 Okay.
00:38:57.440 Because, you know, you, you are stating that there is a relationship, at least of function,
00:39:04.660 and I'm unable to, on the fly, thoroughly discriminate between some grip of structure
00:39:11.500 and some function, because grip is a function.
00:39:14.260 So, so, so, I'll just put that aside now.
00:39:16.120 Now, let's go on to consciousness itself.
00:39:18.840 Now, you said a variety of very radical things, including criticizing the entire notion of
00:39:23.600 space and time, and so we'll delve into that.
00:39:26.820 But, but I want to tell you something that I learned from reading mythology, and I want
00:39:31.620 you to tell me how that relates, if at all, to the way that you're conceptualizing consciousness,
00:39:38.880 which is obviously not the way that people generally conceptualize it.
00:39:42.560 Okay, so, I've read a lot of different mythological accounts, and I've studied a lot of analysis of
00:39:49.500 mythological accounts, and I think I've been able to extract out commonalities and regularities
00:39:56.360 across the methods of assessment, and I think I've been able to triangulate them against
00:40:00.820 findings from neuroscience, let's say, the neuroscience of perception.
00:40:05.020 Now, the mythological stories that represent the structure of reality proclaim, you could
00:40:13.940 say, that there are three interacting causal, three interacting fundamental causal agents
00:40:21.720 or structures.
00:40:24.640 Causal agents is probably a better way of thinking about it.
00:40:26.940 There's, there's a realm of potential from which order can be extracted.
00:40:31.380 That's often given feminine symbolism, the realm of potentiality, and I think that's because
00:40:37.720 feminine creatures are the creatures out of which new creatures emerge.
00:40:42.500 So, there's a deep analogy there.
00:40:44.780 So, there's a realm of potentiality.
00:40:47.800 Then there's a realm of a priori order.
00:40:50.580 That's often given patriarchal or paternal symbolism.
00:40:54.420 That's the great father.
00:40:55.420 And so, if you read a book, let's say, the book offers you a realm of potentiality, which
00:41:01.540 is the multitude of potential interpretations that the book consists of, but then you impose
00:41:08.220 an order on that that's a consequence of, well, every book you've ever read and every
00:41:12.860 experience you've ever had.
And the book itself is a phenomenon that emerges as a consequence of the interplay between the
00:41:20.360 interpreter and the realm of potentiality.
00:41:22.420 Then there's one additional factor, which I think is identical to consciousness itself.
00:41:29.480 It's associated in mythology with the sun, with the sun that sets and then rises triumphant
00:41:35.100 in the morning.
00:41:35.760 It's associated with the conquering hero.
00:41:38.340 And it's the thing, it's the active agent that transforms this infinite potentiality into
00:41:44.500 concretized reality.
00:41:46.980 It literally makes order out of chaos.
That's the right way to think about it, and we, as conscious beings, partake in
00:41:54.800 that process.
00:41:55.620 In fact, that process is our essence, and that's what makes us made in the image of God, let's
00:42:00.780 say, but also instantiated with something like intrinsic value.
00:42:04.880 Now, you have a very strange concept of consciousness.
00:42:08.640 And so, partly because you're attempting to make the case that what we think of as objective
00:42:15.740 reality, so that's just the facts, ma'am, objective reality, is actually an emergent property.
00:42:22.660 Tell me if I've got this wrong.
00:42:24.120 It's actually an emergent property of consciousness itself.
00:42:26.820 And so, that in your scheme of things, consciousness is more fundamental than objective reality.
00:42:35.240 It's not even obvious in your scheme that objective reality, so to speak, exists.
00:42:40.480 So, tell me how you've grappled with the relationship between consciousness and the world as such.
00:42:47.080 What have you concluded?
00:42:48.340 Darwin and physics, high-energy theoretical physics, agree that space-time is doomed.
00:42:52.660 It's not fundamental reality, and the search is on in the last 10 years among physicists
00:42:57.380 to find structures entirely beyond space-time, not curled up inside space-time, beyond space-time.
00:43:04.140 And they found structures I mentioned like the decorated permutations, amplituhedron, and
00:43:08.940 so forth.
00:43:10.140 And so, I'm also thinking about consciousness utterly outside of space-time.
So, it's a fundamental reality, and space-time, which we have thought of for
00:43:22.660 most of human history as the fundamental reality that we're embedded in, is a trivial headset.
00:43:27.880 That's all it is.
00:43:28.820 We've mistaken a headset for the truth, because it's easy.
00:43:32.700 If that's all you've seen all your life is a headset, it's hard to imagine something
00:43:36.800 outside of it.
00:43:37.820 But science is good enough to recognize that space-time is just a headset.
00:43:42.980 So, now we're free using mathematics to ask, what kind of structures could we posit beyond
00:43:51.200 space-time?
00:43:52.500 And in my case, I'm trying to also deal with the mind-body problem.
00:43:55.820 How is consciousness related to what we call the physical world?
00:43:58.740 So, I've decided to try to get a mathematical model of consciousness.
00:44:02.020 Now, of course, spiritual traditions and humanity for thousands of years has thought about consciousness
00:44:09.260 and so forth.
00:44:10.020 But as a scientist, what I want to do, of course, is listen to their insights, but I need to
00:44:13.760 write down as minimal a mathematical structure as I can to boot up a completely rigorous theory.
00:44:20.360 And so, what we've done in our theory, we call it the theory of conscious agents, is a very
00:44:26.440 minimal structure.
00:44:27.200 A conscious agent has a probability space that it's defined on.
00:44:33.680 So, it's a probability space.
00:44:36.920 Probability.
00:44:37.660 Is that probability space equivalent to, let's say, a realm of potential?
00:44:43.700 Yes.
00:44:44.240 My students and I tried to model anxiety as a response to entropy.
00:44:49.400 Okay.
00:44:49.520 So, imagine that what you have in front of you is a set of branching possibilities, some
00:44:56.080 of which can be realized with comparatively less effort.
00:45:00.660 So, they're more probable, let's say, given your current state, some of which are virtually
00:45:05.740 impossibly distal, but in principle could be managed if you were smart enough and could
00:45:10.620 gather the resources.
00:45:11.800 But, so you have a probability space in front of you, some of which is sort of at hand.
00:45:15.660 Like, it's pretty easy for me to pick up this pen, right?
00:45:18.760 So, that's a high probability pathway laid out in front of me.
00:45:22.160 So, I mean, the mythological motifs that I referred to insist that what people face is
00:45:30.240 something akin to the pre-cosmogonic chaos that God himself faced when the cosmos first
00:45:36.600 sprang into being, right?
00:45:37.760 And so, that the way to construe the world isn't as a place of clockwork, automaton machines,
00:45:44.720 self-evident objects, but as a realm of possibility that differs in probability.
00:45:49.720 And then the issue becomes, how do you best orient yourself so that you can
00:45:56.520 contend properly with that probability landscape?
00:46:01.180 Now, is that, am I walking on parallel ground here?
00:46:05.620 We're in broad agreement in that, in the sense that our theory of conscious agents, by writing
00:46:12.060 down a probability space, it is a space of potentiality.
00:46:16.420 For example, to be very, very concrete, suppose my experiment is just to flip a coin twice,
00:46:21.720 heads and tails.
00:46:22.740 Well, what's my probability space?
00:46:28.980 Well, I could get heads-heads, heads-tails, tails-tails, or tails-heads, right?
00:46:28.980 So, there's four possible-
00:46:29.880 Or it could land on the edge.
00:46:31.880 Yeah, right, right.
00:46:33.360 So, yeah, yeah.
00:46:34.800 Well, then I'd have to increase my probability space if I wanted to include that.
00:46:38.760 But now notice, I write down the probability space first, but I haven't flipped my coin yet.
00:46:43.480 So, it's the space of potential outcomes of things that I can do.
00:46:49.120 And that's what probability spaces are.
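The two-flip example can be written out directly; a minimal sketch, assuming a fair coin:

```python
from itertools import product

# The sample space for two coin flips: every ordered outcome.
outcomes = [''.join(pair) for pair in product('HT', repeat=2)]

# For a fair coin, each outcome is equally likely.
space = {outcome: 0.25 for outcome in outcomes}

print(outcomes)             # ['HH', 'HT', 'TH', 'TT']
print(sum(space.values()))  # 1.0
```

The space is written down before any coin is flipped; it is a catalogue of potential outcomes, not of actual events.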
00:46:51.100 And so-
00:46:51.520 Yeah, okay.
00:46:52.620 So, when I write down a probability space for consciousness, it's a probability space in
00:46:56.280 which, in the first instance, I'm thinking about what is the probability that
00:47:00.460 I'll experience green, or mint, or the sound of a trumpet, and so on, all these different
00:47:08.700 conscious experiences.
00:47:09.440 So, the probability space is a space of all possible kinds of conscious experiences that
00:47:14.000 this particular agent might have.
00:47:16.240 And you can imagine that there's, for some agents, maybe they're simple, they only have
00:47:20.600 the experience of red, period.
00:47:21.940 That's it.
00:47:22.320 That's all this agent has, red.
00:47:23.620 The other one can experience red and green.
00:47:26.180 And the other one can have 10 trillion experiences.
00:47:28.980 You could imagine agents with-
00:47:30.780 And then they can be related.
00:47:31.920 Well, maybe the red agent can be thought of as a subspace of the one that says red and
00:47:36.460 10 million other things.
00:47:37.360 So, we can now-
00:47:38.580 Right, right.
00:47:38.800 Depends on how articulated the organism is, right?
00:47:41.540 So, yeah.
00:47:41.860 The simpler organisms, exactly.
00:47:43.500 The probability space around them collapses.
00:47:46.100 That's right.
00:47:46.720 And so, right, right.
00:47:48.440 And so, all the infinite number of potential probabilities that we see in front of us just
00:47:53.120 collapse into maybe five choices, something like that.
00:47:56.560 That's right.
00:47:57.640 Yeah.
00:47:58.240 Okay.
00:47:58.580 So, you know, Karl Friston, so this is quite interesting.
00:48:01.920 So, I talked to Karl Friston about emotion, about hope, positive emotion, let's say, incentive
00:48:10.900 reward, positive emotion.
00:48:12.380 So, positive emotion in that sense is a reward that signals advancement towards a goal.
00:48:18.180 Now, I'd already been conceptualizing with my students, as had Friston, anxiety as a marker
00:48:24.040 for the emergence of entropy.
00:48:25.780 But Friston pointed out, now, and I want to make a connection between his thinking and
00:48:31.280 yours here.
00:48:31.840 Friston pointed out that you can map positive emotion with respect to entropy, too, because
00:48:36.960 if you're looking for a desired outcome, so imagine you're trying to get a grip on the
00:48:42.160 world to bring about a certain reality.
00:48:45.900 If you see yourself making a step towards that end such that the number of potential
00:48:52.020 pathways to that end decreases somewhat, that produces a dopamine kick.
00:49:01.020 And that's a signal of reduced entropy in relationship to the goal.
00:49:01.020 And it seems to me that entropy is always calculated in relationship to a goal, right?
00:49:04.840 You're saying, well, how entropic is the current space?
00:49:07.340 And you can't answer that.
00:49:08.800 You have to say, how entropic is the current space in relationship to the ordered state
00:49:13.240 that I'm trying to bring about as a consequence of my actions?
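That goal-relative notion of entropy can be put in Shannon's terms. A toy sketch, with my own simplifying assumption that the remaining candidate paths to the goal are equally likely:

```python
import math

def path_entropy(n_paths):
    """Shannon entropy, in bits, over n equally likely paths to a goal."""
    return math.log2(n_paths)

# Each step toward the goal prunes candidate paths, so the entropy
# measured relative to the goal falls; on this account, that reduction
# is what the positive-emotion signal tracks.
for n in (8, 4, 2, 1):
    print(n, path_entropy(n))   # 3.0, 2.0, 1.0, 0.0 bits
```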
00:49:16.940 And then, now and then, you'll stumble across something that blows up in your face, let's
00:49:20.880 say.
00:49:21.420 Like, I've always thought about this.
00:49:22.700 Like, imagine you're driving your car to work.
00:50:39.220 Okay, and you might say, well, what is your car?
00:50:45.700 And the objective materialist would say, well, it's an enclosed shell with four tires.
00:50:50.860 It would give you a materialist description.
00:50:53.500 But I would say, no, no, no, that's not how your nervous system is responding at all.
00:50:57.480 For your nervous system, the car is a conveyance from point A to point
00:51:03.620 B, so it's a tool.
00:51:04.760 And it's a tool that signifies zero entropy, essentially, as long as it performs its function.
00:51:10.940 And then let's say your car breaks down, and now you're on the side of the road.
00:51:16.200 Now what happens to you is the probability space around you, I would say it becomes more distal.
00:51:21.880 Any of your desired goals become more expensive and harder to compute, right?
00:51:26.780 What's wrong with my car?
00:51:28.140 Was I an idiot for buying that car?
00:51:29.980 Am I generally an idiot?
00:51:31.320 Am I going to get in trouble with my boss?
00:51:33.060 What's going to happen to the rest of the day?
00:51:34.760 You know, what's going to happen when I go see the mechanic, right?
00:51:38.920 The landscape blows up into a broader range of unconstrained potentialities, and that seems
00:51:45.300 to be signaled by anxiety.
00:51:47.040 And anxiety then prepares your body for a multitude of potential actions.
00:51:52.100 And the problem with that is that it's very physiologically costly, right?
00:51:56.780 So that's stress, and that'll wear you to a frazzle.
00:51:58.980 So, okay, so is any of that not in accord with the manner in which you are modeling your
00:52:05.720 theory of conscious agents?
00:52:08.000 Right.
00:52:08.620 So in the theory of conscious agents, I should say that in addition to the probability space
00:52:13.060 and the conscious experiences that it allows, there is the dynamics.
00:52:19.300 It's a Markov chain, a Markovian dynamics, where you have these matrices that describe
00:52:25.560 the probability if I'm experiencing red now, what's the probability I'll experience green
00:52:29.020 the next time I have an experience?
00:52:30.680 So there is a dynamical, and when we do the analysis, it turns out that our Markovian
00:52:37.480 dynamics need not have an entropic arrow of time.
00:52:42.840 It can be a stationary dynamics in which the entropy does not increase.
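A minimal sketch of such a Markovian dynamics over conscious experiences; the two-experience agent and these particular transition probabilities are illustrative assumptions, not Hoffman's actual model:

```python
# States are conscious experiences; row = current, column = next.
STATES = ("red", "green")
T = [[0.7, 0.3],   # P(next experience | currently "red")
     [0.2, 0.8]]   # P(next experience | currently "green")

def step(dist, T):
    """One step of the Markov dynamics: new_j = sum_i dist_i * T[i][j]."""
    return [sum(dist[i] * T[i][j] for i in range(len(dist)))
            for j in range(len(T[0]))]

# Starting from certainty of "red", the distribution relaxes to the
# chain's stationary distribution, (0.4, 0.6) for this matrix.
dist = [1.0, 0.0]
for _ in range(50):
    dist = step(dist, T)
print(dict(zip(STATES, dist)))  # close to {'red': 0.4, 'green': 0.6}
```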
00:52:46.920 So entropy...
00:52:47.860 Right, right, right.
00:52:48.820 In this realm of conscious...
00:52:49.700 That's kind of what you hope.
00:52:51.200 Right.
00:52:51.500 You know, that's one of the things that makes things constant, right, is that you assume
00:52:55.940 that the entropic transformation is negligible.
00:52:58.880 That's why you can ignore things, right?
00:53:01.020 When you ignore things, and you ignore almost everything, you're assuming that the entropic
00:53:04.480 transformation is negligible.
00:53:07.080 Well, what I'm saying is that it's possible to model a reality in which entropy doesn't
00:53:11.660 increase, period.
00:53:13.020 It's not ignoring anything.
00:53:14.340 That's the nature of this deeper reality outside of space-time.
00:53:17.240 But then it turns out to be a theorem that if you take a projection of that non-entropic,
00:53:24.340 you know, there's no arrow of time in the sense of increasing entropy of this Markovian
00:53:28.580 dynamics, but if you take a projection of it by conditional probability, any projection
00:53:32.480 of it, it's a theorem that you will, as an artifact of projection, have the illusion
00:53:38.080 of an arrow of time.
00:53:39.560 You will get an...
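The projection theorem itself is not reproduced here, but a standard, closely related fact gives the flavor: under a doubly stochastic Markov chain, the uniform distribution is stationary and its Shannon entropy never changes, while any distribution displaced from it relaxes back with monotonically increasing entropy, an apparent arrow of time. A sketch:

```python
import math

def shannon_entropy(p):
    """Shannon entropy in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Doubly stochastic: rows and columns both sum to 1,
# so the uniform distribution is stationary.
T = [[0.9, 0.1],
     [0.1, 0.9]]

def step(p, T):
    return [sum(p[i] * T[i][j] for i in range(2)) for j in range(2)]

# At the stationary distribution, entropy is constant: no arrow of time.
print(shannon_entropy([0.5, 0.5]))  # 1.0 bit, every step

# Displaced from it, entropy rises monotonically toward the maximum.
p = [0.8, 0.2]
entropies = []
for _ in range(5):
    entropies.append(shannon_entropy(p))
    p = step(p, T)
print(entropies)  # strictly increasing toward 1.0
```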
00:53:40.220 Right, well, is that because...
00:53:42.980 Well, look, if you're pursuing a pragmatic goal, things can fall apart and go wrong.
00:53:51.700 And that is an increase in entropy within the universe defined by that goal.
00:53:58.200 That may say nothing about entropy per se as a characteristic of broader reality.
00:54:03.640 See, I've always had this issue with entropy because entropy always seemed to me to be by
00:54:08.000 necessity subjectively defined.
00:54:11.220 It has to be disorder in relationship to some positive state of order.
00:54:16.420 And then you get back into the Darwinian problem at that point.
00:54:20.060 Like if it's, well, if it's bounded by motivation, then it's encapsulated within a Darwinian space.
00:54:25.500 So, okay, so in terms of your conception of objects, let me try this out.
00:54:29.920 So I'm looking at this teleprompter here, and you're sitting in the middle of it.
00:54:36.080 Now, I'm treating that like a set of conditional probabilities, right?
00:54:41.160 I'm presuming that what this machine is doing right now is very much predictive of what it's going to do in a second.
00:54:49.300 And I'm predicating my perception itself on that reality.
00:54:55.020 Now, you know, it could burst into flames.
00:54:57.740 Now, I feel that the probability of that is very low.
00:55:00.200 So I'm not going to perceive the machine that way.
00:55:03.980 Now, you know, there are disorders.
00:55:05.740 Obsessive-compulsive disorder is a good example, where people stop being able to reduce that probability landscape to predictable safety,
00:55:14.860 and they start reacting to almost everything as if it's unpredictably dangerous.
00:55:19.320 And, you know, things are.
00:55:20.980 So I had clients, for example, they would go into a building, and the first thing they would do is look for all the fire escapes.
00:55:28.640 And what they asked me was, well, why don't you do that?
00:55:34.580 Because the building could burn down, and people do get trapped in buildings, and that's a horrible way to die.
00:55:39.460 So the mystery isn't why they did that.
00:55:42.080 The mystery for them was why everyone didn't do that all the time.
00:55:44.840 And I actually do believe that the great mystery is why people aren't scared out of their skulls all the time,
00:55:49.840 not why they're sometimes calm.
00:55:52.080 But so can you imagine an object now?
00:55:53.880 Now, the object is surrounded by a probability distribution, I would say.
00:55:58.300 And that probability distribution is all the things that object might turn into in some period of time, let's say.
00:56:06.140 And I would say, to some degree, when you look at the object, you actually also perceive that probability space.
00:56:12.920 Because, you know, although I see that this teleprompter is stable,
00:56:17.980 it's unstable enough and dynamic enough to provide me with a representation of you.
00:56:23.100 And so I'm playing with the—by seeing the object and interacting with it,
00:56:29.260 I'm playing with the probability space around it.
00:56:31.880 So is it the case that you see the damn probability space when you look at the object?
00:56:38.620 Well, I don't know if we see the space itself.
00:56:42.880 We certainly—we're estimating what we think are the probabilities for various good things and bad things to happen.
00:56:49.260 But I would say that this whole business about entropy increasing and so forth—
00:56:56.680 First, I should point out that Shannon entropy, which is what we're talking about here,
00:57:01.660 it turns out not to be the most general notion of entropy.
00:57:04.220 There are—mathematicians and physicists are looking at broader definitions of entropy.
00:57:09.960 There's something called the Tsallis entropy, and others.
00:57:12.400 So there are technical reasons for why—I mean, Shannon entropy is great and it's very, very useful.
00:57:17.960 And when I was talking about the entropy of our dynamical systems and not having, you know, increasing entropy,
00:57:23.920 I was talking about Shannon entropy.
00:57:25.200 But there are more general notions of entropy that are important.
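[The relationship Hoffman alludes to can be checked numerically. This is an editorial sketch, not part of the conversation: Tsallis entropy is a one-parameter family that recovers Shannon entropy in the limit q → 1.]

```python
import math

def shannon_entropy(p):
    """Shannon entropy H = -sum(p_i * ln p_i), in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum(p_i^q)) / (q - 1).

    As q -> 1 this reduces to the Shannon entropy, which is why it is
    called a generalization of the Shannon notion."""
    if q == 1:
        return shannon_entropy(p)
    return (1 - sum(pi ** q for pi in p)) / (q - 1)

p = [0.5, 0.25, 0.25]
print(shannon_entropy(p))           # ≈ 1.04 nats
print(tsallis_entropy(p, 1.001))    # approaches the Shannon value as q -> 1
print(tsallis_entropy(p, 2))        # (1 - sum p_i^2) = 0.625
```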
00:57:29.300 So I would say that the very whole—the whole structure of needing to estimate probabilities and worrying about outcomes and rewards and so forth,
00:57:44.540 from the point of view of our dynamics of conscious agents, all of that—in fact, all of Darwinian theory is an artifact of projection.
00:57:53.000 So here's a dynamic of conscious agents outside of space-time.
00:57:59.140 There need not be any competition, no limited resources, no arrow of time.
00:58:06.800 And yet, when I take any projection of that dynamics to get a new Markovian dynamics that has lost just a little bit of information,
00:58:14.580 I will have an arrow of time, and it can look like separate organisms competing for resources and so forth.
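[Hoffman's claim that a lossy projection can manufacture an apparent arrow of time can be illustrated with a toy model. This is an editorial sketch, not his actual mathematics: a deterministic, perfectly reversible four-state cycle, once coarse-grained into two lumped states, looks like a stochastic process whose entropy only ever increases.]

```python
import math
from collections import defaultdict

# Fine-grained dynamics: a deterministic, reversible 4-cycle 0 -> 1 -> 2 -> 3 -> 0.
# Nothing about it is random, and no information is lost running it forward.
step = {0: 1, 1: 2, 2: 3, 3: 0}

# A lossy "projection": lump states {0, 1} into 'A' and {2, 3} into 'B'.
lump = {0: 'A', 1: 'A', 2: 'B', 3: 'B'}

# Induced coarse transition probabilities, assuming a uniform prior over the
# fine states inside each lump (this is where information gets discarded).
counts = defaultdict(lambda: defaultdict(int))
for s, t in step.items():
    counts[lump[s]][lump[t]] += 1
P = {}
for a, row in counts.items():
    total = sum(row.values())
    P[a] = {b: n / total for b, n in row.items()}

def entropy(dist):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Start the coarse description sharply at 'A' and evolve it.
dist = {'A': 1.0, 'B': 0.0}
print(dist, entropy(dist))          # entropy 0.0: perfect knowledge
for _ in range(3):
    new = {'A': 0.0, 'B': 0.0}
    for a, pa in dist.items():
        for b, pab in P[a].items():
            new[b] += pa * pab
    dist = new
    print(dist, entropy(dist))      # entropy rises to 1 bit and stays there
```

The underlying dynamics has no preferred time direction, yet the projected description spreads irreversibly toward maximum entropy, which is the flavor of the "artifact of projection" argument.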
00:58:20.800 In other words, I mean, I love Darwin's theory of evolution by natural selection; it's very powerful.
00:58:25.500 I think the entire theory is not a deep insight into reality.
00:58:28.260 I think it's an artifact of projection.
00:58:30.320 The very arrow of time—think about the arrow of time.
00:58:33.320 It is the fundamental limited resource in evolutionary theory.
00:58:37.420 Time is the fundamental limited resource.
00:58:39.880 If I don't get food in time, I die.
00:58:41.480 If I don't mate in time, I don't reproduce.
00:58:43.020 If I don't breathe air in time—so time is the fundamental limited resource.
00:58:47.080 And the arrow of time itself need not be fundamental.
00:58:50.400 It could be entirely an artifact of projection.
00:58:53.620 So what that means is—and this gets again to the—
00:58:56.780 Okay, well, then I'd like to know—this is back to the most fundamental possible question we could be discussing, which is,
00:59:05.080 well, what's the nature of reality itself?
00:59:07.680 I mean, when I was debating with Sam Harris, we got hung up on this consistently,
00:59:11.420 because I wasn't willing to use the same definition of truth that he was.
00:59:15.580 He uses an objective materialist definition, and I think that, you know, truth flies like an arrow, let's say.
00:59:23.360 It's got a functional element to it that you cannot eradicate.
00:59:27.800 There's no way out of that with an objective materialism, as far as I can tell.
00:59:31.280 Now, you said the Darwinian race and the arrow of time is just an artifact.
00:59:36.560 But if I said, well, hold on a second, I don't exactly know what you mean by artifact, then,
00:59:42.800 because if I don't act like there's an arrow of time and restricted resources in that regard,
00:59:49.820 then I'm going to die.
00:59:50.880 And that's real enough for me.
00:59:52.940 You know, you might even say, well, my death has little to do with the fundamental structure of reality.
00:59:57.020 But I would say, well, it has enough to do with it, so it happens to concern me.
01:00:01.900 And so, you know, we start to get into a discussion about what constitutes reality itself.
01:00:07.940 If this is just a projection, what, in principle, would be real?
01:00:14.000 Right, so on this theory, then, consciousness is the fundamental reality,
01:00:18.480 and the conscious experiences that observers have is the fundamental reality.
01:00:23.080 And the experience that we have of space and time is a projection of a much deeper reality.
01:00:33.240 And that projection, because it loses information, is necessarily going to have artifacts in it.
01:00:38.040 And among the artifacts are things like separate objects in space and time.
01:00:44.400 Space and time itself is an artifact.
01:00:46.360 So one reason I'm not a materialist is because our best materialist theories,
01:00:52.180 namely evolution by natural selection and also quantum field theory and Einstein's theory of gravity,
01:01:00.540 they tell us that space-time has no operational meaning at 10 to the minus 33 centimeters or 10 to the minus 43 seconds.
01:01:08.040 In other words, our theories, our scientific theories that are the foundation of our materialist ideas,
01:01:14.360 tell us precisely the scope and the limits of materialism.
01:01:18.320 Materialism, that kind of materialism, is fine down to the Planck scale, 10 to the minus 33 centimeters.
01:01:23.200 And after that, it completely falls apart.
01:01:25.540 It's utterly irrelevant.
01:01:29.080 That's right.
01:01:29.660 The space-time, physicalist, matter kind of materialism falls apart.
01:01:33.860 And it's not because of religious ideas, I'm saying.
01:01:35.700 And I'm just listening to the science.
01:01:38.000 Science tells us space-time has no meaning beyond the Planck scale.
01:01:41.260 And that's why the avant-garde high-energy theoretical physicists are now looking for structures entirely outside of space-time,
01:01:48.160 not curled up inside space-time, entirely beyond.
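[The Planck-scale figures quoted here follow directly from the fundamental constants; a quick editorial sketch of the arithmetic:]

```python
import math

# Planck length and time from the fundamental constants (CODATA values).
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8         # speed of light, m/s

planck_length_m = math.sqrt(hbar * G / c**3)
planck_time_s = planck_length_m / c

print(f"Planck length: {planck_length_m * 100:.2e} cm")  # ~1.6e-33 cm
print(f"Planck time:   {planck_time_s:.2e} s")           # ~5.4e-44 s
```

These are the "10 to the minus 33 centimeters" and roughly "10 to the minus 43 seconds" mentioned in the conversation.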
01:01:50.480 So, it's in that sense that, yeah, materialism, and by the way, I should say this about all scientific theories.
01:01:59.700 My view about all scientific theories is that each scientific theory starts with certain assumptions,
01:02:05.180 the premises of the theory.
01:02:06.120 And it says, if you grant me those assumptions, then I can explain all this wonderful stuff.
01:02:11.080 Okay, so how did you come to that conclusion?
01:02:13.360 Because that's, see, this is, hmm.
01:02:16.520 I've been trying to wrestle with this with regards to, say, the potential relationship between the integrity of the scientific process and an underlying transcendent ethic.
01:02:29.160 So, I think, for example, I talked to Richard Dawkins about this a little bit, although we didn't get that far for a variety of reasons.
01:02:34.480 But, like, I think that to be a scientist, there's certain things that you have to accept on faith.
01:02:39.980 These would be equivalent to those axioms.
01:02:41.860 And I'm not talking about necessarily a scientific theory here, as you were, but the practice of science itself.
01:02:47.080 So, for example, you have to act as if there is truth.
01:02:50.580 You have to act as if the truth is discoverable.
01:02:53.940 You have to act as if you can discover it.
01:02:57.300 Then you have to act as if you discovering the truth and communicating it is good.
01:03:03.460 And none of that is provable scientifically.
01:03:07.760 You have to start with those axioms before you can even make a move.
01:03:11.340 And it could be wrong, you know.
01:03:13.000 I mean, we think that delving into the structure of the world with integrity is redemptive.
01:03:19.720 We think that knowledge is useful pragmatically.
01:03:22.120 But, you know, we've invented all sorts of things that could easily wipe us out, the hydrogen bomb perhaps being foremost among those.
01:03:29.040 And so the evidence that that set of claims is true is sorely lacking.
01:03:34.560 Or you could say it's 50-50.
01:03:36.460 That's another way of thinking about it.
01:03:37.780 But I'm very curious about how you came to the conclusion that scientific theories themselves have to be axiomatically predicated.
01:03:45.140 How did you walk down that road?
01:03:46.860 Because that's not a road that very many people walk down.
01:03:49.100 Well, if you just look at any scientific theory, say Einstein's theory of special relativity, he says, let's start with two assumptions.
01:03:57.160 That, you know, the speed of light is universal for all observers and that the laws of physics are the same in all inertial frames.
01:04:05.080 He says, if you grant me those two miracles, then the whole…
01:04:09.100 Right, right, right.
01:04:10.240 And away we go.
01:04:11.540 It's the same thing.
01:04:12.760 And so does Riemann.
01:04:14.280 And Darwin starts off and says, grant me that there are organisms in space and time and resources, and these organisms are competing for resources.
01:04:22.360 Now I'll give you a theory.
01:04:24.840 So every…
01:04:25.820 Right, plan?
01:04:26.160 When you just…
01:04:26.940 If you just look at any scientific theory, a good theory will make explicit the assumptions.
01:04:33.240 But if it's not, you can find what the assumptions are.
01:04:36.560 So there's no theory…
01:04:37.540 Okay, so…
01:04:38.820 There's no theory of everything.
01:04:39.660 Do you think that there's…
01:04:41.100 Is there any difference between…
01:04:43.580 Technically, I'm thinking, philosophically, I don't see any difference between the claim that a given theory has to have axioms that aren't provable from within the frame of that theory.
01:04:55.280 That's Gödel's theorem, as far as I can tell, applied much more broadly.
01:04:59.300 I don't see any difference between that and the proposition that to get the game started, there has to be…
01:05:05.280 It's something akin to a miracle.
01:05:06.840 I mean, because these axioms…
01:05:09.620 Imagine that a miracle inside a system is defined as any occurrence that isn't governed by the rules that apply within that system.
01:05:19.320 That's a good working definition.
01:05:21.180 Now your proposition is, well, I don't care what theory you're coming up with, there's going to be a set of axiomatic presuppositions that are a launching point.
01:05:28.860 See, I also think those axiomatic presuppositions are where you put all the entropy.
01:05:34.980 You say, grant me this.
01:05:36.580 It's like, well, that takes care of 95% of the mystery, so we'll just shelve that invisibly, right?
01:05:42.980 Because it's hidden inside the axioms.
01:05:45.400 And then you can go about manipulating the small remnant of trouble that you have left over.
01:05:50.060 I also think this is why people don't like to have their axioms challenged, Don, because if you say, well, I'm not going to accept that, then you let loose all the demons that are encapsulated within those axioms.
01:06:00.940 And they start roaming about again, and people don't like that at all.
01:06:04.320 Well, yeah.
01:06:04.800 A good scientist will want to have their assumptions made absolutely mathematically precisely and explicit.
01:06:11.540 So they're just laid out there, and they say, these are the assumptions of the theory, and given these assumptions, I can now prove this.
01:06:18.360 And this is the glory of science, where we put down precisely what our assumptions are, and then we look at it mathematically, and we can get both the scope of those assumptions, how much can we do with those assumptions, and the limits.
01:06:35.560 Like in the case of space-time, the limits are 10 to the minus 33 centimeters.
01:06:39.060 Game over.
01:06:39.880 By the way, it's not that deep, in my view.
01:06:42.620 It's not 10 to the minus 33 trillion centimeters.
01:06:44.720 It's just 10 to the minus 33, and the game is over for space-time.
01:06:48.400 So that's a good antidote for dogmatism, because your own theory, a mathematically precise theory, will tell you the limits of your assumptions, and then say, okay, now you need to look for a broader framework with deeper assumptions.
01:07:02.500 But they will be new assumptions.
01:07:04.440 And so I view this as infinite job security for scientists, because we will never, ever get a theory of everything.
01:07:12.780 We'll always have a theory of everything except our current assumptions.
01:07:16.060 And I agree with you that those assumptions will essentially be the whole bailiwick of what we're doing.
01:07:23.400 So there is a reality, whatever it is.
01:07:27.880 Now, this is, for me, something of an interesting mystery.
01:07:30.340 Our theories, in some sense, don't even scratch the surface of the truth.
01:07:38.840 Because this process will go on forever, we'll still essentially have measure zero of the truth.
01:07:46.300 And yet, Einstein's theory and quantum theory gave us the technologies that are allowing you and me to talk across the country.
01:07:55.160 Well, you could say that partly what's happening there is that the more sophisticated the theory, the broader the range of probable states of any given object or system of objects that can be predicted.
01:08:11.140 It's something like that.
01:08:12.120 But Piaget pointed that out when he was talking about developmental improvement in children's cognitive theories.
01:08:18.160 And so, you know, if you look at someone like Thomas Kuhn, Kuhn presumed that we undertook multiple scientific revolutions, but there was no necessary progress.
01:08:32.280 There were just different sets of axioms.
01:08:35.460 And Piaget knew about Kuhn's theory, by the way.
01:08:37.980 But Piaget's point was, no, you've got it slightly wrong, because there is a progression of theory in that a better theory allows you to predict everything the previous theory allowed you to predict, plus some additional things.
01:08:50.840 Now, your point would be, well, we can just continue that movement upward forever, right?
01:08:56.100 Because the landscape of potentiality is inexhaustible.
01:09:00.600 And so, again, you can have your cake and eat it, too.
01:09:03.500 We can learn more.
01:09:05.040 Einstein got us farther than Newton.
01:09:07.980 Which doesn't mean that Einstein's axiomatic set is the final set.
01:09:13.000 Okay, so let me put a twist in this.
01:09:15.400 I've been thinking about this recently.
01:09:17.300 I'm writing a new book, and one of the things I'm doing in that book is doing an analysis of the story of Abraham.
01:09:23.380 Abraham's a very interesting story, okay?
01:09:25.460 So Abraham is called out into the world, even though he sort of hung around his father's tent until he's like 70.
01:09:32.260 So he had utopia at hand.
01:09:35.880 He didn't have to do any work to get everything he needed.
01:09:39.260 But that wasn't good enough.
01:09:40.560 So a voice comes to him.
01:09:42.260 It's the voice of conscience, I would say, and says, look, you've got all this security, but that isn't what you're built for.
01:09:47.680 Get the hell out there in the world.
01:09:49.320 And so he does that, and then all hell breaks loose.
01:09:52.240 It's one bloody catastrophe after another.
01:09:54.700 Starvation and tyranny and warfare and the necessity of sacrificing his son.
01:09:59.940 It's just like one bloody thing after another.
01:10:02.960 Okay, but during that process, Abraham continues to aim up, and he makes the proper sacrifices.
01:10:09.880 And the consequence of that is that God promises him that his descendants will be more numerous than the stars.
01:10:17.080 So I was reading that from an evolutionary perspective, and I thought, okay, what's happening here is that the narrative is trying to map out a pathway that maximizes reproductive fitness, all things considered.
01:10:31.300 Now, the problem I have with theories like Dawkins, let's say, is Dawkins reduces, and you tell me if you think this is wrong, Dawkins implicitly reduces sex to lust.
01:10:43.380 Then he reduces reproduction to sex.
01:10:47.680 And the problem with that is that reproduction is not exhausted by lust or sex, quite the contrary, especially in human beings, because not only do we have to chase women, let's say,
01:10:59.360 but then when we have children, we have to invest in them for like 18 years before they're good for continual reproduction.
01:11:07.340 And we have to interact with them in a manner that's predicated on an ethos that improves the probability of their reproductive fitness.
01:11:17.320 And so reproduction, see, this is something that the Darwinists, the casual Darwinists, do very incautiously, as far as I'm concerned,
01:11:26.240 because they identify the drive to reproduction with sex.
01:11:30.600 And that's a big mistake, because sex might ensure your reproduction proximally for one generation.
01:11:40.040 But the pattern of behavior that you establish and instantiate in your offspring, which would be an ethos,
01:11:47.160 might ensure your reproduction multigenerationally, you see.
01:11:50.920 And that appears to be what's being played out in this story of Abraham is that the unconscious mind,
01:11:56.940 let's say, trying to map the fitness landscape, is attempting to determine what pattern of behavior is most appropriate
01:12:06.140 if the goal is maximal reproductive fitness calculated across multiple generations,
01:12:13.080 or maybe across infinitely iterating generations.
01:12:15.840 And so that points to something, again, like you said earlier, you called it a general fitness,
01:12:24.160 what was it?
01:12:25.460 I've got to get it here.
01:12:26.540 Big fitness payoff, right?
01:12:28.960 And that could be the ethos to which all these subsidiary ethoses are integrated.
01:12:36.600 See?
01:12:37.500 Okay.
01:12:38.580 Okay, okay.
01:12:39.500 So I'm wondering what you think about that.
01:12:41.560 First of all, what you think about the proposition that evolutionary biologists,
01:12:46.600 Dawkins is a good case in point, have erred when they've too closely identified reproduction with short-term sex.
01:12:55.180 It's like, that isn't a guarantee of reproduction.
01:12:57.640 We wouldn't invest in our children if that was the case.
01:13:00.360 We would just leave them.
01:13:01.940 The sex is done.
01:13:03.440 We've reproduced.
01:13:04.660 You need an ethos to guarantee reproductive fitness across time.
01:13:09.480 Well, there's several levels here.
01:13:11.280 First, Dawkins, of course, understands that most reproduction is asexual, right?
01:13:19.660 So sexual reproduction is a relatively recent thing.
01:13:23.880 Most reproduction has been asexual.
01:13:27.540 So Dawkins is very famous for talking about the selfish gene.
01:13:30.220 And it's really, when he talks about reproduction, it's about genes reproducing themselves.
01:13:34.040 It's really not so much about sex.
01:13:35.440 Sex is one way of having that happen, but bacteria do it without sex.
01:13:39.120 And so, regarding the emphasis on sex, I would say Dawkins, of course, understands that sex isn't fundamental.
01:13:47.780 Now, when it comes to human motivations and mammal motivations, perhaps in that specific context, you might then be talking about it.
01:13:55.620 But even there, when you start talking about sexual reproduction, there are many, many strategies that organisms use.
01:14:01.780 So, for example, some spiders will have just hundreds of babies and eat some of them.
01:14:07.320 They'll eat some of them, you know, and leave the others to fend for themselves.
01:14:11.280 Having the babies is their only job.
01:14:13.980 And after that, the babies are on their own.
01:14:15.720 And so there are different strategies.
01:14:18.480 So this is where, you know, Dawkins is quite famous, justifiably, for his work on the selfish gene idea.
01:14:24.180 That is, there are different strategies, but the only thing that matters in this framework is, what is the probability that particular genes spread through the population in later generations?
01:14:36.640 Sex came along, apparently, to deal with...
01:14:39.500 Okay, as one of the pathways to that, right?
01:14:42.500 One of the pathways, that's right.
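[The "probability that particular genes spread through the population" that Hoffman describes is standardly modeled with simulations like the Wright-Fisher model. This is an editorial sketch of that textbook toy model, not Hoffman's or Dawkins's own work:]

```python
import random

def wright_fisher(pop_size, p0, s, generations, seed=0):
    """Track the frequency of a gene variant with selective advantage s.

    Each generation applies selection (carriers weighted by 1 + s) and
    then drift (resampling the finite population). Returns the final
    frequency: 1.0 means the gene has fixed, 0.0 means it was lost."""
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        w = p * (1 + s)
        p_sel = w / (w + (1 - p))             # selection tilts the odds
        carriers = sum(rng.random() < p_sel for _ in range(pop_size))
        p = carriers / pop_size               # resampling adds drift
        if p in (0.0, 1.0):                   # fixation or loss
            break
    return p

# A variant starting at 10% frequency with a 5% advantage usually sweeps
# the population; estimate the fixation probability over replicates.
runs = 20
fixed = sum(wright_fisher(500, 0.1, 0.05, 2000, seed=i) == 1.0 for i in range(runs))
print(fixed / runs)   # typically near the classic estimate 1 - exp(-2*N*s*p0)
```

The point of the framework is exactly what Hoffman says: the particular strategy (sexual, asexual, parental investment or none) matters only insofar as it changes this spreading probability.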
01:14:43.900 And so, but there's another framework in thinking about all this as well.
01:14:49.960 So, again, I love evolutionary theory, I think, in terms of models of evolution and so forth, of creatures and their behaviors.
01:14:59.260 It's an incredibly powerful theory.
01:15:00.800 I've used it a lot.
01:15:01.620 My book, The Case Against Reality, talks about it in great detail.
01:15:04.180 It's a wonderful theory.
01:15:05.180 But I think that from this deeper framework that science is now moving into beyond space-time,
01:15:09.560 all of evolutionary theory, all of it is an artifact of projection.
01:15:15.080 It's not...
01:15:15.600 In other words, if you're looking, like, from a spiritual point of view, for some deep principles, deep spiritual principles, evolution, I don't think, is deep enough.
01:15:25.000 I think that all of it is an artifact of space-time projection.
01:15:30.360 And if you're going to be looking for deep principles about the spiritual tradition, talking about Abraham, and really thinking big, I think that thinking inside space-time is not big enough.
01:15:41.320 You've got to step entirely outside of space-time.
01:15:44.220 Space-time has all these artifacts.
01:15:46.200 And we're so used to being stuck in the headset.
01:15:50.680 Well, there is an insistence upon that in the Judeo-Christian tradition, because God is conceptualized, what would you say, traditionally as being entirely outside of time and space.
01:16:02.460 And so whatever works for human, like the human landscape and the divine landscape, they're not the same.
01:16:08.660 There's a relationship between them, however, but they're not the same.
01:16:12.720 Okay, so now, okay, so let me ask you about that.
01:16:15.400 Now, you have made the case, not least in this interview, that consciousness is primary.
01:16:24.300 Now, consciousness uses these projections.
01:16:28.300 So how do you reconcile the notion that consciousness is primary?
01:16:32.300 And I want to make sure I'm not misreading what you're saying, that consciousness is primary.
01:16:36.480 But consciousness operates in the world with these projections.
01:16:40.500 See, because this is the thing I grapple with, is that if survival itself is dependent on the utilization of a scheme of pragmatic projections,
01:16:51.160 in what sense can we say that reality is something other than that?
01:16:55.360 Like, because, see, part of this is something that Peirce and William James wrestled with, too.
01:17:03.880 It's like, well, why make the claim that there is a reality outside of the human concern with survival and reproduction?
01:17:13.460 And if consciousness is the primary reality and it's using projections to orient itself so that it can survive and reproduce in the biological sense,
01:17:24.120 how can you even begin to put forward a claim that there is a reality that transcends that?
01:17:30.840 Like, on what grounds does it transcend it?
01:17:33.460 In relationship to what?
01:17:34.760 Right, so, these are deep waters.
01:17:39.480 And the idea that I'm playing with right now is that this consciousness is, there's one ultimate infinite consciousness.
01:17:49.860 And it, what is it up to?
01:17:53.660 Knowing itself.
01:17:54.760 But how do you know yourself?
01:17:57.080 Well, there are certain theorems that say that no system can actually completely know itself.
01:18:03.580 Right, right, right.
01:18:05.100 So, if this one infinite consciousness wants to know itself, all it can do is start looking at itself through different perspectives.
01:18:12.700 So, putting on different headsets.
01:18:14.260 So, space-time is one headset.
01:18:16.260 And from that perspective, here's a, so this is a projection of the one infinite consciousness.
01:18:22.180 And in that perspective, it looks like evolution by natural selection.
01:18:25.760 It looks like quantum field theory and so forth.
01:18:28.660 And it looks like I need to play the game this way.
01:18:32.040 But this is a trivial headset.
01:18:34.760 This is actually, I think, one of the cheaper headsets.
01:18:37.120 Okay, that's very interesting.
01:18:38.960 Okay, so, one of the things, so, while writing the book that I'm writing now, I've been walking through all these biblical narratives.
01:18:44.900 And one of the things they do, every single narrative provides a different characterization of the infinite.
01:18:54.400 There's no real replication.
01:18:56.540 It's like, well, here's a picture of the divine, and here's another one, and here's another one, and here's another one.
01:19:04.140 Now, there's an insistence that runs through the text.
01:19:06.980 This unites the text, that those are all manifestations of the same underlying reality.
01:19:11.640 But it is definitely the case that what's happening is that these are movies, so to speak, shot from the perspective of different directors.
01:19:19.500 And it does seem to me akin to something coming to know itself.
01:19:23.580 There's this ancient Jewish idea.
01:19:25.280 This is a great one; it's like a Zen koan.
01:19:26.960 It's a great little mystery.
01:19:28.980 It says, so here's the proposition.
01:19:30.540 So, God is traditionally imbued with the following characteristics.
01:19:36.960 Omniscience, omnipresence, and omnipotence.
01:19:40.360 What does that lack?
01:19:43.200 And, you know, you think, well, that's a ridiculous question, because by definition, that lacks nothing.
01:19:47.920 But the answer is limitation.
01:19:50.520 That lacks limitation.
01:19:51.860 And that's actually the classical explanation for God's creation of man,
01:19:56.480 is that the unlimited needs the limited as a viewpoint.
01:20:01.020 It has something to do with the development of, as you pointed out, I believe,
01:20:05.160 it has something to do with the possibility of coming to, it's something like conscious awareness.
01:20:11.440 You see this in T.S. Eliot, too.
01:20:13.400 I don't remember which poem, where he talks about coming back to the point of origin,
01:20:18.180 which is like the return to childhood, you know, that heavenly notion that to enter the kingdom of heaven,
01:20:23.080 you have to become as a little child.
01:20:24.440 It's like, but there's a transformation there,
01:20:26.960 so that that return to the point of origin is accompanied by an expansion of consciousness.
01:20:31.360 It's not a collapse back into childish unconsciousness.
01:20:36.560 It's the reattainment of a, what would you say?
01:20:41.020 It's the reattainment of the state of play, that's a good way of thinking about it,
01:20:45.260 that obtained when you were a child, but with conscious differentiated knowledge.
01:20:48.860 So, there is this tremendous narrative drive in the Western tradition towards differentiated,
01:21:00.080 comprehensive understanding as a positive good.
01:21:03.060 And that seems tied up with the continual drama between God and man.
01:21:07.340 So, and I do think the scientific enterprise is an offshoot of that.
01:21:10.300 That's what it looks like to me historically.
01:21:11.940 So, okay, so how in the world do you survive in psychology departments, given what you're thinking about?
01:21:19.920 Well, I've got the mathematics.
01:21:21.840 So, as long as, if I was just talking this stuff without any mathematical underpinnings to it,
01:21:27.760 it would be dismissed, of course.
01:21:29.380 But, you know, we've, in the case of the evolutionary stuff, we've published papers in the Journal of Theoretical Biology,
01:21:36.780 for example, and elsewhere, where we actually put the mathematics out there.
01:21:40.560 So, it's peer-reviewed.
01:21:41.960 And I think that it's a bit surprising, but, and I, you know, I'm a minority, a small minority,
01:21:49.920 but, you know, that's the way science progresses.
01:21:52.840 It proceeds one funeral at a time.
01:21:55.020 Yeah, it progresses by minorities of one.
01:22:01.060 Exactly right.
01:22:01.900 So, and scientists understand that, you know, you want to have independent ideas,
01:22:07.620 think out of the box, make it mathematically precise.
01:22:09.680 Most of our ideas will be nonsense, including mine,
01:22:12.380 but you've got to put them out there and push them and see what happens.
01:22:17.380 I have, I'll say in terms of, I've gotten some stiff pushback.
01:22:22.260 For example, some philosophers have published papers recently where they give the following argument against my Darwinian theory.
01:22:30.900 They'll say, look, Hoffman uses evolutionary game theory to show that space and time and physical objects and organisms don't exist.
01:22:38.820 Well, he's got himself, what they say, an unenviable dialectical situation.
01:22:45.300 Either evolutionary game theory faithfully represents Darwin's ideas or it doesn't, they say.
01:22:53.840 Okay.
01:22:54.200 So if it doesn't, then he can't use it to say that the organisms and resources are not fundamental in space-time.
01:23:00.800 And if it does faithfully represent Darwin's ideas, well, Darwin's ideas are that space-time is fundamental and there are organisms and resources.
01:23:07.960 So it couldn't possibly contradict that.
01:23:10.600 So either way, Hoffman is screwed, right?
01:23:12.500 There's nothing he can do.
01:23:14.420 So, and that's been published actually in high-value philosophy journals.
01:23:21.140 And my response is quite simple.
01:23:23.640 It misunderstands science completely.
01:23:26.880 Every scientific theory has, when you write it down mathematically, it has a scope and its limits.
01:23:33.640 And the mathematics tells you both the scope and the limits.
01:23:36.060 So, for example, just to be very concrete, Einstein's theory of gravity, right?
01:23:40.180 And, I think in 1907 or so, he had the big idea.
01:23:43.100 If I was standing on a weighing machine in an elevator and all of a sudden the cord was cut and I was in free fall, I would all of a sudden be weightless.
01:23:51.820 That was his big idea for his theory of gravity.
01:23:53.900 It took him years, seven or eight years to actually make the mathematics.
01:23:57.900 But he wrote down his field equations.
01:23:59.580 So those field equations are Einstein's mathematics to capture his idea that space-time is fundamental and has certain properties.
01:24:09.520 Well, a year after he published it, Schwarzschild, a German scientist, discovered that they entailed black holes.
01:24:17.640 And we've eventually found out that his theory entails that space-time itself has no operational meaning beyond 10 to the minus 33 centimeters.
01:24:25.700 So, we could use the same argument that's been used against me against Einstein.
01:24:29.120 Now, look.
01:24:29.740 Okay.
01:24:30.180 Einstein's field equations: either they faithfully represent Einstein's ideas or they don't.
01:24:43.620 If they don't, then we couldn't use them to show that space-time isn't fundamental.
01:24:46.640 And if they do, they couldn't possibly show that space-time isn't fundamental.
01:24:50.080 That last step is the wrong one.
01:24:51.940 The equations are there to show you the limits of your concepts.
01:24:56.100 They give you precise limits.
01:24:57.680 And so, that's what these philosophers have missed is that the equations that we write down tell us not just the scope, but the limits of our theories.
01:25:06.660 And that's why science is so valuable because it tells us your theory, your assumptions go this far and no further.
01:25:13.040 So, that's all I've done with Darwin's theory of evolution is to say-
01:25:16.480 Well, that also, okay, man.
01:25:19.480 That also sounds to me very much like a vindication of the fundamental claim of the pragmatists, which is that we accept something as true without noticing that what we mean is true in a time frame with certain implications for instantiation.
01:25:37.960 It's something like that.
01:25:38.960 And so, true is a lot more like, does the bridge stand up when a hundred cars go across it?
01:25:45.180 It's not some final, comprehensive, all-encompassing definition of the truth for all time.
01:25:51.780 And you've already made the case that it can't be because that truth is an ever-receding goal.
01:25:56.560 It's always bounded.
01:25:57.720 Okay, so when I came across that, I thought, okay, well, bounded by what?
01:26:01.320 And it's, well, it's bounded by our aim.
01:26:04.840 And then that's bounded by our motivation.
01:26:07.400 And then that's nested inside a Darwinian world.
01:26:10.240 Okay, now, let's go after the game theory.
01:26:12.960 Well, let me just say one thing about that.
01:26:14.640 So, the first thing I'd like, oh, sorry, go ahead, go ahead.
01:26:16.440 Yeah, I would just say that the very deepest spiritual traditions really say that up front.
01:26:21.160 Like, the Tao Te Ching starts off by saying the Tao that can be spoken of is not the true Tao.
01:26:25.660 Once you understand that, then go ahead and read the rest of it.
01:26:28.240 That's a good example because that's a great book.
01:26:30.460 Yeah, that's right.
01:26:31.120 It's a great book.
01:26:31.460 And I think that that's also the way we should think about our science.
01:26:36.960 The science that can be spoken of is not the final reality.
01:26:40.020 But given that, it's a wonderful thing to do science.
01:26:43.180 And we should do science, and we should do it very, very rigorously.
01:26:46.260 But we should always understand that if we're talking about a theory of everything, it should be with a wink and a nod.
01:26:53.440 Because there is no theory of everything that we can write down.
01:26:56.160 It's the theory of everything that we've discovered so far, maybe.
01:26:59.760 But it will never be the final theory of everything.
01:27:02.960 Right, and it might have a broader and broader range of potential applications as well.
01:27:06.740 But that doesn't mean that we've exhausted the landscape of comprehensive theories.
01:27:11.520 Right, okay.
01:27:12.040 So now, the philosophers that you described as objecting to your theory said that if evolutionary game theory is correct,
01:27:21.000 and it models Darwin's propositions appropriately, then...
01:27:24.840 Well, so, game theory is extremely interesting to me, although I wouldn't say I'm an expert in its comprehension.
01:27:30.760 But I understand its gist, I believe.
01:27:33.100 And it seems to me to be something like this, is that if you iterate interactions, an ethos of one form or another emerges.
01:27:41.000 So, for example, if you play tit-for-tat simulations, you find out that the best trading strategy is cooperate but slap when necessary,
01:27:49.320 and then forgive, something like that.
01:27:51.280 And so, what it points to, very interestingly, is something like a concordance between objective reality,
01:27:58.700 insofar as objective reality is an emergent pattern coming out of iterative interactions,
01:28:04.420 and something like an ethos.
01:28:06.480 So, the first question I have is, why are you interested in evolutionary game theory,
01:28:12.620 and why do you think that it is a valid representative, a more differentiated representative,
01:28:18.680 if I've got the language right, of Darwinian theory?
01:28:22.280 Oh, well, I'm interested in it because that's within the field of evolutionary theory itself.
01:28:28.400 Evolutionary game theory is taken as the prize mathematical tool for really understanding things.
01:28:35.140 So, that's just the framework of the science itself.
01:28:37.620 Okay, so that's accepted, as far as you're concerned.
01:28:39.680 Yeah, I mean, of course, there's always debate.
01:28:42.460 But by the vast majority, it's the received opinion.
01:28:47.860 So, if I wanted to, as a scientist, if I wanted to analyze Darwin's theory for this issue about truth,
01:28:54.060 and I wanted to do it rigorously, the tool was evolutionary game theory.
01:28:58.900 That was the tool to use.
01:29:00.820 And that's not because I think it's the final word or the truth.
01:29:04.520 It's just our current state of play in the field.
01:29:07.480 It's the best we have.
01:29:08.680 That's the best we have, and I wanted to use the best tool we have.
01:29:11.500 And that's the way, we're always pulling ourselves up by the bootstraps in science, right?
01:29:15.000 We always say, these are the best theories we have and the best tools we have so far.
01:29:19.260 Of course, our goal is not to prove that we're right.
01:29:21.800 Our goal is to find the limits of our current theories and transcend them.
01:29:25.920 So, we're looking for the best tools that will say, aha, Darwin goes this far and no further.
01:29:32.300 Space-time goes this far, you know, high-energy theoretical physics.
01:29:36.160 Einstein's wonderful theories, they're an incredible gift.
01:29:39.280 They go to 10 to the minus 33 centimeters, and they stop.
01:29:42.420 That gift stops right there.
01:29:43.740 And now we have to go entirely outside.
01:29:45.740 And that will be the never-ending pattern of science, is that whatever the scientists are finding outside of space-time,
01:29:52.120 that will just be our next baby step.
01:29:53.960 And we'll analyze that and then say, okay, what's beyond that and beyond that?
01:29:57.800 And science will continue to go.
01:29:59.260 So, as long as you recognize that that's the game, you'll realize that there's no theory of everything in science.
01:30:05.220 And then the question is, who am I?
01:30:08.580 Who are we that are able to do this game?
01:30:11.760 And that's a very interesting question.
01:30:15.760 Well, you know, there's lots of things I'd like to ask you about, but that's a pretty good place to stop.
01:30:20.880 And we're damn near at an hour and 30.
01:36:22.580 So, I hope I have the privilege of furthering my discussion with you at some point in the not-too-distant future.
01:30:29.700 I would like to say, is there anything in closing that you would like to bring to the attention of the listening audience,
01:30:37.340 the watching audience, that you think that we needed to cover to make what we have covered comprehensible?
01:30:43.840 Or is that also, in your estimation, a good place to stop?
01:30:48.220 I'll just say one little thing, I guess.
01:30:50.040 And that is, some people might think, well, he's got this theory of consciousness outside of space-time.
01:30:55.140 So what?
01:30:56.040 Who cares?
01:30:56.600 And I would agree with that, unless I did something more.
01:31:01.780 So what we're trying to do now as scientists is to say, we have this mathematical model of consciousness outside of space-time.
01:31:07.980 We just published a proposal for how to actually test it.
01:31:13.860 So we're going to have a projection into space-time.
01:31:16.580 We're working on that projection.
01:31:17.960 We'd like to model the inner structure of the proton.
01:31:21.060 We would like to have a dynamics of conscious agents that projects down and gives us what's called the momentum distributions of quarks and gluons inside a proton,
01:31:30.060 and all the Bjorken x and Q-squared values, the different spatial and temporal resolutions that particle physicists have studied.
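[Editor's note: the variables mentioned here are the standard kinematic variables of deep inelastic scattering. With q the momentum transferred by the probe and P the proton's four-momentum:

```latex
% Q^2 sets the spatial resolution of the probe;
% Bjorken x is the fraction of the proton's momentum
% carried by the struck quark or gluon.
Q^2 = -q^2, \qquad x = \frac{Q^2}{2\,P \cdot q}
```
]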
01:31:36.080 And the reason we're going there is not because I think that's the most important application of a theory of consciousness.
01:31:42.040 It's the most accessible one.
01:31:44.600 That's the simplest part of our science right now.
01:31:47.180 Now, ultimately, of course, the brain has the nice neural correlates of consciousness.
01:31:51.820 We want to understand that.
01:31:53.040 But that's really complicated.
01:31:54.400 So we're going to go after – if we can model the proton and get it exactly right, get the momentum distributions to several decimal places,
01:32:00.880 it doesn't mean our theory is right, but it does mean it can't be dismissed out of hand.
01:32:05.000 And so that's what our goal is, to take a theory of consciousness, not just to airy-fairy wave our hands,
01:32:10.560 but to actually get in there and predict the inner structure of the proton with great detail.
01:32:17.180 If we can do that, then I would say we then can start to move up to molecules and then ultimately to neural systems in the brain
01:32:26.640 and try to understand the neural correlates of consciousness.
01:32:29.640 But not the neural correlates – the brain does not cause consciousness on this model.
01:32:34.640 The brain is merely a symbol inside the headset, right?
01:32:38.740 So, in fact, I would say this.
01:32:42.040 Neurons do not even exist when they're not perceived.
01:32:45.600 Neurons cause none of our behavior.
01:32:47.340 And yet I'm a cognitive neuroscientist.
01:32:49.760 And I think that we should study – neuroscience is wonderful and we need more funding for it because it's more complicated than we thought.
01:32:58.520 We thought – we look inside the brain, we see neurons.
01:33:01.040 That's because that's the reality.
01:33:02.320 There are neurons.
01:33:02.780 No, that's the interface description of something that's much, much more complicated.
01:33:08.300 We have to reverse engineer neurons to this network of conscious agents outside of space-time.
01:33:13.860 So, we need more funding for neuroscience, because it's much more complicated than we thought.
01:33:17.500 So, of course, as you can imagine, I'm talking about something that could take hours to go into in detail.
01:33:23.200 But I just want to put those out there and say these are objections people might have.
01:33:28.340 So, we're headed toward dealing with them.
01:33:29.780 Okay, well, I do have one – okay, I do have one other question then.
01:33:32.500 I guess I do have to throw it out.
01:33:33.800 So, you have a very radical conception of consciousness.
01:33:38.500 What has that done for you existentially, do you think?
01:33:42.240 I mean, you're obviously thinking about the place of consciousness – well, you're thinking about it existentially.
01:33:48.340 You're thinking about the place of consciousness in the cosmos and you regard it as a fundamental reality.
01:33:53.700 So, what has that done to the manner in which you contemplate your own, say, mortality or the purpose of your life?
01:34:00.380 And what's that done for you on that side of things?
01:34:04.740 Quite a bit.
01:34:05.840 It's really hit me in the face because I'm intuitively as much a physicalist and a materialist as anybody else.
01:34:14.140 I mean, I'm wired up to believe all that.
01:34:18.680 And so, it's come as a terrible shock to me.
01:34:22.680 My whole self-image has had to change.
01:34:25.160 And I do spend –
01:34:26.260 In what direction?
01:34:27.200 In what direction has your self-image changed?
01:34:30.060 What changed?
01:34:31.920 Well, I thought of myself as a little object in space-time.
01:34:34.920 Right, right, right.
01:34:36.120 And the death of the body is ultimately the death of me.
01:34:38.980 And now, it's – well, our best science says that this is – you know, my body is just an icon in a headset.
01:34:45.780 So, in some sense, it's just an avatar.
01:34:47.880 This body is just an avatar.
01:34:49.220 It's not – and so, death is more like taking off a headset.
01:34:53.940 So, but my emotions don't agree with that.
01:34:57.820 So, I've got this really interesting –
01:34:59.480 Yeah, well, that's probably just as well.
01:35:02.040 Right, exactly.
01:35:02.920 So, I do spend a lot of time in meditation, and my father was a Protestant minister, a fundamentalist Protestant minister.
01:35:11.220 So, I was raised in the Christian church.
01:35:13.320 And so, I look at those points of view.
01:35:15.900 I look at the Eastern mystical stuff.
01:35:18.120 I meditate myself.
01:35:20.480 And my ultimate thinking about this is, as I said, we can never have a theory of everything, and that includes of who I am.
01:35:28.840 So, the question about who I am, my best guess right now is, at the deepest level, I and you are, in fact, the one consciousness just looking at itself through different avatars.
01:35:53.920 So, it's really the one using a Jordan avatar to talk to the one in a Hoffman avatar, and that's what's going on here.
01:35:53.920 So, are you responsible for being the best possible avatar you can be, so to speak?
01:36:04.380 Well, in some sense, within this projection, within this headset, morals of a certain kind are the rules of the road.
01:36:16.260 But my guess is that when we take the headset off, we'll just laugh.
01:36:19.280 That was what we had to do in this headset, but that was, I am not this avatar.
01:36:26.420 I am the consciousness that transcends space and time.
01:36:31.020 Well, you know, the next time we talk, maybe that's a road we should wander down.
01:36:35.360 Sure.
01:36:35.460 Because we didn't get into the metaphysics of ethics, let's say, during this conversation.
01:36:41.460 And there's plenty of that.
01:36:42.740 That's obviously a whole other area.
01:36:44.340 Okay, okay.
01:36:44.920 Well, that would be good.
01:36:45.580 All right.
01:36:46.020 Well, so, to everyone watching and listening, thank you very much for tuning into this podcast.
01:36:52.040 As most of you know, I'm going to talk to Dr. Hoffman for another half an hour behind the Daily Wire Plus platform,
01:36:57.540 and I'm going to see if I can find out where in the world his interests stemmed from and how they initially manifested themselves and developed across time.
01:37:06.720 We'll do that as much as we can in half an hour.
01:37:08.580 Thank you to the crew here up in Northern Ontario for journeying up here to do this podcast.
01:37:14.440 Thank you, Dr. Hoffman, very much for your time today.
01:37:16.720 To the Daily Wire Plus people for making this possible.
01:37:19.440 That's also much appreciated.
01:37:20.920 And we'll see all of you watching and listening, hopefully, on another podcast.
01:37:25.000 Thank you very much, sir.
01:37:27.080 Thank you, Jordan.