Making Sense - Sam Harris - May 22, 2023


#320 — Constructing Self and World


Episode Stats

Length

51 minutes

Words per Minute

153.8

Word Count

7,903

Sentence Count

4


Summary

Shamil Chandaria is a philanthropist, an entrepreneur, a technologist, and an academic with multidisciplinary research interests spanning computational neuroscience, machine learning and artificial intelligence, and the philosophy and science of human well-being. He earned his PhD at the London School of Economics in mathematical modeling of economic systems and later completed a master's in philosophy at University College London, where he developed an interest in the philosophy of science and in philosophical issues related to biology, neuroscience, and ethics. In 2018 he helped endow the Global Priorities Institute at Oxford University, and in 2019 he was a founder of the Center for Psychedelic Research in the Department of Brain Sciences at Imperial College London. He also funds research on the neuroscience of meditation at Harvard University and the University of California, Berkeley, and from 2015 to 2021 he served as a strategic advisor at DeepMind, the AI company acquired by Google. In this episode, Sam and Shamil discuss how Shamil came to his many intersecting interests, including his journeys into meditation, psychedelics, effective altruism, and existential risk, with the main focus being on how the brain constructs a vision of the self and the world. They cover the brain from first principles, Bayesian inference, the hierarchy of predictive processing, how vision is constructed, psychedelics and neuroplasticity, beliefs and prior probabilities, the interaction between psychedelics and meditation, the risks and benefits of psychedelics, Sam's recent experience with MDMA, non-duality, love, gratitude, bliss, the self-model, the Buddhist concept of emptiness, human flourishing, effective altruism, and other topics. The Making Sense podcast runs no ads and is made possible entirely by listener support; to access full episodes and other subscriber-only content, subscribe at samharris.org.


Transcript

00:00:00.000 welcome to the making sense podcast this is sam harris just a note to say that if you're hearing
00:00:12.500 this you are not currently on our subscriber feed and will only be hearing the first part
00:00:16.900 of this conversation in order to access full episodes of the making sense podcast you'll
need to subscribe at samharris.org there you'll find our private rss feed to add to your favorite
00:00:27.020 podcatcher along with other subscriber only content we don't run ads on the podcast and
00:00:32.500 therefore it's made possible entirely through the support of our subscribers so if you enjoy what
00:00:36.680 we're doing here please consider becoming one today i'm speaking with shamil chandaria shamil is a
00:00:50.340 philanthropist an entrepreneur a technologist and an academic with multidisciplinary research
00:00:56.940 interests spanning computational neuroscience machine learning and artificial intelligence
00:01:01.980 and the philosophy and science of human well-being he got his phd at the london school of economics
00:01:07.860 in mathematical modeling of economic systems and he later completed a master's in philosophy
00:01:13.720 from university college london where he developed an interest in the philosophy of science and the
00:01:19.720 philosophical issues related to biology and neuroscience and ethics in 2018 shamil helped
00:01:25.140 endow the global priorities institute at oxford university and in 2019 he was a founder of the
00:01:30.280 center for psychedelic research in the department of brain sciences at imperial college london he's
00:01:36.040 also funding research on the neuroscience of meditation at harvard university and the university of
00:01:40.880 california berkeley and shamil and i spoke about many of our intersecting interests with the main focus
00:01:47.540 being on how the brain constructs a vision of the self and the world we discussed the brain from first
00:01:53.720 principles bayesian inference the hierarchy of predictive processing in the brain how vision is constructed
00:02:01.980 psychedelics and neuroplasticity beliefs and prior probabilities the interaction between psychedelics and
00:02:10.540 meditation the risks and benefits of psychedelics my recent experience with mdma non-duality love gratitude and
00:02:20.620 the bliss the self model the buddhist concept of emptiness human flourishing effective altruism and other topics
00:02:29.760 and now i bring you shamil chandaria
00:02:32.720 i am here with shamil chandaria shamil thanks for joining me yeah it's great honor to be on
so i forget how i discovered you i think i saw you in conversation with will macaskill the young
00:02:51.980 philosopher who i'm a big fan of and who's been on the podcast several times and it just seemed to me
00:02:58.680 that just based on your conversation with him that you and i have a an unusual number of topics we
00:03:04.400 intersect on and i i think you just judging from what i've seen of you you've you've arrived at these
00:03:12.280 various topics by different routes than i have so it'll be interesting to hear your story but
00:03:17.280 briefly i think we are both very interested in the brain and the nature of mind you both as it can be
00:03:24.980 understood through neuroscience and and also through first person methods like meditation and psychedelics
00:03:31.800 you are you also have a lot of experience with artificial intelligence which is an interest and concern of
mine and um also effective altruism and considering topics like existential risk and you know the long
00:03:46.640 term challenges there's just a lot here so um perhaps you can just summarize your journey into
00:03:54.020 some or all of these areas how have you come to focus what are you focusing on and how have you come
00:04:00.100 to focus on these things yeah so you're right you know we actually i think share um there's a huge
00:04:06.600 amount of overlap in fact funnily enough i think we first met in puerto rico uh if you remember that
00:04:14.300 conference oh interesting 2015 so i was there and i mean we may have had a short conversation i'm i was
a big fan of waking up the book in those days so nice okay so forgive me because i'm not aware of
00:04:27.900 having met you but i it's very likely we did meet because um it was not that large a group and and that
00:04:33.460 was an interesting conference 70 people right so so yeah that was just before well i was already at
00:04:39.980 the future of humanity institute there um where you know nick bostrom and others are but uh yeah so
00:04:48.380 there's so many threads to the story but i i'm surprised because actually i thought you must have
00:04:54.360 discovered me by seeing this talk that i gave called the bayesian brain and meditation
00:04:58.620 no i don't i i've since seen that talk or i or at least a podcast with you discussing that talk i
00:05:06.420 now forget but yeah your discussion with will at first okay yeah well so that's that's obviously
00:05:12.400 something we'll get into that's the the big thing which has kind of gone you know become very central in
00:05:18.080 my thinking on kind of how does meditation work but let's let's just uh rewind yeah so i have a kind of a
00:05:27.100 mathematical background my phd was in um mathematical economics actually using techniques of like
00:05:35.240 stochastic optimal control which actually become later the mathematics behind reinforcement learning
00:05:42.880 which is obviously central in ai and so i've done so many different things in my life including
00:05:49.180 you know finance and technology but but i think that i joined deep mind as a strategic advisor in
00:05:59.160 2015 and was there until 2021 and you know like you one of my central concerns of course is ai safety but
00:06:09.720 i'm also interested on a technical side and kind of you know really one of the i mean i have lots of
00:06:18.060 interest in ai but one of the real interests is to understand how the brain works because i think
00:06:24.200 that um machine learning in ai has been is is actually a very good way to start thinking about how the brain
00:06:31.940 works and at the same time i was also a research fellow at the institute of philosophy at london
00:06:41.120 university and looking at this kind of intersection between neuroscience and philosophy and at the time
00:06:49.400 i think you know back in 2013-14 you know they asked me since i was the kind of mathematical guy
there you know there's this thing called the free energy principle coming out of carl friston's lab
00:07:03.580 and you know can you explain how this really works you know you know about entropy and stuff like
00:07:08.120 that uh so i started really getting into it and it was very interesting because of course
00:07:14.980 it's deeply connected with with information theory and machine learning and to some extent
00:07:23.220 i would say i now take the position i think many neuroscientists do that it's the closest thing we have
00:07:31.280 to a kind of general algorithm of what might be going on in the brain from a big picture perspective
00:07:38.960 and as i as i kind of got into it more and more the more i thought that wow this is very similar to
00:07:50.720 you know what i'm going through in my meditation journey and kind of what the central ideas of
00:07:57.220 buddhism and eastern spiritual traditions are and you know because because essentially i guess we'll
00:08:04.120 get into this but what seems to come out is that really the brain is having to construct or fabricate
00:08:13.620 or simulate a world a phenomenal world and a phenomenal self and the free energy principle kind
00:08:20.740 of go through like you know how does it how do we do that so so that was very interesting and then
00:08:27.440 interestingly as a deep mind i started really looking at some of these architectures these uh
unsupervised learning architectures uh using deep neural networks and i started to be able to
00:08:40.460 understand the free energy principle a lot better than i did before and and and i think in a in a much
00:08:47.420 more heuristic and um practical way compared to the sort of usual explanations in neuroscience come
00:08:55.740 you know which are notoriously difficult sometimes using tensor calculus and all sorts of things
00:09:01.780 so yeah so that's that's some of the background you know bringing in the neuroscience and meditation
00:09:08.220 so did you ever work with friston uh yeah well i continue to do i mean so well in fact i was
00:09:15.760 with him at a workshop i think about a month ago on computational neurophenomenology and uh yeah he's
00:09:23.880 he's he's pretty amazing yeah yeah very smart and quantitative neuroscientist yeah i think the
00:09:30.480 is he the most cited neuroscientist at this point it's i i i believe so yeah i believe so yeah yeah so um
00:09:38.380 you know a couple more background questions before we jump in one to just to remind people that deep mind is
00:09:43.760 the ai company that was acquired by google that gave us alpha zero and alpha go and alpha fold and
00:09:51.620 made some of these initial breakthroughs with uh deep learning in in recent years that that have
00:09:58.540 really been the core i would say of the the renaissance in ai i mean that people are talking
00:10:04.680 more about open ai at the moment as a result of chat gpt but deep mind really has been the front
00:10:11.540 runner for several years in ai and uh and it's it's joined together with um with google brain right
00:10:19.580 so it's back again as google deep mind yeah yeah how did you come to meditation and and what practices
00:10:26.960 have you been doing and who were your what teachers have been important for you yeah so that's that's
actually very central to my life i started meditating 35 years ago right when i started my phd i initially
00:10:44.620 started with tm which was the way you know back then in the 80s that's what you know pretty much a lot
00:10:52.160 of the early meditators started with and i found that you know actually very useful and and as my practice
00:10:59.200 as i've gone through my practice i've only come to understand that it was actually a really good
00:11:04.880 foundation and then i guess maybe around 20 years ago i started my sort of first buddhist retreats
00:11:14.460 and then yeah maybe maybe um seven or eight years ago i was i started really spending a lot of time at
retreat center in uk called gaia house where rob burbea was the resident teacher and i was very
00:11:30.640 influenced by by his kind of framework on emptiness and his meditation practices you know unfortunately
00:11:37.440 i never met him i discovered him after he died he died unfortunately quite young and he has this
wonderful book on emptiness seeing that frees and it's uh he really seemed like he was quite a gem
00:11:50.980 he he he really was i mean i he's actually i think exactly the same age as me to the month i think
00:12:00.100 that he yeah unfortunately by the time i was there he was a lot of the time pretty sick so i kind of
00:12:07.900 never really got to sit with him too much but i was still you know in the in the orbit and you know
00:12:16.960 look and and my meditation practice deepened a lot into the jhanas and other kind of techniques and
then other emptiness uh meditations of rob burbea and then i suppose in the last three four years i i
00:12:34.260 kind of felt that what my practice really needed was a move to non-dual non-dual style and so i did a
retreat with loch kelly but then pretty much a little after that started working with michael taft
00:12:50.660 who is a non-dual teacher in a kind of i mean he's he's non-dual style but not not um under any
00:12:57.900 particular lineage right and that's that was perfect for me because he's very his experience is very broad
00:13:05.640 and he's he can kind of integrate many styles and so um yeah i've been been working with him so
00:13:13.940 yeah it's been a long it's been a long and interesting journey and along the way something
00:13:18.780 that we we haven't yet touched on i also have been very involved in the psychedelic uh kind of
00:13:27.140 renaissance i'm also a research fellow at imperial college where where robin carhart harris used to be
00:13:34.700 yeah and um robin's now of course in san francisco but ucsf and actually i worked quite closely with
robin and karl friston on the kind of computational model of what might be going on with psychedelics
00:13:50.400 the rebus model so you know which basically uses a predictive processing framework nice nice and you
00:13:58.560 funded some of that research right didn't you yeah so that's that's yet another research because
00:14:04.400 apart from being like on the on the science side and the research side i'm also another hat is being
00:14:10.680 a philanthropist just as it happens because of my career you know i i i'm able to be uh have the
00:14:18.900 financial resources to also have a philanthropic role and um i take it i'm very influenced obviously
00:14:27.960 by effective altruism and one of the kind of tenets of effective altruism is that you know we want to be
00:14:34.000 in areas that are kind of neglected and when i was and you know these funding you know when i sort of
00:14:42.560 helped to set up the um the first psychedelic research center uh in the world you know that
was it was still pretty underfunded right right well okay well so we have many things on the menu here
00:14:55.900 let's start with the brain and um i guess uh we should probably you know some of these topics are
00:15:04.580 fairly complex and and some of the interesting details are in the math and and we obviously are are
00:15:11.540 working with audio only so there's no visual aids here but i think it's it would be worth trying to
00:15:18.600 explain what you mean by the free energy principle uh what you mean by you know predictive inference or
00:15:28.900 predictive coding part of that picture is also the work you've done on bayesian inference in the brain
00:15:36.220 we might just to make things difficult we might also mention integrated information theory come at
00:15:42.900 that tangle however you want but what do you think is the best hypothesis at the moment describing what
00:15:50.800 the brain is doing yeah and you know we might want to start by differentiating that from everyone's
00:15:58.160 common sense idea of what what the science probably says about what the brain is doing yeah okay no that's
00:16:05.380 that's that's great so why don't we look at the brain from first principles and then maybe we can
00:16:09.560 later apply to meditation and spirituality so the thing is that you know maybe 20 years ago
00:16:17.400 the consensus of you know what the brain was doing was it was kind of taking bottom-up sensory
00:16:25.200 data sensory information and kind of processing it up a stack and then eventually the brain would know
00:16:33.680 what was would figure out what was going on and that's that uh view of what the brain is doing
00:16:42.020 is in fact precisely upside down according to the latest theory of how the brain works and i think the
00:16:51.300 the you know the way to start at this question is really from first principles really it really does help
00:16:58.780 to look at it philosophically which is you know we're an organism with this central processing unit
00:17:06.920 the brain which is enclosed in a kind of dark cell within the within the skull i mean we are already
00:17:15.780 brains in vats you know we are already thought experiments exactly exactly and all this brain has
00:17:23.160 access to is some noisy time series data some some dots and dashes coming in you know sort of from
00:17:32.960 the nervous system now how on earth is it going to figure out what is going on in the world before you
00:17:41.420 proceed further this is i love the the angle you're taking here but let's just reiterate what is meant by
00:17:48.860 by that because it's it can be difficult to form an intuition about just how strange our circumstance is
00:17:58.420 i mean we hit you know we open your eyes and you see the world or you seem to see the world and people
00:18:03.960 lose sight of of the significance of you know light energy being transduced into electrochemical energy
00:18:11.460 that is not it is not vision right it is not after it hits your retina you're not dealing with light
00:18:19.660 anymore and it's this has to be a reconstruction and we're now going to talk about the details of that
00:18:27.580 reconstruction but um to say that we're brains in vats right and being piped with electrochemical
00:18:35.340 signals divorced from how experience seems you know out there in in the world that it just seems
00:18:43.960 given to us that's not hyperbole it really is you know there is a a fundamental break here at least in
00:18:50.480 in how we conceive of our sectioning of reality based on our on what our nervous system is yeah i mean
00:18:58.280 in fact i don't know how deep you want to go with this but actually you can even start before that which is
00:19:04.780 from the philosophical problem which is you know what plato and emmanuel kant kind of pointed to
00:19:12.400 which is that we only know our appearances our experience we have no contact with reality most
00:19:22.860 people's common sense view is that oh look we're looking out at the world through little windows in the
00:19:29.780 you know on the front of our our our skulls and we're seeing trees as they really are now of course
00:19:37.900 that cannot be true for precisely the the reasons that that that you said we're just receiving some
00:19:45.160 noisy random electrical signals coming in and the brain has never seen reality as it is i was gonna you
00:19:55.840 know the tree as it is in itself if that makes any sense now what the brain has to do is figure out
00:20:04.060 the causes of its sensory data in other words it's trying to figure out what is causing its sensory
00:20:12.140 data so we can get some grip on the environment and that of course is important from an evolutionary
00:20:18.680 perspective because if we don't know what's going on in the environment we won't know where the food is
00:20:24.080 we won't know where the tiger is so we need to find out the causes of our sensory data you know and this
00:20:30.580 is ultimately formally exactly the statistical inference problem the bayesian inference problem
00:20:38.000 and bayesian inference is trying to figure out the probability that given my sensory data i'm seeing
00:20:47.400 a tree okay now as we said it turns out that the brain can't solve this problem because actually formally
00:20:55.580 solving you know the bayesian inference problems turns out for technical reasons to be computationally
00:21:01.620 explosive so what evolution has to do and what we have to do in artificial intelligence is use another
00:21:09.000 algorithm it's called approximate bayesian inference and the way you solve it because bayesian inference is so
00:21:16.980 difficult the way you actually solve it is going at it backwards and what you have to do is you
00:21:22.640 essentially have to have all this data come in and try to learn what you think you're seeing and from what
00:21:30.300 you think you are seeing you then simulate the pixels that you would be seeing if your guess is correct
00:21:37.400 so if i think i'm seeing a tree what your brain then has to do is go through something called a
00:21:44.160 generative model and actually simulate the sensory data that it would be seeing if this was indeed a
00:21:50.980 tree now that is incredible because what it means is that well you know the the upshot of that just to
00:21:58.820 cut to the chase what this is the real kind of what's called a neurophenomenological hypothesis which is
00:22:05.980 that in fact what we experience if we're aware of it is our internal simulation is precisely that
00:22:15.360 internal generative model now you might just then conclude well we're just hallucinating we're just
00:22:22.040 simulating how do we have any grip on reality and this is where the free energy principle comes in
00:22:27.780 it says that you know what we have to do is we have to simulate what we think is going on but it's
00:22:36.200 not any old simulation it's a simulation that minimizes the prediction error from the output of
00:22:44.480 your simulation and the few bits of sensory data that we get in other words what we actually do with
00:22:52.160 the sensory data is use it to calibrate our simulation model our generative model and there's another part
00:23:00.220 of the free energy principle which is it turns out that minimizing prediction error isn't good enough
00:23:05.540 it turns out we also have to have some prior guesses some prior probabilities about what we're experiencing
00:23:14.600 in other words you know as i grow up you know through childhood and you know as you're enculturated you
00:23:22.340 come to learn that there are things like trees and and so there's a kind of a high prior probability of
00:23:27.800 finding trees in your environment now what you want to do is you want to have a simulation which is
00:23:35.040 minimizing the prediction error with the raw with the sensory data but also minimizing the informational
00:23:41.880 distance between the output of your generative model the simulation and your priors in other words
00:23:49.060 you want a simulation that is as close to what you would normally expect before seeing the sensory data
00:23:57.580 so this is really what the free energy is the free energy has two terms the first is roughly kind of a
00:24:05.420 prediction error and the second is an informational distance to the prior of what you'd be
00:24:11.700 expecting so it turns out that we can actually do approximate bayesian inference which is the
00:24:19.140 mathematically optimal thing to do if we simulate the world and use that simulation to and and create
00:24:27.640 the simulation in such a way that minimizes the prediction error with the sensory data that we get
00:24:33.560 and also minimizes the deviation from the divergence from our prior probability distribution prior
probabilities so that's kind of the free energy in a nutshell
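For readers who want the two terms written out, one common way to express the quantity being described here, the variational free energy as it appears in the machine-learning literature, is

\[
F \;=\; \underbrace{\mathbb{E}_{q(z)}\!\left[-\log p(\text{data} \mid z)\right]}_{\text{expected prediction error (inaccuracy)}} \;+\; \underbrace{D_{\mathrm{KL}}\!\big[\, q(z) \,\|\, p(z) \,\big]}_{\text{informational distance to the prior (complexity)}}
\]

where \(q(z)\) is the brain's current best guess about the hidden causes \(z\) of its sensory data (the output of the generative model) and \(p(z)\) is the prior over those causes; minimizing \(F\) with respect to \(q\) is the approximate Bayesian inference described above.

and it's kind of as i said it's very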
00:24:49.360 interesting because it helps us think about phenomenology which is you know what i'm interested
00:24:55.440 because like you know it's if we if we open our eyes as you say and we find the world just appear in front of us
00:25:03.660 you know what is this what is this experience that we're having and the answer is it's a kind of
00:25:12.380 we're somehow aware of our internally generated a model of the world and that model happens to be
00:25:25.300 kind of calibrated correctly with the sensory data yeah yeah it was a great overview i maybe i'll track
00:25:33.560 back through some of that just to give people a few handholds here and also give them areas they may
00:25:40.120 do some further research if they're interested so so many people will have heard of bayesian statistics
00:25:45.860 or you know bayes's theorem and uh it's actually a pretty simple piece of mathematics that it's worth
00:25:52.780 looking up because it's unlike many equations you once you track through the terms it does
00:26:00.480 repay one's intuitive sense of how things should be here i mean this is a a mathematical description of
00:26:08.660 of how we revise our probability estimates based on evidence and so when you when you look at this
00:26:14.880 equation i just pulled it up to remind myself of its actual structure if you want i can just do a
00:26:20.180 little very simple example sure yeah i mean i was i was imagining something like you know what's the
00:26:25.920 probability that it's raining given that the street is wet you know yeah so i i mean i'll stick to i'll
00:26:31.180 stick to the brain and the tree sure and the data but yeah yeah so so what what bayes's theorem says
00:26:37.320 to think about our our tree in the brain example you know what it's it's giving you a formula for
00:26:44.020 calculating the probability that of there being a tree given your sensory data okay in fact is
00:26:53.720 calculate you know bayesian inference the way we're doing in the free energy is calculating the whole
00:26:58.320 probability distribution but you can just think of it that what we're trying to calculate is the
00:27:02.680 probability that what you're seeing is a tree given the sensory data that's coming through to you
and what bayes's theorem says is that you can calculate that probability by going at it in a kind of a
00:27:16.960 backwards way which is you can say it's equal to the likelihood of the data and that that's roughly saying
00:27:26.180 how likely is it that i would be seeing exactly this sensory data if it was indeed a tree times
00:27:34.580 another term called the prior probability which is what's the prior probability of seeing trees
okay so those are the two main terms of bayes's theorem the likelihood of the data which is what's
the probability of seeing the data this particular data given you know on the basis that it's from a tree
00:27:52.560 and the second term is the prior which is the probability of seeing trees in general and then
00:27:58.140 these two terms are just divided by a normalizing term which is which is very simple it's just
00:28:04.580 what's the probability in general of seeing this particular sensory data so that just that's just
there to make sure the probabilities add up to one
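Written out, the rule just described for the tree example is

\[
P(\text{tree} \mid \text{data}) \;=\; \frac{P(\text{data} \mid \text{tree}) \; P(\text{tree})}{P(\text{data})}
\]

that is, the likelihood of the data given a tree, times the prior probability of trees, divided by the normalizing probability of the data.

one thing i'll flag here is that this connects with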
some very common reasoning errors of the sort that danny kahneman and amos tversky pointed out like
00:28:22.280 base rate neglect for the prior probability of seeing a tree given that you're you're walking
00:28:28.520 someplace on earth is very high but the prior probability of seeing a spaceship or a lion or
00:28:35.640 something else is is lower and we it's only against those background probabilities that we can
00:28:41.380 finally judge how likely it is that our our perceptions are veridical right i mean right and
00:28:48.360 neglecting though that you know what is called base rate is a source of uh some very right and
00:28:54.140 common reasoning errors in fact in fact if i can draw it back to the brain and that's a great example to
00:28:59.520 illustrate it exactly because because this goes to the heart of the free energy principle and how
00:29:06.040 predictive processing and active inference works which is okay so you're looking down the street and
00:29:12.740 you see you know it's kind of a little foggy but you see this this four-legged animal coming up the
00:29:20.280 street and actually it kind of looks like it's it's it looks like a lion uh the probability that the
00:29:31.120 sensory data is coming from a lion is actually higher than the probability that this sensory data is
00:29:38.080 coming from a dog okay so let's just take that as given that in fact it's it's however the prior
00:29:45.280 probability of seeing a lion is way way lower than seeing a dog and so in fact uh and this this this
00:29:55.480 can be actually you know this is tested in in lots of experiments in fact you will perceive that as a
00:30:02.160 dog you will actually perceive it as a dog because that's the way bayesian inference works out now
00:30:09.180 actually there's a slight wrinkle to this which is uh well you know which which gets into the
00:30:14.640 nitty-gritty the free energy principle if it wasn't a foggy day and you get a really clear read on the
00:30:22.080 sensory data okay then the weight of that likelihood of the data term will take precedence over the prior
00:30:30.040 so it will actually overrule the prior so it doesn't mean that you know you're just constrained by your
00:30:35.060 priors forevermore it's just a way of weighting the sensory data with the prior probabilities and um you
00:30:44.040 know if it's a foggy day the sensory data is lowly weighted technically we say the it's got low
00:30:49.600 precision which is the inverse inverse of variance and yeah that's that's a really great example of
how the bayesian inference actually works in the brain
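As a minimal numerical sketch of the foggy-day example, here is the same reasoning in code; all of the probabilities are invented purely for illustration and do not come from the episode.

```python
# Toy version of the lion/dog example: a strong prior vs. precision-weighted evidence.

def posterior_lion(prior_lion, lik_lion, lik_dog):
    """Bayes' rule over two hypotheses: returns P(lion | sensory data)."""
    prior_dog = 1.0 - prior_lion
    evidence = lik_lion * prior_lion + lik_dog * prior_dog
    return (lik_lion * prior_lion) / evidence

prior_lion = 0.0001  # lions are very rare on an ordinary street

# Foggy day: low-precision data, so the likelihoods only weakly favour "lion".
foggy = posterior_lion(prior_lion, lik_lion=0.6, lik_dog=0.4)

# Clear day: high-precision data that strongly favours "lion".
clear = posterior_lion(prior_lion, lik_lion=0.999, lik_dog=0.000001)

print(f"P(lion | foggy data) = {foggy:.5f}")  # ~0.00015 -> the prior wins, you perceive a dog
print(f"P(lion | clear data) = {clear:.2f}")  # ~0.99    -> the data overrules the prior
```

With weak (foggy) evidence the prior dominates the posterior; with sharp (clear) evidence the likelihood term dominates, which is the precision weighting just described.

okay so just to give some neuroanatomical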
00:31:04.480 plausibility to this picture so again the common sense view of the science here is that we have a
00:31:10.400 world let's stick to vision because i think it's the the most intuitive we have a world which we
00:31:16.560 see with our open eyes and the way we see it is that in the light hits the retina and then it it gets
00:31:23.600 transduced into electrochemical energy in the brain and transits through various brain areas and along the
00:31:32.420 way various features of the visual scene are detected and encoded so there there are neurons that respond
00:31:41.800 to straight lines there are cortical columns and in the in visual cortex that build up a more complex and
00:31:49.720 abstract image and you know eventually you get to some cell in the cortex that responds to faces rather
00:31:58.240 than anything else and even you know you'll get cells that respond to specific faces like the fabled
00:32:04.980 grandmother cell or i think there was one experiment about 25 30 years ago that that showed that there
00:32:10.680 were cells that were responding to the face of bill clinton and not any other and so you have this kind
00:32:16.200 of one-way feed-forward picture of a mapping of the world and yet in in your description here are
00:32:24.060 seeming to reverse the causality one interesting piece of neuroanatomical trivia is that we have
00:32:31.600 something like 10 times the number of connections going top down rather than bottom up from returning to
00:32:39.200 visual cortex from the frontal lobes that has always been somewhat inscrutable that you know we we know that
00:32:46.940 you can modify the activity and even structure of visual cortex by learning right so you can learn
00:32:54.160 to see the world differently and that learning largely takes place frontally or you know or in areas of
00:33:01.920 cortex that are not strictly limited to vision and yet they connect back to visual cortex and so you
00:33:08.160 imagine what what is required neurologically to learn to recognize you know let's say you become a
00:33:16.060 radiologist and you learn to read cat scans say that learning has to be physically inscribed
00:33:23.080 somewhere and when and we find that the changes propagate all the way down to visual cortex there's
00:33:29.320 a picture of layers some of these deeper layers yeah that are above vision are now encoding a model of the
00:33:38.380 world on your account that is predictive that is making guesses uh that is a kind of yeah i believe
00:33:45.760 uh anil seth uh when he was on this podcast described it as a controlled hallucination it's very much
00:33:51.320 like what the dreaming brain is doing except in waking life it is constrained by visual inputs to
00:33:58.960 the system of the sort that you just described right and we're getting this error term in predictive
00:34:04.980 coding so maybe you can kind of fill in the gap i've created here what what are these deeper layers
00:34:11.180 of the network doing and how is this reversal of you know that this is now a a feedback story more than
00:34:20.140 it is a feed-forward story how how does that change our sense or how might it change our sense of
00:34:26.780 the role that uh our kind of worldview and and and self model plays in determining the character of
00:34:35.660 our experience right great so exactly as you say that you know it's kind of always been a bit of a
00:34:44.340 mystery why there are 10 times as as many feedback neurons as there are kind of feed forward in some of
00:34:51.460 these systems and the picture that we just talked about where the generative model the simulation model
00:34:59.480 actually points down from the higher cortical areas towards the low level inputs the where the sense data is
00:35:09.720 coming in now in fact you know so so one way to think about this this model is that we we've got this
00:35:16.180 kind of generative model which starts with our priors what we think is going on and makes a simulation
00:35:24.420 and what flows up the the feed forward part is just the prediction errors so the prediction errors
00:35:32.800 say look your model's a little wrong here because you know it's different so then the model will be
00:35:39.120 adjusted so to minimize the prediction errors now it's not just one huge model going all the way from top
to bottom as you intimated the scheme that is now thought to arise is something called
00:35:55.300 hierarchical predictive processing so it's essentially that you have a whole series of low-level models near
00:36:03.100 the data you know the first layers of the visual cortex might be you know having you know models that are
00:36:08.380 detecting um edges and corners and then you know you you build up from there exactly like you do in a
00:36:16.520 neural network where higher layers in the network are essentially processing higher level features
00:36:23.700 except that these are all being driven down by these priors that are generating what we would expect to
00:36:32.900 see and all that's flowing up the funny thing is that the data actually never flows up the brain
00:36:39.900 all that's flowing up is the prediction errors up this feed forward network what's coming down
is the output of the generative model so the brain is only generating what it thinks it's seeing
00:36:53.180 and there is no actually what we're seeing it's just prediction errors flow up and say can you please
00:37:01.440 adjust it there's a large prediction error here so what we think is going on is that we have these
00:37:06.980 kind of models that sit one on top of another and the higher level model sends down you know is where
00:37:15.240 the priors come from now you might ask well where do the priors of that higher level model come from
00:37:20.720 well they come from priors a layer above and you know we don't know how many layers in this hierarchy
00:37:27.600 there are but you know there might be something like half a dozen layers uh in the hierarchy and
00:37:33.600 right at the top of the hierarchy you know we we get things like concepts and you know multi-sensory
00:37:42.400 integration concepts and you know reasoning and language maybe in the middle layers of this
00:37:47.820 hierarchy we get things like faces and motion and and at the low levels of the hierarchy we get these
very raw unfabricated parts of the sensory information percepts low level sensory perception
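To make the "predictions flow down, only prediction errors flow up" idea concrete, here is a minimal two-level sketch in code, a toy Gaussian model with invented numbers, not a claim about how the brain actually implements this.

```python
import numpy as np

# Minimal two-level predictive-coding loop: a higher level holds a belief about a hidden
# cause, generates a top-down prediction of the sensory input, and only the prediction
# error flows back up to adjust the belief.

rng = np.random.default_rng(0)

true_cause = 3.0         # the hidden cause out in the world (the "tree")
belief = 0.0             # the higher level's estimate, initialized at the prior mean
prior_mean = 0.0
prior_precision = 1.0    # how strongly the prior is trusted
sensory_precision = 4.0  # how strongly the data is trusted (high = a clear day)
lr = 0.05                # step size for belief updating

def predict(cause_estimate):
    """The generative model: the sensory data we would expect if the belief were true."""
    return cause_estimate  # identity mapping keeps the toy example simple

for _ in range(200):
    sensation = true_cause + rng.normal(scale=0.5)  # noisy bottom-up input
    prediction_error = sensation - predict(belief)  # the only signal that flows "up"
    prior_error = prior_mean - belief               # pull back toward the prior
    # gradient step on the precision-weighted free energy of this Gaussian toy model
    belief += lr * (sensory_precision * prediction_error + prior_precision * prior_error)

print(f"final belief about the hidden cause: {belief:.2f}")
# settles near 2.4: the data (3.0) dominates, but the prior (0.0) still shrinks the estimate
```

out of curiosity how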
00:38:04.580 many layers are deep learning networks working with now well like in the transformer level you know
00:38:12.680 model that's behind um you know chat gpt and and and google's bard they're like you know close to 100
00:38:21.480 you know maybe 95 or 125 depending on the particular architecture so there are you know there are a lot
00:38:28.340 that that that that that being said you know obviously the brain has kind of is is way more parallel
00:38:37.260 and and complex architecture i would guess than than some of these neural networks but but hierarchy is
00:38:44.340 key and i think that's precisely why you're able to get such sophisticated behavior out of some of these
00:38:50.580 large language models but but you know we've known for for over a decade that neural networks work in
00:38:58.580 you know use generative models uh unsupervised neural networks work in the same way as as as as the brain
00:39:05.500 and they extract these features like edges and corners and then noses and eyes and and mouths and ears
00:39:16.220 and then whole faces you know further up the hierarchy so that's that's the way that you know
00:39:23.200 we think that the brain is kind of constructing our model of the world now i mean at the top of the
00:39:31.240 you know to really kind of think about what you know well what what's at the top of this you know
00:39:36.420 what are we actually trying to do well one of the most important and one of the most important
00:39:39.940 conjectures is that in fact it's kind of like a self-model a phenomenal self-model
00:39:46.120 which must emerge at some of these kind of higher levels in the hierarchy and you know i don't know
00:39:55.520 well i guess we'll get into that when we talk about the meditation yeah yeah so i want to take a turn
00:40:01.980 toward psychedelics and meditation and the nature of the self and just how flexible our interaction
00:40:11.060 with reality might prove to be and just what what is possible subjectively here to realize and and how
00:40:18.200 might that matter and how that might connect to human flourishing overall just to take one point of
00:40:24.560 contact here is that you you know there's some evidence now that psychedelics in particular promote
00:40:29.660 neuroplasticity and and uh offering some clues to how you know a fairly short experience might
create durable changes in one's sense of being in the world strangely i think it was a recent
paper that suggested that this neuroplasticity is mediated through intracellular 5-ht2a receptors which
00:40:52.820 are not as many people know psychedelics like lsd and and psilocybin are active through serotonin receptors
00:41:01.500 but they they obviously have a different effect than serotonin normally does and the idea that they
00:41:08.180 may be reaching inside the cell seemed i mean maybe that's been in the air for a while but it was the
00:41:13.880 first i heard of it which struck me as interesting but before we get there let look i just want to see
00:41:18.840 if we can make this picture of predictive coding and error detection somehow subjectively real for
00:41:26.880 people so you know you and i are having this conversation my eyes are or have generally been
00:41:31.120 open i'm i've been looking at a fairly static scene i just have my desk in front of me nothing has been
00:41:38.020 moving right there's no there are no changes to the visual scene really apart from what is introduced by
00:41:44.480 my moving my eyes around and i've surveyed this scene you know fairly continuously for the last 45 minutes
00:41:51.980 as we've been speaking and again it's a scene of very little change right and and yet i i'm continuing
00:41:57.940 to see everything and some things presumably i'm i'm now seeing for the first time as i pay attention
00:42:03.960 in new ways now if something fundamentally changed if you know a a mouse suddenly leapt onto the surface
00:42:11.980 of my desk and and began scurrying across it it would get a a strong reaction from me and i i would
00:42:18.200 perceive the novelty but before that happens i'm perceiving everything quite vividly anyway and
00:42:25.160 nothing is changing so in what sense is my perception merely a story of my continuous prediction errors
00:42:35.140 with respect to the visual scene yeah so so the i think the idea is that if i mean you are creating
00:42:44.300 a simulation of what your best guess is on you know how the contents of your desk and as you say if
00:42:57.400 there is a if something like a mouse runs across your desk you know that would be something that would
00:43:06.640 cause a very large prediction error and your attention would go to it in fact what we
00:43:14.160 we we we didn't get into this but there is actually a kind of a a real homologue of what attention
00:43:21.580 is within the predictive processing framework essentially what happens is that is that when
00:43:26.760 you attend to something you you give more weight to parts of the predictive processing hierarchy
00:43:39.000 stack and specifically you give more precision weighting to the sensory data the likelihood of
00:43:49.240 the data and so you would say there's a very large prediction error here and you would be instead of
00:43:57.180 your priors dominating the posterior what you actually see the sensory data would have a greater weight
00:44:04.960 in determining the contents of the generative model so you you know this is a kind of a two-way street
00:44:12.640 that's going on constantly between the the likelihood of the data and the priors your expectations and
00:44:20.100 and you know it's it's it's interesting just to take a step back you know you're seeing this relatively
00:44:25.720 constant scene in front of you you know presumably in these beautiful colors in a cartoonish definition
00:44:32.900 and yet if you look at what's coming through your eyes i mean you could only see a very small portion
00:44:40.580 of the visual scene uh at any one time because that's where you know your macula the only part
00:44:47.120 that sees in in color and accurately is is is like a tiny portion of the visual field and yet you're seeing
00:44:55.100 everything clearly in color so this kind of you know makes it very clear that what you are seeing
is not your sensory data but in fact the output of your generative model you know just to remind
00:45:07.060 people so your peripheral vision while it seems to you to be occurring in color uh it really isn't
00:45:15.280 beginning you can test this you can you can have someone hold a colored object however brightly colored
00:45:20.720 you want at the very edge of your peripheral field of view you know keeping your eyes forward
00:45:27.520 and you will find it impossible right at the edge to determine what that what the color of that
00:45:35.320 that object is until it comes further into your field of view and yet we we're not walking around
00:45:42.400 feeling that our visual world is ringed with black and white imagery and so it is with you know as you
00:45:49.400 point out with the area of the vast region beyond the very narrow spot of foveal focus right
00:45:57.240 you see something in focus but the rest isn't in focus until you you direct your gaze to it and
00:46:04.700 yet we don't tend to notice that and that's a so it's there's something it's a little bit like a you
00:46:10.720 know a video game engine that is just you know it's kind of rendering parts of the world when they're
00:46:14.660 needed but they're not you know they're just presumed otherwise and we're we seem to be content to live
00:46:22.360 that way because it doesn't until we start bumping into hard objects that we didn't know were there
00:46:27.560 and it's the stability of all i guess there's another piece here we have you know we're constantly
00:46:32.620 moving our eyes in what are called visual saccades and we're effectively blind when we do that for the
00:46:39.880 brief moment of our eyes lurching around we're not consciously getting visual data and we're not
00:46:46.320 noticing that either right so this there are various clues and you can notice that when you
00:46:51.280 if you go to a mirror and stare into your own eyes and then look around and then look back at your
00:46:55.520 eyes you never catch your eyes you know moving around and there's this gap and if you still doubt
00:47:03.980 that you can notice how different it is to move your eye by you know taking your finger and touching
00:47:09.740 the side of your one of your eyes and jiggling it and you can see how the world lurches around there
00:47:14.960 that's because your you know ocular motor cortex can't correct for that that kind of motion and
00:47:21.120 it's kind of forward-looking copy of what it expects to see because you're you're accomplishing that with
00:47:25.600 your finger but when you move your eyes in the normal way it's discounting the data of that's being
00:47:32.440 acquired during that movement so in all these ways you can see that you're not getting this crystal
00:47:39.040 clear comprehensive photographic image of the world when you're seeing this is a a piecemeal
00:47:46.000 vision again based in large measure on on what you're expecting to see and yet that's not consciously
00:47:54.480 obvious yeah exactly and and of course you know it's only when you go through meditation or experiences
00:48:03.280 and psychedelics or or you know other times you know people can suddenly come to notice ah you know
00:48:10.480 isn't it odd that when i push my eyeball the whole world moves you know maybe maybe what i'm seeing is
00:48:18.320 a kind of a mental construction and not the world as it really is so i want to talk about the self
in particular and then and what um we might describe as the self-model i think thomas metzinger
00:48:32.160 has also been on the podcast might have given us that phrase i'm not sure yeah yeah he's he's he's
00:48:38.640 he's done phenomenal work on this uh over the years and and i think that that's actually central this
metzinger concept of the phenomenal self-model but before we do it many people will be um interested in
00:48:54.160 how psychedelics help us make sense of some of this this neuroscience because you know unlike
00:49:01.520 meditation i mean meditate is obviously a fair amount of neuroscience done on meditation as well but
00:49:06.800 the strength of psychedelics is that you can take really anyone there are some very rare exceptions to
00:49:13.600 this but you know virtually anyone can be sat down and given the requisite substance and an hour later
00:49:20.960 uh they are having uh some very predictable and and sweeping changes made to their perception of
the world for better or worse almost no one comes away from a large dose of lsd or psilocybin saying
00:49:35.600 nothing happened or it didn't work uh whereas with meditation as many people who have tried the
00:49:41.280 practice know many many people simply bounce off the whole project they they close their eyes they try to
00:49:46.400 follow their breath or they you know get they use whatever technique has been given to them and
00:49:52.080 they feel like nothing has happened right there's just it's just me here thinking and you know i do that
00:49:58.400 all the time anyway and they come away with the sense that it's not for them or maybe it's there's
00:50:03.760 really nothing to it it's just people are just deceiving themselves that there's anything especially
00:50:08.880 important going on there but psychedelics don't tend to have that effect on people
00:50:13.120 what do you think we know about psychedelics at this point that gives us some you know perspective
00:50:20.720 here and i guess perhaps you might describe if you're willing your own experience with psychedelics
00:50:26.880 have they have they been an important part of your coming to be interested in any of this yeah
00:50:32.800 absolutely okay well why don't we take the kind of the predictive processing theory that's out there
00:50:41.360 in terms of how what is the mechanism of action from a computational perspective
00:50:49.120 if you'd like to continue listening to this conversation you'll need to subscribe at
00:50:52.720 samharris.org once you do you'll get access to all full-length episodes of the making sense podcast
00:50:58.400 along with other subscriber only content including bonus episodes and amas and the conversations i've
00:51:04.400 been having on the waking up app the making sense podcast is ad free and relies entirely on
00:51:09.680 listener support and you can subscribe now at samharris.org