Making Sense - Sam Harris - July 09, 2021


#255 — The Future of Intelligence


Episode Stats

Length

58 minutes

Words per Minute

181

Word Count

10,544

Sentence Count

239



Summary

Jeff Hawkins is the co-founder of Numenta, a neuroscience research company, and the founder of the Redwood Neuroscience Institute. Before that, he was one of the founders of the field of handheld computing, starting Palm and Handspring. He's also a member of the National Academy of Engineering and the author of two books: On Intelligence and, most recently, A Thousand Brains: A New Theory of Intelligence. In this episode, Jeff and I talk about intelligence from a few different sides. We start with the brain: how the cortex creates models of the world, the role of prediction in experience, and how thought is analogous to movement in conceptual space. But for the bulk of the conversation, we have a debate about the future of artificial intelligence and the prospect that AI could pose some kind of existential risk to us. As you'll hear, our intuitions divide fairly sharply, and as a consequence, our views on that problem are very different. But it was a lot of fun, and I hope you enjoy it. The Making Sense Podcast is made possible entirely through the support of our subscribers; we don't run ads. So if you enjoy what we're doing here, please consider becoming a subscriber at samharris.org. - Sam Harris


Transcript

00:00:00.000 Welcome to the Making Sense Podcast.
00:00:08.860 This is Sam Harris.
00:00:10.900 Just a note to say that if you're hearing this,
00:00:13.120 you are not currently on our subscriber feed
00:00:15.180 and will only be hearing the first part of this conversation.
00:00:18.460 In order to access full episodes of the Making Sense Podcast,
00:00:21.600 you'll need to subscribe at samharris.org.
00:00:24.180 There you'll find our private RSS feed
00:00:25.960 to add to your favorite podcatcher,
00:00:28.000 along with other subscriber-only content.
00:00:30.560 We don't run ads on the podcast,
00:00:32.420 and therefore it's made possible entirely
00:00:33.960 through the support of our subscribers.
00:00:35.840 So if you enjoy what we're doing here,
00:00:37.700 please consider becoming one.
00:00:46.660 Today I'm speaking with Jeff Hawkins.
00:00:49.740 Jeff is the co-founder of Numenta,
00:00:52.680 a neuroscience research company,
00:00:54.660 and also the founder of the Redwood Neuroscience Institute.
00:00:58.600 And before that, he was one of the founders
00:01:00.180 of the field of handheld computing,
00:01:03.620 starting Palm and Handspring.
00:01:07.160 He's also a member of the National Academy of Engineering.
00:01:10.600 And he's the author of two books.
00:01:12.640 The first is On Intelligence,
00:01:14.340 and the second and most recent is A Thousand Brains,
00:01:19.760 a new theory of intelligence.
00:01:22.040 And Jeff and I talk about intelligence
00:01:24.520 from a few different sides here.
00:01:27.140 We start with the brain.
00:01:29.640 We talk about how the cortex creates models of the world,
00:01:33.420 the role of prediction in experience.
00:01:37.240 We discuss the idea that thought is analogous to movement
00:01:41.100 in conceptual space.
00:01:42.700 But for the bulk of the conversation,
00:01:46.240 we have a debate about the future of artificial intelligence,
00:01:49.820 and in particular, the alignment problem,
00:01:53.260 and the prospect that AI could pose
00:01:56.120 some kind of existential risk to us.
00:01:59.280 As you'll hear, Jeff and I have very different takes
00:02:03.580 on that problem.
00:02:05.420 Our intuitions divide fairly sharply.
00:02:08.140 And as a consequence, we have a very spirited exchange.
00:02:14.120 Anyway, it was a lot of fun.
00:02:15.780 I hope you enjoy it.
00:02:17.480 And now I bring you Jeff Hawkins.
00:02:26.560 I'm here with Jeff Hawkins.
00:02:28.340 Jeff, thanks for joining me.
00:02:30.480 Thanks for having me, Sam.
00:02:31.560 It's a pleasure.
00:02:32.740 I think we met probably just once,
00:02:35.360 but I feel like we met about 15 years ago,
00:02:38.140 at one of those Beyond Belief conferences
00:02:40.440 at the Salk Institute.
00:02:41.640 Does that ring a bell?
00:02:42.880 You know, I was at one of the Beyond Belief conferences,
00:02:45.960 and I don't recall meeting you there,
00:02:48.100 but it's totally possible.
00:02:49.560 And I just...
00:02:50.020 Yeah, it's possible we didn't meet,
00:02:51.720 but I remember, I think we had an exchange
00:02:54.200 where one of us was in the audience,
00:02:56.520 and the other was...
00:02:57.400 I mean, so we had an exchange over 50 feet or whatever.
00:03:00.380 Yeah, oh, that makes sense.
00:03:02.020 Yeah, I was in the audience, and I was speaking up.
00:03:04.560 Yeah, okay, and I was probably on stage
00:03:06.960 defending some cockamamie conviction.
00:03:09.980 Well, anyway, nice to almost meet you once again.
00:03:13.340 And you have a new book,
00:03:15.280 which we'll cover part of,
00:03:17.340 by no means exhausting its topics of interest,
00:03:20.940 but the new book is A Thousand Brains,
00:03:23.420 and it's a work of neuroscience
00:03:26.000 and also a discussion about the frontiers of AI
00:03:30.100 and where all this is heading.
00:03:32.780 But maybe we should start with the brain part of it
00:03:36.260 and start with the really novel
00:03:40.060 and circuitous and entrepreneurial route
00:03:43.820 you've taken to get into neuroscience.
00:03:47.500 This is the non-standard course
00:03:50.040 to becoming a neuroscientist.
00:03:52.000 Give us your brief biography here.
00:03:55.000 How did you get into these topics?
00:03:56.380 Well, I fell in love with brains
00:03:59.840 when I just got out of college.
00:04:01.640 So I studied electrical engineering in college.
00:04:04.500 And right after I started my first job at Intel,
00:04:08.100 I read an article by Francis Crick
about brains and how we don't understand how they work.
00:04:14.160 And I just became enamored.
00:04:15.780 I said, oh my God, we should understand this.
00:04:17.320 This is me.
00:04:17.940 I am my brain.
00:04:19.300 No one seems to know how this thing is working.
00:04:21.740 And I just couldn't accept that.
00:04:23.420 And so I decided to dedicate my life
00:04:26.400 to figuring out what's going on when I'm thinking
00:04:28.340 and who we are, basically, as a species.
00:04:32.600 And it was a difficult path.
00:04:34.780 So I quit my job.
00:04:37.460 I essentially applied to become a graduate student,
00:04:41.200 first at MIT in AI,
00:04:43.000 but then I settled at Berkeley in neuroscience.
And I said, okay, I'm going to spend my life
00:04:49.040 figuring out how the neocortex works.
00:04:51.240 And I found out very quickly
00:04:53.380 that that was a very,
00:04:54.880 not difficult thing to do scientifically,
00:04:56.680 but difficult to do from the practical aspects of science,
00:05:00.460 that you couldn't get funding for that.
00:05:02.200 It was considered too ambitious.
00:05:03.980 You know, there was theoretical work
00:05:05.140 and then people didn't fund theoretical work.
00:05:07.480 So after a couple of years
00:05:08.740 as a graduate student at Berkeley,
00:05:10.620 I set a different path.
00:05:12.060 I said, okay, I'm going to go back
00:05:13.220 to work in industry for a few years
00:05:15.400 to mature,
00:05:16.940 to figure out how to make institutional change
00:05:19.460 because I was up against an institutional problem,
00:05:21.320 not just a scientific problem.
00:05:23.400 And that turned into a series of successful businesses
00:05:26.660 that I was involved with and started,
00:05:28.880 including Palm and Handspring.
00:05:30.720 These are some of the early handheld computing companies.
00:05:34.180 And we were having a tremendous amount of success with that.
00:05:36.920 But it was never my mission
00:05:38.640 to stay in the handheld computing industry.
00:05:42.360 I wanted to get back to neuroscience.
00:05:43.700 And everybody who worked for me knew this.
00:05:45.240 In fact, it was, you know,
00:05:48.000 I told the investors,
00:05:48.920 I'm only going to do this for four years.
00:05:50.340 And they said, what?
00:05:51.180 I said, yeah, that's it.
00:05:52.620 But it turned out to be a lot longer than that
because of all the success we had.
00:05:55.460 But eventually, I just extracted myself from it.
00:05:58.900 And I said, I'm going to go.
00:06:00.120 And I have so many years left in my life.
00:06:02.440 So after having all that success
00:06:03.800 in the mobile computing space,
00:06:05.080 I started a neuroscience institute.
00:06:07.100 This is at the recommendation
00:06:08.180 from neuroscience friends of mine.
00:06:09.840 So they helped me do that.
00:06:11.500 And I ran that for three years.
00:06:12.880 And now I've been running sort of a private lab
00:06:14.740 just doing pure neuroscience for the last 17 years.
00:06:19.540 That's Numenta, right?
00:06:21.220 That's Numenta, yeah.
00:06:22.980 And we've made some really significant progress
00:06:26.300 in our goals.
00:06:28.680 And the book documents
00:06:30.080 some of the recent,
00:06:31.320 really significant discoveries we've made.
00:06:34.160 So am I right in thinking
00:06:35.400 that you made enough money
00:06:37.520 at Palm and Handspring
00:06:39.820 that you could self-fund
your first neuroscience institute?
00:06:43.420 Or did you, or is that not the case?
00:06:45.040 Did you have to go raise money?
00:06:46.480 It was, well, it was a bit of both.
00:06:48.660 It was, certainly, I was a major contributor.
00:06:51.160 I wasn't the only one.
00:06:52.540 But I didn't want the funding
00:06:54.480 to be the driver of what we did
00:06:56.480 and how we spent all our time.
00:06:58.080 So at the institute,
00:06:59.160 we had collaborations
00:07:00.700 with both Berkeley and Stanford.
00:07:03.160 We didn't get funds from them,
00:07:04.280 but we did work with them on various things.
00:07:07.180 And then we had,
00:07:09.020 but that was mostly funded by myself.
00:07:11.820 Numenta is still,
00:07:13.400 I'm a major contributor to it,
00:07:14.440 but there are other people
00:07:15.240 who've invested in Numenta.
00:07:16.520 We have one outside venture capitalist
00:07:18.440 and several people,
00:07:20.880 but I'm still a major contributor to it.
00:07:23.320 I don't,
00:07:24.000 I just view that as a sort of a necessary thing
00:07:26.780 to get onto the science
00:07:28.340 and not have to worry about it.
00:07:29.620 Yeah.
00:07:30.240 Because when I was at Berkeley,
00:07:31.540 what I was told over and over again,
00:07:33.300 and I really came to understand this.
00:07:34.840 In fact,
00:07:35.820 I went and eventually,
00:07:37.740 after that,
00:07:38.860 when I was running
00:07:39.320 the Redwood Neuroscience Institute,
00:07:41.660 I went to Washington
to talk
to the National Science Foundation
and the National Institutes of Health
00:07:46.220 and also to DARPA,
00:07:48.320 who were the funders of neuroscience.
00:07:50.140 And everyone thought
00:07:50.940 what we were doing,
00:07:51.660 which is sort of big theory,
00:07:53.160 large scale theories
00:07:54.000 of neocortical function,
00:07:55.860 that this was like
00:07:56.460 the most important problem to work on,
00:07:58.280 but everyone said
00:07:59.200 they can't fund it
00:08:00.080 for various reasons.
00:08:01.100 And so over the years,
00:08:03.380 I've come to appreciate
00:08:04.060 that it's very difficult
00:08:05.400 to be a scientist
00:08:06.420 doing what we do
00:08:07.320 with traditional funding sources.
00:08:10.180 But we don't work outside of science.
00:08:12.900 We partner with labs
00:08:14.220 and we go to conferences
00:08:15.420 and we publish papers.
00:08:16.360 We do all the regular stuff.
00:08:17.880 Right, right.
00:08:19.060 Yeah, it's amazing
00:08:19.640 how much comes down to funding
00:08:21.280 or lack of funding
00:08:22.460 and the incentives
00:08:23.880 that would dictate
00:08:25.300 whether something gets funded
00:08:26.720 in the first place.
00:08:27.440 It's by no means
00:08:29.600 a perfect system.
00:08:31.260 It's a kind of
00:08:32.000 an intellectual market failure.
00:08:34.080 Yeah, it is fascinating
00:08:35.560 and we could have
00:08:36.560 a whole conversation
00:08:37.180 about that sometimes perhaps
00:08:38.320 because I ask myself,
00:08:39.960 why is it so hard?
Why can't people fund this?
00:08:42.580 And there's reasons for it
00:08:43.820 and it's a complex,
00:08:45.820 strange thing
00:08:46.760 when people were telling me
00:08:47.960 this is the most important thing
00:08:49.100 anyone could be working on
00:08:50.400 and we think your approaches are great,
00:08:52.900 but we can't fund that.
00:08:54.460 And why is that?
00:08:55.620 You know, but it's, you know,
00:08:57.700 I just accepted the way it was.
00:08:59.380 I said, okay,
00:08:59.720 this is the world I'm living in.
00:09:00.960 I'm going to get one chance here.
00:09:02.820 If I can't do this through like,
00:09:04.540 you know, being, you know,
00:09:06.020 working my way as a graduate student
00:09:07.620 to getting a position in university,
00:09:09.960 how am I going to do it?
00:09:11.460 And I said, okay,
00:09:12.340 it's not what I thought,
00:09:13.120 but this is what it's going to be.
00:09:15.640 Nice.
00:09:16.100 Well, let's jump into
00:09:17.840 the neuroscience side of it.
00:09:19.860 Generally speaking,
00:09:20.480 we're going to be talking about
intelligence and
00:09:24.120 how it's accomplished
00:09:25.540 in physical systems.
00:09:27.440 So let's,
00:09:28.260 let's start with a,
00:09:29.260 a definition,
00:09:30.840 however loose.
00:09:33.420 What is intelligence
00:09:35.180 in your view?
00:09:37.220 So I didn't know
00:09:38.220 and didn't have any
00:09:39.100 pre-ideas about
00:09:40.080 what this would be.
00:09:40.940 It was a mystery to me.
00:09:42.540 And, but we've learned
00:09:43.900 what a good portion
00:09:45.460 of your brain is doing.
And so we started with
the neocortex,
00:09:47.680 which is about 70%
00:09:49.360 of the volume
00:09:49.980 of a human brain.
00:09:50.640 And I now know
00:09:52.480 what that does.
00:09:53.200 And so I'm going to
00:09:54.080 take that
00:09:54.820 as my definition
00:09:56.160 for intelligence here.
00:09:57.900 What's going on
00:09:58.820 in your neocortex
00:09:59.600 is it's learning
00:10:00.520 a model of the world,
00:10:01.960 an internal
00:10:02.560 recreation
00:10:03.440 of all the things
00:10:05.080 in the world
00:10:05.840 that you know of
and how it does that.
That's the key
to what we've discovered,
00:10:10.500 but it's this internal model.
And intelligence
requires having
00:10:14.500 an internal model
00:10:15.440 of the world
00:10:16.480 in your head.
00:10:17.500 And it allows you
00:10:18.360 to recognize
00:10:19.020 where you are.
00:10:19.540 It allows you
00:10:20.060 to act on things.
00:10:21.060 It allows you
00:10:21.460 to plan
00:10:21.960 and think about
00:10:22.540 the future.
00:10:23.060 So if I'm going to say
00:10:23.680 what happens
00:10:24.420 when I do this,
00:10:25.340 the model tells you that.
00:10:27.380 So to me,
00:10:28.140 intelligence is just
00:10:29.100 about having a model
00:10:29.780 in your head
00:10:30.400 and using that
00:10:31.420 for planning and action.
00:10:33.040 It's not about
00:10:33.860 doing anything particular.
00:10:35.440 It's about
00:10:36.100 understanding the world.
00:10:38.360 Yeah, that's interesting.
00:10:39.060 I think most people would,
00:10:40.400 that's kind of
00:10:40.700 an internal
00:10:41.600 definition
00:10:43.020 of intelligence,
00:10:44.160 but I think
00:10:44.680 most people would
00:10:45.720 reach for
00:10:46.860 an external one
00:10:48.240 or a functional
00:10:50.000 one that has
00:10:50.900 to take in
00:10:51.640 the environment.
00:10:53.340 I mean,
00:10:53.460 it's something about
00:10:54.080 being able to
00:10:55.240 flexibly
00:10:56.620 meet your goals
00:10:58.160 under a range
00:10:59.420 of conditions,
00:11:00.260 you know,
00:11:00.460 more flexibly
00:11:01.760 than rigidly.
00:11:02.660 I guess there's
00:11:03.060 rigid forms
00:11:04.540 of intelligence,
00:11:05.280 but when we're
00:11:06.340 talking about
00:11:06.740 anything like
00:11:07.620 general intelligence,
00:11:09.100 we're talking about
00:11:10.080 something that is
00:11:11.320 not merely
00:11:12.980 hardwired
00:11:14.040 and reflexive
00:11:15.260 but flexible.
00:11:16.560 Well, yes,
00:11:16.820 but if you have
00:11:17.920 an internal model
00:11:18.900 of the world,
00:11:19.460 you had to learn it.
00:11:20.480 I mean,
00:11:20.680 at least from a human
00:11:21.560 point of view,
00:11:22.020 there's some things
00:11:22.600 we have built in
00:11:23.420 when we're born,
00:11:24.520 but the vast majority
00:11:25.940 of what you and I know,
00:11:27.420 Sam,
00:11:27.720 is learned.
00:11:29.220 You know,
00:11:29.400 we didn't know
00:11:29.800 what a computer was
00:11:30.540 when you're born.
00:11:31.080 You don't know
00:11:31.340 what a coffee cup is.
00:11:32.260 You don't know
00:11:32.600 what a building is.
00:11:33.780 You don't know
00:11:33.960 what doors are.
00:11:34.700 You don't know
00:11:35.040 what computer codes are.
00:11:36.540 None of this stuff.
00:11:37.700 Everything that,
00:11:38.740 almost everything
00:11:39.240 we interact with
00:11:39.920 in the world today,
00:11:41.600 in language,
00:11:42.300 we don't know
00:11:42.800 any particular language
00:11:43.800 when we're born.
00:11:45.000 We don't know
00:11:45.580 mathematics.
00:11:46.100 So we had to learn
00:11:46.680 all these things.
00:11:47.600 So if you want to say
00:11:48.820 there might be
00:11:49.320 an internal model
00:11:50.080 that wasn't learned,
00:11:50.980 well,
00:11:51.080 that's pretty trivial.
00:11:52.160 But I'm talking
00:11:52.860 about models
00:11:53.380 that are learned
00:11:53.920 and you have to
00:11:54.620 interact with the world
00:11:55.540 to learn it.
00:11:56.060 You can't learn it
00:11:56.760 without being present
00:11:58.420 in the world,
00:11:58.880 without having an embodiment,
00:11:59.980 without moving about,
00:12:00.920 touching and seeing
00:12:01.680 and hearing things.
00:12:03.020 So a large part
00:12:04.160 of what people think about,
00:12:05.200 like you brought up,
00:12:05.960 is, okay,
00:12:07.060 you know,
00:12:07.320 we are able to solve a goal.
00:12:09.220 But that's what
a model lets you do.
00:12:11.340 It's not the,
00:12:11.920 that is not
00:12:12.820 what intelligence itself is.
00:12:14.660 Intelligence is having
00:12:15.440 this ability
00:12:16.020 to solve any goal,
00:12:17.240 right?
00:12:17.560 Because you have,
00:12:18.360 if your model
00:12:19.000 covers that part
00:12:20.320 of the world,
00:12:20.700 you can figure out
00:12:21.440 how to manipulate
00:12:22.140 that part of the world
00:12:22.920 and achieve what you want.
00:12:25.060 So it's,
00:12:25.660 I'll give you
00:12:26.080 a little further analogy.
00:12:27.740 It's a little bit
00:12:28.080 like computers.
00:12:28.860 When we talk about
00:12:29.420 like a universal
00:12:30.080 Turing machine
00:12:30.840 or what a computer is,
00:12:32.220 it's not defined
00:12:33.200 by what the computer
00:12:34.640 is applied to do.
00:12:36.160 It's like,
00:12:36.760 a computer isn't something
00:12:37.580 that solves
00:12:38.040 a particular problem.
00:12:38.900 A computer is something
00:12:39.540 that works
00:12:39.900 on a set of principles
00:12:40.740 and that's how
00:12:42.400 I think about intelligence.
00:12:43.460 It's a modeling system
00:12:45.200 that works
00:12:45.540 on a set of principles.
00:12:47.040 Those principles
00:12:47.660 can exist in a mouse
00:12:49.140 and a dog
00:12:49.700 and a cat
00:12:50.120 and a human
00:12:50.520 and probably birds,
00:12:52.460 but don't focus
00:12:54.300 on what those animals
00:12:55.080 are doing.
00:12:55.520 Yeah, I think it's important
00:12:58.220 to point out
00:12:58.680 that a model
00:12:59.580 need not be
00:13:00.360 a conscious model.
00:13:01.640 In fact,
00:13:01.940 most of our models
00:13:02.940 are not conscious
00:13:04.320 and might not even be
00:13:06.840 in principle
00:13:07.420 available to consciousness,
00:13:09.740 although I think
00:13:10.400 at the boundary,
00:13:11.920 something that you'd say
00:13:14.120 is happening entirely
00:13:15.560 in the dark
00:13:16.140 does have a kind of,
00:13:17.620 or can have a kind of
00:13:18.600 liminal conscious aspect.
00:13:21.440 So I mean,
00:13:21.760 to take, you know,
00:13:22.520 the coffee cup example,
00:13:23.620 this leads us into
00:13:25.040 a more granular discussion
00:13:26.940 of what it means
00:13:28.000 to have a model
00:13:28.720 of anything
00:13:29.340 at the level
00:13:29.980 of the cortex.
00:13:31.600 But, you know,
00:13:31.980 if I reach for my coffee cup
00:13:33.400 and grasp it,
00:13:35.480 the ordinary experience
00:13:37.220 of doing that
00:13:37.940 is something
00:13:38.620 I'm conscious of.
00:13:40.080 I'm not conscious
00:13:41.260 of all of the prediction
00:13:43.580 that is built
00:13:44.840 into my accomplishing that
00:13:47.480 and experiencing
00:13:49.440 what I experience
00:13:50.460 when I touch a coffee cup,
00:13:51.580 and yet it's prediction
00:13:53.820 that is required
00:13:55.800 having some ongoing expectation
00:13:58.060 of what's going to happen there
00:13:59.140 when I, you know,
00:14:00.160 when each finger
00:14:00.960 touches the surface
00:14:02.620 of the cup
00:14:03.220 that allows for me
00:14:04.940 to detect
00:14:05.700 any error there
00:14:07.840 or to be surprised
00:14:08.920 by something
00:14:09.600 truly anomalous.
00:14:10.680 So if I reach
00:14:11.120 for a coffee cup
00:14:11.900 and it turns out
00:14:13.300 that's a, you know,
00:14:13.860 it's a hologram
00:14:14.540 of a coffee cup
00:14:15.560 and my hand
00:14:16.400 passes right through it,
00:14:17.860 the element of surprise
00:14:19.440 there seems predicated
00:14:20.820 on some ongoing
00:14:22.740 prediction processing
00:14:25.120 to which the results
00:14:26.800 of my behavior
are being compared.
00:14:29.120 So maybe you can talk
00:14:30.080 about what you mean
00:14:31.140 by having a model
00:14:33.460 at the level
00:14:34.020 of the cortex
00:14:34.420 and how prediction
00:14:35.960 is built into that.
00:14:38.780 Yeah.
00:14:39.260 Well, my first book,
00:14:40.480 which I published
00:14:40.960 like 14 years ago,
00:14:42.300 called On Intelligence,
00:14:43.180 was just about that topic.
00:14:45.680 It was about
00:14:46.160 how it is
00:14:47.160 the brain is making
00:14:47.920 all these predictions
00:14:48.760 all the time
00:14:49.520 and all your sensory modalities
00:14:50.980 and you're not aware of it.
00:14:52.940 And so that's
00:14:54.000 sort of the foundation
00:14:54.780 and you can't make
00:14:55.740 a prediction
00:14:56.220 without a model.
00:14:57.000 I mean, a model,
00:14:57.800 to make a prediction,
00:14:58.760 you had to have
00:14:59.120 some expectation,
00:15:00.500 the expectation,
whether you're aware
of it or not,
00:15:02.540 but you have an expectation
00:15:03.660 and that has to be driven
00:15:05.480 from some internal
00:15:06.600 representation of the world
00:15:07.660 that says,
00:15:08.360 hey, you're about
00:15:09.720 to touch this thing,
00:15:10.460 I know what it is,
00:15:11.200 it's supposed to feel
00:15:11.920 this way.
00:15:13.040 And even if you're not aware
00:15:14.480 that you're doing that.
00:15:16.160 One of the key discoveries
00:15:17.700 we made,
00:15:18.720 and this was maybe
00:15:19.560 about eight years ago,
00:15:21.220 we had to get to the bottom,
00:15:23.320 like how do neurons
00:15:24.120 make predictions?
00:15:25.360 What is the physical manifestation
00:15:27.140 of a prediction
00:15:28.160 in the brain?
00:15:29.680 And most of these predictions,
00:15:30.820 as you point out,
00:15:31.420 are not conscious,
00:15:32.400 you're not aware of them.
00:15:33.520 They're just happening
00:15:34.240 and if something is wrong,
00:15:36.320 then your attention
00:15:36.940 is drawn to it.
00:15:38.320 So if you felt the coffee cup
00:15:39.480 and there was a little burr
00:15:40.260 on the side or a crack
and you didn't know it was there,
that wasn't expected,
then you'd say,
00:15:43.580 oh, there's a crack.
00:15:44.340 What was the brain doing
00:15:46.680 when it was making
00:15:47.240 that prediction?
00:15:48.560 And we have a theory about this
00:15:51.020 and I wrote about it
00:15:51.660 in the book a bit
00:15:52.220 and it's a beautiful,
00:15:55.160 I think it's a beautiful theory,
00:15:57.440 but it's basically
00:15:59.180 most of the predictions
00:16:00.120 that are going on
00:16:00.800 in your brain,
00:16:01.220 most of them,
00:16:01.820 not all of them,
00:16:02.500 but most of them,
00:16:03.680 happen inside individual neurons.
00:16:06.920 It is internal
00:16:08.420 to the individual neurons.
00:16:09.680 Now, not a single neuron
00:16:11.480 can predict something,
00:16:12.580 but an ensemble of neurons
00:16:13.900 do this,
00:16:15.040 but it's an internal state
00:16:16.460 and we wrote a paper
00:16:18.000 that came out in 19,
00:16:20.980 2016, excuse me,
00:16:22.740 2016,
00:16:23.800 which is,
00:16:24.980 it's called,
00:16:25.520 why do neurons
00:16:26.040 have so many synapses?
00:16:27.920 And what we posited
00:16:29.920 in that paper,
00:16:30.940 and I'm pretty sure
00:16:31.460 this is correct,
00:16:32.680 is that neurons
00:16:33.920 have these thousands
00:16:34.680 of synapses.
00:16:35.940 Most of those synapses
00:16:37.120 are being used
00:16:37.620 for prediction.
00:16:38.720 And when a neuron
00:16:39.440 recognizes a pattern
00:16:40.600 and says,
00:16:41.240 okay, I'm supposed
00:16:42.020 to be active soon,
00:16:43.020 I should be,
00:16:43.620 I should be becoming
00:16:45.220 active soon.
00:16:45.860 If everything is according
00:16:46.740 to our model here,
00:16:47.920 I should be becoming
00:16:48.480 active soon,
00:16:49.540 and it goes into
00:16:50.200 this internal state,
00:16:51.420 the neuron itself
00:16:52.160 is saying,
00:16:52.600 okay, I'm expecting
00:16:53.740 to become active.
00:16:55.220 And you can't detect
00:16:57.280 that consciously.
00:16:58.040 It's internal to the,
00:16:59.060 it's essentially
00:16:59.800 just a depolarization
00:17:00.980 or a change
00:17:01.480 of the voltage
00:17:01.940 of the neuron.
And we showed how
00:17:06.840 the network
00:17:07.380 of these neurons,
00:17:08.080 what will happen
00:17:08.620 is if your prediction
00:17:09.800 is correct,
00:17:11.020 then a small subset
00:17:12.440 of the neurons
00:17:12.840 become active.
00:17:13.780 But if the prediction
00:17:14.460 is incorrect,
00:17:15.440 a whole bunch
00:17:15.980 of neurons
00:17:16.340 become active
00:17:16.980 at the same time.
00:17:18.280 And then that draws
00:17:19.420 your attention
00:17:19.900 to the problem.
00:17:21.140 So it's a fascinating
00:17:22.080 problem,
00:17:22.560 but most of the predictions
00:17:23.520 going on in your brain
00:17:24.320 are not accessible
00:17:25.380 outside of individual neurons.
00:17:27.620 So there's no way
00:17:28.240 you could be conscious
00:17:28.800 about it.
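
To make the prediction mechanism Jeff describes a bit more concrete, here is a minimal Python sketch of the idea, assuming a toy mini-column of cells. It is purely illustrative, not Numenta's actual model or code, and all class and variable names are invented: cells whose distal synapses recognize the prior pattern enter a depolarized, predictive state; a correctly predicted input activates only those primed cells, while an unpredicted input makes every cell in the column fire at once, which is the "surprise" that draws attention.

```python
# Toy illustration (not Numenta's implementation) of the "predictive state":
# cells whose distal synapses recognize a prior pattern become depolarized,
# so a correctly predicted input activates only a few cells, while an
# unpredicted input makes the whole column burst.

class Cell:
    def __init__(self, distal_pattern):
        # The distal dendritic segment "recognizes" one prior context pattern.
        self.distal_pattern = set(distal_pattern)
        self.predictive = False  # depolarized but not spiking

    def integrate_context(self, prior_activity):
        # If the distal synapses match the prior activity, the cell is primed.
        self.predictive = self.distal_pattern <= set(prior_activity)


class MiniColumn:
    """All cells in one column respond to the same feedforward input."""
    def __init__(self, cells):
        self.cells = cells

    def feedforward_input(self, prior_activity):
        for cell in self.cells:
            cell.integrate_context(prior_activity)
        primed = [i for i, cell in enumerate(self.cells) if cell.predictive]
        if primed:
            # Prediction verified: only the primed cells fire (sparse, quiet).
            return primed, False
        # Nothing was predicted: every cell fires at once (burst = surprise).
        return list(range(len(self.cells))), True


# Context "B" primes cell 0; context "X" primes nothing.
column = MiniColumn([Cell({"B"}), Cell({"C"}), Cell({"D"})])
print(column.feedforward_input({"B"}))  # ([0], False)      expected input
print(column.feedforward_input({"X"}))  # ([0, 1, 2], True)  unexpected: burst
```
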
00:17:29.760 Hmm.
00:17:31.020 I guess most people
00:17:32.160 are familiar
00:17:32.820 with the general anatomy
00:17:34.160 of a neuron
00:17:34.660 where you have
00:17:35.160 this spindly
00:17:36.680 looking thing
00:17:38.140 where there's
00:17:39.460 a cell body
00:17:40.160 and there's
00:17:40.600 a long process,
00:17:41.880 the axon
00:17:42.600 leading away,
00:17:44.140 which carries
00:17:45.000 the action potential
00:17:46.680 if that neuron
00:17:47.700 fires to the synapse
00:17:49.520 and communicates
00:17:50.960 neurotransmitters
00:17:52.200 to other neurons.
00:17:54.040 But on the other
00:17:55.100 side of,
00:17:56.400 in the standard case,
00:17:57.900 on the other side
00:17:58.760 of the cell body,
00:18:00.680 there's this really,
00:18:01.920 often really profuse
00:18:03.960 arborization
00:18:04.860 of dendrites,
00:18:06.420 which is kind of
00:18:07.420 the mad tangle
00:18:08.240 of processes
00:18:08.900 which receive
00:18:10.960 information
00:18:12.160 from other neurons
00:18:14.320 to which this neuron
00:18:15.700 is connected.
00:18:16.700 And it's the integration
00:18:18.860 of information
00:18:19.660 on that side,
00:18:20.720 but before
00:18:21.400 that neuron fires,
that changes
the probability
of its firing,
that that's the place
you are locating
the full set
00:18:31.420 of predictive changes
00:18:32.720 or the full set
00:18:33.460 of changes
00:18:34.120 that constitute
00:18:35.120 prediction
00:18:35.760 in the case
00:18:36.340 of a system
00:18:37.620 of neurons.
00:18:38.480 Yeah.
00:18:38.880 It was interesting
00:18:39.880 for many years,
00:18:40.960 people looked
00:18:41.420 at the connections
00:18:42.820 on the dendrites,
00:18:44.240 on that bushy part
00:18:45.200 called synapses,
00:18:46.980 and when they
00:18:47.920 activated a synapse,
00:18:49.360 most of the synapses
00:18:50.500 were so far
00:18:51.800 from the cell body
00:18:52.700 that they didn't
00:18:54.040 really have much
00:18:54.660 of an effect.
00:18:55.420 They didn't seem
00:18:56.180 like they could
00:18:56.720 make anything happen.
And so,
there are
00:19:00.020 thousands and thousands
00:19:00.980 of them out there,
00:19:02.260 but they don't seem
00:19:03.400 powerful enough
00:19:04.100 to make anything occur.
00:19:06.060 And what was discovered
00:19:07.180 basically over the last
00:19:08.560 20 years,
is that there is
a second
type of spike.
00:19:12.840 So you mentioned
00:19:13.620 the one that goes
00:19:14.220 down the axon,
00:19:15.060 that's the action potential,
00:19:16.980 but there are spikes
00:19:17.900 that travel along
00:19:18.840 the dendrites.
00:19:20.440 And so basically
00:19:21.560 what happens is
00:19:22.600 the individual sections
00:19:24.200 of the dendrite,
00:19:24.940 like little branches
00:19:25.640 of this tree,
00:19:26.340 each one of them
00:19:27.560 can recognize
00:19:28.140 patterns on their own.
00:19:29.500 They can recognize
00:19:30.280 hundreds of separate
00:19:31.860 patterns on these
00:19:32.580 different branches,
00:19:33.660 and they can cause
00:19:34.480 this spike to travel
00:19:35.900 along the dendrite.
00:19:37.560 And that lowers,
00:19:39.720 changes the voltage
00:19:41.020 of the cell body
00:19:41.880 a little bit.
00:19:42.900 And that is what we
00:19:43.960 call the predictive state.
The cell is, like, primed.
It says,
oh,
if I need to fire,
I'm ready to fire.
00:19:50.820 And it's not actually
00:19:52.140 a probability change,
00:19:53.680 it's the timing.
00:19:55.460 And so a cell
00:19:56.180 that's in this
00:19:56.700 predictive state
00:19:57.340 that says,
00:19:57.740 I think I should
00:19:58.520 be firing now
00:19:59.420 or very shortly,
00:20:01.260 if it does generate
00:20:03.080 the regular spike,
00:20:04.140 the action potential,
00:20:04.920 it does it a little bit
00:20:05.840 sooner than it would
00:20:07.040 have otherwise.
00:20:07.700 And it's the timing
00:20:08.480 that is the key
00:20:09.480 to making the whole
00:20:10.020 circuit work.
00:20:11.280 We're getting pretty
00:20:11.840 down in the weeds
00:20:12.400 here about neuroscience.
00:20:13.320 I don't know
00:20:14.240 if all your readers
00:20:15.340 or your listeners
00:20:16.480 will appreciate that.
00:20:17.860 Yeah,
00:20:18.100 no,
00:20:18.260 I think it's useful though.
00:20:19.900 More weeds here.
00:20:21.120 One of the novel things
00:20:22.680 about your argument
00:20:24.360 is that it was inspired
00:20:26.900 by some much earlier
00:20:28.440 theorizing.
00:20:29.380 You mark your debt
00:20:30.960 to Vernon Mountcastle,
00:20:33.080 but the idea is
00:20:34.320 that there's a common
00:20:36.320 algorithm operating
00:20:38.300 more or less everywhere
00:20:39.620 at the level
00:20:40.540 of the cortex
00:20:41.100 that is,
00:20:42.540 it's more or less,
00:20:43.560 the cortex is doing
00:20:45.060 essentially the same thing,
00:20:47.520 whether it's producing
00:20:49.280 language or vision
00:20:52.080 or any other sensory channel
00:20:54.520 or motor behavior.
00:20:56.580 So talk about
00:20:57.540 the general principle
00:20:59.640 that you spend
00:21:01.360 a lot of time on
00:21:02.200 in the book
00:21:02.580 of just the organization
00:21:04.080 of the neocortex
00:21:05.440 into cortical columns
00:21:06.920 and the implications
00:21:08.460 this has
00:21:09.880 for how we view
00:21:11.620 what the brain is doing
00:21:13.180 in terms of sensory
00:21:15.160 and motor learning
00:21:16.800 and all of its consequences.
00:21:20.160 Vernon Mountcastle
00:21:21.100 made this proposal
00:21:21.800 back in the 70s
00:21:22.860 and it's just
00:21:24.780 a dramatic idea
00:21:26.120 and it's an incredible idea
00:21:27.840 and so incredible
00:21:28.500 that some people
00:21:29.080 just refuse to believe it,
00:21:30.260 but other people
00:21:30.860 really think
00:21:31.700 it's a tremendous discovery.
00:21:34.240 But what he noticed
00:21:35.040 was if you look
00:21:35.680 at the neocortex,
00:21:36.760 if you could take one
00:21:37.580 out of your head
00:21:38.300 or out of a human's head,
00:21:39.820 it's like a sheet.
00:21:41.300 It's about
00:21:41.740 two and a half millimeters thick.
00:21:43.560 It is about
00:21:44.720 the size of a large
00:21:45.520 dinner napkin
00:21:46.240 or 1,500 square centimeters
and if you could unfold it
and lay it flat,
the different parts of it
do different things.
00:21:54.620 There's parts that do vision,
00:21:55.760 there's parts that do language
00:21:56.740 and parts that do hearing
00:21:57.760 and so on.
00:21:59.000 But when you,
00:21:59.680 if you cut into it
00:22:00.760 and you look at
00:22:02.280 the structure
00:22:03.560 in any one of these areas,
00:22:05.720 it's very complicated.
00:22:07.360 There are dozens
00:22:08.160 of different cell types
00:22:09.220 but they're very
00:22:10.220 prototypically connected
00:22:11.760 and they're arranged
00:22:13.220 in certain patterns
00:22:14.160 and layers
00:22:14.700 and different types of things
00:22:15.620 so it's a very
00:22:16.240 it's a very complex structure
00:22:18.040 but it's almost
00:22:19.060 the same everywhere.
00:22:20.760 It's not the same everywhere
00:22:21.940 but almost the same everywhere
00:22:23.340 and so this is not just true
00:22:25.040 in a human neocortex
00:22:26.020 but if you look
00:22:26.900 at a rat's neocortex
00:22:27.840 or a dog's neocortex
00:22:29.020 or a cat
00:22:29.720 or a monkey,
00:22:31.060 this same basic structure
00:22:32.560 is there
and what Vernon Mountcastle
00:22:34.780 said
00:22:35.380 is that
00:22:36.240 all the parts
00:22:37.320 of the neocortex
00:22:38.020 are actually,
00:22:39.480 we think of them
00:22:40.140 as doing things,
00:22:40.960 different things
00:22:41.460 but they're actually
00:22:42.060 all doing
00:22:42.720 some fundamental algorithm
00:22:44.000 which is the same.
00:22:45.440 So hearing
00:22:45.960 and touch
00:22:46.440 and vision
00:22:46.800 are really
00:22:47.260 the same thing.
00:22:48.780 He says
00:22:49.060 if you took part
00:22:49.820 of the cortex
00:22:50.240 and you hook it up
00:22:50.840 to your eyes
00:22:51.260 you'll get vision.
00:22:51.920 If you hook it up
00:22:52.340 to your ears
00:22:52.760 you'll get hearing.
00:22:53.940 If you hook it up
00:22:54.460 to other parts
00:22:55.000 of the neocortex
00:22:55.520 you'll get language
00:22:56.280 and so
00:22:57.620 he spent many years
00:22:59.700 giving the evidence
00:23:01.020 for this.
00:23:02.100 He proposed further
00:23:03.040 that this algorithm
00:23:04.720 was contained
00:23:05.500 in what's called
00:23:05.960 a column
00:23:06.440 and so
00:23:07.360 if you would take
00:23:08.640 a small area
00:23:10.360 of this neocortex,
00:23:11.520 remember it's like
00:23:12.060 two and a half
00:23:13.800 millimeters thick,
00:23:14.840 you take a very
00:23:15.500 sort of skinny
00:23:16.160 little one millimeter
00:23:18.100 column out of it
00:23:19.680 that that is
00:23:20.740 the processing element
00:23:21.740 and so
00:23:22.760 our human neocortex
00:23:24.560 we have about
00:23:25.420 150,000 of these columns
00:23:27.240 other animals
00:23:28.900 have more or less.
00:23:30.240 People should picture
00:23:30.780 something resembling
00:23:31.940 a grain of rice
00:23:32.940 in terms of scale here.
00:23:34.260 Yeah, yeah.
00:23:34.920 I sometimes say
00:23:35.660 take a piece
00:23:36.180 of skinny spaghetti
00:23:36.980 like you know
00:23:37.520 angel hair pasta
00:23:38.300 or something like that
00:23:39.040 and cut it into
little
00:23:40.060 two and a half
00:23:40.920 millimeter lengths
00:23:41.640 and stack them
00:23:42.480 side by side.
00:23:43.240 Now the funny thing
00:23:44.720 about columns
00:23:45.320 is you can't see them.
00:23:46.340 They're not visual
00:23:47.040 things.
If you look
under a microscope,
you won't see them,
00:23:49.760 but he pointed out
00:23:51.440 why they're there.
00:23:53.900 It has to do
00:23:54.540 with how they're connected
00:23:55.640 so all the cells
00:23:57.240 in one of these
00:23:57.820 little millimeter
00:23:58.480 pieces of rice
00:23:59.660 or spaghetti
00:24:00.280 if you will
00:24:00.720 are all processing
00:24:02.080 the same thing
and the next
piece of rice
over is processing
something different,
and the next piece
of rice over
is processing
something different,
00:24:08.140 and so he didn't
00:24:10.320 know what was
00:24:11.220 going on
00:24:11.860 in the cortical
00:24:12.680 column.
00:24:13.840 He articulated
00:24:15.560 the architecture
00:24:16.360 he talked about
00:24:17.920 the evidence
00:24:18.580 that this exists
00:24:19.400 he said
00:24:20.200 here's the evidence
00:24:20.820 why these things
00:24:22.000 are all doing
00:24:22.480 the same thing
00:24:23.360 but he didn't
00:24:25.120 know what it was
00:24:26.520 and it's kind of
00:24:26.980 hard to imagine
00:24:27.540 what it is
00:24:28.320 that this algorithm
00:24:29.820 could be doing
00:24:30.520 but that was
00:24:31.440 essentially the core
00:24:32.380 of our research
00:24:33.000 that's what we've
00:24:33.660 been focused on
00:24:34.340 for close to 20 years.
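
As a rough way to picture Mountcastle's proposal, the sketch below treats a cortical column as one generic learning unit and gets different "functions" only by wiring copies of the same unit to different inputs. This is an illustrative toy in Python, not Hawkins's or Numenta's implementation; the class and names are invented.

```python
# Illustrative only: Mountcastle's idea that the cortex repeats one generic
# circuit, and that what a region "does" is set by what it is wired to.

class CorticalColumn:
    """One generic modeling unit; every copy runs the same algorithm."""
    def __init__(self):
        self.model = {}  # whatever this column has learned from its input

    def learn(self, context, observation):
        self.model[context] = observation

    def predict(self, context):
        return self.model.get(context)


# The identical unit becomes "visual", "auditory", or "somatosensory"
# only by virtue of the sensor stream that feeds it.
visual_columns = [CorticalColumn() for _ in range(3)]    # wired to the eyes
auditory_columns = [CorticalColumn() for _ in range(3)]  # wired to the ears
touch_columns = [CorticalColumn() for _ in range(3)]     # wired to the skin
```
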
00:24:35.400 It's also hard
00:24:36.920 to imagine
00:24:37.340 the microanatomy
00:24:38.500 here because
00:24:39.040 in each one
00:24:39.580 of these little
00:24:41.220 columns
00:24:41.720 there's something
00:24:42.760 like 150,000
00:24:44.260 neurons on average
00:24:45.640 and if you
00:24:46.780 could just unravel
00:24:47.920 all of the
00:24:49.500 connections there
00:24:51.160 you know
00:24:52.020 the tiny filaments
00:24:53.700 of nerve endings
00:24:55.540 what you would
00:24:56.440 have there
00:24:57.100 is on the order
00:24:57.960 of kilometers
00:24:58.820 in length
00:24:59.920 you know
00:25:00.680 all wound up
00:25:01.640 into that
00:25:02.100 tiny structure
00:25:03.100 so it's
00:25:04.060 this is a
00:25:04.680 strange juxtaposition
00:25:06.540 of simplicity
00:25:07.880 and complexity
00:25:08.660 but it's
00:25:09.460 there's certainly
00:25:10.580 a mad tangle
00:25:12.220 of processes
00:25:12.820 in there.
00:25:14.080 Yeah this is why
00:25:14.620 brains are so hard
00:25:15.360 to study
00:25:15.700 you know if you
00:25:16.040 look at another
00:25:16.600 organ in the body
00:25:17.520 whether it's
00:25:18.040 the heart
00:25:18.720 or the liver
00:25:19.240 or something like
00:25:19.820 that
00:25:20.080 and you take
00:25:21.160 a little section
00:25:21.760 of it
00:25:22.000 it's pretty uniform
00:25:22.920 you know what I'm
00:25:23.420 saying
00:25:23.560 but here
00:25:25.140 if you take
00:25:25.560 a teeny piece
00:25:26.200 of the
00:25:26.480 teeny teeny piece
00:25:27.620 of the cortex
00:25:28.100 it's got this
00:25:28.860 incredible complexity
00:25:30.020 in it
00:25:30.400 which is not
00:25:30.960 just a
00:25:31.400 it's not random
00:25:32.380 it's very specific
00:25:34.420 and so
00:25:35.720 yeah it's hard
to wrap
your head
00:25:38.180 around how complex
00:25:38.960 it is
00:25:39.500 but we need
00:25:40.540 to be complex
00:25:41.220 because what we do
00:25:42.440 as humans
00:25:42.860 is extremely complex
00:25:44.080 and you know
00:25:45.580 we shouldn't be fooled
00:25:46.520 that we're just
00:25:47.100 a bunch of neurons
00:25:47.680 that are doing
00:25:48.040 some mass action
00:25:48.960 no there's a very
00:25:50.040 complex processing
00:25:51.560 going on
00:25:52.200 and in your brain
it's not just
a blob of neurons
that are pulsating,
you know,
there are very detailed
mechanisms
at work
00:26:02.460 and we figured out
00:26:03.300 what some of those are
00:26:04.080 so describe to me
00:26:06.320 what you mean
00:26:07.840 by this phrase
00:26:08.840 a reference frame
00:26:10.740 what does that mean
00:26:12.340 at the level of
00:26:13.040 the cortex
00:26:14.080 and cortical columns
00:26:15.520 yeah
00:26:16.560 so
00:26:17.480 we're jumping
00:26:18.760 to the end point
00:26:19.660 because that's not
00:26:20.560 where we started
00:26:21.220 we were trying to figure
00:26:22.500 out how cortical columns
00:26:23.380 work
00:26:23.740 and what we realized
00:26:25.880 is that
00:26:26.880 they're little
00:26:27.800 modeling engines
00:26:28.600 they
00:26:28.880 each one of these
00:26:29.740 cortical columns
00:26:30.400 is able to build
00:26:31.220 a model
00:26:31.900 of its input
00:26:32.880 and
00:26:33.860 that model
00:26:34.820 is what we would
00:26:35.460 call a sensory
00:26:36.180 motor model
00:26:36.800 that is
00:26:37.280 it's getting
00:26:38.600 let's assume
00:26:39.360 it's getting
00:26:39.700 from your finger
00:26:40.840 right
00:26:41.460 a tip of your finger
00:26:42.300 one of the columns
00:26:43.020 is getting input
00:26:43.620 from the tip
00:26:44.100 of your finger
00:26:44.600 and as your finger
00:26:46.300 moves and touches
00:26:47.020 something
00:26:47.460 the input changes
but it's not
sufficient just to know
how the input
changes
00:26:52.100 for you to build
00:26:53.320 a model
00:26:53.820 of the object
00:26:54.640 you're touching
00:26:55.120 and I use
00:26:55.560 the coffee cup
00:26:56.060 example quite a bit
00:26:56.960 because that's
00:26:57.680 how we did it
00:26:58.320 if you move
00:26:59.140 your finger
00:26:59.520 over the coffee cup
00:27:00.360 and you're not
00:27:01.040 even looking
00:27:01.420 at the coffee cup
00:27:01.920 you could learn
00:27:02.480 a model
00:27:02.840 of the coffee cup
00:27:03.420 you could feel
00:27:03.860 just with one finger
00:27:05.320 you could feel
00:27:05.700 like oh
00:27:06.000 this is what
00:27:06.600 its shape is
00:27:07.320 but to do that
00:27:08.900 your brain
00:27:09.760 that cortical column
00:27:10.780 your brain as a whole
00:27:11.600 but that cortical column
00:27:12.500 individually
00:27:13.100 has to know something
00:27:14.300 about where your finger
00:27:15.220 is relative
00:27:16.160 to the cup
00:27:16.720 it's not just
00:27:17.780 a changing pattern
00:27:18.660 that's coming in
00:27:19.360 it has to know
00:27:20.580 how your finger's moving
00:27:21.480 and where your finger
00:27:22.220 is as it touches it
00:27:23.580 so the idea
00:27:25.040 of a reference frame
00:27:25.960 is a way of
00:27:26.680 noting a location
00:27:28.240 you have to have
00:27:28.780 a location signal
00:27:30.060 you have to have
00:27:30.440 some knowledge
00:27:31.080 about where things
00:27:32.540 are in the world
00:27:33.500 relative to other things
00:27:34.760 in this case
00:27:35.280 where's your finger
00:27:36.060 relative to the object
00:27:36.960 you're trying to touch
00:27:37.820 the coffee cup
00:27:38.500 and we realize
00:27:40.200 that for you
00:27:40.740 your brain
00:27:42.140 to make a prediction
00:27:42.900 of what you're going
00:27:43.700 to feel
00:27:44.220 when you touch
00:27:44.800 the edge of the cup
00:27:45.620 and again
00:27:46.400 you mentioned earlier
00:27:47.100 you're not conscious
00:27:47.760 of this
00:27:48.100 you'd reach the cup
00:27:48.860 and you just
00:27:49.300 but your brain's predicting
00:27:50.360 what all your fingers
00:27:51.540 are going to feel
00:27:52.120 it needs to know
00:27:53.820 where the finger's
00:27:54.280 going to be
00:27:54.720 and it has to know
00:27:56.500 what the object is
00:27:57.120 it's a cup
00:27:57.580 it needs to know
00:27:57.980 where it's going to be
00:27:58.600 and that requires
00:28:00.360 a reference frame
00:28:00.980 a reference frame
00:28:01.860 is just a way
00:28:02.580 of noting a location
00:28:03.960 it's saying
00:28:04.940 relative to this cup
00:28:06.440 your finger's over here
00:28:07.860 not over there
00:28:08.880 not on the handle
00:28:09.620 up at the top
00:28:10.540 whatever it is
00:28:11.260 and this is a deduced property
we can say with certainty
00:28:15.020 that this has to exist
00:28:16.240 if your finger's
00:28:17.100 going to make a prediction
00:28:17.800 when it reaches
00:28:18.360 and touches the coffee cup
00:28:19.200 it needs to know
00:28:19.820 where the finger is
00:28:20.520 that location
00:28:21.840 has to be relative
00:28:22.460 to the cup
00:28:22.940 so we can just say
with certainty
00:28:25.260 that there need
00:28:26.680 to be reference frames
00:28:27.380 in the brain
00:28:27.760 and this is not
00:28:28.260 a controversial idea
what is
perhaps novel
00:28:31.980 is that we realize
00:28:32.680 that these reference frames
00:28:33.460 exist in every
00:28:34.120 cortical column
00:28:34.800 and it's the structure
00:28:36.460 of knowledge
00:28:37.100 it applies to not just
00:28:38.380 what your finger feels
00:28:39.220 on a coffee cup
00:28:39.960 and what you see
00:28:40.580 when you look at it
00:28:41.520 but also how you arrange
00:28:42.980 all your knowledge
00:28:43.680 in the world
00:28:44.200 is stored
00:28:45.360 in these reference frames
00:28:46.320 and so when
00:28:47.440 this we're jumping ahead
00:28:48.760 here many steps
00:28:49.840 but when we think
00:28:51.260 and when we posit
00:28:53.380 when we try to
00:28:54.380 you know
00:28:54.720 reason in our head
00:28:55.900 when even my language
00:28:57.200 right now
00:28:58.240 is where
00:28:59.300 the neurons
00:29:00.200 are walking
00:29:01.080 through locations
00:29:01.820 in reference frames
00:29:02.600 recalling the information
00:29:03.660 stored there
00:29:04.380 and that's what
00:29:05.500 comes into your head
00:29:06.320 or that's what you say
00:29:07.180 so it becomes
00:29:08.600 the core reference
00:29:10.020 the reference frame
00:29:10.700 becomes the core structure
00:29:11.740 for the entire
00:29:12.480 everything you do
00:29:13.540 it's knowledge
00:29:14.120 about the world
00:29:14.680 is in these reference frames
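
Here is a toy sketch of the reference-frame idea, purely illustrative and not Numenta's model (the coordinates and feature labels are made up): features of an object are stored at locations defined relative to the object itself, so knowing where the finger is about to be lets the model predict what it should feel, and a mismatch is a prediction error.

```python
# Toy "reference frame": features of an object stored at locations defined
# relative to the object itself, not relative to the sensor. Illustrative
# only; the coordinates and feature names are made up.

coffee_cup = {
    (0, 0, 0): "smooth curved surface",  # side of the cup
    (0, 5, 0): "rounded rim",            # top edge
    (3, 2, 0): "handle",                 # protruding handle
}

def predict_touch(object_model, finger_location):
    """Predict what the finger should feel at a location in the object's frame."""
    return object_model.get(finger_location, "unknown location")

def touch(object_model, finger_location, sensed_feature):
    """Compare the prediction with what was actually felt."""
    expected = predict_touch(object_model, finger_location)
    surprised = sensed_feature != expected
    return expected, surprised

# Moving the finger to the rim: the model predicts "rounded rim", so feeling
# a crack there is a prediction error that would draw attention.
print(touch(coffee_cup, (0, 5, 0), "rounded rim"))  # ('rounded rim', False)
print(touch(coffee_cup, (0, 5, 0), "crack"))        # ('rounded rim', True)
```
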
00:29:15.600 yeah you make
00:29:17.000 a strong claim
00:29:17.920 about the primacy
00:29:19.500 of motion
00:29:20.280 right
00:29:20.700 because there's
00:29:21.660 everyone knows
00:29:22.540 that there's part
00:29:23.140 of the cortex
00:29:23.800 devoted to
00:29:25.340 motor action
00:29:26.620 we refer to it
00:29:27.420 as the motor cortex
00:29:28.760 and distinguish it
00:29:30.340 from sensory cortex
00:29:31.280 in that way
00:29:31.980 but it's also true
00:29:33.800 that other regions
00:29:35.040 of the cortex
00:29:35.600 and perhaps
00:29:37.240 every region
00:29:37.980 of the cortex
00:29:38.460 does have some
00:29:40.220 connection
00:29:41.240 to lower structures
00:29:42.400 that can affect
00:29:44.160 motion
00:29:45.000 right
00:29:45.380 so it's not
00:29:45.860 it's not that
00:29:46.980 it's just motor cortex
00:29:48.640 that's in the
00:29:49.340 motion game
00:29:50.240 and by analogy
00:29:51.720 or by direct
00:29:53.160 implication
00:29:53.680 you think of
00:29:55.120 thought
00:29:56.420 as
00:29:57.440 itself
00:29:58.660 being a kind
00:29:59.440 of movement
00:30:00.120 in conceptual
00:30:01.420 space
00:30:02.080 right
00:30:02.260 so there's a mapping
00:30:02.860 of the
00:30:03.260 the sensory world
00:30:04.740 that can really
00:30:05.920 only be accomplished
00:30:06.680 by acting
00:30:07.620 on it
00:30:08.220 you know
00:30:08.600 and therefore
00:30:09.420 moving
00:30:10.620 right
00:30:10.880 so the way
00:30:11.700 to map the cup
00:30:12.580 you know
00:30:13.240 is to touch it
00:30:14.000 with your fingers
00:30:14.580 in the end
00:30:15.520 there is a
00:30:16.680 an analogous
00:30:17.540 kind of motion
00:30:18.420 in conceptual
00:30:20.100 space
00:30:20.760 and you know
00:30:21.500 even you know
00:30:22.260 abstract ideas
00:30:23.680 like I think
00:30:25.100 some of the examples
00:30:25.760 you give in the book
00:30:26.280 are like you know
00:30:26.700 democracy
00:30:27.260 right you know
00:30:28.180 or money
00:30:29.080 or how we
00:30:30.600 understand these things
00:30:31.420 so let's go back
00:30:32.800 to the first thing
00:30:33.440 you said there
00:30:33.880 the idea that
00:30:35.160 there's motor cortex
00:30:36.100 and sensory cortex
00:30:37.020 is sort of
00:30:38.180 no longer
00:30:38.920 considered right
00:30:39.620 as you mentioned
00:30:41.040 we
00:30:41.620 the neurons
00:30:42.900 that
00:30:43.540 in these cortical
00:30:44.440 columns
00:30:44.820 there are certain
00:30:45.360 neurons that are
00:30:46.140 the motor output neurons
00:30:47.400 these are in a
00:30:48.500 particular layer 5
00:30:50.000 as they're called
00:30:50.700 and so in the
00:30:52.220 motor cortex
00:30:52.780 they were really big
00:30:53.720 and they project
00:30:54.280 to the spinal cord
00:30:55.520 and say oh
00:30:56.420 that's how you
00:30:56.920 move your fingers
00:30:57.560 but if you look
00:30:58.780 at the neurons
00:30:59.500 the columns
00:31:00.640 in the visual cortex
00:31:01.760 the parts that get
00:31:02.500 input from the eyes
00:31:03.480 they have the same
00:31:04.880 layer 5 cells
00:31:05.700 and these cells
00:31:06.860 project to
00:31:07.800 a part of the brain
00:31:08.660 called the superior
00:31:09.200 colliculus
00:31:09.660 which is what
00:31:10.920 controls eye motion
00:31:11.860 so this goes
00:31:13.360 against the original
00:31:13.940 idea oh
00:31:14.500 there's sensory cortex
00:31:15.360 and motor cortex
00:31:16.060 no one believes
00:31:17.020 that well
00:31:17.380 I don't know
00:31:17.840 nobody
00:31:18.080 but very few
00:31:19.120 people believe
00:31:19.580 that anymore
00:31:20.020 it's as far
00:31:21.040 as we know
00:31:21.500 every part
00:31:22.160 of the cortex
00:31:22.620 has a motor output
00:31:23.520 and so every part
00:31:24.960 of the cortex
00:31:25.360 is getting some
00:31:26.040 sort of input
00:31:26.600 and it has
00:31:27.200 some motor output
00:31:28.100 and so the basic
00:31:29.780 algorithm of cortex
00:31:30.600 is a sensory
00:31:31.440 motor system
00:31:32.440 it's not divided
00:31:33.660 it's not like
00:31:34.320 we have sensory
00:31:34.940 areas and motor
00:31:36.000 areas
00:31:36.260 as far as we know
everywhere it's been seen
00:31:38.160 there's these
00:31:38.580 motor cells
00:31:39.520 everywhere
00:31:39.980 so we can put
00:31:41.680 that aside
00:31:42.300 now
00:31:43.340 I can
00:31:44.720 very clearly
00:31:46.340 walk you through
00:31:47.160 and in some sense
00:31:48.840 prove
00:31:49.340 from logic
00:31:50.400 that
00:31:51.200 when you're learning
00:31:52.480 what a coffee cup
00:31:53.480 feels like
00:31:54.020 and I could even
00:31:54.520 do this for vision
00:31:55.220 that you have to have
00:31:56.940 this idea of a reference
00:31:57.960 frame
00:31:58.260 that you have to know
00:31:59.780 where your finger is
00:32:00.500 relative to the cup
00:32:01.320 and that's how you
00:32:02.480 build a model of it
00:32:03.140 and so we can build out
00:32:04.600 this cortical column
00:32:05.540 that explains
00:32:06.060 how it does that
00:32:07.060 how do your
00:32:07.580 parts of your cortex
00:32:09.440 representing your fingers
00:32:10.520 are able to learn
00:32:11.360 the structure
00:32:11.760 of a coffee cup
00:32:12.320 now
00:32:13.420 Mountcastle
00:32:14.200 go back to him
00:32:14.900 he said
00:32:15.600 look
00:32:15.840 it's the same
00:32:16.300 algorithm everywhere
00:32:17.020 and he says
00:32:19.200 it looks the same
00:32:19.760 everywhere
00:32:20.060 so it's the same
00:32:20.620 algorithm everywhere
00:32:21.300 so that would sort of
00:32:22.540 say hmm
00:32:23.100 well if I'm thinking
00:32:23.940 about something
00:32:24.640 that doesn't seem
00:32:25.520 like a sensory
00:32:26.660 motor system
00:32:27.440 like I'm not
00:32:28.000 touching something
00:32:28.680 or looking
00:32:29.020 I'm just thinking
00:32:29.700 about something
00:32:30.300 that would
00:32:31.060 if Mountcastle
00:32:31.860 was right
00:32:32.340 then the same
00:32:33.400 basic algorithm
00:32:34.160 would be applying
00:32:34.800 there
00:32:35.060 so that was
00:32:35.540 one constraint
00:32:36.200 like well
00:32:36.700 that you know
00:32:38.000 and the evidence
00:32:38.780 is that Mountcastle
00:32:39.520 is right
00:32:39.800 I mean the physical
00:32:41.060 evidence suggests
00:32:41.740 he's right
00:32:42.140 it just becomes
00:32:43.080 a little bit odd
00:32:43.580 to think like
00:32:44.060 well how is language
00:32:45.140 like this
00:32:45.620 and how is
00:32:46.080 mathematics like
00:32:47.260 you know
00:32:47.640 touching a coffee cup
00:32:48.520 but then we realize
00:32:50.100 that it's just
00:32:51.060 reference frames
00:32:51.840 are a way of
00:32:52.220 storing everything
00:32:53.020 and in the way
00:32:55.260 we move
00:32:55.860 through a reference
00:32:56.460 frame
00:32:56.660 it's like
00:32:57.160 how do you move
00:32:57.720 from one location
00:32:58.460 how do the neurons
00:32:59.460 activate one location
00:33:01.280 after another location
00:33:02.160 after another location
00:33:03.000 we do that
00:33:04.160 to this idea
00:33:05.420 of movement
00:33:05.860 so I'm moving
00:33:06.920 if I want to
00:33:07.460 access the locations
00:33:08.680 on a coffee cup
00:33:09.300 I move my finger
00:33:10.080 but the same concept
00:33:11.600 could apply to mathematics
00:33:12.700 or to politics
00:33:14.460 but you're not
00:33:15.480 actually physically
00:33:16.300 moving something
00:33:17.000 but you're still
00:33:17.800 walking through
00:33:18.980 a structure
00:33:19.560 a good bridge example
00:33:21.800 is if I say to you
00:33:23.440 you know
00:33:23.680 imagine your house
00:33:24.540 and you know
00:33:25.560 I ask you to walk
00:33:26.440 you know
00:33:26.800 tell me about your house
00:33:27.640 what you'll do
00:33:28.240 is you'll mentally
00:33:29.560 imagine walking
00:33:30.600 through your house
00:33:31.340 it won't be random
00:33:32.600 you just won't have
00:33:33.180 random thoughts
00:33:33.800 come to your head
00:33:34.340 but you will mentally
00:33:35.880 imagine walking
00:33:36.640 through your house
00:33:37.180 and as you walk
00:33:37.880 through your house
00:33:38.440 you'll recall
00:33:39.360 what is supposed
00:33:40.120 to be seen
00:33:40.580 in different directions
00:33:41.320 you can say
00:33:41.740 oh I'll walk
00:33:42.180 in the front door
00:33:42.740 and I'll look
00:33:43.060 to the right
00:33:43.420 what do I see
00:33:44.000 I'll look to the left
00:33:44.580 what do I see
00:33:45.100 this is sort of
00:33:46.720 an example
00:33:47.640 you could relate
00:33:48.240 it to something
00:33:48.640 physically
00:33:49.020 you could move to
00:33:49.840 but that's pretty much
00:33:51.180 what's going on
00:33:51.780 when you're thinking
00:33:52.200 about anything
00:33:52.760 if you're thinking
00:33:53.880 about your podcast
00:33:54.600 and how you get
00:33:55.380 more subscribers
00:33:56.080 you have a model
00:33:57.200 of that in your head
00:33:57.980 and you're
00:33:58.360 you are
00:33:59.160 trying it out
00:34:00.960 thinking about
00:34:01.540 different aspects
00:34:02.180 by literally
00:34:02.860 invoking these
00:34:03.740 different locations
00:34:04.440 and reference frames
00:34:05.240 and so that's
00:34:06.800 sort of the core
00:34:07.540 of all knowledge
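To make the reference-frame idea above a bit more concrete, here is a minimal Python sketch; it is purely illustrative, with invented names and a toy location scheme, and is not Numenta's actual algorithm. A model stores which feature is expected at each location in an object's reference frame, and recalling the object amounts to moving from location to location and reading out what is stored there, much like mentally walking through your house.

```python
# A toy illustration of the reference-frame idea described above.
# Everything here (class name, locations, features) is invented for
# illustration; it is not Numenta's actual algorithm.

class ReferenceFrameModel:
    """Stores which feature is expected at each location of an object."""

    def __init__(self, name):
        self.name = name
        self.features_at = {}  # location tuple -> feature label

    def learn(self, location, feature):
        """Associate a sensed feature with a location in the object's frame."""
        self.features_at[location] = feature

    def predict(self, location):
        """Recall what should be sensed at a location (None if unknown)."""
        return self.features_at.get(location)


# Learning a coffee cup by moving a finger over it: each movement gives a
# new location, and the sensed feature is stored at that location.
cup = ReferenceFrameModel("coffee cup")
cup.learn((0, 0, 0), "flat bottom")
cup.learn((0, 5, 0), "curved rim")
cup.learn((3, 2, 0), "handle")

# "Thinking" about the cup without touching it: mentally moving through the
# same reference frame, location by location, and recalling what is stored,
# much like mentally walking through your house room by room.
for location in [(0, 0, 0), (3, 2, 0), (0, 5, 0)]:
    print(location, "->", cup.predict(location))
```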
00:34:08.200 yeah it's interesting
00:34:09.380 I guess back to
00:34:10.720 Mountcastle for a second
00:34:11.640 one piece of evidence
00:34:13.360 in favor of this view
00:34:14.600 of a common
00:34:15.760 cortical algorithm
00:34:16.840 is the fact that
00:34:18.220 adjacent areas
00:34:19.180 of cortex
00:34:19.740 can be
00:34:20.460 appropriated
00:34:22.020 by
00:34:22.800 various functions
00:34:24.580 you know
00:34:25.160 if you
00:34:25.960 lose your vision
00:34:27.080 say
00:34:27.440 classical visual cortex
00:34:29.760 can be appropriated
00:34:30.720 by other senses
00:34:31.940 and there's this
00:34:32.700 plasticity
00:34:33.640 that can
00:34:35.000 ignore
00:34:35.620 some of the
00:34:36.240 previous boundaries
00:34:37.480 between
00:34:38.240 separate senses
00:34:39.560 in the cortex
00:34:40.560 yeah that's right
00:34:41.880 there's this
00:34:42.700 tremendous plasticity
00:34:43.740 and
00:34:44.360 and you can also
00:34:45.580 recover from
00:34:46.240 various sorts of
00:34:47.020 trauma and so on
00:34:47.880 I mean there's
00:34:48.500 some rewiring
00:34:49.160 has to occur
00:34:49.760 but it does show
00:35:51.060 that whatever the circuitry in the visual cortex would, quote, do if you were a sighted person, well, if you're not a sighted person, it'll just do something else, and so that is a very strong argument for that
00:35:06.580 there's a famous
00:35:07.220 scientist
00:35:07.680 Bach-y-Rita
00:35:08.880 who did
00:35:09.360 an experiment
00:35:10.600 where he
00:35:11.160 I'm trying to
00:35:12.080 remember the
00:35:12.400 animal he used
00:35:13.120 maybe you can
00:35:14.120 recall it
00:35:14.600 but anyway
00:35:14.980 it'll come to me
00:35:16.220 a ferret
00:35:16.980 I think it was a
00:35:17.480 ferret
00:35:18.640 they took, before the animal's born, the optic nerve and ran it over to a different part of the neocortex, and took the auditory nerve and ran it to a different part of the neocortex, you know, basically rewired the animal
00:35:30.520 I'm not sure
00:35:30.880 we do these
00:35:31.360 experiments today
00:35:32.040 but and you
00:35:33.420 know and the
00:35:34.260 argument was that
00:35:34.920 the animals you
00:35:35.820 know still saw
00:35:36.520 and still heard
00:35:37.200 and so on
00:35:37.680 maybe not as
00:35:38.340 well as an
00:35:39.660 unaltered one
00:35:40.540 but the evidence
00:35:41.660 was that yeah
00:35:42.360 that really works
00:35:43.240 so what is
00:35:45.500 genetically determined
00:35:46.900 and what is
00:35:48.360 learned here
00:35:49.020 it seems that
00:35:50.040 the genetics
00:35:51.100 at minimum
00:35:51.880 are determining
00:35:52.900 what is hooked
00:35:55.020 up to what
00:35:55.820 initially
00:35:56.580 right you know
00:35:57.400 barring
00:35:58.220 yeah, roughly, that's right
00:35:59.760 I think you
00:36:01.380 know, like where does the optic nerve from the eyes project, and
00:36:04.280 where do the
00:36:04.720 regions that get
00:36:05.440 the input from
00:36:06.580 the eyes where
00:36:07.040 do they project
00:36:07.640 and so this
00:36:08.880 rough sort of
00:36:10.080 overall architecture
00:36:11.240 is specified
00:36:12.620 and as we
00:36:13.800 just talked
00:36:14.240 through trauma
00:36:14.840 and other reasons
00:36:15.500 sometimes that
00:36:16.100 architecture can
00:36:16.820 get rewired
00:36:17.660 I think also
00:36:18.940 the basic
00:36:20.780 algorithm that
00:36:21.840 goes on in
00:36:22.500 each of these
00:36:22.900 cortical columns
00:36:23.760 the circuitry
00:36:24.780 inside the
00:36:26.080 neocortex is
00:36:26.780 pretty well
00:36:27.180 determined by
00:36:27.900 genetics and
00:36:29.940 in fact, one of Mountcastle's arguments was
00:36:32.300 that humans
00:36:33.120 the human
00:36:33.980 neocortex got
00:36:34.780 large and
00:36:35.940 we have a
00:36:36.280 very large
00:36:36.720 one relative
00:36:37.240 to our body
00:36:37.780 size just
00:36:38.680 because all
00:36:39.160 evolution had
00:36:40.420 to do is
00:36:40.800 discover just
00:36:41.360 make more
00:36:41.740 copies of
00:36:42.300 these columns
00:36:42.820 you don't
00:36:43.380 have to do
00:36:44.400 anything new
00:36:45.040 just make
00:36:45.480 more copies
00:36:46.000 and that's
00:36:46.300 something easy
00:36:46.860 for genes
00:36:47.300 to specify
00:36:47.880 and so
00:36:49.060 human brains
00:36:49.940 got large
00:36:50.680 quickly in
00:36:51.420 evolutionary time
00:36:52.300 by that just
00:36:53.320 replicate more
00:36:54.180 of it type
00:36:54.720 of thing
00:36:55.040 okay so
00:36:56.800 let's go
00:36:57.640 beyond the
00:36:58.700 human now
00:36:59.540 and talk
00:37:00.400 about artificial
00:37:01.940 intelligence
00:37:02.580 and before
00:37:04.900 we talk about
00:37:05.380 the risks
00:37:06.000 or the
00:37:06.620 imagined risks
00:37:07.860 tell me
00:37:09.380 what you
00:37:09.860 think the
00:37:11.040 path looks
00:37:11.840 like going
00:37:12.640 forward
00:37:13.040 what are
00:37:13.580 we doing
00:37:13.840 now and
00:37:14.460 what do
00:37:14.660 you think
00:37:14.900 we need
00:37:15.420 to do
00:37:15.900 to have
00:37:16.760 our dreams
00:37:17.320 of true
00:37:18.400 artificial
00:37:19.220 general
00:37:19.820 intelligence
00:37:20.380 realized
00:37:21.260 well
00:37:22.040 you know
00:37:23.680 today's
00:37:24.280 AI
00:37:24.680 as powerful
00:37:25.880 as it is
00:37:26.540 and successful
00:37:27.160 as it is
00:37:27.900 I think
00:37:29.260 most senior
00:37:30.700 AI practitioners
00:37:31.640 will admit
00:37:32.420 and many of
00:37:33.900 them have
00:37:34.340 that they
00:37:35.220 don't really
00:37:35.520 think they're
00:37:35.960 intelligent
00:37:36.380 you know
00:37:37.140 they're
00:37:37.420 they're
00:37:37.820 really
00:37:38.060 wonderful
00:37:38.460 pattern
00:37:38.860 classifiers
00:37:39.480 and they
00:37:39.920 can do
00:37:40.160 all kinds
00:37:40.500 of clever
00:37:40.800 things
00:37:41.260 but there
00:37:42.520 are very
00:37:42.840 few
00:37:43.080 practitioners who would say
00:37:44.220 hey this
00:37:44.720 AI system
00:37:45.420 that's
00:37:45.780 recognizing
00:37:46.180 faces is
00:37:46.860 really
00:37:47.080 intelligent
00:37:47.540 and there's
00:37:48.960 sort of a
00:37:49.420 lack of
00:37:49.800 understanding
00:37:50.360 what intelligence
00:37:51.300 is and how
00:37:52.080 to go forward
00:37:52.700 and how do
00:37:53.400 you make a
00:37:53.760 system that
00:37:54.200 could solve
00:37:55.260 general
00:37:55.720 problems
00:37:56.380 could do
00:37:57.220 more than
00:37:57.580 one thing
00:37:58.140 right
00:37:58.520 and so
00:38:00.200 in the second
00:38:00.920 part of my
00:38:01.260 book I lay
00:38:01.840 out what I
00:38:02.620 believe are the
00:38:03.360 requirements to
00:38:04.040 do that
00:38:04.460 and my
00:38:05.920 approach for 40 years has been, like, well, I
00:38:09.040 think we
00:38:09.320 need to
00:38:09.540 first figure
00:38:09.960 out what
00:38:10.260 brains do
00:38:10.940 and how
00:38:12.340 they do
00:38:12.660 them and
00:38:13.320 then we'll
00:38:13.840 know how
00:38:14.180 to build
00:38:14.480 intelligent
00:38:14.840 machines
00:38:15.200 because we
00:38:16.340 just don't
00:38:16.820 seem able
00:38:17.700 to intuit
00:38:18.380 what an
00:38:19.060 intelligent
00:38:19.320 machine is
00:38:20.060 so the way I look at this problem, if we want to make an intelligent machine, the question is, what's the recipe, and
00:38:28.960 you have to
00:38:29.400 say what are
00:38:29.840 the principles
00:38:30.380 by which the
00:38:31.100 brain works
00:38:31.600 that we need
00:38:32.080 to replicate
00:38:32.540 and which
00:38:32.940 principles don't
00:38:33.620 we need to
00:38:34.000 replicate and
00:38:35.380 so I made
00:38:36.420 a list of
00:38:36.820 these in
00:38:37.100 the book
00:38:37.360 but thinking of it at a very high level, they
00:38:40.460 have to have
00:38:40.920 some sort of
00:38:41.520 embodiment they
00:38:42.260 have to have
00:38:42.580 the ability to
00:38:43.160 move their
00:38:43.600 sensors somehow
00:38:44.700 in the world
00:38:45.380 you know you
00:38:46.360 can't really
00:38:47.480 learn how to
00:38:48.760 use tools and
00:38:49.580 how to you
00:38:50.460 know run
00:38:51.220 factories and
00:38:52.000 and how to
00:38:53.120 you know do
00:38:53.520 things unless you
00:38:54.280 can move in the
00:38:54.840 world and
00:38:56.160 it requires these
00:38:57.540 reference frames I
00:38:58.380 was talking about
00:38:58.920 because movement
00:38:59.640 requires reference
00:39:00.680 frames but that's
00:39:01.540 not a controversial
00:39:02.200 statement, it's just a fact: you're going to have to know where things are in the world. And then, finally, there's a set of things, but
00:39:11.320 one of the other
00:39:12.500 big ones which we
00:39:13.500 haven't talked about
00:39:14.180 yet and which is
00:39:14.820 where the title of
00:39:15.500 the book comes
00:39:15.940 from a thousand
00:39:17.400 brains is that the
00:39:18.840 way to think about
00:39:19.500 our neocortex is that it
00:39:21.180 has 150,000 of
00:39:22.480 these columns we
00:39:23.700 have essentially
00:39:24.260 150,000 separate
00:39:26.120 modeling systems going
00:39:27.240 on in our brain and
00:39:28.660 they work together
00:39:29.580 by voting and so
00:39:31.960 that concept of a
00:39:33.220 distributed
00:39:33.840 intelligence system
00:39:35.600 is important we're
00:39:37.200 not just one thing
00:39:38.160 it feels like
00:39:39.140 we're one thing but
00:39:40.120 we're really 150,000
00:39:41.240 of these things and
00:39:42.640 we're only conscious
00:39:43.460 of being one thing
00:39:44.460 but that's not
00:39:45.060 really what's
00:39:45.440 happening under the
00:39:46.020 covers so those
00:39:47.960 are some of the
00:39:48.400 key ideas I would
00:39:49.440 just stick to the very high-level ideas: it
00:39:50.960 has to have an
00:39:51.540 embodiment has to
00:39:52.340 be able to move
00:39:52.960 its sensors has to
00:39:54.520 be able to organize
00:39:55.260 information and
00:39:56.020 reference frames and
00:39:57.640 it has to be
00:39:58.320 distributed and
00:39:59.500 that's how we
00:40:00.000 can do multiple
00:40:01.320 sensors and
00:40:02.160 sensory integration
00:40:03.000 things like that
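As a rough sketch of the "thousand brains" voting idea described here, consider the following hypothetical Python toy; the number of columns, their accuracy, and the majority-vote rule are all invented, and the real cortical mechanism is far richer. Many independent column-like models each make a noisy guess about which object is being sensed, and a simple vote across them settles on a consensus.

```python
# A toy sketch of many column-like models voting on which object is present.
# The number of columns, their accuracy, and the majority-vote rule are all
# invented for illustration; the real mechanism is far richer than this.

from collections import Counter
import random

random.seed(0)

OBJECTS = ["coffee cup", "stapler", "phone"]

def column_guess(true_object, accuracy=0.7):
    """One column's noisy guess, based only on its own partial input."""
    if random.random() < accuracy:
        return true_object
    return random.choice([o for o in OBJECTS if o != true_object])

def vote(guesses):
    """The columns 'vote': the most common guess becomes the consensus."""
    return Counter(guesses).most_common(1)[0][0]

# 150 columns standing in for the roughly 150,000 in a human neocortex.
guesses = [column_guess("coffee cup") for _ in range(150)]
print("consensus:", vote(guesses))  # almost always "coffee cup"
```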
00:40:03.700 I guess I question
00:40:06.840 the criteria of
00:40:09.140 embodiment and
00:40:10.720 movement right I
00:40:12.680 mean I understand
00:40:13.580 that practically
00:40:15.060 speaking that's how
00:40:16.660 a useful intelligence
00:40:17.660 can get trained up
00:40:18.920 in our world to
00:40:20.640 do things you know
00:40:22.040 physically in our
00:40:23.120 world but it
00:40:24.480 seems like you
00:40:24.920 could have a
00:40:25.580 perfectly intelligent
00:40:27.060 system you know
00:40:28.220 i.e. a mind
00:40:29.660 that is turned
00:40:32.340 loose on you
00:40:34.160 know simulated
00:40:34.800 worlds and are
00:40:36.180 capable of solving
00:40:37.280 problems that don't
00:40:38.560 require effectors of
00:40:41.280 any kind I mean
00:40:41.960 you know chess is
00:40:43.220 obviously a very low
00:40:44.680 level analogy but
00:40:45.760 just imagine a
00:40:46.940 thousand things like
00:40:48.040 chess that represent
00:40:49.680 real you know
00:40:51.560 theory building or
00:40:52.700 cognition you know
00:40:53.960 in a box yeah I
00:40:56.040 I think you're
00:40:56.740 right and and so
00:40:57.620 when I use the
00:40:58.820 word movement or
00:40:59.580 embodiment and I
00:41:00.880 I'm careful to
00:41:01.620 define it in the
00:41:02.200 book because it
00:41:03.660 doesn't have to be
00:41:04.480 physical. You know, one example I gave: you can imagine an intelligent agent that lives on the internet, and its movement is following links, right
00:41:15.040 it's not a physical
00:41:15.840 thing but there's
00:41:17.420 still this conceptual
00:41:18.980 mathematical idea of
00:41:20.140 what it means to
00:41:20.640 move, yeah, and so you're changing the location of some
00:41:25.420 representation and
00:41:26.940 that could be
00:41:27.400 virtual it could be
00:41:28.460 you know it doesn't
00:41:29.360 have to have a
00:41:29.800 physical embodiment
00:41:31.120 but in the end, you can't learn about the world just by looking at a set of pictures, right, that's not going to happen
00:41:38.760 you can learn to
00:41:39.440 classify pictures but
00:41:43.100 so some AI systems will have to be physically embodied, like a robot, I guess, if you want; many will not be, many will be virtual, but they
00:41:51.040 all have this
00:41:51.780 internal process
00:41:52.820 which I could point to, the
00:41:54.640 thing that says here's
00:41:56.040 where the reference
00:41:56.620 frame is here's where
00:41:57.400 your current location
00:41:58.160 is here's how it's
00:41:58.980 moving to a new
00:41:59.600 location based on some
00:42:00.700 movement vector, you know, like a verb, a word; you can think of that as like an action, and so you can
00:42:06.720 have an action that's
00:42:07.820 not physical but it's
00:42:08.880 still an action and it
00:42:09.760 moves to a new location
00:42:10.720 in this internal
00:42:11.640 representation right
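A small sketch of that last point, again purely hypothetical with made-up pages and links: if the agent's "world" is a graph of linked pages, then an action is just following a link, and the agent's location in its internal representation changes without anything physical moving.

```python
# A toy sketch of "movement" without a body: the agent's world is a small
# graph of linked pages, and an action is simply following a link.
# The pages and links are made up for illustration.

LINKS = {
    "home":    ["news", "about"],
    "news":    ["article", "home"],
    "about":   ["home"],
    "article": ["news"],
}

def follow(location, link_index):
    """An 'action': move to a new location by taking one outgoing link."""
    return LINKS[location][link_index % len(LINKS[location])]

# The agent's location in its (non-physical) reference frame changes with
# each action, just as a finger's location changes on a coffee cup.
location = "home"
for choice in [0, 0, 1, 0]:
    location = follow(location, choice)
    print("now at:", location)
```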
00:42:12.860 right okay well
00:42:15.020 let's talk about risk
00:42:15.940 because this is the
00:42:16.660 place where I think you
00:42:17.560 and I have very
00:42:18.380 different intuitions
00:43:20.120 you, as far as I can tell from your book, seem
00:42:24.980 very sanguine about
00:42:26.940 AI risk and really
00:42:30.700 you seem to think
00:42:32.580 that the only real
00:42:33.620 risk and the serious
00:42:34.800 risk of things going
00:42:35.940 very badly for us is
00:42:37.440 that bad people will
00:42:39.740 do bad things with
00:42:41.060 much more powerful
00:42:41.960 tools so the
00:42:44.100 heuristic here would
00:42:44.940 be you know don't
00:42:45.560 give your super
00:42:46.200 intelligent AI to the
00:42:47.440 next Hitler because
00:42:49.020 that would be bad
00:42:49.780 but other than that
00:42:51.420 the generic problem
00:42:53.080 of self-replication
00:42:54.380 which you talk about
00:42:55.820 briefly, and you point out we face that
00:42:58.460 on other fronts like
00:42:59.440 with, you know, the pandemic, where we've been dealing with natural viruses and bacteria, or computer viruses; I mean,
00:43:06.980 anything that can
00:43:07.700 self-replicate can be
00:43:09.300 dangerous but that
00:43:11.480 aside you seem quite
00:43:13.700 confident that AI will
00:43:15.120 not get away from us
00:43:16.640 there won't be an
00:43:17.840 intelligence explosion
00:43:18.880 and we don't have to
00:43:21.340 worry too much about
00:43:22.260 the so-called
00:43:22.960 alignment problem
00:43:23.940 and at one point you
00:43:25.620 even question whether
00:43:26.920 it makes sense to
00:43:28.080 expect that we'll
00:43:29.060 produce something that
00:43:31.000 can be appropriately
00:43:31.940 called superhuman
00:43:33.140 intelligence so perhaps
00:43:35.140 you can explain the
00:43:36.760 basis for your optimism
00:43:38.500 here so I think what
00:43:41.400 most people and perhaps
00:43:42.500 yourself have fears
00:43:44.160 about is, they use humans as an
00:43:48.680 example of how things
00:43:50.240 can go wrong and so we
00:43:51.960 think about the
00:43:52.420 alignment problem or
00:43:53.440 you think about you
00:43:54.700 know motivations of an
00:43:55.820 AI system well okay
00:43:57.780 does the AI system have
00:43:59.020 motivations or not does
00:44:00.500 it have a desire to do
00:44:01.660 anything now as a human
00:44:03.600 an animal we all have
00:44:04.600 desires, right, but if you take apart what different parts of the human brain are doing, there's some parts
00:44:14.160 that are just building
00:44:14.860 this model of the world
00:44:15.860 and this is the core of
00:44:17.400 our intelligence this is
00:44:18.320 what it means to be
00:44:19.620 intelligent that part
00:44:21.220 itself is benign
00:44:23.200 it has no motivations on
00:44:24.680 its own it doesn't desire
00:44:25.940 to do anything I use an
00:44:27.620 example of a map you know
00:44:28.900 a map is a model of the
00:44:30.040 world, and a map can be a very powerful tool for someone to do good or to do bad, but on its own the
00:44:39.940 map doesn't do anything
00:44:40.920 so if you think about
00:44:42.220 the neocortex on its own
00:44:43.920 it it sits on top of the
00:44:45.540 rest of your brain and
00:44:47.060 the rest of your brain is
00:44:48.120 really what makes us
00:44:49.360 motivated it gets us you
00:44:50.960 know we have our
00:44:52.680 good sides and our bad
00:44:53.860 sides you know our desire
00:44:55.140 to maintain our life and
00:44:57.000 have sex and aggression
00:44:58.680 and all this stuff the
00:44:59.960 neocortex is just sitting
00:45:00.900 there it's like a map it
00:45:01.740 says you know I
00:45:02.400 understand the world and
00:45:03.500 you can use me how you want. So when we build intelligent machines, we have the option, and I think almost the imperative, not to build the old parts of the brain; you know, why do that? We just have this thing which is
00:45:17.200 inherently smart but on
00:45:19.180 its own doesn't really want
00:45:19.940 to do anything. And so some of the risks that come out of people's fears about the alignment problem, specifically, are that the intelligent agent will
00:45:32.900 decide on its own or
00:45:34.000 decide for some reason to
00:45:35.840 do things that are in
00:45:36.920 its best interest not in
00:45:38.060 our best interest or
00:45:38.840 maybe it'll listen to us
00:45:40.440 but then not listen to
00:45:41.320 us or something like
00:45:42.100 this I just don't see
00:45:43.740 how that can physically
00:45:44.760 happen and and for
00:45:46.680 people most people don't
00:45:47.800 understand the separation
00:45:49.020 they just assume that
00:45:50.040 this intelligence is
00:45:50.900 wrapped up in these all
00:45:52.020 these all the things that
00:45:52.960 make us human the
00:45:54.420 intelligence explosion
00:45:55.300 problem is a separate
00:45:56.220 issue I'm not sure
00:45:58.340 which one of those you're
00:45:59.040 more worried about
00:45:59.940 yeah well let's let's
00:46:01.380 deal with the alignment
00:46:03.360 issue first I mean I do
00:46:05.240 think that's more
00:46:06.040 critical but let's see
00:46:08.700 if I can capture what
00:46:09.640 troubles me about this
00:46:10.840 picture you painted
00:46:12.500 here it seems that
00:46:13.700 you're to my mind
00:46:15.140 you're you're being
00:46:16.300 strangely anthropomorphic
00:46:20.100 on one side but not
00:46:22.260 anthropomorphic enough
00:46:23.880 on the other I mean so
00:46:25.400 like you know you think
00:46:27.180 that to understand
00:46:28.680 intelligence and and
00:46:29.940 actually truly implement
00:46:31.080 it in machines we
00:46:32.720 really have to be focused
00:46:35.460 on ourselves first and
00:46:37.080 we have to understand
00:46:38.000 how the human brain
00:46:39.160 works and then emulate
00:46:40.460 those principles pretty
00:46:42.660 directly in machines
00:46:44.440 that strikes me as
00:46:46.120 possibly true but
00:46:47.240 possibly not true and
00:46:48.560 if if I had to bet I
00:46:50.320 think I would probably
00:46:51.780 bet against it although
00:46:53.400 even here you seem to
00:46:55.080 be not taking full
00:46:58.060 account of what the
00:46:59.660 human brain is doing I
00:47:00.620 mean like we you know we
00:47:01.500 can't partition reason
00:47:03.580 and emotion as clearly
00:47:05.600 as we thought we could
00:47:07.360 hundreds of years ago and
00:47:08.440 in fact you know certain
00:47:09.680 emotions you know certain
00:47:10.860 drives are built into our
00:47:12.800 being able to reason
00:47:14.560 effectively I think that's
00:47:16.580 you know, I'll take exception to that. I know this is an opinion; you had Lisa Barrett on
00:47:23.380 your program recently
00:47:24.100 yeah like Antonio Damasio is
00:47:25.860 the person who's banged on
00:47:27.160 about this the most yeah I
00:47:28.260 know and I just disagree I
00:47:30.180 just it's you know you
00:47:31.940 can separate these two and
00:47:34.080 and I can say this because
00:47:36.440 I understand actually what's
00:47:38.180 going on in the neocortex
00:47:42.080 and I have a
00:47:42.080 very good sense of what
00:47:43.100 these actual neurons are
00:47:44.060 actually doing when it's
00:47:45.240 modeling the world and so
00:47:46.300 on, and it
00:47:48.800 does not require this
00:47:50.000 emotional component a human
00:47:51.880 does now you say you know
00:47:53.800 on one hand I don't argue
00:47:55.440 we should replicate the
00:47:56.340 brain I say we should
00:47:57.200 replicate the structures
00:47:58.240 of the neocortex right
00:47:59.580 which is not replicating
00:48:00.600 the brain it's just one
00:48:02.600 part of the brain and so
00:48:04.120 I'm specifically saying
00:48:05.320 you know I don't really
00:48:06.860 care too much about how
00:48:07.740 the spinal cord works or
00:48:09.000 how you know the brain
00:48:10.380 stem does this or that
00:48:11.560 it's interesting maybe I
00:48:12.960 know a little bit about
00:48:13.580 it but that's not
00:48:14.700 important the cortex sits
00:48:15.840 on top of another
00:48:16.760 structure and the cortex
00:48:17.820 does its own thing and
00:48:19.100 they interact of course
00:48:20.200 they interact and our
00:48:21.980 emotions affect what we
00:48:23.300 learn and what we don't
00:48:24.040 learn but it doesn't
00:48:25.080 have to be that way in a
00:48:27.180 system another system
00:48:28.580 that we build that's the
00:48:29.980 way humans are
00:48:30.580 structured yeah okay so
00:48:31.400 I would I would agree
00:48:32.380 with that except the
00:48:33.600 boundary between what is
00:48:35.880 an emotion or a drive or
00:48:37.700 a motivation or a goal and
00:48:40.060 what is a value neutral
00:48:42.560 mapping of reality you know
00:48:45.220 I think that boundary is
00:48:46.840 perhaps harder to specify
00:48:48.900 than you think it
00:48:51.340 is and that certain of
00:48:53.640 these things are connected
00:48:54.920 right which is to I mean
00:48:56.560 here's an example this is
00:48:57.660 probably not a perfect
00:48:59.200 analogy but this gets at
00:49:00.980 some of the surprising
00:49:01.940 features of cognition that
00:49:03.340 may await us so we think
00:49:05.420 intuitively that
00:49:07.040 understanding a proposition
00:49:09.260 is cognitively quite
00:49:12.100 distinct from believing it
00:49:14.160 right so like I can give you
00:49:15.600 a statement that you can
00:49:17.320 believe or disbelieve or be
00:49:19.140 uncertain about and I can
00:49:20.340 say you know there's two
00:49:21.600 plus two equals four two
00:49:22.760 plus two equals five, or I can give you some gigantic number and say
00:49:26.720 this number is prime and
00:49:28.380 presumably in the first
00:49:30.060 condition you'll say yes I
00:49:31.660 believe that in the second
00:49:32.700 you'll say no that's false
00:49:33.960 and in the third you won't
00:49:35.740 know whether it's prime or not, so those are
00:49:39.240 distinct states that we can
00:49:40.720 intuitively differentiate but
00:49:42.400 there's also evidence to
00:49:44.440 suggest that merely
00:49:46.060 comprehending a statement if I
00:49:47.840 give you a statement and you
00:49:49.040 parse it successfully the
00:49:51.520 parsing itself contains an
00:49:54.500 actual default acceptance of
00:49:56.840 it as true and rejecting it
00:50:00.080 as false is a separate
00:50:01.920 operation added to that I
00:50:03.700 mean and there's there's not
00:50:04.720 a ton of evidence for this
00:50:05.740 but there's certainly some
00:50:07.280 behavioral evidence so if I
00:50:08.580 put you in a paradigm where
00:50:10.340 we gave you statements that
00:50:12.360 were true and false and all
00:50:13.560 you had to do was judge them
00:50:14.800 true and false and they were
00:50:16.260 all matched for complexity so
00:50:18.760 you know two plus two equals
00:50:20.020 four is no more or less
00:50:22.040 complex than two plus two
00:50:23.220 equals five but it'll take
00:50:25.040 you longer systematically
00:50:26.640 longer to judge very simple
00:50:28.920 statements to be false than
00:50:30.260 to judge them to be true
00:50:31.540 suggesting that you're doing a
00:50:33.340 further operation now we can
00:50:35.440 remain agnostic as to whether
00:50:37.220 or not that's actually true but
00:50:38.500 if true it's counterintuitive
00:50:41.160 that merely understanding
00:50:43.060 something entails you know some
00:50:45.240 credence giving epistemic
00:50:47.400 credence given to it by default
00:50:49.100 and that to reject it as false
00:50:51.400 represents a subsequent act but
00:50:54.220 like that's the kind of thing
00:50:55.200 that you know already we're on
00:50:56.720 territory that is not coldly
00:50:59.540 rational some of the all too
00:51:02.120 apish appetites have kind of
00:51:03.500 crept into cognition here in
00:51:05.260 ways that we didn't really budget
00:51:08.340 for and so the question is just
00:51:11.040 how much of that is avoidable in
00:51:13.400 building a new type of mind
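Purely as an illustration of the two-step picture being described here, and not of any real experimental model, a toy Python sketch can make the asymmetry explicit: parsing a statement defaults to acceptance, and rejecting it is a separate, added operation, which is why rejection would cost an extra step.

```python
# A purely hypothetical toy model of the claim above: parsing a statement
# defaults to acceptance, and rejecting it is a separate, added operation.
# It illustrates the structure of the argument, not any real cognitive data.

def is_correct(statement):
    """Toy arithmetic checker: '2 + 2 = 4' becomes eval('2 + 2 == 4')."""
    return eval(statement.replace("=", "=="))

def evaluate(statement):
    """Parse a statement, tentatively accept it, then optionally reject it."""
    steps = 1          # step 1: parse and, by default, accept
    belief = True

    if not is_correct(statement):
        steps += 1     # step 2: rejection is an extra operation
        belief = False

    return belief, steps

print(evaluate("2 + 2 = 4"))  # (True, 1)  accepted in a single step
print(evaluate("2 + 2 = 5"))  # (False, 2) rejection adds a step
```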
00:51:15.240 well I you know I'm not familiar
00:51:17.500 with that specific research and
00:51:19.540 so I haven't heard of that but I
00:51:22.300 you know to me none of these
00:51:23.660 things are surprising in any way
00:51:26.200 you just if you start thinking
00:51:28.580 about the brain is basically
00:51:30.380 trying to build models it's
00:51:31.900 constantly trying to build models
00:51:33.140 in fact you're as you walk around
00:51:35.440 the your life day-to-day moment to
00:51:37.800 moment and you see things you're
00:51:39.600 building the model the model is
00:51:40.680 being constructed even like where are
00:51:42.100 things in the refrigerator right
00:51:43.260 now your brain will update you
00:51:44.920 open the fridge oh the milk's on
00:51:46.060 the left today whatever and so
00:51:47.620 if someone gives you a
00:51:48.440 proposition like two plus two
00:51:49.840 equals five you know I don't
00:51:51.560 know what the evidence that you
00:51:52.540 believe it and then falsify it
00:51:54.340 but I certainly imagine you can
00:51:55.980 imagine it trying to see if it's
00:51:57.520 right it'd be like me saying to
00:51:58.760 you hey you know Sam the milk was
00:52:00.520 on the right in your
00:52:01.140 refrigerator and you'd have to
00:52:02.780 think about it for a second you
00:52:03.780 say well let me think okay I
00:52:05.040 opened it no last time I saw it
00:52:06.240 was on the left you know no
00:52:07.300 that's wrong but you would walk
00:52:08.960 through the process of trying to
00:52:10.420 imagine it and trying to see does
00:52:12.600 that fit my model and yes or no
00:52:14.580 and it's not surprising to me that you would have to process it as if it were true; it's just a matter of saying, can you imagine this, go imagine it, do you think it's right? It's not like I believed it and now I've falsified it; it's more
00:52:29.140 like well actually I'll just give you
00:52:30.600 one other datum here so because
00:52:32.500 it's just intellectually interesting
00:52:33.880 and socially all too
00:52:35.120 consequential this effect goes by
00:52:38.240 several names I think but you know
00:52:40.240 one is the illusory truth effect
00:52:41.980 which is even in the act of
00:52:43.760 disconfirming something you know to
00:52:46.040 be false you know a some specious
00:52:47.920 rumor or conspiracy theory merely
00:52:50.640 having to invoke it I mean have
00:52:52.680 people entertain the concept again
00:52:54.720 even in the context of debunking it
00:52:56.640 ramifies a belief in it in many many
00:53:01.500 people it's just oh yeah it becomes
00:53:03.060 harder to discredit things because
00:53:05.040 you have to talk about them in the
00:53:06.180 first place yeah I mean so look
00:53:08.320 we're talking about language here
00:53:10.100 right yeah and in language so much
00:53:12.800 of what we humans know is via
00:53:14.320 language and we have no idea if it's
00:53:16.420 true when someone says something to
00:53:17.860 you right how do you know and so you
00:53:20.520 it's you have to so I mean I gave an
00:53:23.660 example like I've never been to the
00:53:25.100 city of Havana well I believe it's
00:53:27.280 there I believe it's true but I don't
00:53:28.940 know I've never been there I've never
00:53:30.240 actually touched it or smelled it or
00:53:31.720 saw it so maybe it's false so I just
00:53:35.280 I mean this is one of the issues we
00:53:37.440 have I have a whole chapter on false
00:53:38.740 beliefs because so much of our
00:53:40.940 knowledge of the world is built up on
00:53:42.440 language, and the default assumption with language is that if someone says something, it's true; it's like it's a
00:53:50.100 pattern in the world you're going to
00:53:51.340 accept it if I touch a coffee cup I
00:53:53.240 accept that that's what it feels like, right
00:53:54.980 and if I look at something I accept
00:53:57.520 that's what it looks like well someone
00:53:58.960 says something my initial acceptance is
00:54:01.020 okay, that's what it is. And then, of course, sometimes someone says something that's false, and that's a problem, because just by
00:54:10.040 the fact that I've experienced it it's
00:54:13.680 now part of my world model, and if that's what you're referring to, I can see this is really a problem of language we face, and this is the
00:54:19.200 root cause of almost all of our false
00:54:20.760 beliefs is that someone just says
00:54:22.740 something enough times and that's good
00:54:25.560 enough and you have to seek out
00:54:28.600 contrary evidence for it, yeah
00:54:30.960 sometimes it's good enough even when
00:54:33.440 you're the one saying it, you just overhear the voice of your own mind saying it. And no, I know, it's been proven
00:54:41.060 that everyone is susceptible to that
00:54:43.080 kind of distortion of our beliefs or
00:54:45.340 especially our memories just remembering
00:54:47.420 something over and over again changes it
00:54:48.840 you know yeah okay so let's get back to
00:54:51.240 AI risk here because here's where I think
00:54:54.600 you and I have very different intuitions I
00:54:57.120 mean the intuition that many of us have
00:54:59.720 the people who have informed my views
00:55:02.620 here people like Stuart Russell who you
00:55:05.400 probably know at Berkeley yeah and Nick
00:55:07.800 Bostrom and Eliezer Yudkowsky and just
00:55:10.760 lots of people in this spot worrying about
00:55:14.460 the same thing to one another degree the
00:55:17.540 intuition is that you don't get a second
00:55:20.660 chance to create a truly autonomous
00:55:24.580 super intelligence right like it seems that
00:55:27.720 in principle this is the kind of thing you
00:55:30.040 have to get right on the first try right
00:55:32.840 and having to get anything right on the
00:55:34.140 first try just seems extraordinarily
00:55:36.540 dangerous because we we rarely if ever do
00:55:40.080 that when doing something complicated
00:55:42.040 and another way of putting this is that
00:55:44.460 it seems like in the space of all possible
00:55:47.620 super intelligent minds there are more ways
00:55:51.580 to build one that isn't perfectly aligned
00:55:54.780 with our long-term well-being then there are
00:55:59.460 ways to build one that is perfectly aligned
00:56:02.020 with our long-term well-being and you know
00:56:05.860 from my point of view, what your optimism and the optimism of many other people, you know, who take your side of this debate is based on is not really
00:56:16.300 taking the prospect of intelligence seriously
00:56:20.940 enough and the autonomy that is intrinsic to
00:56:24.180 it I mean if we actually built a true
00:56:27.600 general intelligence what that means is that
00:56:31.740 we would suddenly find ourselves in
00:56:33.620 relationship to something that we actually
00:56:37.860 can't perfectly understand it's like it will
00:56:41.740 be analogous to a strange person walking into
00:56:45.180 the room you know you're in relationship and if
00:56:48.400 this person can think a thousand times or a
00:56:52.120 million times faster than you can and has
00:56:56.160 goals that are less than perfectly aligned with
00:56:59.520 your own that's going to be a problem
00:57:02.160 eventually we can't find ourselves in a state
00:57:05.780 of perpetual negotiation with systems that
00:57:09.560 are more competent and powerful and intelligent
00:57:12.220 I think there's two
00:57:14.220 mistakes in your argument the first one is
00:57:16.760 you say my intuition and your intuition I think
00:57:19.520 most of the people who have this fear have an
00:57:22.080 intuition about what
00:57:23.520 if you'd like to continue listening to this
00:57:32.320 conversation you'll need to subscribe at
00:57:34.520 samharris.org once you do you'll get access to
00:57:37.460 all full-length episodes of the making sense
00:57:39.200 podcast along with other subscriber only content
00:57:41.980 including bonus episodes and amas and the
00:57:45.460 conversations I've been having on the waking up app
00:57:47.400 the making sense podcast is ad-free and relies
00:57:50.760 entirely on listener support and you can
00:57:53.120 subscribe now at samharris.org