Making Sense - Sam Harris - April 15, 2019


#153 — Possible Minds


Episode Stats

Length

1 hour and 38 minutes

Words per Minute

170.1

Word Count

16,761

Sentence Count

767

Misogynist Sentences

3

Hate Speech Sentences

8


Summary

This episode is the result of a series of interviews I conducted with three contributors to a new book edited by my friend and agent, John Brockman: Possible Minds, 25 Ways of Looking at AI. My guests are George Dyson, Alison Gopnik, and Stuart Russell. But first, a few announcements. I'm hosting the first Waking Up event at the Wiltern in Los Angeles on July 11th, where I'll be sitting down with the Tibetan Lama Mingyur Rinpoche to discuss his new book, In Love with the World, which is out now, along with the nature of mind and the practice of meditation, and we'll take audience questions. Tickets are selling quickly, and you can find more information at samharris.org/events. There is also news about the Waking Up app: we've added Annaka's meditations for children, metta meditations from me are on the way, you'll soon be able to sit in virtual groups with friends or colleagues, and a web-based version of the course is launching soon. You can get more information about all of that at wakingup.com. We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one at samharris.org, where you'll find our private RSS feed and other subscriber-only content, including full-length episodes of the Making Sense podcast. -Sam Harris


Transcript

00:00:00.000 Welcome to the Making Sense Podcast.
00:00:08.820 This is Sam Harris.
00:00:10.880 Just a note to say that if you're hearing this, you are not currently on our subscriber
00:00:14.680 feed and will only be hearing the first part of this conversation.
00:00:18.420 In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at
00:00:22.720 samharris.org.
00:00:24.060 There you'll find our private RSS feed to add to your favorite podcatcher, along with
00:00:28.360 other subscriber-only content.
00:00:30.520 We don't run ads on the podcast, and therefore it's made possible entirely through the support
00:00:34.640 of our subscribers.
00:00:35.880 So if you enjoy what we're doing here, please consider becoming one.
00:00:46.740 Welcome to the Making Sense Podcast.
00:00:48.920 This is Sam Harris.
00:00:51.280 Okay, a few things to announce here.
00:00:53.520 I have an event in Los Angeles on July 11th.
00:00:58.360 If you're a supporter of the podcast, you should have already received an email.
00:01:02.860 This is actually the first event for the app.
00:01:06.520 It's the first waking up event.
00:01:08.460 It is at the Wiltern on July 11th.
00:01:12.420 And it is with a great Tibetan Lama by the name of Mingyur Rinpoche.
00:01:18.800 And Mingyur is a fascinating guy.
00:01:21.200 He's the youngest son of the greatest Dzogchen master I ever studied with, Tulku Urgyen Rinpoche.
00:01:29.080 And I wrote about him in my book, Waking Up, so that name might be familiar to some of you.
00:01:34.600 I studied with him in Nepal about 30 years ago.
00:01:37.660 And I've never met Mingyur.
00:01:41.780 And he's about, I don't know, seven years younger than me.
00:01:45.760 I was in my 20s when I was in Nepal, and he was a teenager.
00:01:50.080 And he was on retreat for much of that time.
00:01:54.520 He did his first three-year retreat when he was, I think, 13.
00:01:58.400 And he was always described as the superstar of the family.
00:02:03.460 I studied with two of his brothers, Chökyi Nyima Rinpoche and Tsoknyi Rinpoche.
00:02:07.800 But I've never met Mingyur, and I'm really looking forward to it.
00:02:12.560 He has a very interesting story, because at some point he started teaching and started running monasteries.
00:02:19.960 I believe he has three monasteries he's running, as well as a foundation.
00:02:23.560 But then in 2011, when he was 36, he just disappeared from his monastery in India
00:02:32.260 and spent the next four and a half years wandering around India as a mendicant yogi,
00:02:40.720 living in caves and on the streets and encountering all kinds of hardships.
00:02:46.720 I believe he got very sick and almost died.
00:02:49.540 Anyway, he's written a book about this, titled In Love with the World.
00:02:54.360 Which I haven't read yet, but I will obviously read it before our event.
00:02:58.860 And we will discuss the book and the nature of mind and the practice of meditation.
00:03:05.160 And take your questions.
00:03:07.720 And again, that will be happening at the Wiltern in Los Angeles on July 11th.
00:03:13.240 And you can find more information on my website at samharris.org forward slash events.
00:03:19.780 And tickets are selling quickly there, so if you care about that event, I wouldn't wait.
00:03:27.460 And the audio will eventually be released on the podcast.
00:03:29.840 The Waking Up app.
00:03:35.200 There have been a few changes.
00:03:37.200 We've added Annaka's Meditations for Children, which are great.
00:03:41.300 And there are some Metta Meditations coming from me as well.
00:03:47.380 Also, we'll soon be giving you the ability to sit in groups,
00:03:51.820 where you can organize a virtual group with your friends or colleagues,
00:03:57.100 and sit together, either in silence or listening to a guided meditation.
00:04:01.160 And very soon there will be a web-based version of the course.
00:04:05.940 You can get more information about all that at wakingup.com.
00:04:10.540 So this podcast is the result of three interviews.
00:04:14.320 And it is organized around a new book from my agent, John Brockman, who edited it.
00:04:21.600 And the book is titled,
00:04:22.760 Possible Minds, 25 Ways of Looking at AI.
00:04:25.860 And you may have heard me mention John on the podcast before.
00:04:30.400 He's not just a book agent,
00:04:32.020 though between him and his wife, Katinka Mattson, and their son, Max Brockman,
00:04:36.240 they have a near monopoly on scientific nonfiction.
00:04:40.200 It's really quite impressive.
00:04:42.700 Many of the authors you know and admire,
00:04:45.420 Steve Pinker, Richard Dawkins, Dan Dennett,
00:04:48.280 and really most other people in that vein you could name,
00:04:52.780 and many who have been on this podcast,
00:04:54.280 are represented by them.
00:04:56.880 But John is also a great connector of people and ideas.
00:05:02.160 He seems to have met every interesting person in both the literary and art worlds
00:05:06.740 since around 1960.
00:05:09.440 And he's run the website edge.org for many years,
00:05:13.300 which released its annual question for 20 years,
00:05:18.180 and got many interesting people to write essays for that.
00:05:22.240 And there have been many books published on the basis of those essays.
00:05:25.840 He's also put together some great meetings and small conferences.
00:05:30.160 So he's really facilitated dialogue to an unusual degree,
00:05:34.620 and at a very high level.
00:05:36.340 And he's written his own books,
00:05:38.340 The Third Culture and By the Late John Brockman.
00:05:41.220 But this new book is another one of his anthologies,
00:05:44.180 and it's organized around a modern response to Norbert Wiener's book,
00:05:50.480 The Human Use of Human Beings.
00:05:52.940 Wiener was a mathematical prodigy and the father of cybernetics,
00:05:57.660 and a contemporary of Alan Turing and John von Neumann and Claude Shannon
00:06:03.200 and many of the people who are doing foundational work on computation.
00:06:06.660 And Wiener's thoughts on artificial intelligence
00:06:09.800 anticipate many of our modern concerns.
00:06:14.180 Now, I didn't wind up contributing to this book.
00:06:16.940 I had to sit this one out,
00:06:18.520 but I will be speaking with three of the authors who did.
00:06:22.460 The first is George Dyson.
00:06:25.180 George is a historian of technology,
00:06:27.120 and he's the author of
00:06:28.620 Darwin Among the Machines and
00:06:30.700 Turing's Cathedral.
00:06:31.840 My second interview is with Allison Gopnik.
00:06:35.500 Allison is a developmental psychologist at UC Berkeley.
00:06:38.840 She's a leader in the field of children's learning and development,
00:06:42.060 and her books include The Philosophical Baby.
00:06:45.800 And finally, I'll be speaking with Stuart Russell,
00:06:47.920 who's been on the podcast before.
00:06:49.960 Stuart is a professor of computer science and engineering at UC Berkeley,
00:06:53.520 and he's also the author of the most widely used textbook on AI,
00:06:58.440 titled Artificial Intelligence, A Modern Approach.
00:07:01.840 This is a deep look at the current state
00:07:05.720 and near and perhaps distant future of AI.
00:07:11.360 And now, without further delay, I bring you George Dyson.
00:07:20.860 I am here with George Dyson.
00:07:22.880 George, thanks for coming on the podcast.
00:07:25.000 Thank you. Happy to be here.
00:07:26.900 So, the occasion for this conversation
00:07:29.900 is the publication of our friend and mutual agent's book,
00:07:36.940 Possible Minds, 25 Ways of Looking at AI.
00:07:40.400 And this was edited by the great John Brockman.
00:07:42.860 I am not in this book.
00:07:43.840 I could not get my act together when John came calling,
00:07:46.360 so unfortunately, I'm not in this very beautiful and erudite book.
00:07:50.860 Previously, you wrote Turing's Cathedral,
00:07:54.360 so you've been thinking about computation for quite some time.
00:07:58.640 How do you summarize your intellectual history and what you focused on?
00:08:04.740 Well, my interest goes back much farther than that.
00:08:08.820 Turing's Cathedral is a recent book.
00:08:11.320 So, 25 years ago, I was writing a book called Darwin Among the Machines
00:08:15.300 at a time when there actually were no publishers publishing, you know,
00:08:19.280 any general literature about computers except Addison-Wesley,
00:08:23.780 so they published it, thanks to John.
00:08:27.120 The thing to remember about John is John and Katinka's family business,
00:08:30.580 and Katinka's father was a literary agent,
00:08:33.640 and John's father, I think, was in the flower merchant business.
00:08:38.860 So, they have this very great combination of flowers have to be sold the same day
00:08:45.340 and books have to last forever.
00:08:47.080 It sort of works really well together.
00:08:49.020 Yeah.
00:08:49.640 And your background is, you also have a family background that's relevant here
00:08:54.100 because your father is Freeman Dyson,
00:08:56.800 who many people will be aware is a famous physicist.
00:09:02.320 He got inducted into the Manhattan Project right at the beginning as well, right?
00:09:07.540 He was at the Institute for Advanced Study.
00:09:10.120 Correct my sequencing here.
00:09:11.500 First of all, the important thing in my background is not so much my father,
00:09:15.980 but my mother.
00:09:16.560 My mother was a mathematical logician.
00:09:18.800 So, she worked very closely with Kurt Gödel and, you know,
00:09:23.220 knew Alan Turing's work in logic very well,
00:09:26.340 and that's where the world of computers came out of that.
00:09:28.920 My father, they both came to America at the same time in 1948.
00:09:33.840 So, the Manhattan Project was long over.
00:09:36.420 My father had nothing to do with it.
00:09:38.300 Oh, okay.
00:09:39.020 He was working for the conventional bombing campaign
00:09:43.640 for the Royal Air Force during the war, but not the Manhattan Project.
00:09:48.620 So, your mother, so you have deep roots in the related physics of
00:09:53.320 and logic and mathematics of information,
00:09:56.780 which has given us this now century of, or near century, of computation
00:10:02.820 and has transformed everything.
00:10:04.760 And this is a fascinating intellectual history because the history of computing
00:10:10.740 is intimately connected with the history of war,
00:10:14.480 specifically, you know, code breaking and bomb design.
00:10:18.380 And you did cover this in Turing's Cathedral.
00:10:21.820 You're often described as a historian of technology.
00:10:26.140 Is that correct?
00:10:27.560 Does that label fit well with you?
00:10:29.900 That's true, yes.
00:10:30.560 I mean, I'm more a historian of people, of the people who build the technologies,
00:10:34.960 but somehow the label is historian of technology.
00:10:38.080 I'm not a historian of science.
00:10:39.520 That's also, I don't know why that's always, you know,
00:10:41.900 it's just sort of a pigeonhole they put you into.
00:10:44.120 So, you know, maybe we can walk through this topic by talking about some of the people.
00:10:51.920 There are some fascinating characters here,
00:10:53.960 and the nominal inspiration for this conversation, for John's book,
00:10:59.160 was his discovery or rediscovery of Norbert Wiener's book,
00:11:04.960 The Human Use of Human Beings.
00:11:06.500 But there were two, there were different paths through the history of thinking about information
00:11:13.680 and computation and the prospect of building intelligent machines.
00:11:17.640 And Wiener represented one of them, but there was another branch that became more influential,
00:11:24.160 which was due to Alan Turing and John von Neumann.
00:11:28.280 Maybe, I guess, who should we start with?
00:11:29.920 Probably Alan Turing at the outset here.
00:11:32.640 How do you think of Alan Turing's contribution to the advent of the computer?
00:11:39.220 Well, it was very profound.
00:11:41.680 Norbert Wiener was working, you know, in a similar way at almost the same time.
00:11:46.880 So they all sort of came out of this together.
00:11:49.100 Their sort of philosophical grandfather was Leibniz,
00:11:55.580 the German computer scientist and philosopher.
00:11:58.840 They all sort of were disciples of Leibniz and then, you know, executed that in different ways.
00:12:07.000 Von Neumann and Wiener worked quite closely together at one time.
00:12:11.680 Turing and Wiener never really did work together, but they were very aware of each other's work.
00:12:17.840 The young Alan Turing, which also people forget, he came to America in 1936.
00:12:24.680 So he was actually in New Jersey when his great sort of paper on computation was published.
00:12:31.140 So he was there in the same building with von Neumann.
00:12:33.700 Von Neumann saw he was a bright kid and offered him a job, which he didn't take.
00:12:37.460 He preferred to go back to England.
00:12:39.480 Yeah, so that's, I don't know how to think about that.
00:12:43.060 So just bring your father into the picture here and perhaps your mother, if she knew all these guys as well.
00:12:50.200 Did they know von Neumann and Turing and Claude Shannon and Wiener?
00:12:56.080 What of these figures do you have some family lore around?
00:13:00.780 Yes and no.
00:13:02.660 They knew, you know, they both knew Johnny von Neumann quite well because he was sort of in circulation.
00:13:09.880 My father had met Norbert Wiener, but it never worked with him, didn't really know him.
00:13:15.460 And neither of them actually met Alan Turing.
00:13:19.020 But of course, my father came from Cambridge where Turing had been sort of a fixture.
00:13:23.300 What my father said was that when, you know, he read Turing's paper when it came out and he, you know, he thought like many people, he thought this was sort of the least likely, you know, this was interesting logic, but it would have no great effect on the real world.
00:13:38.300 I think my mother was probably maybe a little more prescient that, you know, logic really would change the world.
00:13:45.680 Von Neumann is perhaps the most colorful character here.
00:13:49.460 I mean, there seems to be an absolute convergence of opinion that regardless of the fact that he may not have made the greatest contributions in the history of science,
00:14:04.480 he seemed to have just bowled everyone over and given a lasting impression that he was the smartest person they had ever met.
00:14:12.160 Does that ring true in the family as well, or have estimations of von Neumann's intelligence been exaggerated?
00:14:20.600 No, I don't think that's exaggerated at all.
00:14:22.700 I mean, he was impressively sharp and smart, extremely good memory, you know, phenomenal calculation skills, sort of everything.
00:14:31.340 Plus he had this, you know, his real genius was not entrepreneurship, but just being able to put everything together.
00:14:40.020 His father was an investment banker, so he had no shyness about just asking for money.
00:14:47.020 I mean, that was sort of in some ways almost his most important contribution was he was the guy who could get the money to do these things that other people simply dreamed of.
00:14:56.340 But he got them done, and he hired the right people.
00:14:59.620 He's sort of like the orchestra conductor who'd get the best violin player and put them all together.
00:15:05.920 Yeah, and these stories are, I think I've referenced them occasionally on the podcast,
00:15:11.960 but it is a, it's astounding to just read this record, because you have the, really the greatest physicists and mathematicians of the time,
00:15:23.140 all gossiping, essentially, about this one figure who, certainly Edward Teller was of this opinion,
00:15:31.100 and I think, you know, he's, I think there's a quote from him somewhere, which says that, you know,
00:15:36.260 if we ever evolve into a master race of super intelligent humans, you will recognize that von Neumann was the prefiguring example.
00:15:47.460 Like, this is, this is how we will appear when we are fundamentally different from what we are now.
00:15:52.000 Yeah, it's sort of, in other ways, it's a great tragedy, because he was doing really good work and,
00:16:22.000 you know, pure mathematics and logic and game theory, quantum mechanics, and those kinds of things,
00:16:29.060 and then got completely distracted by the weapons and the computers.
00:16:34.060 Never, never really got back to any real science, and then died young, like Alan Turing, the very same thing.
00:16:40.160 So we sort of lost these two brilliant minds who not only died young, but sort of professionally died very early,
00:16:48.000 because they got sucked into the war, never came back.
00:16:50.400 Yeah, there was an ethical split there, because Norbert Wiener, who was, again, part of this conversation fairly early,
00:16:59.860 I think it was 47, published a piece in The Atlantic, more or less vowing never to let his intellectual property
00:17:08.560 have any point of contact with military efforts.
00:17:12.200 And so at the time, it was all very fraught, seeing that physics and mathematics was the engine of destruction, however ethically purposed.
00:17:23.220 You know, obviously, there's a place to stand where the Manhattan Project looks like a very good thing,
00:17:28.620 you know, that we won the race to fission before the Nazis could get there.
00:17:33.560 But it's an ethically complicated time, certainly.
00:17:38.480 Yes, and that's where, you know, Norbert Wiener worked very intensively and effectively for the military in both World War I.
00:17:46.420 He was at the proving ground in World War I and World War II, but he worked on anti-aircraft defense.
00:17:54.380 And what people forget was that it was pretty far along at Los Alamos when we knew, when we learned that the Germans were not actually building nuclear weapons.
00:18:04.920 And at that point, people like Norbert Wiener wanted nothing more to do with it.
00:18:09.520 And particularly, Norbert Wiener wanted nothing to do with the hydrogen bomb.
00:18:13.020 There was no military justification for a hydrogen bomb.
00:18:17.400 The only use of those weapons still today, it's against, you know, it's genocide against civilians.
00:18:23.700 They have no military use.
00:18:26.300 Do you recall the history on the German side?
00:18:29.560 I know there is a story about Heisenberg's involvement in the German bomb effort,
00:18:36.520 but I can't remember if rumors of his having intentionally slowed that or not are, in fact, true.
00:18:44.180 Well, that's a whole other subject.
00:18:47.480 I'm trying to stay.
00:18:48.740 Stay away from?
00:18:49.700 Not getting into that, and I'm not the expert on that.
00:18:52.780 But what little I do know is that it became known at Los Alamos later in the project that there really was no German threat,
00:19:03.800 yet then the decision was made to keep working on it.
00:19:06.480 There were a few people.
00:19:07.680 Now, there's one whose name I don't remember who it was.
00:19:10.840 You know, one or two physicists actually quit work when they learned that the German program was not a real threat,
00:19:17.720 but most people chose to keep working on it.
00:19:20.940 That was a very moral decision.
00:19:23.100 Yeah, but how do you view it?
00:19:25.440 Do you view it as a straightforward good one way or the other, or how would you have navigated that?
00:19:33.100 Extremely complicated, very, very complex.
00:19:35.100 I mean, of the, you know, those people you were talking about, the Martians, the sort of extraterrestrial Hungarians,
00:19:41.880 they all kept working on the weapons except Leo Szilard, who actually, he was at Chicago.
00:19:49.300 He'd been sort of excommunicated from Los Alamos.
00:19:52.300 Groves wanted to have him put in jail, and he circulated a petition.
00:19:57.160 I think it was signed by 67 physicists from Chicago to not use the weapon against the civilians of Japan,
00:20:04.380 to at least give a demonstration against an unpopulated target.
00:20:09.760 And that petition never even reached the president.
00:20:13.220 It was sort of embargoed.
00:20:14.860 I've never understood why a demonstration wasn't a more obvious option.
00:20:21.700 I mean, it was the fear that it wouldn't work and...
00:20:25.080 Yes, because they didn't know, and they had only a very few weapons at that time.
00:20:31.340 They had two or three, so there were a lot, but that's, again, a story that's still to be figured out.
00:20:37.920 And I think the people like von Neumann carried a lot of that to the grave with them.
00:20:42.880 But, you know, Edward Teller's answer to the Szilard petition was, you know,
00:20:49.100 I'd love to sign your petition, but I think his exact words were,
00:20:52.800 the things we are working on are so terrible that no amount of fiddling with politics will save our souls.
00:20:58.880 That's pretty much an exact quote.
00:21:01.220 Yeah, so I think Teller was, first Teller was, yeah, another one of these Hungarian mutants,
00:21:06.580 along with von Neumann.
00:21:07.580 And the two of them really inspired the continued progress past a fission weapon and on to a fusion one.
00:21:18.000 And computation was an absolutely necessary condition of that progress.
00:21:24.760 So the story of the birth of the computer is largely, or at least the growth of our power in building computers,
00:21:34.340 is largely the story of the imperative that we felt to build the H-bomb.
00:21:41.720 Right. And what's weird is that we're sort of stuck with it.
00:21:44.220 Like, you know, for 60 years, we've been stuck with this computational architecture
00:21:49.620 that was developed for this very particular problem to do numerical hydrodynamics
00:21:54.940 to solve this hydrogen bomb question, to know.
00:21:59.460 The question was, would the Russians, they knew the Russians were working on it
00:22:04.040 because von Neumann had worked intimately with Klaus Fuchs, who turned out to be a Russian spy.
00:22:10.340 So they knew the Russians, sort of knew everything they did.
00:22:13.620 But the question was, was it possible?
00:22:15.640 And you needed computers to figure that out, and they got the computer working.
00:22:20.360 And then, you know, now, 67 years later, our computers are still exact copies of that particular machine
00:22:26.560 they built to do that job.
00:22:28.800 It's a very—none of those people would—I think they would find it incomprehensible
00:22:32.420 if they came back today and saw that, you know, we hadn't really made any architectural improvements.
00:22:38.240 Is this a controversial position at all in computer circles,
00:22:42.500 or is this acknowledged that having the von Neumann architecture, as I think it is still called,
00:22:48.840 we got stuck in this legacy paradigm, which is by no means necessarily the best for building computers?
00:22:59.220 Yeah, no, they knew it wasn't.
00:23:00.220 But, I mean, already, even by the time Alan Turing came to Princeton,
00:23:05.940 he was working on completely different kinds of computation.
00:23:08.700 He was already sort of bored with the Turing machine.
00:23:11.660 He was interested in much more interesting sort of non-deterministic machines.
00:23:16.420 And the same with von Neumann.
00:23:17.660 He, you know, long before that project was finished, he was thinking about other things.
00:23:22.900 And what's interesting about von Neumann is he only has one patent.
00:23:26.140 And the one patent he took out was for a completely non-von Neumann computer
00:23:30.940 that IBM bought from him for $50,000.
00:23:34.680 This is another strange story that hasn't quite, I think, been figured out.
00:23:39.600 Presumably that was when $50,000 really meant something.
00:23:42.360 It was an enormous amount of money.
00:23:44.260 I mean, just a huge amount of money.
00:23:46.420 So, yeah, so he—they all wanted to build different kinds of computers.
00:23:51.600 And if they had lived, I think they would have.
00:23:54.800 In your contribution to this book, you talk about the prospect of analog versus digital computing.
00:24:03.280 Make that intelligible to the non-computer scientist.
00:24:08.820 Yes.
00:24:09.120 So there are really two very different kinds of computers.
00:24:14.380 There's—it sort of goes, again, back to Turing in sort of a mathematical sense.
00:24:18.620 There are continuous functions that vary continuously, which is sort of how we perceive time or the frequency of sound or those sorts of things.
00:24:28.300 And then there are discrete functions, the sort of ones and zeros and bits that took over the world.
00:24:34.160 And Alan Turing gave this very brilliant proof of what you could do with a purely digital machine.
00:24:40.400 But both Alan Turing and von Neumann were almost, you know, sort of at the end of their lives, obsessed with the fact that nature doesn't do this.
00:24:51.180 Nature does this in a—in our genetic systems.
00:24:54.280 We use digital coding because digital coding is, as Shannon showed us, is so good at error correction.
00:25:01.640 But, you know, continuous functions in analog computing are better for control.
00:25:07.700 All control systems in nature, all nervous systems, the human brain, the brain of a fruit fly, the brain of a mouse, those are all analog computers, not digital.
00:25:17.700 There's no digital code in the brain.
00:25:21.120 And von Neumann, you know, wrote a whole book about that that people have misunderstood.
00:25:25.060 I guess you could say that whether or not a neuron fires is a digital signal, but then the analog component is downstream of that, just the different synaptic weights and receptors.
00:25:37.400 Right, but there's no code.
00:25:38.260 There's no code with a logical meaning.
00:25:41.600 It's a—you know, the complexity is not in the code.
00:25:44.840 It's in the topology and the connections of the network.
00:25:49.000 Everybody knew that.
00:25:50.300 You can take apart a brain, you don't find any sort of digital code.
00:25:53.820 There's no—I mean, now we're sort of obsessed with this idea of algorithms, which is what Alan Turing gave us.
00:25:59.140 But there are no algorithms in a nervous system or a brain.
00:26:04.720 That's a much, much, much sort of higher-level function that comes later.
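A minimal sketch of the distinction in this exchange, not from the episode: in a toy "leaky integrate-and-fire" neuron, the output spike is an all-or-nothing event, but everything that produces it, the synaptic weights, the membrane potential, the leak, is a continuous quantity with no code attached. All names and parameter values below are illustrative assumptions.

```python
# Toy leaky integrate-and-fire neuron (illustrative only).
# The spike is a discrete, "digital-looking" event; the quantities that
# determine it (weights, membrane potential, leak) are continuous.
import random

def simulate_neuron(input_rates, weights, threshold=1.0, leak=0.9, steps=100):
    """input_rates: per-step firing probabilities of each presynaptic cell."""
    v = 0.0                    # continuous (analog) membrane potential
    spike_times = []
    for t in range(steps):
        v *= leak              # passive decay toward rest
        for rate, w in zip(input_rates, weights):
            if random.random() < rate:   # a presynaptic spike arrives
                v += w                   # continuous synaptic weight, not a symbol in a code
        if v >= threshold:
            spike_times.append(t)        # all-or-nothing output event
            v = 0.0                      # reset after firing
    return spike_times

print(simulate_neuron(input_rates=[0.2, 0.5], weights=[0.3, 0.45]))
```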
00:26:10.620 Well, so you introduced another personality here and a concept.
00:26:14.360 So let's just do a potted bio on Claude Shannon and this notion that digitizing information was somehow of value with respect to error correction.
00:26:27.860 Yes, I mean, what Claude Shannon's great contribution was sort of modern information theory, which you can make a very good case.
00:26:35.640 He actually sort of took those ideas from Norbert Wiener, who was explaining them to him during the war.
00:26:40.600 But it was Shannon who published the great manifesto on that, proving that you can sort of communicate with reliable accuracy given any arbitrary amount of noise by using digital coding.
00:26:56.460 And that none of our computers would work without that, the fact that basically your computer is a communication device and has to communicate these hugely complicated states from one fraction of a microsecond to the next billions of times a second.
00:27:10.980 And the fact that we do that perfectly is due to Shannon's, you know, his theory and his model of how can you do that in an accurate way.
00:27:18.080 Is there a way to make that intuitively understandable why that would be so?
00:27:22.720 I mean, what I picture is like cogs in a gear where it's like you're either all in one slot or you're all out of it.
00:27:30.820 And so any looseness of fit keeps reverting back to you fall back into the well of the gear or you slip out of it.
00:27:39.360 Whereas something that's truly continuous, that is to say analog, admits of errors that are undetectable because you're just, you're kind of sliding off a more continuous, smoother surface.
00:27:53.640 Do you have a better?
00:27:54.440 Yeah, that's a good, that's a very good way to explain it.
00:27:57.060 Now it has this fatal flaw that you sort of, there's always a price for everything.
00:28:02.220 And so you, you can get this perfect digital accuracy where you can make sure that every bit, billions of bits and every bit is in the right place, your software will work.
00:28:15.840 But the fatal flaw is that if for some reason a bit isn't in the right place, then the whole machine grinds to a halt.
00:28:22.740 Whereas the analog machine will keep going as much, much more robust against failure.
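A toy illustration, not from the episode, of the trade-off described here: a digital relay can snap each noisy value back to the nearest valid level, so small errors never accumulate, whereas an analog relay has nothing to snap to and drifts. The noise level and function names are assumptions made for the sketch.

```python
# Passing a value through many noisy hops (illustrative only).
# Digital: re-threshold after every hop, so small noise is erased each time.
# Analog: no valid levels to snap to, so the noise accumulates as drift.
import random

def noisy(x, sigma=0.05):
    return x + random.gauss(0.0, sigma)

def relay(value, hops=1000, digital=True):
    for _ in range(hops):
        value = noisy(value)
        if digital:
            value = 1.0 if value > 0.5 else 0.0   # error correction by requantizing
    return value

print("digital bit after 1000 hops:", relay(1.0, digital=True))    # almost surely still exactly 1.0
print("analog value after 1000 hops:", relay(1.0, digital=False))  # has wandered away from 1.0
```

The flip side, the "fatal flaw" mentioned above, is that when noise does push a digital value past the threshold, the bit is simply wrong, whereas the analog value degrades gracefully rather than failing outright.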
00:28:27.960 So are you in touch with people who are pursuing this other line of building intelligent machines now?
00:28:36.420 I mean, what does analog computation look like circa 2019?
00:28:41.160 Well, it's, it's coming at us from two, in two directions.
00:28:44.840 There's bottom up and there's sort of top down.
00:28:47.740 And the bottom up is actually extremely interesting.
00:28:50.940 And I'm, you know, I'm professionally not a computer scientist.
00:28:53.900 I just, you know, I'm a historian.
00:28:56.140 So I look at the past, but occasionally I get dragged into a meeting a couple of years ago that was actually held at Intel.
00:29:05.340 You'll have a meeting like that and they like the voice of a historian there.
00:29:08.040 So I get to go.
00:29:08.960 And there, this was an entire meeting of people working on building analog chips from the bottom up.
00:29:14.640 Using the same technology we use to build digital computers, but to build completely different kinds of chips that actually do analog processing on them.
00:29:23.880 And that's extremely exciting.
00:29:25.020 I think it's, I think it's going to change the world the same way the microprocessor changed the world.
00:29:30.240 We're sort of at the stage where, like we were when we had the first four-bit calculator you could buy.
00:29:36.360 And then suddenly, you know, somebody figured out how to play a game with it.
00:29:39.200 The whole thing happened.
00:29:42.180 So that's from the bottom up.
00:29:44.260 Some of these chips are going to do very interesting things like voice recognition, smell, things like that.
00:29:48.760 Of course, the big driver, you know, sort of killer app is drones, which is sort of the equivalent of the hydrogen bomb.
00:29:55.160 That's what's driving this stuff.
00:29:57.500 And self-driving cars.
00:30:00.800 And cell phones.
00:30:01.960 And then from the top down is a whole other thing.
00:30:04.960 That's the part where I think we're sort of missing something.
00:30:07.480 That if you look at the sort of internet as a whole, or the whole computational ecosystem, particularly on the commercial side,
00:30:15.560 enormous amount of the interesting computing we're doing now is back to analog computing,
00:30:19.720 where we're computing with continuous functions, it's pulse frequency coded, something like, you know, Facebook or YouTube doesn't care that, you know, the file that somebody clicks on,
00:30:32.520 they don't care what the code is, they just sort of care, the meaning is in the frequency that it's connected to,
00:30:37.840 very much the same way a brain or a nervous system works.
00:30:40.560 So if you look at these large companies, Facebook or Google or something, actually, you know, they're large analog computers.
00:30:48.360 The digital is not replaced, but another layer is growing on top of it.
00:30:53.720 The same way that after World War II, we had all these analog vacuum tubes and the oddballs like Alan Turing and von Neumann and even Norbert Wiener figured out how to use the analog components to build digital computers.
00:31:06.760 And that was the digital revolution.
00:31:08.580 But now we're sort of right in the midst of another revolution where we are taking all this digital hardware and using it to build analog systems.
00:31:18.260 But somehow people don't want to talk about that analog is still sort of seen as this archaic thing, I believe, differently.
00:31:26.300 In what sense is an analog system supervening on the digital infrastructure?
00:31:33.260 Are there other examples that can make it more vivid for people?
00:31:35.980 Yes.
00:31:37.220 I mean, analog is much better.
00:31:39.460 Like, nature uses analog for control systems.
00:31:42.360 So you take an example like, you know, an obvious one would be Google Maps with live traffic.
00:31:50.780 So you have all these cars driving around and people have their digital cell phone in the car.
00:31:58.280 And you sort of have this deal with Google where Google will tell you what the traffic is doing and the optimum path.
00:32:05.980 If you tell Google how fast, where you are and how fast you're moving.
00:32:11.200 And that becomes an analog computer, sort of an analog system where there is no digital model of the, you know, all the traffic in San Francisco.
00:32:22.860 The actual system is its own, it is its own model.
00:32:28.580 And that's sort of von Neumann's definition of an organism or a complex system, that it constitutes its own simplest behavioral description.
00:32:37.980 There is no trying to formally describe what's going on makes it more complicated, not less.
00:32:43.620 There's no way to simplify that whole system except the system itself.
00:32:49.400 And so you're using, you know, Facebook's very much the same way.
00:32:53.340 It'd be impossible to build.
00:32:54.760 You could build a digital model maybe of, you know, social life in a high school.
00:32:58.940 But if you try to do social life and anything large, it becomes just collapses under its own complexity.
00:33:05.840 So you just give everybody a copy of Facebook, which is a reasonably simple piece of code that lives on their mobile device.
00:33:13.680 And suddenly you have a full-scale model of the actual thing itself.
00:33:18.520 So the social graph is the social graph.
00:33:23.120 And that's what's a huge transition.
00:33:25.940 We've sort of, I think, is at the root of some of the unease people are feeling about some of these particular companies.
00:33:33.620 It's that suddenly, you know, it used to be Google was someplace where you would go to look something up.
00:33:39.340 And now it really effectively is becoming what people think.
00:33:43.540 And the big fear is that something like Facebook becomes what your friends are.
00:33:48.760 And that can be good or bad, but it's a real, you know, just in an observational sense, it's something that's happening.
00:33:56.820 So what most concerns you about how technology is evolving at this point?
00:34:03.000 Well, I wear different hats there, you know.
00:34:05.760 I mean, my other huge part, most of my life was spent as a boat builder.
00:34:10.280 And I still, I'm right here in the middle of a, you know, kayak building workshop and want nothing to do with computers.
00:34:18.340 I mean, that's really why I started studying them and writing about them because I was not against them, but, you know, quite suspicious.
00:34:25.660 So I, and that's, you know, the big thing about artificial intelligence, AI, it's not a threat.
00:34:34.420 But the threat is that, not that machines become more intelligent, but that people become less intelligent.
00:34:39.700 So I spent a lot of time out in the wild with, you know, no computers at all, lived in a treehouse for three years.
00:34:46.820 And you can lose that sort of natural intelligence, I think, as a species reasonably quickly if we're not careful.
00:34:53.820 So that, that's what worries me.
00:34:55.420 I mean, obviously the machines are clearly taking over.
00:34:57.780 There's no, if you look at the, just the span of my life from when von Neumann built that one computer to where we now, you know, almost biological growth of, of this technology.
00:35:10.780 So as a, you know, sort of as a member of living things, it's, it's, it's something to be concerned about.
00:35:16.780 Do you know, uh, David Krakauer from the, uh, Santa Fe Institute?
00:35:21.060 Yes, I don't know him, but I've, you know, I've, I've met him and talked to him.
00:35:24.000 Yeah, because he, he has a rap on this very point where he distinguishes between, I think his phrasing is cognitively competitive and cognitively cooperative technology.
00:35:35.860 So there are forms of technology that compete with our intelligence on some level, and insofar as we outsource our cognition to them, we get less and less competent.
00:35:48.300 And then there is other forms of technology where we actually become better even in the absence of the technology.
00:35:55.000 And so the, unfortunately, the only example of the latter that I can remember is the one he used on the podcast was the abacus,
00:36:02.420 which apparently if you learn how to use an abacus, well, you internalize it and you can do calculations you couldn't otherwise do in your head, in the absence even of the physical abacus.
00:36:13.880 Whereas if you're relying on a pocket calculator or your phone or for arithmetic or you're relying on GPS, you're eroding whatever ability you had in those areas.
00:36:24.880 So if we get our act together and all of this begins to move in a better direction or something like an optimal direction, what does that look like to you?
00:36:35.580 If I told you 50 years from now we arrived at something just far better than any of us were expecting with respect to this marriage of increasingly powerful technology with some regime that conserves our deepest values, how do you imagine that looking?
00:36:57.200 Well, it's, yeah, it's certainly possible and I guess that's where I would be slightly optimistic in that sort of my knowledge of human culture goes way back and we, we grew up, we, you know, as a species, I'm speaking of just all humanity.
00:37:14.700 Actually, most of our history was, you know, was among animals who were bigger and more powerful than we were and things that we completely didn't understand and we sort of made up our, not religions, but just views of the world that, that, that we couldn't control everything.
00:37:35.180 We had to, we had to, we had to live with it and I think in a strange way we're kind of returning to that, that childhood of the species in a way that we're, we're building these systems that we no longer have any control over and we in fact no longer even have any real understanding of.
00:37:53.820 So we're sort of, so we're sort of in some ways back to that world that we're, that we are, you know, originally we're quite comfortable with where we're, where we're at the power of things that we don't understand.
00:38:02.920 Sort of mega fauna and I think that's, that could be a good thing, it could be a bad thing, I don't know, but I'm, it doesn't, it doesn't surprise me.
00:38:12.520 And I'm just personally, I'm interested, like if you take, you know, to get back to why we're here, which is John's book, almost everyone in that book is talking about domesticated artificial intelligence.
00:38:26.980 I mean, they're talking about sort of commercial systems, products that you can buy, things like that.
00:38:32.500 I mean, I'm just personally, I'm in, you know, I'm sort of a naturalist and, and I'm interested in wild AI that, you know, what, what evolves completely in the wild out of, out of human control completely.
00:38:43.040 And that's a very interesting part of the whole sphere that, you know, that doesn't, doesn't get looked at that much.
00:38:49.160 It's sort of the focus now is so much on, you know, marketable captive AI, self-driving cars, things like that, that, but it's the wild stuff that, that to me, that's.
00:39:02.500 Like, I'm not, I'm not afraid of bad AI, but I'm afraid, I'm very afraid of good AI, the kind of AI where some ethics board decides what's good and what's bad.
00:39:12.280 I don't think that's, what's going to be really important.
00:39:14.800 But don't you see the possibility that, so what we're talking about here is powerful, increasingly powerful AI, so increasingly competent AI.
00:39:23.380 But those of us who are worried about the prospect of building what's now called AGI, artificial general intelligence, that is, that proves bad is, is just based on the assumption that there are, there are many more ways to build AGI that is not ultimately aligned with our interests than there are ways to build it perfectly aligned with our interests.
00:39:47.920 Which is to say, we could build the, the megafauna that tramples us perhaps more easily than we could build the megafauna that lives side by side with us in a durably benign way.
00:40:02.200 You don't share that concern?
00:40:32.180 To run, so this view is that, well, the programmers are in control, but if you have non-algorithmic, there is, there is no program, there's no, there's, by definition, you don't control it.
00:40:47.460 And to expect control is, is absolutely foolish.
00:40:50.900 But I think it's much better to be realistic and assume that you won't, you won't, won't have control.
00:40:55.560 Well, so then why isn't your bias here one of the true counsel of fear, which says we shouldn't be building machines more powerful than we are?
00:41:06.360 Well, we probably shouldn't, but we are.
00:41:09.940 I mean, the reality, the fact is we, we're, we've done it.
00:41:12.500 I mean, it's not something that we're thinking about.
00:41:14.920 It's something we've been doing for, for a long time and it's probably not going to stop.
00:41:19.880 And then, then the point is to be realistic about, and then, and maybe optimistic that, you know, humans have not been the best at controlling the world.
00:41:28.480 And, and, and something else could well be, could well be better, but, but this illusion that we are going to program artificial intelligence is, I think, provably wrong.
00:41:38.320 I mean, Alan Turing would have proved that wrong.
00:41:40.820 You can, you know, he, that was how he got into the whole thing at the beginning was, was proving this, this statement called the Entscheidungsproblem, whether by, you know, it's any systematic way to look at a string of code and predict what it's going to do.
00:41:54.040 You can't.
00:41:54.480 And, and it baffles me that people don't sort of, somehow we've been so brainwashed by this, because the digital revolution was so successful.
00:42:04.660 Nobody, you know, it's amazing how it has sort of clouded everyone's thinking.
00:42:08.980 They don't think of, you know, if you talk to biologists, of course, they, they know that very well.
00:42:14.340 I mean, people who actually work with brains of frogs or mice, you know, they know it's not digital.
00:42:19.820 Why, why, why people think more intelligent things would be digital is just, again, it's sort of baffling.
00:42:27.740 How did, how did that sort of take over the world, that, that thought?
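For readers unfamiliar with the result being invoked a few lines above (the Entscheidungsproblem and Turing's proof that you cannot, in general, look at a program and predict what it will do), the standard diagonal argument can be sketched briefly. This is purely illustrative and not from the episode; the function names are hypothetical. Assume a perfect predictor of program behavior exists, then build a program that does the opposite of whatever the predictor says about it.

```python
# Sketch of Turing's argument that no general "look at the code and predict
# what it will do" procedure can exist (illustrative only; nothing here
# performs a real analysis).

def halts(program, argument):
    """Hypothetical perfect predictor, assumed to exist for the sake of contradiction."""
    raise NotImplementedError("no such general predictor can be implemented")

def contrarian(program):
    if halts(program, program):   # if the predictor says "this halts"...
        while True:               # ...loop forever instead,
            pass
    return "done"                 # ...otherwise halt immediately.

# Asking halts(contrarian, contrarian) leads to a contradiction either way:
# whatever answer the predictor gives, contrarian does the opposite.
```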
00:42:31.120 Yeah, so it does seem, though, that if you think the development of truly intelligent machines
00:42:40.740 is synonymous with machines that not only can we not control, but we, on some level, can't form a reasonable expectation of what they will be inclined to do.
00:42:53.660 There's the assumption that there's some way to launch this process that is either provably benign in advance, or, so I'm looking at the book now,
00:43:07.480 and, you know, the person there who I think has thought the most about this is Stuart Russell.
00:43:12.480 And, you know, he's, he's just trying to think of a way in which AI can be developed where its master value is to continually understand in a deeper and more accurate way what we want, right?
00:43:30.400 So, and what we want can obviously change, and it can change in dialogue with this now super intelligent machine, but its value system is in some way durably anchored to our own,
00:43:42.220 because its concern is to get our situation the way we want it.
00:43:47.920 Right, but all, all the most terrible things that have ever happened in the world happened because somebody wanted them.
00:43:51.920 I mean, it's, it's, it's, that's, that's no, there's no safety in that.
00:43:55.800 I admire Stuart Russell, but we disagree on this sort of provably good AI.
00:44:00.580 Yeah, so I, so, but I guess at least what you're doing there is collapsing it down to one fear rather than the other.
00:44:09.740 I mean, the fear that provably benign AI or provably obedient AI could be used by bad people toward bad ends,
00:44:18.780 that's obviously a fear, but the greater fear that many of us worry about is that developing AGI in the first place can't be provably benign,
00:44:27.560 and we will find ourselves in relationship to something far more powerful than ourselves that doesn't really care about our well-being in the end.
00:44:36.620 Right, and that's, again, sort of the world we used to live in, and we, I think we can make ourselves reasonably comfortable there,
00:44:42.300 but we, we no longer become the, you know, sort of the classic religious view was there, there are humans, and there's God,
00:44:50.500 and there's only nothing but angels in between.
00:44:53.420 That can change.
00:44:55.140 Nothing but angels and devils in between, no.
00:44:57.600 Right.
00:44:58.220 So, you know, Norbert Wiener sort of, the last thing he published before, well, he was actually published after he died,
00:45:04.800 but, I mean, there's a line in there, which I think just gets it right, that the world of the future will be an ever more demanding struggle
00:45:11.980 against the limitations of our own intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves.
00:45:21.200 That's, those are the two, two sort of paths that so many people want.
00:45:26.500 So, oh, the cars are going to drive us around and be our slaves.
00:45:29.960 It's probably not going to happen that way.
00:45:31.720 On that dire note.
00:45:34.040 It's not a dire note.
00:45:34.900 I mean, it could be, it could be a good thing.
00:45:36.500 We've, we've been the sort of chief species for a long time, and it, it, it could be time for something else,
00:45:43.100 but, but at least be, be realistic about it.
00:45:45.720 Don't, don't have this sort of childish view that, that, that everything's going to be obedient to us.
00:45:51.560 That hasn't worked, and I think it was, you know, it did a lot of harm to the world that, that, that, that sort of we had that view.
00:45:58.300 And, again, one of the signs of, of any real artificial intelligence would immediately be intelligent enough not to reveal its existence to us.
00:46:07.160 I mean, that would be the first smart thing it would do would be not, not reveal itself.
00:46:11.820 So the fact that, that AI has not revealed itself is, to me, is no, that, that's zero evidence that it doesn't exist.
00:46:20.660 I mean, I would take it the other way, that if, if, if it, if it existed, I would expect it not to reveal itself.
00:46:27.860 Unless it's so much more powerful than we are that it, that it, that it perceives no cost and it reveals itself by merely steamrolling over us.
00:46:37.380 Well, there would be a cost.
00:46:38.680 I think it's sort of, sort of, sort of faith is better than proof.
00:46:41.680 So, so anyway, you can see where I'm going with that, but it's, it's not necessarily a, a malevolent.
00:46:47.460 It's just as likely to be, you know, benevolent as malevolent.
00:46:51.760 Okay.
00:46:52.160 So I have a few bonus questions for you, George.
00:46:54.080 These, these can be short form.
00:46:56.460 If you had one piece of advice for someone who wants to succeed in your field, and you can describe that field however you like, what would it be?
00:47:04.500 Okay, well, I'm a historian, is what I became, or, or, and a boat builder, and so the advice in all those fields is just specialize.
00:47:14.760 I mean, find something and become obsessed with it.
00:47:17.420 I became obsessed with the kayaks that the Russians adopted when they came to Alaska, and then I became obsessed with how computing really happened.
00:47:25.880 And if you are obsessed with one little thing like that, you immediately become, you know, you can very quickly know more than anybody else.
00:47:32.400 And that's a, that helps to be successful.
00:47:36.560 What, if anything, do you wish you'd done differently in your 20s, 30s, or 40s?
00:47:42.080 Oh, that's, I mean, you can't, you can't replay that, that tape.
00:47:46.160 I wish, well, I can be very clear about that.
00:47:47.680 I wish in my 20s, I had gone to the Aleutian Islands earlier while, while more of the old time kayak builders were still alive and, and kind of interviewed and learned from them.
00:48:01.640 And then very much the same in my 30s, I mean, all these projects I met, I did go find the surviving Project Orion people and technicians and physicists and interviewed them.
00:48:14.660 But I should have done that earlier.
00:48:16.180 And the same with computing, you know, in my 40s, I could have interviewed a lot more people who really were there at that important time.
00:48:25.120 I sort of caught them, but almost too late.
00:48:27.080 And I wish I had done that sooner.
00:48:29.280 10 years from now, what do you think you'll regret doing too much of or too little of at this point in your life?
00:48:36.220 Probably regret not, you know, not getting out more up the coast again, which is what I'm trying to do.
00:48:42.420 That's what I'm working very diligently at, but, but I keep getting distracted.
00:48:46.600 Yeah, you got to get off the podcast and get into the kayak.
00:48:51.060 Yeah, well, podcast, you know, we could be doing this from, uh, Orca Lab, they have a good internet connection.
00:48:55.720 I mean, that's the beautiful thing is that you can do this.
00:48:58.000 And, and I, I, the other thing I would say is, this is a side, but I grew up, you know, I grew up as a, since a young teenager in Canada where the country was united by radio.
00:49:08.400 I mean, in Canada, people didn't get newspapers, but everybody listened to one radio channel.
00:49:12.520 And so in a way, podcasts are, again, back to that past where we're all listening to the radio again.
00:49:17.780 And I think it's a great thing.
00:49:20.240 What negative experience, one you would not wish to repeat, has most profoundly changed you for the better?
00:49:27.460 I very nearly choked to death.
00:49:31.120 I mean, literally, it's the only time I've had a true near-death experience seeing the tunnel of light and reliving my whole life and not only thinking about my daughter and other profound things, but thinking how stupid this was.
00:49:44.260 You know, this guy who'd, like, kayak to Alaska six times with no life jacket dies in a restaurant on Columbus Avenue in New York.
00:49:52.700 Wow.
00:49:53.140 And John Brockman saved my life, ran out and came back with a New York City off-duty fireman who, you know, who literally saved my life.
00:50:01.880 Wow.
00:50:02.320 I'm so glad I asked that question.
00:50:03.680 I had no idea of that story.
00:50:05.220 So again, learn the Heimlich maneuver.
00:50:07.580 Dr. Heimlich really did something great for the world.
00:50:11.380 Fascinating.
00:50:12.800 We may have touched this in a way, but maybe there's another side to this.
00:50:18.540 What most worries you about our collective future?
00:50:20.900 Yeah, kind of what I said, that we lose our, we lose all these skills and intelligences that we've built up over such a long period of time.
00:50:31.480 The ability to, you know, survive in the wilderness and understand animals and respect them.
00:50:39.320 It's, I think that's a very sad thing that we're losing that, of course, and losing the, losing the wildlife itself.
00:50:45.700 If you could solve just one mystery as a scientist or historian or journalist, however, however you want to come at it, what would it be?
00:50:56.280 One mystery.
00:50:57.040 Well, one of them would be the one we just talked about.
00:50:59.880 You know, cetacean communication, what's really going on with these whales communicating in the ocean.
00:51:05.640 That's something I think we could solve, but we're not looking at it in the right way.
00:51:10.020 If you could resurrect just one person from history and put them in our world today and give them the benefit of a modern education, who would you bring back?
00:51:19.680 I guess the problem is that most people I'm interested in history sort of had extremely good education.
00:51:25.240 You're talking about John von Neumann and Alan Turing, yeah, you're right.
00:51:28.760 Yeah, and Leibniz, I mean, he was very well, yeah.
00:51:31.020 But lately, the character in my, the project I've been working on lately was kind of awful, but fascinating.
00:51:38.220 It was Peter the Great.
00:51:39.660 He was so obsessed with science and things like that.
00:51:43.720 So I think to have brought him, you know, if he could come back, it might be a very dangerous thing.
00:51:48.220 But he sort of wanted to learn so much and was, again, preoccupied by all these terrible things and disasters that were going on at the time.
00:51:57.600 What are you doing on Peter the Great?
00:51:59.240 I've been writing this very strange book where it kind of starts with him and Leibniz.
00:52:05.760 They go to the hot springs together and they basically stop drinking alcohol for a week.
00:52:11.400 And Leibniz convinces him, he wants him to support building digital computers, but he's not interested.
00:52:19.380 So the computer thing failed, but what Leibniz did convince him was to launch a voyage to America.
00:52:25.640 So that's where the, that's how the Russians came to Alaska.
00:52:28.860 It became the Bering-Churikov way.
00:52:30.660 But it all starts in this hot springs where they, you know, they can't drink for a week.
00:52:35.840 So they're just drinking mineral water and talking.
00:52:39.440 There is a great biography on Peter the Great, isn't there?
00:52:42.280 Is there one that you recommend?
00:52:44.300 Several.
00:52:44.680 So I wouldn't know which one to recommend, but he's, you know, again, that's why he's Peter the Great, because he's been well-studied.
00:52:52.580 His relationship with Leibniz fascinates me in that, you know, there's just a lot there we don't know.
00:52:59.100 But it's kind of amazing how this sort of obscure mathematician becomes very close to this great, you know, leader of a huge part of the world.
00:53:10.660 Okay, last question, the Jurassic Park question.
00:53:14.400 If we are ever in a position to recreate the T-Rex, should we do it?
00:53:18.640 I would say yes, but this, you know, this comes up as a much more real question with the woolly mammoth and these other animals.
00:53:26.760 The Steller's sea cow, that's another one we could maybe resurrect.
00:53:30.060 Yeah, I've had these arguments with, you know, Stuart Brand and George Church, who are realistic about whether we could do it.
00:53:37.380 So I would say yes, don't expect it to work, but certainly worth trying.
00:53:44.120 What are their biases? Do Stuart and George say we should or shouldn't do this?
00:53:51.100 Well, yeah, if you haven't talked to them, you definitely, that would be a great program to go to that debate.
00:53:56.800 I mean, the question more is, if you can recreate the animal, does that recreate the species?
00:54:03.740 One of the things they're working on is, I think, trying to build a park in Kamchatka or somewhere over there in Siberia,
00:54:10.240 is so that if you did recreate the woolly mammoth, they would have an environment to go live in.
00:54:15.740 So to me, that's actually the payoff.
00:54:17.820 The payoff to creating, recreating the woolly mammoth is that it would force us to create a better environment.
00:54:25.340 Same as, you know, I mean, buffalo are coming back and we should bring the antelope back.
00:54:30.220 It's the, you know, American cattle industry that's sort of wrecked the great central heart of America,
00:54:37.240 which could easily come back to the grasslands it once was.
00:54:41.500 Well, listen, George, it's been fascinating.
00:54:43.320 Thank you for your contribution to this book and thanks for coming on the podcast.
00:54:48.040 Thank you.
00:54:48.980 And it's a very interesting book.
00:54:50.080 There's short chapters, which just makes it very easy to read.
00:54:52.980 Yeah, it's a sign of the times, but a welcome one.
00:55:00.220 I am here with Alison Gopnik.
00:55:02.440 Alison, thank you for coming on the podcast.
00:55:04.300 Glad to be here.
00:55:05.260 So we are, the occasion of our conversation is the release of John Brockman's book,
00:55:10.280 Possible Minds, 25 Ways of Looking at AI.
00:55:13.820 And I'm sure there'll be other topics we might want to touch, but as this is our jumping off point,
00:55:20.740 first, give me your background.
00:55:23.700 How would you summarize your intellectual interests at this point?
00:55:27.400 Well, I began my career as a philosopher,
00:55:29.580 and I'm still half appointed in philosophy at Berkeley.
00:55:33.060 But for 30 years or so, more than that, I guess now,
00:55:36.800 I've been looking at young children's development and learning
00:55:41.100 to really answer some of these big philosophical questions.
00:55:44.520 Specifically, the thing that I'm most interested in is,
00:55:47.020 how do we come to have an accurate view of the world around us
00:55:50.100 when the information we get from the world seems to be so concrete and particular
00:55:54.800 and so detached from the reality of the world around us?
00:55:58.140 And that's a problem that people in philosophy of science raise.
00:56:01.240 It's a problem that people in machine learning raise.
00:56:03.600 And I think it's a problem that you can explore particularly well by looking at young kids,
00:56:09.180 who after all are the people who we know in the universe who are best at solving that particular
00:56:13.200 problem.
00:56:13.680 And for the past 20 years or so, I've been doing that in the context of thinking about
00:56:18.820 computational models of how that kind of learning about the world is possible for anybody,
00:56:24.780 whether it's a scientist or an artificial computer or a computational system,
00:56:30.780 or, again, the best example we have, which is young children.
00:56:35.660 Right.
00:56:35.820 Well, you'll get into the difference between how children learn and how our machines do,
00:56:41.500 or at least our current machines do.
00:56:43.980 But just a little more on your background.
00:56:45.740 So did you, you did your PhD in philosophy or in psychology?
00:56:49.940 I actually did my first degree, my BA in honors philosophy.
00:56:55.540 And then I went to Oxford, actually wanting to do both philosophy and psychology.
00:57:00.800 I worked with Jerome Bruner in psychology, and I spent a lot of time with the people in philosophy.
00:57:04.820 And my joke about this is that after a year or two in Oxford, I realized that there was
00:57:11.200 one of two communities that I could spend the rest of my life with.
00:57:14.300 One community was of completely disinterested seekers after truth who wanted to find out
00:57:18.960 about the way the world really was more than anything else.
00:57:21.440 And the other community was somewhat spoiled, narcissistic, egocentric creatures who needed
00:57:26.740 to be taken care of by women all the time.
00:57:28.800 And since the first community was the babies and the second community was the philosophers,
00:57:31.940 I thought it would be, I'd be better off spending the rest of my life hanging out with the babies.
00:57:36.780 That's a little unfair to the philosophers, but it does make the general point, which is
00:57:42.220 that I think a lot of these big philosophical questions can be really well answered by looking
00:57:48.440 at a very neglected group in some ways, namely babies and young children.
00:57:53.240 Yeah, yeah.
00:57:54.180 So I did my PhD in the end in experimental psychology with Jerome Bruner.
00:58:00.420 And then I was in Toronto for a little while and then came to Berkeley, where, as I say,
00:58:04.560 I'm in the psychology department, but also affiliated in philosophy.
00:58:08.100 And I've done a lot of collaborations with people doing computational modeling at the same
00:58:12.680 time.
00:58:13.040 So I really think of myself as being a cognitive scientist in the sense that cognitive science
00:58:17.600 puts together ideas about computation, ideas about psychology and ideas about philosophy.
00:58:23.240 Yeah, well, if you're familiar with me at all, you'll understand that I don't respect the
00:58:29.080 boundaries between these disciplines really at all.
00:58:31.800 I just think that it's just interesting how someone comes to a specific question.
00:58:36.500 But whether you're doing cognitive science or neuroscience or psychology or philosophy of
00:58:41.700 mind, this can change from sentence to sentence, or it just really depends on what building
00:58:47.360 on a university campus you're standing in.
00:58:51.480 Well, I think I've tried, you know, I've tried and I think to some extent succeeded in actually
00:58:56.200 doing that in my entire career.
00:58:58.380 So I publish in philosophy books and collaborate with philosophers.
00:59:01.860 I had a wonderful project where we had half philosophers who were looking at causality, people
00:59:07.880 like Clark Glymour and James Woodward and Chris Hitchcock, and then half developmental psychologists
00:59:13.340 and computational cognitive scientists.
00:59:15.120 So people like me, like Josh Tenenbaum at MIT, like Tom Griffiths.
00:59:22.360 And that was an incredibly powerful and successful interaction.
00:59:26.760 And the truth is, I think one of my side interests is David Hume.
00:59:31.920 And if you look at people like David Hume or Berkeley or Descartes or the great philosophers
00:59:36.340 of the past, they certainly wouldn't have seen boundaries between the philosophy that they
00:59:41.180 were doing and psychology and empirical science.
00:59:44.560 Let's start with the AI question and then get into children and other areas of common interest.
00:59:52.860 So perhaps you want to summarize how you contributed to this volume and your angle of attack on this
01:00:03.020 really resurgent interest in artificial intelligence.
01:00:06.800 There was this period where it kind of all went to sleep.
01:00:10.760 And I remember being blindsided by it, just thinking, well, AI hadn't really panned out.
01:00:15.560 And then all of a sudden, AI was everywhere.
01:00:17.660 How have you come to this question?
01:00:19.840 Well, as I say, we've been doing work looking at computational modeling and cognitive science
01:00:24.120 for a long time.
01:00:24.820 And I think that's right.
01:00:25.820 For a long time, even though there was really interesting theoretical work going on about how
01:00:30.820 we could represent the kinds of knowledge that we have as human beings computationally, it didn't
01:00:37.020 translate very well into actual systems that could actually go out and do things more effectively.
01:00:43.480 And then what happened, interestingly, in this new AI spring wasn't really that there was some
01:00:49.460 great new killer app, new idea about how the mind worked.
01:00:54.000 Instead, what happened was that some ideas that had been around for a long time, since the 80s,
01:00:58.240 basically, these ideas about neural networks, and in some ways, you know, much older ideas
01:01:03.740 about associative networks, for example, suddenly, when you had a whole lot of data, the way you
01:01:09.760 do with the internet, and when you also had a whole lot of compute power with good old Moore's
01:01:14.300 law running through its cycles, those ideas became very practical so that you could actually
01:01:21.200 take a giant data set of all the images that had been put on the net, for example, and train
01:01:26.800 a system on that data set to discriminate between images.
01:01:29.320 Or you could take the giant data sets of all the translations of French and English on the
01:01:33.860 net, and you could use that to actually design a translation program.
01:01:39.080 Or you could have something like AlphaZero that could just play millions
01:01:45.260 and millions and millions of games of chess against itself, and then you could use that data
01:01:49.720 set to figure out how to play chess.
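To make the contrast concrete, here is a minimal sketch, in Python with PyTorch, of the kind of learning being described: statistical pattern recognition over a large labeled data set, where a small network is nudged batch by batch to map inputs to labels. The tiny network, the random stand-in data, and the training settings are illustrative assumptions, not any particular production system.

```python
# A minimal sketch (not any real system) of "bottom-up" statistical learning
# from a big labeled data set. The data here is random noise standing in for
# many flattened images; the labels are arbitrary stand-in categories.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(10_000, 64 * 64)        # stand-in for many flattened images
y = torch.randint(0, 10, (10_000,))     # stand-in for 10 object categories

model = nn.Sequential(nn.Linear(64 * 64, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for i in range(0, len(X), 256):          # mini-batches of examples
        xb, yb = X[i:i + 256], y[i:i + 256]
        loss = loss_fn(model(xb), yb)        # how wrong is the current model?
        opt.zero_grad()
        loss.backward()                      # gradients computed from the data
        opt.step()                           # nudge the weights a little
```

Nothing here encodes any idea about what an image is; all the structure the model ends up with comes from the sheer volume of examples, which is the point being made in the conversation.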
01:01:54.320 So the real change was not so much a kind of conceptual change about how we thought about
01:01:59.340 the mind.
01:01:59.740 It was this change in the capacities of computers.
01:02:02.340 And I think, to the surprise of everybody, including the people who were, you know, including
01:02:06.440 the people who had designed the systems in the first place, it turned out that those ideas
01:02:10.820 really could scale.
01:02:12.040 And the big problem with computational cognitive science has always been not so much finding
01:02:18.480 good computational models for the mind, although that's a problem, but finding ones that could
01:02:23.700 do more than just solve toy problems, ones that could deal with the complexity of real
01:02:28.800 world kinds of knowledge.
01:02:29.920 And I think it was surprising and kind of wonderful that these learning systems could actually turn
01:02:34.860 out to work at a broad scale.
01:02:38.300 And the other thing that, of course, was interesting was that not just in the history of AI, but
01:02:43.080 in the history of philosophy, there's been this constant kind of ping-ponging back and
01:02:47.640 forth between two ways to solve this big problem of knowledge, this big problem of how we can
01:02:52.480 ever understand the world around us.
01:02:54.280 And a way I like to put it is, here's the problem.
01:02:57.620 We seem to have all this abstract, very structured knowledge of the world around us.
01:03:03.800 We seem to know a lot about the world, and we can use that knowledge to make predictions
01:03:07.600 and change the world.
01:03:09.540 And yet, it looks as if all that reaches us from the world are these patterns of photons
01:03:14.120 at the back of our eyes and disturbances of air at our ears.
01:03:17.080 And the question is always, how could you resolve that conundrum?
01:03:20.960 And one way, going back to Plato and Aristotle, has been to say, well, a whole lot of it is built
01:03:26.520 in in the first place.
01:03:27.540 We don't actually have to learn that abstract structure.
01:03:29.680 It's just there.
01:03:30.440 Maybe it evolved.
01:03:31.560 Maybe if you're Plato, it was in a past life.
01:03:34.380 And then the other approach, going all the way back to Aristotle, has been to say, well,
01:03:38.920 if you just have enough data, if you just had enough stuff to learn, then you could develop
01:03:45.440 this kind of abstract knowledge of the world.
01:03:47.240 And again, going back to Plato and Aristotle, we kind of ping-ponged back and forth between
01:03:51.980 those two approaches to trying to solve the problem.
01:03:54.060 And sort of good old-fashioned AI said, well, you know, famously, Roger Schank
01:04:02.400 said, well, if we just had like a summer's worth of interns, we'll figure out all of our
01:04:06.840 knowledge about the world, we'll write it all down, and we'll program it into a computer.
01:04:11.040 And that turned out not to be a very successful project.
01:04:14.780 And then the alternative, the kind of neural net idea was, oh, if we just have enough data
01:04:18.780 and we have some learning mechanisms, then the learning mechanisms will just be able
01:04:22.540 to pull out the information from the data.
01:04:24.820 And that's kind of where we are now.
01:04:26.660 That's the latest iteration in this back and forth between having, building in knowledge
01:04:33.260 and learning the knowledge from the data.
01:04:37.940 Yeah.
01:04:38.040 So what you've done there is you've sketched two different approaches to generating intelligence.
01:04:44.160 One, I guess, could be considered top-down and the other bottom-up.
01:04:47.880 And what AI has done of late, the great gains we see in image recognition and many other things,
01:04:56.840 is born of a process that really is aptly described as bottom-up, where you take in an immense amount
01:05:03.980 of data and do what is essentially a statistical pattern recognition on it.
01:05:09.120 And some of this can be entirely blind and black-boxed, such that the humans who have
01:05:16.220 written these programs don't even necessarily know how the machines are doing it.
01:05:21.600 And yet, given enough processing power and enough data, we're now getting results that are
01:05:28.460 human-level and beyond for specific tasks.
01:05:31.720 But, of course, you make this point in your piece that we know this is not how humans
01:05:38.200 learn, that there is some structure, undoubtedly given to us by evolution, that allows us to
01:05:46.300 generalize on the basis of comparatively small amounts of data.
01:05:53.180 And so this makes what we do non-analogous to what our machines are doing.
01:05:59.360 And I guess, I mean, now both top-down and bottom-up approaches are being combined in AI.
01:06:06.720 I guess one question I have for you is, is the difference between the way our machines
01:06:13.600 learn and the way human brains learn just of temporary interest to us now?
01:06:20.220 I mean, can you imagine us kind of blowing past this moment and building machines that
01:06:25.440 we just, we know are developing their intelligence in a way that is totally unlike the way we
01:06:32.100 do it biologically?
01:06:33.200 And yet, it is successful.
01:06:35.180 It becomes successful on all fronts without our building any analogous process into them.
01:06:41.060 And we just lose sight of the fact that it was ever interesting to compare the ways we
01:06:46.180 do it.
01:06:46.540 I mean, there's an effective way to do it in a brute force way, let's say bottom-up,
01:06:51.820 on every front that will matter to us.
01:06:54.300 Or do you think that there's some problems for which it will be impossible to generate,
01:06:59.440 you know, true artificial intelligence unless we have a deeper theory about how biological
01:07:04.620 systems do it?
01:07:06.080 Well, I think we already can see that.
01:07:08.000 So one of the interesting things is that there's this whole really striking
01:07:12.640 revival of interest, among people in AI, in cognitive development, for example.
01:07:18.640 And it's because we're starting to come up against the limits
01:07:23.680 of this technique of doing a lot of statistical inference from big data sets.
01:07:29.180 So there are lots of examples, for instance, even if you're thinking about things like image
01:07:33.220 recognition, where, you know, if you have something that looks like a German shepherd, it'll
01:07:38.820 recognize it as a German shepherd, but if you just have something that to a human just looks
01:07:43.520 like a mass that has the same textural superficial features as the German shepherd, it will also
01:07:48.720 recognize it as a German shepherd.
01:07:50.940 You know, if it sees a car that's suspended in the air and flooded, it will report this
01:07:57.180 is a car parked by the side of the road and so forth.
01:08:00.080 And there's a zillion examples that are like that.
01:08:02.600 In fact, there's a whole kind of area of these adversarial examples where you can show
01:08:07.620 that the machine is not actually making the right decision.
01:08:10.980 And it's because it's only paying attention to the sort of superficial features.
01:08:14.880 And in particular, the machines are very bad at making generalizations.
01:08:18.600 So even if you, you know, teach AlphaZero how to play chess, and then you said, all right,
01:08:24.580 we're going to just change the rules a little bit.
01:08:26.920 So now the rooks are going to be able to move diagonally and you're going to
01:08:31.760 want to capture the queen instead of the king.
01:08:33.720 And that kind of difference, which for a human who had learned chess would be really easy
01:08:38.600 to adjust to, leads, for the more recent AI systems, to this problem they call catastrophic
01:08:45.200 forgetting, which is having to relearn everything all over again when you get a new data set.
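As a toy sketch of the adversarial-example idea mentioned above: nudge an input in the direction that most increases the model's loss (the fast gradient sign method), producing a superficially similar input the classifier may treat very differently. The linear "classifier," the random input, and the perturbation size are stand-ins for illustration only, not any deployed system.

```python
# A toy sketch of an adversarial example: push an input a small step in the
# direction that most increases the classifier's loss, so a nearly identical
# input can get a different label even though it looks the same to a human.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(100, 2)            # stand-in for a trained image classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 100, requires_grad=True)   # stand-in for one "image"
true_label = torch.tensor([0])

loss = loss_fn(model(x), true_label)
loss.backward()                                # gradient of loss w.r.t. the input

epsilon = 0.25                                 # size of the perturbation
x_adv = x + epsilon * x.grad.sign()            # superficially similar input

print(model(x).argmax().item(), model(x_adv).argmax().item())
```

The mechanism is what matters: because the model only tracks superficial statistical features of its inputs, a tiny, targeted change in those features can be enough to move its answer.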
01:08:50.960 So in principle, of course, you know, there's no in principle reason why we couldn't have
01:08:55.680 an intelligence that operated completely differently from the way that, that say human children learn.
01:09:00.200 But human children are a demonstration case of the capacities of an intelligence, presumably
01:09:06.340 in some sense, a computational intelligence, because that's the best way we have of understanding
01:09:10.260 how human brains work.
01:09:12.120 But that's the best example we have of a system that actually really works to be intelligent.
01:09:17.420 And nothing that we have now is really even in the ballpark of being able to do the same
01:09:22.120 kinds of things that that system can do.
01:09:24.560 So in principle, it might be that we would figure out some totally different way of being
01:09:28.600 intelligent. But at the moment, the best case we have is, you know, a four-year-old
01:09:34.240 human child. And we're very, very, very far from being able to simulate that.
01:09:39.520 You know, I think part of it is if, if people had just labeled the new techniques by saying
01:09:43.980 statistical inference from large data sets, instead of calling it artificial intelligence,
01:09:47.720 I think we would be having a very different kind of conversation, even though statistical
01:09:51.920 inference from large data sets turns out to be an incredibly powerful tool, more powerful
01:09:56.480 than we might have thought.
01:09:58.140 We should remind people how alarmingly powerful it is in narrow cases. I mean, we take something
01:10:03.240 like AlphaZero. What happened there was fairly startling because you have an algorithm
01:10:10.100 that is fairly generic in that it can be taught to play both a game like Go and a game like
01:10:17.420 chess and presumably other games as well. And, you know, we have this history of developing
01:10:22.160 better and better chess engines. And finally, the human grandmaster ability was conquered.
01:10:28.620 I forget when that was, 1997 or so, when Garry Kasparov famously lost. And ever since, there's
01:10:37.140 just been this incremental growth in the power of these machines. And what AlphaZero did was
01:10:43.960 create, again, a far more general algorithm which, over the course of four hours, taught
01:10:52.920 itself to be better than any chess engine ever. So, I mean, you're taking the totality of human
01:10:58.180 knowledge about this 2,000-year-old game, all of the engineering talent that went into making
01:11:05.500 this better and better over decades. And here we found an algorithm which turned loose on the
01:11:11.440 problem, beat every machine and every person in human history, essentially. When you extrapolate
01:11:18.160 that kind of process to anything else we could conceivably care about, you know, the recognition
01:11:24.460 of emotion in a human face and voice, say. Now, again, coming at this not in an AGI way where this
01:11:32.640 is, you know, we have cracked the code of, you know, what intelligence is on some level and built it
01:11:37.340 from the bottom up. But in a piecemeal way where we take the, you know, the hundred most interesting
01:11:44.060 cognitive problems and find brute force methods to crack them, it's amazing to consider how quickly
01:11:53.580 a solution can appear. And once it does, and this is the point I've always made about so-called human
01:12:01.580 level intelligence, for any ability that we actually do find an AI solution, even a narrow
01:12:08.460 one in the case of, you know, chess or arithmetic, once that solution is found, you're never talking
01:12:15.660 about human level intelligence. It's always superhuman. So the moment we get anything like
01:12:20.780 a system that can behave or learn like a four-year-old child, it won't be at human level
01:12:28.140 even for a second, because you'd have to degrade all of its other abilities
01:12:33.500 that you could cobble together to support it. You wouldn't make it worse than your iPhone
01:12:38.220 as a calculator, right? So it's already going to be superhuman.
01:12:42.020 Yeah. But I mean, you know, I think there's a question, though, about
01:12:47.380 exactly what different kinds of problems require and how you solve those problems. And I think an idea
01:12:53.320 that is pretty clearly there in computer science and neuroscience is that
01:12:58.140 there's trade-offs between different kinds of properties of a solution that aren't just
01:13:03.500 because we happen to be biological humans, but are built into the very nature of trying to solve
01:13:07.820 the problem. And in some ways, the most striking thing about the progress of AI all through has been
01:13:13.700 what people sometimes call Moravec's paradox, which is that actually the things that really
01:13:18.700 impress us as humans are the things that we're not very good at, like doing arithmetic or playing
01:13:24.140 chess. So I think of these sometimes as being like the corridas of nerd machismo. So the things
01:13:30.660 that you have to just have a particular kind of ability that most people don't have,
01:13:36.100 and then really train it up to do really well. It turns out those things are things that computers
01:13:40.500 are good at doing. On the other hand, an example I give is my grandson,
01:13:45.480 who's three, playing something that we call Adi chess. His name is Atticus. So how do you play Adi chess?
01:13:51.520 Well, what the way you play Adi chess is you take all the pieces off the board and then you throw
01:13:55.280 them in the wastebasket and then you pick them up out of the wastebasket and you put them more or
01:13:59.380 less in the same places they were in before. And then you take them all off and throw them in
01:14:03.060 the wastebasket again. And it turns out that Adi chess is actually a lot harder than grandmaster
01:14:09.080 chess because Adi chess means actually manipulating objects in the real physical world so that you have
01:14:15.340 to figure out how, wherever it is that that piece lands in the wastebasket, whatever orientation it is,
01:14:19.800 to pick it up and perform the motor actions that are necessary to get it on the board.
01:14:25.720 And that turns out to be incredibly difficult. If you, you know, go and see any robotics lab,
01:14:32.600 they have to put big walls around the robots to keep them from destroying each other, even trying
01:14:38.580 to do incredibly simple tasks like picking up objects off of a tray. And there's another thing
01:14:44.240 about Adi chess that makes it really different from what even very, very powerful
01:14:49.760 artificial intelligence can do, which is, as you said, what these new systems can
01:14:55.280 do is you can take what people sometimes call an objective function. You can say to them, look,
01:14:59.600 this is what I want you to do. Given this set of input, I want you to produce this set of output.
01:15:05.100 Given this set of moves, I want you to get the highest score, or I want you to win at this game.
01:15:12.160 And if you specify that, it turns out that these neural net learning mechanisms are actually
01:15:17.240 remarkably good at solving those problems without a lot of additional information,
01:15:21.900 except just here's a million examples of the input, and here's a million examples of the output.
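A minimal sketch, in Python, of what optimizing an explicitly specified objective function looks like: the system is handed a scoring rule and simply searches for parameters that score higher, with no notion of inventing a new goal of its own. The objective and the random-tweak search below are arbitrary illustrative choices, not a description of any system discussed here.

```python
# A minimal sketch of optimizing against a fully specified objective: the
# system is told exactly what "better" means and hill-climbs toward it.
# The objective here is an arbitrary stand-in (best score when params == 3).
import random

random.seed(0)

def objective(params):
    # Stand-in objective function: higher is better, peak at all-threes.
    return -sum((p - 3.0) ** 2 for p in params)

params = [random.uniform(-10, 10) for _ in range(5)]
best = objective(params)

for step in range(2000):
    candidate = [p + random.gauss(0, 0.1) for p in params]  # small random tweak
    score = objective(candidate)
    if score > best:                 # keep the tweak only if the score improves
        params, best = candidate, score

print(round(best, 4), [round(p, 2) for p in params])
```

The search never questions the objective; it only climbs it, which is exactly the contrast with people inventing new objectives that the conversation turns to next.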
01:15:26.700 But of course, what human beings are doing all the time is going out and making their own
01:15:31.480 objectives. They're going out and creating new objectives, creating new ideas, creating new
01:15:36.140 goals, goals that are not the goals that anyone has created before, even if they might look kind
01:15:41.760 of silly, like playing Adi chess. And in some way that we really don't understand at all, there's
01:15:48.180 some sense of a kind of progress in those goals, that we're capable of setting ourselves goals that
01:15:53.560 were better than the goals that we had before. But again, that's not even kind of in the ballpark.
01:15:58.040 It's not like, oh, if we just made the machines more powerful, then they would be able to do those
01:16:04.020 things too. They would be able to go out and physically manipulate the world, and they would
01:16:07.340 be able to set novel objectives. That's kind of not even in the same category. And as I say,
01:16:17.060 I think an interesting idea is that there might really be trade-offs between some of the kinds of
01:16:22.020 things that humans are really good at, like, for instance, taking very complicated, high-dimensional
01:16:27.080 spaces of solutions, having to think of an incredibly wide range of possibilities versus,
01:16:35.500 say, being able to do something really quickly and efficiently when it's well-specified. And I think
01:16:39.800 there are reasons to think those things. You might think, well, okay, if you
01:16:43.900 could do the thing that's really well-specified and just do that better and better, then you're going
01:16:48.180 to be able to solve the more complicated problem and the less well-defined problem. And I think there's
01:16:53.260 actually reasons to believe that that's not true, that there's real trade-offs between the kinds of
01:16:57.220 things you need to do to solve those two kinds of problems. Yeah, well, so the paradox you point to
01:17:01.600 is interesting and is a key to how people's expectations will be violated when automation
01:17:10.840 begins to replace human labor to a much greater degree than it has. Because people tend to expect that,
01:17:17.300 you know, menial jobs will be automated first, or, you know, lower-skilled jobs,
01:17:23.400 not, you know, famously high-cognition jobs, will be the first to be automated away. But,
01:17:29.860 you know, as you point out, many of the things that we find it amazing that human beings can do
01:17:35.260 are easier to automate than the things that any, or virtually any human being can do. And, you know,
01:17:43.880 which is to say it's easier to play grandmaster-level chess than it is to walk across a room
01:17:49.640 if you're a computer. So, you know, your oncologist and your local mathematician are likely to lose
01:17:57.000 their jobs to AI before your plumber will, since it's a harder task to move physically into a space and
01:18:04.980 manipulate objects and make decisions across tasks of that sort. So there's a lot that's
01:18:12.120 counterintuitive here. I guess my sense, however, is that, I mean, well, one, you're not at all
01:18:17.920 skeptical, are you, that intelligence is substrate independent, ultimately, that we could find some
01:18:25.820 way of instantiating human-like intelligence in a non-biological system. Is there something
01:18:33.580 potentially magical about having a computer made of meat, from your point of view, or not?
01:18:37.740 Well, I think the answer is that we don't really know, right? So, again,
01:18:41.800 we have a kind of, you know, species of one, or maybe a couple of examples of systems
01:18:48.880 that can really do this. And the ones that we know about are indeed biological. Now, I think
01:18:55.660 it's rather striking, and maybe not appreciated enough, that this idea that really comes with
01:19:01.380 Turing, the idea of thinking about a human mind as being a computational
01:19:05.200 system, that's just been an incredibly productive idea. That's ended up
01:19:10.160 enabling us to make really, really good predictions about many, many, many things that human beings do.
01:19:16.620 And we don't have another idea that's as good at making predictions or providing explanations for
01:19:22.680 intelligence as that idea. Now, again, maybe it'll turn out that there is something that we're missing
01:19:28.560 that is, that is contributing something important about biology. But I think at the moment,
01:19:34.520 the kind of computational theory of the mind is the best, the best one that's on the table. It's the
01:19:39.820 one that's been most successful just in empirical scientific terms. So for instance, when we're
01:19:44.760 looking at young children, if we say, are they doing something like Bayesian inference of structured
01:19:51.220 causal systems, that's a computational idea. We can actually say, okay, well, if they're doing that,
01:19:56.460 then if we give them this kind of problem, they should solve it this way. And sure enough,
01:19:59.800 it turns out that over and over again, that's what they do kind of independently of knowing very
01:20:04.520 much about what exactly is going on in their brains when they're doing that. So again, it could be that
01:20:10.800 this gap between the kinds of problems that we can solve computationally now and, and the kinds of
01:20:15.640 problems that every four-year-old is solving, it could be that that's got something to do with having
01:20:19.740 a biological substrate. But I don't think that's kind of the most likely hypothesis given the
01:20:24.940 information that we have now. I think actually one of the interesting things is the problem is not so
01:20:32.400 much trying to figure out what our representations and rules are, what's going on in our head, what
01:20:38.260 the computations look like. The problem is, is what people in computer science call a search problem.
01:20:44.040 So the problem is really given all the possible things we could believe about the world, or given
01:20:49.460 all the possible solutions we could have to a problem, or given all the possible things that we could do in
01:20:54.360 the world. How is it that we end up converging? How is it that we end up picking ones that are,
01:21:02.260 as it were, the right ones, rather than all the other ones that we could consider? And that, I think
01:21:06.600 that's, at the moment, the really deep, serious problem. So we kind of know
01:21:13.280 how a computational system could be instantiated in, in a brain. We have ideas about how neurons could be
01:21:21.220 configured so they could do computations. We kind of figured that part out. But the part about
01:21:25.000 how we, how we take all these possibilities and end up narrowing in on ones that are
01:21:32.500 relatively good, relatively true, relatively effective, I think that's a really, that's the
01:21:38.700 really next deep problem. And looking at how kids
01:21:43.580 solve that problem, since we know that they do solve it, could help us make progress.
01:21:49.320 Another name for this is common sense. What computers are famously bad at is, as you say,
01:21:55.900 narrowing the search space of solutions to rule out the obviously ridiculous and detrimental ones,
01:22:02.060 right? So you, I mean, this is where all the cartoons of AI apocalypse come in. The idea that,
01:22:08.600 you know, you're going to design a computer to remove the possibility of spam. And, you know,
01:22:15.360 an easy way to do that is just kill all the people who would send spam, right? So this is obviously,
01:22:19.420 this is nobody's actual fear. It's just points out that unless you build the common sense into
01:22:28.060 these machines, they're not going to have it necessarily for free, the more and more competent
01:22:34.060 they get at solving specific problems.
01:22:35.940 But see, in a way it's even worse than that because, you know,
01:22:40.960 one thing you might say is, well, okay, you know, we have some idea about what our everyday common
01:22:45.300 sense is like, you know, we have these principles. So if we could just sort of specify
01:22:50.100 those things enough so we could take our everyday ideas about the mind, for example,
01:22:56.080 or our everyday ideas about how the physical world works, and we could build those into the computer,
01:23:00.980 that would help. And, and it is true that the systems that we have now don't even have that.
01:23:05.940 But the interesting thing about people is that we can actually discover new kinds of common sense.
01:23:11.780 So we can actually go out in the world and say, you know, that thing that we thought about how
01:23:15.940 the physical world worked, it's not true. Actually, we can have action at a distance or even worse,
01:23:22.620 you know, it turns out that actually space and time can be translated into one another,
01:23:26.980 which is certainly not anything that anyone intuitively thinks about how, how physics works.
01:23:31.860 Or for that matter, we can say, you know, that, that thing that we thought that we knew about
01:23:37.340 morality, that it turns out that no, actually, when we think about it more carefully, something like
01:23:44.900 gay marriage is not something that should be perceived as being immoral, even though lots and
01:23:51.120 lots of people for a long time had thought that that was true. So we have this ability to go out into
01:23:56.040 the world and both see the world in new ways and actually change the world, invent new environments,
01:24:02.720 invent new niches, invent new worlds, and then figure out how to thrive in those new worlds and
01:24:08.780 look around the space of possibilities and create yet other worlds and repeat.
01:24:13.760 So even if we could build in sort of what in 2019 is everybody's understanding about the world or
01:24:19.540 build in the understandings about the world that we had in the Pleistocene,
01:24:23.300 that still wouldn't capture this ability that we have to search the space,
01:24:29.240 to consider new possibilities, to think about new things that aren't there. And, you know,
01:24:32.940 let me give you some examples. For instance, the sort of things that people are concerned about,
01:24:37.780 I think legitimately concerned about that AI could potentially do is, for example, you could give
01:24:44.440 the kind of systems that we have now, examples of all of the verdicts of guilty and innocent that had
01:24:51.260 gone on in a court over a long period of time and then get it to give it a new example and say,
01:24:57.040 okay, how would this, how would this case be judged? Will it be judged innocent or will it be judged
01:25:02.020 guilty? And the systems that we have now could probably do a pretty decent job of doing that.
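As a sketch of the verdict-prediction worry being described: fit a simple classifier to past case features and outcomes, then ask it to judge a new case. The features, labels, and scikit-learn model below are made-up stand-ins for illustration; the point is only that such a system reproduces the statistics of the old verdicts and has no way to ask whether the law behind them is worth keeping.

```python
# A toy sketch: predict a verdict for a new case purely from the statistics
# of past verdicts. Features and labels here are fabricated stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
past_cases = rng.normal(size=(500, 4))               # stand-in case features
past_verdicts = (past_cases[:, 0] > 0).astype(int)   # 1 = guilty, 0 = innocent

model = LogisticRegression().fit(past_cases, past_verdicts)

new_case = rng.normal(size=(1, 4))
print("predicted verdict:", model.predict(new_case)[0])
# The model can only echo the patterns in past decisions; revising the
# standard those decisions were based on is outside anything it can do.
```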
01:25:07.160 And certainly, you know, it's easy to imagine an extension
01:25:11.420 of the systems we have now that could solve that kind of problem. But of course, what we can do is to
01:25:17.400 say, you know what, all that law, that's really not right. That isn't really capturing what we want.
01:25:23.240 That's not enabling people to thrive. Now we should think of a different way of thinking about
01:25:28.300 making these kinds of judgments. And, and that's exactly the sort of thing that, that the current
01:25:32.920 systems, again, it's not just like, if you gave them more data, they would be able to do that.
01:25:36.820 They're not really even conceptually in the ballpark of being able to do that.
01:25:40.360 And that's probably a good thing. Now, I, you know, I think it's important to say that,
01:25:45.900 and I think you're going to talk to Stuart Russell, who will make this point, you know,
01:25:49.420 these systems don't have to have anything like human level general intelligence to be really
01:25:54.440 dangerous. Electricity is really dangerous. I was just talking to someone who made a
01:26:01.600 really interesting point, which is about like, how did we invent circuit breakers? It turns out the
01:26:06.840 insurance companies actually started insisting that people have circuit breakers on their electrical
01:26:11.080 systems because houses were being set on fire. Um, so, you know, electricity, which we now think
01:26:17.040 of as being this completely benign thing, we put on a switch and electricity comes out and none of us
01:26:22.000 is sitting there thinking, Oh my God, is our house about to burn down? That was only a very long,
01:26:27.040 complicated process of regulation and legislation and work to get that to be other than a really,
01:26:34.440 really dangerous thing. And I think that's absolutely true. Not about, you know, some
01:26:38.540 theoretical artificial general intelligence, but about the AI that we have now that it's a really
01:26:43.300 powerful force. And like any powerful technology, we have to figure out ways of regulating it and
01:26:48.260 having it make sense. But I don't think that's like a giant difference in kind from all the issues
01:26:53.860 we've had about dealing with powerful technologies in the past. Yeah. Yeah. Like, I guess this issue of
01:27:00.120 creativity and, you know, growth in intuitions is something, I guess my intuitions divide from many
01:27:10.000 people's on this point because creativity is often held out as something that's fundamentally different
01:27:15.500 in that, you know, our machines can't do this and we routinely do this. But in my view, creativity
01:27:22.660 isn't especially creative, in the sense that it clearly proceeds on the basis of rules we
01:27:31.380 already have and nothing is fundamentally new, you know, down to the studs. Nothing that's meaningful
01:27:41.540 is. I mean, you can create something that essentially looks like noise that is new. Something
01:27:46.560 that strikes us as insightful, meaningful, beautiful is functioning on the basis of properties that
01:27:54.800 our minds already acknowledge as relevant and are already using. And so, I mean, you take
01:27:59.700 something like, again, a simple case of a, you know, a mathematical intuition that, you know, was fairly
01:28:05.620 hard won and took, you know, thousands of years to emerge in someone's mind. But, you know, once you've
01:28:10.320 got it, you sort of got it. And it's really the same thing you're doing anyway, which is,
01:28:15.340 you know, you take a triangle's angles adding up to 180 degrees, you know, on a flat plane, but
01:28:21.400 you curve the plane and they can add up to more or less than that. And, you know,
01:28:27.300 it's strange that it took so long to see that, but the seeing of that doesn't strike me as
01:28:32.220 fundamentally more mysterious than the fact that we can understand anything about triangles in the
01:28:37.360 first place. I mean, I think I would just set that on its head in the sense that, you know,
01:28:42.500 again, this is one of the real advantages of studying young children is that, you know,
01:28:48.100 when you say, well, it's no more mysterious than understanding triangles in the first place,
01:28:51.580 people have actually tried to figure out how is it that we can understand triangles? How is it that
01:28:56.700 that children can understand basic things about how number works or in the work that I've done,
01:29:01.940 how do children understand basic things about the causal structure of the world, for example?
01:29:06.160 And it turns out that even very basic things that we take for granted, like,
01:29:10.480 like understanding that you can believe something different from what I believe,
01:29:14.520 for example, it's actually very hard to see exactly how it is that children are taking
01:29:20.600 individual pieces and putting them together to come to realizations
01:29:27.280 about, say, how other people's minds work. And the problem is that, sort of, you know,
01:29:33.160 if you're doing it backwards, once you know what the answer is, then you can say, oh,
01:29:36.620 I see, this is how you could put that together from, from pieces that you have in the world or
01:29:41.120 from data that you have. But of course, if you're sort of doing it prospectively, then there's
01:29:46.380 an incredibly large number of different other ways that you could have put
01:29:51.640 together those pieces, or ways you could have interpreted the
01:29:56.340 data. And the puzzle is, how is it that you came upon the one that was both new and
01:30:04.340 interesting and, and wasn't just random. Now, again, I don't think there's any kind of, you know,
01:30:10.380 giant reason why we couldn't solve that problem. But I do think that's looking at even something as
01:30:16.240 simple as, you know, children figuring out basic things about how the world around them and the people
01:30:20.980 around them work. That turns out to be a very, very, very tricky problem to solve. And one interesting
01:30:26.900 thing, for example, that we found in our data in our research is that in many respects, children are
01:30:32.220 actually better at coming to unlikely or new solutions than adults are. So again, this is this
01:30:37.620 kind of trade-off idea where actually the more, you know, in some ways, the more difficult it is for
01:30:44.260 you to conceive of something new. We use a lot of Bayesian ideas when we're trying to characterize what the
01:30:50.420 children are doing. And one way you could think about it is that, you know, as your priors get to
01:30:55.640 be more and more peaked, as you know, more and more, as you're more and more confident about certain
01:31:00.660 kinds of knowledge, and that's a good thing, right? That's what lets you go out into the world and build
01:31:05.380 things and make the world a better place. It gets to be harder and harder for you to conceive of, of
01:31:11.460 new possibilities. And, and one idea that, that I've been arguing for is that you could think about the
01:31:16.840 very fact of childhood as being a solution to this kind of explore-exploit tension, this tension
01:31:22.500 between exploring, being able to explore lots of different possibilities, even if they're maybe not
01:31:27.500 very good, and having to narrow in on the possibilities that are really relevant to, to a
01:31:32.520 particular problem. And again, that's the sort of, that's the sort of thing that people or humans over
01:31:38.220 the course of their life history and culture seem to be pretty good at doing in a way that we don't
01:31:43.900 even really have a good start on thinking about how a computational
01:31:48.800 system could do that. Now we're working on it. I mean, you know, we're hoping that we could
01:31:53.260 get a computational system that could do that. And we sort of have some ideas, but that's a
01:31:57.880 dimension that really, really differentiates what the current powerful AI systems can do and what
01:32:03.720 every four-year-old can do. Yeah. Yeah. No, I, I'm granting all of that. I guess I'm just putting
01:32:09.000 the line at a different point because again, people often hold out creativity and being able
01:32:15.420 to form new goals and insights, intuitions, as though this were a uniquely human thing that was,
01:32:24.080 it's very difficult to understand how a machine could do. But, you know, as you point out, just
01:32:30.200 being able to walk across the room is, is fairly miraculous from the point of view of, you know,
01:32:36.620 how hard it is to instantiate in a robot and to, you know, to ride a bicycle and to do things that
01:32:41.900 kids routinely learn to do very early. My point is that once we crack that, these fairly basic
01:32:49.560 problems that evolution has, has solved for us and really for even non-human animals in many cases,
01:32:57.420 then we're talking about just incremental gains into something that is fundamentally beyond the human.
01:33:05.840 I mean, because nobody says, well, yes, you know, you
01:33:10.860 might be able to build a machine that could run across a room like a human child and, you know,
01:33:17.600 balance, you know, something on its finger, but you are never going to get something that can produce
01:33:25.080 the creative genius of an Olympic athlete or a professional basketball player. I mean,
01:33:31.480 that's where I think the intuitions flip. I mean, once you could build something that
01:33:35.700 could move exactly like a person, then there's no example of human
01:33:42.460 agility that will be out of reach at that point. And I guess what I'm reacting to
01:33:47.420 is that people seem to think different rules apply at the level of cognition and artistic creativity,
01:33:55.160 say.
01:33:55.300 Well, I think it's just an interesting empirical question. You know, we're collaborating now on a
01:34:00.480 big project with a bunch of people who are doing things in computer vision, for example.
01:34:04.180 And that's another example where something that we think is very simple and straightforward, you know,
01:34:09.900 I mean, we don't even feel as if we do any effort to go out into the world and actually see the
01:34:13.920 objects that are out there in the world. That turns out to be both extremely difficult and,
01:34:20.260 and in some ways very mysterious that we can do that as well as, that we can do that as well as
01:34:25.100 we can. That not only do we identify images, but we can recognize that, you know, there's an object
01:34:30.980 that's closer to me or an object that's further away from me, or that objects have texture, or that
01:34:35.080 objects are really three-dimensional. Those are all really, really challenging problems. And an
01:34:39.240 interesting thought is that at a very high abstract level, it may be that we're solving some of those
01:34:46.160 problems in the same way that enables us to solve some of these creativity problems. So let me give
01:34:50.980 you an example. One of the things that the kids very characteristically do is do experiments, except
01:34:57.840 that when they do experiments, we call it getting into everything. They explore. They're not just sort
01:35:03.120 of passively waiting for data to come to them. They can have a problem and actually go out and get the
01:35:08.360 data that's relevant to that problem. Again, when they do that, we call it playing or getting into
01:35:13.060 everything or making a mess. And we sit there and nod our heads and try and keep them from
01:35:17.840 killing themselves when they're doing it. But that's a really powerful technique, a really powerful
01:35:23.700 way of making progress, actually getting more information about what the structure of the world
01:35:28.060 is like, and then using it to change what you think about the world, and then repeating by actually going
01:35:33.220 out into the real world and getting data from the real world. And that's something that kids are very
01:35:38.200 good at doing. That seems to play a big role in our ability to do things like move around the world
01:35:44.180 or perform skilled actions. And again, that's something that at least at the moment isn't very
01:35:49.800 characteristic of the way the machines work. Here's another nice example of something that we're
01:35:54.520 actually working on at Berkeley. So one of the things that we know about kids, about their motivation and
01:36:00.800 affect, is that they're insatiably curious. They just want to get as much information as they can about the
01:36:07.060 world around them. And they're driven to go out and get information and, and especially get new
01:36:11.600 information, which again, is why just thinking about the way that we evolved isn't going to be enough
01:36:16.280 to answer the problem. One of the things that's true about lots of creatures, but especially human
01:36:21.460 children is that they're curiosity driven. And in work that we've been doing with computer scientists at
01:36:27.340 Berkeley, you can design an algorithm that instead of say, wanting to have a higher score, wants to
01:36:33.520 have the predictions of its model be violated. So actually when it has a model and things turn
01:36:39.980 out to be wrong, instead of being depressed, it goes out and says, huh, that's
01:36:44.920 interesting. Let me try that again. Let me see what's going on with that little toy car that it's
01:36:49.960 doing that strange thing. And, and you can show that a system that's got that kind of motivation
01:36:56.220 can solve problems that your typical, say, reinforcement learning system can't solve. And what we're
01:37:02.380 doing is actually comparing children and these curious AIs on the same problems to see the ways
01:37:08.880 that the children are being curious and how that's related to the ways that the AIs are being curious.
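A toy sketch of the curiosity-driven idea described here: the agent's reward is its own forward model's prediction error, so it keeps returning to whatever it cannot yet predict rather than to whatever scores highest. The two-armed "world," the learning rates, and the bookkeeping below are illustrative assumptions, not the actual systems being compared with children at Berkeley.

```python
# A toy sketch of curiosity as intrinsic reward: the agent is "rewarded" by
# its own model's prediction error, so it prefers actions whose outcomes it
# cannot yet predict. The two-armed world here is an arbitrary stand-in.
import random

random.seed(0)

def world(action):
    # Arm 0 is boring and deterministic; arm 1 is noisy and hard to predict.
    return 1.0 if action == 0 else random.gauss(0.0, 1.0)

predictions = {0: 0.0, 1: 0.0}   # the agent's forward model, one guess per arm
curiosity = {0: 1.0, 1: 1.0}     # running estimate of prediction error per arm
counts = {0: 0, 1: 0}

for step in range(1000):
    action = max((0, 1), key=lambda a: curiosity[a])   # pick the most surprising arm
    outcome = world(action)
    error = abs(outcome - predictions[action])          # intrinsic reward = surprise
    predictions[action] += 0.1 * (outcome - predictions[action])
    curiosity[action] += 0.1 * (error - curiosity[action])
    counts[action] += 1

print(counts)   # the noisy arm ends up visited far more than the boring one
```

Once the boring arm becomes predictable, its curiosity signal decays and the agent stops bothering with it, which is the flavor of "go find out what you don't understand yet" that a plain score-maximizing learner does not have.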
01:37:14.240 So I think you're absolutely right that the idea that the place where humans are going to turn out
01:37:19.900 to be unique is in, you know, the great geniuses or the great artists or the great athletes,
01:37:27.400 they're going to turn out to have some special sauce that the rest of us don't have. And that's going to
01:37:31.060 be the thing that AI can't do. I think you're right that that's not really going to be
01:37:35.360 true. What those people are doing is an extension of the things that every two
01:37:40.480 and three-year-old is equipped to do. But I also think that what the two and three-year-olds are
01:37:45.000 equipped to do is going to turn out to be very different from at least what the current batch
01:37:49.700 of AI is capable of doing. Yeah. Well, I don't think anyone is going to argue there.
01:37:54.360 Well, so how do you think of consciousness in the context of this conversation? For me, I'll just
01:37:59.360 keep you quiet.
01:38:07.160 If you'd like to continue listening to this conversation, you'll need to subscribe at
01:38:11.000 samharris.org. Once you do, you'll get access to all full-length episodes of the Making Sense
01:38:15.680 podcast, along with other subscriber-only content, including bonus episodes and AMAs and the conversations
01:38:22.420 I've been having on the Waking Up app. The Making Sense podcast is ad-free and relies entirely on
01:38:27.960 listener support. And you can subscribe now at samharris.org.