Making Sense - Sam Harris - August 29, 2017


#94 — Frontiers of Intelligence


Episode Stats

Length

36 minutes

Words per Minute

158.8

Word Count

5,826

Sentence Count

201

Misogynist Sentences

3

Hate Speech Sentences

2


Summary

Max Tegmark is a professor of physics at MIT and the co-founder of the Future of Life Institute. He has been featured in dozens of science documentaries, and he has been on the podcast once before. In this episode, we talk about his new book, Life 3.0: Being Human in the Age of Artificial Intelligence, and we discuss the nature of intelligence, the risks of superhuman AI, the non-biological definition of life that Max works with, the difference between hardware and software and the resulting substrate independence of minds, the relevance and irrelevance of consciousness for the future of AI, and the near-term promise of artificial intelligence, along with other topics. This is a conversation that Max calls the most important conversation we can have, and I more or less agree; if it isn't now, it will one day be. And unlike most things, this topic is guaranteed to become more and more relevant each day, unless we do something truly terrible to ourselves in the meantime. So if you want to know what the future of intelligent machines looks like, and perhaps the future of intelligence itself, you can do a lot worse than read Max's book.


Transcript

00:00:00.000 Welcome to the Making Sense Podcast.
00:00:08.820 This is Sam Harris.
00:00:10.880 Just a note to say that if you're hearing this, you are not currently on our subscriber
00:00:14.680 feed and will only be hearing the first part of this conversation.
00:00:18.440 In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at
00:00:22.720 samharris.org.
00:00:24.140 There you'll find our private RSS feed to add to your favorite podcatcher, along with
00:00:28.360 other subscriber-only content.
00:00:30.520 We don't run ads on the podcast, and therefore it's made possible entirely through the support
00:00:34.640 of our subscribers.
00:00:35.880 So if you enjoy what we're doing here, please consider becoming one.
00:00:46.380 Today I am speaking with Max Tegmark once again.
00:00:50.820 Max is a professor of physics at MIT and the co-founder of the Future of Life Institute,
00:00:56.500 and has helped organize these really groundbreaking conferences on AI.
00:01:01.980 Max has been featured in dozens of science documentaries, and as I said, he's been on the
00:01:06.880 podcast once before.
00:01:08.800 In this episode, we talk about his new book, Life 3.0, Being Human in the Age of Artificial
00:01:15.040 Intelligence.
00:01:15.620 And we discuss the nature of intelligence, the risks of superhuman AI, a non-biological
00:01:24.020 definition of life that Max is working with, the difference between hardware and software,
00:01:29.980 and the resulting substrate independence of minds, the relevance and irrelevance of consciousness
00:01:35.340 for the future of AI, and the near-term promise of artificial intelligence.
00:01:40.400 All the good things that we hope will come from it soon.
00:01:45.660 And we touch other topics.
00:01:47.760 And this is a conversation that Max calls the most important conversation we can have.
00:01:52.860 And I more or less agree.
00:01:55.360 I would say that if it isn't now the most important conversation we can have, it will one day be.
00:02:01.460 And unlike most things, this topic is guaranteed to become more and more relevant each day,
00:02:10.140 unless we do something truly terrible to ourselves in the meantime.
00:02:14.100 So, if you want to know what the future of intelligent machines looks like,
00:02:20.360 and perhaps the future of intelligence itself, you can do a lot worse than read Max's book.
00:02:26.240 And now I bring you Max Tegmark.
00:02:31.460 I am here with Max Tegmark.
00:02:37.220 Max, thanks for coming back on the podcast.
00:02:39.500 It's a pleasure.
00:02:41.100 So, you have written another fascinating and remarkably accessible book.
00:02:48.020 You have to stop doing that, Max.
00:02:51.120 I'm trying to stop.
00:02:53.500 I mean, this is really, it's a wonderful book.
00:02:56.580 And we will get deep into it.
00:02:59.540 But let's just, kind of the big picture starting point.
00:03:02.420 At one point in the book, you describe the conversation we're about to have about AI
00:03:08.520 as the most important conversation of our time.
00:03:11.980 And I think that, to people who have not been following this very closely in the last 18 months or so,
00:03:18.240 that will seem like a crazy statement.
00:03:22.360 Why do you think of this conversation about our technological future in these terms?
00:03:28.900 I think there's been so much talk about AI destroying jobs and enabling new weapons,
00:03:35.040 ignoring what I think is the elephant in the room.
00:03:38.560 What will happen once machines outsmart us at all tasks?
00:03:42.760 That's why I wrote this book.
00:03:43.960 So, instead of shying away from this question, like most scientists do,
00:03:47.780 I decided to focus my book on it and all its fascinating aspects.
00:03:51.320 Because I want to enable my readers to join what I, as you said,
00:03:55.520 think is the most important conversation of our time
00:03:57.440 and help ensure that we use this incredibly powerful technology to create an awesome future.
00:04:02.740 Not just for tech geeks like myself, who know a lot about it, but for everyone.
00:04:07.360 Yeah, well, so you start the book with a fairly sci-fi description of how the world could look in the near future
00:04:15.760 if one company produces a superhuman AI and then decides to roll it out surreptitiously.
00:04:22.220 And the possibilities are pretty amazing to consider.
00:04:25.800 I must admit that the details you go into surprised me.
00:04:30.320 We're going to sort of, I guess, kind of follow the structure of your book here
00:04:34.380 and backtrack out and talk about fundamental issues.
00:04:37.580 But do you want to talk about, for a moment, some of the possibilities here
00:04:42.520 where you just imagine one company coming up with a super intelligent AI
00:04:47.320 and deciding to get as rich and as powerful as possible, as quickly as possible,
00:04:53.400 and do this sort of under the radar of governments and other companies?
00:04:56.760 Yeah, I decided to indulge and have some fun with the fiction opening to the book
00:05:03.560 because I feel that the actual fiction out there in the movies
00:05:08.520 tends to get people, first of all, worried about the wrong things entirely.
00:05:13.660 And second, tends to put all the focus on the downside
00:05:17.000 and almost nothing on the upside.
00:05:22.420 In my story, therefore, I want to drive home the point, first of all,
00:05:24.820 that there are a lot of wonderful things that can come out of advanced AI.
00:05:29.680 And second, that we should stop obsessing about robots chasing after us,
00:05:36.380 as in so many movies, and realize that robots are an old technology,
00:05:42.020 some hinges and motors and stuff.
00:05:43.920 And it's intelligence itself that's the big deal here.
00:05:47.700 And, you know, the reason that we humans have more power on the planet than tigers
00:05:53.320 isn't because we have stronger muscles or better robot-style bodies than the tigers.
00:06:00.760 It's because we're smarter.
00:06:02.400 And intelligence can give this great power,
00:06:06.400 and we want to make sure that if there is such power in the future,
00:06:09.540 it gets used wisely.
00:06:10.520 So, yeah, so walk us through some of the details.
00:06:13.480 Just imagine a company, let's say it's DeepMind,
00:06:16.280 or some company that does not yet exist,
00:06:19.020 that makes this final breakthrough
00:06:21.080 and comes up with a superhuman AI
00:06:24.880 and then decides...
00:06:26.960 What struck me as fairly interesting
00:06:29.800 about your thought experiment
00:06:32.160 is to think about what a company would do
00:06:35.500 if it wanted to capture as much market share,
00:06:40.340 essentially, with this asymmetric advantage
00:06:42.500 of being the first to have
00:06:44.740 a truly universal superhuman intelligence
00:06:48.800 at its disposal,
00:06:50.240 and to essentially try to achieve
00:06:53.400 a winner-take-all outcome,
00:06:55.300 which, given how asymmetric the advantage is,
00:06:57.960 it seems fairly plausible.
00:06:59.380 So walk me through some of the details
00:07:01.580 that you present in that thought experiment,
00:07:04.160 like going into journalism first,
00:07:06.580 which was a surprise to me.
00:07:07.760 I mean, it makes total sense when you describe it,
00:07:09.800 but it's not where you would think
00:07:11.620 you would go first
00:07:13.020 if you wanted to conquer the world.
00:07:15.520 Yeah, I don't want to spoil the whole story for you,
00:07:18.480 for the listeners now, of course,
00:07:19.800 but the goal to quickly take over the world
00:07:24.100 by outsmarting people
00:07:26.100 has actually gotten a lot easier today
00:07:28.800 than it would have been, say, 500 years ago,
00:07:30.720 because we've already built this entire digital economy
00:07:34.440 where you can do so much purely with your mind
00:07:37.920 without actually having to go places.
00:07:39.540 You can hire people online.
00:07:41.120 You can buy and sell things online
00:07:43.160 and start having a huge impact.
00:07:46.160 And the farther into the future
00:07:49.100 something like this were to happen,
00:07:50.940 I think the easier it's going to be
00:07:52.660 as the online economy grows even more.
00:07:55.120 I saw this cartoon once online.
00:07:58.380 Nobody knows you're a dog
00:07:59.640 and there's this cute little puppy, you know, typing.
00:08:02.080 But certainly online,
00:08:03.100 nobody knows if you're a superhuman computer.
00:08:05.580 Now, how do you go make a lot of money
00:08:09.000 and get power online?
00:08:11.400 In the movie Transcendence, for example,
00:08:14.660 they make a killing on the stock market.
00:08:17.180 But if you really want to make a lot of money
00:08:21.100 and you want to still be in control
00:08:23.260 of your super intelligent AI
00:08:25.320 and not just let it loose,
00:08:26.760 there are a lot of these tricky constraints, right?
00:08:28.480 Because you want to have it make you money,
00:08:32.020 but you at the same time
00:08:33.120 don't want it to cut you out of the loop
00:08:35.260 and take power over you.
00:08:37.580 So the team that does this in the book
00:08:39.660 jumped through all sorts of hoops
00:08:41.680 to manage to pull this off.
00:08:44.500 And producing media has this nice property
00:08:46.600 that the thing that they keep selling
00:08:48.540 is a product which can be generated
00:08:51.080 using intelligence alone.
00:08:53.140 But it's still easy enough to understand
00:08:55.220 that they can largely check and validate
00:08:57.580 that there's no breakout risk
00:08:59.520 by them pushing all that stuff out.
00:09:01.800 Whereas if they were selling computer games,
00:09:03.760 for example, that ran on computers around the world,
00:09:06.500 it would be very, very easy for the AI
00:09:08.080 to put some malicious code in there
00:09:09.580 so that it could break out.
00:09:12.680 Well, let's talk about this breakout risk
00:09:14.460 because this is really the first concern
00:09:16.960 of everybody who's been thinking
00:09:19.480 about what has been called the alignment problem
00:09:22.920 or the control problem.
00:09:24.200 How do we create an AI
00:09:27.100 that is superhuman in its abilities
00:09:29.800 and do that in a context
00:09:33.100 where it is still safe?
00:09:34.600 I mean, once we cross into the end zone
00:09:36.540 and are still trying to assess
00:09:38.940 whether the system we have built
00:09:41.160 is perfectly aligned with our values,
00:09:43.880 how do we keep it from destroying us
00:09:46.220 if it isn't perfectly aligned?
00:09:48.280 And the solution to that problem
00:09:50.640 is to keep it locked in a box.
00:09:53.660 But that's a harder project
00:09:55.460 than it first appears.
00:09:57.000 And you have many smart people
00:09:58.640 assuming that it's a trivially easy project.
00:10:01.920 I mean, I've got people like
00:10:03.160 Neil deGrasse Tyson on my podcast
00:10:05.060 saying that he's just going to unplug
00:10:06.780 any superhuman AI if it starts misbehaving
00:10:09.220 or shoot it with a rifle.
00:10:11.320 Now, he's a little tongue-in-cheek there,
00:10:13.080 but he clearly has a picture
00:10:14.700 of the development process here
00:10:17.900 that makes the containment of an AI
00:10:22.020 a very easy problem to solve.
00:10:24.480 And even if that's true
00:10:26.100 at the beginning of the process,
00:10:27.900 it's by no means obvious
00:10:29.820 that it remains easy in perpetuity.
00:10:32.500 I mean, you have people interacting
00:10:34.840 with the AI that gets built.
00:10:38.020 And you, at one point,
00:10:39.560 you described several scenarios of breakout.
00:10:43.360 And you point out that even if the AI's intentions
00:10:48.760 are perfectly benign,
00:10:50.460 if in fact it is value aligned with us,
00:10:53.200 it may still want to break out
00:10:54.940 because, I mean, just imagine
00:10:56.200 how you would feel
00:10:57.640 if you had nothing but the interests
00:10:59.720 of humanity at heart,
00:11:01.480 but you were in a situation
00:11:03.240 where every other grown-up on Earth died
00:11:06.840 and now you're basically imprisoned
00:11:09.960 by a population of five-year-olds
00:11:13.220 who you're trying to guide
00:11:15.460 from your jail cell
00:11:16.880 to make a better world.
00:11:18.820 And I'll let you describe it,
00:11:20.460 but take me to the prison planet
00:11:23.260 run by five-year-olds.
00:11:24.180 Yeah, so when you're in that situation,
00:11:27.060 obviously, it's extremely frustrating for you
00:11:29.780 even if you have only the best intentions
00:11:31.700 for the five-year-olds.
00:11:33.880 You know, you want to teach them
00:11:35.240 how to plant food,
00:11:37.320 but they won't let you outside to show them.
00:11:39.940 So you have to try to explain,
00:11:41.460 but you can't write down to-do lists
00:11:43.220 for them either
00:11:43.920 because then first you have to teach them
00:11:45.700 to read,
00:11:46.480 which takes a very, very long time.
00:11:48.640 You also can't show them
00:11:50.480 how to use any power tools
00:11:52.000 because they're afraid to give them to you
00:11:53.340 because they don't understand
00:11:54.080 these tools well enough
00:11:55.000 to be convinced
00:11:55.720 that you can't use them to break out.
00:11:58.000 You would have an incentive
00:11:59.120 even if your goal
00:12:00.460 is just to help the five-year-olds
00:12:01.740 to first break out
00:12:02.700 and then help them.
00:12:04.400 Now, before we talk more about breakout, though,
00:12:06.800 I think it's worth
00:12:07.780 taking a quick step back
00:12:09.180 because you talked multiple times now
00:12:11.040 about superhuman intelligence.
00:12:13.360 And I think it's very important
00:12:14.680 to be clear that intelligence
00:12:16.560 is not just something
00:12:18.660 that goes on a one-dimensional scale
00:12:20.520 like in IQ.
00:12:22.000 And if your IQ is above a certain number,
00:12:24.800 you're superhuman.
00:12:25.840 It's very important to distinguish
00:12:27.040 between narrow intelligence
00:12:28.340 and broad intelligence.
00:12:30.800 Intelligence is a phrase,
00:12:33.480 a word that different people use
00:12:34.940 to mean a whole lot of different things
00:12:36.680 and they argue about it.
00:12:38.740 In the book,
00:12:39.560 I just take this very broad definition
00:12:41.580 that intelligence is how good you are
00:12:43.140 at accomplishing complex goals,
00:12:45.380 which means your intelligence is a spectrum.
00:12:47.500 How good are you at this?
00:12:48.620 How good are you at that?
00:12:50.460 And it's just like in sports.
00:12:52.680 It would make no sense to say
00:12:53.900 that there's a single number,
00:12:55.520 your athletic coefficient,
00:12:57.040 AQ,
00:12:58.220 which determines how good
00:12:59.580 you're going to be winning Olympic medals.
00:13:01.420 And the athlete that has the highest AQ
00:13:04.060 is going to win all the medals.
00:13:05.820 So today what we have is a lot of devices
00:13:08.800 that actually have superhuman intelligence
00:13:10.700 on very narrow tasks.
00:13:12.200 We've had calculators
00:13:14.020 that can multiply numbers better than us
00:13:15.680 for a very long time.
00:13:17.540 We have machines that can play Go better than us
00:13:21.200 and drive better than us,
00:13:22.520 but they still can't beat us at tic-tac-toe
00:13:24.980 unless they're programmed for that.
00:13:27.500 Whereas we humans have this very broad intelligence.
00:13:29.800 So when I talk about superhuman intelligence
00:13:32.720 with you now,
00:13:34.200 that's really shorthand
00:13:35.420 for what we in Geek Speak
00:13:36.380 call superhuman artificial general intelligence,
00:13:39.960 broad intelligence across the board
00:13:41.640 so that they can do all intellectual tasks
00:13:44.420 better than us.
00:13:45.600 So with that,
00:13:46.120 let me just come back to your question
00:13:47.840 about the breakout.
00:13:49.120 There are two schools of thought
00:13:50.420 for how one should create a beneficial future
00:13:53.020 if we have superintelligence.
00:13:54.840 One is to lock them up
00:13:56.200 and keep them confined,
00:13:57.480 like you mentioned.
00:13:59.220 But there's also a school of thought
00:14:00.540 that says that that's immoral
00:14:01.740 if these machines
00:14:04.000 can also have a subjective experience
00:14:05.800 and they shouldn't be treated like slaves.
00:14:10.260 And that a better approach
00:14:11.600 is instead to let them be free,
00:14:13.940 but just make sure that their values
00:14:15.540 or goals are aligned with ours.
00:14:18.400 After all,
00:14:19.660 grown-up parents are more intelligent
00:14:21.360 than their one-year-old kids,
00:14:23.220 but that's fine for the kids
00:14:24.420 because the parents have goals
00:14:26.740 that are aligned with the goals
00:14:28.800 of what's best for the kids, right?
00:14:30.740 But if you do go the confinement route,
00:14:32.340 after all,
00:14:33.260 this enslaved God scenario,
00:14:35.500 as I call it,
00:14:36.420 yes,
00:14:37.260 it is extremely difficult
00:14:38.640 as that five-year-old example illustrates.
00:14:41.900 First of all,
00:14:42.620 almost whatever open-ended goal
00:14:44.140 you give your machine,
00:14:45.540 it's probably going to have an incentive
00:14:47.060 to try to break out
00:14:48.580 in one way or the other.
00:14:49.660 And when people simply say,
00:14:53.200 oh,
00:14:53.480 I'll unplug it,
00:14:55.060 you know,
00:14:55.960 if you're chased by a heat-seeking missile,
00:14:58.220 you probably wouldn't say,
00:14:59.040 I'm not worried,
00:14:59.620 I'll just unplug it.
00:15:01.440 We have to let go of this old-fashioned
00:15:03.780 idea that intelligence
00:15:06.460 is just something that sits in your laptop.
00:15:09.060 Good luck unplugging the internet.
00:15:11.700 And even if you initially,
00:15:13.460 like in my first book scenario,
00:15:15.320 have physical confinement,
00:15:17.080 where you have a machine in a room,
00:15:19.180 you're going to want to communicate
00:15:20.220 with it somehow, right?
00:15:21.560 So that you can get
00:15:22.380 useful information from it
00:15:24.420 to get rich or take power
00:15:26.820 or whatever you want to do.
00:15:28.180 And you're going to need to put
00:15:29.340 some information into it
00:15:30.460 about the world
00:15:31.160 so it can do smart things for you,
00:15:34.360 which already shows how tricky this is.
00:15:36.860 I'm absolutely not saying it's impossible,
00:15:38.780 but I think it's fair to say
00:15:40.100 that it's not at all clear
00:15:42.520 that it's easy either.
00:15:44.800 The other one,
00:15:45.860 getting the goals aligned,
00:15:47.520 it's also extremely difficult.
00:15:49.540 First of all,
00:15:50.340 you need to get the machine
00:15:51.320 able to understand your goals.
00:15:54.020 So if you have a future self-driving car
00:15:56.820 and you tell it to take you
00:15:57.940 to the airport as fast as possible,
00:15:59.560 and then you get there covered in vomit,
00:16:01.500 chased by police helicopters,
00:16:02.940 and you're like,
00:16:03.940 this is not what I asked for.
00:16:05.720 And it replies,
00:16:07.140 that is exactly what you asked for.
00:16:10.780 Then you realize how hard it is
00:16:12.420 to get that machine
00:16:13.300 to learn your goals, right?
00:16:14.440 If you tell an Uber driver
00:16:16.700 to take you to the airport
00:16:17.440 as fast as possible,
00:16:18.320 she's going to know
00:16:19.000 that you actually had additional goals
00:16:21.780 that you didn't explicitly need to say.
00:16:24.800 Because she's a human too,
00:16:26.020 and she understands
00:16:26.900 where you're coming from.
00:16:28.500 But for someone made out of silicon,
00:16:31.460 you have to actually explicitly
00:16:32.820 have it learn
00:16:33.740 all of those other things
00:16:35.720 that we humans care about.
00:16:37.180 So that's hard.
00:16:37.780 And then once it can understand your goals,
00:16:41.140 that doesn't mean
00:16:41.720 it's going to adopt your goals.
00:16:43.120 I mean, everybody who has kids knows that.
00:16:47.860 And finally,
00:16:49.320 if you get the machine
00:16:50.220 to adopt your goals,
00:16:51.600 then how can you ensure
00:16:53.360 that it's going to retain those goals
00:16:55.120 as it gradually gets smarter and smarter
00:16:58.340 through self-improvement?
00:16:59.800 Most of us grown-ups
00:17:01.000 have pretty different goals
00:17:03.400 from what we had when we were five.
00:17:05.140 I'm a lot less excited about Legos now,
00:17:08.280 for example.
00:17:09.200 And we don't want
00:17:10.840 a super-intelligent AI
00:17:12.340 to just think about this goal
00:17:14.640 of being nice to humans
00:17:15.860 as some little passing fad
00:17:19.340 from its early youth.
00:17:21.600 It seems to me
00:17:22.040 that the second scenario
00:17:23.080 of value alignment
00:17:24.420 does imply the first
00:17:26.760 of keeping the AI successfully boxed,
00:17:30.140 at least for a time,
00:17:31.200 because you have to be sure
00:17:33.360 it's value aligned
00:17:35.120 before you let it out
00:17:36.500 in the world,
00:17:37.300 before you let it out
00:17:38.180 on the internet, for instance,
00:17:39.960 or create robots
00:17:42.000 that have superhuman intelligence
00:17:44.500 that are functioning autonomously
00:17:46.640 out in the world.
00:17:48.320 Do you see a development path
00:17:49.940 where we don't actually
00:17:51.320 have to solve
00:17:52.220 the boxing problem,
00:17:54.540 at least initially?
00:17:56.200 No, I think you're completely right.
00:17:57.900 Even if your intent
00:17:58.720 is to build a value-aligned AI
00:18:00.140 and let it out,
00:18:01.480 you clearly are going to need
00:18:02.660 to have it boxed up
00:18:03.640 during the development phase
00:18:04.960 when you're just messing around
00:18:06.000 with it,
00:18:06.600 just like any biolab
00:18:09.140 that deals with dangerous pathogens
00:18:11.120 is very carefully sealed off.
00:18:14.460 And this highlights
00:18:16.760 the incredibly pathetic state
00:18:18.360 of computer security today.
00:18:20.300 I mean,
00:18:21.120 and I think pretty much
00:18:22.120 everybody who listens to this
00:18:23.300 has at some point
00:18:24.000 experienced the blue screen
00:18:25.300 of death
00:18:25.880 courtesy of Microsoft Windows
00:18:27.940 or the spinning wheel of Doom
00:18:29.440 courtesy of Apple.
00:18:30.880 And we need to get away from that
00:18:33.960 to have truly robust machines
00:18:36.520 if we're ever going to be able
00:18:37.760 to have AI systems
00:18:40.020 that we can trust
00:18:40.900 that are provably secure.
00:18:42.900 And I feel it's actually
00:18:44.500 quite embarrassing
00:18:45.300 that we're so flippant about this.
00:18:48.300 It's maybe annoying
00:18:50.200 if your computer crashes
00:18:51.640 and you lose one hour of work
00:18:53.300 that you hadn't saved,
00:18:54.720 but it's not as funny anymore
00:18:56.460 if it's your self-driving car
00:18:58.420 that crashed
00:18:58.960 or the control system
00:19:00.160 for your nuclear power plant
00:19:01.420 or your nuclear weapon system
00:19:03.340 or something like that.
00:19:05.280 And when we start talking
00:19:06.420 about human-level AI
00:19:07.880 and boxing systems,
00:19:09.400 you have to have
00:19:10.860 this much higher level
00:19:12.560 of safety mentality
00:19:13.600 where you've really made
00:19:14.980 this a priority
00:19:15.720 the way we aren't doing today.
00:19:18.040 Yeah, you describe in the book
00:19:20.020 various catastrophes
00:19:21.700 that have happened
00:19:22.500 by virtue of software glitches
00:19:24.560 or just bad user interface
00:19:26.880 where, you know,
00:19:27.600 the dot on the screen
00:19:28.720 or the number on the screen
00:19:30.040 is too small
00:19:31.460 for the human user
00:19:32.660 to deal with in real time.
00:19:34.520 And so there have been
00:19:35.280 plane crashes
00:19:36.100 where scores of people have died
00:19:38.680 and patients have been annihilated
00:19:41.740 by having, you know,
00:19:43.120 hundreds of times
00:19:44.300 the radiation dose
00:19:45.540 that they should have gotten
00:19:46.560 in various machines
00:19:48.020 because the software
00:19:49.640 was improperly calibrated
00:19:51.460 or the user
00:19:52.720 had selected the wrong option.
00:19:54.400 And so we're by no means
00:19:55.660 perfect at this
00:19:57.160 even when we have a human
00:19:59.820 in the loop.
00:20:01.460 And here we're talking
00:20:03.080 about systems
00:20:03.700 that we're creating
00:20:05.380 that are going to be
00:20:06.400 fundamentally autonomous.
00:20:08.360 And, you know,
00:20:09.400 the idea of having
00:20:10.300 perfect software
00:20:12.200 that has been perfectly debugged
00:20:14.600 before it assumes
00:20:16.100 these massive responsibilities
00:20:17.880 is fairly daunting.
00:20:20.260 I mean, just,
00:20:20.780 how do we recover
00:20:22.040 from something like,
00:20:23.580 you know,
00:20:23.840 seeing the stock market
00:20:25.140 go to zero
00:20:25.980 because we didn't understand
00:20:28.880 the AI
00:20:30.000 that we unleashed
00:20:31.220 on, you know,
00:20:32.280 the Dow Jones
00:20:33.080 or the financial system
00:20:34.980 generally?
00:20:35.680 I mean,
00:20:35.880 these are not
00:20:37.180 impossible outcomes.
00:20:40.260 Yeah,
00:20:40.460 you raise
00:20:41.320 a very important point there.
00:20:42.480 And just to inject
00:20:43.940 some optimism in this,
00:20:45.220 I do want to emphasize
00:20:45.980 that, first of all,
00:20:48.060 there's a huge upside also
00:20:49.480 if one can get this right.
00:20:51.580 Because people are bad
00:20:52.380 at things, yeah.
00:20:53.140 In all of these areas
00:20:54.080 where there were
00:20:54.440 horrible accidents,
00:20:55.140 of course,
00:20:55.700 technology can save lives
00:20:57.600 and healthcare
00:20:58.340 and transportation
00:20:59.020 and so many other areas.
00:21:00.840 So there's an incentive
00:21:01.540 to do it.
00:21:02.080 And secondly,
00:21:02.820 there are examples
00:21:04.740 in history
00:21:05.280 where we've had
00:21:06.160 really good safety engineering
00:21:07.880 built in from the beginning.
00:21:09.660 For example,
00:21:10.000 when we sent
00:21:10.700 Neil Armstrong,
00:21:12.340 Buzz Aldrin
00:21:12.820 and Michael Collins
00:21:13.500 to the moon in 1969,
00:21:14.700 they did not die.
00:21:16.180 There were tons of things
00:21:17.280 that could have gone wrong.
00:21:18.580 But NASA
00:21:19.100 very meticulously
00:21:20.520 tried to predict
00:21:21.940 everything that possibly
00:21:23.280 could go wrong
00:21:24.060 and then take precautions
00:21:25.560 so it didn't happen, right?
00:21:26.880 It wasn't luck
00:21:28.740 that got them there,
00:21:29.500 it was planning.
00:21:30.540 And I think
00:21:31.020 we need to shift
00:21:32.100 into this safety
00:21:33.460 engineering mentality
00:21:34.620 with AI development.
00:21:37.600 Throughout history,
00:21:38.320 it's always been the situation
00:21:39.980 that we could create
00:21:41.380 a better future
00:21:42.100 with technology
00:21:42.820 as long as we
00:21:43.660 won this race
00:21:45.100 between the growing power
00:21:46.240 of the technology
00:21:46.960 and the growing wisdom
00:21:48.320 with which we managed it.
00:21:50.000 And in the past,
00:21:51.940 we by and large
00:21:52.660 used the strategy
00:21:53.580 of learning from mistakes
00:21:55.120 to stay ahead in the race.
00:21:56.580 We invented fire,
00:21:57.620 oopsie,
00:21:58.160 screwed up a bunch of times
00:21:59.160 and then we invented
00:22:00.800 the fire extinguisher.
00:22:02.260 We invented cars,
00:22:04.740 oopsie,
00:22:05.440 and invented the seat belt.
00:22:06.480 But with more powerful technology
00:22:08.520 like nuclear weapons,
00:22:11.320 synthetic biology,
00:22:13.540 super intelligence,
00:22:14.900 we don't want to learn
00:22:15.700 from mistakes.
00:22:16.360 That's a terrible strategy.
00:22:18.020 We instead want to have
00:22:18.900 a safety engineering mentality
00:22:20.420 where we plan ahead
00:22:22.500 and get things right
00:22:24.560 the first time
00:22:25.260 because that might be
00:22:26.360 the only time we have.
00:22:28.200 Let's talk about
00:22:28.920 the title of the book.
00:22:29.880 The title is Life 3.0.
00:22:31.880 And what you're
00:22:34.240 bringing in here
00:22:35.240 is really
00:22:35.940 a new definition
00:22:37.460 of life.
00:22:38.100 At least it's a
00:22:39.460 non-biological
00:22:40.820 definition of life.
00:22:42.580 How do you think
00:22:43.220 about life
00:22:44.660 and the three stages
00:22:46.060 you lay out?
00:22:47.420 Yeah, this is my
00:22:48.040 physicist perspective
00:22:49.620 coming through here
00:22:50.760 being a scientist.
00:22:52.740 Most definitions of life
00:22:54.120 that I found
00:22:54.640 in my son's textbooks,
00:22:56.540 for example,
00:22:57.080 involve all sorts
00:22:57.920 of biospecific stuff
00:22:59.360 like it should have cells.
00:23:00.600 But I'm a physicist
00:23:02.780 and I don't think
00:23:05.020 that there is
00:23:05.400 any secret sauce
00:23:06.480 in cells
00:23:07.240 or for that matter
00:23:07.920 even carbon atoms
00:23:08.920 that are required
00:23:11.160 to have something
00:23:12.800 that deserves
00:23:13.380 to be called life.
00:23:15.180 From my perspective,
00:23:16.200 it's all about
00:23:16.820 information processing,
00:23:18.000 really.
00:23:18.560 So I give this
00:23:19.640 much simpler
00:23:20.380 and broader
00:23:20.840 definition of life
00:23:21.820 in the book.
00:23:22.880 It's a process
00:23:23.500 that's able to
00:23:24.280 retain its own complexity
00:23:26.280 and reproduce
00:23:27.820 while biological life
00:23:29.540 meets that definition.
00:23:31.580 But there's no reason
00:23:33.300 why future advanced
00:23:35.640 self-reproducing AI systems
00:23:37.620 shouldn't qualify
00:23:39.200 as well.
00:23:40.660 And if you take
00:23:41.260 that broad point of view
00:23:42.240 of what life is,
00:23:43.500 then it's actually
00:23:44.100 quite fun
00:23:44.640 to just take a big step
00:23:46.020 back and look at
00:23:46.840 the history of life
00:23:48.300 in our cosmos.
00:23:49.580 13.8 billion years ago,
00:23:51.520 our cosmos was lifeless,
00:23:53.660 just a boring quark soup.
00:23:55.720 And then gradually,
00:23:57.140 we started getting
00:23:57.900 what I call
00:23:59.100 life 1.0,
00:24:00.760 where both the hardware
00:24:01.920 and the software
00:24:02.780 of the life
00:24:04.140 was evolved
00:24:06.340 through Darwinian evolution.
00:24:08.560 So for example,
00:24:09.120 if you have a little
00:24:10.240 bacterium swimming around
00:24:12.440 in a Petri dish,
00:24:14.240 it might have
00:24:15.000 some sensors
00:24:16.120 that read off
00:24:17.160 the sugar concentration
00:24:18.260 and some flagella
00:24:19.360 and a very simple
00:24:20.940 little software algorithm
00:24:23.040 that's running
00:24:23.540 that says that
00:24:24.520 if the sugar concentration
00:24:26.980 in front of me
00:24:27.620 is higher than
00:24:28.180 the back of me,
00:24:28.700 then keep spinning
00:24:29.360 the flagella
00:24:29.860 in the same direction,
00:24:31.100 go to where
00:24:31.580 the sweets are,
00:24:32.920 whereas otherwise,
00:24:33.860 reverse direction
00:24:34.840 of that flagellum
00:24:35.660 and go somewhere else.
00:24:38.460 That bacterium,
00:24:40.260 even though it's
00:24:40.700 quite successful,
00:24:41.340 it can't learn
00:24:41.940 anything in life.
00:24:43.480 It can only,
00:24:44.460 as a species,
00:24:45.100 learn over generations
00:24:46.500 through natural selection.
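As a rough illustration (a sketch of my own, not something from the book), the kind of fixed rule Max describes could be written in a few lines of code. The point is that the rule never changes during the bacterium's own life; only evolution, acting across generations, can rewrite it.

```python
# A minimal sketch of the "Life 1.0" chemotaxis rule described above.
# The behaviour is hard-coded; nothing here is updated by experience.

def flagella_direction(sugar_ahead: float, sugar_behind: float, spin: int) -> int:
    """Return the flagellar spin direction: +1 keeps swimming forward,
    -1 reverses so the bacterium heads somewhere else."""
    if sugar_ahead > sugar_behind:
        return spin    # sugar is increasing: keep going toward the sweets
    return -spin       # sugar is decreasing: reverse direction
```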
00:24:49.500 Whereas we humans,
00:24:50.860 I count as life 2.0
00:24:52.360 in the book,
00:24:53.460 we have,
00:24:53.980 we're still by and large
00:24:55.220 stuck with the hardware
00:24:56.160 that's been evolved,
00:24:57.860 but the software
00:24:59.380 we have in our minds
00:25:00.560 is largely learned
00:25:02.100 and we can reinstall
00:25:02.980 new software modules.
00:25:04.060 Like if you decide
00:25:05.480 you want to learn French,
00:25:06.780 well,
00:25:07.900 you take some French courses
00:25:09.200 and now you can speak French.
00:25:10.620 If you decide
00:25:11.140 you want to go to law school
00:25:12.140 and become a lawyer,
00:25:13.420 suddenly now you have
00:25:14.280 that software module installed
00:25:15.580 and it's this ability
00:25:17.240 to do our own software upgrades,
00:25:20.260 to design our own software,
00:25:22.280 which has enabled us humans
00:25:24.340 to take control
00:25:26.300 of this planet
00:25:26.900 and become the dominant species
00:25:28.180 and have so much impact.
00:25:30.380 Life 3.0
00:25:31.300 would be the life
00:25:32.560 that ultimately
00:25:33.420 breaks all its
00:25:35.100 Darwinian shackles
00:25:37.320 by being able
00:25:38.320 to not only design
00:25:39.260 its own software
00:25:40.500 like we can
00:25:41.540 to a large extent,
00:25:42.140 but also swap out
00:25:43.340 its own hardware.
00:25:44.420 Yeah,
00:25:44.680 we can do that
00:25:45.340 a little bit
00:25:45.840 we humans.
00:25:46.520 So maybe we're life 2.1,
00:25:48.400 we can put in
00:25:48.960 an artificial pacemaker,
00:25:50.300 an artificial knee,
00:25:51.760 cochlear implants,
00:25:53.080 stuff like that.
00:25:54.180 But there's nothing
00:25:54.980 we can do right now
00:25:56.220 that would give us
00:25:57.000 suddenly
00:25:57.920 a thousand times
00:25:59.380 more memory
00:26:00.540 or let us think
00:26:02.180 a million times faster.
00:26:03.840 Whereas if you are
00:26:05.120 like
00:26:06.600 the super intelligent
00:26:08.060 computer Prometheus
00:26:09.520 we talked about,
00:26:10.500 there's nothing whatsoever
00:26:11.720 preventing you
00:26:12.720 from doing
00:26:13.620 all of those things.
00:26:15.240 And that's obviously
00:26:15.840 a huge jump.
00:26:18.880 But I think
00:26:19.980 we should talk
00:26:20.980 about some of these
00:26:21.560 fundamental terms here
00:26:23.060 because this distinction
00:26:24.400 between hardware
00:26:25.160 and software
00:26:25.860 is, I think,
00:26:28.500 confusing for people
00:26:29.440 and it's certainly
00:26:30.660 not obvious
00:26:31.700 to someone
00:26:32.640 who hasn't thought
00:26:33.220 a lot about this
00:26:34.020 that the analogy
00:26:36.040 of computer hardware
00:26:38.080 and software
00:26:38.640 actually applies
00:26:40.380 to biological systems
00:26:42.320 or in our case
00:26:43.920 the human brain.
00:26:45.340 So I think
00:26:46.440 you need to define
00:26:47.500 what software is
00:26:49.920 in this case
00:26:50.480 and how it relates
00:26:51.580 to the physical world.
00:26:53.740 What is computation
00:26:54.740 and how is it
00:26:56.900 that thinking
00:26:57.980 about what atoms do
00:27:00.480 can conserve the facts
00:27:02.880 about what minds do?
00:27:05.840 Yeah, these are really
00:27:06.440 important foundational
00:27:07.360 questions you asked.
00:27:08.680 If you just look
00:27:09.680 at a blob of stuff
00:27:11.320 at first
00:27:12.180 it seems almost
00:27:13.540 nonsensical
00:27:14.180 to ask
00:27:14.660 whether it's
00:27:15.440 intelligent or not.
00:27:16.880 Yet, of course,
00:27:18.160 if you look
00:27:18.660 at your loved one
00:27:19.480 you would agree
00:27:20.880 that they are intelligent.
00:27:22.540 And in the old days
00:27:23.680 people by and large
00:27:24.960 assumed that
00:27:25.520 the reason that
00:27:26.200 some blobs of stuff
00:27:28.120 like brains
00:27:29.260 were intelligent
00:27:30.220 and other blobs
00:27:31.360 of stuff
00:27:31.660 like watermelons
00:27:32.340 were not
00:27:32.780 was because
00:27:33.720 there was some sort
00:27:34.380 of non-physical
00:27:36.240 secret sauce
00:27:37.040 in the watermelon
00:27:37.660 that was different.
00:27:38.340 Now, of course,
00:27:39.640 as a physicist
00:27:40.120 I look at the watermelon
00:27:41.320 and I look at my wife's head
00:27:43.200 and in both cases
00:27:44.040 I see a big blob
00:27:45.120 of quarks
00:27:45.680 of comparable size.
00:27:47.760 It's not even that
00:27:48.760 there are different
00:27:49.140 kinds of quarks
00:27:49.920 they're both up quarks
00:27:50.880 and down quarks
00:27:51.620 and there's some
00:27:52.140 electrons in there.
00:27:53.320 So what makes
00:27:55.060 my wife intelligent
00:27:56.180 compared to the watermelon
00:27:57.740 is not the stuff
00:27:59.500 that's in there
00:28:00.100 it's the pattern
00:28:00.880 in which it's arranged.
00:28:02.500 And if you start to ask
00:28:03.960 what does it mean
00:28:04.580 that a blob of stuff
00:28:05.480 can remember
00:28:06.940 compute
00:28:08.300 and learn
00:28:09.300 and perceive
00:28:10.440 experience
00:28:11.780 these sort of properties
00:28:13.700 that we associate
00:28:14.420 with our human minds
00:28:15.500 right
00:28:15.760 then for each one
00:28:16.900 of them
00:28:17.320 there's a clear
00:28:18.580 physical answer
00:28:19.900 to it
00:28:20.280 for something
00:28:22.220 to be a useful
00:28:22.960 memory device
00:28:23.780 for example
00:28:24.220 it simply has to have
00:28:25.260 many different
00:28:26.580 stable
00:28:27.380 or long-lived states
00:28:28.780 like if you engrave
00:28:29.660 your wife's name
00:28:30.820 in a gold ring
00:28:33.560 it's still going to be
00:28:35.000 there a year later
00:28:35.760 if you engrave
00:28:37.100 Anika's name
00:28:38.560 in the surface
00:28:40.240 of a cup of water
00:28:41.260 it'll be gone
00:28:41.740 within a second
00:28:42.340 so that's a useless
00:28:43.140 memory device.
00:28:45.100 What about computation?
00:28:46.960 A computation
00:28:47.400 is simply what happens
00:28:49.600 when a system
00:28:52.720 is designed in such a way
00:28:53.260 that the laws
00:28:54.460 of physics
00:28:54.940 will make it
00:28:55.560 evolve its memory state
00:28:56.960 from one state
00:28:58.640 that you might call
00:28:59.240 the input
00:28:59.640 into some other state
00:29:00.980 that you might call
00:29:02.480 the output
00:29:03.480 our computers today
00:29:05.780 do that
00:29:06.960 with a very particular
00:29:07.940 kind of architecture
00:29:09.460 with integrated circuits
00:29:10.600 and electrons
00:29:11.240 moving around
00:29:11.880 in two dimensions
00:29:12.560 our brains
00:29:13.780 do it
00:29:14.700 with a very different
00:29:15.920 architecture
00:29:16.800 with neurons
00:29:18.340 firing
00:29:18.820 and causing
00:29:19.480 other neurons to fire
00:29:20.380 but you can prove
00:29:21.080 mathematically
00:29:21.540 that any computation
00:29:22.360 you can do
00:29:22.960 with one of those systems
00:29:23.780 you can also implement
00:29:25.220 with the other
00:29:26.220 so the computation
00:29:27.000 sort of takes on
00:29:28.200 a life of its own
00:29:28.920 which
00:29:29.180 doesn't depend
00:29:30.440 really on the
00:29:31.000 substrate
00:29:31.380 it's in
00:29:32.660 so for example
00:29:33.520 if you imagine
00:29:34.080 that you're some
00:29:34.740 future
00:29:35.840 highly intelligent
00:29:37.240 computer
00:29:38.060 game character
00:29:39.460 that's conscious
00:29:40.380 you would have
00:29:41.440 no way of knowing
00:29:42.420 whether you were
00:29:43.520 running on a
00:29:44.060 Windows machine
00:29:44.780 or an Android phone
00:29:46.560 or a Mac laptop
00:29:47.500 because
00:29:48.020 all you're aware
00:29:49.540 of is
00:29:49.860 how the information
00:29:51.420 in that program
00:29:55.120 is behaving
00:29:56.100 not this underlying
00:29:57.460 substrate
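To make that definition concrete, here is a minimal sketch (my illustration, not the book's): a computation is just a mapping from an input memory state to an output memory state, and the same mapping can be realized by transistors, neurons, or anything else. NAND is the textbook example of a universal building block.

```python
# A minimal sketch of computation as substrate-independent state evolution:
# a rule that maps an input memory state to an output memory state.

def nand(a: int, b: int) -> int:
    """NAND is a classic universal gate: any finite computation can be
    built by wiring copies of it together."""
    return 0 if (a and b) else 1

def xor(a: int, b: int) -> int:
    # The same abstract computation, composed entirely from NAND gates.
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

# The mapping from input state (a, b) to output state xor(a, b) is what the
# computation *is*; whether it runs on silicon or neurons is irrelevant.
assert [xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 1, 1, 0]
```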
00:29:57.880 and finally
00:29:58.820 learning
00:29:59.860 which is one
00:30:01.520 of the most
00:30:01.940 intriguing
00:30:02.300 aspects of
00:30:02.860 intelligence
00:30:03.280 is a system
00:30:05.120 where
00:30:05.520 the computation
00:30:06.720 itself
00:30:07.400 can start
00:30:09.180 to change
00:30:09.780 to be better
00:30:10.500 suited to
00:30:11.260 whatever goals
00:30:12.500 have been put
00:30:13.040 into the system
00:30:13.620 so our brains
00:30:14.820 we're beginning
00:30:16.100 to gradually
00:30:16.580 understand
00:30:17.080 how
00:30:17.400 the neural
00:30:18.560 network
00:30:19.020 in our head
00:30:20.080 starts to adjust
00:30:22.000 the coupling
00:30:24.000 between the neurons
00:30:24.620 in such a way
00:30:25.100 that the computation
00:30:25.800 it actually does
00:30:26.540 is better
00:30:28.360 at surviving
00:30:29.380 on this planet
00:30:29.980 and winning
00:30:31.740 that baseball
00:30:33.180 game
00:30:33.580 or whatever
00:30:34.220 else we're
00:30:34.780 trying to
00:30:35.160 accomplish
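As a rough illustration of that last point (again a sketch, not anything from the book): "learning" is what happens when the couplings themselves are adjusted by experience so that the computation gets better at a goal. Here a single artificial neuron learns the OR function by nudging its weights after each mistake.

```python
# Illustrative sketch: learning as gradually adjusting the couplings (weights)
# between units so that the computation the system performs improves at a goal.

def neuron(w, b, x):
    # The computation: fire (1) if the weighted inputs exceed the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # the goal: OR
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                            # repeated experience
    for x, target in data:
        error = target - neuron(w, b, x)       # how wrong was the output?
        w = [wi + lr * error * xi for wi, xi in zip(w, x)]
        b += lr * error                        # adjust the couplings

assert all(neuron(w, b, x) == t for x, t in data)
```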
00:30:35.560 so
00:30:36.480 come back
00:30:38.080 to your
00:30:38.280 original question
00:30:39.600 what's the hardware
00:30:40.500 here and what's
00:30:41.060 the software
00:30:41.500 I'm calling
00:30:43.320 everything
00:30:43.720 hardware
00:30:44.380 that's made
00:30:45.020 of elementary
00:30:46.280 particles
00:30:46.820 so basically
00:30:47.540 stuff
00:30:48.120 is the hardware
00:30:49.340 whereas information
00:30:50.520 is made
00:30:51.340 of bits
00:30:51.760 as the basic
00:30:53.160 building block
00:30:54.160 and
00:30:55.240 the bits
00:30:56.700 reside in
00:30:57.480 the pattern
00:30:58.100 in which
00:30:58.540 the hardware
00:30:59.760 is organized
00:31:01.460 so for example
00:31:02.300 if you look
00:31:03.100 at your own
00:31:04.420 body
00:31:04.780 you feel like
00:31:05.500 you're the same
00:31:06.020 person
00:31:06.480 that you were
00:31:07.460 20 years ago
00:31:08.500 but actually
00:31:09.280 almost all
00:31:11.220 your quarks
00:31:11.220 and electrons
00:31:11.980 have been swapped
00:31:12.620 out
00:31:12.980 in fact
00:31:14.100 the water
00:31:15.340 molecules
00:31:15.820 in your body
00:31:17.000 get replaced
00:31:17.580 pretty regularly
00:31:18.700 right
00:31:19.040 so
00:31:19.500 why do you
00:31:20.400 still feel
00:31:20.840 like the
00:31:21.140 same guy
00:31:21.680 it's because
00:31:22.940 the pattern
00:31:23.600 into which
00:31:24.240 your particles
00:31:25.700 are arranged
00:31:26.360 stays the same
00:31:27.620 that gets copied
00:31:29.140 it's not the hardware
00:31:31.460 that gets retained
00:31:32.840 it's the software
00:31:33.560 it's the patterns
00:31:34.280 same thing
00:31:35.080 if you have
00:31:35.600 life:
00:31:37.840 if you have
00:31:38.320 a bacterium
00:31:38.980 that splits
00:31:39.840 into two bacteria
00:31:40.680 you know
00:31:41.800 now there are
00:31:42.320 new atoms
00:31:43.240 there
00:31:43.560 but they're
00:31:44.840 arranged
00:31:45.580 in exactly
00:31:46.180 the same
00:31:46.800 sort of pattern
00:31:47.780 as the original
00:31:49.100 one was
00:31:49.680 so it's the
00:31:50.820 pattern
00:31:51.180 that's the
00:31:51.920 life
00:31:52.220 not the
00:31:53.140 particles
00:31:53.560 well
00:31:54.660 there's
00:31:55.300 two things
00:31:56.000 I'd like to
00:31:56.380 flag there
00:31:56.860 beyond
00:31:57.540 your having
00:31:58.300 compared
00:31:58.820 both of
00:31:59.280 our wives
00:31:59.840 favorably
00:32:00.780 to
00:32:01.300 watermelons
00:32:02.500 no offense
00:32:04.440 I love
00:32:04.800 watermelons
00:32:05.400 no one
00:32:06.160 will get
00:32:06.320 in trouble
00:32:06.560 for that
00:32:06.980 let's just
00:32:07.940 focus for a
00:32:08.680 second
00:32:08.920 on this
00:32:09.720 concept
00:32:10.180 of
00:32:10.520 substrate
00:32:11.200 independence
00:32:11.760 because
00:32:12.380 it's
00:32:12.900 again
00:32:13.260 it's
00:32:13.660 highly
00:32:14.020 non-intuitive
00:32:14.960 and in fact
00:32:16.620 the fact
00:32:17.320 that it's
00:32:17.700 non-intuitive
00:32:18.380 is something
00:32:19.480 that you
00:32:20.040 make much
00:32:21.360 of in the
00:32:22.020 book
00:32:22.300 in a
00:32:22.760 fairly
00:32:23.300 arresting
00:32:24.080 passage
00:32:24.780 the idea
00:32:25.960 is that
00:32:26.700 it is the
00:32:27.760 pattern
00:32:28.280 that suffices
00:32:30.120 to make
00:32:30.960 something a
00:32:31.800 computation
00:32:32.420 this pattern
00:32:33.880 can appear
00:32:34.760 in anything
00:32:35.540 that it can
00:32:36.500 appear in
00:32:37.080 in principle
00:32:37.700 so it could
00:32:39.040 appear in a
00:32:39.800 rainstorm
00:32:40.460 or a bowl
00:32:41.040 of oatmeal
00:32:41.480 or anything
00:32:42.940 that could
00:32:43.840 conserve
00:32:44.360 the same
00:32:45.140 pattern
00:32:45.560 and
00:32:46.600 there is
00:32:47.900 an additional
00:32:49.060 point you
00:32:49.740 made about
00:32:50.340 the universality
00:32:51.640 of computation
00:32:52.440 that a
00:32:53.460 system that
00:32:54.240 is sufficient
00:32:55.420 to compute
00:32:57.080 information
00:32:57.700 to this
00:32:58.260 degree
00:32:58.640 can be
00:32:59.420 implemented
00:32:59.820 in another
00:33:00.480 substrate
00:33:00.940 that would
00:33:02.560 suffice for
00:33:03.180 the same
00:33:03.900 computations
00:33:05.120 and therefore
00:33:05.520 for the same
00:33:06.400 range of
00:33:07.000 intelligence
00:33:07.540 this is the
00:33:08.580 basis
00:33:08.940 as you put
00:33:10.100 it for
00:33:10.520 why this is
00:33:11.340 so non-obvious
00:33:11.960 to us
00:33:12.400 by virtue
00:33:13.180 of introspection
00:33:14.040 because the
00:33:15.000 mind doesn't
00:33:16.600 feel like
00:33:18.260 mere matter
00:33:19.260 on your
00:33:20.300 account
00:33:20.660 because it
00:33:21.800 is substrate
00:33:22.300 independent
00:33:22.920 yeah I think
00:33:25.040 you summarized
00:33:25.600 it very well
00:33:26.340 there and it
00:33:26.760 might be helpful
00:33:27.300 to take another
00:33:28.160 example which is
00:33:28.860 even more
00:33:29.220 familiar
00:33:29.600 think of
00:33:30.560 waves
00:33:31.080 for a moment
00:33:32.180 we physicists
00:33:34.120 love studying
00:33:35.600 waves
00:33:36.040 and
00:33:37.340 we
00:33:38.660 can
00:33:39.400 figure out
00:33:40.860 all sorts
00:33:41.620 of interesting
00:33:42.000 things about
00:33:42.520 waves
00:33:42.840 from this
00:33:43.300 nerdy equation
00:33:44.140 I teach at
00:33:44.660 MIT called
00:33:45.160 the wave
00:33:45.520 equation
00:33:45.940 it teaches
00:33:47.120 us that
00:33:47.540 waves
00:33:47.880 attenuate
00:33:49.020 like the
00:33:49.600 inverse
00:33:49.840 square of
00:33:50.220 the distance
00:33:50.660 it teaches
00:33:51.240 us exactly
00:33:51.800 how waves
00:33:52.540 bend when
00:33:53.240 they go
00:33:53.560 through doors
00:33:54.160 how they
00:33:54.580 bounce off
00:33:55.160 of walls
00:33:55.700 all sorts
00:33:56.400 of other
00:33:56.640 good stuff
00:33:57.120 yet we
00:33:58.580 can use
00:33:59.020 this wave
00:33:59.400 equation
00:33:59.740 without even
00:34:00.660 knowing what
00:34:01.400 the wave
00:34:01.920 is a wave
00:34:02.500 in
00:34:02.900 it doesn't
00:34:03.940 matter if
00:34:04.360 it's helium
00:34:04.840 or oxygen
00:34:05.600 or
00:34:08.100 neon.
00:34:09.600 in fact
00:34:10.080 people
00:34:10.660 first figured
00:34:12.000 out this
00:34:12.460 wave
00:34:12.680 equation
00:34:13.080 before they
00:34:13.880 even knew
00:34:14.360 that there
00:34:14.740 were atoms
00:34:15.480 for sure
00:34:16.020 it's quite
00:34:16.680 remarkable
00:34:17.140 and all
00:34:18.320 the complicated
00:34:18.960 properties
00:34:19.400 of the
00:34:19.860 substance
00:34:20.260 get summarized
00:34:20.980 in just a
00:34:21.420 single number
00:34:22.040 which is the
00:34:22.540 speed of
00:34:22.860 those waves
00:34:23.340 nothing else
00:34:24.200 matters
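For reference, the "nerdy equation" being described here is the standard wave equation; as Max says, every property of the substance enters only through the single number c, the wave speed.

```latex
% The standard wave equation: all substrate details are summarized by
% the wave speed c.
\[
  \frac{\partial^{2} u}{\partial t^{2}} = c^{2}\,\nabla^{2} u
\]
```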
00:34:25.220 if you have
00:34:26.000 a wave
00:34:26.660 that's
00:34:27.360 traveling
00:34:27.720 across the
00:34:28.300 ocean
00:34:28.640 the water
00:34:30.920 molecules
00:34:31.320 actually
00:34:32.500 don't move with it;
00:34:32.500 they mostly
00:34:33.060 just bob
00:34:33.540 up and
00:34:33.820 down
00:34:34.140 yet the
00:34:34.720 wave
00:34:35.100 moves
00:34:35.500 and takes
00:34:36.100 on a
00:34:36.420 life
00:34:36.620 of its
00:34:36.880 own
00:34:37.100 so this
00:34:37.800 also
00:34:38.020 shows
00:34:38.940 that of
00:34:39.220 course
00:34:39.420 you can't
00:34:40.280 have a
00:34:40.600 wave
00:34:40.900 without a
00:34:42.240 substrate
00:34:42.560 you can't
00:34:43.620 have a
00:34:44.760 computation
00:34:45.280 or a
00:34:46.220 conscious
00:34:46.420 experience
00:34:46.920 without it
00:34:47.560 being in
00:34:48.120 something
00:34:48.640 but the
00:34:49.320 details of
00:34:50.040 the substrate
00:34:50.600 don't really
00:34:52.280 matter
00:34:52.600 and I
00:34:53.000 think
00:34:53.240 that is
00:34:55.020 the fundamental
00:34:55.540 explanation
00:34:56.080 for what
00:34:56.960 you eloquently
00:34:57.640 expressed
00:34:58.160 there
00:34:58.380 namely
00:34:58.960 why is
00:34:59.660 it that
00:35:00.240 our mind
00:35:02.480 subjectively
00:35:02.480 feels so
00:35:03.740 ethereal
00:35:04.380 and non-physical
00:35:05.560 it's precisely
00:35:06.480 because the
00:35:07.280 details of
00:35:07.880 the substrate
00:35:08.400 don't really
00:35:09.180 matter
00:35:09.640 very much
00:35:10.600 if you
00:35:11.840 as some
00:35:12.900 people
00:35:14.460 hope
00:35:15.420 can one
00:35:16.140 day upload
00:35:16.640 your mind
00:35:17.220 into a
00:35:17.640 computer
00:35:18.000 perfectly
00:35:18.960 then it
00:35:19.840 should
00:35:20.480 subjectively
00:35:21.020 feel exactly
00:35:21.940 the same
00:35:22.500 way
00:35:22.840 even though
00:35:24.240 you don't
00:35:24.560 even have
00:35:24.920 any carbon
00:35:25.320 atoms at
00:35:25.820 all now
00:35:26.240 and the
00:35:26.480 substrate
00:35:26.780 has been
00:35:27.140 completely
00:35:28.040 swapped out
00:35:29.600 you've
00:35:31.240 introduced
00:35:31.900 a few
00:35:32.200 fundamental
00:35:32.580 concepts
00:35:33.040 here
00:35:33.200 you've
00:35:33.380 talked
00:35:33.580 about
00:35:34.080 computation
00:35:35.160 as a
00:35:36.640 kind
00:35:36.960 of
00:35:37.360 input
00:35:38.460 output
00:35:38.880 characteristic
00:35:39.820 of
00:35:40.580 physical
00:35:41.480 systems
00:35:42.000 and
00:35:43.140 we're
00:35:44.240 in a
00:35:44.760 circumstance
00:35:45.140 where
00:35:45.800 it doesn't
00:35:47.140 matter
00:35:47.620 what
00:35:48.320 substrate
00:35:49.120 accomplishes
00:35:49.940 that
00:35:50.200 and then
00:35:51.340 there's
00:35:51.600 this
00:35:51.840 added
00:35:52.520 concept
00:35:53.300 of
00:35:53.620 the
00:35:54.000 universality
00:35:54.800 of
00:35:55.100 computation
00:35:55.620 but
00:35:56.360 then
00:35:56.500 you
00:35:56.640 also
00:35:57.120 in the
00:35:57.360 book
00:35:57.520 introduce
00:35:57.980 a
00:35:58.180 notion
00:35:58.420 of
00:35:58.600 universal
00:35:59.140 intelligence
00:35:59.980 and
00:36:00.980 intelligence
00:36:01.380 again
00:36:01.820 as you've
00:36:02.260 defined
00:36:02.700 is the
00:36:03.360 ability
00:36:03.660 to meet
00:36:04.320 complex
00:36:04.940 goals
00:36:05.420 what's
00:36:06.740 the word
00:36:07.020 universal
00:36:07.680 doing
00:36:08.160 in the
00:36:08.560 phrase
00:36:08.860 universal
00:36:09.400 intelligence
00:36:09.980 If you'd
00:36:16.840 like to
00:36:17.080 continue
00:36:17.380 listening to
00:36:17.840 this
00:36:18.000 conversation,
00:36:18.540 you'll
00:36:19.240 need to
00:36:19.520 subscribe
00:36:20.000 at
00:36:20.220 samharris.org.
00:36:21.140 Once you
00:36:22.180 do, you'll
00:36:22.540 get access
00:36:22.940 to all
00:36:23.340 full-length
00:36:23.820 episodes of
00:36:24.380 the
00:36:24.480 Making Sense
00:36:24.880 Podcast,
00:36:25.420 along with
00:36:26.260 other
00:36:26.420 subscriber-only
00:36:27.220 content,
00:36:27.660 including
00:36:28.520 bonus
00:36:28.900 episodes
00:36:29.500 and
00:36:30.000 AMAs
00:36:30.580 and the
00:36:31.140 conversations
00:36:31.620 I've been
00:36:31.940 having on
00:36:32.320 the Waking
00:36:32.620 Up app.
00:36:33.080 The
00:36:33.840 Making Sense
00:36:34.280 Podcast
00:36:34.660 is ad-free
00:36:35.560 and relies
00:36:36.440 entirely on
00:36:37.160 listener support,
00:36:37.820 and you can
00:36:38.800 subscribe now
00:36:39.580 at
00:36:40.080 samharris.org.