#385 — AI Utopia
Episode Stats
Words per Minute
167.31406
Summary
Nick Bostrom is a professor at the University of Oxford, where he is the founding director of the Future of Humanity Institute. He is the author of many books, one of which, Superintelligence, alerted many of us to the problem of AI alignment. His most recent book is Deep Utopia: Life and Meaning in a Solved World. In this conversation, we discuss the twin concerns of AI alignment failure and a possible failure to make progress on superintelligence. We discuss why smart people don't perceive the risk of superintelligent AI, the ongoing problem of governance, path dependence, and what Nick calls "knotty problems." And we build a bridge between the two books that takes us through the present and into the future: Nick's thoughts on the current state of AI and what could possibly go right in an AI future. We don't run ads on the podcast, and it is therefore made possible entirely through the support of our subscribers. If you enjoy what we're doing here, please consider becoming a subscriber at samharris.org; non-subscribers will only hear the first part of this conversation.
Transcript
00:00:00.000
Welcome to the Making Sense Podcast. This is Sam Harris. Just a note to say that if
00:00:11.640
you're hearing this, you're not currently on our subscriber feed, and will only be
00:00:15.580
hearing the first part of this conversation. In order to access full episodes of the Making
00:00:19.840
Sense Podcast, you'll need to subscribe at samharris.org. There you'll also find our
00:00:24.960
scholarship program, where we offer free accounts to anyone who can't afford one.
00:00:28.340
We don't run ads on the podcast, and therefore it's made possible entirely through the support
00:00:32.860
of our subscribers. So if you enjoy what we're doing here, please consider becoming one.
00:00:44.980
Today I'm speaking with Nick Bostrom. Nick is a professor at the University of Oxford,
00:00:50.820
where he is the founding director of the Future of Humanity Institute. He is the author of many
00:00:57.780
books. Superintelligence is one of them that I've discussed on the podcast, which alerted many
00:01:04.320
of us to the problem of AI alignment. And his most recent book is Deep Utopia: Life and Meaning in a
00:01:12.120
Solved World. And that is the topic of today's conversation. We discuss the twin concerns of
00:01:20.000
alignment failure and also a possible failure to make progress on superintelligence. The only thing
00:01:26.860
worse than building computers that kill us is a failure to build computers that will help us solve
00:01:33.220
our existential problems as they appear in the future. We talk about why smart people don't
00:01:38.420
perceive the risk of superintelligent AI, the ongoing problem of governance, path dependence, and what Nick
00:01:46.620
calls knotty problems. The idea of a solved world. John Maynard Keynes' predictions about human
00:01:53.660
productivity. The uncanny valley issue with the concept of utopia. The replacement of human labor
00:02:00.920
and other activities. The problem of meaning and purpose. Digital isolation and plugging into something
00:02:07.540
like the matrix. Pure hedonism. The asymmetry between pleasure and pain. Increasingly subtle distinctions
00:02:15.300
in experience. Artificial purpose. Altering human values at the level of the brain. Ethical changes in the absence
00:02:22.840
of extreme suffering. What Nick calls our cosmic endowment. The ethical confusion around long-termism.
00:02:30.100
Possible problems with consequentialism. The ethical conundrum of dealing with small probabilities of
00:02:36.580
large outcomes. And other topics. Anyway, I think Nick has a fascinating mind. As always,
00:02:44.300
it was a great pleasure to speak with him. And now I bring you Nick Bostrom.
00:02:54.760
I am here with Nick Bostrom. Nick, thanks for joining me again.
00:02:59.860
So, you wrote a very gloomy book about AI that got everyone worried some years ago. This was
00:03:06.100
Superintelligence, which we spoke about in the past. But you have now written a book about all that could
00:03:11.480
possibly go right in an AI future. And that book is Deep Utopia: Life and Meaning in a Solved World,
00:03:19.820
which I want to talk about. It's a fascinating book. But let's build a bridge between the two books
00:03:24.860
that takes us through the present. So, perhaps you can just catch us up. What are your thoughts about
00:03:31.480
the current state of AI? And is there anything that's happened since you published Superintelligence
00:03:38.120
that has surprised you? Well, a lot has happened. I think one of the surprising things is just how
00:03:45.860
anthropomorphic current-level AI systems are. The idea that we have systems that can talk
00:03:53.220
long before we have generally super-intelligent systems. I mean, we've had these for years already.
00:03:59.520
That was not obvious 10 years ago. Otherwise, I think things have unfolded pretty much in accordance
00:04:07.280
with expectation, maybe slightly faster. Were you surprised? The most surprising thing,
00:04:12.680
from my point of view, is that much of our talk about AI safety seemed to presuppose,
00:04:18.560
even explicitly, that there would be a stage where the most advanced AIs would not be connected to the
00:04:27.580
internet, right? There's this so-called air gapping of this black box from the internet. And then
00:04:33.680
there'd be a lot of thought around whether it was safe to unleash into the wild. It seems that we
00:04:41.120
have blown past that landmark and everything that's being developed is just de facto connected to more
00:04:47.580
or less everything else. Well, I mean, it's useful to connect it. And so far, the systems that have
00:04:52.380
been developed don't appear to pose this kind of existential risk that would occasion these more
00:04:59.600
extreme security measures like air gapping. Now, whether we will implement those when the time
00:05:07.340
comes for it, I guess, remains to be seen. Yeah. I mean, it just seems that the safety ethos
00:05:13.400
doesn't quite include that step at the moment. I mean, maybe they have yet more advanced systems that
00:05:20.980
they wouldn't dream of connecting to anything, or maybe they'll reach that stage. But it just seems that,
00:05:25.320
far before we really understand the full capacities of a system, we're building them
00:05:32.220
already connected to much of what we care about. Yeah. I don't know exactly how it works during the
00:05:39.540
training of the next generation models that are currently in development, whether there is any
00:05:46.820
form of temporary air gapping until sort of a given capability level can be assessed. On the one hand,
00:05:53.320
I guess you want to make sure that you also learn the kinds of capabilities that require internet
00:05:59.060
access, the ability to look things up, for example. But I guess, in principle, you could
00:06:04.400
train some of those by having a locally stored copy of the internet. Right. And so I think maybe it's
00:06:10.260
something that you only want to implement when you think that the risks are actually high enough
00:06:14.600
to warrant the inconvenience and the limitation of, at least during training and testing phases,
00:06:20.960
doing it in an air-gapped way. Have your concerns about the risks
00:06:26.100
changed at all? I mean, in particular, the risk of AI alignment or the failure of AI alignment?
00:06:32.840
I mean, the macro picture, I think, remains more or less the same. Obviously, there's a lot more
00:06:37.980
granularity now. We are 10 years farther down the track, and we can see in much more specificity what
00:06:46.680
the leading-edge models look like and what the field in general looks like. But I'm not sure
00:06:53.060
the p(doom) has changed that much. I think maybe my emphasis has shifted a little bit from
00:07:02.280
alignment failure, the narrowly defined problem of not solving the technical challenge of scalable
00:07:10.520
alignment to focus a little bit more on the other ways in which we might succumb to an existential
00:07:17.260
catastrophe. There is the governance challenge, broadly construed, and then the challenge of digital
00:07:22.660
minds with moral status that we might also make a hash of. And of course, also the potential
00:07:29.840
risk of failing ever to develop superintelligence, which I think in itself would constitute
00:07:35.080
plausibly an existential catastrophe. Well, that's interesting. So it really is a high-wire act:
00:07:41.100
we have to develop it exactly as needed to be perfectly aligned with our well-being in an
00:07:48.640
ongoing way. And to develop it in an unaligned way would be catastrophic, but to not develop it in the
00:07:54.720
first place could also be catastrophic given the challenges that we face that could only be solved
00:07:59.400
by superintelligence. Yeah. And we don't really know exactly how hard any of these
00:08:05.000
problems are. I mean, we've never had to solve them before. So I think most of the
00:08:10.580
uncertainty in how well things will go is less uncertainty about the degree to
00:08:16.620
which we get our act together and rally, although there's some uncertainty about that, and more
00:08:21.280
uncertainty in just the intrinsic difficulty of these challenges. So there is a degree of fatalism there.
00:08:28.080
Like if the problems are easy, we'll probably solve them. If they are really hard, we'll almost
00:08:32.140
certainly fail. And maybe there will be some intermediate difficulty level, in which case,
00:08:36.860
in those scenarios, it might make a big difference to what degree we do our best in the
00:08:44.020
coming months and years. Well, we'll revisit some of those problems when we talk about your new book.
00:08:49.940
But what do you make of the fact that some very smart people, people who were quite close to or even
00:08:57.280
responsible for the fundamental breakthroughs in deep learning that have given us the progress we've
00:09:04.580
seen of late? I'm thinking of someone like Geoffrey Hinton. What do you make of the fact that certain of
00:09:10.340
these people apparently did not see any problem with alignment or the lack thereof? And they just,
00:09:19.300
they only came to perceive the risk that you wrote about 10 years ago in recent months. I mean, maybe
00:09:27.980
it was a year ago that Hinton retired and started making worried noises about the possibility that
00:09:35.660
we could build superintelligence in a way that would be catastrophic. You know, I gave a TED talk
00:09:42.080
on this topic, very much inspired by your book in 2016. Many of us have been worried for a decade
00:09:46.920
about this. How do you explain the fact that someone like Hinton is just now having these
00:09:55.120
thoughts? Well, I mean, I'm more inclined to give credit there. I mean, it's particularly rare and
00:10:01.020
difficult to change one's mind or update as one gets older, and particularly if one is very distinguished
00:10:07.160
like Hinton is. I think that's more the surprising thing, rather than the failure
00:10:14.200
to update earlier. There are a lot of people who haven't yet really come to appreciate
00:10:18.640
just how immense the leap into superintelligence will be, and the risks associated with it.
00:10:25.460
But I mean, the thing that I find mystifying about this topic is that there's some extraordinarily
00:10:29.720
smart people who you can't accuse of not understanding the details, right? It's not an
00:10:37.280
intellectual problem. And they don't accept the perception of risk, or that there really is one;
00:10:44.060
in many cases, they just don't even accept that it's a category of risk that can be thought about.
00:10:50.900
And yet their counterarguments are so unpersuasive. I mean, insofar as they even have
00:10:57.640
counterarguments, it's just some kind of brute fact of a difference of intuition
00:11:03.820
that is very hard to parse. I mean, I'm thinking of somebody like David Deutsch, who you probably
00:11:08.360
know, the physicist at Oxford. Maybe he's revised his opinion. It's been a couple of years
00:11:13.100
since I've spoken to him about this, but I somehow doubt it. The last time we spoke, he was not worried
00:11:18.020
at all about this problem of alignment. And the analogy he drew at the time was that,
00:11:23.400
you know, we all have this problem. We have kids, you know, we have teenagers, and the
00:11:27.820
teenagers are not perfectly aligned with our view of reality. And yet we navigate that fine and they
00:11:33.060
grow up on the basis of our culture and our instruction and in continuous dialogue with us
00:11:38.640
and nothing gets too out of whack. And, you know, everything he says about this,
00:11:44.640
everything that keeps him so sanguine, is based explicitly on the claim that there's simply no way we can
00:11:52.000
be cognitively closed to what a superintelligence ultimately begins to realize and understand
00:11:59.020
and want to actuate. It's just a matter of us getting enough memory and enough,
00:12:03.600
you know, processing time to have a dialogue with such a mind. And then the conversation
00:12:11.140
sort of peters out without there actually being a clear acknowledgement of the analogies to, you know,
00:12:17.340
our relationship to lesser species, right? I mean, when you're in relationship to a species that
00:12:23.480
is much more competent, much more capable, much more intelligent than yours is, there's just
00:12:29.700
the obvious possibility of that not working out well. And as we have seen in the way
00:12:35.800
we have run roughshod over the interests of every other species on earth, the skeptics of this problem
00:12:41.540
just seem to think that there's something about the fact that we are inventing this technology that
00:12:47.340
guarantees that this relationship cannot go awry. And I've yet to encounter a deeper argument
00:12:53.420
for why that is guaranteed, or why it's in any way even likely.
00:13:00.240
Well, I mean, let's hope they are right.
00:13:05.160
I don't believe I have. I think I've only met him once, and that was a long time ago; I don't recall whether
00:13:11.760
this came up or not. So, but I mean, do you share my mystification after colliding with
00:13:18.540
many of these people? Yeah, well, I guess I'm sort of numb to it by now. I mean, you just take it for
00:13:25.320
granted that that's the way things are. Perhaps things seem the same from their perspective, like
00:13:31.200
these people running around with their pants on fire, being very alarmed. And we have this long
00:13:37.000
track record of technological progress being for the best. And like sometimes ways to try to control
00:13:44.840
things end up doing more harm than what they are protecting against. But yeah, it does seem
00:13:51.000
prima facie like something that has a lot of potential to go wrong. If you're introducing
00:13:56.440
the equivalent of a new species that is far more intelligent than Homo sapiens, even if it were a
00:14:02.680
biological species, that already would seem a little bit perilous, or at least worth being cautious about
00:14:08.500
as we did it. And here it's maybe even more different and much more sudden. And actually,
00:14:13.920
I think that that variable would sharpen up people's concerns. If you just said we're inventing a
00:14:21.060
biological species that is more intelligent and capable than we are and setting it loose,
00:14:26.540
it won't be in a zoo, it'll be out with us. I think the kind of wetness of that
00:14:33.000
invention would immediately alert people to the danger, or the possibility of danger. There's
00:14:41.080
something about the fact that it is a clear artifact, a non-biological artifact, that we are
00:14:45.880
creating that makes people think this is a tool, this isn't a relationship.
00:14:51.500
Yeah. I mean, in fairness, the fact that it is sort of a non-biological artifact also potentially gives
00:14:57.540
us a lot more levers of control in designing it. Like you can sort of read off every single
00:15:03.460
parameter and modify it and it's software and you have a lot of affordances with software that you
00:15:09.700
couldn't have with like a biological creation. So if we are lucky, it just means we have like this
00:15:14.800
precision control as we engineer this, you know, built in. Like maybe you think a biological
00:15:20.800
species might have sort of inbuilt predatory instincts and all of that, which need not be
00:15:26.460
present in the digital mind. Right. But if we're talking about an autonomous intelligence that exceeds
00:15:34.980
our own, there's something about the meanings of those terms, you know, autonomous, intelligent,
00:15:40.700
and more of it than we have that entails our inability to predict what it will ultimately do,
00:15:50.200
right? I mean, it can form instrumental goals that we haven't anticipated. That just
00:15:54.820
falls out of the very concept of having an independent intelligence. It puts you in
00:16:02.840
relationship to another mind, and we can leave the topic of consciousness aside for the
00:16:08.500
moment. We're simply talking about intelligence, and there's just something about that,
00:16:13.260
unless, again, we have complete control and can pull the plug at any moment, which becomes
00:16:20.320
harder to think about in the presence of something that is, as stipulated, more powerful and more
00:16:25.580
intelligent than we are. Again, I'm mystified that people simply don't
00:16:32.640
acknowledge the possibility of the problem. The argument never goes, oh yes, it's totally
00:16:38.480
possible that we could have this catastrophic failure of relationship to this independent
00:16:43.440
intelligence, but here's why I think it's unlikely, right? That's not the argument. The argument
00:16:48.260
is that merely worrying about this is a kind of perverse religious faith in something
00:16:54.460
that isn't demonstrable at all. Yeah. I mean, it's hard not to get religious in one way or another
00:17:02.160
when confronting such immense prospects and the possibility of much greater beings and how that
00:17:09.520
all fits in, which is like a big question. But I guess it's worth reflecting as well on what the alternative is.
00:17:15.940
So it's not as if the default course for humanity is just this kind of smooth highway with McDonald's
00:17:22.580
stations interspersed every four kilometers. Like it does look like things are a bit out of control
00:17:27.360
already from a global perspective. You know, we're inventing different technologies without much
00:17:33.200
plan, without much coordination. And, uh, maybe we've just mostly been lucky so far that we haven't
00:17:39.540
discovered one that is like so destructive that it destroys everything because, uh, I mean the
00:17:47.860
technologies we have developed, we've put them to use and if they are destructive, they've been used to
00:17:52.760
cause destruction. It's just that so far the worst destruction is kind of the destruction of one city
00:17:58.100
at a time. Yeah. This is your urn of invention argument that we spoke about last time. Yeah.
00:18:03.440
So there's that, where you could focus, say, on specific technological discoveries, but in parallel
00:18:09.440
with that, there are also these kinds of out-of-control dynamics. You could call it evolution, or
00:18:15.440
kind of a global geopolitical game-theoretic situation that is evolving, and our sort of
00:18:22.100
information system, the memetic drivers that have changed presumably quite a lot since we've developed
00:18:29.280
the internet and social media, and that is now driving human minds in various different directions.
00:18:34.100
And, you know, if we're lucky, they will make us like wiser and nicer, but, uh, there is no guarantee
00:18:39.700
that they won't instead create more polarization or, uh, addiction or other various kinds of, uh,
00:18:45.440
malfunctions in our collective mind. And so that's kind of the default course, I think. So, uh, yes,
00:18:51.500
I mean, AI will also be dangerous, but the relevant standard is, how much will it increase the
00:18:57.360
danger relative to just continuing to do what's currently being done? Actually, there's a metaphor
00:19:03.040
you use early in the book, the new book, Deep Utopia, which captures this wonderfully. And it's the
00:19:09.400
metaphor of the bucking beast. And I just want to read those relevant sentences because they bring home
00:19:15.020
the nature of this problem, which we tend not to think about in terms that are this vivid.
00:19:21.740
So you say that, quote, humanity is riding on the back of some chaotic beast of tremendous
00:19:27.600
strength, which is bucking, twisting, charging, kicking, rearing. The beast does not represent nature.
00:19:33.740
It represents the dynamics of the emergent behavior of our own civilization, the technology-mediated,
00:19:39.880
culture-inflected, game-theoretic interactions between billions of individuals, groups, and
00:19:45.800
institutions. No one is in control. We cling on as best we can for as long as we can, but at any
00:19:52.120
point, perhaps if we poke the juggernaut the wrong way or for no discernible reason at all,
00:19:57.240
it might toss us into the dust with a quick shrug or possibly maim or trample us to death.
00:20:02.280
Right. So we have all these variables that we influence in one way or another through culture
00:20:10.320
and through all of our individual actions. And yet on some basic level, no one is in control, and
00:20:18.820
the system is increasingly chaotic, especially given all of our
00:20:25.480
technological progress. And yes, so into this picture comes the prospect of building more and
00:20:33.920
more intelligent machines. And again, it's this dual-sided risk. There's the risk of
00:20:38.680
building them in a way that contributes to the problem, but there's the risk of failing to build
00:20:43.340
them and failing to solve the problems that might only be solvable in the presence of greater
00:20:48.660
intelligence. Yeah. So that certainly is one dimension of it. I think it would be kind of sad if
00:20:55.120
we never even got to roll the dice with a superintelligence because we just destroyed
00:21:01.020
ourselves before even that. That would be particularly ignominious, it seems.
00:21:07.560
Yeah. Well, maybe this is a place to talk about the concept of path dependence and what you call
00:21:14.680
knotty problems in the book. What do those two phrases mean?
00:21:19.060
Well, I mean, path dependence, I guess, means that the result depends sort of on how you got
00:21:28.380
there, and that the opportunities don't supervene just on the current state, but also on the
00:21:35.380
history; the history might make a difference. And knotty problems: basically there's
00:21:40.900
a class of problems that become automatically easier to solve as you get better technology. And then there's
00:21:47.060
another class of problems for which that is not necessarily the case and where the solution instead
00:21:53.220
maybe requires improvements in coordination. So for example, you know, maybe the problem of poverty
00:21:59.660
is getting easier to solve the more efficient, productive technology we have. You can grow more;
00:22:06.300
if you have, sort of, tractors, it's easier to keep everybody fed than if you have more primitive
00:22:11.100
technology. So the problem of starvation just gets easier over time as long as we make technological progress. But
00:22:17.060
say, the problem of war doesn't necessarily get easier just because we make technological progress.
00:22:24.300
In fact, in some ways, wars might become more destructive if we make technological progress.
00:22:29.220
Yeah, so can you explain the analogy to the actual knots in string?
00:22:34.740
Well, so the idea is, with knots that are just tangled in certain ways, if you just pull hard enough on the
00:22:40.660
ends of that, it kind of straightens out. But there might be other types of problems where, if you kind
00:22:46.340
of advance technologically, equivalently to tugging on the ends of the string, you end up with this
00:22:53.780
ineliminable knot. And sort of the more perfect the technology, the tighter that
00:22:59.940
knot becomes. Yeah. So say you have a kind of totalitarian system to start off with;
00:23:06.820
maybe then the more perfect technology you have, the greater the ability of the dictator to
00:23:14.340
maintain himself in power, using advanced surveillance technology, or maybe
00:23:19.620
anti-aging technology, or whatever you could do with perfect technology. Maybe it becomes a knot that
00:23:24.900
never goes away. And so in the ideal, if you want to end up with a kind of unknotted string,
00:23:30.100
you might have to resolve some of these issues before you get technological maturity.
00:23:35.620
Yeah. Which relates to the concept of path dependence. So let's actually talk about the happy
00:23:41.380
side of this equation, the notion of deep utopia and a solved world. What do you mean by a solved
00:23:46.900
world? One characterized by two properties. One is it has attained technological maturity, or some
00:23:56.100
good approximation thereof, meaning at least all the technologies we can already see are physically
00:24:02.820
possible have been developed. But then it has one more feature, which is that political and
00:24:09.460
governance problems have been solved to whatever extent those kinds of problems can be solved.
00:24:14.580
So imagine some future civilization with really advanced technology, and it's a generally
00:24:21.940
fair world that doesn't wage war and, you know, where people don't oppress one another, and
00:24:26.820
things are at least decent in terms of the political stuff. So that would be one way
00:24:31.540
of characterizing it. But another is to think of it as a world in which there's
00:24:37.300
a sense in which either all practical problems have already been solved, or, if there remain any
00:24:44.020
practical problems, they are anyway better worked on by AIs and robots. And so in some sense,
00:24:50.980
there might not remain any practical problems for humans to work out. The world is
00:24:57.060
already solved. And when we think about this, well, first it's interesting that there's this historical
00:25:01.780
prediction from John Maynard Keynes, which was surprisingly accurate given the fact that it's
00:25:06.660
a hundred years old. What did Keynes predict? He thought that productivity would
00:25:12.420
increase four to eightfold over the coming hundred years from when he was writing it. I think we are about
00:25:18.740
90 years since he wrote it now. And that seems to be on track. So that was the first part of his
00:25:24.980
prediction. And then he thought that the result of this would be that we would have a kind of
00:25:30.900
four-hour working week, a leisure society, that people would work much less because they could,
00:25:35.380
you know, get enough of all that they had before and more, even whilst working much less. If every hour
00:25:41.140
of work was, like, eight times as productive, you could work, you know,
00:25:45.700
four times less and still have two times as much stuff. He got that part wrong.
00:25:50.420
So he got that mostly wrong, although we do work less; working
00:25:56.420
hours have decreased. People take longer to enter the labor market.
00:26:01.940
There's more education. They retire earlier. There's more sort of maternity and paternity leave and, uh,
00:26:07.380
slightly shorter working hours, but nowhere near as much as he had anticipated.
00:26:11.300
I'm surprised. I mean, perhaps it's just a testament to my lack of economic understanding,
00:26:15.620
but I'm surprised that he wasn't an order of magnitude off, or more, in his prediction of
00:26:21.380
productivity, one way or the other. Given what he had to work with, you know, 90 years ago,
00:26:26.500
in terms of looking at the results of the industrial revolution and, uh, given all that's happened in
00:26:32.580
the intervening decades, it's surprising that his notion of where we would get as a civilization
00:26:38.660
in terms of productivity was at all close. Yeah. I mean, so those basic economic growth
00:26:44.900
rates of productivity growth haven't really changed that much. It's a little bit like Moore's law where
00:26:49.620
it's had, you know, a relatively steady doubling pace for a good long time now. And so I guess
00:26:55.300
he just extrapolated that, and that's how he got this prediction. So there's this, I think you touch on it
00:27:01.060
in the book, there's this strange distortion of our thinking when we think about
00:27:06.820
the utopia or the prospect of solving all of our problems. When you think about incremental
00:27:13.540
improvements in our world, all of those seem almost by definition good, right? I mean, if you were
00:27:19.620
talking about an improvement, right? So you're telling me you're going to cure cancer. Well, that's
00:27:23.300
good. But once you improve too much, right? If you cure cancer and heart disease and Alzheimer's,
00:27:30.260
and then even aging, and now we can live to be 2000, all of a sudden people's intuitions become
00:27:37.860
a little wobbly, and they feel that you've improved too much. And we almost have a kind of
00:27:42.340
uncanny valley problem for a future of happiness. It all seems too weird and in some ways
00:27:50.500
undesirable and even unethical, right? Like, I don't know if you know
00:27:55.220
the gerontologist Aubrey de Grey, who has made many arguments about, you know,
00:28:00.580
the ending of aging. And he ran into this whenever he would propose the idea of solving
00:28:05.460
aging itself as effectively an engineering problem. He was immediately met by opposition
00:28:11.940
of the sort that I just described. People, you know, find it to be unseemly and unethical
00:28:16.500
to want to live forever, to want to live to be a thousand. But then, when he would break
00:28:20.340
it down and say, well, okay, but let me just get this straight: do you think
00:28:24.020
curing cancer would be a good idea? And everyone of course would say yes. And what about heart
00:28:28.020
disease? And what about Alzheimer's? And everyone will sign up a la carte for every one of those
00:28:32.660
things on the menu. Even if you present literally everything that constitutes aging from that menu,
00:28:37.860
they want them all piecemeal, but comprehensively it somehow seems indecent and uncanny.
00:28:45.060
So, I mean, do you have a sense of utopia being almost a hard thing to sell,
00:28:52.420
were it achievable, that people still have strange ethical intuitions about it?
00:28:57.060
Yeah. So I don't really try to sell it, but more dive right into that counterintuitiveness
00:29:03.460
and awkwardness and, kind of, almost, the sense of unease that comes if you really
00:29:09.060
try to imagine what would happen if we made all these little steps of progress that everybody would
00:29:13.460
agree are good individually. And then you think through what that would actually produce. Then
00:29:18.980
there is a sense in which, at least at first sight, a lot of people would recoil from
00:29:24.820
that. And so the book doesn't try to sugarcoat that, but, like, let's really dive in
00:29:30.260
and think just how potentially repulsive and counterintuitive that condition of a solved world
00:29:35.140
would be, and not blink or look away; let's steer straight into that. And then
00:29:41.220
analyze what kinds of values you could actually have in this solved world. And, I mean,
00:29:46.020
I think I'm ultimately optimistic that as it were on the other side of that, there is something very
00:29:50.420
worthwhile, but it certainly would be, I think in many important ways, very different from the current
00:29:55.540
human condition. And there is a sort of paradox there that we're so busy making all these steps of
00:30:01.140
progress that we celebrate as we make them, but we rarely look at where this ends up if things go
00:30:08.660
well. And when we do, we kind of recoil. So, I mean,
00:30:14.260
you could cure the individual diseases and then you cure aging, but also other little practical
00:30:18.660
things, right? So you have, you know, your black-and-white television, then you have a color
00:30:23.060
television, then you have a remote control. You don't have to get up. Then you have a virtual
00:30:27.780
reality headset, and then you have a little thing that reads your brain. So you don't even have to
00:30:31.460
select what you want to watch. It kind of directly just selects programming based on what maximally
00:30:37.140
stimulates various circuits in your brain. And then, you know, maybe you don't even have that.
00:30:41.860
You just have something that directly stimulates your brain. And then maybe it doesn't stimulate
00:30:46.020
all of the brain, but just the pleasure center of your brain. And, as you think through,
00:30:49.460
as it were, these things taken to their optimum degree of refinement, it seems that
00:30:57.140
it's not clear what's left at the end of that process that would still be worth having.
00:31:01.380
But let's make explicit some of the reasons why people begin to recoil. I mean,
00:31:08.020
you just took us all the way, essentially, into the matrix, right? And then
00:31:12.900
we can talk about, I mean, you're talking about, you know, directly stimulating the brain so as to
00:31:16.980
produce non-veridical but otherwise desirable experiences. We'll probably end somewhere
00:31:22.980
near there, but on the way to all of that, there are other forms of dislocation. I mean,
00:31:28.580
just the fact that you're uncoupling work from the need to work in
00:31:34.020
order to survive, right? In a solved world. So let's just talk about that first
00:31:39.460
increment of progress, where we achieve such productivity that work becomes voluntary, right?
00:31:46.660
Or where we have to think of our lives as games or as works of art, where what we do each
00:31:52.980
day has no implication for whether or not we have, you know, sufficient economic purchase upon
00:31:58.980
the variables of our own survival. What's the problem there with unemployment or just purely
00:32:05.620
voluntary employment or not having a culture that necessarily values human work because all that work
00:32:12.100
is better accomplished by intelligent machines? How do you see that?
00:32:15.140
Yeah. So we can take it in stages, as it were, layers of the onion. So the outermost and most
00:32:21.860
superficial analysis would say, well, so we get machines that can do more stuff. So they automate
00:32:27.700
some jobs, but that just means we humans would do the other jobs that haven't been automated. And
00:32:33.540
we've seen transitions like this in the past, like 150 years ago, we were all farmers basically,
00:32:40.020
and now it's one or 2%. And so similarly in the future, maybe people, you know, will be Reiki
00:32:48.020
instructors or massage therapists or other things we haven't even thought of. And so, yes,
00:32:53.860
there will be some challenges; maybe there will be some unemployment and we need,
00:32:57.780
I don't know, unemployment insurance or retraining of people. And that's kind of often where the
00:33:04.180
discussion has started and ended so far in terms of considering the implications of this machine
00:33:12.340
intelligence era. I've noticed that the massage therapists always come out more or less the last
00:33:18.340
people standing in any of these thought experiments. But that might be a euphemism for some related
00:33:24.100
professions. I think the problem goes deeper than that because it's not just the current jobs that
00:33:32.900
could be automated, right? But like the new jobs that we could invent also could be automated.
00:33:37.940
If you really imagine AI that is as fully generally capable as the human brain and then presumably
00:33:45.060
robot bodies to go along with that. So all human jobs could be automated with some exceptions that might
00:33:52.740
be relatively minor, but are worth, I guess, mentioning in passing. So there are services or products
00:34:00.020
where we care not just about the functional attributes of what we're buying, but also about
00:34:04.340
how it was produced. So right now, some person might pay a premium for a trinket if it were made by
00:34:14.900
hand, maybe by an indigenous craftsperson, as opposed to in a sweatshop somewhere in Indonesia.
00:34:22.100
So you might pay more for it, even if the trinket itself is functionally equivalent,
00:34:26.260
because you care about the history and how it was produced. So similarly, if future consumers have
00:34:32.100
that kind of preference, it might create a niche for human labor, because only humans can make things
00:34:37.940
made by human. Or maybe people just prefer to watch human athletes compete rather than like robots,
00:34:44.020
even if the robots could run faster and box harder, etc. So that's like the footnote to that general
00:34:52.100
claim that everything could be automated. So that would be a more radical conception, then, of a leisure
00:34:57.860
society, where it's not just that we would retrain workers, but we would stop working altogether.
00:35:02.900
And in some sense, that's more radical, but it's still not that radical. We already have various groups
00:35:10.820
that don't work for a living. We have children, so they are economically completely useless,
00:35:16.820
but nevertheless often have very good lives. They, you know, run around
00:35:22.100
playing and inventing games and learning and having fun. So even though they are not economically
00:35:26.980
productive, their lives seem to be great. You could look at retired people. There, of course,
00:35:31.300
the situation is confounded by health problems that become more common at older ages. But if you take
00:35:38.340
a retired person who is in perfect physical and mental health, you know, they often have great lives.
00:35:45.060
So they maybe travel the world, play with their grandkids, watch television,
00:35:48.500
take their dog for a walk in the park, do all kinds of things. They often have great lives. And then
00:35:55.380
there are people who are independently wealthy who don't need to work for a living. You know,
00:36:00.500
some of those have great lives. And so maybe we would all be more like these categories,
00:36:06.980
all be like children. And that would undoubtedly require substantial cultural readjustment.
00:36:13.220
Like the whole education system presumably would need to change rather than training kids to become
00:36:19.780
productive workers who receive assignments and hand them in and do as they're told and sit at their
00:36:25.060
desks. You could sort of focus education more on cultivating the art of conversation, appreciation
00:36:33.060
for natural beauty, for literature, hobbies of different kinds, physical wellness. So
00:36:38.820
that would be a big readjustment, but... Well, you've already described many of the impractical
00:36:43.780
degrees that some of us have gotten, right? I mean, you know, I did my undergraduate degree
00:36:48.500
in philosophy. I forget what you did. Did you do philosophy or were you in physics?
00:36:52.660
I did a bunch of things. Yeah. Physics and philosophy and AI and stuff.
00:36:57.700
So, but I mean, you've described much of the humanities there. So yeah, it's funny to think of the
00:37:03.540
humanities as potentially the optimal education, I guess not the humanities circa 2024, given what's been
00:37:09.940
happening on college campuses of late, but some purified version of the humanities, like the
00:37:14.260
great books program at St. John's, say, as just the optimal education for a future wherein more or less
00:37:22.900
everyone is independently wealthy. Yeah. Or maybe one component of it. I think there's like,
00:37:27.620
you know, music appreciation, many different dimensions of a great life that don't all
00:37:32.980
consist of reading all the books, but it's definitely like there could be an element there.
00:37:37.220
Frisbee, yeah. But I think the problem goes deeper than that. So we can peel off another layer
00:37:43.780
of the onion, which is that when we consider the affordances of technological maturity, we realize it's
00:37:51.140
not just economic labor that could be automated, but a whole bunch of other activities as well.
00:37:58.820
So rich people today are often leading very busy lives. They have various projects they
00:38:04.660
are pursuing, et cetera, which they couldn't accomplish unless they actually put time and effort
00:38:09.460
into them themselves. But you can sort of think through the types of activities that people might
00:38:14.820
fill their leisure time with and think whether those would still make sense at technological
00:38:19.700
maturity. And I think for many of them, you can sort of cross them out or at least put a question
00:38:24.660
mark there. You could still do them, but they would seem a bit pointless because there would be
00:38:30.260
an easier way to accomplish their aim. So right now, some people are not just like shopping.
00:38:38.340
If you'd like to continue listening to this conversation, you'll need to subscribe at samharris.org.
00:38:44.020
Once you do, you'll get access to all full-length episodes of the Making Sense podcast.
00:38:48.660
The podcast is available to everyone through our scholarship program.
00:38:52.180
So if you can't afford a subscription, please request a free account on the website.
00:38:56.820
The Making Sense podcast is ad-free and relies entirely on listener support.