Making Sense - Sam Harris - September 20, 2018


#138 — The Edge of Humanity


Episode Stats

Length

49 minutes

Words per Minute

141.8

Word Count

7,072

Sentence Count

393

Misogynist Sentences

1

Hate Speech Sentences

11


Summary

Yuval Noah Harari is a world historian and a long-term meditator. His new book, 21 Lessons for the 21st Century, turns from the deep past and the far future to the present. In this episode, I talk to Yuval about how he thinks about the past and the present, why the past has something to teach us about the future, and his views on AI, automation, and other topics. We also discuss the importance of meditation for his intellectual life and its role in telling the difference between fiction and reality. The Making Sense podcast runs no ads and is made possible entirely through the support of subscribers; if you enjoy what we're doing here, please consider becoming one at samharris.org.


Transcript

00:00:00.000 Welcome to the Making Sense Podcast.
00:00:08.820 This is Sam Harris.
00:00:10.880 Just a note to say that if you're hearing this, you are not currently on our subscriber
00:00:14.680 feed and will only be hearing the first part of this conversation.
00:00:18.420 In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at
00:00:22.720 samharris.org.
00:00:24.060 There you'll find our private RSS feed to add to your favorite podcatcher, along with
00:00:28.360 other subscriber-only content.
00:00:30.540 We don't run ads on the podcast, and therefore it's made possible entirely through the support
00:00:34.640 of our subscribers.
00:00:35.900 So if you enjoy what we're doing here, please consider becoming one.
00:00:46.740 Today I am speaking with Yuval Noah Harari.
00:00:50.340 Yuval has a PhD in history from the University of Oxford, and he lectures at Hebrew University
00:00:55.960 in Jerusalem, where he specializes in world history.
00:01:00.380 His books have been translated into over 50 languages, and these books are Sapiens, A Brief
00:01:06.300 History of Humankind, Homo Deus, A Brief History of Tomorrow, and his new book, which we discuss
00:01:13.700 today, is 21 Lessons for the 21st Century.
00:01:18.840 Yuval is rather like me in that he spends a lot of time worrying out loud.
00:01:23.300 He's also a long-term meditator.
00:01:26.720 I don't know if there's a connection there.
00:01:28.580 There was so much to talk about.
00:01:30.180 There is much more in the new book than we touched, but we touched a lot.
00:01:36.500 We actually started talking about the importance of meditation for his intellectual life.
00:01:41.820 We talk about the primacy of stories, the need to revise our fundamental assumptions about
00:01:47.440 human civilization and how it works, the current threats to liberal democracy, what a world
00:01:54.640 without work might look like, universal basic income, the virtues of nationalism.
00:02:01.520 Yuval has some surprising views on that.
00:02:04.480 The implications of AI and automation, and several other topics.
00:02:08.760 So, without further delay, I bring you Yuval Noah Harari.
00:02:13.120 Thank you.
00:02:38.940 Thank you. Thank you. So, and thank you to Rivers Cuomo. That's amazing.
00:02:47.760 So, you've heard this from me before, if you've been to an event or listened to events on a
00:02:53.360 podcast, but I, so it may get old to hear, but it really doesn't get old to say, I can't tell you
00:02:58.460 what an honor it is to put a date on the calendar and have you all show up. I mean,
00:03:04.180 it's just astonishing to me that this happened. So, thank you.
00:03:08.940 And thank you to Yuval for coming out. It's my honor. It's amazing to collaborate with you.
00:03:15.520 Thank you.
00:03:19.720 So, Yuval, you have these books that just steamroll over all other books, and I know because I write
00:03:29.940 books. So, you wrote Sapiens, which is kind of about the deep history of, yes, with a few fans.
00:03:38.940 Which is really about the history of humanity. And then you wrote Homo Deus, which is about our far
00:03:44.640 future. And now you've written this book, 21 Lessons for the 21st Century, which is about the
00:03:50.780 present. I can't be the only one in your publishing world who notices that now you have nothing left to
00:03:55.880 write about. So, good luck with that career of yours. So, how do you describe what you do? Because
00:04:05.120 you're a historian. I mean, one thing that you and I have in common is that we have a reckless disregard
00:04:10.220 for the boundaries between disciplines. I mean, you just touch so many things that are not
00:04:15.340 straightforward history. How do you think about your intellectual career at this point?
00:04:20.940 Well, my definition of history is that history is not the study of the past. It's the study of change.
00:04:28.180 How things change. And yes, most of the time you look at change in the past. But in the end, all the
00:04:36.140 people who lived in the past are dead. And they don't care what you write or say about them.
00:04:42.940 So, if the past has anything to teach us, it should be relevant to the future and to the present
00:04:49.100 also. But you touch biology and the implications of technology. I follow the questions. And the
00:04:59.040 questions don't recognize these disciplinary boundaries. And as a historian, maybe the most
00:05:06.160 important lesson that I've learned as a historian is that humans are animals. And if you don't take
00:05:12.600 this very seriously into account, you can't understand history. Of course, I'm not a biologist.
00:05:19.560 I also know that humans are a very special kind of animal. If you only know biology, you will not
00:05:25.620 understand things like the rise of Christianity or the Reformation or the Second World War. So you
00:05:32.860 need to go beyond just the biological basis. But if you ignore this, you can't really understand
00:05:41.220 anything. Yeah. And the other thing we have in common, which gives you, to my eye, a very unique
00:05:48.020 slant on all the topics you touch, is an interest in meditation and a sense that our experiences in
00:05:54.320 meditation have changed the way we think about problems in the world and questions like just what
00:06:02.300 it means to live a good life or even whether the question of the meaning of life is an intelligible
00:06:09.200 one or a valid one or one that needs to be asked. How do you view the influence of the contemplative
00:06:17.700 life on your intellectual pursuits? I couldn't have written any of my books, either Sapiens or Homo Deus or 21
00:06:25.400 lessons without the experience of meditation, partly because of just what I learned about the human mind
00:06:33.900 from observing the mind, but also partly because you need a lot of focus in order to be able to
00:06:43.020 summarize the whole of history into like 400 pages. And meditation gives you this kind of ability
00:06:52.580 to really focus. I mean, my understanding of at least the meditation that I practice is that the
00:06:59.520 number one question is what is reality? What is really happening? To be able to tell the difference
00:07:07.800 between the stories that the mind keeps generating about the world, about myself, about everything,
00:07:15.980 and the actual reality. And this is what I try to do when I meditate. And this is also what I try to do
00:07:23.560 when I write books to help me and other people understand what is the difference between fiction
00:07:31.400 and reality. Yeah, yeah. And I want to get at that difference because you use these terms in slightly
00:07:38.780 idiosyncratic ways. So I think it's possible to either be confused about how you use terms like story
00:07:45.920 and fiction. For instance, just the way you talk about the primacy of fiction, the primacy of story,
00:07:54.180 the way in which our concepts that we think map onto reality don't really quite map onto reality,
00:08:02.300 right? And yet they're nonetheless important. That is, in a way that you don't often flag in your
00:08:10.540 writing a real meditator's eye view of what's happening here. I mean, it's not, it's like you're
00:08:16.460 giving people the epiphany that certain things are made up, like the concept of money, right? Like
00:08:23.080 the idea that we have dirty paper in our pocket that is worth something, right? That is a convention
00:08:28.140 that we've all agreed about. But it is a, it's an idea. It only works because we agree that it works.
00:08:35.100 But you, the way you use the word story and fiction rather often seems to denigrate these things a
00:08:44.440 little bit more than I'm tempted to do when I talk about it. I don't say that there is anything wrong
00:08:49.340 with it. Stories and fictions are a wonderful thing, especially if you want to get people to
00:08:55.900 cooperate effectively. You cannot have a global trade network unless you agree on money. And you can,
00:09:03.540 you cannot have people playing football or baseball or basketball or any other game unless you get
00:09:09.180 them to agree on rules that quite obviously we invented. They did not come from heaven. They did
00:09:16.460 not come from physics or biology. We invented them. And there is nothing wrong with people agreeing,
00:09:23.160 accepting, let's say for 90 minutes, the story of football, the rules of football, that if you score
00:09:29.540 a goal and this is the goal of the whole game and so forth, the problem begins only when people
00:09:36.960 forget that this is only a convention, this is only something we invented. And they start confusing it
00:09:46.060 with kind of, this is reality. This is the real thing. And in football, it can lead to people,
00:09:52.820 to hooligans beating up each other or killing people because of this invented game. And on a higher
00:10:00.000 level, it can lead to, you know, to world wars and genocides in the name of fictional entities like
00:10:07.500 gods and nations and currencies that we've created. Now, there is nothing wrong with these creations
00:10:15.600 as long as they serve us instead of us serving them. But wouldn't you acknowledge that there's a
00:10:23.480 distinction between good stories and bad stories? Yeah, certainly. The good stories are the ones that
00:10:30.740 really serve us, that help people, that help other sentient beings live a better life. I mean,
00:10:36.240 it's as simple as that. I mean, of course, in real life, it's much more complicated to know what will be
00:10:42.520 helpful and what not and so forth. But a good starting place is just to have this basic ability
00:10:49.040 to tell the difference between fiction and reality, between our creations and what's really out there,
00:10:58.000 especially when, for example, you need to change the story. Or a story which was very adapted to one
00:11:07.440 condition is less adapted to a new condition, which is, for example, what I think is happening now
00:11:14.240 with the story underlying liberal democracy, that it was probably one of the best stories ever
00:11:22.720 created by humanity. And it was very adapted to the conditions of the 20th century. But it is less
00:11:31.480 and less adapted to the new realities of the 21st century. And in order to kind of reinvent the
00:11:39.080 system, we need to acknowledge that to some extent, it is based on stories we have invented.
00:11:48.800 Right. But so when you talk about something like human rights being a story or a fiction,
00:11:54.400 that seems like a story or a fiction that shouldn't be on the table to be fundamentally
00:12:01.900 revised. Right. Like that's where people begin to worry that to describe these things as stories
00:12:07.680 or fictions is to suggest tacitly, if I don't think you do this explicitly, that all of this stuff is
00:12:15.900 made up and therefore it's all sort of on the same level. And yet there's clearly a distinction
00:12:22.240 between a distinction you make in your book between dogmatism and the other efforts we make to justify
00:12:29.680 our stories. Right. There's there are stories that are dogmatically asserted and religion has more
00:12:34.640 than its fair share of these. But there are political dogmas, there are tribal dogmas of all kinds,
00:12:39.920 you know, nationalism can be anchored to dogma. And the mode of asserting a dogma is to be
00:12:47.080 doing so without feeling responsible to counterarguments and demands for evidence and reasons
00:12:54.860 why. Whereas with something like human rights, we can tell an additional story about why we value
00:13:01.360 this convention. Right. Like it doesn't have to be a magical story. It doesn't have
00:13:06.080 to be that we were all imbued by our creator with these things. But we can talk for a long
00:13:13.280 time without saying it's just so to justify that convention. Yeah. I mean, human rights is
00:13:21.060 a particularly problematic and also interesting case. First of all, because it's our story. I mean,
00:13:27.360 we are very happy with you discrediting the stories of all kinds of religious fundamentalists and all
00:13:33.680 kinds of tribes somewhere and ancient people, but not our story. Don't touch that. It depends what you
00:13:39.640 mean by we. So I guess we, most of the people, I don't see anybody here. It could be just empty
00:13:45.880 chairs and then recordings of laughter. But I assume that the people here, most of them, this is our
00:13:53.280 story. The second thing is that we live in a moment when liberal democracy is under a severe attack.
00:14:01.740 And this was not so when I wrote Sapiens. I felt much freer writing these things back in 2011, 2012.
00:14:09.640 And now it's much more problematic. And yes, I find myself, one of the difficulties of living
00:14:16.280 right now as an intellectual, as a thinker, is that I'm kind of torn apart by the
00:14:24.800 imperative to explore the truth, to follow the truth wherever it leads me. And the political realities
00:14:32.880 of the present moment and the need to engage in in very important political battles. And this is one of
00:14:41.680 the costs, I think, of what is happening now in the world, that it restricts our ability or our freedom
00:14:52.400 to truly go deep and explore the foundations of our system. And I still feel the importance of doing it, of
00:15:06.260 questioning even the foundations of liberal democracy and of human rights, simply because I think that as we have
00:15:15.640 defined them since the 18th century, they are not going to survive the tests of the 21st century.
00:15:23.640 And it's extremely unfortunate that we have to engage in this two-front battle, that at the same moment, we have to defend these ideas from people who look at them from the perspective of nostalgic fantasies.
00:15:43.640 They don't even, they want to go back from the 18th century. And at the same time, we have to also go forward and think about what the new scientific discoveries and technological developments of the 21st century really mean for these core ideas. What do human rights mean when you are starting to have superhumans?
00:16:10.040 Do superhumans have superhuman rights? What does the right of freedom mean when we have now technologies that simply undermine the very concept of freedom?
00:16:25.540 We kind of, when we created this whole system, not we, somebody, back in the 18th and 19th century,
00:16:35.180 we gave ourselves all kinds of philosophical discounts of not really going deeply enough in some of the key questions. Like, what do humans really need? And we settled for answers like, just follow your heart.
00:16:52.940 Yeah.
00:16:53.640 And this was, this was good enough.
00:16:57.800 This is Joseph Campbell. I blame Joseph Campbell for follow your bliss.
00:17:01.140 No, but follow your heart. The voter knows best. The customer is always right. Beauty is in the eyes of the beholder.
00:17:08.300 All these slogans, they were kind of, of covering up from not engaging more deeply with the question of what is really human freedom and what do humans really need?
00:17:22.320 And for the, for the last 200 years, it was good enough.
00:17:28.000 But now to just follow your heart is becoming extremely dangerous and problematic when there are corporations and organizations and governments out there that for the first time in history can hack your heart.
00:17:44.820 And your heart might by now be a government agent and you don't even know it.
00:17:51.040 So telling people in 2018, just follow your heart is a much, much more dangerous advice than in 1776.
00:18:00.920 Yeah. So let's drill down on that circumstance.
00:18:04.340 So we have this claim that liberal democracy is one, under threat, and two, might not even be worth maintaining as we currently conceive it, given the technological changes that are upon us or will be upon us.
00:18:21.400 Well, it is worth maintaining. It's just becoming more and more difficult.
00:18:24.980 Well, presumably there are things about liberal democracy that are serious bugs and not features in light of the fact that, as you say, if it's all a matter of putting everything to a vote and we are all part of this massive psychological experiment where we're gaming ourselves with algorithms written by some people in this room to not only confuse us with respect to what's in our best interest,
00:18:53.620 but the very tool we would use to decide what's worth wanting is being hijacked.
00:19:01.720 It's one thing to be wrong about how to meet your goals.
00:19:05.240 It's another thing to have the wrong goals and not even know that.
00:19:09.300 It's hard to know where ground zero is for cognition and emotion if all of this is susceptible to outside influence,
00:19:19.180 which ultimately we need to embrace because there is a possibility of influencing ourselves in ways that open vistas of well-being and peaceful cooperation that we can't currently imagine or we can't see how to get to.
00:19:35.360 So it's not like we actually want to go back to when there was no, quote, hacking of the human mind.
00:19:40.460 Every conversation is an attempted hack of somebody else's mind, right?
00:19:44.260 So we're just getting, it's getting more subtle now.
00:19:47.360 Yeah, it's, you know, throughout history, other people and governments and churches and so forth,
00:19:54.980 they all the time tried to hack you and to influence you and to manipulate you.
00:20:01.280 They just weren't very good at it because humans are just so incredibly complicated.
00:20:07.940 And therefore, for most of history, this idea that I have an inner arena, which is completely free from external manipulation,
00:20:20.000 nobody out there can really understand what's happening within me.
00:20:24.680 How special you are.
00:20:25.980 And how special I am, and what I really feel and how I really think and all that, it was largely true.
00:20:32.940 And therefore, the belief in the autonomous self and in free will and so forth, it made practical sense.
00:20:42.580 Even if it wasn't true on the level of ultimate reality, on a practical level, it was good enough.
00:20:50.480 But however complicated the human entity is, we are now reaching a point when somebody out there can really hack it.
00:21:01.600 Now, they won't, it can never be done perfectly.
00:21:06.620 We are so complicated, I'm under no illusion that any corporation or government or organization can completely understand me.
00:21:16.260 This is impossible.
00:21:18.100 But the yardstick or the threshold, the critical threshold, is not perfect understanding.
00:21:25.060 The threshold is just better than me.
00:21:27.800 The key inflection point in history, in the history of humanity, is the moment when an external system can reliably, on a large scale,
00:21:40.500 understand people better than they understand themselves.
00:21:44.560 And this is not an impossible mission, because so many people don't really understand themselves very well.
00:21:51.660 No.
00:21:51.940 Similarly, with the whole idea of shifting authority from humans to algorithms.
00:22:01.760 So I trust the algorithm to recommend TV shows for me.
00:22:06.040 And I trust the algorithm to tell me how to drive from Mountain View to this place this evening.
00:22:12.700 And eventually I trust the algorithm to tell me what to study, and where to work, and whom to date, and whom to marry, and who to vote for.
00:22:23.180 And then people say, no, no, no, no, no, no.
00:22:25.680 That won't happen.
00:22:27.420 Because there will be all kinds of mistakes, and glitches, and bugs, and the algorithm will never know everything.
00:22:34.440 And it can't do it.
00:22:35.980 And if the yardstick is that, to trust the algorithm, to give authority to the algorithm, it needs to make perfect decisions, then yes, it will never happen.
00:22:48.620 But that's not the yardstick.
00:22:50.420 The algorithm just needs to make better decisions than me about what to study, and where to live, and so forth.
00:22:58.120 And this is not so very difficult, because as humans, we often tend to make terrible mistakes, even in the most important decisions in life.
00:23:08.960 Yeah, yeah.
00:23:13.100 I promise this will be uplifting at some point.
00:23:18.680 So let's linger on the problem of the precariousness of liberal democracy.
00:23:24.760 And there's so many aspects to this.
00:23:27.480 Maybe just to add one thing to this precariousness, the idea that systems have to change, again, as a historian, this is obvious.
00:23:36.220 I mean, you couldn't really have a functioning liberal democracy in the Middle Ages, because you didn't have the necessary technology.
00:23:46.220 Liberal democracy is not this eternal ideal that can be realized anytime, anyplace.
00:23:54.140 Take the Roman Empire in the 3rd century, take the Kingdom of France in the 12th century, let's have a liberal democracy there.
00:24:02.400 No, you don't have the technology.
00:24:05.500 You don't have the infrastructure.
00:24:07.620 You don't have what it takes.
00:24:09.000 It takes communication.
00:24:10.400 It takes education.
00:24:11.820 It takes a lot of things that you just don't have.
00:24:14.660 And it's not just a bug of liberal democracy.
00:24:17.180 It's true of any socio-economic or political system, you could not build a communist regime in 16th century Russia.
00:24:27.520 I mean, you can't have communism without trains and electricity and radio and so forth.
00:24:34.060 Because in order to make all the decisions centrally, when the slogan is that each one works according to their ability and gets according to their need, so you work, they take everything, and then they redistribute according to needs, the key problem there is really a problem of data processing.
00:24:54.100 How do I know what everybody is producing, how do I know what everybody needs, and how do I shift the resources, taking wheat from here and sending it there?
00:25:07.460 In 16th century Russia, when you don't have trains, when you don't have radio, you just can't do it.
00:25:14.820 So as technology changes, it's almost inevitable that the socio-economic and political systems will change.
00:25:24.440 So we can't just hold on.
00:25:26.220 No, this must remain as it is.
00:25:28.860 The question is, how do we make sure that the changes are for the better and not for the worse?
00:25:35.340 Well, by that yardstick, now might be the moment to try communism in earnest.
00:25:41.600 We can do it now, right?
00:25:42.980 So you can all tweet that Yuval Noah Harari is in favor of communism.
00:25:47.840 I didn't say anything.
00:25:51.960 I mean, we had a moment in the sun that seemed, however delusionally, to be kind of outside of history.
00:26:00.600 You know, it's like the first moment in my life where I realized I was living in history was September 11, 2001.
00:26:06.320 But before that, it just seemed like people could write books with titles like The End of History.
00:26:12.360 And we sort of knew how this was going to pan out, it seemed.
00:26:17.340 Liberal values were going to dominate the character of a global civilization, ultimately.
00:26:23.660 We were going to fuse our horizons with people of however disparate background.
00:26:30.620 You know, someone in a village in Ethiopia was eventually going to get some version of the democratic, liberal notion of human rights and the primacy of rationality and the utility of science.
00:26:46.760 So religious fundamentalism was going to be held back and eventually pushed all the way back and irrational economic dogmas that had proved that they're merely harmful would be pushed back.
00:26:58.900 And we would find an increasingly orderly and amicable collaboration among more and more people.
00:27:07.100 I think, like I say, and we would get to a place where war between nation states would be less and less likely to the point where, by analogy, a war between states internal to a country like the United States, a war between Texas and Oklahoma just wouldn't make sense, right?
00:27:24.420 But how is that possibly going to come about?
00:27:26.340 Wait and see.
00:27:27.240 Yeah, exactly.
00:27:27.920 But now we seem to be in a moment where much of what I just said we were taking for granted can't be taken for granted.
00:27:36.240 There's a rise of populism.
00:27:37.720 There's a xenophobic strand to our politics that is just immensely popular, both in the U.S. and in Western Europe.
00:27:46.880 And this anachronistic nativist reaction, as you spell out in your most recent book, is being kindled by a totally understandable anxiety around technological change of this sort.
00:28:03.040 I mean, we're talking about people who are sensing, it's not the only source of xenophobia and populism, but there are many people who are sensing the prospect of their own irrelevance, given the dawn of this new technological age.
00:28:19.400 What are you most concerned about in this present context?
00:28:23.340 I think irrelevance is going to be a very big problem.
00:28:27.300 It already fuels much of what we see today with the rise of populism, is the fear and the justified fear of irrelevance.
00:28:37.680 If in the 20th century, the big struggle was against exploitation, then in the 21st century, for a lot of people around the world, the big struggle is likely to be against irrelevance.
00:28:49.040 And this is a much, much more difficult struggle.
00:28:51.760 So, a century ago, if you were the common person, you felt that there were all these elites that exploit me.
00:29:02.600 Now you increasingly feel, as a common person, that there are all these elites that just don't need me.
00:29:09.900 And that's much worse.
00:29:12.140 On many levels, both psychologically and politically, it's much worse to be irrelevant than to be exploited.
00:29:20.200 Let's spell that out.
00:29:21.560 Why is it worse?
00:29:23.940 First of all, because you're completely expendable.
00:29:43.540 If a century ago, you mount a revolution against exploitation, then you know that, if worse comes to worst, they can't shoot all of us because they need us.
00:29:43.540 Who's going to work in the factories?
00:29:45.640 Who's going to serve in the armies if they get rid of us?
00:29:49.180 That's a motivational poster I'm going to get printed up.
00:29:53.940 I'm not sure what the graphic is, but they can't shoot all of us.
00:29:58.380 If you're irrelevant, that's not the case.
00:30:02.360 You're totally expendable.
00:30:04.860 And again, we are often, our vision of the future is colored by the recent past.
00:30:10.660 The 19th and 20th century were the age of the masses, where the masses ruled.
00:30:17.080 And even authoritarian regimes, they needed the masses.
00:30:21.060 So you had these mass political movements like Nazism and like communism.
00:30:26.520 And even somebody like Hitler or like Stalin, they invested a lot of resources in building schools and hospitals and having vaccinations for children and sewage systems and teaching people to read and write.
00:30:44.040 Not because Hitler and Stalin were such nice guys, but because they knew perfectly well that if they wanted, for example, Germany to be a strong nation with a strong army and a strong economy,
00:30:58.040 they needed millions of people, common people, to serve as soldiers in the army and as workers in the factories and in the offices.
00:31:07.900 So some people could be expendable and could be scapegoats like the Jews, but on the whole, you couldn't do it to everybody.
00:31:15.840 You needed them.
00:31:17.220 But in the 21st century, there is a serious danger that more and more people will become irrelevant and therefore also expendable.
00:31:26.080 We already see it happening in the armies.
00:31:28.380 That whereas the leading armies of the 20th century relied on recruiting millions of common people to serve as common soldiers,
00:31:39.540 today the most advanced armies, they rely on much smaller numbers of highly professional soldiers
00:31:46.720 and increasingly on sophisticated and autonomous technology.
00:31:52.160 If the same thing happens in the civilian economy, then we might see a similar split in civilian society
00:32:00.620 where you have a relatively small, very capable professional elite relying on very sophisticated technology
00:32:10.380 and most people, just as they are already today militarily irrelevant,
00:32:16.960 they could become economically and politically irrelevant.
00:32:19.940 Now, that sounds like a real risk we're running, but the normal intuitions about what is scary about that
00:32:29.980 don't hold up given the right construal and expectations about human well-being.
00:32:36.720 So it's like we know what people are capable of doing when they're irrelevant
00:32:42.560 because aristocrats have done that for centuries.
00:32:46.160 I mean, they're people who have not had to work in every period of human history
00:32:49.460 and they had a fine old time, you know, shooting pheasant and inventing weird board games
00:32:54.020 and then if you add to that some more sophisticated way of finding well-being,
00:33:01.920 you know, so if we taught people, you know, stoic philosophy and how to meditate and good sports
00:33:07.640 and it's nowhere written that life is only meaningful if you are committed to something you would only do
00:33:17.760 because someone's paying you to do it, right?
00:33:20.280 Definitely. I mean, there is a worst case and a best case scenario.
00:33:24.200 In the best case scenario, people are relieved of all the difficult, boring jobs that nobody really wants to do,
00:33:33.020 but you do it because you need the money and you're relieved of that
00:33:36.900 and the enormous profits of the automation revolution are shared between everybody
00:33:43.080 and you can spend your time, your leisure time, on exploring yourself, developing yourself,
00:33:51.440 doing art or meditating or playing sports or developing communities.
00:33:56.540 There are wonderful scenarios that can be realized.
00:33:59.940 There are also some terrible scenarios that can be realized.
00:34:04.460 I mean, I don't think there is anything inevitable.
00:34:09.820 I mean, the technology, the technological revolution, which is just beginning right now,
00:34:14.960 it can go in completely different directions.
00:34:18.700 Again, if you look back at the 20th century,
00:34:22.020 then you see that with the same technology of trains and electricity and radio,
00:34:26.260 you can build a communist dictatorship or a fascist regime or a liberal democracy.
00:34:32.600 The trains don't care.
00:34:34.680 They don't tell you what to do with them,
00:34:37.020 and they can be used for anything you can use them for.
00:34:41.420 They don't object.
00:34:43.120 And it's the same with AI and biotechnology and all the current technological inventions.
00:34:49.820 We can use them to build really paradise or hell.
00:34:52.280 The one thing that is certain is that we are going to become far more powerful than ever before,
00:34:59.600 far more powerful than we are now.
00:35:01.640 We are really going to acquire divine abilities of creation,
00:35:06.400 in some sense even greater abilities than what was traditionally ascribed to most gods from Zeus to Yahweh.
00:35:17.040 If you look, for instance, at the creation story in the Bible,
00:35:20.720 the only things that Yahweh managed to create are organic entities.
00:35:25.960 And we are now on the verge of creating the first inorganic entities after 4 billion years of evolution.
00:35:33.920 So in this sense, we are even on the verge of outperforming the biblical God in creation.
00:35:40.960 And we can do so many different things with that.
00:35:45.060 Some of them can be extremely good.
00:35:47.040 Some of them can be extremely bad.
00:35:49.380 This is why it's so important to have these kinds of conversations.
00:35:54.820 Because this is maybe the most important question that we are facing.
00:35:59.040 What to do with these powers?
00:36:01.020 Yeah.
00:36:01.220 So what norms or stories or conventions or fictions, concepts, ideas, do you think stand in the way of us taking the right path here?
00:36:12.960 I mean, we've sort of alluded to it without naming it.
00:36:16.280 Let's say we could all agree that universal basic income was the near-term remedy for some explosion of automation and irrelevance.
00:36:27.040 You look skeptical about that.
00:36:29.060 Yeah, I have two difficulties with universal basic income, which is universal and basic.
00:36:34.700 Income is fine.
00:36:36.120 But universal and basic, they are ill-defined.
00:36:40.580 Most people, when they speak about universal basic income, they actually have in mind national basic income.
00:36:48.740 They think in terms, okay, we'll tax Google and Facebook in California and use that to pay unemployment benefits or to give free education to unemployed coal miners in Pennsylvania and unemployed taxi drivers in New York.
00:37:05.060 The real problem is not going to be in New York.
00:37:08.420 The real problem, the greatest problem, is going to be in Mexico, in Honduras, in Bangladesh.
00:37:13.920 And I don't see an American government taxing corporations in California and sending the money to Bangladesh to pay unemployment benefits there.
00:37:24.240 And this is really the automation revolution.
00:37:29.440 They're clapping to stop us from paying.
00:37:32.940 Those are the libertarians in the audience.
00:37:37.100 We've built, over the last few generations, a global economy and a global trade network.
00:37:44.480 And the automation revolution is likely to unravel the global trade network and hit the weakest links, the hardest.
00:37:53.020 So you will have enormous new wealth, enormous new wealth created here in San Francisco and Silicon Valley.
00:38:02.100 But you can have the economies of entire countries just collapse completely because what they know how to do, nobody needs that anymore.
00:38:12.340 And we need a global solution for this.
00:38:16.160 So universal, if by universal you mean global, taking money from California and sending it to Bangladesh, then yes, this can work.
00:38:25.440 But if you mean national, it's not a real answer.
00:38:28.220 And the second problem is with basic.
00:38:30.580 How do you define what are the basic needs of human beings?
00:38:34.880 Now, in a scenario in which a significant proportion of people no longer have any jobs, and they depend on this universal basic income or universal basic services,
00:38:50.420 whatever they get, they can't go beyond that.
00:38:55.080 This is the only thing they're going to get.
00:38:56.900 Then who defines what is their basic needs?
00:39:02.080 What is basic education?
00:39:04.140 Is it just literacy or also coding or everything up to PhD or playing the violin?
00:39:10.580 Who decides?
00:39:11.720 And what is basic healthcare?
00:39:14.280 Is it just, I mean, if you're looking 50 years to the future and you see genetic engineering of your children and you see all kinds of treatments to extend life,
00:39:24.100 is this the monopoly of a tiny elite, or is this part of the universal basic package?
00:39:33.980 And who decides?
00:39:35.600 So it's a first step.
00:39:37.920 The discussion we have now about universal basic income is an important first step.
00:39:43.020 But we need to go much more deeply into understanding what we actually mean by universal and by basic.
00:39:50.700 Right.
00:39:51.520 Well, so let's imagine that we begin to extend the circle, coincident with this rise in affluence.
00:40:00.620 And because on some level, if the technology is developed correctly, we are talking about pulling wealth out of the ether, right?
00:40:11.260 So automation and artificial intelligence, there's more, the pie is getting bigger.
00:40:16.040 And then the question is how generously or wisely we will share it with the people who are becoming irrelevant because we don't need them for their labor anymore.
00:40:25.320 Let's just, let's say we get better at that than we currently are.
00:40:30.140 But I mean, you can imagine that we will be fast to realize that we need to take care of the people in our neighborhood, you know, in San Francisco.
00:40:38.380 And we will be slower to realize we need to take care of the people in Somalia.
00:40:43.700 But maybe we'll just, these lessons will be hard.
00:40:46.860 One, we'll realize if we don't take care of the people in Somalia, a refugee crisis, unlike any we've ever seen, will hit us in six months, right?
00:40:56.080 So there'll be some completely self-serving reason why we need to eradicate famine or some other largely economic problem elsewhere.
00:41:06.920 But presumably we can be made to care more and more about everyone, again, if only out of self-interest.
00:41:15.540 What are the primary impediments to our doing that?
00:41:21.740 Human nature.
00:41:22.760 It is possible.
00:41:26.240 It's just very difficult.
00:41:28.440 I think we need for a number of reasons to develop global identities, a global loyalty, a loyalty to the whole of humankind and to the whole of planet Earth.
00:41:41.500 So this is a story that becomes so captivating that it supersedes other stories that seem to say, Team America.
00:41:49.820 Not abolishes them.
00:41:50.320 I don't think we need to abolish all nations and cultures and languages and just become this homogeneous gray goo all over the planet.
00:41:58.220 No, you can have several identities and loyalties at the same time.
00:42:02.260 People already do it now.
00:42:04.140 They had it throughout history.
00:42:05.320 I can be loyal to my family, to my neighborhood, to my profession, to my city, and to my nation at the same time.
00:42:14.360 And then suddenly there are conflicts, say, between my loyalty to my business and my loyalty to my family.
00:42:20.760 So I have to think hard.
00:42:22.280 Sometimes I prefer the interests of the family.
00:42:24.580 Sometimes I prefer the interests of the business.
00:42:27.460 So, you know, that's life.
00:42:29.120 We have these difficulties in life.
00:42:30.760 It's not always easy.
00:42:31.580 So I'm not saying let's abolish all other identities, and from now on we are just citizens of the world.
00:42:37.880 But we can add this kind of layer of loyalty to the previous layers.
00:42:44.760 And this, you know, people have been talking about it for thousands of years.
00:42:48.040 But now it really becomes a necessity, because we are now facing three global problems, which are the most important problems of humankind.
00:43:00.320 And it should be obvious to everybody that they can only be solved on a global level, through global cooperation.
00:43:07.380 These are nuclear war, climate change, and technological disruption.
00:43:11.040 It should be obvious to anybody that you can't solve climate change on a national level.
00:43:19.320 You can't build a wall against rising temperatures or rising sea levels.
00:43:26.520 No country, even the United States or China, no country is ecologically independent.
00:43:33.280 There are no longer independent countries in the world, if you look at it from an ecological perspective.
00:43:38.680 Similarly, when it comes to technological disruptions, the potential dangers of artificial intelligence and biotechnology should be obvious to everybody.
00:43:50.620 You cannot regulate artificial intelligence on a national level.
00:43:55.340 If there is some technological development you are afraid of, like developing autonomous weapon systems, or like doing genetic engineering on human babies,
00:44:07.300 then if you want to regulate this, you need cooperation with other countries.
00:44:12.680 Because like the ecology, also science and technology, they are global.
00:44:18.860 They don't belong to any one country or any one government.
00:44:22.920 So if, for example, the United States bans genetic engineering on human beings,
00:44:30.460 it won't prevent the Chinese or the Koreans or the Russians from doing it.
00:44:34.940 And then a few years down the line, if the Chinese are starting to produce superhumans by the thousands,
00:44:41.660 the Americans wouldn't like to stay behind.
00:44:43.980 So they will break their own ban.
00:44:45.880 Again, the only way to prevent a very dangerous arms race in the fields of AI and biotechnology is through global cooperation.
00:44:57.660 Now, it's going to be very difficult, but I don't think it's impossible.
00:45:01.820 I actually gain a lot of hope from seeing the strength of nationalism.
00:45:07.360 Okay, so that's totally counterintuitive, because everything you just said, in the space provided,
00:45:15.860 there's only one noun that solves the problem, which is world government on some level.
00:45:22.400 We don't need a single emperor or government.
00:45:25.800 You can have good cooperation even without a single emperor.
00:45:29.160 Then we need some other tools by which to cooperate, because we have, you know,
00:45:34.160 in a world that is as politically fragmented as ours into nation states,
00:45:39.080 all of which have their domestic political concerns and their short time horizons.
00:45:44.140 So you're talking about global problems and long-term problems
00:45:47.780 that can only be solved through global cooperation and long-term thinking.
00:45:53.660 And we have political systems that are insular and focused on time horizons
00:45:59.860 that don't exceed four or, in the best case, six years.
00:46:03.760 And then we have the occasional semi-benevolent dictatorship
00:46:07.180 that can play the game slightly differently.
00:46:09.700 So what is the solution, if not just a fusing of political apparatus at some point in the future?
00:46:15.380 No, we certainly need to go beyond the national level
00:46:18.640 to a level when we have real trust between different countries,
00:46:25.280 of the kind you see, for example, still in the European Union.
00:46:30.660 If you take the example of having a ban on developing autonomous weapon systems.
00:46:36.840 So if the Chinese and the Americans today try to sign an agreement banning killer robots,
00:46:43.460 the big problem there is trust.
00:46:46.280 How do you really trust the other side?
00:46:48.640 To live up to the agreement.
00:46:51.440 AI is, in this sense, much worse than nuclear weapons.
00:46:54.800 Because with nuclear weapons, it's very difficult
00:46:57.100 to develop nuclear weapons in complete secrecy.
00:47:01.760 People are going to notice.
00:47:04.120 But with AI, there are all kinds of things you can do in secret.
00:47:08.620 And the big question is, how can we trust them?
00:47:11.440 And at present, there is no way that the Chinese and the Americans, for example,
00:47:16.140 are really going to be able to trust one another.
00:47:20.360 Even if they sign an agreement, every side will say, yes, we are good guys.
00:47:24.480 We don't want to do it.
00:47:25.740 But how can we really be sure that they are not doing it?
00:47:29.440 So we have to do it first.
00:47:30.480 But if you think about, for example, France and Germany,
00:47:35.560 despite the terrible history of these two countries,
00:47:39.060 a much worse history than the history of the relations between China and the US,
00:47:44.520 if today the Germans come to the French and they tell the French,
00:47:49.740 trust us, we don't have some secret laboratory in the Bavarian Alps
00:47:54.880 where we develop killer robots in order to conquer France,
00:47:58.680 the French will believe them.
00:48:00.720 And the French have good reason to believe them.
00:48:02.920 They are really trustworthy in this.
00:48:05.280 And if the French and Germans manage to reach this situation,
00:48:10.680 I think it's not hopeless, also for the Chinese and the Americans.
00:48:14.960 So what explains that difference?
00:48:17.180 Because it is a shocking fact of history that you can take these time slices
00:48:23.100 that are 40, 50 years apart,
00:48:26.260 where you have the attempted rise of the thousand-year Reich,
00:48:31.780 where Germany is the least trustworthy nation anyone could conceive of,
00:48:36.740 the most power-hungry, the most militaristic.
00:48:39.560 You could say the same about Japan at that moment.
00:48:41.420 And then, fast forward a few decades,
00:48:44.900 and we have what's, I guess it's always vulnerable to some change,
00:48:50.540 but we have a seemingly, truly durable basis of trust.
00:48:56.420 What is, as a historian, what accomplished that magic,
00:49:00.540 and why is it hard to just reverse-engineer that
00:49:04.560 with respect to Russia or China or any other seeming adversary?
00:49:09.160 Well, it's just a lot of hard work.
00:49:10.420 In the case of the Germans, what you could say about them is they are very...
00:49:14.860 If you'd like to continue listening to this conversation,
00:49:17.660 you'll need to subscribe at samharris.org.
00:49:20.380 Once you do, you'll get access to all full-length episodes of the Making Sense podcast,
00:49:24.460 along with other subscriber-only content,
00:49:26.800 including bonus episodes and AMAs,
00:49:29.480 and the conversations I've been having on the Waking Up app.
00:49:32.320 The Making Sense podcast is ad-free
00:49:34.120 and relies entirely on listener support.
00:49:36.400 And you can subscribe now at samharris.org.