Making Sense - Sam Harris - June 23, 2020


#208 — Existential Risk


Episode Stats

Length

1 hour and 4 minutes

Words per Minute

169

Word Count

10,966

Sentence Count

470


Summary

Toby Ord is a philosopher at Oxford University working on the big-picture questions facing humanity. He has worked on the ethics of global poverty, is one of the founders of the effective altruism movement, and created Giving What We Can, whose members have pledged over 1.5 billion dollars to the most effective charities. His current research is on existential risk: the risks that threaten human extinction or the permanent, unrecoverable collapse of civilization, the subject of his new book The Precipice. In this episode, Sam Harris and Toby Ord discuss the long-term future of humanity, our moral biases with respect to distance in space and time, the psychology of effective altruism, feeling good versus doing good, possible blind spots in consequentialism, natural versus human-caused risk, asteroid impacts, nuclear war, pandemics, the potentially cosmic significance of human survival, population ethics, Derek Parfit (Toby's thesis advisor), the asymmetry between happiness and suffering, climate change, and other topics. The show has no sponsors; full episodes require a subscription at samharris.org, and listeners who can't afford one can request a free account, all of which are granted.


Transcript

00:00:00.000 Welcome to the Making Sense Podcast.
00:00:08.340 This is Sam Harris.
00:00:10.400 Just a note to say that if you're hearing this, you are not currently on our subscriber
00:00:14.280 feed and will only be hearing partial episodes of the podcast.
00:00:18.340 If you'd like access to full episodes, you'll need to subscribe at samharris.org.
00:00:22.980 There you'll find our private RSS feed to add to your favorite podcatcher, along with
00:00:27.580 other subscriber-only content.
00:00:30.000 And, as always, I never want money to be the reason why someone can't listen to the podcast.
00:00:34.980 So if you can't afford a subscription, there's an option at samharris.org to request a free
00:00:39.480 account, and we grant 100% of those requests.
00:00:42.720 No questions asked.
00:00:48.500 Okay, while the last episode was controversial, episode 207 on racism and police violence,
00:00:58.560 we have since released an annotated transcript of that episode, with links to relevant videos
00:01:06.080 and articles and data.
00:01:09.040 I've seen some response, some of it quite effusive in praise, and some of it outraged, which of
00:01:17.640 course I expected.
00:01:18.400 Many people also contacted me privately to convey their gratitude and full support, all the while
00:01:25.780 making it clear that they can't take such a position publicly.
00:01:29.700 And this is definitely a sign of the times that concerns me.
00:01:33.700 I'm talking about people who, in any sane society, should be able to have the courage of their
00:01:40.240 convictions. And some people thought it ironic, even hypocritical, for me to trumpet the value
00:01:47.540 of conversation in a solo podcast.
00:01:52.260 But the truth is, that podcast was just my way of starting my side of a public conversation.
00:01:59.260 I'm sure I will have proper conversations on this topic in future episodes, and I welcome
00:02:05.220 recommendations about who I should speak with, but given what I perceive to be the desperate
00:02:11.620 state of public irrationality at the moment, I wanted to say something at full length that
00:02:18.940 was relatively well-formulated and comprehensive, rather than just lurch into a conversation with
00:02:24.720 someone and just see what came of it.
00:02:27.780 Anyway, as I make clear in the podcast, that wasn't the final word on anything, apart from my
00:02:34.400 sense that intellectual honesty has to be the basis for any progress we make here.
00:02:39.740 And to that end, I will keep listening and reading and having conversations.
00:02:46.460 Another thing to clarify here, there are now two formats to the podcast.
00:02:51.100 And actually, there's three types of podcasts that fall into two categories.
00:02:58.920 The first is the regular podcast, which is generally an exploration of a single topic, and that is
00:03:05.420 usually with a guest, very often based on a book he or she has written.
00:03:10.000 But sometimes it's a solo effort, like my last podcast was.
00:03:14.180 And the aim in this standard format is to say something of more than topical interest.
00:03:22.640 These are podcasts that I hope if you listen to them two years from now, or even further in the
00:03:28.940 future, they would still be worth listening to. And if you're seeing these episodes online,
00:03:34.440 you'll see that they have a unique photo or piece of artwork associated with them, and they're
00:03:39.180 titled in some way to reflect their theme. And the second format, which I've piloted with Paul
00:03:45.660 Bloom and Caitlin Flanagan, but which I've also used for other guests recently, David Frum, Jonathan
00:03:52.340 Haidt, Andrew Yang, Yuval Noah Harari, this format aims to be more topical. It's not that we won't say
00:04:00.740 anything of lasting interest, but the goal is certainly to cover some events that are in the news,
00:04:06.360 and to not linger too long on any one topic. And these episodes are titled just with the date
00:04:13.080 of the broadcast. So I hope that clarifies any confusion out there. Once again, if you want to
00:04:19.780 get full episodes of the podcast, you need an account at samharris.org. And as there are no sponsors
00:04:25.620 for the show, the fact that people subscribe is what allows me to do this. So thank you all for your
00:04:31.960 support. Okay, and now for today's podcast. Today I'm speaking with Toby Ord. Toby is a philosopher at
00:04:42.540 Oxford University, working on the big picture questions that face humanity. He has focused on the
00:04:48.680 ethics of global poverty. He is one of the young founders of the effective altruism movement. I
00:04:55.680 previously had his colleague, Will MacAskill, on the podcast. And he created the online society,
00:05:02.960 Giving What We Can, which has gotten its members to pledge over 1.5 billion dollars to the most
00:05:08.680 effective charities. And his current research is on the risks that threaten human extinction or the
00:05:15.120 permanent collapse of civilization, otherwise known as existential risk. And Toby has advised the World
00:05:22.540 Health Organization, the World Bank, the World Economic Forum, the U.S. National Intelligence
00:05:28.060 Council, and the U.K. Prime Minister's Office. And most important, Toby is the author of the new book,
00:05:36.080 The Precipice: Existential Risk and the Future of Humanity. And it is an excellent book, which we cover
00:05:43.200 only in part in this conversation. But we cover a lot. We talk about the long-term future of humanity,
00:05:49.920 the moral biases that we all suffer with respect to distance in space and time, the psychology of
00:05:58.120 effective altruism, feeling good versus doing good, possible blind spots in consequentialism,
00:06:05.700 natural versus human-caused risk, the risk of asteroid impacts, nuclear war, pandemics,
00:06:12.900 the potentially cosmic significance of human survival, the difference between bad things and the absence
00:06:19.360 of good things, population ethics, Derek Parfit, who was Toby's thesis advisor,
00:06:27.560 the asymmetry between happiness and suffering, climate change, and other topics.
00:06:35.060 Needless to say, this is a conversation that stands a very good chance of being relevant for many years to
00:06:40.900 come, because our capacity to destroy ourselves is only increasing.
00:06:45.500 So, without further delay, I bring you Toby Ord.
00:06:55.480 I am here with Toby Ord. Toby, thanks for joining me.
00:06:59.440 Great to be here.
00:07:01.060 So, I'm very happy we finally got together. This has been a long time coming, and I knew I wanted to
00:07:06.580 speak with you even before your book came out, but your book has provided the perfect occasion.
00:07:11.820 The book is The Precipice: Existential Risk and the Future of Humanity, and it couldn't be
00:07:19.040 better timed in some way, except one of my concerns in this conversation is that people
00:07:25.100 have, without even thinking about it in these terms, something like existential risk fatigue,
00:07:31.100 given that we're dealing with this global pandemic, which is not in and of itself an existential risk,
00:07:36.640 as we'll talk about, but I've had a bunch of podcasts on topics related to this, like nuclear
00:07:43.180 war and other big-picture concerns that I felt have been sort of mistimed in the current moment.
00:07:51.980 And so, I delayed this conversation, and I feel like people have acclimated to, if not the new normal,
00:07:58.440 a long emergency of some kind.
00:08:00.380 And this now strikes me as the perfect time to be having this conversation, because,
00:08:06.340 as I'm sure we'll talk about, this really seems like a stress test and a dress rehearsal
00:08:13.560 for much bigger problems that may yet come. And so, it's really an opportunity for us to learn the
00:08:21.920 right lessons from a bad but ultimately manageable situation. And perhaps to start here,
00:08:28.620 you can just introduce yourself, and I will have introduced you properly before, but how do
00:08:34.860 you describe your work as a philosopher and what you have focused on up until this moment? And
00:08:40.300 perhaps, how do you see the current context in which to think about these ideas?
00:08:45.800 Yeah, I'm a philosopher at Oxford University, where I specialize in ethics. Although, I didn't
00:08:52.740 always do philosophy. I used to be in science, specializing in computer science and artificial
00:08:57.440 intelligence. But I was really interested in questions, big picture questions, which is not
00:09:03.860 that fashionable in ethics, but questions about really the, what are the biggest issues facing
00:09:09.760 humanity? And what should we do about them? Thinking about humanity over the really long run,
00:09:16.280 and really global issues. So, I found that within philosophy is a place where one can ask these kinds
00:09:22.300 of questions. And I did quite a bit of work on global poverty in the past, as one of the really
00:09:29.940 big pressing issues facing humanity. And then I've moved in more recently, to really be specializing
00:09:36.500 in existential risk, which is the study of risks of human extinction, or other irrevocable losses of
00:09:45.680 the future. For example, if there was some kind of collapse of civilization, that was so great,
00:09:50.780 and so deep, that we could never recover. That would be an existential catastrophe. Anything in
00:09:56.040 which the entire potential of humanity would be lost. And I'm interested in that, because I'm very
00:10:02.580 hopeful about the potential of humanity. I think we have potentially millions of generations ahead of
00:10:08.600 us, and a very bright future. But we need to make sure we make it to that point.
00:10:12.540 Hmm. Yeah. And I assume you do view the current circumstance as, in some sense, despite the obvious
00:10:21.520 pain it's causing us, and the death and suffering and economic problems that will endure for some time,
00:10:30.680 on some level, this is almost as benign a serious pandemic as we might have experienced. And in that
00:10:38.960 sense, it really does seem like an opportunity to at least get our heads around one form of
00:10:46.180 existential risk.
00:10:47.940 Yeah. I see this as a warning shot, the type of thing that has the potential to wake us up to some
00:10:54.740 even greater risks. If we look at it in the historical perspective, it was about 100 years ago,
00:11:01.180 the 1918 flu. It looks like it was substantially worse than this. That was an extremely bad global
00:11:08.480 pandemic, which killed, we don't really know how many, but probably a few percent, something like
00:11:14.500 3% of all the people in the world, which is significantly in excess of where we are at the
00:11:19.160 moment. And if we go further back, in the Middle Ages, the Black Death killed somewhere between about
00:11:25.580 a quarter and a half of all people in Europe, and significant numbers of people in Asia and the
00:11:32.100 Middle East, which may have been about a tenth of all the people in the world. So sometimes we hear
00:11:38.240 that the current situation is unprecedented, but I think it's actually the reverse. What we'd thought
00:11:44.220 was that since it was 100 years since a really major global pandemic, that was all in
00:11:50.040 the past and we were entering an unprecedented era of health security. But actually, it's not. We're
00:11:57.200 actually still vulnerable to these things. So I think it's really the other way around.
00:12:01.400 So before we jump into existential risk, I just want to talk about your background a little bit,
00:12:06.820 because I know from your book that Derek Parfit was your thesis advisor, and he was a philosopher who
00:12:13.600 I greatly admire. I was actually in the middle of an email exchange with him when he
00:12:20.340 died. I was trying to record an interview with him and really consider it a major missed opportunity
00:12:27.580 for me, because he had such a beautiful mind. And then I know some of your other
00:12:32.920 influences, Peter Singer, who's been on the podcast, and Nick Bostrom, who's been on as well.
00:12:38.400 You single them out as people who have influenced you in your focus, both on effective altruism and
00:12:45.640 existential risk. I guess before we jump into each specifically, they strike me as related
00:12:53.760 in ways that may not be entirely obvious. I mean, obviously they're related in the sense that in,
00:13:00.560 in both cases, we're talking about the well-being and survival of humanity. But with effective
00:13:07.900 altruism, we're talking about how best to help people who currently exist and to mitigate
00:13:15.460 suffering that isn't in any sense hypothetical. It's just that these are people, specifically
00:13:20.800 the poorest people on earth, who we know exist and who we know are suffering the
00:13:28.380 consequences of intolerable inequality or what should be intolerable inequality in our world. And we can do
00:13:35.640 something about it. And, and the effective piece in effective altruism is just how to target our
00:13:42.540 resources in a way that truly helps and helps as much as possible. But then with existential risk,
00:13:49.400 we're talking rather often about people who do not yet exist and may never exist if we don't get our
00:13:56.580 act together. And we're also talking about various risks of bad things happening, which is to say,
00:14:02.840 we're talking about hypothetical suffering and death for the most part. It's interesting because
00:14:08.660 these are, I mean, in some sense, very different by those measures, but they play upon the deficiencies
00:14:16.820 in our intuition, our moral intuitions in similar ways. I'm not the first person to notice that our ethics
00:14:23.760 tends to degrade as a function of physical distance and over any significant time horizon,
00:14:31.560 which is to say we feel less of an obligation to help people who are far away from us in space
00:14:37.380 and in time. The truth is we even feel less of an obligation to prepare for our own well-being
00:14:43.820 when we think about our future selves. We discount our concern about our own happiness and suffering
00:14:49.900 fairly extremely over the time horizon. Let's talk about the basic ethics here, and feel free to
00:14:57.920 bring in anything you want to say about Parfit or any of these other influences, but how do you think
00:15:04.020 about proximity in space and time influencing our, our moral intuition and, and, you know, whether or not
00:15:13.540 these things should have any moral significance? So in terms of physical distance, Peter Singer was a big
00:15:22.060 influence on me when it comes to that. He has this brilliant paper, Famine, Affluence and Morality,
00:15:28.280 where he asked this question about, you know, if you, if you're walking on the way to work and you
00:15:35.580 passed a child drowning in a pond, you know, and in order to go in and help them to save them, you would
00:15:42.140 have to ruin your shoes or your suit or some aspect like this, which is, you know, significant value.
00:15:47.840 Say you're going to give a fancy lecture. And most of us, you know, without really much hesitation,
00:15:54.020 would go in and do this. And in fact, we might think it's wrong for someone if they just, you know,
00:15:58.300 looked at their suit and their shoes and then kind of thought, oh, actually, no, I'm not going to do
00:16:01.700 that and walked on by. And he made this analogy to what about people in distant countries? There's
00:16:07.360 some, some question about exactly how much it costs to save a life in poor countries. And it may
00:16:13.280 actually cost more than a, more than a fairly nice suit, maybe about a thousand dollars U.S.
00:16:18.060 But he, he kind of asked this question about what's really different in those cases and could
00:16:22.980 the physical distance really matter? Could the fact that they're, they're a stranger matter? And
00:16:27.480 he came up with a whole lot of ways of thinking about these differences and showing that none of
00:16:31.020 them really could matter. So yeah, he, he's, you know, really helped challenge a lot of people,
00:16:36.880 including me about that. Now, effective altruism is more general than just thinking about global
00:16:43.760 poverty. It could apply to existential risk as well. And in fact, many effective altruists do
00:16:48.940 think in those terms, but it's about this idea of really trying in our lives to be aware of how much
00:16:55.880 good we could do with our activities, such as donations or through our careers and really trying
00:17:03.180 to think seriously about the scale of it. So I got really interested in this when I looked at,
00:17:10.540 at a study called Disease Control Priorities in Developing Countries, or, to give it a catchy name, DCP2.
00:17:17.400 And it had this table in it where they'd looked at over a hundred different ways of helping people
00:17:23.300 in poor countries with their health. And if you looked at the amount that you could help in
00:17:29.980 terms of health, like in terms of healthy life years for a given amount of money, say a thousand
00:17:34.560 dollars, there was this really striking difference where the best interventions were about 10,000
00:17:41.080 times more effective than the least good ones. And in fact, they're about a hundred
00:17:47.640 times better than the middle intervention. It was a log-normal distribution. So this was something where
00:17:54.180 I did a bit of technical work on this and found a whole lot of interesting stats like that.
00:17:58.200 It obeyed almost exactly the 80/20 rule, where if you funded all of these ways of helping
00:18:03.240 people in poor countries, 80% of the impact would happen from the 20% most effective
00:18:08.580 interventions. And also, if you had a choice between two interventions at random, on average
00:18:15.720 the more effective one would be a hundred times as effective as the less effective one.
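To make those figures concrete, here's a minimal sketch in Python of the kind of heavy-tailed, log-normal spread of cost-effectiveness being described. The spread parameter below is an assumption chosen only to roughly reproduce the ratios quoted in this conversation; it is not taken from the DCP2 data itself.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Illustrative log-normal spread of cost-effectiveness (healthy life-years
# per $1,000) across 100 hypothetical interventions. sigma is an assumed
# value picked so the ratios land roughly where the conversation puts them.
sigma = 1.9
effectiveness = rng.lognormal(mean=0.0, sigma=sigma, size=100)

sorted_eff = np.sort(effectiveness)[::-1]

# Share of total impact coming from the top 20% of interventions,
# if every intervention received equal funding.
top_20_share = sorted_eff[:20].sum() / sorted_eff.sum()

# Best-to-worst and best-to-median ratios.
best_to_worst = sorted_eff[0] / sorted_eff[-1]
best_to_median = sorted_eff[0] / np.median(sorted_eff)

# Average gap between two interventions picked at random.
pairs = rng.choice(effectiveness, size=(10_000, 2))
avg_pair_ratio = (pairs.max(axis=1) / pairs.min(axis=1)).mean()

print(f"top 20% of interventions deliver ~{top_20_share:.0%} of the impact")
print(f"best vs. worst:  ~{best_to_worst:,.0f}x")
print(f"best vs. median: ~{best_to_median:,.0f}x")
print(f"random pair:     ~{avg_pair_ratio:,.0f}x apart on average")
```

With a spread like this, most of the expected good sits in a small tail of interventions, which is the point being made next about where you give mattering as much as whether you give.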
00:18:19.540 So this is something where it really woke me up to this fact that where you give can be actually
00:18:26.560 even more important than whether you give. So if you're giving to something that say for a certain
00:18:31.520 amount of money is enough to save a life, there may well be somewhere you could give that would
00:18:35.560 save a hundred lives. And with that choice, 99 people's lives depend upon
00:18:41.940 you making it right. Whereas the difference between you giving to the middle charity or nothing
00:18:46.360 is only one person's life. So maybe it could be even more important kind of where you give
00:18:51.440 than if you give, in some sense, although obviously they're both important.
00:18:56.480 And so it was really thinking about that that made me realize this. And within moral philosophy,
00:19:02.220 there's a view, utilitarianism or consequentialism, a kind of family of views that takes
00:19:08.520 doing good really seriously. They're not just focused on not doing things that are wrong,
00:19:13.400 but also on how much can you help? But it made me realize that the people who support other ethical
00:19:19.300 views, they should still be interested in doing much more good with the resources that they're
00:19:25.080 devoting to helping others. And so, you know, I set up an organization called Giving What We Can,
00:19:30.420 trying to encourage people to give more effectively and to give more as well. So it was based around a
00:19:37.120 pledge to give at least 10% of your income to the most effective places that we know of initially
00:19:42.800 around global poverty and global health. Although we've broadened that out to include anything,
00:19:48.060 for example, it could be animal charities or any way of helping others as much as you can.
00:19:52.880 And in fact, we've now got more than 4,000 people who have made that pledge. They've given
00:19:58.740 more than a hundred million dollars to the most effective charities they know of and have pledged
00:20:03.020 more than a billion dollars. So it's actually a pretty big thing in terms of the number
00:20:08.040 of people who've embraced this message and are really trying to make their charitable
00:20:12.700 giving count. Yeah. Well, your colleague, Will MacAskill, who put us together, was on the
00:20:18.260 podcast a while back, and that conversation was very influential on my thinking here because
00:20:26.160 the one thing you both have done in your thinking about effective altruism is you have uncoupled
00:20:33.280 sentimentality from a more hard-headed concern about just what actually works and what saves the
00:20:41.540 most lives. So much of philanthropy in its messaging and its tacit assumptions and in the, the experience
00:20:48.900 of people giving or deciding whether or not to give is predicated on the importance of feeling good
00:20:56.880 about giving and finding psychological reward there. And I'm convinced that's still important.
00:21:04.120 And I think we should figure out ways to amplify that. But at the end of the day, we need to correct
00:21:11.920 for our failures to be maximally rewarded by the most important contributions we can make. This is just a
00:21:20.280 kind of a domain-wide human failing that the worst things that can happen are not the things we find
00:21:28.040 most appalling and the best things we can do are not the things we find most rewarding. And surveying
00:21:33.860 this landscape of moral error, we need to find ways to correct for the reliable failures of our
00:21:40.920 intuitions. And so in talking to Will, it occurred to me that one way to do this is just to automate it.
00:21:46.520 I mean, you just, I've now spoken about this several times on the podcast, but it's, it was such an
00:21:50.980 instructive example for me because at the time, Will was saying that the most effective or certainly
00:21:56.540 one of the most effective ways of mitigating human death was to give money to the Against Malaria
00:22:03.680 Foundation. At the time, that was number one on the, I think on the GiveWell site might still be. And
00:22:11.060 I recognize that in myself, that that was a cause which, you know, struck me as deeply unsexy,
00:22:20.020 right? It's not that it's, I don't care about it. I do care about it when you give me the details,
00:22:24.260 but, you know, buying insecticide-treated bed nets and giving them out, it's neither the problem nor
00:22:32.160 the intervention that really tugs at my heartstrings. And it's just obvious that shouldn't be the
00:22:39.500 priority if in fact this is the way to save a life at the lowest dollar cost. And so, and so,
00:22:46.960 yeah, so I just decided to automate my giving to that, that one charity, knowing that it was,
00:22:53.120 it was vulnerable to my waking up in a month, not being able to care much about malaria. And so,
00:22:59.640 I mean, that, that's the kind of thing that you and Will and your, and the movement you guys have
00:23:04.560 inspired has made really salient and actionable for people to, I mean, that alone is a huge
00:23:11.420 contribution. And, and so thank you for doing that work.
00:23:15.020 Oh, no problem. That's, that's exactly why we did it. I should say, it's also the question of
00:23:21.500 how much you give is another thing that to try to automate as you put it.
00:23:26.640 Yeah. So I used to, like when I was a grad student, I, I used to, because I was aware of
00:23:33.240 these, these numbers and like how, how much further my money could go abroad. It basically
00:23:38.300 around about, I could do around about a thousand or 10,000 times as much good with my money by giving
00:23:44.620 it to the most effective places abroad than I could by spending it on myself. I, I worked this out
00:23:50.380 and that meant that, you know, I, I became very pained, you know, when I was at the supermarket
00:23:57.100 trying to work out whether to buy the absolute cheapest cereal or the second cheapest cereal.
00:24:02.120 And that, that's not really a good pathway to go down because you're, you're not that productive
00:24:06.440 if you're spending all your time stressing about that. So I, I took an approach instead of, of working
00:24:11.580 out how much to give and committing to give a large amount of my money over the rest of my life.
00:24:16.500 And then just living within my reduced means. And then you just basically just pretend that
00:24:21.780 your salaries, you know, or, or that your salary is a bit lower, you know, maybe pretend that you
00:24:26.020 took a, uh, uh, a job in the charitable sector or something, you know, with a, with a smaller salary
00:24:30.380 in order to do more good or pretend that you're being taxed a bit more because, you know, it would
00:24:35.560 be good if some of our money was taken to, to help people who are much less fortunate than ourselves
00:24:39.720 and then, and then just live within that reduced means. Yeah. Or you, or you could pretend that you're
00:24:43.620 working one day a week or one day out of every 10 for the benefit of, of others.
00:24:51.460 Yeah. That's, that's another way to think about it. Yeah. And it turned out that I made a, I mean,
00:24:57.660 the, the pledge it's based around is to give at least 10% of your income to where it can help the
00:25:01.980 most or where you think it can help the most. And we're not too prescriptive about that, but
00:25:06.500 ultimately I've given a bit over a quarter of, uh, everything I've earned so far. But the way I,
00:25:12.320 I think about it is, uh, to think about, uh, actually what Peter Singer suggested, which is
00:25:17.680 to set a, uh, amount of spending money on yourself and then to give everything above it. And I set that
00:25:24.200 to an amount, which is about equal actually to the median income in the UK at the time. And a lot
00:25:30.060 of journalists, yeah, would say, well, how on earth could you live on less than, uh, you know,
00:25:33.580 18,000 pounds per year? And yeah, it was kind of weird. I was trying to point out that actually
00:25:39.080 half of the population of the UK do that. So, uh, people would lose a bit of touch on these things
00:25:44.100 and, uh, that makes it, you know, makes it clear that it's as doable, uh, if you think about it in
00:25:48.600 those terms. So, so, but it is useful to use techniques like these to make it easier. So you're not
00:25:54.280 using all your willpower to keep giving instead, you make a kind of lasting commitment. That's the point
00:25:58.900 of making a long-term commitment on this, uh, is to, to tie yourself to the mast and make it a bit,
00:26:04.160 you know, a bit less onerous to be reevaluating this all the time. And we found that that, that
00:26:08.640 worked quite well. Initially people said, well, no one's going to do this. No one's going to make
00:26:13.200 this commitment. Forgetting of course, that there have been traditions of giving 10% of your income
00:26:18.040 for a long time, but it's something where, you know, we found actually that, that there are a lot
00:26:23.380 of people who would, and as I said, more than a hundred million dollars have been given and more
00:26:27.920 than a billion dollars pledged because it really adds up. And it's one of these things where if
00:26:32.820 someone kind of shakes a can at you on the street corner, it's not worth spending a lot of your time
00:26:37.740 trying to work out whether to give and also whether this is the best cause you could be giving to
00:26:42.760 because there's such a small amount at stake. But if you're making a choice to give something like
00:26:47.680 a 10th of your income over the rest of your life, you know, that's something like
00:26:52.040 more than a hundred thousand dollars. And, you know, it's really worth quite a few
00:26:57.100 evenings of reflection about where to give it and whether you're going to do it and to make such
00:27:01.460 a commitment. But if you do, you know, there's a lot at stake. So we found that
00:27:05.660 thinking in these bigger chunks, you know, really zooming out on your charitable giving over your
00:27:10.260 whole life and setting yourself in a certain direction on that, really showed its value and made it worthwhile to
00:27:16.000 do it right. Yeah. And one of the ways you cut through sentimentality here is around the question
00:27:22.740 of what people should be doing with their time if they want to benefit the most number of people.
00:27:29.440 And it's not that everyone should be rushing into the charity sector and working for directly for a
00:27:37.120 cause they find valuable. You argue that if you have a talent to make immense wealth some other way,
00:27:45.540 well, then that is almost certainly the better use of your time. And then you just give
00:27:50.900 more of those resources to the charities that you want to support.
00:27:55.020 Yeah. So my colleague Will MacAskill, I mean, we'd talked about this right from the start,
00:28:00.640 but he really took that a step further when he set up this organization, 80,000 Hours,
00:28:04.920 with Ben Todd. And they were going deep on this and really thinking, okay, we've
00:28:12.440 got a theory for what to do with your charitable giving. How can you make that more
00:28:16.180 effective and really actually help more recipients or help those recipients by a larger amount.
00:28:21.900 And 80,000 hours was about this huge amount of time over your whole career and really trying to
00:28:28.280 spend, you know, if you're going to spend 80,000 hours doing your job, it kind of makes it obvious
00:28:33.240 that it could be worth spending, you know, a hundred hours or more thinking seriously about where you're
00:28:38.860 going to devote that time. And one of the things they considered was this idea of earning to give,
00:28:44.200 of taking a deliberately high paid job so that you could donate a lot more. And in some cases,
00:28:49.360 you could do a lot of good with that, particularly if you're someone who's, who's well suited to such
00:28:53.000 a job and also kind of emotionally resilient. There are a lot of people who want to do a lot of good
00:28:59.600 in the world, but really wouldn't last if they went into finance or something. And, you know,
00:29:04.480 everyone else, all of their friends, were always off at the golf course or something. And this
00:29:09.560 person was scrimping and saving and couldn't socialize with any of their colleagues and so on
00:29:13.580 and saw them living in excess. It could be pretty difficult. But if you're someone who can deal with
00:29:18.620 that or can take a pretty sensible approach, maybe give half of what you earn in finance and still live
00:29:24.780 a very good life by any normal standards. And some people have taken that up, but that wasn't the
00:29:30.360 only message. We're also really interested in jobs in areas where you could do a lot of good,
00:29:35.680 for example, working at a charitable foundation in order to help direct their endowment to the most
00:29:40.860 effective things to help others. Also, we were very interested in a few
00:29:46.760 different areas. There were kind of a few clusters of work, which were on global health and global
00:29:51.620 poverty. That cluster was really to do with the fact that the poorest people in the world live on
00:29:57.880 about a hundredth of the median US wage. And it means, therefore, because there are diminishing
00:30:07.420 returns on our income, that our money can do roughly a hundred times more good to help those
00:30:14.800 people than it can here. And if we, if we do kind of leveraged things, such as funding the very most
00:30:21.460 important healthcare that they can't buy themselves, then, you know, we can get even maybe a thousand
00:30:27.840 times more effectiveness for people abroad than we can for ourselves. So that's, that's one way to do
00:30:32.920 good. Another way that there's a, there's a cluster around is animal welfare, noting that there's a
00:30:38.600 market failure there where animals, you know, don't have a natural constituency, they can't vote.
00:30:43.880 It wouldn't be surprising if there were massive amounts of pain and suffering, which were being
00:30:48.640 neglected by the general capitalist system that we're in. And indeed, when we look at it, we see
00:30:54.660 that there are. So that was another approach, although you have
00:31:00.000 to go out on a limb a little bit about how on earth you would understand animal welfare compared
00:31:05.380 to human welfare in order to think about that. But you can see why it could be a really neglected
00:31:09.780 area. And then there's a, there's a kind of branch of people really interested in the long-term future
00:31:15.660 of humanity and noting that only a tiny fraction of all the people who have ever lived
00:31:22.080 are alive at the moment. And it's probably an even tinier fraction when you consider all the people
00:31:26.180 who ever will live after us, that, you know, this is just one century. We've had 2000 centuries of
00:31:33.120 humanity so far. We could have thousands of centuries more after us. If there are ways that we can do
00:31:37.900 something now to have a lasting impact over that whole time, then perhaps that's another location
00:31:43.080 where we can do really outsized amounts of good with our lives. So we've often been thinking about
00:31:49.700 those, those three different areas. Are there trade-offs here with respect to
00:31:55.260 the feeling good versus being effective calculus? Because if you take a, a strictly consequentialist
00:32:04.500 framing of this, well, then it seems like, well, you should just cut through the,
00:32:09.080 the feeling or the, you know, or the perceived reward and salience of various ways of helping
00:32:15.660 and just help the most people. But the situation does strike me somewhat as morally analogous to the,
00:32:23.580 the failure of consequentialism to parse why it makes sense for us to have a preferential love
00:32:30.620 for our family and, and, you know, in particular, our kids. It's often posed as a riddle, you know,
00:32:37.720 how, how, how is it that you can shower more attention and resources and love and concern
00:32:43.180 on your child than you could on two strangers or, and obviously the equation gets even more
00:32:50.220 unbalanced if you talk about a hundred strangers. And that has traditionally struck many people as
00:32:57.080 just a, a failure of consequentialism. Either we're not really consequentialists or we can't be,
00:33:02.920 or we shouldn't be. But I've always seen that as, on some level, a failure to get as fine
00:33:10.800 grained as we might about the consequences. I mean, obviously there's a consequence to,
00:33:16.900 if you just think it through, there's a consequence to having a society or being the sort of social
00:33:22.680 primate who could, when faced with a choice to help their child or two strangers, would just
00:33:32.840 automatically default to the, what seems to be the consequentialist arithmetic of, oh, of course I'm
00:33:38.680 going to care more about two strangers than my own child. What do we mean by love and the norm of
00:33:44.780 being a good parent if that is actually the emotional response, right, that we think is normative?
00:33:50.800 And so it's always struck me that there could be something optimal, and it may only be one
00:33:55.900 possible optimum, but at least it's a possible one, to have everyone more focused on the people
00:34:03.980 who are near and dear to them and kind of reach some collective equilibrium together where the human
00:34:11.660 emotion of love is conserved in that preferential way. And yet in extreme cases, or even just at the
00:34:18.440 level of which we decide on the uses of public funds and rules of fairness and justice that govern
00:34:24.860 society, we recognize that those need to be impartial, which is to say, when I go into a hospital
00:34:30.440 with my injured daughter, I don't expect the hospital to give us preferential treatment just
00:34:37.340 because she's my daughter. And in fact, I would not want a hospital that could be fully corrupted by
00:34:43.680 just answering to the person who shouted the loudest or gave the biggest tip at the door or
00:34:48.660 whatever it was. I can argue for the norm of fairness in a society, even where I love my daughter more
00:34:53.960 than I love someone else's daughter. It's a long way of saying that that seems to me to be somewhat
00:34:59.360 analogous, or at least potentially so, to this condition of looking to do good in the world and
00:35:05.160 noticing that there are causes, the helping of which gives a much stronger feeling of compassion
00:35:13.420 and solidarity and keeps people more engaged. And I think we do want to leverage that, obviously not
00:35:19.800 at the expense of being ineffective, but I'm just wondering if there's anything to navigate here or if
00:35:26.140 you just think it really is straightforward. We just have to just strip off any notion of
00:35:32.080 kind of the romanticism and reward around helping and just run the numbers and figure out exactly how to
00:35:38.860 prioritize our resources.
00:35:40.620 I guess I would say, here's three levels at which to think about this. So one approach would be to say,
00:35:47.580 yeah, just look at the raw numbers, let's say from some study on how much different ways of spending our
00:35:53.480 money could help people, and then just go with what that says. A second approach would be trying to be a bit
00:35:59.740 more sophisticated, to note that there might be a whole lot of people who just kind of, yeah, who
00:36:05.500 aren't getting enough feedback, perhaps, in their lives about the giving and the effect it's having
00:36:10.680 on people, such that if they were to try to do the first one, that they couldn't really sustain it,
00:36:16.440 which could be a really big deal, because I'm hoping that people can make a commitment and keep it
00:36:21.360 to give for the next 30 years. And if they get burnt out after a couple of years and stop, you've lost
00:36:26.300 almost all the value that they could have produced, especially as they're probably going to earn more
00:36:30.440 money later in their life and be able to give even more. It could be that you lose 99% of the
00:36:34.640 benefit if they give up after the first couple of years. So you at least want to go this one step
00:36:40.280 further and have some idea or some sensitivity to the idea that if it's more appealing or it can be
00:36:47.600 more sustained, then that matters. And I'm thinking in that sense, quite instrumentally,
00:36:53.440 in that it's just trying to take account of the fallibility of the humans who are the givers.
00:36:59.780 It's not about flattering them or kind of like stroking their ego or something like that.
00:37:04.640 But it's the way I think of it, a lot of people, when they think about giving in particular,
00:37:10.060 have a focus that's very focused on the giver. I think of it as giver-centric or donor-centric
00:37:16.040 kind of understanding of it. For example, norms against being public about your giving,
00:37:22.120 I think, are very donor-centric. They're about, well, that would be gauche to be public about it.
00:37:28.400 But from my perspective, I'm very focused on the recipients. And it seems to me that all of this
00:37:33.160 focus on the donor is misplaced. If the recipients would benefit more if the donors were public about
00:37:39.600 it, such that they help to encourage their friends to be giving, for example, by talking about some of
00:37:43.800 these causes, ideally in a non-annoying way, then that could be good for the recipients.
00:37:48.880 And similarly, if there are aspects where maybe if the donor somehow could follow through on a very
00:37:55.900 difficult, dry program of giving, they would be able to give more. If, in fact, many donors fail
00:38:00.600 to achieve that or they get burnt out, then that's bad for the recipients. So this approach is still
00:38:06.380 kind of recipient-focused. Or you could go a step further than that and build it into the structure
00:38:11.860 of what it means to be good at giving and to say, you know, fundamentally, for example, that people in your
00:38:18.680 community matter more, or that it matters more to give to people who are close to you, or something like that.
00:38:23.800 I wouldn't want to go that extra step, although I understand that that is where the kind of intuitive
00:38:29.420 position perhaps is. And you do run into troubles if you try to stop at step two.
00:38:34.120 You run into some of these challenges you're mentioning about how do you justify treating
00:38:39.220 your children better than other people. So I don't think that this is all resolved. But I also
00:38:44.340 want to say that the idea of effective altruism, yeah, really is to be broader than just a consequentialist
00:38:50.860 or utilitarian approach. The people who are non-consequentialists often believe that there
00:38:56.120 are side constraints on action. So there are things that we shouldn't do, even if they promote the
00:39:01.180 good, because it would be wrong or be treating people wrongly in order to do them. For example,
00:39:06.160 that you shouldn't kill someone in order to save 10 people. But since none of the ways we're talking
00:39:11.520 about of giving or of the careers that we're recommending people take, none of them involve
00:39:15.900 really breaking such side constraints, it seems like we should all still be interested in doing
00:39:21.040 more good in that case. As philosophers, we often focus on the interesting conflicts between the
00:39:25.960 different moral theories. But this is a case where I think the moral theories tend to run together.
00:39:29.320 And so that's our focus, you know, going beyond the kind of just what would utilitarianism say,
00:39:35.320 or something like that.
00:39:36.300 Okay, well, let's talk about the greatest downside protection we might find for ourselves and talk
00:39:43.980 about existential risk, which again, is the topic of your new book, The Precipice, which is really a
00:39:49.220 wonderful read. And it's great to have the complete picture pulled together between two covers. So I
00:39:57.260 highly recommend that. We won't exhaust all of what you say there. But I'll flag some of what we're
00:40:03.040 skipping past here. So you break the risks we face into the natural and the anthropogenic,
00:40:12.100 which is to say human caused. And it might be surprising for people to learn just how you weight
00:40:19.700 these respective sources of risk. To give some perspective, let's talk about just how you think
00:40:27.000 about the ways in which the natural world might destroy us, you know, all on its own, and the ways
00:40:33.120 in which we might destroy ourselves, and how you estimate the probability of one or the other sources
00:40:40.160 of risk being decisive for us in the next century.
00:40:43.400 Sure. I think, often when we think about existential risks, we think about things like
00:40:51.160 asteroid impacts. I think this is often the first thing that comes to mind. Because it's what we
00:40:58.360 think destroyed the dinosaurs 65 million years ago. But, you know, note that that was 65 million years
00:41:06.600 ago. So an event of that size seems to be something like a one in every 65 million years kind of event.
00:41:14.220 It doesn't sound like a once a century event, or you'd have trouble explaining why it hasn't happened,
00:41:18.840 you know, many, many more times. And I think people will be surprised to find out how recent it was
00:41:25.460 that we really understood asteroids, especially people of my generation, that in 1960, that's when we
00:41:32.100 conclusively discovered that meteor craters are caused by asteroids. People thought that maybe
00:41:37.880 they were caused by some kind of geological phenomenon, you know, like volcanism.
00:41:42.240 It's amazing.
00:41:43.060 And then it was 20 years after that, 1980, where evidence was discovered that the dinosaurs had
00:41:50.960 been destroyed in this KT extinction event by an asteroid about 10 kilometers across. So that's,
00:41:58.600 you know, 1980, that's 40 years ago. And then, you know, things moved very quickly from
00:42:04.380 that. In particular, it was around about the same time as Carl Sagan and others were investigating
00:42:09.560 models for nuclear winter. And they realized that asteroids could have a similar effect,
00:42:15.740 where dust from the asteroid collision would darken the sky, and could in that way cause a mass
00:42:22.240 extinction due to stopping the plants growing. So this is very recent. And people really
00:42:28.580 leapt into action. And astronomers started scanning the skies. And they've now tracked what they think
00:42:35.340 is 95% of all asteroids one kilometer or more across. And a one-kilometer asteroid is a tenth the
00:42:42.860 size of the one that killed the dinosaurs. But it only has one thousandth of the energy and a thousandth
00:42:48.160 of the mass. So we could very likely survive that. And they've found 95% of those greater than one
00:42:56.060 kilometer across, including almost all of the ones which are really quite big, such as, you know,
00:43:00.640 five kilometers across or 10 kilometers. And so now, the chance of a one kilometer or more asteroid
00:43:07.060 hitting us in the next century is about one in 120,000. That's a kind of scientific probability from
00:43:13.900 the astronomers. But it also wouldn't necessarily wipe us out, even if it did hit us. And that's a
00:43:19.160 probability that we really is very unknown. But overall, I would guess that it's about a one in a
00:43:24.360 million chance that an asteroid destroys us in the next hundred years. And other things that have
00:43:29.340 been talked about as extinction possibilities. When you look at the probabilities, they're extremely
00:43:34.460 low. So an example is a supernova from a nearby star. It would have to be quite a close star within
00:43:41.220 about 30 light years. And it's extremely unlikely. It's unlikely that this will happen during the
00:43:46.940 lifespan of the Earth. And it's exceptionally unlikely it would happen in the next hundred years. I put the
00:43:52.040 chance of existential catastrophe due to that at about one in a billion over the next hundred
00:43:56.540 years. And these are quite rough numbers, but trying to give an order of magnitude idea to the
00:44:02.400 reader. And ultimately, when it comes to all of these natural risks, you might be worried that
00:44:08.160 supernovas and gamma ray bursts and supervolcanoes and asteroids and comets actually, it's very recent
00:44:15.220 that we've discovered how these things work and that we've really realized with proper scientific
00:44:19.280 basis that they could be threats to us. So there's probably more natural risks that we don't even
00:44:23.880 know about that we're yet to discover. So how would you think about that? But there's this very
00:44:29.800 comforting argument from the fossil record when you reflect upon this fact that Homo sapiens has
00:44:37.040 been around for 200,000 years, which is 2,000 centuries. And so if the chance of us being destroyed by
00:44:44.180 natural risks, in fact, all natural risks put together was as high as, say, one in 100, we almost
00:44:50.420 certainly wouldn't have made it this far. So using that kind of idea, you can actually bound the risk
00:44:56.160 and show very confidently that it's lower than about one in 200 per century, and most probably below
00:45:03.540 about one in 2,000 per century. You also take it a little further than that by reasoning by analogy to
00:45:10.340 other hominids and other mammals that would have died in similar extinction events as well.
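A back-of-the-envelope sketch in Python of the survival-record argument just described, assuming (for illustration only) an independent extinction risk each century; the figures are order-of-magnitude estimates, not a calculation from the book.

```python
# If each century carried an independent chance `risk` of a natural
# extinction event, the probability of surviving the roughly 2,000
# centuries Homo sapiens has already existed is (1 - risk) ** 2000.
def survival_probability(risk: float, centuries: int = 2_000) -> float:
    return (1.0 - risk) ** centuries

for denom in (100, 200, 1_000, 2_000, 10_000):
    p = survival_probability(1 / denom)
    print(f"per-century natural risk 1 in {denom:>6,}: "
          f"chance of lasting 2,000 centuries ~ {p:.2g}")

# Aside on the asteroid comparison above: a 1 km asteroid has 1/10 the
# diameter of the ~10 km dinosaur-killer, so roughly (1/10)**3 = 1/1,000
# of the mass and kinetic energy (assuming similar density and speed).
print("1 km vs. 10 km asteroid, mass/energy ratio:", (1 / 10) ** 3)
```

Surviving 2,000 centuries is very unlikely if the per-century risk is as high as 1 in 100 or 1 in 200, which is what lets the record bound the natural risk from above.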
00:45:15.860 Yeah, that's right. And I give quite a number of different ways of looking at that in order to
00:45:22.380 avoid any potential statistical biases that could come up. In general, it's very difficult to estimate
00:45:28.080 the chance of something that would have stopped the very observation that you're making now of
00:45:33.820 happening. There are certain kinds of statistical biases that come up to do its anthropic effects.
00:45:38.060 But you can avoid all of that, or most of it, by looking at related species, and you get a very
00:45:43.100 similar result. They tend to last around about a million years before going extinct. And so since
00:45:49.640 Homo sapiens is a species that is much more widely spread across the surface of the Earth and much
00:45:55.820 less dependent upon a particular species for food, we're very robust in a lot of ways.
00:46:02.640 So that's before you even get to the fact that we can use our intelligence to adapt to the threat
00:46:07.200 and so forth. It's very hard to see that the chance of extinction from natural events could be
00:46:14.800 more than something like one in 10,000 per century, which is where I put it. But unfortunately, the same can't
00:46:21.540 be said for the anthropogenic risks. Yeah. And so let's jump to those. You put the likelihood that we
00:46:30.940 might destroy ourselves in the next century by making some colossal error or just being victim of
00:46:39.800 our own malevolence at one in six rather than one in 10,000, which is a pretty big disparity.
00:46:48.760 One thing that's interesting, especially in the present context of pandemic, you put pandemic risk
00:46:54.920 mostly on the anthropogenic side. Maybe we should talk about that for a second. What are the
00:47:01.440 anthropogenic risks you're most concerned about? And why is it that you're thinking of pandemic
00:47:07.460 largely in the terms of what we do or don't do? Yeah. Well, let's start with the one that started
00:47:16.440 it all off, nuclear war, just briefly. I think it was in 1945, with the development of the atomic
00:47:23.700 bomb, that we, humanity, really entered this new era, which I call the Precipice, giving the book its
00:47:31.100 name. Explain that analogy. So what's interesting here is that the anthropogenic risk, the existential risk,
00:47:40.760 is really just the shadow side of human progress. It's only by virtue of our progress, technologically,
00:47:48.640 largely, although not entirely. I mean, just the fact that we, you know, have crowded together in
00:47:53.360 cities and that we can jump on airplanes and fly all over the world and that we have cultures that
00:47:58.300 value that. And you take the good side of globalization and culture sharing and cosmopolitanism
00:48:06.960 and economic integration. You know, that is perfectly designed, it would seem, to spread a novel virus
00:48:15.460 around the world in about 15 hours. And all of the things that we've been doing right have set us up
00:48:22.960 to destroy ourselves in a way that we absolutely couldn't have done, you know, even a hundred years
00:48:30.420 ago. And so this is, it's a paradox that casts a shadow of sorts on the work of my friend, Steve Pinker,
00:48:38.760 you know, who, as you probably know, has been writing these immense and immensely hopeful books
00:48:44.260 about human progress of late, saying that things are just getting better and better and better.
00:48:49.020 And we should acknowledge that. We should at least have the decency to acknowledge that. But
00:48:53.340 he's been criticized rather often for things he hasn't said. He's not saying that there's a law
00:49:00.260 of history that ensures things are going to get better and better. He's not saying we can't screw
00:49:05.460 these things up. But because of his emphasis on progress, at the very least, he can be convicted
00:49:11.780 of occasionally sounding tone deaf on just how the risk that we will destroy everything seems also to
00:49:20.880 be increasing. I mean, just the power of our technology, the fact that we're talking about
00:49:26.040 a time where high school kids can be, you know, manipulating viruses, you know, based on technology
00:49:32.400 they could have in their bedrooms. It's just, this is, we're democratizing a rather Faustian relationship
00:49:39.820 to knowledge and power. And it's easy to see how this could go terribly wrong and wrong in ways that,
00:49:48.360 again, could never have been accomplished a few generations ago. So give us the analogy of the
00:49:55.440 precipice to frame this. Yeah. If we really zoom out and try to look at all of human history and to
00:50:04.220 see the biggest themes that unfolded across this time, then I think that two of them, one is this
00:50:11.020 theme of progress in our well-being that Steven Pinker mentions. And I think particularly in that
00:50:19.440 case over the last 200 years since the Industrial Revolution, that it's less clear over, you know,
00:50:25.920 it was the second 100,000 years of Homo sapiens better than the first 100,000 or something.
00:50:30.500 Right. I'm not sure. But in the last 200 years, we've certainly seen very marked progress. And I
00:50:37.860 think one of the challenges in talking about that is that we should note that while things have got a
00:50:44.440 lot better, they could still be a lot better again. And we have much further to go. There are many more
00:50:50.080 injustices and suffering remaining in the world. So we certainly want to acknowledge that while at the
00:50:55.960 same time, we acknowledge how much better it's got. And we also want to acknowledge both that there are
00:51:03.780 still very bad things and that we could go much further. But the other major theme, I think, is this
00:51:09.020 theme of increasing power. And that one, I think, has really gone through the whole of human history.
00:51:15.780 And this is something where there have been about 10,000 generations of Homo sapiens. And it's only
00:51:23.420 through a kind of massive intergenerational cooperation that we've been able to build this
00:51:29.560 world we see around us. So from where I sit at the moment, I can see nothing that was in the
00:51:35.700 ancestral environment, well, actually, nothing except my own body. It's something where we tend to think
00:51:42.580 of this as very recent, but we forget that something like clothing is a massively useful
00:51:48.140 technology that enabled us to inhabit, you know, huge regions of the world, which would otherwise be
00:51:53.260 uninhabitable by us. You know, you could think of it as almost like, you know, spacesuits or something
00:51:57.300 like that for the earth. You know, massive improvements like this. So many things that
00:52:02.060 we developed before we developed writing, which was only about 5,000 years ago. So this time,
00:52:08.800 like 97% of human history, we don't have any record of it. But that doesn't mean that there
00:52:15.260 weren't kind of these great developments happening. It was just a sequence of innovations that have
00:52:20.140 really built up everything. When I think about that, how we kind of stand on the
00:52:25.220 shoulders of 10,000 generations of people before us, it really is humbling. And all the innovations
00:52:31.440 that they passed on in this unbroken chain. And one of the aspects of this is this increasing power
00:52:37.680 over the world around us, which really accelerated with the Scientific Revolution, where we discovered
00:52:43.400 these systematic ways to create knowledge and to use it to change the world around us. And the
00:52:47.680 Industrial Revolution, where we worked out how to harness the huge energy reserves of fossil fuels
00:52:53.800 and to automate a lot of labor using this. Particularly with those accelerations, there's
00:52:59.700 been this massive increase in the power of humanity to change the world. You know, often exponential
00:53:05.560 on many different measures. And that it was in the 20th century, and I think particularly with the
00:53:11.080 development of the atomic bomb, that we first entered this new era where our power is so great
00:53:19.800 that we have the potential to destroy ourselves. And in contrast, the wisdom of humanity has grown
00:53:28.160 only falteringly, if at all, over this time. I think it's been growing. And by wisdom, I mean
00:53:34.340 both wisdom in individuals, but also ways of governing societies, which for all their problems
00:53:41.660 are better now than they were 500 years ago. So there has been improvement in that. And there
00:53:46.420 has been improvement in international relations compared to where we were, say, in the 20th century.
00:53:52.940 But it's a slow progress. And so it leaves us in the situation where we have the power to destroy
00:53:59.620 ourselves without the wisdom to ensure that we don't. And where the risks that we impose upon
00:54:04.940 ourselves are many, many times higher than this background rate of natural risks. And in fact, if
00:54:11.200 I'm roughly right about the size of these risks, the one in six I mentioned, a die roll, then we can't
00:54:19.200 survive many more centuries with risk like that. Especially as I think that, you know, we should expect
00:54:24.640 this power to continue to increase if we don't do anything about it. And the chances of failing
00:54:29.380 irrevocably to continue to go up. And because our whole bankroll is at stake, you know,
00:54:35.600 if we fail once on this level, then that's it. So that would mean that this time period where these
00:54:44.460 risks are so elevated can't last all that long. Either we get our act together, which is what I
00:54:50.500 hope will happen, and we acknowledge these risks and we bring them down, we fight the fires of today,
00:54:57.140 and we put in place the systems to ensure that the risks never get so high again. Either we succeed
00:55:03.480 like that, or we fail forever. Either way, I think this is going to be a short period of something like
00:55:10.760 a couple of centuries or maybe five centuries. You could think of it as analogous to a period like
00:55:17.800 the Renaissance or the Enlightenment or something like that. But a time where there's a really cosmic
00:55:24.680 significance, ultimately, where if humanity does survive it, and we, you know, live for hundreds
00:55:30.680 of thousands more years, we'll look back and this period of heightened risk will be what this time
00:55:36.560 is known for. And it also will be one of the most famous times in the whole of
00:55:41.680 human history. And, you know, I say in the book that schoolchildren will study it, and it'll be
00:55:48.160 given a name. And I think we need a name now. And that's why I have been calling it the Precipice.
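To make concrete why a one-in-six risk per century can't be survived for many more centuries, here is a minimal back-of-the-envelope sketch. It assumes, purely for illustration, that the risk stays constant at 1/6 per century and is independent across centuries; this is a simplification, not the model argued for in the book.

\[
P(\text{survive } n \text{ centuries}) = \left(1 - \tfrac{1}{6}\right)^{n} = \left(\tfrac{5}{6}\right)^{n},
\qquad
\left(\tfrac{5}{6}\right)^{4} \approx 0.48,\quad
\left(\tfrac{5}{6}\right)^{10} \approx 0.16,\quad
\left(\tfrac{5}{6}\right)^{20} \approx 0.03.
\]

Under those illustrative assumptions, the odds of lasting even four more centuries are below one half, and twenty centuries is roughly a 3% proposition, which is the sense in which a die-roll level of risk per century cannot last long.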
00:55:53.600 And the analogy there is to think of humanity being on this really long journey over these 2000
00:56:00.480 centuries, you know, kind of journey through the wilderness, occasional times of hardship, and also
00:56:05.360 times of sudden progress and heady views. And that in the middle of the 20th century, we found ourselves
00:56:12.160 coming through a high mountain pass and realizing that we'd got ourselves into this very dangerous
00:56:18.260 predicament. And the only way onwards was this narrow ledge along the edge of a cliff with a steep
00:56:24.800 precipice at the side. And we're kind of, you know, inching our way along. And we've got to get through
00:56:30.820 this time. And if we can, then maybe we can reach much safer and more prosperous times ahead. So
00:56:37.720 that's how I see this. Yeah, there's a great opening illustration in your book that looks like
00:56:42.320 the style of an old woodcut of that precipice, which, yeah, you know, that's, I guess,
00:56:50.080 an intuition that many people share just based on extrapolating the pace of technological change.
00:56:57.820 When you're talking about suddenly being in a world where anyone can potentially order DNA in
00:57:07.180 the mail, along with the tools to combine novel sequences or just recapitulate the recipe for
00:57:16.240 smallpox or anything else that is available, it's hard to see how, I mean, even 500 years seems like
00:57:23.540 an order of magnitude longer than the period here that we just crucially have to navigate without
00:57:31.700 a major misstep. It just seems like the capacity for one person or very few people to screw things up
00:57:40.960 for everyone is just doubling and doubling and doubling again within not just the lifetime of
00:57:47.660 people but within even the span of a decade. So, yeah, and it's given cosmic significance, as you point
00:57:54.220 out, because of the possibility, you know, even likelihood, that we are alone in the
00:58:02.180 universe. Honestly, I don't have strong intuitions about that. I mean, both the prospect of
00:58:08.320 us being alone and the prospect that the universe is teeming with intelligent life that we haven't
00:58:14.240 discovered yet, both of those things seem just unutterably strange. I don't know which is
00:58:21.560 stranger, but, I mean, it's a bizarre scenario where either of the possibilities on offer seems
00:58:27.940 somehow uncanny. But if it is the former case, if we're alone, then yes, what we do in a few short
00:58:35.580 years matters enormously, if anything in this universe matters.
00:58:39.940 Indeed. Ultimately, when thinking about this, I see a handful of different reasons to really
00:58:47.700 think it's extraordinarily important what we do about this moment. To some extent,
00:58:53.300 it's just obvious, but I think it can be useful to see that you could understand it in
00:58:59.540 terms of the badness of the deaths at the time. If it meant that in a catastrophe,
00:59:05.000 7 billion people were killed, that would be absolutely terrible. But it could be even much
00:59:12.480 worse than that. And you might think, why does it need to be worse than that? Surely that's
00:59:16.200 absolutely terrible already. But the reason that it can matter is because we're not saying that
00:59:22.260 there's a, you know, 50% chance of a particular event that will destroy us. The chances for some
00:59:28.020 things could be lower. For example, I just mentioned the chance of an asteroid or comet impact
00:59:32.840 is substantially lower, but still really important because, if it did happen,
00:59:39.440 it wouldn't just be a catastrophe for our generation, but it would wipe out
00:59:45.360 this entire future that humanity could have had, where I think that there's every reason to think
00:59:50.820 that barring such a catastrophe, humanity could live surely at least a million years, which is the
00:59:57.240 typical lifespan of a species. But I don't see much reason to think that we couldn't live out
01:00:02.040 the entire habitable span of the earth's life, which is about 500 million or a billion
01:00:08.640 years, or even substantially beyond that, if we leave the earth. And the main challenges to
01:00:16.160 things like space travel are in developing the technologies and in harnessing enough energy.
01:00:22.180 But ultimately, if we've already survived a million years, that's not going to be such an issue. You
01:00:28.920 know, we will have 10,000 more centuries to develop our science and our technologies and to harness the
01:00:33.620 energies. So, ultimately, I think the future could be very long and very vast. For me, the most
01:00:41.760 motivating one is everything we could lose. And that could be understood in, say, utilitarian terms, as the
01:00:48.300 well-being of all the lives that we would lose. But it could also be understood in all these other
01:00:53.460 forms. And Derek Parfit talks about this very famously near the end of his magnum opus, Reasons
01:01:00.060 and Persons, where he says that also, if you care about the excellences of humanity, if that's what
01:01:07.040 moves you, then, since most of our future is ahead of us, there's every reason to expect that
01:01:12.560 our greatest artworks and our most just societies and our most profound discoveries lie ahead of us as
01:01:19.060 well. So, whatever it is that you care about, there's reason to think that most of it lies in the
01:01:25.680 future. But then you could also think about the past. You could think about the fact that human
01:01:31.500 society is necessarily this intergenerational partnership, as Burke put it, and that, you know, our
01:01:38.320 ancestors kind of built up this world for us over, you know, 10,000 generations, and then
01:01:45.360 entrusted it to us, so that we can make our own innovations and improvements and pass it down to our
01:01:50.940 children, and that if we fail, we would be the worst of all these generations, and we would be betraying
01:01:58.300 the trust that they've placed in us. So, you can think of it in terms of the present, the deaths, the future
01:02:05.340 that would be lost, the past that would be betrayed, or perhaps also in terms of this cosmic
01:02:10.840 significance. If we're perhaps the only place where there is life in the universe, or the only
01:02:16.320 place where there is intelligent life, or the only place where there are beings that are influenced by
01:02:22.760 moral reasoning, so the only place where there's this kind of upwards force in the universe pushing
01:02:27.780 towards what is good and what is just. If humans are taken out, for all the value that there is in the
01:02:33.580 natural world, and I think that there is a vast amount, there are no other beings trying
01:02:40.020 to make the world, you know, more good and more just. If we're gone, things will just meander on
01:02:46.680 their own course with the animals doing their own things. So, there's a whole lot of different ways
01:02:51.820 of seeing this, and Derek Parfit also pointed out this really useful thought experiment, I think,
01:02:57.020 which is, he imagined these three different scenarios. There's peace, there's a nuclear war
01:03:03.720 in which 99% of all people die, and there's a nuclear war in which 100% of all people die.
01:03:10.000 And obviously, the war where 100% of people die is the worst, followed by the war where 99% of people
01:03:15.460 die. But he said, which of those differences is bigger? And he said that most people would say that
01:03:22.200 the difference between peace and 99% of people dying is the bigger difference. But he thought
01:03:27.260 that because with that last 1%, some kind of discontinuous thing happens where you lose the
01:03:33.620 entire future, and thus that was the bigger difference. And there's this reason to be especially
01:03:40.000 concerned with what are now called existential risks.
01:03:44.900 Yeah. So, obviously, that final claim that the difference between 2 and 3 is bigger than the
01:03:50.080 difference between 1 and 2, that is going to be provocative for some people. And I think it does
01:03:55.820 expose another precipice of sorts. It's a precipice of moral intuition here, where people find it
01:04:04.000 difficult to think about the moral significance of unrealized opportunity, right? So, because on some
01:04:13.140 level of cancellation, fear of cancellation...
01:04:18.880 If you'd like to continue listening to this podcast, you'll need to subscribe at samharris.org.
01:04:24.820 You'll get access to all full-length episodes of the Making Sense podcast, and to other subscriber-only
01:04:30.020 content, including bonus episodes and AMAs, and the conversations I've been having on the Waking Up app.
01:04:36.560 The Making Sense podcast is ad-free and relies entirely on listener support.
01:04:40.460 And you can subscribe now at samharris.org.