Making Sense of Existential Threat and Nuclear War | Episode 7 of The Essential Sam Harris
Episode Stats
Words per Minute
152.4
Summary
The goal of this series is to organize, compile, and juxtapose conversations hosted by Sam Harris into specific areas of interest, in order to construct a coherent overview of Sam's perspectives and arguments, the various explorations and approaches to the topic, the relevant agreements and disagreements, and the pushbacks and evolving thoughts which his guests have advanced. The purpose of these compilations is not to provide a complete picture of any issue, but to entice you to go deeper into these subjects. Along the way, we'll point you to the full episodes with each featured guest, and at the conclusion, we'll offer some reading, listening, and watching suggestions which range from fun and light to densely academic. This is The Essential Sam Harris: Making Sense of Existential Threat and Nuclear War, part of a series of episodes hosted by the philosopher and writer Sam Harris, exploring a broad array of ideas and concepts related to existential threat and nuclear war. In this episode, you'll hear the natural overlap with theories of ethics, violence, and pacifism, and more. We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers. So if you enjoy what we're doing here, please consider becoming one.
Transcript
00:00:10.880
Just a note to say that if you're hearing this, you are not currently on our subscriber
00:00:14.680
feed and will only be hearing the first part of this conversation.
00:00:18.420
In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at
00:00:24.060
There you'll find our private RSS feed to add to your favorite podcatcher, along with
00:00:30.520
We don't run ads on the podcast, and therefore it's made possible entirely through the support
00:00:35.900
So if you enjoy what we're doing here, please consider becoming one.
00:00:50.240
This is Making Sense of Existential Threat and Nuclear War.
00:00:54.360
The goal of this series is to organize, compile, and juxtapose conversations hosted by Sam
00:01:05.080
This is an ongoing effort to construct a coherent overview of Sam's perspectives and arguments,
00:01:10.600
the various explorations and approaches to the topic, the relevant agreements and disagreements,
00:01:16.940
and the pushbacks and evolving thoughts which his guests have advanced.
00:01:20.420
The purpose of these compilations is not to provide a complete picture of any issue, but
00:01:27.500
to entice you to go deeper into these subjects.
00:01:30.740
Along the way, we'll point you to the full episodes with each featured guest, and at the
00:01:35.860
conclusion, we'll offer some reading, listening, and watching suggestions, which range from
00:01:43.320
One note to keep in mind for this series, Sam has long argued for a unity of knowledge where
00:01:50.680
the barriers between fields of study are viewed as largely unhelpful artifacts of unnecessarily
00:01:57.940
The pursuit of wisdom and reason in one area of study naturally bleeds into, and greatly
00:02:05.280
You'll hear plenty of crossover into other topics as these dives into the archives unfold.
00:02:09.960
And your thinking about a particular topic may shift as you realize its contingent relationships
00:02:17.000
In this topic, you'll hear the natural overlap with theories of ethics, violence and pacifism,
00:02:25.700
Let's make sense of existential threat and nuclear war.
00:02:29.000
In 1961, the astronomer Frank Drake jotted down a fairly simple, back-of-the-napkin formula
00:02:39.960
to calculate just how many technologically advanced civilizations we should expect to be out there
00:02:50.180
The equation starts with an extremely large number, the estimate of the total number of
00:02:59.440
Then we narrow that number down to how many of those stars have planets orbiting them.
00:03:04.700
Then we narrow that number down to how many of those planets are likely to be suitable
00:03:11.800
Then we narrow that down to the number of those life-suitable planets that have actually had
00:03:17.460
Then we narrow that down to how many of those life-forms are intelligent.
00:03:23.400
And then, finally, we narrow that down to how many of those intelligent life-forms advanced
00:03:33.460
Even if we're quite conservative with our estimate at each step of the narrowing process,
00:03:38.900
maybe we guess that only one in every 100,000 life-suitable planets actually did achieve even
00:03:46.400
Or that only one in every 1,000,000 forms of intelligent life became technologically
00:03:54.340
Even if we apply these stringent factors, the results of the equation and our remaining
00:03:59.600
number suggest that there still ought to be between 1,000 and 100 million advanced civilizations
00:04:08.860
And there are, of course, billions of galaxies just like ours.
00:04:14.440
So even if the correct number is just in the hundreds in our Milky Way, when you look out
00:04:19.400
in the cosmos, there should be millions of civilizations out there.
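As a rough illustration of the narrowing arithmetic described above, here is a minimal sketch in Python. The function name and every parameter value are illustrative assumptions chosen for this compilation page, not figures quoted in the episode, and the result swings by many orders of magnitude depending on the factors you plug in.

```python
# A minimal sketch of the Drake-style narrowing described above.
# Every value here is an illustrative assumption, not a figure from the episode.

def estimate_civilizations(
    stars_in_galaxy=2e11,                 # rough star count for the Milky Way
    frac_stars_with_planets=0.9,          # fraction of stars that host planets
    habitable_planets_per_star=0.2,       # life-suitable planets per star
    frac_planets_with_life=0.1,           # life-suitable planets where life arose
    frac_life_intelligent=0.01,           # life that became intelligent
    frac_intelligent_technological=0.1,   # intelligent life that became technological
):
    """Multiply the narrowing factors, one per step of the narration above."""
    return (stars_in_galaxy
            * frac_stars_with_planets
            * habitable_planets_per_star
            * frac_planets_with_life
            * frac_life_intelligent
            * frac_intelligent_technological)

print(f"Illustrative estimate: {estimate_civilizations():,.0f} advanced civilizations")
# With these made-up factors the product lands in the millions; more pessimistic
# factors push it below one, which is the whole point of the debate.
```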
00:04:22.900
A physicist named Enrico Fermi asked the simple question,
00:04:33.600
How come, when we look out into the cosmos, we don't see or hear obvious evidence of a plethora
00:04:40.040
of advanced life-forms zipping about in their ships, systematically geoforming entire galaxies
00:04:53.220
There is no shortage of hypotheses to address Fermi's question, but just about all of the
00:04:58.560
responses can be categorized under three general answer types.
00:05:05.220
Perhaps all of Drake's math was right, and everybody will show up, but we just happen
00:05:13.680
The cosmos itself may have just recently reached a state of habitability, after the chaos from
00:05:19.020
the initial inflation and the Big Bang sent heat and debris flying about in every direction.
00:05:24.880
Maybe it just recently settled down and allowed life like ours to flourish, and we humans are
00:05:34.340
Maybe Drake's numbers were not nearly conservative enough, and life such as ours is just an exceedingly
00:05:42.620
Perhaps there are only a small handful of civilizations out there, and given the vastness of the cosmos,
00:05:48.760
it's no surprise that we wouldn't have had any close neighbors who happen to be advanced
00:05:57.700
Or perhaps the most disturbing answer, the one we're going to be dealing with in this compilation,
00:06:08.280
What if there is a certain unavoidable technological phase that every intelligent life's advancement
00:06:15.780
A technological phase that is just so hard to get through that almost no civilization successfully
00:06:23.440
And that explains why it appears that no one is out there.
00:06:27.820
It may be that we humans are on a typical trajectory, and are destined to be erased.
00:06:35.660
But even if there is a filter, and even if just the tiniest percentage of civilizations have
00:06:41.300
been able to get through it and continue advancing without tripping over themselves, pretty soon
00:06:46.620
they'd have the knowledge of how to do monumentally big engineering projects, if they so choose.
00:06:53.040
We should see evidence of their continued existence, right?
00:06:57.720
So let's make sure we're imagining this filter analogy correctly.
00:07:05.760
Maybe we should be picturing thicker and thicker filter layers stacked one on top of the other.
00:07:10.680
Maybe there would be a moment when you really do leave them all behind.
00:07:16.200
That point of permanent safety would be when a civilization achieves a kind of knowledge so
00:07:21.360
powerful that it understands how to survive and avoid its own self-destruction perpetually,
00:07:26.620
and really does get through all of those filters.
00:07:30.520
But there does seem to be a kind of natural sequential order of the types of knowledge that
00:07:37.000
However, it is difficult to imagine discovering how to build flying machines before building
00:07:42.340
wheelbarrows, but that is also not a guarantee.
00:07:46.440
Is our human order of scientific discovery typical, or an outlier?
00:07:51.060
It seems that harnessing energy is key to both creative and destructive power, and that they
00:07:59.320
You could imagine the kind of knowledge it would take to pull off a huge engineering project,
00:08:03.620
like building a device that could siphon all of the energy from a black hole at the center
00:08:10.360
And you can recognize that this same knowledge would presumably also contain the power to destroy
00:08:15.720
the civilization which discovered it, either maliciously or accidentally.
00:08:20.400
And the odds of avoiding that fate trend towards impossible over a short amount of time.
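To make that compounding-risk intuition concrete, here is a minimal sketch of the arithmetic; the 1% annual figure is an arbitrary assumption chosen for illustration, not a number from the episode.

```python
# How a small, steady annual risk compounds toward near-certain catastrophe.
# The 1% annual probability is an arbitrary illustrative assumption.

annual_risk = 0.01  # assumed chance per year of a civilization-ending mishap

for years in (10, 50, 100, 500):
    chance_of_survival = (1 - annual_risk) ** years
    print(f"Chance of avoiding catastrophe over {years} years: {chance_of_survival:.1%}")

# Over 500 years, a 1% annual risk leaves well under a 1% chance of survival,
# which is the sense in which the odds trend towards impossible.
```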
00:08:27.140
This is the great filter answer to Enrico Fermi, that there are countless civilizations out there
00:08:34.220
that blip out of existence almost as quickly as they achieve the technical prowess to harness
00:08:39.620
even a small percentage of the potential energy available to them.
00:08:51.080
We humans are a relatively young species, and already we seem to be discovering a few technologies
00:08:59.460
If we get through our current challenges, are we bound to just discover another, even more
00:09:09.960
This compilation is going to be a tour of Sam's engagement with, and a close look at, the strongest
00:09:18.700
A weapon that might be a candidate for this great filter, or at least a very difficult
00:09:27.180
The complete erasure and annihilation of civilization was a talent once thought to be reserved only
00:09:34.800
As a reminder of just how stark the moment was when we realized we may have that power in
00:09:39.640
our own hands, perhaps for the first time sensing that great filter on our horizon,
00:09:44.660
it's worth playing a haunting and now very famous audio clip which lays the realization
00:09:52.800
Upon witnessing a successful test detonation of a nuclear bomb south of Los Alamos, Robert
00:09:58.320
Oppenheimer, the physicist leading the Manhattan Project, recalls the scene and his thoughts.
00:10:18.140
And I remembered the line from the Hindu scripture, the Bhagavad Gita.
00:10:29.960
Vishnu is trying to persuade the prince that he should do his duty.
00:10:39.100
And to impress him, takes on his multi-armed form and says, now I am become death, the destroyer of worlds.
00:10:54.560
I suppose we all thought that one way or another.
00:10:57.300
Making sense of nuclear war and its existential threat is not the happiest of subjects.
00:11:06.460
And perhaps that's why most of us don't often look closely at the precariousness of
00:11:12.180
We experience a kind of cognitive dissonance that can act as a psychological barrier when
00:11:17.120
direct engagement with a known threat is just too destabilizing.
00:11:20.900
And more importantly, when the threat seems to defy a readily available remedy.
00:11:26.940
If there is a great filter out there, what good would it do to worry about it?
00:11:34.780
Well, Sam Harris is one of those people who forces himself to.
00:11:41.000
Before we get to the guests and conversations that Sam has hosted on Making Sense, we should
00:11:45.560
remind ourselves of the analogy that we're using to approach this subject.
00:11:51.860
A filter, no matter how dense, does permit some things to get through.
00:11:57.040
So even if the odds are stacked against us, the only game in town appears to be trying
00:12:02.080
to improve our chances of getting to the other side.
00:12:06.500
We're going to start with Sam himself as he describes his re-engagement with this threat.
00:12:11.740
It's his attempt to shake us out of our collective moral slumber, to help us notice our circumstances
00:12:20.360
He reads here from a particular book which was instrumental to his paying close attention
00:12:26.440
Sam is speaking in July of 2020, in the introduction of episode 210.
00:12:30.740
We're coming up on the 75th anniversary of the atomic bomb in about a week.
00:12:39.760
July 16th is the 75th anniversary of Trinity, the explosion of the first atomic bomb at the
00:12:49.440
Whatever the merits or necessity of our building the bomb, and even using it to end the war with
00:12:59.720
But what is absolutely clear to anyone who studies the ensuing 75 years, is that these were 75 years
00:13:14.760
And this has been a chapter in human history of such reckless stupidity, that it's been a kind of
00:13:32.260
We have forgotten about the situation we are in every day of our lives.
00:13:37.880
This is really difficult to think about, much less understand.
00:13:44.120
The enormity of our error here is stupefying, in some basic sense.
00:13:51.600
It's like we were convinced 75 years ago to rig all of our homes and buildings to explode.
00:14:01.620
And then we just got distracted by other things, right?
00:14:05.820
And most of us live each day totally unaware that the status quo is as precarious as it
00:14:14.940
So when the history of this period is written, our descendants will surely ask, what the hell
00:14:23.160
And we are the people of whom that question will be asked.
00:14:28.020
That is, if we don't annihilate ourselves in the meantime, what the hell are we thinking?
00:14:39.080
We have been stuck for nearly three generations in a posture of defending civilization, or imagining
00:14:48.220
that we are, by threatening to destroy it at any moment.
00:14:53.660
And given our capacity to make mistakes, given the increasing threat of cyber attack, the status
00:15:05.920
The first book I ever read about the prospect of nuclear war was Jonathan Schell's The Fate
00:15:11.240
of the Earth, which originally came out in the New Yorker in 1982.
00:15:17.260
If you haven't read it, it's a beautifully written and amazingly sustained exercise in
00:15:26.900
And I'd like to read you a few passages to give you a sense of it.
00:15:30.800
This is from the beginning, starting a few sentences in.
00:15:33.920
These bombs were built as weapons for war, but their significance greatly transcends war
00:15:42.360
They grew out of history, yet they threatened to end history.
00:15:46.580
They were made by men, yet they threatened to annihilate man.
00:15:51.460
They are a pit into which the whole world can fall, a nemesis of all human intentions, actions,
00:15:58.520
Only life itself, which they threatened to swallow up, can give the measure of their
00:16:05.980
Yet in spite of the immeasurable importance of nuclear weapons, the world has declined,
00:16:13.320
We have thus far failed to fashion, or even to discover within ourselves, an emotional or
00:16:21.560
This peculiar failure of response, in which hundreds of millions of people acknowledge the
00:16:26.840
presence of an immediate, unremitting threat to their existence, and to the existence of
00:16:31.420
the world they live in, but do nothing about it, a failure in which both self-interest and
00:16:37.160
fellow-feeling seem to have died, has itself been such a striking phenomenon that it has to
00:16:43.560
be regarded as an extremely important part of the nuclear predicament, as this has existed
00:16:50.180
So there, Schell gets at the strangeness of the status quo, where the monster is in the
00:16:57.220
room, and yet we have managed to divert our attention from it.
00:17:04.560
It's a violation both of self-interest and fellow-feeling.
00:17:09.800
Our capacity to ignore this problem somehow seems psychologically impossible.
00:17:14.680
It's a subversion of, really, all of our priorities, both personal and with respect to our ethical
00:17:24.180
A little bit later on, he talks about this state of mind a little more.
00:17:28.940
Because denial is a form of self-protection, if only against anguishing thoughts and feelings,
00:17:35.120
and because it contains something useful, and perhaps even, in its way, necessary to life.
00:17:40.240
If anyone who invites people to draw aside the veil and look at the peril face-to-face is
00:17:46.380
at risk of trespassing on inhibitions that are part of our humanity, I hope in these reflections
00:17:52.180
to proceed with the utmost possible respect for all forms of refusal to accept the unnatural
00:17:57.800
and horrifying prospect of a nuclear holocaust.
00:18:02.020
So there, Schell is being more tactful than I'm being here, admitting that this denial is
00:18:07.880
on some level necessary to get on with life, but it is nonetheless crazy.
00:18:14.340
Year after year after year, we are running the risk of mishap here.
00:18:20.480
And whatever the risk, you can't keep just rolling the dice.
00:18:27.380
And so it seems time to ask, when is this going to end?
00:18:31.860
To begin the exploration of clips, we're going to hear from a philosopher and author who spends
00:18:41.260
a lot of time looking at existential risk, Nick Bostrom.
00:18:46.140
Bostrom has a talent for painting colorful analogies to prime our thinking about these
00:18:52.220
One of his analogies that brings the great filter hypothesis into vivid clarity goes like
00:18:57.820
Imagine a giant urn filled with marbles, which are mostly white in color, but range in shades
00:19:07.460
Each of these marbles represents a kind of knowledge that we can pluck from nature and apply
00:19:13.780
Picture reaching in and pulling out the knowledge of how to make a hairdryer, or the automobile,
00:19:19.440
or a toaster oven, or even something more abstract, like the knowledge of how to alter the genome
00:19:24.520
to choose eye color or some other aesthetic purpose.
00:19:28.300
Reaching into this urn, rummaging around and pulling out a marble, is the act of scientific
00:19:36.020
Now, white marbles represent the kinds of knowledge that carry with them very little existential
00:19:42.400
Maybe pulling a marble like this would be gaining knowledge of how to manufacture glass.
00:19:46.760
That's a marble that we pulled out of the urn around 3500 BCE in Egypt.
00:19:53.080
That little bit of knowledge mostly improves life on Earth for humans and has all kinds
00:19:57.820
of lovely applications for food preservation, artistic expression, window manufacturing, eyesight
00:20:06.940
It likely carries with it some kind of minor threat as well, though it's difficult to imagine
00:20:12.180
how that specific advancement would inherently threaten the existence of the species.
00:20:16.780
You can imagine thousands of white marbles that feel as benign, positive, and generally
00:20:23.940
But Bostrom asks us to consider what a black marble would be.
00:20:28.840
Is there some kind of knowledge that, when plucked out of nature, is just so powerful that
00:20:33.980
every civilization is eradicated shortly after pulling it from the urn?
00:20:38.240
Are there several of these black marbles hiding in the urn somewhere?
00:20:45.140
Sam points out that it has generally been the attitude of science to just pull out as
00:20:49.660
many marbles as fast as we possibly can and let everyone know about it the moment you
00:20:55.480
And we operate as if the black marbles aren't in the urn, as if they simply don't exist.
00:21:01.620
What shade of gray was the marble that represented the moment we obtained the knowledge of how to
00:21:06.880
split the nucleus of a uranium-235 atom and trigger and target its fission chain reaction
00:21:16.380
That will be a question we consider throughout this episode, as well as the specific political
00:21:21.340
entanglements which relate to this problem, and the alliances and personalities which affected
00:21:27.220
So, let's start out with Nick Bostrom and Sam engaging on the topic of existential threat
00:21:33.200
in general as we move towards the nuclear question.
00:21:36.940
Here, you'll hear Bostrom lay out his vulnerable world hypothesis and draw out the metaphor that
00:21:43.680
This is from episode 151, Will We Destroy the Future?
00:21:47.580
Let's start with the vulnerable world hypothesis.
00:21:56.100
Well, the hypothesis is, roughly speaking, that there is some level of technological development
00:22:01.840
at which the world gets destroyed by default, as it were.
00:22:06.800
So then, what does it mean to get destroyed by default?
00:22:10.120
I define something I call the semi-anarchic default condition, which is a condition in which there
00:22:16.340
are a wide range of different actors with a wide range of different human-recognizable motives.
00:22:22.900
But then, more importantly, two conditions hold.
00:22:26.200
One is that there is no very reliable way of resolving global coordination problems, and the other is that
00:22:31.980
we don't have an extremely reliable way of preventing individuals from committing actions
00:22:39.240
that are extremely strongly disapproved of by a great majority of other people.
00:22:43.080
So, maybe it's better to come at it through a metaphor.
00:22:49.740
So, what if, in this urn, there is a black ball in there somewhere?
00:22:55.280
Like, is there some possible technology that could be such that whichever civilization discovers
00:23:04.840
And what if there is such a black ball in the urn, though?
00:23:08.020
I mean, we can ask about how likely that is to be the case.
00:23:11.640
We can also look at what is our current strategy with respect to this possibility.
00:23:16.480
And it seems to me that currently our strategy, with respect to the possibility that the urn
00:23:22.520
might contain a black ball, is simply to hope that it doesn't.
00:23:25.700
And so, we keep extracting balls as fast as we can.
00:23:28.500
We have become quite good at that, but we have no ability to put balls back into the urn.
00:23:35.640
So, the first part of this paper tries to identify what are the types of ways in which the world
00:23:46.080
could be vulnerable, the types of ways in which there could be some possible black ball technology
00:23:52.460
And the first and most obvious type of way the world could be vulnerable is if there is
00:23:57.680
some technology that greatly empowers individuals to cause sufficiently large quantities of destruction.
00:24:05.300
So, let me motivate this, or illustrate it, by means of a historical counterfactual.
00:24:12.160
We, in the last century, discovered how to split the atom and release the energy that
00:24:18.220
is contained within, some of the energy that's contained within the nucleus.
00:24:24.040
And it turned out that this is quite difficult to do.
00:24:31.380
So, really, only states can do this kind of stuff to produce nuclear weapons.
00:24:37.120
But what if it had turned out that there had been an easier way to release the energy of
00:24:41.840
What if you could have made a nuclear bomb by baking sand in the microwave oven or something
00:24:48.520
So, then that might well have been the end of human civilization in that it's hard to see
00:24:54.260
how you could have cities, let's say, if anybody who wanted to could destroy millions of people.
00:25:02.680
Now, we know, of course, that it is physically impossible to create an atomic detonation by
00:25:10.740
But before you actually did the relevant nuclear physics, how could you possibly have known how
00:25:15.220
Well, let's just spell out that because I want to conserve everyone's intuitions as we
00:25:21.000
go on this harrowing ride to your terminus here because the punchline of this paper is
00:25:27.420
fairly startling when you get to what the remedies are.
00:25:31.420
So, why is it that civilization could not endure the prospect of what you call easy nukes?
00:25:41.580
If it were that easy to create a Hiroshima-level blast or beyond, why is it just a foregone conclusion
00:25:50.440
that that would mean the end of cities and perhaps the end of most things we recognize?
00:25:57.100
I think foregone conclusion is maybe a little too strong.
00:25:59.160
It depends a little bit on the exact parameters we plug in.
00:26:03.400
I mean, the intuition is that in a large enough population of people, like amongst every population
00:26:10.240
with millions of people, there will always be a few people who, for whatever reason, would
00:26:15.840
like to kill a million people or more if they could.
00:26:19.600
Whether they are just crazy or evil or they have some weird ideological doctrine or they're
00:26:27.260
trying to extort other people or threaten other people, that just humans are very diverse
00:26:33.420
and in a large enough set of people that will, for practically any desire, you can specify
00:26:40.940
So, if each of those destructively inclined people would be able to cause a sufficient amount
00:26:45.880
of destruction, then everything would get destroyed.
00:26:48.740
Now, if one imagines this actually playing out in history, then to tell whether all of civilization
00:26:58.300
really would get destroyed or some horrible catastrophe short of that would happen instead
00:27:04.100
Like just what kind of nuclear weapon would it be like a small kind of Hiroshima type of
00:27:12.020
Could literally anybody do it like in five minutes?
00:27:14.260
Or would it take some engineer working for half a year?
00:27:18.940
And so, depending on exactly what values you pick for those and some other variables, you
00:27:24.380
might get scenarios ranging from very bad to kind of existential catastrophe.
00:27:31.220
But the point is just to illustrate that there historically have been these technological transitions
00:27:38.320
where we have been lucky in that the destructive capabilities we discovered were hard to wield.
00:27:46.660
You know, and maybe a plausible way in which this kind of very highly destructive capability
00:27:52.520
could become easy to wield in the future would be through developments in biotechnology that
00:27:57.640
maybe makes it easy to create designer viruses and so forth that don't require high amounts
00:28:04.100
of energy or special difficult materials and so forth.
00:28:07.740
And there you might have an even stronger case.
00:28:09.960
Like so with a nuclear weapon, like one nuclear weapon can only destroy one city, right?
00:28:14.760
Whereas viruses and stuff potentially can spread.
00:28:19.520
And we should remind people that we're in an environment now where people talk with some
00:28:26.460
degree of flippancy about the prospect of every household one day having something like
00:28:33.880
a desktop printer that can print DNA sequences, right?
00:28:37.840
That everyone becomes their own bespoke molecular biologist and you can just print your own medicine
00:28:44.900
at home or your own genetic intervention at home.
00:28:48.360
And this stuff really is, you know, the recipe under those conditions, the recipe to weaponize
00:28:54.560
the 1918 flu could just be sent to you like a PDF.
00:28:59.580
It's not beyond the bounds of plausible sci-fi that we could be in a condition where it really
00:29:06.320
would be within the power of one nihilistic or, you know, otherwise ideological person to
00:29:12.700
destroy the lives of millions and even billions in the worst case.
00:29:16.360
Yeah, or sent as a PDF, or you could just download it from the internet.
00:29:20.640
So the full genomes of a number of highly virulent organisms are in the public domain
00:29:32.500
I think that I would rather see a future where DNA synthesis was a service provided by a few
00:29:37.560
places in the world where it would be able, if the need arose, to exert some control, some
00:29:42.900
screening rather than something that every lab needs to have its own separate little machine.
00:29:48.360
So that's, these are examples of type one vulnerability, like where the problem really
00:29:54.000
arises from individuals becoming too empowered in their ability to create massive amounts of
00:30:01.580
Now, so there are other ways in which the world could be vulnerable that are slightly more
00:30:06.420
subtle, but I think also worth bearing in mind.
00:30:09.380
So these have to do more with the way that technological developments could change the
00:30:16.960
We can again return to the nuclear history case for an illustration of this.
00:30:23.120
And actually, this is maybe the closest to a black ball we've gotten so far with thermonuclear
00:30:29.040
weapons and the big arms race during the Cold War led to something like 70,000 warheads
00:30:37.820
So it looks like when we can see some of the archives of this history that have recently
00:30:45.020
opened up, that there were a number of close calls.
00:30:48.300
The world actually came quite close to the brink on several occasions, and we might have
00:30:54.100
It might not have been that we were in such a stable situation; it rather might have
00:30:59.380
been that this was a kind of slightly black ball-ish technology and we just had enough luck
00:31:05.060
But you could imagine it could have been worse.
00:31:07.880
You could imagine properties of this technology that would have created stronger incentives,
00:31:11.700
say, for a first strike so that you would have crisis instability.
00:31:16.540
If it had been easier, let us say, in a first strike to take out all the adversary's nuclear
00:31:21.260
weapons, then it might not have taken a lot in a crisis situation to just have enough fear
00:31:30.880
that you would have to strike first for fear that the adversary otherwise would do the
00:31:36.400
Remind people that in the aftermath of the Cuban Missile Crisis, the people who were closest
00:31:41.420
to the action felt that the odds of an exchange had been something like a coin toss.
00:31:49.220
And what you're envisioning is a situation where what you describe as safe first strike, which
00:31:55.340
is there's just no reasonable fear that you're not going to be able to annihilate your enemy
00:32:05.040
And it's also forgotten that the status quo of mutually assured destruction was
00:32:13.340
I mean, it was before the Russians, or rather the Soviets, had their own arsenals.
00:32:18.920
There was a greater game theoretic concern that we would be more tempted to use ours because
00:32:29.460
So some degree of stabilizing influence, although, of course, maybe at the expense of the outcome
00:32:33.920
being even worse, if both sides were destroyed, then the safe first strike might just be one
00:32:41.820
And so if it had been possible, say, with one nuclear warhead to wipe out enemy nuclear
00:32:47.620
warheads within a wider radius, then it's actually the case.
00:32:51.160
Or if it had been easier to detect nuclear submarines so that you could be more confident
00:32:57.760
that you had actually been able to target all of the other side's nuclear capability,
00:33:03.920
that could have resulted in a more unstable arms race, one that would, with a sort of
00:33:11.140
higher degree of certainty, result in the weapons being used.
00:33:16.240
And you can consider other possible future ways in which, say, the world might find itself
00:33:22.740
Or it's not that anybody wants to destroy the world, but it might just be very hard to come
00:33:28.760
to an agreement that avoids the arms being built up and then used in a crisis.
00:33:35.320
Nuclear weapon reduction treaties, you know, there are concerns about verification.
00:33:41.000
But in principle, you can kind of have, like, nuclear weapons are quite big and they use very
00:33:46.440
There might be other military technologies where, even if both sides agree that they
00:33:50.200
wanted to just ban this military technology, it might just, the nature of the technology
00:33:55.860
might be such that it would be very difficult or impossible to enforce.
00:34:01.160
In that exchange, you heard Bostrom mention how lucky we may have gotten, in that it turns
00:34:06.420
out, nuclear weapons are not very easy to create.
00:34:09.740
So, even if this technology turns out to be a nearly black ball, and perhaps the darkest
00:34:16.340
one we've yet pulled out of the urn, we can examine our treatment of them as a dress
00:34:24.360
Bostrom also mentioned something in passing that's worth keeping in mind as we look closer
00:34:31.040
What he referred to as global coordination problems.
00:34:34.380
This is a concept sometimes used in economics and game theory, and it describes a situation
00:34:40.480
that would be best solved by everyone simultaneously moving in the same direction.
00:34:45.660
But of course, people can't be sure what's in anyone else's mind, and humans are famously
00:34:51.240
difficult to coordinate and synchronize in any case.
00:34:54.480
So often, these types of problems entrench themselves and worsen, even if most people agree that they
00:35:01.740
Another relevant feature of a coordination problem is that there's usually a strong disincentive
00:35:09.700
This can be applied to climate change, political revolutions, or even something like a great
00:35:14.760
number of people secretly desiring to quit social media, but not wanting to lose connections
00:35:21.920
Laying the global coordination problem framework onto disarmament of nuclear weapons is an easy
00:35:27.660
The first mover who dismantles their bombs may be at a huge disadvantage, even if everyone
00:35:36.720
In fact, as you also heard Bostrom point out, when thinking about nuclear war strategy, the
00:35:42.960
first strike is often aimed at decapitating the opponent's ability to strike back.
00:35:47.660
Of course, if your opponent has already willingly disarmed, say, in accordance with the mutual treaty,
00:35:53.180
while you have retained your weapons and only pretended to disarm, the effect is just as devastating.
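As a toy illustration of the coordination problem just described, here is a minimal payoff-matrix sketch. The numbers are arbitrary assumptions chosen only to show the first-mover disadvantage; they do not come from the episode or from Bostrom.

```python
# A toy two-player payoff matrix for the disarmament coordination problem.
# All payoff values are arbitrary illustrative assumptions.

# Payoffs are (your payoff, opponent's payoff) for each pair of choices.
payoffs = {
    ("disarm", "disarm"): (3, 3),  # the outcome both sides prefer overall
    ("disarm", "keep"):   (0, 4),  # unilateral disarmament leaves you exposed
    ("keep",   "disarm"): (4, 0),  # you hold the only arsenal
    ("keep",   "keep"):   (1, 1),  # the armed status quo
}

def best_response(opponent_choice: str) -> str:
    """Return the choice that maximizes your own payoff against a fixed opponent."""
    return max(("disarm", "keep"),
               key=lambda mine: payoffs[(mine, opponent_choice)][0])

for opp in ("disarm", "keep"):
    print(f"If the opponent chooses to {opp}, your best response is to {best_response(opp)}.")

# Both lines print "keep": holding on to the weapons dominates, even though
# mutual disarmament (3, 3) beats the armed status quo (1, 1) for everyone.
```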
00:36:02.740
Now that we've laid some of the foundation to think about existential risk in general,
00:36:07.140
let's move to a conversation Sam had with a guest who looks very closely at the prospect
00:36:12.520
The guest is Fred Kaplan, and when Sam spoke with him, Kaplan had just published a book called
00:36:21.180
But before we get to Kaplan, let's first listen to some of Sam's introduction to the conversation,
00:36:26.260
and let him do the work of trying to drag our attention to the unnerving reality of this
00:36:32.660
He's going to bring us back to 1983, at a moment when the only thing standing between us
00:36:38.480
and nuclear Armageddon may have been a single person's intuition.
00:36:44.420
The Doomsday Clock was just advanced closer to midnight than it has been at any point in the last 75 years.
00:36:56.120
Now, whether you put much significance in that warning,
00:37:01.100
just take a moment to consider that the people who focus on this problem
00:37:11.840
And if I were to ask how long it's been since you worried that you might
00:37:20.300
or how long has it been since you've worried about
00:37:33.720
But I would wager that very few people listening to this podcast
00:37:38.580
have spent any significant time feeling the implications of what is manifestly true.
00:37:44.580
All of us are living under a system of self-annihilation
00:37:53.560
that we might stumble into a nuclear war based solely on false information.
00:37:58.600
In fact, this has almost happened on more than one occasion.
00:38:07.020
He should be one of the most famous people in human history.
00:38:12.720
He was a lieutenant colonel in the Soviet Air Defense Forces
00:38:16.660
who is widely believed to be almost entirely responsible
00:38:21.840
for the fact that we didn't have World War III in the year 1983.
00:38:30.460
and the Soviet Union had just mistaken a Korean passenger jet,
00:38:39.120
and shot it down after it strayed into Siberian airspace.
00:38:45.400
And the U.S. and our allies were outraged over this
00:38:55.100
had performed multiple nuclear tests that month.
00:39:14.060
and there was apparently no sign that they were in error.
00:39:34.220
it's widely believed that had he passed that information along,
00:39:38.020
a massive retaliatory strike against the United States
00:39:43.440
And of course, upon seeing those incoming missiles,
00:39:47.660
of which there would likely have been hundreds,
00:39:51.480
we would have launched a retaliatory strike of our own.
00:40:07.660
And his decision boiled down to mere intuition.
00:40:11.880
The protocol demanded that he pass the information along
00:40:16.300
because it showed every sign of being a real attack.
00:40:23.160
were really going to launch a nuclear first strike,
00:40:32.120
But it's also believed that any of the other people