#151 — Will We Destroy the Future?
Episode Stats
Words per Minute
163.53212
Summary
Nick Bostrom is a Swedish-born philosopher with a background in theoretical physics, computational neuroscience, logic, AI, and many other interesting intersecting topics. But officially, he's a Professor of Philosophy at Oxford University, where he leads the Future of Humanity Institute, a research center focused largely on the problem of existential risk. And today we get deep into his views on existential risk by focusing on three of his papers, which I'll describe in our conversation. We talk about the Vulnerable World Hypothesis, which gets us into many interesting tangents with respect to the history of nuclear deterrence and the possible need for what he calls turnkey totalitarianism; we talk about whether we're living in a computer simulation; and we talk about the doomsday argument, which is not his, but which is one of those philosophical thought experiments that have convinced many people that we might be living close to the end of human history. We also talk about the implications of there being extraterrestrial life out there in the galaxy, and many other topics, but all of it is focused on the question of whether humanity is close to the end of its career here or near the very beginning. And I hope you'll agree that the difference between those two scenarios is one of the more significant ones we can find. I really enjoyed talking to Nick. I find his work fascinating and very consequential, and that's a good combination. Make sure to check out his work!
Transcript
00:00:10.880
Just a note to say that if you're hearing this, you are not currently on our subscriber
00:00:14.680
feed and will only be hearing the first part of this conversation.
00:00:18.440
In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at
00:00:24.140
There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content.
00:00:30.520
We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers.
00:00:35.900
So if you enjoy what we're doing here, please consider becoming one.
00:00:54.760
Nick is someone I've been hoping to get on the podcast for quite some time.
00:00:58.360
He is a Swedish-born philosopher with a background in theoretical physics and computational neuroscience
00:01:06.880
and logic and AI and many other interesting intersecting topics.
00:01:14.120
But officially he's a professor of philosophy at Oxford University, where he leads the Future of Humanity Institute.
00:01:21.380
And this organization is a research center which is focused largely on the problem of existential risk.
00:01:30.220
And today we get deep into his views on existential risk by focusing on three of his papers, which I'll describe in our conversation.
00:01:39.860
We talk about what he calls the Vulnerable World Hypothesis, which gets us into many interesting
00:01:47.500
tangents with respect to the history of nuclear deterrence and the possible need for what he calls turnkey totalitarianism.
00:01:57.880
We talk about whether we're living in a computer simulation.
00:02:00.300
He's the father of the now-famous simulation argument.
00:02:04.640
We talk about the doomsday argument, which is not his, but it's one of these philosophical
00:02:10.420
thought experiments that have convinced many people that we might be living close to the end of human history.
00:02:16.240
We talk about the implications of there being extraterrestrial life out there in the galaxy
00:02:21.800
and many other topics, but all of it is focused on the question of whether humanity is close
00:02:29.320
to the end of its career here or near the very beginning.
00:02:34.560
And I hope you'll agree that the difference between those two scenarios is one of the more significant ones we can find.
00:02:46.640
I find his work fascinating and very consequential, and that's a good combination.
00:03:06.520
So you are fast becoming, or not too fast, it's been years now that I've been aware of
00:03:12.260
your work, but you are becoming one of the most provocative philosophers I can think of.
00:03:21.220
And I want to introduce you, but perhaps to begin with, how do you view your work and
00:03:30.080
How do you summarize your interests as a philosopher?
00:03:35.020
Broadly speaking, I'm interested in big picture questions for humanity and figuring out which things really matter.
00:03:42.120
That is, out of all the things you can be pushing on or pulling on in the world, which
00:03:46.920
ones would actually tend to make things better in expectation.
00:03:51.080
And then various kinds of sub-questions that come out from that ultimate quest to figure that out.
00:03:57.220
So when I think about your work, I see a concern that unifies much of it, certainly, with existential risk.
00:04:06.380
And I don't know if this is a phrase that you have popularized or if it's just derivative
00:04:11.140
of your work, but how do you think of existential risk?
00:04:15.360
And why is it so hard for most people to care about?
00:04:19.620
It's amazing to me that this is such an esoteric concern and you really have brought it into focus.
00:04:25.200
Yeah, I introduced the concept in a paper I wrote back in the early 2000s, the concept
00:04:33.860
being that of a risk either to the survival of Earth-originating intelligent life or a risk
00:04:39.860
that could permanently and drastically reduce our potential for desirable future developments.
00:04:46.460
So in other words, something that could permanently destroy the future.
00:04:50.500
I mean, even that phrase, I mean, you really have a talent for coming up with phrases that
00:04:57.940
are arresting and, you know, it's such a simple one.
00:05:03.320
You know, there are probably more people working in my local McDonald's than are thinking about
00:05:09.660
the prospect of permanently destroying the future.
00:05:13.300
How long have you been focused on this particular problem?
00:05:17.340
And again, why is it, there's something bewildering about trying to export this concern to the
00:05:24.360
rest of culture, even to very, very smart people who claim to be worried about things
00:05:32.340
Why is existential risk still such an esoteric concern?
00:05:36.400
Well, it's become less so over the last few years.
00:05:38.560
There is now actually a community of folks around the rationalist community, the EA community,
00:05:44.680
you know, various academic centers, and EA is effective altruism.
00:05:49.600
And not just these, but kind of radiating out from these, a number of individuals who are now taking these questions seriously.
00:05:57.860
So I think the comparison to the McDonald's restaurant would no longer be true now.
00:06:01.860
Maybe it was true several years ago.
00:06:05.160
Well, I guess you could ask that or you could ask why it's no longer the case.
00:06:08.780
I mean, I don't know that the default should be, if we're looking at academia,
00:06:11.540
that questions receive attention in proportion to their importance.
00:06:15.940
I think that's just kind of a poor model of what to expect from academic research.
00:06:23.520
I mean, on one level, you're asking people to care about the unborn, if the time horizon
00:06:28.800
is beyond their lives and the lives of their children, which seems, on its face, probably
00:06:35.580
harder than caring about distant strangers who are currently alive.
00:06:50.100
So generally in a simple model of the market economy, public goods tend to be undersupplied
00:06:55.960
because the creator of them captures only a small fraction of the benefits.
00:07:00.580
The global public goods are normally seen as the extreme of this.
00:07:04.940
If all of humanity benefits from some activity or is harmed by some activity, as in maybe
00:07:10.540
the case of global warming or something like that, then the incentives facing the individual
00:07:14.640
producer are just very dissociated from their overall consequences.
00:07:18.940
But with existential risk, it's even more extreme, actually, because it's a transgenerational
00:07:22.740
good in the sense that all future generations are also impacted by our decisions concerning existential risk.
00:07:31.740
And they are obviously not in a position, in any direct way, to influence our decisions.
00:07:36.160
They can't reward us if we do things that are good for them.
00:07:39.020
So if one thinks of human beings as selfish, one would expect the good of existential risk reduction to be undersupplied.
00:07:46.820
Like you could imagine if somehow people could go back in time, that future generations would
00:07:51.660
be willing to, like, spend huge amounts of money to compensate us for our efforts to reduce existential risk.
00:07:58.580
But since you can't, that transaction is not possible, and so there is this undersupply.
00:08:05.300
So that could be one way of explaining why there's relatively little attention paid to this.
00:08:11.760
And there was something in what you said about it being harder to care.
00:08:16.240
Like, it's a little strange that caring should be something that requires effort.
00:08:21.640
If one does care about it, it doesn't seem like it should be a straining thing to do.
00:08:26.100
And if one doesn't care, then it's not clear what motive
00:08:30.760
one would have for trying to strain to start caring.
00:08:33.640
So it's a framing problem in many cases so that there's a certain set of facts, let's say the
00:08:39.780
reality of human suffering in some distant country that you have never visited, you have
00:08:47.040
This information can just be transmitted to you about the reality of the suffering.
00:08:52.120
And transmitted one way, you find that you don't care, and transmitted another way, the
00:08:58.820
reality of it and the analogy to your own life or the lives of your children becomes salient, and you do care.
00:09:05.040
And so we know, through the work of someone like Paul Slovic, that
00:09:09.140
there are moral illusions here where people can be shown to care more about the fate of
00:09:14.580
one little girl who's delivered to them in the form of a personalized story.
00:09:19.800
And they'll care less about the fate of that same little girl plus her brother.
00:09:24.580
And they'll care less still if you tell them about the little girl, her brother, and the
00:09:30.760
500,000 other children who are also suffering from a famine.
00:09:35.260
And you just get this diminishment of altruistic impulse and of the amount of money they're willing to donate.
00:09:48.160
So we know we have some moral bugs in our psychology.
00:09:52.020
Yeah, so the original paper about existential risk made the point that from a certain type
00:10:00.580
of ethical theory, it looks like existential risk reduction is a very important goal.
00:10:06.740
If you have a broadly aggregative consequentialist philosophy, say if you're a utilitarian, and
00:10:12.740
if you work the numbers out, the number of possible future generations, the number of individuals
00:10:17.460
in each of those that can live very happy lives, then you multiply that together, and then
00:10:23.160
it looks like even a very small change in the probability that we will eventually achieve that potential would have enormous expected value.
00:10:30.980
In fact, a higher expected value than any other impacts that we might have in more direct ways
00:10:38.200
So that reducing existential risk by, you know, one thousandth of one percentage point would
00:10:44.000
be, from this utilitarian perspective, worth more than eliminating world hunger or curing cancer.
00:10:49.860
Now, that, of course, says nothing about the question of whether this kind of utilitarian view is correct.
00:10:59.320
But it just notes that that does seem to be an implication.
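To make the shape of Bostrom's expected-value point concrete, here is a rough sketch of the arithmetic with purely illustrative numbers that are not taken from the conversation: suppose a long, flourishing future could contain on the order of \(N = 10^{16}\) happy lives, and suppose some effort reduces the probability of existential catastrophe by \(\Delta p = 10^{-5}\), i.e. one thousandth of one percentage point. Then the gain in expected value, counted in lives, is

\[
\Delta p \times N \;=\; 10^{-5} \times 10^{16} \;=\; 10^{11} \ \text{lives in expectation},
\]

which already exceeds the roughly \(10^{10}\) people alive today, and the gap only widens under less conservative estimates of the future population. This is the sense in which, on an aggregative view, a tiny shift in the probability of reaching that future can outweigh even very large direct benefits to the present generation.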
00:11:01.680
Yeah, I'm definitely a consequentialist of a certain kind, so, you know, we don't need
00:11:09.000
But one thing that's interesting here, and this may be playing into it, is that there seems
00:11:13.500
to be a clear asymmetry between how we value suffering and its mitigation and how we value
00:11:23.300
the mere preemption of well-being or flourishing or positive state.
00:11:29.580
So that suffering is worse than pleasure or happiness is good.
00:11:36.520
You know, I think if you told most people, here are two scenarios for how you can spend
00:11:41.980
You can spend it as you were planning to, living within the normal bounds of human experience,
00:11:47.640
or I can give you one hour of the worst possible misery, followed by another hour of the deepest possible happiness.
00:11:56.000
Would you like to sample the two extremes on the phenomenological continuum?
00:12:01.420
I think most people would balk at this because we think, we have a sense that suffering is,
00:12:06.280
on some level, you know, in the limit is worse than any pleasure or happiness could be.
00:12:11.900
And so we look at the prospect of, let's say, you know, curing cancer and mitigating the suffering
00:12:18.180
from that as being more important ethically than simply not allowing the door to close
00:12:26.320
on future states of creativity and insight and beauty.
00:12:32.300
Yeah, I think one might want to decompose different ways in which that intuition could arise.
00:12:38.240
So one might just be that for us humans, as we are currently constituted, it's a lot easier
00:12:43.600
to create pain than to create a corresponding degree of pleasure.
00:12:47.760
We might just evolutionarily be such that we have a kind of deeper bottom than we have a high ceiling.
00:12:55.220
It might be possible, if you think about it in biological terms, in a short period of time
00:13:00.000
to introduce damage that reduces reproductive fitness more than you could possibly gain within the same amount of time.
00:13:07.120
So if we are thinking about these vast futures, you want to probably factor that out in that
00:13:13.320
you could re-engineer, say, human hedonic systems or the hedonic systems of whatever inhabitants
00:13:20.000
would exist in this future so that they would have a much larger capacity for upside.
00:13:25.420
And it's not obvious that there would be an asymmetry there.
00:13:28.060
Now, you might nevertheless think that given, in some sense, equal amounts of pleasure and
00:13:32.660
pain, and it's a little unclear exactly what the metric is here, that there would nevertheless
00:13:37.700
be some more basic ethical reason why one should place a higher priority on removing the pain.
00:13:45.260
A lot of people have intuitions about equality, say, in economic contexts, where helping the
00:13:52.420
worst off is more important than further promoting the welfare of the best off.
00:13:56.700
Maybe that's the source of some of those intuitions?
00:14:00.400
Actually, there's one other variable here, I think, which is that there is no victim or
00:14:07.360
beneficiary of the consequence of closing the door to the future.
00:14:12.680
So if you ask someone, well, what would be wrong with the prospect of everyone dying painlessly
00:14:20.300
in their sleep tonight, and there are no future generations, there's no one to be bereaved by it.
00:14:27.340
There's no one suffering the pain of the loss or the pain of the deaths, even.
00:14:31.800
So people are kind of at a loss for the place where the moral injury would land.
00:14:39.720
So there is a distinction within utilitarian frameworks between total utilitarians, who think you basically
00:14:48.340
count up all the good and subtract all the bad.
00:14:50.860
And then other views that try to take a more so-called person-affecting perspective, where
00:14:57.040
what matters is what happens to people, but coming into existence is not necessarily a benefit to that person.
00:15:06.560
And now I would say some kinds of existential catastrophe would have a continuing population
00:15:16.960
You might imagine, say, the world getting locked into some really dystopian
00:15:22.620
totalitarian regime, maybe there would be people living for a very long time, but just having very miserable lives.
00:15:31.540
So in some scenarios of existential catastrophe, there would still be inhabitants there.
00:15:37.160
Yeah, no, I think it's pretty clear that destroying the future could be pretty unpleasant for people
00:15:45.040
Now, I'd just like to harken back to a point from a few minutes ago, on the general premise here.
00:15:51.100
So I don't see it so much as a premise, this utilitarian view.
00:15:54.720
I mean, in fact, I wouldn't really describe myself as a utilitarian.
00:15:58.780
It would be more just pointing out the consequence.
00:16:00.640
There are various views about how we should reason about ethics, and there might be other
00:16:05.020
things we care about as well, aside from ethics.
00:16:07.740
And rather than directly trying to answer, what do we have most reason to do all things
00:16:11.240
considered, you might break it down and say, well, given this particular ethical theory,
00:16:15.140
what do we have most reason to do given this other value or this other goal we might have?
00:16:19.480
And then at the end of the day, you might want to add all of that up again.
00:16:23.120
But insofar as we are trying to reason about our ethical obligations, I have kind of a
00:16:31.820
normative uncertainty over different moral frameworks.
00:16:35.760
And so the way I would try to go about making decisions from a moral point of view would be through a kind of moral parliament.
00:16:44.540
It's a kind of metaphor, but where you try to factor in the viewpoints of a number of different
00:16:50.680
ethical theories kind of in proportion to the degree to which you assign them probability.
00:16:56.440
When I'm out and about in the world, I usually have to make the case for utilitarianism,
00:17:01.260
or at least you should consider this perspective.
00:17:07.440
If this thing has millions of people and this one only has hundreds of people being affected, surely that difference matters.
00:17:13.720
And yet when I'm back here at the headquarters, as it were, I usually am the one who has to
00:17:19.440
kind of advocate against the utilitarian perspective because so many of my friends are so deeply utilitarian.
00:17:26.740
And so narrowly focused on X-risk mitigation, but I feel that I'm always the odd one out.
00:17:34.220
Well, you know, I would love to get into a conversation with you about metaethics some
00:17:38.040
other time because I think your views about the limits of consequentialism would be fascinating
00:17:45.880
But I have so much I want to talk to you about with respect to X-risk and a few of your papers
00:17:51.860
that I think, well, let's just table that for another time.
00:17:55.660
In fact, I don't even think we're going to be able to cover your book, Superintelligence.
00:17:59.960
I mean, maybe if we have a little time at the end, we'll touch it.
00:18:02.100
But I should just want to say that this book was incredibly influential on many of us in
00:18:08.220
arguing the case for there being a potential existential risk with respect to the development
00:18:14.760
of artificial intelligence and artificial general intelligence in particular.
00:18:19.780
And so, you know, the reason why I wouldn't cover this with you
00:18:23.700
for the entirety of this conversation is I've had several conversations on my podcast that have focused on it.
00:18:32.940
I mean, I've had Stuart Russell on, I've had Eliezer Yudkowsky on.
00:18:36.280
And basically, every time I talk about AI, I consider what I say to be, you know, fairly derivative of your views.
00:18:46.680
So my audience will be familiar with your views on AI, even if they're not familiar with
00:18:54.440
But what I really want to talk about are a few of your papers.
00:19:00.980
Maybe I'll just name the papers here that I hope we'll cover.
00:19:03.840
The first is The Vulnerable World Hypothesis, the second is Are You Living in a Computer Simulation?
00:19:11.500
And the third is your analysis of the Fermi problem, asking where is the rest of the intelligent life in the galaxy?
00:19:20.100
Let's start with the Vulnerable World Hypothesis.
00:19:25.980
Well, the hypothesis is, roughly speaking, that there is some level of technological development
00:19:31.660
at which the world gets destroyed by default, as it were.
00:19:36.620
So then, what does it mean to get destroyed by default?
00:19:39.940
I define something I call the semi-anarchic default condition, which is a condition in
00:19:45.820
which there are a wide range of different actors with a wide range of different human motivations.
00:19:51.640
But then, more importantly, two conditions hold.
00:19:56.020
One is that there is no very reliable way of resolving global coordination problems.
00:20:01.020
And the other is that we don't have an extremely reliable way of preventing individuals
00:20:06.560
from committing actions that are extremely strongly disapproved of by a great majority of other people.
00:20:14.160
Maybe it's better to come at it through a metaphor.
00:20:19.520
So you can kind of think of the history of technological discovery as the process of pulling balls out of a giant urn.
00:20:38.180
And we've extracted throughout history a great many of these balls.
00:20:42.600
And the net effect of this has been hugely beneficial, I would say.
00:20:47.120
And this is why we now sit in our air-conditioned offices and struggle not to eat too much rather
00:20:55.420
than to try to get enough to eat in large parts of the world.
00:20:58.200
But what if in this urn there is a black ball in there somewhere?
00:21:05.140
Is there some possible technology that could be such that whichever civilization discovers it, that civilization gets destroyed?
00:21:16.720
So in your paper, you refer to this as the urn of inventions.
00:21:20.180
And we have been, as you say, pulling balls out as quickly as we can get our hands on them.
00:21:26.620
And on some level, the scientific ethos is really just a matter of pulling balls out as fast as you can
00:21:33.320
and making sure that everybody knows about them.
00:21:39.500
And we have pulled out, thus far, only white or gray balls.
00:21:44.700
And the white balls are the ones, or the technologies, or the memes, or the norms,
00:21:48.920
or the social institutions that just have good consequences.
00:21:52.740
And the gray ones are norms and memes and institutions and, in most cases,
00:22:00.120
technology that has mixed results or that can be used for good or for ill.
00:22:05.420
And, you know, nuclear energy is a classic case where we can power our cities with it,
00:22:11.440
but we also produce fantastic amounts of pollution that's difficult to deal with.
00:22:19.440
So I just want to give a little more context to this analogy.
00:22:22.080
Yeah, and I guess most technologies are, in some sense, double-edged, but maybe the positive predominate.
00:22:29.140
I think there might be some technologies that are mainly negative if you think of, I don't know, nerve gases or other tools.
00:22:36.320
But what we haven't so far done is extract a black ball, right?
00:22:41.360
One that is so harmful that it destroys the civilization that discovers it.
00:22:46.160
And what if there is such a black ball in the urn, though?
00:22:50.420
I mean, we can ask about how likely that is to be the case.
00:22:54.040
We can also look at what is our current strategy with respect to this possibility.
00:22:59.160
And it seems to me that currently our strategy with respect to the possibility that the urn might contain a black ball is simply to hope that it doesn't.
00:23:07.780
And so we keep extracting balls as fast as we can.
00:23:10.880
We have become quite good at that, but we have no ability to put balls back into the urn.
00:23:18.020
So the first part of this paper tries to identify what are the types of ways in which the world could be vulnerable,
00:23:29.620
the types of ways in which there could be some possible black ball technology that we might invent.
00:23:34.440
And the first and most obvious type of way the world could be vulnerable is if there is some technology that greatly empowers individuals to cause sufficiently large quantities of destruction.
00:23:48.020
We motivate this, or illustrate it, by means of a historical counterfactual.
00:23:54.520
We, in the last century, discovered how to split the atom and release some of the energy that's contained within the nucleus.
00:24:06.400
And it turned out that this is quite difficult to do.
00:24:13.760
So really only states can do this kind of stuff to produce nuclear weapons.
00:24:19.500
But what if it had turned out that there had been an easier way to release the energy of the atom?
00:24:24.220
What if you could have made a nuclear bomb by baking sand in the microwave oven or something like that?
00:24:30.880
So then that might well have been the end of human civilization in that it's hard to see how you could have cities, let us say,
00:24:38.120
if anybody who wanted to could destroy millions of people.
00:24:45.040
Now we know, of course, that it is physically impossible to create an atomic detonation by baking sand in the microwave oven.
00:24:53.120
But before you actually did the relevant nuclear physics, how could you possibly have known how it would turn out?
00:24:57.860
Well, let's just spell that out, because I want to conserve everyone's intuitions as we go on this harrowing ride to your terminus here.
00:25:07.420
Because the punchline of this paper is fairly startling when you get to what the remedies are.
00:25:13.800
So why is it that civilization could not endure the prospect of what you call easy nukes?
00:25:23.920
If it were that easy to create a Hiroshima-level blast or beyond,
00:25:30.160
why is it just a foregone conclusion that that would mean the end of cities and perhaps the end of most things we recognize?
00:25:39.100
I think foregone conclusion is maybe a little too strong.
00:25:41.760
It depends a little bit on the exact parameters we plug in.
00:25:45.780
And the intuition is that in a large enough population of people,
00:25:50.380
like amongst every population with millions of people,
00:25:53.980
there will always be a few people who, for whatever reason,
00:25:57.880
would like to kill a million people or more if they could.
00:26:01.820
Whether they are just crazy or evil or they have some weird ideological doctrine
00:26:08.600
or they're trying to extort other people or threaten other people.
00:26:13.520
It's just that humans are very diverse, and in a large enough set of people,
00:26:18.280
for practically any desire you can specify, there will be somebody in there that has it.
00:26:23.320
So if each of those destructively inclined people would be able to cause a sufficient amount of destruction, then civilization would get destroyed.
00:26:31.120
Now, if one has in mind this actually playing out in history,
00:26:37.480
then to tell whether all of civilization really would get destroyed
00:26:41.800
or some horrible catastrophe short of that would happen instead, one would have to look at the details.
00:26:48.760
Would it be like a small kind of Hiroshima type of thing or a thermonuclear bomb?
00:26:54.400
Could literally anybody do it like in five minutes?
00:26:56.640
Or would it take some engineer working for half a year?
00:27:01.320
And so depending on exactly what values you pick for those and some other variables,
00:27:06.640
you might get scenarios ranging from very bad to kind of existential catastrophe.
00:27:13.540
But the point is just to illustrate that there historically have been these technological transitions
00:27:20.560
where we have been lucky in that the destructive capabilities we discovered were hard to wield.
00:27:29.040
And maybe a plausible way in which this kind of very highly destructive capability
00:27:34.960
could become easy to wield in the future would be through developments in biotechnology
00:27:39.700
that maybe makes it easy to create designer viruses and so forth
00:27:43.480
that don't require high amounts of energy or special difficult materials and so forth.
00:27:50.560
And there you might have an even stronger case.
00:27:52.340
So with a nuclear weapon, one nuclear weapon can only destroy one city, right?
00:27:57.120
Whereas viruses and such can potentially spread.
00:28:01.080
So yeah, and we should remind people that we're in an environment now
00:28:05.380
where people talk with some degree of flippancy about the prospect of every household
00:28:14.740
one day having something like a desktop printer that can print DNA sequences, right?
00:28:20.160
That everyone becomes their own bespoke molecular biologist and you can just print your own medicine at home
00:28:27.880
or your own genetic intervention at home. And under those conditions, you know,
00:28:34.940
the recipe to weaponize the 1918 flu could just be sent to you like a PDF.
00:28:41.360
It's not beyond the bounds of plausible sci-fi that we could be in a condition where it really would be
00:28:49.240
within the power of one nihilistic or, you know, otherwise ideological person
00:28:54.480
to destroy the lives of millions and even billions in the worst case.
00:28:59.080
Yeah, or rather than being sent a PDF, you could just download it from the internet.
00:29:02.800
The full genomes of a number of highly virulent organisms are in the public domain already.
00:29:11.520
So yeah, so I mean, we could talk more about that.
00:29:14.860
I think that I would rather see a future where DNA synthesis was a service provided by a few
00:29:19.940
places in the world, where it would be possible, if the need arose, to exert some control, some
00:29:25.280
screening, rather than something where every lab needs to have its own separate little machine.
00:29:30.740
So these are examples of type one vulnerability, where the problem really
00:29:36.380
arises from individuals becoming too empowered in their ability to create massive amounts of destruction.
00:29:43.960
Now, so there are other ways in which the world could be vulnerable that are slightly more
00:29:48.800
subtle, but I think also worth bearing in mind.
00:29:51.760
So these have to do more with the way that technological developments could change the incentives facing powerful actors.
00:30:01.200
We can again return to the nuclear history case for an illustration of this.
00:30:07.720
And actually, this is maybe the closest to a black ball we've gotten so far with thermonuclear
00:30:13.620
weapons and the big arms race during the Cold War led to something like 70,000 warheads
00:30:24.720
Like, we can now see some of the archives of this history that have recently opened up.
00:30:32.860
The world actually came quite close to the brink on several occasions.
00:30:36.200
And we might have been quite lucky to get through.
00:30:38.680
It might not have been that we were in such a stable situation.
00:30:42.660
It rather might have been that this was a kind of slightly black ball-ish technology and we just got lucky.
00:30:49.640
But you could imagine it could have been worse.
00:30:52.560
You could imagine properties of this technology that would have created stronger incentives,
00:30:56.280
say, for a first strike so that you would have crisis instability.
00:31:01.420
If it had been easier, let us say, in a first strike to take out all the adversary's
00:31:05.600
nuclear weapons, then it might not have taken a lot in a crisis situation to just have
00:31:14.300
enough fear that you would have to strike first or fear that the adversary otherwise would strike first.
00:31:20.980
Remind people that in the aftermath of the Cuban Missile Crisis, the people who were closest
00:31:26.000
to the action felt that the odds of an exchange had been something like a coin toss.
00:31:33.780
And what you're envisioning is a situation where what you describe as safe first strike, which
00:31:39.900
is, you know, there's just no reasonable fear that you're not going to be able to annihilate the adversary's arsenal.
00:31:49.600
And it's also often forgotten that the status quo of mutually assured destruction was actually stabilizing.
00:31:57.920
I mean, before the Russians, or the Soviets, had their own arsenal, there
00:32:03.940
was a greater, you know, game-theoretic concern that we would be more tempted to use ours because they couldn't retaliate.
00:32:14.680
So some degree of stabilizing influence, although, of course, maybe at the expense of, you know,
00:32:21.100
If you'd like to continue listening to this conversation, you'll need to subscribe at
00:32:27.920
Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along with
00:32:32.600
other subscriber-only content, including bonus episodes and AMAs and the conversations I've
00:32:40.080
The Making Sense podcast is ad-free and relies entirely on listener support.