Special Episode: Recipes for Future Plagues
Episode Stats
Length
1 hour and 57 minutes
Words per Minute
170.4561
Summary
MIT biologist and evolutionary engineer Kevin Esvelt joins Rob Reid for a special crossover episode of Making Sense and the After On podcast about the dangers posed by synthetic biology and genetic engineering. They discuss directed evolution, CRISPR gene drives and why they favor defense, the lessons of nuclear weapons for information and attention hazards, and the decision to post the genome of the resurrected 1918 influenza virus on the open internet. The conversation then turns to DEEP VZN (pronounced "Deep Vision"), a new virus-hunting program run by USAID, the agency that distributes most American foreign aid. Kevin argues that by finding, characterizing, and publishing the genomes of previously unknown pandemic-capable viruses, this well-intentioned program could inadvertently hand assembly instructions for weapons of mass destruction to anyone who wants them. The episode closes with a call to action aimed at getting USAID's attention and redirecting the program before it switches on.
Transcript
00:00:00.000
Welcome to the Making Sense Podcast. This is Sam Harris.
00:00:25.180
Well, Russia has invaded Ukraine, so we have the first major land war in Europe in decades.
00:00:38.580
So, that seems like a very big deal. It certainly deserves its own podcast at some point. I think
00:00:45.780
I'll wait to see how things evolve for a little while here. It remains to be seen how bad this
00:00:51.800
war will be and what else might happen as a result. So, I will reserve comment at this point,
00:01:00.680
apart from echoing the nearly universal sentiment that Putin's actions are despicable,
00:01:08.560
as is the support for him that came dribbling out of the mouth of our former president.
00:01:14.760
Anyway, as chance would have it, the topic of today's podcast is also scary. This is another
00:01:22.240
PSA, and in some sense it's a continuation of the podcast that my friend Rob Reid did for me
00:01:28.220
in April of last year. That episode was titled Engineering the Apocalypse, and it was a four-hour
00:01:37.040
examination of the threat that is posed to all of us by developments in synthetic biology.
00:01:45.380
In recent weeks, Rob discovered a specific threat along these lines that seems fairly imminent,
00:01:52.880
and he's tapped Kevin Esvelt to walk him through the problem. Kevin is a Harvard-trained biologist and
00:01:59.740
scientist, and he's credited as the first person to describe how CRISPR gene drives could be used
00:02:06.140
to alter the traits of wild populations in a way that was evolutionarily stable, and he is currently
00:02:12.860
a professor at MIT. As you'll hear, there's a call to action at the end of this episode, and the call
00:02:20.500
is to get the attention of USAID, which is currently running the program of virus hunting that poses such a
00:02:29.340
concern. Anyway, I won't say any more about this. Rob does an impeccable job at exploring the issue,
00:02:35.980
and I hope you will join me in making noise about it once you come to understand the nature of the problem.
00:02:52.060
Today's conversation will be an episode of two different shows, the After On podcast, which I host,
00:02:57.560
and Making Sense, which my podcasting colleague Sam Harris hosts. We're doing this because we both
00:03:03.440
find the subject extraordinarily important, and also timely enough that we want to get it out there
00:03:08.820
fast. And since I've done a bunch of research that's connected to the subject and also know the guest,
00:03:14.360
Sam and I thought the quickest thing would be for me to conduct the interview, and then for both of us
00:03:19.100
to distribute it. For Making Sense listeners who don't know me, my name is Rob Reid, and I'm a
00:03:24.720
venture capitalist turned tech entrepreneur turned science fiction author turned science podcaster
00:03:30.140
turned venture capitalist once again, one who still podcasts and scribbles a bit on the side.
00:03:36.840
My voice may be familiar because I was on Sam's show several months ago, when we spent almost four
00:03:41.880
hours examining how very awful the next pandemic could be, and how we can prevent that awful pandemic
00:03:48.540
if we get our act together. I spent about a thousand hours, literally, including over 20 scientific
00:03:54.320
interviews, researching those subjects, and our episode had a fairly unusual format as a result.
00:03:59.780
I'm not a scientist myself, but over the past five years, I've gotten pretty deep into pandemic-related
00:04:04.980
issues. It started with research for one of my novels, which later led to writing a few articles on the
00:04:10.600
subject, which led to several episodes of my own podcast, then appearances on other shows, and then to a
00:04:16.500
TED talk that I gave right before COVID, and then to that big episode with Sam, and then quite a bit more.
00:04:22.660
Which brings us to today, and my conversation with MIT's Kevin Esvelt, a highly regarded evolutionary
00:04:29.220
engineer, who I met in the wake of some congressional testimony that he made on a subject that interests us both.
00:04:36.200
Kevin's going to present something that will probably shock you, which you may find
00:04:40.520
hard or even impossible to believe. But before that, he's going to lay down a fairly deep foundation
00:04:46.700
of concepts, definitions, and a bit of history, which should give you a sophisticated basis for
00:04:52.420
deciding whether you buy the fairly shocking points he's going to make in the second half of our
00:04:57.500
conversation. Now, I think it'll be useful for you to know the full context while he's laying that
00:05:02.920
foundation. So here comes a spoiler, which is that Kevin is going to argue that a small, new,
00:05:07.840
and very well-intentioned U.S. government program could, completely inadvertently, cause the deaths
00:05:14.360
of millions or even hundreds of millions of people or even more, despite what Kevin is certain are the
00:05:21.620
entirely good intentions of the people behind it. The program is called DEEP VZN, which I believe is
00:05:27.720
pronounced Deep Vision, and it's part of USAID, the agency which distributes most American foreign aid.
00:05:33.960
Kevin believes Deep Vision is on a path to posting assembly instructions for what we can only call
00:05:40.220
weapons of mass destruction to the open internet for anyone to download. Specifically, the genomes of
00:05:47.320
previously unknown pandemic-grade viruses that we have no defenses against. Viruses that tens of
00:05:54.620
thousands of people in dozens of countries could easily build from scratch, as soon as they're given
00:06:00.280
the genetic code. Now, if this all sounds a bit bonkers to you, I get it. I struggle to believe my
00:06:06.820
own ears at first. But if you listen to Kevin, you'll quickly realize that this isn't some COVID-era
00:06:12.520
conspiracy theory, even if he doesn't ultimately persuade you that things are as dangerous as he
00:06:17.660
thinks. I'll add that I wouldn't have interviewed him for this if I thought he had even the faintest
00:06:23.460
partisan agenda. I actually have no idea what party, if any, Kevin affiliates with, and I couldn't care
00:06:29.460
less. Because every point I've heard him make on the subject has been rooted in science, not politics.
00:06:35.440
Now, one thing that made this story especially hard for me to credit at first is that it's coming out
00:06:41.380
of USAID, which is dedicated to economic development and human flourishing in poorer countries, and which
00:06:48.380
puts its resources into so many great projects. Now, like any program that's put out hundreds of
00:06:54.100
billions of dollars across many decades, USAID has had its share of blunders and scandals and lousy uses
00:07:00.380
of funds. But I admire the agency on the whole. And that's partly due to personal experience, because
00:07:06.160
right after college, I spent a year in Cairo on a Fulbright grant and met lots of USAID people.
00:07:11.740
And they were funding things like irrigation projects, schools, and technical assistance in places that
00:07:16.740
really needed them. And they were all smart, committed, and working for a fraction of what
00:07:21.560
they could have made in the private sector. On top of that, USAID is run by former U.S. ambassador to
00:07:27.940
the U.N., Samantha Power. I can't say that I know Samantha, but we have a fair number of people in common
00:07:34.120
and had a couple great conversations at the TED conference, and I know how smart and ethical she is.
00:07:39.660
Plus, she literally wrote the book on not killing millions of people. It was a searing denunciation
00:07:46.740
of genocide, which you may be thinking isn't the most controversial or risky position for an author
00:07:52.520
to stake out. But it won the Pulitzer Prize, so it was no puff piece. And there's zero chance that
00:07:59.140
Samantha would deliberately imperil millions of lives. All of this is to say that there are two sides of
00:08:05.580
this story, each of them held by very smart ethical people. And as you'll hear, the other side sincerely
00:08:11.780
sees Deep Vision as an invaluable weapon against pandemics. And I'm sure this is the side Samantha
00:08:18.220
hears from the most, because the program's creators work for her. And since Deep Vision is literally less
00:08:23.940
than a tenth of one percent of USAID's budget, and she basically inherited the program, this just isn't
00:08:30.260
something she manages directly. I should also point out that some of Deep Vision's plans could help us
00:08:36.300
fight pandemics. But after many hours of researching this, and talking to various people in academia,
00:08:42.900
philanthropy, security, and entrepreneurship, I believe that most of its agenda is appallingly
00:08:48.560
misguided. And I'm not going to hide that, because I'm not an actor, and it would be dishonest.
00:08:54.360
Now despite that, in the second half of the interview, I'm going to present all of the best arguments I've
00:08:59.440
heard or have thought of myself in favor of Deep Vision. Because this subject is way too important
00:09:05.040
for you not to hear the other side. I also want to convey why some very smart and ethical people
00:09:10.460
sincerely think this is a great idea, and show that thinking this doesn't mean that they're not
00:09:16.200
smart and ethical, because I'm sure that they are. Now, someone close to Deep Vision could surely do a
00:09:22.260
much better job of presenting this side. And in an ideal world, we might have structured this as a
00:09:28.160
debate between Kevin and a Deep Vision proponent. But I am an interviewer who has never once
00:09:34.440
moderated a debate. And though I could probably become a decent moderator with some experience
00:09:40.260
and guidance, this is happening right now. And we want to get the story out right now. Because the
00:09:47.320
program is so new, it's barely underway. In fact, it may not even have quite switched on yet. Which means
00:09:54.360
it should still be possible to alter its focus, or even redirect its entire budget to the countless
00:10:00.580
world-positive programs that USAID supports. And you may be able to help with this yourself,
00:10:07.280
as Kevin and I will discuss toward the end of our conversation. Which will start right now.
00:10:17.580
So Kevin, thanks so much for joining us today. And why don't we start out with a quick overview of
00:10:23.940
your professional background and what you spend most of your time doing in the lab these days?
00:10:29.280
Well, thanks so much for the invitation. I'm an evolutionary engineer and professor at MIT,
00:10:33.620
where my lab specializes in building synthetic ecosystems to what we call direct the evolution
00:10:39.200
of new molecular tools. And we also work a lot with communities to safely and controllably
00:10:44.560
engineer populations of wild organisms and associated ecosystems.
00:10:48.480
So we evolve cool tools in the lab. And then we talk a lot about whether, when, and how to use those
00:10:55.560
tools to change the environment. Evolution, of course, doesn't know anything. It's just a natural
00:11:00.760
process. And that means you can harness it to create things when you don't know how they work.
00:11:06.340
And that's true of a lot of biology, honestly. We still don't know the details of how it works,
00:11:10.740
of how to make things fold just the right way to cause something to happen within a cell.
00:11:15.720
One way to deal with this is to take something that does something reasonably close to what you want,
00:11:22.180
make a billion variants, throw them all at the wall and see which ones stick best.
00:11:26.520
Then take those, make a billion variants of those, throw those at the wall and see what sticks.
00:11:30.760
And do this over and over again until you get something that does exactly what you selected for.
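To make the mutate-and-select loop Kevin just described concrete, here is a minimal illustrative sketch in Python. The target sequence, population sizes, and mutation rates are all invented for illustration; this is a toy model of the idea, not his lab's actual pipeline.

```python
import random

TARGET = "GGCATTACGA"  # stand-in for "the molecular trick we want"; purely illustrative

def fitness(seq: str) -> int:
    """Score a variant by how many positions match the target."""
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq: str, rate: float = 0.1) -> str:
    """Return a copy of seq with random point mutations."""
    return "".join(random.choice("ACGT") if random.random() < rate else c for c in seq)

# Start from something "reasonably close to what you want".
population = ["GGCATAACGT"] * 1000   # a real run would use on the order of a billion variants

for generation in range(50):
    # Make lots of variants ("throw them all at the wall")...
    variants = [mutate(s) for s in population]
    # ...keep the ones that "stick best"...
    variants.sort(key=fitness, reverse=True)
    survivors = variants[:100]
    # ...and let the winners seed the next round, over and over again.
    population = [mutate(s, rate=0.05) for s in survivors for _ in range(10)]

best = max(population, key=fitness)
print(best, fitness(best))
```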
00:11:35.620
And you do some related work with phages in your lab.
00:11:39.780
Yeah. So since we're interested in applying evolution to create molecular tools, it's useful to
00:11:45.780
work with the fastest evolving things in nature, which tend to be bacteriophages,
00:11:51.820
viruses that infect bacteria. And these viruses can replicate once every 10 minutes.
00:11:59.100
So that's a really short generation time. So what we do is we engineer the bacteria
00:12:04.780
so that the phage only gets to reproduce if it does the molecular trick that we want to see.
00:12:11.780
So we put a gene we want to evolve or multiple genes onto the genome of the virus. And we take away the
00:12:18.240
pieces that it needs to reproduce. We move those into the host cell. So the host cell will produce
00:12:23.460
this critical factor for virus replication in proportion to how well the virus manages to perform
00:12:30.940
the molecular trick. So that way, all of these populations of a billion viruses compete with one
00:12:35.840
another, creating mutations in those genes that we put on there. And the ones that perform the
00:12:42.080
molecular trick best produce the most offspring. And of course, this is a continuous process,
00:12:46.780
generation after generation, for hundreds of generations in the lab until it spits out the
00:12:52.560
thing that we want to see, all to create a better platform for making useful and hopefully safe biotech
00:12:58.680
tools. Now, just a couple clarifying questions. First of all, phages, they are viruses, but they're
00:13:05.680
minuscule and they can only infect bacteria. There's no way a phage is going to infect a human and
00:13:12.620
inflict disease. Is that right? That's right. Their machinery just would not function within
00:13:17.420
our cells. They're optimized for bacteria, very different context. Got it. And to edge toward what
00:13:23.180
I understand to be the practical application of this is you can train this ecosystem that you've put
00:13:27.580
together to create a very good version of a complex thing that you want with all these billions and
00:13:34.020
billions of shots on goal. And then having done that, you can make some useful product. In your case,
00:13:40.800
it's generally for pretty sophisticated biological applications, right? Like big pharma, biotechs,
00:13:46.720
other labs. For the most part, yeah, a platform for creating useful molecular tools can really
00:13:52.540
accelerate biotech research because there's no way to have a big impact like accelerating everybody
00:13:58.680
else's work and empowering others. So you've been doing this for quite a while, lots of experiments.
00:14:03.920
These things replicate every 10 minutes. You've got robotic tools accelerating it. How many genetically
00:14:09.880
distinct phages do you estimate you've produced in your lab since you started doing this work?
00:14:17.460
Genetically distinct is a hard question because we do crank up the mutation rate very high,
00:14:21.780
so they evolve much more quickly. Probably somewhere between 10 to the 13th,
00:14:26.180
maybe even 10 to the 15th at most. So that's a quadrillion.
00:14:30.340
I believe that would be, to contextualize it, the number of stars in roughly 5,000 to 10,000
00:14:37.040
galaxies the size of the Milky Way. So that is a lot. Now, you are often described as the inventor of
00:14:47.060
gene drive. So gene drive is a naturally occurring phenomenon that originated maybe a billion years
00:14:53.780
ago. And it happened when some genes realized that they could replicate more often if they changed the
00:15:00.260
odds that they'd get inherited by the offspring. So one way they do this, in organisms that have sex,
00:15:06.280
so they inherit one copy from the mother and one copy from the father, a gene can cut the chromosome
00:15:12.420
that doesn't have it, which causes the cell to copy the gene over as it repairs the break. And that means that instead of half
00:15:18.820
the offspring inheriting the particular gene drive system, all of them will. That's a huge fitness
00:15:24.600
advantage. You're going from half of offspring inheriting you to all of them, at least for the
00:15:28.360
case of heterozygotes that have one copy of each. And so gene drive systems can just sweep through
00:15:34.480
populations of sexually reproducing organisms very, very quickly.
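A toy calculation shows why a drive sweeps. The sketch below assumes perfect conversion in heterozygotes and no fitness cost, both idealizations, and simply iterates the inheritance logic Kevin describes: a heterozygote passes the drive to all of its offspring rather than half.

```python
# Toy model: frequency of a gene drive allele under random mating, assuming
# perfect conversion in heterozygotes and no fitness cost (idealizations).

def next_frequency(p: float) -> float:
    # Drive homozygotes (p^2) and converted heterozygotes (2p(1-p)) all transmit
    # the drive, so p' = p^2 + 2p(1-p) = 1 - (1-p)^2: roughly doubling while p is small.
    return 1 - (1 - p) ** 2

p = 0.001  # illustrative starting frequency: rare, as if just released
for generation in range(1, 16):
    p = next_frequency(p)
    print(f"generation {generation:2d}: drive allele frequency = {p:.4f}")

# In this idealized model the drive is essentially fixed after roughly a dozen
# generations, which is the "sweep" -- and it can never spread faster than
# doubling per generation, the ceiling Kevin mentions later on.
```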
00:15:38.120
Just to clarify, that means that the trait that this particular gene confers can go from rare or
00:15:45.500
even one-off because it was a mutation to saturating the population in a certain number of generations.
00:15:52.600
The trait changes. We don't end up getting clones of the individual.
00:15:57.300
That's right. And Austin Burt, who is one of the earliest gene drive researchers in the modern era
00:16:03.120
at Imperial College London, first proposed that we use these things by engineering genes to cut
00:16:09.480
sequences that we want in order to change the population. The problem is we really didn't have
00:16:15.500
the tools to make this happen. Well, I played a minor role in developing CRISPR, which is a genome
00:16:21.840
editing tool that is basically a set of molecular scissors that can be programmed to cut whatever sequence
00:16:26.940
we want. A few months after we had been one of the first groups to show that CRISPR worked in
00:16:32.540
mammalian cells, I was looking around outdoors at a bunch of ducks and I saw a turtle that day in the
00:16:38.640
water. And I was thinking, hey, are we ever going to edit wild organisms with CRISPR? And I thought,
00:16:43.300
probably not, at least not effectively, because when we edit an organism, we're changing it to do what
00:16:49.180
we want, which means we're diverting its resources away from survival and replication in nature.
00:16:54.000
And that means natural selection is going to act to remove whatever changes we make.
00:16:58.980
But then I thought, well, wait a minute. What if instead of just using CRISPR to edit the genome,
00:17:04.200
what if we use it to insert our engineered trait and we also encode the CRISPR system that can make
00:17:11.040
that change? Then you get recursive genome editing. That means in principle, we could build a system that
00:17:18.000
would spread the engineered trait throughout the whole population, even if the trait was harmful.
00:17:23.180
And because I'd read the literature, as soon as I thought of this, I thought, wait a minute,
00:17:27.160
that's a gene drive. Didn't I see a paper where someone had said we should use genes like the
00:17:31.960
one in yeast to engineer populations? And maybe we could take out, say, malaria. What if we crashed
00:17:37.360
populations of malarial mosquitoes? We might be able to help eradicate malaria that way.
00:17:42.880
With CRISPR, it's so versatile. You could cut whatever gene you needed to at whatever site you needed
00:17:48.620
to. And you could even do it in multiple places, which could make it evolutionarily stable or fairly
00:17:52.760
so. So it was tremendously exciting thinking we might be able to help get rid of malaria and
00:17:58.960
Okay. So you had this idea that was obviously very, very novel and exciting on many levels,
00:18:08.280
Well, the first day I admit was pure euphoria, thinking up all the applications. Because I mean,
00:18:12.780
malaria is an exceptionally horrible disease. Kills nearly half a million people every year,
00:18:18.500
most of them kids under five. And there's things like schistosomiasis, which cognitively stunt
00:18:23.360
tens of millions. So I spent the first day kind of euphoric and of course doing research to see
00:18:29.320
whether it would actually work. But the next day I woke up in a cold sweat because I was thinking,
00:18:33.780
wait a minute, just about anyone who can edit the genome of a sexually reproducing species with
00:18:39.640
CRISPR can make a gene drive system to spread their change through the population. Well, what about
00:18:45.900
misuse? Sure, you can engineer a mosquito so it can't carry a disease. Could you engineer a mosquito
00:18:51.900
so it would always carry a disease? How much of a problem would misuse be? And it seemed rather
00:18:59.980
frightening because you could imagine people weaponizing the natural world. So I spent quite
00:19:05.900
a long time before I even told my academic advisor at the time about this idea, because I wanted to
00:19:12.040
learn whether or not that was a likely outcome. And fortunately, I concluded that it's not. Gene
00:19:17.880
drive spreads only vertically, parents to offspring. So that means it's fairly slow. It can't do more than
00:19:23.780
double every generation at absolute most in terms of frequency in the population. It's fairly obvious
00:19:29.540
in that it works in sexually reproducing organisms, which don't have CRISPR in nature. So this is a
00:19:35.760
signature that can't be hidden. It will be present in all gene drive systems. And that means you can
00:19:40.700
always find it if it's there. And most importantly, if you see a gene drive that you don't like,
00:19:47.420
it's trivial to reliably build the counter. You can add some extra instructions for CRISPR telling it to
00:19:54.640
cut the original rogue version. And your version, what we call an immunizing reversal drive,
00:20:00.100
could still spread through the wild population, just like the rogue. But whenever it encounters the
00:20:04.880
rogue, the immunizing reversal drive will cut the rogue and replace it with itself.
00:20:09.620
So there is a reliable off switch. If somebody does something bad that imperils human society or the
00:20:15.460
ecosystem, this immunizing reversal drive is a reliable off switch. And did you think through and
00:20:22.700
basically kind of map out the immunizing reversal drive before you told the world about the gene drive?
00:20:29.080
Long before, yes. And in fact, before I even told my advisor, George Church.
00:20:35.400
So you did not let this new idea leak into the world until you knew you had an off switch.
00:20:42.440
Well, I mean, an off switch is perhaps a little much. Just because you can reliably overwrite the
00:20:47.460
unwanted change doesn't mean that it wouldn't have made changes in the ecosystem before you managed
00:20:53.200
to do that. But anything that is slow, obvious if you look and easily blocked is not a terribly
00:20:59.960
great threat. And so with that understanding, I then approached George and said, I think there's a
00:21:05.720
lot of good that we could do with this, especially if we do it openly and transparently. And I don't think
00:21:10.600
there's much potential for misuse for this reasoning. What do you think?
00:21:14.000
And when you first told me this story, you said the words that we're very lucky that gene drives,
00:21:20.560
quote, favor defense. What exactly does that mean?
00:21:25.580
Well, anytime you have a technology that is accessible to many people, and gene drive isn't
00:21:31.180
that accessible. There aren't that many labs that use CRISPR to edit the genomes of organisms other than
00:21:36.540
the current big ones, which are fruit flies, worms, and mice. If there's a lot of people who can
00:21:43.500
use the technology, then there's a lot of people who could misuse the technology. And so favoring
00:21:49.480
defense just means that if someone does misuse it, everyone else can prevent it from causing much
00:21:55.400
harm. Anything that's slow, obvious, and easily blocked isn't much of a weapon. But things that are
00:22:01.960
fast or are subtle or are unblockable are another story. And the most obvious example of something that
00:22:09.820
can't easily be countered is a nuke, right? A nuclear warhead on an ICBM is something that
00:22:15.320
we haven't figured out how to counter, despite many decades of wishing that we could. The way we
00:22:20.860
have to deal with that threat is by minimizing access and deterring those who can actually wield
00:22:29.240
that weapon. Because if they actually use it, there's nothing we can do.
00:22:33.720
On the subject of nukes, this could be a good time to define the terms information hazard and
00:22:41.080
attention hazard, because we'll probably use them later in the conversation.
00:22:45.700
Yeah. So the history of nuclear weapons, it's remarkable how they were originally developed
00:22:51.520
because researchers were afraid of the consequences if malicious people got a hold of them. Specifically,
00:22:58.600
Nazi Germany was the concern of Leo Szilard and Albert Einstein when they wrote the letter that
00:23:04.180
launched the Manhattan Project. But what's not so well known is that once Germany surrendered in World
00:23:10.460
War II, Szilard launched a petition within the top secret Manhattan Project in which he argued that
00:23:17.680
the United States should not use the bomb on Japan. Not because it wouldn't save many American lives by
00:23:24.160
preventing an invasion, but because doing so would call attention to the power of nuclear weaponry.
00:23:32.000
He pointed out that at the time, the notion that any adversary could militarily threaten a
00:23:37.300
mainland American city, like Los Angeles or New York, was just laughable. There is no way that any adversary
00:23:43.560
could threaten the continental United States. But if you show the world that there is a single bomb
00:23:50.040
that can destroy the bulk of the city, that would change. You would be advertising the existence of
00:23:56.500
this kind of weapon and thereby incentivizing other countries to gain access to it, which would then
00:24:02.000
imperil your own cities. So he said, if you use the bomb on Japan, the Soviet Union will get it that
00:24:07.560
much more quickly and probably other nations as well. And you will make the United States less secure.
00:24:13.060
And so he spent the rest of his career advocating for nuclear non-proliferation,
00:24:16.880
pointing out that the fewer actors who had the ability to wield this kind of power, the safer humanity would be.
00:24:23.700
And so in that sense, Hiroshima was an attention hazard. It informed the world that this is possible.
00:24:32.100
This thing you might not otherwise have bothered with because it's going to cost billions of dollars to
00:24:37.340
develop and distract a lot of your top scientists without that attention to the fact that this is possible.
00:24:44.160
That's right. So there's different kinds of information hazards. One is like the blueprints.
00:24:48.920
So in a way, using the bomb on Japan, the isotopic signature told everyone what the Americans had.
00:24:55.000
So without the blueprints of the bomb, they could look at the radioactive signature of Hiroshima and infer a great deal about how it was made?
00:25:01.700
Yeah, that's a conceptual information hazard. But the big one is, as you said, an attention hazard. It says,
00:25:06.760
this is a weapon powerful enough to destroy cities that could determine the future course of warfare.
00:25:11.440
That's a giant neon blinking sign in the sky to anyone who wants to acquire power,
00:25:17.220
that they need to acquire this kind of weapon in order to matter in the future.
00:25:21.920
Now, with all the background of the last several minutes, I'd like to now jump topics pretty
00:25:27.060
significantly to the Spanish flu, which was quite literally extinct for 80 years. But then that changed.
00:25:35.660
Well, some influenza researchers at the CDC were concerned that something like 1918 influenza might
00:25:44.420
come again. So they wanted to know, well, what about it was so bad? Because no other flu strain has
00:25:51.000
killed anywhere near as many people. So they went to museum specimens that had samples from victims of
00:25:58.220
the 1918 pandemic. And they also got samples from someone who had died of the flu and been buried in
00:26:05.880
the permafrost in Alaska. And remarkably, they were able to extract the RNA genome of the virus from
00:26:12.540
these samples and sequence it, and thereby learn how to make the virus using recently discovered
00:26:19.980
techniques for virus assembly. So they resurrected this extinct deadly virus.
00:26:26.340
So that was a well-oiled machine that came back into existence after having vanished
00:26:33.160
into obscurity. I'm sure it was a very seductive idea scientifically, but was that a reasonable thing to do?
00:26:42.700
Well, there's arguments back and forth as to whether or not it was wise. And there's good
00:26:47.560
arguments on both sides. But what bothers me from a security standpoint is that they posted the
00:26:53.540
genome sequence online. And it's there for anybody to download, as I know, because I was able to find
00:26:59.860
it very quickly via Google myself, which means literally anybody in the world can get this thing.
00:27:05.800
Now, I'm pretty sure I know how you're going to answer this. But what is bad about this genome being
00:27:11.460
online? Well, posting the genome sequence of a virus online, in this case, gave exact blueprints
00:27:18.360
for a pathogen that once killed maybe as much as one in 30 living humans to anyone, including
00:27:25.060
lone terrorists and bioweapons programs who have the skills to assemble it, which doesn't seem like
00:27:31.360
a great idea. Important qualifier. We should be grateful that it's pretty unlikely that releasing
00:27:37.860
that virus again today would cause such a deadly pandemic. And the reason is that virus has been
00:27:43.380
called the mother of all pandemics. That is to say, most modern flu strains are much less deadly
00:27:49.180
descendants. But because most of us have been exposed to at least one of those, we tend to have
00:27:54.980
immunity to that kind of flu virus, especially those of us who caught the flu during the 2009 pandemic,
00:28:01.360
which was caused by the same subtype of virus. Second, we do have antiviral drugs that work against it,
00:28:08.820
even though we probably can't produce enough of those quickly enough to really matter in a fast
00:28:13.560
moving pandemic. But what we do have are antibiotics. And there's some pretty good evidence from autopsies
00:28:21.660
and going through the records of autopsies from 1918, that it killed people at least as much by causing
00:28:28.640
secondary bacterial infections of the lungs as it did directly causing damage, that is the virus itself.
00:28:35.020
And we definitely have enough antibiotics to dose the entire world population already available.
00:28:42.100
Now, this is obviously completely impossible to know precisely. But taking those mitigating factors
00:28:47.860
into account, what would you estimate, back of the envelope, the plausible range of a death toll would be if it were released today?
00:28:57.700
I'm not so sure of the lower window just because, you know, it's unlikely to take off at all. But if you
00:29:02.140
presume that it does take off, then I would probably say somewhere between 200,000 and 10 million people.
00:29:11.340
And that's just because it would probably be worse than most modern day strains. But it's not going to
00:29:18.620
be nearly as bad as it was back then. So I think that's probably a reasonable 10-to-90-percent confidence
00:29:23.840
interval. But honestly, I haven't sat down and thought about it for 20 minutes. Of course, on the other hand,
00:29:28.720
just because it's highly unlikely doesn't mean it's no chance. And when there's a non-trivial chance of
00:29:35.020
millions or even tens of millions of people dying, well, some discretion would seem prudent.
00:29:41.300
Now, I've actually known for years that the genome is freely available online, but I actually
00:29:46.060
only learned that the CDC put it up there pretty recently. And at the time, the only explanation I
00:29:51.480
could imagine for that was that it was so impossible for anyone to assemble a virus from
00:29:57.760
scratch back then in 2005 that they just failed to realize how soon the publication of this genome
00:30:04.320
would enable labs in dozens of countries to basically restart the engine of one of history's
00:30:09.560
worst pandemics. And that was a scary enough idea to live with. But then a couple weeks ago,
00:30:15.320
you came along and told me that they actually did realize it would soon become possible to
00:30:21.360
synthesize it, and they were fine with it. And since this makes my head spin, I'd love it if you
00:30:26.980
could give us a quick sense of where the science of virus synthesis was in 2005 when they posted this.
00:30:35.000
Well, the first instructions, the first detailed protocol for assembling influenza viruses
00:30:39.980
was published in 1999. And the first virus was synthesized from synthetic DNA in 2002. And that
00:30:50.520
was poliovirus, which is probably comparable to, or maybe slightly easier to assemble than, influenza.
00:30:58.700
How long after the publication of the genome would you say it became scientifically feasible to
00:31:05.540
synthesize it? Immediately. That is to say, probably a good hundred or so people had the skills at the
00:31:12.840
time to follow that protocol, maybe a few hundred people, to use the protocol to assemble 1918 influenza
00:31:19.000
after publication of the genome sequence. What they may not have anticipated was just how cheap
00:31:24.540
synthetic DNA would become. Because in the last 20 years, the cost of assembled DNA, that is assembled
00:31:32.200
into large gene-length fragments like you would need in order to boot up the virus. The cost has
00:31:37.300
fallen by a factor of a thousand. And while presumably they knew that a good couple hundred people could
00:31:43.820
assemble it immediately, they may not have understood how that number would grow. Because most of the folks
00:31:49.820
involved were virologists, they weren't involved in biotech, and they weren't familiar with exponential
00:31:54.160
technologies. Okay, so we've gone from a hundred-ish labs that could create this more or less
00:31:58.900
immediately after it was published. What would you say that number has grown to today?
00:32:04.380
I'd estimate somewhere between 20,000 and 50,000 people.
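As a rough illustration of the two exponential trends just quoted, here is the arithmetic spelled out in Python. The input figures are the ones mentioned in this conversation; the annualized rates are derived arithmetic rather than numbers from the episode, and the 2005 headcount is an assumed midpoint of "a good hundred or so, maybe a few hundred."

```python
# Back-of-the-envelope on the exponentials quoted above. Inputs are the figures
# from the conversation; the annualized rates are derived, illustrative arithmetic.

cost_drop_factor = 1000        # gene-length synthetic DNA roughly 1000x cheaper...
cost_drop_years = 20           # ...over about the last 20 years
annual_cost_factor = cost_drop_factor ** (1 / cost_drop_years)
print(f"DNA cost falls roughly {1 - 1 / annual_cost_factor:.0%} per year")        # ~29%

people_2005 = 200              # assumed midpoint of the 2005 estimate
people_now = 30_000            # midpoint of the 20,000-50,000 estimate today
years_elapsed = 17             # 2005 to roughly 2022
annual_growth = (people_now / people_2005) ** (1 / years_elapsed)
print(f"pool of capable people grows roughly {annual_growth - 1:.0%} per year")   # ~34%
```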
00:32:10.580
Which is pretty alarming, even given that it's unlikely that it would actually cause a pandemic.
00:32:15.560
So is it fair to say that we're kind of relying on the goodwill of half a stadium to a full stadium's worth of people?
00:32:27.000
I mean, frankly, most people aren't that evil. And anyone who is that malicious, if there's only a
00:32:32.080
small chance that it might actually take off, then why would they bother? But you can do the math and
00:32:37.900
say it still looks like a pretty bad bet given the expected death toll, even if you assign a very small
00:32:43.380
chance that someone would, and a small chance that it would actually cause a pandemic. The numbers don't
00:32:47.920
look good when you calculate it from that perspective, especially because it's going to remain
00:32:52.720
accessible to future generations. That is, in five years, there's going to be even more people who
00:32:57.580
could do it. And in 10 years, quite a bit more. The number is only going to grow.
00:33:02.740
Okay, even though 1918 flu would probably be less destructive than many of us would fear if it was
00:33:10.540
released today, the decision to put it online will probably boggle most people's intuitions. It's
00:33:18.080
certainly boggled my own. But the people at the CDC are really smart, and by definition, they're hugely
00:33:23.620
concerned about public health. So what benefits might they have thought would accrue from publishing this?
00:33:31.120
And what perspective might they have that I lack that would cause them to prioritize those benefits over
00:33:38.220
the fairly obvious downsides? There is a mindset difference, which says something along the lines
00:33:44.320
of, if we understand how the world works, then we can come up with better cures and treatments and
00:33:50.760
interventions that we couldn't necessarily predict in the absence of that knowledge. That is another way
00:33:56.480
to say it is, you got to do the basic blue sky research without being able to point to a particular
00:34:01.360
benefit, or you will, of necessity, lose access to all the benefits that you couldn't see before you did the
00:34:07.620
research. That really is what drives folks in these fields, the belief that we can come up with
00:34:15.320
innovations that will make life sufficiently better to be worth the risk. It's important to point out
00:34:21.480
that the scientific benefits that did accrue from resurrecting the 1918 influenza virus were largely
00:34:28.960
accomplished by a relative handful of specialized laboratories doing research with it or with pieces of it.
00:34:36.260
We didn't actually need to publish the genome sequence in order to gain virtually any of those benefits.
00:34:43.960
Given that the number of people with access to 1918 flu has grown by a factor of 300, and it's
00:34:50.460
quite plausible that the CDC didn't realize it would be quite that large quite that soon,
00:34:57.720
do you imagine the people who made the decision to post it regret that decision now,
00:35:03.440
particularly in the post-COVID era? Well, some of the folks who are in charge or highly influential
00:35:08.660
at NIH today have been around for most of the last 20 years, and they've been among some of the
00:35:13.120
strongest supporters of taking highly lethal viruses that aren't very transmissible and engineering and
00:35:19.740
evolving them to become much more transmissible. Compared to that, posting the genome was simply identifying something from the past,
00:35:26.060
to which most people have immunity today, so it's fair to say that they don't regret it.
00:35:30.000
What other scary genomes are there online? Is the horse out of the barn when it comes to
00:35:38.320
terrifying things being posted online, or are we currently in okay-ish shape?
00:35:43.800
You know, a lot of people seem to think that it's too late, but it's really not, because again,
00:35:50.040
1918 isn't that likely to take off itself. There is something that would take off, which is variola virus,
00:35:56.860
smallpox, which is much more lethal and about as transmissible.
00:36:02.340
And also online, put there by the CDC. But with variola virus, the United States has 350 million doses of vaccine
00:36:10.000
ready to go. And variola is much harder to make as a virus. I would estimate that maybe 100 people could make
00:36:17.380
variola. And it doesn't do the asymptomatic transmission thing the way COVID does. So I think we'd have a much
00:36:24.380
better chance of getting under control, especially since we have so many vaccines. It would require
00:36:28.740
the U.S. to actually donate some to the rest of the world to stamp it out. But the world, in living
00:36:33.180
memory, has experience of vaccinating people against smallpox and eradicating it. And other than that,
00:36:38.700
people speculate about a few things, but there's nothing else out there that is really all that
00:36:45.440
Right. So, of course, the Ebola genome is online, 50% case fatality rate, but relatively not very
00:36:52.560
contagious compared to COVID. SARS, 10%, demonstrably far less contagious than COVID, because we know how
00:37:00.080
many people died in the SARS outbreak, and it was less than 1,000. One that always frightens me is MERS,
00:37:06.060
Middle East Respiratory Syndrome, another coronavirus like SARS, but also not very transmissible. So unless I'm
00:37:13.760
missing something, we're in pretty good shape for now.
00:37:17.200
That's right. I'm not super worried about all the viruses that are out there in terms of
00:37:21.900
causing sufficient devastation to really matter, certainly not to threaten civilization or anything like that,
00:37:27.060
probably not even the casualty levels that we've just seen with COVID. Folks can reasonably disagree
00:37:31.800
on this, but from my perspective, we're actually doing pretty okay when it comes to blueprints for nasty viruses.
00:37:39.360
Now, as anybody who heard the last episode I did on Sam's podcast knows, one thing that really worries me a great deal
00:37:46.320
is the deliberately artificially modified H5N1 flu that was created in Madison, Wisconsin, and Holland,
00:37:54.700
two labs, in 2012. It scares me because it has a 60% fatality rate, even worse than Ebola. And in nature,
00:38:02.860
it's barely contagious at all. But those experiments led to it being transmissible, at least in ferrets.
00:38:10.720
And the idea of something with that kind of case fatality rate ravaging the population freaks me out.
00:38:17.000
But you're not as worried about that as I am, correct?
00:38:20.760
That's right. Both of them, yeah, they're respiratory transmissible in ferrets. That doesn't
00:38:24.720
mean they're transmissible enough to cause a pandemic in ferrets, let alone transmissible enough
00:38:28.800
to cause a pandemic in humans. And subsequent studies of them suggest that there are reasons
00:38:34.620
to think that even if they were continually passaged to be even more transmissible and it translated to
00:38:40.780
humans, the virus would be much, much less lethal. The same mutations affecting transmissibility would
00:38:46.880
also reduce the lethality. So those are bad, but are still in the bucket of probably not.
00:38:52.860
So unless we're quite unlucky with H5N1, at this point, we don't really have much to fear
00:39:00.860
from the genomes that are currently on the internet unless we keep posting new and worse ones to the
00:39:08.980
internet. And because it's more future risk than current risk, it's really alarming that you previously
00:39:14.980
said that quite a few people seem to be operating under the presumption that the horse is already out
00:39:20.280
of the barn and therefore why not post everything? Like who's saying that?
00:39:24.320
Well, a lot of people, in fact, to the extent that I've been trying to raise this concern saying,
00:39:28.860
you know, we perhaps shouldn't share blueprints for what amounts to an arsenal of plagues online
00:39:34.220
by doing new research for trying to find things that would actually be that bad and would be likely to
00:39:39.480
take off. A lot of people say, eh, we're already past that. And I think that's because some people
00:39:45.300
are referring to COVID, which is not really what we're talking about. There's no reason not to post
00:39:49.720
COVID genome online when it's already causing a pandemic. That's different from the scenario where
00:39:55.520
it hasn't actually infected humans because that means that it might never do so, at least unless
00:40:01.260
humans cause it to do so. But a lot of scientists seem to take the view of there's lethal viruses
00:40:07.060
online, therefore there's no reason not to put more of them online. And that attitude is quite worrisome.
00:40:14.460
To give you just one example of this, so Dennis Carroll is a huge figure in the field of finding
00:40:19.600
and identifying unknown viruses. In a December article in The Intercept that we were both
00:40:24.500
interviewed for, Dennis basically acknowledged that, yes, a scientist could use
00:40:30.140
the genetic sequence of a dangerous virus maliciously. But, he said, that risk already exists;
00:40:37.860
we don't need to find some new virus in order to elevate that risk. And I just can't imagine why
00:40:44.760
he would say something like that. Now, to be fair, unlike many folks in this field, Dennis
00:40:49.940
does seem to invite us to hold him accountable in the case that he's wrong. He says, and again quoting,
00:40:55.320
if you go out and cavalierly begin collecting and characterizing these viruses, there is inherent risk
00:40:59.640
attached to that, and you have to be accountable for that risk. So I applaud him for saying that,
00:41:04.680
but I just couldn't disagree more when it comes to the risks of putting new viruses online.
00:41:10.700
Speaking of Dennis, this is actually a really good moment to transition to talk about Deep Vision,
00:41:14.600
the new USAID program that concerns you so much, because Dennis ran the immediate predecessor to it,
00:41:20.520
a program called PREDICT, which was basically the template for Deep Vision. And that's just strange
00:41:26.580
to me that PREDICT was able to spawn a follow-on program, in light of some of the terrible scandals,
00:41:34.660
frankly, that it has been implicated in. At the center of those scandals is the fact that PREDICT directed
00:41:39.720
a fair amount of money by way of an intermediary called EcoHealth Alliance to fund research on
00:41:46.280
coronaviruses in Wuhan. And not just that, but in labs that were known to have very shoddy safety
00:41:54.180
practices to the US government. We know that beyond a shadow of a doubt from some declassified
00:41:58.580
diplomatic cables. So that seems like an incredibly radioactive error in judgment. This was all out
00:42:06.360
in the open by the time Deep Vision was funded. And on top of that, we have to add the significant
00:42:11.760
possibility, not a certainty, but possibility that that work may have resulted in a leak that led us
00:42:18.340
to COVID. While there was a time when a lot of well-regarded scientists were putting the odds of a lab
00:42:23.760
leak at zero, that's a long, long time ago. At this point, substantially nobody who has
00:42:29.460
significant scientific depth in this would put the odds of a lab leak at zero. Some put it as high
00:42:35.300
as 90%. I'm sure some put it in single digits. But it is undeniable that PREDICT may have had a hand
00:42:42.320
in that. Now, this is politics, and it's irrelevant to your concerns, Kevin, so we can
00:42:47.660
just leave it there. But for all those reasons, I just find it a little bit unfathomable that PREDICT spawned an encore.
00:42:53.620
Now, one of PREDICT's main activities was something called virus hunting. And that's also going to be
00:42:59.300
a major activity of Deep Vision. What is virus hunting?
00:43:04.900
Well, you probably have this vision of someone who puts on some kind of suit and cavalierly goes
00:43:10.880
spelunking in a bat cave to collect samples and take them back to the lab, see what kinds of viruses
00:43:15.620
you can pull out. Yeah, Indiana Jones, National Geographic kind of vibe.
00:43:20.220
Now, you've got to keep in mind that's the media distortion. That's the glamorous presentation of the
00:43:24.240
virus hunter. Most virus hunting is probably more like going to those wet markets with various wild-caught
00:43:30.840
creatures and taking samples, or going to bushmeat markets in other parts of the world, or contracting
00:43:37.560
with local hunters to bring samples of various wild critters back, or even just taking general
00:43:43.280
environmental samples out there from a wide variety of environments and isolating whatever
00:43:48.460
viruses you can find from those samples. In the modern day, now sequencing them all, sharing the
00:43:54.120
genomes online, and then potentially doing more than that with the viruses that you've isolated.
00:43:59.320
The idea is we want to know what viruses are out there.
00:44:03.000
Having done a bit of digging into this, here are some basic stats on PREDICT. They found roughly 1,200
00:44:08.360
previously unidentified viruses, including more than 160 novel coronaviruses. They identified them, they sequenced them, and they were active,
00:44:14.860
I think, in 20-something countries and 60 foreign laboratories. And what was interesting to me is that
00:44:21.140
all 1,200 of those viruses had the potential to erupt into pandemics. That was the presupposition
00:44:28.120
for selecting them. I got this from the LA Times.
00:44:34.940
So that was PREDICT's haul. What's interesting to me about Deep Vision, they're actually talking
00:44:41.560
about 10,000 to 12,000 viruses over five years. So something like 10x the number of viruses in half
00:44:49.120
the time. This is with 125 million bucks instead of 200. It's a big explosion in efficiency. And to
00:44:56.080
the extent that this work poses risks, it's a big explosion in the risk. What's driving this
00:45:01.540
acceleration? And if we continue to do programs like this after Deep Vision's legislated five-year
00:45:08.600
life, will that explosion continue to the point that at some point we might be doing this for
00:45:13.760
many tens or even hundreds of thousands of viruses?
00:45:17.540
So one big chunk of the cost is taking all of these isolated virus samples and sequencing them.
00:45:23.220
That's creating the detailed blueprints. Because once you have the genome, then we can use DNA
00:45:27.200
synthesis to reconstruct that virus using these virus assembly protocols that folks have worked
00:45:33.360
out and made tremendous improvements in over the last few years. But the main cost reduction for
00:45:38.880
programs like Deep Vision is that sequencing is just getting cheaper and cheaper and cheaper.
00:45:42.800
And that means it's easier and easier to collect more and more viruses. What isn't appreciably easier
00:45:48.720
is taking them back to the lab, culturing them in the lab, and performing the set of studies that tell
00:45:54.160
you, is this one likely to actually cause a pandemic?
00:45:58.000
So what you're talking about now is essentially step two. We have found the viruses, we have virus
00:46:02.320
hunted, now they're back in the lab, and what you're talking about now is called characterization, right?
00:46:07.280
Yeah. The whole point of the program is to identify which viruses might spill over and cause future
00:46:13.760
pandemics in the hope that we could do something to prevent it. There's four key classes of experiments
00:46:20.560
that you perform on an animal virus to determine whether it's a good candidate for causing a
00:46:26.320
pandemic in humans. You want to know how tightly does this virus bind to human target cells? How
00:46:34.000
readily does it actually infect those cells? How readily does the backbone of the virus replicate
00:46:40.880
and churn out new copies of the virus in relevant human tissue types? And because you can't test it in
00:46:47.680
humans, of course, how readily is it transmitted in animal models that are chosen for their similarity
00:46:53.520
to humans, whether naturally or because in the case of mice, they've been engineered to express
00:46:58.960
human receptors and have human-like immune systems? So to play this back to you to make sure I understood,
00:47:03.840
there's four sets of experiments. Experiment one is, does this virus bind to a receptor on a human cell?
00:47:10.640
Like, can it find the door? Step two, having found the door, can it get in? Does it infect the cell? So does it
00:47:17.360
find the door? Does it have the key? Step three, having gotten inside, can it hijack the mechanisms
00:47:24.080
of the cell to replicate? And then step four is, if it does those first three things, does it seem to be
00:47:31.600
transmissible in animal models that replicate as closely as possible transmissibility in humans? Did I get
00:47:38.480
that right? That's exactly right. Great analogy. And if you answer those four questions, you'll basically
00:47:44.880
determine which of these 10,000 yet undiscovered viruses actually have major pandemic potential.
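Purely as an illustration, the four questions can be thought of as a triage checklist. The field names and the all-four-boxes logic below are invented for this sketch; they are not the program's actual criteria or data model.

```python
from dataclasses import dataclass

@dataclass
class CharacterizationResults:
    """Illustrative stand-in for the four experiments described above; the fields
    and pass/fail logic are invented for this sketch, not the program's real criteria."""
    binds_human_receptor: bool        # 1. can it find the door?
    infects_human_cells: bool         # 2. does it have the key?
    replicates_in_human_tissue: bool  # 3. can it hijack the cell's machinery?
    transmits_in_animal_model: bool   # 4. does it spread in human-like animal models?

    def pandemic_candidate(self) -> bool:
        # Only a virus that "checks all the boxes" gets flagged as a serious candidate.
        return all([
            self.binds_human_receptor,
            self.infects_human_cells,
            self.replicates_in_human_tissue,
            self.transmits_in_animal_model,
        ])

# Example: binds, infects, and replicates, but doesn't transmit -- not flagged.
print(CharacterizationResults(True, True, True, False).pandemic_candidate())  # False
```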
00:47:51.600
There's going to be a tiny minority of them. And this is something you'd have zero knowledge about without
00:47:58.080
taking these steps. So having done this characterization or pandemic identification process,
00:48:05.440
you will know something that humanity would otherwise never have known, which is that these viruses could be
00:48:11.840
incredibly dangerous, and these are not. Now, while I know this is, of course, an unknowable number,
00:48:18.560
and the best we'll probably do is a confident range, roughly how many pandemic-grade viruses would
00:48:26.240
you estimate Deep Vision is likely to find over its five years if it does, in fact, net 10,000 to 12,000 mystery viruses?
00:48:34.720
It's a good question. That's about 10 times as many as PREDICT found. And PREDICT didn't find
00:48:40.400
any that looked particularly likely in and of themselves. It found a bunch that looked worrisome
00:48:44.880
in one or another property on some of those tests, but it didn't find any that checked all the boxes,
00:48:49.520
so to speak. We can also just look back and say, well, how many viruses are plausibly out there?
00:48:54.080
How many pandemics do we normally see? And then here's the real big one. What fraction of viruses
00:48:59.520
out there are actually going to spill over and get into humans in the first place? Because we're
00:49:05.120
still discovering new species out there. There's plausibly a lot of viruses out there that have
00:49:09.280
never seen a human and never will. So some researchers say there's probably way more pandemic-capable
00:49:15.120
viruses out there than will ever cause pandemics, because most of them are just never going to come
00:49:18.560
in contact with humans. Let's try to drill this down and say the low-end estimate for how many
00:49:23.280
viruses might be out there in mammals is around 40,000, of which they estimate that maybe 10,000
00:49:31.440
are capable of human-to-human transmission. That's one rigorous estimate that's out in
00:49:36.000
the literature. That's the low-end estimate. Others in the field have estimated as many as
00:49:40.000
half a million in mammals. They didn't give an exact number for how many in humans, but if you dig
00:49:44.560
into the other papers, they suggest that maybe 20% of those. So somewhere between 10,000 and 100,000
00:49:49.920
viruses that could plausibly achieve human-to-human transmission. But human-to-human transmission
00:49:55.040
is not the same as causing a pandemic. There's lots of viruses that can spread human-to-human,
00:50:01.920
but on average each person infects fewer than one other person. So they die out, like SARS-1 and MERS.
00:50:09.040
So how many of them could actually cause pandemics? That's really hard to know, but if Deep Vision
00:50:15.120
finds 10,000 and you assume the lower-end estimate, then that would be as much as a
00:50:19.280
quarter of all the viruses out there. So let's say 20% of those are going to be capable of
00:50:24.960
spreading human-to-human. That's 2,000 that we'd expect to spread human-to-human. What fraction of those 2,000
00:50:30.400
spreading human-to-human could actually cause a pandemic? It's not really known because the ones
00:50:36.000
that are endemic in humans are a biased sample from all of history. So it's really hard to say. But if
00:50:41.280
we're guessing like one in 100, then that would mean 20 pandemic-capable viruses. But that doesn't
00:50:46.640
make sense statistically because PREDICT found 1,100, 1,200 viruses and didn't find anything
00:50:52.480
that clearly looked like a pandemic-capable virus. Which makes logical sense, and I do generally
00:50:57.280
believe that probably that means 8-ish is a high-end plausible outcome. But a quick question. Is it
00:51:03.200
possible that the targeting of the hunted viruses or the accuracy of some of the laboratory steps
00:51:11.920
have gotten substantially more efficient, that perhaps more will be caught, maybe predict missed
00:51:18.320
things that deep vision will not miss because it's being done in 2022 instead of 2009?
00:51:25.040
The main difference is that now there are much better computational tools for matching virus to
00:51:32.160
host species by receptor similarity and the like. So when deep vision decides a virus is worth testing,
00:51:39.120
from the sequence data, they're going to have a much better shot at identifying that virus as being
00:51:44.640
high risk than PREDICT was. Okay, so to play this back, the upper ceiling could be 8. It could be
00:51:52.000
higher because of improved technology. A bottom-up estimate gets us into the possibility of it maybe
00:51:57.840
even being low dozens. We don't know exactly. But it's important to know that it is possible that you will find
00:52:03.840
zero pandemic-capable viruses, in which case the program cannot by definition trigger any of the
00:52:10.720
terrifying things that we're about to talk about that it could trigger. So this is not a certainty
00:52:14.880
that there's a huge risk here. But zero feels like a low-end estimate because the organizers of this
00:52:22.640
program presumably know a lot about the state of the science and what can be done now and what can be
00:52:27.040
expected. And they wouldn't be spinning up this program if they thought it was a real possibility
00:52:32.000
that there would be zero because then by definition the program will fail. So zero is possible. And then
00:52:37.680
the high end from the bottom-up estimate, intuition, and a top-down view gets us quite plausibly into a
00:52:45.200
mid-to-high single-digit number, possibly but not likely into the dozens. Is that a reasonable ballpark?
00:52:51.280
Yeah, I think that's, frankly, a much better way of estimating it than I was giving.
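To make that back-of-the-envelope arithmetic concrete, here is a minimal sketch using only the rough figures quoted above; every number in it is an assumption taken from the conversation, not data:

```python
# Back-of-the-envelope version of the estimate discussed above.
# All inputs are the rough figures quoted in the conversation.

viruses_found = 10_000            # viruses Deep Vision might net over five years
frac_human_transmissible = 0.20   # ~20% assumed capable of human-to-human spread
frac_pandemic_capable = 0.01      # guess: ~1 in 100 of those could cause a pandemic

human_transmissible = viruses_found * frac_human_transmissible   # -> 2,000
pandemic_capable = human_transmissible * frac_pandemic_capable   # -> 20

print(f"Expected human-transmissible viruses: {human_transmissible:.0f}")
print(f"Expected pandemic-capable viruses:    {pandemic_capable:.0f}")

# As noted above, PREDICT's ~1,100-1,200 viruses yielded none that checked
# every box, which is why the speakers treat mid-to-high single digits,
# rather than dozens, as the more plausible high end.
```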
00:52:55.920
Okay, thank you. Now, the characterizing work is really interesting to me for a bunch of reasons.
00:53:02.640
How hard and expensive is it to do this characterizing work at scale? And how likely
00:53:10.400
is that kind of work to happen with many thousands of viruses in the absence of this program?
00:53:16.640
From what I understand, the characterization aspect is the most expensive component of a program that
00:53:24.240
is not limited to virus hunting and characterization. That is, Deep Vision is also about monitoring the
00:53:30.880
human populations that are closely exposed to a bunch of the animals that are most likely to pass
00:53:35.520
on viruses to them, so as to identify outbreaks early and give us a better shot at containing them.
00:53:42.000
So this program does a bunch of really good stuff, too.
00:53:44.960
Yeah, that's an unadulterated good thing, what you just mentioned.
00:53:48.160
But my understanding is that the bulk of the cost is actually taking the
00:53:52.400
viruses back to the lab and running the experiments, because that requires a bunch
00:53:55.600
of trained virologists with skills that are much more specialized than following a protocol
00:54:02.080
to assemble a virus from the sequence. That is, there's a whole lot more people that can
00:54:06.080
make a virus from a genomic blueprint than can search through this haystack of wild viruses
00:54:12.560
looking for ones that could plausibly cause a pandemic.
00:54:15.680
Okay, so we virus hunt, we characterize, we haul in a record-shattering number of viruses over the
00:54:23.040
five years. At the end of which, humanity knows something that it would otherwise not have known,
00:54:29.440
which is that these, let's call them six, previously undiscovered viruses pose a very,
00:54:35.600
very significant pandemic risk. Now, I could see very good arguments for us not wishing to know
00:54:44.080
this information, but I could also see arguments for us wishing to know this information. But what's
00:54:49.600
most significant to me is step three in this process, which so flabbergasted me when I learned about it
00:54:58.480
that my head is, frankly, still spinning a little bit. And I want to be very clear, and I don't want
00:55:03.760
to exaggerate, what step three is. And so in order to put this out there in Deep Vision's own words,
00:55:10.800
so to speak, I'm going to quote from their so-called NOFO, which is a notice of funding opportunity.
00:55:17.760
This is something that USAID and other agencies put out in the world and say,
00:55:21.520
we want this program created. And Deep Vision's NOFO is over 100 pages long, so it's very,
00:55:26.560
very specific. And then universities and other people can bid on doing this piece or that piece,
00:55:32.880
and all the pieces are eventually assembled by a program officer inside of USAID. So this NOFO,
00:55:38.960
this 100-plus page document in the public sphere, a bit of Googling, you can find it,
00:55:43.200
is as clear, detailed, and certainly official a statement of Deep Vision's intentions as exists.
00:55:49.760
So what I'm going to do right now, Kevin, is I'm going to read a few snippets. And for those who
00:55:54.960
find the NOFO, these are a few scattered phrases and sentences from the bottom of page 16 to the top
00:56:00.000
of 17. And I'll ask you to explain a couple terms and meanings as I go along. So first of all,
00:56:06.160
from the NOFO, since USAID expects that data generated by Deep Vision will be publicly available,
00:56:13.760
and I'm going to skip ahead a few words for efficiency, Deep Vision will assist in linking
00:56:19.520
in-country data with global systems, e.g. GenBank and GISAID. What are GenBank and GISAID,
00:56:28.240
and what does it mean to link the data with them?
00:56:31.840
So that's referring to the standard open data practice of sharing the genome sequences of
00:56:38.400
everything that you find in a given research program. So GenBank is the main repository hosted
00:56:43.920
by the U.S. government of all genomic information that everyone has sequenced and submitted to GenBank.
00:56:49.920
GISAID is the repository specifically of virus strains. Not everyone sends the sequence of every
00:56:56.640
strain to GenBank if there's already a sequence that is very similar there. GISAID is for the
00:57:02.960
detail-oriented virologists, the viral evolution specialists, who want to see and map out how different are the
00:57:10.160
mutational variants of a given virus. So basically it is a wildly standard
00:57:16.320
scientific practice now to put any sequence you find up to the cloud before you have any notion of
00:57:22.000
its significance. So everything they find by definition is going to be made publicly available,
00:57:26.880
long before they characterize it. Is that roughly correct?
00:57:29.600
That's exactly right. The principle of open data is you collect data in a scientific experiment,
00:57:34.480
you should really put it online for everyone to see.
00:57:36.320
Okay, a couple words after that last snippet of text, quoting again from the program's NOFO.
00:57:42.400
Knowledge gained by deep vision on novel viruses assessed to be zoonotic and significant epidemic
00:57:49.600
slash pandemic threats will be immediately available to the in-country owners of the data and will be
00:57:57.200
expected to be available expeditiously to policy makers, the private sector, and implementing partners.
00:58:04.720
So I think I know what that means. Could you just play back what that amounts to?
00:58:09.280
If you think there's a virus that poses a major epidemic risk, that is, a local outbreak risk, or,
00:58:14.560
gods forbid, a pandemic risk, you need to tell the country that you found it in immediately so that
00:58:19.840
they can take action. And it's expected that they will then share it with the international community.
00:58:26.080
So bottom line, if deep vision succeeds, everyone on Earth will get the recipes for
00:58:35.200
the dangerous viruses that we find. They will be put online just as the 1918 flu virus genome was put online.
00:58:42.560
That's right. Because remember, the intent of this program is to identify which natural viruses might
00:58:49.120
cause pandemics if they spill over into humans, so as to better target interventions
00:58:55.120
to prevent that from happening. And the way they hope to do that is by creating a list of viruses
00:59:02.320
rank ordered by perceived threat level, which they've already done for the viruses that are highly lethal
00:59:07.520
and don't spread particularly well human-to-human, but are known to have spilled over. That's the
00:59:13.200
product of a related USAID program called Stop Spillover, which is generally highly admirable with the
00:59:18.560
possible exception of this list. Although, as we discussed previously, there's not a whole lot that's
00:59:22.000
dangerous on that list so far. But what deep vision apparently wants to do is, as soon as they find
00:59:27.520
anything dangerous, they're going to alert the world to it and put it on that list, presumably in the
00:59:32.320
number one position. So bottom line, if and when deep vision succeeds, based on what they've stated
00:59:38.880
themselves and on general practices, the genomes of potentially very significant pandemic viruses
00:59:46.000
will be put on the open internet for anybody to find. Correct. And given the base of scientists and
00:59:52.080
individuals who are already able to conjure certain viruses from whole cloth in the lab, tens of thousands
00:59:59.440
of people will be in a position to follow those recipes as soon as they're posted and animate that
01:00:06.400
virus, even though they don't personally have a sample from the bat that it originally came from.
01:00:11.600
Did I get that right? That's exactly correct. And to tie it back to what we were talking about a
01:00:16.640
little while ago, the current population of frightening genomes on the internet is probably not all that bad.
01:00:24.800
But this action could substantially change that by saying, yes, world, 1918, probably not as bad as you
01:00:32.720
think. H5N1, pretty good chance it's not transmissible. But holy cow, people, these, whatever, six viruses
01:00:40.000
can really do the trick. That's going to be out there as a result of this program. And tens of thousands
01:00:46.000
of people at first will be able to conjure these things, followed by we don't know how many in the
01:00:52.160
future. Is that an excessively alarmist interpretation? No, it's really not, I'm afraid.
01:00:57.920
And it's worth comparing, COVID has killed many more people than any single nuclear detonation.
01:01:03.920
There are nine commonly acknowledged nation states with access to nuclear devices. So as soon as viruses
01:01:14.000
likely to cause pandemics are publicly identified with freely available genomes, the number of actors in
01:01:21.600
security terms capable of inflicting a million plus deaths in expectation will rise by roughly a
01:01:28.720
thousand fold. So the vulnerability of the world to a deliberately inflicted pandemic will rise
01:01:38.080
tremendously if deep vision succeeds. I mean, that's what I get from this. And again, please tell me if I'm
01:01:46.400
being unrealistic in my interpretation. Frankly, I wish you were, but that seems pretty accurate.
01:01:53.280
I'm going to add, it's a little bit worse than that actually, because a lot of people
01:01:56.960
seem to assume a pandemic is a pandemic, right? It's the same virus. But nature throws on average
01:02:02.800
four or so pandemics at us per century. If someone deliberately causes a pandemic, if the list has eight
01:02:10.800
viruses, they could make and release all eight at once. The idea of facing eight different COVIDs,
01:02:19.600
none of them with vaccines or even tests, all at the same time, it just boggles my mind. That's
01:02:26.160
just such a horrifying thought, given how hard it was to deal with this one.
01:02:29.680
And it's worse than that, I'm afraid, because a lot of people see our best hope as being vaccines,
01:02:35.200
because obviously that's what got us out of this one, right? Well, as we're talking about this,
01:02:40.000
it's been 102 days since the Omicron variant of COVID was first sequenced and the genome shared with
01:02:47.680
the world. The goal that the White House has put forward is that we should have vaccines available
01:02:53.280
within 100 days of the genome sequence being identified. But by a couple of days ago, Omicron had
01:02:58.720
infected a very large fraction of humanity, maybe not 50 percent overall, but in some places definitely 50 percent
01:03:05.280
of the population. So 100 days is too slow, even though Omicron arose in one place in the world and spread from
01:03:11.920
there. And that's our stretch goal is 100 days. And I agree with you, as you know, I've been aware of
01:03:17.280
that number for a while. And it's just, it's dangerously timid. But it's worse than that.
01:03:22.560
Oh, good. If Omicron can spread that fast from a single point of release,
01:03:27.440
anyone malicious enough to deliberately cause a new pandemic, let alone several at once,
01:03:33.360
would almost certainly release them in multiple travel hubs, say airports throughout the world.
01:03:39.600
So it wouldn't be a single point of release you have a chance to notice because lots of
01:03:42.640
people are getting sick. It would be people getting sick in cities all over the world all at once.
01:03:48.960
It would substantially diminish our response time. And one would imagine thereby substantially
01:03:54.880
increase the resulting death toll. So even if it's just one virus, a deliberate release is worse
01:04:00.800
than a natural spillover or an accident because both of those are spread from a single site.
01:04:05.680
Yeah, that's a really interesting and important point. So COVID emerges in Wuhan and it's two and
01:04:12.160
a half months before it shows up in Detroit. That was nowhere near enough lead time for us. And it could
01:04:17.520
be chopped by 99 percent by a malevolent actor. That's scary.
01:04:21.360
And there's one more thing I really want to emphasize, which is some people might be saying,
01:04:24.560
well, yeah, but no matter how those four experiments turn out, even if they're all four
01:04:29.280
positive and give values comparable to an endemic human virus of the same family,
01:04:33.280
you might say, well, maybe that's just a 50 percent chance that it would actually cause a pandemic.
01:04:36.880
Because again, the transmission studies are in animals, not humans. And to be fair,
01:04:40.320
I just pooh-poohed the H5N1 mutants as being actually likely to cause a pandemic in humans.
01:04:46.560
So let's say that, okay, they find five viruses that they think could cause pandemics. Let's assume
01:04:52.720
each of them is actually 50 percent likely to cause one. Well, if someone were to assemble and
01:04:57.760
release all five, they'd have a 96 percent chance of causing at least one pandemic and an 81 percent
01:05:04.480
chance of igniting two or more simultaneously, again, presumably across multiple travel hubs.
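Those percentages follow from simple binomial arithmetic under the stated assumption of five independent viruses, each with a 50 percent chance of igniting a pandemic; this sketch just reproduces that calculation, which lands at roughly 97 and 81 percent, essentially the figures quoted:

```python
# Binomial arithmetic behind the figures quoted above: five independently
# released viruses, each assumed to have a 50% chance of causing a pandemic.
from math import comb

n, p = 5, 0.5

def prob_exactly(k: int) -> float:
    """Probability that exactly k of the n viruses cause pandemics."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

p_at_least_one = 1 - prob_exactly(0)
p_two_or_more = 1 - prob_exactly(0) - prob_exactly(1)

print(f"P(at least one pandemic): {p_at_least_one:.4f}")  # 0.9688
print(f"P(two or more at once):   {p_two_or_more:.4f}")   # 0.8125
```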
01:05:09.680
That's pretty terrifying. And if you assume that deep vision is only the beginning and the goal is to
01:05:16.640
find all pandemic-capable viruses out there in nature and put them all on a list in order to
01:05:21.760
prevent as many natural pandemics as possible, then you're raising the possibility that someone might
01:05:26.720
do it and then you'd see copycats. Because that's what happens with mass shootings. It's a socially
01:05:32.800
contagious behavior. Once one person sets the example, then many more people who are mentally
01:05:40.800
ill or captive to a horrific ideology would be more likely to do it now that they know that it's
01:05:46.560
possible. In general, I'm not super worried about civilization falling apart because of a single
01:05:52.080
natural pandemic virus. But if you're talking releasing eight at once, that might be another
01:05:58.800
story depending on how bad they are. I mean, remember, if essential workers aren't willing to
01:06:03.040
go out there and risk their lives, then you start having problems in food and water and power
01:06:08.240
distribution. And if those fall apart, then everything falls apart. So putting those genomes online
01:06:14.800
looks pretty risky given that the best case scenario is that we would prevent all of the natural pandemics,
01:06:21.360
which is historically around four per century. It seems a lot to say it's definitely a good deal
01:06:28.720
to prevent four natural spillovers per century in exchange for giving tens of thousands of people
01:06:34.720
the power to launch more pandemics than that simultaneously across multiple travel hubs.
01:06:40.800
In light of all of that, I don't know how you could argue against the statement that if deep vision is
01:06:48.880
spun up and succeeds, the exposure humanity faces to malevolently inflicted pandemics skyrockets,
01:06:57.440
even if deep vision finishes its job and none of this work is ever, ever done again. Which is,
01:07:04.880
first of all, a completely naive assumption. I don't think anybody would make that, given that
01:07:08.640
there's momentum to do more and more of this rather than less and less of this in USAID of all places,
01:07:14.320
which we'll discuss in a bit. And that brings me to my last quote from deep vision's NOFO.
01:07:20.400
The USAID seeks to assist a limited number of countries with a focus on Africa, Asia, and Latin
01:07:30.640
America to establish capacity to detect, characterize, and disseminate information and findings regarding
01:07:38.960
previously unknown viruses that have originated in wildlife. Correct me if I'm wrong, but basically,
01:07:45.200
it says, in addition to doing this stuff ourselves, we are going to be training lots of foreign countries
01:07:51.920
to also do this. And it's noteworthy that this training, which is currently specialized,
01:08:00.080
will probably enjoy many breakthroughs as a result of $125 million of funding over five years;
01:08:06.640
you're going to have a lot of smart people concentrating on characterizing better, cheaper, and
01:08:10.080
faster, which wouldn't otherwise happen without this program. So if deep vision succeeds and
01:08:15.760
society continues tripling down on this stuff, this work will, as a consequence, be done on foreign soil,
01:08:23.120
where the US government has absolutely no sway or say, and will eventually be done with tools and
01:08:28.720
techniques that are far better than what we've got right now. So this is the horse out of the barn
01:08:34.560
that doesn't actually yet exist on the internet, given the relatively limited set of dangerous
01:08:42.320
viruses that are there. Now, there is one factor that I should probably point out, which is suppose
01:08:49.760
we're concerned about state bioweapons programs. If they were to say, don't mess with us, we have the
01:08:56.400
ability to launch new pandemics, no one would believe them. I would laugh and say I can fabricate data
01:09:03.440
too. That's cute. But if it's done by deep vision, by deep vision funding a bunch of independent labs
01:09:11.360
who are well-meaning, who want to prevent natural pandemics and just haven't thought
01:09:16.080
about the security risk, and they publish that data, I believe it. If it's done by independent labs who
01:09:23.600
don't have a motive to have a pandemic in their pocket, it's going to be believable in a way that it
01:09:28.880
won't be if malicious actors try to do it themselves. And I think this is one of the most powerful and
01:09:34.800
original arguments that I've heard about the danger of this kind of genomic information being
01:09:39.920
out there. And I hadn't thought about it until you first mentioned it to me on the phone before. If, I
01:09:44.560
don't know, an environmental extremist movement or a frightening anonymous source on the internet,
01:09:50.800
or even a state actor like North Korea, were to say, hello world, I am going to inflict a devastating
01:09:58.000
pandemic unless you meet my demands, it would be laughed off. It would freak people out, it would
01:10:03.840
probably get a lot of coverage on CNN, but the scientists who are in a position to inform the
01:10:09.440
national security apparatus in various countries about whether or not to take this threat seriously
01:10:14.160
will say this is just not feasible. That totally changes when a genome that the world would not
01:10:21.440
have known otherwise is definitively blessed and publicized by a brilliant scientific group as being
01:10:29.280
that thing. Once that work has been done, which wouldn't be done otherwise, now all of a sudden we know
01:10:35.520
that 30,000, and perhaps quite a few more, people can follow that recipe overnight. And what that means
01:10:43.840
is we could suddenly go through a terrifying series of hijackings of the attention and the stress
01:10:52.560
levels of the world with all kinds of people issuing those threats and they don't even have to have
01:10:57.680
access to one of those 30,000 people that can make that virus. They just have to say it because it's
01:11:03.600
suddenly credible. Nobody will be able to say that it's impossible for one of those 30,000 people to be
01:11:08.800
under the control of this terrifying anonymous source online or this terrorist group or this rogue
01:11:14.640
actor or whatever. And that alone is just wildly disruptive. It's one thing if a kid pulls the fire
01:11:22.560
alarm in their school to get out of their science exam or whatever. But if that kid calls the school and
01:11:29.200
says, I have a hydrogen bomb that's ready to go off, no one's going to ignore that alarm. The creation of this
01:11:34.800
credibility empowers so many more people than, quote unquote, just the 30,000 people who could act on it
01:11:43.200
to do awful things in the world. And it's probably even more likely because the people who might make
01:11:49.440
that threat may in their inner conscience say, I'm actually not going to do it. I don't have a gun.
01:11:53.920
You know, I'm just holding up the liquor store with a squirt gun. No one's going to get hurt.
01:11:57.760
Might actually make them more likely to move ahead with that kind of thing than it would be for a state actor
01:12:05.040
to do something. Yeah, I'm actually at least as afraid of not just non-state actors, but even just
01:12:11.520
individuals, mentally ill or otherwise. So Seiichi Endo was a member of this apocalyptic terrorist cult,
01:12:17.680
Aum Shinrikyo in Japan. Aum was responsible for making and releasing chemical weapons that killed
01:12:23.760
a bunch of people in Japanese cities in the early-to-mid 1990s. But before he joined the cult,
01:12:30.080
Endo was a graduate-trained virologist. And he sought to obtain samples of Ebola for use against
01:12:37.680
civilians and was unsuccessful. James Holmes, who is a convicted mass murderer, the Aurora shooter,
01:12:43.600
quit his life sciences PhD program not long before he opened fire in the theater. And of course,
01:12:49.680
pre-al-Qaeda, the most famous terrorist was arguably the Unabomber. Ted Kaczynski was a brilliant
01:12:55.280
mathematics professor who referred in his manifesto to the immense power of biotechnology,
01:13:01.280
even though he wrote it decades ago. It's really hard to imagine someone like that,
01:13:05.440
who wanted to bring down the industrial system, would not have used that power if given access
01:13:11.760
to pandemic virus genome sequences and modern virus assembly protocols. And that's not even getting
01:13:17.200
to groups like Daesh, ISIS, and other kinds of terrorists. Folks who might actually be tempted to
01:13:24.160
use it, even if it would hit their own people. Yeah. And the omnicidal factor, if we can call it that,
01:13:31.680
is something that I think people very frequently miss. Because it's easy to rule out
01:13:35.520
a majority of bad guys who normally dominate international headlines. We can safely say that
01:13:44.480
Putin, Xi, indeed North Korea, are wildly unlikely to inflict a pandemic on the world because they have
01:13:53.120
so many people to protect and so much to lose. And if we're in the mindset that it takes a major
01:13:59.280
state actor to do such a thing, there is a bit of mutual assured destruction built into that.
01:14:05.680
There's radical deep ecologists who don't think much of humanity in general and might think that
01:14:10.720
the world is better off with a whole lot fewer humans. There's folks with those persuasions who
01:14:16.000
have called for it. Certainly it's unclear whether a group like Al Qaeda would actually have unleashed
01:14:21.360
something that would hit their supporters as well, but quite possibly. I mean, they did put out a call
01:14:26.640
for brothers with skills in microbiology in 2010 to make biological weapons of mass destruction. And then
01:14:34.000
maybe one of the most haunting ones is folks with the mindset of the Germanwings pilot who decided
01:14:39.840
to commit suicide. He was mentally ill, didn't disclose his mental state, and decided to end it by taking a plane full of people with him.
01:14:49.360
Yeah, in the United States, I'll add, we have an average of more than one mass shooting per day.
01:14:54.560
So that suicidal mass murder instinct is out there, and the omnicidal instinct is out there. And then
01:15:00.480
there's just thinking out loud, like, there could be other people who don't realize quite how deadly the
01:15:05.760
thing they're doing is, and might have some sort of boneheaded cunning plan. Like, I could imagine
01:15:12.320
somebody saying, ooh, I remember the markets crashing when COVID came out. I'm going to release something,
01:15:18.000
short the market, make a pile of money, and stockpile gas masks. It sounds moronic and absurd, but there's
01:15:26.800
seven billion people in the world, and moronic, absurd ideas occur to at least some of us each and every day.
01:15:33.280
Okay, now I want to get into the final, and in some ways, maybe even the most important part of our
01:15:40.720
conversation, a little bit of a preamble. It's obviously abundantly clear that I am horrified by
01:15:47.760
the agenda and the prospects of this program called Deep Vision. But I'm going to try to do my very best
01:15:54.960
to push back with every argument that I have heard of or have dreamt up on my own in favor of Deep Vision,
01:16:02.200
because there is another side to this story. And the people behind this program are extremely smart
01:16:08.280
and well-intentioned. And I'll also add that when I personally first heard about PREDICT,
01:16:14.760
it was when the news broke about it being shut down. And that actually happened either right before
01:16:19.240
or right as the pandemic was starting. And at the time, I thought this was insane, because zoonotic
01:16:26.040
spillover has obviously happened before, it will obviously happen again. And how can we defend ourselves
01:16:31.240
against it if we don't study our enemies? I'm actually very sympathetic to these arguments.
01:16:36.040
So let's go back and forth. And I'm really going to try to make every argument I'm capable of in
01:16:42.920
favor of this. So, as much of the other side as we can possibly deliver. So my first question to you,
01:16:51.400
which is the question that was in my mind when this thing was first shut down,
01:16:54.280
shouldn't we want to know who the bad guys are and where they live before they strike? Shouldn't we
01:17:01.880
want to know where the dangerous pandemics dwell and know exactly what they look like? And isn't that
01:17:07.960
a bit like putting wanted posters all over the Wild West?
01:17:12.360
It's an extremely intuitive and compelling rationale. And most of the time it's true. The problem is
01:17:22.280
that it's kind of like saying, here's a wanted poster for this particular device, which happens
01:17:28.920
to be the detailed schematics for a hydrogen bomb, and peppering the world with different blueprints for
01:17:35.640
hydrogen bombs or equipment necessary to make hydrogen bombs in order to identify people who
01:17:42.680
might be making hydrogen bombs. It's not a perfect analogy, but it's hard at the top level to say,
01:17:49.720
okay, best case scenario, we prevent all natural pandemics. Is that worth giving these tens of
01:17:55.800
thousands of people the power to release more pandemics than would normally occur in a century
01:18:01.480
all at once? That just doesn't seem like a good trade. And when you put it in that context,
01:18:07.000
then you kind of have to look back at your basic assumption that it's always good to know more about
01:18:11.960
a threat. Because that intuition doesn't encapsulate the costs of knowing. That is,
01:18:18.360
the value of information can be negative. That's the whole point of an information hazard. There's
01:18:23.640
actually a whole field of modeling and information theory on calculating the value of information.
01:18:28.280
When should you run an experiment to learn more? And it tends to assume that information is always
01:18:33.080
positive. And the question is, is the cost of running the experiment worth reducing your uncertainty
01:18:37.480
about what would happen in the world? But it's also possible for the value of information to be
01:18:41.480
negative. And you can imagine an extreme case. Suppose you figured out some way of creating a
01:18:50.120
singularity on earth that would devour the planet, to give the absurd example. Would we want
01:18:54.920
humanity, anyone in humanity to know that that is possible? Probably better off if no one ever
01:19:00.520
knows that that's possible. And because of the credibility issue and the difficulty of doing this
01:19:05.320
kind of research, it looks a lot like if we don't go there, then there won't be credible pandemic
01:19:11.720
capable virus blueprints online for quite some time. That's not going to last forever. We will eventually
01:19:19.320
lose. They will eventually go up there, whether through this kind of route or another one. But
01:19:25.720
the longer we can push it off, the more time we have to build actually effective defenses.
01:19:31.560
And it's important not for listeners to come away with a sense of doom and gloom, because it's also
01:19:36.200
true that even for the scenario where people release multiple pandemic capable viruses all at once in
01:19:43.160
multiple airports. If we have sequencing-based monitoring systems in place everywhere, we'll pick
01:19:49.240
them up nearly immediately, certainly before they spread too widely through air travel, and be able to
01:19:57.400
put on our protective equipment. And if we actually tried, we could probably build comfortable,
01:20:03.000
even stylish equipment that keeps you from getting infected with viruses. And we could make it available
01:20:08.200
to everyone who is required to keep food and water and power flowing for as long as it takes to stamp
01:20:15.000
out the virus entirely. And if we had that kind of equipment available, and the threat was that salient,
01:20:20.920
we could do it. Even after COVID, even after our manifest failure in so many ways, with that kind
01:20:27.160
of technological advantage, I think we could do it. But we can't do it today. So it's really hard to make
01:20:34.360
the argument that the threat of natural pandemics could ever be enough to justify creating that risk
01:20:40.680
of deliberate misuse that could be so much worse because humans make better terrorists than nature.
01:20:46.600
Even if we can't make viruses worse than nature, we make better terrorists. I think the main issue is that
01:20:53.080
folks who take this kind of attitude towards understanding the threat as thoroughly as
01:20:59.080
possible, no matter what, assume that all technologies favor defense. That no matter how bad it is,
01:21:06.920
if we just know more, we can come up with some kind of effective defense. But that's just not how the
01:21:14.120
world necessarily works. And we know that from nuclear weapons. But also, frankly, we should know that from
01:21:20.440
COVID. Because we weren't able to effectively defend against COVID, even though we arguably should have been.
01:21:26.600
Certainly some nations did much better than others. But right now, you can't say that understanding
01:21:32.840
viruses better, especially given our difficulties reliably making vaccines quickly enough, is
01:21:39.080
plausibly going to mitigate the damage from any pandemic enough to warrant the kind of offensive capability that
01:21:46.840
you're giving to individuals. One individual can launch pandemics, and the entire world has to
01:21:53.080
frantically make vaccines, test them, approve them, manufacture them, and distribute them,
01:21:58.600
if we can even do that. And then the offense can do it again. It takes so much less effort if you have
01:22:06.920
a list of many different pandemic-capable viruses to choose from.
01:22:10.120
Okay, you mentioned vaccines. So that gets to another argument that I've heard in favor of this
01:22:16.520
approach, and one that I certainly harbored myself for quite some time, which is, if we do identify
01:22:23.640
the spillover bad guys before they spill over, don't we get a huge head start on creating vaccines and
01:22:29.960
therapies to counteract that spilled-over pandemic? And if that's the case, don't we have a great potential
01:22:37.880
to really mitigate an awful lot of deaths? And we need to put that in the benefits column as we also
01:22:43.800
look at the potential cost column. In order to have a head start in vaccine production,
01:22:50.120
you need to be able to establish whether it works. And if you're trying to develop a vaccine against
01:22:56.680
a virus that has never infected a human before, the only way to test efficacy, that is to run what's
01:23:02.840
called a phase two clinical trial, would be to deliberately infect a bunch of people with a virus
01:23:08.840
of unknown lethality that we think might cause a pandemic? Now that's what's called a challenge
01:23:14.360
trial, and we weren't willing to do that even for COVID until well over a year into the pandemic.
01:23:20.600
Would we really be willing to do it for a virus that was isolated from animals and might never
01:23:26.360
actually infect a human or certainly ever take off as a pandemic? What if we discover a couple dozen
01:23:32.920
viruses? Are we going to do it for all of them? Then you have to take into account how fast mRNA vaccines
01:23:37.640
can be designed, which is basically a day. I mean, Moderna's was famously done in less than 48 hours. I'm sure we can
01:23:42.680
do it within 24 hours now. If you can design the vaccine within a day, and you already have production
01:23:47.720
facilities that allow you to churn out tons of doses very quickly, because again, you're just
01:23:51.640
making RNA, you're just changing the sequence. It's very easy to specify it for a new virus. Then
01:23:57.080
there's no reason why you can't run a combined phase one and phase two trial immediately. Because one of
01:24:03.880
the best things that NIH has called for and is now working on with White House support is a program
01:24:09.080
to develop one vaccine for a virus of every family. Because if you do that, and we actually have some
01:24:14.760
idea of this from many viruses, since those efforts are already ongoing, then you know roughly what dose you need
01:24:19.880
to use for your mRNA vaccine against that family. If there's a pandemic already going, you should be
01:24:26.360
making your mRNA vaccine candidate and getting it in people's arms who are high risk in order to protect
01:24:32.040
them as soon as possible. Because you already know that an mRNA vaccine for a related virus at
01:24:37.240
a given dosage was safe and effective. So given that, you wouldn't save even a single full day
01:24:46.360
of vaccine development by knowing the virus in advance.
01:24:49.480
Very few people know how quickly Moderna was developed, and I think it's an important
01:24:53.960
factual point to lay out there without any editorializing. This is just fact. I think it was
01:24:59.080
something like 342 days between when Moderna and most of the non-Chinese world got the genome for
01:25:07.160
COVID and when the vaccine came out, 300-something, mid-to-lower 300s, days. But what few
01:25:13.720
realize is that roughly two of those days were the total vaccine development timeframe, and the rest of
01:25:20.760
it was testing, safety, regulation. And the point well taken that it doesn't take long at all to make
01:25:27.160
one of these vaccines. But it's really important to note the development of the formula for the vaccine
01:25:32.920
is much briefer than the time necessary, as we've seen with COVID, to create 7 billion of them to
01:25:39.080
vaccinate the world, or even 350 million of them to vaccinate our own country. That just took many months
01:25:45.080
in the case of the US until we got to the point where anybody who wanted a vaccine could get it.
01:25:49.240
And we're nowhere near that point with the world yet. So wouldn't it be beneficial to do the deep
01:25:54.360
vision work, to find the plausible bad guys, and to just stockpile billions or hundreds of millions,
01:26:01.080
or whatever the appropriate number is, of those vaccines so we are in a position to snuff it immediately?
01:26:06.120
Well, what you just described sounds a lot like spending an awful lot of money
01:26:10.440
stockpiling doses of vaccines that we're not actually sure work yet, so that you can do what
01:26:15.960
amounts to a phase one plus phase two trial of ring vaccination to try to stamp out an epidemic before it spreads.
01:26:25.240
So ring vaccination is what we use to get rid of smallpox, for example, and it's where you have a case
01:26:30.440
and you essentially give everyone in the area who might plausibly have come in contact with them a jab,
01:26:36.680
just in case. You do some contact tracing if you can, but it's more like list everyone you know,
01:26:42.440
plus everyone who lives or works within 10 blocks of your home or workplace, respectively.
01:26:47.080
We're just going to vaccinate everyone to a couple of degrees of contact out, or even whole cities, if need be.
01:26:54.600
But that's still actually not that many doses. What you described sounds a whole lot more expensive
01:27:01.080
than just building the capacity to make mRNA vaccines in bulk very quickly in various places throughout
01:27:07.640
the world. I mean, we're going to have these factories for making mRNA vaccines against other
01:27:11.640
things. We're definitely going to be developing mRNA vaccines against other pathogens and probably
01:27:16.520
mRNA versions of existing vaccines because it looks a little like the mRNA versions may well be better.
01:27:21.560
Chickenpox is probably going to eventually be an mRNA vaccine because it's probably better.
01:27:25.560
And if you have factories making all these other vaccines, you can immediately switch those to be
01:27:30.840
making vaccines against some new zoonotic agent that has just jumped. And that's a heck of a lot
01:27:36.680
cheaper than having to stockpile all those doses in advance for a bunch of viruses that are probably
01:27:43.480
never going to spill over anyway. My immediate rebuttal to that, I think I know your answer to it,
01:27:49.080
but let me just make it is, if that's so easy, why COVID? Why didn't we just do ring vaccination
01:27:55.240
with a vax that took a day to make and snuff that one out in Wuhan?
01:27:58.840
Well, that would have required the Wuhan officials to actually inform the Beijing central government
01:28:03.160
that there was a problem in a timely manner, and for them to have had mRNA vaccines available.
01:28:10.120
I can think of another immediate rebuttal to the ring vaccine strategy, but the rebuttal is so obvious
01:28:16.120
that even though I'm trying to be a good devil's advocate, I'm just going to lay it out there. I
01:28:19.560
mean, yes, it takes a day, but it took 340 days of testing and approval. And so how can we really
01:28:27.080
do this ring vaccination strategy? The obvious rebuttal to that is you're going to have the same
01:28:32.520
problem if you have 350 million stockpiled copies of a vaccine that itself, by definition, has not yet
01:28:39.240
been tested and approved because there was no pandemic against which to test it. So that rebuttal has a
01:28:45.640
built-in rebuttal. But I do actually want to add something here, and I'd like to just hear your take on
01:28:52.200
it. My personal belief is now that we do have mRNA vaccines, and we'll have more in three years, and
01:28:57.320
they'll be everywhere in some amount of time. And now that the safety profile of mRNA vaccines has been
01:29:03.960
well-established, my own feeling is when something scary emerges, we need to be able to access emergency
01:29:10.600
regulations. We can't have this 340-day test period, which, by the way, was record-settingly
01:29:17.000
quick. We can't have that because, as you pointed out earlier in the podcast, 100 days is nowhere near
01:29:23.480
enough time if something diabolical is on the march. And so this is something I've thought about on my
01:29:28.520
own. I'm wondering if you think it's a crazy idea to have ready-to-go, pre-approved, pre-thought-through,
01:29:34.360
pre-debated, pre-protocolized emergency provisions that if something really awful starts to happen,
01:29:42.920
society X, whether it's us or a country in the hot human-animal interface, can basically flip a
01:29:48.820
switch and say, as soon as we have a high-confidence vaccine, very high confidence in safety and pretty
01:29:54.960
damn good confidence in efficacy because we've been doing mRNA for X years now, we can at least allow
01:30:00.960
people to take that voluntarily rather than waiting 340 days before they can take it. What do you think
01:30:08.480
of that as a tool that whether we take the stockpiled approach or the ring vaccination approach, I kind of
01:30:14.540
feel like we need that tool? I mean, what you say makes so much sense that it hurts, and it hurts
01:30:19.900
especially knowing that so many of those 300-odd days could have been avoided given appropriate
01:30:27.180
institutional incentives that we're sadly lacking at both FDA and CDC. But it's not really fair to single
01:30:32.720
out those agencies in particular, because it's not like international agencies did that much better.
01:30:38.480
And to be fair, mRNA vaccines were new. It took time for the manufacturing scale up. It just wasn't there.
01:30:45.660
So even though a lot of lives could have been saved by accelerating the regulatory approval, which could
01:30:49.800
then have let the companies confidently build up even faster than under Operation Warp Speed, there is a limit to
01:30:56.700
how fast we could have done it when mRNA vaccines were new. And in future, we'll be able to do it much
01:31:01.680
faster, which means that the regulatory approval is the sticking point even more. Having a set of
01:31:07.860
people who are authorized, in the event of a nascent epidemic, to just go ahead with a combined phase
01:31:13.460
one, phase two trial in a ring vaccination format, using an mRNA vaccine targeted to the new agent,
01:31:19.360
using doses similar to those identified for viruses of the same family in the past.
01:31:24.580
I think that's just got to be on the books as something you can do and negotiate internationally
01:31:29.320
to get approval to do it everywhere. If we agree to do it, if our FDA agrees this is okay and we can
01:31:33.940
do it, then it'd be a lot easier to get regulatory agencies in other countries to agree and just have
01:31:38.680
that as the plan. That would frankly make a heck of a lot more sense, your idea, than what nations are
01:31:45.120
currently arguing about in the World Health Organization for so-called pandemic preparedness.
01:31:48.940
Okay. I think another powerful argument in favor of knowing the precise genome of a potential bad
01:31:56.600
guy in advance is monitoring the hot interfaces between the human world and the animal world,
01:32:03.720
that human-animal interface. Those are in fairly narrow parts of the world. And it seems that if we
01:32:10.420
do identify the likely spillovers from a particular region, we can put a lot of muscle into that interface
01:32:18.760
specifically targeted at this one bad guy that might emerge from there. And we're going to inevitably
01:32:26.420
put much more muscle in there into that early detection in that geographically specific place than we
01:32:33.280
would if we never did the work that deep vision proposes to do.
01:32:37.780
I think that's inarguable that you could. It's not clear how much better it would be than just looking at
01:32:44.600
the animal-human interface again without looking for which specific viruses you think would actually
01:32:51.160
cause pandemics were they to spill over. And instead saying, which animals cause the most spillover
01:32:57.460
events in which communities? And can we work to prevent those?
01:33:02.200
So you're arguing that those hot interfaces between the animal and human world can be carefully
01:33:08.000
monitored even in the absence of the precise genome. How would that happen? And how easy is
01:33:13.820
that to do? And does it require new technology or enormous budgets?
01:33:19.260
Well, so that's what the technology that's already made deep vision's job easier is doing. That is,
01:33:24.380
they're going out there and they're monitoring people who are often exposed to animals and checking to see
01:33:29.760
which animal viruses they have been exposed to. Or even they're just getting a lot of samples from
01:33:36.220
the animals that people are most likely to contact and sequencing them. And that would let you create
01:33:41.900
a model of which creatures are highest risk. And the thing is, identifying that, say, a particular kind
01:33:49.380
of bat is high risk because of the suite of viruses to which it is exposed and how often viruses from it
01:33:56.720
end up in people in regions nearby, that doesn't give anyone blueprints that could be used to cause
01:34:04.120
a new pandemic. But it does let you target interventions in communities, ensuring that
01:34:08.640
anyone who might have been exposed to a bat gets much prompter medical care and diagnostics to see what
01:34:15.140
it is they might have been infected with and resources to contain that potential outbreak before it actually
01:34:22.360
happens. Okay, next rebuttal, which I actually think is a very strong one, and which a number of people
01:34:27.960
have put to me when I raise this issue. The United States can control whether or not deep vision does
01:34:34.380
this work. But we can't stop the rest of the world from doing it. Based on what you told me about the
01:34:39.220
level of expertise and budget that it would require, there's probably not a lot of actors out there who could
01:34:44.440
do this. And there's obviously no economic incentive for any private actors to do it. But nothing's
01:34:51.020
stopping China, for instance, from doing this work. And wouldn't it be bad for China to do this work
01:34:56.980
under the cloak of darkness, for them to identify six pathogens that we know nothing about, and then
01:35:03.580
we have this information asymmetry? Isn't the danger of that frightening enough that we just can't let it
01:35:11.020
happen? And we're kind of dragged into almost an arms race? Well, you have to ask, what do we really
01:35:16.660
lose? And what do they gain from that scenario? I mean, these are viruses that are going to kill
01:35:21.660
their people as well. They're not strategically useful to great powers the way nuclear weapons are,
01:35:27.380
because they can't be effectively targeted. And you could say, well, what if they hypothetically
01:35:32.380
tried to develop vaccines in advance and vaccinate their people in advance? And there I would say,
01:35:37.800
I think it's pretty hard to vaccinate a billion people without intelligence agencies noticing that
01:35:43.980
you're doing it, and presumably getting a sample of whatever it is, or at least finding evidence of
01:35:49.780
it. And even if you somehow manage to accomplish that feat, it's going to be awfully suspicious when
01:35:55.440
it ravages every other country, but even your citizens abroad somehow never get it. That just seems
01:36:02.220
quite a reach. I think normal deterrence really operates just fine in that scenario. And again,
01:36:09.900
what do we gain from identifying it in advance? A day, when it comes to vaccine development.
01:36:16.200
If we really do have that capability of make mRNA vaccines very quickly, which is, I certainly hope we
01:36:23.160
have, and frankly, even if the government fails to invest in it, it looks a lot like the private sector
01:36:27.940
is interested in doing that anyway, because market force is to the rescue, doesn't really look to me
01:36:32.360
like we lose anything. So two responses to that. A, it would make no sense for China to do this,
01:36:38.580
unless B, they start mobilizing in plain sight. To which I'll say, governments do stupid things all
01:36:45.520
the time, even though they shouldn't. And as we can see with Russia and Ukraine right now,
01:36:51.520
governments even marshal their forces in plain sight and de facto tell the world, what are you going to do
01:36:57.260
about it? So I'd feel better about the arguments you just presented if I believed that there was
01:37:04.520
a plausible path to, say, the United States deciding vehemently against deep vision, and then
01:37:12.260
basically evangelizing that viewpoint to the rest of the world successfully. Could that happen
01:37:19.100
realistically? Is there any shot of that? I think actually we are probably the hardest audience for
01:37:24.940
that one. We'd be the hardest people to change the minds of, you mean?
01:37:28.960
Honestly, yes. I think it's inarguable that if China's leadership decides this kind of thing
01:37:33.880
shouldn't happen, it's not going to happen there. True.
01:37:36.820
Whereas in the US, if we decide that this isn't going to happen and the government isn't going to
01:37:40.500
fund it, then it's actually a lot harder for us to stop the private sector from doing it anyway.
01:37:44.620
The Global Virome Project hoped to raise a couple billion dollars from government, yes, but also a lot of
01:37:50.200
it from philanthropists to do this kind of research and assemble that ranked order list of viruses by
01:37:55.440
threat level for all natural viruses using private money. It's a lot harder for the United States to
01:38:01.340
say you can't actually do that. There are some things we could do. Most notably, we could add most
01:38:07.220
viruses with a hint of pandemic potential to the select agent list, which greatly increases the cost
01:38:13.240
of working with them by requiring background checks and ensuring that physical samples are
01:38:18.380
appropriately under lock and key and so forth. That could do a lot, but we still can't actually
01:38:22.760
stop them unless we actually decided to ban the particular class of experiments required to identify
01:38:30.620
a virus as pandemic capable. That is, to use your example, can it find the door? Can it find the
01:38:37.840
key to the lock in order to get in? Can it take over the inside once it's there? And can it actually
01:38:44.860
take over others using the animal transmission models? If we were to say, you know what? A pandemic
01:38:51.940
capable virus can kill as many people as a nuclear weapon. We spend somewhere between two and seventy
01:38:56.520
billion dollars a year on nuclear nonproliferation. Why don't we take pandemic proliferation similarly
01:39:04.580
seriously? Internationally, there is a nuclear test ban treaty. Well, those four sets of experiments
01:39:11.720
are the virological equivalents of nuclear testing. You actually make it a national security
01:39:17.360
matter, then you treat it like a national security matter and a proliferation risk, which is arguably
01:39:21.680
greater than that of nuclear proliferation. Because again, there's nine acknowledged nuclear powers
01:39:27.920
versus tens of thousands of people that could gain access to these kinds of agents once the genomes are
01:39:33.600
online. So I think that if you can convince USAID, which I think is eminently doable, and if you can
01:39:40.700
convince NIH, which I think is much more difficult, but still possible, then we could absolutely take
01:39:46.860
the case internationally that this is in our shared strategic interest as the international community
01:39:52.380
to prevent people from doing this tiny subset, less than one percent of all virology, that is the
01:39:58.460
equivalent of nuclear weapons testing. So tell me if this is a fair summary of that detailed
01:40:05.400
response. It is definitely not in China's interest that this knowledge be discovered. They have a lot
01:40:11.080
of people to protect. These are not targeted weapons. And even though governments do stupid things all the
01:40:17.240
time, they're far less likely to do something stupid if it has been strenuously and persuasively
01:40:23.140
argued to them, hey, guys, this is stupid. Okay, here's another, and again, like, I'm not just being
01:40:29.720
devil's advocate. I think some of the arguments in favor of deep vision are pretty strong. Although you've done
01:40:35.200
a decent job of demolishing a couple of them already. But this is one that I think about a lot. I believe one of the
01:40:42.900
greatest and most security-enhancing developments science could possibly conjure in response to COVID would be so-called
01:40:51.820
pan-familial vaccines, which, just a brief pocket definition for those who aren't familiar with it:
01:40:58.240
The notional pan-coronavirus vaccine would immunize the lucky recipient against substantially all coronaviruses, of
01:41:07.120
which there are countless numbers. And there was talk about, and even I believe the beginning of an effort
01:41:12.500
back in 2003 in the wake of SARS, to gin up a pan-coronavirus vaccine effort, which it was understood would cost a lot of money
01:41:20.760
and take many years. But after SARS petered out and didn't even kill a thousand people, that focus was
01:41:26.280
lost. Not because it was scientifically impossible, though it might be, but because it became
01:41:30.940
politically uninteresting. And it haunts me to think of how different the world
01:41:37.160
would be right now had that pan-coronavirus vaccine in response to SARS been completed before MERS came
01:41:45.100
along, another coronavirus, several years later, and obviously before COVID came along. Now, isn't it true that the
01:41:52.140
kind of virus hunting Deep Vision is proposing, isn't it going to get a lot more examples of a lot more corona and other
01:41:58.780
viruses, paramyxoviruses, et cetera, to inform the development efforts of pan-familial vaccines? Because I imagine to do
01:42:07.280
one of those things, you need as many examples as possible because you need to find the vulnerabilities that are
01:42:11.960
conserved throughout the family. That's exactly right. If you want a broad-spectrum vaccine, you need
01:42:18.220
a decent sample of the viruses within that family. What you don't need to know is which ones of those
01:42:23.720
could cause pandemics in humans. Because if you have a pan-coronavirus vaccine and it works against
01:42:29.200
a good fraction of the diversity, of the extreme diversity throughout the family, then you should
01:42:35.400
believe that it'll work for all of them. Because you're not going to find every last coronavirus out
01:42:39.060
there. You're only ever going to get a decent enough sample. So yeah, you do need to have the
01:42:43.300
genome sequences of a bunch of the coronaviruses, but you don't need to know which ones of those
01:42:48.360
viruses could cause pandemics. So here's where we will have to draw a really important distinction
01:42:52.220
between the virus hunting part, or just sequencing viruses in nature, to get an idea of what's out there,
01:42:59.840
and the pandemic virus identification, which is where you go back in the lab and you run those four
01:43:05.300
sets of characterization experiments. It's the latter that creates the problem from a security
01:43:10.000
perspective. And you can do the former without doing the latter. So if I were in charge of Deep
01:43:16.480
Vision, I would say, you know what? We already agreed that we would not continue to fund virus
01:43:23.760
enhancement work. Because PREDICT did fund the Wuhan Institute of Virology, not just in finding a bunch
01:43:29.220
of bat coronaviruses. They also funded research in which the Wuhan folks made chimeras of some of
01:43:36.080
the more dangerous looking ones, the ones that passed one or another of the tests, but not all
01:43:40.500
of them, and mixed and matched the pieces to see if they could make something that was more dangerous.
01:43:45.720
So Deep Vision, to their very great credit, has said, we're not going to fund that anymore.
01:43:50.280
And that's a very important point. And I'm glad you surfaced it. And I just want to highlight it
01:43:54.280
because there are a few definitions floating around here. Deep Vision is already a step in the
01:43:58.860
right direction. They've already made one sensible step by saying we're not going to do what many
01:44:03.220
call gain-of-function. And the next step is to say the virus discovery part, the virus hunting,
01:44:10.400
is important, not just for pan-family vaccine development, which, whether or not it's actually
01:44:14.040
possible, is worth a shot for exactly the reasons you articulated, but also for pan-family
01:44:19.640
antiviral development. So if I were in charge of Deep Vision, I would say, just like we said,
01:44:25.380
no more gain-of-function, well, we're still going to go out there and sequence a bunch of viruses
01:44:28.680
to help out the broad-spectrum folks. But we're just not going to take them back to the lab and
01:44:32.540
run those experiments to determine which ones are most likely to cause pandemics. And we're
01:44:37.000
certainly not going to add them to a list of viruses rank-ordered by threat level.
01:44:43.020
Now, just to drill down a little bit more on that fantasy situation of Kevin Esvelt,
01:44:47.320
Deep Vision director, you pointed out that the characterization work is actually probably
01:44:51.520
a very, very high percentage of the budget. So if you're running Deep Vision and you've got that
01:44:56.520
budget, what else would you do? Spend on the monitoring, not on the prediction. It's that
01:45:01.580
simple. And there's an opportunity cost to budgets. I'll just point out an obvious fact.
01:45:06.960
If they're spending 20 million of the 25 on characterization, they ain't got that 20 to
01:45:10.500
spend on these other things. And I know that takes me a little bit out of my semi-devil's
01:45:15.540
advocate-y role, but I'm actually done with it because I've now made pretty much all of the arguments
01:45:22.760
that I've heard in favor of Deep Vision, some of which, as I said, aren't all that bad.
01:45:25.680
So your responses to those arguments and also your assessment of the overall situation seem so
01:45:34.680
intuitively obvious once one hears them. So how is it that this program
01:45:43.400
is going forward with its stated objective of posting what I'm going to call weapons of
01:45:49.500
mass destruction to the internet? USAID deserves a tremendous amount of credit for recognizing that
01:45:55.600
one of the greatest threats to the poor comes from pandemics. And the problem came when they
01:46:01.940
took the reasonable seeming step of saying, you know what, we could target all these efforts
01:46:06.880
more effectively if we knew exactly which viruses were the most risky. I don't think anyone should
01:46:14.340
blame the folks at USAID for failing to notice this. Because first of all, USAID leadership
01:46:22.580
inherited Deep Vision as a program. The new administrator, Samantha Power, was confirmed only three months
01:46:29.280
before the announcement, which means that the program was pretty much fully established and just
01:46:33.600
needed the stamp. And what's more, even among the folks who were working on PREDICT, like Dennis Carroll,
01:46:41.480
who launched the program, as far as I know, no one ever mentioned that this could pose a security
01:46:48.700
risk, let alone a proliferation risk greater than that of nuclear weapons, during all of PREDICT and
01:46:55.400
afterwards, and that includes folks who are super worried about pandemics and even do think about security
01:47:01.840
issues. I mean, you yourself, when you heard that PREDICT was canceled, you thought that was a bad thing.
01:47:07.340
Absolutely. So no one, I think, pointed out that this was a security risk until after Deep Vision
01:47:14.260
was announced. And that includes folks who do have security experience, which is not something
01:47:20.940
that anyone at USAID is expected to have or is trained to have in any way, shape, or form.
01:47:27.840
These are people who have passed up, frankly, much more lucrative salaries in the private sector
01:47:32.200
in exchange for the opportunity to help some of the most vulnerable people in the world. The poorest
01:47:39.080
of the poor, the folks who have really been left out of all the benefits that have accrued from all
01:47:44.520
of the technologies that we've developed, all of the economic growth. And they identified pandemics
01:47:50.740
as one of the things that could most harm the poor and vulnerable. And they were right. I mean,
01:47:56.800
remember, they did this before COVID, more than a decade before COVID. And they did their best to
01:48:02.880
come up with a program to prevent them. And they started out by doing the really reasonable thing,
01:48:08.260
saying, we need to know which communities are most at risk. We need to identify what we can do in
01:48:15.380
order to limit potential exposure that could lead to spillover events and cause epidemics. We need to
01:48:20.260
ensure that they have good medical care and diagnostics to identify viruses quickly, train their medical
01:48:26.320
workers in rapid response, give them support for isolation protocols and everything required to
01:48:32.640
give the best chance of containing the epidemic, thereby protecting not just that vulnerable community,
01:48:38.020
but vulnerable communities throughout the world. And they did all this again pre-COVID. So that's
01:48:42.800
to their tremendous credit. So bottom line: as self-evident as these arguments certainly seem to me, having
01:48:49.300
heard them, they just were not self-evident until people like you started raising them. That happened very
01:48:55.400
recently. The fact that nobody, not just inside of PREDICT, but in society, pointed to this danger
01:49:01.900
over the 11-year history, I think, of PREDICT, makes it pretty clear that these are not obvious or
01:49:07.440
natural arguments to arise in the mind of somebody who is not tasked with security, or even of people who are.
01:49:14.260
That's exactly right. You can't expect folks who have devoted their lives to serving the poor to recognize
01:49:20.760
security risks that their work might be creating when folks who do have that kind of security background didn't notice them either.
01:49:29.760
Now, to wrap this up and also to bring it home in a really important way, I'm just going to point out to listeners
01:49:35.940
the reason we're having this conversation and we're getting it out there as quickly as we can is because although
01:49:42.260
Deep Vision has been approved, it hasn't yet launched. Is there any evidence out there that it has actually gotten underway?
01:49:50.300
It's really hard to say. There's certainly been the press release announcing what they were going
01:49:55.500
to do, that the program existed. But there isn't anything out there suggesting that funds have been
01:50:00.080
disbursed, certainly not to begin the characterization. And remember, the characterization comes after the
01:50:05.340
virus discovery part. So even if they've begun the virus discovery, that doesn't mean they're taking
01:50:09.460
them back to the lab and running those four classes of experiments that are, again, the virological equivalent of nuclear weapons testing.
01:50:16.240
So it's more or less beyond a shadow of a doubt that this train has not left the station, that Deep
01:50:22.140
Vision's objectives might be shaped if people start thinking about them differently, or perhaps all of
01:50:28.320
its budget could be directed toward anti-malarial bed nets, or who knows what. And as a statement of obvious
01:50:34.680
fact, it's much easier to influence the shape and objective of a program before it starts than after
01:50:41.600
X dozen or hundred people are working for it and are deep into their objectives. So now feels like a really
01:50:47.140
important time to get these arguments out into the world, which is obviously why we're doing this. If anybody
01:50:55.880
who's listening to this is concerned, what might they do to try to influence folks, to try to spread the word,
01:51:02.260
etc. Well, I don't want to be irritating and say everyone should do something that would really bury someone who is
01:51:08.240
not even directly involved in this with a mass of messages. But I would suggest, you know, despite social media being
01:51:14.100
consistently identified as one of the leading candidates for a net negative, USAID has a Twitter account,
01:51:20.120
@USAID. You could tweet at them and say, this program could do a lot of good in some ways, but the security risks
01:51:27.580
inherent in pandemic virus identification seem pretty considerable. I think you should reconsider that
01:51:33.240
and perhaps move all those funds into the other aspects of the program that could really contain a future outbreak.
01:51:40.620
Yeah. And I totally agree that it is probably counterproductive to bury any particular individual
01:51:46.020
with messages on the subject. But in addition to tweeting at USAID, which is a great idea because I'm sure
01:51:52.200
that account is monitored by the folks inside, people can go to USAID.gov slash contact hyphen us
01:52:00.180
where you will find the following message. General inquiries and messages to USAID may be submitted
01:52:06.880
using the form below. They also have a phone number. And again, we don't know this factually,
01:52:13.020
but it stands to reason that this is an email account or a submission process that is monitored.
01:52:18.980
And USAID isn't like the IRS where they get literally tens of millions of consumer requests
01:52:25.160
in a very short period of time each year. So I think that's another mechanism. And perhaps
01:52:30.720
a more reliable on-ramp to government is through elected representatives. If you reach out to a
01:52:37.060
representative of whom you're a constituent, they do have staff to field all of those inbound
01:52:42.600
messages. And I know this because I had quite a few friends in college whose job it was during their
01:52:47.120
summers in Washington to deal with these things. I'll give the specifics in the outro. But just for
01:52:52.040
now, if you live in Maryland, Virginia, Hawaii, Connecticut, Massachusetts, Tennessee, Kentucky,
01:52:58.640
Texas, Wisconsin, Florida, New Jersey, or Idaho, it's a lot of states, you have a senator
01:53:06.320
on the United States Senate Foreign Relations Subcommittee on State Department and USAID Management.
01:53:12.880
So those would be good people to alert as well. And then, of course, there's social media, blogging,
01:53:19.400
whatever megaphones you happen to have. If you feel like spreading this word, there's lots of ways
01:53:23.940
to do it. And please do so. And we thank you. Kevin, is there anything we haven't hit on that you think we should?
01:53:31.800
I think just the precedent that this would set. USAID has already done the right thing by stopping virus
01:53:37.040
enhancement research. If they recognize that this is a problem and decide they're not going to do it,
01:53:41.860
then that is one more step towards the US as a whole moving away from identifying pandemic-capable
01:53:48.420
viruses and sharing the blueprints online, and thus being able to credibly lead the international
01:53:53.660
community towards something like a virological test ban treaty for pandemic non-proliferation.
01:54:00.760
Which would just be so powerful. Important as it is to do what we can to not allow the work that's
01:54:08.320
currently being contemplated to happen, if it's the start of a series of dominoes that precludes
01:54:15.340
an enormous amount of this work happening on a go-forward basis, that's profoundly powerful and
01:54:20.320
potentially profoundly curative. So please, listeners, don't despair. There's a lot of
01:54:26.040
concerning information here. But this horse is not out of the barn at all. And we may actually
01:54:32.400
be in an extremely propitious historic moment to dramatically slow and perhaps even put a stop to
01:54:40.740
the most threatening activity that we've talked about today. And that's why both Sam and I think
01:54:46.460
this is a particularly important conversation with what is perhaps extraordinarily significant timing.
01:54:54.420
So thank you, Kevin, very much for joining me today.
01:54:57.180
Well, thank you for the invitation and for, again, highlighting this potential issue of
01:55:02.400
inadvertent proliferation and what we really can do to stop it.
01:55:06.500
And listeners, please stick around for a brief moment of a couple outro thoughts and more detail
01:55:12.040
on those 12 states and who your representative is if you are moved to reach out to that person.
01:55:16.920
Okay, so that's a lot to process. But I hope you collected enough background information as well
01:55:27.760
as a rich enough sense for both sides of the debate to make your own informed judgment about
01:55:32.440
whether you share Kevin's concerns. If you do and would like to help the situation, I have a couple
01:55:37.780
more suggestions before I list those senators. First, USAID has designated Washington State University
01:55:43.680
to coordinate most of the scientific work that Deep Vision is funding. So if you have a WSU tie,
01:55:50.540
then your school or employer or alma mater is Deep Vision Central. And if you know any heavy hitters
01:55:57.080
over there, you may want to share your perspective with them. Next, after Kevin and I wrapped up,
01:56:02.460
it occurred to me that at least someone who's hearing this, and maybe quite a few someones,
01:56:06.460
probably knows Samantha Power, the head of USAID herself, or other heavy hitters inside the agency.
01:56:12.400
If you are that someone, and are deeply worried about this, then please pass on your feelings.
01:56:18.700
Or just a link to this episode to Samantha or one of her senior lieutenants. Finally,
01:56:24.960
the members of the Senate Foreign Relations Subcommittee on State Department and USAID Management.
01:56:31.440
Quite a mouthful. If you're from one of the 12 states I mentioned, here are your representatives on
01:56:36.600
that subcommittee. Maryland, it's Ben Cardin, and he is the chair of the subcommittee.
01:56:41.800
Tennessee, it's Bill Hagerty, and he's the ranking member, which means he's the most senior member
01:56:48.360
of the opposition party, currently the Republicans. For Virginia, Tim Kaine. Kentucky, Rand Paul.
01:56:56.700
For Hawaii, Brian Schatz. For Texas, Ted Cruz. For Connecticut, Chris Murphy. For Wisconsin,
01:57:04.700
Ron Johnson. Which does rhyme. For Massachusetts, Ed Markey. For Florida, Marco Rubio. For New Jersey,
01:57:14.520
Bob Menendez. And for Idaho, Jim Risch. And that's all I've got. So thank you so much for listening to