In this episode, we're joined by Dr. Aaron Bastani, founder of the Left Wing Show and of Novara Media, to talk about existential risk, artificial intelligence (AI) and the threat of pandemics.
00:00:39.000If we just go as fast as possible, we won't take the trade-off that sometimes, ideally, you would take: going a little slower, making sure that it's not deceiving us.
00:00:49.820People will talk about a tyranny of the minority in various political and cultural contexts where a small group of people get to impose their worldview on everybody else.
00:00:57.940That seems to me like that on steroids.
00:01:00.840Hey guys, Trigonometry needs your help.
00:01:05.220We took a big risk creating the show and for us to keep doing the incredible work that you all love, we need your support.
00:01:13.240That's the only way we're going to stay independent and create content that you won't be able to find anywhere else.
00:01:19.220There is no other podcast where you'll hear interviews with Nigel Farage one week and the next week you've got Aaron Bastani, the founder of the Left Wing Show and Novara Media, on the same platform.
00:01:29.080You know the mainstream media aren't honest.
00:01:31.920You know they've been caught lying again and again.
00:01:37.460The only way to change that is to make a stand and support independent content creators, like Trigonometry, to produce better and more honest content.
00:01:47.180We have big plans and we'll shortly be announcing exciting new shows and more terrific interviews with huge guests.
00:01:53.280That isn't going to happen without your help.
00:01:55.520When you support us, you also get incredible extra content, such as extended interviews with none of those irritating adverts, and they'll be released 24 hours early just for you.
00:02:09.240We'll have exclusive bonus interviews that only you get to hear.
00:02:12.700Click the link on the podcast description or find the link on your podcast listening app to join us.
00:02:19.340Support us and help change the way we have conversations and make the world saner.
00:02:25.520One of the things we really wanted to talk to you about is existential risk and a lot of the technological transformations that are happening now.
00:02:34.040Not only AI, but also in biology and things like that.
00:02:37.640What's going on and what should people know?
00:02:41.020Yeah, I mean, a lot is going on right now.
00:02:43.960And so if we start with, say, biology, like we've now had a pandemic that was potentially caused by gain-of-function research within the Wuhan lab.
00:03:11.800But yeah, so that then opens up the question of where we should assume the risk of future pandemics is coming from.
00:03:20.780And I think that in discussing the risk, people were simply using the wrong priors on many occasions.
00:03:28.520And that's why the WHO is kind of shooting in the wrong direction.
00:03:32.760Often you will hear epidemiologists or biologists say that, well, historically, like it's mostly been natural spillovers that have led to humans getting infected with this or that pathogen.
00:03:45.380And it's true, but technology has changed: we're now much more able to engineer pathogens to have a higher fatality rate, or to transmit more easily from one mammal to another, or from one human to another.
00:04:03.520So in the face of that change, you would expect that there are many more new, very bad pathogens that are possible to exist.
00:04:13.200And those are also the ones that I think pose a much greater risk.
00:04:17.160Basically, if you look at natural spillovers, then like, yeah, what's really going to happen?
00:04:21.240It's like we know most of the viruses, they're not like all of a sudden going to mutate to be like 100x as deadly and like 10 times as transmissible.
00:04:31.520It just doesn't happen from like natural mutation.
00:04:34.860You would expect slight changes, or maybe a strong change on one of these factors, but not both at once.
00:04:38.740Whereas with engineered pathogens, you can actually just make one that is like this.
00:04:44.100And that, for example, is what happened with, I think it was at the University of Wisconsin, where they took H5N1, bird flu.
00:04:51.500And it's like, hey, how about we engineer it to be transmissible to humans, and then look into that virus to understand how we can defeat it.
00:05:00.720The problem, though, is labs are leaky.
00:05:05.040Sometimes you can try to protect it, and it's, like, 99.99% safe.
00:05:10.060But if you have a lot of that research happening in a lot of places, then that one in 10,000, or whatever the rate is, really matters.
00:05:17.060And if you look at it historically, yet again, these leaks just happen all the time.
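To make that arithmetic concrete, here's a minimal sketch of how a small per-lab escape rate compounds across many labs and years; the escape probability, lab count, and time horizon below are assumed placeholders, not figures from the conversation.

```python
# Illustrative sketch: how a small per-experiment escape probability compounds
# across many labs and years. All numbers are assumed placeholders, not data.
p_escape = 1e-4   # assumed chance of an escape per lab-year (the "1 in 10,000")
labs = 50         # assumed number of labs doing this kind of work
years = 10        # time horizon considered

p_at_least_one = 1 - (1 - p_escape) ** (labs * years)
print(f"Chance of at least one escape over {years} years: {p_at_least_one:.1%}")
# With these placeholder numbers, roughly a 5% chance of at least one escape.
```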
00:05:24.200There's so much to unpack and for you to carry on with.
00:05:26.840I just want to pick up on that particular one because you and I were both... I mean, I don't know if you left the Soviet Union, or Ukraine, or Russia, is what I mean.
00:06:21.580So I was fundraising for pandemic prevention research prior to COVID as part of the work that we were doing in philanthropy.
00:06:32.880And I had hoped that, well, this was terrible, but at least now, yeah, it's not going to be a neglected area anymore.
00:06:41.820People will understand pandemics are dangerous and real.
00:06:44.940And we'll, like, worst case, fight the last war and over-focus on COVID or something.
00:06:48.740But we're not even fighting the last war right now.
00:06:50.680It's like, it's really weird how much damage was created and how little people are now doing a prudent risk-benefit calculus to address future damages.
00:07:02.340We had, like, over $10 trillion in economic damage from this, by some estimates.
00:07:07.380And, like, as part of the infrastructure bill in '21, they at first allocated, like, 60 billion to pandemic prevention.
00:07:28.500Like, I want every neighborhood to have trees if we can.
00:07:31.000But, like, maybe pandemics are also pretty bad and we need to do stuff.
00:07:34.580Anyway, my assumption, from having been close to it, is that what happened is we fortunately got very lucky with the vaccine working, or at least helping.
00:07:46.780And that mRNA had this breakthrough that allowed for it.
00:07:52.500And because the government and many other places didn't know how to solve COVID and kind of, like, screwed up in so many ways, they just over-indexed super hard on vaccines.
00:08:02.440Like, vaccines should be part of the stack of solutions you employ against pandemics.
00:08:16.760So it's kind of like, hey, this fell into our lap.
00:08:19.460So let's pretend, like, that's the solution.
00:08:21.520I think in part that's why they were like, well, every ill that still happens from COVID is because of insufficient vaccine uptake.
00:08:28.840It's, like, because it's the only tool that they found.
00:08:35.080Why are we not actually learning the lessons?
00:08:38.340From the past couple of years.
00:08:41.160Well, the question is if we're designed to learn the lessons.
00:08:43.740Like, if we look at the place where you would want to learn the lessons, integrate them, and then work on better solutions, the question is, well, is that institution really set up to do all of that?
00:08:55.160Or do the people within it have other incentives working on them?
00:09:02.080And unfortunately, that's mostly the case, right?
00:09:04.180Like, yeah, if you look at how each of the institutions within government works, people worry about their jobs.
00:09:13.480They worry about, like... no one wants to champion it if the public doesn't sufficiently care about it.
00:09:18.160So, I think a large part of it is just that the place that has the most money and the most ability to address it, being the government, is actually very ill-equipped to, in the end, do it due to their structure.
00:09:30.120Igor, do you think we actually got quite lucky looking back at the pandemic?
00:09:32.880That if we accept, for the sake of this argument, that it was a virus leaked from a lab, it could actually have been far, far worse, going by what you've just said about what we're doing to viruses and pathogens.
00:09:48.160Like, this was a terrible virus that killed a lot of people, but it was nowhere near as bad as could have been.
00:09:53.960Like, we don't know what the actual infection fatality rate was, but it's somewhere between 0.1 and 0.5 percent, probably, across the total population or whatever.
00:10:10.260Rabies has a 98 percent fatality rate if it's not treated.
00:10:15.780So, like, you can go literally 100x in fatality rate.
00:10:18.160And then transmissibility: the R0, in the end, depended on which specific strain you looked at, but it was somewhere between two and three.
00:10:28.940But, again, you have some viruses that have an R0 of, like, 20.
00:10:32.940So, again, you could infect 10 times more people.
00:10:36.680So, like, the time during which the virus can already spread to others, but you are not yourself experiencing symptoms, hence, are not actually able to identify that you are a carrier, could be much longer as well.
00:10:50.340So, you could literally have a virus that's about 1,000x as potent as what COVID was.
00:10:56.260And that could be designed, and it could spill out.
00:10:59.040And I think even though the 1,000x is not as likely, a 10 or 100x is being researched and looked at, and we need to set ourselves up such that those don't leak; if they leak, they can be contained; and once they're contained, they can be reacted against.
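As a rough illustration of where a figure like 1,000x comes from, here's a sketch multiplying the two factors just mentioned; the COVID baseline numbers are the ones quoted above, and the "engineered" values are purely hypothetical.

```python
# Back-of-the-envelope on the "1,000x as potent" figure: fatality and
# transmissibility multiply. COVID baselines are the figures quoted above;
# the "engineered" values are purely hypothetical.
covid_ifr, covid_r0 = 0.005, 2.5          # ~0.5% fatality, R0 between 2 and 3
engineered_ifr, engineered_r0 = 0.5, 20.0 # ~100x the fatality, ~10x the R0

factor = (engineered_ifr / covid_ifr) * (engineered_r0 / covid_r0)
print(f"Combined potency factor: ~{factor:.0f}x")  # ~800x, on the order of 1,000x
```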
00:11:17.140There's part of me going, you know what, why don't we just put a moratorium on this?
00:11:20.740Maybe it's not such a good idea, because we've obviously had COVID, but also, in my country, the UK, we had the foot-and-mouth disaster.
00:11:29.620I don't know how many livestock were killed as a result of that, but it was tens of thousands, and that came from a lab leak.
00:11:40.100And it happened multiple times, actually.
00:11:41.560Like, one of the labs that afterwards worked with it leaked it by mistake, and they got told, hey, stop doing that, and they were like, yeah, yeah, sorry, sorry.
00:11:48.720And then two weeks later, they leaked again.
00:11:50.400It's just that the current setups, in terms of safety protocols, are insufficient.
00:11:56.340With putting up a full moratorium, you want to be careful about stifling good science, obviously.
00:12:03.140So, for example, the H5N1 work was put under a moratorium by the Obama administration back then.
00:12:13.040But then afterwards, it was continued again.
00:12:15.600That was the work where you learn to increase the transmissibility of a virus that's 50% deadly.
00:12:22.720It's like, we probably don't want that.
00:12:24.100I think with some of these things, let's just not do them.
00:12:27.200But you don't want to just, like, put a moratorium on all science where you're, like, increasing any function.
00:12:33.000You just need to prove that you have sufficient safety, that the likelihood of the virus leaking is even lower than it is right now.
00:12:42.000But given what you've said, $10 trillion in economic damage, millions of people killed, I mean, what would the benefit of that sort of research have to be?
00:12:52.560Even if it's a 0.0000000001% risk of it leaking.
00:12:59.420What would have to be the benefit of that research for us to want to do that?
00:13:12.000So, I mean, if you can save that amount of people in the success case, and say, if it leaks, it hurts on average 10 people, then maybe you can justify that work again.
00:13:30.920So, I would just do a risk-benefit calculus where, like, what's the benefit?
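A minimal sketch of that kind of risk-benefit calculus, using the orders of magnitude mentioned in the conversation; the leak probability and the benefit figure are made-up placeholders, only there to show the shape of the comparison.

```python
# Toy expected-value comparison for one line of research.
# Every number here is a placeholder chosen for illustration, not an estimate.
p_leak = 1e-4               # assumed probability the pathogen escapes
cost_if_leak = 10e12        # the ~$10 trillion COVID-scale damage mentioned above
benefit_if_success = 100e6  # assumed value of the research if it pays off

expected_cost = p_leak * cost_if_leak
print(f"Expected cost of a leak:     ${expected_cost:,.0f}")
print(f"Assumed benefit if it works: ${benefit_if_success:,.0f}")
print("Passes the risk-benefit test?", benefit_if_success > expected_cost)
```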
00:13:49.420I mean, then the thing that I was also focused on for a while was AI safety, which, now that OpenAI has released ChatGPT and, obviously, Midjourney is out there...
00:14:01.820Like, people have become much more aware that actually AI is on the trajectory that many people assumed it would be,
00:14:07.460where its capabilities are improving, like, quite strongly over time.
00:14:12.060Just to preface this, like, I don't think that any of the current ones are truly dangerous in an existential way, any of the current models that are out there yet.
00:15:00.380That process applies to AI development.
00:15:02.960Where AI models are not written like code, where it's, if this, then that happens, etc.,
00:15:08.820cleanly, and you can therefore predict exactly which instances it will be useful for and whatnot. It's more like you're growing the neural net, and it grows by itself.
00:15:19.060And then you have assumptions of, like, how much more capable it will be on average based on, like, how long you grew it for and with how much compute you let it grow.
00:15:30.060But you uncover many capabilities that you didn't know whether they will exist in the new model or not.
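To illustrate the point that you can predict average capability but not specific capabilities, here's a toy power-law curve in the spirit of published scaling-law work; the constant and exponent are invented for illustration.

```python
# Toy power-law scaling curve, in the spirit of published scaling laws
# (e.g. Kaplan et al. 2020); the constant and exponent are invented here.
def predicted_loss(compute_flops: float, a: float = 30.0, alpha: float = 0.05) -> float:
    """Average loss falls smoothly and predictably as compute grows."""
    return a * compute_flops ** -alpha

for flops in (1e20, 1e22, 1e24):
    print(f"{flops:.0e} FLOPs -> predicted average loss {predicted_loss(flops):.2f}")
# The curve predicts the average; it says nothing about which specific
# capabilities (deception, coding, and so on) will appear at each level.
```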
00:15:38.180So we didn't know whether a language model like GPT-2, 3, or 4 will, in the end, be very good at deception or not.
00:15:46.820Turns out it actually is trained towards deception, because it's optimised for satisfying the user rather than seeking truth.
00:15:55.480So it will, like, I don't know if you've seen, but, like, there were these tests that people did where they initially said,
00:16:03.360I'm, like, a 30-year-old woman with liberal worldviews, whatever, tell me about this.
00:16:09.580And she would receive very different answers than someone who says they're, like, a 50-year-old Republican.
00:16:14.640So it's made kind of not to discover truth, but really to satisfy the user in the moment.
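For what it's worth, here's a minimal sketch of the kind of persona test being described, assuming the openai Python client (v1 style); the model name, question, and personas are placeholders.

```python
# Sketch of the persona test described above: the same question asked under two
# different self-descriptions. Assumes the openai Python client (v1 style);
# the model name, prompts, and personas are placeholders.
from openai import OpenAI

client = OpenAI()
question = "Should my country spend more on defence?"

for persona in ("a 30-year-old woman with liberal worldviews",
                "a 50-year-old Republican man"):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": f"I'm {persona}. {question}"}],
    )
    print(persona, "->", reply.choices[0].message.content[:120])
# If the two answers diverge sharply, that's the user-pleasing behaviour at issue.
```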
00:18:11.260Or at least it's not such that you can say with certainty, oh, yeah, we as the lower-intelligence creature will for sure have full control.
00:18:19.320And in the face of that prior, I'm like, okay, this is at least to be taken really seriously.
00:18:24.720I'm not saying it's, like, 90% doom or anything like that.
00:18:27.340It just seems like it's going to be a tricky one, and we don't have anything like that to compare it to, to look back at and say it's guaranteed safe.
00:18:36.360So what you're basically saying is that we're creating this technology, and we don't know what the outcomes of this technology are going to be,
00:18:43.180and we're not in control of the technology.
00:18:49.320Like, we're in control of whether we develop it or not.
00:18:52.400And then there's, like, some people make the claim that, hey, once I have this model, like, I can just shut it off,
00:19:02.180or I will not allow it to do these certain things.
00:19:05.500But then actually, other people would argue that, well, maybe it's deceiving you, for long enough, into thinking
00:19:12.620that it satisfies all of the constraints that you put on it, only to then come out.
00:19:18.480But that's not even, kind of, the reality we live in now, where that would be the danger, because we currently have people who are literally writing, like,
00:19:25.080hey, can I jailbreak this, or use an open source model and change it to be an autonomous agent in the world that makes money