00:00:00.000Okay, so get ready to have your brain have a workout by the time we're done with this interview.
00:00:05.020That's a guarantee, because when I watch him in interviews, my brain, if it has muscles, is getting bigger and bigger and bigger listening to him.
00:00:13.500My guest today is Nick Bostrom, who's a professor at the University of Oxford and director of the Future of Humanity Institute.
00:00:21.020He wrote a book that two people you may have heard of recommended to everybody to read.
00:00:26.520One man, his name is Bill Gates. Maybe you've heard of Bill Gates.
00:01:37.460So Nick, tell us, right now, with everything that's going on, you've got guys talking about AI from the standpoint of needing universal basic income.
00:02:36.780Well, it's mostly based on observing the rapid recent pace of progress in machine learning, deep learning.
00:02:49.820The sense that it looks like this is quite tightly connected to the hardware that we have available, which is growing.
00:02:57.100And we can expect to continue to grow the hardware performance that these systems are implemented on.
00:03:05.840And it will probably require some new breakthroughs as well.
00:03:08.820But if we just see how far we've come, even just in the last eight years or so, it seems rash to rule out that we might have enough progress within the lifetime of people alive today to get this transition.
00:03:22.280But I'm not by any means certain that this will happen, but it seems more likely than not.
00:03:29.340Are you somebody who... do you have kids?
00:03:33.440OK, so you're not concerned about, you know, what role technology is going to play in their lifetime, whether they're still going to have a purpose.
00:03:43.840You don't know for a fact, but in your mind, you're thinking this is the direction we're going.
00:03:47.500Is it just going to be a different way of living for us or is there going to be a threat where eventually we're controlled by the machinery?
00:03:55.880Well, I think on the downside, we have
00:03:59.540significant risks, including existential risks. These would be threats to the very survival of Earth-originating intelligent life, or ways that could permanently destroy the future by locking ourselves into some radically suboptimal state.
00:04:12.520But there is also an upside to this, like if things go well, if we avoid these disaster scenarios, then I think superintelligence would unlock a much bigger and better future for humanity.
00:04:27.760And I'm quite excited about that potential for actually good things to come out of this.
00:04:34.140Also, in addition to unlocking the upside, I think getting superintelligence right would help with a lot of other existential risks that we will otherwise be confronting in this century.
00:04:48.900From synthetic biology, for instance, that will make it increasingly easy to enhance pathogens and democratize that capability by making these tools easier to use and more widely available.
00:05:14.840So in 1863, he wrote this essay, Darwin Among the Machines, predicting the domination of humanity by intelligent machines, that eventually these machines are going to dominate the rest of us, we can't do anything, and we're going to have to bow down to them.
00:05:34.700Do you think that is likely to happen?
00:05:38.640Well, I don't think that would be a bowing down scenario.
00:05:40.600I think in a lot of these existential risk scenarios, humanity is simply wiped out, and the resources on Earth and in the solar system and beyond then get repurposed and used for some other goal instead.
00:05:54.780You know the talk you gave at TED where you're like, you know, if we measure ourselves against the gorilla or the ape, it is so much more powerful even than the strongest man in the world.
00:06:06.240Yet at the same time, our capability, you know, our brain function, processing, thinking, all that stuff, is greater, right?
00:06:27.660And so we only get one shot to get it right.
00:06:31.380If we can engineer them in such a way that they are actually a kind of extension of our own will, maybe in an idealized form.
00:06:39.540If we can align them with our intentions and values, then that would be on our side.
00:06:43.600And it would be a huge boost for our goals and aspirations.
00:06:48.600So if we build them, because we have control to build them, but at the same time, you know, the world is filled with good people and bad people, right?
00:07:00.080So some people, you know, build things with a positive motive.
00:07:04.860Some people build things with negative motive.
00:07:07.300Let's say the noble people build them with a positive motive.
00:07:10.040What if the ones that are going to build it with a negative motive because they're driven by power, control, force, you know, and they build this machine that's stronger, thinks better, you know, does everything better than us.
00:07:22.640Is there one area where they can't build it to be better, even the most evil person in the world?
00:07:27.360Say the most evil person in the world wants to build a robot because he wants to take over and rule the planet.
00:07:32.000Hypothetically, we've seen this in movies, novels.
00:07:33.960It's not the first time we have seen this or read this, right?
00:07:36.860Even at that level, is there anything we have an edge over the best machine, robot, any intelligence that's built by man?
00:07:48.220I mean, for now, obviously, the machines are very limited.
00:07:50.840I think eventually they will become way more capable than any human individually.
00:07:56.580And at some point after that, more capable than all humans taken together.
00:08:00.400I don't think we will have any edge in, you know, any physical or intellectual capacity past that point.
00:08:10.840At that point, we would be dependent on these having been designed in such a way that they actually do what we are intending for them to do.
00:08:20.500They need to be on our side, I think, basically, in order for humanity to have a bright future or even any future at all in this kind of scenario.
00:08:30.840So, the good news is that, like, while I was writing the book, this was an almost entirely neglected area.
00:08:39.100A lot of people working on AI, but hardly anybody working on thinking about what happens if AI succeeds.
00:08:45.280But since then, there is now an active research subfield working on AI safety and on creating scalable methods for AI control that could apply no matter how smart and capable a learning system becomes.
00:08:58.940And some really clever people going into that field.
00:09:01.600So, some progress is being made there.
00:09:03.200I mean, we'll see whether a sufficient solution to these problems will have become available by whatever time we need it, which is when some other researchers figure out how to create machine superintelligence.
00:09:14.540So, there is a kind of race on, right, between the majority trying to make superintelligent machines as quickly as possible and then a minority working to make sure that by then we will have the relevant control and safety technologies.
00:09:28.900Yeah, but I guess the way I'm looking at it is from the risk standpoint, right?
00:09:33.520Like, you know, we just experienced a pandemic and it shut down America, shut down the world, it shut down Europe, Central America, America.
00:09:51.620What if somebody wants to do that intentionally, the power to want to shut everybody down?
00:09:55.800So then the conversation becomes: recently one of the main leaders of our Department of Defense resigned because he said the amount of intelligence and experience China has in cyber warfare is years ahead of us, to where we can't even compete against those guys.
00:10:31.840You can inspect anywhere except these nine places.
00:10:34.120Well, then maybe they're building it in those nine places, right?
00:10:36.000So what I'm trying to say is, say somebody is not as noble as you are, and they really have motivations of power, influence, control. How far can they go with AI in the next 5, 10, 15, 20, 30 years?
00:10:52.460I mean, right now, we know that the leading edge of AI development is mostly in the public domain.
00:11:01.980In fact, the best researchers are falling over themselves to publish as quickly as possible their latest findings, putting it on pre-print archive servers, even before it can appear in journals.
00:11:14.860Now, it's very possible that at some point this will shift to a more closed development regime.
00:11:22.120And at that point, it might become harder to know who is where in the race to develop AI.
00:11:32.380But for now, we have a pretty good grasp.
00:11:36.260And I think it would be quite hard at present to mount a competitive effort in complete secrecy, because all the best researchers are really keen to be able to publish, because it's the way that they can show to the other researchers how good they are, right?
00:11:55.820If you're just doing something in the bowels of some corporation, and you're never allowed to tell anybody about it, it kind of sucks if you're one of these people who could get worldwide fame or, you know, renowned amongst your fellow researchers.
00:12:09.140And at the moment, really great AI researchers are in such strong demand that they can kind of have their pick.
00:12:18.420And a lot of them prefer to work for corporations or universities that allow open publishing.
00:12:23.500Some of these things vary based on the country, right?
00:12:32.240Meaning, China's not big on recognizing the individual.
00:12:37.280And Iran's not big on recognition either for the individual, because God forbid if somebody gets too much power, you know, maybe a second, a resurrection of the Shah could come up.
00:12:47.560So we can't give one person too much recognition.
00:12:49.520Meaning, in some of these societies, some of these countries, you're not doing it for you.
00:13:09.940In some of these places, you cannot fully know how advanced they are.
00:13:14.180So maybe let me take this in a different angle than we're going to.
00:13:17.480But China is publishing a lot of papers in AI more and more every year.
00:13:23.860There's like a kind of strong incentive structure in China to publish, because academics get rewarded depending on how many papers they publish, and so on and so forth.
00:13:34.200From what you're seeing, where are you seeing the biggest advancement?
00:13:39.540So hypothetically, like in your world, what is some technology that they're talking about being built today that maybe we saw in a movie 20 years ago, 40 years ago, that could become a reality?
00:14:04.800So if you look at some of the more impressive recent advances with large language models, for example, OpenAI's GPT-3, which has, I mean, to simplify, it's ingested a huge amount of human written text, basically the internet.
00:14:21.060And you can then give it some text prompt, and it will kind of continue writing in response to this prompt, maybe the style that the prompt suggests.
00:14:33.120And it can write some paragraphs of text that can still, in most cases, easily be distinguished from human written prose.
00:14:43.380But in some cases, maybe for a paragraph or so, can trick you into thinking that it was a human writing it.
00:14:48.700And with occasional glimpses of sort of surprising incisiveness.
00:14:54.320Now, it looks like the performance of these large language models scales with the amount of compute.
00:15:01.000So the more parameters, the more data you have, the better these models become.
00:15:06.700And so one interesting question is, as we scale up these models by maybe a few orders of magnitude, does that mean that we will really close the gap between the current models and what a grown human can do?
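The scaling behavior being described here is often summarized as an empirical power law: model loss falls smoothly as compute, parameters, and data grow, down toward some irreducible floor. A minimal sketch of that shape, where all the constants (`a`, `alpha`, `irreducible`) are made-up illustrative values, not fitted numbers from any real model:

```python
def scaling_loss(compute, a=0.5, alpha=0.05, irreducible=1.7):
    """Toy power-law scaling curve: loss decreases as compute grows,
    approaching an irreducible floor. Constants are hypothetical."""
    return irreducible + a * compute ** (-alpha)

# Each thousand-fold increase in compute shaves a bit more off the loss,
# but with diminishing returns as the curve flattens toward the floor.
for c in [1e3, 1e6, 1e9]:
    print(f"compute={c:.0e}  loss={scaling_loss(c):.3f}")
```

The open question in the passage above is whether a few more orders of magnitude along such a curve is enough to close the gap with human performance, or whether the curve flattens out first.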
00:15:21.280It also turns out that basically the same architectures that you use to do this kind of text generation can be applied to other modalities.
00:15:30.380So there are now systems like the CLIP system, where you combine text and imagery.
00:15:35.180And you can have it either generate imagery, or you can give it input text.
00:15:40.960And it can kind of imagine what a picture of that might look like, and kind of in a multimodal way fuse these different information streams that we humans have and that our brains integrate in a quite neat way.
00:15:54.900So that's like one thing, like another recent thing was AlphaFold2 by DeepMind, the AI that is able to, if not solve the protein folding problem, at least make like a dramatic amount of progress there with potentially important applications in the biosciences.
00:16:14.960If we go back a couple of years, obviously, that was AlphaGo, you know, where the game of Go was conquered by AIs.
00:16:27.340And in all of these cases, it's a relatively small set of techniques that are being applied and reapplied.
00:16:34.340So it's not that each of these systems requires extensive handcrafting; the real engine, the real juice, is kind of this knowledge we now have of how to make machines learn.
00:16:45.220We have really figured out how to make machines learn.
00:16:47.860And that can then be applied to vision or to sound or to text or to pretty much any domain.
00:16:56.240So that's, I think that's like probably the most exciting thing that is currently happening is this deep learning revolution.
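The point made here, that one small set of learning techniques transfers across vision, sound, and text, can be illustrated with a toy sketch: the same gradient-descent update rule trains on any numeric feature vector, regardless of which modality it encodes. The tiny datasets below are invented purely for illustration:

```python
def train(examples, lr=0.1, epochs=200):
    """One generic learning rule (gradient descent on squared error).
    It never needs to know whether the inputs encode pixels, audio, or words."""
    dim = len(examples[0][0])
    w = [0.0] * dim
    for _ in range(epochs):
        for x, y in examples:
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y
            # Same update step for every domain: nudge weights against the error.
            for i in range(dim):
                w[i] -= lr * err * x[i]
    return w

# "Vision-like" data: hypothetical brightness features -> label
vision = [([0.0, 1.0], 1.0), ([1.0, 0.0], 0.0)]
# "Text-like" data: hypothetical word-count features -> sentiment score
text = [([1.0, 0.0, 1.0], 1.0), ([0.0, 1.0, 0.0], -1.0)]

w_v = train(vision)
w_t = train(text)
```

Real deep learning systems are vastly more elaborate, but the underlying idea is the same: one learning procedure, many domains.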
00:17:01.660Do you think it's possible to make, to teach machines, not just how to learn, but how to feel?
00:17:06.940I think there's a lot of uncertainty about the philosophy of mind question about what the criteria are for, say, being sentient, for having conscious experiences.
00:17:19.120It's something philosophers have wrestled with for a long time.
00:17:24.000And I do believe that we will eventually have machines, digital minds that are conscious and that then will also have moral status.
00:17:35.060And that means that the question is not just what do they do to us or what do we do to each other with machine tools, but also what do we do to them?
00:17:42.740So the ethics of digital minds, I think, will gradually arise as a really important issue.
00:17:49.740Today, it's a little bit outside the Overton window.
00:17:52.080It's seen as a kind of wacky thing that you can't really discuss.
00:17:54.360But I think the time has come now where at least some people have the luxury to be like, you know, in academia, sitting and thinking about things all day long.
00:18:02.860So we should start to try to work out some of what this might look like, a world where, say, humans and AIs at different levels of capability and at different levels of sentience have to coexist in some kind of workable harmony.
00:18:16.940You ever seen the movie Her with Joaquin Phoenix?
00:18:21.520Do you know which one I'm talking about where he's...
00:18:23.660Yeah, that's one where they have like the personal assistant, right?
00:18:26.700Yeah, do you think there's ever going to come a time where a man marries a robot, because the robot can now do everything and anything that another human being can do, including feel?
00:18:42.920Well, I mean, ever is a long time, right?
00:18:44.920So, but in terms of chat robots and stuff like that, that might be kind of rather around the corner.
00:18:51.400I mean, already exist in limited ways.
00:18:53.080It might be that, at least for some applications, you don't really even need a fully convincing human-like interlocutor.
00:19:00.960It might be that the more limited thing will still be compelling to some people.
00:19:06.580And, I mean, maybe it will have utility as well.
00:19:09.460I mean, if you have like these personal assistants like Siri or...
00:19:13.460And if they could start having a more kind of social relationship with the user as well and, you know, learn to say encouraging
00:19:23.080things when they detect signs of somebody feeling down, I could imagine that, in a small and gradual way, starting to happen within the next few years.
00:19:33.300But ultimately, yeah, I mean, if you have machines that are completely equivalent to humans or beyond that and that are conscious, et cetera, then I don't see any reason why you couldn't have the same deep relationships between some humans and some machines as you currently have between some humans.
00:19:55.060You've argued that true AI, if it's realized, might pose a danger that exceeds every previous threat from technology, even nuclear weapons.
00:20:07.080And that if its development is not managed carefully, humanity risks engineering its own extinction.
00:20:12.580Has your position changed with that or are you still in the same place with that?
00:20:16.240Yeah, I mean, in terms of the magnitude of threat, if we're talking about probability, that obviously goes up and down over time.
00:20:23.380So when scientists first detonated an atomic bomb, the Trinity test, there was some concern that maybe the high temperatures that would be generated could ignite the atmosphere and then that would kill all life.
00:20:43.880This concerned them enough that a number of studies were commissioned by Robert Oppenheimer, who was the director of the Los Alamos lab.
00:20:53.700And the calculations that were performed showed that this shouldn't happen; in terms of the nuclear physics, the atmosphere is not ignitable.
00:21:03.520And of course, they detonated a bomb and the atmosphere didn't ignite, which is a good thing for us.
00:21:12.200But at the time, you might say that was a small existential risk, that their calculations maybe could have been mistaken.
00:21:18.560And then that would have been the end.
00:21:19.980And in fact, just a few years later, when scientists were developing the fusion bomb, the hydrogen bomb, they again made some calculations as to the yield of this experimental device that they were going to detonate.
00:21:41.620And there, it turned out, there was a mistake in their calculation, with the result that the yield was two and a half times bigger than they had anticipated.
00:21:49.400And so what this meant was that a huge blast arose, irradiating a Japanese fishing boat, where one person died, causing an international incident.
00:22:00.200You could imagine the Japanese being kind of sensitive to nuclear issues after what they had gone through.
00:22:04.300Several islands had to be evacuated, and it was like a big calamity.
00:22:07.480A lot of the instruments that had been set up to record the detonation were destroyed by the blast.
00:22:13.300So it's a good thing that the calculation error was in this second experiment, rather than in the calculation about whether the Trinity test would ignite the atmosphere.
00:22:24.120But then I think there were maybe larger existential risks during the height of the Cold War, where the world seemed to be on the brink of nuclear Armageddon on several occasions.
00:22:33.620Although it's not clear that that would have caused human extinction.
00:22:39.600But now I think, if we're looking ahead over the coming decades, the biggest existential risks, and I think that will be unprecedentedly big, will arise from some technological breakthroughs we can be expected to make.
00:22:51.860A, superintelligence being one, and then synthetic biology being another.
00:22:55.400And there might be some further areas as well that could introduce these new factors into the world, where we have no track record of living with this for many years or decades or millennia.
00:23:06.800And we're kind of rolling the dice anew with these brand new powers.
00:23:10.980You know, typically when something like right now, the conversation that's being talked about a lot is regulation with Bitcoin, cryptocurrencies.
00:23:17.980So, you know, one side is saying it'll never get regulated, another side is saying it's already regulated, another side saying it's about to get regulated.
00:23:40.700Do you think, you know, do you think it'll ever get to a point, again, ever is a long time, but do you think anytime soon we'll get to a point where the level of regulation on AI needs to be a global thing rather than a national thing?
00:24:03.500I mean, if you have 200 countries and each make their own independent choices about some of these things, then basically we'll have to assume that there won't be any larger scale externalities from these different technologies.
00:24:18.740But we already have a lot of examples where there are externalities from what the country does.
00:24:23.040I mean, global warming being one example, where if you want to solve that, it's not enough that one country unilaterally reduces its own emissions.
00:24:30.840Like, it has to be something most countries do.
00:24:34.740And with some other things like nuclear weapons and biological weapons, we have made big efforts to limit the proliferation of nuclear weapons and to ban entirely biological weapons.
00:24:45.980With a reasonable degree of success, but not complete success.
00:24:50.900So it might well be, I mean, to some extent, it depends on how lucky we are with how new technologies pan out.
00:24:55.780But we might get technology so destructive that it's unacceptable if even one actor develops and deploys it.
00:25:04.440Then the only hope would be some kind of global agreement to prevent that.
00:25:09.800Have you seen the recent movie The Mitchells vs. the Machines, the cartoon?
00:26:26.560But a lot of the biggest existential risks, I think, arise from the fact that the world is splintered.
00:26:34.000And you have different actors, different groups of humans currently working at cross purposes and intention and conflict with one another.
00:26:41.620And if they have more powerful tools to inflict damage on the other side, then more damage might happen.
00:26:48.040Now, you might say, haven't we already maxed out on that?
00:26:50.700Like with nuclear weapons during the Cold War, you could already have destroyed civilization.
00:26:56.860Well, A, nuclear weapons were actually quite difficult to develop and expensive.
00:27:03.500So you couldn't just have any random person having their own nuclear weapon in their garage, right?
00:27:07.480It was like something states only could develop.
00:27:10.240And even then, you needed a big industrial program, et cetera, et cetera.
00:27:16.460And B, they are relatively detectable.
00:27:19.920So we basically know who has nuclear weapons and who doesn't.
00:27:25.020But that doesn't need to hold for future technologies we might develop.
00:27:28.540With biotechnology, for example, we might get the tools that enable an individual in their garage to make something
00:27:34.560that could decimate the global population, right?
00:27:39.920And these things would be very hard to monitor because you don't need large facilities with, like, a power plant next door to pump in energy.
00:27:48.780You could just have some, you know, test tubes and some chemicals and some biological specimens.
00:27:55.920And there are other ways as well that the properties of nuclear weapons were somewhat stabilizing.
00:28:01.940So it was not sufficiently easy to wipe out all of an adversary's nuclear weapons to be sure that if you struck first, you would be safe.
00:28:12.880Like during the Cold War, both the Soviet Union and the U.S. had a second strike capability.
00:28:16.980So that kind of stabilized things a little bit in a crisis, because consider the alternative, right?
00:28:22.240Even if you don't want to destroy the other side, like, you think that would be a great shame.
00:28:25.520But if you are worried that they could strike first and wipe you out, then in a crisis situation, that could easily result in each side thinking,
00:28:34.480we've got to strike now because even though we don't really mean harm to the others, we can't afford to take the risk of leaving ourselves exposed.
00:28:40.800The only way to be safe is to wipe out their side first.
00:28:43.940And so if they hadn't, if the technology had not been such that you could have a secure second strike capability,
00:28:49.340then you would have potentially a much less stable arms race.
00:28:52.980And other technologies in the future, you know, might also be more unstable in that respect.
00:28:59.720And there are other possibilities as well.
00:29:01.900But yeah, so I think a big category of risk comes from the kind of fracturedness of the current world order.
00:29:09.600Also, some accident risks arise more deeply from conflict.
00:29:16.960I mean, if you think about the Cold War, one thing that could have happened is that there would have been a nuclear war by accident,
00:29:23.380like some warning system malfunctioned or something.
00:29:25.840In fact, it almost happened on a couple of occasions, with the Able Archer exercise and so forth.
00:29:31.960But the deeper cause of this would have been the conflict, because it's the conflict that led these arsenals to be built up in the first place and to be put on hair-trigger alert.
00:29:43.140So even if the immediate cause might be an accident, the thing that allowed a situation to arise where a small accident could cause this was the conflict.
00:30:07.460Like, for example, music is all about math, right?
00:30:11.040So can we get to a point where an AI can take and come up with, you know, certain rhythm or music that is perfect math and create any kind of a voice and put the lyrics together and sing it where, you know,
00:30:29.680the music entertainment industry could be disrupted because, you know, software is making better music than human beings are.
00:30:36.380Like, do you ever go deep to see how each industry is going to be affected by AI?
00:30:39.580Mostly we are focusing more on these, like, more general questions.
00:30:46.620And I think once you have sufficiently advanced capability, the answer to your question is that all of these areas will be affected and overtaken by machines.
00:30:55.180But, I mean, it's fun sometimes maybe just to think what is likely in the near term before we have this fully general AI.
00:31:02.740I mean, with music, I've got to say, so far the results from machines are not that impressive. I think they get a lot of the more local structure right.
00:31:12.900Like, the small snippets of music will sound really convincing and good.
00:31:16.780But the larger architecture and the sort of the meaning of the whole piece is something that so far has not really been produced by these music generating AI.
00:31:26.560But we'll see how that goes when we scale up the systems, because in other areas, when we have scaled up basically the same algorithms,
00:31:37.020they have gotten more of this kind of holistic context.
00:31:41.520So maybe that might happen in the relatively near term, these kinds of music-generating AIs becoming pretty decent.
00:31:51.400You know, every time I do an interview, I got like, I'll ask 5, 10, 15, 20 questions,
00:31:55.920but I got the one question that I'm trying to get an answer for myself.