Valuetainment - November 03, 2021


Nick Bostrom: The Threat Of Artificial Intelligence - Elon Musk's Biggest Fear


Episode Stats

Length

44 minutes

Words per Minute

172.6

Word Count

7,695

Sentence Count

417

Hate Speech Sentences

3


Transcript

00:00:00.000 Okay, so get ready to have your brain have a workout by the time we're done with this interview.
00:00:05.020 That's a guarantee because when I watch him in interviews, my brain, if there's muscles, it's getting bigger and bigger and bigger listening to him.
00:00:13.500 My guest today is Nick Bostrom, who's a professor at the University of Oxford and director of the Future of Humanity Institute.
00:00:21.020 He wrote a book where two people that you may have heard of recommended it to everybody to read.
00:00:26.520 One man, his name is Bill Gates. Maybe you've heard of Bill Gates.
00:00:29.000 The other man is Elon Musk.
00:00:30.360 And the book he wrote is called Superintelligence: Paths, Dangers, Strategies.
00:00:37.460 He wrote it in 2014, and it became a New York Times bestseller.
00:00:40.820 He's done many different talks on TEDx, 200 different publications.
00:00:45.420 I don't know how many interviews you've done, but 2,000 plus interviews.
00:00:49.560 And in many ways, he is referred to as one of the most important thinkers of our age.
00:00:56.060 And the conversation today is going to be, what can we do?
00:00:59.140 What is the risk of having technology, AI, and robots advancing at the pace that they're advancing?
00:01:05.580 Is the strategy to slow them down or something else?
00:01:25.600 Nick, thank you so much for being a guest on Valuetainment.
00:01:34.820 Glad to be here. Let's get to it.
00:01:37.460 So Nick, so tell us, right now, with everything that's going on, you got guys that are talking about the AI from the standpoint of we need universal basic income.
00:01:46.440 This is not that conversation, right?
00:01:48.060 You got guys that are talking about, you know, AI is advancing way too fast.
00:01:53.320 We should slow it down.
00:01:54.340 Some are saying it's nothing to worry about.
00:01:56.740 It's been going like this for a long time.
00:01:58.380 We never had computers.
00:01:59.680 Back in the days, we went on typewriters.
00:02:02.040 We were so worried about what movies we're going to do, what televisions we're going to do.
00:02:05.780 Why is this such a different time?
00:02:07.420 And what are your thoughts about the negative side effects of technology and AI?
00:02:11.940 I don't think we should slow AI down.
00:02:13.740 I think it's rapidly increasing in capability and that it's not implausible that within the lifetime of a lot of people alive today,
00:02:22.100 we might see a transition to a machine intelligence era, an era where the human brain is no longer the place where the action is.
00:02:32.220 And why do you believe that?
00:02:33.420 We might have super intelligent machines.
00:02:34.880 Why do you believe that?
00:02:36.780 Well, it's mostly based on observing the rapid recent pace of progress in machine learning, deep learning.
00:02:49.820 The sense that it looks like this is quite tightly connected to the hardware that we have available, which is growing.
00:02:57.100 And we can expect to continue to grow the hardware performance that these systems are implemented on.
00:03:05.840 And it will probably require some new breakthroughs as well.
00:03:08.820 But if we just see how far we've come, even just in the last eight years or so, it seems rash to rule out that we might have enough progress within the lifetime of some current folk to get this transition.
00:03:22.280 But I'm not by any means certain that this will happen, but it seems more likely than not.
00:03:29.340 Do you have... are you somebody who... do you have kids?
00:03:31.260 Are you a parent?
00:03:32.000 Are you a father?
00:03:33.440 OK, so you're not concerned about, you know, what role technology is going to play in their lifetime where they're still going to have a purpose.
00:03:42.720 Obviously, it's an odds game.
00:03:43.840 You don't know for a fact, but in your mind, you're thinking this is the direction we're going.
00:03:47.500 Is it just going to be a different way of living for us or is there going to be a threat where eventually we're controlled by the machinery?
00:03:55.880 Well, I think on the downside, we have.
00:03:59.540 Significant risks, including existential risks. These would be threats to the very survival of Earth-originating intelligent life, or ways in which we could permanently destroy the future by locking ourselves into some radically suboptimal state.
00:04:12.520 But there is also an upside to this, like if things go well, if we avoid these disaster scenarios, then I think superintelligence would unlock a much bigger and better future for humanity.
00:04:27.760 And I'm quite excited about that potential for actually good things to come out of this.
00:04:34.140 Also, in addition to unlocking the upside, I think getting superintelligence right would help with a lot of other existential risks that we will otherwise be confronting in this century.
00:04:47.980 Such as?
00:04:48.900 From synthetic biology, for instance, that will make it increasingly easy to enhance pathogens and democratize that capability by making these tools easier to use and more widely available.
00:05:05.600 Got it.
00:05:06.340 So you're familiar with Samuel Butler's essay, Darwin Among the Machines, right?
00:05:11.120 From 1863.
00:05:12.440 I'm assuming you are, right?
00:05:13.460 I might have heard of it.
00:05:14.840 So in 1863 he wrote this essay, Darwin Among the Machines, predicting the domination of humanity by intelligent machines: that eventually these guys are going to dominate the rest of us, we can't do anything about it, and we're going to have to bow down to them.
00:05:34.700 Do you think that is likely to happen?
00:05:38.640 Well, I don't think that would be a bowing down scenario.
00:05:40.600 I think in a lot of these existential risk scenarios, humanity is simply wiped out, and the resources around Earth and in the solar system and beyond would then be formatted and used for some other goal instead.
00:05:54.780 You know the talk you gave at TED where you're like, you know, if we measure ourselves against the gorilla or the ape, it is so much more powerful even than the strongest man in the world.
00:06:06.240 Yet at the same time, our capability is, you know, our brain function, processing, thinking, all that stuff, right?
00:06:12.520 So the edge there is strength.
00:06:15.120 The edge for us is we can think, process issues, make better decisions, let's just say.
00:06:20.240 So what is our edge over computers and AI?
00:06:25.220 What edge do we have?
00:06:26.640 Well, we get to build them.
00:06:27.660 And so we get at least one shot to get it right.
00:06:31.380 If we can engineer them in such a way that they are actually a kind of extension of our own will, maybe in an idealized form.
00:06:39.540 If we can align them with our intentions and values, then that would be on our side.
00:06:43.600 And it would be a huge boost for our goals and aspirations.
00:06:48.600 So if we build them, because we have control to build them, but at the same time, you know, the world is filled with good people and bad people, right?
00:07:00.080 So some people, you know, build things with a positive motive.
00:07:04.860 Some people build things with negative motive.
00:07:07.300 Let's say the noble people that build them with a positive motive.
00:07:10.040 What if the ones that are going to build it with a negative motive because they're driven by power, control, force, you know, and they build this machine that's stronger, thinks better, you know, does everything better than us.
00:07:22.640 If there's one area that they can't build it to be better, even the most evil person in the world.
00:07:27.360 Say the most evil person in the world wants to build a robot because he wants to take over and rule the planet.
00:07:32.000 Hypothetically, we've seen this in movies, novels.
00:07:33.960 It's not the first time we have seen this or read this, right?
00:07:36.860 Even at that level, is there anything we have an edge over the best machine, robot, any intelligence that's built by man?
00:07:48.220 I mean, for now, obviously, the machines are very limited.
00:07:50.840 I think eventually they will become way more capable than any human individually.
00:07:56.580 And at some point after that, more capable than all humans taken together.
00:08:00.400 I don't think we will have any edge in terms of, you know, physical or intellectual capacity past that.
00:08:10.840 At that point, we would be dependent on these having been designed in such a way that they actually do what we are intending for them to do.
00:08:20.500 They need to be on our side, I think, basically, in order for humanity to have a bright future or even any future at all in this kind of scenario.
00:08:30.840 So, the good news is that, like, while I was writing the book, this was an almost entirely neglected area.
00:08:39.100 A lot of people working on AI, but hardly anybody working on thinking about what happens if AI succeeds.
00:08:45.280 But since then, there is now an active research subfield working on AI safety and on creating scalable methods for AI control that could apply no matter how smart and capable a learning system becomes.
00:08:58.940 And some really clever people going into that field.
00:09:01.600 So, some progress is being made there.
00:09:03.200 I mean, we'll see whether a sufficient solution to these problems has become available by whatever time we need it, which is when some other researchers figure out how to create machine superintelligence.
00:09:14.540 So, there is a kind of race on, right, between the majority trying to make superintelligent machines as quickly as possible and then a minority working to make sure that by then we will have the relevant control and safety technologies.
00:09:28.900 Yeah, but I guess the way I'm looking at it is from the risk standpoint, right?
00:09:33.520 Like, you know, we just experienced a pandemic and it shut down America, shut down the world, it shut down Europe, Central America, America.
00:09:44.060 It shut down everywhere, right?
00:09:45.500 And let's say that was an accident and not an intentional thing that happened.
00:09:50.840 Great.
00:09:51.620 What if somebody wants to do that intentionally, somebody with the power to shut everybody down?
00:09:55.800 So, then the conversation becomes recently one of the main leaders of our Department of Defense just resigned because he said the amount of intelligence and experience China has on cyber warfare is years ahead of us where we can't even compete against those guys.
00:10:10.820 So, I'm resigning.
00:10:11.920 That's what he said when the article came out; this was last week. But I'm talking about something more. I'm from Iran.
00:10:17.580 So, the biggest challenge historically has been with Iran, hey, we don't want you to build any nuclear weapons.
00:10:23.200 Fine.
00:10:24.400 Okay, we're not building nuclear weapons.
00:10:26.540 You sure?
00:10:27.260 We're not building any nuclear weapons.
00:10:29.200 Can we come inspect?
00:10:30.400 Sure.
00:10:31.300 Anywhere?
00:10:31.840 You can inspect anywhere except these nine places.
00:10:34.120 Well, then maybe they're building it in those nine places, right?
00:10:36.000 So, what I'm trying to say is, say somebody is not as noble as you are, and they really have motivations of power, influence, control. How far can they go with AI in the next 5, 10, 15, 20, 30 years?
00:10:50.940 Well, I don't know about 30 years.
00:10:52.460 I mean, right now, we know that the leading edge of AI development is mostly in the public domain.
00:11:01.980 In fact, the best researchers are falling over themselves to publish as quickly as possible their latest findings, putting it on pre-print archive servers, even before it can appear in journals.
00:11:14.860 Now, it's very possible that at some point this will shift to a more closed development regime.
00:11:22.120 And at that point, it might become harder to know who is where in the race to develop AI.
00:11:32.380 But for now, we have a pretty good grasp.
00:11:36.260 And I think it would be quite hard at present to mount a competitive effort in complete secrecy, because all the best researchers are really keen to be able to publish, because it's the way that they can show to the other researchers how good they are, right?
00:11:55.820 If you're just doing something in the bowels of some corporation, and you're never allowed to tell anybody about it, it kind of sucks if you're one of these people who could get worldwide fame or, you know, renowned amongst your fellow researchers.
00:12:09.140 And at the moment, really great AI researchers are in such strong demand that they can kind of have their pick.
00:12:18.420 And a lot of them prefer to work for corporations or universities that allow open publishing.
00:12:23.500 Some of these things vary based on the country, right?
00:12:32.240 Meaning, China's not big on recognizing the individual.
00:12:36.320 Everything's about the collective.
00:12:37.280 And Iran's not big on recognition either for the individual, because God forbid if somebody gets too much power, you know, maybe a second, a resurrection of the Shah could come up.
00:12:47.560 So we can't give one person too much recognition.
00:12:49.520 Meaning, in some of these societies, some of these countries, you're not doing it for you.
00:12:56.500 You're doing it for the country.
00:12:57.560 So even the fellow researcher that wants to get the kind of recognition in China, you can't do it anyways.
00:13:02.340 So if somebody wanted to do it, it wouldn't be like, hey, look what I wrote.
00:13:06.300 Here, look at this paper I just wrote.
00:13:07.520 Here's what I'm working on.
00:13:08.300 This is my findings.
00:13:09.940 In some of these places, you cannot fully know how advanced they are.
00:13:14.180 So maybe let me take this in a different angle than we're going to.
00:13:17.480 But China is publishing a lot of papers in AI more and more every year.
00:13:23.860 There's like a kind of strong incentive structure in China to publish it because academics get rewards depending on how many papers they publish and so on and so forth.
00:13:34.200 From what you're seeing, where are you seeing the biggest advancement?
00:13:39.540 So hypothetically, like in your world, what is some technology that they're talking about being built today that maybe we saw in a movie 20 years ago, 40 years ago, that could become a reality?
00:13:52.000 Well, do you mean within AI or?
00:13:54.000 Within AI.
00:13:54.940 Yeah, within AI.
00:13:55.500 Well, so there's like a kind of basic progress on techniques that can be applied.
00:14:03.020 They're a pretty general purpose.
00:14:04.800 So if you look at some of the more impressive recent advances with large language models, for example, OpenAI's GPT-3, which, I mean, to simplify, has ingested a huge amount of human-written text, basically the internet.
00:14:21.060 And you can then give it some text prompt, and it will kind of continue writing in response to this prompt, maybe in the style that the prompt suggests.
00:14:33.120 And it can write some paragraphs of text that can still, in most cases, easily be distinguished from human written prose.
00:14:43.380 But in some cases, maybe for a paragraph or so, can trick you into thinking that it was a human writing it.
00:14:48.700 And with occasional glimpses of sort of surprising incisiveness.
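The mechanic Bostrom describes, ingesting a corpus of text and then continuing a prompt word by word, can be sketched in miniature. The toy bigram model below is a deliberately tiny, illustrative stand-in for GPT-3's far larger transformer; the corpus, function names, and the choice of a bigram table are all invented for the example:

```python
import random

# Toy stand-in for a large language model: a bigram table built
# from a tiny corpus instead of "basically the internet".
corpus = "the machine learns the text and the machine writes the text".split()

# Count, for each word, which words follow it in the corpus.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def continue_prompt(prompt, n_words=5, seed=0):
    """Continue a prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)  # seeded for reproducibility
    words = prompt.split()
    for _ in range(n_words):
        candidates = follows.get(words[-1])
        if not candidates:  # no observed continuation; stop early
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(continue_prompt("the machine"))
```

A real model replaces the bigram table with billions of learned parameters and conditions on the whole prompt rather than one word, which is where the stylistic mimicry Bostrom mentions comes from.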
00:14:54.320 Now, it looks like the performance of these large language models scales with the amount of compute.
00:15:01.000 So the more parameters, the more data you have, the better these models become.
00:15:06.700 And so one interesting question is, as we scale up these models by maybe a few orders of magnitude, does that mean that we will really close the gap between the current models and what a grown human can do?
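The scaling behavior Bostrom points to, models improving as parameters, data, and compute grow, is often summarized empirically as a power law in model size. The sketch below is purely illustrative; the constants are invented for the example, not measured values from any real model:

```python
# Illustrative power-law scaling of loss with model size:
#   loss(N) = a * N**(-alpha) + floor
# where floor is the irreducible error. All constants are made up.
a, alpha, floor = 10.0, 0.07, 1.7

def loss(n_params):
    """Hypothetical test loss for a model with n_params parameters."""
    return a * n_params ** -alpha + floor

# Scaling up by "a few orders of magnitude", as discussed:
for n in (1e9, 1e11, 1e13):
    print(f"{n:.0e} params -> loss {loss(n):.3f}")
```

The open question in the interview is exactly the shape of this curve: whether the loss keeps falling smoothly toward the floor as models grow, and whether that floor sits at or below human-level performance.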
00:15:21.280 It also turns out that basically the same architectures that can be used to do this kind of text generation
00:15:27.100 can be applied to other modalities.
00:15:30.380 So there are now systems like CLIP, where you combine text and imagery.
00:15:35.180 And you can have it generate imagery from input text.
00:15:40.960 It can kind of imagine what a picture of that might look like, and in a multimodal way fuse these different information streams that we humans have and that our brains interpret in a quite neat way.
00:15:54.900 So that's one thing. Another recent thing was AlphaFold 2 by DeepMind, the AI that is able to, if not solve the protein folding problem, at least make a dramatic amount of progress there, with potentially important applications in the biosciences.
00:16:14.960 If we go back a couple of years, obviously, that was AlphaGo, you know, where the game of Go was conquered by AIs.
00:16:27.340 And in all of these cases, it's a relatively small set of techniques that are being applied and reapplied.
00:16:34.340 So it's not like each of these systems requires a lot of handcrafting; the real engine, the real juice, is this knowledge we now have of how to make machines learn.
00:16:45.220 We have really figured out how to make machines learn.
00:16:47.860 And that can then be applied to vision or to sound or to text or to pretty much any domain.
00:16:56.240 So that's, I think that's like probably the most exciting thing that is currently happening is this deep learning revolution.
00:17:01.660 Do you think it's possible to make, to teach machines, not just how to learn, but how to feel?
00:17:06.940 I think there's a lot of uncertainty about the philosophy of mind question about what the criteria are for, say, being sentient, for having conscious experiences.
00:17:19.120 It's something philosophers have wrestled with for a long time.
00:17:24.000 And I do believe that we will eventually have machines, digital minds that are conscious and that then will also have moral status.
00:17:35.060 And that means that the question is not just what do they do to us or what do we do to each other with machine tools, but also what do we do to them?
00:17:42.740 So the ethics of digital minds, I think, will gradually arise as a really important issue.
00:17:49.740 Today, it's a little bit outside the Overton window.
00:17:52.080 It's a kind of wacky thing that you can't really.
00:17:54.360 But I think the time has come now where at least some people have the luxury to be like, you know, in academia, sitting and thinking about things all day long.
00:18:02.860 So we should start to try to work out some of what this might look like, a world where, say, humans and AIs at different levels of capability and at different levels of sentience have to coexist in some kind of workable harmony.
00:18:16.940 You ever seen the movie Her with Joaquin Phoenix?
00:18:20.920 Mm-hmm.
00:18:21.520 Do you know which one I'm talking about where he's...
00:18:23.660 Yeah, that's one where they have like the personal assistant, right?
00:18:26.700 Yeah, do you think it's ever going to come a time where man marries a robot, because the robot can now do everything and anything that another human being can do, including feel?
00:18:40.840 You think that day is ahead of us?
00:18:42.920 Well, I mean, ever is a long time, right?
00:18:44.920 So, but in terms of chat robots and stuff like that, that might be rather just around the corner.
00:18:51.400 I mean, they already exist in limited ways.
00:18:53.080 It might be that, at least for some applications, you don't really even need a fully convincing human-like interlocutor.
00:19:00.960 It might be that the more limited thing will still be compelling to some people.
00:19:06.580 And, I mean, maybe it will have utility as well.
00:19:09.460 I mean, if you have like these personal assistants like Siri or...
00:19:13.460 And if they could start having a more kind of social relationship with the user as well and, you know, learn to, say, encourage
00:19:23.080 things when they detect signs of somebody feeling down, I could imagine that, in a small and gradual way, starting to happen within the next few years.
00:19:33.300 But ultimately, yeah, I mean, if you have machines that are completely equivalent to humans or beyond that and that are conscious, et cetera, then I don't see any reason why you couldn't have the same deep relationships between some humans and some machines as you currently have between some humans.
00:19:55.060 You've argued that true AI, if it's realized, might pose a danger that exceeds every previous threat from technology, even nuclear weapons.
00:20:07.080 And that if its development is not managed carefully, humanity risks engineering its own extinction.
00:20:12.580 Has your position changed with that or are you still in the same place with that?
00:20:16.240 Yeah, I mean, in terms of the magnitude of threat, if we're talking about probability, that obviously goes up and down over time.
00:20:23.380 So when scientists first detonated an atomic bomb, the Trinity test, there was some concern that maybe the high temperatures that would be generated could ignite the atmosphere and then that would kill all life.
00:20:43.880 Concerning enough that a number of studies were commissioned by Robert Oppenheimer, who was the director of the Los Alamos lab.
00:20:53.700 And those calculations that were performed showed that this shouldn't happen in terms of the nuclear physics, like the atmosphere is not ignitable.
00:21:03.520 And of course, they detonated a bomb and the atmosphere didn't ignite, which is a good thing for us.
00:21:12.200 But at the time, you might say that was a small existential risk, that their calculations maybe could have been mistaken.
00:21:18.560 And then that would have been the end.
00:21:19.980 And in fact, just a few years later, when scientists were developing the fusion bomb, the hydrogen bomb, they again made some calculations as to the yield of this experimental device that they were going to detonate.
00:21:34.940 And they set it off.
00:21:39.580 Castle Bravo detonation.
00:21:41.620 And there, it turned out, there was a mistake in their calculation, with the result that the yield was two and a half times bigger than they had anticipated.
00:21:49.400 And so what this meant was that a huge blast arose, irradiating a Japanese fishing boat, where one person died, causing an international incident.
00:22:00.200 You could imagine the Japanese being kind of sensitive to nuclear issues after what they had gone through.
00:22:04.300 Several islands had to be evacuated, and it was like a big calamity.
00:22:07.480 A lot of the instruments that had been set up to record the detonation were destroyed by the blast.
00:22:13.300 So it's a good thing that the calculation error was in this second experiment, rather than in the calculation about whether the Trinity test would ignite the atmosphere.
00:22:24.120 But then I think there were maybe larger existential risks during the height of the Cold War, where the world seemed to be on the brink of nuclear Armageddon on several occasions.
00:22:33.620 Although it's not clear that that would have caused human extinction.
00:22:39.600 But now I think, if we're looking ahead over the coming decades, the biggest existential risks, and I think that will be unprecedentedly big, will arise from some technological breakthroughs we can be expected to make.
00:22:51.860 A, superintelligence being one, and then synthetic biology being another.
00:22:55.400 And there might be some further areas as well that could introduce these new factors into the world, where we have no track record of living with this for many years or decades or millennia.
00:23:06.800 And we're kind of rolling the dice anew with these brand new powers.
00:23:10.980 You know, typically when something like right now, the conversation that's being talked about a lot is regulation with Bitcoin, cryptocurrencies.
00:23:17.980 So, you know, one side is saying it'll never get regulated, another side is saying it's already regulated, another side saying it's about to get regulated.
00:23:26.200 So, but regulation comes up.
00:23:28.400 We don't have control of this.
00:23:29.520 The government's got to come in and see what's going on because there's a lot of money laundering going on.
00:23:33.340 And these NFTs, we have to get regulation because some people are using NFTs as a way to launder money.
00:23:39.800 Okay, fine.
00:23:40.700 Do you think, you know, do you think it'll ever get to a point, again, ever is a long time, but do you think anytime soon we'll get to a point where the level of regulation on AI needs to be a global thing rather than a national thing?
00:23:58.920 Yeah.
00:24:02.280 That seems likely.
00:24:03.500 I mean, if you have 200 countries and each makes their own independent choices about some of these things, then basically we'd have to assume that there won't be any larger-scale externalities from these different technologies.
00:24:18.740 But we already have a lot of examples where there are externalities from what the country does.
00:24:23.040 I mean, global warming being one example, where if you want to solve that, it's not enough that one country unilaterally reduces its own emissions.
00:24:30.840 Like, it has to be something most countries do.
00:24:34.740 And with some other things like nuclear weapons and biological weapons, we have made big efforts to limit the proliferation of nuclear weapons and to ban entirely biological weapons.
00:24:45.980 With a reasonable degree of success, but not complete success.
00:24:50.900 So it might well be, I mean, to some extent, it depends on how lucky we are with how new technologies pan out.
00:24:55.780 But we might get technology so destructive that it's unacceptable if even one actor develops and deploys it.
00:25:04.440 Then the only hope would be some kind of global agreement to prevent that.
00:25:09.800 Have you seen the recent movie Machines vs. the Mitchells, the cartoon?
00:25:16.700 No, you haven't seen it?
00:25:18.080 No.
00:25:18.220 So my kids are like, hey, Dad, we've got to watch Machines vs. Mitchells.
00:25:23.420 And I'm like, guys, I'm good.
00:25:24.440 I don't need to watch this.
00:25:25.300 But when we watch a cartoon, it's a good opportunity for me to take a nap.
00:25:28.040 So I'm like, okay, great.
00:25:28.780 Let's watch it.
00:25:29.840 So we put it on.
00:25:30.680 I'm watching this cartoon.
00:25:31.820 All of a sudden, I'm like, man, it's this cartoon.
00:25:33.980 I cannot fall asleep because this is actually very interesting.
00:25:36.900 And it's a cartoon about machines taking over, you know, Mitchells.
00:25:42.580 Mitchells is a family.
00:25:43.620 They can't fight back.
00:25:44.540 And everybody unites.
00:25:45.600 And, you know, they know how to bring everything together.
00:25:49.160 And the human becomes the enemy.
00:25:52.240 And even the movie I, Robot.
00:25:53.400 You've seen the movie I, Robot.
00:25:54.260 I don't know if you've seen the movie I, Robot, where the robot becomes the enemy.
00:25:57.560 The human becomes the enemy and they revolt against it.
00:26:00.920 And, you know, many times we watch these movies, a lot of these movies eventually become a reality.
00:26:06.020 It's just a matter of time where you're sitting there saying, are they making a possibility of what could happen in the future?
00:26:10.680 What concerns you the most with the future?
00:26:12.800 Everybody has a different concern.
00:26:13.980 What concerns you the most with the future?
00:26:17.620 Well, probably conflict ultimately.
00:26:20.140 I mean, you can carve the concern cake up in different ways.
00:26:24.100 You can slice it or dice it.
00:26:26.560 But a lot of the biggest existential risks, I think, arise from the fact that the world is splintered.
00:26:34.000 And you have different actors, different groups of humans currently working at cross purposes and intention and conflict with one another.
00:26:41.620 And if they have more powerful tools to inflict damage on the other side, then more damage might happen.
00:26:48.040 Now, you might say, haven't we already maxed out on that?
00:26:50.700 Like with nuclear weapons during the Cold War, you could already have destroyed civilization.
00:26:56.860 Well, A, nuclear weapons were actually quite difficult to develop and expensive.
00:27:03.500 So you couldn't just have any random person having their own nuclear weapon in their garage, right?
00:27:07.480 It was like something states only could develop.
00:27:10.240 And even then, you needed a big industrial program, et cetera, et cetera.
00:27:16.460 And B, they are relatively detectable.
00:27:19.920 So we basically know who has nuclear weapons and who doesn't.
00:27:25.020 But that doesn't need to hold for future technologies we might develop.
00:27:28.540 With biotechnology, for example, we might get the tools that enable an individual in their garage to make something
00:27:34.560 that could decimate the global population, right?
00:27:39.920 And these things would be very hard to monitor because you don't need large facilities with, like, a power plant next door to pump in energy.
00:27:48.780 You could just have some, you know, test tubes and some chemicals and some biological specimens.
00:27:55.920 And there are other ways as well that the properties of nuclear weapons were somewhat stabilizing.
00:28:01.940 So it was not sufficiently easy to wipe out all of the adversary's nuclear weapons to be sure that if you struck first, you would be safe.
00:28:12.880 Like during the Cold War, both the Soviet Union and the U.S. had a second strike capability.
00:28:16.980 So that kind of stabilized things a little bit in a crisis, because consider the alternative, right?
00:28:22.240 Even if you don't want to destroy the other side, like, you think that would be a great shame.
00:28:25.520 But if you are worried that they could strike first and wipe you out, then in a crisis situation, that could easily result in each side thinking,
00:28:34.480 we've got to strike now because even though we don't really mean harm to the others, we can't afford to take the risk of leaving ourselves exposed.
00:28:40.800 The only way to be safe is to wipe out their side first.
00:28:43.940 And so if the technology had not been such that you could have a secure second strike capability,
00:28:49.340 then you would have potentially a much less stable arms race.
00:28:52.980 And other technologies in the future, you know, might also be more unstable in that respect.
00:28:59.720 And there are other possibilities as well.
00:29:01.900 But yeah, so I think a big category of risks comes from the kind of fracturedness of the current world order.
00:29:09.600 Also, some accident risks arise more deeply from conflict.
00:29:16.960 I mean, if you think about the Cold War, one thing that could have happened is that there would have been a nuclear war by accident,
00:29:23.380 like some warning system malfunctioned or something.
00:29:25.840 In fact, it almost happened on a couple of occasions, with the Able Archer exercises and so forth.
00:29:31.960 But the deeper cause of this would have been the conflict, because it's the conflict that led these arsenals to be built up in the first place and to be put on hair-trigger alert.
00:29:43.140 So even if the kind of immediate cause might be an accident, the thing that allowed that situation to arise, where a small accident could cause this, was the conflict.
00:29:51.420 Interesting.
00:29:52.640 Do you look at it from the standpoint of how AI is going to impact every industry in a different way?
00:29:58.560 Do you ever sit there and say, well, yeah, I kind of see this is what could happen to sports.
00:30:04.420 Here's what could happen to warfare.
00:30:06.260 Here's what can happen to music.
00:30:07.460 Like, for example, music is all about math, right?
00:30:11.040 So can we get to a point where an AI can come up with, you know, a certain rhythm or music that is perfect math, and create any kind of a voice and put the lyrics together and sing it, where, you know,
00:30:29.680 the music entertainment industry could be disrupted because, you know, software is making better music than human beings are?
00:30:36.380 Like, do you ever go deep to see how each industry is going to be affected by AI?
00:30:39.580 Mostly we are focusing more on these, like, more general questions.
00:30:46.620 And I think once you have sufficiently advanced capability, the answer to your question is, like, all of these areas will be affected and overtaken by machines.
00:30:55.180 But, I mean, it's fun sometimes maybe just to think what is likely in the near term before we have this fully general AI.
00:31:02.740 I mean, and with music, I've got to say, so far the results from machines are not that impressive. I think they get a lot of the more local structure right.
00:31:12.900 Like, the small little snippets of music would sound really convincing and good.
00:31:16.780 But the larger architecture and the sort of the meaning of the whole piece is something that so far has not really been produced by these music generating AI.
00:31:26.560 But we'll see how that goes when we scale up the systems, because in other arts, when we have scaled up basically the same algorithms,
00:31:37.020 they have gotten more of this kind of holistic context.
00:31:41.520 So that might happen in the relatively near term, these kinds of music-generating AIs becoming pretty decent.
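Bostrom's contrast between convincing local snippets and missing large-scale architecture can be illustrated with a toy sketch (entirely hypothetical, not from the interview): a first-order Markov chain over notes produces plausible note-to-note transitions, but by construction it has no representation of the overall shape of a piece.

```python
# Toy illustration of "local structure without global architecture":
# each note depends only on its immediate predecessor.
import random

# Hypothetical transition table over notes of a C major scale fragment.
transitions = {
    "C": ["D", "E", "G"],
    "D": ["C", "E", "F"],
    "E": ["D", "F", "G"],
    "F": ["E", "G"],
    "G": ["C", "E", "F"],
}

def generate(start: str, length: int, seed: int = 0) -> list[str]:
    """Sample a melody one note at a time; no memory beyond the last note,
    so every local pair is plausible but the whole has no planned form."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(transitions[melody[-1]]))
    return melody

print(generate("C", 8))
```

Scaling up, as Bostrom notes, amounts to widening the context the model conditions on; this sketch deliberately keeps it at one note to show the limitation.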
00:31:51.400 You know, every time I do an interview, I got like, I'll ask 5, 10, 15, 20 questions,
00:31:55.920 but I got the one question that I'm trying to get an answer for myself.
00:31:58.540 It's not even for the audience.
00:32:00.040 I'm trying to get it.
00:32:01.220 Let's see what we can do.
00:32:02.260 I've asked it already a few times.
00:32:04.340 I'm just trying to get a little bit clear about it.
00:32:06.880 But for me, it's, you know, I'm 6'4 1⁄2", 245, okay?
00:32:14.240 My best vertical leap I ever had was maybe, you know, 30 inches, let's just say.
00:32:20.880 Okay, that's my vertical leap.
00:32:22.420 Okay, so I know I'm limited in how high I can jump.
00:32:26.720 I know my limits on how high I can jump.
00:32:29.180 Okay, great.
00:32:29.680 So I'm limited in my ability to jump.
00:32:35.440 In basketball, the highest vertical leap they've ever had is 48 inches.
00:32:39.620 Some say 51 inches.
00:32:40.840 But let's just say that's the number.
00:32:42.640 We are limited to how high we can jump.
00:32:45.180 Animals have a limit to what they're capable of doing.
00:32:48.580 We all have limits, right?
00:32:50.700 And when people say, you don't have any limits, it's a great motivational quote,
00:32:55.100 but there's a certain limit that we all have.
00:32:56.980 And the sooner you know your limits, and the certain areas where maybe you don't have a limit,
00:33:01.080 the sooner you can go out there and do great things.
00:33:03.620 Back to AI.
00:33:05.120 If we know animals are limited, human beings are limited to certain things they could do,
00:33:10.380 are there any limitations that AI and machines and technology have?
00:33:16.120 And if yes, what would they be?
00:33:18.680 Yeah, I mean, there are, but they're very high up, as it were.
00:33:23.340 I mean, basically at that point, we have to look at the fundamental physics involved,
00:33:29.640 physics of computation.
00:33:31.600 And so there are limits to how fast signals can propagate.
00:33:35.580 We have, you know, the speed of light, which ultimately limits how fast a signal can go from one
00:33:40.600 point to another.
00:33:41.360 So that means if you have a very large computer, at some point internally, there will be limits
00:33:48.440 to the serial depth of computation, the speed of the serial computation it can perform,
00:33:52.520 because it will just take a long time for one part of the computer to communicate with
00:33:56.440 another part.
00:33:57.960 There are also limits like the black hole limit.
00:34:00.180 Like if you really made a really sufficiently large computer, eventually the mass of this
00:34:05.960 would collapse it into a black hole, right?
00:34:08.660 There are limits to the amount of, like, energy use.
00:34:15.200 If you want to erase information, there's, like, a smallest amount of negentropy that
00:34:21.640 you have to expend to erase one bit of information.
00:34:24.160 So if you want to do irreversible computation, there are limits to how efficiently that can happen.
00:34:29.940 And ultimately, there's also a limit in the universe to the amount of matter we can lay
00:34:34.360 our hands on, starting from the Earth today.
00:34:36.360 Even if we travel at the speed of light, there's a finite sphere that we can access.
00:34:42.520 Things beyond that will have receded from us by the time we get there, because the universe
00:34:46.480 is expanding.
00:34:49.640 So there are these various physical limits, assuming our current understanding of physics
00:34:54.860 is correct.
00:34:55.440 But these limits are many, many orders of magnitude above, like, as in not just two or three.
00:35:01.960 I mean, in terms of the mass we can access, that would be on the order of maybe 10 to the
00:35:08.700 power of 20 stars or so.
00:35:12.500 And each one of those could hold, you know, maybe 10 to the power of 30 times more beings
00:35:19.640 around it than, you know, have lived around the Earth today.
00:35:23.760 So there's a lot of room above our heads before we hit the ceiling of what physics permits.
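The limits Bostrom lists here, signal propagation across a large computer and the energy cost of erasing a bit, can be put into rough numbers with a short sketch. The Earth-diameter size and the 300 K temperature are illustrative assumptions, not values from the interview; the physical constants are the standard defined values.

```python
# Back-of-the-envelope numbers for two of the physical limits mentioned:
# signal latency (speed of light) and bit erasure (Landauer's principle).
import math

C = 299_792_458.0    # speed of light in vacuum, m/s
K_B = 1.380649e-23   # Boltzmann constant, J/K

def signal_latency(size_m: float) -> float:
    """Minimum one-way signal time across a computer of the given size."""
    return size_m / C

def landauer_limit(temp_k: float) -> float:
    """Minimum energy in joules to erase one bit at temperature T
    (Landauer's principle: E = k_B * T * ln 2)."""
    return K_B * temp_k * math.log(2)

# A planet-sized computer (~1.27e7 m, roughly Earth's diameter) has an
# unavoidable ~42 ms one-way internal latency, capping serial computation.
print(f"{signal_latency(1.27e7) * 1e3:.1f} ms")

# Erasing one bit at room temperature (300 K) costs at least ~2.9e-21 J,
# which is the floor for irreversible computation that Bostrom refers to.
print(f"{landauer_limit(300.0):.2e} J")
```

These are floors, not forecasts: real hardware sits many orders of magnitude above the Landauer limit, which is exactly the headroom Bostrom is pointing at.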
00:35:31.560 Crazy question for you.
00:35:32.780 Is this possible?
00:35:33.920 Okay.
00:35:34.080 You seem like you're going to say yes, because your general answer is anything is possible
00:35:40.840 with AI, because forever is a long time.
00:35:43.780 So eventually they can figure it out.
00:35:45.220 So let me kind of give this to you and try to either trash it or say, no, that could possibly
00:35:49.700 happen one day.
00:35:51.180 So, you know, half the time, in America especially (I've lived in Iran, I've lived in
00:35:56.940 Germany, I've lived here), when you vote for a president, people will typically
00:36:03.660 hate the president for personal reasons.
00:36:07.620 They'll say, I don't like that he's this, he's that.
00:36:09.180 So it's easier to demonize a president.
00:36:12.000 It's easier to demonize a Biden, a Trump, an Obama, a Bush, a Clinton, a Reagan.
00:36:17.800 It's easy to demonize because it's a human being, right?
00:36:20.060 Versus, you know, if there was no face on the president, and it was run based on predictive
00:36:29.260 analytics, data on what's the best decision to make in this specific area. Because we
00:36:33.900 can run stats and data to say the right tax system to run; based on, you
00:36:39.080 know, what the system figured out here with this computer, with this AI, we should never
00:36:43.440 tax people more than 22 and a half percent.
00:36:45.440 I'm just throwing a number out there to you, right?
00:36:47.100 Right. And so then, you know, this other situation says, based on the amount of wars
00:36:54.300 that we've gone through and the conflict that we've had with India, with China, with Iran, with
00:36:57.540 this, this, that, the right move right now is to do nothing or let's leave or let's stay.
00:37:02.100 Let's stay five more months because it's all data, data, data, data, data, data, right?
00:37:06.560 Do you think there could come a time where we no longer have people running for presidents?
00:37:10.220 We have systems and computers and AI that we vote for?
00:37:13.260 I mean, I think that would come late. I think there's a lot of, kind of, inertia embedded in the
00:37:20.360 constitutional framework. So I imagine there would probably for some time be at least a
00:37:24.420 figurehead. Now, you can always ask how much is this figurehead actually the thing that
00:37:28.340 controls what the government does? Sure.
00:37:30.420 I think you could ask that question already in the present world. And I think.
00:37:35.720 True.
00:37:36.020 In most cases in history, there's like one salient figure, but then there is some kind
00:37:40.780 of elite around that. There's some institutions that constrain their freedom of action. And
00:37:45.960 maybe some of what constrains them in the future will increasingly be these big information systems
00:37:52.080 and algorithms running on them. Just like today, like say the market kind of constrains what
00:37:57.580 you can do as a president. And then maybe, you know, there will be social networks that constrain
00:38:02.020 you. And then maybe there will be other interactions between cyber systems that also
00:38:06.620 constrain what you can do.
00:38:09.820 Because how can you talk to a system, right? You're like, we got three systems here to vote
00:38:15.420 for. The predictive analytics ran by such and such university believes these are the decisions
00:38:20.120 for us to make. And in the last 200 years, if we would have run it based on this system,
00:38:24.660 here's what it would have done. So we can't say, that computer, I don't like him. He hurt my feelings,
00:38:28.500 or it hurt my feelings. We can't do that. But what you did say with your answer is maybe
00:38:34.180 AI does have a certain limit. And the limitation it has is it can never be a president.
00:38:39.380 I'm trying to figure out.
00:38:40.700 Well, I mean, that's, like, if we decide that. But I do think that
00:38:43.700 the limit of not being able to become the object of our hatred, I don't think that's a real
00:38:50.480 limit. I wouldn't underestimate the human ability to hate. And I think we can hate individuals,
00:38:55.500 and we can hate nations and institutions and companies and systems and all kinds of things.
00:39:00.560 I don't see why people couldn't bring themselves together to hate some like predictive algorithm
00:39:05.760 as well, if that's what it comes to.
00:39:07.760 I guess the part where I'm going, when I have the debate over God and I'm the skeptical guy,
00:39:13.180 I'm the guy that got kicked out of Bible study in Iran because I'm like, if there's really a God,
00:39:16.560 why the hell am I going to? Why am I seeing so many people dying? Why are we being bombed by Saddam Hussein?
00:39:21.100 So, but the conversations about God were logic, emotion, feelings, you know, decision-making process,
00:39:30.160 choices, you know, to me, it eventually gets to a point where, you know, I don't know if technology
00:39:36.700 can build feelings. I don't know if technology can build feelings because you said, I'm sure
00:39:42.580 we can figure out a way to hate something that's a machine. Believe me, I hated my Escalade
00:39:46.560 last week because I had no clue what the hell was going on with the suspension. So yes, you're right.
00:39:50.420 I can hate a machine, because I hated my Escalade last week. Okay. And it couldn't figure out how to
00:39:55.780 go down. I'm going down to Miami and I'm bumping all over the place. We figured out there were two
00:39:59.360 things I had to fix, and it was addressed. So yes, you're right. We can do that. But I'm just trying
00:40:03.980 to see what is the one area that we have an edge over machines where long-term machines are going
00:40:09.700 to need us rather than us desperately needing machines to survive.
00:40:16.160 Yeah, I don't think there will be such an area. I think if machines need us, it's because somehow
00:40:22.320 they care about us. And either that or there are some other machines that care about us. So that
00:40:28.520 might give the first machine an instrumental reason to care about us. Yeah. But otherwise, I think from a
00:40:35.580 practical point of view, I think ultimately there will be no physical outcome that we would be able
00:40:40.960 to produce through our muscles or brains that some intelligence system couldn't produce equally well
00:40:45.860 or better. You think so? Even feelings? Yeah. I mean, right. So that's
00:40:52.320 like the superintelligence, obviously; if they are generally much more capable of solving problems than we
00:40:57.480 are, then... True. And then, like, I mean, is it our fingers that are going to be so intricate that you
00:41:02.780 couldn't have a robotic manipulator that is able to do the same? No, I think that obviously with
00:41:07.240 nanotech, they will have much more capable actuators as well. And so then at the fundamental
00:41:12.900 level, there is nothing we can do. But it might well be that we have a position in the scheme of
00:41:17.540 values that makes us very important. I mean, this is the same way. It's like maybe I have like a
00:41:24.780 grandmother or something who is dearly beloved and maybe she can't really do much. She can't hold a job
00:41:29.800 and earn income or, you know, serve any practical function. But if people care about her, then there is something
00:41:37.420 only she can do, which is to be alive and be happy. Nobody else can do that for her. And similarly, our role
00:41:44.760 in the future might be the people who kind of actually enjoy this whole situation and the people for whose sake
00:41:51.500 all of this work is being done. And that would kind of, in some sense, be a more dignified role than
00:41:56.320 being the worker and the arms and the legs of the whole apparatus.
00:42:00.720 So then if that's the case, like intuition, you know, there's a different thing about intuition. You sit down with somebody
00:42:05.580 afterwards, you say, babe, what did you think about the guy? I don't trust him. Yeah, me neither.
00:42:10.600 You know, can AI get to that level to have the intuition that we have? Because sometimes it's not like,
00:42:15.500 you know, 19 keys to have an intuition. It's kind of your gut, your raising, your upbringing, who you're around, to get a gut
00:42:22.100 feeling. Can they duplicate feelings? Can they duplicate eyes looking at each other, that eye contact?
00:42:29.760 I don't know. All I'm looking for is this is your world. I'm just trying to get smarter by talking to a guy
00:42:34.980 who is living in the world where this is what you consume 24-7. I don't consume this 24-7.
00:42:40.900 I'm just trying to get 1% smarter about the industry you're in than I was 45 minutes ago.
00:42:46.920 And that's been my goal the entire time. So I'm going to give you the last thoughts here.
00:42:50.860 And here's how I'll phrase the question. And you can answer it any way you want. As a parent,
00:42:55.920 as a human being, are you optimistic about the future? Are you curious? Are you kind of like,
00:43:03.740 man, I wonder how crazy these things can get? Are you like, man, what is the limitation on how much
00:43:09.120 we can take technology and how much can this technology advance? What's your feeling about
00:43:14.860 the future? If you were to say, this is my feelings about the future.
00:43:18.600 I would characterize myself as a fretful optimist.
00:43:23.160 Frightful optimist.
00:43:24.720 Fretful.
00:43:25.400 Fretful optimist.
00:43:28.060 Got it. You want to unpack that a little bit?
00:43:29.680 It's pretty self-explanatory. I mean, I think, given our current state of ignorance,
00:43:38.380 we can't preclude either extremely good outcomes or very bad ones. And we live in this uncertainty,
00:43:47.640 and we'll see how it pans out.
00:43:50.120 I like that. Fretful optimist. Nick, thank you for being a guest on Valuetainment. We're going to put
00:43:58.880 the link below to your book, Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller.
00:44:05.320 We'll put the link below. Thank you so much for making the time and being a guest on Valuetainment.
00:44:09.800 Great. I'm glad we could make it happen.
00:44:11.360 You got it. Take care, buddy. Bye-bye.
00:44:13.300 What do you think? Do you think AI is going to take over? Do you think it's going to get to a point
00:44:16.920 where we may have a president that's a robot one day? Curious to know your thoughts. Comment below.
00:44:22.240 If you enjoyed this interview, give it a thumbs up and subscribe to the channel.
00:44:25.360 And you may also enjoy another interview I did with Pablos Holman, who's a futurist, I believe,
00:44:29.660 and a hacker. Very interesting mind. If you've not seen that, click over here to watch it.
00:44:34.340 Take care, everybody. Bye-bye.