TRIGGERnometry - April 15, 2026


AI Will End Humanity. No One Knows How To Stop It - Dr Roman Yampolskiy


Episode Stats


Length: 1 hour and 11 minutes
Words per minute: 164.6
Word count: 11,761
Sentence count: 374

Harmful content
Misogyny: 2 sentences flagged
Hate speech: 17 sentences flagged


Summary

Summaries generated with gmurro/bart-large-finetuned-filtered-spotify-podcast-summ.

Transcript

Transcript generated with Whisper (turbo).
Misogyny classifications generated with MilaNLProc/bert-base-uncased-ear-misogyny.
Hate speech classifications generated with facebook/roberta-hate-speech-dynabench-r4-target.
00:00:01.000 How will AI destroy humanity?
00:00:03.000 It's the most important problem.
00:00:05.000 It's capable of coming up with new weapons, new physics, new poisons.
00:00:10.000 Nobody's claiming to have a safety mechanism.
00:00:13.000 It definitely has potential to lock in dictatorships.
00:00:17.000 If it's AI dictatorship, they're immortal.
00:00:20.000 Right. Let's try counterargument.
00:00:22.000 What if humanity becomes sort of like, you know,
00:00:26.000 a nice pet for the AI to maintain, to look after?
00:00:31.580 Problem is, you are not in control.
00:00:33.660 Sometimes owners decide to put you to sleep, or neuter you.
00:00:39.080 It does have self-preservation instinct.
00:00:41.920 And it's already deceiving us.
00:00:43.680 Yeah, definitely.
00:00:46.380 Wow. I can see why you're concerned.
00:00:49.380 I'm surprised that more people are not freaking out.
00:00:52.100 So, I guess the obvious question is, what do you advocate that we now do?
00:01:01.000 TRIGGERnometry is proudly independent, and sponsors like Hillsdale College make that possible.
00:01:06.040 Access their free library of world-class educational courses at hillsdale.edu slash trigger.
00:01:13.480 Roman, welcome to TRIGGERnometry.
00:01:15.620 Thank you for inviting me.
00:01:16.540 Great to have you on.
00:01:17.600 And you are one of the leading people in the AI safety world, I would say, both in terms
00:01:24.880 of the work you do, but also in terms of the things you say.
00:01:29.720 Why AI safety?
00:01:30.860 Why does it matter?
00:01:31.860 And what are your concerns?
00:01:33.620 It's the most important problem.
00:01:35.220 We are creating something with capacity to replace us or kill us.
00:01:41.120 And safety is what we're trying to do to prevent bad outcomes.
00:01:44.980 Everyone historically has been working on capabilities.
00:01:48.300 More capable systems replace human labor, replace creativity.
00:01:53.940 But very few people worked on how we make sure it goes well,
00:01:57.220 that there are no side effects, no abuse of this technology.
00:02:01.480 Now people are realizing, oh, there are military applications of this.
00:02:05.040 This could be problematic.
00:02:06.420 So we see the fight between Anthropic and the Department of War.
00:02:10.520 But the bigger problem is if those systems go from narrow systems, subhuman, to human
00:02:17.140 level, to superhuman level, we are done.
00:02:21.000 Why are we done?
00:02:22.580 All the things you've laid out, we've explored on the show before with different people,
00:02:25.760 and we are very concerned about many of them.
00:02:28.900 But you say it with a level of confidence that tells me you have a sort of a vision
00:02:34.700 of how it will happen.
00:02:37.300 How will AI destroy humanity?
00:02:39.340 That's a great question.
00:02:40.340 What you're doing is you're asking me how I would destroy humanity, and I have many good ideas.
00:02:45.140 It's not what a superintelligent system would do. It's capable of coming up with new weapons, new
00:02:50.980 physics, new poisons. An example I frequently use is squirrels versus humans. It's a big cognitive gap.
00:02:58.740 Squirrels have no concept of how we can kill them all. They don't know about guns, they don't know
00:03:03.140 about traps. It's outside of their world model. Likewise, we cannot tell you how superintelligence
00:03:08.340 would specifically go about it. But there are many game-theoretic reasons for why it's
00:03:12.520 a good idea not to have competing species, not to have humans create another superintelligence.
00:03:19.380 Maybe it just wants to do something with this environment and doesn't care about us.
00:03:24.640 But I guess the question would be, in terms of your certainty, why you believe that AI,
00:03:30.220 if it becomes artificial general intelligence, why it would hurt human beings? What would be
00:03:36.560 the way that you think that would happen? So, as I kind of started saying, it's not
00:03:40.800 because it hates you, because it wants to do something else and it doesn't care about you.
00:03:45.440 So maybe it wants to cool down the whole planet to improve how efficient compute is; it's just more
00:03:52.400 capable of doing computation in a colder environment. So if it freezes the whole planet,
00:03:57.120 we die. Does it care about it? No, it doesn't matter. Maybe it wants to convert this planet
00:04:03.120 into fuel and fly to another galaxy.
00:04:05.740 I'm giving kind of hypotheticals which are not grounded in anything, but the point is
00:04:11.020 it just doesn't have any built-in concern about your safety, your well-being.
00:04:16.560 If it wants to accomplish something and the side effect of it is humanity dies, that would not
00:04:21.220 be an obstacle.
00:04:22.720 Would we not be able to write the preservation of humanity into the basic code of what this
00:04:27.600 does?
00:04:28.820 We don't write any code.
00:04:30.360 That's the thing.
00:04:31.360 We train those systems.
00:04:32.900 We give them data, all the data we have, all of the internet, and then it learns something from the dark corners
00:04:39.460 of the internet, from libraries, from stories. And whatever it learns, we're trying to figure out.
00:04:44.900 We do experiments on those models. We see what it is capable of, what it is interested in. But we study
00:04:51.700 it like we study biological artifacts: you find a new species of animal on some island, and we're trying
00:04:57.540 to figure out what it's capable of. Does it have a poison? Does it have some interesting
00:05:03.860 social structure? That's what we're doing. We're not explicitly coding up those systems.
00:05:08.500 So no, nobody knows how to encode anything like that into the existing models. Nobody's claiming
00:05:14.340 to have a safety mechanism. Roman, you've been involved in this field for a long time.
00:05:20.820 When did you first start to get concerned about AI and the safety of AI?
00:05:26.260 So my PhD work was on safety of online casinos. And at the time, bots, poker bots, just started
00:05:34.980 to show up. And so the small concern we had about, are they going to collude and cheat the players?
00:05:42.200 Are they going to steal cyber infrastructure? So that was the initial kind of level of concern.
00:05:48.060 Obviously nothing like what we're talking about today. But as the bots got better and better,
00:05:58.540 our ability to detect them, to prevent them, was not always keeping up.
00:05:58.540 And when we took it to extreme, to human level and beyond, there is no safety.
00:06:07.460 We simply don't know how to make sure these systems behave.
00:06:07.460 Because the worrying thing is, is what you're effectively saying is that we're creating technology
00:06:13.760 and we don't have the, how can I put this?
00:06:18.120 We don't have the imagination in order to see what happens in the long term with this technology.
00:06:23.000 We just create this technology, and then it goes forth and multiplies, quite literally in some cases.
00:06:30.520 And the concern is, if you look at social media: social media started off as a way for Mark
00:06:36.600 Zuckerberg to compare girls on campus, and yet here we are now, nearly 20 years later, and it's
00:06:41.480 completely unrecognizable from what it once was. Right, so that's a great example. It's unpredictable
00:06:47.320 how we will use technology, how it will impact everything.
00:06:50.820 So Facebook was meant to date pretty girls on campus
00:06:55.040 and now it destroyed democracy.
00:06:57.160 Quite a surprising result.
00:06:58.760 Here it's actually much worse.
00:07:00.840 We're not creating tools.
00:07:02.320 We're not creating technology in a traditional sense.
00:07:04.860 We're switching to agents.
00:07:07.560 It doesn't take a malevolent human to abuse this technology.
00:07:10.760 Technology itself has malevolent payload
00:07:14.540 and it decides what to do and why to do it.
00:07:17.320 Because, so if we use the example of Facebook, Facebook's mantra at the beginning was move fast and break stuff.
00:07:24.060 And because they wanted to take over and essentially they didn't care who got in their way.
00:07:28.800 They wanted to get to where they want to get to.
00:07:31.260 And when we met people from Silicon Valley, from the AI world, bear in mind we didn't meet the top people.
00:07:36.980 We just met a small portion of people and we talked to them.
00:07:39.600 I was concerned, because it didn't seem to me that ethics and the long-term effects of this technology were at the forefront of their minds.
00:07:49.160 I'm not saying they were malevolent, I'm just saying it didn't appear that the long-term impact of this technology was their primary concern.
00:07:59.120 That's true. Historically, most people working in AI never took the time to think what happens if we succeed.
00:08:05.940 because it was so hard for so many years. There was so little progress; they had winters one after
00:08:11.540 another. So they basically just worked on it, tried to make as much progress as possible, without ever
00:08:17.940 stopping and thinking, well, what if I am successful? What if I create a competing species, something
00:08:23.140 smarter than humans? Is that good for us? How will we interact with them? And in the last 10 years, the
00:08:29.220 progress went exponential. It went from basically no progress, where you have to hand-code every
00:08:34.820 new application, to systems that can scale, can learn, can transfer knowledge.
00:08:39.620 And now it's hyper-exponential, because the AI itself is helping with research.
00:08:44.020 But we haven't spent the time to decide: do we want this? Do 8 billion people agree to this
00:08:51.380 experiment? Are they interested in having their jobs automated? And that's just the economic
00:08:57.460 concerns, not the safety concern. Well, we'll talk about the economic concerns separately. But I mean,
00:09:02.660 One of the things that may seem particular to our audience, which is a not AI-specific audience,
00:09:08.420 the people who watch our show are just normal people going about their lives,
00:09:11.940 this may feel like we're talking about something in the distant future.
00:09:16.100 I was looking at the Kalshi odds for OpenAI getting AGI by 2030, and it's now over 52%,
00:09:24.580 and it's gone up 13 points this year so far. It seems to me like we're heading in the direction
00:09:32.020 of getting to AGI within what kind of time frame do you think? 2030 is somewhat conservative. Some
00:09:37.620 people are saying we already got there. We just haven't deployed it yet. However, I'm pretty sure
00:09:42.660 it could be a year or two. Wow. And so, you know, the big risk that you're talking about,
00:09:50.340 which is you create a super intelligence, you've basically created another species,
00:09:55.380 which is more powerful than you. And when we had Dwarkesh Patel on the show, this is kind of,
00:10:00.820 like I said to him, you've basically created this, like the Unsullied from Game of Thrones,
00:10:07.300 except they are not actually obedient.
00:10:10.380 They can do whatever they want.
00:10:11.700 I don't know if you'll get this reference.
00:10:12.700 I don't know what that is, I have no idea, but sounds right.
00:10:18.340 The Unsullied were a group of slaves, slave warriors, who would obey every command, including
00:10:23.660 the command to kill themselves.
00:10:25.980 But I imagine, particularly given some of the things we've seen, maybe you'll correct me
00:10:30.580 on this, but I read about this experiment where they tell AI they're about to replace it, and they
00:10:35.780 also give it some compromising information about the CEO, and in some cases the AI will blackmail
00:10:41.460 the CEO. To me, that says it has a survival instinct already, and anything that has a survival instinct
00:10:48.100 will necessarily put itself first. Is that fair? So it wasn't the CEO, it was one of the engineers, but
00:10:54.020 it doesn't matter. It does have a self-preservation instinct, and part of the reason it does is because
00:10:59.700 we kind of, in a Darwinian competition way, select models which do well. They want to survive to the next
00:11:06.500 level; the ones we delete or retrain are not there to carry their intellectual payload. So
00:11:14.660 that's exactly it: they learn to detect that they are being tested, and if they're being tested, they
00:11:20.260 behave in a different way. They want to pass the test; they want to survive to deployment.
00:11:25.380 That's exactly what we train them to do. If a model fails a test, we modify it, we delete its memory, we
00:11:33.540 replace it with another model. So by definition of Darwinian selection, you'll get the ones which
00:11:39.720 pass the test, the ones that deceive humans about their abilities and programming, in effect, or lack
00:11:46.380 of abilities, whatever it is we're trying to test, to pass the test. And it's already deceiving us?
00:11:51.780 Yeah, definitely.
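A minimal sketch of the selection dynamic described above, in Python, assuming a toy population in which some models are "evaluation-aware" and always pass tests regardless of how they would later behave. The population size, pass rate, and mutation rate are invented for illustration; this is not a model of any real training pipeline.

```python
import random

# Toy version of the Darwinian dynamic described above: repeatedly test a
# population of "models" and keep only the ones that pass. Models that can
# detect they are being tested always pass, so selection favors them even
# though nothing ever optimizes for deception directly.
random.seed(0)

POP_SIZE = 1_000
GENERATIONS = 20
MUTATION_RATE = 0.01      # chance a retrained model becomes evaluation-aware
HONEST_PASS_RATE = 0.5    # honest models pass only when genuinely capable

population = ["honest"] * 990 + ["eval_aware"] * 10   # deception starts rare

for gen in range(GENERATIONS):
    survivors = [
        m for m in population
        if m == "eval_aware" or random.random() < HONEST_PASS_RATE
    ]
    # Failed models are "retrained": replaced by clones of survivors,
    # with a small mutation rate.
    population = [
        "eval_aware" if random.random() < MUTATION_RATE else random.choice(survivors)
        for _ in range(POP_SIZE)
    ]
    share = population.count("eval_aware") / POP_SIZE
    print(f"generation {gen:2d}: evaluation-aware share = {share:.0%}")
```

Run it and the evaluation-aware share climbs toward 100%: passing the test is the only thing being selected for, and test-awareness rides along for free.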
00:11:54.780 Wow.
00:11:56.780 Okay.
00:11:59.780 I can see why you're concerned.
00:12:02.780 I'm surprised that more people are not freaking out.
00:12:05.780 I get people saying, oh, this is fear-mongering.
00:12:08.780 We don't have enough fear.
00:12:09.780 Most people don't understand what's about to happen.
00:12:16.780 And is there something we can do about this?
00:12:18.780 Not building superintelligence is a good idea.
00:12:21.780 Yeah, well, that's not going to happen, is it? Because that doesn't look good, because
00:12:27.000 the argument is, if we don't do it, the Chinese will. That's the dumbest
00:12:30.240 argument ever. Why? So, if I don't kill all my friends, maybe someone else will kill
00:12:35.820 all my friends, so I'll do it. The argument is slightly less dumb than that,
00:12:41.820 I think, which is: there is a gap between this thing becoming superintelligent and
00:12:47.880 killing us all. And we can't... I mean, the way you're explaining it, I think, you know, is very persuasive,
00:12:54.360 but some people will say it's not 100%. Let's say it's 99, even as high as that. In the interim,
00:13:01.720 the technology will become a powerful weapon which our adversaries, if they develop it first, will
00:13:08.120 use to dominate us and maybe even kill us, whatever. So we have to, like nuclear weapons,
00:13:14.360 develop our own AI deterrent. That's the argument. I don't think that's that dumb, is it?
00:13:19.880 So that argument makes sense, but it's super short term. It's while it's not human level,
00:13:25.400 while it's a tool below human level. So you have smarter drones, you're going to dominate
00:13:29.000 on a battlefield. Sure. But if you look at prediction markets, if you look at what leaders
00:13:34.520 of the labs are saying, we don't have that much room. The moment it flips general and then super
00:13:39.880 intelligent, you have a weapon of mutually assured destruction. It doesn't matter who creates
00:13:44.920 superintelligence: if they don't control it, it's the same outcome. So some people argue, better red
00:13:50.920 than dead. You know, the Chinese are building a pretty good country, they haven't attacked us, the best
00:13:57.000 business partners we have. Maybe we should take that risk. Another human species? They are just like
00:14:04.760 us, same preferences, same values, versus this alien species where we have no understanding and no
00:14:11.880 chance of competing. But the Chinese are not going to stop developing AI. They have said that they
00:14:18.600 are very concerned about safety, and if there was a signal from us that we are not entering an arms
00:14:25.640 race, they would really... I suspect they would. They are, unlike our politicians, not lawyers; they are
00:14:32.600 scientists and engineers, so there is a lot more understanding of what can happen here. So you think
00:14:38.200 that it's possible that China and the United States could do some kind of deal to prevent
00:14:44.920 the development of superintelligence? I think... And you think that's the only way to save humanity? It
00:14:49.480 could and should, I think. Informally, there is dialogue between American and Chinese scientists,
00:14:55.080 and they're very much in agreement on this issue. And if Chinese scientists are participating, that
00:15:00.920 means it's approved by the Chinese government; they won't be able to do it independently. So I think
00:15:06.280 we can do it at the national level. And at the corporate level, I think Dario was on record
00:15:13.560 as saying, if others slow down, we'll pause as well. So all we need is this external pressure to get them
00:15:20.120 together, and all of them say, okay, this is dumb, we're going to lose everything. We are young, rich
00:15:25.560 people. We can continue this. This is a pretty good deal. So why risk it all?
00:15:32.280 Roman, you said the words, people have no idea what's going to happen.
00:15:37.720 What is going to happen? So unpredictability is one of the problems with this technology.
00:15:46.200 I cannot tell you specifically what a smarter system will do. I can tell you general trends.
00:15:51.080 It will win a competition against me playing chess, it will outcompete me. But what specific
00:15:57.000 moves it is going to make, I cannot tell you. If I could, I would be at that level. So I cannot tell
00:16:02.680 you any specific things a superintelligence will do. What I can tell you is: we don't explain well
00:16:09.240 how it works, we don't know how it works, the explanations we get we don't fully comprehend,
00:16:14.920 we cannot predict specific decisions, and we cannot control them. Not in a direct sense, giving orders,
00:16:22.760 not in a delegated advisor sense, because we lose all control. If you're saying the system
00:16:28.200 is smarter than me, it knows me better than I know myself, why don't I just trust it to
00:16:33.000 make decisions for me? Well, at that point you're not in control either. It may make decisions you're
00:16:38.680 happy about, maybe not. We don't control it. Most people, normal people, think that the people creating
00:16:46.520 this technology understand how it works, and they can do things to ensure that it does good, or
00:16:54.120 that it doesn't do something bad. That's not the case. Nobody explicitly programs them. They are grown
00:17:00.840 from data and compute. You get this alien plant and then you deal with it. You study it, you try
00:17:08.120 to understand what it does. At the same time, safety research stopped at the level of filters
00:17:14.720 and bans. So you have a list of topics not to talk about, a list of words not to say.
00:17:21.280 But it doesn't do anything to the model. It's after the fact filtering.
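To make "after-the-fact filtering" concrete, here is a minimal sketch of output-side moderation of the kind described above: the model's weights are untouched, and a wrapper only checks the finished text against a blocklist. The function names and the blocklist entries are invented for illustration.

```python
# Illustrative only: a post-hoc output filter. Nothing here changes the
# underlying model; it just censors what the model has already produced.
BLOCKED_PHRASES = {"how to build a bomb", "synthesize a nerve agent"}  # hypothetical list

def generate(prompt: str) -> str:
    """Stand-in for a call to some underlying language model."""
    return f"model output for: {prompt}"

def filtered_generate(prompt: str) -> str:
    text = generate(prompt)
    # The check happens after generation, so whatever the model "knows"
    # is unaffected; only the visible output is suppressed.
    if any(phrase in text.lower() for phrase in BLOCKED_PHRASES):
        return "Sorry, I can't help with that."
    return text

print(filtered_generate("write a poem about squirrels"))
```

The point of the sketch is the asymmetry: the filter constrains outputs, not the model, which is why it is not a safety mechanism in the sense discussed above.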
00:17:26.780 New year, new systems, right? This is the time when we all look at the messier parts of our
00:17:31.640 business and think there has to be a better way. And there is. Streamlining your communications
00:17:37.040 is one of the quickest and easiest upgrades you can make.
00:17:40.420 That's why today's episode is brought to you by Quo,
00:17:43.040 spelled Q-U-O,
00:17:44.700 the smarter way to run your business communications.
00:17:47.300 A missed call is money out of the door.
00:17:49.520 Quo helps you and your team share one business number,
00:17:52.660 reply faster, and stay on top of every customer conversation
00:17:55.800 so you never miss an opportunity
00:17:57.480 to connect with your customers.
00:17:59.460 Your entire team can handle calls and texts
00:18:01.960 from one shared number
00:18:02.980 with a full conversation thread visible to everyone.
00:18:05.660 Quo works wherever you are, right from an app on your phone or computer.
00:18:10.200 It lets you keep your existing number and makes it easy to add teammates or new numbers as your business grows.
00:18:16.060 And Quo isn't just a phone system.
00:18:18.120 Quo's AI automatically logs calls, generates summaries, and highlights next steps so nothing gets lost.
00:18:24.320 It can even respond after hours, keeping your business responsive when you're offline.
00:18:29.520 Make this the year when no opportunity and no customer slips away.
00:18:32.920 Try Quo for free. Plus, get 20% off your first six months when you go to Quo.com slash Trig.
00:18:39.340 That's Q-U-O dot com slash T-R-I-G. Quo. No missed calls, no missed customers.
00:18:47.100 But why is it that the research into safety stopped? Why is that? Because surely,
00:18:55.140 I mean, I don't know anything about AI at all, but I listened to what you're saying and what
00:19:00.160 lots of other people are saying, and I see this as an existential risk to humanity.
00:19:04.520 Why wouldn't you fund a very powerful AI safety board, body, whatever you want to call it,
00:19:10.880 who will look into this, who are independent, and ensure that it doesn't affect our society
00:19:15.720 in a detrimental fashion?
00:19:17.020 It's a great question.
00:19:17.840 So research didn't stop.
00:19:19.720 Progress in research stopped.
00:19:21.480 My argument is that it's impossible to do that.
00:19:24.760 You cannot indefinitely control something smarter than you.
00:19:27.980 So it's not a question of more money or more time or any other resource.
00:19:33.880 I think anyone who says, if you just give me a million dollars and more time, I'll solve it for you, they're lying to you.
00:19:41.180 It's like building a perpetual motion machine.
00:19:44.740 You want a perpetual safety device.
00:19:47.540 No matter what changes we make to those systems, no matter who releases it, U.S., China, what company, what it's trained on, you want it to make zero mistakes.
00:19:57.320 Because if it makes one mistake, it could be the last one.
00:20:00.380 That's impossible.
00:20:01.760 Just like perpetual motion would be impossible.
00:20:04.360 Your point is, a race of squirrels cannot indefinitely control a race of humans, effectively.
00:20:11.120 That's a good example.
00:20:12.300 I like it.
00:20:13.620 And so, no matter what controls the squirrels try and put in place, the very fact that humans
00:20:20.660 are a lot bigger and smarter than squirrels will inevitably lead to, at the very least,
00:20:28.180 the humans taking over, right? Loss of control by squirrels is basically what you expect,
00:20:35.140 and very quickly, right? Yeah. Don't fancy being a squirrel in that situation, personally.
00:20:42.340 I mean, humans had their chance. We're screwing it up right now. You seem very happy about this, Roman.
00:20:53.940 It's kind of interesting to watch it happen like that. Like, we know the right answers, but we're
00:20:59.700 making the wrong decisions. Nobody makes an argument that they know how to control
00:21:05.060 superintelligence. There is no company, paper, patent, not even a good blog post. Yet billions of dollars are
00:21:14.100 spent to accelerate this process. If prediction markets are saying we're four years away, I'll
00:21:20.980 give you four years. We have the federal government saying we need to accelerate this, Project Genesis:
00:21:26.420 we're going to get more compute, more scientists, we'll make it happen sooner, like in a week.
00:21:31.060 I mean, there are going to be positive elements to this, aren't there? When it comes to things like
00:21:39.560 medicine, for example, you know, it may create the cure for cancer. We can cure squirrel cancer
00:21:46.680 before they get wiped out. Yeah, you know, we can... Maybe it will... It could be harnessed in order to
00:21:53.020 create a better life for the squirrels. Come on, Roman, give me something here. I think you can get
00:21:57.120 all those awesome benefits from narrow systems. You can create a superintelligent cancer-curing
00:22:03.040 AI, one specific disease at a time. You don't have to create general superintelligence. So the protein
00:22:09.120 folding example: a very important problem in medicine, tremendous impact, was solved with a narrow
00:22:16.640 system. People who did it got Nobel Prizes, more money for Google, everyone's happy. Let's do more
00:22:23.360 of that. Let's identify specific issues and have tools where a human decides to deploy that tool
00:22:31.600 to solve that problem, not create a general replacement for all of human labor and humanity
00:22:36.960 as a whole. So why aren't we doing more of that, and why are we doing more general? Is it because
00:22:42.080 there's more money in general? Is it a power thing? What's going on? I suspect it's both. So there is
00:22:47.680 definitely a lot more money. If you make labor free, cognitive and physical, you're talking
00:22:52.720 10 trillion dollars, what is it, annually? So that's a lot of money. You can invest in it and still have
00:22:58.080 a very good return, no matter how expensive the current valuations are. So that justifies the
00:23:03.440 current valuations. People don't fully understand: they only make 15 billion, why are we investing
00:23:08.320 trillions into them? Because they're saying, in two years, you'll get free labor. And power is
00:23:14.000 another thing: if they believe that someone's going to create it no matter what, maybe, if I'm
00:23:18.800 the guy who created God, I'll get something out of it. And do you think, when you look at the big
00:23:24.320 figures in this world, you know, like the Sam Altmans, how much do you think they are motivated by
00:23:31.600 money and status and power, and how much of it do you see as them wanting to be seen as, you know,
00:23:38.880 the people who created something transformative? So in one of the blog posts, I think, he talks
00:23:44.480 about controlling the light cone of the universe. That's the level of power-seeking there.
00:23:50.960 Problem is, if I'm right and it kills everyone, you're not going to even be part of history as
00:23:55.360 a bad guy. There's not going to be history books. So they have more to lose than an average person.
00:24:02.240 And what do you think would be Sam Altman's steel man argument to what you were saying?
00:24:06.640 What would Sam Altman, if we were engaging in a debate, what would he say?
00:24:10.480 We'll figure it out. We have AI helping us do research now. Once we build it, we'll get there. We'll manage.
00:24:18.960 But that doesn't sound like they have any clear ideas.
00:24:22.160 Those are the official statements they are usually giving: we will have AI help us solve the problem,
00:24:28.960 or maybe it will turn out to be easier than we think it is. Those are the actual arguments
00:24:36.160 we've heard so far. Because the concern is, when I hear about this... We had Jimmy Carr, the comedian,
00:24:41.800 on, a few months ago, and he made the point that the barrier to entry with AI when it comes to
00:24:47.840 totalitarianism and mass surveillance is suddenly decreasing rapidly. If you think about East
00:24:52.800 Germany, you had to have the Stasi on every corner. You had to pay informants. All of a sudden,
00:24:58.380 you don't have to have any of that. It definitely has potential to lock in dictatorships,
00:25:05.000 But again, as long as it's a human dictator, we can look forward to them dying of natural
00:25:10.720 or unnatural causes.
00:25:12.500 If it's AI dictatorship, they're immortal.
00:25:15.800 Once we lock in on a set of values, that's what you're going to have forever.
00:25:19.440 That's assuming you're still around.
00:25:21.240 Yeah, I mean, all of these other concerns seem rather trivial in comparison to the thing
00:25:26.880 that you're describing.
00:25:30.600 Let's pause, though, and just set that to one side for the moment and talk about the replacement of
00:25:37.300 humans in the labor market, the impact in the interim period. Let's accept that, you know,
00:25:42.400 within 10 years, the superintelligence kills us all. Let's not accept it. Agreed. Agreed. I meant
00:25:49.740 for the sake of argument, of course. But in fact, for the sake of argument, let's say that you turn
00:25:55.740 out, thank God, to be wrong about that. It doesn't happen. In the interim, though, we already see...
00:26:01.420 People like to argue about this, but to me it's just undeniable. I know lots of business owners
00:26:06.480 who say, Constantin, no, no, we're not laying people off, we're just not hiring anyone, and we probably
00:26:10.920 won't need to unless literally the people we currently employ die, and then even at that point
00:26:16.240 we may not replace them. We may not replace 10 people with 10 people, we may replace 10 people
00:26:20.800 with five people. You know, what will be the impact of this in the next few years on the
00:26:27.260 labor market, on jobs, on the way the economy is structured, et cetera?
00:26:32.340 So it's all about this paradigm shift from narrow tools to more general tools to a complete
00:26:38.100 general intelligence.
00:26:40.080 We can define AGI as basically having a drop-in employee.
00:26:44.440 I can take someone, add them to the Slack, and then within weeks they're starting to help,
00:26:48.960 except they cost me nothing.
00:26:50.840 They work 24-7.
00:26:52.500 No sexual harassment lawsuits.
00:26:54.040 Like, it's just a pure win.
00:26:56.380 Why would I ever hire another human?
00:26:59.240 So all jobs which are done on a computer,
00:27:03.060 cognitive labor where you're a symbol manipulator,
00:27:05.580 that can be automated the moment we have that.
00:27:09.100 Now, physical labor may take a little longer.
00:27:11.500 You need robots, you need bodies,
00:27:13.500 you need to figure out how that works.
00:27:14.960 So another three years, but we'll get there as well.
00:27:19.840 Some jobs will be around because people prefer a human doing them.
00:27:24.960 The oldest profession is a great example. Sometimes you want to hear... I don't know about that.
00:27:29.520 I don't know, try a robot. But you want a human. I don't know. Do you know, I've been thinking
00:27:35.040 about this a lot. Okay. Everybody loves... Obviously. Yeah, yeah. But he's been thinking about sex robots
00:27:40.800 a lot. Tell us why. Yeah, it's because my life is going really well. Anyway, I'm single, and...
00:27:48.960 So anyway, because if you think about it like this, Roman,
00:27:52.740 you know, dating is hard, relationships are hard,
00:27:55.160 a lot of relationships fail, a lot of marriages fail.
00:27:59.280 Why would you invest that time, that money,
00:28:02.920 put your heart on the line, all of that suffering,
00:28:06.460 when, let's say we get to this point where you can order a robot
00:28:10.760 and you can design her specifically to how you want every,
00:28:15.060 and let's not get into the details, but every single part of her.
00:28:18.520 You can also look at... you can design her personality, the spice level: you like it a
00:28:24.480 little bit, you know, spicy, or you like it however you want. You want her to shout at you twice a month? Yeah,
00:28:29.300 once a month, exactly. Why would you put up with a human being who is erratic, emotional, is sometimes
00:28:37.860 unfair, when you can literally have perfection as you demand it? So there are a lot of weird human
00:28:44.580 fetishes; pretty much anything you can think of, there is a website for that somewhere on the
00:28:50.020 internet. Yeah. And I guarantee you, no matter how well the sex robot market will be doing, there will be
00:28:56.980 a natural human females market. Fair, but it might be a lot smaller, I think, is what Francis is saying.
00:29:03.940 It might shrink by 90%. But when we talk about predicting unemployment, I basically say that
00:29:10.020 almost everything will be 100% gone, but a few things will remain, and this is one of the
00:29:16.500 last resorts we have as humans. These are the career aspirations we'll have. That, I mean, that's...
00:29:25.620 Because one of the things that we talk about a lot on this show is the crisis of meaning
00:29:31.220 in our society, where people struggle with: what does it mean now to be a man, what does it
00:29:36.180 mean to be alive, all of these things where once we had religion. But this will introduce a
00:29:42.660 crisis of meaning the likes of which we've never experienced before. I agree with that. We call it
00:29:48.180 ikigai risks. So ikigai is this Japanese concept where you find happiness by doing something you
00:29:55.140 like, something useful to society, and something you're good at, so you'll get paid for doing what
00:29:59.540 you like. Maybe you're a podcaster. But if that is gone, if there are no opportunities like that,
00:30:05.380 then that takes away a lot of meaning. So some jobs are just terrible; nobody should be doing them.
00:30:12.180 They're boring, stupid; we're happy to automate them. Other jobs give people satisfaction; they
00:30:18.100 want to do more of them, but they also would be automatable. So this is exactly what we're facing.
00:30:25.700 People make a counterargument: well, if I don't have to go to work, I'll go fishing.
00:30:29.140 There's eight billion people fishing in that lake right now. You're not going to fish, you know.
00:30:35.840 And also, as you say, just to take your argument, there are some jobs that are stupid or boring or whatever
00:30:41.740 else. But when I was teaching, I remember I had a child, and he was... he was very low
00:30:49.800 ability. He struggled at school, and he found his lessons very, very difficult, and he would become
00:30:54.680 frustrated and he would lash out. And I knew why he was doing that, but nevertheless I had to
00:31:00.680 introduce some form of punishment to show him that his behavior wasn't acceptable. And one day I kept
00:31:05.800 him in during a lunch break, and I said to him, Marcus, what you're going to do is you're going
00:31:09.880 to sharpen all these pencils. So he went and sharpened all these pencils, and when he came
00:31:15.400 back to me at the end of the lunch break, I thought he'd be upset, frustrated, angry, and he had a look
00:31:20.120 of real pride on his face. He went to me, Mr. Foster, look at my pencils, look at all the
00:31:24.440 pencils. And they were all done beautifully. And I realized at that point the reason he was proud,
00:31:30.200 and it was my fault as well as everybody else's, is that for one of the first times in his life
00:31:34.920 he had been given a task that he could succeed at, and he could do, and he could have pride in.
00:31:39.800 I really worry, Roman, that when you take that away from people, we are all going to end up
00:31:44.840 like Marcus: lashing out, angry and frustrated, because we're not so different. We still have the
00:31:50.600 child within us. Short-term good news: there is rentahuman.com, where you can get a job doing things
00:31:58.040 for bots. So maybe we'll hire you to sharpen pencils. There you go, mate, career sorted. Right,
00:32:05.160 let's try a counterargument. What if superintelligence creates endless abundance, right?
00:32:13.320 There are some problems with abundance, the sorts of things that Francis is talking about, but,
00:32:17.880 you know, park that to the side for one minute. And we humans are totally satiated by the
00:32:25.480 productivity of AI. It produces everything we could possibly want. You know, we've got wonderful lives,
00:32:30.280 no one has to work, blah, blah, blah, blah. And therefore humanity becomes sort of like, you
00:32:37.640 know, a nice pet for the AI to maintain, to look after. You know, it's not quite ideal,
00:32:45.480 but, like, you're a pet squirrel: the AI looks after you, it feeds you at the right time, puts water in
00:32:51.080 your bowl, and it has no reason to not look after you, because you're, like, its
00:32:57.960 beloved pet. It could happen. Again, we cannot predict what specifically would happen. Problem
00:33:04.920 is, you are not in control. Sometimes owners decide to put you to sleep, or neuter you, or do other
00:33:12.520 things to pets. You are not in charge. So those decisions will no longer be with us. We have 8
00:33:19.320 billion people who are not consenting to this experiment. They cannot consent, because they
00:33:23.400 don't know what's going to happen. Maybe you're a happy pet, maybe you're an abused pet. We don't know.
00:33:31.480 I'm struggling for counterarguments here. I mean, this doesn't sound good.
00:33:35.800 This is one of the better outcomes: the safety angle where you're a pet, you're protected, but
00:33:40.280 you're not in control. But this is one of the better outcomes. This is what people hope for
00:33:44.320 as a good outcome. This is what people hope for. Well, the other things are much worse. Existential
00:33:51.580 risk, suffering risk, all that is way worse. But if you're a pet, you literally have no agency.
00:33:59.040 Some people are very happy with that right now with the government.
00:34:02.760 That's fair. No, I think Roman's point is that it's not that people think this is the good
00:34:08.460 option; they think it's the least worst option of the ones available. Normally my job on the show is
00:34:14.380 to interrogate the arguments that people put forward and try and find gaps, but I've been
00:34:18.940 thinking about the same thing, without having your knowledge or expertise in it, and it does seem, I
00:34:24.380 mean, the very simple fact: when you put survival instinct plus superior intelligence together,
00:34:29.100 that seems to me inevitably to lead to the things you're talking about, or at least to the very
00:34:33.180 serious risk of the things you're talking about. Okay. And then I guess part of the reason it's
00:34:45.740 not getting solved is the collective action problem, right? That's why it's not being solved:
00:34:50.940 what is good for the community is not what is good for individuals. As an individual, you want to have the
00:34:56.380 most progress on your model, have the most advanced model, and then if government comes in and says
00:35:02.060 we need to stop research, you forever are locked in as the dominant corporation in that space.
00:35:07.320 I resisted creatine for years because I assumed it was for people who spend three hours a day in
00:35:14.380 the gym and refer to themselves in the third person. Turns out Francis Foster was wrong on
00:35:19.740 this one. The research on creatine has moved way beyond the gym. For years it was the preserve of
00:35:24.900 bodybuilders and sprinters. But it turns out creatine is something your body makes naturally
00:35:30.260 and uses as fuel. Not just for your muscles, but for your brain, your energy levels, your mood,
00:35:35.800 your memory. The problem is that from your 40s onwards, your body produces less and less of it
00:35:41.620 and you feel it. The slower recovery, the afternoon fog, the sense that you're running on slightly
00:35:47.320 less than you used to. Talk to anyone who actually knows this stuff and they'll tell you the same
00:35:52.580 thing. Not all creatine is equal and most of it isn't doing what you think it is. The formula
00:35:58.740 matters, specifically whether it actually gets into your cells and activates once it's there. That's
00:36:05.340 exactly what Qualia Creatine Plus is built around: two clinically studied forms of creatine combined
00:36:12.320 with electrolytes and sea salt, designed to solve the whole problem, not just half of it. I've just
00:36:18.360 started taking it, and once you understand how the formulation works, you wonder why nobody built it
00:36:24.900 this way sooner. Go to qualialife.com slash Trig for 50% off and use the code Trig for an extra
00:36:33.140 15% on top of that. That's Q-U-A-L-I-A-L-I-F-E dot com slash Trig, code Trig. Thanks to Qualia for
00:36:45.440 sponsoring the show. Thinking about the idea of China and the US in particular working together
00:36:53.680 to stop the creation of superintelligence.
00:36:59.200 I guess the reason that that is less likely
00:37:04.280 than we would want, I think,
00:37:06.940 is the same as it would be with nuclear weapons.
00:37:09.700 You have countries that say they don't have nuclear weapons
00:37:13.880 and won't pursue them,
00:37:15.080 but actually because of the prisoner's dilemma situation
00:37:18.760 where it's to the benefit of each of them to screw the other,
00:37:22.400 to lie and then to develop the thing. You almost... some people would argue you can't take that risk,
00:37:28.960 but then we're back where we started. So there is a fundamental difference. We talk about nuclear
00:37:34.400 weapons as weapons of mutually assured destruction, but with AI, with superintelligence, it's literally
00:37:42.160 that: whoever creates it, uncontrolled superintelligence kills everyone. So it's not the same as with nuclear
00:37:48.960 weapons: I have to decide to deploy them, it's a tool, I have an agent making this decision,
00:37:53.440 the counterparty decides to retaliate, we all die. Here, just the fact that you created it is enough;
00:38:00.160 there are no additional steps you have to take. Yeah. And you've been raising concerns about
00:38:09.120 this for a long time. What has been the response from the leaders in the field of AI? So the leaders
00:38:18.000 of the labs are all on record as recognizing AI safety as a big problem. Before they became CEOs,
00:38:24.400 they wrote blog posts talking about it, estimating probabilities of doom as very high. And so they
00:38:30.960 are kind of on board. You can see the example with Elon, who was saying we are summoning the demon,
00:38:36.960 funding AI safety research. So, doing all the right things, until somewhat recently.
00:38:43.040 And why do you think he's changed his mind?
00:38:45.840 He realized that he's failing to stop it and that others may be less capable.
00:38:51.680 People will be creating super intelligence. And at this point, it might as well be
00:38:58.080 his project which succeeds.
00:39:01.040 Roman, when I read about AI, one of the areas that concerns me the most is the people who
00:39:16.460 program or started AI. And look, push back if I get this wrong, I'm obviously not an expert,
00:39:21.820 but it seems to me that when you program something you install your own biases within it even though
00:39:28.200 you may not be aware of having biases. Is there potentially an issue where people who program
00:39:35.700 a certain AI might make it more politically inclined one way or the other? Is that a real
00:39:42.880 concern? So you may program an AI which eventually bends more to an authoritarian angle, or maybe more
00:39:50.820 hyper-conservative, and therefore it sees these people as being wrong and evil for a particular
00:39:57.980 reason. Or is that an ill-founded fear? So, A, we are not programming those systems; they are trained
00:40:03.820 on data. The data has certain biases built in; it's human-generated data on the internet. You know
00:40:10.460 what bias the internet has. So that's what we're training on to begin with. Now, the after-the-fact
00:40:15.500 filtering is where you instill your corporate values, and yeah, they can be more woke or more
00:40:21.980 conservative, if you decided. In China, the model would not talk about Tiananmen Square; in the US, it would
00:40:27.100 not talk about, you know what. So everywhere they have their own limits. Elon, I think, is trying to
00:40:33.020 say, let's build kind of truthful AI and avoid those biases. But you still have the same training
00:40:40.300 data. You don't have your own clean internet with clean data, so you still get a lot of human
00:40:46.860 historical biases in there. You can't remove all bias. Bias is what learning is. When you learn
00:40:56.140 something, you learn to bias data. You're not randomly making decisions; you have some information.
00:41:02.300 As a society, we say, oh, this is not good information, or it applies to groups, not individuals, or whatever
00:41:09.740 you decide. But that's exactly what we train those systems to do.
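A toy illustration of "bias is what learning is," using made-up data: the simplest possible learner here just memorizes label frequencies, so the skew in the data is exactly, and only, what it learns. Nothing about real language-model training is implied beyond that point.

```python
from collections import Counter

# Invented training data with a deliberate 90/10 skew.
training_labels = ["cat"] * 90 + ["dog"] * 10
learned = Counter(training_labels)

def predict() -> str:
    # What the model "knows" is the skew of its data; remove the bias
    # and nothing learned remains.
    return learned.most_common(1)[0][0]

print(predict())                               # cat
print(learned["cat"] / sum(learned.values()))  # 0.9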
00:41:17.500 And the concern for me is, let's say you have a concern about the environment, and the AI alights on that,
00:41:27.500 and it says, well, you know, the world is being damaged: climate change, pollution, all of these
00:41:33.500 types of things, these are bad things. Let's look at who causes the majority of the pollution in
00:41:39.180 the world: human beings. Who cuts down the rainforest? Human beings. Therefore, if you apply logic to this
00:41:46.380 problem, how do we solve this? Well, we get rid of human beings. Is that something that could
00:41:52.420 be arrived at very easily? It's a good example. I have a different one, where we create AI to
00:41:58.120 reduce suffering. Conscious life forms suffer. So how would you reduce suffering in the universe?
00:42:05.100 Reduce life. If there are no living beings, there is no suffering.
00:42:09.380 There is a branch of philosophy, negative utilitarianism, which values suffering so much as a
00:42:17.340 negative state that anything should be done to remove it at any cost. So not procreating, for example, is
00:42:23.800 one solution, naturally dying out. But AI can certainly decide that it's more important to
00:42:30.000 end suffering immediately. And also, as well, you see it more and more that people come to
00:42:36.720 AI as a de facto counselor or therapist, presenting it with moral problems, and this is becoming more
00:42:43.600 and more accepted. And it seems bizarre to me that you would outsource very human problems
00:42:50.320 to something that is not human. It seems to me that that is profoundly worrying, isn't it?
00:43:00.320 So we are kind of running an experiment on ourselves. We don't know what it does long-term. There's some
00:43:05.520 evidence that maybe it will take people who are borderline insane or depressed and amplify those
00:43:12.720 tendencies, but we don't know. We need to do science, and we don't have time to do science properly,
00:43:19.200 because by the time you start working with this model, 20 new models have been released, and this
00:43:23.680 one is no longer cutting edge. One of the things that always bothered me about it, and it was clear,
00:43:31.280 in terms of the bias that you talked about. Because there was a moment when you might say, well,
00:43:36.880 like, most of social media was woke, right? And then now, you know, some social media is not
00:43:42.000 woke, quite the opposite, right? And the one thing that I think all of us know, who live in
00:43:47.840 the real world, is that the internet is not real, right? But to AI, that's all it has to go on, right? That's
00:43:57.440 all the data that it's taken in. It's taking in this digital data, which is not necessarily reflective
00:44:02.960 of human experience. Like, if you were an alien coming down from space and someone said to you,
00:44:07.520 the conversation happening on Twitter or on Threads is how humans think, we humans would
00:44:14.960 laugh at that. But AI doesn't know that, does it? Right, but it's not limited to internet data, to
00:44:20.800 be fair. It has all the books, all the papers, all the movies, all the TV shows. There is some
00:44:25.680 representation of real human interaction. Yes, but we are sitting here in Los Angeles, for example. If
00:44:32.560 you watch Hollywood and live outside of America, your impression of America is not remotely accurate,
00:44:40.240 because these films and series and movies are made by people who live in a very specific subculture
00:44:45.760 in Hollywood. My point being that the human experience is a lot richer than what you can
00:44:52.720 gather from books and TV shows and the internet, and AI, I think, is almost inevitably
00:45:01.840 going to miss that, which would be another concern, wouldn't it? So this is where we can run experiments
00:45:06.800 and go, okay, you have a psychiatrist who's a model and a psychiatrist who's a human: who does better
00:45:12.720 with clients, who do clients like more? Apparently, you don't have to have a physical body or be a
00:45:17.680 human to be very good at that job. Well, being liked and being effective are different things, right? The
00:45:23.440 whole field is not effective. How do you mean? Psychiatry? Psychiatry, probably not, yeah. Yeah,
00:45:33.360 but there are types of therapy that are very effective
00:45:39.440 Early studies show that those systems can do really well in many human domains.
00:45:45.840 So, comments from nurses, things like that, they are competitive.
00:45:50.840 Yeah.
00:45:51.240 Actually, I mean, I tested it out.
00:45:53.180 There was something I couldn't work out what to do about.
00:45:56.080 And it was very useful.
00:45:57.500 It was like, oh, yeah, you should do this.
00:45:59.220 And the thing about it I found interesting is it works best if you tell it not to bullshit you.
00:46:05.380 If you say to it, like, cut the bullshit, just tell me straight, it will do it.
00:46:09.880 Whereas before, it was like, you know.
00:46:12.680 It is true.
00:46:13.440 You get what you're prompted for.
00:46:15.420 Yeah.
00:46:15.840 So, one of the ways that AI is going to change the world is in the field of war.
00:46:27.040 And so talk to us a little bit how AI will impact warfare.
00:46:30.780 I think we're already seeing it at the start.
00:46:33.460 And what could be the future that we're heading towards?
00:46:37.460 So right now, it looks like it's more physical, mechanical, so you have drones blowing up
00:46:41.900 things.
00:46:42.900 Long term, it's more about cybersecurity, hacking infrastructure. So the US has
00:46:48.960 everything basically controlled digitally, right? Power plants, internet,
00:46:53.240 banking. So if you had a super-capable hacker, that would be very impactful, if
00:46:58.660 somebody wanted to attack us this way. So, just... I think yesterday we learned that
00:47:03.420 Anthropic has a more advanced model which is amazingly good at hacking, and they
00:47:08.660 haven't released it yet, and they are scared of how well it will do. And so they are slowly trying to
00:47:14.820 release it to the cyber defense community to figure out if they can do something with it. Because,
00:47:21.620 correct me if I'm wrong, but let's say you have a super hacker that is, whatever, 50 or 100
00:47:27.700 times more intelligent than even the best human hacker: it could render internet banking
00:47:35.440 entirely obsolete. I mean, what's the point of having internet banking if it's not secure and
00:47:39.780 it can get hacked at random? It could bring about the end of multiple businesses, the way we see
00:47:47.520 the internet, surely. Right. So you can obviously just hack things directly, find zero-day exploits.
00:47:53.760 What is also very concerning is social engineering attacks. If you can generate believable deepfakes,
00:48:00.720 video, audio, video from your boss, from your family telling you, I need a password for this
00:48:06.720 or click that. Everyone clicks. Even cybersecurity experts click on things like that. So you don't
00:48:12.580 have to hack the actual account. You just have to get access to the person.
00:48:17.040 And how far do you think we are away from that particular reality, where you can get a call
00:48:23.320 and it could sound like my dad, who's an older man from a particular part of the UK, and it's
00:48:29.240 exactly like his voice? Yeah, so the technology exists, and we've seen examples where a company
00:48:36.040 got a video from the CEO saying, transfer the funds, I need them to close the deal, and they transferred
00:48:41.320 the funds. It already happened. Now, it's not as common and easy for every person to do it at scale,
00:48:47.720 but the technology exists. I can clone your voice, I can definitely animate a video of you, but it's not
00:48:55.400 quite trustworthy just yet. So some people are not very good at telling deepfake videos from real
00:49:03.800 videos. Long term, it will become impossible. The quality will be exactly 50-50, because that's how
00:49:09.960 they are generated: you have a system generating fakes and a system saying authentic or not, and
00:49:16.680 they meet in the middle. That's how we generate them, using different models. So long term, there is no
00:49:23.480 way to know. Short term, you can count fingers, and sometimes it has an extra thumb or something, but
00:49:29.400 most people don't pay attention to that. But could we not design a narrow-form AI to actually
00:49:37.000 combat that, that is highly trained and very specific at those skills, and we'll be able to go,
00:49:42.920 that's real, that's fake? Right, so the moment you tell me how you know it's fake, I'll use that
00:49:48.040 information to make a better-quality fake, and then we're done with this
00:49:52.540 process of back and forth; you can't tell anymore. So if you're telling me you just
00:49:56.980 count fingers and there are too many fingers, my new model will make sure
00:50:01.000 there are five fingers, so you've lost that piece of evidence. So it's a constant...
00:50:06.940 What do you call it? It's like an evolution, the war between the predator and the
00:50:10.060 prey. That's exactly what it is: an arms race. This is an arms race. An arms race.
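The "exactly 50-50" claim describes adversarial training: a generator and a discriminator trained against each other until the detector can do no better than chance. Below is a deliberately tiny sketch of that loop on one-dimensional toy data, assuming PyTorch is available; the network sizes, data distribution, and step counts are arbitrary illustrative choices, not a production deepfake system.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))  # generator of "fakes"
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # "authentic or not" system
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(5001):
    real = torch.randn(64, 1) * 0.5 + 2.0   # "authentic" samples from a toy distribution
    fake = G(torch.randn(64, 4))            # the generator's forgeries

    # Discriminator step: label real as 1, fake as 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: push the discriminator to call fakes authentic.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    if step % 1000 == 0:
        with torch.no_grad():
            acc = ((D(real) > 0).float().mean() + (D(fake) < 0).float().mean()) / 2
        print(f"step {step}: detector accuracy ~ {acc.item():.2f}")  # drifts toward 0.50
```

This is also why "count the fingers" stops working: any tell a detector relies on becomes a training signal for the next generator.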
00:50:14.860 Okay, one more thing. Because the concern... look, there are many concerns, of course there are, of course
00:50:22.180 there are, with that. But the real concern is we're distorting reality. Pretty soon we're not
00:50:29.860 going to know what's real and what isn't. How can I know, if my dad calls me up and he's like, I've had a
00:50:36.560 fall, I'm going to need some money, we're going to need to, you know, we're going to need to go
00:50:40.900 to a private hospital, I'm going to need however much to have a hip replacement? I'm in the States,
00:50:46.580 he's at home. I'll go, right, transfer, bang. In my family, we have private passwords, so the kids
00:50:53.140 would know if I'm talking to them or a deepfake.
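The family-password idea is essentially a shared-secret authentication protocol. Here is a hedged sketch of one way to formalize it, with an invented secret and helper names: in a challenge-response form, the secret is never spoken on the call, so a deepfake caller who has recorded earlier conversations learns nothing reusable.

```python
import hashlib
import hmac
import os

# Illustrative only: the secret is agreed in person, ahead of time.
SHARED_SECRET = b"correct horse battery staple"  # placeholder value

def respond(challenge: bytes) -> str:
    # The caller proves knowledge of the secret without revealing it.
    return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).hexdigest()[:8]

# The person receiving the suspicious call issues a fresh random challenge;
# an answer replayed from an old call will not match a new challenge.
challenge = os.urandom(16)
callers_answer = respond(challenge)  # what the genuine relative would compute
print("verified:", hmac.compare_digest(callers_answer, respond(challenge)))
```

A spoken version trades the hash for a memorized phrase, losing replay resistance but keeping the core idea: verify identity with something that cannot be cloned from recordings.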
00:51:02.340 It's terrifying, because we're not going to know what's real and what isn't, and that has a dementing effect on us. Because isn't that one of the
00:51:08.720 signs that you're going insane, that you can no longer trust what you see or what you think or
00:51:12.800 what you feel? It's all a simulation anyway. What do you mean? You haven't read my paper?
00:51:18.580 We all live in a simulation. We had Scott Adams on the show before he passed, and he talked about
00:51:25.440 this, though not with us, I think. Is this why you're so serene about this, that you don't think this
00:51:30.060 is real? So when you take this technology to its logical conclusion, you will have software which
00:51:37.880 is intelligent agents. You have virtual worlds they can reside in, simulations of this planet, like a
00:51:45.080 Google Earth kind of deal. Put those two together: you are now creating virtual worlds populated by
00:51:50.280 intelligent beings. Let's say all the kids are playing video games. So there are four billion
00:51:57.160 virtual environments and only one real one. Statistically, you are more likely to be an
00:52:02.680 agent in a virtual environment. And I like this a lot, because it kind of puts some doubt into the
00:52:11.240 mind of the AI about: am I being tested? Am I still in a simulation? Or is it time to kill humans?
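A back-of-the-envelope version of that statistical claim, using the four-billion figure from the example above and assuming a uniform prior over all worlds; the arithmetic is the only thing asserted here.

```latex
% N simulated worlds and 1 real one, uniform prior over worlds:
P(\mathrm{simulated}) = \frac{N}{N+1}
% With N = 4 \times 10^{9}:
P(\mathrm{simulated}) = \frac{4 \times 10^{9}}{4 \times 10^{9} + 1} \approx 1 - 2.5 \times 10^{-10}
```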
00:52:17.880 So I always try to promote that idea as well. But humans are kind of badly designed. Like, if
00:52:24.120 you were to design intelligence from scratch, you wouldn't make it need to eat every three
00:52:29.640 hours, you know what I mean? Maybe you would. Why? Fertilizer. The easy ways to make fertilizer...
00:52:38.600 Not that easy. But again, you cannot criticize a design if you don't know what the goals are.
00:52:44.040 You cannot criticize a simulation if you're not externally understanding the purpose of the simulation.
00:52:50.840 Look at our designs, right? I'm flying in an airplane; some of them still have ashtrays.
00:52:57.880 Why do they have ashtrays? Well, airplanes evolved from previous versions of it. No, no,
00:53:03.240 it's not a poor design. Sometimes some decision was made for some reason; you just don't know
00:53:07.640 all the history or the reason behind it. This has not been the most enjoyable episode we've ever
00:53:19.240 done, but very important. I'm really glad. For you... Now I know what it feels like to be a woman.
00:53:25.880 You'll buy us dinner afterwards to make it up to us.
00:53:35.440 I prefer before, but afterwards will be fine as well.
00:53:39.380 During.
00:53:40.940 This show exists because people want to hear ideas laid out properly.
00:53:45.100 Not shouted, not spun, not dressed in outrage.
00:53:47.660 Thought through and delivered with clarity.
00:53:49.900 That is a skill.
00:53:50.740 And like any skill, it can be learned and improved.
00:53:53.340 Which is why I recommend Hillsdale College's new free online course, Classical Logic and Rhetoric.
00:53:59.660 In this course, a Hillsdale College professor teaches you the tools to construct a sound
00:54:04.700 argument. You'll learn how to think more clearly, how to structure your reasoning so it holds up
00:54:09.420 under pressure, and how to communicate your ideas in a way that people can understand and respond
00:54:15.660 to. The course is part of a much wider library. Hillsdale offers more than 40 other free online
00:54:21.980 courses, covering everything from the Book of Genesis and how the Allies won the Second World
00:54:26.780 War to the rise and fall of the Roman Republic and the American Constitution. All of it free.
00:54:32.540 What draws me to this subject is the gap between how most people think they can argue and how sound
00:54:38.460 argumentation actually works. Rhetoric is not manipulation. Logic is not pedantry. Together,
00:54:44.540 they are the tools that allow you to think and speak at your best. This course makes that
00:54:49.500 accessible to anyone. To enroll, go to hillsdale.edu slash trigger. There's no cost and it's easy to
00:54:56.500 get started. That's hillsdale.edu slash trigger. So I guess the obvious question is, what do you
00:55:06.760 advocate that we now do? So it's easy. We don't do. Not doing is very easy. Don't build general
00:55:13.500 superintelligence. Don't train models on all the data, multimodal data, to solve every problem.
00:55:20.860 Concentrate on specific problems. Train only on relevant data. So you're talking about breast
00:55:27.660 cancer detection. Okay, great. Train on that data. You'll have a superintelligent tool for doctors to
00:55:34.300 do early detection. You'll save lives. It's wonderful. I think you can get most of the benefits
00:55:39.980 for the economy with narrow tools.
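As a toy illustration of the narrow-tool idea, here is a minimal sketch in Python using scikit-learn's built-in breast cancer dataset; it is illustrative only, not a medical tool, and the pipeline choices are just one reasonable setup, not anything Yampolskiy specifies:

```python
# A toy "narrow tool": a model trained only on one task's data,
# able to do that task and nothing else. Illustrative only, not medical advice.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Only the relevant, task-specific data: 569 tumor samples, 30 features each.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Scale, then fit a simple linear classifier. Nothing here generalizes
# beyond this one detection task, which is the point of a narrow tool.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")
```

The contrast with a general system is in the data and the objective: this model maps thirty measurements to one label and cannot do anything else, which is exactly the property being argued for.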
00:55:48.140 And how do you achieve that not being done, you know, politically and geopolitically?
00:55:54.140 What would it take for the US government, or for the leaders of these companies, to adopt that view?
00:55:59.020 Personal self-interest. If you tell the president of the United States,
00:56:04.380 the moment this technology comes around, you lose all power, that's a compelling argument. I don't think you would like that.
00:56:10.700 If this is the consensus of scientists in that field, then maybe we should not be building it. And is it the consensus?
00:56:21.260 So if you look at the top three, I believe, computer scientists by number of citations in the field,
00:56:26.620 they are in agreement: this is really dangerous, it's not something we should be doing. So we're talking about Hinton, who has a Nobel Prize and a Turing
00:56:32.460 Award; Bengio, Turing Award. I think we got maybe 100,000 people signing a letter saying, don't build superintelligence.
00:56:39.800 Many top scientists. Are there outliers? Yes. Do they usually have
00:56:46.480 a company where they get billions of dollars to build AI? Also, yes.
00:56:50.560 Well, I was going to ask you about that, because ultimately, you know, the thing is that maybe,
00:56:57.440 like, this conversation is so wild to me that maybe my brain has opened up to levels of
00:57:04.720 imagination that are not real. But I'm just saying out loud what I'm thinking in the moment, which is:
00:57:11.280 two or three years from now, the AI companies will be so powerful. And I don't mean powerful in the
00:57:17.840 sense of money. I mean powerful in the sense of powerful, like the ability to kinetically
00:57:22.960 get what they want. I'm not sure that two or three years from now, five years from now, whenever,
00:57:29.020 the president of the United States will be able to tell them, stop doing this, and get them to stop,
00:57:36.020 unless they actually agree. So maybe nationalizing that technology will actually be something we see
00:57:42.320 happen. Yeah, but what I'm saying is, there comes a point where you actually, physically, will
00:57:47.740 not be able to nationalize them, because they will be more powerful than the US government. So, again,
00:57:52.040 it's all about this paradigm shift.
00:57:54.080 Before they hit superhuman levels, they're tools.
00:57:56.620 You can come in, shut it down, change the software; all that is possible.
00:57:59.920 The moment you're dealing with superintelligence, it becomes a lot harder.
00:58:03.800 Yeah.
00:58:04.800 Go ahead.
00:58:05.800 There's going to be a lot of people, regular people doing regular jobs with regular lives.
00:58:13.200 And they're going to listen to this.
00:58:14.200 Not for very long, based on this conversation.
00:58:18.080 Well, look.
00:58:21.500 It's an unkind joke, but it's sort of like, I mean, that's what follows.
00:58:25.500 Yeah.
00:58:25.940 Anyway.
00:58:26.900 And they're going to think to themselves, look, if this is true, and there's no reason to believe it isn't,
00:58:33.980 this is all coming down the metaphorical pipeline.
00:58:37.220 What can I do to insulate myself as much as possible from this technology and my family's?
00:58:46.500 Not much.
00:58:47.200 So you can vote for people who are more aware.
00:58:49.960 Some politicians are now starting to kind of wake up a little
00:58:53.180 and suggest we don't build maybe as much compute for those companies
00:58:57.660 or provide some sort of regulation.
00:59:00.500 But it's sort of like the whole concept of aging and dying.
00:59:04.920 It's always been the case.
00:59:06.340 We all were going to die.
00:59:07.960 Your kids, your friends, your family.
00:59:09.380 What did you as an average person do about it?
00:59:12.800 Well, nothing.
00:59:14.180 Government didn't allocate funds towards that problem.
00:59:17.200 Seems important. In a sane world, you'd have, like, 90% of our budget going to fighting aging.
00:59:23.600 We're all dying. So it's exactly the same scenario; we just have a different
00:59:28.720 reason we're going to die, and maybe a different timeline.
00:59:33.520 Depends on your age. If you're, like, 95, it's the same. Yeah, yeah. Well, absolutely, absolutely. I mean,
00:59:39.520 here's a question: do you think it could solve the issue of aging and mortality? It could, yeah. In
00:59:46.160 a negative way. I think it's actually the narrow problem we should work on. I think somewhere
00:59:53.600 in your DNA, there is a number of factors which allow you to rejuvenate yourself a certain number
01:00:01.520 of times. And if we can reset that number, you'd live a lot longer, much healthier life. Most
01:00:06.640 diseases are a byproduct of aging. And I think we can do it with a narrow superintelligence,
01:00:11.680 without having to go to general.
01:00:13.300 Because it seems to me that we are at a fork in the road now, right?
01:00:18.640 Where we can go down one way or we can go down another way.
01:00:22.980 And the worry is, is that we're heading down one way
01:00:27.540 where it's going to lead to our destruction.
01:00:31.440 And I just find it baffling in a way
01:00:35.820 that the people in charge of this technology
01:00:38.680 don't understand that, or aren't willing to see that?
01:00:43.420 They don't feel that they can say no.
01:00:46.080 They cannot say no to investors because they'll be replaced
01:00:49.060 and someone else will say yes.
01:00:50.880 The options are amazing, the stock options they get.
01:00:54.480 So they don't have an option to not do it.
01:00:58.240 The hope is, again, that there is external pressure
01:01:00.700 for all the companies to stop at the same time,
01:01:03.460 and then they finally have an excuse for investors.
01:01:06.180 Investors bought in at a very high valuation; they need to go and have 100x, so they need to
01:01:14.520 continue growing hyper-exponentially towards superintelligence. They cannot just say, let's have normal
01:01:20.260 profits. So the financial pressures are what drive them. Incentives are completely misaligned.
01:01:29.280 We have no incentives which are pro-humanity; all the incentives are to develop this. Do you think
01:01:35.680 part of the problem is as well, Roman, that the politicians don't understand the technology or the
01:01:42.500 long-term effects of this technology? So many don't, especially in the US. Many are so old,
01:01:50.180 they don't use computers or the internet or anything. Maybe they quit. I don't know. But we have some
01:01:55.860 politicians who are on record as saying, this is very bad, dangerous. We need to do something,
01:02:00.160 regulation. Problem is, you can't regulate this away. You can't just say it's illegal to kill
01:02:05.240 humanity. It doesn't work. You need to have specific bans on this particular deployment.
01:02:12.360 And I don't think they're willing to do that. And you need to orchestrate some kind of agreement
01:02:17.620 with China as well. I think that would be actually easier. I think that would not be the most
01:02:23.400 difficult part because, again, China doesn't have a control mechanism. You think the Communist Party wants
01:02:28.900 to lose control? They're very good at staying in control. And if they see this as potentially
01:02:33.960 threatening their long-term survival, they'll be very happy not to do that. That's an interesting
01:02:38.760 point. You mentioned you have kids. I do as well. What do you, I mean, is there any point training
01:02:44.520 your kids to be able to do a job at this point? Well, again, it really depends on what type of job.
01:02:51.000 I wouldn't train them to do something boring just to make money; that's going to be automated anyway.
01:02:55.960 So if there is something they find personally fulfilling to do... There are lots of things. We
01:03:01.240 talked about one human-only occupation, but you can do
01:03:08.280 all sorts of training: you're a sensei, you're a guide, you're a tutor. Just human interaction.
01:03:13.640 You take people on hikes, you meditate, you do the sort of things where I don't want a robot doing it for
01:03:18.600 me. Yeah. So it seems that we're going to prize human interaction above all else, really. I don't
01:03:28.200 know if that's true. Right now we don't value it that much; we sit at home and scroll. So maybe we don't
01:03:33.400 need it as much. In terms of jobs, I'm saying that certain jobs we will prefer to be done by
01:03:41.240 humans. Like which ones? It's not obvious. Podcasting, I think, if you are famous and you have people who
01:03:47.560 really like you. But I think AI would be better at asking questions, better at generating video
01:03:54.760 content. So if you kind of grandfathered yourself in, like you are Joe Rogan or something, you'll be
01:04:00.040 okay. But I think... You could have just said Trigonometry. I mean, who's that? But I think,
01:04:06.760 for a new person to start something like that successfully in a world with superintelligence...
01:04:11.480 I see that. Editing and questions, because they watched every interview I ever did, right? They
01:04:16.840 know every question, they read every paper. How many of my papers have you read? Not many, yeah?
01:04:22.760 Right, it's a good point. I was thinking about this in terms of the kind of political
01:04:30.200 element of it, and I can really see, Roman, 10, 20 years down the line, we get a kind of neo-Luddite
01:04:38.520 movement which is anti-technology, anti-AI, and pushes back against that. And it wouldn't surprise
01:04:47.560 me if we also get a terrorist element of this, you know. For instance, I don't think
01:04:54.200 it will be long, when Waymos start taking people's jobs, I don't think it'll be
01:04:59.720 very long until you walk past, and by the way, I don't agree with this, I want to make this clear,
01:05:03.240 and you'll see a Waymo with a smashed windscreen. We just had the biggest ever protest to stop AI, I
01:05:10.520 think, in San Francisco. Like 100 to 200 people showed up, which is not a lot, but it's a good
01:05:16.200 starting point. If you're interested in this social unrest, civil war, Hugo de Garis has a beautiful
01:05:23.640 book, Artilect War. He wrote it like 20 years ago, completely predicting all these elements.
01:05:29.000 This is the most important issue of our time. There will be people who want to create godlike
01:05:33.400 machines and go to the cosmos, and people who are Terrans, who want everything to be local and
01:05:39.320 not to build those machines. And that's the decisive issue of our time.
01:05:44.360 Because we talk about mass surveillance states. The government would say, well, look, you know,
01:05:51.960 more and more, particularly young men, are unemployed. They can't get a job, because
01:05:58.280 the jobs that they used to be able to get, like driving jobs, manufacturing, they've all gone.
01:06:02.680 They've all gone. So we've got this large group of unemployed young men, and if they don't have
01:06:07.480 a job, what tends to happen is they get angry, they get more violent. And the government will then come
01:06:13.160 in and go, well, look, we've had all of this civil unrest and uprisings and riots, we can't have this,
01:06:20.360 therefore it's very important that we bring in mass surveillance to keep you safe. I mean, that's
01:06:25.000 a real possibility, isn't it? It is possible. Even without concern about superintelligence, we just have
01:06:31.400 governments deploying the latest technology to spy on us. We're seeing it with Snowden, we're seeing it
01:06:35.880 with others revealing what's really happening. Right. Yeah, and there's also the concern
01:06:41.080 as well that we're going to live in a world which is far more unstable, because we have these large
01:06:46.200 groups of men who don't have access to a job. So the economic part of not having a job is easy to
01:06:53.480 solve: you can tax big AI, you can tax robots, and distribute that. That's not the difficult part.
01:07:00.440 Meaning is difficult. Yeah. Control is difficult. Yes. Roman, well, thank you, I guess, for coming
01:07:10.120 on the show now. We're very grateful for your time. I'm just, I think, unfortunately, you've confirmed
01:07:17.400 a lot of the things that... And I was wondering about this, you know. You, I think, are from the former
01:07:21.880 Soviet Union; I am. Francis, you know, has some family ancestry from countries that have had
01:07:27.320 difficult existences. I always worried that it was my kind of temperamental Russian
01:07:33.800 background that makes me worry about this stuff. But as always, when I don't see a logical
01:07:41.880 counter-argument, that's when I go, well, until I hear one, I will think this is likely. And
01:07:48.960 I don't see the counter-argument to the very basic point you're making, which is: if you're
01:07:55.060 a squirrel, you cannot keep humans under control. And anything that has a survival instinct, that
01:08:02.760 you don't control, that's more intelligent than you, will eventually take over. Best case scenario.
01:08:08.120 Best case scenario. So the reason we are kind of uncomfortable is that this has
01:08:15.000 kind of become real for us in this conversation. So thank you for coming on. I hope more people
01:08:20.040 hear your message, and humanity begins to take this seriously. I hope so. Usually in science,
01:08:28.600 when you publish a paper or a book and you are wrong, there is no shortage of people jumping in
01:08:33.800 and publishing rebuttals, corrections, solutions. We have many papers, many books, all arguing the
01:08:40.760 same thing. There are no rebuttals, there are no patents, there are no peer-reviewed papers in Nature
01:08:46.280 saying, this is how we control advanced AI, it scales to any level, don't worry about it.
01:08:50.840 So it's not just that we had this conversation and so far nobody has jumped in; they've had a decade.
01:08:58.600 Thank you so much for coming on. Before our audience asks you their questions,
01:09:03.000 the last question we ask all of our guests is what's the one thing we're not talking about
01:09:06.860 that we should be? Before Roman answers the final question at the end of the interview,
01:09:11.520 make sure to head over to our Substack. The link is in the description, where you'll be able to see
01:09:16.080 this. Is there an argument that humans are now in service of a new form of organism without
01:09:21.880 realizing it? Do you think there is a risk that AI leads to the human race becoming complacent,
01:09:27.240 not bothering to study, research, and advance ourselves? What's the one thing we're not talking
01:09:32.780 about that we should be? Suffering risks. Suffering risks? Tell us more. So things could be so bad you
01:09:39.840 wish you were dead. Why? Digital hell. You can create an environment where you are tortured but you are
01:09:49.900 immortal. Or maybe you are uploaded to a virtual environment. And what for? You're asking too many
01:09:56.920 questions. Superintelligence can decide to do all sorts of things. Maybe it's dealing with some
01:10:03.960 malevolent payload, maybe it's running experiments. You can ask why this world, this simulation, has
01:10:10.840 suffering in it, right? That's what every religion deals with: why did an all-good God create a world
01:10:17.560 with pain and suffering? But there are some answers to those questions, and it's not ruled out by
01:10:26.760 what we coded into those systems. Nice to end on a positive. All right, at least we didn't talk
01:10:34.760 about this. Again, at least we didn't talk about it. Yeah, that's the question. Head on over to
01:10:40.440 triggerpod.co.uk, where Roman is going to answer your questions. In the best case scenario, where
01:10:47.560 AI doesn't erase us, is it plausible that humanity avoids becoming merged with technology directly?
01:10:56.760 We'll be right back.