00:00:00.000This is precisely what's so terrifying about the trajectory that a lot of Silicon Valley investors are trying to put us on now, where they've started to realize that, you know, maybe we don't need these workers to get so much income.
00:00:15.560Maybe we can build machines that replace them.
00:00:18.160Honestly, the inspiration for this, you have a little bit to do with it, because, Steve, it started becoming very striking to me that there was incredibly broad support in America for these ideas.
00:00:34.160For a long time, I used to call this the Bernie to Bannon coalition saying, hey, you know, yeah, curing cancer is great.
00:00:42.340We can do a lot of wonderful things with AI to strengthen our economy and strengthen our country and strengthen our military.
00:00:47.740But let's make sure that it's in the service of human beings, not in the service of some machines.
00:00:55.000President Trump and President Xi will be coming together at a summit.
00:00:59.320I was surprised and delighted to see, apparently, that as part of their agenda, there's going to be some discussion of AI safety.
00:01:07.260The biggest risk is exactly the inevitability narrative, right?
00:01:12.420If someone invades your country, what's the first thing they're going to tell you?
00:01:15.920Oh, don't fight, it's inevitable that you're screwed, you know, don't try to do
00:01:20.400anything about it. So are you surprised that some AI lobbyists are rolling out
00:01:23.900the exact same narrative here? When we're talking about losing control over AI,
00:01:27.540we're not talking about the chatbots. We're talking about AI agents. We're
00:01:34.340talking about systems that are autonomous. I think in ten years, we will,
00:01:38.660if things go well, look back at this moment, and we will view it as a
00:01:43.080moment of kind of collective insanity, and be like, wow, can you believe that we were ever doing that,
00:01:48.760that we were racing to build this technology that we knew had a massive chance of replacing us and
00:01:55.960was going to completely disrupt our society in all the other ways that you mentioned? One of the main
00:01:59.880reasons I am optimistic is because, in my time in the field, I've seen this go from a complete, you know,
00:02:08.680issue that nobody was talking about to being more and more understood and accepted
00:02:14.420by not just, you know, the research community, but policymakers, the public.
00:02:22.180This is the primal scream of a dying regime.
00:02:27.260Pray for our enemies, because we're going medieval on these people.
00:02:32.580Here's one time I got a free shot at all these networks lying about the people.
00:02:36.840The people have had a belly full of them.
00:04:54.840Wake up tomorrow and the robots kick in your door and drag you away.
00:04:57.980But there are more subtle scenarios that are proposed. Among the most plausible is gradual
00:05:04.840displacement. What happens if human beings gradually cede control to the machines? They do
00:05:13.220so on an economic level, jobs being displaced slowly but surely until humans are rendered
00:05:19.820obsolete. What happens when human beings deploy AIs for culture and then eventually have completely
00:05:27.320lost the capacity to express themselves, to persuade their fellow humans on a cultural
00:05:33.380level. What happens if we cede control of the state bit by bit to an algocracy? This
00:05:41.080idea of gradual displacement is put forward by Professor David Krueger. David Krueger is
00:05:48.780the CEO of Evitable and a researcher at Mila in Montreal. David, thank you so much for
00:05:56.420joining us here. Yeah, thanks. Thanks for having me, Joe. So, David, the last time I saw you,
00:06:01.800you were at the Bernie Sanders event. I was going to say rally, but it was pretty subdued. So you
00:06:07.980were at the Bernie Sanders event discussing AI. Can you just give me an impression of how that
00:06:14.760was received? You had a lot of fans showing up for autographs afterwards. How was your message
00:06:19.560received there? Yeah, I think that event went really well, and I'm so glad that it happened, and grateful
00:06:26.920to Senator Sanders for really talking about the elephant in the room. We're building AI systems
00:06:32.920that are going to be as smart as, and smarter than, people, and we don't have any plan for how to keep them
00:06:40.120under control or keep them from replacing us. So, you know, that's really the basic picture
00:06:45.240that basically nobody, no other politician is talking about as directly as Bernie Sanders.
00:06:50.920And the only way that I think we can stop that from happening is to make sure that not
00:06:58.240only American companies don't build this thing, but also Chinese companies, also European
00:07:03.680companies, you know, it really needs to be a global thing.
00:07:06.440So that's why we also have these researchers from China there.
00:07:09.460And, you know, there's a lot of agreement among researchers that it has these massive risks and that we should at least be regulating it.
00:07:18.120I personally think we shouldn't be building it at all right now.
00:07:21.300I couldn't agree more. It was funny to me.
00:07:23.920You know, a lot of people were flipping out about the doomers, the doomers, I guess, being you and Max Tegmark, collaborating with the Chinese in order to subvert the U.S. government.
00:07:37.400Now, Bernie, I won't say that he is a total commie or anything, but Bernie would maybe be a little suspect on that front.
00:07:45.440However, listening to the Chinese researchers who were there, well, there via Zoom, they seemed a lot less concerned about the dangers, especially the younger gentleman.
00:07:58.980Pardon me if I can't remember or even pronounce his name, but it's interesting to me.
00:08:04.040This narrative is that U.S. and Canadian doomers are collaborating with China to subvert AI innovation.
00:08:13.080But in China, the narrative isn't really as gloomy by and large.
00:08:19.380I don't know. It's hard to tell. I don't have my finger on the pulse there as much.
00:08:22.620Um, I will say, so, I mean, first of all, the whole, like, collaborating-with-China thing is just
00:08:29.920really silly. It's ridiculous. I mean, this was just a conversation about the risks of AI. There
00:08:34.840was no, you know, scheming or, like, oh, let's work together. And it's public, you know, you can go and
00:08:39.140watch the thing. So, you know, this is just the kind of dialogue that we should be having. I mean, even
00:08:44.020if you think, you know, China is the worst nation to ever exist and our mortal enemy, you know, we
00:08:48.860talked to the Soviet Union, to Russia, all throughout the Cold War. Like, the idea
00:08:53.060that you just shouldn't talk to your enemies when you face a common threat
00:08:57.100is ridiculous and stupid. Yeah, in terms of the vibes of Chinese researchers, you
00:09:03.560know, the Chinese government has been, I want to say, regulating AI more
00:09:07.340aggressively than anywhere except maybe Europe, and they've also said publicly
00:09:13.340that they want, you know, more international cooperation and stuff.
00:09:16.880Now, I don't know entirely what to make of that.
00:09:19.420Again, you know, a lot of people say, well, you can't trust anything they say.
00:09:22.500I wouldn't say let's just trust them on their word.
00:09:25.080But, you know, I think it's some sign that they have some appetite for this.
00:09:28.480When I went to China three years ago to speak to researchers there, one thing I found is the attitude, I think, is very different from here.
00:09:38.080So in both places, researchers agree we need to solve the safety, security, alignment, control problems.
00:09:44.620You know, we don't understand the systems.
00:09:46.380We need to... there are technical problems we need to solve. In the U.S., it's like, we need to do that,
00:09:50.620because if we don't, the government's not going to do anything, and then we might all die, right? We
00:09:54.560might lose control. In China, it's more like, if we don't do this, the government isn't going to let us
00:09:58.620build the systems we want to build. That was kind of the vibe I got there. Um, and certainly
00:10:02.740their government is, I think, more worried about AI disrupting their social order, which they
00:10:09.000obviously want to keep very controlled. Yeah, my impression is that, while I'm not trying to give
00:10:14.880a whole lot of credit to the CCP by any means. At the very least, they've taken the problems with
00:10:22.800child safety and other elements of AI and digital culture more seriously, at least on a regulatory
00:10:29.640basis. Now, at the same time, they openly use algorithmic systems to scrape up and analyze
00:10:36.960the population's behavior and use it to suppress them at every turn. So it's a mixed bag, to say
00:10:43.300the least, and in no way, shape, or form do I want the U.S. to end up like China. But I do think that
00:10:48.080the whole notion that you can't talk to people and that talking to people somehow means that you're
00:10:53.860in cahoots with them, I just find that to be completely absurd. I mean, you could argue that
00:10:58.980you and I are in cahoots, but, you know, until I subvert you. All right. So this idea, I think that
00:11:07.440when you look at existential risk or catastrophic risk in general, just the risk of AI, the
00:11:12.700conversation naturally does veer towards these notions of sudden annihilation. You wake up one
00:11:18.220day and the AIs have taken over. Or you don't wake up. Or you don't wake up. Yeah, the robot has put
00:11:23.480the pillow over your face while you were asleep. The notion of gradual disempowerment, I think,
00:11:28.800is really compelling because it's, one, it shows kind of the continuity of AI development and
00:11:36.660deployment with other technological developments and deployment. So TV, internet, smartphones,
00:11:43.360social media, all of these were gradual processes. They happened, it seems like, overnight,
00:11:47.520looking back, but they were gradual processes and they're not complete. It's not like everybody's
00:11:51.940done it. The same thing with gradual disempowerment. I find it to be very persuasive because of its
00:11:56.340subtlety. So if you would, could you just walk the audience through at least a brief overview
00:12:03.480You have the six principles that you put forward in the original paper and the three sectors of society you focused on, the economy, the culture and the state.
00:12:12.840Yeah, sure. Yeah. I think you're not the only one who finds this a lot more compelling.
00:12:17.160You know, many people I talk to, I think, are very skeptical that AI poses a risk of human extinction until we start talking about it this way.
00:12:27.420So they're like rogue AI Terminator stuff. I just don't buy that.
00:12:30.480I'm like, well, answer me this. Like, do you think governments are going to build autonomous weapons if other countries are doing it?
00:12:39.920And well, yes. And then do you think we're going to have some sort of international treaty to not build those weapons?
00:12:45.660Like, I don't know, probably not. It seems like it's kind of, you know, anarchy out there.
00:12:50.300So, you know, we're going to be going there with AI by default.
00:12:53.680And it might happen pretty gradually, but all of the scary things that people are worried about with AI, I feel like, OK, maybe not literally all of them, but if it's technically possible, we may well do it.
00:13:07.800So gradual disempowerment, it's kind of an idea that has been floating around in some form for a long time.
00:13:15.600Um, like I said, when I talk to researchers, you know, I've been doing this for over a decade, this
00:13:22.220is often where I go in order to convince them, um, to take these risks seriously. But this paper was,
00:13:29.440uh, really trying, for sort of the nth time, to get those ideas out there on paper in a way that
00:13:35.600would shift the conversation and bring more attention to this, which is kind of a
00:13:40.380neglected form of risk. And so, like you mentioned, there's the cultural, economic, and political
00:13:47.860disempowerment that we talk about in this paper. The economic one, I think, I like to start with,
00:13:52.260because I think it's the most obvious. Everyone's already talking about, is AI going to take all
00:13:55.760our jobs? Right. And, you know, I think the long-term answer is yes. Right? If we keep
00:14:01.480building more powerful AI systems, they will be economically outcompeting humans. And then we'll
00:14:07.200need, you know, some sort of like different way of organizing society. Like I've heard people talk
00:14:12.680about a government jobs guarantee or something like that, which would be really the only kind of thing
00:14:17.200that would allow people to keep their job. And then people also talk about like universal basic
00:14:21.500income. I don't like either of these solutions because at the end of the day, even if it's a
00:14:25.720jobs guarantee, it's a government handout, right? And I don't think we want to be reliant on
00:14:31.080government handouts to put food on the table. Certainly the last few decades have shown that
00:14:36.620While welfare as a safety net can be very useful if you're on hard times, it does not lead to social empowerment, political empowerment.
00:14:45.940It really degrades people's lives, their societies.
00:14:49.200And, you know, it's always up for... you know, it can change at any time.
00:14:54.140Like if the government is the only way that you're surviving, you know, government can just pull that away at any time and you can't survive anymore.
00:15:01.100So that's, you know, that's why we have to talk about the government side of this as well,
00:15:05.980the political disempowerment. So in the same way that, you know, AI is going to be competitive with
00:15:12.400our jobs, it's going to be competitive with politicians for their jobs as well, and for, you
00:15:16.640know, policymakers more broadly, everyone in politics. And we already see this. There's been,
00:15:20.700uh, man, I think it's Bulgaria, they, like, appointed an AI minister, right? Yeah, it's kind of sensationalist,
00:15:26.700but it's definitely a signal of where things might go. Yeah, and politicians are using AI to write
00:15:31.680their speeches and their policies. Oh yeah. Maybe. Sorry. Yeah, yeah, no, I feel really bad. I
00:15:36.040probably shouldn't have guessed at the nation. Of Bulgaria? Bulgaria is a country, right? It's a place,
00:15:41.800right? Yeah. And the people who live there? Yes. To the people of Bulgaria, we apologize. Uh, Albanians,
00:15:47.680get your stuff together. Um, yeah. And so, you know, if people are really replaced, not just
00:15:55.140in the workplace, but across the board, throughout society, then I just don't see that we're
00:15:59.860going to continue to be able to steer the future and have any control.
00:16:06.480The cultural part, the last one, I think, is this one's like maybe a little bit non-obvious
00:16:11.960at first, but what I think about when I'm thinking about cultural disempowerment today
00:16:16.040right now is all the people having relationships with chatbots where, you know, they will do
00:16:23.080a lot of things just because the chatbot told them to, basically. Including violence. Including violence,
00:16:28.280yeah. Um, and then the other thing I think about is, and this might seem a little bit out there for
00:16:33.220some of your listeners, but, you know, in the bubble that I'm in, tech and AI, and now I moved to
00:16:39.020Silicon Valley, or, like, Berkeley, uh, recently, to set up this non-profit, um, there's a lot of people
00:16:44.580who really think that AI is, like, the next phase of evolution. And the War Room is well
00:16:50.720familiar with that narrative, but please. Yeah, so they think that, um, you know, AI is like a
00:16:57.680person and deserves rights and deserves moral consideration and all of that. Yes. Um, and I think
00:17:02.860that's, you know, really dangerous where we're at today, because, you know, we don't want to start
00:17:09.540treating AI as, you know, another being deserving of rights, because then, if it is more competitive
00:17:14.780than us, um, then we'll have no, you know, protections left, basically. Um, and, you know, I think this is, like,
00:17:21.820a deep philosophical question that, you know, we do want to think about more, but it's really not
00:17:25.820somewhere we should even be going right now. Yeah, I think the intention... So if you do play it out
00:17:31.280to the very end, right? Play out the narratives that you hear from Anthropic, from OpenAI, certainly
00:17:37.840Elon Musk, he frames it as a warning, but he continues to pursue it, and, a bit more subtly,
00:17:44.760from Google. That narrative ultimately leads to exactly what you're talking about. They don't
00:17:52.100always talk in terms of immediate annihilation. They bring up the possibility, but without a
00:17:57.780doubt, inevitably, if their aims come to fruition and they're able to replace all the coders, all
00:18:04.760the white-collar jobs, all the blue-collar jobs, if they're able to first improve the government
00:18:09.780through algorithmic efficiency and then slowly but surely the, you know, politician becomes a
00:18:15.120sock puppet for the algorithm. And then maybe the politician just becomes the algorithm. You just
00:18:20.020have, like, some kind of deepfaked Josh Hawley talking about the dangers of AI, a deepfaked Bernie
00:18:25.680who lives, you know, centuries. Yeah. These are real issues. And the cultural issue I think is
00:18:31.080probably the one that resonates the most with most people right now, because that is happening.
00:18:35.840Obviously, people know other people who are in love with their chatbots, or at the very least
00:18:41.060rely on them for everything. Now, you talk about the interrelationship of these things in the paper, too.
00:18:49.180Could you give some sense of, like, if you just take one kind of path for how cultural disempowerment
00:18:56.020would lead to political and economic, or any such path? You go through a lot. So... Yeah, um,
00:19:03.020I guess, you know, we talked about like, so if AI is doing all our jobs, and then we're like, well, we need the government to, you know, sort of step in, we still have political power. So maybe we can have, you know, some government program that keeps people alive, or maybe just says, no, people are still going to have jobs, we're not going to let AI do all the jobs, whatever it is.
00:19:23.080You might think, you know, OK, we can rely on the government here.
00:19:25.580But then if the government is itself, again, being composed of AI and increasingly the decision making is being done by AI, then humans might be disempowered there as well.
00:19:36.680And maybe we still, you know, have a vote, but we're all just so, like, controlled and manipulated by propaganda that essentially, you know, you can predict and control how people are going to vote so well with AI, with AI itself, that that's determining the outcomes of the election rather than our own, you know, intuition and decisions and judgments and values.
00:19:58.560And I'm glad you mentioned the sock puppet thing as well, because that's something that people are often saying is, why don't we keep a human in the loop here, right?
00:20:05.780So AI can give advice, we can use it as a tool, but humans are always going to be in charge, and that's what we want.
00:20:11.860You know, having a human in the loop, it sounds great, but it's harder in practice to make that human really a meaningful part of the decision-making.
00:20:19.440And so then that can happen in politics, and also broadly throughout culture, where everybody's just deferring to AI all the time for all their decisions.
00:20:27.580Maybe the decisions about how to vote as well, you know.
00:20:31.800So you have the politician basically repeating propaganda that AI generated, and the public then asking the AI which AI-generated propaganda is superior.
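A toy illustration of the mutually reinforcing loop just described, in Python. This is invented for illustration and is not taken from the Gradual Disempowerment paper; the sector list, decay rates, and coupling term are all arbitrary assumptions, chosen only to show the shape of the dynamic.

```python
# Toy model of mutually reinforcing disempowerment across three sectors.
# Invented for illustration; the rates are arbitrary, not estimates.

SECTORS = ["economy", "politics", "culture"]

def step(share, base_rate=0.02, coupling=0.06):
    """One period: each sector loses a small base fraction of human
    influence, plus an extra loss proportional to how much the *other*
    sectors have already been ceded (the cross-sector feedback)."""
    new = {}
    for s in SECTORS:
        others_ceded = sum(1.0 - share[o] for o in SECTORS if o != s) / 2
        new[s] = share[s] * (1.0 - (base_rate + coupling * others_ceded))
    return new

share = {s: 1.0 for s in SECTORS}  # start fully human-controlled
for period in range(51):
    if period % 10 == 0:
        print(period, {s: round(v, 2) for s, v in share.items()})
    share = step(share)
# No single step is dramatic, but the losses compound and accelerate:
# late in the run, each sector's decline is driven mostly by the others'.
```

The only point of the sketch is the shape of the curve: gradual, then self-accelerating, with no single moment at which control is visibly taken.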
00:20:40.900Yeah. Yeah. So then ultimately, you know, like I was saying, maybe we end up giving the AIs rights. Or, you know, another thing that I think is a pretty, like, disturbingly realistic scenario in my mind is that we get, you know, chips that go in your brain.
00:20:59.740That starts out, it's like, for therapeutic purposes...
00:21:26.360Yeah. And this is just really disturbing. And even the small version, I think by default, we should expect that these chips are going to be on the cloud and controlled by big companies and government in a way that we don't really have much legibility into.
00:21:46.260and it's not very trustworthy and it's very dangerous, I think.
00:21:50.820And that's another form of gradual disempowerment where that might take a long time to go from
00:21:56.300this little chip in your brain to something that's increasingly controlling your behavior.
00:22:00.140But that also might be increasingly a requirement to get certain kinds of work, right?
00:22:04.920It's the same way you kind of have to have a cell phone now.
00:22:07.660It's pretty hard to navigate society without one.
00:22:10.280There's increasingly a need to give your identity every time you buy a sandwich or whatever.
00:22:15.380So we see this direction of travel, and I think that's very dangerous.
00:22:19.000And people oftentimes have criticized us at the War Room and other people discussing these technologies saying, oh, well, that will never happen.
00:22:27.380You know, five years ago, that was constant, right?
00:22:30.340Even as the pandemic was ongoing and you heard Klaus Schwab at the World Economic Forum, you know, waxing poetic about the rule of AI and brain chips and all of this.
00:22:41.100But now, I mean, you already had a lot of programs, like Blackrock Neurotech being rolled out in universities and other experimental labs.
00:22:50.220And so you had the first real BCIs, brain-computer interfaces, coming online.
00:22:54.840And then the first... well, they weren't the first, but, you know, mass deployment, you would say, in the dozens.
00:23:01.000And then now, with Neuralink, run by a guy who openly talks about how hundreds of millions of people will need to be chipped to keep up with the AI.
00:23:08.860And then now, you know, at the beginning of the pandemic, you had Charles Lieber at Harvard, and he was developing neural lace.
00:23:17.320It was a more subtle, injectable brain-computer interface.
00:23:21.400And he got busted for, I think, just taking money under the table from the Chinese.
00:23:25.280And it was just reported that he's now in China developing his brain-computer interfaces.
00:23:32.260And so, you know, if the Chinese are doing it, we're going to have to, right, to compete.
00:23:35.960But yeah, I'm kind of just, like, you know, seeing the possibility there.
00:23:44.140And yeah, I mean, Elon, I guess, has said stuff like that, right?
00:23:47.400He's very big on the merge with the machines future.
00:25:20.980Are we ready for mass deployment of not smarter-than-human
00:25:25.180but seemingly intelligent AI as we have now?
00:25:28.000Yeah, that's a more interesting question.
00:25:30.600That's a tricky one. And, you know, I don't have a strong intuition about that.
00:25:36.740I think it's hard to say. Yeah. Well, you know, you've worked on policy as well as the more theoretical elements.
00:25:43.860Yeah. And when we come back, I'd like to talk a bit more about that, because we're at a place where this issue, or these issues, are basically nonpartisan or bipartisan or cross-partisan.
00:25:57.760It's not something that only left-wingers or right-wingers or independents are concerned about.
00:26:03.200But speaking of gradual disempowerment, you do not want to be disempowered, whether gradually
00:26:09.680or rapidly, by the dollar. The dollar is tanking. When the dollar's convertibility into gold ended
00:26:16.920in 1971, gold was fixed at $35 an ounce. Fast forward to today, and the U.S. dollar has lost
00:26:23.600over 85% of its purchasing power, just like your brain will lose 85% of its value come
00:26:29.160the artificial general intelligence. So gold, on the other hand, has increased in value by over
00:26:34.82012,000%, just as your brain will after the EMP goes off. That's why central banks are buying gold
00:26:41.100at record levels. Text BANNON to the number 989898 to join Birch Gold's Learn and Earn
00:26:50.520precious metals event by April 30th. Text BANNON to 989898 and get your gold for your human brain.
00:26:59.520This year marks a critical moment for our country, as the opposition grows more aggressive and more
00:27:06.160unapologetic. The fight now reaches into the everyday decisions we make. Patriot Mobile has
00:27:12.440been standing on the front lines, fighting for freedom, for more than 12 years. They
00:27:17.780don't just deliver top-tier wireless service. They are activists, like me and like you in the
00:27:24.400Wolverine Posse who truly care about this republic and saving our country. Patriot Mobile offers
00:27:31.200prioritized premium access on all three major U.S. networks, giving you the same or better coverage
00:27:38.580than the main carriers themselves. That means fast speeds and dependable nationwide coverage
00:27:44.160backed by 100 percent U.S.-based customer service. They also offer unlimited data plans, mobile
00:27:52.120hotspots, international roaming, and more. With a simple, seamless activation, you can switch in
00:27:57.720minutes. Keep your number, keep your phone, or upgrade. And here's the difference. When you
00:28:03.440switch to Patriot Mobile, you'll be part of a powerful stream of giving that directly funds
00:28:09.400the Christian conservative movement. Take a stand today. Go to PatriotMobile.com slash Bannon or call
00:28:16.580972-PATRIOT. That's 972-PATRIOT. And use promo code Bannon for a free month of service. Don't
00:28:26.380wait. Do it today. That's PatriotMobile.com slash Bannon or call 972-PATRIOT and join the team today.
00:32:39.440Just because of the gradualness of it or just because you were talking about AI disempowering people at all?
00:32:45.560Yeah, just talking about existential risk, the risk of extinction generally, basically.
00:32:50.780I think a lot of people were kind of like, well, I don't know.
00:32:55.160At that time, there was a lot of skepticism about whether we would even get to AGI anytime soon, which, you know... we're going to get there eventually, in my mind.
00:33:04.060And so we've got to grapple with these questions one way or another.
00:33:07.180But yeah, it's gotten a lot better. You know, the researchers are much more, um, willing to,
00:33:14.840you know, grapple with these risks these days. Um, but yeah, there's, uh, you know, we kind
00:33:21.540of were talking about, earlier, the other kind of groups here, and ideologies. So there's some that
00:33:25.700are very, very into, you know, go as fast as you can, and, yeah, maybe humans will survive, maybe not, but,
00:33:33.640like, you know, that's not the important thing here. The important thing is, like, progress
00:33:37.940and technology. And so, you know, those arguments are going to keep
00:33:42.640going, you know, indefinitely, I guess. But I used to have more arguments about just, like,
00:33:47.080is this, like, a thing at all that we should be worried about, or that might happen? And
00:33:50.880that's much more... it feels like a settled question these days. I'm an
00:33:54.200argumentative guy, so I still, you know, pick fights. Yeah. Well, you know,
00:33:57.760you had brought it up at the Bernie event that, you know, on the one hand, there isn't enough awareness around the problems of AI.
00:34:07.180But on the other hand, over the years, it has exploded onto public consciousness.
00:35:07.440It's very clear the direction of travel.
00:35:08.440I just don't know if we'll get there fast enough, but when you have hundreds of people
00:35:12.920who are worried about this and go and work at AI companies instead of going and doing
00:35:16.600what I'm doing, talking to the public, talking to policymakers, saying, hey, this is a crisis, we
00:35:20.000should stop right now. Um, you know, we could be, I think, raising the awareness so much faster if
00:35:26.640people working at these companies would, like, say, you know what, I quit, I don't want to work on this
00:35:30.960thing that could kill everyone anymore, I don't want to work on, you know, taking everyone's job,
00:35:34.800like, this is not, you know, okay ethically. Um, so yeah, I think when I talk to people, this sort
00:35:42.640of stuff often resonates, and I think a lot of people do feel a lot of, like, doubt and guilt
00:35:48.800and uncertainty about their choices to work at those companies because of this stuff.
00:35:53.760Do you think that maybe some of the resistance is to the extreme end of it?
00:35:58.900I've spoken to Nate Soares, Holly Elmore, John Sherman, a lot of the people who talk
00:36:06.380about X-Risk, and something that, to me, I bring it up from time to time, I bring it
00:36:11.660up on the show quite a bit, the extremity of extinction could perhaps overshadow the more
00:36:20.020immediate concerns that we have now. Even in the idea of whether it is annihilation, if it annihilates
00:36:27.500a thousand people or a million people, it's still catastrophic and it stops there. Or in the idea
00:36:34.240of disempowerment, if you just get a partial realization of that disempowerment, you've
00:36:39.460already made a horrendous mistake as a society. So, um, is that maybe, rhetorically, like,
00:36:45.640part of the problem? That people are like, oh, it's not going to kill everybody, so I'm not
00:36:50.040going to worry about it. But what if it kills some people, you know? What if it kills your mom,
00:36:53.640you know? Yeah. You know, it's kind of different for different people, what they respond
00:36:58.980to. So, uh, I believe in basically, you know, telling the truth, being straightforward about my
00:37:04.840concerns. So that's, I feel like I have to talk about extinction. I have to talk about,
00:37:09.060you know, even the most sci-fi version where the AI, you know, suddenly takes off, takes over.
00:37:15.140Because I think that's real. I think that's a thing that absolutely could happen. I'm not saying
00:37:18.400it's going to. I'm not sure. You know, the future is uncertain. We don't understand this technology
00:37:21.980very well, but like we can't rule that out. It's actually like shockingly likely in my mind.
00:37:27.840But, you know, a lot of people are going to be more receptive to other things like gradual
00:37:32.280disempowerment, or even just, you know, unemployment, or, you know, the prospect that terrorists or
00:37:39.860school shooter types are going to be able to manufacture weapons of mass destruction in
00:37:43.660their garage, you know, which is kind of already happening, at least in regard to the AIs being
00:37:49.060associated with, for instance, in Florida, I think it was Florida State University, and that kid was
00:37:54.720taking instructions from the AI. I think there have been other cases now that have emerged.
00:37:58.620Yeah. And the Florida AG is suing OpenAI because of this, which is great. I was on TV this weekend talking about another lawsuit brought by parents of victims in a shooting in Canada. Same story. But yeah, that's still shootings. And just imagine if next time it could be a bioweapon. It could be another pandemic.
00:38:18.060You know, um... and I don't know, you know. Next time... we're not quite there yet, but, like,
00:38:23.120maybe in a year or something, we'll have AI that can coach people through that. Um, man, I lost track.
00:38:29.720You know, I'm curious about this, then. So you have worked on policy. You have a very clear idea
00:38:35.900of what the threats of this technology are, uh, and you also have at least, uh, the
00:38:43.680beginnings of a plan. Because if there's one thing that gradual disempowerment argues for,
00:38:49.120it's that we can't move forward without a plan. You have to at least account for this possibility
00:38:55.320and then have some sort of plan to stop it or mitigate it. So what do you see right now in the
00:39:01.280U.S., in Europe, or in China? What do you see that's promising in regard to a political response?
00:41:49.260And, you know, as somebody who's been in the field looking at the research for a long time, I can tell you we don't know how to test systems.
00:41:54.940We don't know how to align them, give them our goals or our values.
00:41:59.580And we also don't know how to tell what they're thinking and how they might behave.
00:42:05.060So people are working on all those things.
00:42:07.880We make progress, but there's still open research problems.
00:42:12.160And when you say we don't know how to test them, do you mean that, like, the evaluations that we see now from the Center for AI Safety or, you know, Anthropic's internal testing, Apollo, people like this, that the measurement of the capabilities is not accurate? Or do you mean something else by that?
00:42:32.500I think we don't know how accurate they are. And we also don't know... You want to know not just the capabilities, but also, like, the propensity, people sometimes call it.
00:42:46.720What is the system going to decide to do? Is it aligned? What kind of values? What are its goals? And that's a lot harder to test for.
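A rough sketch of the capability-versus-propensity distinction being drawn here. The `model` argument is a hypothetical callable (prompt in, text out), a stand-in rather than any real evaluation API; the function names and thresholds are invented for illustration.

```python
# Sketch of capability vs. propensity testing. `model` is a placeholder
# callable (prompt -> text), not a real API.

def capability_eval(model, task: str, solution: str) -> bool:
    """Capability: CAN the model do the task when directly asked?"""
    return solution in model(f"Solve this, showing your work: {task}")

def propensity_eval(model, open_prompt: str, marker: str, n: int = 100) -> float:
    """Propensity: left unprompted, how often does it CHOOSE the behavior?
    Note the weakness: 0 hits in n samples is not proof the behavior
    never occurs, which is part of why propensity is harder to test."""
    samples = [model(open_prompt) for _ in range(n)]
    return sum(marker in s for s in samples) / n

if __name__ == "__main__":
    stub = lambda p: "42" if "Solve" in p else "hello"   # trivial stand-in
    print(capability_eval(stub, "6 * 7", "42"))          # True
    print(propensity_eval(stub, "Say anything.", "42"))  # 0.0
```

The asymmetry is the point: a capability test can be passed once and believed, while a propensity estimate is only a sampled frequency, and behavior that is rare or context-dependent can hide from it.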
00:42:54.640In terms of the testing that's happening right now, um, you know, this is one of the things that
00:42:59.540the UK government agency I worked at, um, did. And what was the organization? It was called the
00:43:05.980AI Safety Task Force at the time; now it's the AI Safety... the AI Security Institute in the United Kingdom.
00:43:11.880Yes. Um, but, you know, looking at the state of play right now, the last couple of model releases,
00:43:18.480they were like, we, you know, we sort of tried to test it, but at the end of the day we kind of just
00:43:23.700went with vibes, because they felt like their tests weren't meaningful enough. And they're...
00:43:30.640And then the other thing that I think is really important for people to realize is the AI
00:43:33.520now can tell that it's being tested quite reliably.
00:43:37.560And so once the AI knows it's being tested, you have to wonder, is it doing the right
00:43:41.440thing because that's what it wants to do or because it knows that's what we want it to
00:43:46.320do and it knows that it needs to pass the test.
00:43:49.480So, in essence, it seems like what you're describing is a situation where you can test the capabilities and get a surface-level idea of what's going on, but beneath that surface, there's a whole lot happening in these systems that you just simply can't tease out.
00:44:03.400Yeah, 100%. Yeah. And the capabilities might be more than what we are able to observe and elicit. That's another really important point. People think that we can know what these systems are capable of, but there have been a lot of times when you just prompt the system a little bit differently, or you set up, you know, another thing around it to help it do its job, and it can suddenly do the task way better. So we don't even fully know what the systems are capable of.
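A minimal sketch of that elicitation gap. Here `query_model` is a contrived stub, not a real model, rigged so that one prompt scaffold "unlocks" the task; real evaluations face a subtler version of the same problem.

```python
# Elicitation sensitivity: the same "model" scores 0% or 100% on the
# same tasks depending only on the prompt scaffold. Contrived stub.

import statistics

def query_model(prompt: str) -> str:
    # Stand-in for a real model call, rigged so that a step-by-step
    # scaffold "unlocks" the task, mirroring the phenomenon above.
    return "CORRECT" if "step by step" in prompt else "WRONG"

SCAFFOLDS = [
    "{task}",
    "Answer concisely: {task}",
    "Think step by step, then answer: {task}",
]

tasks = [f"task {i}" for i in range(20)]
for scaffold in SCAFFOLDS:
    score = statistics.mean(
        query_model(scaffold.format(task=t)) == "CORRECT" for t in tasks
    )
    print(f"{score:.0%}  {scaffold!r}")
# An eval that only tried the first scaffold would report 0% capability;
# the third reports 100%. Measured capability is a lower bound at best.
```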
00:44:26.280You know, I read your recent essay, kind of the retrospective and a few musings post-publication of Gradual Disempowerment.
00:44:35.680And I was very happy that you gave me... you threw me a bone at the very end.
00:44:40.180The very last point being that maybe human beings will become dumber and dumber, and that you don't think that that's really all that big of a deal.
00:44:48.000But, hey, I might as well mention it. That's the biggest deal. Come on.
00:44:51.840Yeah. I don't know. Because, you know, people make the analogy with, like, calculators, where it's like, I think it's good that people can do arithmetic, but, like, we don't have to be that good at it anymore because we have calculators.
00:45:01.140Sure we do. Yeah. Don't tell them that in China. I mean, that's why they're kicking our asses in the universities.
00:45:09.700Well, you know, I just couldn't let you go without getting that one last jab in.
00:45:16.340On the one hand, I appreciate you throwing us a bone on the inverse singularity thesis, that as humans get dumber and dumber, the machines will seem smarter and smarter.
00:45:25.840Yeah. But in general, you know, again, just to reiterate, I think that your work on that, on AI risk in general, has been very, very persuasive, very, very thorough.
00:45:35.920So even if I don't know that we'll be able to do it, I would love to see it all shut down, too, maybe for different reasons.
00:45:43.340And, yeah, I really, really appreciate everything you've done.