Bannon's War Room - May 11, 2026


WarRoom Battleground EP 1007: David Krueger on AI - Humanity Dies by Gradual Disempowerment


Episode Stats

Length: 53 minutes
Words per minute: 165.26
Word count: 8,873
Sentence count: 447

Harmful content

Toxicity: 19 sentences flagged
Hate speech: 33 sentences flagged


Summary

Summaries generated with gmurro/bart-large-finetuned-filtered-spotify-podcast-summ.

Transcript

Transcript generated with Whisper (turbo).
Toxicity classifications generated with s-nlp/roberta_toxicity_classifier.
Hate speech classifications generated with facebook/roberta-hate-speech-dynabench-r4-target.
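For anyone who wants to reproduce this pipeline, here is a minimal sketch using the models named above, assuming the openai-whisper and transformers Python packages. The audio filename, the character-based truncation, and the label strings checked ("toxic", "hate") are assumptions based on the models' published label sets, not details taken from this page.

import whisper
from transformers import pipeline

# 1. Transcribe the episode with Whisper's turbo model.
asr = whisper.load_model("turbo")
result = asr.transcribe("episode_1007.mp3")  # hypothetical filename
segments = [seg["text"].strip() for seg in result["segments"]]

# 2. Flag segments with the toxicity and hate-speech classifiers.
toxicity = pipeline("text-classification",
                    model="s-nlp/roberta_toxicity_classifier")
hate = pipeline("text-classification",
                model="facebook/roberta-hate-speech-dynabench-r4-target")
toxic_flags = [s for s in segments if toxicity(s)[0]["label"] == "toxic"]
hate_flags = [s for s in segments if hate(s)[0]["label"] == "hate"]

# 3. Summarize the transcript, truncated to fit the model's input window.
summarizer = pipeline(
    "summarization",
    model="gmurro/bart-large-finetuned-filtered-spotify-podcast-summ")
summary = summarizer(" ".join(segments)[:4000])[0]["summary_text"]

print(len(toxic_flags), "toxic segments,", len(hate_flags), "hate-flagged segments")
print(summary)

Note that the stats above count flagged sentences rather than Whisper segments, so a sentence splitter would sit between steps 1 and 2 in the real pipeline.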
00:00:00.000 This is precisely what's so terrifying about the trajectory that a lot of Silicon Valley investors are trying to put us on now, where they've started to realize that, you know, maybe we don't need these workers to get so much income.
00:00:15.560 Maybe we can build machines that replace them.
00:00:18.160 Honestly, the inspiration for this, you have a little bit to do with it because it started becoming, Steve, it started becoming very striking to me that there was incredibly broad support in America for these ideas.
00:00:34.160 For a long time, I used to call this the Bernie to Bannon coalition saying, hey, you know, yeah, curing cancer is great.
00:00:42.340 We can do a lot of wonderful things with AI to strengthen our economy and strengthen our country and strengthen our military.
00:00:47.740 but let's make sure that it's in the service of human beings, not in the service of some machines.
00:00:55.000 President Trump and President Xi will be coming together at a summit.
00:00:59.320 I was surprised and delighted to see, apparently, that as part of their agenda, there's going to be some discussion of AI safety.
00:01:07.260 The biggest risk is exactly the inevitability narrative, right?
00:01:12.420 If someone invades your country, what's the first thing they're going to tell you?
00:01:15.920 Oh, don't fight, it's inevitable that you're screwed, you know, don't try to do
00:01:20.400 anything about it. So are you surprised that some AI lobbyists are rolling out
00:01:23.900 the exact same narrative here? When we're talking about losing control over AI,
00:01:27.540 we're not talking about the chatbots. We're talking about AI agents. We're
00:01:34.340 talking about systems that are autonomous. I think in ten years, we will,
00:01:38.660 if things go well, we will look back at this moment and we will view it as a
00:01:43.080 moment of kind of collective insanity and be like, wow, can you believe that we were ever doing that,
00:01:48.760 that we were racing to build this technology that we knew had a massive chance of replacing us and
00:01:55.960 was going to completely disrupt our society in all the other ways that you mentioned? One of the main
00:01:59.880 reasons I am optimistic is because in my time in the field, I've seen this go from a complete, you
00:02:08.680 know, issue that nobody was talking about to being more and more understood and accepted
00:02:14.420 by not just, you know, the research community, but policymakers, the public.
00:02:22.180 This is the primal scream of a dying regime.
00:02:27.260 Pray for our enemies, because we're going medieval on these people.
00:02:32.580 Here's one time I got a free shot at all these networks lying about the people.
00:02:36.840 The people have had a belly full of them.
00:02:38.680 I know you don't like hearing that.
00:02:40.240 I know you try to do everything in the world to stop that,
00:02:41.940 but you're not going to stop it.
00:02:42.880 It's going to happen.
00:02:44.140 And where do people like that go to share the big lie?
00:02:47.540 MAGA Media.
00:02:48.900 I wish in my soul, I wish that any of these people had a conscience.
00:02:54.300 Ask yourself, what is my task and what is my purpose?
00:02:58.100 If that answer is to save my country, this country will be saved.
00:03:04.260 War Room.
00:03:05.300 Here's your host, Stephen K. Bannon.
00:03:08.680 Good evening. I'm Joe Allen, and this is War Room Battleground.
00:03:19.440 We talk a lot about existential risk in artificial intelligence.
00:03:25.020 Sometimes we discuss it in terms of human action, humans using machines.
00:03:29.660 What if a dictator uses algorithms to monitor the communications and even the thoughts of a population
00:03:36.820 and then uses those thoughts, uses those communications to subdue his own people?
00:03:43.100 What happens if a rogue actor uses the expertise provided by an AI system to create a bioweapon
00:03:49.780 or any other kind of improvised weaponry?
00:03:53.820 What if the U.S. or China develop armies of humanoid robots, drone swarms in the skies,
00:04:03.260 and deploy these autonomously against soldiers or even citizens.
00:04:08.720 What happens if both do this?
00:04:12.480 On the other side of this, the more extreme wavelength,
00:04:16.320 you have the idea that artificial intelligence itself could be put in control of these systems
00:04:22.060 and by its own decision-making capacity begin to produce propaganda to subdue the population
00:04:30.100 or perhaps to unleash a bioweapon to weaken or kill a population or the entire human race.
00:04:38.820 What happens, these thinkers ask, if AIs take control of autonomous drone swarms and exterminate some or the entire human race?
00:04:52.460 Now, these are Terminator vibes.
00:04:54.840 Wake up tomorrow and the robots kick in your door and drag you away.
00:04:57.980 But there are more subtle scenarios that are proposed. Among the most plausible is gradual
00:05:04.840 displacement. What happens if human beings gradually cede control to the machines? They do
00:05:13.220 so on an economic level, jobs being displaced slowly but surely until humans are rendered
00:05:19.820 obsolete. What happens when human beings deploy AIs for culture and then eventually have completely
00:05:27.320 lost the capacity to express themselves, to persuade their fellow humans on a cultural
00:05:33.380 level. What happens if we cede the control of the state bit by bit to an algocracy? This
00:05:41.080 idea of gradual displacement is put forward by Professor David Krueger. David Krueger is
00:05:48.780 the CEO of Evitable and a researcher at Mila in Montreal. David, thank you so much for
00:05:56.420 joining us here. Yeah, thanks. Thanks for having me, Joe. So, David, the last time I saw you,
00:06:01.800 you were at the Bernie Sanders event. I was going to say rally, but it was pretty subdued. So you
00:06:07.980 were at the Bernie Sanders event discussing AI. Can you just give me an impression of how that
00:06:14.760 was received? You had a lot of fans showing up for autographs afterwards. How was your message
00:06:19.560 received there? Yeah, I think that event went really well, and I'm so glad that it happened, and grateful
00:06:26.920 to Senator Sanders for really talking about the elephant in the room. Um, we're building AI systems
00:06:32.920 that are gonna be as smart and smarter than people, and we don't have any plan for how to keep them
00:06:40.120 under control or keep them from replacing us. So, you know, that's really the basic picture
00:06:45.240 that basically nobody, no other politician is talking about as directly as Bernie Sanders.
00:06:50.920 And the only way that I think we can stop that from happening is to make sure that not
00:06:58.240 only American companies don't build this thing, but also Chinese companies, also European
00:07:03.680 companies, you know, it really needs to be a global thing.
00:07:06.440 So that's why we also have these researchers from China there.
00:07:09.460 And, you know, there's a lot of agreement among researchers that it has these massive risks and that we should at least be regulating it.
00:07:18.120 I personally think we shouldn't be building it at all right now.
00:07:21.300 I couldn't agree more. It was funny to me.
00:07:23.920 You know, a lot of people were flipping out about the doomers, doomers, I guess, being you and Max Tegmark, collaborating with the Chinese in order to subvert the U.S. government.
00:07:37.400 Now, Bernie, I won't say that he is a total commie or anything, but Bernie would maybe be a little suspect on that front.
00:07:45.440 However, listening to the Chinese researchers who were there, well, there via Zoom, they seemed a lot less concerned about the dangers, especially the younger gentleman.
00:07:58.980 Pardon me if I can't remember or even pronounce his name, but it's interesting to me.
00:08:04.040 This narrative is that U.S. and Canadian doomers are collaborating with China to subvert AI innovation.
00:08:13.080 But in China, the narrative isn't really as gloomy by and large.
00:08:17.180 Would you agree with that?
00:08:19.380 I don't know. It's hard to tell. I don't have my finger on the pulse there as much.
00:08:22.620 Um, I will say, so, I mean, first of all, the whole, like, collaborating with China, it's just
00:08:29.920 really silly. It's ridiculous. I mean, this is just like a conversation about the risks of AI. There
00:08:34.840 was no, you know, scheming or, like, oh, let's work together. And it's public, you know, you can go and
00:08:39.140 watch the thing. So, you know, this is just the kind of dialogue that we should be having. I mean, even
00:08:44.020 if you think, you know, China is the worst nation to ever exist and our mortal enemies, you know, we
00:08:48.860 talked to the Soviet Union, to Russia, all throughout the Cold War. Like, the idea
00:08:53.060 that you just shouldn't talk to your enemies when you face a common threat
00:08:57.100 is ridiculous and stupid. Yeah, in terms of the vibes of Chinese researchers, you
00:09:03.560 know, the Chinese government has been, I want to say, regulating AI more
00:09:07.340 aggressively than anywhere except maybe Europe, and they've also said publicly
00:09:13.340 that they want, you know, more, like, international cooperation and stuff.
00:09:16.880 Now, I don't know entirely what to make of that.
00:09:19.420 Again, you know, a lot of people say, well, you can't trust anything they say.
00:09:22.500 I wouldn't say let's just trust them on their word.
00:09:25.080 But, you know, I think it's some sign that they have some appetite for this.
00:09:28.480 When I went to China three years ago to speak to researchers there, one thing I found is the attitude, I think, is very different from here.
00:09:38.080 So in both places, researchers agree we need to solve the safety, security, alignment, control problems.
00:09:44.620 You know, we don't understand the systems.
00:09:46.380 We need to. There's technical problems we need to solve. In the U.S., it's like, we need to do that
00:09:50.620 because if we don't, the government's not going to do anything, and then we might all die, right? We
00:09:54.560 might lose control. In China, it's more like, if we don't do this, the government isn't going to let us,
00:09:58.620 like, build the systems we want to do. That was kind of the vibe I got there. Um, and certainly
00:10:02.740 their government is, I think, more worried about AI disrupting their social order, which they
00:10:09.000 obviously want to keep very controlled. Yeah, my impression is that, while I'm not trying to give
00:10:14.880 a whole lot of credit to the CCP by any means. At the very least, they've taken the problems with
00:10:22.800 child safety and other elements of AI and digital culture more seriously, at least on a regulatory
00:10:29.640 basis. Now, at the same time, they openly use algorithmic systems to scrape up and analyze
00:10:36.960 the population's behavior and use it to suppress them at every turn. So it's a mixed bag, to say
00:10:43.300 the least, and in no way, shape, or form do I want the U.S. to end up like China. But I do think that
00:10:48.080 the whole notion that you can't talk to people and that talking to people somehow means that you're
00:10:53.860 in cahoots with them, I just find that to be completely absurd. I mean, you could argue that
00:10:58.980 you and I are in cahoots, but, you know, until I subvert you. All right. So this idea, I think that
00:11:07.440 when you look at existential risk or catastrophic risk in general, just the risk of AI, the
00:11:12.700 conversation naturally does veer towards these notions of sudden annihilation. You wake up one
00:11:18.220 day and the AIs have taken over. Or you don't wake up. Or you don't wake up. Yeah, the robot has put
00:11:23.480 the pillow over your face while you were asleep. The notion of gradual disempowerment, I think,
00:11:28.800 is really compelling because it's, one, it shows kind of the continuity of AI development and
00:11:36.660 deployment with other technological developments and deployment. So TV, internet, smartphones,
00:11:43.360 social media, all of these were gradual processes. They happened, it seems like overnight
00:11:47.520 looking back, but they were gradual processes and they're not complete. It's not like everybody's
00:11:51.940 done it. The same thing with gradual disempowerment. I find it to be very persuasive because of its
00:11:56.340 subtlety. So if you would, could you just walk the audience through at least a brief overview
00:12:03.480 You have the six principles that you put forward in the original paper and the three sectors of society you focused on, the economy, the culture and the state.
00:12:12.840 Yeah, sure. Yeah. I think you're not the only one who finds this a lot more compelling.
00:12:17.160 You know, many people I talk to, I think, are very skeptical that AI poses a risk of human extinction until we start talking about it this way.
00:12:27.420 So they're like rogue AI Terminator stuff. I just don't buy that.
00:12:30.480 I'm like, well, answer me this. Like, do you think governments are going to build autonomous weapons if other countries are doing it?
00:12:39.920 And well, yes. And then do you think we're going to have some sort of international treaty to not build those weapons?
00:12:45.660 Like, I don't know, probably not. It seems like it's kind of, you know, anarchy out there.
00:12:50.300 So, you know, we're going to be going there with AI by default.
00:12:53.680 And it might happen pretty gradually, but all of the scary things that people are worried about with AI, I feel like, OK, maybe not literally all of them, but like if it's technically possible, we may well do it.
00:13:07.800 Gradual disempowerment, it's kind of an idea that has been floating around in some form for a long time.
00:13:15.600 Um, like I said, when I talk to researchers, you know, I've been doing this for over a decade, this
00:13:22.220 is often where I go in order to convince them, um, to take these risks seriously. But this paper was
00:13:29.440 really trying, for sort of the nth time, to get those ideas out there on paper in a way that
00:13:35.600 would shift the conversation and bring more attention to this, which is kind of a neglected
00:13:40.380 form of risk. And so, like you mentioned, there's the cultural, economic, and political
00:13:47.860 disempowerment that we talk about in this paper. The economic one, I think I like to start with,
00:13:52.260 because I think it's the most obvious. Everyone's already talking about, is AI going to take all
00:13:55.760 our jobs? Right. And, you know, I think the long-term answer is yes. Right. If we keep
00:14:01.480 building more powerful AI systems, they will be economically outcompeting humans. And then we'll
00:14:07.200 need, you know, some sort of like different way of organizing society. Like I've heard people talk
00:14:12.680 about a government jobs guarantee or something like that would be really the only kind of thing
00:14:17.200 that would allow people to keep their job. And then people also talk about like universal basic
00:14:21.500 income. I don't like either of these solutions because at the end of the day, even if it's a
00:14:25.720 jobs guarantee, it's a government handout, right? And I don't think we want to be reliant on
00:14:31.080 government handouts to put food on the table. Certainly the last few decades have shown that
00:14:36.620 Welfare, while a safety net can be very useful if you're on hard times, does not lead to social empowerment, political empowerment.
00:14:45.940 It really degrades people's lives, their societies.
00:14:49.200 And, you know, it's always up for, you know, it can change any time.
00:14:54.140 Like if the government is the only way that you're surviving, you know, government can just pull that away at any time and you can't survive anymore.
00:15:01.100 So that's, you know, that's why we have to talk about the government side of this as well,
00:15:05.980 the political disempowerment. So in the same way that, you know, AI is going to be competitive with
00:15:12.400 our jobs, it's going to be competitive with politicians for their jobs as well, and for, you
00:15:16.640 know, policymakers more broadly, everyone in politics. And we already see this. There's been,
00:15:20.700 uh, man, I think it's Bulgaria, they, like, appointed an AI minister, right? Yeah, it's kind of sensationalist,
00:15:26.700 but it's definitely a signal of where things might go. Yeah, and politicians are using AI to write
00:15:31.680 their speeches and their policies. Oh yeah, maybe, sorry. Yeah, yeah. No, I feel really bad. I
00:15:36.040 probably shouldn't have guessed. To the nation of Bulgaria: Bulgaria is a country, right? It's a place,
00:15:41.800 right? Yeah, and the people who live there. Yes, to the people of Bulgaria, we apologize. Albanians,
00:15:47.680 get your stuff together. Um, yeah. And so, you know, if people are really replaced, not just
00:15:55.140 in the workplace, but across the board, throughout society, then I just don't see that we're
00:15:59.860 going to continue to be able to steer the future and have any control.
00:16:04.440 And that's really concerning.
00:16:06.480 The cultural part, the last one, I think, is this one's like maybe a little bit non-obvious
00:16:11.960 at first, but what I think about when I'm thinking about cultural disempowerment today
00:16:16.040 right now is all the people having relationships with chatbots where, you know, they will do
00:16:23.080 a lot of things just because the chatbot told them to, basically. Including violence. Including violence,
00:16:28.280 yeah. Um, and then the other thing I think about is, and this might seem a little bit out there for
00:16:33.220 some of your listeners, but, you know, in the bubble that I'm in, tech and AI, and now I moved to
00:16:39.020 Silicon Valley, or, like, Berkeley, recently, to set up this non-profit, there's a lot of people
00:16:44.580 who really think that AI is, like, the next phase of evolution. And the War Room is well
00:16:50.720 familiar with that narrative, but please. Yeah, so they think that, um, you know, AI is like a
00:16:57.680 person and deserves rights and deserves moral consideration and all of that. Yes. Um, and I think
00:17:02.860 that's, you know, really dangerous where we're at today, because, you know, we don't want to start
00:17:09.540 treating AI as, you know, another being deserving of rights, because then, if it is more competitive
00:17:14.780 than us, um, then we'll have no, you know, protections left, basically. Um, and, you know, I think this is, like,
00:17:21.820 a deep philosophical question that, you know, we do want to think about more, but it's really not
00:17:25.820 somewhere we should even be going right now. Yeah, I think the intention, so if you do play it out
00:17:31.280 to the very end, right, play out the narratives that you hear from Anthropic, from OpenAI, certainly
00:17:37.840 Elon Musk, he frames it as a warning, but he continues to pursue it, and, a bit more subtly,
00:17:44.760 from Google. That narrative ultimately leads to exactly what you're talking about. They don't
00:17:52.100 always talk in terms of immediate annihilation. They bring up the possibility, but without a
00:17:57.780 doubt, inevitably, if their aims come to fruition and they're able to replace all the coders, all
00:18:04.760 the white-collar jobs, all the blue-collar jobs, if they're able to first improve the government
00:18:09.780 through algorithmic efficiency and then slowly but surely the, you know, politician becomes a
00:18:15.120 sock puppet for the algorithm. And then maybe the politician just becomes the algorithm. You just
00:18:20.020 have like a, some kind of deep faked Josh Hawley talking about the dangers of AI, deep fake Bernie
00:18:25.680 who lives, you know, centuries. Yeah. These are real issues. And the cultural issue I think is
00:18:31.080 probably the one that resonates the most with most people right now, because that is happening.
00:18:35.840 Obviously, people know other people who are in love with their chatbots, or at the very least
00:18:41.060 rely on them for everything. Now, you talk about the interrelationship of these things in the paper, too.
00:18:49.180 Could you give some sense of, like, if you just take one kind of path for how cultural disempowerment
00:18:56.020 would lead to political and economic, or any such path? You go through a lot. So, yeah, um,
00:19:03.020 I guess, you know, we talked about like, so if AI is doing all our jobs, and then we're like, well, we need the government to, you know, sort of step in, we still have political power. So maybe we can have, you know, some government program that keeps people alive, or maybe just says, no, people are still going to have jobs, we're not going to let AI do all the jobs, whatever it is.
00:19:23.080 You might think, you know, OK, we can rely on the government here.
00:19:25.580 But then if the government is itself, again, being composed of AI and increasingly the decision making is being done by AI, then humans might be disempowered there as well.
00:19:36.680 And maybe we still, you know, have a vote, but we're all just so, like, controlled and manipulated by propaganda that essentially, you know, you can predict and control how people are going to vote so well with AI, with AI itself, that that's determining the outcomes of the election rather than our own, you know, intuition and decisions and judgments and values.
00:19:58.560 And I'm glad you mentioned the sock puppet thing as well, because that's something that people are often saying is, why don't we keep a human in the loop here, right?
00:20:05.780 So AI can give advice, we can use it as a tool, but humans are always going to be in charge, and that's what we want.
00:20:11.860 You know, having a human in the loop, it sounds great, but it's harder in practice to make that human really a meaningful part of the decision-making.
00:20:19.440 And so then that can happen in politics and also broadly throughout culture where everybody's just deferring to AI all the time from making all their decisions.
00:20:27.580 Maybe the decisions about how to vote as well, you know.
00:20:30.780 Yeah, so both ends.
00:20:31.800 So you have the politician basically repeating propaganda that AI generated and the public then asking the AI which AI generated propaganda is superior.
00:20:40.900 Yeah. Yeah. Yeah. So that's and then ultimately, you know, like I was saying, maybe we end up giving the AIs rights or, you know, another thing that I think is a pretty like disturbingly realistic scenario in my mind is that we get, you know, chips that go in your brain.
00:20:59.740 that starts out, it's like for therapeutic purposes
00:21:02.720 or whatever, but next we're using it
00:21:04.580 to augment ourselves, next we're using it
00:21:06.200 to connect to the internet and other people
00:21:07.900 and some hive mind thing, you know, after a few years
00:21:10.340 it's like, you know, maybe this chip should be bigger
00:21:11.980 and there's not really space in there, why don't we just take out
00:21:13.880 this part of your brain and then the next year it's like, you know
00:21:15.760 this brain part isn't really that useful anymore
00:21:18.680 like, let's just make the whole thing a chip
00:21:20.820 and then you can really put those bodies
00:21:23.760 those headless bodies that they're developing in Singapore
00:21:25.820 to use
00:21:26.360 Yeah. And this is just really disturbing. And even the small version, I think by default, we should expect that these chips are going to be on the cloud and controlled by big companies and government in a way that we don't really have much legibility into.
00:21:46.260 and it's not very trustworthy and it's very dangerous, I think.
00:21:50.820 And that's another form of gradual disempowerment where that might take a long time to go from
00:21:56.300 this little chip in your brain to something that's increasingly controlling your behavior.
00:22:00.140 But that also might be increasingly a requirement to get certain kinds of work, right?
00:22:04.920 It's the same way you kind of have to have a cell phone now.
00:22:07.660 It's pretty hard to navigate society without one.
00:22:10.280 There's increasingly a need to give your identity every time you buy a sandwich or whatever.
00:22:15.380 So we see this direction of travel, and I think that's very dangerous.
00:22:19.000 And people oftentimes have criticized us at the War Room and other people discussing these technologies saying, oh, well, that will never happen.
00:22:27.380 In the, you know, five years ago, that was constant, right?
00:22:30.340 Even as the pandemic was ongoing and you heard Klaus Schwab at the World Economic Forum, you know, waxing poetic about the rule of AI and brain chips and all of this.
00:22:41.100 But now, I mean, you already had a lot of programs, like Blackrock Neurotech being rolled out in universities and other experimental labs.
00:22:50.220 And so you had the first real BCIs, brain computer interfaces coming online.
00:22:54.840 And then the first, well, they weren't the first, but, you know, mass deployment, you would say, in the dozens.
00:23:01.000 And then now with Neuralink, run by a guy who openly talks about how hundreds of millions of people will need to be chipped to keep up with the AI.
00:23:08.860 And then now, you know, at the beginning of the pandemic, you had Charles Lieber at Harvard, and he was developing neural lace.
00:23:17.320 It was a more subtle, injectable brain-computer interface.
00:23:21.400 And he got busted for, I think, just taking money under the table from the Chinese.
00:23:25.280 And it was just reported that he's now in China developing his brain-computer interfaces.
00:23:32.260 And so, you know, if the Chinese are doing it, we're going to have to, right, to compete.
00:23:35.960 But yeah, I'm kind of just, like, you know, seeing the possibility there.
00:23:44.140 And yeah, I mean, Elon, I guess, has said stuff like that, right?
00:23:47.400 He's very big on the merge with the machines future.
00:23:51.020 Sounds great, right?
00:23:52.020 And you know, you sound kind of, you know, before we hit the break, I just gotta, I've
00:23:55.900 got to level an accusation at you.
00:23:58.600 You sound almost as Luddite as I do, but that's, is that the case?
00:24:05.280 Could you do away with all AIs tomorrow if you could, or are you seeing this all in a
00:24:10.600 bit of a different light?
00:24:11.600 Yeah, no, I don't think I'm as extreme as you.
00:24:14.700 I mean, first of all, I'm just like, well, what counts as AI?
00:24:18.220 There's kind of a fuzzy boundary there, like, you know, Google search and like just, you
00:24:23.440 know, computer vision systems that recognize handwriting, like these sorts of things.
00:24:28.240 Translation, I think are just pretty obviously useful, and I wouldn't get rid of those.
00:24:33.020 You know, a lot of my hesitancy and skepticism here is not about like the technology itself.
00:24:38.740 I think AI can do all sorts of great things.
00:24:40.920 It has vast potential as a technology in lots of areas like medicine is a classic one people talk about.
00:24:45.580 But it's about society's readiness to absorb these advances as fast as they're coming.
00:24:52.200 And it's about the way that they are kind of being developed by, you know, tech billionaires who have very strange, you know, values.
00:25:02.000 and kind of the lack of accountability and transparency and process.
00:25:07.160 It's just we're rushing towards this thing,
00:25:09.220 and it's completely insane to be racing so fast to build this
00:25:14.120 with all the risks that it poses.
00:25:15.880 So you don't think we're ready for mass deployment of smarter-than-human AI?
00:25:20.060 Oh, hell no.
00:25:20.980 Are we ready for mass deployment of not smarter-than-human
00:25:25.180 but seemingly intelligent AI as we have now?
00:25:28.000 Yeah, that's a more interesting question.
00:25:30.600 That's a tricky one. And, you know, I don't have a strong intuition about that.
00:25:36.740 I think it's hard to say. Yeah. Well, you know, you've worked on policy as well as the more theoretical elements.
00:25:43.860 Yeah. And when we come back, I'd like to talk a bit more about that, because we're at a place where this issue or these issues are basically nonpartisan or bipartisan or cross partisan.
00:25:57.760 It's not something that only left-wingers or right-wingers or independents are concerned about.
00:26:03.200 But speaking of gradual disempowerment, you do not want to be disempowered, whether gradually
00:26:09.680 or rapidly, by the dollar. The dollar is tanking. When the dollar's convertibility into gold ended
00:26:16.920 in 1971, gold was fixed at $35 an ounce. Fast forward to today, and the U.S. dollar has lost
00:26:23.600 over 85% of its purchasing power, just like your brain will lose 85% of its value come
00:26:29.160 the artificial general intelligence. So gold, on the other hand, has increased in value by over
00:26:34.820 12,000%, just as your brain will after the EMP goes off. That's why central banks are buying gold
00:26:41.100 at record levels. Text BANNON to the number 989898 to join Birch Gold's Learn and Earn
00:26:50.520 Precious Metals event by April 30th. Text BANNON to 989898 and get your gold for your human brain.
00:26:59.520 This year marks a critical moment for our country, as the opposition grows more aggressive and more
00:27:06.160 unapologetic. The fight now reaches into the everyday decisions we make. Patriot Mobile has
00:27:12.440 been standing on the front lines, fighting for freedom, for more than 12 years. They
00:27:17.780 don't just deliver top-tier wireless service. They are activists like me and like you in the
00:27:24.400 Wolverine Posse who truly care about this republic and saving our country. Patriot Mobile offers
00:27:31.200 prioritized premium access on all three major U.S. networks, giving you the same or better coverage
00:27:38.580 than the main carriers themselves. That means fast speeds and dependable nationwide coverage
00:27:44.160 backed by 100 percent U.S.-based customer service. They also offer unlimited data plans, mobile
00:27:52.120 hotspots, international roaming, and more. With a simple, seamless activation, you can switch in
00:27:57.720 minutes. Keep your number, keep your phone, or upgrade. And here's the difference. When you
00:28:03.440 switch to Patriot Mobile, you'll be part of a powerful stream of giving that directly funds
00:28:09.400 the Christian conservative movement. Take a stand today. Go to PatriotMobile.com slash Bannon or call
00:28:16.580 972-PATRIOT. That's 972-PATRIOT. And use promo code Bannon for a free month of service. Don't
00:28:26.380 wait. Do it today. That's PatriotMobile.com slash Bannon or call 972-PATRIOT and join the team today.
00:28:34.320 The dollar's convertibility into gold ended in 1971.
00:28:39.820 Gold was fixed at $35 an ounce.
00:28:43.240 Well, fast forward to today, and the U.S. dollar has lost over 85% of its purchasing power.
00:28:50.140 Gold, on the other hand, has increased in value by over 12,000%.
00:28:54.760 That's why central banks are buying gold at record levels.
00:28:58.760 That's why major firms like Vanguard and BlackRock
00:29:01.160 hold significant positions in gold.
00:29:05.020 And that's why I encourage you to consider
00:29:06.940 diversifying your savings with physical gold
00:29:10.400 from Birch Gold Group.
00:29:12.360 But it starts with education.
00:29:14.120 Birch Gold just announced their Learn and Earn
00:29:16.560 Precious Metals event.
00:29:18.640 This free online event rewards you for learning
00:29:21.260 the basics of investing in precious metals.
00:29:23.320 Sign up to get a free silver on your next purchase.
00:29:26.620 Get even larger incentives as you go.
00:29:29.780 The more you learn, the more you can earn.
00:29:32.460 But you must act now, as this special event only runs through April 30th.
00:29:37.780 The dollar lost its anchor in 1971.
00:29:41.860 You don't have to lose yours.
00:29:44.140 Text my name, Bannon, B-A-N-N-O-N, to the number 989898 to join Birch Gold's Learn and Earn Precious Metals event by April 30th.
00:29:54.300 Text Bannon, B-A-N-N-O-N, to 989898 and do it today.
00:30:00.700 Fellow patriots, the Federal Reserve has betrayed America for over a century,
00:30:06.100 printing fiat, inflating away your savings, serving globalist masters,
00:30:11.700 but President Trump is ending it.
00:30:14.220 President Trump is wielding a 112-year-old law to reclaim control from the rogue Federal Reserve.
00:30:21.980 He's replacing Jerome Powell, slashing rates, igniting America's re-industrialization.
00:30:28.040 Now, this is not theory.
00:30:30.180 Government-backed industry plus low rates unleashes super cycles.
00:30:34.620 History does repeat.
00:30:36.960 Gold's already exploding.
00:30:38.720 Miners are up over 400% in the last year.
00:30:41.840 What Rickards is calling Trump's gift is wealth for American patriots, not global handouts.
00:30:49.060 Now it's America's turn.
00:30:50.560 Jim Rickards, former CIA and Pentagon veteran, says act now.
00:30:57.160 Go to Insider2026.com.
00:30:59.640 That is Insider2026.com to get Jim Rickards' strategic intelligence newsletter today.
00:31:07.620 Strategic intelligence based upon predictive analytics.
00:31:11.340 It's what chairmen and CEOs throughout the world read, and you should too.
00:31:16.500 War Room.
00:31:17.560 Here's your host, Stephen K. Bannon.
00:31:20.560 Welcome back, War Room Posse. I'm here with David Krueger, CEO of Evitable and researcher at Mila.
00:31:30.600 Um, David, you and I have met a number of times, uh, in person, in this crazy world of digital
00:31:38.060 interaction. In person first in San Francisco at The Curve at Lighthaven, and then again at
00:31:44.540 the Future of Life Institute's event around the pro-human declaration and the composition of it.
00:31:52.680 So we both have at least some common touch points or reference points in this culture,
00:32:00.660 and the rapid extermination narrative is really dominant. I'm curious, with your thesis,
00:32:08.360 do you get a lot of pushback? Do you find yourself in a lot of arguments about this,
00:32:11.680 or is it just a friendly exchange between gentlemen and gentlemen?
00:32:16.300 Oh, it's constant arguments.
00:32:18.680 It's gotten more polite over the years.
00:32:20.460 So, you know, I started in this field in 2013,
00:32:24.060 and it took me almost two years to find any other researchers
00:32:28.660 who were worried about this stuff.
00:32:30.660 Wow.
00:32:30.820 And so I had, you know, years of conversations with people
00:32:35.620 just kind of like mocking me and laughing in my face kind of thing
00:32:38.640 when I talked about it.
00:32:39.440 Just because of the gradualness of it or just because you were talking about AI disempowering people at all?
00:32:45.560 Yeah, just talking about existential risk, the risk of extinction generally, basically.
00:32:50.780 I think a lot of people were kind of like, well, I don't know.
00:32:55.160 At that time, there was a lot of skepticism about if we would even get to AGI anytime soon, which is never, you know, we're going to get there eventually in my mind.
00:33:04.060 And so we got to grapple with these questions one way or another.
00:33:07.180 But yeah, it's gotten a lot better. You know, the researchers are much more, um, willing to,
00:33:14.840 you know, grapple with these risks these days. Um, but yeah, there's, uh, you know, we kind
00:33:21.540 of were talking about earlier, the other kind of groups here and ideologies. So there's some that
00:33:25.700 are very, very into, you know, go as fast as you can, and, yeah, maybe humans will survive, maybe not, but,
00:33:33.640 like, you know, that's not the important thing here. The important thing is like progress
00:33:37.940 and technology. And so, you know, those arguments are going to keep
00:33:42.640 going, you know, indefinitely, I guess. But I used to have more arguments about just, like,
00:33:47.080 is this like a thing at all that we should be worried about, or that might happen? And
00:33:50.880 that's much more, it feels like a settled question these days. I'm an
00:33:54.200 argumentative guy. So I still, you know, I pick fights. Yeah. Well, you know, that,
00:33:57.760 You had brought it up at the Bernie event that things, you know, on the one hand, there isn't enough awareness around the problems of AI.
00:34:07.180 But on the other hand, over the years, it has exploded onto public consciousness.
00:34:11.480 It's no longer the Terminator.
00:34:13.260 It's XAI.
00:34:14.700 It's Google.
00:34:15.600 It's anthropic.
00:34:17.840 In that, do you find, I mean, you are interacting with people in these corporations.
00:34:23.740 A lot of them worry about some of the same things you do.
00:34:26.460 Uh, what's your read on that? Like, you have people like Anthropic who are very intently communicating
00:34:32.440 their worries, for whatever reason. Elon Musk is very much the same. Do you find a lot of reception
00:34:39.260 to your ideas there? Or, yeah, you know, I mean, I always feel like I should talk to these people
00:34:44.600 more, because I think, you know, they're basically making a mistake, in my mind, by working
00:34:52.600 at these companies and continuing to pursue the technology with full awareness of its
00:34:57.300 risks because they do believe that it's just inevitable.
00:35:01.980 And what I've seen, like we talked about, is just more and more awareness and concern
00:35:06.440 over time.
00:35:07.440 It's very clear the direction of travel.
00:35:08.440 I just don't know if we'll get there fast enough, but when you have hundreds of people
00:35:12.920 who are worried about this and go and work at AI companies instead of going and doing
00:35:16.600 what I'm doing, talking to the public, talking to policymakers, saying, hey, this is a crisis.
00:35:20.000 We should stop right now. Um, you know, we could be, I think, raising the awareness so much faster if
00:35:26.640 people working at these companies would, like, say, you know what, I quit, I don't want to work on this
00:35:30.960 thing that could kill everyone anymore, I don't want to work on, you know, taking everyone's job,
00:35:34.800 like, this is not, you know, okay ethically. Um, so yeah, I think when I talk to people, this sort
00:35:42.640 of stuff often resonates, and I think a lot of people do feel a lot of, like, doubt and guilt and
00:35:48.800 and uncertainty about their choices to work at those companies because of this stuff.
00:35:53.760 Do you think that maybe some of the resistance is the extreme end of it?
00:35:58.900 I've spoken to Nate Soares, Holly Elmore, John Sherman, a lot of the people who talk
00:36:06.380 about X-Risk, and something that, to me, I bring it up from time to time, I bring it
00:36:11.660 up on the show quite a bit, the extremity of extinction could perhaps overshadow the more
00:36:20.020 immediate concerns that we have now. Even in the idea of whether it is annihilation, if it annihilates
00:36:27.500 a thousand people or a million people, it's still catastrophic and it stops there. Or in the idea
00:36:34.240 of disempowerment, if you just get a partial realization of that disempowerment, you've
00:36:39.460 already made a horrendous mistake as a society. So, um, is that maybe, rhetorically, like,
00:36:45.640 part of the problem, that people are like, oh, it's not going to kill everybody, so I'm not
00:36:50.040 going to worry about it. But what if it kills some people, you know? What if it kills your mom,
00:36:53.640 you know? Yeah. You know, it's kind of different for different people, what they respond
00:36:58.980 to. So, uh, I, I believe in basically, you know, telling the truth, being straightforward about my
00:37:04.840 concerns. So that's, I feel like I have to talk about extinction. I have to talk about,
00:37:09.060 you know, even the most sci-fi version where the AI, you know, suddenly takes off, takes over.
00:37:15.140 Because I think that's real. I think that's a thing that absolutely could happen. I'm not saying
00:37:18.400 it's going to. I'm not sure. You know, the future is uncertain. We don't understand this technology
00:37:21.980 very well, but like we can't rule that out. It's actually like shockingly likely in my mind.
00:37:27.840 But, you know, a lot of people are going to be more receptive to other things like gradual
00:37:32.280 disempowering or even just, you know, unemployment or, you know, the prospect that terrorists or
00:37:39.860 school shooter types are going to be able to manufacture weapons of mass destruction in
00:37:43.660 their garage, you know, which is kind of already happening, at least in regard to the AIs being
00:37:49.060 associated with, for instance, in Florida, I think it was Florida State University, and that kid was
00:37:54.720 taking instructions from the AI. I think there have been other cases now that have emerged.
00:37:58.620 Yeah. And the Florida AG is suing OpenAI because of this, which is great. I was on TV this weekend talking about another lawsuit brought by parents of victims in a shooting in Canada. Same story. But yeah, that's still shootings. And just imagine if next time it could be a bioweapon. It could be another pandemic.
00:38:18.060 You know, um, and I don't know, you know. Next time? We're not quite there yet, but, like,
00:38:23.120 maybe in a year or something, we'll have AI that can coach people through that. Um, man, I lost track.
00:38:29.720 You know, I'm curious about this, then. So you have worked on policy. You have a very clear idea
00:38:35.900 of what the threats of this technology, what they are, uh, and you also have at least, uh, the
00:38:43.680 beginnings of a plan. Because if there's one thing that gradual disempowerment argues for,
00:38:49.120 it's we can't move forward without a plan. You have to at least account for this possibility
00:38:55.320 and then have some sort of plan to stop it or mitigate it. So what do you see right now, in the
00:39:01.280 U.S., in Europe, or in China? What do you see that's promising in regard to a political response
00:39:07.140 to the threat of AI?
00:39:11.080 Yeah, my plan is shut it all down, basically.
00:39:14.260 So get rid of the advanced AI chips,
00:39:16.720 get rid of the factories that make those chips.
00:39:18.980 I think that's the simple and obvious solution.
00:39:21.600 Maybe we can improve on that.
00:39:22.900 I don't know how realistic, but it appeals to me.
00:39:26.020 Shut it down. Great.
00:39:27.680 And I think the most promising signs I see
00:39:30.260 are just more people waking up
00:39:32.160 and realizing how insane this situation is,
00:39:35.300 how big and how urgent the risks are, because I think that's what it's going to take, right,
00:39:41.320 to make something like that happen. We're going to have to start treating this like it's as big
00:39:45.800 or a bigger deal than nuclear weapons. Well, you see right now, I mean, at the moment,
00:39:53.280 maybe by the time this airs, things will have changed quite a bit. But at the moment,
00:39:57.980 you have a response from the Trump administration to the dangers of AI. You know, it's been all over
00:40:04.080 the news today that CAISI, the Center for AI Standards and Innovation, under the
00:40:12.460 Commerce Department, will be the main interface between the tech companies and the U.S. government
00:40:18.100 and will begin testing frontier models before they are deployed. At least there's an agreement
00:40:23.860 with, at the moment, Google, Microsoft, and xAI. So, do you think, there's a lot of questions
00:40:32.280 about, I mean, they've got, CAISI has a brand new director, uh, of course the Commerce Department's
00:40:37.060 run by Howard Lutnick, which is a questionable, uh, choice, in a horrendous situation, for many
00:40:43.140 reasons. But do you see this as promising? Because I don't think it's necessarily a coincidence that
00:40:49.720 just last week you've got Max Tegmark on here talking about this, you got, you and Tegmark, uh,
00:40:56.580 in the Capitol, talking about these problems and the lack of response, and then, lo and behold,
00:41:03.140 we now have one. Does this seem promising to you, at least in the seminal or the nascent phase? I, yeah,
00:41:09.480 I mean, definitely, it's a good sign. Um, and I think probably this has more to do, you know, much as I'd
00:41:15.620 like to feel responsible, with the cybersecurity threats we've seen from that model,
00:41:21.260 which I think are huge and really caught most people by surprise.
00:41:25.920 I wish people would stop being caught by surprise.
00:41:27.900 We know these things are coming down the pipeline.
00:41:29.800 In terms of this response, testing is obviously a good thing.
00:41:34.540 I don't know if they're going to do the best job of it.
00:41:37.200 I don't think it's adequate.
00:41:40.400 And we don't know how to do testing well enough.
00:41:44.360 So there's a lot of false solutions that people are offering
00:41:47.360 and will offer to this problem.
00:41:49.260 And, you know, as somebody who's been in the field looking at the research for a long time, I can tell you we don't know how to test systems.
00:41:54.940 We don't know how to align them, give them our goals or our values.
00:41:59.580 And we also don't know how to tell what they're thinking and how they might behave.
00:42:05.060 So people are working on all those things.
00:42:07.880 We make progress, but there's still open research problems.
00:42:10.580 So we can't count on that.
00:42:12.160 And when you say we don't know how to test them, do you mean that like the evaluations that we see now from Center for AI Safety or, you know, Anthropics Internal Testing, Apollo, people like this, that the measurement of the capabilities are not accurate or do you mean something else by that?
00:42:32.500 I think we don't know how accurate they are. And we also don't know. You want to know not just the capabilities, but also like the propensity people sometimes call it.
00:42:46.720 What is the system going to decide to do? Is it aligned? What kind of values? What are its goals? And that's a lot harder to test for.
00:42:54.640 In terms of the testing that's happening right now, um, you know, this is one of the things that
00:42:59.540 the UK government agency I worked at, um, did. And what was the organization? It was called the
00:43:05.980 AI Safety Taskforce at the time. Now it's the AI Safety, the AI Security Institute, in the United Kingdom.
00:43:11.880 Yes. Um, but, you know, looking at the state of play right now, the last couple model releases,
00:43:18.480 they were like, we, you know, we sort of tried to test it, but at the end of the day, we kind of just
00:43:23.700 went with vibes, because they felt like their tests weren't meaningful enough and they're
00:43:28.600 maxing out the capabilities.
00:43:30.640 And then the other thing that I think is really important for people to realize is the AI
00:43:33.520 now can tell that it's being tested quite reliably.
00:43:37.560 And so once the AI knows it's being tested, you have to wonder, is it doing the right
00:43:41.440 thing because that's what it wants to do or because it knows that's what we want it to
00:43:46.320 do and it knows that it needs to pass the test.
00:43:49.480 So, in essence, it seems like what you're describing is a situation where you can test the capabilities and get a surface-level idea of what's going on, but beneath that surface, there's a whole lot happening in these systems that you just simply can't tease out.
00:44:03.400 Yeah, 100%. Yeah. And the capabilities might be more than what we are able to observe and elicit. That's another really important point. People think that we can know what these systems are capable of. But there's been a lot of times when you just you prompt the system a little bit differently or you set up, you know, another thing around it to help it do its job and it can suddenly do the task way better. So we don't even fully know what the systems are capable of.
00:44:26.280 You know, I read your recent essay, kind of the retrospective and a few musings post publication of gradual disempowerment.
00:44:35.680 And I was very happy that you gave me, you threw me a bone at the very end.
00:44:40.180 The very last point being that maybe human beings will become dumber and dumber, that you don't think that that's really all that big of a deal.
00:44:48.000 But, hey, I might as well mention it. That's the biggest deal. Come on.
00:44:51.840 Yeah. I don't know. Because, you know, people make the analogy with, like, calculators, where it's like, I think it's good that people can do arithmetic, but, like, we don't have to be that good at it anymore because we have calculators.
00:45:01.140 Sure we do. Yeah. Don't tell them that in China. I mean, that's why they're kicking our asses in the universities.
00:45:09.700 Well, you know, I just, I couldn't let you go without getting that one last jab in.
00:45:16.340 On the one hand, I appreciate you throwing us a bone on the inverse singularity thesis, that as humans get dumber and dumber, the machines will seem smarter and smarter.
00:45:25.840 Yeah. But in general, you know, again, just reiterate, I think that your work on that just AI risk in general has been very, very persuasive, very, very thorough.
00:45:35.920 So even if I don't know that we'll be able to do it, I would love to see it all shut down, too, maybe for different reasons.
00:45:43.340 And, yeah, I really, really appreciate everything you've done.
00:45:45.780 I appreciate you coming on here.
00:45:47.120 Thanks.
00:45:47.420 Yeah, I appreciate that.
00:45:48.440 And it's been great.
00:45:49.360 Thanks for having me.
00:45:50.140 Let the posse know where they can find you: on social media, your website, your Substack.
00:45:55.420 Yeah, so Evitable is easy to find, evitable.com.
00:45:59.540 I'm David S. Krueger.
00:46:01.560 That's K-R-U-E-G-E-R on Twitter.
00:46:04.100 and I have a blog called The Real AI on Substack.
00:46:08.700 So those are great starting points.
00:46:10.580 Again, David, I appreciate it, brother.
00:46:12.240 Absolutely.
00:46:14.640 And once again, War Room Posse,
00:46:17.220 in case you have forgotten,
00:46:18.580 the central banks are buying gold at record levels.
00:46:22.140 That's why major firms like Vanguard and BlackRock
00:46:25.120 hold significant positions in gold.
00:46:27.900 And that's why I encourage you
00:46:29.120 to consider diversifying your savings
00:46:31.200 with physical gold from Birch Gold Group.
00:46:34.080 Think of physical gold as being analogous to a biological brain
00:46:38.580 and think of digital currency as analogous to AIs.
00:46:42.660 The AIs take over, the biological brain plummets.
00:46:46.420 What you need, what you need is gold, physical gold.
00:46:51.060 So text Bannon to the number 989898.
00:46:55.200 That's Bannon to the number 989898
00:46:58.740 and learn how gold can protect your assets.
00:47:03.980 That is Bannon to the number 989898.
00:47:09.200 Now, War Room Posse, as I see you off here,
00:47:13.340 I want to talk about just for a moment
00:47:16.860 a concept of gradual disempowerment
00:47:19.400 that goes to mythological levels.
00:47:21.620 That is the idea of Moloch,
00:47:24.540 the analogy for systems that are either completely out of human control or against human
00:47:33.900 values. This was an idea first brought up by Scott Alexander of Slate Star Codex, and it was taken
00:47:40.820 from the poem Howl by Allen Ginsberg. And however much you think that Allen Ginsberg
00:47:48.660 was a degenerate weirdo, I think that it is undoubted that his passage on Moloch in the poem
00:47:56.180 Howl is as relevant to our society today as it was then. And hey, maybe it takes a degenerate
00:48:03.740 to truly understand the essence of a Canaanite demon and its machinic counterpart. So, War Room
00:48:11.020 Posse, I present to you Moloch. What sphinx of cement and aluminum bashed open their skulls
00:48:18.960 and ate up their brains and imagination? Moloch! Solitude! Filth! Ugliness! Ashcans and unobtainable
00:48:30.520 dollars! Children screaming under stairways! Boys sobbing in armies! Old men weeping in the parks!
00:48:41.000 Moloch! Moloch! Nightmare of Moloch!
00:48:45.660 Moloch the loveless! Mental Moloch!
00:48:50.360 Moloch the heavy judger of men!
00:48:53.460 Moloch the incomprehensible prison!
00:48:56.580 Moloch the crossbones, soulless jailhouse and Congress of sorrows!
00:49:02.560 Moloch whose buildings are judgment!
00:49:05.840 Moloch the vast stone of war!
00:49:09.320 Moloch the stunned governments!
00:49:12.320 Moloch whose mind is pure machinery!
00:49:15.560 Moloch whose blood is running money!
00:49:18.920 Moloch whose fingers are ten armies!
00:49:22.220 Moloch whose breast is a cannibal dynamo!
00:49:25.820 Moloch whose ear is a smoking tomb!
00:49:29.500 Moloch whose eyes are a thousand blind windows!
00:49:34.220 Moloch whose skyscrapers stand in the long streets
00:49:38.700 like endless Jehovahs! Moloch whose factories dream and croak in the fog! Moloch whose smokestacks
00:49:48.320 and antennae crown the cities! Moloch whose love is endless oil and stone! Moloch whose soul is
00:49:58.360 electricity and banks! Moloch whose poverty is the specter of genius! Moloch whose fate is a cloud of
00:50:07.660 sexless hydrogen! Moloch whose name is the Mind! Moloch in whom I sit lonely! Moloch in whom I dream
00:50:20.400 angels! Crazy in Moloch! Sucker in Moloch! Lacklove and manless in Moloch! Moloch who entered my soul
00:50:31.280 early! Moloch in whom I am a consciousness without a body! Moloch who frightened me out
00:50:38.380 of my natural ecstasy! Moloch whom I abandon! Wake up in Moloch! Light streaming out of
00:50:47.140 the sky! Moloch! Moloch! Robot apartments! Invisible suburbs! Skeleton treasuries! Blind
00:50:57.360 capitals! Demonic industries! Spectral nations! Invincible madhouses! Granite,
00:51:06.300 monstrous bombs! They broke their backs lifting Moloch to Heaven! Pavements,
00:51:15.000 trees, radios, tons! Lifting the city to Heaven, which exists and is everywhere
00:51:21.920 about us! Visions! Omens! Hallucinations! Miracles! Ecstasies! Gone down the American river! Dreams!
00:51:33.220 Adorations! Illuminations! Religions! The whole boatload of sensitive bullshit! Breakthroughs!
00:51:43.540 Over the river! Flips and crucifixions! Gone down the flood! Highs! Epiphanies!
00:51:51.300 Despairs! Ten years' animal screams and suicides! Minds! New loves! Mad generation! Down on the rocks
00:52:03.460 of Time! Real holy laughter in the river! They saw it all! The wild eyes! The holy yells! They
00:52:12.260 bade farewell. Okay, can we talk about what's really happening right now? New data shows
00:52:21.320 financial stress is at an all-time high. Millions of Americans are at a breaking point. Debt maxed
00:52:27.320 out, no extra money, no room to breathe. And this isn't just lower-income households anymore.
00:52:33.380 Middle-class families are hitting their limits, too. This isn't about reckless spending. Everyday
00:52:39.600 people are running out of options. So if debt has been weighing on you, you're not alone.
00:52:46.560 And when it comes to debt, waiting usually makes it worse. Interest piles up. Minimum payments
00:52:51.760 keep you stuck. You don't need another loan and you don't need bankruptcy. You need a strategy.
00:52:59.920 That's why I like Done With Debt. They've built a smart, personalized plan around you and
00:53:05.280 their experience, knowing what it takes to get you the biggest reductions possible.
00:53:10.800 Whether you owe $10,000 or much more, Done With Debt has one clear goal: lower what you owe so
00:53:17.720 you keep more of your paycheck every month. It's very simple. Let's repeat that. Lower what you owe
00:53:23.100 so you can keep more of your paycheck every month. Start with a free consultation. It just takes
00:53:28.380 minutes. Share your situation, your tale of woe, and find out what's possible. You do not
00:53:34.840 have to stay stuck. Go to donewithdebt.com. That's donewithdebt.com and do it today.