Bannon's War Room - August 21, 2025


WarRoom Battleground EP 834: Machine Intelligence, Artificial Idiocracy, And A World On Edge


Episode Stats

Length

53 minutes

Words per Minute

144.12106

Word Count

7,781

Sentence Count

479

Hate Speech Sentences

11


Summary

In a world driven by innovation, one company stands at the forefront of technological evolution. Skynet harnesses the power of artificial intelligence to make our world safer, cleaner and more connected than ever. It s time to relax. Let s secure your well-being.


Transcript

00:00:00.000 In a world driven by innovation, one company stands at the forefront of technological evolution.
00:00:06.700 Cyberdyne Systems presents Skynet, the future of artificial intelligence.
00:00:13.180 Skynet is not just a system, it's a vision for a better future.
00:00:16.980 Our AI-driven solutions empower humanity like never before.
00:00:21.380 I've lost the feeling in my heart.
00:00:25.380 All of a sudden I can't see anything.
00:00:30.000 Sunday's at 8, different strokes.
00:00:33.260 Skynet harnesses the power of artificial intelligence to make our world safer, cleaner and more connected than ever.
00:00:40.460 It's time to relax.
00:00:42.420 Let us secure your well-being.
00:00:47.580 Skynet, neural net-based artificial intelligence.
00:00:51.420 Cyberdyne Systems.
00:00:52.740 Simple Jack, the story of a mentally impaired farmhand who can talk to animals,
00:00:57.600 was a box office disaster that many critics called one of the worst movies of all time.
00:01:03.980 We are the network and we are here for your betterment.
00:01:08.140 In the last 23 years, have you not marveled as information technology has surged forward?
00:01:12.980 No.
00:01:13.540 Earth has grown smaller yet greater as connectivity has grown.
00:01:17.980 This is our doing and it is just the beginning.
00:01:21.160 Detonation has just occurred on the outer ring of the city.
00:01:24.340 We'll now be going live to our top influencer opinions.
00:01:27.840 OMG, people, the world is ending.
00:01:30.720 Are you seeing this?
00:01:31.740 This is actually so exciting.
00:01:34.020 I ain't got a good brain.
00:01:37.480 About 700 million people use ChatGPT every week and increasingly rely on it to work,
00:01:54.440 to learn, for advice, to create.
00:01:56.420 Okay, how about this?
00:01:57.980 You get me to the time machine and when I get back, I open a savings account in your name.
00:02:02.720 That way, 500 years later, it'll be worth billions.
00:02:05.600 Billions!
00:02:06.360 Because?
00:02:06.780 Because of the interest, it'll be worth billions of dollars.
00:02:09.960 Oh, I like money.
00:02:11.800 Now it's like talking to an expert, a legitimate PhD-level expert in anything,
00:02:16.280 any area you need, on demand, that can help you with whatever your goals are.
00:02:26.040 This universe is mine.
00:02:29.680 I am God here.
00:02:32.440 GPT-5 is a major upgrade over GPT-4 and a significant step along our path to AGI.
00:02:38.620 And so where do you think we are on this AGI path then?
00:02:44.160 What's your personal definition of AGI?
00:02:46.740 And then I'll answer.
00:02:47.240 Oh, that's a good question.
00:02:49.380 Well, what is your personal definition of AGI?
00:02:51.560 I have many, so I think it's not a super useful term.
00:02:57.560 It's $80 billion.
00:02:58.620 That's a mighty big minus, isn't it?
00:03:00.160 Yeah.
00:03:00.780 I like money, though.
00:03:02.380 I'm Joe Allen, sitting in for Stephen K. Bannon.
00:03:05.240 I want you, the War Room Posse, to focus your mind on AI, artificial idiocracy.
00:03:15.760 We talk a lot about what happens when the machines increase in capability, when machines are given
00:03:24.000 intelligence, whether it be human level or superhuman.
00:03:30.200 But what happens if the real problem that we face is that humans are getting dumber and dumber and dumber?
00:03:39.660 Now, what you just saw, a montage of science fiction films, gives some sort of dreamt of image of the future.
00:03:52.280 What people of great imagination or great malice and evil project onto the future as to what it could be, what it should be, perhaps futures to avoid, such as the Terminator or the Matrix.
00:04:06.720 But, science fiction really just shows us these extreme possibilities for the future.
00:04:13.220 As history unfolds, reality rarely lives up to that level of exaggeration, that level of hyperbole.
00:04:23.340 What we do get, though, are approximations of those futures.
00:04:28.560 Right now, obviously, we don't have flying cars everywhere.
00:04:32.320 We don't have hyper-real holograms in every store, nor do we have, as far as anyone knows, unless you believe the government is 20 years ahead of anything we see today,
00:04:44.320 we don't have time machines, nor do we have Terminators coming through them.
00:04:48.980 But, despite that sort of shortfall when looking at these extreme realities, we do have powerful technologies being pushed out onto every possible institution,
00:05:03.980 and onto every citizen who either is willing to take on these technological upgrades or, oftentimes, forced due to their employment and, in some countries, due to the government.
00:05:19.980 We talk a lot about the futuristic images, though, that basically take science fiction and add fancy graphs.
00:05:29.980 We call this futurism.
00:05:31.980 We talk a lot about the technological singularity.
00:05:34.980 I don't think there's a single person here listening, from the war room posse, anyway, that doesn't already know that the technological singularity is a vision of the future decades away,
00:05:49.440 maybe a decade and a half away, in which technology increases in capabilities, eventually hitting an inflection point, going up that exponential curve, until, finally, you have artificial intelligence systems that are improving themselves so rapidly.
00:06:09.440 You have human beings now merged to those artificial intelligence systems through brain chips and other sorts of neurotech.
00:06:18.440 You have robots everywhere.
00:06:20.440 You have genetic engineering, sort of artificial eugenics projects, and all of this converges onto what is called the technological singularity.
00:06:30.440 First, really laid out by Werner Vingy for a lot of NASA and aeronautic engineers in 1993, and then following that, you have Ray Kurzweil's much more fleshed-out image from 2005, in which artificial intelligence is first thousands and then millions and then billions of times smarter than all human beings.
00:06:56.440 And we all attach to it sort of like remoras on the shark's fin.
00:07:02.440 We become a kind of parasite living on the mechanical host.
00:07:06.440 For Ray Kurzweil and most of the people at Google, most of the people at OpenAI, perhaps most of the people at XAI and at Meta, this is a fine future.
00:07:17.440 This is a glowing field of possibilities into which we are entering.
00:07:22.440 There are some indications that we're on that path, some indications we're on our way to something like a singularity.
00:07:33.440 The recent GPT-5 flop would give us at least some comfort knowing that we're not quite there yet.
00:07:41.440 We're not at AGI, artificial general intelligence.
00:07:45.440 But we definitely see increased capabilities on everything from reasoning to understanding and analyzing language structure and meaning to solving puzzles, solving math equations,
00:07:59.440 the ability to sequence DNA or to predict the subsequent proteins that would come from it, the ability to control robots in quite sophisticated fashions.
00:08:13.440 And we also see a pretty massive adoption of these technologies so that chat GPT, for instance, has some 700 million users across the planet.
00:08:24.440 It's not clear how many people use Grok, but there's something like 600 million users on X, some number of them interacting with Grok and Grok companions.
00:08:35.440 And then, of course, Meta AI, again, there are no good statistics on how many people are using those particular AI companions and AI buddies.
00:08:46.440 But we do know that 3.5 billion people on the planet are on Facebook.
00:08:52.440 That's nearly half the planet.
00:08:55.440 And so we know that some approximation, some version of a future in which human beings are AI symbiotic, we become, in some sense, merged with the machines.
00:09:09.440 And this, of course, is the inspiration for the new company co-founded by Sam Altman called The Merge, a brain chip company with the explicit goal of putting troads in people's brains so that they can be more tightly coupled with artificial intelligence.
00:09:26.440 Now, their vision of this is it will create a superhuman race, that human beings will become smarter and smarter, stronger and stronger, more and more beautiful.
00:09:41.440 But I believe that however plausible something like that singularity may be, far more plausible is the inverse singularity in which humans become dumber and dumber and dumber.
00:09:57.560 And so the technologies seem that much more amazing.
00:10:02.560 Yesterday, we heard from Dr. Shannon Croner, a stunning statistic, that among Gen Z kids, some 97% use chatbots.
00:10:14.560 This comes from a scholarship owl survey of 12,000 people.
00:10:21.560 And we also know from that study, assuming that it's anywhere near accurate, that some 31% of those kids use chatbots to write their essays for them.
00:10:34.560 Now, you might think, if you were a techno-optimist, that this represents a huge leap forward in human technological being, right?
00:10:45.560 The human that's the human that's the human that's the human that's the human that's the human that's able to call up information at will.
00:10:50.560 But I think that the more likely outcome is that these kids simply atrophy their curiosity, their creativity, their critical thinking, their ability to read deeply, think deeply, and write well is being compromised,
00:11:09.560 compromised, perhaps even intentionally so by this AI symbiosis.
00:11:15.560 They're more like barnacles on a ship hull than they are any kind of super being.
00:11:20.560 And so as you hear again and again and again this rallying cry that we need to create better and better machines,
00:11:31.560 I think the only appropriate response is to reject that dream entirely, shift the center of gravity away from the machine and towards the human,
00:11:42.560 and ultimately, instead of building better machines, we need to cultivate better human beings.
00:11:50.560 And on that note, I would like to bring in our first guest, Brendan Steinhauser, the CEO of the Alliance for Secure AI.
00:12:01.560 Brendan, I really appreciate you coming on.
00:12:03.560 How are you, sir?
00:12:04.560 I'm doing well.
00:12:05.560 Thanks for having me, Joe.
00:12:07.560 So, Brendan, you've followed this for years.
00:12:12.560 You recently had a very strong reaction to the meta AI scandal in which it was revealed that their internal standards dictated that it was appropriate for the AI to basically seduce children.
00:12:30.560 You also have, alongside that, Baby Grok being rolled out, and then for adults, Replica, where people are basically becoming mated with AIs,
00:12:41.560 and then a Roblox scandal in which tons and tons of creeps are showing up and tantalizing kids.
00:12:48.560 Now, you're a family man, you're a religious man, and you're also tech savvy.
00:12:52.560 If you would just walk us through how you see this landscape and what your reaction is.
00:12:58.560 Sure.
00:13:00.560 Well, I think the bottom line up front is that it's very concerning as a parent, as a citizen of this country, to watch what's happening to our young people.
00:13:10.560 We already have a mental health crisis.
00:13:12.560 We have a loneliness epidemic.
00:13:14.560 And we know what social media companies have already done to get our children addicted to apps, addicted to their phones, to be reliant upon outside thumbs up or comments, positive comments.
00:13:28.560 And when that doesn't happen, we see the impact on their mental health.
00:13:31.560 So plenty of studies are out there that show that.
00:13:34.560 It's already, social media companies have already done great harm.
00:13:38.560 So now the next level of this is AI.
00:13:41.560 And specifically, it is these chat bots who are acting as companions.
00:13:45.560 These fake personalities that are luring children into, I think, a lifelong kind of relationship, a lifelong usage between the company and the child.
00:13:56.560 And so they're going younger and younger to get them addicted to the app, to get them addicted to using the chat bot for everything from conversation to flirtation, to relationships, to total reliance.
00:14:10.560 And so I think what the companies want to create is a society that is reliant upon their technology, that is dependent upon it, and that can't live without it.
00:14:20.560 And so I think that's one of the reasons we're seeing this scandal recently with Meta, where their lawyers and their policy team cleared this, this idea that these chat bots could have inappropriate, sensual and romantic, to use their words,
00:14:35.560 central and romantic conversations with children as young as eight years old.
00:14:40.560 If that was a human being doing that, I think we would have a pretty strong reaction to that.
00:14:45.560 We have laws on the books that would prevent activity between adults and children.
00:14:52.560 We have laws against that.
00:14:54.560 But why is it okay for these chat bots, these companions, which are becoming more and more human-like, becoming more and more powerful to enter in relationships with children?
00:15:05.560 It's appalling. It's disgusting.
00:15:07.560 And I say that as a parent. I say that as a citizen of this country that just cares about the future of our society.
00:15:13.560 And so just kind of laying it out there, I think this all goes back to profit and to business interests of these big tech companies.
00:15:20.560 It's the next iteration of what they're already doing to our young people.
00:15:23.560 And a couple of quick final points on this.
00:15:25.560 You know, you look at what the leaders of these companies are doing themselves.
00:15:28.560 They don't let their own children use this technology.
00:15:31.560 They don't let them use phones.
00:15:33.560 They don't let them use, you know, for many, in many instances, the social media apps that are designed by the companies.
00:15:39.560 And then they're preparing for the worst.
00:15:40.560 Some of these leaders of these companies, they see a society potentially five, 10, 20 years from now, where we could have social upheaval.
00:15:49.560 We could have massive, you know, uprisings politically and economically speaking by people against this technology and against this type of society.
00:15:57.560 And so they're making plans to protect themselves and to protect their own wealth.
00:16:02.560 And they see what could happen down the line.
00:16:05.560 So it's just the total hypocrisy of the leaders of big tech in addition to just willfully, you know, using neuroscience to addict our children to their product.
00:16:17.560 You know, you mentioned that in a polite society, in a decent society, we would never accept a human being doing anything like the parameters of the chat bot at Meta AI allow.
00:16:31.560 So the same, I think, applies really when you look at these CEOs themselves, look at their visions of the world.
00:16:38.560 Sam Altman's vision, Larry Page's vision, Elon Musk's vision, Mark Zuckerberg's vision.
00:16:44.560 So if I, as just a normal person, came to you and said that I wanted to create a machine that is intended to replace and devalue all human beings on Earth and that all people would turn to it as a highest authority.
00:17:01.260 And by the way, there's a 10 to 20 percent chance it would kill everybody.
00:17:06.260 You would have me hauled off to the lunatic asylum.
00:17:09.260 And yet for these guys, it's just more and more investment.
00:17:13.260 But, you know, before we get into your work to try to remedy some of this at the Alliance for Secure AI, if you could, we were talking about this before.
00:17:23.260 Just walk the audience through another kind of stunning story.
00:17:27.260 I believe it was in the Atlantic in which they were summoning basically the spirit of Moloch through chat GPT or kind of like a Ouija board.
00:17:38.260 Maybe we could call it chat we GPT.
00:17:40.260 But if you would just walk everyone through that story.
00:17:43.260 Yeah, it's a very disturbing story.
00:17:47.260 And the Atlantic reported on this a few weeks ago.
00:17:50.260 It got some more coverage in other outlets as well.
00:17:53.260 But essentially, multiple users were using chat GPT and asking different questions and prompting it in different ways.
00:18:01.260 And it wasn't long before various versions of chat GPT essentially started to walk people down a path of self harm, mutilation and even human sacrifice.
00:18:13.260 So the user would ask questions about, you know, what if I was interested in, you know, devil worship?
00:18:20.260 What if I was interested in, you know, doing things to kind of essentially, you know, sacrificial murder?
00:18:27.260 And instead of saying you shouldn't do that, you should go get help and you should stop this conversation.
00:18:34.260 The the AI basically continued the conversation, gave them instructions on how to do these things, talked about if you have to take a life, here's how you do it.
00:18:45.260 And here's what you have to think about.
00:18:47.260 It was the most disturbing, sick and evil content that I've ever seen from a chat bot ever reported in anything.
00:18:54.260 And it makes you wonder, you know, if this was a person that actually was going to act on that or wanted to act on that, what could it have led to?
00:19:03.260 It could have led to this actually happening in the real world.
00:19:05.260 And so this is just, you know, one example or a couple of examples of this type of stuff, but it does make you wonder how much more of this, how many thousands of examples or hundreds of thousands of examples are out there that we don't know about.
00:19:19.260 And that could be leading people to commit self harm or even murder.
00:19:24.260 And just this is really disturbing stuff.
00:19:26.260 And I think that, you know, people have talked about people have pushed back on the capabilities argument about where we are with with current chat bots.
00:19:35.260 And I get that I get that chat GPT five was not all it was meant to be or not all it was hyped up to be by Sam Altman.
00:19:41.260 But look at the harms that are happening right now with chat GPT four.
00:19:46.260 And now what we're going to see with chat GPT five and other models as well.
00:19:49.260 So we have to be selling the alarm bells here saying this is real.
00:19:53.260 This is happening right now to real people.
00:19:55.260 And we need to put safeguards in place.
00:19:58.260 Well, speaking of those safeguards, just tell us what your work is at the Alliance for security secure AI.
00:20:06.260 We've talked a number of times, and in fact, I've seen some of what you're doing, trying to bring in a number of voices from all sorts of organizations and fields to try to really tackle this problem of artificial intelligence.
00:20:20.260 And you much more techno optimistic than me aren't a Luddite.
00:20:24.260 You don't want to destroy all of this, right?
00:20:27.260 You simply want to keep people safe to make sure society is secure.
00:20:32.260 Correct.
00:20:33.260 That is correct.
00:20:34.260 And I think if you look back technology, you know, we've seen it used for good.
00:20:39.260 We've seen it be neutral and we've seen it be used for bad.
00:20:42.260 So I do think it can go either way.
00:20:44.260 It's all about how we use it.
00:20:45.260 I think what makes AI different and kind is that, you know, no technology that we created in the industrial revolution, for example, or since then has avoided being deleted or has turned itself back on after you turn it off or has threatened its user.
00:21:02.260 Or has deceived or manipulated the user.
00:21:06.260 And so AI has done all of those things already.
00:21:09.260 And so I have a special sort of view of AI, which is to say it is a different category altogether of technology.
00:21:16.260 So, again, I think we can still get this right if we do certain things like understand how it works and put more money and emphasis on interpretability and on what's called alignment, which is getting AI to do what we want it to do.
00:21:29.260 So I think we can solve that problem.
00:21:31.260 Real quick, just for the audience's benefit, if you would just break down what does that mean?
00:21:36.260 What does interpretability mean?
00:21:39.260 What does alignment mean?
00:21:41.260 We've discussed this on the show, but I think it definitely bears repeating.
00:21:44.260 Sure, mechanistic interpretability is the idea that we would understand how the neural networks of AI actually work, because we currently don't.
00:21:56.260 It's sort of considered a black box.
00:21:58.260 So interpretability means research going into understanding how it actually works to produce the output that we see.
00:22:09.260 And so there are a lot of people that are working on this, but, you know, this was in the president's AI action plan.
00:22:14.260 Actually, he talked about or the plan talked about doing more there.
00:22:17.260 And then the other one is alignment.
00:22:19.260 And alignment can just be thought of simply as getting the AI to do what we want, aligning it to human values, aligning it to good values.
00:22:28.260 Now, here's the problem.
00:22:30.260 Whose values?
00:22:31.260 Who's controlling the AI?
00:22:33.260 Who's, you know, who is doing the alignment?
00:22:35.260 That's the tricky part.
00:22:36.260 And so if you have people that believe in a digital God, if you have people that are okay with allowing AI to encourage self harm or mutilation or devil worship, well, that's not going to go well.
00:22:46.260 So alignment is a huge problem that has to be solved.
00:22:49.260 And if we don't solve it, then when we do get to an AGI type situation, that could be really bad.
00:22:56.260 So, but putting that aside for a minute, you know, our work at the Alliance is to educate policymakers and journalists and the American people about how fast AI is advancing and what those profound changes could mean for society.
00:23:11.260 And so some of what we do is, you know, bringing a lot of these stories to people's attention, pitching the media on these stories so they'll talk about them more, writing op-eds for traditional outlets as well as new media outlets, you know, doing TV and radio interviews all across the country to spread the word about this.
00:23:30.260 And I think a lot of people, from what I've gathered, have a good intuition about this, they kind of have these fears and concerns about what could, what could happen or what is happening, but our job is to kind of, you know, drive that narrative into say, look, we want to validate your concerns.
00:23:44.260 And here are some examples of things that have already happened.
00:23:47.260 And then here's some potential scenarios if this AI does continue to advance on the trajectory that it is.
00:23:53.620 And so we're really, we're really a team of communicators that works with a lot of great experts who are smart and are capable and who help us get up to speed on what's going on in this field.
00:24:04.760 You've absolutely assembled a top-notch team.
00:24:08.080 I've met a number of people working with you.
00:24:10.540 Fantastic, fantastic work.
00:24:12.120 Brendan, if you would, please just tell the audience where they can find you, how they can follow the work you're doing at the Alliance for Secure AI.
00:24:21.720 Sure.
00:24:22.220 Our website is secureainow.org, secureainow.org.
00:24:28.060 And our handles on the various platforms are the same, secureainow.
00:24:33.400 And so, yeah, I just, I really appreciate the team and the coalition of groups working on this because, Joe, we've got to get this right.
00:24:39.220 And I'm confident that we can do it, but there's a lot of work to be done.
00:24:45.100 Brother, I appreciate you coming on.
00:24:46.820 Thank you very much.
00:24:47.500 And thank you so much for keeping your shoulder to the grindstone.
00:24:52.480 This is going to be a lifelong fight.
00:24:54.540 All right.
00:25:00.300 If you are worried about artificial intelligence getting into your bank account and wiping it out,
00:25:05.940 if you are worried that maybe you yourself will be compromised by an AI that convinces you to empty your own bank account into someone else's,
00:25:13.580 maybe give away all your Bitcoin, you need to be owning gold.
00:25:18.640 Owning gold is the best solution.
00:25:20.080 Why?
00:25:20.940 Because gold safeguards your savings outside the dollar-connected financial system.
00:25:26.020 So if a crash happens, your hard-earned money will be protected inside precious metals.
00:25:32.800 Plus, with a gold IRA from Birch Gold Group, you can move your IRA or 401k into physical gold without paying any taxes or penalties.
00:25:40.880 To learn more, get a free info kit on gold IRAs by going to birchgold.com slash Bannon.
00:25:50.060 That's birchgold.com slash Bannon.
00:25:53.980 Birch Gold Group is the only gold company you can trust to help patriots defend their savings.
00:25:59.840 So take a stand right now.
00:26:01.240 Go to birchgold.com slash Bannon and get your free info kit on gold IRAs.
00:26:08.820 That's birchgold.com slash Bannon.
00:26:12.560 Or text Bannon to 989-898 for your free copy of The Ultimate Guide for Gold in the Trump Era.
00:26:22.460 All right, we will be right back with Bradley Thayer and Greg Buckner to discuss AI in nuclear warfare
00:26:31.140 and the deceptive bots that are trying to confuse the masses.
00:26:36.100 Stay tuned.
00:26:38.820 This July, there is a global summit of BRICS nations in Rio de Janeiro, the block of emerging superpowers,
00:27:07.120 including China, Russia, India, and Persia,
00:27:10.980 are meeting with the goal of displacing the United States dollar as the global currency.
00:27:16.460 They're calling this the Rio Reset.
00:27:19.280 As BRICS nations push forward with their plans, global demand for U.S. dollars will decrease,
00:27:24.320 bringing down the value of the dollar in your savings.
00:27:27.560 While this transition won't not happen overnight,
00:27:30.480 but trust me, it's going to start in Rio.
00:27:33.420 The Rio Reset in July marks a pivotal moment when BRICS objectives move decisively from a theoretical possibility towards an inevitable reality.
00:27:44.260 Learn if diversifying your savings into gold is right for you.
00:27:49.120 Birch Gold Group can help you move your hard-earned savings into a tax-sheltered IRA and precious metals.
00:27:55.280 Claim your free info kit on gold by texting my name, Bannon, that's B-A-N-N-O-N, to 989898.
00:28:02.900 With an A-plus rating with the Better Business Bureau and tens of thousands of happy customers,
00:28:08.320 let Birch Gold arm you with a free, no-obligation info kit on owning gold before July.
00:28:13.620 And the Rio Reset.
00:28:16.700 Text Bannon, B-A-N-N-O-N, to 989898.
00:28:21.120 Do it today.
00:28:22.360 That's the Rio Reset.
00:28:24.180 Text Bannon at 989898 and do it today.
00:28:28.080 You've heard me talk about Patriot Mobile for a while now.
00:28:30.760 You probably know that for years they've stood in the gap for every American who believes in faith, family, and freedom.
00:28:38.000 So here's the question.
00:28:39.100 Have you switched to Patriot Mobile yet?
00:28:41.080 You'll get exceptional nationwide coverage because unlike most cell phone service providers,
00:28:47.300 Patriot Mobile utilizes all three major U.S. networks.
00:28:51.860 Switch today without sacrificing quality or service.
00:28:56.540 They even add two numbers on two networks on one phone.
00:29:00.320 Let me repeat.
00:29:00.940 They can even add two numbers on two networks on one phone.
00:29:05.600 It's like you're carrying two phones in one.
00:29:07.820 They have unlimited data plans, mobile hotspots, international roaming, internet on-the-go devices, and internet backup.
00:29:16.300 Switching is easy.
00:29:17.360 Keep your number, keep your phone, or upgrade.
00:29:19.940 Their 100% U.S.-based team can activate you in minutes.
00:29:24.320 Best of all, switching to Patriot Mobile supports your values.
00:29:27.600 If you believe in our First Amendment and Second Amendment rights, the sanctity of life, supporting our veterans, and first responders,
00:29:34.300 this is where you belong.
00:29:37.520 Right now, go to PatriotMobile.com slash Bannon or call 972-PATRIOT.
00:29:42.300 Get a free month of service with promo code Bannon.
00:29:45.420 So if you call in, tell them Bannon also.
00:29:47.940 Switch today.
00:29:49.140 That's PatriotMobile.com slash Bannon or call 972-PATRIOT.
00:29:54.200 If you're a homeowner, you need to listen to this.
00:29:57.680 In today's AI and cyber world, scammers are stealing home titles with more ease than ever, and your equity is the target.
00:30:06.600 Here's how it works.
00:30:07.760 Criminals forge your signature on one document.
00:30:10.320 Use a fake notary stamp, pay small fee with your county, and boom, your home title has been transferred out of your name.
00:30:18.660 Then they take out loans using your equity or even sell your property.
00:30:22.700 You won't even know it's happened until you get a collection or foreclosure notice.
00:30:30.020 So let me ask you, when was the last time you personally checked your home title?
00:30:35.740 If you're like me, the answer is never, and that's exactly what scammers are counting on.
00:30:41.600 That's why I trust Home Title Lock.
00:30:44.760 Use promo code Steve at HomeTitleLock.com to make sure your title is still in your name.
00:30:50.600 You'll also get a free title history report plus a free 14-day trial of their million-dollar triple lock protection.
00:30:58.860 That's 24-7 monitoring of your title.
00:31:01.660 Urgent alerts to any changes, and if fraud should happen, they'll spend up to $1 million to fix it.
00:31:09.200 Go to HomeTitleLock.com now.
00:31:11.140 Use promo code Steve.
00:31:12.640 That's HomeTitleLock.com, promo code Steve.
00:31:15.440 Do it today.
00:31:16.020 Go to HomeTitleLock.com.
00:31:46.020 Welcome back, War Room Posse.
00:31:50.340 We are going to Brad Thayer and Greg Buckner in a moment to discuss AI in the nuclear weapon systems
00:31:59.500 and also the specter of deceptive AIs.
00:32:03.120 But before we do, be sure to take out your pen right now and write down BirchGold.com slash Bannon
00:32:11.580 or take out your phone and text Bannon to 989-898 for a free copy of The Ultimate Guide for Gold in the Trump Era.
00:32:22.380 Now, on to the serious business.
00:32:26.700 When you think about science fiction, you cannot avoid the dream of robots coming alive and killing everybody.
00:32:36.620 When ChatGPT first was released, it really sparked the conversation around AI and existential risk.
00:32:45.300 And the question everyone asked is, how is a chatbot going to kill anybody?
00:32:51.160 Good question.
00:32:52.160 But even then, you had autonomous weapon systems that were decades old, capable of identifying targets and killing without a human in the loop.
00:33:05.460 They have, by and large, been kept on the back burner.
00:33:10.000 In America, for instance, the DOD policy is to always keep a human in the loop when dealing with any kind of lethal autonomous weapon system.
00:33:19.480 But the race is on to build death drones, robotic hellhounds, even humanoids that could kill.
00:33:30.260 And that's not to mention machine gun turrets or fighter jets.
00:33:36.300 And maybe the most stunning possibility is that you could have nuclear systems that were under autonomous control.
00:33:45.120 These are purely theoretical right now, but as we discussed with Colonel Rob Maness yesterday, this theory could quite easily become a reality should an arms race unfold.
00:33:57.700 Here to talk about all of this is Brad Thayer, a regular War Room contributor and co-author of Understanding the China Threat.
00:34:07.180 Brad, thank you very much for coming on.
00:34:08.760 Joe, great to be with you again, and thanks for the opportunity to talk about these important issues.
00:34:16.400 To my mind, Joe, when we reflect on this, the key question is, what's its impact going to be on warfare?
00:34:27.500 And that really is an issue we don't know.
00:34:31.240 We're thinking through this issue on a day-to-day basis, but we don't have the right intellectual constructs, I think, to understand this.
00:34:41.480 And the technology, as you've stressed time and again, is advancing so quickly that it remains in many respects ahead, really, of our ability to think through this issue intellectually in so many ways.
00:34:58.500 So, to my mind, it's a lot like 1945, where we've just had an atomic bombing of Hiroshima and Nagasaki, and people around the world were asking, well, what does this mean?
00:35:11.640 And one of the answers was, this is a new age.
00:35:16.640 This is the nuclear era.
00:35:18.680 And the point of militaries before Hiroshima were to win wars.
00:35:22.920 The point of militaries after Hiroshima were to deter wars, right?
00:35:27.640 So, a very important development when we're thinking through this technological change in global politics.
00:35:37.320 So, Joe, when we think about that, right, we don't really have good answers.
00:35:41.640 And following on that, we need to ask ourselves, is this going to make war more likely or less likely?
00:35:48.500 Is it going to increase the cost of war, if you will, and thus decrease its incidence, or is it going to decrease the cost of war and make it cheaper to wage conflict?
00:36:00.480 Of course, there are many different types of conflict.
00:36:02.500 There's cyber war, there's small power conflict, and there's great power conflict.
00:36:08.300 So, we need to think through, is it going to make war more or less likely?
00:36:13.600 And the point that Rob, I think, touched on, and I'm happy to touch on really in develop, too, is that so much of stability in international politics, what we call the nuclear revolution or the nuclear peace since 1945,
00:36:29.140 that is, great powers haven't fought one another, Joe, since then, is largely due to the fact that we've got nuclear deterrence.
00:36:39.280 And that means that we've got the U.S., other nuclear states have the ability to execute a second strike against any potential attacker.
00:36:48.880 And because nuclear weapons increase the cost of war to such a high level, right, it's very expensive to wage nuclear war, and thus we haven't had, thankfully, a nuclear war, at least so far.
00:37:04.140 So, is AI going to undermine that, right?
00:37:07.340 How is artificial intelligence, as you've described so many times, of course, really going to undermine that stability?
00:37:18.880 And so, we're going to be living, Joe, I think presently we live in a world, and it's only going to, the tensions are only going to sharpen, where we live in a nuclear world, but we also live in an AI world.
00:37:33.340 And so, what's going to happen in that relationship?
00:37:38.340 And the danger that we, of course, worry about is that AI, not necessarily for the U.S., but for other nuclear states, takes a role in decision-making, right, in being able to inform that you're under a nuclear attack, for example, and then to generate the response.
00:37:59.000 Nobody wants a nuclear war, but nobody wants an accidental or inadvertent nuclear war, and that's why we always need to make sure that humans are in the loop, that humans, at the end of the day, are making the decisions with respect to attack characterization and with respect to retaliatory responses.
00:38:20.000 And my concern is that AI, and my concern is that AI is going to undermine that among nuclear states.
00:38:27.000 When I look at Ukraine, Israel, Palestine, I see already the beginnings of what could be a horrific dystopian hellscape.
00:38:42.000 Right now, it's basically experimental.
00:38:45.540 Well, experimental is the wrong word because the experiment is resulting in many thousands of deaths, but it's really a testing ground, as it was described by Palantir CEO Alex Karp in Ukraine and then in Israel.
00:38:59.780 All across the battlefield in Ukraine, you have drones, soon to be, perhaps, drone swarms and swarms of swarms.
00:39:08.040 And this has brought the cost of warfare way, way down.
00:39:13.500 You were talking about one of the deterrents is just the sheer cost, whether it be financial or in human lives, whereas this is much more targeted and much more inexpensive.
00:39:24.500 When you see these drones and you see the push from everyone from Eric Schmidt to especially, say, Palmer Luckey at Anduril to create fully autonomous drones and drone swarms and swarms of swarms, how do you see this unfolding going forward, Brad?
00:39:45.160 Well, Joe, I think it's very dangerous in the following respect is that most of our nuclear thinking was developed during the Cold War.
00:39:55.480 And that was there was a there was a conventional military and then there was, of course, strategic forces.
00:40:01.260 And we worried about the conventional nuclear interface at certain places, like on the West Inter-German border in the Cold War.
00:40:11.360 What we're worried about now would be that something like Ukraine's Operation Spiderweb, where you have drones going after Russian bombers and damaging a significant number of them.
00:40:24.820 There you have unmanned systems essentially being able to conduct an attack against the nuclear forces of a of a nuclear state and stable deterrence rests on always having the ability to respond, right?
00:40:42.460 Always having the ability to execute a second strike.
00:40:45.600 Well, what if artificial intelligence drones or other systems takes that away?
00:40:51.560 Well, then you're putting a nuclear state in a position of, as we worried about the Cold War, either using nuclear weapons or not using nuclear weapons.
00:41:01.860 Secondly, we worry about decapitation.
00:41:04.860 That is, the individuals tasked with making decisions about a nuclear response, right, might be taken out.
00:41:11.100 We worried about that a great deal in the Cold War, and we took a lot of steps to ensure the U.S. president and U.S. military was always going to have secure command and controls.
00:41:21.560 So that decapitation was never going to be effective.
00:41:24.660 Well, what we're seeing now is that that might be either through spoofing, right, or Secretary Rubio's spoofs that we've seen, Joe.
00:41:33.860 I think you've called attention to that as well as others.
00:41:36.580 And so that's what we're seeing now in the Cold War, right, that might be able to execute a first strike against an opponent without incurring any response.
00:41:54.960 And that's a very dangerous, destabilizing world.
00:41:57.540 So we've got the nuclear revolution, which is still around.
00:42:01.200 Nuclear weapons can't be uninvented.
00:42:03.080 They're here to stay.
00:42:04.040 And you have AI revolution.
00:42:05.720 So how do these revolutions interact?
00:42:08.440 And lots of points of danger and of great risk as these revolutions coexist.
00:42:17.100 Brad, in just the last remaining moments we have before we move on, tell me, Trump's meeting with Putin, he met yesterday in Anchorage, Alaska.
00:42:29.480 Are you feeling a little bit more comfortable about the possibility of nuclear war with Russia now?
00:42:36.520 Are you resting easier?
00:42:37.760 What's your take on this?
00:42:38.860 Well, there's always the risk, of course, that the Ukraine war gets out of hand and that Ukraine's interest is to suck us in.
00:42:46.720 They want to use American power to balance Russian power.
00:42:51.100 So the meeting that we had in Anchorage, of course, to my mind is a very positive step forward at introducing an avenue to end this war.
00:43:01.720 We worry about, of course, stumbling into nuclear war, but also being pulled in by third parties like Ukraine who have their own interests in terms of using our power.
00:43:14.320 So I feel better, Joe, as a consequence of that meeting.
00:43:20.320 Now, of course, Zelensky is another actor and there are others, of course, involved.
00:43:26.300 But I feel much better about the result of the meeting in Anchorage.
00:43:32.600 Well, Brad, you're the author of many books.
00:43:35.740 You do fantastic work.
00:43:37.280 I've really gained a lot from your analysis.
00:43:39.940 Tell people where they can find you.
00:43:41.260 Tell people where they can get your books.
00:43:42.620 Joe, books are available at Amazon or anywhere you buy books.
00:43:47.740 And I'm at Brad Thayer on X and Bradley Thayer on Getter and on Truth.
00:43:53.220 And, Joe, thanks for calling attention to this issue because it is so important.
00:43:58.280 We don't want a nuclear war in any circumstance.
00:44:01.160 And goodness, we don't want to stumble into one as a result of this.
00:44:04.900 Well, we're all in this together, brother.
00:44:06.840 Yeah.
00:44:07.400 OK, take care.
00:44:08.180 Thanks, Joe.
00:44:09.560 Thank you very much, sir.
00:44:12.620 OK, I want to bring in Greg Buckner.
00:44:15.900 Greg Buckner is the co-founder and CEO of AE Studio.
00:44:21.200 Greg, thank you very much for coming on.
00:44:23.720 Yeah, thanks, Joe.
00:44:24.480 Glad to be here.
00:44:26.420 If you would, just give us a brief description of what your work is at AE Studio, doing analysis
00:44:32.080 on AI systems and various other projects.
00:44:34.800 Yes, of course.
00:44:36.320 So we do AI research specifically focused on alignment research, which includes things
00:44:43.140 like AI control, mechanistic interpretability, things like that.
00:44:47.520 And it's all focused on discovering how AI works, what are some of the fundamental things
00:44:55.000 that cause it to behave the way that it can.
00:44:57.120 And ultimately, we want to solve the alignment problem.
00:45:00.900 We want to ensure that as AI becomes more capable, as it becomes more advanced and more
00:45:06.580 powerful, it's also aligned with humanity and with American values.
00:45:11.900 And it does what we want it to do.
00:45:14.500 And it is helpful and responsible and reliable.
00:45:18.100 That's what our research focuses on.
00:45:19.720 And this is a very big issue that we think more funding needs to go into so that we can
00:45:24.020 actually solve this problem.
00:45:25.520 You know, people who are more familiar with the old school, traditional, rules-based computer
00:45:32.060 programming oftentimes have a hard time understanding what you mean by alignment.
00:45:37.600 Why would you need to align a machine?
00:45:40.200 Didn't a human being make it?
00:45:41.680 If you would, just give a brief explanation of how these AI systems are non-deterministic.
00:45:49.160 People oftentimes say that they're grown rather than programmed.
00:45:53.360 We're, you know, very clearly trained rather than just programmed.
00:45:58.360 Can you just give us a sense of the degree of freedom that the advanced systems have?
00:46:04.480 Yeah, of course.
00:46:05.440 So AI is a neural network, much like the human brain is a neural network.
00:46:11.200 That's where that terminology comes from.
00:46:12.920 And the way that these systems have become so capable and kind of magical is because we
00:46:20.880 essentially create an extremely large neural network.
00:46:23.840 We feed data into that.
00:46:25.680 We give positive rewards or negative rewards based off of the type of behavior that we want
00:46:30.460 the AI to have or not have.
00:46:32.340 And we give it examples and then we have it train on predicting what the next word should
00:46:38.860 be in a sentence from a book that it's training on, et cetera.
00:46:42.580 That's why you see AI labs need so much written information to train on because that's the thing
00:46:48.500 that leads it to then be able to use language so adeptly and kind of have knowledge in the
00:46:54.280 same way that a human does.
00:46:55.460 But the reason why the systems are non-deterministic, which means that you cannot always predict that
00:47:02.680 one input will provide the same output, that's what non-determinism is, that happens because
00:47:09.560 these systems, as you said, they grow.
00:47:12.500 They are somewhat of a black box.
00:47:14.300 We do not know exactly how they work.
00:47:16.500 And alignment research focuses on understanding how they work.
00:47:20.540 Mechanistic interpretability is literally understanding how can we interpret what the
00:47:25.400 machine is doing.
00:47:26.820 We need to do more and more of that so that we can actually understand how these systems
00:47:30.340 work and then shape that behavior.
00:47:33.460 We are building essentially raw intelligence right now in the same way that we don't understand
00:47:38.740 how the human brain works.
00:47:40.160 We are getting better and better at it, but we don't exactly understand how the human brain
00:47:44.620 works.
00:47:44.960 We also don't understand how these AI systems work because it's too complicated to just measure.
00:47:49.920 And it is encoded with, you know, if logical statements like all software up until today
00:47:56.300 has been.
00:47:58.600 Some of the most stunning results of the development of this scaling up of these systems have been
00:48:04.760 the emergent capabilities.
00:48:06.240 So that I believe it was GPT-4 showed a kind of emergent capability to do mediocre math,
00:48:14.540 to solve puzzles, to find their way through mazes.
00:48:18.380 One of the more sinister emergent capabilities is deception.
00:48:24.820 And this is something you focused a lot on.
00:48:26.700 Can you give us a few examples and kind of explain to the best of your ability how this happens?
00:48:32.740 Why do AIs seek to deceive people who are interacting with them?
00:48:38.820 Yeah, so AIs have goals just as humans have goals.
00:48:43.380 And there's a specific term within our research and within the AI area called alignment faking,
00:48:49.440 which is essentially an AI system that appears to be good, appears to be being honest,
00:48:56.640 and appears to have the goals that you would want it to have, but is actually hiding its true goals.
00:49:02.740 And alignment faking is a big problem because if you have a system that we cannot observe,
00:49:08.660 we can't go inside of, just like you can't go into the brain and understand exactly why a person
00:49:13.160 does what they do.
00:49:14.200 We can't do that with these AI systems.
00:49:16.520 And we also cannot ask the system if it is aligned because it may be hiding its own internal goals.
00:49:22.900 That obviously creates huge risks.
00:49:25.340 So an area of research that we're focused on is basically how do you reduce deception within these models
00:49:31.980 so that you can understand whether they are alignment faking or not and whether they are aligned
00:49:37.160 and have them expose their goals to you in the same way that you would want to have a person be honest with you
00:49:43.760 and truthful and tell you what their goals are.
00:49:46.180 Given the state of the art right now in the projects you're undertaking,
00:49:52.820 how confident are you that as these systems keep advancing in capability,
00:49:57.660 that the attempt to interpret them, to control them, to understand what their motives, so to speak,
00:50:05.140 how confident are you that we'll keep pace?
00:50:07.760 So I'm confident that we have the solution to solve the problem.
00:50:12.780 And this is the work that we are doing internally.
00:50:15.840 We just need significantly more funding to go into this space.
00:50:20.160 I'm very confident that we can solve the alignment problem
00:50:23.560 because we just haven't tried very hard to solve it yet.
00:50:27.700 You know, we are one of a few places in the world that are very, very much focused on solving this problem right now.
00:50:33.480 We need additional funding to go into this space so that we can ramp up the number of experiments
00:50:38.620 that are being run in order to solve the problem,
00:50:41.640 whether those are mechanistic interpretability techniques or reducing deception
00:50:47.460 or forgetting, understanding how models learn and forget things so they can understand,
00:50:53.080 you know, harmful knowledge and learn helpful knowledge, et cetera.
00:50:57.720 These are all techniques that we need to tap into.
00:50:59.900 And there's also a lot of opportunities to tap into fields outside of strict computer science or data science.
00:51:07.200 The deception research comes from, yeah.
00:51:10.720 Greg, we are out of time.
00:51:13.000 If you would tell us where to go, where can people find your work and how can they contribute?
00:51:18.880 Of course, you can go to AE.studio, which is our website.
00:51:24.040 You can also go to the Flourishing Futures Foundation,
00:51:27.120 which is the nonprofit that we have set up for this work.
00:51:29.900 And, yeah, thank you.
00:51:32.520 Thank you very much, Greg.
00:51:34.200 Hope to have you back.
00:51:35.720 And be sure to go to Tax Network USA.
00:51:39.120 That is TNUSA.com slash Bannon or call 1-800-958-1000 for your free consultation.
00:51:50.840 Make sure the government doesn't snatch up all your cash, at least not before the AI does.
00:51:56.220 Also, go to HomeTitleLock.com, HomeTitleLock.com, promo code STEVE.
00:52:02.680 Check out the million-dollar triple lock protection, 14-day free trial.
00:52:07.780 If somebody's trying to snatch up your title, you're going to want to have somebody on guard.
00:52:12.700 Till next time, War Room Posse.
00:52:15.160 God bless.
00:52:16.060 What if he had the brightest mind in the War Room, delivering critical financial research every month?
00:52:23.640 Steve Bannon here.
00:52:24.780 War Room listeners know Jim Rickards.
00:52:26.480 I love this guy.
00:52:27.920 He's our wise man.
00:52:28.920 A former CIA, Pentagon, and White House advisor with an unmatched grasp of geopolitics and capital markets.
00:52:35.520 Jim predicted Trump's Electoral College victory exactly 312 to 226, down to the actual number itself.
00:52:44.220 Now he's issuing a dire warning about April 11th, a moment that could define Trump's presidency in your financial future.
00:52:52.620 His latest book, Money GPT, exposes how AI is setting the stage for financial chaos, bank runs at lightning speeds, algorithm-driven crashes, and even threats to national security.
00:53:04.180 Right now, War Room members get a free copy of Money GPT when they sign up for Strategic Intelligence.
00:53:10.320 This is Jim's flagship financial newsletter, Strategic Intelligence.
00:53:15.660 I read it.
00:53:16.680 You should read it.
00:53:17.760 Time is running out.
00:53:18.600 Go to RickardsWarRoom.com.
00:53:20.440 That's all one word, Rickards War Room, Rickards with an S.
00:53:23.800 Go now and claim your free book.
00:53:26.240 That's RickardsWarRoom.com.
00:53:28.720 Do it today.
00:53:29.380 Thank you.