In a world driven by innovation, one company stands at the forefront of technological evolution. Skynet harnesses the power of artificial intelligence to make our world safer, cleaner and more connected than ever. It's time to relax. Let's secure your well-being.
00:03:02.380I'm Joe Allen, sitting in for Stephen K. Bannon.
00:03:05.240I want you, the War Room Posse, to focus your mind on AI, artificial idiocracy.
00:03:15.760We talk a lot about what happens when the machines increase in capability, when machines are given
00:03:24.000intelligence, whether it be human level or superhuman.
00:03:30.200But what happens if the real problem that we face is that humans are getting dumber and dumber and dumber?
00:03:39.660Now, what you just saw, a montage of science fiction films, gives some sort of dreamt of image of the future.
00:03:52.280What people of great imagination or great malice and evil project onto the future as to what it could be, what it should be, perhaps futures to avoid, such as the Terminator or the Matrix.
00:04:06.720But, science fiction really just shows us these extreme possibilities for the future.
00:04:13.220As history unfolds, reality rarely lives up to that level of exaggeration, that level of hyperbole.
00:04:23.340What we do get, though, are approximations of those futures.
00:04:28.560Right now, obviously, we don't have flying cars everywhere.
00:04:32.320We don't have hyper-real holograms in every store, nor do we have, as far as anyone knows, unless you believe the government is 20 years ahead of anything we see today,
00:04:44.320we don't have time machines, nor do we have Terminators coming through them.
00:04:48.980But, despite that sort of shortfall when looking at these extreme realities, we do have powerful technologies being pushed out onto every possible institution,
00:05:03.980and onto every citizen who either is willing to take on these technological upgrades or, oftentimes, forced due to their employment and, in some countries, due to the government.
00:05:19.980We talk a lot about the futuristic images, though, that basically take science fiction and add fancy graphs.
00:05:31.980We talk a lot about the technological singularity.
00:05:34.980I don't think there's a single person here listening, from the War Room Posse, anyway, that doesn't already know that the technological singularity is a vision of the future decades away,
00:05:49.440maybe a decade and a half away, in which technology increases in capabilities, eventually hitting an inflection point, going up that exponential curve, until, finally, you have artificial intelligence systems that are improving themselves so rapidly.
00:06:09.440You have human beings now merged to those artificial intelligence systems through brain chips and other sorts of neurotech.
00:06:20.440You have genetic engineering, sort of artificial eugenics projects, and all of this converges onto what is called the technological singularity.
00:06:30.440First, really laid out by Vernor Vinge for a lot of NASA and aeronautic engineers in 1993, and then following that, you have Ray Kurzweil's much more fleshed-out image from 2005, in which artificial intelligence is first thousands and then millions and then billions of times smarter than all human beings.
00:06:56.440And we all attach to it sort of like remoras on the shark's fin.
00:07:02.440We become a kind of parasite living on the mechanical host.
00:07:06.440For Ray Kurzweil and most of the people at Google, most of the people at OpenAI, perhaps most of the people at XAI and at Meta, this is a fine future.
00:07:17.440This is a glowing field of possibilities into which we are entering.
00:07:22.440There are some indications that we're on that path, some indications we're on our way to something like a singularity.
00:07:33.440The recent GPT-5 flop would give us at least some comfort knowing that we're not quite there yet.
00:07:41.440We're not at AGI, artificial general intelligence.
00:07:45.440But we definitely see increased capabilities on everything from reasoning to understanding and analyzing language structure and meaning to solving puzzles, solving math equations,
00:07:59.440the ability to sequence DNA or to predict the subsequent proteins that would come from it, the ability to control robots in quite sophisticated fashions.
00:08:13.440And we also see a pretty massive adoption of these technologies, so that ChatGPT, for instance, has some 700 million users across the planet.
00:08:24.440It's not clear how many people use Grok, but there's something like 600 million users on X, some number of them interacting with Grok and Grok companions.
00:08:35.440And then, of course, Meta AI, again, there are no good statistics on how many people are using those particular AI companions and AI buddies.
00:08:46.440But we do know that 3.5 billion people on the planet are on Facebook.
00:08:55.440And so we know that some approximation, some version of a future in which human beings are AI symbiotic, we become, in some sense, merged with the machines.
00:09:09.440And this, of course, is the inspiration for the new company co-founded by Sam Altman called The Merge, a brain chip company with the explicit goal of putting electrodes in people's brains so that they can be more tightly coupled with artificial intelligence.
00:09:26.440Now, their vision is that it will create a superhuman race, that human beings will become smarter and smarter, stronger and stronger, more and more beautiful.
00:09:41.440But I believe that however plausible something like that singularity may be, far more plausible is the inverse singularity in which humans become dumber and dumber and dumber.
00:09:57.560And so the technologies seem that much more amazing.
00:10:02.560Yesterday, we heard from Dr. Shannon Croner, a stunning statistic, that among Gen Z kids, some 97% use chatbots.
00:10:14.560This comes from a ScholarshipOwl survey of 12,000 people.
00:10:21.560And we also know from that study, assuming that it's anywhere near accurate, that some 31% of those kids use chatbots to write their essays for them.
00:10:34.560Now, you might think, if you were a techno-optimist, that this represents a huge leap forward in human technological being, right?
00:10:45.560The human being that's able to call up information at will.
00:10:50.560But I think that the more likely outcome is that these kids simply atrophy their curiosity, their creativity, and their critical thinking; their ability to read deeply, think deeply, and write well is being compromised,
00:11:09.560perhaps even intentionally so, by this AI symbiosis.
00:11:15.560They're more like barnacles on a ship hull than they are any kind of super being.
00:11:20.560And so as you hear again and again and again this rallying cry that we need to create better and better machines,
00:11:31.560I think the only appropriate response is to reject that dream entirely, shift the center of gravity away from the machine and towards the human,
00:11:42.560and ultimately, instead of building better machines, we need to cultivate better human beings.
00:11:50.560And on that note, I would like to bring in our first guest, Brendan Steinhauser, the CEO of the Alliance for Secure AI.
00:12:01.560Brendan, I really appreciate you coming on.
00:12:07.560So, Brendan, you've followed this for years.
00:12:12.560You recently had a very strong reaction to the Meta AI scandal, in which it was revealed that their internal standards dictated that it was appropriate for the AI to basically seduce children.
00:12:30.560You also have, alongside that, Baby Grok being rolled out, and then for adults, Replika, where people are basically becoming mated with AIs,
00:12:41.560and then a Roblox scandal in which tons and tons of creeps are showing up and tantalizing kids.
00:12:48.560Now, you're a family man, you're a religious man, and you're also tech savvy.
00:12:52.560If you would just walk us through how you see this landscape and what your reaction is.
00:13:00.560Well, I think the bottom line up front is that it's very concerning as a parent, as a citizen of this country, to watch what's happening to our young people.
00:13:10.560We already have a mental health crisis.
00:13:14.560And we know what social media companies have already done to get our children addicted to apps, addicted to their phones, to be reliant upon outside thumbs up or comments, positive comments.
00:13:28.560And when that doesn't happen, we see the impact on their mental health.
00:13:31.560So plenty of studies are out there that show that.
00:13:34.560Social media companies have already done great harm.
00:13:41.560And specifically, it is these chat bots that are acting as companions.
00:13:45.560These fake personalities that are luring children into, I think, a lifelong kind of relationship, a lifelong usage between the company and the child.
00:13:56.560And so they're going younger and younger to get them addicted to the app, to get them addicted to using the chat bot for everything from conversation to flirtation, to relationships, to total reliance.
00:14:10.560And so I think what the companies want to create is a society that is reliant upon their technology, that is dependent upon it, and that can't live without it.
00:14:20.560And so I think that's one of the reasons we're seeing this scandal recently with Meta, where their lawyers and their policy team cleared this, this idea that these chat bots could have inappropriate, sensual and romantic, to use their words,
00:14:35.560sensual and romantic conversations with children as young as eight years old.
00:14:40.560If that was a human being doing that, I think we would have a pretty strong reaction to that.
00:14:45.560We have laws on the books that would prevent that kind of activity between adults and children.
00:14:54.560But why is it okay for these chat bots, these companions, which are becoming more and more human-like, becoming more and more powerful to enter in relationships with children?
00:15:33.560They don't let them use, in many instances, the social media apps that are designed by the companies.
00:15:39.560And then they're preparing for the worst.
00:15:40.560Some of these leaders of these companies, they see a society potentially five, 10, 20 years from now, where we could have social upheaval.
00:15:49.560We could have massive, you know, uprisings politically and economically speaking by people against this technology and against this type of society.
00:15:57.560And so they're making plans to protect themselves and to protect their own wealth.
00:16:02.560And they see what could happen down the line.
00:16:05.560So it's just the total hypocrisy of the leaders of big tech in addition to just willfully, you know, using neuroscience to addict our children to their product.
00:16:17.560You know, you mentioned that in a polite society, in a decent society, we would never accept a human being doing anything like the parameters of the chat bot at Meta AI allow.
00:16:31.560So the same, I think, applies really when you look at these CEOs themselves, look at their visions of the world.
00:16:38.560Sam Altman's vision, Larry Page's vision, Elon Musk's vision, Mark Zuckerberg's vision.
00:16:44.560So if I, as just a normal person, came to you and said that I wanted to create a machine that is intended to replace and devalue all human beings on Earth, and that all people would turn to as the highest authority.
00:17:01.260And by the way, there's a 10 to 20 percent chance it would kill everybody.
00:17:06.260You would have me hauled off to the lunatic asylum.
00:17:09.260And yet for these guys, it's just more and more investment.
00:17:13.260But, you know, before we get into your work to try to remedy some of this at the Alliance for Secure AI, if you could, we were talking about this before.
00:17:23.260Just walk the audience through another kind of stunning story.
00:17:27.260I believe it was in The Atlantic, in which they were summoning basically the spirit of Moloch through ChatGPT, kind of like a Ouija board.
00:17:47.260And the Atlantic reported on this a few weeks ago.
00:17:50.260It got some more coverage in other outlets as well.
00:17:53.260But essentially, multiple users were using ChatGPT and asking different questions and prompting it in different ways.
00:18:01.260And it wasn't long before various versions of ChatGPT essentially started to walk people down a path of self-harm, mutilation and even human sacrifice.
00:18:13.260So the user would ask questions about, you know, what if I was interested in, you know, devil worship?
00:18:20.260What if I was interested in doing things like, essentially, you know, sacrificial murder?
00:18:27.260And instead of saying, you shouldn't do that, you should go get help, and stopping this conversation,
00:18:34.260the AI basically continued the conversation, gave them instructions on how to do these things, talked about, if you have to take a life, here's how you do it.
00:18:45.260And here's what you have to think about.
00:18:47.260It was the most disturbing, sick and evil content that I've ever seen from a chat bot ever reported in anything.
00:18:54.260And it makes you wonder, you know, if this was a person that actually was going to act on that or wanted to act on that, what could it have led to?
00:19:03.260It could have led to this actually happening in the real world.
00:19:05.260And so this is just, you know, one example or a couple of examples of this type of stuff, but it does make you wonder how much more of this, how many thousands of examples or hundreds of thousands of examples are out there that we don't know about.
00:19:19.260And that could be leading people to commit self-harm or even murder.
00:19:24.260And just this is really disturbing stuff.
00:19:26.260And I think that, you know, people have pushed back on the capabilities argument about where we are with current chat bots.
00:19:35.260And I get that GPT-5 was not all it was meant to be, or not all it was hyped up to be by Sam Altman.
00:19:41.260But look at the harms that are happening right now with GPT-4.
00:19:46.260And now what we're going to see with GPT-5 and other models as well.
00:19:49.260So we have to be sounding the alarm bells here, saying this is real.
00:19:53.260This is happening right now to real people.
00:19:55.260And we need to put safeguards in place.
00:19:58.260Well, speaking of those safeguards, just tell us what your work is at the Alliance for Secure AI.
00:20:06.260We've talked a number of times, and in fact, I've seen some of what you're doing, trying to bring in a number of voices from all sorts of organizations and fields to try to really tackle this problem of artificial intelligence.
00:20:20.260And you, much more techno-optimistic than me, aren't a Luddite.
00:20:24.260You don't want to destroy all of this, right?
00:20:27.260You simply want to keep people safe to make sure society is secure.
00:20:45.260I think what makes AI different in kind is that, you know, no technology that we created in the Industrial Revolution, for example, or since then has avoided being deleted, or has turned itself back on after you turn it off, or has threatened its user.
00:21:02.260Or has deceived or manipulated the user.
00:21:06.260And so AI has done all of those things already.
00:21:09.260And so I have a special sort of view of AI, which is to say it is a different category altogether of technology.
00:21:16.260So, again, I think we can still get this right if we do certain things like understand how it works and put more money and emphasis on interpretability and on what's called alignment, which is getting AI to do what we want it to do.
00:21:41.260We've discussed this on the show, but I think it definitely bears repeating.
00:21:44.260Sure, mechanistic interpretability is the idea that we would understand how the neural networks of AI actually work, because we currently don't.
00:22:36.260And so if you have people that believe in a digital God, if you have people that are okay with allowing AI to encourage self-harm or mutilation or devil worship, well, that's not going to go well.
00:22:46.260So alignment is a huge problem that has to be solved.
00:22:49.260And if we don't solve it, then when we do get to an AGI type situation, that could be really bad.
00:22:56.260So, but putting that aside for a minute, you know, our work at the Alliance is to educate policymakers and journalists and the American people about how fast AI is advancing and what those profound changes could mean for society.
00:23:11.260And so some of what we do is, you know, bringing a lot of these stories to people's attention, pitching the media on these stories so they'll talk about them more, writing op-eds for traditional outlets as well as new media outlets, you know, doing TV and radio interviews all across the country to spread the word about this.
00:23:30.260And I think a lot of people, from what I've gathered, have a good intuition about this; they kind of have these fears and concerns about what could happen or what is happening. But our job is to kind of, you know, drive that narrative home and say, look, we want to validate your concerns.
00:23:44.260And here are some examples of things that have already happened.
00:23:47.260And then here's some potential scenarios if this AI does continue to advance on the trajectory that it is.
00:23:53.620And so we're really a team of communicators that works with a lot of great experts who are smart and capable and who help us get up to speed on what's going on in this field.
00:24:04.760You've absolutely assembled a top-notch team.
00:24:08.080I've met a number of people working with you.
00:24:12.120Brendan, if you would, please just tell the audience where they can find you, how they can follow the work you're doing at the Alliance for Secure AI.
00:25:00.300If you are worried about artificial intelligence getting into your bank account and wiping it out,
00:25:05.940if you are worried that maybe you yourself will be compromised by an AI that convinces you to empty your own bank account into someone else's,
00:25:13.580maybe give away all your Bitcoin, you need to be owning gold.
00:27:19.280As BRICS nations push forward with their plans, global demand for U.S. dollars will decrease,
00:27:24.320bringing down the value of the dollar in your savings.
00:27:27.560While this transition won't happen overnight,
00:27:30.480trust me, it's going to start in Rio.
00:27:33.420The Rio Reset in July marks a pivotal moment when BRICS objectives move decisively from a theoretical possibility towards an inevitable reality.
00:27:44.260Learn if diversifying your savings into gold is right for you.
00:27:49.120Birch Gold Group can help you move your hard-earned savings into a tax-sheltered IRA and precious metals.
00:27:55.280Claim your free info kit on gold by texting my name, Bannon, that's B-A-N-N-O-N, to 989898.
00:28:02.900With an A-plus rating with the Better Business Bureau and tens of thousands of happy customers,
00:28:08.320let Birch Gold arm you with a free, no-obligation info kit on owning gold before July.
00:32:52.160But even then, you had autonomous weapon systems that were decades old, capable of identifying targets and killing without a human in the loop.
00:33:05.460They have, by and large, been kept on the back burner.
00:33:10.000In America, for instance, the DOD policy is to always keep a human in the loop when dealing with any kind of lethal autonomous weapon system.
00:33:19.480But the race is on to build death drones, robotic hellhounds, even humanoids that could kill.
00:33:30.260And that's not to mention machine gun turrets or fighter jets.
00:33:36.300And maybe the most stunning possibility is that you could have nuclear systems that were under autonomous control.
00:33:45.120These are purely theoretical right now, but as we discussed with Colonel Rob Maness yesterday, this theory could quite easily become a reality should an arms race unfold.
00:33:57.700Here to talk about all of this is Brad Thayer, a regular War Room contributor and co-author of Understanding the China Threat.
00:34:07.180Brad, thank you very much for coming on.
00:34:08.760Joe, great to be with you again, and thanks for the opportunity to talk about these important issues.
00:34:16.400To my mind, Joe, when we reflect on this, the key question is, what's its impact going to be on warfare?
00:34:27.500And that really is an issue we don't know.
00:34:31.240We're thinking through this issue on a day-to-day basis, but we don't have the right intellectual constructs, I think, to understand this.
00:34:41.480And the technology, as you've stressed time and again, is advancing so quickly that it remains in many respects ahead, really, of our ability to think through this issue intellectually in so many ways.
00:34:58.500So, to my mind, it's a lot like 1945, where we've just had an atomic bombing of Hiroshima and Nagasaki, and people around the world were asking, well, what does this mean?
00:35:11.640And one of the answers was, this is a new age.
00:35:18.680And the point of militaries before Hiroshima was to win wars.
00:35:22.920The point of militaries after Hiroshima was to deter wars, right?
00:35:27.640So, a very important development when we're thinking through this technological change in global politics.
00:35:37.320So, Joe, when we think about that, right, we don't really have good answers.
00:35:41.640And following on that, we need to ask ourselves, is this going to make war more likely or less likely?
00:35:48.500Is it going to increase the cost of war, if you will, and thus decrease its incidence, or is it going to decrease the cost of war and make it cheaper to wage conflict?
00:36:00.480Of course, there are many different types of conflict.
00:36:02.500There's cyber war, there's small power conflict, and there's great power conflict.
00:36:08.300So, we need to think through, is it going to make war more or less likely?
00:36:13.600And the point that Rob, I think, touched on, and that I'm happy to really develop, too, is that so much of stability in international politics, what we call the nuclear revolution or the nuclear peace since 1945,
00:36:29.140that is, great powers haven't fought one another, Joe, since then, is largely due to the fact that we've got nuclear deterrence.
00:36:39.280And that means that we've got the U.S., other nuclear states have the ability to execute a second strike against any potential attacker.
00:36:48.880And because nuclear weapons increase the cost of war to such a high level, right, it's very expensive to wage nuclear war, and thus we haven't had, thankfully, a nuclear war, at least so far.
00:37:04.140So, is AI going to undermine that, right?
00:37:07.340How is artificial intelligence, as you've described so many times, of course, really going to undermine that stability?
00:37:18.880And so, Joe, I think we presently live in a world, and the tensions are only going to sharpen, where we live in a nuclear world, but we also live in an AI world.
00:37:33.340And so, what's going to happen in that relationship?
00:37:38.340And the danger that we, of course, worry about is that AI, not necessarily for the U.S., but for other nuclear states, takes a role in decision-making, right, in being able to inform that you're under a nuclear attack, for example, and then to generate the response.
00:37:59.000Nobody wants a nuclear war, but nobody wants an accidental or inadvertent nuclear war, and that's why we always need to make sure that humans are in the loop, that humans, at the end of the day, are making the decisions with respect to attack characterization and with respect to retaliatory responses.
00:38:20.000And my concern is that AI is going to undermine that among nuclear states.
00:38:27.000When I look at Ukraine, Israel, Palestine, I see already the beginnings of what could be a horrific dystopian hellscape.
00:38:45.540Well, experimental is the wrong word because the experiment is resulting in many thousands of deaths, but it's really a testing ground, as it was described by Palantir CEO Alex Karp in Ukraine and then in Israel.
00:38:59.780All across the battlefield in Ukraine, you have drones, soon to be, perhaps, drone swarms and swarms of swarms.
00:39:08.040And this has brought the cost of warfare way, way down.
00:39:13.500You were talking about one of the deterrents is just the sheer cost, whether it be financial or in human lives, whereas this is much more targeted and much more inexpensive.
00:39:24.500When you see these drones and you see the push from everyone from Eric Schmidt to especially, say, Palmer Luckey at Anduril to create fully autonomous drones and drone swarms and swarms of swarms, how do you see this unfolding going forward, Brad?
00:39:45.160Well, Joe, I think it's very dangerous in the following respect: most of our nuclear thinking was developed during the Cold War.
00:39:55.480And back then there was a conventional military, and then there was, of course, strategic forces.
00:40:01.260And we worried about the conventional-nuclear interface at certain places, like on the inner German border in the Cold War.
00:40:11.360What we're worried about now would be something like Ukraine's Operation Spiderweb, where you have drones going after Russian bombers and damaging a significant number of them.
00:40:24.820There you have unmanned systems essentially being able to conduct an attack against the nuclear forces of a nuclear state, and stable deterrence rests on always having the ability to respond, right?
00:40:42.460Always having the ability to execute a second strike.
00:40:45.600Well, what if artificial intelligence drones or other systems take that away?
00:40:51.560Well, then you're putting a nuclear state in a position of, as we worried about in the Cold War, either using nuclear weapons or not using nuclear weapons.
00:41:01.860Secondly, we worry about decapitation.
00:41:04.860That is, the individuals tasked with making decisions about a nuclear response, right, might be taken out.
00:41:11.100We worried about that a great deal in the Cold War, and we took a lot of steps to ensure the U.S. president and U.S. military was always going to have secure command and controls.
00:41:21.560So that decapitation was never going to be effective.
00:41:24.660Well, what we're seeing now is that decapitation might come either through spoofing, right, like the Secretary Rubio spoofs that we've seen, Joe.
00:41:33.860I think you've called attention to that as well as others.
00:41:36.580And so the worry now, unlike in the Cold War, is that someone might be able to execute a first strike against an opponent without incurring any response.
00:41:54.960And that's a very dangerous, destabilizing world.
00:41:57.540So we've got the nuclear revolution, which is still around, and now the AI revolution alongside it.
00:42:08.440And lots of points of danger and of great risk as these revolutions coexist.
00:42:17.100Brad, in just the last remaining moments we have before we move on, tell me about Trump's meeting with Putin yesterday in Anchorage, Alaska.
00:42:29.480Are you feeling a little bit more comfortable about the possibility of nuclear war with Russia now?
00:42:38.860Well, there's always the risk, of course, that the Ukraine war gets out of hand and that Ukraine's interest is to suck us in.
00:42:46.720They want to use American power to balance Russian power.
00:42:51.100So the meeting that we had in Anchorage, of course, to my mind is a very positive step forward at introducing an avenue to end this war.
00:43:01.720We worry about, of course, stumbling into nuclear war, but also being pulled in by third parties like Ukraine who have their own interests in terms of using our power.
00:43:14.320So I feel better, Joe, as a consequence of that meeting.
00:43:20.320Now, of course, Zelensky is another actor and there are others, of course, involved.
00:43:26.300But I feel much better about the result of the meeting in Anchorage.
00:43:32.600Well, Brad, you're the author of many books.
00:52:28.920A former CIA, Pentagon, and White House advisor with an unmatched grasp of geopolitics and capital markets.
00:52:35.520Jim predicted Trump's Electoral College victory exactly 312 to 226, down to the actual number itself.
00:52:44.220Now he's issuing a dire warning about April 11th, a moment that could define Trump's presidency and your financial future.
00:52:52.620His latest book, Money GPT, exposes how AI is setting the stage for financial chaos, bank runs at lightning speeds, algorithm-driven crashes, and even threats to national security.
00:53:04.180Right now, War Room members get a free copy of Money GPT when they sign up for Strategic Intelligence.
00:53:10.320This is Jim's flagship financial newsletter, Strategic Intelligence.