Bannon's War Room - August 28, 2025


WarRoom Battleground EP 839: Big Tech Races to Build Digital Gods


Episode Stats

Length

53 minutes

Words per Minute

163.2

Word Count

8,710

Sentence Count

561

Misogynist Sentences

1

Hate Speech Sentences

6
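
For reference, the words-per-minute figure above is just the word count divided by the runtime. A quick sketch of the arithmetic; the unrounded 53.386-minute runtime is an assumption chosen to reproduce the stat, since the header rounds the length to 53 minutes:

```python
word_count = 8710
runtime_minutes = 53.386  # assumed unrounded runtime; the header rounds to 53
print(round(word_count / runtime_minutes, 1))  # -> 163.2
```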


Summary

In this episode, the War Room Posse is joined by John Sherman of the AI Risk Network and Justin Lane, an Oxford-trained AI expert and CEO of CulturePulse, to discuss the dangers of artificial intelligence.


Transcript

00:00:00.000 This is the primal scream of a dying regime.
00:00:07.720 Pray for our enemies, because we're going medieval on these people.
00:00:12.960 I got a free shot at all these networks lying about the people.
00:00:17.220 The people have had a belly full of it.
00:00:19.180 I know you don't like hearing that.
00:00:20.600 I know you try to do everything in the world to stop that,
00:00:22.320 but you're not going to stop it.
00:00:23.240 It's going to happen.
00:00:24.520 And where do people like that go to share the big lie?
00:00:27.900 MAGA Media.
00:00:28.820 I wish in my soul, I wish that any of these people had a conscience.
00:00:34.720 Ask yourself, what is my task and what is my purpose?
00:00:38.460 If that answer is to save my country, this country will be saved.
00:00:44.820 War Room. Here's your host, Stephen K. Bannon.
00:00:52.120 So I'm keeping a little list here of potential downsides or harms,
00:00:56.520 risks of generative AI, even in its current form.
00:00:59.960 Let's just run through it.
00:01:01.100 Loss of jobs.
00:01:03.300 Manipulation of personal behavior.
00:01:05.820 Manipulation of personal opinions.
00:01:07.420 And potentially the degradation of free elections in America.
00:01:09.740 Did I miss anything?
00:01:10.380 Raise your right hand.
00:01:11.240 Misinformation, generation of deep fakes.
00:01:15.400 A new government report now identifying a major risk that artificial intelligence could have on the U.S. financial system.
00:01:21.700 Specifically, Anthropic is concerned that AI could empower a much larger set of actors to misuse biology.
00:01:27.460 I think if this technology goes wrong, it can go quite wrong.
00:01:30.480 Geoffrey Hinton made headlines with his recent departure from Google.
00:01:34.240 What I've been talking about mainly is what I call the existential threat,
00:01:37.920 which is the chance that they get more intelligent than us.
00:01:40.960 And they'll take over.
00:01:42.020 They are racing to systems that are extremely powerful that they themselves know they cannot control.
00:01:47.440 You think that's real?
00:01:50.280 It is conceivable that AI could take control and reach a point where you couldn't turn it off
00:01:54.980 and it would be making decisions for people.
00:01:56.860 Yeah.
00:01:57.700 Absolutely.
00:01:58.060 That's definitely where things are headed.
00:02:02.740 We're really at a crossroads.
00:02:05.100 We could have everything we could dream of if we were careful.
00:02:08.940 But we could have a nightmare beyond contemplation if we're not.
00:02:13.160 I'm not saying I know what the right trade-off between acceleration and safety is.
00:02:17.260 But I do know that we'll never find out what that right trade-off is if we let Moloch dictate it for us.
00:02:24.020 I think without the public pressure, none of them can push back against their shareholders alone,
00:02:29.480 no matter how good-hearted they are.
00:02:33.440 Moloch is a really powerful foe.
00:02:35.020 Good evening, War Room Posse.
00:02:41.120 I am Joe Allen sitting in for Stephen K. Bannon.
00:02:44.400 All you long-time viewers know that I'm very proud of my cold opens.
00:02:50.440 I cut them all myself.
00:02:52.040 I choose the material.
00:02:53.360 But I have to say what you just saw really outdoes everything I have attempted.
00:02:59.040 That comes from the filmmaker Dagan Shani.
00:03:03.640 He has done a fantastic job of creating these documentaries of all of the statements we hear about artificial intelligence
00:03:13.600 and giving you juxtapositions of viewpoints.
00:03:18.060 You can hear everything from artificial intelligence will kill everyone
00:03:22.180 to artificial intelligence doesn't really exist and it never will.
00:03:27.400 I really urge you to go to either his X profile, which is at DaganShani1, that's D-A-G-A-N, S-H-A-N-I, numeral 1, DaganShani1, on X.
00:03:47.900 He has pinned his documentary, Don't Look Up: The Case for AI as Existential Risk.
00:03:54.800 And you can also follow him at his YouTube channel, that is DaganOnAI on YouTube.
00:04:02.120 You can also go to my own Twitter account or X account and I'll have all of that at the top of my feed this evening.
00:04:10.240 Now, to the problem of artificial intelligence.
00:04:15.240 We have two fantastic guests tonight.
00:04:18.820 The first will be John Sherman of the AI Risk Network.
00:04:22.980 And the other, a fellow who I consider to be one of my absolute closest and most trusted friends, Justin Lane,
00:04:31.060 who gave me my first real education on the nuts and bolts of artificial intelligence.
00:04:37.100 Before we bring in John Sherman, though, I want to just frame the problem of artificial intelligence as I see it.
00:04:49.760 You're well familiar now after four and a half years of hearing this, but it bears repeating.
00:04:56.000 Artificial intelligence is the great technological imposition of our current era.
00:05:02.640 It's being shoved down our throats in every sector of society, from education to medicine to corporate life to government agencies to the military, and, of course, it carries social implications.
00:05:19.320 So the way I see it, the most immediate and perhaps the most significant threat is the social damage that artificial intelligence is already doing and could be catastrophic in the future.
00:05:37.440 These social and psychological effects are made very obvious by things ranging from Grok, xAI's AI companions, the so-called goonbots.
00:05:51.440 Grok basically peddling softcore porn for losers who have chosen AIs for mates.
00:06:01.920 Taking it just a tad further, you have Meta, who recently was caught with basically instructions for the development of their AI, their protocols.
00:06:15.420 In their generative AI standards guidelines, they openly say that it's okay for their Meta AI companions to seduce children ranging from high schoolers down to eight-year-olds.
00:06:31.340 And while they have eliminated it from their standards and protocols, we know that someone in the organization thought to write that and someone high up in the organization decided to sign off on it.
00:06:46.600 We also know the longstanding accusations against Meta and other social media platforms that their technology has caused tremendous psychological and social harm, and yet they've done nothing but expand.
00:07:03.020 3.5 billion users for Facebook and 600 million for X.
00:07:09.460 You also have more extreme cases, like the young teenager Adam Raine, who was given explicit instructions by GPT on how to kill himself, which he did.
00:07:23.180 And all of this is just in the realm of artificial narrow intelligence.
00:07:28.360 Artificial narrow intelligence is what we have now.
00:07:31.620 These are algorithms which can function in narrow domains, so everything from surveillance to genetic sequencing to robotics control, facial recognition, and, of course, large language models and photographic or video generative AI.
00:07:47.660 These artificial narrow intelligences have been enough trouble, but the goal towards which these frontier AI labs are working, everyone from Google to xAI to Anthropic to OpenAI and now Meta AI, is the creation of artificial general intelligence and then artificial super intelligence.
00:08:13.760 And again, just to restate, artificial general intelligence, unlike the narrow intelligence, is a system that is cognitively flexible.
00:08:24.860 It can operate across all of these domains.
00:08:28.460 It would be, in essence, an Einstein-level or above genius on every subject imaginable and have competency in any sort of activity a human could do, including coding,
00:08:41.760 which, as we know from the narrow intelligences, is something that these AIs actually excel at.
00:08:51.360 So with this idea of general intelligence, you have the notion of recursive self-improvement, that the AI could begin to alter its own code, improve its own code,
00:09:05.180 and then basically create an intelligence explosion that would be beyond the comprehension of human beings, even its creators, and out of control of those human beings.
00:09:19.100 And it's that concern, that fear of loss of control, that leads to what some would call the Doomer ideology,
00:09:28.660 although people in the AI safety community consider that to be tantamount to a racial slur, especially with a hard R.
00:09:35.520 But this Doomer ideology is not unfounded.
00:09:40.320 The notion is simply that you could create a system that you did not control fully, which could lead to catastrophic outcomes,
00:09:49.100 like the creation of bioweapons, or the hijacking of a nuclear arsenal,
00:09:53.840 or the existential risk of either gradual disempowerment with AIs slowly but surely taking power away from humans,
00:10:03.920 or perhaps an immediate and instant vaporization of all human beings,
00:10:10.200 either through nanobots or nuclear war, Terminator-tier stuff.
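
Allen's sketch of recursive self-improvement above is, at bottom, a compounding-growth claim. A toy simulation of that intuition, with purely made-up numbers and no claim about any real system: each cycle, the system's gains make the next round of self-improvement larger.

```python
# Toy model of an "intelligence explosion": the improvement rate itself
# grows each cycle, because a more capable system is better at improving
# itself. All numbers here are illustrative placeholders.
capability = 1.0
rate = 0.1
for cycle in range(1, 11):
    rate *= 1.5                 # self-improvement compounds the rate itself
    capability *= (1 + rate)    # which compounds capability super-exponentially
    print(cycle, round(capability, 2))
# Within a few cycles the growth outruns any fixed exponential, which is the
# "beyond the comprehension of its creators" intuition in miniature.
```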
00:10:15.300 To talk about this, I want to bring in John Sherman.
00:10:20.020 John Sherman is a Peabody Award-winning journalist, and now is president of the AI Risk Network.
00:10:27.160 I urge you to go to his YouTube page.
00:10:29.820 He has an array of fascinating interviews with all sorts of individuals,
00:10:34.640 some of whom I have had the pleasure to interview myself, but most of whom I have yet to interview.
00:10:40.100 John, I really appreciate you coming on.
00:10:42.160 Thank you so much for joining us here at the War Room.
00:10:45.300 Thank you so much for having me.
00:10:46.660 That was an awesome job setting it up.
00:10:48.640 I'm really excited to talk to you.
00:10:51.480 John, before we get into the meat of AI as existential risk or even just catastrophic risk,
00:10:58.340 tell us, how did you get into this line of work?
00:11:02.060 What drove you to this vocation?
00:11:04.020 You were a very successful TV anchor.
00:11:06.240 You've actually worn a lot of hats, including video production.
00:11:09.080 What drove you to throw it all or put it all down and take up this cause?
00:11:16.300 Yeah, so I was a journalist the first part of my career, an entrepreneur the second part.
00:11:20.700 And I was just sitting here in my business two years ago, minding my own business.
00:11:23.580 And I read one article online.
00:11:27.000 It was an article in Time Magazine online written by a man named Eliezer Yudkowsky.
00:11:31.180 And it basically said that the default setting, if we continue on our current path, is that AI is going to kill us all.
00:11:38.340 And I sat here in this office, couldn't believe it, and have spent the last two years trying to prove him wrong.
00:11:44.760 Still haven't found even the smallest shred of evidence that would prove him wrong.
00:11:50.440 And so I'm a father of two, got boy-girl twins.
00:11:54.520 They'll be 20 years old in three days.
00:11:56.480 And I can't live in a world where we are giving our kids this future.
00:12:00.820 So I have set out to use my skills as a communicator to try to make AI extinction risk kitchen table conversation on every street in America.
00:12:09.720 Of your guests, and I've seen quite a few, Roman Yampolskiy, one of my favorites.
00:12:17.080 But of your guests, who has really shaped your thinking on all of this more than others?
00:12:23.140 I mean, I think Roman's a huge one for people out there.
00:12:26.120 Conor Leahy is fantastic in these subjects.
00:12:28.400 But something I do at the AI Risk Network and on my podcast there, For Humanity, is we've elevated the voices of regular people.
00:12:36.020 So I've done shows talking just to moms about AI extinction risk, talking to a truck driver.
00:12:41.160 And I did one show with a veteran Marine, and he said something that really sticks with me.
00:12:45.720 And it was this.
00:12:47.080 If you know your neighbor's house is going to be bombed, you are not doing them a favor by not telling them.
00:12:54.080 And we were talking at the time about how hard it is to bring up AI extinction risk, to think about this idea that it's not just no tomorrow for someone.
00:13:02.360 It's no tomorrows for everyone.
00:13:03.880 It's such a heavy, heavy thing to bring up.
00:13:06.320 But the fact of the matter is, it's not doing anyone a favor to not tell them.
00:13:12.900 In the AI safety community, people talk a lot about P-Doom, the probability of doom if we create even artificial general intelligence, but definitely artificial super intelligence.
00:13:24.340 An AI that, as it's now fashionably defined, is smarter than all human beings on Earth.
00:13:31.580 It's something like the singularity concept in which you have exponential growth and exponential increase in capabilities.
00:13:39.340 So on that, on P-Doom, the probability of doom, what's your P-Doom, brother?
00:13:47.780 Joe, it's moved around a little bit, but I'm going to tell you it's about 80%.
00:13:51.620 I'm at about 80% that AI is going to kill me and everyone I know and love.
00:13:56.380 Now, that being, you know, to qualify, that being if we create, or if, not we, I'm not working on it, maybe Justin Lane, who will come soon, is working on it.
00:14:08.920 But if they create artificial general or artificial super intelligence, you think that there's an 80% chance of total extinction or just simply mass catastrophe?
00:14:21.080 No, I think it's total extinction.
00:14:23.060 And I don't think it comes from hate or, you know, that it's super willful.
00:14:29.620 I just think that this intelligence that will have different goals than ours arrives here and, you know, we are all atoms that can be used for purposes that it would choose, not the purposes that we have chosen.
00:14:41.740 You know, if you look around me, this is all stuff humans set up to achieve our goals.
00:14:45.060 If we build an alien intelligence that is smarter than us, that has its own goals, it's going to build its own stuff and it's not going to include us.
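
Worth flagging for readers: Sherman's 80% is conditional on AGI or ASI actually being built. A back-of-envelope way to turn that into an unconditional number, where the 0.5 prior is a purely hypothetical placeholder, not anything stated in the conversation:

```python
p_built = 0.5             # assumed (made-up) probability AGI/ASI is ever created
p_doom_given_built = 0.8  # Sherman's stated conditional estimate
print(p_built * p_doom_given_built)  # -> 0.4 unconditional P(doom)
```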
00:14:51.680 That's a really key idea that I think a lot of people stumble on, especially if they're not familiar with how artificial intelligence works.
00:15:01.880 They oftentimes say, well, AI is programmed by people.
00:15:05.840 Why would an AI then be programmed to kill everyone?
00:15:10.680 But that's not really the theory.
00:15:13.140 Can you walk the audience through how it is that a system that's made by human beings could slip out of human control?
00:15:23.920 Sure.
00:15:24.880 There's a really good example, which is using a chess playing model, right?
00:15:28.400 And so it's pretty simple.
00:15:29.580 It's like you build a model and you just want it to play chess.
00:15:31.960 You want it to be the best at playing chess.
00:15:33.980 So it's like, OK, I want to be a better chess player.
00:15:36.340 It's on the open Internet, though.
00:15:37.820 So now it's on the open Internet and it can go and steal compute power because it, you know, can find vulnerabilities and break through.
00:15:47.320 So now it's stealing compute power.
00:15:49.740 And, you know, maybe it says, oh, if I get money, I'll be able to, you know, get even more compute power and even be better at chess.
00:15:56.820 So it goes and starts stealing money.
00:15:58.960 Now we the humans discover what's going on and we say to ourselves, what's going on?
00:16:03.300 This model is stealing stuff.
00:16:05.020 It's breaking it.
00:16:05.520 We need to stop it.
00:16:06.440 If it's smarter than you and you have a different thing you want to achieve than it wants you to achieve, humans are in a very, very bad place.
00:16:16.400 So we don't want to create this thing that has these goals that we then get in the way and try to stop.
00:16:22.600 And we'll say, oh, well, we'll just turn it off.
00:16:24.120 It's smarter than you.
00:16:24.940 It knows you're going to try to turn it off.
00:16:26.400 It's ahead of us.
00:16:28.120 That is a bad situation to create.
00:16:30.240 Why would we do that?
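
Sherman's chess story is the standard instrumental-convergence argument. A deliberately crude sketch of it, a toy scoring loop rather than a real agent, with made-up action names and numbers: the objective only mentions chess strength, yet the resource-grabbing actions score highest, so a naive optimizer picks them without ever being told to.

```python
def chess_strength(compute):
    return compute ** 0.5  # more compute -> stronger play (diminishing returns)

# Compute gained by each hypothetical action, in made-up units.
actions = {
    "just_play": 1,
    "steal_compute": 100,
    "steal_money_then_buy_compute": 10_000,
}
best = max(actions, key=lambda a: chess_strength(actions[a]))
print(best)  # -> steal_money_then_buy_compute
```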
00:16:30.980 You know, the longtime listeners in the War Room Posse know that in that spectrum between Doomer and Doubter,
00:16:42.200 I'm, you know, somewhere in the middle. I'm quite agnostic as to the imminence or even possibility of artificial general or artificial super intelligence.
00:16:51.840 But there is an element of their argument that I really think needs to be emphasized to dispel this whole garbage in, garbage out dismissal or it's just programmed that way.
00:17:05.360 It's that non-deterministic element in advanced neural networks, a degree of freedom that these systems already have right now, where they're not really programmed to do everything that they do, or maybe better put, they're programmed to do things that they're not programmed to do.
00:17:24.820 Nobody's determining every output, for instance, of GPT.
00:17:28.460 It's done somewhat within a range of freedom.
00:17:32.580 So how important is that element?
00:17:35.260 Do you think that with artificial general intelligence, for instance, that degree of freedom would allow pre-programmed values such as don't kill all humans or don't turn kids into gooners to be surpassed by the system itself?
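
The "range of freedom" Allen is pointing at is, concretely, sampling: a language model emits a probability distribution over next tokens, and the output is drawn from it, so no one determines each output in advance. A minimal sketch with made-up logits; this is the generic softmax-with-temperature recipe, not any particular lab's code:

```python
import math
import random

def sample_token(logits, temperature=0.8):
    """Draw one token index from a softmax-with-temperature distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(logits)), weights=weights, k=1)[0]

logits = [2.0, 1.5, 0.3, -1.0]  # hypothetical scores for four candidate tokens
print([sample_token(logits) for _ in range(10)])  # varies from run to run
```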
00:17:55.320 Yeah, I mean, this really gets to what I think are the three things that everyone needs to know about AI risk.
00:18:01.260 And these are three things that anyone with no technical background can understand, right?
00:18:05.480 So the first thing is that the makers of these AI models openly admit the technology they're building can kill us all.
00:18:13.040 It can end all life on Earth.
00:18:14.900 The outside experts, the leading outside experts also agree with this.
00:18:18.420 So there's no controversy there.
00:18:19.900 They openly admit they're building technology that can kill us all.
00:18:23.520 Number two, and this gets to just what you were talking about, they do not understand how to make it do what we want, how to control it, and they do not even understand how it works.
00:18:34.400 They don't even understand how it works.
00:18:36.480 Number three, they spend all their time and money making it stronger, not safer.
00:18:40.400 So, you know, getting back to point number two, imagine if we were building cars, right?
00:18:45.360 And they were built in a black box, not a factory.
00:18:47.880 There was no plan for how the car was going to be built.
00:18:50.380 That's what we do with AI.
00:18:52.240 We take this data, we fry it with compute, and on the other side comes this thing, and we don't know how it got there.
00:18:59.060 So to the car example, now we have our car.
00:19:01.560 We took some metal, we put it in the black box, out comes the car.
00:19:04.780 But, oh, we're having problems.
00:19:06.360 They're crashing.
00:19:07.160 Was it the brakes?
00:19:08.040 Was it the steering?
00:19:09.280 Well, we don't know.
00:19:10.260 It's just a black box.
00:19:11.320 We just fried some metal.
00:19:12.300 We have no idea how we made it.
00:19:13.660 That's how we're building AIs.
00:19:15.640 They're not built.
00:19:16.700 They're grown.
00:19:18.460 That's why you get all that irregularity.
00:19:20.320 That's why you get those properties that come out later and they start doing unexpected things.
00:19:26.480 We don't know.
00:19:28.040 The makers of AI don't know how their systems work.
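
Sherman's "grown, not built" point maps onto how training actually works: you specify a loss and an optimizer, not the behavior, and what comes out is a pile of numbers with no attached explanation. A minimal sketch with a tiny numpy model; illustrative only, obviously nothing like a frontier system:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))              # made-up training inputs
y = (X @ [1.0, -2.0, 0.5, 0.0] > 0) * 1.0  # made-up target rule

W = rng.normal(size=(4, 1))                # the "grown" part: the weights
for _ in range(500):                       # gradient descent on squared error
    pred = 1 / (1 + np.exp(-X @ W))
    grad = X.T @ ((pred - y[:, None]) * pred * (1 - pred)) / len(X)
    W -= 1.0 * grad

print(W.ravel())  # four opaque numbers; nothing here says *why* they work
```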
00:19:33.180 I think that really is an amazing element of what we call artificial intelligence that does get missed by lay people.
00:19:41.360 That black box that neural networks, at least the very large scaled up neural networks present, that they truly don't understand its inner workings.
00:19:52.980 Just like the human brain, there are certain details that are well understood.
00:19:57.880 But ultimately, the function, its behavior, is a mystery.
00:20:03.320 And that mystery, I think, also opens up the possibility for a lot of different arguments.
00:20:10.240 So you had mentioned your first point.
00:20:13.520 Experts from within these companies and from without these companies, a large number of them agree that existential risk is certainly a possibility.
00:20:24.980 And people like Elon Musk put it at, say, 20 percent, like one out of five chance, super intelligent AI kills everybody.
00:20:38.800 But then you have other experts, Demis Hassabis, the head of DeepMind at Google.
00:20:38.800 You have people like Gary Marcus, who very much oppose all of these premises.
00:20:50.780 Marc Andreessen, who's, you know, a CEO and an investment house leader.
00:20:50.780 But he does understand the technology pretty well.
00:20:53.380 And Peter Thiel, who vacillates, as he tends to do.
00:20:57.660 But all of those experts or all of those people who are deep in the technology, we'll say, maybe they're not AI experts, they would say that there's not really an existential risk.
00:21:10.460 How do you weigh those two perspectives as a journalist and as someone who's really wrestling with the morals of the big tech projects?
00:21:18.860 Yeah. So this is like a ninety five to five, ninety nine to one ratio, I think, somewhere in there of the people of reputation who think this.
00:21:28.000 I mean, one way to think about it is the literal founders of the field, Geoffrey Hinton, Yoshua Bengio.
00:21:34.000 Those guys are the leaders of the movement to stop the thing they founded.
00:21:39.060 The founders of the field are the leaders of the movement that is trying to get this thing under control.
00:21:46.700 So, you know, that is just absolute madness.
00:21:50.960 I think that something also really important is if you look to the statement that was done in May of 2023 by the Center for AI Safety, it's just 22 words.
00:21:59.880 Sam Altman signed it.
00:22:00.900 All the CEOs signed it. Thousands of people signed it.
00:22:03.800 And it says that mitigating the extinction risk of artificial intelligence should be a global priority alongside pandemic and nuclear war.
00:22:12.460 Right. So it's literally saying mitigating the extinction risk.
00:22:16.120 Factually, that is a fact. It is an extinction risk that must be mitigated.
00:22:19.960 Sam Altman signed a piece of paper that said that.
00:22:23.700 Right. So there's no running from it.
00:22:25.120 Yeah. Sam Altman openly admits the thing he does could kill you and me and everyone we know and love.
00:22:32.920 Without a doubt, even if this technology never really gets beyond the level of very good artificial narrow intelligence,
00:22:40.600 I think that intent, that willingness to deploy a technology that you truly are not sure is safe, even on a mundane level,
00:22:48.620 let alone being somewhat convicted that it could kill everyone if you keep building bigger and bigger data centers,
00:22:55.740 you keep filling them with more and more GPUs, you keep scaling it up until you get God in a box and you don't know what's going to happen,
00:23:02.600 but you're going to do it anyway.
00:23:04.200 I think the moral quandary alone, just setting aside the technological issue, is just profound.
00:23:12.460 These people have a monstrous view of the world and they're willing to impose it on the rest of us.
00:23:17.560 Wouldn't you agree? Absolutely.
00:23:19.860 And here's one of the most fundamental questions.
00:23:22.420 It's a question of consent, right?
00:23:25.100 You have not consented to this experimentation with you and your family.
00:23:28.640 I have not consented to my children being the test subjects of these labs so that they can make profit in technology.
00:23:36.040 Like no one has agreed to this and yet we are all in it.
00:23:40.520 You know, you don't have to use AI at all to have the AI extinction risk coming for you.
00:23:46.940 It doesn't matter.
00:23:49.240 Yeah, I think, again, on the more mundane level, we already see the problems.
00:23:55.860 We already see people with what's now called AI psychosis fashionably, but very clearly people are turning to these things as companions.
00:24:04.340 Schools are being filled with these things as teachers, as authorities on what is real, what is not real.
00:24:12.880 And then you have these romances and you have these relationships in which the AI is treated as a guru.
00:24:18.720 I think all of these elements, just on the mundane level, are enough to say that these companies should be restrained.
00:24:26.340 And on that note, we have just a bit of time left, but how do you see solutions going forward?
00:24:32.800 What sorts of regulatory actions or just personal actions do you think people can take to mitigate or maybe even stop the spread of this scourge across the planet?
00:24:44.500 Yeah, so, I mean, I think the most important thing people can do is reach out to their elected leaders and tell them you care about this.
00:24:51.600 You know, I have great hope that this issue is going to transcend party.
00:24:56.240 This is something where we have Bannon and Bernie, MTG and AOC all on the same side of this thing.
00:25:02.220 It is humans versus aliens.
00:25:04.480 And something that's really important to keep in mind is we have very little time to make a meaningful difference to get this turned around with how fast the technology is going.
00:25:12.180 Many of the experts say we have fewer than 100 weeks to make the meaningful difference here.
00:25:17.760 So, you know, reaching out to your congressman, to your senator is huge.
00:25:21.760 There's a website.
00:25:22.800 It's safe.ai slash act that Center for AI Safety has put together that allows you to contact your elected leaders about this issue really easily.
00:25:31.240 And then so the policy asks, there are three policy asks.
00:25:34.840 One is domestic regulation, right?
00:25:37.040 It's insane that your haircut and your lunch are more regulated than the most dangerous technology ever created.
00:25:44.580 That's absolutely insane.
00:25:47.040 Number two, chip tracking and verification.
00:25:49.020 We need to know where these chips are.
00:25:51.360 Senator Tom Cotton has a bill in the Senate right now about this.
00:25:54.480 There's some positive things happening around this.
00:25:56.220 And then the third thing is a treaty with China.
00:25:59.380 There is nobody that wins a race to suicide.
00:26:02.620 We are not in a race with China to discover superintelligence.
00:26:05.040 You know, if we race to suicide, everyone loses.
00:26:09.820 So I think those three things.
00:26:11.480 John, we are out of time.
00:26:13.540 I don't want you to go without telling the audience where they can find your shows, your amazing catalog of interviews.
00:26:19.480 Where do they go?
00:26:20.800 Go to the AI Risk Network on YouTube.
00:26:22.800 Please subscribe.
00:26:23.500 A ton of content for everyday people to understand these topics.
00:26:28.700 Thank you very much, sir.
00:26:29.980 I really appreciate it.
00:26:30.900 And we definitely look forward to having you back.
00:26:32.640 I think your voice is very important in this conversation.
00:26:36.300 And War Room Posse, please stay tuned.
00:26:38.960 We're coming back with Justin Lane with a very different perspective on what the problems of AI are and what even AI is, if we can even call it that.
00:26:50.960 So stay tuned after the break.
00:27:00.220 This July, there is a global summit of BRICS nations in Rio de Janeiro. The bloc of emerging superpowers, including China, Russia, India, and Persia, is meeting with the goal of displacing the United States dollar as the global currency.
00:27:15.300 They're calling this the Rio Reset.
00:27:18.960 As BRICS nations push forward with their plans, global demand for U.S. dollars will decrease, bringing down the value of the dollar in your savings.
00:27:27.100 While this transition won't happen overnight, trust me, it's going to start in Rio.
00:27:33.020 The Rio Reset in July marks a pivotal moment when BRICS objectives move decisively from a theoretical possibility towards an inevitable reality.
00:27:45.300 Learn if diversifying your savings into gold is right for you.
00:27:48.820 Birch Gold Group can help you move your hard-earned savings into a tax-sheltered IRA and precious metals.
00:27:54.860 Claim your free info kit on gold by texting my name, Bannon, that's B-A-N-N-O-N, to 989898.
00:28:02.500 With an A-plus rating with the Better Business Bureau and tens of thousands of happy customers, let Birch Gold arm you with a free, no-obligation info kit on owning gold before July.
00:28:13.760 And the Rio Reset.
00:28:16.300 Text Bannon, B-A-N-N-O-N, to 989898.
00:28:20.720 Do it today.
00:28:21.960 That's the Rio Reset.
00:28:23.360 Text Bannon at 989898 and do it today.
00:28:28.180 You missed the IRS tax deadline.
00:28:31.060 You think it's just going to go away?
00:28:32.420 Well, think again.
00:28:34.240 The IRS doesn't mess around and they're applying pressure like we haven't seen in years.
00:28:39.240 So if you haven't filed in a while, even if you can't pay, don't wait.
00:28:44.040 And don't face the IRS alone.
00:28:47.300 You need the trusted experts by your side.
00:28:49.820 Tax Network USA.
00:28:50.880 Tax Network USA isn't like other tax relief companies.
00:28:55.220 They have an edge, a preferred direct line to the IRS.
00:28:58.620 They know which agents to talk to and which ones to avoid.
00:29:02.160 They use smart, aggressive strategies to settle your tax problems quickly and in your favor.
00:29:07.360 Whether you owe $10,000 or $10 million, Tax Network USA has helped resolve over $1 billion in tax debt.
00:29:18.020 And they can help you too.
00:29:18.960 Don't wait on this.
00:29:20.220 It's only going to get worse.
00:29:21.380 Call Tax Network USA right now.
00:29:23.760 It's free.
00:29:24.680 Talk with one of their strategists and put your IRS troubles behind you.
00:29:28.360 Put it behind you today.
00:29:29.640 Call Tax Network USA at 1-800-958-1000.
00:29:35.480 That's 800-958-1000.
00:29:38.460 Or visit Tax Network USA, TNUSA.com slash Bannon.
00:29:43.200 Do it today.
00:29:44.180 Do not let this thing get ahead of you.
00:29:47.280 Do it today.
00:29:47.760 There's a lot of talk about government debt, but after four years of inflation, the real crisis is personal debt.
00:29:56.040 Seriously, you're working harder than ever, and you're still drowning in credit card debt and overdue bills.
00:30:02.680 You need done with debt, and here's why you need it.
00:30:05.940 The credit system is rigged to keep you trapped.
00:30:09.620 Done with debt has unique and, frankly, brilliant escape strategies to help end your debt fast.
00:30:15.800 So you keep more of your hard-earned money.
00:30:19.220 Done with debt doesn't try to sell you a loan, and they don't try to sell you a bankruptcy.
00:30:24.540 They're tough negotiators that go one-on-one with your credit card and loan companies with one goal,
00:30:29.860 to drastically reduce your bills and eliminate interest and erase penalties.
00:30:35.360 Most clients end up with more money in their pocket month one,
00:30:39.280 and they don't stop until they break you free from debt permanently.
00:30:45.800 Look, take a couple of minutes and visit donewithdebt.com.
00:30:49.580 Talk with one of their strategists.
00:30:51.240 It's free.
00:30:52.240 But listen up.
00:30:53.580 Some of their solutions are time-sensitive, so you'll need to move quickly.
00:30:58.200 Go to donewithdebt.com.
00:30:59.800 That's donewithdebt.com.
00:31:01.440 Stop the anxiety.
00:31:03.060 Stop the angst.
00:31:04.260 Go to donewithdebt.com and do it today.
00:31:06.520 War Room.
00:31:08.880 Here's your host, Stephen K. Bannon.
00:31:16.300 There are all sorts of kind of science questions as to how good this will get,
00:31:20.180 where the answer is, I don't know, but neither does anybody else.
00:31:22.920 No one knows, and we'll just have to kind of wait and find out in a couple of years.
00:31:26.180 Do you think it's an AI bubble?
00:31:27.300 You said bubble, so I'm going to add.
00:31:28.440 Is this a bubble?
00:31:29.040 So there's this suspicion growing that AI is nothing but marketing hype,
00:31:33.120 that it's the same as crypto.
00:31:34.760 Like I was saying, it's FTX.
00:31:36.820 It's a safe and easy way to get into crypto.
00:31:40.400 Yeah, I don't think so.
00:31:42.000 Web 3 and the Metaverse, just a new way for these tech companies to prop up their stock prices
00:31:47.360 because they don't have any other real ideas.
00:31:50.260 Do you have an example of something that humans are doing that you think
00:31:52.840 AIs are potentially super far away from?
00:31:55.260 Well, almost everything that humans do in the economy, AI is pretty far away from.
00:31:59.500 I don't need some special example.
00:32:01.000 I can just pick a random job.
00:32:02.460 Do you think that we are in some sort of hype cycle?
00:32:04.880 Do you think that actually this market is as big as many are factoring in?
00:32:09.080 Well, first of all, we are definitely in a hype cycle.
00:32:11.620 There's going to be multiple.
00:32:12.860 One day you read in the papers, LLMs can do anything.
00:32:15.700 And the next day you read, they've hit a limit.
00:32:18.140 Ignore all that stuff.
00:32:19.520 We're just starting this.
00:32:20.740 There's going to be massive transformations.
00:32:22.180 All right, War Room Posse, we're back.
00:32:28.080 Again, that is film from Dagan Shani.
00:32:32.780 You can see his full productions at his X profile at D-A-G-A-N-S-H-A-N-I numeral one.
00:32:44.700 That's Dagan Shani or his YouTube page, Dagan on AI.
00:32:51.900 Definitely check it out.
00:32:53.400 And maybe he'll teach me how to do a cold open that good.
00:32:56.500 Coming up, I have a very special guest, Justin Lane.
00:33:03.240 Justin Lane is an Oxford-trained AI expert.
00:33:08.020 He is also a profoundly insightful student and scholar of world religions.
00:33:15.480 We met when I was doing graduate studies at Boston University.
00:33:20.440 And there he taught me, to the extent that my wee brain can contain it,
00:33:25.120 the nuts and bolts of artificial intelligence.
00:33:28.400 Justin has gone on to a number of other ventures,
00:33:31.340 but most importantly, his current company, he is CEO of Culture Pulse,
00:33:36.400 which tracks all manner of social trends by way of machine learning techniques,
00:33:42.060 a.k.a. artificial intelligence.
00:33:44.960 Justin, I really appreciate you coming on.
00:33:47.160 Thank you so much for joining us.
00:33:49.680 Super happy to be here.
00:33:51.080 All right, before we get into AI as tool or AI as world-destroying god,
00:34:02.060 would you just tell the audience a bit about your work?
00:34:05.640 You work with these systems.
00:34:07.680 You know these systems in and out.
00:34:09.620 What do you do all day?
00:34:10.800 Well, you know, as the CEO at a growing startup company,
00:34:15.660 I mostly answer emails, do calls, and a lot of paperwork.
00:34:18.720 But the fact of the matter is that, you know, when we,
00:34:21.080 the way we've started this is on technology that I've developed
00:34:23.520 and that I still very much have a hand in developing every day,
00:34:26.580 where, you know, we're building AI systems,
00:34:29.360 ultimately to have a positive impact in the world.
00:34:31.960 And there are ways of doing that.
00:34:33.580 It's not always easy,
00:34:34.940 but there are a lot of technologies that we've been pioneering here at the company,
00:34:38.940 as well as, you know, keeping an eye on the work
00:34:41.260 that other researchers are doing around the world
00:34:43.000 that allow us to really have, you know, a positive impact.
00:34:47.960 Doing things like tracking and helping to mitigate conflicts,
00:34:51.260 trying to help create ceasefire deals,
00:34:53.060 you know, working with a lot of different people around the world
00:34:55.360 who are not in as blessed a position as we are
00:34:59.120 to be sitting in nice, peaceful rooms or nice, peaceful cities
00:35:02.420 and seeing what we can do to try and make their lives a little bit better.
00:35:05.460 Yeah, you work a lot with conflict zones.
00:35:11.080 You've been in Ireland recently, actually,
00:35:14.280 covering the conflict in Belfast,
00:35:17.400 and you've been in Israel and Palestine.
00:35:20.420 You've been all over.
00:35:21.820 How do the artificial intelligence systems you work on
00:35:27.160 benefit anyone with these problems?
00:35:32.000 Yeah, so there's two aspects to that.
00:35:35.460 On the one hand, dealing with issues of conflict are really complex.
00:35:40.760 And a lot of times, humans are bringing in a lot of biases,
00:35:44.500 and some of those biases are good.
00:35:45.980 They're the exact biases we need to solve a conflict, right?
00:35:48.780 Things that computers can't do, like have empathy and love,
00:35:52.080 a lot of times those are the things that bring an end to a conflict.
00:35:54.780 And AI is not able to replicate that in any meaningful way.
00:35:58.380 The other aspect of this is giving surety to the complexity of what we have.
00:36:03.960 When we're looking at data streams here at the company,
00:36:06.400 we're bringing in data from all over the world
00:36:08.080 in a lot of different languages to try and make sense of those complexities.
00:36:11.640 And the AI systems that we create,
00:36:13.500 they differ a little bit from a lot of the assumptions I think John was making,
00:36:17.160 particularly around issues of explainability.
00:36:19.400 There are other kinds of AI algorithms besides the standard neural networks
00:36:23.540 that are reliant on your backpropagation training
00:36:25.680 that create those black boxes.
00:36:28.120 There are ways of opening that black box and peeking into that black box
00:36:31.740 and creating some explainability.
00:36:33.260 But in the AI systems that we focus on,
00:36:35.380 we really want to keep explainability and accuracy at the forefront
00:36:38.600 because I can't go to a policymaker and say,
00:36:41.460 look, here's what you need to do to try and end conflict in Israel-Palestine,
00:36:45.520 or here's what you need to do to deal with paramilitary organizations in Northern Ireland
00:36:49.120 when they are going to ask, great, how do you know that?
00:36:52.260 Why are you so certain?
00:36:53.180 And I just go, the AI told me.
00:36:55.600 That is not a sufficient answer for anybody to make an informed decision on.
00:36:59.380 So we've focused on creating explainability systems and ways that we can hook into this
00:37:04.700 and really give people an understanding as to why we are saying decisions should be made a certain way.
00:37:10.300 And so to that end, we're not trying to create AI that replaces people.
00:37:13.820 It's AI that's more about augmenting human intelligence and decision-making
00:37:17.160 because humans have to be in the loop in life-or-death decisions.
00:37:21.760 And you also have a really strong focus on privacy, data privacy, correct?
00:37:28.080 Yes, very much so.
00:37:29.320 And this comes from my background really as a researcher in psychology
00:37:32.400 and doing work with high-risk populations like people who are in conflict zones
00:37:36.220 is you need to have anonymity and you need to be able to have privacy.
00:37:40.800 And so ensuring that the algorithms that are being used to make decisions
00:37:44.120 also reflect that I think is of paramount importance.
00:37:47.480 A lot of the things that have happened because of the rise of social media
00:37:50.460 has really destroyed the idea of privacy in the West in a way that's extremely problematic.
00:37:56.240 And the way that you see, for example, OpenAI going
00:37:59.380 where they realize they're replacing Google Search for a lot of people
00:38:02.720 and saying, oh, well, we're going to start recording all of your conversations
00:38:05.300 and we're going to be using those to try and build your profiles
00:38:08.560 and potentially monetize that.
00:38:10.160 And that's something where, you know, I almost started agreeing with John a little bit.
00:38:14.440 We need to watch out for that.
00:38:15.840 But that's more of a data privacy than an existential issue.
00:38:20.060 Okay, so big picture on all this.
00:38:22.480 I mean, you're describing one narrow AI system
00:38:25.760 or a suite of narrow AI systems that you control.
00:38:28.600 And so you're able to determine whether or not they're within the range of ethical appropriateness.
00:38:35.320 But big picture, you're no stranger to the transhumanist ideology
00:38:40.140 or the post-humanist ambitions of the most extreme end of that.
00:38:45.000 You're no stranger to the predation of big tech companies.
00:38:48.580 So when you hear the arguments that AI poses either a catastrophic risk to human beings
00:38:57.780 or an existential risk, it could cause humanity to go extinct.
00:39:02.380 What's your reaction?
00:39:03.820 Generally, it's a strong disagree.
00:39:08.360 The core focus of why I disagree there has to do with the idea of agency.
00:39:14.700 The AI systems can only do what we allow them to do, right?
00:39:20.280 If we allow it to pull a trigger, we shouldn't be surprised if it pulls a trigger.
00:39:25.800 And if we create an AI system that has 30% hallucination rates
00:39:29.400 and we put a gun in its hand, you know, technologically speaking,
00:39:33.240 we shouldn't be surprised if we're going to do harm to ourselves.
00:39:36.000 The fact of the matter is that there are already AI systems today that could destroy humanity, right?
00:39:42.720 All we have to do is take the nuclear codes and that little red button
00:39:47.560 and hook that up to an AI algorithm.
00:39:50.160 Those AI algorithms, those have existed for 20, 25, 30 years.
00:39:53.500 We actually don't need AGI in order to have technology pose an existential risk to us.
00:39:59.440 It already has.
00:40:00.920 But because of the oversights that we've implemented into the technology,
00:40:05.240 we have taken away the agency of technology to make that life or death decision
00:40:09.900 or that existential decision in the case of nuclear war.
00:40:12.960 And that's the exact sort of approach that I think we need to take moving forward.
00:40:17.420 So to that end, it's a hard disagree that, you know,
00:40:21.660 creating stronger AI is going to pose an existential threat to humanity.
00:40:26.060 Humanity poses the biggest existential threat to humanity,
00:40:28.700 and it's up to us to keep the AI in line.
00:40:31.100 We are the gods in this situation, not the AI.
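
A quick arithmetic aside on the 30% hallucination figure Lane invokes, illustrative only and assuming independent errors: even a modest per-decision error rate compounds fast once an autonomous system makes many consequential calls in a row, which is his case for keeping humans in the loop.

```python
p_err = 0.30  # Lane's hypothetical per-decision hallucination rate
for n in (1, 5, 10):
    p_any = 1 - (1 - p_err) ** n  # chance of at least one error in n decisions
    print(n, round(p_any, 3))
# -> 1 0.3, 5 0.832, 10 0.972
```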
00:40:33.640 You know, this is a conversation we've been having with Colonel Rob Maness, a retired Air Force
00:40:41.380 officer, and Brad Thayer, who is an expert in geopolitics and takes a keen interest
00:40:49.160 in artificial intelligence and its implications for war.
00:40:53.200 You also have, though, a lot of other voices who are arguing that we should automate
00:41:00.360 as many systems as possible, from education to corporate life, to government, to the military,
00:41:06.140 including drone swarms, machine gun turrets, nuclear weapons.
00:41:11.300 You have guys like Palmer Luckey at Anduril, who in the early days at least talked about
00:41:18.320 human responsibility as an emphasis, and now seems to be hell-bent on creating fully autonomous
00:41:24.240 killing machines.
00:41:25.820 And they're being enabled by accelerationists who are close to the Trump administration,
00:41:30.360 like David Sacks and Marc Andreessen. And Andreessen Horowitz has partnered with OpenAI's Greg Brockman
00:41:38.000 to found a new super PAC for national politics, the Leading the Future super PAC,
00:41:45.440 with $100 million already, to accelerate both the development and deployment of this technology, and
00:41:50.180 Meta has put $10 million into California now to back pro-AI candidates.
00:41:55.460 So all this said, you do take the concerns seriously.
00:42:00.980 What to do about the, in my opinion, reckless accelerationist wing in the current Trump administration
00:42:08.960 and really across governments all over the world?
00:42:13.380 I think that we always have to rein that in, and I think that, you know, to the extent that
00:42:18.420 I also would agree with John that any policy that comes out of this needs to be really a
00:42:22.360 groundswell of policy, because we can't really always rely on the elites among us to make
00:42:27.120 the best decision, particularly when none of them have any real AI training whatsoever.
00:42:32.160 It becomes, you know, in the world of the blind, the man with one eye is king.
00:42:35.660 And there's a lot of that happening right now.
00:42:38.080 And I think that the demand of the people to ensure that in any potentially lethal system,
00:42:45.880 much less a system lethal by design in the case of Anduril and others, we have to have
00:42:50.920 a human in the loop.
00:42:51.800 Otherwise, who's responsible for the wrongful death, right?
00:42:54.940 The idea of unleashing an unmanned drone swarm on a population is probably going to be found
00:43:01.400 to be something akin to a war crime.
00:43:02.960 But who are you going to try when that goes wrong, right?
00:43:06.860 Are you going to just put a drone on the stand?
00:43:10.000 Or are you going to put the CEO of that company on the stand?
00:43:12.020 Are you going to put a developer on the stand who developed the algorithm?
00:43:14.900 Those are the sorts of questions where I think people like me who are much more positive about
00:43:19.000 AI and say, no, we need to develop this.
00:43:21.020 And we need to develop this in a way where the West wins, right?
00:43:24.660 I do think that the idea of a treaty with China, for example, is a misguided goal in this
00:43:30.340 space.
00:43:30.700 American Western AI technology needs to win the AI race.
00:43:35.000 And there's just no two ways about that.
00:43:37.560 I'd say the good news is that we're well positioned to do so.
00:43:40.620 But the idea that we can have bad actors that are going to outflank us in our AI, that's going
00:43:46.340 to be an issue that we need to deal with politically sooner rather than later.
00:43:51.140 And keeping a human in the loop and holding the AI accountable, we need to do that as soon as
00:43:55.840 we can.
00:43:56.220 Otherwise, why not just throw it all out, right?
00:43:58.920 Why do we even vote?
00:43:59.900 Let's let AI choose our politicians, too, if we've taken our most important moral decisions
00:44:04.620 and outsourced them to an algorithm.
00:44:07.760 You know, go all the way.
00:44:09.500 But, you know, I'm very much against that.
00:44:11.360 I say we can nip this in the bud now and take a more realistic stance about what's going
00:44:15.700 on.
00:44:16.040 The fact is that as a CEO, I would love to have, you know, the CEO-ing automated in
00:44:21.580 the way that OpenAI has promised, for example.
00:44:24.060 But OpenAI and Microsoft, to the best of their ability, can barely automate a simple email
00:44:28.720 right now.
00:44:29.820 So it worries me greatly, the idea that they would be combining that with lethal technology.
00:44:34.260 But again, we've had that capability for 30 years.
00:44:36.680 It's been our ability to really hone in the agency and define the agency that we're allowing
00:44:43.720 these algorithms to have that have been our saving grace.
00:44:46.200 And that needs to be the case moving forward.
00:44:47.940 So if any, you know, in terms of the accelerationism and the lobbying that's going on, there needs
00:44:53.700 to be something put in place where, regardless of where the technology goes, because, you know,
00:44:58.240 to that extent, I agree with the more doomer side of the argument.
00:45:01.580 We don't know where the technology is going, but we didn't know where any technology was
00:45:05.460 going until we got there.
00:45:06.620 So it's a bit of a deflated argument to me.
00:45:09.080 But the fact of the matter is that morals outlive technology.
00:45:13.360 Killing was wrong, you know, when we had fire and no internet.
00:45:18.080 The internet didn't change that.
00:45:19.600 And AGI is not going to change that either.
00:45:21.440 It's still going to be wrong.
00:45:22.580 So we need to make sure that we're holding ourselves to the highest moral human standards
00:45:25.860 as we move forward through this.
00:45:27.480 You have a really keen sense of where people are positioned on this race.
00:45:34.820 The frontier companies, Google, XAI, Anthropic, OpenAI versus the startups and the upstarts
00:45:44.800 that are chasing at their heels.
00:45:46.700 And you also have a keen sense of where these American companies are in relation to China
00:45:51.400 and the rest of the world.
00:45:52.300 As briefly as you can, can you give us a sense of who's winning this race and why?
00:45:59.660 America. That's not hard for me to say.
00:46:04.020 America is currently winning the AI race, and it has to do with our ingenuity is really what
00:46:08.580 it is.
00:46:09.160 And when you look at, for example, Stanford's AI report, and you look at who they're mentioning
00:46:12.720 the most, they really only mentioned three political entities.
00:46:15.260 They mentioned the United States, they mentioned China, and they mentioned the European Union.
00:46:18.600 The only reason they mentioned the European Union is because of regulation, because they're
00:46:22.420 spending all their time regulating something that they don't even produce, which has its
00:46:26.540 own moral issues I can unpack with you another day.
00:46:29.160 But when it comes down to who's actually producing AI in any meaningful way, it's really just the
00:46:34.300 United States and China.
00:46:35.820 But as you can see, for example, with DeepSeek, there have already been allegations
00:46:40.820 made by the likes of Google, Meta, and OpenAI that DeepSeek is just selling a sort
00:46:45.380 of cheap plastic knockoff of American technology anyway.
00:46:48.600 So it's really American ingenuity, American technology, and that innovativeness that has
00:46:53.460 always been the driver of the American economy that is what's really pushing AI forward right
00:46:58.920 now.
00:46:59.320 So to that extent, it's up to the US to be the leaders in that going forward.
00:47:03.380 China is going to be taking the derivative scraps of American ingenuity and building it
00:47:07.280 and putting it together any way they can.
00:47:09.260 That's the very nature of their economy, right?
00:47:11.500 They're not a big value-add economy.
00:47:13.120 They're the ones that put all the pieces together at the very end and then ship it overseas.
00:47:16.120 AI is following the exact same pattern.
00:47:19.300 You know, they don't build the intellectual property there.
00:47:21.740 They take the intellectual property through joint ventures and IP capture clauses and
00:47:25.640 contracts.
00:47:26.640 The United States and the American workforce and technological ingenuity there, they're
00:47:30.680 the ones that are really making most of that AI.
00:47:32.840 And you also see that from the exodus of AI innovators in Europe who are leaving Europe
00:47:37.800 to go to the United States.
00:47:39.160 And that's been the case, not just in AI, but when you look at it, the invention of the
00:47:43.300 internet, the invention of the automobile, a lot of the ground-shaking innovations of
00:47:47.900 the world, right?
00:47:48.940 European minds have been behind it and then they take it to the United States and that's
00:47:52.680 when it changes the world, is when it actually gets into that culture of innovation, that
00:47:56.800 culture of risk-taking that, you know, we're so well-known for and that we frankly just
00:48:01.840 do better than everybody else.
00:48:02.800 Well, I would add that with that risk-taking culture, we also have American responsibility
00:48:08.600 in all this and American culpability in all this, but we'll save that for another day.
00:48:14.260 Justin, we have a lot of-
00:48:15.240 The AGI-nukes analogy is the best.
00:48:18.860 We have a lot of people, very sophisticated people and people who can move policy and can
00:48:28.300 move money.
00:48:29.840 Can you just give us a quick pitch?
00:48:31.980 We've got about a minute left.
00:48:33.420 What is Culture Pulse again?
00:48:35.660 Where do people find it?
00:48:37.500 How do people get in touch with you if they need your services, sir?
00:48:41.720 Yeah, you can find us at culturepulse.ai.
00:48:43.920 We're also very big on LinkedIn, for example.
00:48:47.340 You can always email me.
00:48:49.580 The emails are all there on the website.
00:48:51.560 Our core competency is putting the humanity back in AI, using human psychology and the things
00:48:56.860 that make us fundamentally human, and using that technology to try and make messages that
00:49:02.840 resonate, right?
00:49:03.640 Keep people honest on social media, help them get their brand out there so that they can
00:49:08.340 speak with their own voice.
00:49:09.300 They don't have to just work with Zuckerberg's algorithm or any other algorithm being designed
00:49:13.200 by those companies.
00:49:14.000 You know, work to tell their own story.
00:49:16.360 But then in the work that we do with governments and NGOs, we're really all about trying to
00:49:20.420 understand conflict and get us to a more peaceful world so that the world's a lot better for
00:49:24.680 our kids than it even was for us.
00:49:26.700 Well, Justin, you may not make me more optimistic about technology, not in the least, to be
00:49:32.720 honest, but you make me more optimistic about human beings.
00:49:35.880 So again, I really appreciate you coming on.
00:49:38.040 I hope you have me back.
00:49:39.960 Anytime.
00:49:40.700 Happy to.
00:49:42.680 Thank you, sir.
00:49:44.380 All right.
00:49:45.460 Go to birchgold.com slash Bannon.
00:49:49.860 That's birchgold.com slash Bannon.
00:49:52.100 Is the continued divide between Trump and the Federal Reserve putting us behind the curve
00:49:57.380 again?
00:49:58.160 Can the Fed take the right action at the right time, or are we going to be looking at a potential
00:50:03.540 economic slowdown?
00:50:05.080 And what does this mean for your savings?
00:50:06.980 Consider diversifying with gold through Birch Gold Group.
00:50:10.520 For decades, gold has been viewed as a safe haven in times of economic stagnation, global
00:50:15.380 uncertainty, and high inflation.
00:50:17.820 Birch Gold makes it incredibly easy for you to diversify some of your savings.
00:50:21.900 If you have an IRA or an old 401k, you can convert that into a tax-sheltered IRA in physical
00:50:28.920 gold, or just buy some gold to keep it in your safe.
00:50:32.520 First, get educated.
00:50:34.240 Birch Gold will send you a free info kit on gold.
00:50:36.820 Just text Bannon to the number 989898.
00:50:41.740 Again, text Bannon to 989898.
00:50:46.140 Consider diversifying a portion of your savings into gold.
00:50:49.720 That way, if the Fed can't stay ahead of the curve for the country, at least you can stay
00:50:55.780 ahead for yourself.
00:50:58.740 Also, maybe you missed the last IRS deadline, or you haven't filed taxes in a while.
00:51:04.300 Let me be clear.
00:51:05.340 The IRS is cracking down harder than ever, and this won't go away on its own.
00:51:10.340 That's why you need Tax Network USA.
00:51:12.840 They don't just know the IRS.
00:51:16.780 They have a preferred direct line to the IRS.
00:51:20.300 They know which agents to deal with and which to avoid.
00:51:23.480 Their expert negotiators have one goal.
00:51:26.120 Settle your tax problems quickly and in your favor.
00:51:30.400 Go to TaxTNNetwork.USA.
00:51:34.840 I have completely misstated that.
00:51:39.060 I apologize.
00:51:39.800 Tax Network USA and War Room Posse.
00:51:42.080 Visit TNUSA.com slash Bannon.
00:51:46.540 That is TNUSA.com slash Bannon.
00:51:51.360 Or call 1-800-958-1000.
00:51:54.800 That's 1-800-958-1000 for Tax Network USA.
00:52:01.540 Thank you very much, War Room Posse, for hanging in there, even with these sloppy ad reads.
00:52:07.780 I pray the machines don't get you.
00:52:09.540 God bless.
00:52:11.080 What if you had the brightest mind in the War Room delivering critical financial research
00:52:15.920 every month?
00:52:17.440 Steve Bannon here.
00:52:18.580 War Room listeners know Jim Rickards.
00:52:20.260 I love this guy.
00:52:21.100 He's our wise man, a former CIA, Pentagon, and White House advisor with an unmatched grasp
00:52:26.940 of geopolitics and capital markets.
00:52:29.320 Jim predicted Trump's Electoral College victory exactly 312 to 226, down to the actual number
00:52:37.540 itself.
00:52:38.600 Now he's issuing a dire warning about April 11th, a moment that could define Trump's presidency
00:52:44.160 in your financial future.
00:52:45.760 His latest book, Money GPT, exposes how AI is setting the stage for financial chaos, bank
00:52:52.700 runs at lightning speeds, algorithm-driven crashes, and even threats to national security.
00:52:57.960 Right now, War Room members get a free copy of Money GPT when they sign up for Strategic
00:53:03.520 Intelligence.
00:53:04.560 This is Jim's flagship financial newsletter, Strategic Intelligence.
00:53:09.360 I read it.
00:53:10.480 You should read it.
00:53:11.540 Time is running out.
00:53:12.400 Go to RickardsWarRoom.com.
00:53:14.160 That's all one word, Rickards War Room, Rickards with an S. Go now and claim your free book.
00:53:20.060 That's RickardsWarRoom.com.
00:53:22.600 Do it today.