Bannon's War Room - April 20, 2026


WarRoom Battleground EP 991: HOLLY ELMORE: Save the Human Race! Pause AI


Episode Stats


Length: 53 minutes

Words per minute: 175.9

Word count: 9,461

Sentence count: 493

Harmful content

Toxicity: 3 sentences flagged

Summary

Summaries generated with gmurro/bart-large-finetuned-filtered-spotify-podcast-summ.

Transcript

Transcript generated with Whisper (turbo).
Toxicity classifications generated with s-nlp/roberta_toxicity_classifier.
00:00:00.000 This is the primal scream of a dying regime.
00:00:07.720 Pray for our enemies, because we're going medieval on these people.
00:00:12.960 I got a free shot at all these networks lying about the people.
00:00:17.220 The people have had a belly full of it.
00:00:19.140 I know you don't like hearing that.
00:00:20.560 I know you try to do everything in the world to stop that,
00:00:22.320 but you're not going to stop it.
00:00:23.240 It's going to happen.
00:00:24.520 And where do people like that go to share the big lie?
00:00:27.920 MAGA Media.
00:00:28.820 I wish in my soul, I wish that any of these people had a conscience.
00:00:34.700 Ask yourself, what is my task and what is my purpose?
00:00:38.460 If that answer is to save my country, this country will be saved.
00:00:44.880 War Room. Here's your host, Stephen K. Bannon.
00:00:53.400 Good evening. I am Joe Allen and this is War Room Battleground.
00:00:57.700 For the last year and a half, artificial intelligence has exploded onto American politics.
00:01:04.600 On the one side, you have accelerationists who are hell-bent on developing and deploying
00:01:10.240 AI, and there seems to be absolutely no end to their reckless thirst to alter the entire
00:01:18.840 trajectory of the human race.
00:01:21.080 On the other side, you have those, maybe you would consider me to be in that camp, those
00:01:24.800 who would prefer to see all of it stopped.
00:01:28.100 If you could push a button right now
00:01:30.640 and turn off the entire AI industry,
00:01:33.320 I would be all for it.
00:01:35.160 But of course, these are perhaps unrealistic dreams.
00:01:39.020 And in the spectrum between these extremes,
00:01:42.520 you have every position imaginable.
00:01:45.180 People who are mildly concerned
00:01:47.620 about artificial intelligence,
00:01:49.380 the expansion of data centers,
00:01:51.020 child protection, deep fakes,
00:01:53.200 and people who are extremely concerned, but not to the point that they would give up the entire
00:01:59.920 industry. In the AI safety community, you have concerns such as bioweapons being developed by
00:02:07.120 amateurs or even rogue states, concerns like AI going rogue. What happens if you create a super
00:02:13.920 human artificial intelligence that cannot be controlled? One of the more sane arguments,
00:02:20.660 I would say, is to simply pause this race. The race dynamic between American corporations and
00:02:28.280 American corporations against China means that no one is incentivized to pause. And yet it is
00:02:38.260 ultimately up to human beings to make these decisions. Here to talk about pausing AI is
00:02:45.320 Holly Elmore, Executive Director of Pause AI. Holly, thank you very much for joining us here.
00:02:51.720 That's Pause AI US, just to be clear. Pause AI US, as opposed to Pause AI Mars.
00:02:56.140 That's a positive... Pause AI, the whole movement, you know, it's a worldwide thing.
00:02:59.240 Yes. So Holly, if you would, maybe just give us a sense of what Pause AI is as an organization,
00:03:07.340 what your goal is, and what your tactics are to achieve that goal.
00:03:11.580 So we're a grassroots organization. We're focused on using the democratic process to connect the already 70 percent, looking at different polls, sometimes the number is even higher, that want something like a pause.
00:03:22.860 They want a slowdown. They want regulation on AI. That's already what the people want.
00:03:27.880 So we're connecting them to their representatives to hear that message and then demonstrating in other ways to just let people know.
00:03:34.480 There are a lot of people out there who wish they could pause. We hear very commonly like, oh, wouldn't that be nice? You know, kumbaya, we could pause.
00:03:39.740 But we really can. We've done things like this before. This is why we're not dead from nuclear weapons is because there have been international treaties to control the proliferation and the use of nuclear weapons.
00:03:50.020 When you say the democratic process, you've been here in D.C. for a while. You're talking to politicians.
00:03:55.500 You actually had a protest out in front of the White House or a demonstration at the Capitol.
00:03:59.000 Yes, the Capitol. Yeah. So tell me a little bit about your experience here in D.C. and what you hope to achieve here.
00:04:05.560 Honestly, I hadn't done this before. What we did this week, on Monday and Tuesday, was meet with 75 congressional offices, including 25 Senate offices.
00:04:12.720 So that's 25 percent of the Senate. And we just brought constituents who are concerned about this to talk to their representatives.
00:04:20.060 And that went so much better than I even thought it was going to go.
00:04:24.500 It was really impressive how much the staffers of these offices and sometimes the members themselves wanted to know more about what we were saying, about what was possible.
00:04:32.900 They wanted to see these polls. You know, there were a lot of things. They're busy. They haven't heard of a lot of things. You really can make a difference by bringing it to their attention. And an in-person meeting makes a big point. And then the Capitol demonstration takes it further. It's a big conscientious task. It's so hard to put on a lawful demonstration on Capitol Hill because the security is high.
00:04:52.100 But it shows that, look, there are this many people, you know, we had something like 80 people who want to, who support pausing AI.
00:04:59.040 They've got lobbyists in their ear all day from the AI industry who are telling them, oh, you can't, it's too hard, you know.
00:05:04.400 And that this is so unpopular, that the people love AI.
00:05:07.860 And the people really don't love AI.
00:05:09.600 And so if you get that message to them, like, that makes a difference.
00:05:12.880 So here in America, you've got four major frontier companies who are pushing this race forward.
00:05:18.520 You've got Google, OpenAI, XAI, and Anthropic.
00:05:23.300 I oftentimes joke that Meta is an upstart, just jogging along behind them.
00:05:30.220 I think it was our colleague Jeffrey Ladish who joked that Mark Zuckerberg should get the Nobel Peace Prize for slowing down the race to superintelligence.
00:05:39.900 What, by buying up all the GPUs?
00:05:41.720 No, just by, I think, if I took the joke correctly, just because they suck.
00:05:45.320 But at any rate, this race is driven by the notion that if we, Google, or we, XAI, or we, Anthropic, don't create AGI first, don't arrive at the finish line first, then the bad guys will, people who are less responsible, less trustworthy.
00:06:03.200 When you look at that, you look at that incentive structure, how do you see pausing AI being possible?
00:06:10.680 How would it even really occur? Would it require some sort of governmental intervention or is it possible to let these guys basically determine our fates on their own?
00:06:21.280 I think it has to require, you know, the people of the world cooperating. And the legal way we do that is through governments. We have infrastructure for governments to come together. Again, we have other treaties on other destructive technologies already run by the U.N., for example, by other intergovernmental organizations.
00:06:40.040 So with pause, you know, it's a very broad ask.
00:06:43.980 It's just the idea of what we want.
00:06:45.440 There are actually so many ways to realize a pause.
00:06:48.440 Also, a pause could happen without us doing it on purpose.
00:06:51.720 If, for instance, there is some problem with the compute supply chain, that would put in
00:06:56.080 place a de facto pause, because I think what a lot of people don't realize is that there's
00:06:59.620 a lot of effort going into making each new model.
00:07:02.580 You know, it seems like it's happening all the time, and it is.
00:07:04.400 But every time... every new model, like Mythos, you know, Claude Mythos that was just sort of released, takes exponentially more compute, more huge resources.
00:07:14.740 That's why these big data centers are being built.
00:07:17.400 And so if something got in the way of the compute supply chain, if something got in the way of the price of energy that this is going to take, there would be lots of ways that that project could be stopped in its tracks.
00:07:28.200 It's a huge project.
00:07:29.280 By many lights, it's the largest human project ever.
00:07:32.740 And it's not just happening by itself. So there's lots of ways to stop this. What I would prefer is that our governments stop it on purpose, because it's dangerous. It's a national security issue. It's an international security issue. But there are lots of ways. What we want is a pause. And I'm fond of saying that a pause is the next right step. It's the next right step for any correct solution.
00:07:55.160 For any solution, we need time where we're not making the problem worse.
00:07:59.040 We're not building more and more, you know, dense neural networks that we don't understand and that we can't control.
00:08:04.900 We need time to catch up on technical safety and research, but mainly on how we're going to govern it, how we're going to make sure who gets to call the shots, whose values, all of these questions.
00:08:14.260 We need time.
00:08:14.920 You know, for the most part, this argument has been about theoretical dangers, ideas like bioweapons or cyber attacks, things like this.
00:08:25.440 But I think that the completion, if not release, of Claude Mythos from Anthropic shows that there are practical concerns that need to be homed in on.
00:08:35.840 You've got, with Mythos, the capability to identify and exploit vulnerabilities in operating systems, browsers, security systems.
00:08:46.960 And even if we don't have a really clear view of the details because they've kept it behind closed doors, we at least, I think, can trust that what they developed is actually dangerous.
00:08:59.060 They say it is dangerous.
00:09:01.140 People called it a sales pitch, and I can see why you might think that, but I don't think that you would have all these other corporations that are involved in it.
00:09:09.780 I don't think they're all conspiring to boost Anthropic's stock value.
00:09:13.480 So on the note of danger, what are some of the dangers that you see with artificial intelligence?
00:09:19.240 I mean, we've talked a few times about this, that, you know, the more kind of mundane dangers of, you know, mass AI psychosis, but also the more extraordinary dangers of artificial general intelligence out of control or artificial superintelligence.
00:09:34.920 So for you, Holly Elmore, what are the major dangers you're concerned about?
00:09:39.460 I mean, for me, truly... when I set the priorities of Pause AI, I meant it.
00:09:44.300 I think the entire spectrum of dangers that are caused by developing AI out of control and unregulated are important.
00:09:51.520 And they're all connected. They're all externalities of this out-of-control development.
00:09:57.160 But I mean, the biggest thing, you know, imaginable to me is that the human race goes extinct.
00:10:02.340 I mean, something that serious, we either we lose control or it empowers a bad actor or a dictator to do something that wipes out our civilization.
00:10:12.720 I really do think that's on the table. And that will sometimes sound very histrionic to people.
00:10:17.580 But, you know, in my old life I used to be an evolutionary biologist, and 99 percent of all species that have ever lived are extinct now.
00:10:25.200 That's the normal thing that happens. And a lot of times you can see in the
00:10:29.840 fossil record what happens when one species gains something like eyes: they become
00:10:34.600 better predators, and they just wipe out a lot of species. There's no natural law
00:10:38.780 that says that we cannot go extinct. And when we destabilize our society, all these things add up
00:10:44.380 too. So, you know, if we destabilize society because we can't trust
00:10:49.720 deepfakes, we can't trust what we see, we don't know what's going on, then we're not going to be as
00:10:53.180 resilient to big threats like bioweapons, like possibly AI having its own ideas and usurping
00:11:00.360 power. So I think the whole range of these threats is real. There's probably many we haven't thought
00:11:07.960 of, and those are going to be the real sleepers. I mean, that's the nature of this danger is it's
00:11:11.500 intelligence. It's the ability to figure out ways to get to a goal. And if it's smarter than us,
00:11:16.140 it's going to find out ways to do what it wants. We might not know why it wants what it wants.
00:11:21.420 and it's going to be very hard for us to just anticipate.
00:11:25.420 We can't deal with it like that.
00:11:27.040 That's why I really think we have to stop now
00:11:29.260 and get really serious about figuring out
00:11:31.580 without advancing capabilities,
00:11:33.840 how can we look forward and know how to make sure
00:11:36.080 that what we're doing is safe?
00:11:37.820 You know, speaking to both laymen and experts,
00:11:40.600 there's some resistance to the notion
00:11:43.380 of artificial intelligence being smart or being intelligent.
00:11:47.520 The direct comparison with human intelligence,
00:11:50.100 I think is a real blocker to seeing AI as a real cognitive system.
00:11:56.240 An example that I think exposes how it is that an AI is, quote unquote, smart would be in gaming.
00:12:04.580 Google's DeepMind has created a number of different AIs, AlphaZero being maybe one of the more impressive,
00:12:11.360 that are able to figure out how to play games, chess, Go.
00:12:15.820 I think StarCraft is another one that DeepMind's AlphaStar has mastered.
00:12:20.100 And it excels at them.
00:12:22.600 And it's not that... with AlphaGo, they trained the system on previous Go games.
00:12:29.020 With AlphaZero, it's learning on its own.
00:12:32.820 And very quickly, it becomes superhuman.
00:12:35.480 It may not be smarter than human beings in reading.
00:12:37.800 It can't even read.
00:12:39.220 It may not be more perceptive than humans.
00:12:41.820 It can't really see.
00:12:43.760 But when it comes to the rules of that game, it exceeds all human capabilities, and it teaches itself.
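
[Editor's aside: the self-play idea described here is the core of AlphaZero, and it is compact enough to sketch in code. The toy below is an illustration only, not DeepMind's actual algorithm: it substitutes the simple game of Nim for Go or StarCraft, because Nim's state space is tiny, and uses plain tabular learning. Given nothing but the rules, the agent plays against itself and learns from the outcomes.

    # Toy self-play learner for Nim: 21 stones, take 1-3 per turn, whoever
    # takes the last stone wins. Illustrative sketch only -- tabular Monte
    # Carlo learning, not AlphaZero, but the same loop of "play yourself,
    # learn from the outcome, repeat" with no human games as input.
    import random
    from collections import defaultdict

    Q = defaultdict(float)        # Q[(stones_left, move)] -> estimated value
    ALPHA, EPSILON = 0.1, 0.2     # learning rate, exploration rate

    def legal_moves(stones):
        return [m for m in (1, 2, 3) if m <= stones]

    def pick_move(stones, greedy=False):
        moves = legal_moves(stones)
        if not greedy and random.random() < EPSILON:
            return random.choice(moves)                  # explore
        return max(moves, key=lambda m: Q[(stones, m)])  # exploit

    def self_play_episode():
        stones, history = 21, []
        while stones > 0:
            move = pick_move(stones)
            history.append((stones, move))
            stones -= move
        reward = 1.0           # the player who took the last stone won
        for state, move in reversed(history):
            Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
            reward = -reward   # flip the sign for the opposing player's ply

    for _ in range(50_000):
        self_play_episode()

    # The greedy policy should now leave the opponent a multiple of 4 stones
    # whenever it can -- the provably optimal strategy, which appears nowhere
    # in the code; the agent discovers it.
    print({s: pick_move(s, greedy=True) for s in range(1, 22)})

The optimal strategy emerges from self-play alone, which is the point being made about AlphaZero becoming superhuman without human examples.]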
00:12:49.400 And I think when you extrapolate that out to things like drone piloting, things like target acquisition, any other system that might be able to recognize patterns at a superhuman level, you then run into the danger of it exceeding human capabilities.
00:13:07.220 So it may not be at that large scale theoretical sort of place of, say, AGI yet, artificial general intelligence or artificial superintelligence.
00:13:17.660 But just with what we have now, you have a lot of potential dangers.
00:13:22.000 And so I guess if I could tease out some of your more theoretical ideas on this before we move back to Earth,
00:13:28.780 How do you see... if you could give us your definition of artificial general
00:13:34.940 intelligence and artificial superintelligence, and if you have a definite timeline, what
00:13:40.280 is it?
00:13:41.400 So artificial general intelligence, that's a very problematic term at this point,
00:13:45.620 because I think the current frontier models, all of them, are greatly exceeding
00:13:51.460 human abilities in many, many ways.
00:13:53.600 You know, there are some abilities they don't have at all,
00:13:56.620 like they're very limited in sensory abilities, things like that. But I would
00:14:03.160 say AGI should mean roughly human-level ability. And it turns out that just that
00:14:09.300 ability is, as you're saying, kind of spiky. In some ways it's not up to human
00:14:13.680 level, but in a lot of ways that are enough to be dangerous, certainly. And I think we should be
00:14:17.700 having risk in mind when we make these kinds of definitions. For the audience, when you
00:14:23.000 say spiky, or jagged, what do you mean? So, some abilities... so none of us can read 10 novels in
00:14:29.600 one second and write up a summary of what happened, but that's something that all of these frontier
00:14:34.320 models can do, even if then later they lie to you about having a timer or something. They don't
00:14:39.360 know things about themselves. So it's very uneven what their abilities are. Hence the
00:14:44.800 strawberry thing. Yeah, how many R's are in strawberry. And I think the
00:14:52.080 AI companies make a big deal of that to kind of make us feel safe. Like, oh, well, it's not quite
00:14:55.660 human. And they'll get us used to the idea... but just because something's not strictly better than
00:15:01.360 humans at every single thing doesn't mean that it doesn't have worryingly better capabilities. Doesn't
00:15:05.680 mean that it can't do your job for almost no money compared to you. So that's where I put AGI.
00:15:12.700 I'm mainly today talking about superintelligence, and the risk of just capabilities, any kind of
00:15:19.400 capability that greatly exceeds human ability. So it could even be a narrow intelligence, or
00:15:24.140 maybe not something as broad as the dreamt-of superintelligence, I'm sorry, general intelligence,
00:15:29.640 that encompasses most human abilities. And I think we'll continue discovering abilities that
00:15:36.560 we haven't really thought of as cognitive abilities, but that are going to be a source of
00:15:40.160 immense power to a superintelligence. So, from the perspective of... it's kind of a
00:15:46.380 philosophical question, like, is it human level, is it the same? My question is, is it a threat?
00:15:51.160 And I think we're definitely at human-level threat as far as cognitive abilities from our
00:15:56.900 current models, and I'm really worried about raising that any higher. And just to bring it
00:16:03.560 down to specifics, by threats do you mean things like the creation of bioweapons, cyber attacks,
00:16:08.500 these sorts of things, weapon systems, control systems? Yes, that sort of thing. Yes, absolutely.
00:16:13.080 I mean, this is... so Claude Mythos claimed that it found a number of zero-day exploits in all of the operating systems.
00:16:22.680 Right. That's pretty serious. That's the kind of thing... just one of those would be a major, you know, hacking campaign.
00:16:28.800 And increasingly there are benchmarks for virology knowledge, and AIs are scoring above
00:16:36.300 human virologists who take these tests, just with knowledge, and kind of a
00:16:42.440 lot of implicit knowledge about how to make things in the lab and stuff. So we have a lot of
00:16:47.060 indication that there could be this danger, if for some reason a bad actor wanted to use an AI that
00:16:53.020 way, or for some reason an AI had the desire to do it itself. So on that note, when you think
00:17:01.460 about the social opposition to this, I mean, you have a huge backlash to AI right now. And as you
00:17:08.080 noted earlier, the public sentiment has absolutely turned against AI for a variety of reasons. And
00:17:15.540 as with any kind of social mood, right, like a sense of discontent or malaise that's spread
00:17:23.000 across the population, you're going to have, to put it in a colloquial way, you're going to have
00:17:28.960 psychopaths that freak out and they turn to violence. They do it for, they justify their
00:17:35.420 violence for all sorts of different reasons. And this has been a problem in America, especially
00:17:40.040 for decades. Here recently, we had two incidents where Sam Altman's home was hit with, first, I
00:17:48.860 think, a Molotov cocktail, and then shot at. And then there was an Indianapolis councilman who
00:17:54.880 was advocating for building data centers, and his home was shot at, with a manifesto referencing
00:18:00.460 the data centers. You hear then, in response, this sort of blame being put on anyone critical
00:18:08.720 of artificial intelligence companies, of anyone who is supporting the building of data centers.
00:18:15.040 All of this backlash is basically being scapegoated for the actions of psychopaths.
00:18:20.520 And you were interviewed recently, I believe it was in Fortune magazine, about that sort
00:18:25.560 of blame coming your way.
00:18:28.260 You have always advocated for nonviolent tactics, correct?
00:18:33.460 I've always been extremely strict, you know, making people sign our volunteer agreement.
00:18:37.640 The number one thing about the code of conduct is nonviolence. Yes, extremely, extremely strict,
00:18:42.600 to the point that at first everybody thinks, like, oh, you know, come on. I don't let people even
00:18:47.220 make jokes. When people make protest signs, I screen the signs, and if somebody
00:18:52.580 has, like, blood dripping... I know they don't mean anything by it, but the person you're
00:18:57.120 speaking to, and then the people who are watching, don't necessarily know that. So we're really,
00:19:00.860 really, really strict. You know, this is about influencing people morally and through democratic
00:19:06.300 means. And you have a real positive element to your message too, correct? I mean, maybe not you
00:19:13.900 personally, but Pause AI US does. I think so. Give us a little light before we get on to the
00:19:19.860 commercial break. Well, I think what we're about is: the world is really good and we want to protect it.
00:19:25.280 And we think that there is a way to protect it. We're not talking about anything that hasn't
00:19:28.640 happened before. We have nuclear nonproliferation treaties. We have New START treaties. We just
00:19:33.960 want this for AI. And then we can enjoy whatever benefits are safe from the AI. That's really the
00:19:39.840 best of all possible worlds. I also enjoy... frankly, it's bracing to be part of the democratic
00:19:46.420 process and discussion. And I find that very fun. Speaking of life, your study as an evolutionary
00:19:53.720 biologist, you studied mushrooms, correct? That was one of your specialties? That was the species
00:19:57.880 I worked on in grad school, yeah. The species... the phylum... kingdom. Did you spend a lot of
00:20:04.260 time out in nature doing this? I did. I did field work where I would collect... actually, I
00:20:08.440 worked on the deadliest mushrooms. We'd collect those and be very careful. Morbid again, but...
00:20:14.880 Let's hear a little bit about that, though, because I'd like to connect it to the way you see
00:20:19.760 artificial intelligence. To some extent, I think I do have... I used to say, well, you know, I
00:20:25.760 started doing Pause AI because it was necessary morally, and that it wasn't connected to my old
00:20:29.940 work. But more and more, I think it really is. I think the way that I see things, definitely
00:20:36.820 machine learning is the same process as natural selection, gradient descent. I think I have an
00:20:42.120 intuition for what works. And I wanted to spend my life just understanding life and doing cool
00:20:47.720 stuff like that. And I thought, okay, I've been called to duty now and I just have to do this
00:20:53.020 instead. But it's really an interesting challenge. I understand people who
00:21:00.280 are really interested in AI, and they really want to be close to it and study it, because it
00:21:03.660 is fascinating. But it's also dangerous, and I kind of think I've got the right remove. And then
00:21:08.420 it's also just a really cool challenge to figure out a new social movement. And now
00:21:12.560 here I am talking to you. I'm having a good time. Yeah, it's interesting that someone like
00:21:17.760 you... I mean, I've never gotten a clear sense of your political leanings, but I
00:21:22.220 would say quite a bit further left than mine, but who knows. We're a bipartisan group. Yeah, I think
00:21:28.380 that this moment is really fascinating and heartening, because this issue is sort of
00:21:35.940 like pollution, or unhealthy foods, or drugs in schools. It's not something that any particular
00:21:42.940 political persuasion is necessarily going to be concerned about. If the soil is poisoned and getting
00:21:48.560 into the crops, it affects everyone. And it affects everyone. That is a beautiful thing about working
00:21:54.140 on Pause AI. We have people really coming together under the same banner, under this single issue.
00:21:58.700 Really, it's everything but working for the AI companies that unites us. I mean, everything else,
00:22:05.080 like being parents. And at this Hill day, we had, you know, parents, engineers, people who had been in
00:22:11.500 the AI industry, teachers, you know, just all kinds of people who can agree: this is dangerous. What
00:22:17.220 are we doing? Why are we making this problem worse before we have any idea of what
00:22:20.900 to do about the dangers? Yeah. I've met a number of people... the AI ecosystem has
00:22:27.040 opened up to me in the last year in ways that were really unexpected. I mean, there are some people
00:22:32.420 maybe that I didn't get along with totally, but for the most part, we're talking about people from
00:22:36.580 very, very different walks of life, all of whom share the same concern. The Future of Life Institute,
00:22:42.160 that's been a really, really important resource to not only connect to different people,
00:22:46.560 but also just to learn more from people who are expert in this about what artificial intelligence systems are, what the real effects are on the human mind.
00:22:57.360 Another example would be, say, Nate Soares and Eliezer Yudkowsky from the Machine Intelligence Research Institute, Jeffrey Ladish and his colleagues at Palisade Research.
00:23:09.080 So it's been really amazing. How do you see that ecosystem functioning now?
00:23:15.440 I mean, of those institutions I just mentioned or any others, what when you look out across the landscape, how do you see it fitting together?
00:23:25.400 Where do you see the real strength in this movement?
00:23:28.220 I think the strength. So what we're aiming at is the public engaging the public.
00:23:32.960 And I think that's going to be our real source of strength.
00:23:36.260 And I think that's where the organizations you mentioned are getting their strength is that they're working with they're working publicly openly.
00:23:43.440 So the old AI safety ecosystem used to be very closed, and it was very in-groupy, and it got to the point where, unfortunately, people see people working at AI companies as closer to them, as safety people, than the public and their interests.
00:24:00.580 And I think our power, those groups are branching out into just being more open and being more involved in the democratic process.
00:24:08.540 And that's going to be the way forward. And it's already growing.
00:24:13.720 But we have the public already. They just need to understand, and be helped a little bit, in this baffling moment.
00:24:19.600 Everybody's baffled by how much is happening, and how fast, within the ecosystem.
00:24:24.180 But this is, I think this is where the power is, is just harnessing that, focusing that, shepherding that.
00:24:31.500 And you come from San Francisco, right?
00:24:33.900 Or you've been in San Francisco for many years, but you've been all over the country talking to different people.
00:24:40.280 Do you see a certain sort of personality type or certain cultural type that's more open to the critiques of this technology?
00:24:47.640 Or is it kind of like my experience?
00:24:50.060 Is it just really across the board?
00:24:51.580 It's everybody.
00:24:52.120 It's everybody. The only people who are closed to it are the people in San Francisco, really.
00:24:56.820 Pretty much everybody thinks, well, we don't need this. I'm perfectly
00:25:02.020 happy with my life. Why would I risk it all? Why would I take a dice roll on, you know, my species
00:25:06.680 going extinct? For what? To be able to write faster? Or to have sort of the
00:25:12.960 temptation that my children never learn anything and cheat their way through college?
00:25:17.180 That's most people's view. And there are so many angles on that, so many ways in
00:25:22.800 which people are dissatisfied and scared about what's happening with AI. Well, you know, another
00:25:27.980 kind of common human need is to pay bills and to store wealth. And if there is one way to store
00:25:35.660 your wealth where you don't have to worry about robots coming and scooping up everything you own,
00:25:40.720 it is gold, especially gold provided by Birch Gold. The dollar's convertibility into gold
00:25:47.100 ended in 1971. Gold was fixed at $35 an ounce. Fast forward to today, and the U.S. dollar has
00:25:53.920 lost over 85% of its purchasing power. 85%. Gold, on the other hand, has increased in value by over
00:26:00.960 12,000%. That's why central banks are buying gold at record levels. That's why major firms like
00:26:09.140 Vanguard and BlackRock hold significant positions in gold. And that's why I encourage you to
00:26:15.480 consider diversifying your savings with physical gold from Birch Gold Group. But it starts with
00:26:22.500 education. Not education by bots, not education by going onto Wikipedia. Education from Phillip
00:26:30.100 Patrick. Birch Gold just announced their Learn and Earn Precious Metals event. This free
00:26:35.480 online event rewards you for learning the basics of investing in precious metals. You must act now.
00:26:40.960 This special event only runs through April 30th.
00:26:43.620 The dollar lost its anchor in 1971.
00:26:45.760 You don't have to lose yours.
00:26:48.220 Text Bannon to the number 989898 to join Birch Gold's Learn and Earn Precious Metals event.
00:26:54.620 Bannon at 989898.
00:27:00.220 The dollar's convertibility into gold ended in 1971.
00:27:05.180 Gold was fixed at $35 an ounce.
00:27:08.600 Well, fast forward to today, and the U.S. dollar has lost over 85% of its purchasing power.
00:27:15.500 Gold, on the other hand, has increased in value by over 12,000%.
00:27:20.140 That's why central banks are buying gold at record levels.
00:27:24.040 That's why major firms like Vanguard and BlackRock hold significant positions in gold.
00:27:29.700 And that's why I encourage you to consider diversifying your savings with physical gold from Birch Gold Group.
00:27:37.760 But it starts with education.
00:27:39.500 Birch Gold just announced their Learn and Earn Precious Metals event.
00:27:44.020 This free online event rewards you for learning the basics of investing in precious metals.
00:27:48.860 Sign up to get a free silver on your next purchase.
00:27:52.500 Get even larger incentives as you go.
00:27:55.140 The more you learn, the more you can earn.
00:27:57.440 But you must act now, as this special event only runs through April 30th.
00:28:03.160 The dollar lost its anchor in 1971.
00:28:06.160 You don't have to lose yours. Text my name, Bannon, B-A-N-N-O-N, to the number 989898 to join Birch Gold's Learn and Earn Precious Metals event by April 30th. Text Bannon, B-A-N-N-O-N, to 989898 and do it today.
00:28:25.340 The American health care system is broken, and for most Americans, nothing changes.
00:28:31.740 There are still delays, denials, high costs, insurance roadblocks.
00:28:35.820 So when I find people doing things differently, I talk about it.
00:28:40.860 All-family pharmacy is not your typical big-chain pharmacy.
00:28:44.920 This is an independent, family-owned pharmacy that gives you access to over 400 medications delivered straight to your door.
00:28:52.840 They've got ivermectin, antibiotics, antivirals, NAD+,
00:28:58.540 even your daily maintenance medications, and so much more.
00:29:02.900 If you already have a prescription, your doctor can send it directly.
00:29:06.720 If you don't, their doctors handle it.
00:29:09.420 As long as there is a medical necessity, they'll take care of you.
00:29:13.580 And I'll tell you this, the feedback from people listening to this show
00:29:16.600 and watching has been incredibly strong.
00:29:19.740 People are using it.
00:29:20.820 it's working for them, and they're sticking with it. That's because it cuts out the delays,
00:29:25.660 the middlemen, and all the usual nonsense. This is about being ready before you need it.
00:29:33.060 Go to allfamilypharmacy.com. That's all one word, allfamilypharmacy.com slash Bannon,
00:29:39.200 and use code Bannon10 to save 10%. The healthcare system is broken. Your pharmacy doesn't have to be.
00:29:47.660 Everyone's focused on how the conflict in the Middle East is raising oil prices, but there's
00:29:53.120 another grim reality to this contention. Oil isn't the only resource being constrained. About one
00:30:00.140 third of global fertilizer trade happens through this region. And with spring planting season on
00:30:06.260 top of us, American farmers are sounding the alarm, with some saying they can't afford to plant their
00:30:11.360 fields. When one piece of the supply chain gets hit this hard, you know what comes next: higher
00:30:17.240 food prices, reduced availability, maybe even panic buying. That's why having an emergency
00:30:23.160 food supply at home makes so much sense. And that's where our friends at MyPatriotSupply come
00:30:29.860 in right now at preparewithbannon.com. That is preparewithbannon.com. We've set up an entire
00:30:36.900 site just for the War Room posse. You go to preparewithbannon.com. That's all one word,
00:30:44.240 preparewithbannon.com. You get a three-month emergency food supply. They'll include a free
00:30:50.400 mega protein upgrade, an incredible $200 bonus you don't want to miss. It's a simple way to
00:30:57.520 protect your family from whatever comes next. Go to preparewithbannon.com. That is preparewithbannon.com
00:31:06.180 to get your emergency food supply today. That's preparewithbannon.com. Do it today. Go check it out.
00:31:14.240 War Room. Here's your host, Stephen K. Bannon.
00:31:21.140 Welcome back, War Room Posse.
00:31:26.180 The American health care system is broken.
00:31:29.860 And for most Americans, nothing changes.
00:31:33.680 There are still delays, denials, high costs, insurance roadblocks.
00:31:38.680 So when I find people doing things differently, I talk about it. All Family Pharmacy is not your
00:31:47.780 typical big-chain pharmacy. This is an independent, family-owned pharmacy that gives you access to
00:31:54.520 over 400 medications delivered straight to your door. Not by drones, but by a smiling human being
00:32:00.740 who just wants to see you well. They've got ivermectin, mebendazole, antibiotics, antivirals,
00:32:08.260 NAD+, even your daily maintenance medications, and a whole lot more. If you already have a
00:32:15.260 prescription, your doctor can send it directly. If you don't, their doctors handle it at All Family
00:32:21.780 Pharmacy. As long as there is a medical necessity, they'll take care of it for you. And I'll tell you
00:32:27.340 this, the feedback from people listening to this show has been extremely strong. People are using
00:32:33.260 it, it's working for them, and they're sticking with it. That's because it cuts out the delays,
00:32:37.500 the middlemen, and all the usual nonsense.
00:32:40.720 This is about being ready before you need it.
00:32:43.800 Go to allfamilypharmacy.com slash Bannon.
00:32:48.080 That's allfamilypharmacy.com slash Bannon.
00:32:52.620 And use code Bannon10, that's B-A-N-N-O-N, numeral one, numeral zero, to save 10%.
00:33:00.820 The healthcare system is broken.
00:33:03.780 Your pharmacy doesn't have to be.
00:33:07.500 And now that I am activated by my meds, I am back with Holly Elmore of Pause AI US.
00:33:19.600 Holly, we left off talking about cognitive surrender to AIs in schools.
00:33:26.820 And you studied in Vanderbilt or at Vanderbilt?
00:33:31.120 Vanderbilt and then Harvard.
00:33:31.860 And then Harvard.
00:33:32.940 Yeah, basically, you one-upped me on both.
00:33:35.400 You know, I went to the University of Tennessee, Knoxville. And then, of course, I was across the river at Boston University, staring out the window over at Harvard, wondering what it must taste like over there.
00:33:44.640 Like, what's it smell like at Harvard? So, you know, the educational system.
00:33:51.840 Do you do you have faith in academia as an institution? Was it a satisfying experience for you?
00:33:58.760 You know, back then, before AI, I did. I've been pretty discouraged by what I've seen with
00:34:06.400 AI in education today. So we have university-level organizers, and I was talking to them the other
00:34:11.300 day, and I told them, I was like, you know, I was scrupulous, never cheated, never did. I would stay up
00:34:16.120 till, you know, 5, 6 a.m. to do essays all the time. But I never had this: you just go to one link, and one
00:34:23.900 click of a button practically, and you have the whole thing done. I mean, how can you stand up to
00:34:28.140 that? And then one of my organizers told me it's actually even worse than that. Like, I have a
00:34:32.840 school-provided laptop that's a Lenovo with Microsoft Office. And when I write in Microsoft
00:34:37.140 Office, it constantly prompts me to have Copilot rewrite it. You know, it's like the software,
00:34:43.480 the computers, everybody's telling you to cheat. Like, how can you keep your integrity
00:34:48.920 in that environment, when it's like everybody's telling you to do it? And then on
00:34:52.420 top of that, you know, OpenAI especially is making all these deals with campuses. So my mom teaches
00:34:56.960 at a small but reputable Christian college, and they have taken on... they're a ChatGPT college.
00:35:03.060 And my mom, as a composition teacher, she's like, this is unacceptable, we can't be operating
00:35:08.700 like this. And then, you know what they added? They added ChatGPT college campus? Yeah. It's what...
00:35:13.600 but what's even worse, they have ChatGPT Shepherd, for clergy, to tell the clergy how to do their job.
00:35:21.080 Like ChristGPT. It's called ChatGPT Shepherd, yeah. Which I thought was bad enough. Yeah.
00:35:26.940 So, like, that's just, it's like with everything with the AI industry.
00:35:31.860 They're able to flood us so fast.
00:35:34.120 They go so much faster than our legal system.
00:35:35.860 They go so much faster than our news cycle.
00:35:37.480 Like, people can't keep up.
00:35:39.200 And I think that people... I think our hearts are in the right place, and we would correct.
00:35:44.200 And even our institutions like academia would want to correct.
00:35:46.880 But are they going to be able to, with how fast they're being inundated with this, and by how quickly, you know,
00:35:52.940 a whole generation grows up? First they went through COVID and they missed their high school, and now they're missing college, basically, because they cheated through everything. Which is leaving us with an uneducated generation.
00:36:05.800 I think it's a pretty serious concern.
00:36:07.900 Absolutely.
00:36:08.920 It's the kind of concern I'm much more affected by. I do think seriously about AGI and superintelligence, and I don't discount it at all.
00:36:19.580 Although I'm pretty skeptical of its imminence, you know, that it's an immediate threat.
00:36:24.900 I hope you're right.
00:36:25.960 Yeah.
00:36:26.440 Well, you know, as I oftentimes say, if I'm wrong about extinction, you can call me out.
00:36:30.980 I'll admit it.
00:36:31.620 That would be...
00:36:32.520 If we go extinct, then I'll be the first to admit I was wrong.
00:36:35.420 If I'm wrong, I would jump for joy.
00:36:36.840 I think we got to act on the worst case scenario and make sure that doesn't happen.
00:36:41.560 But man, I have no pride in the idea that like, oh, it's all going to end and I called it right.
00:36:45.980 Sure. Well, you know, I suppose in the afterlife, we'll sort it out, right? There won't be anyone around to talk about it.
00:36:54.300 You know, but the more practical concern: let's say we keep going as a species, decades, centuries, millennia, millions of years.
00:37:03.320 This period will, as you say, see a completely stunted generation, a generation that was demoralized.
00:37:10.680 They were told that the A.I. would do all of their jobs.
00:37:13.500 Any vocation they chose would ultimately be done by a machine.
00:37:17.520 The best case scenario would be that they were an A.I. babysitter that got to command their A.I.s around and get super rich off of it.
00:37:24.160 But by and large, the messaging to them is adapt or die.
00:37:28.180 Use the A.I.
00:37:29.180 And as you say, you know, these are young people, most of whom are extraordinarily hungry for knowledge.
00:37:36.080 Right. That age, you want to learn.
00:37:38.620 You want to expand.
00:37:39.420 You want to grow.
00:37:40.080 You want to socialize, all these things. And they're having screens shoved in their faces. It would be
00:37:45.540 like if you could go down to the school nurse and get OxyContin, you know, on demand. And
00:37:51.520 most kids, you would hope, wouldn't do it, but many would, and a growing number. It's basically, okay,
00:37:57.420 yeah, the nurse will give it to you. And this is a ChatGPT campus, and, oh, you're supposed to use
00:38:02.640 it to help, you know, your development. Like, that's an honor system. It does feel like they're being
00:38:07.940 told to use it and become dependent. Yeah. It's spotty. So there are a lot of professors
00:38:16.800 who are pushing back on this, in their classrooms, to the administrations. There are a few schools,
00:38:22.200 Brown University being one of them, where overall the general
00:38:28.080 attitude of the administration is against all of this. A lot of professors are going back to the old
00:38:32.940 blue books with the pencils, which, you know, I'm old enough to remember when blue books were
00:38:36.960 a thing. I used a blue book, yeah. I'm old enough to remember when there were no laptops in
00:38:42.480 classrooms. When I taught briefly, I refused to have any laptops in my classroom and the kids
00:38:48.160 adapted just fine. There really wasn't a problem. But increasingly I hear from professors that
00:38:53.640 due to COVID and the lockdowns and the lost education and also just the kind of general
00:39:00.420 digital culture, the kids coming in aren't really prepared for college. I mean, some are,
00:39:05.180 but most really aren't. And it is nightmarish, you know, the idea that human beings survive,
00:39:11.560 artificial intelligence doesn't create radical abundance. And we're stuck with this global
00:39:16.160 village of the damned in which all the children are, you know, digitized and have offloaded
00:39:21.660 their cognition to the machine. In the case of what Shepard GPT offloaded their spirituality
00:39:27.080 to the machine. It's terrifying, more terrifying to me than the idea of going extinct. Going
00:39:33.740 extinct would be kind of a relief in comparison to that. I mean, I'd beg to differ. But I do think
00:39:39.520 they're all very serious. And I think really all these threats that are caused by these
00:39:44.040 unchecked externalities of development are threats to our way of life. And then one is, like, a final
00:39:49.320 threat to our way of life. But some are more survivable, some are not. And potentially...
00:39:55.220 we also don't know. We shouldn't mess around with the fabric of our society, because we just
00:39:59.180 don't know what's the important load-bearing part. Yeah. And I feel very demoralized,
00:40:05.920 especially with... it goes beyond just getting the grade for kids in school. It's like
00:40:11.060 they're becoming less confident in their ability to think for themselves. They don't like to
00:40:16.920 just represent their own thoughts. This is one of the things that... I always get complimented
00:40:21.340 on this, I think, for being outspoken. But more and more, people are like, oh, I could never do that
00:40:26.180 without having AI check it. Or, like, wow, you could never just, what,
00:40:30.880 think your own thoughts, have your own ideas? Or... one thing that my university
00:40:34.560 organizers were complaining about was that even to just answer trivial questions, or
00:40:39.800 even questions about their own preferences, people would be like, ask chat, ask chat.
00:40:43.720 It's like an addiction. They can't even stand the uncertainty of
00:40:47.800 working it out themselves. You know, it's hard enough to stay physically fit in today's world.
00:40:51.820 But imagine you have this with thoughts, like you encounter a little difficulty,
00:40:55.200 And there's this answer that's very soothing and easy and quick and it feels valid and like from a neutral source right at your fingertips.
00:41:02.780 Think of the potential for manipulation if there's any kind of, you know, if, you know, some person in control, some industry in control wanted people to think a certain thing, they would be able to do it.
00:41:14.860 Yeah. Increasingly, they are. And, you know, there was a problem from the television forward, you could say from the telegraph forward.
00:41:21.280 but this is a whole other level. You know, when I was in grad school, my main area of study was
00:41:29.320 evolutionary and cognitive science as applied to religion. But my real interest, and my master's
00:41:36.300 thesis was based on this, was altruism. The question: if Darwinian evolution is so harsh, why
00:41:43.360 would human beings be so kind? Why would the ants be so helpful to one another, the termites,
00:41:49.260 the bees, all of this? And recently I've been accused of being an effective altruist. Now, I'm
00:41:57.540 kind of an altruist. I'm mostly an ineffective altruist. I'm not usually nice to many people,
00:42:02.800 and not for very long, but I'm definitely not an effective altruist. Now, you have a lot of
00:42:08.920 experience in and around this group. Can you give the War Room Posse some idea of who the effective
00:42:15.860 altruists are, what their goals and tactics are. So I, disclaimer, I used to be kind of a big
00:42:24.680 personality in effective altruism. And I first got introduced to it when I started grad school
00:42:29.120 at Harvard. They were kind of at a lot of elite schools. I ended up organizing Harvard EA for six
00:42:34.520 years. Back then, the idea was mainly like, yes, so there's the possibility of helping others. And
00:42:41.400 the big insight was, like, people who are, you know, wealthy in the West, like, they can do a
00:42:47.560 lot more for people elsewhere. Or we can just even rank our causes in terms of, like, what's the
00:42:52.320 actual impact instead of, like, what cause you like on vibes. And that could take the same
00:42:57.220 amount of money, the same amount of, like, our personal power to do anything and, like, have a
00:43:00.540 much bigger positive impact for people in the world. I still believe this is great. But always
00:43:06.000 lurking in the back, there was also this AI safety cause. Which, I remember... seriously, I
00:43:11.500 was like, I was already vegetarian, I was already into giving to the poor. I was
00:43:16.480 really excited for a way for that to go further. And the only thing I didn't like about
00:43:21.000 EA was AI safety, because I just... I couldn't put my finger on it. Because it's not that
00:43:25.400 I thought the arguments were wrong. And that's the way a lot of people feel about hearing any
00:43:28.540 argument about: a computer can become powerful and out of control, and it could be a problem, right?
00:43:32.860 But I realized over time that it was sort of the culture I didn't like, and that culture continues to be very strong.
00:43:41.920 It is very – I know your listeners will be familiar with kind of transhumanist ideas.
00:43:46.400 Sure.
00:43:46.640 It's very in that space, wanting to –
00:43:48.660 It's kind of a descendant, an intellectual descendant from transhumanism, would you say?
00:43:53.100 Yeah.
00:43:53.440 Yeah.
00:43:53.760 Well, and so a lot of the reason that the core group that ever got this to be a big idea –
00:43:59.040 Not everybody who's into it today, of course, really knows why it became a topic.
00:44:03.120 But the interest was to use AI to be immortal and to reach the singularity to become immortal.
00:44:12.280 And then, of course, everything else would also be fixed.
00:44:15.580 And so within the mindset of effective altruism, this is kind of like an argument for everything.
00:44:21.660 Like if the AI would do everything the best, you like have to try to get the AI and apply it to whatever you're trying to do.
00:44:29.520 Or the whole the project that the version of AI safety that they worked on is called alignment.
00:44:35.760 And it was about in various different flavors of this, but like finding the true values that the AI should have and then like make and then letting it become more and more powerful, but guided by those values.
00:44:47.400 So it'll just do the right thing by humanity and ideally provide like a paradise, you know, where people get to do whatever they want.
00:44:55.700 A kinder, gentler digital god of sorts.
00:44:59.280 Sort of make a digital god that would, like, nanny paradise.
00:45:02.880 And... I always thought of this as... I didn't think of it as a scientific idea.
00:45:10.640 I didn't think of it. There was always something that kind of repulsed me about it.
00:45:14.660 But as the capabilities of AI grew, I thought, like, oh, these people are definitely on to something about the power of it, for sure.
00:45:24.400 And until ChatGPT came out, I just... I really didn't think about it any harder,
00:45:31.740 because it seemed like it really could be hundreds of years off before we're dealing with artificial intelligence anything close to human level.
00:45:38.740 And when I saw ChatGPT talk like a human... I knew computers could not do that before, based
00:45:46.040 on my, you know, knowledge of linguistics and stuff. Most linguists argued we would never see
00:45:50.820 it in our lifetime. Yes. And so David Deutsch, the futurist David Deutsch, he argued this famously
00:45:56.860 in The Beginning of Infinity, and most of that book is quite accurate, but not on the LLMs.
00:46:02.900 And, I mean, to get a little nerdy, the thing that this kind of AI is good at is
00:46:08.020 what we thought were human skills. So it's, like, associative, creative writing. We thought
00:46:14.080 that artificial intelligence would be more, like, mathematical, that that would be its ability. But
00:46:18.520 actually, as we were talking about, that's kind of where it makes mistakes. It's not
00:46:22.580 precise. It's kind of like the creative parts of our brains. And the one thing that was
00:46:30.520 even scarier about ChatGPT was that it was created just by a process of searching. You could
00:46:38.100 describe this whole thing as a process of: the more compute resources you have, the
00:46:43.280 more combinations of parameters you can try. They're called model weights, kind of similar to neurons
00:46:50.960 and synapses. But you're searching design space for brains, and the
00:46:55.980 more compute you have, the more you can search, and the better you can search it and find those
00:46:59.700 really powerful options. And this process was done without learning
00:47:06.880 anything new or special about how the brain works. Like, we don't know, you know, how it's doing it.
00:47:11.740 It's just a process for finding a way to do it, a way that's described in these model weights. And we
00:47:16.940 don't know what it's doing. And so once that happened, it seemed pretty obvious that if you put
00:47:23.460 more compute on it, you would get an even bigger, more powerful model. And there was
00:47:30.160 nothing standing in the way, you know; the only thing standing in the way is acquiring these compute resources.
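
[Editor's aside: to make "searching design space for brains" concrete, here is a minimal illustrative sketch, not any lab's training code. Gradient descent repeatedly nudges a small set of model weights until a tiny network learns the XOR function; frontier models follow the same basic recipe, with billions of weights and vastly more compute, which is why the compute supply chain is the bottleneck described above.

    # Toy "search over model weights": gradient descent teaching a two-layer
    # network XOR, in plain NumPy. Illustrative only; the point is that the
    # recipe is search, not hand-coded understanding.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # The "model weights" -- the parameters the search adjusts.
    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for step in range(5000):           # more compute = more search steps
        h = sigmoid(X @ W1 + b1)       # forward pass
        out = sigmoid(h @ W2 + b2)
        # Backpropagate the squared-error gradient through both layers.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * (h.T @ d_out)
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h)
        b1 -= lr * d_h.sum(axis=0)

    print(np.round(out, 2).ravel())    # approaches [0, 1, 1, 0]

Nothing in the loop states what XOR means; the weights are simply pushed downhill on the error until the behavior appears, which is why one can read the weights without knowing what they are doing.]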
00:47:34.880 Okay. So we're talking about effective altruism. We're talking about these
00:47:41.300 massive projects, huge data centers full of GPUs screaming, developing these kinds of virtual
00:47:49.100 brains in the mathematical space of possibilities. It all reminds me of Anthropic. Anthropic, by
00:47:57.780 and large, as is my understanding, and this is what the word on the street is, is largely
00:48:02.400 staffed at the upper levels with people who are very friendly to effective altruism. Yes. Perhaps
00:48:07.640 even effective altruists themselves. Definitely. And they are in that strange sort of mode
00:48:14.480 that a lot of these tech companies and the CEOs are in,
00:48:17.680 this technology, they say, could kill everyone,
00:48:21.480 but we have to build it.
00:48:22.660 Right.
00:48:23.000 Or else someone else will build one to kill everyone.
00:48:26.280 A lot of people have mixed emotions about Anthropic,
00:48:29.320 but you've been very clear on your position.
00:48:31.380 We don't have a whole lot of time left,
00:48:32.800 but if you could, I'd love to just hear your side of the story
00:48:36.580 with the Anthropic question.
00:48:38.920 I'm just going to preface this by saying this is all my opinion.
00:48:42.580 Don't want to hear from Anthropic's lawyers.
00:48:44.480 But my experience being there as this company was formed is it was definitely founded by EAs with EA values.
00:48:53.160 It was founded by Effective Altruists.
00:48:55.560 It was a break off from OpenAI because of losing confidence in Sam Altman's leadership and commitment to those values,
00:49:04.120 which is something Sam Altman did talk up early on because EAs were the people with the technical ability to do this.
00:49:09.580 So it's always been that from the beginning, despite what they told The Atlantic,
00:49:15.160 which was a lie, about EA involvement. So, just my personal opinion: Anthropic's the
00:49:21.320 one I hate the most. I think it's Anthropic, final boss. Anthropic, final boss, meaning?
00:49:28.420 Meaning I think that they're the one that's going to be left. The others are going to do something... I mean,
00:49:31.780 OpenAI has kind of shown its hand, especially with Sam Altman's duplicity. Anthropic is really
00:49:38.820 successfully cultivating this group of loyalists and serving their interests. And I try to break
00:49:46.440 their ranks. I call out Anthropic employees all the time for how they're betraying what I know
00:49:50.220 were the values they went into it with. But they don't break ranks. But they're
00:49:57.440 doing the same thing as all of the other AI companies. And now they're at
00:50:01.460 the edge, and they went from saying they weren't going to push the frontier, they were just going
00:50:05.080 to study this to help with safety. I remember. They broke that promise. Now they're at the
00:50:09.440 frontier and they're talking about how can we release this model that knows all these zero
00:50:16.320 day exploits for all our operating systems. The ultimate cyber weapon.
00:50:20.600 But they're better at creating this beneficent image and kind of playing on letting people
00:50:25.220 believe like, oh, you don't have to do anything. We'll handle it. The world's going to be great.
00:50:29.360 It's no problem. But we do have to handle it. We can't be lulled into a false sense of security.
00:50:33.760 We can't think, oh, well, Anthropic would basically do what I want.
00:50:36.940 It has to be we, the people, make known what we want.
00:50:40.840 And we have democratic control over what happens with this AI.
00:50:43.700 Well, on that note, Holly, if you would, tell the audience where they can find resources on your mission, where they can find information about Paws AI, and give them some sense of where you're going from here.
00:50:57.100 How can they follow you?
00:50:57.880 Okay, so you can go to PauseAIUS.org, and our website will branch out to everything else.
00:51:03.480 You can find out how to join a local group. You can donate there.
00:51:07.400 Where we're going: we're trying to really scale up on helping our constituents, the constituents who identify with the pause position, reach their representatives.
00:51:17.840 And we really want to help people get through all of the confusion.
00:51:23.440 You know, it feels like a 12-hour news cycle on AI. Help them focus, make their voices amplified, make their voices unified.
So you can find out more on
PauseAIUS.org,
and
I really hope to see you all there.
We really need you.
Absolutely. This is a huge fight,
and you guys are fighting with all your might.
I really appreciate it.
Where can they find you personally on Twitter,
if they want to hear your scathing remarks
about Dario Amodei and Anthropic?
Where can they find you?
Well, there's Pause AI US social media,
and then there's my personal social media,
at Ilex underscore Ulmus (@ilex_ulmus).
00:51:59.100 You can just search Holly Elmore on Twitter, where I say more of my personal beliefs about the situation.
00:52:05.260 And, yes, they are spicy.
00:52:06.640 Well, Holly, you're a fighter to the end.
00:52:08.480 I really appreciate you coming on.
00:52:09.720 Thank you so much.
00:52:10.520 Thank you, Joe.
00:52:11.180 Thanks for the opportunity.
00:52:12.280 Appreciate it.
00:52:14.320 If you're 65 or already on Medicare, listen up, folks, and grab a pen, maybe even a number two pencil.
00:52:23.620 Call 845-WAR-ROOM.
00:52:26.120 That's 845-WAR-ROOM.
00:52:28.020 Call it right now.
00:52:28.860 I'm serious. Call it. Now, here's why. The insurance companies and their lackeys in the
00:52:34.300 Washington swamp have built a Medicare system designed to confuse you and rip you off. Rising
00:52:41.140 premiums, denied claims, fine print nobody but a lobbyist understands. Millions of American seniors
00:52:47.520 are paying too much and getting too little. And worst of all, most don't even know it. Hey, that
00:52:53.460 could be you. That's why if you're already on Medicare or will be soon, you need to talk to
00:52:59.720 our friends at Chapter. They have a team of advisors trained to serve American seniors,
00:53:04.720 not the insurance companies. In under 20 minutes, they can find you the best plan for your needs
00:53:10.780 at the lowest cost. Why? They're a data company. They have all the data on every plan. It's totally
00:53:17.680 free. There's no pressure, no BS, just straightforward, honest help from fellow patriots. So don't wait.
00:53:24.740 Call 845-WAR-ROOM right now. That's 845-WAR-ROOM. Tell them Bannon sent you. Now listen, in the first
00:53:32.040 couple of days of the launch of this company with the War Room posse, posse members saved tens and up
00:53:38.000 to hundreds of thousands of dollars collectively in these fees. Go check it out today. That's Chapter.
00:53:43.620 Call 845-WAR-ROOM. Do it today.