Bannon's War Room


WarRoom Battleground EP 909: Deep Dive With SAM HAMMOND: Confronting AI and Leviathan


Episode Stats

Misogynist Sentences

2

Hate Speech Sentences

9


Summary

In this episode of War Room, guest host Joe Allen talks to Sam Hammond, chief economist at the Foundation for American Innovation (FAI) and author of AI and Leviathan, about the coming technological singularity and how the state and society will respond to it.


Transcript

00:00:00.000 This is the primal scream of a dying regime.
00:00:07.000 Pray for our enemies, because we're going medieval on these people.
00:00:12.000 Here's one time I got a free shot at all these networks lying about the people.
00:00:17.000 The people have had a belly full of it.
00:00:19.000 I know you don't like hearing that.
00:00:20.000 I know you try to do everything in the world to stop that, but you're not going to stop it.
00:00:23.000 It's going to happen.
00:00:24.000 And where do people like that go to share the big lie?
00:00:27.000 MAGA Media.
00:00:29.000 I wish in my soul.
00:00:31.000 I wish that any of these people had a conscience.
00:00:34.000 Ask yourself, what is my task and what is my purpose?
00:00:38.000 If that answer is to save my country, this country will be saved.
00:00:44.000 War Room.
00:00:45.000 Here's your host, Stephen K. Bannon.
00:00:54.000 Good evening.
00:00:55.000 I'm Joe Allen with War Room Battleground.
00:00:58.000 If you followed my travels over the last few months, you know that I have crisscrossed the country multiple times, even at one point ending up in Switzerland interviewing robots who, as you might imagine, turned out to be racist.
00:01:11.000 I mainly tried to keep my company confined to transhumanists, Luddites, the occasional normie, and, of course, the War Room Posse.
00:01:24.000 In recent weeks, I found myself in some very strange situations.
00:01:29.000 One of the more unique was a Latin mass in a city I won't disclose, in which all the women, of course, had their heads covered.
00:01:39.000 There were children everywhere, beautiful iconography and the strict social order you could see in the health of the family.
00:01:52.000 Among the people I met, you could see how that social order produces a certain type of human being, a human being who is devoted not only to family but to God, a human being who is infinitely capable, I believe, of responding to the future in a way that does not destroy the past.
00:02:12.000 Now, after the mass, I found myself sitting outside of a coffee shop.
00:02:17.000 I was speaking to some of the young people. Some of them had their spouses with them.
00:02:21.000 Most all of them will soon have children, and they talked about what the future would look like for their children.
00:02:30.000 How would they protect their children from the predations of tech corporations in a future in which tech corporations basically rule the world?
00:02:40.000 What was really interesting about that was sitting right across from us was an anarchist.
00:02:46.000 He was sitting and working on a pair of boots. He's a cobbler by trade.
00:02:53.000 The anarchist was diametrically opposed to the Catholic way of thinking.
00:02:58.000 He clearly was not religious in any dogmatic way, very much obsessed with esoteric traditions, otherwise known as the occult.
00:03:08.000 Certainly, he was not into the kinds of rigid social order that a Catholic or any other kind of deeply religious community would produce.
00:03:17.000 And yet, he agreed completely with the assessment that tech corporations are a primary threat to human life.
00:03:25.000 Now, a few weeks later, Thanksgiving, I found myself in another strange kind of juxtaposition of social schemes.
00:03:35.000 In the afternoon, I joined a group of families who were deeply Christian.
00:03:41.000 We talked about a lot of things.
00:03:43.000 In regard to technology, they are as suspicious as I am.
00:03:47.000 In regard to religion, let's just say that if I'm going to heaven, it'll be after them.
00:03:53.000 But you could see in these families that same deep devotion in the way in which it expressed itself outwardly in their marriages, in their children, in their homes, in the way they order their lives.
00:04:06.000 Later on, I found myself in another and perhaps more exciting milieu.
00:04:12.000 I was invited by one Sam Hammond to a Thanksgiving dinner comprised mainly of what we would now call tech accelerationists, although in the old days you would call them transhumanists.
00:04:24.000 The turkey was delicious. The stuffing was as well. The wine, to the extent I sipped it, was not bad.
00:04:32.000 And I'm not sure, but there were bottles of Bryan Johnson's snake oil lining the counter, and it may be that I myself imbibed some of this bizarre immortalist elixir.
00:04:46.000 Now, in the same spirit of consilience that anyone would approach a Thanksgiving dinner, we all discussed our differences civilly.
00:04:54.000 And in the desire to continue that conversation, I would like to welcome the War Room audience to encounter the mind of the futurist and chief economist at the Foundation for American Innovation, Sam Hammond.
00:05:10.000 Sam, thank you very much for joining me.
00:05:12.000 Thanks, Joe. You're looking younger.
00:05:14.000 It's got to be the Bryan Johnson snake oil.
00:05:17.000 So, Sam, as we've discussed, your imagination is monstrous.
00:05:23.000 You basically have a brain full of Shoggoths, their tentacles creeping out of your eyes, creeping out of your ears, and in the bizarre words that you speak, casting the future in terms of a singularity.
00:05:37.000 Your book or pamphlet, AI and Leviathan, is absolutely brilliant, however nightmarish this future that you paint may be.
00:05:46.000 If you would, just give the War Room audience a sense of what you're trying to communicate with AI and Leviathan.
00:05:55.000 What is this effort in futurism?
00:05:57.000 Yeah, so the inspiration comes from an essay by Tyler Cowen called The Libertarian Paradox, where he noted that, you know, libertarians have been fighting for small government, limited government for years.
00:06:08.000 We also value markets, creative destruction, the wealth-producing propensity of capitalism.
00:06:13.000 But maybe these things were a package deal.
00:06:16.000 You know, we got the welfare state, we got the administrative state, the managerial state as a byproduct of capitalism and the success of the Industrial Revolution.
00:06:23.000 And so that, you know, sort of shook the foundation of my worldview.
00:06:27.000 I was raised and grew up pretty libertarian.
00:06:30.000 But coming to understand that these sort of enemies that we fight, like the Leviathan, the state, got stronger as a byproduct, as a bundled package deal with prosperity.
00:06:42.000 So, looking ahead, are there other package deals with AI and the AI transformation?
00:06:47.000 And so the way I sort of see it is that we are sort of sitting on a knife edge.
00:06:51.000 You know, the powers of AI are incredible for everything from, you know, healthcare, biomedicine, education.
00:06:58.000 But they also are incredibly powerful tools for surveillance, for censorship, for social control.
00:07:03.000 And there's a kind of race dynamic going on between the state and the rest of society.
00:07:09.000 Do we end up in a digital panopticon à la China, where, you know, there's a group in Shanghai that runs, you know, the surveillance systems?
00:07:17.000 And they have a big situation room.
00:07:18.000 They brag that they can identify anybody with their Sharp Eyes system.
00:07:21.000 I think the name comes from a Mao quote.
00:07:25.000 Or will we sort of start to fragment as the capabilities that are now today only possessed by the CIA or Mossad or by state agencies become sort of democratized and we can all rebundle our organizations around smaller communities.
00:07:39.000 And I sort of want to walk a middle ground where we can have our cake and eat it too.
00:07:44.000 And I think it's going to be a very narrow path.
00:07:47.000 And, you know, one of my roles or purposes is to try to communicate that this is a package deal.
00:07:53.000 That you have these sort of naive techno-optimists that think we're going to just plow forward into the brave new world and everything is going to be hunky-dory.
00:08:02.000 You have people who think we're just straight up doomed.
00:08:05.000 I think we're somewhere in between where we're going to have to make very hard trade-offs.
00:08:09.000 And at the very least we should be communicating those trade-offs to the public.
00:08:12.000 You know, one thing I really appreciate about your perspective, Sam, is that you don't pull punches and you've never been shy about stating things that might be shocking to normal people.
00:08:22.000 Now, I'm not a normal person, so I'm not shocked.
00:08:25.000 I'm more intrigued than anything.
00:08:27.000 The future is wide open.
00:08:29.000 But your vision of the future is deeply informed by your education as an economist, your political education.
00:08:36.000 You had described kind of a progression from a naive anarchist to one who is much more open to the possibilities of the uses of the state and maybe even the inevitability of a large degree of centralization.
00:08:51.000 And the pairing that I read here in AI and Leviathan, the pairing I read here is particularly interesting because on the technological level you basically take as axiomatic the claims of Ray Kurzweil and other futurists that we are indeed heading towards a technological singularity.
00:09:12.000 And you're looking at how the state will respond and, you know, spoiler alert, but you say that most likely it will stabilize as Chinese-style
00:09:23.000 one-world or one-state control, a sort of centralized leader using AI to control the populace.
00:09:32.000 On the other side of that, you see something more akin to liberal democracy as it evolves into a high-tech society where corporations basically take up that role, and, in between, the anarchic states.
00:09:44.000 My first question regarding the technological singularity, why do you believe that in fact these technologies will keep increasing at an exponential pace?
00:09:56.000 And do you really believe that by say 2045, we'll see something like Ray Kurzweil's view where we have AIs that are millions of times smarter than human beings, most human beings locked into those systems with trodes, being regularly genetically updated to keep up with the machine?
00:10:14.000 Is that really the generalized future you see us going towards and why?
00:10:18.000 Yeah, I don't know if I can speak to the specifics, right? So predicting the future is hard, right?
00:10:25.000 There are some things that are more easily predicted, right? So, maybe setting aside the bogus climate science,
00:10:32.000 it's easier in principle to predict, you know, one degree of warming over a century than what the weather will be in a month, right?
00:10:38.000 Because the weather in a month is a random dynamic system, and it's similar with societies.
00:10:42.000 So, you know, I can say with confidence that the world of 2045 will be at least as different seeming to us as the world of 1950 was to the world of, say, 1650, just radical transformation.
00:10:54.000 And what that looks like in practice will in part be up to us.
00:10:58.000 But, yes, I mean, you know, one thing I think with some of the more naive techno-optimists, where they sort of get their overconfidence, is the fact that we've lived through a period of rough relative stagnation over the last 40, 50 years as we've shifted from building actually new technologies to offshoring and globalization and these sort of substitutes for true innovation.
00:11:21.000 And as we exit that era of stagnation, as history restarts, we have to be prepared for a very tumultuous transition.
00:11:30.000 You know, I'm someone who thinks that the mind is essentially computation.
00:11:36.000 And I think that is a prior that maybe makes me more open to the idea that we're going to recreate mind in a digital substrate.
00:11:44.000 Right. And I think, you know, I don't just take this as axiomatic. I think we're partly seeing it play out. Right.
00:11:49.000 We're seeing similarities between these vision models and their internals and the way our visual cortex works.
00:11:54.000 We're learning a little bit about how human language works by studying these language models.
00:11:59.000 And so it's less that we're building this totally alien technology, although it is alien in some respects;
00:12:05.000 we're really building a simulation or emulation of ourselves, but in a format that is potentially unbounded. Right.
00:12:14.000 When you say the mind is computation, are you using computation as a metaphor for what happens inside the human brain or, as I would say, in the human soul?
00:12:24.000 Or is it something more fundamental, that the human mind and the computer truly do share the same sorts of patterns and the same sorts of processes that lead to what we call mind?
00:12:38.000 Yeah. And I think to say the mind is computational is not to say that it's a classical computer. Those are two different things. Right.
00:12:45.000 So in some sense, the computational aspect of our mind gives credence to this notion of an immaterial soul. Right.
00:12:53.000 Because what is software? Software is not the bits. It's not the transistors. Software is this immaterial pattern that sits on top of those things.
00:13:02.000 And the core insight of Alan Turing and the other founders of computer science was that these things are independent of the substrate.
00:13:08.000 You can build a computer out of pneumatic tubes. You can build a computer out of hydraulic locks and gates.
00:13:14.000 And it's similar with the mind: we live in a material reality constructed by our brain that is running on a particular kind of wet hardware.
00:13:23.000 But there's nothing in principle that prevents us from putting that into a different form of hardware.
00:13:27.000 And as we've seen, the progress of AI over the last, say, 15 years has come not so much from deep insights into, you know, how proteins fold. Right.
00:13:38.000 You know, protein folding was solved essentially because the computers caught up, and that problem existed within the envelope of the kinds of supercomputers we had to model protein folding.
00:13:50.000 And as computation continues to double or quadruple in its performance and as our algorithms get more efficient, the human mind will fall within that envelope, too.
00:14:01.000 And then, you know, what Kurzweil foresees by 2045 is that the biggest supercomputers will have more computation than not just a single brain, but all the brains put together.
00:14:10.000 And we don't really know, you know, what transpires or what comes out of that.
00:14:15.000 Will it be something that we wield as some single unified entity?
00:14:19.000 Will we distribute that compute in a way that gives everyone a stake or will it be monopolized?
00:14:24.000 These are the sort of questions ahead of us.
00:14:28.000 And I think the biggest risks come from folks who think that this is just a normal technology and that the world of 2045 will just look like the world of today, only more so.
00:14:38.000 Yeah, the idea that this is a normal technology, I think, has been blown away by,
00:14:42.000 what is it, over half of young people saying that they've used chatbots as companions or view them as companions.
00:14:50.000 They talk to them as if they were actual people.
00:14:53.000 I think that goes well beyond anything we saw with video gaming or even social media.
00:14:58.000 And we're only three years into the ChatGPT era.
00:15:01.000 On the other hand, I am praying for either a solar flare or at least an S curve.
00:15:07.000 But setting that aside, the political approach that you have here, I think, is really interesting.
00:15:14.000 And it's something that gets lost on a lot of people, especially on the right without naming names.
00:15:20.000 There are a lot of people that associate transhumanism with globalism, with leftism.
00:15:26.000 They associate techno optimism and accelerationism with the same.
00:15:30.000 If they haven't been snapped out of that hypnosis by the Trump administration yet, I think they really should.
00:15:36.000 But, you know, we've covered accelerationism and its newest form, effective accelerationism.
00:15:43.000 And I think that whether you would take that label on, you certainly sympathize with that camp.
00:15:48.000 Would you say that that's correct?
00:15:50.000 Yeah, I would say I'm an accelerationist everywhere but, you know, superintelligence.
00:15:55.000 I think it's appropriate, if we are going to build this thing, to go into it with a degree of humility and trepidation.
00:16:02.000 The real point I mean to get at, though, is that you, by and large, would be categorized as on the right, at least everywhere
00:16:12.000 it counts except for technology, I would say.
00:16:14.000 And maybe you would say that techno futurism is a form of right wing expression.
00:16:21.000 But I think it's on a practical level really interesting how your interests align with someone like Steve Bannon in regard to something like chip exports to China.
00:16:31.000 You're very much a hawk on that, correct?
00:16:34.000 You wrote a fantastic article for, I believe, The American Conservative a few weeks back.
00:16:38.000 If you believe acceleration and technological progress are, in fact, the ways forward, why would you want to preference America, or privilege America, and shut out the Chinese on this?
00:16:52.000 Don't they need more advanced models, too?
00:16:55.000 I think there's, like, two big reasons.
00:16:58.000 One is, if we are on this knife edge between different, you know, forking paths the way the world could go, and if AI is potentially a monopolizing technology, as it stands to be, that hegemonic power of AI could be kind of fractal.
00:17:13.000 It could lead to runaway growth of one company, you know, a Google, but also, at the world stage, runaway power of a single country, especially through, you know, not just autonomous weaponry, surveillance, automated cyber attacks, but just the flywheel that will kick off as we begin to automate our industrial factories and so on and so forth.
00:17:30.000 So I'd much rather the US be in that lead, in part because then we can bake in, or at least it gives us some hope of baking in, values like civil liberties, respect for privacy, sort of, you know, building a constitution into the AI.
00:17:43.000 The other reason is conflict.
00:17:45.000 So, you know, I think the world wars were kind of inseparable from industrialization.
00:17:51.000 We moved from a world of, you know, agrarian craft economies to ones where we were racing to build the biggest rail networks because if we had more trains, then we could move more tanks.
00:18:01.000 And it was the fact that France and Germany, Russia, these countries in the lead-up to World War I, were in this all-out race to build the biggest rail networks.
00:18:11.000 And it was the fact that it was close that I think led to the conflict.
00:18:15.000 You see, you get more war and more conflict when you have two great powers that are kind of neck and neck.
00:18:22.000 And to the extent that we can, you know, make the US lead in AI and the core infrastructure sort of uncatchable, I think it reduces the threat of there being an all-out war.
00:18:33.000 Do you think the Chinese are actually in a position to catch up?
00:18:37.000 Do you think that Huawei, for instance, could actually meet the demand for data centers for computation in any way comparable to where the US is at right now?
00:18:47.000 They could if we had a totally laissez-faire free market here, right?
00:18:51.000 So, you know, the manufacturing equipment that goes into building these chips is some of the most complex engineering ever produced by mankind.
00:18:58.000 The components that go in involve thousands, tens of thousands of suppliers.
00:19:03.000 There are dozens within the supply chain, dozens of, you know, mini-monopolists.
00:19:07.000 You know, ASML in the Netherlands builds the lithography machines.
00:19:11.000 They have like a 100% market share.
00:19:13.000 And so that enables our export controls so we can, you know, tell those companies you may not export this technology to China.
00:19:19.000 China still gets some of it because they smuggle it or before the controls were in place they hoarded a bunch of it.
00:19:24.000 But they're otherwise really cut off.
00:19:26.000 Now, I think this is important because, you know, China has other advantages.
00:19:29.000 They are massively outproducing us on energy.
00:19:31.000 You know, they're adding, you know, 400 gigawatts to their grid every year.
00:19:35.000 The stat I read is that they add roughly a United States' worth of capacity to their energy grid every seven years.
00:19:40.000 Whereas our energy grid has flatlined.
00:19:42.000 And when we look at what goes into building these AI models, you know, other than having the talent and the engineers,
00:19:46.000 It's really the data centers and the energy to power them.
00:19:49.000 And we're already running up against, you know, hard constraints there, which is why we're making all these deals with the UAE and Saudi Arabia and so on where there's actually abundant energy.
00:19:58.000 China doesn't have that problem.
00:19:59.000 If these export controls were lifted, I don't think they'd just catch up.
00:20:02.000 I think they'd leapfrog us.
00:20:05.000 And, you know, their comparative advantage is these massive infrastructure projects. Just like they can, you know, build giant highways,
00:20:12.000 they could build the AGI cluster if we let them.
00:20:16.000 Something that's at the center of AI and Leviathan, which, again, I recommend the War Room Posse check out.
00:20:23.000 It's very brief.
00:20:25.000 It's a little heady in places, but you can get through it in an afternoon very, very easily.
00:20:31.000 But you open up with a theme, a kind of metaphor or a symbol: the X-ray goggles, the X-ray specs, and you compare AI to that.
00:20:41.000 It gives people the power to see beyond what they could otherwise see.
00:20:45.000 And someone like me, I mean, I see the development of these technologies, especially in regard to surveillance, and it's very off-putting.
00:20:54.000 You don't want to be seen in that way.
00:20:56.000 You don't want someone to have that power over you.
00:20:59.000 And so my first instinct is to reject it, lambast it, do anything I can to push it away.
00:21:04.000 You, on the other hand, are approaching it much more from the perspective of how can this be used?
00:21:10.000 And how can you use these technologies to protect yourself from surveillance or any other kind of predation?
00:21:18.000 Can you go into that a little bit and break down the X-ray spec metaphor?
00:21:22.000 Yeah, and part of it is recognizing that technology is often more discovered than invented, right?
00:21:28.000 So if one day we woke up and there were X-ray specs that could be built with sort of off-the-shelf technology that no one could ever control, what would happen?
00:21:37.000 Suddenly, I could see through your clothes. I could see through walls. I could break into banks. I could cheat at poker at the casino.
00:21:44.000 There are all these systems in our society that would just suddenly break because of this new capability.
00:21:48.000 There's kind of three canonical ways society could respond.
00:21:52.000 We could sort of change our culture or change our norms.
00:21:55.000 We could become nudists and embrace post-privacy norms.
00:21:59.000 We could...
00:22:00.000 Kind of like social media influencers basically do now with their souls.
00:22:03.000 Right, exactly.
00:22:04.000 We could adapt or do mitigation so we could retrofit our homes with copper wire or anything that blocks the X-ray penetration.
00:22:14.000 And the third option is we have an X-ray Leviathan, the all-seeing state that orders all the X-ray glasses to be handed over to feds
00:22:24.000 and then they use their monopoly on X-ray glasses to scan our bodies to make sure we don't have them.
00:22:28.000 But the core point is the fourth option of nothing happening or some stable equilibrium is not tenable, right?
00:22:36.000 Because it's fundamentally a kind of collective action problem.
00:22:39.000 I want the glasses but I don't want you to have the glasses.
00:22:41.000 But you have the exact same incentive and so very quickly we move into a new world where we all have the glasses and we have to do something about it.
00:22:47.000 And AI is very similar. It's hardly even a metaphor.
00:22:50.000 You know, we even have some of these like Meta Ray-Ban glasses that, you know, could you imagine...
00:22:55.000 You could imagine downloading a machine learning model for...
00:22:57.000 You know, there are such models for detecting people through walls using Wi-Fi signal displacement.
00:23:03.000 Yeah.
00:23:04.000 I'm sure that won't be on the Apple App Store, but people will jailbreak these things.
00:23:09.000 We're going to go to break shortly, but in our remaining moments before, can you just tee up the idea that one of the more dramatic predictions you make is that the democratization of AI,
00:23:22.000 AI diffusing across the population, whether it be America or any other country, is going to inevitably lead to regime change.
00:23:30.000 Why? Why is this your core argument before moving on to the distant future?
00:23:37.000 Yeah, it's not that I'm a technological determinist, but I do see the way in which, you know, our institutions or governments or organizations are technologically contingent.
00:23:46.000 Right. So, you know, the growth of the administrative state, for instance, was presaged by and partly driven by the telegraph and early rail networks that let Washington, D.C.
00:23:57.000 have agents of the state be in far away parts of the country and be able to still communicate and get back and forth.
00:24:03.000 And so whenever you have a big technology shock to the core sort of inputs to organizations, the ability to monitor, to broker contracts, to enforce contracts, principal-agent costs, the ability to trust that, if I give you a job, you're going to execute on that job.
00:24:18.000 When those costs come down radically, you get new kinds of institutions. Right. We saw that in micro with Uber and Lyft.
00:24:23.000 Right. That was a regime change. You know, these were public taxi commissions that, you know, were quasi-governmental.
00:24:30.000 And for the people involved, for the taxi drivers involved, it was incredibly violent and dramatic.
00:24:36.000 You know, you saw protesters in Paris throwing rocks off of bridges and so on.
00:24:39.000 You know, I think for the rest of us, it was a massive improvement.
00:24:43.000 But that was a shift that happened quite dramatically within a span of less than five years.
00:24:47.000 You know, the ridership completely flipped.
00:24:49.000 And that was because of mobile and Internet and these new technologies leading to new kinds of organizational forms.
00:24:55.000 Well, if there's any one thing that people need to keep in mind as this transition unfolds,
00:25:02.000 it's that you're going to need some kind of economic hedge against total economic disruption.
00:25:08.000 Wouldn't you agree? I think so.
00:25:10.000 And on that note, there are a lot of politicians that should be getting coal in their stockings for Christmas.
00:25:17.000 But Birch Gold thinks as a smart planner, you deserve silver.
00:25:21.000 That's why for every five thousand dollars you purchase between now and December 22nd,
00:25:27.000 Birch Gold will send you an ounce of silver, which is up over 60 percent this year.
00:25:32.000 That's you, Uber drivers.
00:25:34.000 Get that silver.
00:25:35.000 Get that gold before it's too late.
00:25:36.000 And the Waymos take your job.
00:25:38.000 See, smart people diversify and have a hedge.
00:25:41.000 That's why I encourage you to buy gold from Birch Gold.
00:25:45.000 With the rate cuts from the Fed in 2026, the dollar will be worth less.
00:25:50.000 And as the Waymos come in, you're going to need something to throw through their windows.
00:25:54.000 Get a brick of gold.
00:25:56.000 What happens if the A.I. bubble bursts?
00:25:59.000 You're going to have nothing but Bitcoin and gold.
00:26:02.000 So diversify.
00:26:03.000 Let Birch Gold Group help you convert an existing IRA or 401k into a tax-sheltered IRA in physical gold.
00:26:11.000 And for every five thousand dollars you buy, you'll get an ounce of silver for your stocking or for your kids.
00:26:18.000 What a great way to teach them about saving smartly.
00:26:21.000 Take out your phone.
00:26:23.000 Don't open the Uber app.
00:26:24.000 Don't open the Lyft app.
00:26:26.000 Don't open the Waymo app.
00:26:28.000 Go to your SMS and text BANNON to 989-898 to claim your eligibility for this offer.
00:26:36.000 Again, text BANNON to the number 989-898 today.
00:26:42.000 Because Birch Gold's free silver with qualifying purchase promotions ends on December 22nd.
00:26:50.000 Text BANNON to 989-898.
00:26:53.000 Back soon with Sam Hammond and total regime change in America's heart.
00:27:01.000 There's a lot of politicians that should be getting coal in their stockings for Christmas,
00:27:05.000 but Birch Gold thinks as a smart planner, you deserve silver.
00:27:10.000 That's why for every five thousand dollars you purchase between now and December 22nd,
00:27:15.000 Birch Gold will send you an ounce of silver free.
00:27:20.000 Remember, silver is up over 60% this year and over $60 an ounce.
00:27:27.000 You see, smart people diversify and have a hedge.
00:27:31.000 That's why I encourage you to buy gold from Birch Gold.
00:27:34.000 With the rate cuts from the Fed in 2026, the dollar will be worth less.
00:27:38.000 And what happens if the AI bubble bursts?
00:27:41.000 Diversify.
00:27:42.000 Let Birch Gold Group help you convert an existing IRA or 401k into a tax-sheltered IRA in physical gold.
00:27:51.000 Let me repeat that.
00:27:52.000 Physical gold.
00:27:53.000 And for every $5,000 you buy, you'll get an ounce of silver for your stocking or for your kids.
00:28:00.000 What a great way to teach them about saving smartly.
00:28:04.000 Just text my name, Bannon, B-A-N-N-O-N to 989-898 to claim your eligibility for this offer.
00:28:11.000 Again, text Bannon, B-A-N-N-O-N to the number 989-898 today.
00:28:17.000 Because Birch Gold's free silver with qualifying purchase promotion ends the 22nd of December of this year.
00:28:26.000 Make sure you text Bannon to 989-898.
00:28:30.000 Do it today.
00:28:31.000 Imagine having the world's most connected financial insider feeding you vital information.
00:28:36.000 The kind of information only a handful of people have access to.
00:28:41.000 And that could create a fortune for those who know what to do with it.
00:28:46.000 That's exactly what you get when you join our frequent guest and contributor, Jim Rickards, in his elite research service, Strategic Intelligence.
00:28:56.000 Inside Strategic Intelligence, you'll hear directly from Jim and receive critical updates on major financial and political events before they hit the mainstream news.
00:29:06.000 He'll put you in front of the story and tell you exactly what moves to make for your best chance to profit.
00:29:13.000 As a proud American, you do not want to be caught off guard.
00:29:17.000 Sign up for Strategic Intelligence right now at our exclusive website.
00:29:22.000 That's RickardsWarRoom.com.
00:29:24.000 RickardsWarRoom.com.
00:29:26.000 You go there.
00:29:27.000 You get strategic intelligence based upon predictive analytics.
00:29:32.000 Do it today.
00:29:33.000 Right now.
00:29:34.000 RickardsWarRoom.com.
00:29:36.000 If you're a homeowner, you need to listen to this.
00:29:40.000 So listen up.
00:29:41.000 In today's artificial intelligence and cyber world, scammers are stealing home titles with more ease than ever.
00:29:49.000 And your equity, the equity in your home, your life savings is the target.
00:29:55.000 Now, here's how it works.
00:29:57.000 Criminals forge your signature on one document.
00:29:59.000 Use a fake notary stamp.
00:30:01.000 Pay a small fee with your county and boom.
00:30:05.000 Your home title has been transferred out of your name.
00:30:08.000 Then they take out loans using your equity or even selling your property.
00:30:13.000 You won't even know it's happened until you get a collection or foreclosure notice.
00:30:20.000 So let me ask you, when was the last time you checked your home title?
00:30:25.000 If you're like me, the answer is never.
00:30:29.000 And that's exactly what scammers are counting on.
00:30:32.000 That's why I trust Home Title Lock.
00:30:35.000 Before I met them, I never checked on this.
00:30:37.000 Now I'm safe and now I'm secure.
00:30:40.000 Use promo code Steve at HomeTitleLock.com to make sure your title is still in your name.
00:30:46.000 You'll also get a free title history report plus a free 14-day trial of their $1 million
00:30:53.000 triple lock protection.
00:30:55.000 That's 24-7 monitoring of your title, urgent alerts to any changes, and if fraud should happen,
00:31:02.000 they'll spend up to $1 million to fix it.
00:31:06.000 Go to HomeTitleLock.com now.
00:31:08.000 Use promo code Steve.
00:31:10.000 That's HomeTitleLock.com, promo code Steve.
00:31:14.000 Do it today.
00:31:16.000 Do it now.
00:31:18.000 Still America's Voice family.
00:31:20.000 Are you on Getter yet?
00:31:21.000 No.
00:31:22.000 What are you waiting for?
00:31:23.000 It's free.
00:31:24.000 It's uncensored.
00:31:25.000 And it's where all the biggest voices in conservative media are speaking out.
00:31:29.000 Download the Getter app right now.
00:31:31.000 It's totally free.
00:31:32.000 It's where I put up exclusively all of my content 24 hours a day.
00:31:35.000 You want to know what Steve Bannon's thinking?
00:31:37.000 Go to Getter.
00:31:38.000 That's right.
00:31:39.000 You can follow all of your favorites.
00:31:40.000 Steve Bannon.
00:31:41.000 Charlie Kirk.
00:31:42.000 Jack Posobiec.
00:31:43.000 And so many more.
00:31:44.000 Download the Getter app now.
00:31:45.000 Sign up for free and be part of the movement.
00:31:49.000 All right, War Room Posse.
00:31:50.000 We are back with Sam Hammond, Chief Economist at the Foundation for American Innovation.
00:31:56.000 He is the author of AI and Leviathan, a very slim tract, packed full of nightmarish futures, but also tips on how to survive them.
00:32:08.000 Okay, Sam, if we could just return briefly to the concept of regime change, the breakdown of the current order under the pressure of AI and other downstream technologies.
00:32:19.000 You don't necessarily present this as something that's ideal or even something that's desired, but you do present it as a cultural and political landscape that people will have to deal with.
00:32:31.000 So if we could just return really quickly to the mechanisms by which democratized AI will, in fact, erode current institutions and how you think people should find that narrow corridor as you describe it in the book.
00:32:47.000 Yeah.
00:32:49.000 Part of this is a continuation of existing trends, right?
00:32:51.000 So, you know, if in the 1950s and 60s NASA did the Apollo project, today it's being done by SpaceX.
00:32:57.000 And we're seeing a very similar phenomenon across, you know, many parts of our broken institutions, right?
00:33:01.000 I think we all recognize that the U.S. government and bureaucracy is overwrought.
00:33:06.000 It's decaying.
00:33:07.000 It's decrepit.
00:33:08.000 War Room Posse would definitely agree with that.
00:33:10.000 And what have we done instead?
00:33:11.000 Well, we've started to outsource it.
00:33:13.000 And I think even if we had the sort of competence of the 1950s Eisenhower administration or something like that, the fact is a lot of this talent, a lot of the know-how is embodied in these private corporations, right?
00:33:25.000 So we're using Palantir as our spy agency.
00:33:28.000 We're using SpaceX as our launch capability.
00:33:32.000 And I just see running that forward becoming more and more true.
00:33:37.000 And especially when you look at the roadmaps of what these AI companies are saying they want to build, right?
00:33:41.000 You know, today we have chatbots.
00:33:42.000 You know, OpenAI has this actually spelled out in their research plan.
00:33:46.000 Next is innovators.
00:33:47.000 So AIs that don't just, you know, do your homework but can actually autonomously do new science, make new discoveries.
00:33:55.000 Then after that is AI organizations, right?
00:33:58.000 So these are not just a single AI or a single chatbot but autonomous AIs working in teams as part of an AI corporation.
00:34:05.000 And then once you have sort of end-to-end AI corporations, the world starts to look very, very different, right?
00:34:12.000 We're going to have, you know, these companies, these organizations potentially making millions, billions of dollars autonomously.
00:34:20.000 They might have a human at the top sitting as the chairman of the board.
00:34:23.000 But otherwise, any human would be a friction, would be a bottleneck.
00:34:27.000 We're talking about something like Jeff Bezos ruling over an army of robots that do everything.
00:34:33.000 You just take out the human who is now led around by an algorithm and you replace him with Digit the robot.
00:34:38.000 Yeah, that's barely even a joke.
00:34:40.000 You know, Elon Musk has been fighting for shareholder control over Tesla for this very reason, right?
00:34:46.000 Because he said, you know, it's less about the money.
00:34:48.000 You know, this trillion-dollar package he's gotten is more about the control because I'm going to use Tesla to build a humanoid robot army.
00:34:55.000 You know, they're planning to ramp to 50,000, 100,000, and by next decade, millions of these humanoid robots coming off the factory line.
00:35:01.000 Yeah, building basically an Optimus Gigafactory right now in Texas, correct?
00:35:06.000 Right.
00:35:07.000 And so that's a lot of power under one person, but it's also a new kind of organization.
00:35:14.000 You know, we complain about the DMV or whatever, but a lot of the jobs that governments do are already extremely exposed to current AI technology.
00:35:23.000 You know, auditing, law, accounting.
00:35:26.000 These things are going to fall this decade.
00:35:28.000 And if there's going to be sort of a balance between the private sector and the public sector, if we can have our state capacity, the minimum viable government we need to enforce contracts and make sure we maintain rule of law, we need to keep up, right?
00:35:42.000 If we don't, and I think this is sort of a safe default scenario that the government doesn't adapt quickly enough, then it will just be displaced in the same way we're already seeing.
00:35:51.000 And when you say we, you mean the United States?
00:35:54.000 I think broadly speaking, most Western democracies are pretty exposed because of our slow procedural orientation where we take our time and the technology doesn't wait, right?
00:36:09.000 And so it calls for, I think, this balancing act where we want to be pushing AI into government, but also taking that as an opportunity to set standards.
00:36:18.000 Because, you know, another worry is not just the private concentration of power, but, you know, you can imagine some tin pot dictator.
00:36:25.000 You know, what is the thing that keeps them from having total power?
00:36:28.000 Well, it's the fact that, you know, the military could do a coup or, you know, their generals will defy an order.
00:36:34.000 But if all those become sort of automated, if the whole machinery of government becomes AI, then it's a matter of just changing the prompt and you change your government.
00:36:43.000 And so we need to build some level of privacy and civil liberties engineering into the tech stack itself so that we don't have this sort of lock-in effect that could arise.
00:36:55.000 But, you know, this goes to my point that I think a lot of this technology is at this point, you know, the Pandora's box has been opened.
00:37:04.000 There's no way out but through.
00:37:05.000 We can try to resist the technology, but really I think a better path is to try to steer the technology, to master the technology, not let it master us.
00:37:12.000 So my own perspective, you know, I can appreciate your view as a futurist and as an economist and seeing these trends going forward and seeing them as being quasi inevitable.
00:37:24.000 What do you do about it on a practical level, on an economic level?
00:37:27.000 But as a humanist, as a flea-bitten monkey person, I am much more concerned about, well, then what do people do?
00:37:37.000 Like, what do regular people do?
00:37:39.000 What do my friends and family do in the face of this?
00:37:41.000 Do we buy Bitcoin?
00:37:43.000 Do we become human AI symbiotes?
00:37:46.000 Do we vote for the new tech accelerationist party?
00:37:49.000 Like, what would you, in your view, what sorts of futures would a blue collar working man be dealing with or a small business owner be dealing with?
00:38:00.000 Like, how would they respond to this?
00:38:02.000 Well, I mean, over the next, say, five years, I think a lot of blue-collar work is still relatively safe.
00:38:12.000 I think there's a lot of ways that AI could be empowering the small business owners and entrepreneurs.
00:38:16.000 You know, the fact that you can now do your own marketing and graphic design and things that would normally require big teams or get legal counsel essentially for free.
00:38:25.000 Now, you know, in the long run, if we want to ask, you know, what is sort of my vision for the best possible outcome?
00:38:32.000 And again, this is not necessarily a forecast.
00:38:35.000 This is now me telling you what I would want to happen.
00:38:38.000 Is, you know, I see there are potential opportunities for AI to be a corrective to a lot of the problems of modernity.
00:38:45.000 Right. And, you know, I think a lot of conservatives and a lot of right-wing thought is in part a reaction to modernity and the trade-offs that came with it.
00:38:53.000 Yes, we want to have these large scale systems because they're more efficient and they produce standards.
00:38:59.000 And by modernity, you mean to include managerialism, bureaucracy, egalitarianism, the post-Enlightenment era where, you know, we lost something with that, too.
00:39:10.000 We lost, you know, local communities.
00:39:12.000 We lost, you know, the interfacing with our neighbor.
00:39:15.000 We got pushed into big metropolises and under the thumb of a sort of impersonal bureaucracy, a Kafkaesque bureaucracy.
00:39:25.000 And again, you know, I see that as a worthwhile trade-off because we are more prosperous.
00:39:30.000 We have longer life spans and higher living standards.
00:39:32.000 We can try to recreate community, but it was a real trade off.
00:39:35.000 And is there a way in which AI could, you know, insofar as it does start to dissolve some of these state functions and these forces of homogenization, enable a new kind of, you know, high-tech communitarianism?
00:39:48.000 You know, I want to go back to the, you know, one-room schoolhouse that was down the road before it all consolidated into these big schools.
00:39:54.000 But with robots.
00:39:55.000 Well, you could, you know, you could have the A.I. tutor in the morning and the jujitsu class in the evening.
00:40:00.000 Right. And you're definitely going to want to keep up with their physical prowess.
00:40:03.000 Well, I think that's actually true.
00:40:05.000 I think, you know, if the early part of the 21st century was, you know, good for the nerds,
00:40:12.000 I think the latter half will be good for the jocks.
00:40:15.000 OK, I was never much of a jock, but I do appreciate the sentiment.
00:40:21.000 At least it's a monkey person sentiment.
00:40:24.000 But you see where I'm sort of going with this, you know, and this is also what animates a lot of the more conservative pro-AI folks:
00:40:32.000 they see the power of AI to, you know, dissolve Hollywood, to, you know, at least in the short run, deflate the sort of managerial professional-class economy.
00:40:43.000 You know, these laptop workers that rule over us.
00:40:46.000 You know, when that becomes plentiful, then it's the electrician or the plumber that is actually in high demand.
00:40:52.000 Now, I just think that will be a relatively short transitional window where, you know, at some point we'll also have robotics that do that as well.
00:40:58.000 And so to your point, to your question: then what?
00:41:01.000 You know, what are we left doing?
00:41:02.000 And I look around the world at, you know, what we can, you know, garner some inspiration from.
00:41:07.000 Like what's close to this sort of post-scarcity world today?
00:41:10.000 And I see, you know, the Gulf states.
00:41:12.000 I see, you know, UAE, Qatar, Saudi Arabia.
00:41:14.000 They have, you know, trillion-dollar sovereign wealth funds.
00:41:17.000 They essentially have, you know, free social services for their citizens.
00:41:20.000 They import these guest workers that build their stadiums, which from their point of view are basically robots.
00:41:25.000 They do these.
00:41:26.000 Yeah.
00:41:27.000 You know, in the original etymological root of the word, you know, robot comes from, what is it?
00:41:31.000 Robota in Czech, which means servant or slave.
00:41:34.000 Right.
00:41:35.000 So I guess they are, they're Czech robots.
00:41:37.000 So I think there is a world where we end up in a kind of, you know, rentier state.
00:41:41.000 And I think this is one of the things we have to balance.
00:41:44.000 You know, the reason Saudi Arabia has a big sovereign wealth fund is because if they don't, they suffer a resource curse.
00:41:49.000 And so, you know, I think the Trump administration has been quite thoughtful and has a lot of foresight in the fact that Trump wants a U.S. sovereign wealth fund.
00:42:01.000 But that leads up to the question of what do we do on a daily basis?
00:42:04.000 And, you know, we look back in history.
00:42:06.000 What did, you know, people in the 1600s do on a daily basis?
00:42:10.000 Well, they, yes, they sowed the land or whatever, but they also, you know, went to church.
00:42:15.000 They raised their families.
00:42:17.000 They participated in community life.
00:42:19.000 They went to rituals and had services.
00:42:21.000 And I think there's a world where we can get back to something that is potentially more human than what we have today because it, because AI has this potential for this radical relocalization of human society.
00:42:33.000 So taking this line of thought, by the way, before we go, I will say one more prayer for a solar flare.
00:42:41.000 And failing that, just give us an S curve, but a long, flat S curve.
00:42:48.000 I'm intrigued by the roots of your thought in what was once called transhumanism and is now just called science and technology.
00:42:57.000 We were both reading Ray Kurzweil's The Age of Spiritual Machines around the same time, 2001 or so.
00:43:05.000 And it made a deep impression on me, the totalizing vision of technology, the idea of
00:43:11.000 superhuman AI, all human beings attached to it through nanobots or whatever, the indistinguishable nature of physical and virtual reality, all that.
00:43:20.000 But when I read it, it just sounded like a nightmare world.
00:43:23.000 You know, I'd read Ted Kaczynski a couple of years before and oftentimes joke that, you know, on one shoulder is Ray Kurzweil and on the other is Ted Kaczynski, sort of like a devil and a fallen angel on each shoulder.
00:43:36.000 For you, my sense is that Kurzweil had a different impact.
00:43:41.000 Am I correct about that?
00:43:43.000 I think the primary impact it had was just looking back at, you know, how much he got right through relatively simple methods.
00:43:50.000 Right. So, you know, people will nitpick that his timing is off here and there.
00:43:55.000 But Age of Spiritual Machines came out in 1999, and he predicted that, you know, we'd have human-level AI, AGI, by 2029, which, if you look at the betting markets and the other forecasting sites, is roughly where things are converging.
00:44:10.000 You know, he may have had a slightly different path for how to get there.
00:44:14.000 Right. He talked about whole brain emulation.
00:44:16.000 Yeah, that we'd scan the brain. And in a way, we did that indirectly. Right.
00:44:20.000 These large language models are trained on human generated data.
00:44:24.000 And in the limit, they are learning the thing that generated that data, not the data itself.
00:44:29.000 And the thing that generated that data is a mind, which is why these companies are actually even starting to talk about, you know, the welfare of the AIs.
00:44:38.000 So what I got from Kurzweil was just, first of all, that history hasn't ended, that we should not limit our imagination.
00:44:47.000 And if you talk to most people, they're relatively linear thinkers, whereas Kurzweil always trusts, you know, these exponential trends, and we've got to take them very seriously.
00:44:54.000 And then secondly, that you can do a lot and go a long way with these very simple forecasting methods of, you know, what will be the biggest supercomputer,
00:45:03.000 what is the computational power of our brain, and when will those two lines intersect.
00:45:08.000 Yeah, you go into it a bit; you give a hat tip to the early extropians, Max More and others in that genre.
00:45:17.000 You know, Max More is the reason we're saying transhumanism; he pivoted to that as a term.
00:45:22.000 And you also give a hat tip to the effective accelerationists.
00:45:27.000 I get the sense that you also see it as being part of the same trend.
00:45:31.000 I want to pivot, if we can, to your vision of the future, your timeline.
00:45:38.000 I mean, if there's one thing that Ray Kurzweil can be given credit for, it's that he had the guts to say, this is what I believe is going to happen.
00:45:47.000 And he laid out a very specific path.
00:45:50.000 You do the same.
00:45:51.000 And if you would just give the audience a sense, you broke it down into three basic periods:
00:45:57.000 the immediate future, about six, seven years from now; the period beginning in 2036; and then ending, of course, in the 2040s as we approach the singularity.
00:46:08.000 What are the different elements that people should expect to see as we move forward towards this imagined, I would say, singularity?
00:46:16.000 Sure.
00:46:17.000 So let's start with where we are today.
00:46:19.000 Today we are in a place where we have Google, OpenAI, Anthropic, X, sort of in a neck-and-neck race to release the best general-purpose language model.
00:46:30.000 The big breakthrough last year was reasoning models, thinking models, models that can actually do tasks.
00:46:35.000 And now the application of reinforcement learning, which is an AI training technique that basically gives these models goals and goal-directed behavior.
00:46:43.000 Now, there's an organization called METR, M-E-T-R, that tracks the level of autonomy in these systems.
00:46:49.000 By autonomy, I mean what's the longest task that they can do before they sort of become discombobulated and fall off track and start to drift.
00:46:58.000 That is now doubling every seven months or so.
00:47:01.000 Kind of in a Moore's Law fashion.
00:47:03.000 Yeah.
00:47:04.000 Faster than Moore's Law.
00:47:05.000 Moore's Law was every two years.
00:47:06.000 This is every seven months.
00:47:07.000 So the best model today, at least that's publicly released, is Gemini 3 from Google.
00:47:14.000 It can perform tasks, engineering tasks that take humans roughly two and a half hours with a high degree of reliability.
00:47:21.000 You know, in seven months, that'll be five hours.
00:47:24.000 Then seven months from then, it'll be 10 hours.
00:47:26.000 And then 20 hours.
00:47:27.000 And then suddenly, you know, very quickly, we have systems that are doing things autonomously that would normally take humans or teams of humans weeks or months.
00:47:35.000 Those are going to be incredibly powerful.
00:47:37.000 They're going to be incredibly useful economically because this is when you move from AI being a tool to being a direct substitute for all kinds of, at least at first, white collar work.
00:47:47.000 The greater replacement.
00:47:49.000 Yes, your words.
00:47:54.000 And then, you know, but it's also very dual use, right?
00:47:57.000 So the autonomy of these systems can be used to automate your Excel job, but it could also be used to execute cyber espionage campaigns, as Anthropic just revealed.
00:48:06.000 They disrupted a Chinese effort using their models and their servers running autonomously to spy on U.S. corporations and government agencies.
00:48:15.000 So I think that's really the next, say, two or three year period.
00:48:19.000 I think we could see, you know, a major run up in these cyber attacks and, you know, potentially in ways in which the Internet becomes somewhat unusable.
00:48:29.000 Or at least we need to build new Internet rails, both because it'll be hard to know what's real and what's not, the proliferation of deep fakes.
00:48:36.000 But also, you know, the level of cyber threats, the vulnerabilities in our cyber infrastructure, are very severe.
00:48:43.000 And if we don't fix them fast enough, we may have to just build alternatives.
00:48:48.000 And you see the appropriate response, or at least the most effective response, is people basically moving into gated communities, both in physical reality and virtually, right?
00:48:59.280 Yeah, you see this privatization.
00:49:01.680 Yeah, you see this with, you know, online communities, right?
00:49:04.360 So if you go on Facebook and look at, you know, the comments on, you know, some fake image of, like, an African child who built a Jesus statue out of shrimp shells, you know, you see all the people commenting, like, oh, man, you know, praise the Lord.
00:49:21.040 And it's like, well, you know, we're not going to make it.
00:49:23.980 Those people are not going to make it.
00:49:24.960 But, you know, how do you solve bots?
00:49:27.200 Well, you can either have some identification system where we all scan our irises, like Worldcoin wants to do, or you end up moving into these more gated communities where you are very selective about who gets in.
00:49:38.160 And I think that that's already happening in the digital realm.
00:49:40.000 I think it will increasingly happen in the real world, too.
00:49:42.620 And ultimately leading to the singularity.
00:49:45.020 If I may, should I read the last passage of the book?
00:49:48.420 Is this too much of a spoiler?
00:49:50.660 You know, you describe this privatization.
00:49:52.680 You describe a city with, you know, a massive singularitarian data center powered by fusion.
00:50:00.180 And they are about to go God mode.
00:50:02.800 And you say the city is home to a fusion-powered supercluster with billions of times more computational power than every human brain combined.
00:50:10.420 It just completed its first big training run, and the new model is ready to be tested.
00:50:14.440 The engineers have read the Sequences and know the danger, but their pride, curiosity, and Benthamite expected value calculations all scream, turn it on.
00:50:25.880 Besides, who's going to stop them?
00:50:30.300 Nightmarish future indeed, Sam Hammond.
00:50:32.880 I can't say whether it will happen or not, but I'm still praying for that solar flare.
00:50:37.360 AI and Leviathan.
00:50:38.800 Where can people find the book, and where can people follow you?
00:50:41.420 So this is a limited run, but it's drawn from an essay series.
00:50:45.060 If you just Google it, you can find it written up there for free.
00:50:48.720 All right, Sam Hammond, I really appreciate you coming by.
00:50:51.480 Thank you very much for hanging out with us flea-bitten monkey people for now until it's all robots.
00:50:57.400 Before the robots, though, the IRS needs more money, your money.
00:51:01.860 If you owe the IRS back taxes, they can garnish your wages, levy your bank accounts, and even seize your retirement or take your home.
00:51:09.040 Don't let the IRS target you.
00:51:11.420 Call the professionals at Tax Network USA.
00:51:15.280 Their tax lawyers and enrolled agents are experts in powerful programs that may even help you eliminate your tax debt.
00:51:21.900 Whether you owe a few thousand or a few million, they can help you.
00:51:25.080 With one phone call, you can start the process of stopping the threatening demand letters.
00:51:30.360 Call 1-800-958-1000.
00:51:34.180 That's 1-800-958-1000.
00:51:37.280 Or visit TNUSA.com slash Bannon.
00:51:43.100 And of course, you still need to diversify your assets in the face of the singularity.
00:51:49.360 Diversify, let Birch Gold Group help you convert an existing IRA or 401k into a tax-sheltered IRA in physical gold.
00:51:59.360 Just text Bannon to 989-898 to claim your eligibility for this offer.
00:52:05.340 Again, text Bannon to the number 989-898 today.
00:52:10.000 Because Birch Gold's free silver with qualifying purchase promotion ends on December 22nd.
00:52:16.100 Thank you.
00:52:16.980 Good night.
00:52:17.380 Holidays are here.
00:52:19.600 And while the rest of the world follows the script written by Big Pharma, real Americans are taking control of their own health.
00:52:27.720 You do not need permission from a corporate medical machine to care for yourself.
00:52:32.800 That is where All Family Pharmacy comes in.
00:52:35.280 It is simple.
00:52:36.700 Go to AllFamilyPharmacy.com slash Bannon.
00:52:39.400 That is one word.
00:52:41.540 AllFamilyPharmacy.com.
00:52:43.160 Place your order online.
00:52:44.700 And a licensed physician reviews it.
00:52:48.340 Once approved, your medication is shipped directly to your home.
00:52:52.880 Whether it's ivermectin, antibiotics, or your everyday medications, they have you covered.
00:52:59.620 Everything is handled quickly, privately, and securely.
00:53:03.500 This holiday season, don't let Big Pharma or the government tell you how to stay healthy.
00:53:09.580 Visit AllFamilyPharmacy.com slash Bannon and save 10% with code Bannon10.
00:53:17.280 Take control of your health.
00:53:19.400 Stay independent.
00:53:21.300 Stay free.
00:53:22.680 And stay healthy.