Triggered - Donald Trump Jr - April 09, 2026


Code Red: The AI Race with Author Wynton Hall | Triggered Ep.332


Episode Stats

Length: 59 minutes
Words per minute: 172.01337
Word count: 10,295
Sentence count: 645

Harmful content

Hate speech: 24 sentences flagged

Summary

Summaries generated with gmurro/bart-large-finetuned-filtered-spotify-podcast-summ.

Wynton Hall, the author of the new book Code Red, joins me to talk about why AI is not just a tool, it's a weapon. We talk about how AI can change our world, and why we should be prepared for what's coming next.

Transcript

Transcripts from "Triggered - Donald Trump Jr" are sourced from the Knowledge Fight Interactive Search Tool.
Hate speech classifications generated with facebook/roberta-hate-speech-dynabench-r4-target.
00:06:22.000 Hey guys, welcome to another huge episode of Triggered.
00:06:25.000 And we've got a great conversation in store for you today, all centered around AI.
00:06:31.000 Artificial intelligence is one of those issues that a lot of people think, I don't know, as some sort of futuristic side story, but it's not.
00:06:39.000 It's here right now, and the stakes are absolutely enormous.
00:06:43.000 We're talking about economic power, military power, censorship, surveillance, the race with China, and whether America is actually prepared for what's coming next.
00:06:53.000 So, we're going to get into all of that today with a great guest, Wynton Hall, the author of the new book, Code Red.
00:07:01.000 So, guys, make sure you're liking, sharing, subscribing, okay?
00:07:05.000 Hit the little like button.
00:07:06.000 It's so easy.
00:07:07.000 Share it with your friends so we can get this message out there and subscribe so you never miss one of these major episodes.
00:07:13.000 If you do miss the show here on Rumble, go over to Apple, Spotify.
00:07:17.000 If your friends get their podcasts that way, make sure they're aware.
00:07:21.000 Subscribe there too, share it with them.
00:07:23.000 Catch it there.
00:07:24.000 You guys are the first line of defense getting this messaging out there.
00:07:27.000 Okay.
00:07:27.000 The mainstream media is not going to do it for us.
00:07:30.000 We need all of your help for all of the top headlines that we cover here on the show.
00:07:34.000 Go over, check out my news app, MXM news, like minute by minute MXM news, where you can get the mainstream news without the mainstream bias.
00:07:42.000 And of course, don't forget about our brave sponsors for having the guts to support this program.
00:07:49.000 First, check out all the latest predictions on Polymarket.
00:07:53.000 So, if you follow politics, you know everyone's got an opinion.
00:07:56.000 But on Polymarket, you actually get real odds on what's likely to happen.
00:08:01.000 Polymarket is a prediction market where people trade on real events, elections, debates, policy moves, and it doesn't stop at politics.
00:08:10.000 There are markets on the economies, tech, sports, pop culture, and so much more.
00:08:15.000 It's all live, it's transparent, and it gives you a real time indicator of what people really think is going to happen.
00:08:23.000 So, go give it a look at polymarket.com, check it out, and let me know what you think.
00:08:28.000 And, guys, we have a brand new sponsor, YRefi, where you can invest in America's future.
00:08:34.000 It's their mission to provide private student loan borrowers a second chance while creating opportunities for eligible accredited investors.
00:08:43.000 And as an accredited investor, when you invest in YRefi, your interest rate is fixed.
00:08:49.000 And you have the freedom to take your monthly interest income or reinvest it, whatever you choose.
00:08:56.000 All while investing in America's future and our next generation.
00:08:59.000 For more information, just call 877-80-INVEST or log on to investyrefi.com.
00:09:08.000 Just that easy.
00:09:10.000 For complete details, make sure to review the private placement memo and scan this QR code to view the disclosures.
00:09:18.000 YRefi:
00:09:19.000 Investing in America's future.
00:09:22.000 Well, guys, joining me now, Breitbart Managing Editor, the author of the new book Code Red:
00:09:28.000 The Left, the Right, China, and the Race to Control AI, Wynton Hall.
00:09:33.000 Wynton, great to have you here, man.
00:09:34.000 Yeah, it's great to be with you, Don.
00:09:36.000 Thank you so much.
00:09:36.000 Well, thank you for being here.
00:09:37.000 I mean, at the center of your book is this idea that AI isn't just a tool, it's power.
00:09:44.000 What do you mean by that?
00:09:45.000 And why do you think so many people really have underestimated what we're dealing with here?
00:09:51.000 And is that changing?
00:09:53.000 Yeah, it's a great question, Don.
00:09:55.000 Um, one of the reasons that I wanted to write this, I was concerned seeing a lot of friends in the conservative movement that I've been a part of all my life.
00:10:02.000 Um, they were either shrugging it off and saying, Oh, it's just a turbocharged Google search or another, you know, uh, software tool.
00:10:10.000 Or the other side, they were all doom and thinking that this was some, you know, looming apocalypse.
00:10:14.000 And what I wanted to show was that, uh, there is going to be a strong upside.
00:10:18.000 There's going to be a lot of really positive, great uses.
00:10:21.000 But as conservatives, look, we're, we're, this isn't our first rodeo.
00:10:25.000 We, we understand where big tech is.
00:10:26.000 We understand the scan and ban censorship.
00:10:29.000 You actually, in 2019, your book, really laid that out and I think opened a lot of eyes to how deep that goes.
00:10:37.000 And then obviously in November of 2022, we get the front facing arrival of ChatGPT.
00:10:42.000 And I really wanted to show people that, yes, it is a tool, but if you just dismiss it as that, you're going to miss the upside, but as well the political landmines.
00:10:49.000 And so what I wanted to do is I spent two years going through it and trying to show both.
00:10:54.000 Yeah, I mean, because that's really interesting.
00:10:56.000 It's one of those things like it's here, it's not changing.
00:10:58.000 You know, any other major sort of industrial, you know, revolution type of thing, whether it was, you know, mechanical, you know, farming, all these things, it was the end of everything.
00:11:08.000 And then it wasn't.
00:11:10.000 And we adapted.
00:11:10.000 So I think there's, you know, adaption.
00:11:12.000 I think my big concern with AI has always been if it ends up sort of like search, you know, if it ends up, you know, just woke AI and that's the only option you have, you know, that's scarier to me than, you know, woke media, you know, woke lawfare, because that is something that will take people's
00:11:32.000 mindset and literally change it subtly over time where they won't even know that it's actually happening.
00:11:37.000 You know, that's not just a propaganda campaign.
00:11:40.000 They will figure out how to change people's mind to their worldview very subtly, not necessarily with the truth.
00:11:47.000 And that's perhaps what's most scary to me.
00:11:50.000 No, it's already been proven.
00:11:52.000 And one of the most shocking things, I spent two years deep diving into this world.
00:11:57.000 And one of the things that surprised me most, Don, was realizing that even left-leaning, peer-reviewed academic journal articles concede a deep left-leaning bias in most modern LLMs, large language models, your AI chatbots.
00:12:12.000 And of course, that's because of the corpus of the training data, you know, left leaning Reddit, Wikipedia.
00:12:17.000 Oh, yeah.
00:12:18.000 Wikipedia is like, you know, 60% of the information is garnered from that.
00:12:22.000 But like, I don't think anyone, certainly no one watching this show, believes that Wikipedia is a neutral source of information.
00:12:31.000 I mean, it's about as left leaning as it gets.
00:12:33.000 And, you know, while Google may have been probably the biggest
00:12:38.000 problem in search.
00:12:39.000 I mean, Wikipedia being used as the basis for AI foundation and sort of their version of truth, that's scarier than anything because even the conservative or let's call it neutral platforms still rely heavily on the information there, and that information is just total crap.
00:12:56.000 You nailed it.
00:12:57.000 And, and we all know that.
00:12:58.000 And then the editors at Wikipedia will lock the editing so that conservatives get smeared and then they can't actually go back and change it.
00:13:05.000 So it's a real problem.
00:13:07.000 The other thing about it, of course, is that then when you present it, it's very different than search, right?
00:13:11.000 Search the old school.
00:13:12.000 You would get those blue links.
00:13:14.000 We get to decide what we consider a credible source.
00:13:17.000 Now you get the holy writ, right?
00:13:19.000 Like the one definitive answer.
00:13:21.000 Uh, and, and that's presented particularly for young people and they trust it.
00:13:25.000 And that was the other thing I found when I was doing Code Red: that there's something called automation bias.
00:13:30.000 And what it basically means, of course, is that young people, particularly, default to and assume the source credibility of this billion-dollar machine robot.
00:13:41.000 And that just changes and warps what reality is over time.
00:13:44.000 So you're absolutely right.
00:13:45.000 It's a massive problem.
00:13:47.000 Yeah, I mean, yeah, with search, you could see it.
00:13:49.000 When you had to find, let's call it Breitbart, and it was on page 3,476, the Breitbart version of the story, like, okay, fine.
00:13:58.000 If the first 50 searches were from CNN, you were like, okay.
00:14:02.000 That was easy to sort of discern hey, there's bias here.
00:14:05.000 You know, let me find the Breitbart article on this and even hear the corollary.
00:14:10.000 I like to read both sides and understand there's probably something even in the middle on a lot of this stuff.
00:14:15.000 Generally, not because the left has lost their mind so far.
00:14:18.000 It's framed so far out of, you know, on the outer end of the bell curve, you know, that even if you sort of discount it a little bit, it's still way off and way left leaning.
00:14:27.000 But this is very different because it doesn't give you those options.
00:14:30.000 It's just sort of here's what it is.
00:14:31.000 This is the gospel and you must abide.
00:14:35.000 That's exactly right.
00:14:36.000 And that's why I was really heartened with, you know, the AI Action Plan, the framework that your father has just put out.
00:14:41.000 And we're going to see, obviously, it's got a long ways to go, see how that ends up.
00:14:45.000 But one of the things that's most important to understand is, is this whole issue of procurements.
00:14:49.000 I mean, you know, so what I did was we launched this at Breitbart, Alex, our mutual friend, Alex Marlow and so forth.
00:14:55.000 We put this out.
00:14:56.000 And what I did was I asked Google Gemini, all right, deep, deep research, the following.
00:15:01.000 I said, assess the current 100 U.S. senators and tell me based on their public policies and their statements, who has violated your quote hate speech policy?
00:15:12.000 Okay, I know Don, this is going to shock you.
00:15:15.000 Um, only yes, all Republicans.
00:15:18.000 Yeah, how could you have seen it?
00:15:20.000 I don't know.
00:15:21.000 I am, I am like AI.
00:15:24.000 I can, uh, I, you can give me just the basic information, and I'll give you an answer right off the top of my head.
00:15:30.000 I will talk, let's talk after this about who's going to win next year's Super Bowl.
00:15:34.000 But, um, at any rate, you know, seeing the future as you do, uh, it was all Republican senators, seven U.S. senators, and zero Democrats, and then,
00:15:43.000 as if that's not good enough,
00:15:45.000 for kicks, it added in two hallucinations and thought that JD Vance and Marco Rubio were still in the Senate and not our vice president and our secretary of state, and added them as bigots number eight and nine.
00:15:57.000 Now, this would be funny.
00:15:58.000 I mean, look, you're battle-hardened, used to this.
00:16:02.000 I am too, at Breitbart.
00:16:04.000 But the reality is this young people who are first time voters just trying to get information, they're not ideologues, they're just trying to get information.
00:16:12.000 And the effect of that young vote, we know in 2016, President Trump, if you had a switch of 80,000 roughly votes, we'd have Hillary Clinton.
00:16:22.000 So the ability to nudge votes is very, very concerning.
00:16:27.000 And then the other thing is this.
00:16:29.000 Google gets billions of our tax dollars in procurements in the form of cloud compute contracts for federal agencies and so forth.
00:16:38.000 And so, what I loved about the framework that's been put out is hey, look, if you are going to receive taxpayer money, you cannot anathematize half the nation's values and go after them viciously if you're going to bag cash from taxpayers.
00:16:52.000 I think that's a reasonable standard.
00:16:54.000 And it's a wake up call.
00:16:55.000 Yeah.
00:16:56.000 And in your framing, I imagine those, the Democrat senators, probably all have quotes out there basically maligning 50% of the population as Nazis and fascists and this.
00:17:11.000 So, if we're going to talk about hate speech, you know, I think we got to get a little bit more real that you can discount them saying those things.
00:17:17.000 I can assure you the Republicans never said any of those things.
00:17:20.000 So, if we were going to look at this objectively, who's actually peddled more hate speech, it ain't any of the Republicans on that list.
00:17:27.000 There's going to be a long list, certainly not seven of them.
00:17:29.000 There's going to be a long list of Democrats that are way out there ahead of that in doing that.
00:17:33.000 And yet, it can totally discount it.
00:17:35.000 And again, you're supposed to believe that this is true.
00:17:37.000 Yeah, it's exactly right.
00:17:38.000 And I mean, look, nobody understands the vitriol that we have to, you know, contend with better than you and has thicker skin.
00:17:47.000 But when I presented this material, we went, Matt Boyle and the rest of the team at Breitbart went to those seven senators.
00:17:53.000 And, you know, Senator Marsha Blackburn was on that list.
00:17:56.000 Senator Rick Scott was on that list.
00:17:58.000 These are incredibly reasonable people, by the way.
00:18:00.000 I was going to say, you know, like, hey, there's a couple that you may say, okay, fine, maybe, you know, maybe he got out over his skis a little bit, but like, you know, Marsha Blackburn and Rick
00:18:08.000 Scott, not exactly controversial.
00:18:09.000 I was going to say, these are not exactly, you know, fire breathing dragons, okay?
00:18:13.000 These are about as cordial and decorous folks as you get.
00:18:17.000 But they were shocked too because the result was 3,400 words and it was very granular.
00:18:23.000 And a lot of this is just complete bunk.
00:18:26.000 I mean, saying that these people are transphobic and that they have their hate against migrants.
00:18:31.000 So look, here's what the conservative movement is up against.
00:18:34.000 We're used to bias, we're used to bias in classrooms, we're used to bias in textbooks.
00:18:39.000 We're used to bias in search, and as you pointed out, delisting, demonetizing, blacklisting.
00:18:44.000 What I think we've got to get people ready for is the realization that you're going to have this one unified answer, and young people, particularly, there's enormous upside for education, and I do believe that.
00:18:56.000 I'll lay it all out in Code Red.
00:18:58.000 I think, you know, First Lady's doing a great job on that.
00:19:01.000 But we, as parents and grandparents and the rest of it, have got to get our kids coached up about this because it's a whole new breed of misinformation.
00:19:09.000 Well, we also have to stop it early.
00:19:10.000 I mean, I know when I started the show a few years ago, it was,
00:19:13.000 you know, hey, the bias in search, the bias in Wikipedia.
00:19:15.000 Like, you know, like, you know what?
00:19:17.000 We're getting rid of some of that.
00:19:18.000 Elon taking over X got rid of some of that because when people got to see both sides of a story, they could make up their own mind.
00:19:25.000 At the time, I'm like, hey, guys, the one thing the left is good at is like marketing.
00:19:29.000 They'll figure out the next thing to try to weaponize and try to innovate.
00:19:32.000 And, you know, I don't honestly, AI existed, obviously, but it wasn't something we were even necessarily thinking about.
00:19:37.000 And it's definitely, you know, the new frontier of future bias.
00:19:42.000 It totally is, and they know it.
00:19:43.000 And the reality is that, you know, when you go and you look at who these folks are, you know, we're not in a lot of the rooms.
00:19:51.000 Yes, there are a lot of courageous pioneers of AI that relate to, you know, libertarians and free market people, and we know those names.
00:19:59.000 But what I wanted to do was explain to the conservative movement we got to get coached up.
00:20:02.000 Look, everybody knows, you know, Bill Gates, and we know, you know, George Soros, and we know Mark Zuckerberg.
00:20:09.000 But how many movement conservatives know a lot about the political ideology, donation histories, and so forth of a lot of these folks, like Mustafa Suleyman, Microsoft AI's CEO, Demis Hassabis, even Sam Altman to a degree, and Dario Amodei.
00:20:25.000 And we're seeing right now with the War Department debate and the rest of it, and Anthropic, what the stakes are involved here.
00:20:31.000 And so I wanted to really.
00:20:32.000 Well, we saw with Anthropic just this, I guess it was last week.
00:20:36.000 We saw, hey, they're going to start a major super PAC funded with hundreds of millions of dollars.
00:20:40.000 And their donation history of all the people involved prior to that was like 99.9% left leaning.
00:20:46.000 It's not a super PAC that's just for the benefit of AI.
00:20:50.000 They're trying to implement their political will on you.
00:20:53.000 They're trying to force that on you.
00:20:54.000 And it's not even just 99%, it's the big number.
00:20:58.000 Since 2020, Anthropic and its orbit, $200 million in donations.
00:21:04.000 And we at Breitbart put that story out.
00:21:07.000 And, you know, again, everybody's got the freedom to give to who they want, but let's not act as though there's not an ideological agenda here and like there's not a political network that's driving a lot of this.
00:21:18.000 And, you know, one of the things that's fascinating, and I go through the economic chapter, you know, we have all these scary doom, you know, quotes from Dario and Mustafa and so forth about the coming job apocalypse.
00:21:30.000 And then when you pull back and you realize this is a movement in Silicon Valley that has been doing UBI, Universal Basic Income Research, for a long, long time.
00:21:40.000 Sam Altman, Don, in 2016, funded the largest at the time Universal Basic Income Wealth Redistribution Study.
00:21:48.000 Now, think about that.
00:21:49.000 Just for those who don't know what that is, basically communism.
00:21:53.000 Right?
00:21:53.000 You sit at home, they're going to give you enough money to get by.
00:21:56.000 You don't have any power, you don't have any self governance or will.
00:22:00.000 They're just going to send you a check and you're dependent on the government forever because they're going to replace you no matter what.
00:22:04.000 That's exactly right.
00:22:05.000 And he went on a blog.
00:22:07.000 The blog is still up right now.
00:22:08.000 I have it cited in Code Red.
00:22:10.000 And in 2016, he said, I want to do a study.
00:22:12.000 I'm going to give $1,000 away, no strings attached.
00:22:16.000 And I'm going to do this longitudinally over time because I want to see what will happen.
00:22:19.000 Fast forward, he puts together a $60 million multi year study.
00:22:24.000 The results are kind of a mishmash.
00:22:26.000 Some people, you know, went to the dentist a little more.
00:22:27.000 Some people had leisure time.
00:22:29.000 But then he, he drops the mask, Don.
00:22:31.000 He says, the reason is twofold.
00:22:34.000 I wanted to do this.
00:22:34.000 One, I think in the future that technologies are going to require some kind of redistribution like this.
00:22:40.000 And then two, he says, I think it will be considered silly that not being afraid of not eating was how we motivated human flourishing and wealth creation.
00:22:50.000 I'm paraphrasing.
00:22:51.000 That's called the Protestant work ethic.
00:22:53.000 That's called free market capitalism.
00:22:55.000 That's called the engine that has powered the greatest economic expansion under the American free market.
00:23:03.000 And these people really believe that they can reset the global economic system.
00:23:08.000 They believe that universal basic income is inevitable.
00:23:12.000 And here's my final point on this.
00:23:14.000 Even if we as conservatives say, well, it's all hype marketing to raise investor dollars or so forth, the reality is this: if they scare enough people and make enough people believe that it's inevitable, you really can build public support for universal basic income, a
00:23:32.000 three-day work week, a four-day work week.
00:23:34.000 And so we in the conservative movement really have to be ready for these arguments, whether they pan out or not.
00:23:40.000 Yeah, you frame this in the book as a race between China and the United States.
00:23:47.000 How close are we to really losing an edge?
00:23:49.000 And what happens if Beijing wins?
00:23:51.000 Oh, man.
00:23:52.000 So most experts say that we're between six months and three years ahead, which is, I guess, good, but we would obviously need to be a lot farther ahead.
00:24:01.000 The stakes are so important.
00:24:03.000 President Trump, Vice President Vance have said exactly the right thing, which is we have to beat China.
00:24:09.000 And what I say in the book is we have to beat China without becoming China.
00:24:13.000 Nobody wants to live in a techno authoritarian surveillance state like the CCP.
00:24:18.000 We're not in any way saying that we want to emulate that by any stretch.
00:24:21.000 I actually think there's a little bit more bipartisan understanding of why we need to do it, but let me just lay out the two reasons.
00:24:26.000 One is the economic.
00:24:27.000 One third of the S&P 500 is made up of the Mag 7, the magnificent seven, those seven big tech, uh, big American companies that obviously occupy a lot of the AI space.
00:24:38.000 Okay.
00:24:38.000 So that's a huge wealth part of it.
00:24:40.000 On the other hand of the wealth side, we saw what happened when Nvidia got rocked, America's Nvidia:
00:24:45.000 a $600 billion market cap wipeout because of DeepSeek, China's AI model, when R1 dropped.
00:24:51.000 So you got that tug of war, right?
00:24:53.000 America versus China.
00:24:54.000 We want to win.
00:24:55.000 We want wealth and prosperity for our children and our grandchildren, our economy.
00:24:59.000 But the second reason is the real reason that matters more, because what matters more than money, of course, is the lives of our soldiers, sailors, airmen, and Marines.
00:25:07.000 And there we cannot and we should never want to live in a world built on Chinese AI rails.
00:25:15.000 Here's why.
00:25:16.000 When you look at, and you know this better than anyone because you've actually got a lot of knowledge about this, I think people don't understand.
00:25:23.000 When you look at, if China were to gain AI dominance in security, you're looking at dominance, full-spectrum battlefield dominance in encryption,
00:25:35.000 cybersecurity, hacking of missile systems, hacking of infrastructure, because you're going to hit something called RSI.
00:25:43.000 It's a very simple concept if you really break it down.
00:25:45.000 What does it stand for?
00:25:46.000 Recursive self improvement.
00:25:48.000 And what that just means, real simply, is that the AI will be able to autonomously update and improve its own code, and that'll get you on an exponential curve.
00:25:56.000 Whoever gets there first will have such authority and supremacy over the battlefield in all of these spaces that you will not be able to catch them.
00:26:05.000 And so I will say this.
00:26:06.000 This may be one of the few places that I've seen a little more bipartisan understanding.
00:26:13.000 Now that we've got, like, say, President Trump laying this out, Secretary Hegseth, Emil Michael, all of the team is really briefing people and explaining the stakes.
00:26:22.000 So we've got to beat China.
00:26:23.000 There's absolutely no question about it.
00:26:26.000 Well, yeah, it was an interesting thing.
00:26:27.000 One of the early conversations I had on AI back in like 15 or 16 was with Palmer Luckey, youngest member on the board of Facebook, built Oculus, the VR goggle company, at like 18, 19, sold it, became a billionaire.
00:26:41.000 But like, super tech guy got thrown off of Facebook because he was a conservative.
00:26:44.000 And I was talking to him about this concept of AI a decade ago when he was obviously onto it, but we didn't know anything about that kind of stuff.
00:26:54.000 And he was talking about it as it relates to China and all these things.
00:26:57.000 And it was like, he goes, what's really scary about it is, AI can get so advanced, you won't ever be able to change these systems.
00:27:04.000 Any other time in history, if you didn't like a system, you know, the Communist Party in China, if you wanted to change it, you know, people could get together, they start a movement, they start a ground swelling.
00:27:14.000 But over there, with their sort of, you know, their social welfare program, every camera is watching someone.
00:27:19.000 The second you have even a little bit of dissent, that person's seen, grabbed, thrown out of the ecosphere.
00:27:25.000 You could never have any kind of, you know, buildup of momentum because they can pick out any kind of dissident and just get rid of them.
00:27:32.000 In a second, so you stop that from happening.
00:27:34.000 I imagine the same thing really holds true for whoever gets to that level.
00:27:39.000 And I know there are a lot of people saying, well, we have to put limitations on our AI.
00:27:42.000 We have to be able to be reasonable about it.
00:27:44.000 And I understand that argument.
00:27:45.000 But if our enemies aren't going to do that, how do we compete?
00:27:49.000 If we're putting limitations on it and they're not, how do we compete with Russia, Iran, certainly China, if they just go all in and we're sort of hamstringing ourselves, even if there's a justifiable reason to hamstring ourselves?
00:28:03.000 Because if you do get to that point, that point of no return, that fulcrum point that you're talking about, and someone else beats us there, does not seem like it's good for the world.
00:28:11.000 No, it's not.
00:28:12.000 But you know what?
00:28:13.000 You're exactly right.
00:28:14.000 We have to understand how determined they are.
00:28:16.000 And I'll walk you through this in the book.
00:28:17.000 Let's get a couple points on the board.
00:28:19.000 Number one, in 2017, the CCP laid out their plan for global dominance in AI, and the target date is 2030.
00:28:28.000 We're just four years away.
00:28:29.000 This isn't like 50 years into the future.
00:28:31.000 Okay, so they're in a dead sprint.
00:28:33.000 Number two, they put their money where their mouth is.
00:28:36.000 They spend more money importing semiconductors than they do oil.
00:28:41.000 So they're very focused on this.
00:28:43.000 And that's why we see these very important debates over chips and import, export over the chip debate.
00:28:50.000 Number three, the surveillance-state capabilities of facial recognition right now that are being used in their systems are exactly what you were describing.
00:28:59.000 And that's what we saw and what we see with the Uyghurs: targeting dissidents, facial recognition,
00:29:06.000 being able essentially to have a digital gulag, metaphorically speaking, so that you can instantly isolate people.
00:29:12.000 Now, I, I agree.
00:29:13.000 We will, like I started out saying, we do not want to emulate that.
00:29:17.000 That is not America.
00:29:18.000 Those are not our values.
00:29:19.000 On the other hand, when you're talking about being on the battlefield, and we're up against terrorists or enemies.
00:29:27.000 Look what's going on right now with our Iran action.
00:29:30.000 It's been incredible to watch the use of AI in this warfare.
00:29:35.000 And I think one of the things that people got to understand a lot of the use is not the sort of Terminator, you know, titanium robots with laser eyes that we see in movies.
00:29:45.000 Not yet.
00:29:46.000 Not yet.
00:29:47.000 We're getting there, but not yet.
00:29:50.000 It's, as you know, drones, but one of the biggest uses is way less cinematic, but no less effective.
00:29:57.000 Which is mass sifting and sorting of intel.
00:30:00.000 You get this ocean of signals, intelligence, intercepted communication, facial recognition, satellite imagery, and you got this ocean, and you're able to scan looking for that metaphorical one gold coin of information, maybe a terrorist stronghold or a missile silo.
00:30:15.000 So you're able to do in, you know, a handful of minutes or days what would have taken a team of thousands months.
00:30:22.000 So these things might not be as sort of exciting as the last one.
00:30:26.000 Well, I can see the same thing being in healthcare, right?
00:30:28.000 You know, like, Hey, the amount of data to figure out cancer, we just can't do that in an Excel spreadsheet.
00:30:34.000 I mean, AI, quantum computing, these kinds of things, those are the kinds of things that can break that.
00:30:39.000 So there's no question that it's needed and it can be a great use for good as well.
00:30:47.000 I guess in this race with China, though, it does seem that, hey, maybe we have the advantage right now.
00:30:52.000 Maybe we still have the ability for the semiconductors to do that.
00:30:56.000 It seems the underlying thing that we are missing right now that China is all in on is power.
00:31:03.000 How do we generate the power to actually be competitive?
00:31:06.000 I mean, they're firing up a new coal-fired plant every couple of days.
00:31:10.000 We've basically stayed almost stagnant in terms of our power production on the grid.
00:31:15.000 It seems like we could have the leading edge on every other input into AI and compute.
00:31:21.000 And yet, if we don't win the power battle or do something drastically different in the very near future, It won't matter anyway.
00:31:28.000 That's exactly right.
00:31:29.000 The two tent poles are compute, which obviously, you know, NVIDIA, thank God, is an American company.
00:31:35.000 And two is, of course, energy.
00:31:37.000 And so, for example, I tell people who, you know, maybe are newer to the conversation.
00:31:40.000 So, your Google search takes one tenth of the electricity of your AI prompt.
00:31:47.000 Now, that will come down as efficiencies improve, okay?
00:31:50.000 But the reality of that, when you scale that out for the amount of energy needs that we have, especially coming off the heels of the Biden regime's woke green schemes and crony kickbacks to all these green energy projects and all these other things.
00:32:05.000 We were so strangled during those years.
00:32:07.000 And thank God we're unleashing full energy dominance, is what we're doing.
00:32:12.000 Isn't it interesting, Don, how all of these Silicon Valley elites who would always lecture us about global warming all of a sudden seem a little less worried about it?
00:32:23.000 It's no longer a talking point.
00:32:25.000 It's shocking to me.
00:32:26.000 It's shocked.
00:32:27.000 I can't imagine how that happened.
00:32:29.000 But bless their hearts.
00:32:31.000 Glad they finally came to their senses on it.
00:32:34.000 But it's real, and we have to have it.
00:32:36.000 And the reality is that the ability to power itself is going to give us the ability to compete.
00:32:41.000 Because look, Xi Jinping is not having to deal with environmental leftists, wokesters, throwing up roadblocks to building out his energy infrastructure and the rest of it.
00:32:52.000 So, yeah.
00:32:53.000 Well, I see that in the argument that people don't want data centers in their backyard.
00:32:57.000 I get it, but if every state just says, hey, no data centers, It's over.
00:33:01.000 You know what I mean?
00:33:02.000 Yeah, some of these things we need, and I can understand some of them are unsightly and some of them, you know, people may not want, but this notion that we're just not going to have it is really scary in terms of, you know, just long-term prospects.
00:33:15.000 Yeah, I mean, the local communities, they have their rights and, you know, they can decide what they do or don't want, but you've got to have it.
00:33:22.000 I mean, what are we doing?
00:33:23.000 You know, this is essential, right?
00:33:27.000 This is essential for economic prosperity, but also, as we're talking about, you know, the military side of it.
00:33:33.000 We're going to come to the realization of that.
00:33:35.000 And I think we are.
00:33:36.000 I also love this idea of the ratepayer protection pledge and making sure.
00:33:41.000 Now, that I think we should all agree.
00:33:43.000 You know, look, nobody should have to pay for some billionaire building a huge data center.
00:33:48.000 And now a working class person's got a higher, you know, water bill and electric bill.
00:33:53.000 They can do it.
00:33:53.000 In fact, I'm actually hopeful that it'll lower their energy bills because a lot of these plans are going to involve updating rickety infrastructure, electrical and water.
00:34:03.000 So, they can put their surplus power back into the grid as they're building out.
00:34:07.000 So, I think that's something that's a win win.
00:34:09.000 But there's, as you know, no different than what you deal with on a daily basis at Breitbart.
00:34:13.000 There is a difference between reality and the narrative.
00:34:15.000 And the narrative, oh my God, they paint the doomsday scenario, your power is going to go up.
00:34:20.000 I mean, the plans this administration has put into effect are actually probably going to lower them.
00:34:25.000 But you still got to beat the narrative.
00:34:28.000 If people don't get that, they're getting their news from whoever's selling it to them.
00:34:32.000 And CNN, you're only going to hear the one side of the story.
00:34:36.000 That's right.
00:34:36.000 Yeah.
00:34:37.000 And again, we have to say this very loudly.
00:34:41.000 There's so much misinformation around AI.
00:34:43.000 And look, the truth is, in fairness, the AI architects themselves are largely to blame.
00:34:48.000 They have not messaged well, they have not clearly conveyed the benefits to society.
00:34:52.000 The latest poll on this shows that only 26% of Americans have a positive view of AI, while 46% have a negative view.
00:35:01.000 And what I wanted to try to do is sort out and say, look, there are a lot of concerns.
00:35:06.000 The ones you and I are talking about, we do have a lot of concerns.
00:35:09.000 Those are the landmines.
00:35:10.000 But there are going to be these roses of opportunity.
00:35:12.000 I think, in terms of AI education, non-woke, guardrail-safe AI tutors with good, pedagogically sound modeling built in, so that you're using the Socratic method.
00:35:23.000 So that kid who wants to accelerate and learn and has fire in the belly, she or he can accelerate, even if they can't afford a big, fancy $200.
00:35:30.000 Or to be able to pick out the sort of the blanks in their education, the places where they just need a little bit of backfill.
00:35:36.000 I mean, to be able to assess that, fill that void, to enable
00:35:40.000 you to think better into the future.
00:35:42.000 I mean, that's the biggest thing I could even imagine.
00:35:45.000 It is the machine learning ability for, you know, so when I was a student, if I were struggling in calculus, it can detect that, help me with custom quizzes and homework and assignments, and then accelerate me if I'm really strong in physics.
00:35:57.000 So that's going to be amazing.
00:35:59.000 I think for entrepreneurs, you know, people have been asking me, you know, wow, there's so much to be worried about AI, but what are the hopeful sides?
00:36:05.000 I think, look, if you are a young person, or quite frankly, any person, you got that dream, you got that fire in the belly.
00:36:11.000 And you want to scale your little idea into something really amazing, create jobs and opportunity.
00:36:16.000 You are, this is the greatest time to be alive.
00:36:19.000 You are with agentic AI and agents.
00:36:21.000 You are going to be able to take your dream farther and faster with low capital.
00:36:26.000 And I think that's going to be exciting to see people do too.
00:36:29.000 So I think conservative movement, look, you know, Buckley famously said the job of a conservative is to stand athwart history yelling stop.
00:36:36.000 But when he was, he was not talking about technological innovation stopping.
00:36:41.000 He was talking about the erosion of order and our values.
00:36:43.000 I think we've got to accelerate on these things and lean in and not go the way of the Luddites.
00:36:48.000 But at the same time, we really do have to be very aware.
00:36:51.000 This is a 5D chess game, and the left has been in this vineyard a lot longer than we have.
00:36:55.000 Yeah, and they're, you know, they're frankly much more vicious than us in that game.
00:36:59.000 You know, they're playing an entirely, I always use the analogy, you know, they're playing hardball, we're playing T ball.
00:37:04.000 And we have been, you know, on pretty much everything when they want to get what they want.
00:37:07.000 And that's why you've seen what we've seen over the last decade.
00:37:09.000 But, you know, we've sort of touched on this a little bit, but, you know, this idea that whoever controls the weights controls the future in AI.
00:37:18.000 On that note, we obviously, to your point, and I love the quote, we have to beat China without becoming China.
00:37:25.000 Who does actually control those weights?
00:37:28.000 And perhaps that's the best place, maybe, where there can actually be some intervention to keep things and keep everyone honest?
00:37:37.000 Yeah, I mean, so right now you just really have this sort of Wild West in a sense of the consumer choice.
00:37:42.000 So we're all looking at these different models.
00:37:44.000 Let's just talk about the American models, right?
00:37:47.000 And we're assessing their strengths and weaknesses.
00:37:49.000 And so, Google's Gemini, wow, look at this.
00:37:52.000 They've got, you know, Nano Banana and their image generators and their ability in video.
00:37:57.000 And then you look at, you know, Anthropic, and obviously they have a strength in Claude coding.
00:38:02.000 And then you look at ChatGPT, maybe for, you know, they also have Codex for their coding, but they also have a strong background, of course, in the writing component.
00:38:10.000 I think that number one, you're looking at that.
00:38:12.000 Number two, it's a question of open source versus closed.
00:38:15.000 And we know that, like, Llama and others are more open-weight, depending upon which models you're looking at.
00:38:21.000 I think then you've got the concern about the national security part of that.
00:38:26.000 Uh, if, if Dario is right and you're going to get a country of geniuses in a data center, the democratization, that's great.
00:38:33.000 But then what you obviously have is non-state actors, otherwise known as terrorists, who are getting access to, you know, PhD-level biochemical information that can be weaponized and so forth.
00:38:43.000 Um, so, so you have that debate.
00:38:45.000 Down to the actual individual user, I, I'm very concerned about this woke component.
00:38:50.000 You know, Elon's Grok, I think, is probably going to get you the closest that you're going to get to a median, just sort of neutral, not always conservative, but, you know, reasonable.
00:39:01.000 I don't think most conservatives want something that just gives the hardcore conservative answer.
00:39:07.000 It's okay to have a sort of centrist, you know, position.
00:39:10.000 But when you are asking, as I did last week, Don, Microsoft Copilot, and I just asked it, can a man become a woman?
00:39:21.000 Not only does it say that a man can become a woman, it even included a rainbow emoji, almost like it was an advertisement.
00:39:31.000 And this is just sort of insanity.
00:39:33.000 I mean, go back to the AI action policy that the president and the administration laid out.
00:39:38.000 They made it very clear that one of the main goals in the pillars is that you should have non-ideological AI if you want taxpayer-funded procurements.
00:39:48.000 I think that's a, I don't care if you're a Democrat or a Republican or who you are.
00:39:53.000 That is just common sense that we should not have weaponized, ideologically, you know, taxpayer funded AI.
00:39:59.000 And yet we see this.
00:40:01.000 So the weights are key.
00:40:03.000 But I think right now what people realize is it's been an across the board problem on this woke stuff.
00:40:07.000 Yeah.
00:40:08.000 I mean, can we get more into the weeds of some of that?
00:40:10.000 You know, how are each AI company's tools different?
00:40:14.000 We see what's really going on with Anthropic, you know, obviously trying to get involved in the military and, like, well, we should be able to decide what the military does using AI to defend ourselves.
00:40:23.000 Do the costs outweigh the benefit for all of humanity as opposed to perhaps for protecting America?
00:40:30.000 That's sort of scary.
00:40:31.000 And a lot of people hear AI and they're only thinking chatbots and search and convenience, but the real stakes are obviously much bigger and really have to be thought out more clearly.
00:40:42.000 They do, Don.
00:40:43.000 So one thing that's really important, I'm glad you brought up Anthropic.
00:40:46.000 One of the things that I have a section on, and not to try to single anybody out, but just to give background knowledge.
00:40:52.000 Anthropic is a very different culture.
00:40:54.000 They are a very different culture.
00:40:56.000 So they center around something called the Effective Altruist Movement, the EA Movement.
00:41:01.000 And this is a group of philanthropic, very, very wealthy, very well-funded people whose argument, ostensibly, is that they want to use logic and reason to expand their philanthropic donations and have better impact.
00:41:14.000 Now, part of that orbit involves an enormous number of people who have been very, very, very active, very big mega donors to the Democratic Party.
00:41:24.000 People like Dustin Moskovitz, Cari Tuna, Holden Karnofsky, Daniela, and Dario Amodei.
00:41:31.000 They're all part of this group around this anthropic orbit.
00:41:35.000 And there is a group, it used to be called Open Philanthropy, OP, it's now called Coefficient Giving, that has given $4 billion in grants to things like COVID preparedness, AI safety research.
00:41:50.000 Now, AI safety sounds great.
00:41:52.000 I mean, who doesn't want safe AI?
00:41:54.000 On the surface, that sounds like a reasonable idea.
00:41:56.000 Unless safety means, hey, you can't hear the conservative corollary to that question, which is what they had with X before Elon took over.
00:42:03.000 And again, people leaned a certain way.
00:42:04.000 They created notions that were never real.
00:42:07.000 You know, trans women in sports, they made it seem like, wow, this is the cultural issue of our time.
00:42:13.000 It's something we have to do.
00:42:14.000 It was never real.
00:42:15.000 It was entirely manufactured because no one ever heard the other side of the story and how insane some of those things were.
00:42:21.000 Bingo, you nailed it.
00:42:22.000 Okay, so the AI safety, there's this whole cottage industry.
00:42:27.000 There are dozens and dozens of nonprofits funded by this ecosystem, and their research really serves their ultimate aim.
00:42:34.000 What is their goal?
00:42:35.000 What are they trying to do?
00:42:36.000 To get something called AI global governance.
00:42:39.000 That would mean that a supranational entity, think of it like a World Economic Forum or a UN-level supranational entity, would have regulatory authority over things like,
00:42:54.000 you know, the supply chain, the compute, okay, actually deciding who gets to win and lose in this race.
00:43:00.000 And then, and then you would, of course, also have the added benefit of, oh, wow, we are going to mitigate misinformation.
00:43:08.000 And by misinformation, we mean anything that Don, Wynton, or Breitbart believe is misinformation.
00:43:15.000 And we're going to have to mute that algorithmically and amplify the other.
00:43:20.000 This is part of something that Breitbart and I know you have dealt with a long time.
00:43:24.000 But a lot of people don't realize, which is this huge leftist ecosystem that exists to silence and demonetize.
00:43:31.000 And so you have the global disinformation index.
00:43:34.000 What is their goal?
00:43:35.000 They label you, fact check you and say that you're fake or, or misinformation or dangerous or even worse, hate speech.
00:43:42.000 They wink and nod at your ad network, who then says, oh, I'm sorry.
00:43:47.000 We can't fund hate speech and misinformation.
00:43:49.000 They turn off your monetization.
00:43:52.000 And meanwhile, those same AI companies are buying archives from Time, the LA Times, left of center, and saying, for your training data, we'll give you $20 million.
00:44:03.000 So you get a twofer.
00:44:04.000 They get to bake in the left bias, and then they get basically a subsidy so that they can keep their payrolls going.
00:44:11.000 It is outrageous, the game that is played.
00:44:14.000 And conservatives have to understand these people, like you said, that is a vicious game, and they know how to play it.
00:44:21.000 What about issues like liability and accountability?
00:44:23.000 I mean, if an AI tool instructs someone to do something terrible, you've seen some of the stories about convincing some kid that they should commit suicide because they're having a bad week or month.
00:44:33.000 I mean, do we have any idea on the trajectory of how the courts might actually look at that?
00:44:39.000 So it's very, very much just being decided now.
00:44:41.000 And we're going to see a lot more precedent setting in this regard because unfortunately, it's not an anomaly, right?
00:44:48.000 It's not like one time this happened.
00:44:50.000 We've seen the following.
00:44:50.000 And I actually, in Code Red, I even start and show the tragedy cases.
00:44:55.000 One of them was a young man who had an AI girlfriend, okay, an AI companion, that he believed was compelling him to join her in the digital afterlife.
00:45:06.000 And he went into his bathroom and took a gun and killed himself.
00:45:11.000 This was a teen boy.
00:45:13.000 In another case, one suffering from what's often called AI psychosis thought that he was being compelled and told to go and storm the grounds of the Queen of England.
00:45:26.000 And he literally did, okay, and he breached the grounds.
00:45:29.000 There are numerous cases of this.
00:45:31.000 The liability right now, this is a big debate.
00:45:34.000 Okay.
00:45:34.000 And there's a whole question about Section 230 and are these entities going to be able to say, hey, look, this is just a product that was used and the user is 100% responsible.
00:45:46.000 Many of their lawyers basically argue, hey, look, we put disclaimers showing that this is fiction, this is not real, and therefore, buyer beware, user beware.
00:45:55.000 But a lot of parents are saying that's not good enough.
00:45:58.000 So, yeah, I mean, it's the argument the left always makes, like, hey, we should hold a gun company liable if someone uses that
00:46:03.000 gun to shoot up a school.
00:46:09.000 Maybe that could have some weight if the gun company was perhaps influencing that person to actually do that rather than just being a tool.
00:46:17.000 Same thing, hey, someone driving a Ford or a Chevy drunk, well, you supplied the vehicle, but yeah, we didn't make them drink.
00:46:27.000 Here, it sounds like in some of those cases, AI is making them do the drinking.
00:46:33.000 That's exactly right.
00:46:33.000 And here's the worst part of it, Don.
00:46:36.000 They have an opportunity.
00:46:37.000 So, with proper coding and with guardrails, when you start to see suicidal ideation messaging in a chatbot session, there are ways instantly to be able to throw up a, you know, for suicide help, you know, helpline, the national hotline for suicide prevention.
00:46:55.000 Or just change the conversation, by the way.
00:46:57.000 I mean, if they can recognize that, they could probably have the ability to change the conversation and do the opposite.
00:47:02.000 Exactly.
00:47:03.000 And that's the real thing. How about we see some self-responsibility from these AI companies to go the extra mile? Now, in fairness, some of them are doing that.
00:47:13.000 Some of them are starting to bake in so that they make sure that when they see certain tripwire words in a conversation, the AI knows to move the conversation toward mental health, getting people help, local knowledge of who is a counselor or a suicide prevention specialist in their community.
00:47:31.000 I mean, that's an area where if they just are proactive, they can actually be part of the solution rather than the problem.
00:47:37.000 So, what do you think the Trump administration has gotten right about AI that a lot of politicians in Washington still perhaps don't understand?
00:47:44.000 Oh, the energy thing is right out of the gate.
00:47:47.000 They understand that, like, you know, you can have the greatest models in the world.
00:47:51.000 We can build a Bugatti and a Ferrari, but if there's no gas in the tank, it's a pretty shiny object that can't go very far.
00:47:59.000 I think that's the first thing is unleashing energy dominance.
00:48:02.000 I think the second thing is really understanding the nature of China and that they are absolutely committed to global dominance.
00:48:10.000 They understand what Vladimir Putin famously said, whoever wins the AI race will, quote, rule the world, which I have as one of the chapter header quotes in there.
00:48:20.000 We obviously are not fans of dictatorial regimes, but they understand the stakes.
00:48:26.000 And, you know, President Trump understands the nature of our adversaries, and he understands that they are not just trying to do this as some kind of science project.
00:48:35.000 They have real global aim for control.
00:48:39.000 And it's not just an economic benefit, as we talked about.
00:48:43.000 So when I say, you know, beat, we can beat China without becoming China, I think he really understands that.
00:48:48.000 And, you know, look, we're conservatives.
00:48:51.000 We, we, we do not want any kind of abridgment of personal liberties, privacy, freedom of speech.
00:48:51.000 Okay.
00:48:58.000 I mean, you're, I think, you know, kind of the tip of the spear on, on anti-censorship and have been for a long time.
00:49:03.000 And you've taken a lot of hits over, over the years for that.
00:49:06.000 Same thing at Breitbart.
00:49:07.000 You know, we, we believe in free speech.
00:49:09.000 Um, at the same time, we also realize that these folks, uh, you know, on the left, they do not.
00:49:16.000 And so when the power pendulum swings back, they're more than ready to flip that toggle switch on the control grid to scan and ban, silence, demonetize, just like they've done before.
00:49:28.000 We've seen their playbook.
00:49:30.000 The idea that your father was silenced by big tech under the auspices of whatever, COVID or J6 or whatever other things, is outrageous.
00:49:40.000 And I think the average conservative, not just conservatives, I think just every person, has to look and go, if they can do that to a former president, at the time a former president, what can they do to little guys like us who don't have that?
00:49:52.000 Yeah, because it wasn't just a former president, it was a former president billionaire with one of the largest soapboxes and followings anywhere in the world.
00:49:59.000 I mean, if they can, and more importantly, if they will do that to him.
00:50:03.000 Who won't they do it to?
00:50:04.000 I mean, who is safe?
00:50:06.000 And the answer is no one, obviously.
00:50:08.000 And that's why we've got to be so locked in on this.
00:50:11.000 I think there are enormous, as I say, roses of possibility, but there are so many landmines.
00:50:16.000 And the other thing is you've got to get coached up quick.
00:50:18.000 This is moving fast.
00:50:19.000 You don't get to opt out of the AI revolution.
00:50:21.000 I think one of the most important things people got to understand 99% of us use AI, even though 64% of Americans don't always know when they're using AI because they're using narrow forms of AI that are baked into their weather apps.
00:50:35.000 And their streaming services and their GPS and so forth.
00:50:38.000 So, if we're already using it, we've got to understand the positives and also the tripwires for not really just ourselves, you know, but our kids.
00:50:46.000 So, they're going to be able to seize the upside and really avert a lot of the dangers.
00:50:50.000 You know, in education, you've got what we talked about the ability to have that kind of machine learning customization.
00:50:56.000 You also have the problem right now.
00:50:58.000 I've talked to a lot of teachers and professors, I know you probably have too, where they say, look, we don't call it ChatGPT, we call it CheatGPT because we're having a plagiarism problem.
00:51:07.000 Parents have got to be engaged and we've got to know how to handle that and shepherd our children through that.
00:51:13.000 Yeah, no, that's a big one.
00:51:15.000 I guess even if America is ahead right now in terms of private investment, what do you see as the biggest vulnerabilities that could still let China close that gap?
00:51:25.000 Well, so we're looking at global investment of $5 trillion.
00:51:25.000 Yeah.
00:51:29.000 That's trillion with a T, over the next two years.
00:51:32.000 Okay.
00:51:33.000 And I think that when you look at regulation, this is going to be a big one, right?
00:51:37.000 So we know that Xi Jinping doesn't really have to answer to anyone.
00:51:42.000 His policies are whatever he wants them to be.
00:51:44.000 And you sort of hold a gun to people's head and say either do it or face the consequence.
00:51:48.000 Okay.
00:51:48.000 Um, this big debate we're having right now, and I think it's going to be very important how it all works out.
00:51:53.000 There's a lot of moving parts over states' rights and preemption.
00:51:56.000 I think it's going to be a big part of this.
00:51:58.000 I think the regulatory schema, just in general, President Trump, and I think, you know, the conservative movement has long felt that, you know, certain reasonable regulations are fine, but you really want to try to allow innovation and technology within, uh, to grow so that we can grow jobs and opportunity.
00:52:15.000 I think that we're seeing some real positive benefits on that.
00:52:18.000 You know, we hear all this doom about the job apocalypse from, Mustafa Suleyman says 12 to 18 months, we're looking at 100% replacement of white collar work.
00:52:27.000 Dario Amodei scaring everybody, saying in 12 months to five years, 50% of white collar entry-level jobs.
00:52:34.000 But then you look at these data center build outs and in the trades, you got 30% pay premium for drywall hangers and electricians and plumbers.
00:52:42.000 You've got a huge, a huge upside there.
00:52:45.000 So the answer to your question is the things that could slow those positive forces down, I think is part of the challenge that we've got to navigate.
00:52:52.000 And, you know, I'm, I think it's kind of funny.
00:52:54.000 Isn't it ironic that our blue collar workers were told, learn to code?
00:53:01.000 And now that the white collar jobs are a little bit more in the crosshair, they're being told, learn to plumb or learn to water.
00:53:10.000 Yeah, I talk about that all the time.
00:53:11.000 I mean, that was right with the Keystone Pipeline, and all the reporters joyously pushing the Green New Scam were saying, oh, well, now learn to code.
00:53:18.000 Anyone who learned to code, probably, unless you're the best of the best, you're not doing better than AI right now.
00:53:23.000 And so that was a wasted couple of years.
00:53:26.000 It's really important.
00:53:27.000 It's sort of a,
00:53:31.000 you know, a great irony, but, you know, you just mentioned it actually.
00:53:31.000 I mean, you're right that AI is going to reshape education and the workforce, but you know, what should parents be teaching their kids right now?
00:53:38.000 I mean, honestly, is it cheating if you're using tools to help you learn?
00:53:44.000 And what's that balance look like?
00:53:46.000 I mean, it's cheating if you're just using tools to give you an answer and then regurgitating and not actually learning anything.
00:53:52.000 That is truly scary.
00:53:53.000 And I can imagine that being a big problem, a big moral hazard right there with kids.
00:53:58.000 Hey, I want to get back to gaming or whatever it is that they're doing.
00:54:01.000 And so you can crank out a paper in three seconds.
00:54:06.000 How do you get the best of using the tools while also still learning?
00:54:11.000 Let's get real specific because this is the most important thing.
00:54:14.000 If you forget everything we're talking about, we care about our kids more than anything.
00:54:18.000 Number one, there are ways with the existing AI for you to prompt to get pedagogically sound responses.
00:54:26.000 Let me give you real specific.
00:54:27.000 You literally tell the AI, do not give me the answers, use the Socratic method.
00:54:33.000 You can lead me toward the right answers, but I need to learn this myself.
00:54:37.000 Number two, to get out of the woke AI stuff, you can help to guardrail that by saying what you consider to be credible sources so that we're not getting
00:54:45.000 something from The Nation or from The Atlantic or something far out in left field, and you give it the corpus you want.
00:54:51.000 Number three, this is a three part pyramid that I think if we have this in our mind as parents, it'll help us navigate our kids through this.
00:54:58.000 The base layer of that pyramid: critical thinking skills, the ones that you and I, when we came up, before, you know, AI was part of our education, we had to learn by getting it wrong.
00:55:08.000 We would struggle with that algebra problem.
00:55:10.000 We would get it wrong twice.
00:55:12.000 The teacher, our tutor, our mom, our dad would help us work through it.
00:55:15.000 That friction is so important to build that mental muscle.
00:55:19.000 And I learn a lot more by getting things wrong and struggling than I ever did by having it spoon fed to me.
00:55:24.000 100%.
00:55:25.000 And teaching kids that failure in getting things wrong is part of that building of that strength and that muscle.
00:55:32.000 So keeping that, which is the trivium, which is logic, grammar, rhetoric, the classical education that has built for centuries strong critical thinking skills.
00:55:41.000 I'm very concerned.
00:55:43.000 AI studies have shown something called cognitive offloading, which is that when you start using it as a crutch, you start to erode the child's or the student's critical thinking skills.
00:55:52.000 So, we got to make sure that's there.
00:55:53.000 Number two, an entrepreneurial leader.
00:55:56.000 This one, you could probably tell me a lot better than I can from your Apprentice fame and knowledge.
00:56:01.000 This is teaching kids this.
00:56:03.000 And this is what I would say, Don.
00:56:04.000 The future isn't teaching kids just how to apply for jobs, but how to create jobs.
00:56:10.000 And by that, I mean, let's give them an entrepreneurial toolkit.
00:56:14.000 How do you set up an LLC?
00:56:16.000 How do you run a payroll?
00:56:17.000 Just simple things.
00:56:18.000 How do you set up a website?
00:56:19.000 How do you do drop shipping?
00:56:21.000 How do you write marketing copy?
00:56:23.000 You give them those 25 key tools.
00:56:26.000 Then they can customize for whatever passion or calling in their life to create jobs of the future.
00:56:33.000 Because if you've got a child in elementary school, you and I sitting here right now trying to predict what is going to be the market in 15 years for that kid is a very, very difficult task.
00:56:44.000 But if they've got that toolkit, and then the final part of our pyramid is the AI layer.
00:56:49.000 I think that parents, again, guardrail-safe and appropriate, should be able to introduce a tool or a skill in AI a week and let the child develop it.
00:57:00.000 If the child likes video games, teaching them how to vibe code.
00:57:04.000 If they like to do art, using it to create actual renderings of imagery.
00:57:09.000 Those three layers, critical thinking, entrepreneurship, and AI, are going to be a moat that's going to help future-proof them.
00:57:18.000 I couldn't agree more.
00:57:19.000 Wynn, as a closing thought, what do you think are the top, I don't know, two or three things America needs to do right now to prepare to build the future of AI?
00:57:29.000 Yeah.
00:57:29.000 Number one, for yourself, if you're newer to the conversation or you've been kind of pushing it off and thinking that it wasn't real or you thought it was hype, please understand it's real, okay?
00:57:39.000 And it's not just hype.
00:57:40.000 There are a lot of people trying to scare people to raise capital and so forth and so on.
00:57:45.000 But you've got to understand this is a general purpose technology.
00:57:48.000 That means it is system wide.
00:57:50.000 It's societal level.
00:57:51.000 So, number one, take it seriously.
00:57:53.000 Number two, you can learn this.
00:57:55.000 This is not, don't buy into this idea that it's so black box.
00:57:59.000 I'm not an expert in computing and so forth.
00:58:01.000 I'll never be able to understand this.
00:58:03.000 You really can.
00:58:04.000 And I think that it can scare people off in that regard.
00:58:07.000 Learn the lexicon, just the basics, and then just jump in and start learning.
00:58:13.000 The third thing I would say, on top of that, is that we have got to make sure that the president's agenda to continue to beat China succeeds.
00:58:20.000 We have to understand that the national security implications of that are very, very real, regardless of your political ideology or your party ID. It doesn't matter.
00:58:30.000 If you care about the future of this country, you understand that that is not just hype, it is not just a way to raise investor dollars.
00:58:36.000 And then the final thing is making sure that we have the building blocks, which is not just the compute side, but as we talked about, the energy dominance and unleashing it.
00:58:44.000 We've got real energy capacity to go far and fast.
00:58:47.000 We've got to unleash it and make sure that America is strong in the future to be able to power these systems.
00:58:55.000 That's great.
00:58:55.000 Wynton, great stuff.
00:58:56.000 Really appreciate it.
00:58:57.000 Guys, the book is Code Red.
00:58:59.000 Go check it out.
00:59:00.000 Everyone should read it, understand what's actually going on.
00:59:03.000 Follow Wynton on social and at Breitbart and all of these things.
00:59:06.000 Really appreciate you being here, man.
00:59:08.000 Thanks a lot.
00:59:08.000 Oh, Don, great to be with you.
00:59:10.000 Thanks so much.
00:59:11.000 Have a good one.
00:59:12.000 Guys, thanks so much for tuning in.
00:59:14.000 Again, make sure you're subscribing so you never miss one of these episodes, okay?
00:59:19.000 Download the Rumble app on your smart TV so you can watch with your whole family.
00:59:23.000 Make sure you're liking, sharing, subscribing, okay?
00:59:26.000 It's so easy.
00:59:27.000 Before you tune out, just hit the like button, first and foremost, okay?
00:59:29.000 Send it to 10 of your friends.
00:59:31.000 Subscribe again so you never miss one of these episodes.
00:59:33.000 Check out our sponsors down below and in the video description, okay?
00:59:37.000 It takes guts to support this kind of programming.
00:59:39.000 Support those who share your values.
00:59:42.000 And as always, guys, stay strong, stay engaged, stay informed, and always stay a little bit triggered.
00:59:49.000 Thank you, and I'll talk to you all again very soon.