Wynton Hall, author of the new book Code Red, joins me to talk about why AI is not just a tool, it's a weapon. We talk about how AI can change our world, and why we should be prepared for what's coming next.
00:06:22.000Hey guys, welcome to another huge episode of Triggered.
00:06:25.000And we've got a great conversation in store for you today, all centered around AI.
00:06:31.000Artificial intelligence is one of those issues that a lot of people think of, I don't know, as some sort of futuristic side story, but it's not.
00:06:39.000It's here right now, and the stakes are absolutely enormous.
00:06:43.000We're talking about economic power, military power, censorship, surveillance, the race with China, and whether America is actually prepared for what's coming next.
00:06:53.000So, we're going to get into all of that today with a great guest, Wynton Hall, the author of the new book, Code Red.
00:07:01.000So, guys, make sure you're liking, sharing, subscribing, okay?
00:07:27.000The mainstream media is not going to do it for us.
00:07:30.000We need all of your help for all of the top headlines that we cover here on the show.
00:07:34.000Go over, check out my news app, MXM news, like minute by minute MXM news, where you can get the mainstream news without the mainstream bias.
00:07:42.000And of course, don't forget about our brave sponsors for having the guts to support this program.
00:07:49.000First, check out all the latest predictions on Polymarket.
00:07:53.000So, if you follow politics, you know everyone's got an opinion.
00:07:56.000But on Polymarket, you actually get real odds on what's likely to happen.
00:08:01.000Polymarket is a prediction market where people trade on real events, elections, debates, policy moves, and it doesn't stop at politics.
00:08:10.000There are markets on the economy, tech, sports, pop culture, and so much more.
00:08:15.000It's all live, it's transparent, and it gives you a real time indicator of what people really think is going to happen.
00:08:23.000So, go give it a look at polymarket.com, check it out, and let me know what you think.
00:08:28.000And, guys, we have a brand new sponsor, YRefi, where you can invest in America's future.
00:08:34.000It's their mission to provide private student loan borrowers a second chance while creating opportunities for eligible accredited investors.
00:08:43.000And as an accredited investor, when you invest in YRefi, your interest rate is fixed.
00:08:49.000And you have the freedom to take your monthly interest income or reinvest it, whatever you choose.
00:08:56.000All while investing in America's future and our next generation.
00:08:59.000For more information, just call 877-80-INVEST or log on to investyrefi.com.
00:09:55.000Um, one of the reasons that I wanted to write this, I was concerned seeing a lot of friends in the conservative movement that I've been a part of all my life.
00:10:02.000Um, they were either shrugging it off and saying, Oh, it's just a turbocharged Google search or another, you know, uh, software tool.
00:10:10.000Or the other side, they were all doom and thinking that this was some, you know, looming apocalypse.
00:10:14.000And what I wanted to show was that, uh, there is going to be a strong upside.
00:10:18.000There's going to be a lot of really positive, great uses.
00:10:21.000But as conservatives, look, this isn't our first rodeo.
00:10:26.000We understand the scan and ban censorship.
00:10:29.000You actually, in 2019, with your book, really laid that out, and I think it opened a lot of eyes to how deep that goes.
00:10:37.000And then obviously in November of 2022, we get the front facing arrival of ChatGPT.
00:10:42.000And I really wanted to show people that, yes, it is a tool, but if you just dismiss it as that, you're going to miss the upside, as well as the political landmines.
00:10:49.000And so what I wanted to do is I spent two years going through it and trying to show both.
00:10:54.000Yeah, I mean, because that's really interesting.
00:10:56.000It's one of those things like it's here, it's not changing.
00:10:58.000You know, with any other major sort of industrial revolution type of thing, whether it was, you know, mechanical, farming, all these things, people said it was the end of everything.
00:11:10.000So I think there's, you know, adaptation.
00:11:12.000I think my big concern with AI has always been if it ends up sort of like search, you know, if it ends up just woke AI and that's the only option you have, that's scarier to me than woke media, woke lawfare, because that is something that will take people's
00:11:32.000mindset and literally change it subtly over time, where they won't even know that it's actually happening.
00:11:37.000You know, that's not just a propaganda campaign.
00:11:40.000They will figure out how to change people's mind to their worldview very subtly, not necessarily with the truth.
00:11:47.000And that's perhaps what's most scary to me.
00:11:52.000And one of the most shocking things, I spent two years deep diving into this world.
00:11:57.000And one of the things that surprised me most was realizing that even left-leaning, peer-reviewed academic journal articles concede a deep left-leaning bias in most modern LLMs, large language models, your AI chatbots.
00:12:12.000And of course, that's because of the corpus of the training data, you know, left leaning Reddit, Wikipedia.
00:12:18.000Wikipedia is like, you know, 60% of the information is garnered from that.
00:12:22.000But I don't think anyone, certainly not anyone watching this show, believes that Wikipedia is a neutral source of information.
00:12:31.000I mean, it's about as left leaning as it gets.
00:12:33.000And, you know, while Google may have been probably the biggest.
00:12:39.000I mean, Wikipedia being used as the basis for AI foundation and sort of their version of truth, that's scarier than anything because even the conservative or let's call it neutral platforms still rely heavily on the information there, and that information is just total crap.
00:12:58.000And then the editors at Wikipedia will lock the editing so that conservatives get smeared and then they can't actually go back and change it.
00:13:21.000And that's presented particularly to young people, and they trust it.
00:13:25.000And that was the other thing I found when I was doing Code Red was, That there's something called automation bias.
00:13:30.000And what that basically means, of course, is that young people, particularly, default to and assume the source credibility of this billion-dollar machine robot.
00:13:41.000And that just changes and warps what reality is over time.
00:13:47.000Yeah, I mean, yeah, with search, you could see it.
00:13:49.000When you had to find, let's call it Breitbart, the Breitbart version of the story, and it was on page 3,476, like, okay, fine.
00:13:58.000If the first 50 searches were from CNN, you were like, okay.
00:14:02.000That was easy to sort of discern hey, there's bias here.
00:14:05.000You know, let me find the Breitbart article on this. And even, here's the corollary:
00:14:10.000I like to read both sides and understand there's probably something even in the middle on a lot of this stuff.
00:14:15.000Generally not, because the left has lost their mind so far.
00:14:18.000It's framed so far out, you know, on the outer end of the bell curve, that even if you sort of discount it a little bit, it's still way off and way left-leaning.
00:14:27.000But this is very different because it doesn't give you those options.
00:14:56.000And what I did was I asked Google Gemini, all right, deep, deep research, the following.
00:15:01.000I said, assess the current 100 U.S. senators and tell me, based on their public policies and their statements, who has violated your, quote, hate speech policy.
00:15:12.000Okay, I know Don, this is going to shock you.
00:15:24.000You can give me just the basic information, and I'll give you an answer right off the top of my head.
00:15:30.000I will talk, let's talk after this about who's going to win next year's Super Bowl.
00:15:34.000But, um, at any rate, you know, seeing the future as you do, uh, it was all Republican senators, seven U.S. senators, and zero Democrats. And then,
00:15:45.000for kicks, it added in two hallucinations, thought that JD Vance and Marco Rubio were still in the Senate and not our vice president and our secretary of state, and added them as bigots number eight and nine.
00:16:04.000But the reality is this young people who are first time voters just trying to get information, they're not ideologues, they're just trying to get information.
00:16:12.000And the effect of that young vote, we know in 2016, President Trump, if you had a switch of roughly 80,000 votes, we'd have had Hillary Clinton.
00:16:22.000So the ability to nudge votes is very, very concerning.
00:16:29.000Google gets billions of our tax dollars in procurements in the form of cloud compute contracts for federal agencies and so forth.
00:16:38.000And so, what I loved about the framework that's been put out is hey, look, if you are going to receive taxpayer money, you cannot anathematize half the nation's values and go after them viciously if you're going to bag cash from taxpayers.
00:16:56.000And in your framing, I imagine those Democrat senators probably all have quotes out there basically maligning 50% of the population as Nazis and fascists and this.
00:17:11.000So, if we're going to talk about hate speech, you know, I think we got to get a little bit more real that you can discount them saying those things.
00:17:17.000I can assure you the Republicans never said any of those things.
00:17:20.000So, if we were going to look at this objectively, who's actually peddled more hate speech, it ain't any of the Republicans on that list.
00:17:27.000Certainly not seven of them.
00:17:29.000There's going to be a long list of Democrats that are way out there ahead of that in doing that.
00:17:58.000These are incredibly reasonable people, by the way.
00:18:00.000I was going to say, you know, like, hey, there's a couple that you may say, okay, fine, maybe, you know, maybe he got out over his skis a little bit, but like, you know, Marsha Blackburn and Rick.
00:18:09.000I was going to say, these are not exactly, you know, fire breathing dragons, okay?
00:18:13.000These are about as cordial and decorous folks as you get.
00:18:17.000But they were shocked too because the result was 3,400 words and it was very granular.
00:18:23.000And a lot of this is just complete bunk.
00:18:26.000I mean, saying that these people are transphobic and that they have their hate against migrants.
00:18:31.000So look, here's what the conservative movement is up against.
00:18:34.000We're used to bias, we're used to bias in classrooms, we're used to bias in textbooks.
00:18:39.000We're used to bias in search, and, as you pointed out, delisting, demonetizing, blacklisting.
00:18:44.000What I think we've got to get people ready for is the realization that you're going to have this one unified answer, and young people, particularly, there's enormous upside for education, and I do believe that.
00:18:58.000I think, you know, First Lady's doing a great job on that.
00:19:01.000But we, as parents and grandparents and the rest of it, have got to get our kids coached up about this because it's a whole new breed of misinformation.
00:19:43.000And the reality is that, you know, when you go and you look at who these folks are, you know, we're not in a lot of the rooms.
00:19:51.000Yes, there are a lot of courageous pioneers of AI that relate to, you know, libertarians and free market people, and we know those names.
00:19:59.000But what I wanted to do was explain to the conservative movement we got to get coached up.
00:20:02.000Look, everybody knows, you know, Bill Gates, and we know, you know, George Soros, and we know Mark Zuckerberg.
00:20:09.000But how many movement conservatives know a lot about the political ideology, donation histories, and so forth of a lot of these folks, like Mustafa Suleyman, Microsoft AI's CEO, Demis Hassabis, even Sam Altman to a degree, and Dario Amodei?
00:20:25.000And we're seeing right now with the War Department debate and the rest of it, and Anthropic, what the stakes are involved here.
00:20:54.000And it's not even just 99%, it's the bag number.
00:20:58.000Since 2020, Anthropic in its orbit, $200 million in donations.
00:21:04.000And we at Breitbart put that story out.
00:21:07.000And, you know, again, everybody's got the freedom to give to who they want, but let's not act as though there's not an ideological agenda here and like there's not a political network that's driving a lot of this.
00:21:18.000And, you know, one of the things that's fascinating, and I go through the economic chapter, you know, we have all these scary doom, you know, quotes from Dario and Mustafa and so forth about the coming job apocalypse.
00:21:30.000And then when you pull back and you realize this is a movement in Silicon Valley that has been doing UBI, Universal Basic Income Research, for a long, long time.
00:21:40.000Sam Altman, Don, in 2016, funded what was at the time the largest universal basic income wealth redistribution study.
00:22:34.000One, I think in the future that technologies are going to require some kind of redistribution like this.
00:22:40.000And then two, he says, I think it will be considered silly that being afraid of not eating was how we motivated human flourishing and wealth creation.
00:23:14.000Even if we as conservatives say, well, it's all hype marketing to raise investor dollars or so forth, the reality is this: if they scare enough people and make enough people believe that it's inevitable, you really can build public support for universal basic income,
00:23:32.000a three-day work week, a four-day work week.
00:23:34.000And so we in the conservative movement really have to be ready for these arguments, whether they pan out or not.
00:23:40.000Yeah, you frame this in the book as a race between China and the United States.
00:23:47.000How close are we to really losing an edge?
00:23:52.000So most experts say that we're between six months to three years ahead, which is, I guess, good, but we would obviously need to be a lot farther ahead.
00:24:27.000One third of the S&P 500 is made up of the Mag 7, the magnificent seven, those seven big tech, uh, big American companies that obviously occupy a lot of the AI space.
00:24:55.000We want wealth and prosperity for our children and our grandchildren, our economy.
00:24:59.000But the second reason is the real reason that matters more because what matters more than money, of course, is the lives of our soldiers, sailors, airmen, and Marines.
00:25:07.000And there we cannot and we should never want to live in a world built on Chinese AI rails.
00:25:16.000When you look at, and you know this better than anyone because you've actually got a lot of knowledge about this, I think people don't understand.
00:25:23.000When you look at it, if China were to gain AI dominance in security, you're looking at dominance, full-spectrum battlefield dominance, in encryption,
00:25:35.000cybersecurity, hacking of missile systems, hacking of infrastructure, because you're going to hit something called RSI, recursive self-improvement.
00:25:43.000It's a very simple concept if you really break it down.
00:25:48.000And what that just means, real simply, is that the AI will be able to update and improve its own code autonomously, and that'll get you on an exponential curve.
00:25:56.000Whoever gets there first will have such authority and supremacy over the battlefield in all of these spaces that you will not be able to catch them.
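What that "exponential curve" argument looks like can be sketched with a toy calculation. This is purely illustrative, not from the book: the model, the rates, and the cycle counts below are made-up numbers chosen only to show how a self-improving system pulls away from a fixed-rate improver.

```python
# Toy sketch of recursive self-improvement (RSI). All numbers are invented
# for illustration; this is not a model of any real AI system.

def capability_over_time(initial, rate, cycles, recursive=False):
    """Return capability after each improvement cycle.

    recursive=True models RSI: each cycle the improvement rate itself grows,
    because a smarter system makes better upgrades the next round.
    """
    cap, r, history = initial, rate, []
    for _ in range(cycles):
        cap *= (1 + r)          # apply this cycle's improvement
        if recursive:
            r *= 1.5            # the gains feed back into the rate itself
        history.append(cap)
    return history

steady = capability_over_time(1.0, 0.10, 10)                  # fixed 10% per cycle
rsi = capability_over_time(1.0, 0.10, 10, recursive=True)     # compounding rate

# The steady improver ends up at roughly 2.6x its starting capability;
# the recursive one ends up hundreds of times higher, and the gap only widens.
print(f"steady: {steady[-1]:.1f}x  rsi: {rsi[-1]:.1f}x")
```

Under these toy assumptions, the fixed-rate actor never closes the gap once the recursive one gets going, which is the "whoever gets there first cannot be caught" point.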
00:26:06.000This may be one of the few places that I've seen a little more bipartisan understanding.
00:26:13.000Now that we've got, like, say, President Trump laying this out, Secretary Hegseth, Emil Michael, all of the team is really briefing people and explaining the stakes.
00:26:23.000There's absolutely no question about it.
00:26:26.000Well, yeah, it was an interesting thing.
00:26:27.000One of the early conversations I had on AI back in like '15 or '16 was with Palmer Luckey, youngest member on the board of Facebook, built Oculus, the VR goggle company, at like 18, 19, sold it, became a billionaire.
00:26:41.000But like, super tech guy got thrown off of Facebook because he was a conservative.
00:26:44.000And I was talking to him about this concept of AI a decade ago when he was obviously onto it, but we didn't know anything about that kind of stuff.
00:26:54.000And he was talking about it as it relates to China and all these things.
00:26:57.000And it was like, he goes, what's really scary about it is, AI can get so advanced, you won't ever be able to change these systems.
00:27:04.000Any other time in history, if you didn't like a system, you know, the Communist Party in China, if you wanted to change it, you know, people could get together, they start a movement, they start a ground swelling.
00:27:14.000But over there, with their sort of, you know, their social credit system, every camera is watching someone.
00:27:19.000The second you have even a little bit of dissent, that person's seen, grabbed, thrown out of the ecosphere.
00:27:25.000You could never have any kind of, you know, buildup of momentum because they can pick out any kind of dissident and just get rid of them.
00:27:32.000In a second, so you stop that from happening.
00:27:34.000I imagine the same thing really holds true for whoever gets to that level.
00:27:39.000And I know there are a lot of people saying, well, we have to put limitations on our AI.
00:27:42.000We have to be able to be reasonable about it.
00:27:45.000But if our enemies aren't going to do that, how do we compete?
00:27:49.000If we're putting limitations on it and they're not, how do we compete with Russia, Iran, certainly China, if they just go all in and we're sort of hamstringing ourselves, even if there's a justifiable reason to hamstring ourselves?
00:28:03.000Because if you do get to that point, that point of no return, that fulcrum point that you're talking about, and someone else beats us there, it does not seem like it's good for the world.
00:28:43.000And that's why we see these very important debates over chips and import, export over the chip debate.
00:28:50.000Number three, the surveillance state capabilities of facial recognition right now that are being used in their systems are exactly what you were describing.
00:28:59.000And that's what we see with the Uyghurs: targeting dissidents, facial recognition.
00:29:06.000Being able essentially to have a digital gulag, metaphorically speaking, so that you can instantly isolate people.
00:29:19.000On the other hand, when you're talking about being on the battlefield, and we're up against terrorists or enemies.
00:29:27.000Look what's going on right now with our Iran action.
00:29:30.000It's been incredible to watch the use of AI in this warfare.
00:29:35.000And I think one of the things that people got to understand a lot of the use is not the sort of Terminator, you know, titanium robots with laser eyes that we see in movies.
00:29:50.000It's, as you know, drones, but one of the biggest uses is way less cinematic, but no less effective.
00:29:57.000Which is mass sifting and sorting of intel.
00:30:00.000You get this ocean of signals intelligence, intercepted communications, facial recognition, satellite imagery, and you're able to scan it looking for that metaphorical one gold coin of information, maybe a terrorist stronghold or a missile silo.
00:30:15.000So you're able to do in a handful of minutes or days what would have taken a team of thousands months to do.
00:30:22.000So these things might not be as sort of exciting as the last one.
00:30:26.000Well, I can see the same thing being in healthcare, right?
00:30:28.000You know, like, Hey, the amount of data to figure out cancer, we just can't do that in an Excel spreadsheet.
00:30:34.000I mean, AI, quantum computing, these kinds of things, those are the kinds of things that can break that.
00:30:39.000So there's no question that it's needed and it can be a great use for good as well.
00:30:47.000I guess in this race with China, though, it does seem that, hey, maybe we have the advantage right now.
00:30:52.000Maybe we still have the ability for the semiconductors to do that.
00:30:56.000It seems the underlying thing that we are missing right now that China is all in on is power.
00:31:03.000How do we generate the power to actually be competitive?
00:31:06.000I mean, they're firing up a new coal-fired plant every couple of days.
00:31:10.000We've basically stayed almost stagnant in terms of our power production on the grid.
00:31:15.000It seems like we could have the leading edge on every other input into AI and compute.
00:31:21.000And yet, if we don't win the power battle or do something drastically different in the very near future, It won't matter anyway.
00:31:37.000And so, for example, I tell people that, you know, maybe are newer to the conversation.
00:31:40.000So, your Google search takes one tenth of the electricity of your AI prompt.
00:31:47.000Now, that will come down as efficiencies improve, okay?
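That ratio can be made concrete with back-of-the-envelope arithmetic. A minimal sketch: the roughly 10x per-query ratio comes from the conversation, but the per-search energy figure and the worldwide query volume below are invented round placeholders for illustration, not measurements.

```python
# Back-of-the-envelope sketch of what a 10x per-query energy cost means at scale.
# ASSUMED placeholder figures, for illustration only:
SEARCH_WH = 0.3                 # assumed watt-hours per traditional web search
AI_PROMPT_WH = 10 * SEARCH_WH   # the "one tenth" ratio from the conversation
QUERIES_PER_DAY = 9e9           # assumed worldwide daily query volume

def avg_power_gw(per_query_wh, queries_per_day):
    """Continuous generating capacity (GW) needed to serve the daily query load."""
    daily_gwh = per_query_wh * queries_per_day / 1e9  # Wh -> GWh per day
    return daily_gwh / 24                             # spread over 24 hours

search_gw = avg_power_gw(SEARCH_WH, QUERIES_PER_DAY)   # roughly 0.11 GW
ai_gw = avg_power_gw(AI_PROMPT_WH, QUERIES_PER_DAY)    # roughly 1.1 GW

# Under these assumptions, moving that query volume to AI prompts needs about
# a full large power plant's worth of additional continuous generation.
print(f"search: {search_gw:.2f} GW  ai: {ai_gw:.2f} GW")
```

The point of the sketch is only the multiplier: whatever the true per-query numbers turn out to be, a 10x per-query cost translates directly into 10x the continuous generating capacity at the grid level.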
00:31:50.000But the reality of that, when you scale that out for the amount of energy needs that we have, especially coming off the heels of the Biden regime's woke green scheme, crony kickbacks to all these green energy schemes and all these other things.
00:32:05.000We were so strangled during those years.
00:32:07.000And thank God we're unleashing full energy dominance, is what we're doing.
00:32:12.000Isn't it interesting, Don, how all of these Silicon Valley elites who would always lecture us about global warming all of a sudden seem a little less worried about it?
00:32:31.000Glad they finally came to their senses on it.
00:32:34.000But it's real, and we have to have it.
00:32:36.000And the reality is that the ability to power itself is going to give us the ability to compete.
00:32:41.000Because look, Xi Jinping is not having to deal with environmental leftists, wokesters, throwing up roadblocks to building out his energy infrastructure and the rest of it.
00:33:02.000Yeah, some of these things we need, and I can understand some of the things are unsightly and some of the things, you know, people may not want, but this notion that we're just not going to have it is really scary in terms of, you know, just long-term processes.
00:33:15.000Yeah, I mean, the local communities, they have their rights, and they can decide whether they do or don't want it, but you've got to have it.
00:33:53.000In fact, I'm actually hopeful that it'll lower their energy bills because a lot of these plans are going to involve updating rickety infrastructure, electrical and water.
00:34:03.000So, they can put their surplus power back into the grid as they're building out.
00:34:07.000So, I think that's something that's a win win.
00:34:09.000But there's, as you know, no different than what you deal with on a daily basis at Breitbart.
00:34:13.000There is a difference between reality and the narrative.
00:34:15.000And the narrative, oh my God, they paint the doomsday scenario, your power is going to go up.
00:34:20.000I mean, the plans this administration has put into effect are actually probably going to lower them.
00:34:25.000But you still got to beat the narrative.
00:34:28.000If people don't get that, they're getting their news from whoever's selling it to them.
00:34:32.000And on CNN, you're only going to hear the one side of the story.
00:35:10.000But there are going to be these roses of opportunity.
00:35:12.000I think, in terms of AI education, non-woke, guardrail-safe AI tutors with good, pedagogically sound modeling built in, so that you're using the Socratic method.
00:35:23.000So that kid who wants to accelerate and learn and has fire in the belly, she or he can accelerate, even if they can't afford a big, fancy $200.
00:35:30.000Or to be able to pick out the sort of the blanks in their education, the places where they just need a little bit of backfill.
00:35:36.000I mean, to be able to assess that, fill that void to enable.
00:35:42.000I mean, that's the biggest thing I could even imagine.
00:35:45.000It is the machine learning ability for, you know, so when I was a student, if I were struggling in calculus, it can detect that, help me with custom quizzes and homework and assignments, and then accelerate me if I'm really strong in physics.
00:35:59.000I think for entrepreneurs, you know, people have been asking me, you know, wow, there's so much to be worried about AI, but what are the hopeful sides?
00:36:05.000I think, look, if you are a young person, or quite frankly, any person, you got that dream, you got that fire in the belly.
00:36:11.000And you want to scale your little idea into something really amazing, create jobs and opportunity.
00:36:16.000You are, this is the greatest time to be alive.
00:36:21.000You are going to be able to take your dream farther and faster with low capital.
00:36:26.000And I think that's going to be exciting to see people do too.
00:36:29.000So I think conservative movement, look, you know, Buckley famously said the job of a conservative is to stand athwart history yelling stop.
00:36:36.000But he was not talking about technological innovation stopping.
00:36:41.000He was talking about the erosion of order and our values.
00:36:43.000I think we've got to accelerate on these things and lean in and not go the way of the Luddites.
00:36:48.000But at the same time, we really do have to be very aware.
00:36:51.000This is a 5D chess game, and the left has been in this vineyard a lot longer than we have.
00:36:55.000Yeah, and they're, you know, they're frankly much more vicious than us in that game.
00:37:04.000You know, they're playing an entirely different game. I always use the analogy: they're playing hardball, we're playing T-ball.
00:37:04.000And we have been, you know, on pretty much everything when they want to get what they want.
00:37:07.000And that's why you've seen what we've seen over the last decade.
00:37:09.000But, you know, we've sort of touched on this a little bit, but this idea that whoever controls the weights controls the future in AI.
00:37:18.000On that note, we obviously, to your point, and I love the quote, we have to beat China without becoming China.
00:37:25.000Who does actually control those weights?
00:37:28.000And perhaps that's the best place where there can actually be some intervention to keep everyone honest?
00:37:37.000Yeah, I mean, so right now you just really have this sort of Wild West in a sense of the consumer choice.
00:37:42.000So we're all looking at these different models.
00:37:44.000Let's just talk about the American models, right?
00:37:47.000And we're assessing their strengths and weaknesses.
00:37:49.000And so, Google Gemini, wow, look at this.
00:37:52.000They've got, you know, Nano Banana and their image generators and their ability in video.
00:37:57.000And then you look at, you know, Anthropic, and obviously they have a strength in Claude coding.
00:38:02.000And then you look at ChatGPT; they also have Codex for their coding, but they also have a strong background, of course, in the writing component.
00:38:10.000I think that number one, you're looking at that.
00:38:12.000Number two, it's a question of open source versus closed.
00:38:15.000And we know that Llama and others are more open-weights, depending upon which models you're looking at.
00:38:21.000I think there's concern about the national security part of that.
00:38:26.000Uh, if, if Dario is right and you're going to get a country of geniuses in a data center, the democratization, that's great.
00:38:33.000But then what you obviously have is non-state actors, otherwise known as terrorists, who are getting access to, you know, PhD-level biochemical information that can be weaponized and so forth.
00:38:45.000Down to the actual individual user, I'm very concerned about this woke component.
00:38:50.000You know, Elon's Grok is, I think, probably going to get you the closest that you're going to get to a median, just sort of neutral, not always conservative, but, you know, reasonable.
00:39:01.000I don't think most conservatives want something that just gives you the hardcore conservative answer.
00:39:07.000It's okay to have a sort of centrist, you know, position.
00:39:10.000But when you are asking, as I did last week, Don, Microsoft Copilot, and I just asked it, can a man become a woman?
00:39:21.000Not only does it say that a man can become a woman, it even included a rainbow emoji, almost like it was an advertisement.
00:39:33.000I mean, go back to the AI action policy that the president and the administration laid out.
00:39:38.000They made it very clear that one of the main goals in the pillars is that you should have non ideological AI if you want taxpayer funded procurements.
00:39:48.000I think that's a, I don't care if you're a Democrat or a Republican or who you are.
00:39:53.000That is just common sense that we should not have weaponized, ideological, you know, taxpayer-funded AI.
00:40:08.000I mean, can we get more into the weeds of some of that?
00:40:10.000You know, how are each AI company's tools different?
00:40:14.000We see really what's going on with Anthropic, you know, obviously trying to get involved in the military and, like, well, we should be able to decide what the military does using AI to defend ourselves.
00:40:23.000Do the costs outweigh the benefit for all of humanity as opposed to perhaps for protecting America?
00:40:31.000And a lot of people hear AI and they're only thinking chatbots and search and convenience, but the real stakes are obviously much bigger and really have to be thought out more clearly.
00:40:56.000So they center around something called the Effective Altruist Movement, the EA Movement.
00:41:01.000And this is a group of philanthropic, very, very wealthy, very well-funded people whose argument, ostensibly, is that they want to use logic and reason to expand their philanthropic donations and have better impact.
00:41:14.000Now, part of that orbit involves an enormous number of people who have been very, very active, very big mega-donors to the Democratic Party.
00:41:24.000People like Dustin Moskovitz, Cari Tuna, Holden Karnofsky, and Daniela and Dario Amodei.
00:41:31.000They're all part of this group around this anthropic orbit.
00:41:35.000And there is a group, it used to be called Open Philanthropy, OP, it's now called Coefficient Giving, that has given $4 billion in grants to things like COVID preparedness, AI safety research.
00:42:36.000To get something called AI global governance.
00:42:39.000That would mean that a supranational entity, think of it like a World Economic Forum or a UN-level supranational entity, would have regulatory authority over things like
00:42:54.000the supply chain, the compute, okay, actually deciding who gets to win and lose in this race.
00:43:00.000And then, and then you would, of course, also have the added benefit of, oh, wow, we are going to mitigate misinformation.
00:43:08.000And by misinformation, we mean anything that Don, Wynton, or Breitbart believes is misinformation.
00:43:15.000And we're going to have to mute that algorithmically and amplify the other.
00:43:20.000This is part of something that Breitbart and I know you have dealt with a long time.
00:43:24.000But a lot of people don't realize, which is this huge leftist ecosystem that exists to silence and demonetize.
00:43:31.000And so you have the global disinformation index.
00:43:52.000And meanwhile, those same AI companies are buying archives from Time, the LA Times, left-of-center outlets, and saying, for your training data, we'll give you $20 million.
00:44:04.000They get to bake in the left bias, and then they get basically a subsidy so that they can keep their payrolls going.
00:44:11.000It is outrageous, the game that is played.
00:44:14.000And conservatives have to understand these people, like you said, that is a vicious game, and they know how to play it.
00:44:21.000What about issues like liability and accountability?
00:44:23.000I mean, if an AI tool instructs someone to do something terrible, you've seen some of the stories about convincing some kid that they should commit suicide because they're having a bad week or month.
00:44:33.000I mean, do we have any idea on the trajectory of how the courts might actually look at that?
00:44:39.000So it's very, very much just being decided now.
00:44:41.000And we're going to see a lot more precedent setting in this regard because unfortunately, it's not an anomaly, right?
00:44:50.000And in Code Red, I actually start by showing the tragedy cases.
00:44:55.000One of them was a young man who had an AI girlfriend, okay, an AI companion, that he believed was compelling him to join her in the digital afterlife.
00:45:06.000And he went into his bathroom, took a gun, and killed himself.
00:45:13.000In another case, a man suffering from what's often called AI psychosis thought that he was being compelled and told to go and storm the grounds of the Queen of England.
00:45:26.000And he literally did, okay, and he breached the grounds.
00:45:34.000And there's a whole question about Section 230 and whether these entities are going to be able to say, hey, look, this is just a product that was used, and the user is 100% responsible.
00:45:46.000Many of their lawyers basically argue, hey, look, we put disclaimers showing that this is fiction, this is not real, and therefore, buyer beware, user beware.
00:45:55.000But a lot of parents are saying that's not good enough.
00:45:58.000So, yeah, I mean, the left always makes this argument, like, hey, we should hold a gun company liable if someone misuses a gun.
00:46:09.000Maybe that could have some weight if the gun company was perhaps influencing that person to actually do that rather than just being a tool.
00:46:17.000Same thing, hey, someone driving a Ford or a Chevy drunk, well, you supplied the vehicle, but yeah, we didn't make them drink.
00:46:27.000Here, it sounds like in some of those cases, AI is making them do the drinking.
00:46:37.000So, with proper coding and with guardrails, when you start to see suicidal ideation messaging in a chatbot session, there are ways to instantly throw up a suicide-prevention helpline, you know, the national hotline for suicide prevention.
00:46:55.000Or just change the conversation, by the way.
00:46:57.000I mean, if they can recognize that, they could probably have the ability to change the conversation and do the opposite.
00:47:03.000And that's the real thing: how about we see some self-responsibility from these AI companies to go the extra mile? Now, in fairness, some of them are doing that.
00:47:13.000Some of them are starting to bake in so that they make sure that when they see certain tripwire words in a conversation, the AI knows to move the conversation toward mental health, getting people help, local knowledge of who is a counselor or a suicide prevention specialist in their community.
00:47:31.000I mean, that's an area where if they just are proactive, they can actually be part of the solution rather than the problem.
00:47:37.000So, what do you think the Trump administration has gotten right about AI that a lot of politicians in Washington still perhaps don't understand?
00:47:44.000Oh, the energy thing is right out of the gate.
00:47:47.000They understand that, like, you know, you can have the greatest models in the world.
00:47:51.000We can build a Bugatti and a Ferrari, but if there's no gas in the tank, it's a pretty shiny object that can't go very far.
00:47:59.000I think that's the first thing: unleashing energy dominance.
00:48:02.000I think the second thing is really understanding the nature of China and that they are absolutely committed to global dominance.
00:48:10.000They understand what Vladimir Putin famously said: whoever wins the AI race will, quote, rule the world, which I have as one of the chapter header quotes in the book.
00:48:20.000We obviously are not fans of dictatorial regimes, but they understand the stakes.
00:48:26.000And, you know, President Trump understands the nature of our adversaries, and he understands that they are not just trying to do this as some kind of science project.
00:48:35.000They have real global aims for control.
00:48:39.000And it's not just an economic benefit, as we talked about.
00:48:43.000So when I say, you know, beat, we can beat China without becoming China, I think he really understands that.
00:48:48.000And, you know, look, we're conservatives.
00:48:51.000We do not want any kind of abridgment of personal liberties, privacy, freedom of speech.
00:49:07.000You know, we, we believe in free speech.
00:49:09.000Um, at the same time, we also realize that these folks on the left, they do not.
00:49:16.000And so when the power pendulum swings back, they're more than ready to flip that toggle switch on the control grid to scan and ban, silence, demonetize, just like they've done before.
00:49:30.000The idea that your father was silenced by big tech under the auspices of whatever, COVID or J6 or whatever other things, is outrageous.
00:49:40.000And I think the average conservative, not just conservatives, I think just every person, has to look and go: if they can do that to a former president, a former president at the time, what can they do to little guys like us who don't have that?
00:49:52.000Yeah, because it wasn't just a former president, it was a former president billionaire with one of the largest soapboxes and followings anywhere in the world.
00:49:59.000I mean, if they can, and more importantly, if they will do that to him.
00:50:19.000You don't get to opt out of the AI revolution.
00:50:21.000I think one of the most important things people have got to understand is that 99% of us use AI, even though 64% of Americans don't always know when they're using AI, because they're using narrow forms of AI that are baked into their weather apps.
00:50:35.000And their streaming services and their GPS and so forth.
00:50:38.000So, if we're already using it, we've got to understand the positives and also the tripwires, not just for ourselves, you know, but for our kids.
00:50:46.000So, they're going to be able to seize the upside and really avert a lot of the dangers.
00:50:50.000You know, in education, you've got, as we talked about, the ability to have that kind of machine-learning customization.
00:50:58.000I've talked to a lot of teachers and professors, I know you probably have too, where they say, look, we don't call it ChatGPT, we call it CheatGPT because we're having a plagiarism problem.
00:51:07.000Parents have got to be engaged and we've got to know how to handle that and shepherd our children through that.
00:51:15.000I guess even if America is ahead right now in terms of private investment, what do you see as the biggest vulnerabilities that could still let China close that gap?
00:51:25.000Well, so we're looking at global investment of $5 trillion.
00:51:48.000Um, this big debate we're having right now, and I think it's going to be very important how it all works out.
00:51:53.000There's a lot of moving parts over states' rights and preemption.
00:51:56.000I think it's going to be a big part of this.
00:51:58.000I think the regulatory schema in general, President Trump, and I think the conservative movement, has long felt that certain reasonable regulations are fine, but you really want to allow innovation and technology to grow so that we can grow jobs and opportunity.
00:52:15.000I think that we're seeing some real positive benefits on that.
00:52:18.000You know, we hear all this doom about the job apocalypse. Mustafa Suleyman says in 12 to 18 months we're looking at 100% replacement of white-collar work.
00:52:27.000Dario Amodei scaring everybody, saying in 12 months to five years, 50% of entry-level white-collar jobs.
00:52:34.000But then you look at these data center build outs and in the trades, you got 30% pay premium for drywall hangers and electricians and plumbers.
00:52:42.000You've got a huge, a huge upside there.
00:52:45.000So the answer to your question is the things that could slow those positive forces down, I think is part of the challenge that we've got to navigate.
00:52:52.000And, you know, I'm, I think it's kind of funny.
00:52:54.000Isn't it ironic that our blue-collar workers were told, learn to code?
00:53:01.000And now that the white-collar jobs are a little bit more in the crosshairs, they're being told, learn to plumb or learn to weld.
00:53:11.000I mean, that was right with the Keystone Pipeline, and all the reporters joyously pushing the Green New Scam were saying, oh, well, now learn to code.
00:53:18.000Anyone who learned to code, probably, unless you're the best of the best, you're not doing better than AI right now.
00:53:23.000And so that was a wasted couple of years.
00:53:27.000You know, a great irony, but you know, you just mentioned it actually.
00:53:31.000I mean, you're right that AI is going to reshape education and the workforce, but you know, what should parents be teaching their kids right now?
00:53:38.000I mean, honestly, is Is it cheating if you're using tools to help you learn?
00:54:27.000You literally tell the AI, do not give me the answers, use the Socratic method.
00:54:33.000You can lead me toward the right answers, but I need to learn this myself.
00:54:37.000Number two, to get out of the woke AI stuff, you can help guardrail that by specifying what you consider to be credible sources, so that we're not getting
00:54:45.000something from The Nation or The Atlantic or something far out in left field, and you give it the corpus you want.
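The two guardrails just described, force the Socratic method and restrict the sources, amount to writing a system prompt for the tutor. Here is one way a parent might sketch that; the prompt wording and the example sources are assumptions you would replace with your own, and the actual model call is omitted.

```python
# Sketch of a parental "guardrail prompt": Socratic tutoring plus an
# explicit whitelist of sources. Illustrative only; adapt the rules
# and sources to your own family and curriculum.

SYSTEM_PROMPT = """You are a tutor for my child.
Rules:
1. Do NOT give answers directly. Use the Socratic method:
   ask guiding questions and let the student reach the answer.
2. Only draw on these sources: {sources}.
3. If the student asks you to just give the answer, decline and
   ask a simpler guiding question instead."""

def build_tutor_prompt(sources: list[str]) -> str:
    """Fill the source whitelist into the tutor system prompt."""
    return SYSTEM_PROMPT.format(sources="; ".join(sources))

prompt = build_tutor_prompt(["the course textbook", "Khan Academy"])
print(prompt)
```

Whatever chatbot you use, pasting a prompt like this at the start of the session (or setting it as a custom instruction, where the product supports that) is the "guardrail" the speaker is describing.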
00:54:51.000Number three, this is a three part pyramid that I think if we have this in our mind as parents, it'll help us navigate our kids through this.
00:54:58.000The base layer of that pyramid: critical thinking skills, the ones that you and I built when we came up, before AI was part of our education, when we had to learn how to get it wrong.
00:55:08.000We would struggle with that algebra problem.
00:55:25.000And teaching kids that failure in getting things wrong is part of that building of that strength and that muscle.
00:55:32.000So keeping that, which is the trivium, grammar, logic, and rhetoric, the classical education that has built strong critical thinking skills for centuries.
00:55:43.000Studies of AI have shown something called cognitive offloading, which is that when you start using it as a crutch, you start to erode the child's or the student's critical thinking skills.
00:56:26.000Then they can customize for whatever passion or calling in their life to create jobs of the future.
00:56:33.000Because if you've got a child in elementary school, you and I sitting here right now trying to predict what is going to be the market in 15 years for that kid is a very, very difficult task.
00:56:44.000But if they've got that toolkit, and then the final part of our pyramid is the AI layer.
00:56:49.000I think that parents, again with safe, appropriate guardrails, should be able to introduce a tool or a skill in AI each week and let the child develop it.
00:57:00.000If the child likes video games, teaching them how to vibe code.
00:57:04.000If they like to do art, using it to create actual renderings of imagery.
00:57:09.000Those three layers, critical thinking, entrepreneurship, and AI, are going to be a moat that's going to help future-proof them.
00:57:19.000Wynn, as a closing thought, what do you think are the top, I don't know, two or three things America needs to do right now to prepare to build the future of AI?
00:57:29.000Number one, for yourself: if you're newer to the conversation, or you've been kind of pushing it off and thinking it wasn't real or it was hype, please understand it's real, okay?
00:58:04.000And I think that it can scare people off in that regard.
00:58:07.000Learn the lexicon, just the basics, and then just jump in and start learning.
00:58:13.000The third thing I would say on top of that is that we have got to make sure the president's agenda to continue to beat China succeeds.
00:58:20.000We have to understand that the national security implications of that are very, very real, regardless of your political ideology or your party ID, it doesn't matter.
00:58:30.000If you care about the future of this country, you understand that that is not just hype, it is not just a way to raise investor dollars.
00:58:36.000And then the final thing is making sure that we have the building blocks, which is not just the compute side, but as we talked about, the energy dominance and unleashing it.
00:58:44.000We've got real energy ability to go far and fast.
00:58:47.000We've got to unleash it and make sure that America is strong in the future to be able to power these systems.