The Charlie Kirk Show - April 19, 2023


Rage Against the Machines? with Joe Allen and Raheem Kassam


Episode Stats

Length: 35 minutes
Words per Minute: 170.12
Word Count: 6,011
Sentence Count: 442
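As a quick arithmetic check on the stats above (a sketch using only the listed figures), words per minute is just word count divided by duration, so the listed WPM implies the 35-minute length is rounded:

```python
# Recompute words per minute from the episode stats above.
word_count = 6011
length_minutes = 35       # rounded duration from the stats block
listed_wpm = 170.12       # WPM as listed

wpm_from_length = word_count / length_minutes   # ~171.74 if exactly 35 min
implied_length = word_count / listed_wpm        # ~35.33 minutes

print(round(wpm_from_length, 2))
print(round(implied_length, 2))
```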


Summary

Summaries generated with gmurro/bart-large-finetuned-filtered-spotify-podcast-summ.

Transcript

Transcripts from "The Charlie Kirk Show" are sourced from the Knowledge Fight Interactive Search Tool.
00:00:00.000 Hey everybody, The Charlie Kirk Show.
00:00:01.000 We talk artificial intelligence, very important, and also Trump 2024 with Raheem Kassam and Joe Allen.
00:00:07.000 Text this episode to your friends, give us a five-star review, email us freedom at charliekirk.com and get involved with TurningPointUSA.
00:00:14.000 That is tpusa.com, tpusa.com.
00:00:19.000 Email us your thoughts as always, freedom at charliekirk.com and get involved and support our show at charliekirk.com slash support.
00:00:27.000 Buckle up, everybody.
00:00:28.000 Here we go.
00:00:29.000 Charlie, what you've done is incredible here.
00:00:31.000 Maybe Charlie Kirk is on the college campus.
00:00:33.000 I want you to know we are lucky to have Charlie Kirk.
00:00:37.000 Charlie Kirk's running the White House, folks.
00:00:40.000 I want to thank Charlie.
00:00:41.000 He's an incredible guy.
00:00:42.000 His spirit, his love of this country, he's done an amazing job building one of the most powerful youth organizations ever created.
00:00:49.000 Turning point USA.
00:00:50.000 We will not embrace the ideas that have destroyed countries, destroyed lives, and we are going to fight for freedom on campuses across the country.
00:00:59.000 That's why we are here.
00:01:02.000 Brought to you by my friends, Andrew and Todd at Sierra Pacific Mortgage, 888-888-1172 or AndrewandTodd.com.
00:01:13.000 Welcome back, everybody.
00:01:14.000 Everyone's talking about artificial intelligence, and Joe Allen has been trying to warn people for quite some time.
00:01:20.000 At least we're talking about it.
00:01:21.000 Joe, welcome back to the program.
00:01:24.000 Thank you very much, Charlie.
00:01:26.000 Is it a good idea to try to regulate it, have a government agency preemptively regulate it?
00:01:30.000 Is that a futile effort?
00:01:32.000 Your thoughts?
00:01:34.000 It's certainly futile in regard to, say, Russia or China or any other systems that may exist out there.
00:01:39.000 But, you know, actually, I'm going to add something else to this.
00:01:44.000 Right now, they've got the Commerce Department working on this, right?
00:01:47.000 They solicited the public opinion so people can email in or message their complaints or their worries or what they think that can be done.
00:01:58.000 But the fact that it's the Commerce Department, right as we see the Restrict Act moving through with a lot of really potentially damaging civil liberties violations, I think that's a real, real problem.
00:02:11.000 So as far as regulation goes, I mean, I would love to see some sort of cap put on the power of artificial intelligence.
00:02:18.000 And I'd love to see the implementation of data privacy and data ownership.
00:02:24.000 But even then, I really do think that it would only slow things down in America.
00:02:29.000 I think other countries would continue racing forward.
00:02:32.000 Yeah, so Elon was a little bit, he wasn't as clear as I would have liked.
00:02:39.000 He was a little opaque in his interview with Tucker.
00:02:41.000 So Tucker said, okay, be specific, what exactly could be done?
00:02:44.000 And Elon said, well, imagine a machine that could write very persuasively and know exactly how to communicate with the audience.
00:02:53.000 What did Elon mean by that?
00:02:54.000 He meant that a machine's going to run for political office or be able to communicate in massive channels in a way that is incredibly persuasive.
00:03:05.000 What did he mean by that?
00:03:06.000 So one of the AI doom scenarios is that an artificial superintelligence would come to the point that it would be undetectable to the normal human eye and that its goals would be either misaligned from human goals or its goals would include something that would be like on the level of kill all humans, right?
00:03:30.000 Or somewhere in between.
00:03:31.000 Now, what he meant is that right now we have all of these chatbots swarming the internet and filling the internet full of content.
00:03:41.000 And you have systems like AutoGPT, which are made to interact with the internet.
00:03:47.000 If you had enough of those, or if you had a system that was influential enough, that was sophisticated enough, I think he said convincing or persuasive, and it was misaligned, and human beings didn't realize they were being manipulated, either really important and influential people being manipulated, or millions, billions of people being manipulated all at once.
00:04:10.000 And if it starts some sort of squabble, be they local or international, then that would be kind of the beginnings of an AI run amok and causing damage to human society.
00:04:23.000 If I could just add one thing to that, though, Charlie, the solution, the proposed solution, Elon Musk wants basically mandatory verification to prove that you're a human to be on Twitter.
00:04:33.000 That seems rational.
00:04:34.000 Why would that be bad?
00:04:36.000 It would kill anonymity, right?
00:04:38.000 So let's assume that Elon Musk is the cyborg savior that he's positioning himself as.
00:04:44.000 You're still at the mercy of Elon Musk in that system.
00:04:48.000 And Elon Musk now has, to some extent or another, your payment data and therefore your identity.
00:04:54.000 And I think that anonymity on the internet, especially in an age of mass political correctness, anonymity on the internet is necessary for certain truths to be voiced by very, very important people who would be basically canceled, so to speak, from their position should they tell the truth under their own names.
00:05:13.000 So that's interesting.
00:05:15.000 And I definitely see that.
00:05:17.000 At the same time, anonymity is an ever-evaporating and disappearing reality.
00:05:23.000 It's harder and harder, but it certainly is necessary and it should be protected speech.
00:05:28.000 So what are the other doom scenarios?
00:05:29.000 That's interesting.
00:05:30.000 I never heard it framed that way.
00:05:31.000 Can you just lay out like the best hits?
00:05:33.000 So, okay, that's one of them, a persuasive, you know, communication movement that is indecipherable from humans.
00:05:42.000 What are the other doom scenarios?
00:05:45.000 You know, Nick Bostrom's Superintelligence, a 2014 book, was very, very influential on Elon Musk and his way of thinking about this.
00:05:55.000 But in Superintelligence, Nick Bostrom talks about an artificial intelligence system that is in control of or has access to some critical infrastructure.
00:06:06.000 And so it could be anything from a weapon system, right?
00:06:08.000 It could be nuclear weapons.
00:06:10.000 It could be dropping airplanes out of the sky, shutting off power grids, or it could be a system that's in charge of a biolab.
00:06:19.000 And so you've got biofoundries all over the place now.
00:06:21.000 And these are basically automated bio labs.
00:06:25.000 You've got robots that are doing most of the work in these labs to either do genome sequencing or active mutation for different sorts of kind of designer microbes.
00:06:37.000 And so if a system like that were to start creating pathogens and release them, then obviously that's the end of the world for at least some portion of us, not all of us.
00:06:49.000 Another thing that Nick Bostrom talks about too, though, is the possibility that a system would just simply manipulate human beings to do much the same thing.
00:06:57.000 And I would also add that, I mean, I think that let's just forget about runaway artificial general intelligence or some kind of super intelligent system.
00:07:05.000 Human beings have that power at their fingertips right now in high places.
00:07:10.000 And another fear is that as these technologies are democratized, the possibility of a terrorist getting a hold of a really, really advanced artificial intelligence system or some kind of a similar system to what is out there now and doing the same sorts of things.
00:07:28.000 And of course, now that you have machines that can code effectively, cyber attacks, mass cyber attacks become at least a much greater possibility.
00:07:37.000 So I can just see this happening now because the government is a bunch of power-hungry maniacs that we don't almost do anything effectively right now because our elites are awful.
00:07:49.000 They could use the anxiety about AI to actually create an even bigger government that would restrict our liberties and freedoms, and then we'd be even less freedom.
00:07:57.000 I mean, is the option, is the best option then to do nothing and create our own and just game out all the doom scenarios similar to kind of how we used to live when we used to think Russia was going to launch nuclear weapons at us.
00:08:09.000 I mean, is that now the prudent approach?
00:08:12.000 It's like, let's go create our own patriotic AI-based AI, if you will, in the AI arms race and do no regulation because at least we'd have a fighting shot.
00:08:23.000 I'm just trying to think rationally here.
00:08:26.000 And I think it's definitely a rational argument.
00:08:29.000 And I wish that I could make some sort of really coherent argument against it.
00:08:33.000 The only argument I have against it is that it's much more of a philosophical argument that as these technologies progress, human freedom and human dignity and certainly privacy will tend to recede.
00:08:45.000 And so these systems, as this AI arms race is ramping up, these systems are getting better and better.
00:08:52.000 So maybe you can trust Elon Musk with our fate, right?
00:08:56.000 He talks about, I love humanity.
00:08:58.000 I want to preserve humanity in the face of these evil artificial gods as I create a digital god myself.
00:09:05.000 But really, I think the question has to be asked: what sort of humanity is Elon Musk talking about?
00:09:10.000 And of course, a big part of his philosophical approach to how to deal with artificial intelligence is to link the brain directly to it.
00:09:19.000 But one other thing about this tension, that anxiety that you're talking about, that is definitely rippling through the population, it will be seized upon by people in government to secure more power.
00:09:30.000 So that's also a real problem.
00:09:33.000 And, you know, Nick Bostrom, the guy, the author of Superintelligence, he recommends a global government with mass wall-to-wall surveillance everywhere to stop technological progress outside of the hands of those that can be trusted, so to speak.
00:09:48.000 And there are a number of others who make that same argument in that camp, right?
00:09:52.000 Hugo de Garis, Ben Goertzel, they both made these arguments before.
00:09:56.000 And so Elon Musk has come out, said, global government is not what I want.
00:10:02.000 Yuval Noah Harari has said the same.
00:10:02.000 I don't want global government.
00:10:04.000 And of course, Peter Thiel famously said that a global government would basically be the antichrist.
00:10:10.000 And that seems like what we're heading towards.
00:10:14.000 I am not confident anybody in our American political elite even has a plan.
00:10:22.000 And I mean, it's just kind of all a lot of abstraction.
00:10:24.000 Like, well, maybe you could do this or do that.
00:10:27.000 That's really bad and not promising.
00:10:31.000 Are we the last humans?
00:10:33.000 That's a great question.
00:10:34.000 Joe Allen, so what do you I'm just reading some notes here.
00:10:40.000 You suggest that we're staring down the barrel of two different and divergent transhumanist futures.
00:10:45.000 What do you mean?
00:10:47.000 So I think that it's put really well by people like James Poulos or Mary Harrington.
00:10:53.000 I can't recommend their writing enough.
00:10:56.000 Very, very intelligent.
00:10:58.000 And they are much more fatalistic about this in some ways than I or certainly Steve.
00:11:05.000 Steve Bannon comes out very strong on the point that we just got to stop this.
00:11:10.000 We have to, if not smash up the machines, halt their progress.
00:11:16.000 Poulos and Harrington, I think, are more fatalistic.
00:11:18.000 And what they describe, really, in essence, is a situation in which we in the West have a choice between two types of worldly power.
00:11:29.000 The kind of borg that you see represented by people like Google or Facebook or Microsoft, the sort of corporate, politically correct Borg that is set up as a sort of police state over the rest of us, or an emperor model, which Musk represents.
00:11:46.000 He's much more of a Caesar Augustus type character.
00:11:50.000 And so it's really a choice, though, not between transhumanism or not transhumanism.
00:11:58.000 It's the style of transhumanism because the future that Musk foresees and is actively crafting with his billions is one in which we do create a godlike artificial general intelligence system that is smarter than human beings.
00:12:16.000 And you hope that it's benevolent towards human beings.
00:12:18.000 And humans connect their minds, or maybe he would even say their souls to the extent the brain is equated with the soul in the scientific realm.
00:12:27.000 You fuse your soul with that God through things like Neuralink.
00:12:31.000 You have servant robots everywhere that do all the work.
00:12:35.000 And you just assume that like human beings, and this is an example he used on Tucker, and a lot of transhumanists use this, that human beings would at least have the same kind, or just as human beings have the kindness and sentiment to keep chimpanzees and gorillas around, you hope you'll build a digital god that would have the same values and would keep us around.
00:12:58.000 Ben Goertzel describes it as though we would be squirrels in the park.
00:13:03.000 We would be squirrels in the park, and the digital gods would go on to develop themselves into something much grander.
00:13:10.000 Is that inevitable, though?
00:13:12.000 I mean, there's really no way to stop us from getting there, right?
00:13:15.000 I mean, someone is going to create something that is so advanced beyond humanity.
00:13:19.000 Or this is a theory that is less articulated, that there's the theory of technological plateau that you might get really good at word processing.
00:13:31.000 This is a theory that's gaining some steam right now, where there is no guarantee the parabola continues.
00:13:37.000 Can you articulate that?
00:13:39.000 Yeah, you know, one way to put it is that this technological increase we see right now is going to ultimately be an S curve.
00:13:46.000 So the singularity would be that this exponential increase just keeps going to an infinite degree, right?
00:13:51.000 Good for helping people write essays, parlor tricks, but yeah, keep going.
00:13:55.000 So, you know, an S curve, it just levels off at a certain point and then goes on.
00:14:00.000 Of course, you expect other S curves to emerge, but at least that's manageable.
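The shape Allen is describing here is a logistic ("S") curve. A minimal sketch (illustrative parameters only, not a claim about real AI progress) contrasting it with the unbounded exponential growth a "singularity" scenario assumes:

```python
import math

def exponential(t, rate=1.0):
    # Unbounded growth: keeps compounding forever.
    return math.exp(rate * t)

def logistic(t, rate=1.0, cap=100.0, midpoint=5.0):
    # S-curve: looks exponential early on, then levels off near `cap`.
    return cap / (1 + math.exp(-rate * (t - midpoint)))

for t in range(0, 11, 2):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
# Early on the two curves are hard to tell apart; later the logistic
# curve flattens near its cap while the exponential keeps climbing.
```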
00:14:04.000 And here's the way I see that.
00:14:06.000 I mean, that is something that is a hopeful note for those of us who don't want to see this technology go any further.
00:14:12.000 It's, of course, a hard limit for those who do.
00:14:15.000 And it's certainly possible.
00:14:16.000 There's no guarantees.
00:14:17.000 In the same way, we don't have flying cars and we don't have cold fusion that works well enough.
00:14:23.000 We won't have anything like artificial general intelligence.
00:14:27.000 I do think, though, that the never say never principle should at least hold for keeping the possibilities open for something like an AGI appearing.
00:14:37.000 It doesn't have to be like they describe it.
00:14:39.000 It just has to be powerful enough to give some worldly institution or power a leg up on its competitors.
00:14:47.000 And very few people are actually talking about it.
00:14:50.000 And so what do you think the next one year calendar, one minute remaining, will look like with artificial intelligence?
00:14:56.000 Without a doubt, you're going to see the rollout of these chatbots all across different institutions from education to corporations.
00:15:03.000 You're going to see more and more different areas of science advance because of the useful elements of these.
00:15:10.000 And what I think the biggest thing you're going to see is the fusion of human beings to artificial intelligence through mechanisms like these devices we're speaking on now.
00:15:20.000 Human AI relationships are forming as we speak.
00:15:24.000 And so you're going to get two classes of society that are starting to become self-conscious, those who want the technological revolution and those who don't.
00:15:32.000 It's coming very soon.
00:15:34.000 We have an email here from someone with a PhD.
00:15:36.000 You're going to have AI with a PhD in data science, AI-driven, personalized propaganda that will control enough of the masses so the elites can achieve any policy at will.
00:15:46.000 That's exactly what Elon and you are warning about.
00:15:50.000 All right, Joe, thank you so much.
00:15:52.000 Thank you very much, Charlie.
00:15:56.000 Look, you did the tough thing during the Chinese coronavirus.
00:15:59.000 You paid your people and pulled your business through the pandemic.
00:16:02.000 And now doing the tough thing could qualify you up to $26,000 per employee at covidtaxrelief.org.
00:16:09.000 Government funds are available to reward companies with two or more employees who stayed open during COVID.
00:16:14.000 This is not a loan, and you don't have to pay it back.
00:16:16.000 I know a lot of people that have benefited from this.
00:16:18.000 I think Congress appropriated way too much money.
00:16:21.000 This program is complicated, but nobody knows it better than the CPAs and tax experts at covidtaxrelief.org.
00:16:28.000 That is covidtaxrelief.org.
00:16:30.000 You pay nothing up front.
00:16:32.000 They do all the work and share a percentage of the cash they get you.
00:16:35.000 Businesses of all types, including nonprofits and churches, can qualify, including those who took PPP loans, even if you had an increase in sales.
00:16:43.000 You did the difficult thing for your employees during the virus.
00:16:47.000 Let covidtaxrelief.org help you get up to $26,000 per employee.
00:16:51.000 Visit covidtaxrelief.org.
00:16:53.000 That is covidtaxrelief.org, covidtaxrelief.org.
00:17:00.000 We're going to see the same way you saw a movement for organic food, grass-fed, fair trade, made in America.
00:17:08.000 You're going to see news items, newscasts, like, for example, five years from now, the Charlie Kirk Show, produced by humans, for humans, by humans.
00:17:17.000 I'm not kidding.
00:17:18.000 It's going to be a real thing because they already have AI newscasters.
00:17:21.000 They're going to have, I mean, not to mention the 40 to 50 million people that are going to lose their jobs in the next decade if this thing really ramps up.
00:17:29.000 Minimum.
00:17:31.000 And we've been playing around with AI edits of our videos.
00:17:33.000 It's just so much more efficient.
00:17:35.000 But I can guarantee you this: the host of this program will always be a human being, as inefficient as we are.
00:17:41.000 That'll be interesting to see.
00:17:43.000 Can the machine actually do commentary and make it interesting?
00:17:47.000 I bet they'll figure it out.
00:17:48.000 Joining us now is Raheem Kassam to talk 2024, amongst many other things.
00:17:54.000 And he has a new poll that he wants to discuss.
00:17:56.000 Pro-DeSantis poll conducted by Bush, Romney, Rove, Ryan.
00:18:01.000 Is that all?
00:18:01.000 And even Soros-linked operatives.
00:18:03.000 So, Raheem, tell us about the poll and tell us who's connected to the poll.
00:18:08.000 Yeah, well, thank you.
00:18:09.000 Thank you for having me.
00:18:10.000 I haven't quite outsourced The National Pulse to AI just yet.
00:18:14.000 Yes, there you go.
00:18:16.000 I heard, I think, but I think this poll has probably been outsourced to AI, at least by the way it has been presented, by the way it has been repeated across a certain ecosphere on the right in the last 24, 48 hours.
00:18:30.000 You would think, I mean, it sounds very NPC.
00:18:33.000 And so, you know, when anything's sus, I think you and I probably do the same thing, right?
00:18:38.000 We think, well, if it sounds sus, probably is sus.
00:18:41.000 And you only need to sort of take one, two, three different research actions to prove a thesis out.
00:18:47.000 So yesterday morning, I wake up and I say to my team, I said, this poll seems sus to me.
00:18:52.000 Let's start looking into it within about, I don't know, what is the poll?
00:18:56.000 What were the results?
00:18:56.000 I'm sorry to interrupt.
00:18:57.000 Can you just make sure you outline that?
00:18:59.000 Yeah.
00:18:59.000 Oh, yeah, absolutely.
00:19:01.000 We'll do all of it.
00:19:02.000 So the backstory is, you know, I get up yesterday morning.
00:19:06.000 I think this poll is sus.
00:19:07.000 I send the thing out to the staff.
00:19:09.000 We go and we look through the stuff, which is Arizona, Pennsylvania, primarily 500 voters polled.
00:19:19.000 And the results show Ron DeSantis beating Joe Biden, but Donald Trump not beating Joe Biden.
00:19:25.000 And I go, yeah, I don't know that that's really a thing.
00:19:30.000 And I start to look for the crosstabs.
00:19:32.000 For the people that don't know what the crosstabs are, the cross-tabulations are the detail behind the polling.
00:19:37.000 It's the Excel spreadsheets, hundreds of pages they can go into sometimes that tell you, like, this person in this demographic said this.
00:19:44.000 This person who identifies as this said this.
00:19:48.000 This person's in this age range.
00:19:50.000 And they said that.
00:19:51.000 And there were no crosstabs released.
00:19:53.000 So the first red flag goes up.
00:19:56.000 All reputable polling companies release the cross-tabulations that underlie the polling arguments that they are making, but this one didn't.
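Kassam's description above is exactly what a cross-tabulation is: a count of responses per demographic-by-answer cell. A toy sketch (invented respondents, not the actual poll's data) of the kind of breakdown pollsters normally release:

```python
from collections import Counter

# Hypothetical respondents -- illustrative only, not the POS poll's data.
respondents = [
    ("18-34", "Biden"), ("18-34", "DeSantis"),
    ("35-54", "DeSantis"), ("35-54", "Trump"),
    ("55+", "Trump"), ("55+", "Biden"),
]

# A crosstab counts responses per (demographic, answer) cell --
# the detail behind a poll's topline numbers.
crosstab = Counter(respondents)
for (age, pick), n in sorted(crosstab.items()):
    print(f"{age:6} {pick:9} {n}")
```

Without this table, a topline number like "DeSantis beats Biden" cannot be checked against who was actually sampled.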
00:20:05.000 And I said, who did this poll?
00:20:07.000 And it turns out, I'm not even making this up.
00:20:10.000 The name of the pollster is POS polling.
00:20:13.000 Okay.
00:20:14.000 There you go.
00:20:15.000 I was like, bit on the nose.
00:20:17.000 What does POS stand for?
00:20:18.000 Well, POS is public opinion strategies.
00:20:20.000 And public opinion strategies, I quickly found out, has been around since the early 90s.
00:20:25.000 They're based just down the road from Washington, D.C., in a little suburb called Alexandria in Virginia.
00:20:32.000 And then we started to look into the staff.
00:20:34.000 And all of the staff come from the Romney campaign, the Ryan campaign, the Jeb Bush campaign, the George Bush campaign, and the latest clients of this polling firm are American Crossroads, Karl Rove's organization.
00:20:52.000 Then you start to look into, okay, all right.
00:20:54.000 So we know this about them.
00:20:56.000 That's okay.
00:20:56.000 Those people exist.
00:20:57.000 I'm not denying they exist.
00:20:58.000 I'm not denying their right to run polling firms if they want to.
00:21:01.000 But then we start to look into, okay, well, who have they worked with?
00:21:04.000 Who have they worked for?
00:21:06.000 And you start to see pharmaceutical companies, Procter & Gamble, Pharma Lobby, American Medical Association, Bill Gates Foundation.
00:21:14.000 And I start to think, oh my goodness, we've hit the mother lode with this.
00:21:17.000 And just before I think it can't get any worse, we find out that this POS polling has actually recently been submerged into this wider corporate apparatus in the last couple of years, a company called GP3.
00:21:32.000 And the head of GP3, the CEO of GP3, he is actually somebody who worked directly for Golden Telecom.
00:21:43.000 And Golden Telecom was owned by George Soros.
00:21:47.000 So it's interesting to me that at the same time we've seen, and by the way, there are so many more angles to this story.
00:21:54.000 Paul Ryan's brother and Paul Ryan himself now work for this company that is invested in this GP3.
00:22:02.000 And you start to see that there are layers upon layers upon layers of anti-Trump, like hardline, committed, ideological and financial anti-Trump interests at play here.
00:22:15.000 The funniest part about it, I suppose, to me is that even with all of that mentioned, they could only rig the poll to show that DeSantis was a couple of points ahead of Donald Trump.
00:22:26.000 You can't, you know, because you can't be too clever with these things.
00:22:29.000 Otherwise, people, you know, otherwise the whistles really blow on it, right?
00:22:34.000 But they could only push those numbers a tiny, tiny percentage point over the top of where Trump was.
00:22:40.000 I got to tell you, when it comes to these things, you know, I take the Christopher Hitchens view of polling.
00:22:46.000 It's broadly garbage nowadays.
00:22:48.000 Unless you're looking at somebody like Rasmussen, who I still inherently trust, it's broadly garbage.
00:22:53.000 But this stuff, this partisan stuff, I mean, this company's been slapped for its poor practices in the past as well.
00:22:59.000 But this partisan stuff, I think it's really bad to get in bed with these people.
00:23:03.000 And I think it says something about the people who are sharing this poll, knowing where it came from.
00:23:09.000 So how does it then play into the 2024 race, right?
00:23:17.000 It seems as if DeSantis has had a very difficult last six weeks and not as in good favor.
00:23:24.000 Are you seeing those same sort of macro trends right now?
00:23:28.000 Yes.
00:23:29.000 And I was saying to somebody earlier on Twitter, I was saying, look, I will report into this stuff because it's important for people to understand where the information is coming from.
00:23:39.000 This is not Raheem Kassam trying to destroy Ron DeSantis.
00:23:42.000 I actually quite like Ron DeSantis.
00:23:44.000 I quite like the old Ron DeSantis that we all quite liked.
00:23:48.000 And I remember a time in 2015 interviewing him on the radio and saying to him, Wow, you look like you could be president one day.
00:23:55.000 So this isn't anything like that.
00:23:57.000 This is the people that are sort of surrounding Ron DeSantis right now, using Ron DeSantis as their mechanism to get their ideology over the line.
00:24:06.000 And all of those people have the same thing in common.
00:24:09.000 They are frothing at the mouth, never Trumpers.
00:24:13.000 They have always been never Trumpers.
00:24:14.000 These aren't people who went off Trump.
00:24:16.000 I understand that some people did.
00:24:18.000 Whatever.
00:24:18.000 That's their problem.
00:24:19.000 They'll come to terms with it.
00:24:20.000 But these are people who, on the front page of National Review magazine, were the anti-Trump Brigade back then.
00:24:27.000 These are the people who ran the campaigns against him.
00:24:29.000 These are the people who smeared him shoulder to shoulder in lockstep with the Democrats while Trump was in office.
00:24:37.000 And now they're creating this sort of Praetorian guard around DeSantis.
00:24:42.000 But Ron DeSantis has to understand that that doesn't help him.
00:24:46.000 That hurts him.
00:24:48.000 You can coalesce as many billion-dollar globalist Republican donors as you want.
00:24:54.000 That is not going to take over from MAGA, right?
00:24:58.000 MAGA is in the veins.
00:24:59.000 It's in the blood.
00:25:00.000 It's in every breath that people take.
00:25:01.000 They believe it to their core.
00:25:04.000 This isn't something you can buy off.
00:25:05.000 It's not something that can be, you know, a check cannot be written.
00:25:09.000 No amount in a check can be written for most MAGA people to turn on that philosophy.
00:25:15.000 And that's what these donors believe.
00:25:17.000 They believe that if you throw enough money at it, if you throw enough fake polls at it, if you throw enough misdirection at it, then support for Trump will wane.
00:25:26.000 And who's going to benefit naturally?
00:25:28.000 You know, the only bigger candidate in the race so far.
00:25:31.000 Well, and I hear this a lot.
00:25:33.000 I mean, I was talking to a big donor recently.
00:25:34.000 He said, I don't know a single person who likes Donald Trump.
00:25:38.000 And my response is, well, there's a lot of them.
00:25:40.000 And that you might not know them, but they're a big part of the country.
00:25:45.000 In fact, they know you, right?
00:25:49.000 That's the point.
00:25:50.000 They know you.
00:25:51.000 They know that world.
00:25:52.000 They know your world.
00:25:53.000 They know, like, you talk to these people who don't know Trump supporters.
00:25:57.000 I would say, check your premises.
00:25:59.000 Check the people around you.
00:26:00.000 Who are you hanging out with?
00:26:01.000 Who are you talking to?
00:26:01.000 What supermarkets, if any, are you even standing in line at?
00:26:05.000 You know, you cannot go through life listening to by the way, I'm not a Marxist, so there's nothing wrong with being a billionaire, right?
00:26:13.000 If you've earned it, especially if you do something good with it.
00:26:16.000 Yes.
00:26:18.000 It just doesn't work.
00:26:19.000 Yes.
00:26:19.000 You can't be in touch that way.
00:26:22.000 It would be nice, you know, to have that little reclusive lifestyle and whatever, but then fine.
00:26:27.000 Then you're not going to get politics.
00:26:28.000 You're not going to get public opinion.
00:26:30.000 And public opinion strategies in this instance doesn't get public opinion.
00:26:33.000 Raheem Kassam, great work.
00:26:35.000 Thank you so much.
00:26:36.000 Thank you for having me.
00:26:38.000 Email us freedom at charliekirk.com.
00:26:40.000 We're getting a fair amount of emails here about artificial intelligence and the whole deal.
00:26:45.000 I want to just talk about an email.
00:26:48.000 I'm emailing back and forth with this PhD in data science and AI lead.
00:26:51.000 He sent his link to me.
00:26:56.000 He's legit, really, really serious guy here.
00:26:59.000 And I asked him, I said, what's the most important part of the artificial intelligence debate?
00:27:03.000 Like all this.
00:27:04.000 He said, listen, we have to limit centralized surveillance, like the Restrict Act.
00:27:09.000 The AI's power to manipulate only comes from data.
00:27:13.000 It is what feeds them.
00:27:15.000 This guy is the PhD in data science and an AI lead at his job at a very reputable place.
00:27:21.000 He said ring cameras, for example, should have a closed lid unless a motion detector is activated.
00:27:26.000 Cell phones need lids that hardware disconnect the microphone when closed.
00:27:32.000 And he goes on to what the worst case scenario is and all this.
00:27:35.000 And so that's interesting.
00:27:36.000 Really taking our data seriously might be the way for us.
00:27:40.000 That's an interesting point, is that that's really the life force.
00:27:44.000 Otherwise, the AI is not as powerful as people might think it is.
00:27:53.000 Hey, everybody, Charlie Kirk here.
00:27:55.000 Just when you thought it couldn't get any better, Mike Lindell with My Pillow is launching the My Pillow 2.0.
00:28:01.000 That's right, you heard me, MyPillow 2.0.
00:28:03.000 When Mike Lindell, great American patriot, invented My Pillow, had everything you could ever want in a pillow.
00:28:08.000 But now, 20 years later, he discovered a new technology that makes it even better.
00:28:12.000 The My Pillow 2.0 has a patented, adjustable fill on the original My Pillow, and now with a brand new fabric that is made with a temperature-regulating thread.
00:28:22.000 For exclusive listeners, the MyPillow 2.0 is a buy one, get one free offer with promo code Kirk, so get your best sleep ever.
00:28:29.000 MyPillow 2.0 temperature regulating technology is 100% made in America and comes with a 10-year warranty and a 60-day money-back guarantee.
00:28:38.000 Go to mypillow.com and click on the Radio Listener Square for the buy one, get one free offer.
00:28:42.000 Enter promo code Kirk or call 800-875-0425 to get your MyPillow 2.0 now.
00:28:48.000 That is mypillow.com.
00:28:50.000 Promo code Kirk.
00:28:51.000 Check it out.
00:28:54.000 Lots of emails on artificial intelligence.
00:28:58.000 People say, Charlie, you're forgetting the point.
00:29:01.000 The so-called AI tech emperor that could get in control is still in a machine or on the internet.
00:29:06.000 If things ever get bad, actual humans could go on and smash the hardware and disconnect the wires.
00:29:11.000 Humans would still maintain ultimate control in the physical realm.
00:29:15.000 You know, that would actually be a smart solution, right?
00:29:17.000 That would be a smart regulation if our leaders were in charge.
00:29:19.000 There should be a kill switch mandatory.
00:29:21.000 There needs to be a plug that could be pulled.
00:29:24.000 Again, if we actually lived in a sane country, which we don't, with leaders that actually cared about issues that matter, which we don't, wouldn't a kill switch be rational?
00:29:32.000 Is there a kill switch right now of the artificial intelligence at Google?
00:29:36.000 Is there a kill switch?
00:29:38.000 Besides just good old baseball bats and hand grenades.
00:29:43.000 There needs to be some sort of ability to check and balance.
00:29:45.000 And even the left that isn't taking this issue seriously deep down, they know this stuff is really bad.
00:29:50.000 Is this something we can agree on?
00:29:52.000 Probably not, because they don't believe in God.
00:29:54.000 Therefore, they're trying to create heaven.
00:29:56.000 You can't make this up.
00:29:58.000 We're going to get into the dialogue with the thug that runs the Department of Education.
00:30:03.000 Penguin, which is a publisher, has announced they're making major revisions to 1984.
00:30:13.000 Quote, the novel requires updating in line with progressive sensibilities, said a consultant, for anti-exclusionary minds, a group engaged by Penguin to moderate Orwell's critique of totalitarianism.
00:30:28.000 For those of you that have ever read 1984, the irony is rich here.
00:30:34.000 1984 will be memory-holed itself.
00:30:38.000 Boy, that would have been powerful if Orwell would have written that.
00:30:41.000 If he had written that this book will one day be memory-holed too.
00:30:45.000 Oof.
00:30:46.000 Someone just emailed us.
00:30:47.000 We need to go buy original copies of books and keep them.
00:30:50.000 You better believe it.
00:30:51.000 They are going to become some of the most valuable things in society.
00:30:56.000 Original copies of physical books.
00:30:58.000 They're going to get rid of all of it.
00:30:59.000 Right here, literally, they're editing 1984.
00:31:01.000 They're heavily editing 1984.
00:31:03.000 It's too critical of totalitarianism.
00:31:05.000 You're trying to tell me we're not heading for a police surveillance state.
00:31:08.000 Why would Penguin need to update 1984?
00:31:10.000 Get people used to being spied on by each other.
00:31:12.000 Big Brother is fine.
00:31:13.000 Big brother is great.
00:31:15.000 You know, in 1984, it talks about artificial intelligence.
00:31:18.000 It's artificial intelligence, artificially generated art.
00:31:22.000 Buy books and keep them.
00:31:24.000 Your kids will thank you because soon the new 1984 that will be published will be heavily edited.
00:31:30.000 Amazing.
00:31:32.000 What is a woman?
00:31:34.000 Very simple question.
00:31:35.000 Matt Walsh asked it in his fabulous film.
00:31:38.000 Cut 51, the head of the Department of Education, is asked a simple question: What is a woman?
00:31:43.000 Play Cut 51.
00:31:45.000 So, can you please tell me, or can you please define for me, what is a woman?
00:31:51.000 Our focus at the department is to provide equal access to students, including students who are LGBTQ, access free from discrimination.
00:32:00.000 What's the definition of a woman?
00:32:02.000 You haven't given me that.
00:32:03.000 You haven't answered my question.
00:32:04.000 I think that's almost secondary to the important role that I have as Secretary of Education.
00:32:08.000 What does HHS say the definition of a woman is?
00:32:12.000 I lead the Department of Education, and my job is to make sure that all students have access to public education, which includes co-curricular activities.
00:32:21.000 What is a woman?
00:32:23.000 It's secondary.
00:32:24.000 I can't tell you what a woman is.
00:32:26.000 These people combust.
00:32:30.000 They get overheated when you ask them a very simple question: what is a woman?
00:32:32.000 Because they know that if they actually answered the question, it would either upset the trans lobby or upset one of their core constituencies.
00:32:41.000 They cannot answer a question of what is a woman.
00:32:44.000 Isn't it amazing all these godless, miserable, upper-middle-class, secular, white liberals that go around and be like, yeah, I vote for Democrats for women's rights.
00:32:58.000 I vote for Democrats.
00:32:59.000 What is a woman?
00:33:00.000 Your leaders you put in power can't even tell you what a woman is.
00:33:05.000 New Zealand has the same issue.
00:33:07.000 This is a leader in New Zealand, the prime minister of New Zealand, Chris Hipkins, asked the question, hey, what is a woman?
00:33:14.000 Who would ever have thought that the way to create a glitch in the trans totalitarian left is asking an elemental, simple question?
00:33:22.000 They just stall, they can't answer the question, play cut 52.
00:33:28.000 How do you, and how does this government define a woman?
00:33:31.000 To be honest, Sean, that question's come slightly out of left field for me.
00:33:37.000 The, well, biology, sex, gender, people define themselves.
00:33:43.000 People define their own genders.
00:33:45.000 Well, I think as I've just indicated, I wasn't expecting that question, so it's not something that I've pre-formulated an answer on.
00:33:53.000 But in terms of gender identity, I think people define their gender identity for themselves.
00:33:59.000 Yeah, people, they define gender identity for themselves.
00:34:04.000 By the way, the 1984 thing, it could be a parody that I'm reading, or it could be real.
00:34:07.000 That's how you know you live in Orwellian times.
00:34:09.000 I'm going to read the tweet.
00:34:10.000 The tweet is there.
00:34:11.000 It could be real.
00:34:11.000 It could be a parody.
00:34:12.000 Penguin has announced major revisions to a new edition of Orwell's 1984.
00:34:16.000 The novel requires updating.
00:34:17.000 It almost sounds too on the edge to be true.
00:34:20.000 Or it could be true.
00:34:21.000 I mean, we live in such Orwellian times as it is.
00:34:24.000 But they are real.
00:34:26.000 The reason it's believable is because they're editing a ton of books right now.
00:34:29.000 They're rewriting a ton of books to try to be politically correct.
00:34:32.000 Wodehouse, Roald Dahl.
00:34:33.000 So it is in the pattern of believability.
00:34:38.000 Boy, this artificial intelligence topic has really animated many of you.
00:34:43.000 And you've emailed us freedom at charliekirk.com.
00:34:46.000 I'm open to ideas.
00:34:48.000 You guys can email us, freedom at charliekirk.com.
00:34:50.000 How do we stop it before it takes us out?
00:34:53.000 I asked our friend here, any good news?
00:34:55.000 He said, well, potentially, it can help us be a hedge against tyranny if it enables humans to parse information faster and hence forces governments to be transparent.
00:35:05.000 But that's unlikely.
00:35:06.000 Thanks so much for listening, everybody.
00:35:08.000 Email us here, folks, as always: freedom at charliekirk.com.
00:35:11.000 Thanks so much for listening, and God bless.
00:35:16.000 For more on many of these stories and news you can trust, go to CharlieKirk.com.