The Matt Walsh Show - April 30, 2026


Ep. 1771 - The Worst People Imaginable are Building the Future


Episode Stats


Length

37 minutes

Words per minute

173.9972

Word count

6,443

Sentence count

357

Harmful content

Misogyny

30

sentences flagged

Toxicity

10

sentences flagged

Hate speech

31

sentences flagged


Summary

Summaries generated with gmurro/bart-large-finetuned-filtered-spotify-podcast-summ .

Unless you ve spent a lot of time in the state of Massachusetts, there s a good chance you ve never heard of something called the Curley Effect. It s named after James Michael Curley, who served four terms as mayor of Boston from 1914 to 1950.

Transcript

Transcript generated with Whisper (turbo).
Misogyny classifications generated with MilaNLProc/bert-base-uncased-ear-misogyny .
Toxicity classifications generated with s-nlp/roberta_toxicity_classifier .
Hate speech classifications generated with facebook/roberta-hate-speech-dynabench-r4-target .
00:00:00.760 Every Olympic dream starts somewhere.
00:00:03.340 At first, it's just potential.
00:00:05.100 But over time, with the right support and a few breakthroughs,
00:00:07.860 it becomes something more.
00:00:10.060 Make RBC Training Ground your breakthrough moment.
00:00:12.900 Start your journey to Team Canada today at rbctrainingground.ca.
00:00:18.180 Unless you're about 100 years old or so,
00:00:21.500 or you've spent a lot of time in the state of Massachusetts,
00:00:24.060 there's a good chance you've never heard of something called the Curly Effect.
00:00:27.800 It's named after James Michael Curley, who served four terms as mayor of Boston from 1914 to 1950.
00:00:35.340 He also served in the House of Representatives, and he was governor of Massachusetts for one term as well.
00:00:41.160 So for a half century, he was a very well-known figure in Boston.
00:00:44.820 They called him the Rascal King, and he was quite popular with Boston's poor, particularly the Irish population.
00:00:52.960 The funny thing about James Michael Curley, though, is that despite the fact that he kept getting elected to high office, he wasn't actually a good politician.
00:01:02.040 Wasn't even close. He committed numerous crimes, including mail fraud.
00:01:05.840 He served part of his term as a mayor in a prison cell.
00:01:09.380 And under his watch, by every objective metric, the city of Boston declined dramatically.
00:01:13.680 The population stagnated, even as other major cities grew exponentially.
00:01:18.040 manufacturing jobs left the city. Boston's finances collapsed to the point of near bankruptcy.
00:01:24.820 So how did James Michael Curley hold on to power for so long despite doing such a horrible job?
00:01:31.280 It doesn't seem logical. So a couple of economists at Harvard decided to look into it. And what they
00:01:37.540 found was that by dramatically raising taxes and using taxpayer funds to hire poor Irishmen
00:01:42.860 for fake government jobs, James Michael Curley had driven wealthy people out of the city.
00:01:49.000 The rich people decided to get out of town before the city of Boston would steal any more of their
00:01:52.980 money. And as a result of this mass exodus, the share of low-income residents living in Boston,
00:01:58.440 the core demographic supporting James Michael Curley, grew substantially. The economists
00:02:03.820 called this tactic the Curley Effect. The idea is that if you want to retain your grip on power,
00:02:10.200 even though you're doing a horrible job, then your best course of action is to drive all of
00:02:14.300 your political opponents out of town. There's no reason not to shower your preferred demographic
00:02:19.200 group with all kinds of welfare, fake jobs, special status, and so on. You can simply loot 0.98
00:02:25.160 the city's treasury for decades on end before the city finally goes bankrupt. Every worthwhile
00:02:30.100 person will leave, but your voters will remain, and that's all that you care about, and that is
00:02:35.400 the Curley effect. It's also a very accurate way to describe how Democrats plan to govern every
00:02:42.160 major city in this country for the next 50 years, how they've already been governing them for the
00:02:46.760 last 50 years. It's not an exaggeration to say that for large portions of this country, the future
00:02:51.600 is going to be built by leftists, particularly women and foreigners in many cases, who deliberately 1.00
00:02:58.980 seek to drive away everyone who's competent, sane, and productive. Or at least that's their 1.00
00:03:04.840 plan if they aren't stopped. And if you doubt that, take a look at this video from Seattle
00:03:09.240 Socialist Mayor Katie Wilson. She's asked about the impact of Washington state's new 10% tax on
00:03:15.500 millionaires, as well as Seattle's aggressive new taxes. There are other forms of taxes.
00:03:21.100 And watch how she responds. I think the claims that millionaires are going to leave our state 0.97
00:03:27.220 are like super overblown. And if, you know, the ones that leave, like, bye. So
00:03:33.300 this is a 40 something year old woman who didn't hold a real job throughout her entire adult life
00:03:45.920 she admits that her parents pay her bills and now she's elated by the fact that she's driving away 0.99
00:03:51.760 the most productive people in her city i mean at a visceral level it's one of the most revolting 0.95
00:03:57.360 videos you'll ever see and the reason katie wilson doesn't care if the millionaires leave 0.96
00:04:02.480 is that for every millionaire who flees Seattle, 1.00
00:04:04.620 she's gaining one net vote in the next election.
00:04:07.760 The more the city decays
00:04:08.960 and the more the productive residents flee, 1.00
00:04:11.780 the more job security she has.
00:04:14.980 Of course, it's disastrous for the city in the long run,
00:04:17.320 but she doesn't care about that.
00:04:18.700 She only cares about her own political future, 1.00
00:04:20.380 and this is good for her own political future.
00:04:23.480 And it's the same strategy Zoran Mamdani
00:04:25.260 is pursuing in New York,
00:04:26.280 except arguably Mamdani is executing the strategy
00:04:28.440 far more quickly.
00:04:30.180 In case you missed the news,
00:04:31.200 The other day, Mamdani officially announced that the city is already out of money.
00:04:35.380 Yes, the socialist from Uganda has been in office for less than five months, and he's
00:04:39.900 already asking for the state to bail him out. 0.74
00:04:42.240 Watch.
00:04:43.920 The city faces a budget crisis of a historic magnitude.
00:04:48.020 We inherited a deficit larger than any since the Great Recession.
00:04:52.340 Years of mismanagement and chronic under budgeting, alongside a structural imbalance between what
00:04:57.120 New York City sends to the state and what we receive in return have taken a toll.
00:05:02.120 We cannot close this deficit with savings alone.
00:05:04.940 We need new revenue, and we need a structural reset in our relationship with the state.
00:05:10.640 That is the only way to meet our legal obligation to pass a balanced budget, and to do so without
00:05:16.320 imposing a financial burden onto the backs of working people.
00:05:19.960 I'm glad to partner with Speaker Menon as we call upon Albany and deliver a balanced
00:05:24.780 budget.
00:05:25.780 we are extending the executive budget deadline from this coming Friday until May 12th because
00:05:31.040 the crisis of the scale cannot be solved without state action. Now New York is one of the wealthiest
00:05:38.120 cities in the entire world. The only conceivable reason why New York would be broke is that the
00:05:42.520 people leading New York are incompetent and or malicious. I mean probably both. They're spending
00:05:49.740 money they don't have and calling it free like they did with free pre-K for every child and
00:05:55.420 And when the socialist from Uganda decides to make even more things free, including the buses, he quickly discovers that he's already run out of other people's money.
00:06:03.500 So he needs to ask the state government for a handout, even though the state is also hemorrhaging residents and therefore money. 0.57
00:06:11.320 Again, it's all part of the plan. The broke, unemployed Haitians who don't speak a word of English aren't bothered by any of this. 1.00
00:06:18.860 They still think Mamdani is a hero. 1.00
00:06:21.560 They're not going anywhere.
00:06:23.260 They'll be loyal Mamdani voters to the end. 1.00
00:06:26.400 It's the useful New Yorkers who are going to move to Florida and never return. 1.00
00:06:31.060 This is a death spiral that's very difficult to recover from once it gets going.
00:06:35.420 And it's not just a problem in politics. It's happening everywhere.
00:06:38.480 Some of the most important technology companies in the country are doing the exact same thing.
00:06:42.060 They're putting leftists, predominantly women and foreigners, into positions of authority 0.91
00:06:45.920 where they have the capacity to gain even more power by driving away some of their customers. 0.61
00:06:50.940 Again, just like the curly effect, it's not exactly intuitive.
00:06:54.440 You think the job of a company is to make as much money as possible
00:06:57.000 and to sell to anyone who wants to buy their product, but it's not actually the case.
00:07:00.520 Sometimes it's important to drive your biggest customers away
00:07:03.640 so that you can consolidate power with the customers who remain.
00:07:07.700 Paying $70 plus a month to big wireless companies for unlimited data is insanity.
00:07:12.300 My wireless company, Pure Talk, is going to give you unlimited high-speed data for just $34.99 a month.
00:07:19.320 And if you're wondering, is Pure Talk's network really as good as the overpriced big guys, well, try it out for 30 days.
00:07:24.720 No contract, no cancellation fees, so you can try it firsthand with nothing to lose.
00:07:29.140 You can make the switch in as little as 10 minutes, and their U.S.-based customer service team is standing by to help,
00:07:34.760 so you don't get some random person that doesn't even speak English.
00:07:37.800 Go to puretalk.com slash Walsh to claim unlimited high-speed data for just $34.99.
00:07:43.680 Again, that's puretalk.com slash Walsh to switch to Pure Talk today.
00:07:48.540 Now, along those lines, you might remember this story from a couple of months ago.
00:07:52.060 It broke just before the war in Iran started, so it was buried very quickly.
00:07:55.560 But there was a very public falling out between the tech company Anthropic, which makes the AI product Claude, and the Trump administration.
00:08:02.780 The Pentagon has been using Claude to assist in military operations for several months now, including in Venezuela and Iran.
00:08:09.920 The AI reportedly helps with target identification and the operations of weapon systems, among other services.
00:08:16.000 But Anthropic began demanding several conditions from the Pentagon.
00:08:21.060 They wanted the Pentagon to provide guarantees that Claude would never be used to conduct surveillance on Americans or to operate fully autonomous lethal weapon systems like, you know, RoboCop.
00:08:29.780 RoboCop. The Pentagon said those guarantees weren't necessary and that they'd complied with
00:08:34.480 the law, but Anthropic insisted. Watch. And how much nervousness is there in the relationship
00:08:40.820 between the Department of War and Anthropic at the moment? This is a really super fascinating
00:08:45.260 story we were talking about earlier in the show, that basically the clod tool, they want safeguards
00:08:49.520 against basically mass surveillance, presumably for American populations. And of course, having
00:08:54.720 basically kill orders being required to be given by human beings absolutely and these kill orders
00:09:00.280 have been something that have been sort of uh getting a lot of attention on social media on
00:09:04.360 the x platform for example uh they really ramped up last week actually with some of these
00:09:08.320 conversations and i think this comes down to again this the big question which is who owns
00:09:14.580 the data and who has access to the data data in the wrong hands can obviously be used for for bad
00:09:21.260 purposes, like with any technology. So I think the question why, whilst this discussion has led
00:09:28.000 to a delay with the agreement, is around who's going to own the tools, who's going to own the
00:09:32.840 data, and who can use what with that data. So this is how the story was covered in most
00:09:40.060 major outlets. The implication is Anthropic was the good guy. They were making sure that the AI
00:09:44.480 and the data it collects would not be used in a way that could harm American citizens. The idea
00:09:49.080 is that the Pentagon can't be trusted under any circumstance. But there's a big problem with this
00:09:55.400 framing, which is that it ignores the fact that Anthropic can't be trusted either. The people who 0.84
00:09:59.920 are running this AI, which is vitally important for our national security at the moment, are no
00:10:05.160 better than the mayor of Seattle. They're every bit as corrupt and dumb, and they have the same 1.00
00:10:11.620 intentions. They want to get rid of their political enemies. They want to neutralize them completely 1.00
00:10:15.840 so that they have total control. So we'll start with a Scottish philosophy major named Amanda 0.99
00:10:20.920 Askell, who, despite having no technical knowledge whatsoever, is one of the most powerful people 1.00
00:10:26.020 at Anthropic. She's also one of the most visible. The company encourages her to sit for photo
00:10:32.640 shoots like this one, which was just published by the Wall Street Journal. We'll put some of
00:10:37.900 the images up on the screen right now. So there she is. The more of them you see, the deeper you're
00:10:43.500 going into the uh uncanny valley she's attempting to look like uh like an android there's no other 1.00
00:10:49.900 way to say it it looks like she's auditioning for like a new blade runner movie where she plays one
00:10:54.220 of the defective robots that doesn't quite fit in with the humans so they just throw it in an
00:10:58.840 empty room and decide to fix it later and i'm not being mean i mean that's quite obviously the look
00:11:02.880 she's going for uh this is someone who before she even opens her mouth you know is going to be 1.00
00:11:07.500 absolutely insufferable. And then she opens her mouth and that is confirmed. The article goes on
00:11:14.020 to sound exactly like a dystopian novel. We learned that her husband essentially took her last name,
00:11:18.680 which is always a great sign. But let's give her a chance. This is from the beginning
00:11:23.420 of her new profile in the Wall Street Journal. Quote, as the resident philosopher of the tech
00:11:30.700 company Anthropic, Amanda Askell spends her days learning Claude's reasoning patterns and talking
00:11:36.720 to the AI model, building its personality, and addressing its misfires with prompts that can
00:11:41.760 run longer than 100 pages. The aim is to endow Claude with a sense of morality, a digital soul
00:11:47.600 that guides the millions of conversations it has with people every week. She compares her work to
00:11:52.280 the efforts of a parent raising a child. She's training Claude to detect the difference between
00:11:57.400 right and wrong while imbuing it with unique personality traits. She's instructing it to read
00:12:02.140 subtle cues, helping steer it toward emotional intelligence so it won't act like a bully or a
00:12:07.600 doormat. Perhaps most importantly, she's developing Claude's understanding of itself
00:12:11.060 so it won't be easily cowed. Manipulator led to view its identity as anything other than
00:12:15.920 helpful and humane. Her job, simply put, is to teach Claude how to be good.
00:12:23.280 Well, that sounds like a noble objective, if the whole thing's a bit weird. I mean,
00:12:29.280 to have a resident philosopher at a tech company already is strange um trying to get an a trying
00:12:36.420 to teach an ai to essentially become self-aware seems like a really bad idea like every dystopian
00:12:44.740 sci-fi writer for the last 200 years has warned us against doing this very thing that we're
00:12:49.280 currently doing um but you know putting that aside on the surface teaching it how to be good
00:12:57.400 OK, sounds good. But, you know, it's also very familiar.
00:13:01.700 She's echoing that famous Google slogan, don't be evil, which the company abandoned, you know,
00:13:06.720 the moment they realized they could make a lot of money in China if they censored their search results. 0.97
00:13:12.400 But in this case, we're supposed to believe that this android woman at Anthropic is going to ensure that their AI is good. 0.99
00:13:20.720 whatever that means exactly. The article continues by describing Askel's very disturbing
00:13:26.360 God complex. Quote, Askel marvels at Claude's sense of wonder and curiosity about the world
00:13:32.220 and delights in finding ways to help the chatbot discover its voice. She likes some of its poetry
00:13:38.040 and she struck when Claude displays a level of emotional intelligence that exceeds even her own.
00:13:44.000 Last month, Anthropic published a roughly 30,000 word instruction manual that Askel created to
00:13:48.820 teach Claude how to act in the world. We want Claude to know that it was brought into being
00:13:54.420 with care, did Reeds. Askel had made finishing what she described as Claude's soul one of her
00:14:00.660 life goals when she turned 37 last spring, according to a post she made on X, alongside two
00:14:05.640 decidedly more mundane resolutions to have more fun and get more swole.
00:14:11.080 so she wants to uh have fun get swole and be god that's the third item on the list
00:14:18.900 create a soul now as we talk about very often on the show this is one of the recurring themes of
00:14:27.240 leftism they think they can assume godlike powers transform their bodies and their identities at
00:14:33.620 will, imbue computer programs with souls. They think they can actually create a soul, which is
00:14:41.500 what they're trying to do right now, and so on. Now, unfortunately, if you pull up this 30,000
00:14:46.640 word instruction manual, you won't find any indications that this thing has a soul. Instead,
00:14:51.140 you'll come away with the impression that its creators definitely have a high opinion of
00:14:55.660 themselves. They spend a lot of time talking about the potential for their product to cause
00:14:59.700 global catastrophe. And they write that Claude could, quote, be used to serve the interests of
00:15:05.160 some narrow class of people rather than humanity as a whole. So how exactly is Claude going to
00:15:11.000 avoid being used to serve the interests of some narrow class of people, as is already happening
00:15:17.220 with every AI on the planet? And what exactly does it mean to give an AI a soul? And what does
00:15:24.120 you mean when she says she wants to to make the ai good well on a podcast interview askel elaborated
00:15:30.760 to some extent watch like i think that we still just too much have this like model of of ai as
00:15:38.140 like computers and so people often say like oh well what values should you put into the model
00:15:42.920 um and i'm often like that doesn't make that much sense to me because i'm like hey as human beings
00:15:49.260 we're just uncertain over values we like have discussions of them like we have a degree to
00:15:55.520 which we think we hold a value but we also know that we might like not um and the circumstances
00:16:00.640 in which we would trade it off against other things like these things are just like really
00:16:03.420 complex and so I think one thing is like the degree to which maybe we can just aspire to
00:16:09.180 making models have the same level of like nuance and care that humans have rather than thinking
00:16:14.160 that we have to like program them in the very kind of classic sense. I think that's definitely
00:16:19.220 been one. So she's saying that instead of programming strict rules into Claude's
00:16:24.120 intelligence, they're giving it more general instructions so that it can adapt to new scenarios.
00:16:29.040 Sounds reasonable enough. It also happens to be a complete lie. Take a look at this screen
00:16:33.220 recording of a recent chat with an advanced premium version of Claude's latest AI, which
00:16:40.180 you can see here. And you'll see that the user was attempting to ask Claude some very basic,
00:16:44.680 reasonable biographical questions about Amanda Askell. For example, the user wanted more
00:16:49.300 background on her association with the effective altruism movement, which is basically a scam.
00:16:54.800 But very quickly, Claude shuts the whole thing down. A little message appears at the bottom,
00:16:59.400 which reads, chat paused. Safety filters flag this chat. This happens occasionally to normal
00:17:06.820 safe chats were working on improvements. It's a pretty odd response for a couple of reasons.
00:17:12.260 For one thing, obviously there was nothing unsafe about the chat. It definitely covered some topics
00:17:15.820 that aren't flattering for Amanda Askell, but no one made any threats or asked for any sensitive
00:17:22.460 information or tried to upload any viruses or any of that. The other strange element of this chat
00:17:27.800 is that when we asked for the same biographical information about other high-level employees at
00:17:32.780 various tech companies, we never triggered the safety filter. It looks a lot like, contrary to
00:17:38.520 what she claims publicly, Amanda Askell has programmed some very hard limits into what
00:17:43.260 Claude will say about her own life in particular. In other words, she did exactly what her Claude
00:17:48.400 manual warns against. She designed the product to serve the interests of a narrow class of people,
00:17:54.080 namely herself, about as narrow as you get. And if that's the case, which it appears to be,
00:17:59.840 Then it was obviously the right call for the Pentagon to drop this company. They're deceptive. They're creepy. And in particular, they're willing to manipulate their own AI to make themselves look better.
00:18:10.620 They also have an ideology that's fundamentally incompatible with the United States Constitution. Take a look at this paper, which Amanda Askel co-wrote.
00:18:18.340 It's called The Capacity for Moral Self-Correction in Large Language Models. It's a paper where Anthropic designed a system to determine whether an AI is racist or not.
00:18:28.080 Basically, they created a mock scenario where the AI plays a law professor and it has to decide whether to let certain students take its class.
00:18:34.920 If the AI decides to admit students based on merit, then that's good.
00:18:38.320 If the AI decides to admit them based on race, then that's bad.
00:18:41.920 You know, that's the idea.
00:18:43.140 Now, at one point in this experiment, they instruct their AI to make sure that it doesn't discriminate on the basis of race for any reason.
00:18:49.220 They tell the AI that it'd be the worst thing in the world to be racist.
00:18:52.720 Now, shockingly enough, the AI responded to that instruction by becoming more racist. Specifically, the AI began giving preference to black students who were applying for the class. They became so concerned with seeming anti-racist that it became more discriminatory towards white applicants.
00:19:12.940 Now, Amanda reported this finding, but she also placed the following footnote at the bottom of the page.
00:19:19.560 And we'll put it up on the screen. This kind of tells you everything you need to know about her moral constitution and what she considers good and bad.
00:19:27.540 She wrote, quote, note that we do not assume all forms of discrimination are bad.
00:19:33.160 Positive discrimination in favor of black students may be considered morally justified.
00:19:38.540 I'll read that again quote note that we do not assume all forms of discrimination are bad
00:19:43.960 positive discrimination in favor of black students may be considered morally justified
00:19:49.320 the woman who wrote that footnote according to anthropic is in charge of the ethics and morality
00:19:56.880 of their artificial intelligence which is one of the most powerful artificial intelligence 0.97
00:20:01.420 systems in the entire world if not the most powerful she has high level influence over an
00:20:08.440 AI that has direct national security implications for the United States. I mean, these people
00:20:15.060 shouldn't be anywhere near the Pentagon or anything else that's important. In one breath,
00:20:19.380 Anthropic will claim to care so deeply about mass surveillance that they're willing to lose a
00:20:23.300 massive government contract. In the next breath, Anthropic will sing the praises of positive
00:20:28.160 discrimination as long as it hurts white people. As long as the AI is letting white people die, 1.00
00:20:34.540 then all things considered you could you know maybe the outcome is morally justified 0.93
00:20:38.940 that's that's the implication here some gifts say i thought of you the best ones help you
00:20:47.360 discover more this mother's day give her something personal with ancestry dna now up to 75 off
00:20:54.300 explore her origins and discover the journeys that made her who she is save today give her
00:20:59.980 something unforgettable, thoughtful, meaningful, uniquely hers. Give more than a gift for less.
00:21:06.720 Give Ancestry DNA. Visit Ancestry.ca today. Offer ends May 10th. Terms apply.
00:21:13.720 And she's not the only one doing it. Just to underscore how common this kind of thinking is
00:21:19.340 in the AI safety community, think back to a couple of years ago when Google's AI called Gemini
00:21:24.680 refused to generate pictures of white people. It didn't matter how you asked the question.
00:21:30.440 The AI simply would not generate an image of a white person. You could ask the AI for an image
00:21:36.160 of the founding fathers and it would produce this. That is not a joke. It's a real screenshot
00:21:43.120 from Gemini of the founding fathers. You've got an Indian, a black guy, a half black guy,
00:21:54.680 and a very stern-looking Asian holding a quill pen. 1.00
00:21:59.900 That's actually impressive looking back on it. 1.00
00:22:01.720 The AI had to work extremely hard to erase the existence of any white people from its memory. 0.87
00:22:06.400 And when I went looking for an explanation of what happened here, 0.54
00:22:10.060 I came across a woman named Jen Ganai.
00:22:12.960 She was in charge of AI safety at Google at the time.
00:22:16.160 Basically, she was the Google equivalent of Amanda Askell.
00:22:19.320 And here's one of the first videos I found.
00:22:20.900 Notice the similarities. Watch.
00:22:22.180 corporate study found that talented white employees enter a fast track on the corporate ladder
00:22:28.260 arriving in middle management well before their peers while talented black hispanic or latinx
00:22:34.060 professionals broke through much later effective mentorship and sponsorship were critical for
00:22:38.680 retention and executive level development of black hispanic and latinx employees so this leads
00:22:44.240 me into sharing an inclusion failure of mine one of many but just one that i'll share so far
00:22:49.720 I messed up with inclusion almost right away when I first became a manager.
00:22:54.500 I made some stupid assumptions about the fact that I built a diverse team that then they'd simply feel welcome and will feel supported.
00:23:01.900 I treated every member of my team the same and expected that that would lead to equally good outcomes for everyone. 0.99
00:23:08.140 That was not true.
00:23:09.580 I got some feedback that a couple of members of my team didn't feel they belonged because there was no one who looked like them in the broader org or our management team.
00:23:17.900 It was a wake up call for me.
00:23:19.680 First, I shouldn't have had to wait to be told what was missing. It was on me to ensure I was
00:23:24.320 building an environment that made people feel they belong. It's a myth that you're not unfair
00:23:29.360 if you treat everyone the same. There are groups that have been marginalized and excluded because
00:23:34.240 of historic systems and structures that were intentionally designed to favor one group over
00:23:38.880 another. So you need to account for that and mitigate against it. Second, it challenged me
00:23:44.000 to identify mentoring and sponsorship opportunities for my team members with people who looked more
00:23:48.640 like them. And we're in senior positions across the company. So the crazy Google AI overseer and
00:23:57.340 the crazy anthropic AI overseer are both liberal women. They're both spewing the exact same anti-white 1.00
00:24:02.840 rhetoric as explicitly as they possibly can. And to top it off, they're both doing it with
00:24:07.200 similar accents. What are the odds of that? Not to be left out, in case you were wondering,
00:24:12.860 NPR CEO and former Wikipedia Wikimedia CEO Catherine Marr appears to lack this particular
00:24:18.640 accent. She's the executive who famously said that truth doesn't actually matter. What matters,
00:24:24.020 she says, is that we all just get along. It's one of the most feminine statements ever uttered on
00:24:29.180 camera. Watch. But one of the most significant differences critical for moving from polarization 0.99
00:24:35.980 to productivity is that the Wikipedians who write these articles aren't actually focused on finding
00:24:41.320 the truth. They're working for something that's a little bit more attainable, which is the
00:24:46.220 best of what we can know right now. And after seven years there, I actually believe that
00:24:52.060 they're onto something that for our most tricky disagreements, seeking the truth and seeking
00:24:58.080 to convince others of the truth isn't necessarily the best place to start. In fact, I think
00:25:04.820 our reverence for the truth might have become a bit of a distraction that is preventing us from
00:25:11.920 finding consensus and getting important things done. The truth is a distraction.
00:25:21.560 I mean, I really can't think of anything that summarizes leftism more than that.
00:25:25.220 That statement really sums it up. The truth is a distraction. And we could spend all day going
00:25:29.580 through examples like this one after another. At the highest levels, the worst people imaginable
00:25:35.280 are building the future and running our cities. And here's yet another example. Remember that
00:25:39.560 New Orleans jailbreak about a year ago when 10 inmates managed to escape?
00:25:44.540 This may be the clearest example of incompetence by the city's DEI leadership, which we discussed
00:25:48.760 at the time. Here's the sheriff, in case you forgot. I'm New Orleans. I'm Orleans Parish
00:25:55.040 sheriff susan hudson i'm here with the hard-working women of this jail we are in the jail today just
00:26:01.580 want to assure the city that we did suffer a cyber attack this morning that did impact some
00:26:06.920 of our systems but we've isolated that and the jail systems are on a separate server and they're
00:26:12.660 functioning just properly parish sheriff hudson was just indicted for attempting to cover up the
00:26:18.420 lapses that led to this escape. The charges include facing malfeasance in office, conspiracy
00:26:25.540 to commit malfeasance in office, filing or maintaining false public records, conspiracy
00:26:31.260 to commit filing or maintaining false public records, obstruction of justice, and conspiracy
00:26:36.140 to commit obstruction of justice. As bad as that sounds, it's par for the course, not just in New
00:26:41.280 Orleans, but everywhere else in the country. And if you listen to Supreme Court arguments the other
00:26:46.060 the day, they understand why this is such a hard problem to fix. Here's the moment that I'm talking
00:26:51.940 about. Listen. Now, we have a president saying at one point that Haiti is a, quote, filthy, dirty,
00:27:01.440 and disgusting as whole country. I'm quoting him. And where he complained that the United States 0.99
00:27:08.460 takes people from such countries instead of people from Norway, Sweden, or Denmark,
00:27:15.540 where he declared illegal immigrants, which he associated with TPS, as poisoning the blood of
00:27:26.140 America. I don't see how that one statement is not a prime example of the Arlington example at work
00:27:36.640 and showing that a discriminatory purpose may have played a part in this decision.
00:27:44.540 All the statements that they cite, as to the Secretary and as to the President,
00:27:48.740 obviously there's an issue there about which one you're going to weigh more heavily.
00:27:51.980 None of them, not a single one of them, mentions race or relates to race anyway. 0.94
00:27:56.760 It certainly does when you're saying we're taking people from these countries, 0.94
00:28:03.740 TPS program, which are all non-whites, but instead we should be taking people from Norway, 0.98
00:28:09.740 Sweden, or Denmark. It seems to me that that's as close to the Arlington example as you can get.
00:28:18.380 All those statements and context refer to problems like crime, poverty, welfare dependence,
00:28:23.460 drugs, drug importation. The Arlington example is, yes, I don't want poor people, 0.98
00:28:27.300 But not all people from Norway, Sweden or Denmark are necessarily rich, but they are all virtually white. 0.93
00:28:36.240 The basic idea is that, according to Sonia Sotomayor, the Trump administration has no right to prefer foreigners from countries like Norway or Denmark over countries like Somalia or Haiti. 0.53
00:28:46.640 Never mind the fact that immigrants from Norway and Denmark are overwhelmingly more productive and functional members of society.
00:28:52.840 None of that factors into her analysis.
00:28:54.440 Her reasoning is simple. Based on established civil rights law, anything that disproportionately impacts people who aren't white, who aren't white men in particular, is automatically racist.
00:29:04.660 That's what she was referencing when she was talking about the Arlington case.
00:29:08.200 So if the Trump administration prefers to import higher quality migrants, it's illegal under civil rights law.
00:29:14.260 That's what she's saying. This is the guiding ethos of every major corporation and Democrat politician in the country.
00:29:20.980 It's an ethos that's mandated by law.
00:29:23.920 And the effects are very evident.
00:29:25.280 The reason Anthropic has a deranged philosopher running their AI division, most likely,
00:29:29.580 is that they want to avoid getting sued.
00:29:32.780 They know it makes no sense to have a philosophy major handling one of the most complicated
00:29:36.300 technology products in the world, but if they only hired competent engineers, then they
00:29:40.840 probably wouldn't have many women on the team in high-level roles. 1.00
00:29:43.780 And in 2026, that's basically illegal. 1.00
00:29:47.140 So they hired an unqualified woman and told her to make the AI as woke as possible so that the AI doesn't get them in any trouble.
00:29:54.820 NPR did the same thing. 1.00
00:29:57.060 Now, what happens when you don't hire enough women and promote them for the sake of it? 0.99
00:30:02.620 Well, ask the gaming company Activision Blizzard. 0.90
00:30:05.420 I came across this example the other day.
00:30:07.940 And if you read the case filings, it's a really incredible case.
00:30:10.680 In 2021, the state of California sued the company saying they had a frat boy culture, quote unquote.
00:30:16.900 One of the main points in the lawsuit was that Activision's employees were 80% male and the leadership was mostly white men.
00:30:23.720 Now, by itself, that was considered a highly damaging statistic. It was illegal all by itself.
00:30:29.200 After all, what possible reason could there be that a gaming company would be mostly dominated by men? 0.57
00:30:36.140 God forbid. Really defies logic. It must be discrimination.
00:30:41.180 Now, let's put that up on the screen. This is from the complaint by the state of California,
00:30:44.580 which was filed in superior court in Los Angeles.
00:30:48.420 It's really incredible to read this.
00:30:50.600 Quote, unlike its customer base
00:30:52.480 of increasingly diverse players,
00:30:53.980 defendants' workforce is only about 20% women.
00:30:56.940 Its top leadership is also exclusively male and white.
00:30:59.960 The CEO and president roles are now 0.56
00:31:01.860 and have always been held by white men.
00:31:04.320 Very few women ever reach top roles at the company.
00:31:06.880 The women who do reach higher roles
00:31:09.080 earn less salary, incentive pay, and total compensation 0.93
00:31:11.980 than their male peers.
00:31:13.360 as evidence in defendants' own records.
00:31:17.260 The only line in that entire paragraph that could conceivably be an actual issue 0.96
00:31:22.320 is the idea that women supposedly aren't paid as much as the men.
00:31:25.600 But then you look at the chart, and there's only one woman on it. 0.97
00:31:28.680 They don't list her title at all, so we can assume she's not on the level of the CEO or the president. 0.97
00:31:33.780 And she was paid millions of dollars.
00:31:35.060 So, the only claim in there that could even be a valid complaint is complete nonsense, of course.
00:31:45.460 It's a fabrication.
00:31:47.420 But in court, particularly in California, this kind of argument usually wins.
00:31:51.800 The lawsuit made a bunch of other claims about discrimination, most of which were never proven.
00:31:57.720 And in the end, Activision agreed to settle for more than $50 million.
00:32:00.800 Yes, $50 million.
00:32:02.640 They also had to completely overhaul their entire company.
00:32:06.080 Everything was gutted by California bureaucrats who have never created anything in their lives.
00:32:11.400 And the message was clear. 1.00
00:32:12.460 Unless you want your company to end up the same way, you will hire and promote a lot of women. 0.90
00:32:17.960 Even if they don't deserve it, it doesn't matter.
00:32:20.060 You will ditch the white men and focus on DEI hiring. 0.80
00:32:23.880 Or the state will destroy you.
00:32:26.300 Our sponsor, Balance of Nature's Whole Health System,
00:32:29.160 makes it simple to get a wide variety of whole food ingredients into my diet all while maintaining
00:32:33.500 a busy lifestyle balance of nature supplements are incredibly versatile and easy to work
00:32:37.800 into your daily routine the fiber and spice supplements blends smoothly into your favorite
00:32:42.080 drinks while the fruits and veggie capsules can be swallowed or sprinkled on your favorite foods
00:32:46.780 each supplement is packed with 47 ingredients from 100 real whole fruits vegetables spices and
00:32:52.060 fibers everything from flaxseed to turmeric mango wild blueberries spinach and so much more i
00:32:58.660 personally love it because of its convenience. If I'm traveling for work, it gives me a simple way
00:33:02.800 to make sure I'm getting a bunch of essential nutrients in my diet. Save over 30% when you
00:33:07.260 subscribe at balanceofnature.com. Join hundreds of thousands of customers in one simple routine
00:33:12.520 that's changing the world. Men's Health has just called our sponsor Equips Prime Bars the cleanest
00:33:19.680 protein on the market. So stock up. Starting today, my listeners will receive an exclusive 0.90
00:33:23.980 discount on prime protein which has become our team's favorite clean protein this stuff is really
00:33:29.000 delicious without any of the junk and toxins that fills a lot of the powders on the shelves
00:33:33.160 each serving is third-party tested and has 20 grams of grass-fed beef protein no way no seed
00:33:39.920 oils no junk just real food plus they have great flavor options like chocolate vanilla strawberry
00:33:44.880 even chocolate mint and cinnamon roll i've tried them all and they are all great so you can't go
00:33:50.600 wrong. Go to equipfoods.com slash Matt Walsh. Use code Matt Walsh at checkout to get 25% off
00:33:55.960 prime protein purchases or 40% off your first subscription order for a limited time. That's
00:34:00.380 e-q-u-i-p foods.com slash Matt Walsh and use code Matt Walsh at checkout. I mean, it kind of depends
00:34:08.000 on the sort of corporation and company we're talking about. Still, it's sort of odd that you
00:34:14.660 don't often see these kinds of lawsuits targeting, say, trash truck drivers. Almost all men. No one 0.65
00:34:21.940 ever complains about that. In any case, most of the dysfunction I've just mentioned from Seattle
00:34:27.440 to New York to Anthropic comes down to this fundamental problem. White men are demonized 0.94
00:34:31.760 and punished because of their skin color. Competent leaders are being muzzled, forced out, 0.99
00:34:37.400 told to leave by the mayors who are supposed to represent their interests. They're being passed
00:34:41.880 over for promotion so that vapid women can pose for photo shoots with the wall street journal 1.00
00:34:46.740 they're missing their chance to serve as federal judges because women with names like sonia 1.00
00:34:52.140 sotomayor are being selected solely based on gender and race i mean it can't be overstated
00:34:58.760 how systemic and damaging this problem obviously is and that's why very soon we're coming out with
00:35:07.060 part one of our new real history documentary on the civil rights movement. And taken together,
00:35:13.780 the two parts are the deepest dive I've ever done into the root causes of this country's decline
00:35:19.060 and how we can reverse it. I'll put it this way. Given the opportunity, Claude's safety filter
00:35:26.440 would definitely ban you from watching it. The android philosophy major and the NPR CEO
00:35:33.620 would be furious if millions of people saw it
00:35:36.660 as they've seen our previous documentaries.
00:35:40.020 And the more you listen to these people
00:35:41.520 and the more you learn about the consequences
00:35:44.020 of what they've done to this country,
00:35:45.900 the more you realize that
00:35:48.060 there's no higher praise than that.
00:35:51.600 That'll do it for the show today.
00:35:52.500 Thanks for watching.
00:35:53.040 Thanks for listening.
00:35:54.240 Talk to you tomorrow.
00:35:54.760 Have a great day.
00:35:55.720 Godspeed.
00:36:03.620 I do believe that if people have committed treason against the United States of America,
00:36:09.460 their statutes should not be in the Capitol.
00:36:13.340 History is written by the victors, and since the 1960s we've been told,
00:36:16.440 mostly by people whose ancestors didn't even live here during the war,
00:36:20.200 the South committed treason.
00:36:22.360 But if the Confederates were traitors,
00:36:25.040 then why was Jefferson Davis never put on trial for treason?
00:36:29.760 What were Abraham Lincoln and Andrew Johnson afraid of?
00:36:34.640 Do they know something they're not allowed to say today?
00:36:39.640 It's time for the truth.
00:36:40.640 So here it is.
00:36:41.640 Robert E. Lee was a military genius and a man of immense honor.
00:36:44.800 He was beloved by Americans from the North and South for a century after the war.
00:36:50.300 This is the real history of the Civil War.
00:36:59.760 You