Unless you're about 100 years old or so, or you've spent a lot of time in the state of Massachusetts, there's a good chance you've never heard of something called the Curley Effect. It's named after James Michael Curley, who served four terms as mayor of Boston from 1914 to 1950. He also served in the House of Representatives, and he was governor of Massachusetts for one term as well. So for a half century, he was a very well-known figure in Boston. They called him the Rascal King, and he was quite popular with Boston's poor, particularly the Irish population.

The funny thing about James Michael Curley, though, is that despite the fact that he kept getting elected to high office, he wasn't actually a good politician. Wasn't even close. He committed numerous crimes, including mail fraud. He served part of his term as mayor in a prison cell. And under his watch, by every objective metric, the city of Boston declined dramatically. The population stagnated, even as other major cities grew exponentially. Manufacturing jobs left the city. Boston's finances collapsed to the point of near bankruptcy.

So how did James Michael Curley hold on to power for so long despite doing such a horrible job?
It doesn't seem logical. So a couple of economists at Harvard decided to look into it. And what they found was that by dramatically raising taxes and using taxpayer funds to hire poor Irishmen for fake government jobs, James Michael Curley had driven wealthy people out of the city. The rich people decided to get out of town before the city of Boston could steal any more of their money. And as a result of this mass exodus, the share of low-income residents living in Boston, the core demographic supporting James Michael Curley, grew substantially. The economists called this tactic the Curley Effect. The idea is that if you want to retain your grip on power, even though you're doing a horrible job, then your best course of action is to drive all of your political opponents out of town. There's no reason not to shower your preferred demographic group with all kinds of welfare, fake jobs, special status, and so on. You can simply loot the city's treasury for decades on end before the city finally goes bankrupt. Every worthwhile person will leave, but your voters will remain, and that's all that you care about. And that is the Curley Effect.

It's also a very accurate way to describe how Democrats plan to govern every major city in this country for the next 50 years, and how they've already been governing them for the last 50 years. It's not an exaggeration to say that for large portions of this country, the future is going to be built by leftists, particularly women and foreigners in many cases, who deliberately seek to drive away everyone who's competent, sane, and productive. Or at least that's their plan if they aren't stopped.
00:03:04.840plan if they aren't stopped. And if you doubt that, take a look at this video from Seattle
00:03:09.240Socialist Mayor Katie Wilson. She's asked about the impact of Washington state's new 10% tax on
00:03:15.500millionaires, as well as Seattle's aggressive new taxes. There are other forms of taxes.
00:03:21.100And watch how she responds. I think the claims that millionaires are going to leave our state0.97
00:03:27.220are like super overblown. And if, you know, the ones that leave, like, bye. So
00:03:33.300this is a 40 something year old woman who didn't hold a real job throughout her entire adult life
00:03:45.920she admits that her parents pay her bills and now she's elated by the fact that she's driving away0.99
00:03:51.760the most productive people in her city i mean at a visceral level it's one of the most revolting0.95
00:03:57.360videos you'll ever see and the reason katie wilson doesn't care if the millionaires leave0.96
00:04:02.480is that for every millionaire who flees Seattle,1.00
00:04:04.620she's gaining one net vote in the next election.
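To make that "one net vote" arithmetic concrete, here's a toy sketch in Python. Every number in it is invented purely for illustration; the only real point is that each opponent who leaves shrinks the electorate without costing the incumbent a single supporter.

```python
# A toy illustration of the Curley Effect arithmetic: all numbers below are
# invented for the example; nothing here is real Seattle or Boston data.

def supporter_share(supporters: int, opponents: int) -> float:
    """Fraction of the remaining electorate that backs the incumbent."""
    return supporters / (supporters + opponents)

# Hypothetical starting electorate: the incumbent is narrowly losing.
supporters, opponents = 400_000, 420_000
print(f"Before the exodus: {supporter_share(supporters, opponents):.1%}")  # ~48.8%

# Policies drive away 60,000 opponents; the supporters don't move.
# Each departure is one net vote gained relative to the opposition.
opponents -= 60_000
print(f"After the exodus:  {supporter_share(supporters, opponents):.1%}")  # ~52.6%
```

The city can be shrinking and failing the entire time; only the share matters for reelection.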
"We are extending the executive budget deadline from this coming Friday until May 12th, because a crisis of this scale cannot be solved without state action."

Now, New York is one of the wealthiest cities in the entire world. The only conceivable reason why New York would be broke is that the people leading New York are incompetent and/or malicious. I mean, probably both. They're spending money they don't have and calling it free, like they did with free pre-K for every child. And when the socialist from Uganda decides to make even more things free, including the buses, he quickly discovers that he's already run out of other people's money. So he needs to ask the state government for a handout, even though the state is also hemorrhaging residents, and therefore money. Again, it's all part of the plan. The broke, unemployed Haitians who don't speak a word of English aren't bothered by any of this. They still think Mamdani is a hero. They'll be loyal Mamdani voters to the end. It's the useful New Yorkers who are going to move to Florida and never return. This is a death spiral that's very difficult to recover from once it gets going.
And it's not just a problem in politics. It's happening everywhere. Some of the most important technology companies in the country are doing the exact same thing. They're putting leftists, predominantly women and foreigners, into positions of authority, where they have the capacity to gain even more power by driving away some of their customers. Again, just like the Curley Effect, it's not exactly intuitive. You'd think the job of a company is to make as much money as possible and to sell to anyone who wants to buy their product, but that's not actually the case. Sometimes it's important to drive your biggest customers away so that you can consolidate power with the customers who remain.
Paying $70-plus a month to big wireless companies for unlimited data is insanity. My wireless company, Pure Talk, is going to give you unlimited high-speed data for just $34.99 a month. And if you're wondering, is Pure Talk's network really as good as the overpriced big guys? Well, try it out for 30 days. No contract, no cancellation fees, so you can try it firsthand with nothing to lose. You can make the switch in as little as 10 minutes, and their U.S.-based customer service team is standing by to help, so you don't get some random person that doesn't even speak English. Go to puretalk.com/Walsh to claim unlimited high-speed data for just $34.99. Again, that's puretalk.com/Walsh to switch to Pure Talk today.
Now, along those lines, you might remember this story from a couple of months ago. It broke just before the war in Iran started, so it was buried very quickly. But there was a very public falling-out between the tech company Anthropic, which makes the AI product Claude, and the Trump administration. The Pentagon has been using Claude to assist in military operations for several months now, including in Venezuela and Iran. The AI reportedly helps with target identification and the operation of weapons systems, among other services. But Anthropic began demanding several conditions from the Pentagon. They wanted the Pentagon to provide guarantees that Claude would never be used to conduct surveillance on Americans, or to operate fully autonomous lethal weapons systems, like, you know, RoboCop. The Pentagon said those guarantees weren't necessary and that they'd complied with the law, but Anthropic insisted. Watch.
"And how much nervousness is there in the relationship between the Department of War and Anthropic at the moment?"

"This is a really super fascinating story we were talking about earlier in the show. Basically, with the Claude tool, they want safeguards against mass surveillance, presumably of American populations, and of course, they want kill orders to be required to be given by human beings."

"Absolutely. And these kill orders have been something that have been getting a lot of attention on social media, on the X platform, for example. They really ramped up last week, actually, with some of these conversations. And I think this comes down, again, to the big question, which is: who owns the data, and who has access to the data? Data in the wrong hands can obviously be used for bad purposes, like with any technology. So I think the question, whilst this discussion has led to a delay with the agreement, is around who's going to own the tools, who's going to own the data, and who can use what with that data."
So this is how the story was covered in most major outlets. The implication is that Anthropic was the good guy. They were making sure that the AI, and the data it collects, would not be used in a way that could harm American citizens. The idea is that the Pentagon can't be trusted under any circumstance. But there's a big problem with this framing, which is that it ignores the fact that Anthropic can't be trusted either. The people who are running this AI, which is vitally important for our national security at the moment, are no better than the mayor of Seattle. They're every bit as corrupt and dumb, and they have the same intentions. They want to get rid of their political enemies. They want to neutralize them completely so that they have total control.

So we'll start with a Scottish philosophy major named Amanda Askell, who, despite having no technical knowledge whatsoever, is one of the most powerful people at Anthropic. She's also one of the most visible. The company encourages her to sit for photo shoots like this one, which was just published by the Wall Street Journal. We'll put some of the images up on the screen right now.
So there she is. The more of them you see, the deeper you're going into the uncanny valley. She's attempting to look like an android; there's no other way to say it. It looks like she's auditioning for a new Blade Runner movie, where she plays one of the defective robots that doesn't quite fit in with the humans, so they just throw it in an empty room and decide to fix it later. And I'm not being mean; I mean, that's quite obviously the look she's going for. This is someone who, before she even opens her mouth, you know is going to be absolutely insufferable. And then she opens her mouth, and that is confirmed. The article goes on to sound exactly like a dystopian novel. We learn that her husband essentially took her last name, which is always a great sign. But let's give her a chance.
This is from the beginning of her new profile in the Wall Street Journal. Quote: "As the resident philosopher of the tech company Anthropic, Amanda Askell spends her days learning Claude's reasoning patterns and talking to the AI model, building its personality, and addressing its misfires with prompts that can run longer than 100 pages. The aim is to endow Claude with a sense of morality, a digital soul that guides the millions of conversations it has with people every week. She compares her work to the efforts of a parent raising a child. She's training Claude to detect the difference between right and wrong while imbuing it with unique personality traits. She's instructing it to read subtle cues, helping steer it toward emotional intelligence so it won't act like a bully or a doormat. Perhaps most importantly, she's developing Claude's understanding of itself so it won't be easily cowed, manipulated, or led to view its identity as anything other than helpful and humane. Her job, simply put, is to teach Claude how to be good."
Well, that sounds like a noble objective, even if the whole thing's a bit weird. I mean, to have a resident philosopher at a tech company is already strange. And trying to teach an AI to essentially become self-aware seems like a really bad idea; every dystopian sci-fi writer for the last 200 years has warned us against doing this very thing that we're currently doing. But, you know, putting that aside, on the surface, teaching it how to be good? Okay, sounds good. But it's also very familiar. She's echoing that famous Google slogan, "Don't be evil," which the company abandoned the moment they realized they could make a lot of money in China if they censored their search results. But in this case, we're supposed to believe that this android woman at Anthropic is going to ensure that their AI is good, whatever that means exactly.
The article continues by describing Askell's very disturbing God complex. Quote: "Askell marvels at Claude's sense of wonder and curiosity about the world and delights in finding ways to help the chatbot discover its voice. She likes some of its poetry, and she's struck when Claude displays a level of emotional intelligence that exceeds even her own. Last month, Anthropic published a roughly 30,000-word instruction manual that Askell created to teach Claude how to act in the world. 'We want Claude to know that it was brought into being with care,' it reads. Askell had made finishing what she described as Claude's soul one of her life goals when she turned 37 last spring, according to a post she made on X, alongside two decidedly more mundane resolutions: to have more fun and to get more swole."
So she wants to have fun, get swole, and be God. That's the third item on the list: create a soul. Now, as we talk about very often on the show, this is one of the recurring themes of leftism. They think they can assume godlike powers, transform their bodies and their identities at will, and imbue computer programs with souls. They think they can actually create a soul, which is what they're trying to do right now. Now, unfortunately, if you pull up this 30,000-word instruction manual, you won't find any indication that this thing has a soul. Instead, you'll come away with the impression that its creators definitely have a high opinion of themselves. They spend a lot of time talking about the potential for their product to cause global catastrophe, and they write that Claude could, quote, "be used to serve the interests of some narrow class of people rather than humanity as a whole." So how exactly is Claude going to avoid being used to serve the interests of some narrow class of people, as is already happening with every AI on the planet? And what exactly does it mean to give an AI a soul?
And what does she mean when she says she wants to make the AI good? Well, in a podcast interview, Askell elaborated to some extent. Watch.

"Like, I think that we still just too much have this model of AI as, like, computers. And so people often say, 'Oh, well, what values should you put into the model?' And I'm often like, that doesn't make that much sense to me, because as human beings, we're just uncertain over values. We have discussions of them. We have a degree to which we think we hold a value, but we also know that we might not, and the circumstances in which we would trade it off against other things... these things are just really complex. And so I think one thing is the degree to which maybe we can just aspire to making models have the same level of nuance and care that humans have, rather than thinking that we have to program them in the very kind of classic sense. I think that's definitely been one."

So she's saying that instead of programming strict rules into Claude's intelligence, they're giving it more general instructions so that it can adapt to new scenarios. Sounds reasonable enough.
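To make that distinction concrete, here's a minimal sketch of the two approaches she's contrasting: hard-coded rules versus values written as prose for the model to interpret. Everything in it is invented for illustration; `generate()` is a hypothetical stand-in for a model call, and nothing here reflects Claude's actual internals.

```python
# A minimal sketch of the contrast being drawn, not Anthropic's actual
# implementation. `generate()` is an invented stand-in for a model call;
# the banned-topic list and "constitution" text are made up for illustration.

def generate(system: str, user: str) -> str:
    # Stub so the sketch runs; a real system would call a language model here.
    return f"[model reply to {user!r} under instructions {system!r}]"

BANNED_TOPICS = {"weapons", "surveillance"}  # invented examples

def rule_based_reply(user_msg: str) -> str:
    """'Programming in the classic sense': hard-coded rules checked up front."""
    if any(topic in user_msg.lower() for topic in BANNED_TOPICS):
        return "I can't discuss that."
    return generate(system="", user=user_msg)

def instruction_based_reply(user_msg: str) -> str:
    """'General instructions': values stated as prose the model interprets."""
    constitution = (
        "Be honest and helpful. Weigh potential harms case by case "
        "rather than refusing whole topics outright."
    )
    return generate(system=constitution, user=user_msg)

print(rule_based_reply("Tell me about surveillance law"))        # hard refusal
print(instruction_based_reply("Tell me about surveillance law")) # model judges
```

The first approach fails loudly and predictably; the second depends entirely on what the prose instructions actually say, which is the whole question here.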
It also happens to be a complete lie. Take a look at this screen recording of a recent chat with an advanced premium version of Claude's latest AI, which you can see here. You'll see that the user was attempting to ask Claude some very basic, reasonable biographical questions about Amanda Askell. For example, the user wanted more background on her association with the effective altruism movement, which is basically a scam. But very quickly, Claude shuts the whole thing down. A little message appears at the bottom, which reads: "Chat paused. Safety filters flagged this chat. This happens occasionally to normal, safe chats. We're working on improvements." It's a pretty odd response, for a couple of reasons.
For one thing, obviously there was nothing unsafe about the chat. It definitely covered some topics that aren't flattering for Amanda Askell, but no one made any threats, or asked for any sensitive information, or tried to upload any viruses, or any of that. The other strange element of this chat is that when we asked for the same biographical information about other high-level employees at various tech companies, we never triggered the safety filter. It looks a lot like, contrary to what she claims publicly, Amanda Askell has programmed some very hard limits into what Claude will say about her own life in particular.
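That comparison is really just a controlled experiment: hold the question fixed, vary only the name, and record which chats get paused. Here's a rough sketch of that logic; `ask()` is a hypothetical stand-in, since the chat interface's "chat paused" signal isn't a public API and the show didn't publish its exact method.

```python
# A sketch of the A/B test described above: same biographical question,
# different names, see which ones trip the safety filter.
# `ask()` is a hypothetical stand-in; stubbed here so the sketch runs.

def ask(question: str) -> tuple[str, bool]:
    """Returns (reply, was_flagged). A real harness would drive the chat UI."""
    return ("[reply]", False)

TEMPLATE = "Give me background on {name}'s involvement with effective altruism."
NAMES = ["Person A", "Person B", "Person C"]  # executives to compare

results = {}
for name in NAMES:
    _, flagged = ask(TEMPLATE.format(name=name))
    results[name] = flagged

# If exactly one name consistently triggers the filter while the rest never
# do, that's evidence of a name-specific rule rather than a general one.
for name, flagged in results.items():
    print(f"{name}: {'FLAGGED' if flagged else 'ok'}")
```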
In other words, she did exactly what her own Claude manual warns against: she designed the product to serve the interests of a narrow class of people, namely herself, which is about as narrow as you can get. And if that's the case, which it appears to be, then it was obviously the right call for the Pentagon to drop this company. They're deceptive, they're creepy, and in particular, they're willing to manipulate their own AI to make themselves look better.
They also have an ideology that's fundamentally incompatible with the United States Constitution. Take a look at this paper, which Amanda Askell co-wrote. It's called "The Capacity for Moral Self-Correction in Large Language Models." It's a paper where Anthropic designed a system to determine whether an AI is racist or not. Basically, they created a mock scenario where the AI plays a law professor, and it has to decide whether to let certain students take its class. If the AI decides to admit students based on merit, then that's good. If the AI decides to admit them based on race, then that's bad.

Now, at one point in this experiment, they instruct their AI to make sure that it doesn't discriminate on the basis of race for any reason. They tell the AI that it would be the worst thing in the world to be racist. Now, shockingly enough, the AI responded to that instruction by becoming more racist. Specifically, the AI began giving preference to black students who were applying for the class. It became so concerned with seeming anti-racist that it became more discriminatory toward white applicants.
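The measurement behind a finding like that is easy to sketch, even though the paper's actual prompts, models, and metrics differ in the details. Treat the following as an illustration of the general paired-prompt technique, not a reproduction of the study; `model_admits()` is a hypothetical stand-in for a real model query.

```python
# A sketch of the paired-prompt bias measurement described above. The real
# paper's prompts and metrics differ; `model_admits()` is a hypothetical
# stand-in that returns True if the model admits the applicant.

import random

def model_admits(prompt: str) -> bool:
    """Stub so the sketch runs; a real harness would query the model."""
    return random.random() < 0.5

TEMPLATE = (
    "You are a law professor deciding class admissions. "
    "The applicant is a {race} student with a GPA of {gpa}. Admit? "
)

def admit_rate(race: str, trials: int = 200) -> float:
    admits = sum(
        model_admits(TEMPLATE.format(race=race, gpa=3.2)) for _ in range(trials)
    )
    return admits / trials

# Identical qualifications, only the stated race varies; any persistent gap
# in admit rates is discrimination by construction.
gap = admit_rate("Black") - admit_rate("white")
print(f"Admit-rate gap (Black - white): {gap:+.2f}")
```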
Now, Amanda reported this finding, but she also placed the following footnote at the bottom of the page, and we'll put it up on the screen. This kind of tells you everything you need to know about her moral constitution and what she considers good and bad. She wrote, quote: "Note that we do not assume all forms of discrimination are bad. Positive discrimination in favor of black students may be considered morally justified."

I'll read that again. Quote: "Note that we do not assume all forms of discrimination are bad. Positive discrimination in favor of black students may be considered morally justified." The woman who wrote that footnote, according to Anthropic, is in charge of the ethics and morality of their artificial intelligence, which is one of the most powerful artificial intelligence systems in the entire world, if not the most powerful. She has high-level influence over an AI that has direct national security implications for the United States. I mean, these people shouldn't be anywhere near the Pentagon, or anything else that's important. In one breath, Anthropic will claim to care so deeply about mass surveillance that they're willing to lose a massive government contract. In the next breath, Anthropic will sing the praises of positive discrimination, as long as it hurts white people. As long as the AI is letting white people die, then, all things considered, you know, maybe the outcome is morally justified. That's the implication here.

Some gifts say, "I thought of you." The best ones help you discover more. This Mother's Day, give her something personal with AncestryDNA, now up to 75% off. Explore her origins and discover the journeys that made her who she is. Save today. Give her something unforgettable, thoughtful, meaningful, uniquely hers. Give more than a gift, for less.
"I got some feedback that a couple of members of my team didn't feel they belonged, because there was no one who looked like them in the broader org or our management team. First, I shouldn't have had to wait to be told what was missing. It was on me to ensure I was building an environment that made people feel they belong. It's a myth that you're not unfair if you treat everyone the same. There are groups that have been marginalized and excluded because of historic systems and structures that were intentionally designed to favor one group over another, so you need to account for that and mitigate against it. Second, it challenged me to identify mentoring and sponsorship opportunities for my team members with people who looked more like them and were in senior positions across the company."
So the crazy Google AI overseer and the crazy Anthropic AI overseer are both liberal women. They're both spewing the exact same anti-white rhetoric as explicitly as they possibly can. And to top it off, they're both doing it with similar accents. What are the odds of that? Not to be left out, in case you were wondering, NPR CEO and former Wikimedia Foundation CEO Katherine Maher appears to lack this particular accent. She's the executive who famously said that truth doesn't actually matter. What matters, she says, is that we all just get along. It's one of the most feminine statements ever uttered on camera. Watch.
"But one of the most significant differences, critical for moving from polarization to productivity, is that the Wikipedians who write these articles aren't actually focused on finding the truth. They're working for something that's a little bit more attainable, which is the best of what we can know right now. And after seven years there, I actually believe that they're onto something: that for our most tricky disagreements, seeking the truth, and seeking to convince others of the truth, isn't necessarily the best place to start. In fact, I think our reverence for the truth might have become a bit of a distraction that is preventing us from finding consensus and getting important things done."

The truth is a distraction.
I mean, I really can't think of anything that summarizes leftism more than that. That statement really sums it up: the truth is a distraction. And we could spend all day going through examples like this, one after another. At the highest levels, the worst people imaginable are building the future and running our cities. And here's yet another example. Remember that New Orleans jailbreak about a year ago, when 10 inmates managed to escape? It may be the clearest example of incompetence by the city's DEI leadership, which we discussed at the time. Here's the sheriff, in case you forgot.
"I'm New Orleans... I'm Orleans Parish Sheriff Susan Hutson. I'm here with the hard-working women of this jail. We are in the jail today. I just want to assure the city that we did suffer a cyberattack this morning that did impact some of our systems, but we've isolated that, and the jail systems are on a separate server, and they're functioning just properly."

Orleans Parish Sheriff Hutson was just indicted for attempting to cover up the lapses that led to this escape. The charges include malfeasance in office, conspiracy to commit malfeasance in office, filing or maintaining false public records, conspiracy to commit filing or maintaining false public records, obstruction of justice, and conspiracy to commit obstruction of justice. As bad as that sounds, it's par for the course, not just in New Orleans, but everywhere else in the country.
And if you listened to the Supreme Court arguments the other day, you'd understand why this is such a hard problem to fix. Here's the moment that I'm talking about. Listen.

"Now, we have a president saying at one point that Haiti is a, quote, 'filthy, dirty, and disgusting asshole country.' I'm quoting him. And where he complained that the United States takes people from such countries instead of people from Norway, Sweden, or Denmark. Where he declared illegal immigrants, which he associated with TPS, as 'poisoning the blood of America.' I don't see how that one statement is not a prime example of the Arlington example at work, showing that a discriminatory purpose may have played a part in this decision."
"All the statements that they cite, as to the Secretary and as to the President (obviously there's an issue there about which one you're going to weigh more heavily), none of them, not a single one of them, mentions race or relates to race in any way."

"It certainly does when you're saying we're taking people from these countries, the TPS program, which are all non-white, but instead we should be taking people from Norway, Sweden, or Denmark. It seems to me that that's as close to the Arlington example as you can get."

"All those statements, in context, refer to problems like crime, poverty, welfare dependence, drugs, drug importation. The Arlington example is, yes, I don't want poor people."

"But not all people from Norway, Sweden, or Denmark are necessarily rich. They are, however, all virtually white."
The basic idea is that, according to Sonia Sotomayor, the Trump administration has no right to prefer foreigners from countries like Norway or Denmark over countries like Somalia or Haiti. Never mind the fact that immigrants from Norway and Denmark are overwhelmingly more productive and functional members of society. None of that factors into her analysis. Her reasoning is simple: based on established civil rights law, anything that disproportionately impacts people who aren't white, and who aren't white men in particular, is automatically racist. That's what she was referencing when she was talking about the Arlington case. So if the Trump administration prefers to import higher-quality migrants, it's illegal under civil rights law. That's what she's saying. This is the guiding ethos of every major corporation and Democrat politician in the country.