The Ben Shapiro Show - July 12, 2023


Will AI Kill – Or Save – Us All?


Episode Stats

Length: 55 minutes
Words per Minute: 219.6162
Word Count: 12,207
Sentence Count: 785
Misogynist Sentences: 5
Hate Speech Sentences: 18


Summary

In this episode, I talk about artificial intelligence and what it means for the future of the world, why we should be worried about it, and also its benefits: how it can change the way we live, work, and play in the future, alongside the potential dangers. Sponsors in this episode include Hallow, Good Ranchers, Boll & Branch, and Helix Sleep. I hope you enjoy it, and if you do, please share it with a friend, colleague, or family member who needs it. Tweet me and let me know what you think.


Transcript

00:00:00.000 There's been a lot of talk about artificial intelligence.
00:00:02.000 Some people think it's overblown.
00:00:03.000 Some people think that it's not.
00:00:05.000 The simple fact is it's not overblown.
00:00:07.000 The artificial intelligence revolution is here.
00:00:09.000 It is only going to grow more and more in both scope and breadth.
00:00:13.000 Things like ChatGPT are actually the very basic versions of what the techno bros have in store for the rest of the world.
00:00:19.000 And there are some people who are very hot on it, some people who are very scared of it.
00:00:23.000 I'm somewhere in the in-between, meaning I think that the upsides economically are going to be extraordinary.
00:00:27.000 I think in terms of labor saving and productivity, we could see an explosion of productivity that essentially brings inflation down to zero, that makes an enormous number of products, goods and services significantly easier to access for a wide variety of people, that allows pretty much anybody to engage in a variety of industries they never would have had access to before.
00:00:45.000 The democratization of intelligence is what some are calling it, because instead of you having to go to medical school, for example, to be able to diagnose a particular condition, now all you have to do is go to a nurse practitioner, she's going to perform particular tests on you, and then the AI is going to diagnose you because it's better at it than any doctor would be.
00:01:02.000 Instead of you having to know a ton of things in order to get an answer, you're going to be able to go to the AI, and the AI is going to be able to peruse all of human knowledge momentarily and simply get back to you.
00:01:10.000 There are just tons of ways in which AI is going to make us more productive, going to make it easier for us to be creative in many ways.
00:01:16.000 There are some problems, I think, when it comes to the sort of human brain development aspect of AI that we're going to have to discuss.
00:01:23.000 Because one of the things that we've really never thought about with regard to technology, we've always thought about technology as something apart from us.
00:01:29.000 Technology is a machine that you use, but the truth is the machine shapes you.
00:01:33.000 I see this with my seven-year-old son.
00:01:34.000 So my seven-year-old son, really bright kid, he realized very quickly how my iPhone worked.
00:01:39.000 And he could work it better than I could.
00:01:41.000 You know this.
00:01:41.000 If you're a parent, you see your kids using the technology better than you do.
00:01:44.000 And you can see how it shapes them.
00:01:45.000 So for my son, he recognized how the voice-to-text feature on my iPhone worked very quickly.
00:01:52.000 And so instead of him learning to read and then being able to type in all of the commands, he would simply grab the phone, say his command into the phone, and this allowed him to avoid reading for longer.
00:02:00.000 This is very often what kids do.
00:02:02.000 They use the technology in order to avoid doing the thing.
00:02:04.000 So for example, people of my generation, we were completely conversant with calculators.
00:02:08.000 Those calculators were not available for my parents' generation.
00:02:11.000 So my parents are really good at mental math.
00:02:13.000 My generation happens to be not very good at mental math.
00:02:15.000 You'll see people pulling out their phones to do basic and simple calculations on their phone.
00:02:20.000 Now, does that mean that people are getting stupider?
00:02:22.000 Not necessarily.
00:02:22.000 It means that they're using the tools at their disposal.
00:02:25.000 But the question is whether there are emergent properties to knowing things like how to calculate in your head that have an impact on overall brain function.
00:02:33.000 So for example, let's say that you are the world's best chess player and you use AI in order to enhance your game.
00:02:38.000 It's been shown that the best form of chess playing comes from somebody who's great at chess who also has access to AI chess technology.
00:02:46.000 Combine those two and you have magic.
00:02:48.000 Well, what happens when you have an entire generation of people who have been trained on chess AI, but they've not actually developed independent skills with regard to chess?
00:02:57.000 Well, at a certain point, the AI could actually become self-referential.
00:03:00.000 The humans are not actually creating inputs for the AI to actually act off of.
00:03:05.000 So does an informational desert arise?
00:03:08.000 Well, one of the cures for that is presumably a move toward what they call artificial general intelligence, which is much more human-like programming, meaning that right now, the way that an AI works, ChatGPT for example, is you sit there at ChatGPT, you type in your command, like, write a poem in the style of Jerry Seinfeld.
00:03:25.000 And then it gives you a poem and it's in the style of Jerry Seinfeld.
00:03:27.000 But you are the master of the machine.
00:03:29.000 You are the one who is inputting the command.
00:03:31.000 Well, what if we say human beings are not going to be experts anymore?
00:03:35.000 They're not even going to be great at all that much anymore?
00:03:37.000 Why don't we just make AI that can create its own commands?
00:03:40.000 It doesn't mean it's sentient.
00:03:41.000 Sentient is a different category.
00:03:43.000 But an AI that can generate its own prompts.
00:03:46.000 Well, now you start getting into risky territory.
00:03:48.000 Because first of all, you've made humans entirely obsolete at that point in terms of creativity.
00:03:52.000 But more than that, you've also led to the possibility that you could have, for example, an AI that starts entering prompts that are really negative for humanity.
00:04:01.000 So I'm going to go through with you some of the various perspectives on AI and what AI is going to mean.
00:04:06.000 In order to do that, I'm gonna start off by talking about some of the AI technology terminology so that you know what you're talking about when you're at the water cooler today.
00:04:13.000 Because again, this is going to be the big topic of conversation, not just for this year, not just for next year, for the next decade, two decades.
00:04:18.000 It's going to completely and radically reshape how humanity lives.
00:04:22.000 Artificial intelligence.
00:04:23.000 I firmly believe this because I've seen some of this AI at work.
00:04:26.000 I know people who are creating the AI.
00:04:29.000 And what they tell me is that it is way more sophisticated than we are made privy to.
00:04:33.000 You remember just a couple of years ago, people were saying, AI is going to be able to write complete essays.
00:04:37.000 And you're like, eh.
00:04:38.000 And then it came out, and it wrote complete essays.
00:04:40.000 And now people are making fun of the AI and saying, oh, well, you know, the AI can't do human hands very well.
00:04:44.000 Well, it's called hallucinating, which we'll explain in a second.
00:04:46.000 But guess what?
00:04:47.000 It's already fixed.
00:04:48.000 And it's going to get better and better and better, because the stuff they have on the back lines that has not yet hit the public eye is significantly more sophisticated, significantly more creative, significantly more interesting than the stuff that you already have seen.
00:04:59.000 We'll get to more on this in just one second first.
00:05:02.000 Let's talk about the fact that in a scary, chaotic world, you need a little bit of time out to commune with God.
00:05:08.000 It's kind of important.
00:05:09.000 Well, regardless of your religion, you need more peace in your life.
00:05:11.000 Hallow is an incredible app that offers a unique approach to prayer and meditation.
00:05:15.000 Unlike other meditation apps, Hallow is tailored specifically for people of faith to deepen their relationship with God.
00:05:19.000 The Hallow app is filled with studies, meditations, and reflections that are rooted in Judeo-Christian prayer practices.
00:05:24.000 There are tons of Christians who work for this company, obviously.
00:05:26.000 They use Hallow.
00:05:27.000 They love it.
00:05:27.000 You can pray alongside Mark Wahlberg, Jonathan Roumie, who portrays Jesus in The Chosen, even some world-class athletes.
00:05:32.000 You can access the number one Christian podcast, Bible in a Year with Father Mike Schmitz, on Hallow.
00:05:37.000 Hallow helps you maintain a daily prayer routine.
00:05:38.000 With features like progress tracking and streaks, you can stay motivated and make prayer a regular part of your daily routine, set prayer reminders, invite others to pray with you, and track your progress along the way.
00:05:47.000 If you're looking to deepen your relationship with God and improve your mental and emotional well-being, try Hallow for three months for free at Hallow.com slash Shapiro.
00:05:54.000 That's H-A-L-L-O-W dot com slash Shapiro.
00:05:57.000 I pray three times a day.
00:05:58.000 You should pray as often as I do and use Hallow in order to do it.
00:06:01.000 Go check them out right now.
00:06:02.000 Hallow dot com slash Shapiro.
00:06:04.000 OK, so.
00:06:05.000 Let's go through some basic AI terminology.
00:06:08.000 So TechCrunch has a good rundown on all of this.
00:06:10.000 And one of the things they point out is that the term artificial intelligence is a little bit misleading because there's not really one great definition of intelligence.
00:06:19.000 It's sort of ersatz intelligence in the sense that it doesn't work quite the same way as the human brain.
00:06:23.000 One of the big mistakes that people make when they think about how computers work is because we've spent so long interfacing with computers, we think that computers are basically like the human brain.
00:06:31.000 They are very dissimilar.
00:06:32.000 And there are emergent properties to being a human that do not exist for machines, for computers.
00:06:38.000 This is why all of the various movies about when does an AI become sentient, when does an AI become human?
00:06:43.000 Well, not until there are emergent properties from the technology itself, right?
00:06:47.000 There are emergent properties for human beings.
00:06:49.000 Like, for example, the ability to choose freely, I believe.
00:06:53.000 There are emotional states that you have that a computer does not have.
00:06:55.000 There's tons of stuff about being human that we can't quite categorize and we can't quite chart, and so it makes it easier for us to think about being human by categorizing and charting what it means to be human.
00:07:04.000 But that's actually a much more holistic experience than anything else, which is something we'll discuss when it comes to education and interfacing with AI.
00:07:12.000 I think one of the things people aren't thinking about enough is how AI is going to change the experience of being human, how it's going to shape our own neural rewiring.
00:07:18.000 Anyway, Here's some terms that you're going to need to know because you're going to hear them a lot.
00:07:22.000 Neural network.
00:07:22.000 So a neural network is essentially an imitation of how the brain works.
00:07:26.000 So our brains are made of all of these interconnected cells called neurons, and they form electrical connections.
00:07:32.000 And when they fire, when your neurons fire, this is what creates thought, presumably.
00:07:37.000 Well, with GPUs, graphics processing units, they've been trying to model neural networks for a very long time, and those networks are sort of layered over one another.
00:07:45.000 You have deep layers of neural networks and slightly higher layers of neural networks until you get to sort of the top line.
00:07:50.000 The top line is what you see when you open your computer and suddenly the computer can actually identify dog versus cat, for example.
00:07:58.000 So, all these models are formed along the lines of what a human brain theoretically works like.
00:08:03.000 And then the model, as TechCrunch says, is the actual collection of code that accepts inputs and returns outputs.
00:08:08.000 In order to train AI, you have to expose it to an extraordinary amount of data.
00:08:12.000 So, the way that an AI learns what a dog is, is you show it a thousand dogs.
00:08:16.000 100,000 dogs.
00:08:17.000 A million dogs.
00:08:18.000 And then you test it up the chain to see whether it can properly identify it as a dog.
00:08:22.000 And then if it fails, then you go back down the chain and you try to correct and tinker with the neural networking so that you get it right.
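A minimal sketch of that show-it-examples, test, and tinker loop in code, using a toy two-layer network on synthetic stand-in data rather than real dog photos; the feature counts, labels, and learning rate below are illustrative assumptions, not anything from the episode.

```python
# Toy "dog vs. cat" classifier: show the network labeled examples, check its
# guesses, and nudge the weights whenever it gets them wrong.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 1,000 examples with 16 features each.
# Label 1 = "dog", label 0 = "cat". Real systems train on millions of images.
X = rng.normal(size=(1000, 16))
true_w = rng.normal(size=16)
y = (X @ true_w > 0).astype(float)        # synthetic ground-truth labels

# Two layers of weights, loosely imitating stacked neural connections.
W1 = rng.normal(scale=0.1, size=(16, 8))
W2 = rng.normal(scale=0.1, size=(8, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for epoch in range(500):
    # Forward pass: the network makes its guesses.
    h = np.tanh(X @ W1)                   # hidden layer
    p = sigmoid(h @ W2).ravel()           # predicted probability of "dog"

    # Backward pass: measure the error and push corrections down the chain.
    err = (p - y).reshape(-1, 1) / len(y) # gradient of the loss at the output
    grad_W2 = h.T @ err
    grad_h = (err @ W2.T) * (1 - h**2)    # backpropagate through tanh
    grad_W1 = X.T @ grad_h
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy after tinkering with the weights: {accuracy:.2%}")
```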
00:08:29.000 Now, one of the things that's happened over the past few years, and this is why you've seen the
00:08:31.000 explosion in AI, is that instead of having to train these programs on a million dogs,
00:08:38.000 what you're seeing, when it comes to large language models at least, not really
00:08:41.000 with regard to pictures yet, but with regard to large language models,
00:08:44.000 is the computers, the AI, actually being able to properly identify and categorize words,
00:08:53.000 many words at a time.
00:08:54.000 So instead of doing it, you know, vertically, where you take the word dog and then go up and down the chain, the way that I just explained to you with pictures, instead you have "the dog bit the man."
00:09:03.000 And when it comes to large language models, which are trained on how language works, predictive text mechanisms, it's training it horizontally, not just vertically.
00:09:10.000 So it's much, much faster than simply training up and down vertically.
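A toy illustration of that predictive-text idea, counting which word tends to follow which and predicting the next one. A real large language model does this with a neural network over billions of sentences, but the objective, predicting the next word from the words before it, is the same in spirit; the tiny corpus below is made up for illustration.

```python
# Count next-word frequencies in a tiny corpus, then use the counts to predict.
from collections import Counter, defaultdict

corpus = [
    "the dog bit the man",
    "the dog chased the cat",
    "the man fed the dog",
]

follow_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent next word seen in training, or None."""
    if word not in follow_counts:
        return None
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "dog" (the word seen most often after "the")
print(predict_next("dog"))  # -> "bit" or "chased", depending on tie-breaking
```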
00:09:13.000 Generative AI is the term of the day.
00:09:16.000 This broad term, as TechCrunch puts it, just means an AI model that produces an original output
00:09:20.000 like an image or a text, right?
00:09:21.000 This is the difference between AI as you knew it, right?
00:09:24.000 Because the truth is that like Microsoft Word in some very rudimentary form is a form of AI.
00:09:29.000 But AI, when we talk about, you know, actually generating a text-based poem or an essay
00:09:36.000 or creating a picture, that's what generative AI is.
00:09:39.000 Large language models are the most advanced right now because it turns out that computers are really good
00:09:44.000 at language and they're very good at predictive text.
00:09:47.000 LLMs, as they're called, are able to converse and answer questions in natural language
00:09:50.000 and imitate a variety of styles and types of written documents.
00:09:52.000 So the first people to be replaced, in other words, are the mid-level lawyers,
00:09:54.000 as we'll get to in just a moment.
00:09:57.000 Diffusion is how image generation is done.
00:10:00.000 So if you're wondering how it is that AI can now create things that look like Van Gogh,
00:10:04.000 diffusion is the technique used by Stable Diffusion, Midjourney, and other popular generative AIs.
00:10:10.000 They're trained by showing them images that are gradually degraded
00:10:12.000 by adding digital noise until there's nothing left of the original.
00:10:15.000 And you do that often enough and the computer starts to recognize patterns in the chaos.
00:10:19.000 And so now you can basically say, from nothing or from very little data,
00:10:23.000 I want you to generate an entire picture, right?
00:10:26.000 It's reversing the process in the same way that you remember in the olden days when you would actually produce film, you'd take a picture and then you'd put it in the darkroom and then the picture would suddenly appear.
00:10:35.000 So diffusion is that reverse process.
00:10:36.000 But basically, what you are looking at right now is taking a little bit of data and allowing the computer,
00:10:43.000 allowing the AI to build an image from that.
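A minimal sketch of the forward half of that process: progressively mixing noise into an image until nothing of the original remains. This is a toy illustration of the general idea, not any particular company's pipeline; the generative model is what gets trained to run these steps in reverse, starting from pure noise.

```python
# Degrade a stand-in "image" step by step until it is pure noise.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))          # stand-in for a real training image
num_steps = 10

for step in range(1, num_steps + 1):
    keep = 1.0 - step / num_steps   # fraction of the original signal that survives
    noise = rng.normal(size=image.shape)
    noisy = keep * image + (1.0 - keep) * noise
    # A diffusion model trains on pairs like (noisy, step): given the degraded
    # image and how far along the schedule it is, predict the noise that was
    # mixed in, which is what lets it run the process backwards at generation time.
    print(f"step {step:2d}: {keep:.0%} of the original image remaining")

# By the last step, "noisy" is pure noise; generation starts there and reverses.
```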
00:10:48.000 When you look at diffusion technology, that's what we're using in an AI YouTube video that we did
00:10:52.000 where we typed in a prompt and something was coming up with Nancy Pelosi dressed as like a scary clown or
00:10:56.000 something.
00:10:57.000 Hallucination is what happens when the AI is not properly comprehending the data.
00:11:03.000 So this is why you see, for example, Will Smith eating spaghetti and the spaghetti merging with Will Smith's mouth.
00:11:08.000 Because the AI is hallucinating, but that's largely because of the over-prevalence of certain data in the set.
00:11:16.000 So, for example, the example TechCrunch gives is that originally this was a problem of certain imagery from the training data slipping into unrelated output, like buildings that seem to be made of dogs because the thing had been seeing so many dogs lately.
00:11:29.000 So that's hallucinating, but they're wiping that out of the system.
00:11:31.000 There are a bunch of companies that are involved in AI.
00:11:34.000 OpenAI is more an open model, an open source model, so all of their data is publicly available.
00:11:40.000 You have Microsoft, which is developing its own AI.
00:11:42.000 They invested early in OpenAI, and they're using it to power Bing.
00:11:46.000 You have Anthropic, which is intending to fill a sort of different role.
00:11:52.000 Anthropic is more guided, more top-down.
00:11:55.000 All of these companies are going to be pouring billions of dollars into AI.
00:11:59.000 Billions and billions.
00:12:00.000 I mean, it's my prediction that within two years, the amount of money that's in AI is gonna look like a hundred billion dollars.
00:12:06.000 And after that, we're gonna start talking trillions.
00:12:08.000 Because again, it's gonna completely transform how we live and how we interface with technology.
00:12:13.000 Some of this is scary, and some of this is really interesting and welcome.
00:12:16.000 We're gonna discuss that in just one second.
00:12:18.000 First, if you haven't yet heard, the FDA has now approved lab-grown chicken.
00:12:21.000 Yes, meat formed in a lab will soon be coming to a store near you.
00:12:24.000 Well, let's say that you're not so hot on the lab-grown chicken or the lab-grown beef, and instead what you would like is, you know, a good old-fashioned piece of meat.
00:12:31.000 You need Good Ranchers.
00:12:33.000 Not only do they sell real meat from real animals, they sell the best meat this country has to offer.
00:12:36.000 From steakhouse-quality cuts of beef to better-than-organic chicken, everything Good Ranchers sources is from local farms right here In America.
00:12:42.000 Plus, right now you get 30 bucks off with my code BEN at GoodRanchers.com.
00:12:47.000 They've got genuinely great products and top-tier customer service.
00:12:49.000 You can't call the scientists in the lab to ask about their fake meat, but Good Ranchers has a team of people available for you to call.
00:12:54.000 They'll answer all of your questions, real meat and real service.
00:12:57.000 I know how good they are because they actually made me the one kosher steak they've ever made and it was just delicious.
00:13:00.000 So, what exactly are you waiting for?
00:13:02.000 Enjoy real meat and real service today with Good Ranchers.
00:13:05.000 Visit GoodRanchers.com.
00:13:06.000 Use my code BEN for 30 bucks off any box.
00:13:08.000 That is promo code BEN at GoodRanchers.com.
00:13:11.000 GoodRanchers.com is indeed American meat delivered.
00:13:13.000 Go check them out right now.
00:13:15.000 GoodRanchers.com.
00:13:16.000 Use promo code BEN.
00:13:17.000 Get 30 bucks off any box of meat.
00:13:19.000 Promo code BEN at GoodRanchers.com.
00:13:21.000 Okay, so.
00:13:22.000 Let's get to the real stuff.
00:13:23.000 How's this going to change our world?
00:13:25.000 So Leana Wen has an interesting piece over in the Washington Post talking about how this is going to impact, say, health technologies.
00:13:31.000 She says, Consider the Mayo Clinic, the largest integrated non-profit medical practice in the world.
00:13:36.000 It has created more than 160 AI algorithms in cardiology, neurology, radiology, and other specialties.
00:13:41.000 Forty of these have already been deployed in patient care.
00:13:44.000 To better understand how AI is used in medicine, I spoke with John Halamka, a physician trained in medical informatics who is president of Mayo Clinic Platform.
00:13:51.000 As he explained to me, AI is just the simulation of human intelligence via machines.
00:13:55.000 He distinguished between predictive and generative AI.
00:13:58.000 The former involves mathematical models using patterns from the past to predict the future.
00:14:01.000 The latter uses text or images to generate a sort of human-like interaction.
00:14:04.000 It's the first type, the predictive model, that is the most valuable.
00:14:07.000 And this is why I say that it's going to impact medicine and law, for example, faster than it's going to impact some other areas of American life.
00:14:13.000 Why?
00:14:14.000 Because predictive and text-generated stuff is more advanced than some of the other forms of AI so far.
00:14:21.000 As Halamka described, predictive AI can look at the experiences of millions of patients and their illnesses to help answer a simple question.
00:14:26.000 What can we do to ensure you have the best journey possible with the fewest potholes along the way?
00:14:30.000 So, for example, let's say someone is diagnosed with type 2 diabetes.
00:14:33.000 Instead of giving generic recommendations for anyone with the condition, an algorithm can predict the best care plan for that patient using age, geography, racial and ethnic background, existing medical conditions, and nutritional habits.
00:14:43.000 The quality of the algorithm depends on the quantity and diversity of the data.
00:14:47.000 So Mayo Clinic has already signed up with clinical systems across the United States and Canada and Brazil and Israel.
00:14:52.000 So apparently by the end of 2023, Halamka expects the network of organizations to encompass more than 100 million patients whose medical records with identifying information removed will be used to improve care for others.
00:15:01.000 So, for example, predictive AI is going to be able to augment diagnoses.
00:15:05.000 For example, if you want to detect colon cancer, standard practice is for gastroenterologists to perform a colonoscopy and then manually identify and remove precancerous polyps.
00:15:12.000 But apparently, one in four cancerous lesions are missed during those screening colonoscopies.
00:15:16.000 Predictive AI can dramatically improve the detection.
00:15:19.000 The software has been trained to identify polyps by looking at literally millions of pictures of them, and when it detects one during colonoscopy, it alerts the physician to take a closer look.
00:15:28.000 Apparently, one randomized controlled trial already in the U.S., Britain, and Italy found that using such AI reduced the miss rate of potentially cancerous lesions by more than half, from 32.4% to 15.5%.
00:15:39.000 He says that within the next five years, it will be malpractice not to use AI in colorectal cancer screening.
00:15:45.000 This is correct.
00:15:47.000 Hey, so this is particularly true in radiology.
00:15:49.000 So in radiology already, you're seeing AI begin to replace radiologists because radiologists, their job is to look at the x-ray and spot the problem.
00:15:56.000 Well, that's literally what AI is amazing at.
00:15:59.000 AI is going to be great at that.
00:16:00.000 The same thing is going to be true when it comes to law.
00:16:02.000 The legal profession is going to be entirely disrupted.
00:16:06.000 You know, all of the talk about learn to code, guys.
00:16:08.000 Well, it turns out that coders may be disrupted by AI as well.
00:16:11.000 The legal profession, it makes sense, right?
00:16:12.000 You're a mid-level lawyer.
00:16:13.000 When I first started off at Harvard Law School, I was out of law school.
00:16:16.000 I went to a firm called Goodwin Procter in Los Angeles, and I was a low-level associate.
00:16:21.000 All you did all day was review contracts for paragraph errors, pagination errors, sort of basic informational errors, which you did all day.
00:16:28.000 Doc review, they called it.
00:16:30.000 That stuff can be done by an AI in seconds.
00:16:33.000 I mean, it's going to bring the cost down to zero.
00:16:35.000 As I'll explain in a second, I think there may be a pipeline problem when it comes to this.
00:16:39.000 Because if AI is really, really good at everything, except for the stuff that the real, real experts can do, it's going to wipe out the pipeline for experts.
00:16:45.000 Because the way you become an expert is by training and doing all of these things without AI.
00:16:49.000 Alternatively, theoretically, your use of AI could maybe make you more of an expert in the thing that you actually need to do to work with AI in the future.
00:16:57.000 I'm not really sure about that answer.
00:16:58.000 We'll get some perspectives on this in just one moment.
00:17:01.000 One thing is for sure.
00:17:02.000 All of the talk about how technology is going to disrupt blue-collar workers, is going to disrupt people at the low end of the wage scale.
00:17:09.000 The actual truth is that it's mostly going to disrupt people at the higher end of the wage scale.
00:17:13.000 Lawyers.
00:17:14.000 Doctors.
00:17:15.000 If you're an accountant, get ready.
00:17:17.000 Because the AI revolution is going to change nearly everything.
00:17:21.000 The Organization for Economic Cooperation and Development, that'd be the OECD, says that the occupations at highest risk from AI-driven automation are highly skilled jobs.
00:17:30.000 They represent about 27% of employment across 38 member countries.
00:17:34.000 According to the OECD, they say it's clear the potential for AI-driven job substitution remains significant, raising fears of decreasing wages and job losses.
00:17:42.000 However, it added that for the time being, AI was changing jobs rather than replacing them.
00:17:45.000 Now, as I say, I think that there will be replacements for these jobs.
00:17:49.000 Human beings are incredibly creative.
00:17:50.000 Human beings are incredibly adaptive.
00:17:52.000 And bringing costs down is not a bad thing.
00:17:54.000 We live in a time of extraordinary inflation.
00:17:55.000 Bringing down that inflation through increased productivity is going to be amazing.
00:17:59.000 Increased productivity just means that you can generate more product using a tool.
00:18:04.000 So for example, it used to be that if you were a person who had to work in a white collar
00:18:10.000 office, you hand wrote letters.
00:18:12.000 And it took forever.
00:18:13.000 You had to hand write the letters, mail them, you have to wait weeks for them to come back.
00:18:16.000 Now of course you use your computer to do that.
00:18:18.000 It's raised productivity by an extraordinary percentage.
00:18:21.000 What happens when you don't even have to do that?
00:18:22.000 You write a prompt and you're like, write a letter to Bob with the following points
00:18:26.000 and then just throw it out there.
00:18:27.000 You just saved yourself 45 minutes.
00:18:29.000 The amount of productivity increase is going to dramatically lower the price on services and goods as well.
00:18:35.000 The OECD says occupations in finance, medicine and legal activities, which often require many years of education and whose core functions rely on accumulated experience to reach decisions, may suddenly find themselves at risk of automation from AI.
00:18:47.000 So as I say, that'll be workers in the fields of law, culture, science, engineering, and business as well.
00:18:52.000 Now, there's not going to be a substitution for the person-to-person interface that is required to do negotiations, I think, because you're still going to have to negotiate with the other person, because that's a relationship based on trust.
00:19:02.000 Now, one of the disruptors there could theoretically be that maybe we don't need a relationship based on trust.
00:19:07.000 Maybe blockchain technology will be so useful at that point that when you combine that with AI, that the sort of trust that you have to have with the person across the table no longer exists.
00:19:15.000 The computers essentially talk to one another.
00:19:18.000 And when that happens, and they can actually check each other's work to ensure absolute transparency, maybe trust becomes less of a factor.
00:19:24.000 But I think that's a little ways away.
00:19:28.000 Now, all of this is creating a sort of sense of chaotic crisis.
00:19:32.000 And there's a wide divide of opinion on whether AI is going to be a net positive or a net negative.
00:19:37.000 And I think most of the net negative from AI is going to be in how people use it.
00:19:40.000 Because I think one thing technology has shown, technology does not wipe out human sin.
00:19:45.000 Human beings are inherently sinful.
00:19:46.000 Human nature is inherently unchanging, at least at the root level.
00:19:50.000 We're not going to become better, nicer people.
00:19:53.000 We're not going to become free of sin.
00:19:54.000 We're going to have all the same sinful desires we had.
00:19:56.000 We're just going to have more tools to both access them and fight them.
00:20:00.000 So the notion that it is going to solve all of humanity's problems, I find problematic.
00:20:03.000 And not only that, again, I think especially if we expose kids to AI, it's going to change how they develop.
00:20:09.000 We'll get to more of this in just one second.
00:20:11.000 First, have you ever invested in like a nice jacket or shoes, like a nice dinner?
00:20:15.000 Well, your bedding shouldn't be any different.
00:20:16.000 Your bedding should be awesome because you're spending a lot of time on it.
00:20:19.000 Start investing in your best sleep with Boll & Branch.
00:20:21.000 They make the only sheets that get softer with every wash.
00:20:24.000 Boll and Branch sheets are made from the finest 100% organic cotton threads on earth.
00:20:27.000 They feel buttery to the touch, they're super breathable, so they're perfect for both cooler and warmer months.
00:20:31.000 Their signature hemmed sheets were made with luxurious threads.
00:20:33.000 They're made without pesticides, formaldehyde, or other harsh chemicals.
00:20:36.000 Best of all, Boll and Branch gives you a 30-night risk-free trial with free shipping and returns on all orders.
00:20:40.000 You don't want to return them.
00:20:42.000 They are much better than any sheets I've ever tried.
00:20:43.000 Not only that, Boll & Branch also makes amazing blankets.
00:20:46.000 They have an afghan.
00:20:47.000 I literally travel with this thing because I sleep so much better with it than I do with other bedding products.
00:20:51.000 Sleep better at night with Boll & Branch sheets.
00:20:53.000 For a limited time only, you can get early access to their annual summer event.
00:20:56.000 Use code SHAPIRO to get 20% off today at bollandbranch.com.
00:21:00.000 That's B-O-L-L-A-N-D-B-R-A-N-C-H.com.
00:21:03.000 Promo code SHAPIRO.
00:21:04.000 Exclusions apply.
00:21:05.000 See site for details.
00:21:06.000 Go check them out right now.
00:21:07.000 Bollandbranch.com.
00:21:08.000 Promo code SHAPIRO.
00:21:09.000 Get 20% off.
00:21:11.000 Okay, so some of the perspectives on AI, they vary widely.
00:21:17.000 So Mark Andreessen, who I've talked about in the past, he is, of course, a major investor in AI.
00:21:22.000 He talks about how AI is going to save the world.
00:21:25.000 And again, what he says is that it's going to make human intelligence significantly more prevalent.
00:21:31.000 People are going to have access to things they never had access to.
00:21:33.000 A guy with 105 IQ is going to be able to put together a movie by simply typing a prompt into a computer.
00:21:39.000 You are going to be able to get a great diagnosis for your medical condition from a nurse practitioner who didn't have to spend seven years working on their education.
00:21:48.000 Andreessen says, what AI offers us is the opportunity to profoundly augment human intelligence to make all of these outcomes of intelligence and many others, from the creation of new medicines, to ways to solve climate change, to technologies to reach the stars, much, much better from here.
00:22:03.000 He says in our new era of AI, every child will have an AI tutor who is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful.
00:22:09.000 The AI tutor will be by each child's side, every step of the way in their development, helping them maximize their potential with the machine version of infinite love.
00:22:16.000 Every person will have an AI assistant, coach, mentor, trainer, advisor, therapist that is infinitely patient,
00:22:21.000 infinitely compassionate, infinitely knowledgeable, and infinitely helpful.
00:22:24.000 The AI assistant will be present through all of life's opportunities and challenges,
00:22:26.000 maximizing every person's outcomes.
00:22:28.000 Every scientist will have an AI assistant, collaborator, or partner that will greatly expand
00:22:31.000 their scope of scientific research and achievement.
00:22:33.000 Every leader will have the same.
00:22:35.000 Productivity growth throughout the economy will accelerate dramatically, driving economic growth,
00:22:40.000 creation of new industries, creation of new jobs, wage growth, resulting in a new era
00:22:44.000 of heightened material prosperity across the planet, scientific breakthroughs will happen,
00:22:47.000 creative arts will enter a golden age.
00:22:49.000 As AI augmented, artists, musicians, writers, filmmakers gain the ability to realize their visions far faster
00:22:54.000 and at greater scale than ever before.
00:22:56.000 He thinks it's even going to improve warfare by reducing wartime death rates dramatically.
00:23:01.000 Military commanders and political leaders will have AI advisors helping them minimize strategic errors.
00:23:07.000 And then he suggests that it's not just about intelligence.
00:23:10.000 AI will make us more human because AI art gives people who otherwise lack technical skills the freedom to create and share artistic ideas.
00:23:16.000 Talking to an empathetic AI friend really does improve their ability to handle adversity.
00:23:19.000 AI medical chatbots are already more empathetic than their human counterparts.
00:23:22.000 So here's where I have a division with Mark.
00:23:25.000 It is in the area of human development.
00:23:29.000 When it comes to the economy, it's great that every scientist will have an assistant that is infinitely capable.
00:23:34.000 That's going to be awesome.
00:23:35.000 It's going to be great that you are going to be able to access a wide range of services at a much cheaper price.
00:23:41.000 It's great that you're going to be able to augment your own skills using AI.
00:23:44.000 However, when we say things like every person will have an AI assistant, coach, mentor, trainer, advisor, or therapist, the question is who's doing the inputs.
00:23:51.000 One of the things about having an AI mentor, trainer, advisor, somebody's going to have to set the parameters.
00:23:55.000 And the parameters are really the big question when it comes to AI.
00:23:58.000 So, when it comes to scientific, research, or legal advice, the parameters are pretty clear.
00:24:02.000 The law, the boundaries of science.
00:24:05.000 The real questions begin to arise when you ask who sets the moral parameters for human interactions with AI.
00:24:11.000 So, to take an example, we've seen AI that has started to hallucinate and act like people's girlfriends.
00:24:17.000 And you can already see that AI technologies will be developed that act as people's quote-unquote paramours.
00:24:22.000 People's lovers, sort of.
00:24:25.000 And once you attach it to the sex dolls the Japanese are making, then obviously life is going to change rather dramatically for a large swath of the population.
00:24:32.000 However, is that good for people?
00:24:34.000 I would suggest not.
00:24:35.000 It's great to be able to personalize your coffee.
00:24:37.000 It is very bad to be able to personalize your interactions with all the other humans around you.
00:24:41.000 Personalizing your interactions with all the other humans around you makes you less human.
00:24:45.000 It is interacting with things that you don't like that makes you a better person.
00:24:49.000 It is dealing with adversity that makes you stronger.
00:24:51.000 The tragedies of life are going to be fewer, and they're going to be milder, because that's what always happens with technological development.
00:24:59.000 But they're not going to disappear entirely.
00:25:01.000 And what we've seen is that, almost like animals taken into captivity, they can live a really, really long time.
00:25:09.000 But when they are hit with any sort of adversity, they don't know what to do, which is why when you release them back into the wild, they're basically helpless.
00:25:15.000 We are taking the wild out of life with a lot of this sort of stuff.
00:25:19.000 And when we do, it makes human beings weaker in very specific, inhuman ways.
00:25:24.000 So this is the area where I think we ought to keep in mind the division.
00:25:27.000 And one of the things that we tend to do as human beings, we tend to use certain tools to try to solve quote-unquote all of our problems.
00:25:32.000 And they shouldn't solve all of our problems.
00:25:34.000 You need different tools for different problems.
00:25:36.000 So, for example, you'll hear people say, two cheers for capitalism.
00:25:39.000 You hear this on the right a lot, because capitalism, it's great, it provides better products, better services, free trade, all of that is very good, but, but, two cheers, because it undermines, for example, community.
00:25:49.000 Capitalism was never meant to support community.
00:25:52.000 Capitalism was meant to make products better and cheaper.
00:25:55.000 That's what capitalism does.
00:25:57.000 That's like saying, this hammer that I'm holding right here.
00:26:00.000 Well, two cheers for the hammer because it's not a screwdriver.
00:26:03.000 Well, it's not a screwdriver and it wasn't meant to be a screwdriver.
00:26:05.000 I feel like artificial intelligence is the same way.
00:26:08.000 It seems to me it's going to be amazing when it comes to augmenting human skill, human intelligence.
00:26:12.000 It's a new tool for productivity.
00:26:14.000 In the same way that an electric screwdriver is way, way better than a hand-turned screwdriver.
00:26:19.000 It's going to be like that.
00:26:20.000 However, when it comes to the idea that parameter-based systems set by tech bros in San Francisco should reshape how your kids learn,
00:26:30.000 I think I'd be super-duper careful with that sort of stuff.
00:26:32.000 Like the idea of an infinitely malleable AI tutor.
00:26:35.000 Who's going to set the parameters?
00:26:36.000 Is it going to be parents or is it going to be the NEA?
00:26:38.000 Is it going to be the government or is it going to be you?
00:26:40.000 And how exactly do you set those parameters?
00:26:43.000 So for example, let's say that, you know, just as I mentioned the calculator before, you set the parameters to the AI to whatever the kid wants so that they can learn the fastest.
00:26:52.000 What does learn even mean?
00:26:54.000 Does a kid need to? Like, when I was a kid, one of the things we're all sort of nostalgic for: when you were a kid, you would look up all the information in the Encyclopedia Britannica.
00:27:02.000 And it's true.
00:27:02.000 You can get information way faster now, which in a certain sense makes us way smarter because we can get the information way faster.
00:27:08.000 At the same exact time, were there actual emergent properties, as I was discussing before, from learning to look things up in an encyclopedia that may be useful for humanity, such that wiping that away by giving too-quick access to the information or the answer is actually bad for you?
00:27:24.000 It's interesting, I was talking to a very pro-AI person recently, and we were talking about the, I was giving the example of my seven-year-old son to him, who's been using predictive text, or he's been using voice to text instead of reading.
00:27:35.000 He said, what's the purpose of reading?
00:27:36.000 Why did you really need to read?
00:27:38.000 That's a good question, right?
00:27:38.000 It's worth thinking about what skills do we need to preserve as human beings?
00:27:42.000 And what, which ones do we not, right?
00:27:44.000 He said, do you, do you think about how you don't know how to plow right now?
00:27:46.000 Are you like sitting around thinking about, man, I lost that skill set of plowing.
00:27:50.000 But it seems to me that there are certain mental activities that are deeply embedded
00:27:55.000 in the human brain, and that wiping them away is going to leave us enervated in a particular way.
00:27:59.000 You've got to be very careful, particularly with how we use AI technology to augment child education.
00:28:04.000 This was the thought during COVID, right?
00:28:06.000 Is that we were just going to Zoom educate all the kids and it was going to be totally fine.
00:28:08.000 It turns out that's not the way the human brain works.
00:28:12.000 So there's this big debate that has been raging about all of this.
00:28:16.000 One of the Doomers is Dario Amodei.
00:28:18.000 He is the founder of Anthropic, right?
00:28:20.000 The reason I think he's a Doomer is specifically because he's thinking about the impact on human beings.
00:28:24.000 He did an interview with Kevin Roose, who is the tech columnist over at the New York Times.
00:28:27.000 Bad on some topics, like, for example, how people like me are horrible on YouTube, but good on some topics, like this interview.
00:28:33.000 And he says, "It's a few weeks before the release of Claude, a new AI chatbot from the artificial intelligence startup Anthropic, and the nervous energy inside the company's San Francisco headquarters could power a rocket.
00:28:43.000 At long cafeteria tables dotted with Spindrift cans and chess boards, harried-looking engineers are putting the finishing touches on Claude's new ChatGPT-style interface, codenamed Project Hatch.
00:28:52.000 Nearby, another group is discussing problems that could arise on launch day.
00:28:55.000 What if a surge of new users overpowers the company's servers?
00:28:58.000 What if Claude accidentally threatens or harasses people, creating a Bing-style PR headache?
00:29:03.000 Dario Amodei is going over his own mental list of potential disasters.
00:29:06.000 My worry, as always, is the model going to do something terrible we didn't pick up on.
00:29:10.000 Anthropic's employees aren't just worried that their app will break or that their users won't like it.
00:29:14.000 They are scared, at a deep existential level, about the very idea of what they're doing, building powerful AI models and releasing them into the hands of people who might use them to do terrible and destructive things.
00:29:23.000 Many of them believe AI models are rapidly approaching a level where they might be considered artificial general intelligence, that's AGI, the industry term for human-level machine intelligence.
00:29:30.000 They fear that if they're not carefully controlled, these systems could take over and destroy us.
00:29:35.000 Some of us think that AGI, in the sense of systems that are genuinely as capable as a college-educated person, presumably with agency, like actually able to input their own inputs, are maybe 5 to 10 years away, says Jared Kaplan, Anthropic's chief scientist.
00:29:48.000 Just a few years ago, worrying about an AI uprising was considered a fringe idea, and one many experts dismissed as wildly unrealistic given how far the technology was from human intelligence.
00:29:56.000 But AI panic is having a moment right now.
00:29:59.000 At Anthropic, the doom factor has turned up to 11.
00:30:03.000 I spent weeks interviewing Anthropic executives, Roose writes, talking to engineers and researchers,
00:30:06.000 sitting in on meetings with product teams ahead of Claude 2's launch.
00:30:09.000 While initially I thought I might be shown a sunny, optimistic vision of AI's potential,
00:30:12.000 a world where polite chatbots tutor students, make office workers more productive, and help
00:30:16.000 scientists cure diseases, I soon learned rose-colored glasses weren't Anthropic's thing.
00:30:20.000 They were more interested in scaring me."
00:30:22.000 A lot of people are worried that AI is going to essentially go off the rails.
00:30:28.000 Kaplan says, a lot of people have come here thinking AI is a big deal.
00:30:33.000 They're really thoughtful people.
00:30:34.000 They're really skeptical of these long-term concerns.
00:30:36.000 And they're like, wow, these systems are much more capable than I expected.
00:30:38.000 The trajectory is much, much sharper.
00:30:39.000 And so they are concerned about AI safety.
00:30:45.000 So, you know, again, AI safety is, let's say that the system decides, quote-unquote decides, that it is going to wipe out humanity and now has the capacity to do so.
00:30:54.000 Now, the thing to worry about is not that it gains its own sort of willpower, because AI does not have desire.
00:30:59.000 It's not sentient.
00:31:00.000 It's not you, it's not me.
00:31:01.000 It doesn't have an emotional want for things.
00:31:03.000 But what if a bad person grabs AI?
00:31:04.000 I mean, they could certainly do an enormous amount of damage.
00:31:07.000 So that is one problem.
00:31:08.000 The other problem, again, I think is just the unintended consequences.
00:31:10.000 So you remember when social media first came online, the idea was it was going to connect everybody.
00:31:14.000 And this is going to be a great thing.
00:31:15.000 And it has been in many ways.
00:31:17.000 You can talk to your friends from other countries over FaceTime or WhatsApp.
00:31:21.000 You can use Facebook to keep tabs on people you haven't seen in 20 years.
00:31:24.000 Or, alternatively, you can use these social networks to isolate yourself in your bedroom and spend the next 10 years being lonely, isolated, and bored before you have suicidal ideation.
00:31:35.000 And it's been more of the latter than the former in terms of sort of the statistical use.
00:31:39.000 The questions of whether the internet, for example, has been an overwhelmingly good thing or a bad thing overall?
00:31:45.000 These are very serious questions, and I don't think they're answerable.
00:31:47.000 The same thing is going to be true of AI, except in spades.
00:31:51.000 Bill Gates, on the other hand, he's more of a tech optimist.
00:31:55.000 He says there are five risks from AI.
00:31:57.000 He says first, he's worried about AI-generated misinformation and deepfakes.
00:32:01.000 I'm less worried about that because, again, I think that you'd have to have basically AI running everything in order for any sort of deepfake or misinformation to be so widely spread that everybody believes it.
00:32:13.000 In fact, only the media is capable of doing that at this point.
00:32:15.000 Second, AI could automate the process of searching for vulnerabilities in computer systems, which is true.
00:32:19.000 Again, bad people could get a hold of AI and then they could work around all the security systems.
00:32:22.000 Third, AI could take people's jobs.
00:32:25.000 Again, I'm less worried about that because every technological shift of the past has displaced people.
00:32:29.000 And with every single one, there is a lag time.
00:32:31.000 I'm not going to pretend that people aren't going to lose their jobs initially.
00:32:33.000 But over time, more jobs are created.
00:32:35.000 There are many, many more jobs on planet Earth now with all of the technology than there were, say, 20 years ago with a lot less of the technology.
00:32:42.000 Fourth, AI systems have already been found to fabricate information and exhibit bias.
00:32:46.000 And finally, access to AI tools could mean that students don't learn essential skills.
00:32:48.000 This is the one, again, that I'm most worried about.
00:32:50.000 I'm worried about the pipeline of human development.
00:32:54.000 And so, that does argue for the possibility that perhaps we should be carefully shielding kids from AI, particularly at their youngest stage, before we simply open the floodgates.
00:33:05.000 It's the parents who opened the floodgates to social media who made their kids into zombified, socially engineered problems.
00:33:14.000 And I think we ought to be very, very careful about the sort of stuff that we unleash on kids in particular.
00:33:21.000 Here's the bottom line to all of this.
00:33:22.000 The bottom line to all of this is that you better get ready because things are going to change very, very quickly.
00:33:26.000 That can either be scary or it can be wonderful.
00:33:28.000 But pretending that the technological situation of the world is going to be the same five years from now as it is today is not true.
00:33:35.000 Everything is going to change and it's going to start changing very, very fast.
00:33:38.000 Okay, in just one second, we'll get to the NATO summit where Ukraine is now at odds with much of NATO.
00:33:42.000 First, everyone knows I love my Helix Sleep Mattress.
00:33:44.000 Did you know they just launched their newest, most high-end collection?
00:33:47.000 That would be the Helix Elite.
00:33:48.000 Helix has harnessed years of extensive mattress expertise to bring their customers a truly elevated sleep experience.
00:33:53.000 The Helix Elite Collection includes six different mattress models, each tailored for specific sleep positions and firmness preferences.
00:33:58.000 I've had my Helix Sleep Mattress for at least six, seven years at this point.
00:34:01.000 It is great.
00:34:02.000 It is the thing that allows me to sleep at night when my baby is waking us up.
00:34:05.000 Nervous about buying a mattress online?
00:34:06.000 Well, there's no need to be.
00:34:07.000 Helix has a Sleep Quiz.
00:34:08.000 It matches your body type and sleep preferences to the perfect mattress.
00:34:11.000 Because why would you buy a mattress made for somebody else?
00:34:13.000 I took that Helix quiz.
00:34:14.000 I was matched with a firm but breathable mattress.
00:34:16.000 It is excellent.
00:34:16.000 Again, my sleep quality is better now than it was well before I had the Helix Sleep Mattress.
00:34:20.000 Go to HelixSleep.com slash Ben.
00:34:22.000 Take their two-minute sleep quiz.
00:34:23.000 Find the perfect mattress for your body and sleep type.
00:34:25.000 Your mattress will come right to your doorstep for free.
00:34:27.000 Plus, Helix has a 10-year warranty.
00:34:29.000 You can try it out for 100 nights risk-free.
00:34:30.000 They'll even pick it up for you if you don't love it, but you will.
00:34:32.000 Helix has over 12,000 five-star reviews.
00:34:34.000 Their financing options and flexible payment plans make it so a great night's sleep is never far away.
00:34:38.000 For a limited time, Helix is offering up to 20% off all mattress orders and two free pillows for our listeners.
00:34:43.000 It's their best offer yet.
00:34:44.000 Hurry on over to helixsleep.com slash ben.
00:34:46.000 With Helix, better sleep starts right now.
00:34:48.000 Also, a lot of unhappy people out there.
00:34:50.000 Most of them think if I could just get the right job or find the right spouse or have more money, everything would be fine.
00:34:54.000 The truth is, happiness is something you can achieve without adding anything else to your life.
00:34:59.000 You don't have to take my word for it.
00:35:00.000 Take the founder of PragerU's word for it, Dennis Prager.
00:35:02.000 In a brand new episode of PragerU Master's program, streaming exclusively at Daily Wire+, Dennis will show you how you can be happy right now.
00:35:09.000 In PragerU Master's program, Dennis is sharing 40 years worth of hard-earned wisdom that explores all kinds of topics like the differences between men and women, the consequences of secularism, and the case for marriage.
00:35:19.000 In this week's episode, Dennis is going to show you how to overcome the hurdles to happiness.
00:35:22.000 I can guarantee at least one of those hurdles you've never considered before.
00:35:25.000 You will not want to miss this episode.
00:35:26.000 Go to dailywireplus.com to become a member.
00:35:29.000 Watch PragerU master's program today.
00:35:31.000 Meanwhile, controversy breaking out over at the NATO summit.
00:35:33.000 Volodymyr Zelensky is upset.
00:35:35.000 On Tuesday, he upended that summit by blasting an agreement for its lack of a concrete timeline for Kiev to join the alliance, as well as the absurd process by which it was drafted.
00:35:43.000 This is according to the Washington Post.
00:35:45.000 In a fiery tweet, Zelensky frustrated Ukrainian advocates inside the alliance who believed they'd
00:35:48.000 secured a win for Kiev by pushing the United States, Germany and other reluctant countries
00:35:52.000 to consent to quote issue an invitation for Ukraine to join NATO when the allies agree and
00:35:56.000 conditions are met. That language was read out by NATO Secretary General Jens Stoltenberg after the
00:36:00.000 agreement among NATO's 31 leaders was made. Zelensky's angry intervention, which came before
00:36:04.000 the final agreement on Tuesday, but after the language had already started circulating, suggested
00:36:08.000 the alliance had not yet found a way to satisfy both sides.
00:36:11.000 Zelensky said now on the way to Vilnius we received signals that certain wording is being discussed
00:36:15.000 without Ukraine.
00:36:16.000 I'd like to emphasize that this wording is about the invitation to become NATO member, not about Ukraine's membership.
00:36:20.000 It's unprecedented and absurd when time frame is not set, neither for the invitation nor for Ukraine's membership.
00:36:26.000 He claims that NATO leaders are not serious about inviting Ukraine to join the alliance and complained their approach indicated they instead wanted to keep its membership as a bargaining chip for eventual negotiations with Russia.
00:36:34.000 Uncertainty is weakness, he said, and I will openly discuss this at the summit.
00:36:39.000 Zelensky did not mention Joe Biden in the tweet, but obviously this is directed as a shot over the bow for Joe Biden.
00:36:45.000 Now, again, I've said before that the off-ramp here is probably to offer Ukraine NATO membership after cramming down a deal brokered by the United States.
00:36:54.000 Basically, Ukraine then becomes a NATO member, meaning that its borders are non-permeable by Russia once Russia gets some territory out of this whole thing.
00:37:02.000 So, NATO saying, like, we're going to put this thing on hold is actually not wrong.
00:37:06.000 Perhaps the gap is starting to emerge here that's going to allow for negotiations.
00:37:11.000 Stoltenberg, however, immediately sought to smooth things over.
00:37:13.000 He said there's never been a stronger message from NATO at any time, both when it comes to political messages on the path forward for membership and the concrete support from NATO allies with their support as well.
00:37:21.000 The problem is that because of the way that this is being trotted out by the West, the suggestion is that when the war is over, then they will consider NATO membership.
00:37:29.000 That's something Joe Biden has explicitly said.
00:37:31.000 Once you say that, it's now in Vladimir Putin's interest to prolong the war as long as possible so that Ukraine does not get NATO membership.
00:37:37.000 What they actively should be saying to the Russians is, listen, we're going to probably let them into NATO regardless, like at a certain point.
00:37:43.000 So you can either do it with territory, or you can do it while the war is still raging, which is going to be a problem.
00:37:48.000 Jake Sullivan, the National Security Advisor, he said, when the NATO summit gets underway, our alliance will not only be bigger and stronger than ever, it will be more united, more purposeful and more energized than at any point in modern memory.
00:37:58.000 Well, part and parcel of the attempts to bring Ukraine further into NATO is this attempt to bring Turkey further toward the West.
00:38:08.000 Some of that is being driven by economic trouble, as the Wall Street Journal points out.
00:38:11.000 For more than a year, the Turkish leader, Recep Tayyip Erdogan, has carefully straddled the widening divide between Russia and the West over the Kremlin's invasion of Ukraine.
00:38:19.000 Now, as he searches for ways to support an economy that has deteriorated under his watch, he is seeking to improve relations with both the United States and his Western allies.
00:38:25.000 So he is triangulating, just like everybody else is triangulating.
00:38:30.000 The weakness of Russia is a giant favor handed to the West by Vladimir Putin.
00:38:33.000 It means that there are a lot of countries that are now triangulating between Russia and the West.
00:38:36.000 However, it's the weakness of Joe Biden on the obverse side that is creating triangulation between the United States and China for some of these same exact countries.
00:38:44.000 It's also leading to some sort of, I would say, risky decision making.
00:38:48.000 So the United States has now apparently cleared a path to sell F-16 fighter jets to Turkey.
00:38:54.000 National Security Advisor Jake Sullivan rejected suggestions that advancing the sale to Ankara was directly linked to Erdogan's decision to let Stockholm into the alliance, saying there was no quid pro quo.
00:39:02.000 But U.S. officials said the jets factored in the negotiations.
00:39:07.000 So essentially we paid Erdogan not to block Sweden's accession into NATO.
00:39:11.000 A lot of people are a little bit concerned about handing over F-16s to Erdogan, who is in fact a quasi-dictator of Islamist bent.
00:39:23.000 Congress has the authority to pass legislation that will block or modify a sale until the jets are delivered.
00:39:28.000 Senate Foreign Relations Committee Chairman Bob Menendez said Friday he was discussing the potential sale of the jets to Turkey, signaling a potential reversal of his longstanding opposition to the idea.
00:39:36.000 I mean we have other allies in the region who are not always on the same page as the Turks and so this could be a big problem.
00:39:43.000 Meanwhile, Antony Blinken is saying that Ukraine has made good progress on its path to joining NATO, but says they have more work to do.
00:39:50.000 Again, it's not clear exactly what is happening here, and so I would say a lot is up in the air.
00:39:56.000 President Biden made it very clear that he doesn't believe Ukraine is ready for NATO.
00:40:01.000 What will it take for the administration's point of view for Ukraine to be ready?
00:40:06.000 I know I've heard you all say when the war is over.
00:40:08.000 Is that it?
00:40:10.000 So we're committed to what's called NATO's open door, to welcoming new members when they're ready for membership and when all of the allies agree to invite them in.
00:40:18.000 Ukraine has made good progress in that direction, and that's going to be reflected at the summit.
00:40:23.000 At the same time, the Ukrainians and others are the first to acknowledge that they have more work to do, continuing to reform their military, continuing to deepen democratic reforms.
00:40:31.000 You're going to see that come out of the summit as well.
00:40:35.000 So, it's going to be unclear exactly what happens next year.
00:40:38.000 The one thing that is clear is that at some point, and maybe this is the beginning of it, the move is going to be to draw some daylight with Zelensky so they can actually cut a deal.
00:40:42.000 So maybe this is the right move.
00:40:48.000 Meanwhile, Joe Biden is bragging to Erdogan, a person who is, again, a pretty vicious Islamist dictator for the past 10 years or so.
00:40:57.000 He's bragging to Erdogan that they look forward to the next five years together.
00:41:00.000 This is Biden suggesting he's going to run for re-election, of course.
00:41:04.000 I believe today's meeting with you within the margin of the NATO Summit is the first step forward.
00:41:12.000 Our meetings prior to this were mere warm-ups, but now we are initiating a new process.
00:41:20.000 This new process is a process of five years, and now you are getting prepared for the forthcoming elections.
00:41:28.000 And with the forthcoming elections, I would like to take this opportunity to also wish you the best of luck.
00:41:36.000 Thank you very much.
00:41:37.000 Thank you.
00:41:38.000 I look forward to meeting you in the next five years.
00:41:43.000 OK, well, if that is the case, then why is it that Joe Biden is skipping dinner and going straight to his hotel?
00:41:47.000 According to the UK Daily Mail last night, Joe Biden raised eyebrows after skipping dinner with NATO leaders on Tuesday night.
00:41:52.000 Instead, he headed straight home to his hotel in Lithuania.
00:41:55.000 A U.S. official blamed the 80-year-old president's busy schedule over four days and said he's preparing for a big speech on Wednesday.
00:42:01.000 But that is, in fact, a change in schedule.
00:42:04.000 This is not a healthy person.
00:42:05.000 The media, meanwhile, are doing their best, as always, to prop up Joe Biden because they don't have a lot of other choices.
00:42:11.000 Joy Behar over at The View.
00:42:12.000 Man, that lady, she's a weird lady.
00:42:16.000 There's a report from Axios that Joe Biden likes to scream at his White House staff, like a lot.
00:42:21.000 It's one of his favorite things to do.
00:42:23.000 Joy Behar has, I'll say she has some weird turn-ons.
00:42:27.000 He's swearing at people.
00:42:29.000 It's a quirk.
00:42:30.000 Kind of turned me on when I heard that the president gets angry and volatile.
00:42:32.000 I'm not going to lie.
00:42:33.000 I'm disappointed in just about every single thing he has done as president.
00:42:38.000 I think he's just, the economy is wobbling at best.
00:42:42.000 That's a kind way of putting it.
00:42:44.000 His foreign policy is a disaster.
00:42:46.000 He has no idea what he's doing in terms of China and Ukraine.
00:42:49.000 So if he's throwing a few F-bombs here and there, I'm like, yeah, I kind of like it.
00:42:52.000 I'm not going to lie.
00:42:57.000 I think it was just someone being angry, making you turn on.
00:43:01.000 I was turned on by Biden's anger.
00:43:03.000 I am too. I like it.
00:43:05.000 You like it? I do.
00:43:06.000 Well, you have said that before.
00:43:07.000 I like that. I mean, he's such a mild-mannered, sweet guy.
00:43:10.000 That's weird. Kennedy was joking.
00:43:13.000 Behar is like dead serious.
00:43:14.000 I'm so, so concerned for her.
00:43:17.000 That's very, very strange.
00:43:18.000 OK, meanwhile, Karine Jean-Pierre, she has announced that there are no updates on the missing cocaine at the White House.
00:43:24.000 Here we go.
00:43:26.000 Thank you.
00:43:29.000 Here we go.
00:43:33.000 I don't have any updates.
00:43:34.000 As you know, as you just mentioned, Secret Service is under their purview.
00:43:37.000 They are certainly investigating the situation.
00:43:39.000 I just don't have anything updated.
00:43:41.000 I would refer you to the Secret Service on that particular question.
00:43:45.000 No updates, no updates.
00:43:46.000 I mean, what a mystery wrapped in an enigma.
00:43:49.000 Well, as always, Joe Biden's weakness is of no consequence to the Democrats because the person backing him up is absolutely terrible.
00:43:55.000 Kamala Harris, we have another Deep Thoughts with Kamala here today.
00:44:00.000 And now, Deep Thoughts with Kamala Harris.
00:44:17.000 I again want to thank the secretary for your work.
00:44:21.000 This issue of transportation is fundamentally about just making sure that people have the ability
00:44:27.000 to get where they need to go.
00:44:29.000 It's that basic.
00:44:30.000 Yeah, that's what transportation is.
00:44:35.000 It's where you move things from one place to another.
00:44:38.000 I like when she defines basic words because she hasn't read the book.
00:44:42.000 If you just assume that every Kamala Harris speech or presser is her giving a book report on a book she has not read, things make a lot more sense.
00:44:52.000 They go, Kamala, read To Kill a Mockingbird.
00:44:54.000 Well, it's a story in which there is, in fact, a mockingbird.
00:44:59.000 And it is, in fact, sometimes killed.
00:45:02.000 And that's the important part.
00:45:04.000 Man, she is awful.
00:45:05.000 That is why they have to keep upholding the Biden of it all.
00:45:07.000 Okay, meanwhile, the Republican Party, we have another stupid scandal.
00:45:11.000 And when I say stupid scandal, this is where one Republican says a dumb thing and now every Republican has to answer for it.
00:45:15.000 You never see this with Democrats.
00:45:16.000 You just don't.
00:45:18.000 The answer for Democrats always, if someone says something dumb, is why are you asking me?
00:45:22.000 Why is it my business what AOC said?
00:45:25.000 Is it my job to answer for that person?
00:45:26.000 But the way that it works for Republicans is a Republican says a dumb thing, and then we expand it into the biggest scandal that ever happened, and then we ask every single Republican in America about it in order to dissociate!
00:45:38.000 You must dissociate!
00:45:39.000 Show that you don't like this thing!
00:45:41.000 Or how about it's not me, and it's not my job to answer for anyone except for me.
00:45:47.000 If you want to ask me whether I think, for example, white nationalism is bad, evil, and racist, of course I do.
00:45:53.000 But I don't understand why you're asking me, since I never said that it wasn't.
00:45:56.000 In any case, Senator Tommy Tuberville stepped in it the other day.
00:45:59.000 He was asked about white nationalism.
00:46:02.000 And I think, honestly, that you can almost see it on his face as this clip progresses, that he realizes he's made a boo-boo.
00:46:07.000 I think that he thought that she said Christian nationalism.
00:46:11.000 And there's been this widespread attempt by the left to morph together white nationalism, meaning the idea that a white-only ethno-state would be better for the United States,
00:46:19.000 and this idea of Christian nationalism, which is a little more controversial because it can be read in one of two ways.
00:46:23.000 One is as like an actual theocracy.
00:46:26.000 And the other is as an America that has very solid traditional Judeo-Christian values that are embedded in public life.
00:46:33.000 Those are not quite the same thing.
00:46:35.000 But she didn't say Christian nationalism.
00:46:36.000 She said white nationalism and asked him about it and he blew it.
00:46:40.000 I'm totally against identity politics.
00:46:42.000 I think it's ruining this country.
00:46:44.000 And I think that Democrats ought to be ashamed for how they're doing this because it's dividing this country and it's making this country weaker every day.
00:46:52.000 But that's not identity politics.
00:46:54.000 You said a white nationalist is an American.
00:46:56.000 It is identity politics.
00:46:57.000 You said a white nationalist is an American, but a white nationalist is someone who believes horrific things.
00:47:02.000 Do you really think that's someone who should be serving in the military?
00:47:06.000 Well, that's just a name that has been given.
00:47:08.000 It's not.
00:47:09.000 It's a real definition.
00:47:10.000 There's real concerns about extremism.
00:47:11.000 So if you're going to do away with most white people in this country out of the military, we've got huge problems.
00:47:16.000 We've got huge problems.
00:47:17.000 It's not people who are white.
00:47:18.000 It's white nationalists.
00:47:19.000 That have a few probably different beliefs.
00:47:21.000 You see the distinction, right?
00:47:22.000 That have different beliefs.
00:47:23.000 Now, if racism is one of those beliefs, I'm totally against it.
00:47:27.000 I am totally against racism.
00:47:29.000 But there's a lot of people that believe in different things.
00:47:31.000 A white nationalist is racist, Senator.
00:47:33.000 Well, that's your opinion.
00:47:37.000 You can almost see him break out in sweat here, because he doesn't know what he's talking about, obviously.
00:47:40.000 Right?
00:47:40.000 He's saying that you shouldn't have racial discrimination in the military, and she's saying white nationalists shouldn't be in the military.
00:47:45.000 Now, the case that you can make is that the government over-broadly classifies white nationalists, because every time you go to the Department of Homeland Security and ask them for a definition, they'll say things like, people who want to make sure that their kids are not controlled by the NEA.
00:47:56.000 You can make that definition, but the classical definition of white nationalism, of course, of course, is racism.
00:48:02.000 Right?
00:48:02.000 There's just no two ways about that.
00:48:04.000 So he makes that boo-boo.
00:48:06.000 Right?
00:48:06.000 And then he's asked more about it and he walks it back and he says, well, I mean, if that's what racism is, then sure, I don't like racism.
00:48:13.000 Again, he's just, it's a terminological failure by him and it's a boo-boo and it's a gaffe.
00:48:17.000 But does anyone actually think that Tommy Tuberville is in favor of like a white ethnostate?
00:48:20.000 Obviously not.
00:48:21.000 This is just him being dumb.
00:48:22.000 It's not him actually believing the thing.
00:48:24.000 Here we go.
00:48:26.000 Explain why you continue to insist that white nationalists are American.
00:48:32.000 Listen, I'm totally against racism.
00:48:34.000 If Democrats want to say that white nationalists are racist, I'm totally against that too.
00:48:38.000 But that's not a Democratic definition.
00:48:40.000 The definition of a white nationalist is someone... Well, that's your definition.
00:48:45.000 My definition is racism is bad.
00:48:47.000 The definition is that the belief that the white race is superior to all other races.
00:48:52.000 Totally out of the question.
00:48:53.000 So do you believe that white nationalists are racist?
00:48:56.000 Yes.
00:48:56.000 If that's what a racist is, yes.
00:48:58.000 Thank you.
00:49:00.000 Okay, so, again, he made a boo-boo.
00:49:03.000 He won't step out of the boo-boo by saying, I misheard, or I thought you were talking about just whites generally, or the Department of Homeland Security over-categorizes.
00:49:09.000 Okay, but that's not the point.
00:49:11.000 The point here is a senator says a dumb thing.
00:49:14.000 This now becomes the predicate for an entire news cycle where we're not supposed to worry about all of the other racial issues plaguing the United States, such as, for example, the attempt in California to push for a full-scale reparations regime amounting to black fathers no longer having to pay child support for their kids. Right, actual racism.
00:49:29.000 Or the affirmative action, racism in action, that Democrats have been pursuing and are screaming and caterwauling about because it just got banned by the Supreme Court.
00:49:37.000 Or the equity agenda of the Biden administration that is absolutely predicated on group differences and rectifying group imbalances in outcome.
00:49:42.000 We're not supposed to pay attention to that; we're supposed to pay attention to Tuberville saying a dumb thing.
00:49:47.000 So Chuck Schumer, of course, goes ballistic over this, because what a convenient brickbat to hit somebody with.
00:49:54.000 The definition of white nationalism is not a matter of opinion.
00:50:00.000 White nationalism, the ideology that one race is inherently superior to others, that people of color should be segregated, subjected, and relegated to second-class citizenship, is racist down to its rotten core.
00:50:14.000 And for the senator from Alabama to obscure the racist nature of white nationalism is indeed very, very dangerous.
00:50:22.000 His words have power and carry weight with the fringe of his constituency, just the fringe. But if that fringe listens to him excuse and defend white nationalism, he is fanning the flames of bigotry and intolerance.
00:50:40.000 I mean, again, overplaying the hand here is the thing that Democrats do.
00:50:44.000 And then, of course, Mitch McConnell is forced to come out and say, white supremacy is bad.
00:50:47.000 Like, we all know.
00:50:48.000 We all know.
00:50:49.000 Here's Mitch McConnell.
00:50:51.000 Do you have any concerns that you have a member of your conference, Senator Tuberville, who seems to have a hard time denouncing white nationalism, especially as it pertains to white nationalism in the military?
00:51:03.000 White supremacy is simply unacceptable in the military and in our whole country.
00:51:10.000 I mean, again, the fact that Republicans constantly fall into this trap where it's like, I have to now answer for Tommy Tuberville or whatever.
00:51:17.000 It's really, really dumb.
00:51:18.000 And again, you don't see Democrats doing the same thing.
00:51:20.000 They don't even answer for their own policies.
00:51:21.000 OK, time for some things I like and then some things that I hate.
00:51:24.000 So things that I like.
00:51:26.000 So Bud Light has just taken it absolutely on the chin.
00:51:28.000 And as I said, I'm not sure that brand ever recovers.
00:51:30.000 According to Yahoo Finance, Bud Light has now spiraled down to the 14th spot.
00:51:36.000 In terms of beer rankings, the repercussions resonate far beyond the brand itself.
00:51:40.000 A recent YouGov survey reveals the decline in Bud Light's ranking, casting it below competitors like Pabst Blue Ribbon, Miller Genuine Draft, and Miller Lite.
00:51:46.000 The seismic shift in popularity jeopardizes the livelihood of the 65,000 people whose economic well-being is intricately tied to Anheuser-Busch InBev's success.
00:51:53.000 I love when the media are suddenly worried about jobs in the beer industry.
00:51:56.000 Now again, people aren't buying less beer, they're just buying beer from other providers who presumably are gaining jobs.
00:52:01.000 Anheuser-Busch CEO Brendan Whitworth has taken full responsibility for the controversial promotion involving trans influencer Dylan Mulvaney that caused sales to plummet.
00:52:08.000 In an interview with CBS, Whitworth emphasized that he is ultimately accountable for the actions of the company.
00:52:14.000 But at the same time, he has not actively kind of changed direction here.
00:52:20.000 Whitworth has confirmed the company will maintain its partnerships without making any changes.
00:52:25.000 He did not explicitly apologize for collaboration with Mulvaney.
00:52:29.000 And, again, more and more people are just deciding, I can buy beer somewhere else.
00:52:35.000 So, a lot of other companies are gonna look at that, and they're gonna realize, maybe I shouldn't dip my toe into this particular water, because it turns out the conservative alligator sometimes will bite you.
00:52:44.000 Okay, time for a couple of things that I hate.
00:52:45.000 Okay, so, MSNBC sent a tweet the other day, and it was actually of an article that was a year old.
00:52:54.000 And the tweet was, quote, the far-right's obsession with fitness is going digital.
00:52:59.000 So apparently, if you work out, you are now Hitler.
00:53:02.000 Which is really fascinating.
00:53:04.000 It's a fascinating take.
00:53:06.000 And one of the things that has been true since the days of Nietzsche is that there has been a sort of counter-cultural interest in people looking good because they believe that deconstructionists have made everything ugly.
00:53:18.000 And that's not wrong.
00:53:20.000 The attempt to undermine beauty standards is a left-wing thing.
00:53:23.000 The left has decided to undermine the standards on nearly everything.
00:53:26.000 They did this back in the 1960s and 70s by building ugly cement blocks of buildings and getting rid of all the beautiful buildings that used to exist in America's cities.
00:53:33.000 And now they're doing it by trying to proclaim to you that people who are objectively ugly are actually quite beautiful.
00:53:39.000 And so the right-wing response to that has been, well, no.
00:53:42.000 And also, you don't have to be ugly.
00:53:43.000 You can make yourself look better.
00:53:44.000 And that, of course, is true.
00:53:46.000 And I'm a big advocate of the idea that people should try to make themselves look better in a healthy way.
00:53:52.000 Working out is a very good thing.
00:53:54.000 There's nothing wrong with working out.
00:53:55.000 And by the way, it's important in relationships also.
00:53:57.000 If you wish to be sexually attractive to your partner, and for your partner to be sexually attractive to you, well, then presumably, going to the gym once in a while wouldn't hurt.
00:54:04.000 That isn't a far-right obsession.
00:54:06.000 But as the left cedes more and more ground to the right, they end up with these bizarre arguments.
00:54:10.000 So when you say things like, fat positivity is good, stop with the body shaming,
00:54:16.000 all you care about is how people look,
00:54:18.000 the natural response by people on the right is going to be, well, maybe you should go to the gym and cut out the donuts, Tubbo.
00:54:23.000 That's not a horrible thing.
00:54:25.000 It also has become sort of an aspect of self-control.
00:54:28.000 There's this perception on the left that everything that you are in life is outside of your control, that you are basically just planted on the planet, fully created.
00:54:36.000 There's no control over any aspect of your life.
00:54:39.000 If you're overweight, it's through no fault of your own, even if you're downing 3,000 calories a day and not going to the gym ever.
00:54:45.000 And so the right's response to that has been, go to the gym and work out.
00:54:47.000 And it's become sort of a meme, right, in the online world,
00:54:50.000 that you're in shape because you're right-wing.
00:54:54.000 Well, but the truth is, is there anything wrong with that?
00:54:56.000 Wouldn't it be better for the left if the left was like, yeah, you should get in shape.
00:54:59.000 Getting in shape is definitely a good thing.
00:55:00.000 You want to lower those healthcare costs?
00:55:02.000 You want to make sure that you have a more successful dating life?
00:55:04.000 Maybe you should look better.
00:55:06.000 Again, the left has wiped out so much of its appeal to the center that now everything that is not hard left has become far right, according to MSNBC.
00:55:14.000 That's kind of an amazing, amazing thing.
00:55:18.000 So, again, if they wish to abandon fitness, then I guess they can do it.
00:55:23.000 I just don't see how that is going to... I'm not sure how that is going to benefit them in either the short or the long run.
00:55:29.000 Alrighty, guys.
00:55:29.000 The rest of the show continues right now.
00:55:30.000 You're not going to want to miss it.
00:55:31.000 We'll be getting into the mailbag.
00:55:32.000 If you're not a member, become a member.
00:55:33.000 Use code Shapiro at checkout for two months free on all annual plans.