The Ben Shapiro Show - November 14, 2021


Eric Schmidt | The Ben Shapiro Show Sunday Special Ep. 120


Episode Stats

Length

1 hour and 2 minutes

Words per Minute

193.25

Word Count

12,104

Sentence Count

709

Misogynist Sentences

4

Hate Speech Sentences

12


Summary

Founder and former CEO of Google, Eric Schmidt joins The Ben Shapiro Show to discuss his new book, The Age of AI and Our Human Future, and his thoughts on artificial intelligence and its impact on our not-too-distant future. Plus, we discuss Mark Zuckerberg's transformation of Facebook into the metaverse, whether Google is a monopoly limiting all others in its market, and whether big tech is censoring what it considers misinformation. This is a Sunday special. This show is sponsored by ExpressVPN. Don't like big tech and the government spying on you? Visit ExpressVPN.com/ben. Become a DailyWire member to get access to all of the show's Sunday Special episodes, plus special bonus segments throughout the week. You won't want to miss it! Subscribe to stay up to date on all things Ben Shapiro. Thanks for listening to the show.


Transcript

00:00:00.000 I used to say, ten years ago, that this world was optional.
00:00:05.000 That if you don't like it, just turn it off.
00:00:08.000 It's okay.
00:00:09.000 No problem.
00:00:09.000 You can't take a break from it anymore.
00:00:12.000 It's become the way you educate, the way you entertain, the way you make money, right?
00:00:16.000 The way you communicate, the way you have a social life.
00:00:19.000 So we're going to have to adapt these platforms to these human regulations.
00:00:22.000 I used to think, it's fine to have all that bad stuff online because people can just turn it off.
00:00:27.000 But they can't now.
00:00:29.000 Our guest is credited for scaling Google into the 21st century, taking the company from its Silicon Valley startup roots to a global tech leader.
00:00:37.000 Eric Schmidt was Google's CEO for 10 years, 2001 to 2011.
00:00:40.000 During that time, the company launched Google Maps, Gmail, Google Chrome, and bought YouTube, just to name a few of its many milestones.
00:00:47.000 After serving as CEO, he became executive chairman and then technical advisor for Alphabet Inc., Google's parent company, totaling 19 years at Google, not to mention nearly 20 years prior, leading other tech and software-based companies.
00:01:10.000 In 2020, Eric left Google for new ventures, notably contributing $1 billion through his philanthropic organization to fund talented people bringing scientific and technological innovations to bear on the world's largest and hardest problems.
00:01:17.000 He also co-authored a book this year alongside Henry Kissinger and Dan Huttenlocher, The Age of AI and Our Human Future.
00:01:30.000 The three of them draw on their diverse careers, Eric writing from his entrepreneurship and technology business expertise, to assess the current state of artificial intelligence and to raise essential questions for a world fully integrated with AI.
00:01:30.000 Eric gives me his thoughts on our not-too-distant future.
00:01:32.000 Plus, we discuss Mark Zuckerberg's transforming Facebook into the metaverse, whether Google is a monopoly limiting all others in its market, and whether big tech is censoring what they consider misinformation.
00:01:53.000 This is the Ben Shapiro Show's Sunday special.
00:01:55.000 This show is sponsored by ExpressVPN.
00:01:57.000 Don't like big tech and the government spying on you?
00:01:59.000 Visit ExpressVPN.com slash Ben.
00:02:02.000 Just a reminder, we will be doing some bonus questions at the end with Eric Schmidt.
00:02:05.000 The only way to get access to that part of the conversation is to become a member.
00:02:09.000 We're going to go deep at the end on the impacts of AI on human reason, how it's going to redefine what it means to be a human.
00:02:15.000 You're not going to want to miss it.
00:02:15.000 Head on over to dailywire.com.
00:02:17.000 Become a member.
00:02:17.000 You'll have the full conversation.
00:02:19.000 Eric, thanks so much for joining the show.
00:02:21.000 Thank you, Ben.
00:02:22.000 I've wanted to be on for a long time.
00:02:24.000 I really appreciate that, and I hope you survive whatever blowback comes your way from us having an open conversation.
00:02:30.000 So why don't we start with the topic of your brand new book, which is artificial intelligence.
00:02:35.000 So for a lot of folks who are not versed in this, including me, when I hear artificial intelligence, I tend to think of, you know, the Iron Giant or Gizmo, basically robots that talk back to you.
00:02:46.000 But obviously, AI is much more sophisticated and all encompassing than that.
00:02:50.000 I think people are not fully aware of what AI means for the future of humanity.
00:02:54.000 So why don't we start by defining what AI is?
00:02:57.000 Well, first, thank you for this, and it's exactly the right starting question.
00:03:02.000 Most of your audience, when they think of AI, they think of the following.
00:03:06.000 They think of a man who creates a killer robot.
00:03:10.000 The killer robot escapes, and the female scientist somehow slays the robot.
00:03:16.000 And that is precisely not what we're talking about here.
00:03:19.000 That's great fiction and it's a great movie, but it's not going to happen that way.
00:03:24.000 What's really going to happen is that our world will be surrounded by digital intelligences.
00:03:31.000 And these intelligences are partially like humans, and they'll make things more efficient.
00:03:36.000 They'll make new discoveries.
00:03:38.000 They'll be around.
00:03:40.000 They'll help educate our children.
00:03:41.000 They'll discover new things.
00:03:44.000 We can talk to them.
00:03:45.000 They'll provide companionship.
00:03:47.000 They'll also change the way we think about war and our own identity.
00:03:52.000 We wrote the book to explore those questions.
00:03:55.000 So, first question that sort of comes to mind immediately for me here is whether human beings are ready for this.
00:04:01.000 So, Bret Weinstein and Heather Heying wrote a book recently where they talked about the fact that so much of the stuff that we interact with even now is well beyond the capacity of sort of our evolutionary adaptability.
00:04:13.000 That if you look back to sort of the lizard brains we developed in the savannas and jungles hundreds of thousands of years ago and we were trying to survive.
00:04:20.000 We are now using devices and we really don't know the impact of those devices on how we think.
00:04:25.000 Obviously, the sort of stuff that comes to mind for a lot of folks is the use of social media.
00:04:29.000 The addictiveness, for example, of social media.
00:04:33.000 Are human beings going to be able to adapt to a world that is filled with all of these things that human beings were not privy to in the natural environment?
00:04:43.000 It's gonna be really stressful, and we need to have a whole new philosophy about how we're gonna deal with what this does to humanity.
00:04:52.000 It's interesting that 400 years ago, we went through something which is a transition from the age of faith, where people believe that God told them what to do, to what is called the age of reason, where you could have critical thinking, and you could look at a situation, and you could say, is that good for society, or good for me, or a moral judgment, and so forth.
00:05:13.000 And this age of reason spawned the Industrial Age, everything we see around us.
00:05:18.000 Before that, we were very, very primitive.
00:05:19.000 It was called the Dark Ages.
00:05:22.000 So, we believe, and why we wrote the book, is that we're going to go into a new epoch of human experience.
00:05:28.000 Because humans are not used to having a human-like assistant, partner, opponent, and so forth, in their world.
00:05:37.000 If you look at what happened with social media, people like myself, who helped build these systems, had no idea that they would be used by, for example, the Russians to try to change our elections, or that the addictive behavior would drive people crazy with special interest groups, especially ones full of falsehoods.
00:05:57.000 We didn't know that this would have happened.
00:06:01.000 If we'd known it, would we have done it?
00:06:03.000 I don't know.
00:06:04.000 But this time, these tools are much more powerful.
00:06:07.000 They're much more capable of, for example, driving you crazy, making you addicted, and also educating you and making you more powerful.
00:06:15.000 We want to have a philosophical discussion about what we're doing to humans with this technology.
00:06:20.000 So in a second, I want to get back to sort of the political impact of technology that you sort of mentioned there.
00:06:25.000 But I want to talk to you about what exactly these technologies are going to look like on a more practical level.
00:06:31.000 So you talk about them teaching your kids or you talk about them educating you, creating new innovations.
00:06:36.000 How fast are we going to get to this sort of very sophisticated AI? Because right now, one of the limitations on AI is that it doesn't seem to be capable of the sort of mass innovation that human minds are capable of at this point.
00:06:48.000 But you're talking about essentially machines being able to create new machines, which is a different sort of thing.
00:06:53.000 How's humanity going to deal with that?
00:06:54.000 And what does that mean for the future of, say, work and job creation?
00:06:57.000 Again, incredibly good question.
00:07:00.000 And we don't really know.
00:07:02.000 There is a dispute, even among the authors, on how quickly this stuff will happen.
00:07:08.000 I would say 15 years.
00:07:11.000 Others would say 25 years.
00:07:13.000 Does it really matter?
00:07:14.000 There's so much investment in this space that I can tell you that it's going to happen in some form, and it'll happen more quickly than we think.
00:07:22.000 And we're not prepared for it.
00:07:24.000 And when I say we're not prepared for it, we're not prepared for the new companies, the impact on the development of children.
00:07:30.000 How do you raise a child where the child's best friend is not human?
00:07:33.000 What happens, for example, you have a bear, and you give the kid a bear, and every year you give the kid a smarter bear, and at 12, the bear is the kid's best friend, and the bear is watching television, and the bear says to the kid, I don't like this TV show, and the kid says, I don't either.
00:07:48.000 Is that okay?
00:07:50.000 That's a really significant change in human existence, which is being built now in various forms.
00:07:57.000 Do we really want to put our kids through that?
00:08:00.000 I'm not sure, but I want to have the debate about it.
00:08:03.000 I'll give you another example.
00:08:04.000 Let's think about war and conflict.
00:08:07.000 One of the characteristics of war is everything is occurring faster.
00:08:11.000 So what happens when the AI system, which remember is not perfect, it makes mistakes, says you've got 24 seconds to press this button before you're dead.
00:08:22.000 You can't see the missile, but I can.
00:08:25.000 Do you really think a human's not going to press that button?
00:08:27.000 I don't think we know how to deal with the compression of time, this intelligence, and the characteristic of this intelligence is it's dynamic, it's emergent, right?
00:08:37.000 It's imprecise, so it makes mistakes, and it's learning.
00:08:42.000 We've never had a technology partner, a platform, an object, a machine that was like that.
00:08:50.000 So one of the things that's been in the media a lot lately is obviously Facebook's change over to meta.
00:08:55.000 Zuckerberg is now talking about the metaverse and sort of you're now living in a different reality.
00:09:01.000 And this is, as I say, a wide difference from the sort of way that we have lived for all of human history, which was: whatever our aspirations, whatever our sense of identity, they were bounded by the reality around us, right?
00:09:12.000 You could think that you were the strongest guy in the room and that lasted precisely as long as you didn't run into the guy who's stronger than you.
00:09:18.000 You might think that you were invulnerable precisely up until the time you fell off a cliff and hit a rock at the bottom.
00:09:22.000 But in a virtual reality space, in a place where you're never forced to actually confront other human beings in a real human way, when you can create yourself, when you can self-create your own identity, it seems like kind of a recipe for, in some ways, the end of humanity as we know it, and that's not meant in terms of mass death, but certainly in terms of what we think our limitations are.
00:09:43.000 And I think that actually could breed some pretty bad things, not just a feeling of freedom.
00:09:48.000 You know, the metaverse has been being worked on for almost 30 years.
00:09:52.000 And the basic idea here is that you put AR or virtual reality headsets on and you live in a virtual world where you're more handsome, you're more beautiful, you're more powerful, you're richer, you have a comfy life.
00:10:05.000 It sounds great, doesn't it?
00:10:07.000 Wouldn't you prefer to be in that life than the reality of your life where you have to struggle and where the commute is terrible and people are yelling at you and so forth?
00:10:15.000 Wouldn't it be wonderful to be in such a virtual reality?
00:10:19.000 I'm not sure.
00:10:20.000 I'm worried that people will decide that this virtual reality is so addictive, it's so much more fun to be that fake person than who you really are, that people will cross over.
00:10:33.000 They'll get up in the morning, they'll do whatever they have to do, and then they'll put the headsets on, and they'll just sit there for hours and hours and hours.
00:10:42.000 That's not a great future for humanity.
00:10:45.000 Eric, in just a second, I want to ask you about that future for humanity and what it looks like in terms of separating people into two groups.
00:10:51.000 We've talked a lot about income inequality in the recent past, but I'm wondering if we're now going to get to a true sort of human inequality that is very bad for the future of civilization via these sorts of devices in one second.
00:11:03.000 First, let's talk about life insurance.
00:11:05.000 What's easier than opening a can of cranberry sauce? Getting free life insurance quotes with PolicyGenius.
00:11:10.000 If someone relies on your financial support, whether it's a child, an aging parent, even a business partner, you do need life insurance.
00:11:17.000 It's just a thing that you need.
00:11:19.000 I mean, let's say, for example, that you have been given the gift of being able to create ice from your fingers, but suddenly you lose control of that gift and you put an entire city under the threat of complete extinction thanks to an endless winter. You might think to yourself, man, they're going to be mad.
00:11:35.000 Should have gotten life insurance.
00:11:37.000 PolicyGenius makes it easy to compare quotes from over a dozen top insurers all in one place.
00:11:40.000 Why compare?
00:11:41.000 Well, you could save 50% or more on life insurance by comparing quotes with PolicyGenius.
00:11:45.000 You could save $1,300 or more per year on life insurance by using PolicyGenius to compare policies.
00:11:50.000 The licensed experts of PolicyGenius work for you, not the insurance companies, so you can trust them to help you navigate every step of the shopping and buying process.
00:11:56.000 That kind of service has earned PolicyGenius thousands of five-star reviews across Trustpilot and Google.
00:12:01.000 And eligible applicants can get covered in as little as a week thanks to an award-winning policy option that swaps the standard medical exam requirement for a simple phone call.
00:12:08.000 This exclusive policy was recently rated number one by Forbes Advisor, higher than options from Ladder, Ethos, and Bestow.
00:12:14.000 Getting started is simple.
00:12:15.000 First, head on over to PolicyGenius.com slash Shapiro.
00:12:17.000 In minutes, you can work out how much life insurance coverage you need and compare personalized quotes to find your best price.
00:12:22.000 When you're ready to apply, the PolicyGenius team will handle the paperwork and the scheduling for free.
00:12:26.000 PolicyGenius does not add on extra fees.
00:12:29.000 Head on over to PolicyGenius.com slash Shapiro right now.
00:12:31.000 Get started.
00:12:32.000 PolicyGenius.
00:12:33.000 When it comes to insurance, it's nice to get it right.
00:12:36.000 So, Eric, you were talking about, you know, whether people are just going to plug into virtual reality and just live there.
00:12:42.000 And it does remind me of a thought experiment that the philosopher Robert Nozick once posited, the experience machine.
00:12:47.000 And he specifically said that if you gave people a choice whether they could plug into essentially this.
00:12:53.000 They could plug into a world in which all risk was abated.
00:12:55.000 You were going to live a very happy life.
00:12:57.000 You're going to get all the same emotional responses you would in real life.
00:12:59.000 Would you plug in?
00:13:00.000 And he posited that for a lot of people, the answer would be no, because you want to have a real impact on the world.
00:13:04.000 I wonder if you think that humanity is going to break down into two groups, people who say yes to the experience machine and people who say no to the experience machine.
00:13:11.000 And frankly, whether virtual reality, which is not, in fact, reality, is going to enervate our civilization so entirely that people who don't plug in, people who are less technologically adept, people who care less about the digital world, actually end up inheriting the earth while the most sophisticated players in the space are distracted online, essentially.
00:13:32.000 You know, a lot of people in literature have talked about this, and they've talked about a drug called Soma, for example, which in literature would sort of take all your cares away.
00:13:42.000 I'm not sure humans do very well in a careless world where they don't have mission and meaning.
00:13:50.000 It's also possible that the inverse of what we just said is going to occur.
00:13:55.000 That we're going to get into this virtual reality that seems so incredibly sexy and then we're going to have all the same behaviors.
00:14:02.000 We're going to have greed and envy and bad behavior and all of the things that we don't like about the physical world.
00:14:11.000 One of the things that we assumed when we built all of this that we're now dealing with was that connecting everyone would be a great thing.
00:14:19.000 And I'm very proud of the fact that I worked really hard to connect everyone in the world.
00:14:23.000 So don't be shocked that they actually talk to each other and that they say things to each other that you don't agree with.
00:14:29.000 It was noble to connect the world and then we discover we don't like some of the things they're talking about.
00:14:34.000 That's crazy.
00:14:36.000 So, why don't we figure out a way to keep everybody connected but also grounded in physical reality, but also keep them safe, right, from stalking and trolling and some of the worst kind of behaviors.
00:14:46.000 Imagine if you're trolled in physical life, but then you go to the virtual world and you get trolled there too.
00:14:52.000 How depressing.
00:14:53.000 Another possibility, which we talk about in the book, is that this network of AI system control, right, which ultimately affects traffic and the way you get pharmaceuticals and the way you pay your taxes and the way you are learning, is so overwhelming that people will become nativist or naturalist.
00:15:13.000 And they'll say, I'm revolting against the robot overlords.
00:15:17.000 It's not a robot, it's a computer.
00:15:19.000 But I'm revolting against this digital world and the control that it has over me.
00:15:24.000 That I believe in my own individual freedom and the only way I can do that is to turn it off.
00:15:29.000 And my guess is that 10 or 20 percent of the people will ultimately say, turn this thing off.
00:15:36.000 So Eric, this does raise a question that Andrew Yang has raised with regard to artificial intelligence.
00:15:41.000 We talked to him before his run for mayor of New York and he was running for president at the time talking about universal basic income.
00:15:48.000 The idea being that AI was going to become so sophisticated that it took over a lot of the fields that we don't tend to think of as AI driven.
00:15:54.000 So typically job loss due to technology in particular industries has been relegated toward the lower end of the income spectrum.
00:16:00.000 It's been in manufacturing gigs or it's been in self-checkout counters at the local Walmart.
00:16:06.000 But it hasn't been in the so-called creative fields.
00:16:08.000 And as AI gets more sophisticated and as it moves up on the food chain in terms of what kind of jobs it can replace, are we looking at the possibility of mass unemployment?
00:16:17.000 And if so, what is the solution to that?
00:16:19.000 Redistributionism may not be enough because it turns out that people kind of like working.
00:16:23.000 They need something to do.
00:16:24.000 And as for the sort of fantasy that people are then going to spend their off hours painting, especially if the AIs can paint better than we can, you could be looking at a total loss of human meaning here.
00:16:33.000 You said that so well.
00:16:36.000 Can I argue with your premise for a second?
00:16:38.000 Sure.
00:16:38.000 Let's just use some data.
00:16:40.000 I've spent the last 10 years listening to people tell me that the most endangered job in the world is truck driving.
00:16:48.000 That truck driving will be replaced by robots, by computers.
00:16:51.000 Self-driving trucks will take over.
00:16:53.000 What's the number one shortage in terms of people, in terms of people we can put in jobs in America today?
00:17:00.000 Truck driving.
00:17:02.000 So I think there's something wrong with this basic argument.
00:17:05.000 And I think what happens is that, yes, some of these jobs will go away.
00:17:11.000 But they're going away because the markets are getting more efficient.
00:17:14.000 I'm a capitalist.
00:17:15.000 Capitalism makes markets more efficient.
00:17:17.000 More efficient markets produce different jobs in a larger space.
00:17:22.000 So if you start from the premise that our goal is to have economic growth, which benefits everyone.
00:17:28.000 The rich people, the poor people, everyone.
00:17:30.000 The answer to a lot of the problems in the world is a job.
00:17:33.000 Right?
00:17:34.000 So we want to create as many jobs as we can.
00:17:36.000 Now, if we go back to your argument that AI will displace jobs, well, so far, we've just been through this horrific pandemic, and we're trying to turn the economy back on, and we can't find enough people who want to work.
00:17:49.000 That's a different problem than we don't have jobs for them.
00:17:52.000 We have the jobs for them, but maybe they're not in the right place, or they're not skilled, or they're not motivated, or whatever.
00:17:57.000 It's more complicated than what people like you and me think.
00:18:01.000 So it seems to me that the following things are true.
00:18:04.000 First, there will be lots of jobs, but AI will displace a lot of current jobs.
00:18:10.000 Think about a security guard.
00:18:12.000 The security guard basically sits around and watches all day.
00:18:15.000 Well, a computer's better at that than the security guard.
00:18:17.000 See, if all the security guard is doing is watching, then that can be replaced.
00:18:22.000 But if the security guard is also your brand, they're warm and friendly and we love you and we welcome you to our home or to our business or what have you, well, then that's a function that maybe AI won't replace.
00:18:36.000 So you're going to see people pushed into more human roles, and there'll be lots of them.
00:18:42.000 Eventually, the scenario that everybody talks about is going to be true.
00:18:48.000 Some number of decades, maybe a hundred years, the technology world will be so good that we can grow food, we can build buildings cheaply, literally everyone can live like a millionaire today.
00:19:00.000 This is a long time ago, long after I'm dead.
00:19:02.000 It'll probably be true.
00:19:04.000 We don't know what people will do, but one thing I'm sure is that people are not going to be painting.
00:19:10.000 They're going to find meaning.
00:19:12.000 You know, in other words, we may get rid of the security guard role, but people will come up with more legal structures, or they'll come up with more medical structures, or something else.
00:19:22.000 But to say that somehow innovation will ultimately eliminate meaning in jobs is not supported by history.
00:19:30.000 So one of the things that this raises for me is the question of a civilization without children.
00:19:35.000 Because you mentioned kids before and the kid who makes friends with the teddy bear who then becomes their best friend and is essentially sentient and incapable of making decisions and interacting with kids.
00:19:44.000 But I want to ask a different question, which is we are a very adult-driven society, a society that is built around the needs and pleasures of people who are capable of giving consent.
00:19:53.000 We're not a society that's particularly driven by people having kids or what people should do with those kids.
00:20:00.000 And it seems like the AI world is driving more in the adult direction that if we are interacting online, we're not interacting physically.
00:20:08.000 That kids are a lot of work, right?
00:20:09.000 I have three of them under the age of eight and they are an awful lot of work.
00:20:12.000 And you know, it's not as much work as existing in a VR reality where you can hang out with all of your adult friends all day.
00:20:17.000 The West is already experiencing a pretty large-scale crisis of underpopulation, not overpopulation.
00:20:22.000 Every single major Western country is now reproducing at less than replacement rates.
00:20:27.000 And it seems like, you know, the distractions that are created by not only an AI world, but also by interactive beings that are capable of fulfilling your every need, right?
00:20:38.000 This could totally undercut the future of human reproduction, of raising healthy children.
00:20:43.000 It's dangerous stuff.
00:20:45.000 This is completely unappreciated, and you've nailed it.
00:20:49.000 We need to have human growth, that is, the number of people.
00:20:54.000 My position is simple.
00:20:55.000 We need more immigration.
00:20:57.000 We need more reproduction.
00:20:58.000 We need more families.
00:20:59.000 We need larger numbers of kids.
00:21:01.000 Because that's wealth.
00:21:03.000 The ultimate answer to your life is the legacy that you leave in your children and your grandchildren and so forth, and the extraordinary achievements that they will develop because of the training and the fatherhood, in your case, that you were providing.
00:21:18.000 We can't lose that.
00:21:19.000 It's so important.
00:21:21.000 And I'm completely in agreement with you that we're building systems that are addictive for humans.
00:21:28.000 And by addiction, I mean addictive for adults.
00:21:30.000 We see this in all sorts of ways.
00:21:32.000 Dating is later.
00:21:34.000 Marriage is later.
00:21:35.000 Family formation is later.
00:21:36.000 The number of children is smaller.
00:21:38.000 It's true, by the way, in every developed country, and it's also true in China.
00:21:43.000 So let's talk about the threat of China.
00:21:45.000 So you mentioned China in this context.
00:21:47.000 China has been pouring money into AI.
00:21:50.000 China obviously has been using technology as a wedge to build connections with a wide variety of nations, whether it's offering 5G with strings attached to underdeveloped countries, or using direct foreign aid to make alliances with countries that have access to rare earth minerals. China has become much more threatening on the geopolitical stage, but what it's doing in the technological space is underappreciated, both in terms of subsidizing technology and
00:22:18.000 in terms of stealing technology from the West. How do we deal with the threat of China in this space? How threatening is it if China gains an advantage in the AI space?
00:22:26.000 Well, I was fortunate to be the chairman of a commission that the Congress set up, a bipartisan commission on national security and artificial intelligence.
00:22:34.000 And we concluded that the threat from China is quite real, that they are at the moment over-investing compared to the United States.
00:22:42.000 They're producing more PhDs.
00:22:44.000 The quality of their work this year is now to the point where it's on par with or better than the best papers that we're producing.
00:22:51.000 So they have arrived, and they have arrived quickly, and they're over-investing.
00:22:55.000 The consequences of this could be very, very significant.
00:22:59.000 You mentioned Huawei and 5G, but think about TikTok.
00:23:03.000 TikTok uses a different artificial intelligence algorithm.
00:23:07.000 Your teenagers are perfectly happy to use it, and I don't mind if the Chinese know where your teenagers are, but I do mind if they use that platform to begin to affect the way we communicate with each other.
00:23:18.000 And so there's every reason to make sure that the AI world we want in front of us represents Western values.
00:23:24.000 That is the Western values of democracy and free speech and free debate and so forth and so on, which are certainly not present on the Chinese side.
00:23:32.000 There are also all sorts of national security issues.
00:23:35.000 I want to be careful about your statement about Cold War, because that's a way of representing a certain thing.
00:23:41.000 China, by the way, has an economy which on a purchasing power basis is larger than America's now.
00:23:46.000 Russia was never anywhere near that.
00:23:48.000 Also, if you look today, after all this discussion about China, we have more trade between China and the U.S.
00:23:54.000 this month than we ever have.
00:23:57.000 So we're not in a Cold War, we're in a rivalry.
00:24:00.000 And the rivalry has huge stakes.
00:24:03.000 In the rivalry, imagine if China becomes the dominant player in AI, semiconductors, computer science, energy, transportation, face recognition, electronic commerce, and other services.
00:24:17.000 That's my whole world.
00:24:19.000 That's everything in tech.
00:24:21.000 Trillions and trillions of dollars of companies that are being formed today are in danger if we don't invest.
00:24:28.000 And our report basically said the government needs to help with research funding.
00:24:33.000 There's lots of hiring that has to occur within the government.
00:24:36.000 We have to build partnerships with like-minded countries, South Korea, Japan, European countries, and so forth.
00:24:42.000 We've got to take this seriously.
00:24:43.000 We've gotten bipartisan support of this because both sides are concerned about this for different reasons.
00:24:50.000 So we may be able to actually effect change.
00:24:52.000 Imagine if the semiconductors that we use in our conversation right now and that your viewers are using today were made in China and had Chinese infiltration in one form or another.
00:25:03.000 You wouldn't be comfortable and I wouldn't be comfortable.
00:25:06.000 You have to really think about this from the standpoint of national security and open communications.
00:25:12.000 So Eric, you mentioned they're not using the language of Cold War, but instead using the language of rivalry.
00:25:16.000 I think that many of us in the West tend to think of China in terms of rivalry because, as you mentioned, there is so much trade with China because we are so integrated with their economy.
00:25:26.000 But China is obviously pursuing a mercantilist economic policy while we are pursuing a much more free market policy.
00:25:32.000 Which means, do they see it as a rivalry or do they see it as an actual Cold War?
00:25:36.000 Well, I'll tell you what the Chinese have said publicly.
00:25:39.000 I've not talked to them privately because we haven't been able to go to China for two years because of COVID.
00:25:45.000 What they say publicly is that the history of China was it was this enormous middle kingdom.
00:25:51.000 And for various reasons involving the Opium Wars and so forth, 150 years ago, they were embarrassed.
00:25:57.000 They withdrew.
00:25:58.000 They were put under pressure.
00:25:59.000 Their strength was constrained by historic accidents.
00:26:03.000 They've all been taught this.
00:26:05.000 And now this new president, President Xi, who's not that new anymore, is basically using this language to, shall we say, inflame nationalism, to build confidence.
00:26:15.000 He gives speeches now which say, we will never, ever be under the thumb of these other oppressors.
00:26:22.000 That's the language that they use.
00:26:23.000 This is fomenting enormous self-confidence.
00:26:29.000 And that self-confidence is fine as long as it stays in its region.
00:26:33.000 The problem with the self-confidence on either side is it can lead to a series of strategic missteps, a series of overconfident moves.
00:26:44.000 Dr. Kissinger, who's a co-author with us on our book, talks a lot about the potential that the current situation feels a lot like the preamble to World War I, where you had rapid industrialization, new technology, old rivalries that got stoked, and then a series of errors precipitated a horrendous war.
00:27:06.000 And I can tell you, having worked for the U.S.
00:27:08.000 military for five years in my 20% job when I was at Google, a real conflict with China would be so horrific, it has to be avoided.
00:27:19.000 We have to find ways to coexist with China, even if we don't like what they're doing.
00:27:24.000 And they have to find ways to coexist with the system we built, even if they don't like what we're doing.
00:27:29.000 Having that distrust, but a working relationship is better than world destruction.
00:27:34.000 So how should technology companies deal with China?
00:27:37.000 One of the big issues, obviously, is that it seems to a lot of folks, including me, that China has taken advantage of our free, open markets in order to cheat.
00:27:45.000 I mean, they've stolen an enormous amount of Western technology.
00:27:49.000 It seems as though they dictate terms to Western corporations.
00:27:52.000 Western corporations want to do business in China, and China basically then forces those Western corporations to abide by particular rules in order to even do business there. On a public level, that's most clear with regard to, for example, the NBA, where you had the GM of the Houston Rockets criticizing China over Hong Kong, and the NBA came down hard on him, lest China close its markets to the league.
00:28:15.000 But you've seen this in the tech sphere also, when you're at Google.
00:28:17.000 Obviously, there was significant controversy over Google search results and Chinese censorship.
00:28:22.000 How should tech companies be dealing with the fact that China says, if you want to play in our pond, you got to play by our rules?
00:28:28.000 Well, indeed, Google left China in 2010 in a very controversial move because Google did not want to be subject to the rules that you just described.
00:28:35.000 I hate to be blunt about this, but the fact of the matter is you have the rise of another power of a similar size that operates differently from the way we do.
00:28:47.000 We can complain all we want to about the things that they do, but since we're not willing to bomb them and we're not willing to attack them, the use of force to force our way is not going to work.
00:28:58.000 We're going to have to find ways and incentives.
00:29:01.000 I'll give you an example.
00:29:02.000 In the Trump administration, the guy who did it, a fellow named Matt Pottinger, who's really, really smart,
00:29:10.000 figured out that Chinese semiconductor leadership is really, really important to hold back.
00:29:18.000 And our recommendation in our report is that we try to stay two generations ahead of China in semiconductors.
00:29:26.000 So the Trump administration, for example, identified technology in a company called ASML.
00:29:32.000 It's complicated, but basically it's extremely small lines that go on the wafers.
00:29:39.000 And this particular company, which is a Dutch company, is the only source of that.
00:29:43.000 And the Trump administration basically made it impossible for that hardware and software to make it to China.
00:29:49.000 That's a good example of the kind of move that we need to consider to make sure that strategically our model continues to win.
00:30:00.000 I understand the issue of the history and openness and they have a different system, but it's sort of like, okay, what do you want to do about it?
00:30:06.000 I think the answer is, instead of complaining about them, there's lots to complain about.
00:30:11.000 Why don't we complain about ourselves and fix ourselves?
00:30:13.000 And in particular, why don't we focus on technology excellence, platform excellence, doing the things we do best.
00:30:21.000 The American system is different from the Chinese system.
00:30:23.000 The Chinese system is integrated, essentially an autocracy.
00:30:27.000 They have something called military-civil fusion.
00:30:29.000 There's no difference between the military and businesses.
00:30:33.000 The universities are all interlinked and funded by the government.
00:30:37.000 The sum of all of that is just a different system.
00:30:40.000 Why don't we make our system stronger?
00:30:42.000 Our system is the union of the government and universities and the private sector.
00:30:46.000 I'll give you another example.
00:30:48.000 Who made the best vaccines?
00:30:50.000 Well, here, again, under Trump, Operation Warp Speed, the government guaranteed the products, independent of whether they worked or not.
00:30:58.000 The universities developed the technologies, and the businesses took huge risks and made a gazillion dollars and saved us.
00:31:07.000 That's the best of America.
00:31:09.000 They all worked together to solve a problem.
00:31:12.000 That's what we should be doing.
00:31:13.000 So in a second, I want to ask you on a broader level about isolation versus interconnectedness with China.
00:31:18.000 And I also want to get into some of the issues as to who controls the AI world and who exactly should we be trusting with the developments of this technology?
00:31:29.000 First, let's talk about how you send your mail.
00:31:31.000 Now, this holiday season, the lines at the post office are going to be super long.
00:31:34.000 Everybody's getting back out there.
00:31:35.000 You've got to send some packages to friends and family.
00:31:37.000 Well, what if you want to do that, but you don't want to go to the post office?
00:31:40.000 Well, do what we do.
00:31:41.000 Use Stamps.com.
00:31:42.000 Stamps.com lets you compare rates, print labels, and access exclusive discounts on UPS and USPS services all year long.
00:31:48.000 It just makes sense, especially if your business sends more mail and packages during the holidays.
00:31:53.000 Here at Daily Wire, we've used Stamps.com since 2017.
00:31:56.000 No more wasting our time.
00:31:57.000 Whether you're selling online or running an office or a side hustle, Stamps.com can save you so much time, money, and stress during the holidays.
00:32:03.000 Access all the post office and UPS shipping services you need without taking the trip.
00:32:07.000 And get discounts you can't find anywhere else, like up to 40% off USPS rates and 76% off UPS.
00:32:13.000 Going to the post office instead of using Stamps.com is kind of like taking the stairs instead of the elevator.
00:32:18.000 I mean, you might get a little more exercise, but also, would you really want to do it, like, every single day?
00:32:22.000 Just take the elevator.
00:32:23.000 If you spend more than a few minutes a week dealing with mail and shipping, Stamps.com is a lifesaver.
00:32:27.000 You'll save a lot of time and a lot of money.
00:32:29.000 This is why you should be doing it.
00:32:30.000 Save time and money this holiday season with Stamps.com.
00:32:33.000 Sign up with promo code Shapiro for a special offer that includes a four-week trial, free postage, digital scale, no long-term commitments or contracts.
00:32:39.000 Head on over to Stamps.com, click the microphone at the top of the page, enter promo code Shapiro.
00:32:44.000 Okay, so let's talk about interconnecting this versus isolation, particularly in regards to the tech world.
00:32:49.000 So you have these enormous multinational corporations, transnational corporations, places like Google.
00:32:55.000 So you mentioned that Google withdrew from China in 2010.
00:32:58.000 At the same time, we do want to have impact for Western companies in China.
00:33:06.000 So how exactly should tech companies be playing that game, given the amount of isolationism and autarky that the Chinese regime is pursuing?
00:33:15.000 I think it's unlikely that the large tech companies will have a big role in China.
00:33:21.000 Tesla has just done a large development in China, and I'm sure that the Chinese will take advantage of that incredible transfer of knowledge, and it will also help the Chinese local manufacturers.
00:33:34.000 I think it's just how they operate.
00:33:36.000 I think the most likely scenario is that we're going to see information technology in the West, and then a different kind of information technology in China and the BRI countries, its Belt and Road Initiative partners.
00:33:49.000 And the reason is that China and her partners are pretty much unified in the way they want people to express things, the way they want to deal with anonymity, the way they want to deal with the messy aspects of free speech and the internet.
00:34:03.000 Whereas in the West, we're pretty committed to dealing with it and supporting it in some way.
00:34:09.000 So, if you look, all of the interesting tech companies that do anything online are essentially blocked or limited in China in one way.
00:34:18.000 It's interesting, by the way, that Apple has managed to create the following interesting structure.
00:34:24.000 Apple is regulated by the Chinese, and they fight all day as to what goes on in the App Store.
00:34:29.000 So Apple could withdraw from China, but they want their revenue.
00:34:32.000 But on the other hand, China is also the manufacturing source for Apple's products, which is of great value to the Chinese.
00:34:40.000 And so they have an uncomfortable relationship.
00:34:43.000 So the most likely scenario is either no relationship or an uncomfortable relationship of the kind that I've described with Apple.
00:34:50.000 In Google's case, various people have tried to re-enter in various ways, and they've not been successful so far.
00:34:58.000 So I want to move kind of back to the domestic sphere and back to the West for a second, because a lot of what we're talking about here, as far as machine learning and AI, we've seen sort of the proto version of that in how we deal with social media.
00:35:10.000 And that, of course, has been incredibly messy.
00:35:12.000 You mentioned early on Russian interference or attempted interference in the 2016 election.
00:35:17.000 Obviously, from the right side of the aisle, there's a strong belief that a lot of the tech companies, particularly including YouTube, Google, Facebook, are designing algorithms that may not be neutral with regard to the transfer of information, that somebody is setting the rules.
00:35:33.000 There's a lot of distrust as to who is setting the rules at these companies, generally speaking.
00:35:37.000 Now, I'm not comfortable with the government setting the rules because I think that the government generally tends to be run by political hacks who then do exactly what benefits them.
00:35:44.000 But at the same time, there's a lot of disquiet, and I think a lot of it is justified, as to who sets the rules.
00:35:49.000 So when there's a hate speech regulation on Google, or if there's a hate speech standard on YouTube, how do you define terms like this, and who defines what is sort of the best public square?
00:35:59.000 When you take that sort of disquiet and you extend it to things as all-encompassing as AI, you can see why people are freaked out.
00:36:04.000 This is a good preamble to the general point we make in our book.
00:36:08.000 That the tools for manipulating information, and I mean that broadly for good or bad, are going to be very broadly distributed.
00:36:17.000 And so the software to do the kinds of scenarios on any side that you described will be open source and available to pretty much any actor, a company, an institution, a liberal organization, a conservative organization, and so forth.
00:36:33.000 And that manipulation is not so good for society.
00:36:37.000 In the book, what we say is that eventually people will have their own assistants, basically software that understands what they care about and also makes sure the sources trying to manipulate them are honest.
00:36:53.000 In other words, it says, I'm going to choose this article for you and this other article is really false and I'm really not going to choose it for you unless you want me to.
00:37:01.000 It'll be under your control.
00:37:03.000 It'll be the only way that people deal with this extraordinary explosion of what I'm going to just call misinformation and disinformation.
00:37:10.000 It'll come from governments, it'll come from our own government, it'll come from others, it'll come from evil people and well-meaning people.
00:37:18.000 And that's a big change in human society.
00:37:20.000 All of us grew up thinking that you should just believe whatever people told you.
00:37:25.000 And now a better understanding is that people are trying to manipulate you and you have to find your own way of thinking through this.
00:37:33.000 With respect to the bias question, there's no question that the majority of the tech companies are in liberal states and the majority of their employees are, for example, Democratic supporters and so forth and so on.
00:37:45.000 That's, I think, to some degree, cultural.
00:37:47.000 There are ways to address what you said.
00:37:49.000 The most important thing is public pressure to make sure you have balance.
00:37:54.000 And calling for regulation is always a problem.
00:37:57.000 Because bringing in a regulator, on either side, is highly likely to fix things in their current state and prohibit innovation.
00:38:06.000 The way we're going to compete with China is by innovating ourselves ahead.
00:38:10.000 What I want is innovation that's consistent with our values which include open discourse, freedom of expression, freedom of thought, freedom of seeking things.
00:38:20.000 Eric, one of the things that's come up in this context is the question of monopoly.
00:38:25.000 So there have been a number of political actors who have suggested that companies like Google effectively are monopolies because they control so much of the ad revenue market or because they control so much of the search market.
00:38:35.000 Do you think that Google is a monopoly?
00:38:37.000 And if so, what should be done about that?
00:38:39.000 How should we view the problem of monopoly given that while there are a wide variety of sources that are available on Google, Google does control, for example, the ranking of search results?
00:38:47.000 Yeah, the Google question we've had since I've been there, which is more than 20 years, and the question goes, OK, would you like someone else to decide how to do ranking?
00:38:58.000 So ranking is an algorithmic decision that computers make, humans don't make.
00:39:02.000 And my first month at Google, somebody called up and said, you're biased.
00:39:05.000 And I said, well, the computer makes the decision.
00:39:08.000 And they said, no, there must be a human making the decision.
00:39:11.000 And I sat down with the engineers, and I'm convinced that there was no bias.
00:39:15.000 Because the algorithm made the decision.
00:39:18.000 If you want to regulate the algorithm, then write down how you would like it to be different.
00:39:22.000 What are the rules?
00:39:23.000 And you'll find it's extremely difficult.
00:38:39.000 The people who are calling for antitrust often say these big companies, which include the FAANG, I guess we're going to call them the MAANG companies now because Facebook became Meta.
00:39:39.000 Not sure quite if that's a good idea or not.
00:39:42.000 But the key thing about the MAANG stocks is they're very large and very integrated.
00:39:47.000 And some people say, ah, let's break them up.
00:39:51.000 So Elizabeth Warren, for example, said, let's take the App Store and separate it from the iPhone.
00:39:56.000 Let's take YouTube and separate it from Google, et cetera.
00:40:00.000 There's a list.
00:40:01.000 The problem with these things is they don't actually solve anything; they just make a larger number of big companies.
00:40:09.000 It's good for the shareholders to break them up because the value in aggregate will go up, right?
00:40:13.000 That's the bizarre thing.
00:40:14.000 It's good for the shareholders in sum because of the distributions they'll get, but it doesn't actually address the problem people care about; it just essentially creates more big players.
00:40:26.000 You have to come up with a way of addressing the network effects in these platforms and the good execution.
00:40:34.000 And I haven't seen it.
00:40:36.000 I think we'd be better off regulating the worst aspects, you can make a list, I can make a list, rather than breaking them up.
00:40:45.000 In other words, if there's a specific thing that you don't like, that Facebook does, Apple does, or Google, write it down, right?
00:40:52.000 And try to figure out a way to regulate it in such a way that it's a fair regulation.
00:40:56.000 That's better than trying to break all these things up.
00:40:59.000 So in a second, I want to ask you about how the algorithms do work because obviously somebody built the algorithm in the first place.
00:41:05.000 There have to be some sort of inputs or maybe I'm wrong about that.
00:41:07.000 So I want to ask you about that in just one moment.
00:41:09.000 First, let's talk about buying jewelry for a loved one this holiday season.
00:41:13.000 It's the holiday season.
00:41:14.000 You know what that means.
00:41:15.000 You have to figure out what to get for your loved one.
00:41:18.000 Mom, your girlfriend, your wife.
00:41:20.000 Well, you know the answer to this, gentlemen.
00:41:22.000 And the answer is fine pearl jewelry, which is not going to break your budget.
00:41:27.000 At the Pearl Source, you get the highest quality pearl jewelry at up to 70% off retail prices.
00:41:31.000 Why?
00:41:31.000 Well, because Pearl Source cuts out the middleman by eliminating those crazy markups by jewelry stores and selling directly to you, the consumer.
00:41:38.000 You can shop securely from the comfort of your home at The Pearl Source.
00:41:40.000 You'll find the largest selection of pearls available anywhere.
00:41:43.000 Every jewelry piece is custom-made specifically for you.
00:41:45.000 With global supply chain problems and shipping carriers expecting major delays of delivery times, as you get closer to the holidays, now is indeed the best time to start shopping for the holiday season.
00:41:54.000 So do not wait!
00:41:55.000 The Pearl Source offers fast and free two-day shipping on every order with zero contact delivery.
00:42:00.000 Everything comes beautifully packaged in an elegant jewelry box, ready to be given as a gift.
00:42:04.000 I know this because I've given my wife stuff from The Pearl Source.
00:42:06.000 She loves it.
00:42:06.000 It is spectacular.
00:42:07.000 And here's the thing.
00:42:08.000 For a limited time, listeners to The Ben Shapiro Show Sunday Special can take 20% off your entire order.
00:42:14.000 Don't wait until it's too late to do that holiday shopping.
00:42:16.000 Head on over to ThePearlSource.com slash Ben.
00:42:19.000 ThePearlSource.com slash Ben.
00:42:21.000 Enter promo code Ben at checkout for 20% off your entire order.
00:42:24.000 If you want fine pearl jewelry at the best prices online, go straight to The Source.
00:42:27.000 ThePearlSource.
00:42:28.000 ThePearlSource.com slash Ben.
00:42:30.000 Enter promo code Ben at checkout.
00:42:32.000 Alrighty, so let's talk about the algorithms.
00:42:34.000 So we hear this a lot in the tech world.
00:42:36.000 And for those of us who aren't in the tech world, it's somewhat confusing.
00:42:38.000 So we'll hear something like, you know, the algorithm makes the decision.
00:42:41.000 And those of us who are not in the tech world say, right, but somebody wrote the algorithm.
00:42:44.000 There have to be some sort of variables that are used as the inputs.
00:42:47.000 And so do the biases of the people who are building the algorithms come out in how the algorithm actually operates?
00:42:53.000 So for example, if the algorithm suggests that certain types of speech are less favored than others or certain types of speech, which are more favored by more people because they are put out by the legacy media, are quote unquote more trustworthy than other sources.
00:43:07.000 Is that a problem with the algorithm?
00:43:08.000 Or is that a problem with the inputs?
00:43:10.000 Or how does that get separated?
00:43:11.000 I don't want to claim that these algorithms are perfect, but I can tell you how they're really built.
00:43:17.000 You've got teams of tens of people, hundreds of people, and they write an awful lot of code, and they test it, and they test it, and test it.
00:43:25.000 And their success is based on some objective function, like a quality score or a click-through rate.
00:43:33.000 Now, let me give you the extreme version.
00:43:36.000 A company, a social media company, only focuses on revenue.
00:43:41.000 To get revenue, you need as much engagement as possible.
00:43:45.000 How do you get the most engagement?
00:43:46.000 You get as much outrage as possible.
00:43:49.000 You get addicted to the outrage and you can't get rid of this.
00:43:52.000 And the money goes up and up and up and up.
00:43:56.000 That's the natural outcome of these social networks if you want to maximize revenue.
00:44:01.000 Hopefully the leaders of these companies are trying to do other things than just maximizing revenue.
00:44:05.000 But it's a good example where the algorithm, which is using this fictional example, all it wants is more money.
00:44:13.000 So all it's gonna do is get more attention and drive you insane, right?
00:44:18.000 I don't know how you regulate that, but that's an example of a company that would be very good for its shareholders, but a bad company to be part of.
00:44:27.000 How do we bridge the gap between, I think there have been a few semantic games that have happened over the course of the last several years.
00:44:32.000 It's really kind of fascinating to see how the media treatment of big tech companies and social media companies changed radically in the aftermath of 2016.
00:44:38.000 So in 2012, there was wide praise for the social media companies, how they had effected change in, for example, the Arab Spring, or how they connected people during the original Black Lives Matter movement, or how they had connected people during the 2012 campaign.
00:44:51.000 And by 2016, after Trump won, there seemed to be a pretty radical shift in how the media started treating social media.
00:44:55.000 Suddenly, social media was the bugaboo.
00:44:57.000 Social media had opened the door too broadly.
00:44:59.000 It had allowed too much disinformation.
00:45:01.000 And the question of disinformation, which is effectively, there's a definition to that term, right?
00:45:06.000 Disinformation being an opposing state actor, or some sort of entity that is antithetical to the interests of the United States, actively putting out information that is wrong in order to undermine the institutions.
00:45:18.000 That's disinformation.
00:45:19.000 Then there's a shift semantically from disinformation to misinformation.
00:45:23.000 Misinformation meaning either something that I think is false or something that is actively false.
00:45:27.000 And all these lines started to get blurred.
00:45:29.000 This is where, as a conservative, I start to get very wary: the notion that there is some sort of great gatekeeper, that there can be a gatekeeper function performed by an algorithm that goes beyond simply saying, we're not going to allow articles to be disseminated that say 2 plus 2 equals 5.
00:45:44.000 Instead, we are going to go to an outside group of, for example, fact checkers who may be biased in their own right, and then use an aggregate score, which looks objective, but actually isn't because the inputs are not objective.
00:45:55.000 And one of the things that I think many of us on the right are pushing for is less regulation specifically because we want more speech on these platforms, not less speech.
00:46:03.000 That's why I think that it's very odd to see the common cause being made between some on the left who want the speech heavily regulated on these platforms and some people on the right who want the platforms broken up.
00:46:13.000 What's the best way to open up these platforms if you are, for example, a startup, right?
00:46:17.000 My company was a startup.
00:46:17.000 We started this company in 2015.
00:46:20.000 This company was launched on the back of the power of social media and then pretty much every day we have to fight in the press and we have to fight social media, you know, algorithms that are attempting to upgrade, for example, the New York Times and maybe downgrade sites like ours because we aren't quite as established.
00:46:36.000 What you're describing is the current world that we did not anticipate 10 years ago.
00:46:42.000 The fantasy that we all had was that we would see an enormous explosion in speech and we would see a great deal of decentralized actors.
00:46:51.000 A very, very large number of small players who would collectively be very interesting.
00:46:58.000 Because I agree with you that we need more speech.
00:47:00.000 We need more speech from everyone.
00:47:02.000 And we also, by the way, need more listening to everyone.
00:47:06.000 And instead what we've ended up with is a concentration of essentially large vertical sites which have some of the aspects that you're describing.
00:47:14.000 What's the answer?
00:47:16.000 I hope it's not government regulation.
00:47:18.000 I agree with you.
00:47:19.000 I hope the answer is more competitors.
00:47:22.000 So what will it take to get competitors to all of these players?
00:47:26.000 An example is that a former Google executive has left to create a competitor to Google, which is run very differently.
00:47:32.000 We'll see how well it does.
00:47:33.000 In my view, that's a welcome thing.
00:47:36.000 We need competition, for example, for Facebook and the way it operates.
00:47:40.000 We need different ways of experiencing ourselves.
00:47:44.000 In a situation where the former president was banned by Twitter and Facebook, he needs a competitor who will host him and will give him an audience so that his voice can be heard.
00:47:54.000 That makes sense to me on those terms.
00:47:57.000 Now the problem you have, so let's assume we have all of that.
00:48:00.000 Let's assume the answer is more speech and more good speech.
00:48:03.000 There doesn't seem to be a penalty for lying anymore.
00:48:07.000 There doesn't seem to be a penalty for absolute factual falsehoods.
00:48:12.000 And especially in the case of the COVID situation, where they affect lives, we've got to come up with some solution, one which is not an overlord and not a regulator, but which basically causes people to believe that we should focus on facts and not fiction.
00:48:29.000 And we haven't figured that out yet.
00:48:31.000 At Google, and Google's not perfect, we face this question.
00:48:34.000 And Google is full of falsehoods.
00:48:36.000 But we have a ranking algorithm, and hopefully the falsehoods are lower than the non-falsehoods.
00:48:42.000 And when we were building Google, what we did is we would do trade-offs between revenue and quality.
00:48:49.000 And I arbitrarily decided that whenever we had an improvement, half of it would go to quality and half of it would go to revenue, because I couldn't decide between the two.
00:48:57.000 So that's an example of an informed decision that, in my view, helped the company become the gold standard of answers.
00:49:04.000 But it's not perfect.
00:49:05.000 We need equivalent conversations.
00:49:07.000 But let me tell you something about being in the business of peddling falsehoods, which you had better not be.
00:49:13.000 But if you are in that business, and you're peddling falsehoods, then there's no check on you.
00:49:18.000 Shame on you.
00:49:19.000 Right?
00:49:19.000 Get your facts right.
00:49:21.000 And I'm not talking about theories and so forth.
00:49:23.000 I'm talking about basic facts.
00:49:25.000 Do your research.
00:49:26.000 What I've learned as a citizen, is when people make claims, did you know this?
00:49:30.000 And did you know that?
00:49:31.000 People say all sorts of things.
00:49:32.000 You know, I check.
00:49:34.000 I check because I don't want to be misinformed by my friend.
00:49:38.000 Let alone by any form of media.
00:49:40.000 And I would encourage people, you need to be your own best advocate for truth.
00:49:46.000 And do your own research.
00:49:48.000 Don't just believe what people tell you.
00:49:50.000 So Eric, in the COVID context, it's actually a really interesting context to talk about this, because you're right that there is a category of disinformation from particular states, actually, and certainly misinformation and actual untruths that were put out with regard to COVID, ranging from the origins of COVID, perhaps, to the effectiveness of the vaccines, which are in fact highly effective against hospitalization and death, although not nearly as effective against infection or reinfection.
00:50:18.000 But one of the things that came up in this context, and it really does go to human nature, and this may go to the broader conversation about AI, is that because the data were constantly moving, because there wasn't a lot of data about COVID at the beginning, it was a brand new virus.
00:50:32.000 And because that data was being gathered, there was a lot of shifting that happened by the institutional players in the society. We had Anthony Fauci, for example, claiming at the beginning that people should not wear masks and then admitting that he was telling a platonic lie and that people probably should wear masks. And then we were told after the vaccines were available that you shouldn't mask. And then after Delta came up, then it was you should wear a mask again. And this has happened across the board, and the sort of shifting and moving by the institutions was itself mirrored by the tech companies, which basically said the institutions are the best available source of information.
00:51:01.000 Therefore, if you contradict them, we're going to downgrade your information.
00:51:06.000 It actually drove, I think, part of the backlash to the actual truth in a lot of these cases.
00:51:11.000 I think it drove a lot of anti-vax sentiment that the tech companies seem to be, quote unquote, silencing other forms of argumentation, even if they were right to silence particular notions about, you know, completely unfactual ideas about COVID.
00:51:26.000 We've been through a pandemic that occurs hopefully less than every 100 years.
00:51:33.000 Sorry, more than every 100 years.
00:51:34.000 Very rare.
00:51:36.000 And there were lots of mistakes made at the beginning, including the ones you described and many others in my view.
00:51:42.000 There was a point sometime last summer where the facts became clear that the vaccines would work, and God save us, they really worked.
00:51:52.000 And people are alive today.
00:51:54.000 It's also true that masks work.
00:51:57.000 It's also true that public health matters.
00:52:00.000 It's also true that if you have COVID, you're most infectious the two days before you have symptoms.
00:52:09.000 So to say, I'm just going to have my COVID and have a good time, that is, in my view, speaking from a sort of libertarian view, perfectly fine.
00:52:18.000 It's okay for you to do that.
00:52:19.000 It's not okay for you to do that for other people.
00:52:21.000 That's where the narrative got confused.
00:52:24.000 My issue now is we are now at a point where the science is well established.
00:52:29.000 I think it's not okay for people to propagate falsehoods that involve population health and people's lives.
00:52:35.000 I just honestly don't think it's okay.
00:52:37.000 And that's independent of whether the Democrats made a mistake or the Republicans made a mistake.
00:52:42.000 There's plenty of blame to go around.
00:52:44.000 I've been advocating, by the way, for a private commission to go and actually analyze all this.
00:52:49.000 It would have to be bipartisan.
00:52:51.000 Everything that I've done in the commission space has been bipartisan and it's worked well.
00:52:55.000 You can bridge across the aisle around important things for our society.
00:53:00.000 We are all Americans, right?
00:53:03.000 Somehow we forget in the fight that we are Americans, that we have the same culture and the same values.
00:53:08.000 We've got to figure out solutions.
00:53:10.000 I will tell you that the ease with which people can market falsehoods because of the new tools...
00:53:18.000 And I'm not taking sides on this.
00:53:20.000 It's dangerous.
00:53:22.000 Because a lot of people are just trying to get through the day.
00:53:25.000 They just want someone to help them understand what's going on.
00:53:27.000 They don't want to become their own experts on everything.
00:53:30.000 They have lives, and they're busy, and so forth.
00:53:32.000 They want to watch TV or whatever it is.
00:53:34.000 We need to help them and make sure they're not inundated with falsehoods.
00:53:37.000 Okay, so Eric, you're talking about, you know, the informational question and sort of what should be allowed and what shouldn't.
00:53:43.000 And I agree with you, there's definitely bright line issues, right?
00:53:46.000 If you say that N95 is completely ineffective, that's obviously a lie.
00:53:50.000 It's obviously not true.
00:53:52.000 If you want to say that the federal government of the United States unleashed this on people just in order to kill them and take control, that obviously is untrue. And there are a lot of patent untruths out there. But one of the questions that I think is going to crop up, as we mentioned, more and more often as AI becomes more and more a part of our lives, is who gets to make the call as to what is a bright-line lie and what is just stuff that is sort of controversial. And there are edge cases. And my fear is that the edge cases start to bleed over into more mainstream points of view.
00:54:21.000 The Overton window starts to shrink, because we've seen that happen in nearly every area of American public speech; the Overton window has shrunk pretty radically. And when that happens, there are two dangers. One is that actual innovative ideas and questions get left out. And the other is that eventually it breeds a massive backlash.
00:54:37.000 When you wish enough people out and enough perspectives out into the cornfield, they tend to unify and then attack the center.
00:54:43.000 We need to get back to the way the world, at least that I grew up in, where speech was considered okay if it was controversial, but it was also debated.
00:54:53.000 And it was not done in a screaming way.
00:54:55.000 I think it's incredibly important that our universities reflect all the points of view and that they welcome them.
00:55:02.000 And that there be public debate over it.
00:55:05.000 I think it's fine for people to make these kinds of claims.
00:55:09.000 I just don't want the computers to then sell them.
00:55:12.000 One way to understand my view is to say I'm really a strong believer in adult public speech.
00:55:19.000 Free speech.
00:55:20.000 I'm not in favor of the equivalent speech for computers.
00:55:25.000 In other words, the fact that you're a crazy person, I'm sorry to say this, you're a crazy person who has a crazy idea, that's fine.
00:55:33.000 But it's not okay in my view that your crazy idea then becomes magnified to the point where it becomes a dominant theme when it's just frankly something that you made up.
00:55:41.000 We haven't figured out as a society how to address this thing.
00:55:45.000 The tools that we're developing are addictive.
00:55:53.000 And the more exciting the narrative, and by the way, the more disruptive and the more aggressive the narrative, the more clicks, the more revenue, the more success, the more fame.
00:56:04.000 We've got to find a balance there.
00:56:05.000 If you look at history, when the printing press came out, the original broadsheets and so forth were regulated.
00:56:12.000 Advertising was regulated because it got out of control.
00:56:16.000 It began to affect the society.
00:56:18.000 We're now about to make everyone be a publisher and everyone be a potential misinformer.
00:56:22.000 We've got to sort that out.
00:56:24.000 In our book, what we say is that this affects the way people will understand what it means to be human.
00:56:30.000 I was in a lunch restaurant in Florida recently, and I was looking at everybody, and everyone at the table, this is at lunch, was on their phones, rather than looking at each other and interacting.
00:56:43.000 Maybe they were bored with each other, I don't know.
00:56:46.000 Right?
00:56:46.000 That's how addictive these tools are.
00:56:48.000 The AI system is going to get so good that it'll be able to generate addiction for you.
00:56:54.000 It's so precise that you'll be so consumed by it.
00:56:58.000 And why do I know this?
00:56:59.000 Because it's reproducible.
00:57:01.000 We collectively, as the tech industry, can see what you care about, and we can give you more and more and more and more.
00:57:08.000 And frankly, that's a pretty good life.
00:57:10.000 Oh my God, another one!
00:57:11.000 Oh my God, another one!
00:57:13.000 When I was growing up, we had music with records where the songs were in order.
00:57:18.000 There was track four and track eight.
00:57:21.000 Now your playlists are assembled for you by computers.
00:57:25.000 Well, in a world where the playlist of your life and your information is assembled for you, it's pretty powerful to be the group that's assembling that.
00:57:34.000 And what if they, for example, promulgate values we don't agree with, like racism or sexism or doing bad things to children or things like that?
00:57:44.000 How do we want to handle that?
00:57:45.000 I'm not asking for more regulation, but I'm asking for some judgment here on the use of these tools.
00:57:51.000 Eric, it seems to me that a lot of what we're talking about here is only going to be solved in the offline world.
00:57:56.000 So we're constantly looking for solutions to the problem because the problem exists in the online world or in the social media space.
00:58:03.000 But I mean, as I've found, it's the ability to actually see and speak to one another as human beings that's really important.
00:58:09.000 And as AI becomes much more powerful, as we spend more and more time online, society seems to be getting significantly worse.
00:58:14.000 It's a lot easier to treat people badly anonymously than it is in person.
00:58:18.000 Simultaneously, the deepest connections are the ones that can be found in person.
00:58:23.000 There's a great irony to the fact that all of these tools that we're supposed to bring people together have sort of flattened our existence in a pretty significant way.
00:58:31.000 There is something innate to the human-to-human connection that is necessary in order to build actual social capital.
00:58:38.000 And as we spend more time online, which was supposed to generate social capital, is it building more social capital or is it just fragmenting us more by flattening us?
00:58:44.000 Well, the good news is the Bowling Alone theory, from 20 years ago, was false, right?
00:58:48.000 The Bowling Alone theory was that we would all be sitting isolated, watching television at home.
00:58:53.000 In fact, if you look at our young people, they spend literally every minute of their waking day online.
00:59:00.000 Their phones are next to their bed.
00:59:01.000 If they wake up in the middle of the night, they get on their phone.
00:59:03.000 If they get up in the morning, the first thing they check is their phone.
00:59:06.000 So we already know the answer to where your teenager is.
00:59:08.000 Your teenager is online on their phone.
00:59:11.000 We also know where your adult friends are.
00:59:13.000 We also know where your kids are.
00:59:14.000 We also know where your parents are.
00:59:16.000 They're all online.
00:59:18.000 So we've answered that question.
00:59:19.000 The question is, what kind of a world do we want?
00:59:22.000 I used to say, ten years ago, that this world was optional.
00:59:27.000 That if you don't like it, just turn it off.
00:59:30.000 It's okay.
00:59:31.000 No problem.
00:59:32.000 I don't mind.
00:59:32.000 I think it's good to take a break.
00:59:34.000 You can't take a break from it anymore.
00:59:37.000 It's become the way you educate, the way you entertain, the way you make money, right?
00:59:41.000 The way you communicate, the way you have a social life.
00:59:43.000 So we're going to have to adapt these platforms to these human regulations.
00:59:47.000 I used to think it's fine to have all that bad stuff online because people can just turn it off, but they can't now.
00:59:53.000 And what you're going to see, whether we like it or not, is if the tech companies don't figure out a way to organize this in such a way, they will get regulated on this, and the initial regulator will be Europe.
01:00:04.000 And by the way, in case we're confused on this, China has solved this problem.
01:00:08.000 China does not allow anonymous browsing.
01:00:11.000 Furthermore, you're logged in, they know who you are, and they have tens of thousands of people who watch for forbidden speech.
01:00:18.000 They have very sophisticated AI algorithms that watch for what people say, and then the police come knocking.
01:00:25.000 We don't want that.
01:00:26.000 I mean, I do wonder, you know, as a religious Jew, one of the big things for us, and it seems to be more and more relevant every day, is the Sabbath, right?
01:00:32.000 This enforced, mandatory, get-offline kind of circuit breaker that exists every Friday night to every Saturday night.
01:00:38.000 And it really does, I think, ground you in a certain reality.
01:00:42.000 On a social level, I'm not talking about legislation here.
01:00:44.000 On a social level, it seems to me that it's going to be very important that human beings do find a way to disconnect from this technology, at least for a little while, and start to re-ground themselves in a material reality that exists offline.
01:00:56.000 I agree with that.
01:00:57.000 I worked in Utah for a while, and in Utah the LDS Mormons have a similar procedure involving evenings where they're connected only to their family because their families are so important.
01:01:09.000 I went with Governor Bill Richardson to North Korea a few years ago, and one of the things that we did is we left our phones before we went to North Korea because obviously you couldn't use them.
01:01:19.000 And for three days, we had an experience that I'd not had in decades.
01:01:25.000 We talked to each other, we got to know each other, the traveling group, we enjoyed our trip.
01:01:30.000 The moment we got back out of North Korea, we're online, we're back at it.
01:01:34.000 It was as though we didn't learn anything from our three days offline, and we went right back.
01:01:40.000 That's how powerfully addictive these technologies are.
01:01:43.000 So, in a second, I do want to ask you a couple of more questions, but if you want to hear Eric Schmidt's answers, and these are going to be deep questions, then first, you have to head on over to Daily Wire and become a member.
01:01:53.000 Go to dailywire.com, click subscribe, you can hear the rest of our conversation over there.
01:01:57.000 Well, Eric Schmidt, it's been a pleasure having you here.
01:02:00.000 The book, again, is The Age of AI and Our Human Future.
01:02:03.000 Go check it out, it's available for purchase right now.
01:02:06.000 Eric, thanks so much for joining the show, really appreciate it.
01:02:07.000 Thank you, Ben, as always, and I'll see you soon.
01:02:27.000 Our guests are booked by Caitlin Maynard.
01:02:28.000 Editing is by Jim Nickel.
01:02:30.000 Audio is mixed by Mike Coromino.
01:02:31.000 Hair and makeup is by Fabiola Cristina.
01:02:33.000 Title graphics are by Cynthia Angulo.
01:02:35.000 The Ben Shapiro Show Sunday Special is a Daily Wire production.