Former CEO of Google Eric Schmidt joins The Ben Shapiro Show to discuss his new book, The Age of AI and Our Human Future, and his thoughts on artificial intelligence and its impact on our not-too-distant future. Plus, we discuss Mark Zuckerberg's transforming Facebook into the metaverse, whether Google is a monopoly limiting all others in its market, and whether big tech is censoring what they consider misinformation. This is a Sunday special. This show is sponsored by ExpressVPN. Don't like Big Tech and the government spying on you? Visit ExpressVPN.org/TheBenShapiroShow. Become a member and join the conversation: you'll get access to all of the show's Sunday special episodes, plus special bonus episodes throughout the week. You won't want to miss it! Subscribe to DailyWire to receive notifications when new episodes are available. Subscribe and comment to stay up to date on all things Ben Shapiro. Learn more about your ad choices.
00:00:29.000Our guest is credited for scaling Google into the 21st century, taking the company from its Silicon Valley startup roots to a global tech leader.
00:00:37.000Eric Schmidt was Google's CEO for 10 years, 2001 to 2011.
00:00:40.000During that time, the company launched Google Maps, Gmail, Google Chrome, and bought YouTube, just to name a few of its many milestones.
00:00:47.000After serving as CEO, he became executive chairman and then technical advisor for Alphabet Inc., Google's parent company, totaling 19 years at Google, not to mention nearly 20 years prior, leading other tech and software-based companies.
00:00:59.000In 2020, Eric left Google for new ventures, notably contributing $1 billion through his philanthropic organization to fund talented people applying science and technological innovation to the world's largest and hardest problems.
00:01:10.000He also co-authored a book this year alongside Henry Kissinger and Dan Huttenlocher, The Age of AI and Our Human Future.
00:01:17.000The three of them use their diverse careers, Eric writing from his entrepreneurship and technology business expertise, to inform on the current state of artificial intelligence and to raise essential questions for a world fully integrated with AI.
00:01:30.000Eric gives me his thoughts on our not-too-distant future.
00:01:32.000Plus, we discuss Mark Zuckerberg's transforming Facebook into the metaverse, whether Google is a monopoly limiting all others in its market, and whether big tech is censoring what they consider misinformation.
00:01:53.000This is the Ben Shapiro Show's Sunday special.
00:02:24.000I really appreciate that and hope you survive whatever blowback comes your way from us having an open conversation.
00:02:30.000So why don't we start with the topic of your brand new book, which is artificial intelligence.
00:02:35.000So for a lot of folks who are not versed in this, including me, when I hear artificial intelligence, I tend to think of, you know, the Iron Giant or Gizmo, basically robots that talk back to you.
00:02:46.000But obviously, AI is much more sophisticated and all encompassing than that.
00:02:50.000I think people are not fully aware of what AI means for the future of humanity.
00:02:54.000So why don't we start by defining what AI is?
00:02:57.000Well, first, thank you for this, and it's exactly the right starting question.
00:03:02.000Most of your audience, when they think of AI, they think of the following.
00:03:06.000They think of a man who creates a killer robot.
00:03:10.000The killer robot escapes, and the female scientist somehow slays the robot.
00:03:16.000And that is precisely not what we're talking about here.
00:03:19.000That's great fiction and it's a great movie, but it's not going to happen that way.
00:03:24.000What's really going to happen is that our world will be surrounded by digital intelligences.
00:03:31.000And these intelligences are partially like humans, and they'll make things more efficient.
00:03:47.000They'll also change the way we think about war and our own identity.
00:03:52.000We wrote the book to explore those questions.
00:03:55.000So, first question that sort of comes to mind immediately for me here is whether human beings are ready for this.
00:04:01.000So, Bret Weinstein and Heather Heying wrote a book recently where they talked about the fact that so much of the stuff that we interact with even now is well beyond the capacity of sort of our evolutionary adaptability.
00:04:13.000That if you look back to sort of the lizard brains we developed in the savannas and jungles hundreds of thousands of years ago and we were trying to survive.
00:04:20.000We are now using devices and we really don't know the impact of those devices on how we think.
00:04:25.000Obviously, the sort of stuff that comes to mind for a lot of folks is the use of social media.
00:04:29.000The addictiveness, for example, of social media.
00:04:33.000Are human beings going to be able to adapt to a world that is filled with all of these things that human beings were not privy to in the natural environment?
00:04:43.000It's gonna be really stressful, and we need to have a whole new philosophy about how we're gonna deal with what this does to humanity.
00:04:52.000It's interesting that 400 years ago, we went through something which is a transition from the age of faith, where people believed that God told them what to do, to what is called the age of reason, where you could have critical thinking, and you could look at a situation, and you could say, is that good for society, or good for me, or a moral judgment, and so forth.
00:05:13.000And this age of reason spawned the Industrial Age, everything we see around us.
00:05:18.000Before that, we were very, very primitive.
00:05:22.000So, we believe, and why we wrote the book, is that we're going to go into a new epoch of human experience.
00:05:28.000Because humans are not used to having a human-like assistant, partner, opponent, and so forth, in their world.
00:05:37.000If you look at what happened with social media, people like myself, who helped build these systems, had no idea that they would be used by, for example, the Russians to try to change our elections, or that the addictive behavior would drive people crazy with special interest groups, especially ones full of falsehoods.
00:05:57.000We didn't know that this would happen.
00:06:01.000If we'd known it, would we have done it?
00:06:04.000But this time, these tools are much more powerful.
00:06:07.000They're much more capable of, for example, driving you crazy, making you addicted, and also educating you and making you more powerful.
00:06:15.000We want to have a philosophical discussion about what we're doing to humans with this technology.
00:06:20.000So in a second, I want to get back to sort of the political impact of technology that you sort of mentioned there.
00:06:25.000But I want to talk to you about what exactly these technologies are going to look like on a more practical level.
00:06:31.000So you talk about them teaching your kids or you talk about them educating you, creating new innovations.
00:06:36.000How fast are we going to get to this sort of very sophisticated AI, because right now one of the limitations on AI is that it doesn't seem to be capable of the sort of mass innovation that human minds are capable of at this point.
00:06:48.000But you're talking about essentially machines being able to create new machines, which is a different sort of thing.
00:06:53.000How's humanity going to deal with that?
00:06:54.000And what does that mean for the future of, say, work and job creation?
00:07:14.000There's so much investment in this space that I can tell you that it's going to happen in some form, and it'll happen more quickly than we think.
00:07:24.000And when I say we're not prepared for it, we're not prepared for the new companies, the impact on the development of children.
00:07:30.000How do you raise a child where the child's best friend is not human?
00:07:33.000What happens if, for example, you give the kid a bear, and every year you give the kid a smarter bear, and at 12, the bear is the kid's best friend, and the bear is watching television, and the bear says to the kid, I don't like this TV show, and the kid says, I don't either.
00:08:07.000One of the characteristics of war is everything is occurring faster.
00:08:11.000So what happens when the AI system, which remember is not perfect, it makes mistakes, says you've got 24 seconds to press this button before you're dead.
00:08:25.000Do you really think a human's not going to press that button?
00:08:27.000I don't think we know how to deal with the compression of time, this intelligence, and the characteristic of this intelligence is it's dynamic, it's emergent, right?
00:08:37.000It's imprecise, so it makes mistakes, and it's learning.
00:08:42.000We've never had a technology partner, a platform, an object, a machine that was like that.
00:08:50.000So one of the things that's been in the media a lot lately is obviously Facebook's change over to meta.
00:08:55.000Zuckerberg is now talking about the metaverse and sort of you're now living in a different reality.
00:09:01.000And this is, as I say, a wide difference from the sort of way that we have lived for all of human history, which was: whatever our aspirations, whatever our sense of identity, they were bounded by the reality around us, right?
00:09:12.000You could think that you were the strongest guy in the room and that lasted precisely as long as you didn't run into the guy who's stronger than you.
00:09:18.000You might think that you were invulnerable precisely up until the time you fell off a cliff and hit a rock at the bottom.
00:09:22.000But in a virtual reality space, in a place where you're never forced to actually confront other human beings in a real human way, when you can create yourself, when you can self-create your own identity, it seems like kind of a recipe for, in some ways, the end of humanity as we know it, and that's not meant in terms of mass death, but certainly in terms of what we think our limitations are.
00:09:43.000And I think that actually could breed some pretty bad things, not just a feeling of freedom.
00:09:48.000You know, the metaverse has been in the works for almost 30 years.
00:09:52.000And the basic idea here is that you put AR or virtual reality headsets on and you live in a virtual world where you're more handsome, you're more beautiful, you're more powerful, you're richer, you have a comfy life.
00:10:07.000Wouldn't you prefer to be in that life than the reality of your life where you have to struggle and where the commute is terrible and people are yelling at you and so forth?
00:10:15.000Wouldn't it be wonderful to be in such a virtual reality?
00:10:20.000I'm worried that people will decide that this virtual reality is so addictive, it's so much more fun to be that fake person than who you really are, that people will cross over.
00:10:33.000They'll get up in the morning, they'll do whatever they have to do, and then they'll put the headsets on, and they'll just sit there for hours and hours and hours.
00:10:42.000That's not a great future for humanity.
00:10:45.000Eric, in just a second, I want to ask you about that future for humanity and what it looks like in terms of separating people into two groups.
00:10:51.000We've talked a lot about income inequality in the recent past, but I'm wondering if we're now going to get to a true sort of human inequality that is very bad for the future of civilization via these sorts of devices in one second.
00:11:03.000First, let's talk about life insurance.
00:11:05.000What's easier than opening a can of cranberry sauce? Getting free life insurance quotes with PolicyGenius.
00:11:10.000If someone relies on your financial support, whether it's a child, an aging parent, even a business partner, you do need life insurance.
00:11:19.000I mean, let's say, for example, that you have been given the gift of being able to create ice from your fingers, but suddenly you lose control of that gift and you put an entire city under the threat of complete extinction thanks to an endless winter. You might think to yourself, man, they're going to be mad.
00:11:41.000Well, you could save 50% or more on life insurance by comparing quotes with PolicyGenius.
00:11:45.000You could save $1,300 or more per year on life insurance by using PolicyGenius to compare policies.
00:11:50.000The licensed experts of PolicyGenius work for you, not the insurance companies, so you can trust them to help you navigate every step of the shopping and buying process.
00:11:56.000That kind of service has earned PolicyGenius thousands of five-star reviews across Trustpilot and Google.
00:12:01.000And eligible applicants can get covered in as little as a week thanks to an award-winning policy option that swaps the standard medical exam requirement for a simple phone call.
00:12:08.000This exclusive policy was recently rated number one by Forbes Advisor, higher than options from Ladder, Ethos, and Bestow.
00:13:00.000And he posited that for a lot of people, the answer would be no, because you want to have a real impact on the world.
00:13:04.000I wonder if you think that humanity is going to break down into two groups, people who say yes to the experience machine and people who say no to the experience machine.
00:13:11.000And frankly, whether virtual reality, which is not, in fact, reality, is going to enervate our civilization so entirely that people who don't plug in, people who are less technologically adept, people who care less about the digital world, actually end up inheriting the earth while the most sophisticated players in the space are distracted online, essentially.
00:13:32.000You know, a lot of people in literature have talked about this, and they've talked about a drug called Soma, for example, which in literature would sort of take all your cares away.
00:13:42.000I'm not sure humans do very well in a careless world where they don't have mission and meaning.
00:13:50.000It's also possible that the inverse of what we just said is going to occur.
00:13:55.000That we're going to get into this virtual reality that seems so incredibly sexy and then we're going to have all the same behaviors.
00:14:02.000We're going to have greed and envy and bad behavior and all of the things that we don't like about the physical world.
00:14:11.000One of the things that we assumed when we built all of this that we're now dealing with was that connecting everyone would be a great thing.
00:14:19.000And I'm very proud of the fact that I worked really hard to connect everyone in the world.
00:14:23.000So don't be shocked that they actually talk to each other and that they say things to each other that you don't agree with.
00:14:29.000It was noble to connect the world and then we discover we don't like some of the things they're talking about.
00:14:36.000So, why don't we figure out a way to keep everybody connected but also grounded in physical reality, but also keep them safe, right, from stalking and trolling and some of the worst kind of behaviors.
00:14:46.000Imagine if you're trolled in physical life, but then you go to the virtual world and you get trolled there too.
00:14:53.000Another possibility, which we talk about in the book, is that this network of AI system control, right, which ultimately affects traffic and the way you get pharmaceuticals and the way you pay your taxes and the way you are learning, is so overwhelming that people will become nativist or naturalist.
00:15:13.000And they'll say, I'm revolting against the robot overlords.
00:15:19.000But I'm revolting against this digital world and the control that it has over me.
00:15:24.000That I believe in my own individual freedom and the only way I can do that is to turn it off.
00:15:29.000And my guess is that 10 or 20 percent of the people will ultimately say, turn this thing off.
00:15:36.000So Eric, this does raise a question that Andrew Yang has raised with regard to artificial intelligence.
00:15:41.000We talked to him before his run for mayor of New York and he was running for president at the time talking about universal basic income.
00:15:48.000The idea being that AI was going to become so sophisticated that it took over a lot of the fields that we don't tend to think of as AI driven.
00:15:54.000So typically job loss due to technology in particular industries has been relegated toward the lower end of the income spectrum.
00:16:00.000It's been in manufacturing gigs or it's been in self-checkout counters at the local Walmart.
00:16:06.000But it hasn't been in the so-called creative fields.
00:16:08.000And as AI gets more sophisticated and as it moves up on the food chain in terms of what kind of jobs it can replace, are we looking at the possibility of mass unemployment?
00:16:17.000And if so, what is the solution to that?
00:16:19.000Redistributionism may not be enough because it turns out that people kind of like working.
00:16:24.000And as for the sort of fantasy that people are then going to spend their off hours painting, especially if the AIs can paint better than we can, you could be looking at a total loss of human meaning here.
00:17:34.000So we want to create as many jobs as we can.
00:17:36.000Now, if we go back to your argument that AI will displace jobs, well, so far, we've just been through this horrific pandemic, and we're trying to turn the economy back on, and we can't find enough people who want to work.
00:17:49.000That's a different problem than we don't have jobs for them.
00:17:52.000We have the jobs for them, but maybe they're not in the right place, or they're not skilled, or they're not motivated, or whatever.
00:17:57.000It's more complicated than what people like you and me think.
00:18:01.000So it seems to me that the following things are true.
00:18:04.000First, there will be lots of jobs, but AI will displace a lot of current jobs.
00:18:12.000The security guard basically sits around and watches all day.
00:18:15.000Well, a computer's better at that than the security guard.
00:18:17.000See, if all the security guard is doing is watching, then that can be replaced.
00:18:22.000But if the security guard is also your brand, they're warm and friendly and we love you and we welcome you to our home or to our business or what have you, well, then that's a function that maybe AI won't replace.
00:18:36.000So you're going to see people pushed into more human roles, and there'll be lots of them.
00:18:42.000Eventually, the scenario that everybody talks about is going to be true.
00:18:48.000Some number of decades, maybe a hundred years, the technology world will be so good that we can grow food, we can build buildings cheaply, literally everyone can live like a millionaire today.
00:19:00.000This is a long time ago, long after I'm dead.
00:19:12.000You know, in other words, we may get rid of the security guard role, but people will come up with more legal structures, or they'll come up with more medical structures, or something else.
00:19:22.000But to say that somehow innovation will ultimately eliminate meaning in jobs is not supported by history.
00:19:30.000So one of the things that this raises for me is the question of a civilization without children.
00:19:35.000Because you mentioned kids before and the kid who makes friends with the teddy bear, who then becomes their best friend and is essentially sentient and capable of making decisions and interacting with kids.
00:19:44.000But I want to ask a different question, which is we are a very adult-driven society, a society that is built around the needs and pleasures of people who are capable of giving consent.
00:19:53.000We're not a society that's particularly driven by people having kids or what people should do with those kids.
00:20:00.000And it seems like the AI world is driving more in the adult direction that if we are interacting online, we're not interacting physically.
00:20:09.000I have three of them under the age of eight and they are an awful lot of work.
00:20:12.000And you know, existing in a VR reality where you can hang out with all of your adult friends all day is not as much work.
00:20:17.000The West is already experiencing a pretty large-scale crisis of underpopulation, not overpopulation.
00:20:22.000Every single major Western country is now reproducing at less than replacement rates.
00:20:27.000And it seems like, you know, the distractions that are created by not only an AI world, but also by interactive beings that are capable of fulfilling your every need, right?
00:20:38.000This could totally undercut the future of human reproduction, of raising healthy children.
00:21:03.000The ultimate answer to your life is the legacy that you leave in your children and your grandchildren and so forth, and the extraordinary achievements that they will develop because of the training and the fatherhood, in your case, that you were providing.
00:21:50.000China obviously has been using technology as a wedge to build connections with a wide variety of nations, whether it is offering 5G with strings attached to underdeveloped countries, or making alliances, through direct foreign aid, with countries that have access to rare earth minerals. China has become much more threatening on the geopolitical stage, but it's underappreciated what they're doing in the technological space, both in terms of subsidizing technology and
00:22:18.000in terms of stealing technology from the West. How do we deal with the threat of China in this space? How threatening is it if China gains an advantage in the AI space?
00:22:26.000Well, I was fortunate to be the chairman of a commission that the Congress set up, a bipartisan commission on national security and artificial intelligence.
00:22:34.000And we concluded that the threat from China is quite real, that they are at the moment over-investing compared to the United States.
00:22:44.000The quality of their work this year is now to the point where it's on par with or better than the best papers that we're producing.
00:22:51.000So they have arrived, and they have arrived quickly, and they're over-investing.
00:22:55.000The consequences of this could be very, very significant.
00:22:59.000You mentioned Huawei and 5G, but think about TikTok.
00:23:03.000TikTok uses a different artificial intelligence algorithm.
00:23:07.000Your teenagers are perfectly happy to use it, and I don't mind if the Chinese know where your teenagers are, but I do mind if they use that platform to begin to affect the way we communicate with each other.
00:23:18.000And so there's every reason to make sure that the AI world in front of us represents Western values.
00:23:24.000That is the Western values of democracy and free speech and free debate and so forth and so on, which are certainly not present on the Chinese side.
00:23:32.000There are also all sorts of national security issues.
00:23:35.000I want to be careful about your statement about Cold War, because that's a way of representing a certain thing.
00:23:41.000China, by the way, has an economy which on a purchasing power basis is larger than America's now.
00:24:03.000In the rivalry, imagine if China becomes the dominant player in AI, semiconductors, computer science, energy, transportation, face recognition, electronic commerce, and other services.
00:24:43.000We've gotten bipartisan support of this because both sides are concerned about this for different reasons.
00:24:50.000So we may be able to actually effect change.
00:24:52.000Imagine if the semiconductors that we use in our conversation right now and that your viewers are using today were made in China and had Chinese infiltration in one form or another.
00:25:03.000You wouldn't be comfortable and I wouldn't be comfortable.
00:25:06.000You have to really think about this from the standpoint of national security and open communications.
00:25:12.000So Eric, you mentioned you're not using the language of Cold War, but instead the language of rivalry.
00:25:16.000I think that many of us in the West tend to think of China in terms of rivalry because, as you mentioned, there is so much trade with China and we are so integrated with their economy.
00:25:26.000But China is obviously pursuing a mercantilist economic policy while we are pursuing a much more free market policy.
00:25:32.000Which means, do they see it as a rivalry or do they see it as an actual Cold War?
00:25:36.000Well, I'll tell you what the Chinese have said publicly.
00:25:39.000I've not talked to them privately because we haven't been able to go to China for two years because of COVID.
00:25:45.000What they say publicly is that the history of China was that it was this enormous Middle Kingdom.
00:25:51.000And for various reasons involving the Opium Wars and so forth, 150 years ago, they were embarrassed.
00:26:05.000And now this new president, President Xi, who's not that new anymore, is basically using this language to, shall we say, inflame nationalism, to build confidence.
00:26:15.000He gives speeches now which say, we will never, ever be under the thumb of these other oppressors.
00:26:23.000This is fomenting enormous self-confidence.
00:26:29.000And that self-confidence is fine as long as it stays in its region.
00:26:33.000The problem with the self-confidence on either side is it can lead to a series of strategic missteps, a series of overconfident moves.
00:26:44.000Dr. Kissinger, who's a co-author with us on our book, talks a lot about the potential that the current situation feels a lot like the preamble to World War I, where you had rapid industrialization, new technology, old rivalries that got stoked, and then a series of errors precipitated a horrendous war.
00:27:06.000And I can tell you, having worked for the U.S.
00:27:08.000military for five years in my 20% job when I was at Google, a real conflict with China would be so horrific, it has to be avoided.
00:27:19.000We have to find ways to coexist with China, even if we don't like what they're doing.
00:27:24.000And they have to find ways to coexist with the system we built, even if they don't like what we're doing.
00:27:29.000Having that distrust, but a working relationship is better than world destruction.
00:27:34.000So how should technology companies deal with China?
00:27:37.000One of the big issues, obviously, is that it seems to a lot of folks, including me, that China has taken advantage of our free, open markets in order to cheat.
00:27:45.000I mean, they've stolen an enormous amount of Western technology.
00:27:49.000It seems as though they dictate terms to Western corporations.
00:27:52.000Western corporations want to do business in China, and China basically then forces those Western corporations to abide by particular rules in order to even do business there. On a public level, that's most clear with regard to, for example, the NBA, where you had a GM of the Houston Rockets criticizing China over Hong Kong, and the NBA came down hard on the GM of the Houston Rockets, lest China close its markets to the NBA.
00:28:15.000But you've seen this in the tech sphere also, when you're at Google.
00:28:17.000Obviously, there was significant controversy over Google search results and Chinese censorship.
00:28:22.000How should tech companies be dealing with the fact that China says, if you want to play in our pond, you got to play by our rules?
00:28:28.000Well, indeed, Google left China in 2010 in a very controversial move because Google did not want to be subject to the rules that you just said.
00:28:35.000I hate to be blunt about this, but the fact of the matter is you have the rise of another power of a similar size that operates differently from the way we do.
00:28:47.000We can complain all we want to about the things that they do, but since we're not willing to bomb them and we're not willing to attack them, the use of force to force our way is not going to work.
00:28:58.000We're going to have to find ways and incentives.
00:29:02.000The Trump administration, and in particular the fellow who did it under Trump, a guy named Matt Pottinger, who's really, really smart,
00:29:10.000figured out that it's really, really important to hold back Chinese semiconductor leadership.
00:29:18.000And our recommendation in our report is that we try to stay two generations ahead of China in semiconductors.
00:29:26.000So the Trump administration, for example, identified technology in a company called ASML.
00:29:32.000It's complicated, but basically it's the technology for printing extremely small lines on the wafers.
00:29:39.000And this particular company, which is a Dutch company, is the only source of that.
00:29:43.000And the Trump administration basically made it impossible for that hardware and software to make it to China.
00:29:49.000That's a good example of the kind of move that we need to consider to make sure that strategically our model continues to win.
00:30:00.000I understand the issue of the history and openness and they have a different system, but it's sort of like, okay, what do you want to do about it?
00:30:06.000I think the answer is, instead of complaining about them, there's lots to complain about.
00:30:11.000Why don't we complain about ourselves and fix ourselves?
00:30:13.000And in particular, why don't we focus on technology excellence, platform excellence, doing the things we do best.
00:30:21.000The American system is different from the Chinese system.
00:30:23.000The Chinese system is integrated, essentially an autocracy.
00:30:27.000They have something called military-civil fusion.
00:30:29.000There's no difference between the military and businesses.
00:30:33.000The universities are all interlinked and funded by the government.
00:30:37.000The sum of all of that is just a different system.
00:30:40.000Why don't we make our system stronger?
00:30:42.000Our system is the union of the government and universities and the private sector.
00:31:13.000So in a second, I want to ask you on a broader level about isolation versus interconnectedness with China.
00:31:18.000And I also want to get into some of the issues as to who controls the AI world and who exactly should we be trusting with the developments of this technology?
00:31:29.000First, let's talk about how you send your mail.
00:31:31.000Now, this holiday season, the lines at the post office are going to be super long.
00:31:57.000Whether you're selling online or running an office or a side hustle, Stamps.com can save you so much time, money, and stress during the holidays.
00:32:03.000Access all the post office and UPS shipping services you need without taking the trip.
00:32:07.000And get discounts you can't find anywhere else, like up to 40% off USPS rates and 76% off UPS.
00:32:13.000Going to the post office instead of using Stamps.com is kind of like taking the stairs instead of the elevator.
00:32:18.000I mean, you might get a little more exercise, but also, would you really want to do it, like, every single day?
00:32:30.000Save time and money this holiday season with Stamps.com.
00:32:33.000Sign up with promo code Shapiro for a special offer that includes a four-week trial, free postage, digital scale, no long-term commitments or contracts.
00:32:39.000Head on over to Stamps.com, click the microphone at the top of the page, enter promo code Shapiro.
00:32:44.000Okay, so let's talk about interconnecting this versus isolation, particularly in regards to the tech world.
00:32:49.000So you have these enormous multinational corporations, transnational corporations, places like Google.
00:32:55.000So you mentioned that Google withdrew from China in 2010.
00:32:58.000At the same time, we do want to have impact for Western companies in China.
00:33:02.000So how exactly should tech companies be playing that game, given the amount of isolationism and autarky that the Chinese regime is pursuing?
00:33:15.000I think it's unlikely that the large tech companies will have a big role in China.
00:33:21.000Tesla has just done a large development in China, and I'm sure that the Chinese will take advantage of that incredible transfer of knowledge, and it will also help the Chinese local manufacturers.
00:33:36.000I think the most likely scenario is that we're going to see information technology in the West, and then a different kind of information technology in China and the BRI countries, who are its partners.
00:33:49.000And the reason is that China and her partners are pretty much unified in the way they want people to express things, the way they want to deal with anonymity, the way they want to deal with the messy aspects of free speech and the internet.
00:34:03.000Whereas in the West, we're pretty committed to dealing with it and supporting it in some way.
00:34:09.000So, if you look, all of the interesting tech companies that do anything online are essentially blocked or limited in China in one way or another.
00:34:18.000It's interesting, by the way, that Apple has managed to create the following interesting structure.
00:34:24.000Apple is regulated by the Chinese, and they fight all day as to what goes on in the App Store.
00:34:29.000So Apple could withdraw from China, but they want their revenue.
00:34:32.000But on the other hand, China is also the manufacturing source for Apple's products, which is of great value to the Chinese.
00:34:40.000And so they have an uncomfortable relationship.
00:34:43.000So the most likely scenario is either no relationship or an uncomfortable relationship of the kind that I've described with Apple.
00:34:50.000In Google's case, various people have tried to re-enter in various ways, and they've not been successful so far.
00:34:58.000So I want to move kind of back to the domestic sphere and back to the West for a second, because a lot of what we're talking about here, as far as machine learning and AI, we've seen sort of the proto version of that in how we deal with social media.
00:35:10.000And that, of course, has been incredibly messy.
00:35:12.000You mentioned early on Russian interference or attempted interference in the 2016 election.
00:35:17.000Obviously, from the right side of the aisle, there's a strong belief that a lot of the tech companies, particularly including YouTube, Google, and Facebook, are designing algorithms that may not be free with regard to the transfer of information, that somebody is setting the rules.
00:35:33.000There's a lot of distrust as to who is setting the rules at these companies, generally speaking.
00:35:37.000Now, I'm not comfortable with the government setting the rules, because I think that the government generally tends to be run by political hacks who then do exactly what benefits them.
00:35:44.000But at the same time, there's a lot of disquiet, and I think a lot of it is justified, as to who sets the rules.
00:35:49.000So when there's a hate speech regulation on Google, or if there's a hate speech standard on YouTube, how do you define terms like this, and who defines what is sort of the best public square?
00:35:59.000When you take that sort of disquiet and you extend it to things as all-encompassing as AI, you can see why people are freaked out.
00:36:04.000This is a good preamble to the general point we make in our book.
00:36:08.000That the tools for manipulating information, and I mean that broadly for good or bad, are going to be very broadly distributed.
00:36:17.000And so the software to do the kinds of scenarios on any side that you described will be open source and available to pretty much any actor, a company, an institution, a liberal organization, a conservative organization, and so forth.
00:36:33.000And that manipulation is not so good for society.
00:36:37.000In the book, what we say is that eventually people will have their own assistants, basically software that understands what they care about and also makes sure the sources trying to manipulate them are honest.
00:36:53.000In other words, it says, I'm going to choose this article for you and this other article is really false and I'm really not going to choose it for you unless you want me to.
00:37:03.000It'll be the only way that people deal with this extraordinary explosion of what I'm going to just call misinformation and disinformation.
00:37:10.000It'll come from governments, it'll come from our own government, it'll come from others, it'll come from evil people and well-meaning people.
00:37:18.000And that's a big change in human society.
00:37:20.000All of us grew up thinking that you should just believe whatever people told you.
00:37:25.000And now a better understanding is that people are trying to manipulate you and you have to find your own way of thinking through this.
00:37:33.000With respect to the bias question, there's no question that the majority of the tech companies are in liberal states and the majority of their employees are, for example, Democratic supporters and so forth and so on.
00:37:45.000That's, I think, to some degree, cultural.
00:37:47.000There are ways to address what you said.
00:37:49.000The most important thing is public pressure to make sure you have balance.
00:37:54.000And calling for regulation is always a problem.
00:37:57.000Because bringing in a regulator, on either side, is highly likely to fix things in their current form and prohibit innovation.
00:38:06.000The way we're going to compete with China is by innovating ourselves ahead.
00:38:10.000What I want is innovation that's consistent with our values which include open discourse, freedom of expression, freedom of thought, freedom of seeking things.
00:38:20.000Eric, one of the things that's come up in this context is the question of monopoly.
00:38:25.000So there have been a number of political actors who have suggested that companies like Google effectively are monopolies because they control so much of the ad revenue market or because they control so much of the search market.
00:38:35.000Do you think that Google is a monopoly?
00:38:37.000And if so, what should be done about that?
00:38:39.000How should we view the problem of monopoly given that while there are a wide variety of sources that are available on Google, Google does control, for example, the ranking of search results?
00:38:47.000Yeah, the Google question we've had since I've been there, which is more than 20 years, and the question goes, OK, would you like someone else to decide how to do ranking?
00:38:58.000So ranking is an algorithmic decision that computers make, humans don't make.
00:39:02.000And my first month at Google, somebody called up and said, you're biased.
00:39:05.000And I said, well, the computer makes the decision.
00:39:08.000And they said, no, there must be a human making the decision.
00:39:11.000And I sat down with the engineers, and I'm convinced that there was no bias.
00:39:15.000Because the algorithm made the decision.
00:39:18.000If you want to regulate the algorithm, then write down how you would like it to be different.
00:39:26.000The people who are calling for antitrust often say these big companies, which includes the FANG, I guess we're going to call them the MANG companies now because Facebook became Meta.
00:39:39.000Not sure quite if that's a good idea or not.
00:39:42.000But the key thing about the MANG stocks is they're very large and very integrated.
00:39:47.000And some people say, ah, let's break them up.
00:39:51.000So Elizabeth Warren, for example, said, let's take the App Store and separate it from the iPhone.
00:39:56.000Let's take YouTube and separate it from Google, et cetera.
00:40:14.000It's good for the shareholders in sum because of the distributions they'll get, but the problem is it doesn't actually address the problem people have; it just essentially creates more big players.
00:40:26.000You have to come up with a way of addressing the network effects in these platforms and the good execution.
00:41:31.000Well, because Pearl Source cuts out the middleman by eliminating those crazy markups by jewelry stores and selling directly to you, the consumer.
00:41:38.000You can shop securely from the comfort of your home at The Pearl Source.
00:41:40.000You'll find the largest selection of pearls available anywhere.
00:41:43.000Every jewelry piece is custom-made specifically for you.
00:41:45.000With global supply chain problems and shipping carriers expecting major delays of delivery times, as you get closer to the holidays, now is indeed the best time to start shopping for the holiday season.
00:42:32.000Alrighty, so let's talk about the algorithms.
00:42:34.000So we hear this a lot in the tech world.
00:42:36.000And for those of us who aren't in the tech world, it's somewhat confusing.
00:42:38.000So we'll hear something like, you know, the algorithm makes the decision.
00:42:41.000And those of us who are not in the tech world say, right, but somebody wrote the algorithm.
00:42:44.000There have to be some sort of variables that are used as the inputs.
00:42:47.000And so do the biases of the people who are building the algorithms come out in how the algorithm actually operates?
00:42:53.000So, for example, what if the algorithm suggests that certain types of speech are less favored than others, or that certain types of speech, which are favored by more people because they are put out by the legacy media, are quote-unquote more trustworthy than other sources?
00:43:11.000I don't want to claim that these algorithms are perfect, but I can tell you how they're really built.
00:43:17.000You've got teams of tens of people, hundreds of people, and they write an awful lot of code, and they test it, and they test it, and test it.
00:43:25.000And their success is based on some objective function, like a quality score or a click-through rate.
00:43:33.000Now, let me give you the extreme version.
00:43:36.000A company, a social media company, only focuses on revenue.
00:43:41.000To get revenue, you need as much engagement as possible.
00:43:49.000You get addicted to the outrage and you can't get rid of this.
00:43:52.000And the money goes up and up and up and up.
00:43:56.000That's the natural outcome of these social networks if you want to maximize revenue.
00:44:01.000Hopefully the leaders of these companies are trying to do other things than just maximizing revenue.
00:44:05.000But it's a good example where the algorithm, which is using this fictional example, all it wants is more money.
00:44:13.000So all it's gonna do is get more attention and drive you insane, right?
00:44:18.000I don't know how you regulate that, but that's an example of a company that would be very good for its shareholders, but a bad company to be part of.
00:44:27.000How do we bridge the gap between, I think there have been a few semantic games that have happened over the course of the last several years.
00:44:32.000It's really kind of fascinating to see how the media treatment of big tech companies and social media companies changed radically in the aftermath of 2016.
00:44:38.000So in 2012, there was wide praise for the social media companies: how they had effected change in, for example, the Arab Spring, or how they connected people during the original Black Lives Matter movement, or how they had connected people during the 2012 campaign.
00:44:51.000And by 2016, after Trump won, there seemed to be a pretty radical shift in how the media started treating social media.
00:44:55.000Suddenly, social media was the bugaboo.
00:44:57.000Social media had opened the door too broadly.
00:44:59.000It had allowed too much disinformation.
00:45:01.000And the question of disinformation, which is effectively, there's a definition to that term, right?
00:45:06.000Disinformation being an opposing state actor, or some sort of entity that is antithetical to the interests of the United States, actively putting out information that is wrong in order to undermine institutions.
00:45:19.000Then there's a shift semantically from disinformation to misinformation.
00:45:23.000Misinformation meaning either something that I think is false or something that is actively false.
00:45:27.000And all these lines started to get blurred.
00:45:29.000This is where, as a conservative, I start to get very wary: the notion that there is some sort of great gatekeeper, that there can be a gatekeeper function performed by an algorithm that goes beyond simply saying, we're not going to allow articles to be disseminated that say 2 plus 2 equals 5.
00:45:44.000Instead, we are going to go to an outside group of, for example, fact checkers who may be biased in their own right, and then use an aggregate score, which looks objective, but actually isn't because the inputs are not objective.
00:45:55.000And one of the things that I think many of us on the right are pushing for is less regulation specifically because we want more speech on these platforms, not less speech.
00:46:03.000That's why I think that it's very odd to see the common cause being made between some on the left who want the speech heavily regulated on these platforms and some people on the right who want the platforms broken up.
00:46:13.000What's the best way to open up these platforms if you are, for example, a startup, right?
00:46:20.000This company was launched on the back of the power of social media and then pretty much every day we have to fight in the press and we have to fight social media, you know, algorithms that are attempting to upgrade, for example, the New York Times and maybe downgrade sites like ours because we aren't quite as established.
00:46:36.000What you're describing is the current world that we did not anticipate 10 years ago.
00:46:42.000The fantasy that we all had was that we would see an enormous explosion in speech and we would see a great deal of decentralized actors.
00:46:51.000A very, very large number of small players who would collectively be very interesting.
00:46:58.000Because I agree with you that we need more speech.
00:47:02.000And we also, by the way, need more listening to everyone.
00:47:06.000And instead what we've ended up with is a concentration of essentially large vertical sites which have some of the aspects that you're describing.
00:47:36.000We need competition, for example, for Facebook and the way it operates.
00:47:40.000We need different ways of experiencing ourselves.
00:47:44.000In a situation where the former president was banned by Twitter and Facebook, he needs a competitor who will host him and will give him an audience so that his voice can be heard.
00:47:54.000That makes sense to me on those terms.
00:47:57.000Now the problem you have, so let's assume we have all of that.
00:48:00.000Let's assume the answer is more speech and more good speech.
00:48:03.000There doesn't seem to be a penalty for lying anymore.
00:48:07.000There doesn't seem to be a penalty for absolute factual falsehoods.
00:48:12.000And especially in the case of the COVID situation, where they affect lives, we've got to come up with some solution, one which is not an overlord and not a regulator, but which basically causes people to believe that we should focus on facts and not fiction.
00:48:36.000But we have a ranking algorithm, and hopefully the falsehoods are lower than the non-falsehoods.
00:48:42.000And when we were building Google, what we did is we would do trade-offs between revenue and quality.
00:48:49.000And I arbitrarily decided that whenever we had an improvement, half of it would go to quality and half of it would go to revenue, because I couldn't decide between the two.
00:48:57.000So that's an example of an informed decision that, in my view, helped the company become the gold standard for answers.
00:49:48.000Don't just believe what people tell you.
00:49:50.000So Eric, in the COVID context, it's actually a really interesting context to talk about this, because you're right that there is a category of disinformation from particular states, actually, and certainly misinformation and actual untruths that were put out with regard to COVID, ranging from the origins of COVID, perhaps, to the effectiveness of the vaccines, which are in fact highly effective against hospitalization and death, although not nearly as effective against infection or reinfection.
00:50:18.000But one of the things that came up in this context, and it really does go to human nature, and this may go to the broader conversation about AI, is that because the data were constantly moving, because there wasn't a lot of data about COVID at the beginning, it was a brand new virus.
00:50:32.000And because that data was being gathered, there was a lot of shifting that happened by the institutional players in the society. We had Anthony Fauci, for example, claiming at the beginning that people should not wear masks, and then admitting that he was telling a platonic lie and that people probably should wear masks. And then we were told, after the vaccines were available, that you shouldn't mask. And then, after Delta came up, it was that you should wear a mask again. And this has happened across the board, and the sort of shifting and moving by the institutions was itself mirrored by the tech companies, which basically said the institutions are the best available source of information.
00:51:01.000Therefore, if you contradict them, we're going to downgrade your information.
00:51:06.000It actually drove, I think, part of the backlash to the actual truth in a lot of these cases.
00:51:11.000I think it drove a lot of anti-vax sentiment that the tech companies seem to be, quote unquote, silencing other forms of argumentation, even if they were right to silence particular notions about, you know, completely unfactual ideas about COVID.
00:51:26.000We've been through a pandemic that occurs hopefully less than every 100 years.
00:51:57.000It's also true that public health matters.
00:52:00.000It's also true that if you have COVID, you're most infectious the two days before you have symptoms.
00:52:09.000So to say, I'm just going to have my own COVID and have a good time, which is, speaking from a sort of libertarian view, perfectly fine in my view.
00:53:52.000If you want to say that the federal government of the United States unleashed this on people just in order to kill them and take control, that obviously is untrue. And there are a lot of patent untruths out there. But one of the questions that I think is going to crop up, as we mentioned, more and more often as AI becomes more and more a part of our lives, is who gets to make the call as to what is a bright-line lie and what is just stuff that is sort of controversial. And there are edge cases. And my fear is that the edge cases start to bleed over into more mainstream points of view.
00:54:21.000The Overton window starts to shrink, because we've seen that happen in nearly every area of American public speech, that the Overton window has shrunk pretty radically. And when that happens, there are two dangers. One is that actual innovative ideas and questions get left out. And the other is that eventually it breeds a massive backlash.
00:54:37.000When you wish enough people out and enough perspectives out into the cornfield, they tend to unify and then attack the center.
00:54:43.000We need to get back to the way the world, at least that I grew up in, where speech was considered okay if it was controversial, but it was also debated.
00:54:53.000And it was not done in a screaming way.
00:54:55.000I think it's incredibly important that our universities reflect all the points of view and that they welcome them.
00:55:02.000And that there be public debate over it.
00:55:05.000I think it's fine for people to make these kinds of claims.
00:55:09.000I just don't want the computers to then sell them.
00:55:12.000One way to understand my view is to say I'm really a strong believer in adult public speech.
00:55:20.000I'm not in favor of the equivalent speech for computers.
00:55:25.000In other words, the fact that you're a crazy person, I'm sorry to say this, you're a crazy person who has a crazy idea, that's fine.
00:55:33.000But it's not okay in my view that your crazy idea then becomes magnified to the point where it becomes a dominant theme when it's just frankly something that you made up.
00:55:41.000We haven't figured out as a society how to address this thing.
00:55:45.000The tools that we're developing are addictive.
00:55:49.000The more clicks, the more revenue, the more success, the more fame.
00:55:54.000and the more exciting the narrative, and by the way, the more disruptive and the more aggressive the narrative, the more clicks, the more revenue, the more success, the more fame.
00:56:24.000In our book, what we say is that this affects the way people will believe that they're human.
00:56:30.000I was in a lunch restaurant in Florida recently, and I was looking at everybody, and everyone at the table, this is at lunch, was on their phones, rather than looking at each other and interacting.
00:56:43.000Maybe they were bored with each other, I don't know.
00:57:21.000Now your playlists are assembled for you by computers.
00:57:25.000Well, in a world where the playlist of your life and your information is assembled for you, it's pretty powerful to be the group that's assembling it.
00:57:34.000And what if they, for example, promulgate values we don't agree with, like racism or sexism or doing bad things to children or things like that?
00:57:45.000I'm not asking for more regulation, but I'm asking for some judgment here on the use of these tools.
00:57:51.000Eric, it seems to me that a lot of what we're talking about here is only going to be solved in the offline world.
00:57:56.000So we're constantly looking for solutions to the problem because the problem exists in the online world or in the social media space.
00:58:03.000But I mean, as I've found, it's the ability to actually see and speak to one another as human beings that's really important.
00:58:09.000And as AI becomes much more powerful, as we spend more and more time online, society seems to be getting significantly worse.
00:58:14.000It's a lot easier to treat people badly anonymously than it is in person.
00:58:18.000Simultaneously, the deepest connections are the ones that can be found in person.
00:58:23.000There's a great irony to the fact that all of these tools that were supposed to bring people together have sort of flattened our existence in a pretty significant way.
00:58:31.000There is something innate to the human-to-human connection that is necessary in order to build actual social capital.
00:58:38.000And as we spend more time online, which was supposed to generate social capital, is it building more social capital or is it just fragmenting us more by flattening us?
00:58:44.000Well, the good news is the Bowling Alone theory, from 20 years ago, was false, right?
00:58:48.000The Bowling Alone theory was that we would all be sitting isolated, watching television at home.
00:58:53.000In fact, if you look at our young people, they spend literally every minute of their waking day online.
00:59:34.000You can't take a break from it anymore.
00:59:37.000It's become the way you educate, the way you entertain, the way you make money, right?
00:59:41.000The way you communicate, the way you have a social life.
00:59:43.000So we're going to have to adapt these platforms to these human regulations.
00:59:47.000I used to think it's fine to have all that bad stuff online because people can just turn it off, but they can't now.
00:59:53.000And what you're going to see, whether we like it or not, is if the tech companies don't figure out a way to organize this in such a way, they will get regulated on this, and the initial regulator will be Europe.
01:00:04.000And by the way, in case we're confused on this, China has solved this problem.
01:00:08.000China does not allow anonymous browsing.
01:00:11.000Furthermore, you're logged in, they know who you are, and they have tens of thousands of people who watch for forbidden speech.
01:00:18.000They have very sophisticated AI algorithms that watch for what people say, and then the police come knocking.
01:00:26.000I mean, I do wonder, you know, as a religious Jew, one of the big things for us, and it seems to be more and more relevant every day, is the Sabbath, right?
01:00:32.000This enforced, mandatory, get-offline kind of circuit breaker that exists every Friday night to every Saturday night.
01:00:38.000And it really does, I think, ground you in a certain reality.
01:00:42.000On a social level, I'm not talking about legislation here.
01:00:44.000On a social level, it seems to me that it's going to be very important that human beings do find a way to disconnect from this technology, at least for a little while, and start to re-ground themselves in a material reality that exists offline.
01:00:57.000I worked in Utah for a while, and in Utah the LDS Mormons have a similar procedure involving evenings where they're connected only to their family because their families are so important.
01:01:09.000I went with Governor Bill Richardson to North Korea a few years ago, and one of the things that we did is we left our phones before we went to North Korea because obviously you couldn't use them.
01:01:19.000And for three days, we had an experience that I'd not had in decades.
01:01:25.000We talked to each other, we got to know each other, the traveling group, we enjoyed our trip.
01:01:30.000The moment we got back out of North Korea, we're online, we're back at it.
01:01:34.000It was as though we didn't learn anything from our three days offline, and we went right back.
01:01:40.000That's how powerfully addictive these technologies are.
01:01:43.000So, in a second, I do want to ask you a couple of more questions, but if you want to hear Eric Schmidt's answers, and these are going to be deep questions, then first, you have to head on over to Daily Wire and become a member.
01:01:53.000Go to dailywire.com, click subscribe, you can hear the rest of our conversation over there.
01:01:57.000Well, Eric Schmidt, it's been a pleasure having you here.
01:02:00.000The book, again, is The Age of AI and Our Human Future.
01:02:03.000Go check it out, it's available for purchase right now.
01:02:06.000Eric, thanks so much for joining the show, really appreciate it.
01:02:07.000Thank you, Ben, as always, and I'll see you soon.
01:02:27.000Our guests are booked by Caitlin Maynard.