#280 — The Future of Artificial Intelligence
Episode Stats
Words per Minute
160.62764
Summary
Eric Schmidt is a technologist, entrepreneur, and philanthropist. He joined Google in 2001 and served as its CEO and Chairman from 2001 to 2011, and as Executive Chairman and Technical Advisor thereafter. In 2017, he co-founded Schmidt Futures, a philanthropic initiative that bets early on exceptional people who are helping to make the world better. Most recently, he is the co-author of a new book, The Age of AI and Our Human Future. In this conversation, we cover how AI is affecting the foundations of our knowledge, and how it raises questions of existential risk. We talk about the good and the bad of AI, both narrow AI and ultimately AGI, artificial general intelligence. We also talk about cyber war, autonomous weapons, and how our thinking about containing these risks by analogy to the proliferation of nuclear weapons probably needs to be revised. An important conversation, which I hope you will find useful: I bring you Eric Schmidt. The Making Sense Podcast is ad-free and made possible entirely through the support of our listeners. If you enjoy what we're doing here, please consider becoming a subscriber at samharris.org. -Sam Harris
Transcript
00:00:00.000
Welcome to the Making Sense Podcast. This is Sam Harris. Just a note to say that if
00:00:12.180
you're hearing this, you are not currently on our subscriber feed and will only be hearing
00:00:16.320
the first part of this conversation. In order to access full episodes of the Making Sense
00:00:20.820
Podcast, you'll need to subscribe at samharris.org. There you'll find our private RSS feed to
00:00:26.420
add to your favorite podcatcher, along with other subscriber-only content. We don't run
00:00:31.100
ads on the podcast, and therefore it's made possible entirely through the support of our
00:00:35.000
subscribers. So if you enjoy what we're doing here, please consider becoming one.
00:00:46.420
Okay, jumping right into it today. Today I'm speaking with Eric Schmidt. Eric is a technologist,
00:00:53.460
entrepreneur, and philanthropist. He joined Google in 2001 and served as its CEO and chairman from
00:01:02.180
2001 to 2011, and as executive chairman and technical advisor thereafter. In 2017, he co-founded
00:01:10.940
Schmidt Futures, a philanthropic initiative that bets early on exceptional people who are helping to make
00:01:16.540
the world better. He is the host of Reimagine with Eric Schmidt, his own podcast. And most recently,
00:01:23.740
he is the co-author of a new book, The Age of AI and Our Human Future. And that is the topic of today's
00:01:30.780
conversation. We cover how AI is affecting the foundations of our knowledge and how it raises
00:01:37.680
questions of existential risk. So we talk about the good and the bad of AI, both narrow AI and
00:01:45.360
ultimately AGI, artificial general intelligence. We discuss breakthroughs in pharmaceuticals and
00:01:54.060
other good things. But we also talk about cyber war and autonomous weapons and how our thinking about
00:02:01.780
containing the risk here by analogy to the proliferation of nuclear weapons probably needs to
00:02:09.600
be revised. Anyway, an important conversation, which I hope you find useful. And I bring you Eric Schmidt.
00:02:24.080
I am here with Eric Schmidt. Eric, thanks for joining me.
00:02:28.880
So we have, I think we have a hard out at an hour here. So amazingly, that's a short podcast for me.
00:02:35.100
So I'm going to be, there's going to be a spirit of urgency hanging over the place. And we will be
00:02:40.020
efficient in covering the fascinating book that you have written with Henry Kissinger and Daniel Huttenlocher.
00:02:47.800
That's right. And Dr. Kissinger, of course, is the former Secretary of State. And Dan Huttenlocher
00:02:52.020
is now the Dean of Artificial Intelligence and Computer Science at the Schwarzman College of Computing at MIT. He's a
00:03:01.080
Yeah. And that book is The Age of AI and Our Human Future, where you cover, you know, most of what I have
00:03:07.980
said about AI thus far, and every case where I have worried about our possible AI future has been focused
00:03:16.460
on the topic of AGI, artificial general intelligence, which you discuss briefly in the book, but it's not your
00:03:23.180
main focus. So I thought maybe we could save that for the end, because I would love to get your take on AGI.
00:03:29.080
But there are far more near-term concerns here and considerations that we could cover. And you are
00:03:36.640
quite well-placed to cover them, because if I'm not mistaken, you ran Google for, what was it, 10 years?
00:03:45.460
What was your background before that? How did you come to be the CEO of Google?
00:03:50.320
Well, I'm a computer scientist. I have a PhD in the area. And I worked for 45 years in tech in one way or
00:03:57.040
the other at a whole bunch of companies. Larry and Sergey brought me in as the early CEO of the company, and we built
00:04:03.860
it together. After a decade, I became chairman, Larry became CEO, and then he replaced himself with Sundar,
00:04:09.920
who's now doing a phenomenal job at Google. So I'd say collectively, this group, of which I'm a member, built one of
00:04:19.200
Yeah. And obviously, Google is quite involved in developing AI. I just saw just the other day that
00:04:25.780
there's a new, I think it's a 540 billion parameter language model that is beating the average human at
00:04:34.440
something like 150 cognitive tests now. And it seems like the light is at the end of the tunnel there.
00:04:41.100
It's just going to be a larger model that's going to beat every human at those same tasks. But before
00:04:46.840
we get into some of the details here, I just want to organize our general approach to this. There are
00:04:53.320
three questions that Kant asked in his Critique of Pure Reason, I think it was, which seem unusually
00:05:01.000
relevant to the development of AI. The first is, what can we know? The second is, what should we do?
00:05:09.340
And the third is, what is it reasonable to hope for? And I think those really do capture almost every
00:05:19.260
aspect of concern here. Because as you point out in the book, AI really promises to, and it has already
00:05:26.820
begun to shift the foundations of human knowledge. So the question of what we can know and how we can
00:05:32.900
know it is enormously salient now. And maybe we can talk about some of those examples. But obviously,
00:05:38.760
this question of what should we do and what can we reasonably hope for captures the risks we're
00:05:45.180
running in developing these systems. And we're running these risks well short of producing anything
00:05:52.740
like artificial general intelligence. And it's interesting that we're on a path now where we're
00:05:57.680
really not free to decline to produce this technology. I mean, to my eye, there's really no brake
00:06:04.680
to pull. I mean, we're in a kind of AI arms race now. And the question is how to put that race for
00:06:12.060
more intelligence on a footing that is not running cataclysmic risk for us. So before we jump into the
00:06:20.660
details, I guess I'd like to get your general thoughts on how you view the stakes here and where
00:06:27.200
you view the field to be at the moment. Well, of course, we wrote the book Age of AI
00:06:32.540
precisely to help answer the questions you're describing, which are perfectly cast.
00:06:39.340
And what's happened in the book, which was written roughly a year ago and then published,
00:06:44.720
we described a number of examples to illustrate the point. One is the development of new moves in
00:06:51.520
the game of Go, which is 2,500 years old, which were discovered by a computer, which humans had never
00:06:57.680
discovered. It's hard to imagine that humans wouldn't have discovered these strategies, but
00:07:02.740
they didn't. And that raises the question: are there things which AI can learn that humans cannot
00:07:08.740
master? That's a question. The second example that we use is the development of a new drug called
00:07:15.020
halicin, which is a broad-spectrum antibiotic, which could not be done by humans, but a set of
00:07:21.680
neuroscientists, biologists, and computer scientists put together a set of programs that ultimately searched
00:07:28.260
through 100 million different compounds and came up with candidates that were then subsequently tested,
00:07:34.760
advancing drugs at an enormous rate. That's another category of success in AI. And then the third is what
00:07:41.340
you've already mentioned, which is large language models. And we profile in the book GPT-3, which is
00:07:46.220
the predecessor of the one you described. And it's eerie. On the back cover of our book, we say to the
00:07:53.400
GPT-3 computer, are you capable of human reasoning? And it answers, no, I am not. You may wonder why I
00:08:02.140
give you that answer. And the answer is that you are a human reasoning machine, whereas I am a language
00:08:09.740
model that's not been taught how to do that. Now, is that awareness or is that clever mimicry? We don't
00:08:17.680
know. But each of these three examples shows the potential to answer Kant's questions. What can we
00:08:24.660
know? What will happen? What can we do about it? Since then, in these past few weeks, we've seen the
00:08:31.500
announcement that you mentioned of this enormous large language model, which can beat humans on many
00:08:36.100
things. And we've also seen something called DALL-E, which is a text-to-art program. You describe roughly
00:08:43.740
what you want, and it can generate art for you. Now, these are the beginnings of the impact of
00:08:49.000
artificial intelligence on us as humans. So Dr. Kissinger, Dan, and myself, when we looked at those,
00:08:54.820
we thought, what happens to society when you have these kinds of intelligence? Now, they're not human
00:09:02.200
intelligence. They're different kinds of intelligence in everyday life. And we talk about all the
00:09:07.160
positives, of which there are incredible positives. Better materials, better drugs, more efficient
00:09:13.540
systems, better understanding, better monitoring of the earth, additional solutions for climate change.
00:09:20.380
There's a long, long list which I can go through. Very, very exciting. And indeed, in my personal
00:09:25.800
philanthropy, we are working really hard to fund AI-enabled science discoveries. We recently announced
00:09:32.160
a grant, structured with a guy named James Manyika, who's a friend of mine, of $125 million
00:09:39.360
to actually go and fund research on the really hard problems in AI, the ones that you're mentioning and
00:09:47.480
others, and also the economic impacts and so forth. So I think people don't really know.
00:09:51.400
The real question is, what happens when these systems become more commonplace? Dr. Kissinger
00:09:58.840
says, if you look at history, when a system that is not understandable is imposed on people,
00:10:06.180
they do one of two things. They either invent it as a religion, or they fight it with guns. So my
00:10:14.760
concern, and I'll say it very directly, is we're playing with the information space of humans.
00:10:21.420
We're experimenting at scale without a set of principles as to what we want to do. Do we care
00:10:28.500
more about freedom? Do we care more about efficiency? Do we care more about education? And so forth.
00:10:33.620
And Dr. Kissinger would say, the problem is that these decisions are being made by technical people
00:10:39.460
who are ignorant of the philosophical questions that you so ably asked. And I agree with him,
00:10:45.040
speaking as an example of that. So we recommend, and indeed I'm now trying to fund, that people begin
00:10:54.580
in a multidisciplinary way to discuss the implications of this. What happens to national security?
00:10:59.560
What happens to military intelligence? What happens to social media? What happens to your children when
00:11:06.940
your child's best friend is a computer? And for the audience who might still be thinking about
00:11:13.820
the killer robot, we're not building killer robots, and I hope we never do. This is really about
00:11:20.760
information systems that are human-like, that are learning, they're dynamic, and they're emergent,
00:11:26.400
and they're imprecise, being used and imposed on humans around the world. That process is unstoppable.
00:11:35.780
It's simply too many people working on it, too many ways in which people are going to manipulate it,
00:11:41.500
including for hostile reasons, too many businesses being built, and too much success for some of the
00:11:47.980
early work. Yeah, yeah. I guess if I can just emphasize that point, the unstoppability is pretty
00:11:54.400
interesting, because it's just anchored to this basic fact that intelligence is almost by definition
00:12:02.160
the most valuable thing on Earth, right? And if we can get more of it, we're going to, and we clearly
00:12:08.460
can. And all of these narrow intelligences we've built thus far, you know, all that are effective,
00:12:14.940
that come to market, that we pour resources into, are superhuman, more or less right out of the gate,
00:12:21.620
right? I mean, it's just, it's not a question of, I mean, human level intelligence is a bit of a
00:12:27.400
mirage, because the moment we get something that's general, it's going to be superhuman. And so we can
00:12:32.780
leave the generality aside, all of these piecemeal intelligences are superhuman. And I mean, the example
00:12:40.360
you give of the new antibiotic, halicin, I mean, it's fascinating, because it's not just a matter of
00:12:47.320
doing human work faster. If I understand what happened in that case, this is an AI detecting
00:12:54.380
patterns and relationships in molecules already known to be, you know, safe and efficacious as
00:13:02.640
antibiotics, and detecting new properties that human beings very likely would never have conceived of,
00:13:10.100
and may in fact be opaque to the people who built the AI and may remain opaque. I mean, one of the
00:13:17.260
issues you just raised is the issue of transparency. Many of these systems are built in such a way as to
00:13:23.720
be black boxes, and we don't know how the AI is doing what it's doing in any specific way. It's just
00:13:32.220
training against data and against its own performance, so as to produce a better and better result, which qualifies as
00:13:40.100
intelligent and even superhumanly so. And yet, it may remain a black box. Maybe we can just close the
00:13:47.360
loop on that specific problem here. Are you concerned that transparency is a necessity when decision-making
00:13:57.120
is important? I mean, just imagine the case where we have something like an AI oracle that we are
00:14:04.660
convinced makes better decisions than any person or even any group of people, but we don't actually
00:14:12.160
know the details of how it's making those decisions, right? So this is, I mean, you can just multiply
00:14:18.400
examples as you like, but just, you know, questions of, you know, who should get out of prison, you know,
00:14:23.100
the likelihood of recidivism in the case of any person, or, you know, who's likely to be, you know,
00:14:29.140
more violent, you know, at the level of conviction, right? Like, what should the prison sentence be?
00:14:34.400
I mean, it's very easy to see that if we're shunting that to a black box, people are going to
00:14:41.380
get fairly alarmed at any differences in outcome that are not transparent. Perhaps you
00:14:48.960
have other examples of concern, but do you think transparency is something that, I mean, one question
00:14:54.360
is, is it technically feasible to render black boxes transparent when it matters? And two, is
00:15:01.720
transparency as important as we intuitively may think it is? Well, I wonder how important transparency
00:15:07.820
is for the simple fact that we have teenagers in our midst, and the teenagers cannot explain
00:15:13.940
themselves at all, and yet we tolerate their behavior with some restrictions because they're not
00:15:19.340
full adults. So, but we wouldn't let a teenager fly an airplane or operate on a patient. So I think a
00:15:26.940
pretty simple model is that at the moment, these systems cannot explain how they came to their
00:15:32.600
decision. There are many people working on the explainability problem. Until then, I think it's
00:15:37.520
going to be really important that these systems not be used in what I'm going to call life safety
00:15:42.620
situations. And this creates all sorts of problems, for example, in automated war, automated
00:15:48.040
conflict, cyber war, those sorts of things, where the speed of decision-making is faster than what
00:15:53.120
humans can manage, and what happens if it makes a mistake? And so, again, we're at the beginning of this process,
00:16:00.200
and most people, including myself, believe that the explainability problem and the bias problems
00:16:05.580
will get resolved because there's just too much money, too many people working on it,
00:16:10.580
maybe at some cost, but we'll get there. That's historically how these things work. You start off
00:16:14.380
with stuff that works well enough, but it shows a hint of the future, and then it gets industrialized.
00:16:19.820
I'm actually much more focused on what's it like to be human when you have these specialized systems
00:16:26.400
floating around. My favorite example here is Facebook, where they changed their feed to amp it up
00:16:33.400
using AI. And the AI that they built was around engagement. And we know from a great deal of social
00:16:40.980
science that outrage creates more engagement. And so, therefore, there's more outrage on your feed.
00:16:47.460
Now, that was clearly a deliberate decision on the part of Facebook. Presumably they thought it was a good
00:16:52.560
product idea, but it also maximized their revenue. That's a pretty big social experiment, given the
00:16:58.380
number of users that they have, which was not done with an understanding, in my view, of the impact on
00:17:04.340
political polarization. Now, you sit there and you go, okay, well, he doesn't work at Facebook. He
00:17:09.940
doesn't really understand. But many, many people have commented on this problem. This is an image
00:17:17.120
of what happens in a world where all of the information around you can be boosted or manipulated
00:17:22.460
by AI to sell to you, to anchor you, to change your opinion, and so forth. So, we're going to face some
00:17:29.240
interesting questions. In the information space, the television and movies and things you see online
00:17:35.440
and so forth, do there need to be restrictions on how AI uses the information it has about you
00:17:42.120
to pitch to you, to market to you, to entertain you? These are questions. We don't have answers.
00:17:48.680
But it makes perfect sense that in the industrialization of these tools, the tools that I'm
00:17:53.680
describing, which were invented in places like Google and Facebook, will become available to everyone,
00:17:59.240
in every government. So, another example is a simple one, which is the kid is a two-year-old and gets
00:18:07.320
a toy. And the toy gets upgraded every year as the kid gets older and smarter. The kid is now 12,
00:18:13.480
and ten years from now, there's a great toy. And this toy is smart enough in non-human terms
00:18:20.820
to be able to watch television and decide if the kid likes the show. So, the toy is watching the
00:18:27.340
television and the kid, and the toy says to the kid, I don't like this show, knowing that the kid's not
00:18:33.500
going to like it. And the kid goes, I agree with you. Now, is that okay? Probably. Well, what happens
00:18:41.180
if that same system that's also learning learns something that's not true? And it goes, you know,
00:18:49.620
kid, I have a secret. And the kid goes, tell me, tell me, tell me. And the secret is something which
00:18:56.720
is prejudicial or false or bad or something like that. We don't know how to describe,
00:19:02.820
especially for young people, the impact of these systems on their cognitive development.
00:19:08.020
Now, we have a long history in America of having school boards and textbooks which are approved at the
00:19:14.060
state level. Are the states going to monitor this? And you sit there and you say, well, no parent would
00:19:20.400
allow that. But let's say that the normal behavior of this toy, it's smart enough, understands the kid
00:19:26.480
well enough to know the kid's not good at multiplication. So, the kid's bored and the toy says, I think we
00:19:33.540
should play a game. Kid goes great. And of course, it's a game which strengthens his or her
00:19:38.640
multiplication capability. So, on the one hand, you want these systems to make people smarter,
00:19:45.420
make them develop, make them more serious adults, make the adults more productive.
00:19:50.340
Another example would be my physics friends. They just want a system to read all the physics books
00:19:54.580
every night and make some adjustments to them. Well, the physicists are adults who can deal with this.
00:19:59.800
But what about kids? So, you're going to end up in a situation, at least with kids and with
00:20:04.960
elderly who are isolated, where these tools are going to have an out-of-proportion impact
00:20:11.680
on society as they perceive it. We've never run that experiment. Dynamic, emergent, and not precise.
00:20:20.140
I'm not worried about airplanes being flown by AI because they're not going to be reliable enough
00:20:24.760
to do it for a while. Now, we should also say for the listeners here that we're talking about a term
00:20:32.860
which is generally known as narrow AI. It's very specific, and we're using specific examples,
00:20:39.160
drug discovery, education, entertainment. But the eventual state of AI is called general intelligence,
00:20:46.600
where you get human kind of reasoning. In the book, we describe that as the point where the
00:20:53.820
computer can set its own objective. And today, the good news is the computer can't choose its objective.
00:21:04.540
Yeah. Yeah. Well, hopefully, we'll get to AGI at the end of this hour. But I think we should talk
00:21:11.160
about the good and the bad in that order, and maybe just spend a few minutes on the good. Because
00:21:17.700
the good is all too obvious. Again, and intelligence is the most valuable thing on Earth. It's the thing
00:21:25.120
that gives us every other thing we want, and it's the thing that safeguards everything we have. And if
00:21:32.560
there are problems we can't solve, well, then we can't solve them. But if there are problems that can
00:21:37.560
be solved, the way we will solve them is through greater uses of our intelligence. And insofar as we
00:21:46.080
can leverage artificial intelligence to solve those problems, we will do that, more or less regardless of
00:21:52.440
the attendant risks. And that's the problem, because the attendant risks are increasingly obvious,
00:21:57.920
and it seems not at all trivial. And we've already proven we're capable of implementing
00:22:05.720
massive technological change without really thinking about the consequences at all. You cite
00:22:12.920
the massive psychological experiment we've performed on all of humanity with no one really consenting,
00:22:19.880
that is, social media. And it's, you know, the effects are ambiguous at best. I mean, there's
00:22:26.240
some obviously bad effects, and it's not even straightforward to say that democracy or even
00:22:33.220
civilization can survive contact with social media. I mean, that remains to be seen, given how divisive
00:22:39.520
some of its effects are. But I consider social media to be far less alarming than the prospect of having
00:22:46.300
an ongoing nuclear doctrine anchored to a proliferating regime of cyber espionage, cyber terrorism,
00:22:57.660
cyber war, all of which will be improved massively by layering AI onto all of that. So before we jump
00:23:06.740
into the bad, which is, you know, really capturing my attention, is there anything specifically you
00:23:12.540
want to say about the good here? I mean, if this goes well, what are you hoping for? What are you
00:23:18.600
expecting? Well, there are so many positive examples that we honestly just don't have time to make a list.
00:23:25.340
I'll give you a few. In physics and math, the physicists and mathematicians have worked out the
00:23:32.220
formulas for how the world works, at least at the scientific level. But many of their calculations
00:23:37.580
are not computable by modern computers. They're just too complicated. An example is how do clouds
00:23:44.160
actually work is a function of something called the Navier-Stokes equations, which for a normal-sized
00:23:49.700
cloud would take 100 million years for a computer to figure out. But using an AI system, and there's a
00:23:56.200
group at Caltech doing this, they can come up with a simulation of the things that they care about.
00:24:02.220
In other words, the AI provides enough accuracy in order to solve the more general climate modeling
00:24:10.460
problem. If you look at quantum chemistry, which is sort of how does, how do chemical bonds work
00:24:17.020
together? Not computable by modern methods. However, AI can provide enough of a simulation
00:24:23.740
that we can figure out how these molecules bind, which is the halicin example.
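To make the kind of screening behind the halicin result concrete, here is a minimal sketch, not taken from the conversation, of how a trained property-prediction model can be used to score a large library of candidate compounds and keep only the most promising for lab testing. The predict_antibiotic_score function and the toy fingerprints are hypothetical placeholders for a real model and real molecular data.

```python
import heapq
import random

def predict_antibiotic_score(fingerprint):
    """Hypothetical stand-in for a trained model that scores a molecular
    fingerprint for predicted antibiotic activity (higher is better)."""
    return sum(fingerprint) / len(fingerprint)

def screen_library(compounds, top_k=100):
    """Score every compound and keep the top_k candidates for lab testing.

    `compounds` is an iterable of (compound_id, fingerprint) pairs, so a
    library of 100 million entries never has to fit in memory at once.
    """
    return heapq.nlargest(
        top_k,
        ((predict_antibiotic_score(fp), cid) for cid, fp in compounds),
    )

if __name__ == "__main__":
    # Toy library: random bit-vector "fingerprints" standing in for molecules.
    library = ((f"cmpd-{i}", [random.randint(0, 1) for _ in range(64)])
               for i in range(10_000))
    for score, cid in screen_library(library, top_k=5):
        print(f"{cid}: predicted activity {score:.3f}")
```

The point is the shape of the pipeline, not the model: the slow, expensive step (wet-lab testing) is applied only to the handful of candidates the cheap learned scorer ranks highest.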
00:24:28.760
In drug discovery, we know enough about biology that we can basically predict that if you do
00:24:37.320
these compounds with, you know, this antibody, we can make it stronger, we can make it weaker,
00:24:44.080
and so forth, in the computer, and then you go reproduce it in the lab. There's example after example
00:24:50.400
where AI is being used from existing data to simulate a non-computable function in science. And you say,
00:24:59.560
what's he talking about? I'm talking about the fact that the scientists have been stuck for decades
00:25:05.320
because they know what they want to do, but they couldn't get through this barrier. That unleashes
00:25:11.080
new materials, new drugs, new forms of steel, new forms of concrete, and so forth and so on.
00:25:17.440
It also helps us with climate change, for example, because climate change is really about energy and
00:25:23.160
CO2 emission and so forth. These new surfaces, discoveries, and so forth will make a material
00:25:27.680
difference. And I'm talking about really significant numbers. So that's an example.
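As a rough illustration of the "use AI to stand in for a calculation that is too expensive to run directly" pattern described here, this is a minimal sketch with a toy expensive_simulation function as a placeholder, not anything like an actual Navier-Stokes or quantum-chemistry code: run the real calculation at a modest number of points, fit a cheap model to those results, and query the cheap model everywhere else.

```python
import numpy as np

def expensive_simulation(x):
    """Toy stand-in for a calculation too slow to run at every point."""
    return np.sin(3 * x) + 0.5 * x**2

# 1. Run the expensive code only at a modest number of sample points.
x_train = np.linspace(-2.0, 2.0, 40)
y_train = expensive_simulation(x_train)

# 2. Fit a cheap surrogate to those samples (a polynomial here; serious
#    efforts train neural networks on high-resolution simulation output).
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=8))

# 3. Query the surrogate where running the real simulation would cost too much.
x_query = np.linspace(-2.0, 2.0, 1000)
error = np.max(np.abs(surrogate(x_query) - expensive_simulation(x_query)))
print(f"max surrogate error on the toy problem: {error:.4f}")
```

The surrogate is only trusted where it has been checked against the real calculation, which is why these systems are described as providing "enough accuracy" for the larger modeling problem rather than replacing the physics.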
00:25:33.060
Another example is what's happening with these large language models that you mentioned earlier,
00:25:38.180
that people are figuring out a way to put a conversational system in front of it so that
00:25:41.900
you can talk to it. And the conversational system has enough state that it can remember what it's
00:25:46.820
talking about. It's not like a question, answer, question, answer, and it doesn't remember.
00:25:51.140
It actually remembers the context of, oh, we're talking about the Oscars, and we're talking about
00:25:55.260
what happened at the Oscars, and what do I think? And then it sort of goes, and it gives you a thoughtful
00:26:00.160
answer as to what happened and what is possible. In my case, I was playing with one of them
00:26:06.840
a few months ago. And this one, I asked the question, what is the device that's in 2001,
00:26:14.940
A Space Odyssey, that I'm using today? There's something from 1969 that I'm using today that
00:26:20.180
was foreshadowed in the movie. And it comes right back and says, the iPad. Now that's a question that
00:26:26.840
Google won't answer if you ask it the way I did. So I believe that the biggest positive impact
00:26:34.400
will be that you'll have a system that you can verbally or by writing, ask it questions,
00:26:40.920
and it will make you incredibly smarter, right? That it'll give you the nuance and the understanding
00:26:46.660
and the context. And you can ask it another question, and you can refine your question.
00:26:50.940
Now, if you think about it in the work you do, or that I do, or that a scientist does,
00:26:55.280
or a politician, or an artist, this is enormously transformative.
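What "enough state that it can remember what it's talking about" means can be sketched in a few lines. The generate_reply function below is a hypothetical placeholder for a call to a large language model; the only point is that each new question is answered with the accumulated history attached, which is how a follow-up like "what do I think?" gets resolved.

```python
from dataclasses import dataclass, field

def generate_reply(history):
    """Hypothetical placeholder for a language-model call that sees the
    whole conversation so far, not just the latest question."""
    last_user = history[-1][1]
    return f"(model reply to {last_user!r}, with {len(history)} turns of context)"

@dataclass
class Conversation:
    """Keeps the running history so every reply can use prior context."""
    history: list = field(default_factory=list)  # list of (speaker, text) pairs

    def ask(self, user_text: str) -> str:
        self.history.append(("user", user_text))
        reply = generate_reply(self.history)  # full history, not a lone question
        self.history.append(("assistant", reply))
        return reply

if __name__ == "__main__":
    chat = Conversation()
    print(chat.ask("Who won best picture at the Oscars?"))
    print(chat.ask("And what did you think of it?"))  # "it" resolves via history
```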
00:26:59.140
So example after example, these systems are going to build scientific breakthroughs,
00:27:07.160
scalable breakthroughs. Another example was that a group at DeepMind figured out the folding
00:27:13.180
structure of proteins. And proteins are the way in which biology works. And the way they fold determines
00:27:19.300
their effectiveness, what they actually do. And it was thought to be not really computable.
00:27:23.960
And using these techniques in a very complicated way with a whole bunch of protein scientists,
00:27:29.700
they managed to do it. And their result was replicated in a different mechanism with different
00:27:34.180
AI from something called the Baker Lab at the University of Washington. The two together have given us a map
00:27:40.380
of how proteins work, which in my view is worthy of a Nobel Prize. That's how big a discovery that is.
00:27:46.480
All of a sudden, we are unlocking the way biology works, and it affects us directly.
00:27:50.460
But those are some positive examples. I think the negative examples...
00:27:55.760
Well, let's wait, because I'm chock full of negative examples.
00:27:59.300
But I'm interested in how even the positive can disclose a surprisingly negative possibility,
00:28:09.340
or at least it becomes negative if we haven't planned for it ethically, politically, economically.
00:28:15.320
And so you imagine the success. You imagine that more and more... So what you've just pictured was
00:28:22.260
a future of machine and human cooperation, right, and facilitation, where people just get smarter
00:28:30.740
by being able to have access to these tools, or they get effectively smarter. But you can imagine,
00:28:38.340
just in the limit, more and more getting ceded to AI, because AI is just better at doing these things.
00:28:45.480
It's better at proving theorems. It's better at designing software. It's better, it's better,
00:28:50.020
it's better. And all of a sudden, the need for human developers at all, or human mathematicians at
00:28:56.160
all, or you just make the list as long as you want. It seems like some of the highest status jobs
00:29:05.540
cognitively might be among the first to fall, which is to say, I certainly expect at this point
00:29:12.820
to have an AI radiologist, certainly, before I have an AI plumber. And there's a lot more above and
00:29:24.120
beyond the radiology side of that comparison that I think is going to fall before, you know,
00:29:29.980
the basic manual tasks fall to robots. And this is a picture of real success, right? Because
00:29:37.900
in the end, all we're going to care about is performance. We're not going to care about
00:29:42.000
keeping a monkey in the loop just for reasons of sentimentality. You know, if you're telling me
00:29:49.240
that my car can drive a thousand times better than I can, which is to say that, you know, it's going
00:29:55.020
to reduce my risk of getting in a fatal accident, you know, killing myself or killing someone else
00:29:59.840
by a factor of a thousand if I just flip on autopilot, well, then not only am I going to flip it on,
00:30:07.060
I'm going to consider anyone who declines to do that to be negligent to the point of criminality.
00:30:13.100
And that's never going to change. Everything is going to be in the position of a current chess
00:30:18.480
master who knows that the best player on earth is never going to be a person ever again, right?
00:30:25.920
Because of AlphaZero. So take that wherever you want.
00:30:29.880
I disagree a little bit, and I'll tell you why. I think you're correct in about 30 years,
00:30:35.620
but I don't think that argument is true in the short term.
00:30:38.180
Yeah. No, I was not, just to be clear, I'm not suggesting any timeframe there. I'm just saying,
00:30:43.260
ultimately, if we continue to make progress, something like this seems bound to happen.
00:30:49.760
Yes. But what I want to say is, I defy you to argue with me that making people smarter is a bad
00:30:58.500
thing. Okay. So let's start with the premise of the human assistant, that is the thing that you're
00:31:06.760
using, will make humans smarter. It'll enable deeper, better analysis, better choices.
00:31:14.840
But at least the current technology cannot replace essentially the free will of humans.
00:31:23.240
They sort of wake up in the morning, you have a new idea, you decide something, you say,
00:31:26.960
that's a bad idea, so forth and so on. We don't know how to do that yet. And I have some speculation
00:31:31.880
on how that will happen. But in the next decade, we're going to not be solving that problem. We'll
00:31:38.180
be solving a different problem, which is how do we get the existing people doing existing jobs to do
00:31:43.380
them more efficiently, that is smarter, better, faster. When we looked at the funding
00:31:50.100
for this AI program that I've since announced, the $125 million, a fair chunk of it is going
00:31:56.040
to really hard computer science problems. Some of them include, we don't really understand how to
00:32:01.240
explain what they're doing. As I mentioned, they're also brittle. When they fail, they can fail
00:32:06.540
catastrophically. Like, why did it fail? And no one can explain. There are hardening, there are resistance
00:32:12.340
to attack problems. There are a number of problems of this kind. These are hard computer science problems,
00:32:17.520
which I think we will get through. They use a lot of power, the algorithms are expensive, that sort of
00:32:22.340
thing. But we're also focused on the impact on jobs and employment and economics.
00:32:28.480
We're also focusing on national security. And we're focusing on the question that you're asking,
00:32:33.940
which is, what's our identity? What does it mean to be human? Before general intelligence comes,
00:32:41.120
we have to deal with the fact that these systems are not capable of choosing their own outcome,
00:32:46.940
but they can be applied to you as a citizen by somebody else against your own satisfaction.
00:32:53.600
So the negatives before AGI are all of the form, misinformation, misleading information,
00:33:02.580
creating dangerous tools, and for example, dangerous viruses. For the same reason that we built a
00:33:09.100
fantastic new antibiotic drug, it looks like, you could also imagine a similar evil team producing
00:33:16.500
an incredible number of bad viruses, things that would hurt people. And you could imagine in that
00:33:22.120
scenario, they might be clever enough to be able to hurt a particular race or particular sex or
00:33:26.960
something like that, which would be totally evil and obviously a very bad thing. We don't have a way
00:33:32.620
of discussing that today. So when I look at the positives and negatives right now, I think the
00:33:38.160
positives, as with many technologies, really overwhelm the negatives, but the negatives need to be
00:33:45.140
looked at. And we need to have the conversation right now about, let's use social media, which is an easy
00:33:51.740
whipping boy here. I would like, so I'm clear what my political position is, I'm a very strong proponent
00:33:59.000
of freedom of speech for humans. I am not in favor of freedom of speech for computers, robots, bots,
00:34:07.260
so forth and so on. I want an option with social media, which says, I only want to see things that a human
00:34:13.160
has actually communicated from themselves. I want to know that it wasn't snuck in by some Russian
00:34:19.320
agent. I want proof of provenance and I want to know there's a human. And if it's a real human who's
00:34:25.440
in fact, you know, an idiot or crazy or whatever, I want to be able to hear their voice and I want to
00:34:30.800
be able to decide I don't agree with it. What's happening instead is these systems are being boosted.
00:34:36.700
They're being pitched, they're being sold by AI. And I think that's got to be limited in some way.
00:34:43.820
I'm in favor of free speech, but I don't want only some people to have megaphones.
00:34:49.220
And if you talk to politicians and you look at the political structure in the country,
00:34:53.820
this is a completely unintended effect of getting everyone wired. Now, is it a human or is it a
00:35:01.180
computer? Is it a Russian, a Russian-compromised plant, or is it an American? Those things need
00:35:07.540
to get resolved. You cannot run a democracy without some level of trust.
00:35:12.400
Yeah. Yeah. Well, let's take that piece here. And obviously it extends beyond
00:35:16.800
the problem of AI's involvement in it, but the misinformation problem is enormous.
00:35:23.640
What are your thoughts about it? Because I'm just imagining we've been spared thus far the worst
00:35:30.780
possible case of this, which is just imagine under conditions of where we had something like perfect
00:35:39.120
deep fakes, right, that were truly difficult to tell apart from real video, what would the
00:35:45.640
controversy around the 2020 election have looked like or the war in Ukraine and our dealings with
00:35:52.080
Putin at this moment, right? Like just imagine, you know, a perfect deep fake of Putin declaring a
00:35:59.480
nuclear first strike on the U.S. or whatever. I mean, you just, you know, just imagine essentially a
00:36:05.220
writer's room from hell where you have smart, creative people spending their waking hours figuring out how
00:36:12.200
to produce media that is shattering to every open society and conducive to provoking international
00:36:21.360
conflict. That is clearly coming in some form. I guess my first question is, are you hopeful that
00:36:29.820
the moment that arrives, we will have the same level of technology that can spot deep fakes? Or is there
00:36:36.500
going to be a lag there of months, years that are going to be difficult to navigate?
00:36:43.660
We don't know. There are people working really hard on generating deep fakes, and there are people
00:36:48.640
working really hard on detecting deep fakes. And one of the general problems with misinformation
00:36:54.320
is we don't have enough training data. The term here is, in order to get an AI system to know
00:37:01.160
something, you have to give it enough examples of good, bad, good, bad, and eventually you can say,
00:37:06.300
oh, here's something new, and I know if it's good or bad. And one of the core problems in
00:37:10.960
misinformation is we don't have enough agreement on what is misinformation or what have you.
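The "enough examples of good, bad, good, bad" requirement is ordinary supervised classification. Here is a minimal sketch using scikit-learn (a library choice made for illustration, not something named in the conversation), with made-up numeric features standing in for whatever signals a real deepfake or misinformation detector would extract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Made-up features: each row stands in for signals extracted from one item
# (say, artifacts in a video); label 1 = fake/misleading, 0 = genuine.
n = 2000
genuine = rng.normal(loc=0.0, scale=1.0, size=(n, 8))
fake = rng.normal(loc=0.7, scale=1.0, size=(n, 8))
X = np.vstack([genuine, fake])
y = np.array([0] * n + [1] * n)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy on the toy data: {clf.score(X_test, y_test):.2f}")

# The catch raised in the conversation: the model is only as good as its
# labels. If annotators disagree about what counts as misinformation, y is
# noisy and the learned decision boundary inherits that disagreement.
```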
00:37:15.640
And the thought experiment I would offer is, President Putin in Russia has already shut down
00:37:21.260
the internet and free speech and controls the media and so forth. So let's imagine that he was
00:37:30.320
If you'd like to continue listening to this conversation, you'll need to subscribe at
00:37:34.400
samharris.org. Once you do, you'll get access to all full-length episodes of the Making Sense podcast,
00:37:39.600
along with other subscriber-only content, including bonus episodes and AMAs, and the conversations
00:37:45.800
I've been having on the Waking Up app. The Making Sense podcast is ad-free and relies entirely on
00:37:51.340
listener support. And you can subscribe now at samharris.org.