#84 — Landscapes of Mind
Episode Stats
Words per Minute
161.9
Summary
In this episode of the Making Sense Podcast, I speak with Kevin Kelly about his new book, The Inevitable, which describes the 12 technological forces that will shape our future. We discuss AI, the nature of intelligence, the concept of the singularity, the prospect of artificial consciousness, and why we need to embrace these technologies in order to steer the many ways in which we do have control and influence over them. We don't run ads on the podcast, so it is made possible entirely through the support of our listeners. If you enjoy what we're doing here, please consider becoming a subscriber at samharris.org, where you'll get access to full episodes and other premium content.
Transcript
00:00:10.880
Just a note to say that if you're hearing this, you are not currently on our subscriber
00:00:14.680
feed and will only be hearing the first part of this conversation.
00:00:18.420
In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at samharris.org.
00:00:24.060
There you'll find our private RSS feed to add to your favorite podcatcher, along with
00:00:30.520
We don't run ads on the podcast, and therefore it's made possible entirely through the support of our listeners.
00:00:35.800
So if you enjoy what we're doing here, please consider becoming one.
00:00:49.020
Kevin helped launch Wired Magazine and was its executive editor for its first seven years.
00:00:54.660
So he knows a thing or two about digital media.
00:00:58.440
And he's written for the New York Times, The Economist, Science, Time Magazine, and The Wall Street Journal.
00:01:08.920
His previous books include Out of Control, New Rules for the New Economy, Cool Tools, and What Technology Wants.
00:01:18.100
And his most recent book is The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future.
00:01:27.220
And Kevin and I focused on this book, and then spent much of the conversation talking
00:01:32.840
about AI, the safety concerns around it, the nature of intelligence, the concept of the
00:01:40.780
singularity, the prospect of artificial consciousness, and the ethical implications of that.
00:01:49.020
We don't agree about everything, but I really enjoyed the conversation.
00:02:11.960
Listen, so many people have asked for you, and obviously, you know, I've known you and
00:02:18.960
I'll talk about how we first met at some point.
00:02:21.900
You are so on top of recent trends that are subsuming everyone's lives that it's just great
00:02:30.540
So before we jump into all these common topics of interest, how would you describe what you do?
00:02:41.860
And they're often visual packages, but I like to take ideas, not necessarily my ideas,
00:02:49.880
but other people's ideas, and present them in some way.
00:02:53.740
And that's kind of what I did with magazines, beginning with the Whole Earth Review, formerly
00:03:00.200
called CoEvolution Quarterly, the Whole Earth Catalogs, Wired, websites like Cool Tools, and my books.
00:03:07.800
So you've written these two recent books on technology, What Technology Wants and your latest, The Inevitable.
00:03:15.360
How would you summarize the arguments you put forward in those books?
00:03:19.520
At one level, I'm actually trying to devise a proto-theory of technology.
00:03:28.240
So before Darwin's theory of biology, the evolutionary theory, there were a lot of naturalists, and they
00:03:36.540
had these curiosity cabinets where they would just collect biological specimens.
00:03:45.660
There was no framework for understanding how they were related or how they came about.
00:03:51.760
And in many ways, technology is like that with us.
00:03:55.260
We have this sort of parade of one invention after another and there's really no theory about
00:04:00.900
how these different species of technology are related and how they come together.
00:04:09.000
So at one level, my books were trying to devise a rough theory of their origins and perhaps
00:04:19.500
no surprise, cutting to the punchline, I see these as an extension and acceleration of the
00:04:28.200
same forces that are at work in natural evolution, or cosmic evolution for that matter.
00:04:34.480
And that if you look at it in that way, this system of technology that I call the technium
00:04:43.340
is in some ways the extension and acceleration of the self-organizing forces that are running
00:04:53.640
And the second thing I'm trying to do is to say that there is a deterministic element
00:04:59.700
in this, both in evolution and in technological systems.
00:05:06.120
And a lot of, at the very high level, a lot of what we're going to see and have seen is
00:05:13.940
And it is therefore inevitable, and we as humans, individually and collectively, need
00:05:20.160
to embrace these things in order to be able to steer the many ways in which we do have control and influence over them.
00:05:30.800
So I would say, once you invented electrical wires and you invented switches and stuff, you'd get the telephone.
00:05:37.140
And so the telephone was inevitable, but the character of the telephone was not inevitable.
00:05:41.900
You know, iPhone was not inevitable and, and we have a lot of choices about those, but
00:05:48.040
the only way we make those choices is by embracing and using these things rather than prohibiting them.
00:05:53.800
So now you start the book, The Inevitable, with some very amusing stories about how clueless
00:05:59.780
people were about the significance of the internet in particular.
00:06:02.860
I was vaguely aware of some of these howlers, but you just wrap them all up in one paragraph
00:06:08.500
and it's, it's amazing how blind people were to what was coming.
00:06:13.580
So you cite Time and Newsweek saying, more or less, that the internet would amount
00:06:19.160
One network executive said it would be the CB radio of the nineties.
00:06:23.220
There was a Wired writer who bought the domain name for McDonald's, mcdonalds.com, and couldn't
00:06:29.540
give it away to McDonald's because they couldn't see why it would ever be valuable to them.
00:06:33.740
Now, I don't recall being quite that clueless myself, but I'm continually amazed
00:06:44.540
And I mean, if you had told me five years ago that I would soon be spending much of my
00:06:49.320
time podcasting, I would have said, what's a podcast.
00:06:51.920
And if you had told me what a podcast was, essentially describing it as on demand radio,
00:06:57.640
I would have been absolutely certain that there was no way I was going into radio.
00:07:03.320
I feel personally, no ability to see what's coming.
00:07:07.680
Why do you think it is so difficult for most people to see into even the very near future?
00:07:16.580
I don't think I have a good answer about why we find it hard to imagine the
00:07:22.040
future, but it is true that the more we know, the harder it gets. In other words, the experts in a
00:07:30.200
certain field are often the ones who are most blinded by the changes.
00:07:34.640
We did this thing at Wired called reality check and we would poll different experts and non-experts
00:07:41.900
in some future things like, you know, whether they're going to use like laser drilling in
00:07:47.160
dentistry or, you know, flying cars and stuff like that.
00:07:54.860
And when these came around later on in the future, it was the experts who were always
00:08:01.460
underestimating, or I guess overestimating, when things were going to happen.
00:08:07.120
They were more pessimistic, and the people who
00:08:11.000
knew the most about things were often the ones that were most wrong.
00:08:16.700
And so I think it's kind of like we know too much, and we find it
00:08:24.540
hard to release and believe things that seemed impossible.
00:08:30.380
So the other observation that I would make about the things that have surprised
00:08:36.520
me the most in, in the last 30 years, and I think the things that will continue to surprise
00:08:42.220
us in the next 30 years all have to do with the fact that the things that are most surprising
00:08:49.820
are actually things done in collaboration at a scale that we've not seen before, like
00:08:55.920
things like Wikipedia, Facebook, or even cell phones and smartphones to some extent, that
00:09:02.120
basically we are kind of organizing work and collaboration at a scale that was just really
00:09:11.920
And that's where a lot of these surprises have been originating:
00:09:17.840
our ability to collaborate in real time and scales that, that were just unthinkable before.
00:09:27.780
And for me, most of these surprises have had that connection.
00:09:35.320
Well, I, I know you and I want to talk about AI because I think that's, that's an area where
00:09:39.760
we'll find some, I think, significant overlap, but also some disagreement.
00:09:44.460
And I want to spend most of our time talking about that, but I do want to touch on some of
00:09:50.020
the issues you raise in The Inevitable, because you divide the book into these twelve forces.
00:09:57.920
I'm sure some of those will come back around in our discussion of AI, but take an example
00:10:04.540
I mean, one, one change that a podcast represents over radio is that it's, it's on demand.
00:10:10.720
You can listen to it whenever you want to listen to it.
00:10:16.440
So there's no, there's no barrier to listening to it.
00:10:19.580
People can slice it and dice it in any way they want.
00:10:23.360
People remix it; people have taken snippets of it and put it behind music.
00:10:28.740
So it becomes the basis for other people's creativity.
00:10:32.780
Ultimately, I would imagine all the audio that exists and all the video that exists will
00:10:37.740
be searchable in the way that text is currently searchable, which is a real weakness
00:10:42.740
But eventually you'll be able to search and get exactly the snippet of audio you want.
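To make the "searchable audio" idea concrete, here is a minimal sketch (not from the conversation): once a speech-to-text pass has produced time-aligned transcript segments, finding exactly the snippet you want reduces to ordinary text search over those segments. The segment data below is hypothetical.

```python
# Illustrative sketch: "searchable audio" as text search over time-aligned
# transcript segments. A real system would get these segments from a
# speech-to-text model; the data here is made up for the example.

from dataclasses import dataclass

@dataclass
class Segment:
    start: float  # seconds into the recording
    end: float
    text: str

# Hypothetical output of a transcription pass over one episode.
segments = [
    Segment(605.0, 612.5, "the telephone was inevitable but the character of the telephone was not"),
    Segment(1203.0, 1210.0, "a probability index for a statement made in a networked way"),
]

def search_audio(segments, query):
    """Return (start, end, text) for every segment whose transcript contains the query."""
    q = query.lower()
    return [(s.start, s.end, s.text) for s in segments if q in s.text.lower()]

for start, end, text in search_audio(segments, "probability index"):
    print(f"{start:.1f}s-{end:.1f}s: {text}")
```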
00:10:48.100
And this change in just this one domain of, of how people listen to a conversation that
00:10:57.460
So the flow, or the verb, of the remixing was, to your point,
00:11:03.200
the big change in music, which the music companies
00:11:08.620
didn't quite understand. They thought that the free aspect of downloading these files
00:11:14.140
was because people wanted to cheat them and get things for free.
00:11:17.820
But the chief value was the free-ness and freedom: that people
00:11:22.760
could take these music files, they could get less than an album, they could kind of remix
00:11:27.940
They could then manipulate them, make them into playlists.
00:11:31.740
They could do all these things that make it much more fluid and liquid, um, and manipulable
00:11:39.360
And, um, and that was the great attraction for people.
00:11:43.420
The fact that it doesn't cost anything was sort of a bonus; that wasn't the main event,
00:11:48.620
and that all the other things that you mentioned about this aspect of podcasts, of getting them
00:11:54.300
on demand, the shift from owning things to having access to things.
00:11:58.880
If you have instant access, anytime, anywhere in the world, that's part of the
00:12:06.020
shift there, the shift away from things that are static and monumental
00:12:14.280
to things that are incomplete and always in the process.
00:12:19.040
The movement from centralized to decentralized is also made possible when you have things
00:12:25.760
in real time. You know, when you're in a world like the Roman era,
00:12:31.920
where very little information flows, the best way to organize an army was to have people
00:12:38.200
give a command at the top and everybody below would, would follow it because the commander
00:12:43.900
had the most information, but in a world in which information flows liquidly and pervasively
00:12:51.100
everywhere, a decentralized system is much more powerful, because you can actually
00:12:57.900
have the edges steer as well as the center, and the center becomes less important.
00:13:07.500
And your example of, of the podcast is just a perfect example where all these trends in
00:13:17.540
And I would say in the future, we would continue to remix the elements inside a podcast and that
00:13:27.360
we would, you know, have podcasts within VR, and podcasts, as you said,
00:13:34.100
that are searchable and have AI remix portions of them, or that we would begin to do
00:13:42.900
all the things that we've done with texts and annotations and footnoting would be brought to
00:13:49.160
So if you just imagine what we've done with podcasts and now multiply that by every other
00:13:54.900
medium from GIFs to YouTube, um, we're entering into an era where we're going to have, um, entirely
00:14:06.180
brand new genres of art, expression, and media.
00:14:13.120
And we're just, again, at the beginning of this process.
00:14:16.560
What do you think about the, the new capacity to fake media?
00:14:20.480
So now, I think you must have seen this; I think it was a TED Talk where I initially saw it,
00:14:24.460
but it's been unveiled in various formats now where they can fake audio so well that given
00:14:32.080
the sample that we've just given them, someone could produce a fake conversation
00:14:36.080
between us where we said all manner of reputation destroying things.
00:14:41.000
And it wouldn't be us, but it would be, I think by current technology, undetectable as a fraud.
00:14:47.440
And I think there, there are now video versions of this where you can get someone's mouth to move
00:14:53.860
So it looks like they're delivering the fake audio, although the facial display is not totally
00:14:58.940
convincing yet, but presumably it will be at some point.
00:15:03.380
In a hand-waving way, not really knowing what I'm talking about,
00:15:07.480
I've imagined there must be some blockchain-based way of ensuring against that.
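One minimal ingredient of the provenance idea being gestured at here, sketched as an assumption rather than any existing system: a publisher releases a cryptographic fingerprint of the original audio, and anyone can later check whether a circulating copy matches it. Signing the fingerprint, or anchoring it to a blockchain, would be further steps not shown.

```python
# A minimal sketch of content-integrity checking via a published hash.
# This only proves a file matches what the source published; establishing
# who the source is (signatures, a ledger, etc.) is assumed and not shown.

import hashlib

def fingerprint(path: str) -> str:
    """SHA-256 digest of a file's bytes, hex-encoded."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, published_digest: str) -> bool:
    """True if the local copy matches the digest the source published."""
    return fingerprint(path) == published_digest
```

Note that this still bottoms out in trusting whoever published the digest, which circles back to the point Kevin makes next about relying on the reputation of the source.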
00:15:15.100
So, in, I don't know, 1984 or something, I did a cover story for the Whole Earth Review,
00:15:26.360
Um, it was called Photography as the End of Evidence for Anything.
00:15:29.900
And we used a very expensive Scitex machine.
00:15:36.220
It was like a multi-million-dollar machine, which cost tens of thousands of dollars an hour to use.
00:15:46.160
So, you know, National Geographic and Time and Life magazine had access to things
00:15:52.680
But we decided to Photoshop, uh, flying saucers arriving in San Francisco.
00:15:59.900
And, um, the point of this article was that, okay, this was the beginning of using photography
00:16:06.920
And what I kind of concluded back then was, well, there's two things.
00:16:13.660
One was, um, the primary evidence of believability was simply going to be the reputation of the source.
00:16:21.580
So, for most people, you wouldn't be able to tell.
00:16:25.000
And that we already have that thing with text, all right?
00:16:28.060
I mean, it's like with words: you could quote somebody, you could put some words
00:16:32.520
and say, Sam Harris says this and it would look just like it was real.
00:16:39.040
Well, the only way you could know was basically to trust the source, and
00:16:43.960
the same thing was going to happen with photography.
00:16:49.160
And so they're coming up to this place where text is, which is basically you can only rely
00:16:55.340
The second thing we discovered from this, and this also kind of applied to this
00:17:00.240
question of, when you have AI and agents, how would you be able to tell if they're human or not.
00:17:04.820
And the thing is that in most cases, like in a movie right now, you can't tell whether
00:17:12.440
something has been CGI, whether it's a real actor or not; we've already left that
00:17:18.560
behind, but we don't care, in a certain sense.
00:17:22.360
And when we call up on a phone and there's a robot, an agent there, and we're trying to
00:17:28.820
do a service problem, in some ways we don't really care whether it's a human or not.
00:17:32.980
If they're giving us good service, that is. But in the cases where we do care, there will always be a way to tell.
00:17:42.600
There are forensic ways to really decide whether this photograph has been doctored,
00:17:50.380
whether CGI has actually been used to make a frame, whether this audio file has
00:17:57.880
been altered. There will always be some way if you really, really care, but in most
00:18:05.080
And we will just have to rely on the reputation of the source.
00:18:10.140
And so I think we're going to kind of get to the place where text is already.
00:18:17.540
If someone's making it up, then you have no way to tell by looking at the text; you have to trust the source.
00:18:23.760
But that doesn't address the issue of fake news.
00:18:26.500
And for that, I think what we're going to see is a kind of truth-signaling
00:18:32.200
layer added on, maybe using AI, but mostly to devise what I think is going
00:18:39.440
to be kind of like a probability index for a statement, made in a networked
00:18:44.620
way. It'll involve Wikipedia and Snopes and, in places, maybe
00:18:49.980
other academics, but it would be like PageRank, meaning that you'll have a statement
00:18:58.840
It'll be like, that statement has a 95% probability or 98% probability of being true.
00:19:05.960
And then other statements will have a 50% probability of being true and others will have a 10% probability.
00:19:11.580
And that will come out of a networked analysis of these, these sites or these, you know, the
00:19:19.880
So, so these other sources have a high reliability because in the past they had been true.
00:19:24.620
And this network of corresponding sources, which are themselves ranked by other
00:19:34.100
sources in terms of their reliability will generate some index number to a statement.
00:19:41.180
And as the statements get more complex, that becomes a more difficult job to do.
00:19:45.760
And that's where the AI could become involved, in trying to detect the pattern out of all of these sources.
00:19:53.240
And so you'll get a probability score for the likely truthfulness of a statement.
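A toy sketch of the kind of networked, PageRank-like index described here, with entirely hypothetical sources and numbers: sources lend reliability to one another, and a statement's score is the reliability-weighted average of the verdicts of the sources that have checked it.

```python
# Toy "truth-signaling layer": PageRank-style reliability over a graph of
# sources that vouch for each other, then a reliability-weighted score for a
# statement. All source names and probabilities below are hypothetical.

vouches = {
    "encyclopedia": ["factcheck_site", "journal"],
    "factcheck_site": ["encyclopedia"],
    "journal": ["encyclopedia", "factcheck_site"],
    "anonymous_blog": ["anonymous_blog"],
}

def reliability(vouches, damping=0.85, iters=50):
    """Simple PageRank-style reliability score for each source."""
    sources = list(vouches)
    n = len(sources)
    score = {s: 1.0 / n for s in sources}
    for _ in range(iters):
        new = {s: (1 - damping) / n for s in sources}
        for s, outs in vouches.items():
            if not outs:
                continue
            share = damping * score[s] / len(outs)
            for t in outs:
                new[t] += share
        score = new
    return score

def statement_score(verdicts, rel):
    """Verdicts: source -> probability that source assigns to the statement."""
    weight = sum(rel[s] for s in verdicts)
    return sum(rel[s] * p for s, p in verdicts.items()) / weight

rel = reliability(vouches)
print(statement_score({"encyclopedia": 0.97, "journal": 0.95, "anonymous_blog": 0.2}, rel))
```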
00:20:03.460
That's kind of like a prediction market for epistemology.
00:20:07.820
So in light of what's happening and the trends you discuss in The Inevitable, if
00:20:14.320
you had a child going to college next year, what would you hope that he or she study and
00:20:20.680
or ignore in light of what opportunities will soon exist?
00:20:25.800
One of the things I talk about in the book is this idea that we're all going to be
00:20:29.080
perpetual newbies, no matter whether we're 60 or 16 or six, that, um, we're feeling very
00:20:35.880
good that we've mastered, you know, smartphones and we know laptops, but the gestures and how
00:20:41.340
things work, this kind of literacy, but, you know, in five years from now, there'll be a
00:20:45.780
new platform, virtual reality, whatever it might be.
00:20:48.660
And we'll have to learn another set of gestures and commands and logic.
00:20:54.120
And so the digital natives right now have a pass, because they are dealing with technology
00:21:03.380
that was invented, um, after they were born, but, but eventually, um, they're going to have
00:21:09.860
And they're going to be in the same position as the old folks of having to learn these things.
00:21:18.380
And I think really the only literacy or skill that should be taught in schools is so that
00:21:26.300
when you graduate, you have learned how to learn.
00:21:29.920
So learning how to learn is the meta-skill that you want to have.
00:21:35.540
And really, I think the only one that makes any difference because whatever language you're
00:21:39.980
going to learn is not necessarily going to be the one that you are going to get paid for
00:21:47.360
So I think this idea of learning how to learn is the real skill that you should
00:21:57.300
And for extra bonus, for the ultimate golden pass: if you can learn how you learn
00:22:05.880
best yourself, if you can optimize your own style of learning, that's the superpower that
00:22:11.240
you want that I think almost takes a lifetime to get to.
00:22:15.440
And some people like Tim Ferriss are much better at dissecting how they learn and understanding
00:22:22.040
But if you can get to that state where you really, really understand how you personally
00:22:32.500
And I think what we want to aim for is that every person on the planet today will
00:22:39.480
learn how to learn and will optimize how they learn best.
00:22:43.780
And that, I think, is what schools should really be aiming for.
00:22:48.520
Yeah, I was going to say our mutual friend, Tim, seems well poised to take advantage of
00:22:57.260
I'll set this up by just saying how this podcast got initiated, because though I
00:23:05.440
You recently sent me an email after hearing my podcast on robot ethics with Kate Darling.
00:23:12.360
And in that email, you sketched ways in which you think you and I disagree about the implications of AI.
00:23:21.520
You were also reacting to my TED talk on the topic and also a panel discussion that you saw
00:23:28.500
where I was on stage with Max Tegmark and Elon Musk and Jaan Tallinn and other people
00:23:33.860
who were at this conference on, on AI at Asilomar earlier this year.
00:23:38.500
You wrote in the setup to this email, and now I'm quoting you, there are at least
00:23:43.080
five assumptions the super AI crowd hold that I can't find any evidence to support.
00:23:48.340
In contradistinction to this orthodoxy, I find the following five heresies to have more evidence.
00:23:57.420
So, quote: one, smarter than humans is a meaningless concept.
00:24:01.420
Two, humans do not have general purpose minds and neither will AIs.
00:24:06.080
Three, emulation of human thinking will be constrained by cost.
00:24:11.840
Four, dimensions of intelligence are not infinite.
00:24:15.880
And five, intelligences are only one factor in progress.
00:24:20.820
Now, I think these are all interesting claims, and I think I agree with several of them, but
00:24:26.000
most of them don't actually touch what concerns me about AI.
00:24:31.520
So, I think we should talk about all of these claims, because I think they get at interesting
00:24:36.920
But I think I should probably start by just summarizing what my main concern is about AI.
00:24:42.020
So, as we talk about your points, we can also just make sure we're hitting
00:24:47.480
And, you know, you, when you talk about AI and when you talk about this one trend in your
00:24:53.180
book, perhaps the most relevant, cognifying, you know, essentially putting intelligence into
00:24:57.760
everything that can be made intelligent, you can sound very utopian, and I can sound very dystopian.
00:25:06.060
But I actually think we overlap a fair amount.
00:25:09.400
I guess my main concern can be summarized under the, the heading of the alignment problem,
00:25:15.960
which is now kind of a phrase of jargon among those of us who are worried about AI gone wrong.
00:25:22.400
And there are really two concerns here with AI, and I think that they're
00:25:29.760
concerns that are visited on any powerful technology.
00:25:33.540
And the first is just the obvious case of people using it intentionally in ways that cause harm.
00:25:40.500
So, it's just the kind of the bad people problem.
00:25:45.040
It's a problem that probably never goes away, but it's not the interesting problem here.
00:25:49.840
I think that the, the interesting problem is the unintended consequences problem.
00:25:53.360
So, it's the situation where even good people with the best of intentions can wind up committing
00:25:59.220
great harms because the technology is such that it's not, it won't reliably conform to
00:26:08.040
So, for a powerful technology to be safe, or to be operating within our
00:26:14.980
risk tolerance, it has to be the sort of thing that good people can reliably do good things
00:26:20.320
with it rather than accidentally end civilization or, or do something else that's terrible.
00:26:26.060
And for that, for this to happen with AI, it's going to have to be aligned with our values.
00:26:33.560
And so, again, this is often called the, the alignment problem.
00:26:36.120
When you have autonomous systems, increasingly powerful systems, and
00:26:42.160
ultimately systems that are more powerful than any human being and even any collection
00:26:46.520
of human beings, you need to solve this alignment problem.
00:26:51.120
But at this point, people who haven't thought about this very much get confused or, or at least
00:26:57.880
they wonder, you know, why on earth would an AI, however powerful, fail to be aligned with
00:27:04.640
our values, because after all, we, we built these things or we will build these things.
00:27:09.200
And they imagine a kind of silly Terminator style scenario where just, you know, robot armies
00:27:15.900
start attacking us because for some reason they have started to hate us and want to kill us.
00:27:21.060
And that really isn't the issue that even the most dystopian people are thinking about.
00:27:27.440
And it's not, it's not the issue I'm thinking about.
00:27:29.160
It's, it's not that our machines will become spontaneously malevolent and want to kill us.
00:27:34.640
The issue is that they, they can become so competent at meeting their goals that if their
00:27:41.740
goals aren't perfectly aligned with our own, then the unintended consequences could be so
00:27:50.000
And there are cartoon versions of this, as you know, which more clearly dissect
00:27:56.780
I mean, they're as cartoonish as the Terminator-style scenarios, but they're
00:28:00.820
I mean, it's something like Nick Bostrom's paperclip maximizer to review.
00:28:05.600
I think many people are familiar with this, but so Nick Bostrom imagines a machine whose
00:28:10.420
only goal is to maximize the number of paperclips in the universe, but it's a super powerful,
00:28:19.040
And given this goal, it could quickly just decide that, you know, every atom accessible,
00:28:24.860
including the atoms in your own body are, are best suited to be turned into paperclips.
00:28:30.520
And, you know, obviously we wouldn't build precisely that machine, but the point of, of
00:28:35.620
that kind of thought experiment is to point out that these machines, even super intelligent
00:28:40.640
machines, will not be like us, and they'll lack common sense, or they'll only have
00:28:47.220
the common sense that we understand how to build into them.
00:28:51.920
And so the bad things that they might do might be very counterintuitive to us, and therefore hard to anticipate.
00:28:59.500
And just, you know, kind of the final point I'll make to set this up.
00:29:01.900
I think we're misled by the concept of intelligence.
00:29:05.720
Because when we talk about intelligence, we assume that it includes things like common sense.
00:29:12.160
In the space of this concept, we insert something fairly anthropomorphic and familiar
00:29:19.880
But I think intelligence is more like competence or effectiveness, which is just an ability
00:29:26.780
to meet goals in an environment or across a range of environments.
00:29:32.060
And given a certain specification of goals, even a superhumanly competent machine or system
00:29:41.320
of machines might behave in ways that would strike us as completely absurd.
00:29:47.340
And yet we, we will not have closed the door to those absurdities, however dangerous, if we
00:29:53.060
don't anticipate them in advance or figure out some generic way to solve this alignment problem.
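As a toy illustration of the alignment point (a sketch, not anything proposed in the conversation): the same simple optimizer, given an objective that omits things we care about, happily converts everything into "paperclips," and behaves quite differently once those values are encoded in the objective.

```python
# Toy illustration of objective misspecification: a greedy optimizer converts
# resource units into paperclips whenever doing so improves its objective.
# Resource names and weights are invented for the example.

def plan(resources, objective):
    """Greedily convert resource units to paperclips while it improves the objective."""
    state = dict(resources, paperclips=0)
    while True:
        best = None
        for r in resources:
            if state[r] > 0:
                candidate = dict(state)
                candidate[r] -= 1
                candidate["paperclips"] += 1
                if objective(candidate) > objective(state):
                    best = candidate
        if best is None:
            return state
        state = best

resources = {"scrap_metal": 5, "infrastructure": 3, "biosphere": 2}

# Misspecified goal: only paperclips count.
naive = plan(resources, lambda s: s["paperclips"])

# Goal with the (crudely encoded) things we actually value.
aligned = plan(resources, lambda s: s["paperclips"] + 100 * s["infrastructure"] + 1000 * s["biosphere"])

print(naive)    # everything, biosphere included, becomes paperclips
print(aligned)  # only the scrap metal is converted
```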
00:29:58.860
Um, so I think a good place to start is where we agree.
00:30:03.060
And the first thing I think we both agree on is that we have
00:30:08.720
a very poor understanding of what our own intelligence is as humans.
00:30:12.880
And I would make a further statement, that I think the common conception that we have
00:30:20.420
of IQ is a very misleading notion of intelligence in humans, that we can kind of rank intelligences
00:30:29.340
on a relative scale, a single dimension, and this is taken from Nick Bostrom's
00:30:34.940
own book, that you have a single dimension, and you have the intelligence
00:30:39.800
of a mouse say, or the IQ of a mouse, and then a rat's a little bit more and then that
00:30:45.400
And then you have the kind of a really dumb human and average human, and then a super
00:30:50.320
And then there's the, the super AI, which is kind of off the charts in terms of, uh,
00:30:57.920
And that, I think is a very, very misleading idea of what intelligence is.
00:31:03.960
Human intelligence is obviously a suite, a symphony, a portfolio of dozens,
00:31:11.660
20 maybe, who knows how many, different modes or nodes of thinking. There's perception,
00:31:18.060
there's symbolic reasoning, deductive reasoning, inductive reasoning, emotional intelligence,
00:31:24.280
spatial navigation, long-term memory, short-term memory.
00:31:28.020
There are many, many different nodes of thinking.
00:31:32.700
And of course, that complex varies person by person.
00:31:37.820
And, um, when we get into the animal kingdom, we have a different mixture of, of these.
00:31:47.260
But in some cases, a particular node that we might have may actually
00:31:54.740
be higher, maybe superior, in an animal. I mean, if you've
00:32:01.100
seen some of these chimpanzees remembering the locations of numbers,
00:32:06.220
it's like, oh my gosh.
00:32:13.100
We should just describe that so that people are aware of what, because they should find that
00:32:17.420
What it is, is a chimpanzee has a screen, and there's a series of numbers in sequence,
00:32:23.760
or numbers that appear in different positions on the screen very, very briefly.
00:32:29.240
It's like a checkerboard that suddenly illuminates with, let's say, 10 different digits,
00:32:33.480
and you have to select all the digits in order; you have
00:32:38.800
to hit the right squares and the numbers then disappear.
00:32:43.000
And it sees it for like a split second, and you have
00:32:46.480
to remember where they are and you have to go back and hit the locations in order.
00:32:50.620
And no human can do this, but, for some reason, chimps seem to be able to do it.
00:32:57.680
So they have some kind of short-term memory or long-term memory.
00:33:02.020
I'm not sure what kind of memory, what kind of spatial memory, that is, that really would
00:33:08.920
And so I think we both agree that human intelligence is very complex.
00:33:14.600
And my suggestion about thinking about AI is always to use the plural,
00:33:22.080
to try to talk about AIs, because I think as we make these synthetic types of minds, we're
00:33:29.020
going to make thousands of different species of them with different combinations of these
00:33:34.860
primitive, these kind of primitive, uh, modes of thinking.
00:33:39.520
And what we think of as ourselves, our own minds, we think
00:33:49.580
It's very much like the illusion of us having an "I" or being a center.
00:33:54.660
There's an illusion that we have a kind of a unified universal intelligence.
00:33:58.560
But in fact, we've evolved a very, very specific combination of elements
00:34:10.040
in thinking that are not really general purpose at all.
00:34:14.540
It's a very specific purpose: to survive on this planet.
00:34:22.200
When we compare our intelligence to the space of possible intelligences, we're going to see
00:34:29.580
that we're not at the center of some universal, but we're actually at the edge, like we are
00:34:33.920
in the real galaxy of possible minds.
00:34:38.200
And what we're doing with AI is actually going to make a whole zoo of possible ways of thinking,
00:34:44.480
including inventing some ways of thinking that don't exist in biology at all today.
00:34:53.380
So when we first tried to make artificial flying, we looked at natural flight, and mostly
00:35:03.460
And we tried to artificially fly by flapping wings.
00:35:07.780
The way we made artificial flying is we invented a type of flight that does not exist in nature
00:35:12.400
at all, which was a fixed wing and a propeller.
00:35:14.520
And we are going to do the same thing, inventing ways of thinking that cannot really
00:35:21.480
occur in biology, in biological tissue, and that will be a different way of thinking.
00:35:29.280
And we'll combine those into maybe many, many new complexes of types of
00:35:37.300
thinking, to do and achieve different things.
00:35:41.400
And there may be, uh, problems that are so difficult in science or business that human
00:35:49.220
type thinking alone cannot reach, so we will have to work with a two-step process of inventing
00:35:55.820
a different kind of thinking with which we can then work together to solve some of these problems.
00:35:59.840
So I think just like there's a kind of a misconception in thinking that humans are sort of on this ladder
00:36:08.400
of evolution where we are superior to the animals that are below us, in reality, the way evolution
00:36:17.700
works is that it kind of radiates out from a common ancestor of 3.7 billion years ago.
00:36:24.580
So we're all equally evolved, and the proper way to think about it is like,
00:36:29.760
are we superior to the starfish, to the giraffe?
00:36:35.680
They have all enjoyed the same amount of evolution as we have.
00:36:39.020
The proper way to map this is in a possibility space, saying these
00:36:45.080
creatures excel in this niche and these creatures excel in this niche and they aren't really superior
00:36:51.200
to us in, in that way. It's even hard to determine whether they're more complicated than us or more
00:36:57.040
complex. So I think a better vision of AIs is to have a possibility space of all the different
00:37:04.960
possible ways you can think. And some of these complexes will be greater than what humans are,
00:37:11.860
but we can't have a complex of intelligence that maximizes everything. That's just an engineering
00:37:21.180
principle. In engineering, you cannot optimize everything you want to do at once. You're
00:37:28.260
always bound by resources and time. So you have to make trade-offs. And if you want to have a
00:37:35.700
Swiss army knife version of intelligence that has all the different things, then they're
00:37:41.840
going to be kind of mediocre in all the things that they do. Um, you can always excel in another
00:37:48.560
version, another dimension by just specializing in that particular node of, of thinking and thought.
00:37:55.620
And so, um, this idea that we're going to make this super version of human intelligence that somehow
00:38:07.500
excels us in every dimension, I just don't see any evidence for that.
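A small sketch of this framing, with illustrative numbers only: if a mind is a profile over many modes of thinking rather than a point on a single IQ-like scale, many pairs of minds are simply not comparable, and a fixed resource budget forces the specialization trade-offs described above.

```python
# Illustrative only: treat a mind as a profile over several modes of thinking.
# The capacity names and scores are made up for the example.

profiles = {
    "human":        {"symbolic": 9, "spatial": 6, "emotional": 8, "sequence_memory": 4},
    "chimp":        {"symbolic": 2, "spatial": 7, "emotional": 5, "sequence_memory": 9},
    "chess_engine": {"symbolic": 10, "spatial": 1, "emotional": 0, "sequence_memory": 10},
}

def dominates(a, b):
    """True if profile a is at least as good as b on every mode and better on at least one."""
    return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

names = sorted(profiles)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        if not dominates(profiles[x], profiles[y]) and not dominates(profiles[y], profiles[x]):
            print(f"{x} and {y} are not comparable on a single scale")

# Same total "budget" spent two ways: a generalist is mediocre everywhere,
# a specialist excels on one mode at the cost of the others.
generalist = {"symbolic": 5, "spatial": 5, "emotional": 5, "sequence_memory": 5}
specialist = {"symbolic": 17, "spatial": 1, "emotional": 1, "sequence_memory": 1}
assert sum(generalist.values()) == sum(specialist.values())
```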
00:38:14.220
Let me try to map what you just said on to the way I think about it, because I agree with
00:38:20.380
most of what you said. I think the last bit I don't agree with, but I certainly, and I, I come to a
00:38:26.620
different conclusion or I have a different, at least I have a very vivid concern that survives
00:38:32.940
contact with all the things you just said. I certainly agree that IQ does not map on to
00:38:39.580
the way we think about the intelligence of other species. To ask, you know, what is the IQ of an
00:38:44.700
octopus doesn't make any sense. And it's fine to think about human intelligence, not as a, a single
00:38:51.980
factor, but as, as a constellation of things that we care about. And our, our notion of intelligence could
00:38:58.620
be fairly elastic or that we could suddenly care about other things that, that we haven't cared
00:39:03.580
about very much. And we would want to wrap that up in, in terms of assessing a person's intelligence.
00:39:09.260
And as you mentioned, emotional intelligence, for instance, I think that's a, a discrete capacity that,
00:39:15.580
that, you know, doesn't segregate very reliably with something like mathematical intelligence, say,
00:39:20.860
and, you know, it's, it's fine to talk about it. I think there are reasons why you might want to test
00:39:27.020
it separately from IQ. And, and I think the notion of general intelligence as measured by IQ is, is more
00:39:33.820
useful than, than many people let on. But I definitely take your point that we're this constellation of
00:39:39.180
cognitive capacities. So putting us on a spectrum with a, with a chicken, you know, as I did in,
00:39:45.660
in my Ted talk is more or less just saying that you can issue certain caveats, which, which I didn't
00:39:51.740
issue in that talk, but issuing those caveats still makes this a valid comparison, which is that of the
00:39:57.900
things we care about in cognition, of the things that make us able to do the extraordinarily heavy
00:40:06.460
lifting and unique things we do, like, you know, building a global civilization and producing science
00:40:13.340
and art and mathematics and music and everything else that is making human life both beautiful and
00:40:19.420
durable. There are, there are not that many different capacities that we need to enumerate in order to
00:40:25.580
capture those abilities. It may be 10, it's not a thousand, and a chicken has very few of them. Now,
00:40:33.660
a chicken may be good at other things that we can't even imagine being good at, but for the purposes of
00:40:38.460
this conversation, we don't care about those things, and those things are clearly not leading to chicken
00:40:43.660
civilization and chicken science and the chicken version of the internet. So of the things we care
00:40:49.980
about in cognition, and again, I think the list is, is small, and it's possible that there are things on
00:40:55.980
the list that we really do care about that we haven't discovered yet. Take something like emotional
00:41:00.700
intelligence. Let's say that we, we roll back the clock, you know, 50 years or so, and there's very few
00:41:06.700
people thinking about anything like emotional intelligence, and then put us in the presence of,
00:41:13.500
you know, very powerful artificial intelligent technology, and we don't even think to build
00:41:19.180
emotional intelligence into our systems. It's clearly possible that we could leave out something
00:41:24.140
that is important to us just because we haven't conceptualized it. But of the things we know that
00:41:29.580
are important, there's not that many of them that lead us to be able to, you know, prove mathematical
00:41:34.780
theorems or invent scientific hypotheses or propose experiments, you know, and then if you add things
00:41:42.940
like even emotional intelligence, the ability to detect the emotions of other people in their tone
00:41:49.900
of voice and in their facial expressions, say. These are fairly discrete skills, and here's where I begin
00:41:57.580
to edge into potentially dystopian territory. Once the ground is conquered in artificial systems,
00:42:05.740
it never becomes unconquered. Really, the preeminent example here is something like chess, right? So,
00:42:10.940
for the longest time, chess playing computers were not as good as the best people, and then suddenly they
00:42:17.820
were as, you know, more or less as good as the best people, and then, you know, more or less 15 minutes
00:42:22.460
later, they were better than the best people, and now they will always be better than the best people.
00:42:27.660
And I think we're living in this bit of a mirage now where you have human computer teams, you know,
00:42:34.700
cyborg teams, you know, much celebrated by people like Garry Kasparov, who's been on the podcast talking
00:42:40.700
about them, which are for the moment better than the best computers. So, you know, having the ape still
00:42:46.460
in the system gives you some improvement over the best computer, but ultimately, the ape will just
00:42:53.100
be adding noise, or so I would predict. And once computers are better at chess and better than any
00:43:01.020
human computer combination, that will always be true, but for the fact that we might merge with
00:43:07.260
computers and cease to be merely human. And when you imagine that happening to every other thing we care
00:43:14.460
about in the mode of cognition, then you have to imagine building systems that escape us in their
00:43:23.100
capacities. They could be highly alien in terms of what we have left out in building them, right? So,
00:43:31.980
again, if we had forgotten to build in emotional intelligence, or we didn't understand emotional
00:43:38.700
intelligence enough to build everything in that humans do, we could find ourselves in the presence of
00:43:43.980
you know, say, the most powerful autistic system, you know, the universe has ever devised, right? So
00:43:50.860
we've left something out, and it's only kind of quasi familiar to us as a mind, but, you know, godlike in
00:43:57.900
its capacities. I think it's just the fact that once the ground gets conquered in an artificial system,
00:44:05.340
it stays conquered. And by definition, you know, the resource concerns that you mentioned at the end,
00:44:12.220
you know, if you build a Swiss army knife, it's not going to be a great sword, and it certainly isn't
00:44:16.700
going to be a great airplane. Well, then, I just think that doesn't actually describe what will
00:44:23.100
happen here. Because when you compare the resources that a superhuman intelligence will have, especially
00:44:30.140
if it's linked to the internet, you compare that to a human brain or any collection of human brains,
00:44:36.380
I don't know how many orders of magnitude difference that is. And in terms of the time frame of
00:44:41.660
operation, I mean, you're talking about systems operating a billion times faster than a human brain,
00:44:47.340
there's no reasonable comparison to be made there. And that's where I feel like the possibility of
00:44:53.580
something like the singularity or something like an intelligent explosion is there and worth worrying
00:44:59.740
about. So, again, I'd like to go where we agreed. So, do you use the term?
00:45:06.540
If you'd like to continue listening to this conversation, you'll need to subscribe at
00:45:10.220
samharris.org. Once you do, you'll get access to all full-length episodes of the Making Sense podcast,
00:45:15.820
along with other subscriber-only content, including bonus episodes and AMAs,
00:45:20.860
and the conversations I've been having on the Waking Up app. The Making Sense podcast is ad-free
00:45:25.900
and relies entirely on listener support. And you can subscribe now at samharris.org.