Making Sense - Sam Harris - December 19, 2016


#57 — An Evening with Richard Dawkins and Sam Harris (1)


Episode Stats

Length

42 minutes

Words per Minute

145.5178

Word Count

6,162

Sentence Count

384

Misogynist Sentences

1

Hate Speech Sentences

4


Summary

On today's podcast, I play the audio from the first of two live events I did with Richard Dawkins in Los Angeles last month. These were fundraisers for his foundation, the Richard Dawkins Foundation for Reason and Science, which is also in the process of merging with the Center for Inquiry, making them the largest foundation for defending science and secularism from politically weaponized religion. As you'll hear, we had a lot of fun, and it was really satisfying to have a conversation like this live, as opposed to privately over Skype. As I say at the end of this episode, it has given me an idea for how to produce more podcasts like this. The audio from the second event will follow in a later podcast, and I can guarantee the two nights will be reasonably different, because different questions came up, though we didn't hew too narrowly to them; we just had a conversation. And now I give you an evening with Richard Dawkins, the first night. - Sam Harris

Sam Harris is the host of the Making Sense Podcast, which explores the intersection of science, philosophy, and culture, and he is the author of several New York Times bestselling books. Making Sense is ad-free and made possible entirely through the support of its subscribers, so if you enjoy what we're doing here, please consider becoming one by subscribing at samharris.org.


Transcript

00:00:00.000 Welcome to the Making Sense Podcast.
00:00:08.820 This is Sam Harris.
00:00:10.880 Just a note to say that if you're hearing this, you are not currently on our subscriber
00:00:14.680 feed and will only be hearing the first part of this conversation.
00:00:18.420 In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at
00:00:22.720 samharris.org.
00:00:24.060 There you'll find our private RSS feed to add to your favorite podcatcher, along with
00:00:28.360 other subscriber-only content.
00:00:30.520 We don't run ads on the podcast, and therefore it's made possible entirely through the support
00:00:34.640 of our subscribers.
00:00:35.900 So if you enjoy what we're doing here, please consider becoming one.
00:00:46.620 On today's podcast, I'll be playing the audio from the first of two live events I did with
00:00:52.780 Richard Dawkins in Los Angeles last month.
00:00:55.160 These were fundraisers for his foundation, the Richard Dawkins Foundation for Reason
00:00:59.680 and Science, which is also in the process of merging with the Center for Inquiry, making
00:01:04.640 them the largest foundation for defending science and secularism from politically weaponized
00:01:11.240 religion.
00:01:12.360 Their work is suddenly even more relevant in the U.S. because although Trump himself isn't
00:01:17.240 a religious demagogue, he's promised to appoint a few to the Supreme Court.
00:01:21.720 And he's also put a creationist in charge of the Department of Energy, which both stewards
00:01:27.040 our nuclear weapons and funds more basic science research than any other branch of government.
00:01:31.980 So now we have Rick Perry in charge of all that.
00:01:35.340 His immediate predecessors were both physicists.
00:01:38.240 One was a Nobel laureate.
00:01:39.660 And Perry is a man who I would be willing to bet my life couldn't utter three coherent sentences
00:01:46.800 on the topic of energy as a scientific concept.
00:01:50.440 So I would urge you to become a member of CFI or the Richard Dawkins Foundation.
00:01:55.160 One membership now covers both organizations.
00:01:58.140 And once you are a member, you'll occasionally receive action alerts requesting that you contact
00:02:02.920 your elected representatives on matters of public policy.
00:02:06.080 As many have noted, non-believers are somewhere between 10 and 25 percent of the U.S. population.
00:02:13.520 It's hard to know for sure.
00:02:15.580 But we almost certainly outnumber many other subgroups in the U.S.
00:02:20.940 And we are disproportionately well-educated, needless to say.
00:02:24.880 And yet we have almost no political power right now.
00:02:27.580 This will only change once we make ourselves heard.
00:02:31.020 Richard was doing a speaking tour to raise funds for his foundation and for CFI.
00:02:36.440 And he asked me to join him at one of these events.
00:02:38.840 And our event in L.A. sold out almost immediately.
00:02:42.800 And so we booked the hall for a second night, and that sold out too.
00:02:46.640 And I'll bring you the audio from that second event in a later podcast.
00:02:50.660 But as you'll hear, we had a lot of fun, and it was a great crowd.
00:02:54.260 And it was really satisfying to have a conversation like this live,
00:02:58.060 as opposed to privately over Skype.
00:03:01.020 So, as I'll say at the end, this has given me an idea for how to produce some more podcasts like that.
00:03:07.140 And now I give you an evening with Richard Dawkins, the first night.
00:03:16.660 Thank you all for coming.
00:03:17.880 This is really, it's an honor to be here, and it really is an honor to be here with you, Richard.
00:03:23.740 I get to return the favor.
00:03:25.060 He had me at Oxford, I think, five years ago.
00:03:28.560 So welcome to Los Angeles.
00:03:30.280 So I'm going to, this is going to be very much a conversation, but what I did, I was worried about this.
00:03:38.660 I wasn't worried about tonight, I was worried about tomorrow night.
00:03:41.820 My fear was that Richard and I would have a scintillating conversation tonight,
00:03:45.200 and then tomorrow night, try doggedly to recapitulate it, word for word, and yet feign spontaneity.
00:03:55.780 And if you know my position on lying, you know that doesn't work.
00:03:59.060 So what I did is I went out to all of you asking for questions, and I got thousands.
00:04:05.920 And so I picked among what looked promising.
00:04:09.460 So I can guarantee that the two nights will be reasonably different, because different questions will come up.
00:04:14.820 But we won't hew too narrowly to the questions.
00:04:17.440 We'll just have a conversation.
00:04:19.440 But as we come out here, I find that I want to ask you, Richard, about your socks.
00:04:24.220 And I'm not...
00:04:24.900 I'm not sure what the question is.
00:04:29.680 I've just come from Las Vegas, from the CSICon conference, and one of the things we had was a workshop on cold reading,
00:04:38.140 which is the technique whereby so-called mentalists are supposed to read each other's thoughts.
00:04:42.840 And what they're really doing is just simply looking at the clothes and the general appearance and assessing it.
00:04:48.100 And we had to pair off for this workshop, and I was with a nice young woman, and we sort of sized each other up.
00:04:54.600 And I said to her, I think I'm getting that you come from somewhere in the west of the states.
00:05:01.400 I think maybe not California, maybe a bit further north.
00:05:05.400 And of course, I was simply reading her label, which said she came from Oregon.
00:05:11.100 And then she summed me up, and she said, I think you may have some problem with your eyes.
00:05:18.220 Maybe colorblind.
00:05:19.820 I'm serious about this.
00:05:28.160 I'm trying to spread a meme for wearing odd socks.
00:05:33.180 There's a kind of tyranny of forcing us to buy socks in pairs.
00:05:44.620 Shoes have chirality.
00:05:46.840 Left shoe and right shoe are not interchangeable.
00:05:48.980 But socks don't.
00:05:51.460 And when you lose one of a pair of socks, you're forced to throw the other one away.
00:05:56.900 Which is absurd.
00:05:58.620 So what I want...
00:05:59.980 Although, honestly, Richard, you just told me a story that suggests that shoes are interchangeable.
00:06:06.720 Oh my God, that's right.
00:06:07.900 That's rather an embarrassing story.
00:06:09.840 Someone is going to find the relevant video on the internet.
00:06:12.340 I will tell the story now you've let the cat out of the bag.
00:06:14.980 I was doing a television film called Sex, Death, and the Meaning of Life.
00:06:21.780 And in the death episode, we were talking about suicide.
00:06:28.160 And there's a famous suicide spot.
00:06:30.020 It's a bit like San Francisco, the Golden Gate Bridge, where people have famously jumped to their death.
00:06:34.620 And all around this place, Beachy Head, which is a very, very high cliff in the south of England,
00:06:40.060 there are rather sad little crosses where people have jumped off.
00:06:44.100 And we were filming the sequence on suicide, and I had to walk very solemnly and slowly and in a melancholy frame of mind past these crosses.
00:06:54.840 And the camera was focused on my feet, walking past these little low crosses.
00:06:59.900 And I felt incredibly uncomfortable.
00:07:02.960 I had this sort of uncanny feeling of being uncomfortable, and I couldn't understand why.
00:07:11.200 And then eventually, it was my feet that were uncomfortable, walking past these crosses.
00:07:17.700 And eventually, the director called cut, and we went off, and I took my shoes off, because they were so painful.
00:07:23.200 And only then did I realize I put them on the wrong way around.
00:07:27.740 So this is preserved for posterity in close-up.
00:07:30.940 I want to see that video. Someone find that video.
00:07:34.420 The worst thing is, none of the television audience ever wrote in to complain about this.
00:07:40.000 So maybe this at least will arouse their attention.
00:07:47.480 So the first question, Richard, which I thought could provoke some interesting reflection, is,
00:07:53.860 why do you both court so much controversy?
00:07:58.020 Well, we don't, do we?
00:08:00.940 We don't court it. It pursues us.
00:08:04.420 Well, I think, I mean, what I've noticed is that there are undoubtedly people who are friends of ours,
00:08:12.240 colleagues of ours, who agree with us down the line,
00:08:15.060 who seem to feel no temptation to pick all of the individual battles we pick.
00:08:21.960 And one doesn't have to be a coward not to want to fight all of these culture war battles, although it helps.
00:08:32.860 But we have friends who are decidedly not cowards, who, I mean, someone like Steve Pinker,
00:08:37.380 he stakes out controversial positions, but he is not in the trenches in quite the same way as we are.
00:08:44.100 And I'm wondering what you think about that.
00:08:46.980 I mean, did you see a choice for yourself?
00:08:49.360 Do you find yourself revisiting this choice periodically?
00:08:51.960 I think it's a perfectly respectable position to take that a scientist has better things to do.
00:08:57.880 And I don't take that position, and I think you don't either.
00:09:00.700 I do think it's important to fight the good fight when we do have, when science, when reason has vocal and powerful and well-financed enemies.
00:09:15.540 And so I'm not sure what particular battles the questioner has in mind when he says we court controversy.
00:09:22.320 But I suppose I believe so strongly in truth, and if I see truth being actively threatened by competing ideologies which actually not only would fight for the opposite of truth,
00:09:41.660 but would indoctrinate children in the opposite of truth, I feel impelled to fight only verbally.
00:09:50.660 I mean, I don't feel impelled to actually get a rifle or something.
00:09:55.220 Well, there's time yet.
00:10:00.660 So I guess the dogma that has convinced so many fellow scientists and intellectuals, academics, that there is no reason to fight,
00:10:12.900 certainly one of those dogmas is Stephen Jay Gould's idea of NOMA, non-overlapping magisteria.
00:10:18.520 That strikes me as a purely wrong-headed and destructive idea.
00:10:23.480 Do you want to explain that to people?
00:10:25.300 I think so. I think we probably agree about that.
00:10:26.860 Non-overlapping magisteria.
00:10:28.220 He wrote a book called, what was it called again?
00:10:31.700 Rocks of Ages?
00:10:32.560 Rocks of Ages, that's right.
00:10:34.940 So science has the age of the rocks, and religion has the rock of ages.
00:10:38.360 And the idea was that science and religion both have their legitimate territories,
00:10:44.760 which they shouldn't impinge upon each other.
00:10:49.520 Science has the truth about the real world, and that's science's department.
00:10:53.880 Religion has what he described as moral questions and, I think, deep questions of existence.
00:11:01.000 Meaning and morality.
00:11:02.180 Well, I would strongly dispute the idea that we should get our morals from religion.
00:11:09.480 For goodness sake, whatever else we get our morals from, it must not be religion.
00:11:13.680 That would be...
00:11:15.520 If you imagine what the world would be like if we actually did get our morals from the Bible or the Koran,
00:11:21.920 it would be totally appalling, and was appalling in the time when we did get from the Bible,
00:11:27.360 and it is now appalling in those countries where they get it from the Koran.
00:11:31.240 So don't let us get our morals from religion.
00:11:33.960 As for the deep fundamental questions, I take those to be things like,
00:11:39.940 where did the laws of physics come from?
00:11:42.980 What is the origin of all things?
00:11:45.100 What is the origin of the cosmos?
00:11:48.660 What happened before the Big Bang?
00:11:51.520 Those are scientific questions.
00:11:53.520 It may be that science can never answer them, but if science cannot answer them,
00:12:00.020 sure as hell religion can't answer them.
00:12:02.760 I don't actually think anything can answer them if science can't.
00:12:06.580 It's an open question whether things like the origin of the physical constants,
00:12:13.740 those numbers which physicists can measure but can't explain,
00:12:17.500 the origin of the laws of physics, whether those will ever be explained by science,
00:12:21.600 if they are well and good, if they're not, then nothing will explain them.
00:12:28.020 The idea, I mean, Steve Gould was careful to say that these separate magisteria
00:12:34.060 must not encroach on each other's territory,
00:12:36.960 and so the moment religion encroaches on science's territory,
00:12:41.940 for example in the case of miracles, then it's fair game for scientific criticism.
00:12:46.260 But my feeling about that is that if you take away the miracles from religion,
00:12:51.840 you've taken away most of why people believe in them.
00:12:56.660 People believe in the supernatural because they believe biblical or Quranic stories
00:13:02.020 which suggest that there have been supernatural miracles,
00:13:05.660 and if you deprive them of that, then they've lost everything.
00:13:09.160 To take Christianity as only one example, that has been spelled out in every generation.
00:13:14.400 I mean, starting with Paul, he said, you know,
00:13:16.500 if Christ be not risen, your faith is vain, or something close to that.
00:13:21.280 So you can't get around the fact that religious people care about what's true,
00:13:29.280 and they purport to be making claims, truth claims, about the nature of reality.
00:13:34.040 They think certain historical figures actually existed, some of them may be coming back.
00:13:39.760 Yes, virgin birth.
00:13:41.100 Books issued occasionally from a divine intelligence, and so there's just no way to...
00:13:47.740 I never met Gould, but I just can't believe the currency this idea has in science.
00:13:55.740 No, I agree.
00:13:56.480 It's become very fashionable among the scientific establishment.
00:14:00.040 It was more or less endorsed by the U.S. National Academy of Sciences.
00:14:04.680 As for the separation, as for the idea that religion doesn't stray into science's territory,
00:14:13.380 imagine the following scenario, imagine that some sort of scientific evidence,
00:14:18.020 perhaps DNA evidence, were discovered, perhaps somewhere in a cave in Palestine,
00:14:26.020 and it was demonstrated that, say, Jesus never had a father.
00:14:31.160 I mean, it's inconceivable how that could happen.
00:14:32.940 Just suppose it was, suppose there was scientific evidence.
00:14:35.460 Can you imagine theologians saying, oh, that's science, that's not our department.
00:14:39.320 We're not going to...
00:14:40.280 They would love it.
00:14:42.640 It would be meat and drink to them.
00:14:44.420 Yeah, yeah.
00:14:46.800 Many people who are not atheists believe that your efforts against religion are wasted,
00:14:51.400 and that the net result of your work is to simply offend religious people.
00:14:55.300 There's a widespread myth that people can't be reasoned out of their faith.
00:14:59.340 Please talk about this.
00:15:00.460 It's just uncanny that there are the most memorable quips and quotes and phrases.
00:15:07.300 Anything that is aphoristic seems to have undue influence on our thinking.
00:15:12.560 And there's this aphorism that is usually attributed to Swift.
00:15:16.260 And I think he says something like it.
00:15:17.700 It's not quite the version that has been passed down to us.
00:15:20.840 But this idea that you can't reason someone out of a view that he wasn't reasoned into.
00:15:26.100 And this just strikes the mind of Homo sapiens as so obviously true.
00:15:31.520 And if you look at my inbox, it is so obviously false.
00:15:34.960 So tell me about your experience reasoning with your readers.
00:15:42.340 I think it would be terribly pessimistic to think that you cannot reason.
00:15:46.020 I mean, I think I'd just give up, probably die, if I thought I couldn't reason people out of their silliness.
00:15:54.820 I would accept, would you agree with this, that there are some people who demonstrably do know all the evidence and even understand the evidence, but yet still persist in...
00:16:10.820 Yeah, well, so there'll be a couple of questions that will bring us onto that territory because there's more to reason about than science has tended to allow or that secular culture has tended to allow.
00:16:24.540 So people have these intense transformative experiences or they have these hopes and fears that aren't captured by you saying, don't you understand the evidence for evolution?
00:16:36.560 But this is more of a conversation that people don't tend to have.
00:16:41.460 But yeah, I would agree that people certainly resist conclusions that they don't like the taste of.
00:16:47.060 I can think of two examples.
00:16:49.160 One I mentioned in the reception beforehand, a professor of astronomy somewhere in America who writes papers, mathematical papers in astronomical journals,
00:17:00.600 in which his mathematics, his mathematical ideas, accept that the universe is 13.8 billion years old, and yet he privately believes it's 6,000 years old.
00:17:13.700 So here is a man who knows his physics, he knows his astronomy, he knows the evidence that the universe is 13 billion years old.
00:17:21.760 And yet, so split-brained is he, that he actually privately departs from everything in his professional life.
00:17:32.520 Well, surely we have to accept that he, I don't know, cannot be reasoned out, but I mean, he already knows the evidence and will not be reasoned out of his foolishness.
00:17:45.720 Yeah, I didn't say that you could always succeed, but I think, and clearly there are, I mean, I have this bias, as you do,
00:17:55.240 that if the conversation could just proceed long enough, the ground for science would continually be conquered and it never gets reversed.
00:18:03.880 And it is being and will be.
00:18:05.460 Yeah, yeah, and you never see the, I mean, this is a unidirectional conquest of territory.
00:18:11.180 So you never see a point about which science was once the authority, but now the best answer is religious.
00:18:19.320 Yeah, that's right.
00:18:19.980 Right, but you always see the reverse of that.
00:18:22.340 That's right.
00:18:22.700 And that's...
00:18:23.440 And actually, most scientists who call themselves religious, if you actually probe them, I mean, they don't believe really stupid things like six-day creation and things.
00:18:33.360 Most of them don't.
00:18:33.940 Yeah, although I find that Christian scientists, not Christian Scientists as in the cult, but scientists who happen to be Christian, believe much more than your average rabbi.
00:18:47.180 This is a way.
00:18:48.180 That's true.
00:18:48.820 Yeah.
00:18:49.280 And that's just Christianity, and Muslim scientists no doubt return the favor.
00:18:53.680 I get the feeling your average rabbi, like your average chaplain of an Oxford college, doesn't actually believe in God at all.
00:19:00.580 Yeah.
00:19:00.860 I've met that rabbi.
00:19:06.700 So, there's a couple of fun questions here that I just wanted to hear Richard react to.
00:19:15.420 Are there any biological extinctions that you would consider virtuous?
00:19:19.340 For instance, should we eradicate the mosquito?
00:19:22.280 You have ten seconds to decide.
00:19:23.700 It would have to be more than one mosquito.
00:19:29.460 There's the malaria mosquito, the yellow fever mosquito.
00:19:33.920 Yeah, all mosquitoes.
00:19:35.280 All mosquitoes.
00:19:38.580 Mosquitoes are unbelievably beautiful creatures.
00:19:41.060 That's the most irrational thing ever.
00:19:50.020 The great expert on fleas.
00:19:56.900 She presented the Department of Zoology in Oxford with a gigantic blown-up photograph of a mosquito, and it was a fantastic work of art.
00:20:08.280 By a malevolent god.
00:20:09.600 Yes.
00:20:11.820 If ever there were proof of God's malevolence, it's got to be the mosquito.
00:20:16.320 I have no hesitation in killing individual mosquitoes.
00:20:26.320 Wouldn't you want to be a little more efficient than that with CRISPR or something?
00:20:31.700 I haven't thought about it before.
00:20:33.960 I think I would not wish to completely extinguish.
00:20:40.420 Can I throw a little more on the balance?
00:20:42.340 We've had, reliably, year after year, two million people killed by mosquito-borne illness.
00:20:48.420 Now it's cut down to, I think, 800,000.
00:20:50.700 So we're making progress with bed nets.
00:20:52.940 But for some reason, I find myself less reluctant to extinguish the malarial parasite that the mosquito bears.
00:21:00.980 But that's probably not very logical.
00:21:03.120 I mean, we have extinguished the smallpox virus, except for a few lab cultures.
00:21:14.900 Yes, and then, like geniuses, then we tell people how to synthesize it online.
00:21:18.500 So the flip side of that, of course, is the Jurassic Park question.
00:21:25.900 Should we reboot the T-Rex?
00:21:28.840 Yes.
00:21:29.280 If we had to.
00:21:34.740 That's fantastic.
00:21:36.180 I wish, I mean, I thought the Jurassic Park method of doing it was incredibly ingenious, and I love that.
00:21:42.160 What was not ingenious was the ludicrous injection of, was it, chaos theory, or one of those nine days wonder fashionable things.
00:21:54.640 I don't remember.
00:21:55.300 But the idea of getting mosquitoes in amber and extracting DNA and reconstructing dinosaurs, that's an amazingly good science fiction idea.
00:22:06.020 If only it were possible.
00:22:07.140 Unfortunately, the DNA is too old for that to happen.
00:22:12.540 If it were, I would definitely wish to see that done.
00:22:16.520 What could go wrong?
00:22:26.140 Richard seems to want to live in a maximally dangerous world.
00:22:32.000 Filled with mosquitoes and T-Rexes.
00:22:37.140 So now, you and I were speaking about your books.
00:22:39.960 You've written some very important books, ten years apart, and so you have an anniversary this year of The Selfish Gene, which is the 40th.
00:22:52.720 And the Blind Watchmaker has its 30th anniversary.
00:22:58.120 And Climbing Mount Improbable is the 20th, and then The God Delusion is the 10th.
00:23:02.380 So actually, I wanted to give you a chance to talk about the titles of the first two.
00:23:12.940 The Selfish Gene has provoked an inordinate amount of confusion, and The Blind Watchmaker is a phrase that is useful to understand.
00:23:20.960 So do you want to talk about that?
00:23:22.560 The Selfish Gene is misunderstood, I think, mostly by those who have read it by title only.
00:23:30.700 As opposed to the rather substantial footnote to the title, which is the book itself.
00:23:35.760 It could equally well have been called The Altruistic Individual, because one of the main messages of the book is that selfish genes give rise to altruistic individuals.
00:23:52.460 So it is mostly a book about altruism, mostly a book about the opposite of selfishness.
00:23:58.360 So it certainly should not be misunderstood as advocating selfishness or saying that we are, as a matter of fact, always selfish.
00:24:07.160 All it really means is that natural selection works at the level of the gene, as opposed to any other level in the hierarchy of life.
00:24:15.680 So genes that work for their own survival are the ones that survive, tautologically enough, and they are the ones that build bodies.
00:24:28.040 So we, all of us, contain genes that are very, very good at surviving, because they've come down through countless generations.
00:24:34.540 And they are copied accurately, with very high fidelity, from generation to generation, such that there are genes in you that have been around for hundreds of millions of years.
00:24:47.900 And that's not true of anything else in the hierarchy of life.
00:24:52.300 Individuals die.
00:24:55.060 They don't, they survive only as a means to the end of propagating the genes that built them.
00:25:01.780 So individual bodies, organisms, should be seen as vehicles, machines, built by the genes that ride inside them, for passing on those very same genes.
00:25:13.360 And it is the potential eternal long-livedness of genes that makes them the unit of selection.
00:25:21.040 So that's really the meaning of the selfish gene.
00:25:24.900 As I say, the book could have been called the altruistic individual.
00:25:27.940 It could have been called the cooperative gene for another reason.
00:25:30.300 It could have been called the immortal gene, which is a more sort of Carl Sagan-esque title.
00:25:39.820 It's a more poetic title.
00:25:42.200 And in some ways, I rather regret not calling it the immortal gene.
00:25:46.340 But anyway, I'm stuck with it now.
00:25:48.840 There's a common, I think, misunderstanding of evolution that leads people to believe that absolutely everything about us must have been selected for.
00:26:00.480 Otherwise, it wouldn't exist.
00:26:02.220 Yes.
00:26:02.360 So people ask about what's the evolutionary rationale for post-traumatic stress disorder or depression.
00:26:11.700 I'm not saying that there is no conceivable one, but it need not be the case that everything we notice about ourselves was selected for or that there's a gene for that.
00:26:21.740 Well, this is very interesting.
00:26:22.840 I mean, I'm actually a bit of an outlier here.
00:26:26.920 I mean, I'm about as close as biologists come to accepting what you've described as a misconception.
00:26:34.100 Because I do think that selection is incredibly powerful, and mathematical models show this.
00:26:44.420 J.B.S. Haldane, one of the three founding fathers of population genetics, did a theoretical calculation in which he postulated an extremely trivial character.
00:27:00.420 He didn't mention it, but it might have been eyebrows.
00:27:04.960 Suppose you believe that eyebrows have been selected because they stop sweat running down your forehead into your eyes.
00:27:12.560 And it sort of sounds totally trivial.
00:27:14.940 How could that possibly save a life?
00:27:17.540 Until you realize, the first thing you might realize is that it could save your life if you were about to be attacked by a lion.
00:27:26.720 And just a slight split-second difference in how quickly you see the lion, because you've got sweat in your eyes.
00:27:34.960 Since the invention of sunblock, I think that's undoubtedly true.
00:27:39.300 Yeah, okay.
00:27:39.900 But Haldane actually did a mathematical calculation.
00:27:46.980 He said, let us postulate a character so trivial that the difference between an individual who has it and an individual who doesn't have it is only one in a thousand.
00:28:00.140 That's to say, for every thousand individuals who have this, say, the eyebrows and survive, 999 who don't have it survive.
00:28:08.560 So from any actuarial point of view, a life insurance calculator would say, well, it's totally trivial.
00:28:18.180 But it's not trivial when you think that the gene concerned is represented in thousands of individuals in the population and through thousands of generations.
00:28:29.820 That multiplies up the odds, and Haldane's calculation was that if you postulate that one in a thousand advantage, he then worked out how long would it take for the gene to spread from being, I forget exactly the figures, but say 1% of the population up to 50% of the population.
00:28:50.700 And it was a number of generations so short that it would be negligible on the geological timescale.
00:28:58.220 So it would appear to be an instantaneous piece of evolutionary change, even though the selection pressure was trivial.
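To make the arithmetic concrete: the sketch below is not Haldane's original calculation but a minimal haploid-selection approximation with illustrative numbers, a one-in-a-thousand advantage starting at 1% frequency. Even so weak a pressure carries the gene to half the population in a few thousand generations, which is indeed negligible on a geological timescale.

```python
# A minimal sketch, not Haldane's exact model: iterate the standard haploid
# selection recursion to see how fast a gene with a tiny advantage spreads.
# Assumed numbers: a 1-in-1,000 advantage (s = 0.001), starting at 1% frequency.

def generations_to_spread(s=0.001, p=0.01, target=0.5):
    """Count generations for allele frequency p to reach `target`
    under haploid selection with coefficient s."""
    generations = 0
    while p < target:
        p = p * (1 + s) / (1 + s * p)  # frequency after one round of selection
        generations += 1
    return generations

gens = generations_to_spread()
print(gens)                # roughly 4,600 generations
print(gens * 25, "years")  # ~115,000 years at 25 years per generation: a geological eye-blink
```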
00:29:06.740 Well, actually, selection pressures in the wild, when they've been measured, have been far, far stronger than that.
00:29:11.780 But there's another way of approaching the question you raise when you say something like selective advantage in various psychological diseases or something like that.
00:29:23.040 It may be that you're asking the wrong question.
00:29:25.740 It may be that by focusing on the particular characteristic which you ask the question about, you're ignoring the fact that there's something associated with that.
00:29:35.520 Let me think of an example.
00:29:40.820 You know that at night, if you've got a lamp outside, or a candle is better, if you've got a candle, insects, moths say, come and sort of, as it were, commit suicide.
00:29:53.740 I mean, they just burn themselves up in the candle.
00:29:57.260 And you could ask the question, what on earth is the survival value of suicidal self-immolation behavior in moths?
00:30:05.800 Well, it's the wrong question, because a probable explanation for it is that many insects use a light compass to steer a straight line.
00:30:17.960 Lights at night, until humans came along and invented candles, lights at night were always at optical infinity.
00:30:23.960 They were things like the moon, the stars, or the sun during the day.
00:30:29.380 And if you maintain a fixed angle relative to these rays that are coming from optical infinity,
00:30:35.640 then you just cruise at a straight line, which is just what you want to do.
00:30:40.140 A candle is not at optical infinity.
00:30:42.700 And if you work out mathematically what happens if you maintain a fixed acute angle
00:30:46.740 to the rays that are emanating in all directions out of a candle,
00:30:51.080 you perform a neat logarithmic spiral into the candle flame.
00:30:55.720 So this is an accidental byproduct of a mechanism which really does have survival value.
00:31:01.700 You have to rephrase the question, what is the survival value of maintaining a fixed angle to light rays?
00:31:08.060 And then you've got the answer.
00:31:10.680 So to ask the question, what's the advantage of suicidal self-immolation,
00:31:15.780 you've shifted to the wrong question.
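As an illustration of the light-compass argument (a sketch of my own, not anything presented on stage): the same fixed-angle steering rule produces a straight line when the rays are parallel, as from the moon, and a spiral into the source when the light is nearby, as with a candle. The angle, step size, and starting point below are arbitrary assumptions.

```python
# A toy light-compass simulation: steer by keeping a fixed angle to the
# perceived light direction.
import math

def rotate(vx, vy, angle):
    """Rotate the vector (vx, vy) by `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    return c * vx - s * vy, s * vx + c * vy

def fly(light_at_infinity, steps=2000, step_size=0.05, angle=math.radians(60)):
    x, y = 5.0, 0.0                     # start 5 units from a candle at the origin
    path = [(x, y)]
    for _ in range(steps):
        if light_at_infinity:
            ux, uy = 0.0, 1.0           # parallel rays: the light direction never changes
        else:
            d = math.hypot(x, y)
            if d < step_size:           # effectively in the flame
                break
            ux, uy = -x / d, -y / d     # unit vector pointing at the candle
        hx, hy = rotate(ux, uy, angle)  # heading held at a fixed angle to the light
        x, y = x + step_size * hx, y + step_size * hy
        path.append((x, y))
    return path

straight = fly(light_at_infinity=True)   # constant heading: a straight cruise
spiral = fly(light_at_infinity=False)    # same rule near a candle: a spiral inward
print(math.hypot(*straight[-1]))         # large: the moth flew straight past
print(math.hypot(*spiral[-1]))           # tiny: the moth wound up at the flame
```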
00:31:18.800 Right.
00:31:20.000 And there are related issues, so there are things which provide some survival advantage
00:31:24.580 if you have one copy of the gene, but if you have both copies, then it's deleterious.
00:31:29.300 Yes.
00:31:29.980 Like sickle cell anemia.
00:31:31.180 Right.
00:31:31.900 Right.
00:31:33.180 So, well then, what do you do with the concept of a spandrel, though?
00:31:36.940 Gould's concept of a spandrel, is that useful to think about?
00:31:40.540 Yeah, okay, yes.
00:31:41.960 Spandrels are... Lewontin and Gould wrote a notorious and overrated paper in 1979,
00:31:55.880 in which Gould went to King's College, Cambridge, where there's a most beautiful building,
00:32:02.940 and the Gothic arches have gaps, inevitably form gaps, which are called spandrels,
00:32:12.040 and they actually have a name, and they're often filled with ornamentation.
00:32:17.160 And the spandrels themselves are accidental byproducts of something which really matters,
00:32:22.100 which is the Gothic arch.
00:32:23.160 And so, the point they were making is that things that we, it's really almost the same point
00:32:30.160 that I was making just now, but asking the wrong question.
00:32:33.500 Spandrels are...
00:32:34.720 You can't ask, what's the purpose of a spandrel?
00:32:36.620 That's right, yes.
00:32:37.600 It's a derivative of the thing you were building.
00:32:39.300 Exactly, yes.
00:32:40.320 What are your thoughts about artificial intelligence?
00:32:43.340 Please discuss its relationship to biological evolution and how it could develop in the future.
00:32:47.800 I think it's a question for you, Sam.
00:32:49.040 Yes, and I fear everyone's heard my thoughts on artificial intelligence.
00:32:54.680 I find this increasingly interesting.
00:32:56.500 It's something that I became interested in very late.
00:33:00.820 And in fact, unless you were in the AI community until very recently,
00:33:05.780 the dogma that had been exported from computer science to neuroscience and psychology and adjacent fields
00:33:14.360 was that AI basically hadn't panned out.
00:33:18.440 There was no real noticeable success there that should get anyone worried or particularly excited.
00:33:26.020 Then, all of a sudden, people started making worried noises,
00:33:28.800 and then there were obvious gains in narrow AI that were getting sexier and sexier.
00:33:35.740 And now, it was really the first time I thought about the implications
00:33:39.000 of ongoing progress in building intelligent machines and progress at any rate.
00:33:45.860 It really doesn't have to be that Moore's Law continues indefinitely.
00:33:49.900 We just need to keep going.
00:33:53.200 And at a certain point, we will find ourselves in the presence of machines that are as intelligent as we are.
00:34:01.020 They may not be human-like, although presumably we'll build them to be as much like ourselves in all the good ways as possible.
00:34:08.880 But this interests me for many different reasons because, one, I'm actually worried in terms of existential risk.
00:34:16.840 It's on my short list for things to actually worry about.
00:34:20.660 But the flip side of that is that it's one of the most hopeful things.
00:34:24.620 If anything seems intrinsically good, it's intelligence, and we want more of it.
00:34:28.640 So, insofar as it's reasonable to expect that we are going to make more and more progress automating things
00:34:35.360 and building more intelligent systems, that seems very hopeful, and I think we can't but do it.
00:34:41.720 And the other point of interest for me, and this is kind of my hobby horse,
00:34:45.060 is actually what we were talking about on stage last time some years ago when I wrote The Moral Landscape.
00:34:50.860 I'm interested in collapsing this perceived distance between facts and values,
00:34:56.820 the idea that morality somehow is uncoupled from the world of science and truth claims.
00:35:04.380 And I think that once we have to start building, and we even have to start even now with things like self-driving cars,
00:35:11.140 once we start building our ethics into machines that within their domain are more powerful than we are,
00:35:18.440 the sense that there are no better and worse answers to ethical questions,
00:35:23.740 that we should all be moral relativists, that all cultures are equal with respect to what constitutes a good life.
00:35:30.640 That just, I mean, there's going to be somebody sitting at the computer waiting to code something,
00:35:35.420 and if you don't put these...
00:35:37.240 You've actually got to build in some moral values.
00:35:39.220 You have to build in the values, and if you don't build them in deliberately, you're building in values anyway.
00:35:44.320 So if you build a self-driving car that isn't distinguishing between people and mailboxes,
00:35:51.580 well, then you've built a very dangerous self-driving car.
00:35:54.660 The more relevant tuning which people have to confront is,
00:35:57.840 do you want a car that...
00:36:00.460 I mean, the car's going to have to make a choice between protecting the occupant and protecting pedestrians, say.
00:36:06.620 So now how much risk do you want as the driver of the car to assume in order to spare the lives of pedestrians?
00:36:13.400 You're constantly facing a trolley problem, and you're the one to be sacrificed.
00:36:19.360 And your point is that whereas trolley problems are these hypothetical things where you have to imagine,
00:36:26.760 you've got a runaway trolley and you're standing at points,
00:36:29.440 and it's about to mow down five people,
00:36:33.520 and if you pull the lever to swing the points, it'll kill one person.
00:36:38.700 So you, with holding the lever in your hand, have the dilemma,
00:36:43.100 should I save five people and kill one?
00:36:45.360 But you know that by your action in pulling the lever,
00:36:48.720 you're going to kill a person who wouldn't otherwise have died.
00:36:52.380 And I think, Sam, you're making the point that AI,
00:36:56.280 I mean, automatic machines, robotic machines,
00:36:58.180 are going to need to have a moral system built into them,
00:37:01.780 and so that the trolley problem is going to be faced by the programmer
00:37:06.000 who's actually writing the software.
00:37:08.120 Oh, it's already the case, yeah.
00:37:10.300 Yes.
00:37:11.000 And it just will proceed from there.
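A toy example of the point that values end up in the code whether or not anyone chooses them deliberately. Nothing here comes from any real autonomous-vehicle system; the maneuver names, harm estimates, and weights are placeholders, and picking the weights is exactly the moral decision being discussed.

```python
# A toy collision-avoidance chooser: even this trivial version has a moral
# weighting baked into it.
OCCUPANT_WEIGHT = 1.0     # how heavily expected harm to the car's occupant counts
PEDESTRIAN_WEIGHT = 1.0   # how heavily expected harm to a pedestrian counts

def choose_maneuver(options):
    """Pick the maneuver with the lowest weighted expected harm.

    `options` maps a maneuver name to a pair
    (expected_occupant_harm, expected_pedestrian_harm) on an arbitrary 0-1 scale.
    """
    def cost(harms):
        occupant, pedestrian = harms
        return OCCUPANT_WEIGHT * occupant + PEDESTRIAN_WEIGHT * pedestrian
    return min(options, key=lambda name: cost(options[name]))

# Swerving risks the occupant; braking alone risks the pedestrian. Which choice
# is "right" depends entirely on the weights someone typed in above.
print(choose_maneuver({"swerve": (0.3, 0.05), "brake_only": (0.0, 0.6)}))
```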
00:37:13.600 So just imagine a system more intelligent than ourselves
00:37:17.820 that we have seeded with our morality.
00:37:22.240 And again, this is going to be a morality that the smartest people we can find doing this work
00:37:26.940 will have to agree by some consensus is the wisest morality we've got.
00:37:33.460 And so obviously the Taliban and Al-Qaeda are not going to get a vote in that particular project.
00:37:40.100 At that first pass, everything you've heard about moral relativism just goes out the window
00:37:45.860 because we will be desperate to find the best answer we can find on every one of these questions
00:37:51.980 and desperate to build a machine that when it, in the real limit case,
00:37:57.460 where it begins to make changes to itself,
00:38:00.660 it doesn't make changes that we find, in the worst case, incompatible with our survival.
00:38:06.580 Making changes to itself is what more conventionally worries people,
00:38:10.460 the von Neumann machine, which is capable of reproducing
00:38:15.040 and thereby possibly evolving by natural selection
00:38:17.900 and completely supplanting humans, completely taking over.
00:38:24.300 This is, of course, a science fiction scenario, but it's not totally unrealistic.
00:38:29.100 Not at all, given the fact that one path toward developing AI
00:38:33.300 is to build genetic algorithms that function along similar lines.
00:38:40.800 There's a Darwinian principle of just it getting better and better
00:38:43.900 in response to data and error correction,
00:38:46.460 and it may not even be clear how it has gotten better.
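For readers unfamiliar with the term, here is a minimal genetic-algorithm sketch of the Darwinian loop being described: random variation plus selection against a fitness measure makes candidates better and better, with no step that explicitly "understands" why. The target parameters and fitness function below are toy assumptions, not anything from the conversation.

```python
# A minimal genetic algorithm on a toy problem: mutate, rank by fitness,
# let the fittest reproduce, repeat.
import random

TARGET = [0.3, -1.2, 4.0, 0.07]          # hypothetical parameters we would like to recover

def fitness(candidate):
    """Higher is better: negative squared error against the target."""
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.3, scale=0.5):
    """Randomly perturb some of the parameters (heritable variation)."""
    return [c + random.gauss(0, scale) if random.random() < rate else c
            for c in candidate]

population = [[random.uniform(-5, 5) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)                        # selection: rank by fitness
    parents = population[:10]                                         # the fittest survive...
    population = [mutate(random.choice(parents)) for _ in range(50)]  # ...and reproduce with variation

best = max(population, key=fitness)
print(best, fitness(best))   # close to TARGET, though nothing in the loop "knows" the answer
```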
00:38:51.360 So we could look forward to a time in the distant future
00:38:54.300 when we have a hall like this filled with silicon and metal machines
00:38:59.080 looking back and speculating on some far distant dawn age
00:39:04.700 when the world was peopled by soft, squishy, organic, water-based life forms.
00:39:15.600 But the data transfer would be instantaneous,
00:39:18.380 so there would have been no reason to come out here.
00:39:20.440 And you just take the firmware upgrade.
00:39:25.660 But maybe the world would be a better and a happier place.
00:39:30.020 Well, my real fear is that it won't be illuminated by consciousness at all,
00:39:34.660 because I'm agnostic at the moment as to whether or not
00:39:38.540 mere information processing and a scaling of intelligence
00:39:42.820 by definition gets you consciousness.
00:39:45.400 It may, in fact, be the case that it gets you consciousness.
00:39:48.080 I'm not conscious, by the way.
00:39:49.580 It is a genuine, a very difficult philosophical problem, I think.
00:39:56.240 Why, I mean, it would seem to be perfectly possible
00:39:58.960 to build a machine or an animal or a human
00:40:02.300 which can do all the things that we do,
00:40:06.080 all the intelligent things that we do,
00:40:07.840 all the life-saving things that we do,
00:40:09.860 and yet not be conscious.
00:40:11.360 And it's genuinely mysterious why we need to be conscious, I think.
00:40:14.960 Yeah, and I think it remains so.
00:40:17.160 I think it's because consciousness is,
00:40:20.020 the conscious part of you is generally the last to find out
00:40:24.760 about what your mind just did.
00:40:27.480 You know, you're not, you're playing catch-up.
00:40:29.620 And what you call consciousness is, in every respect,
00:40:36.060 an instance of some form of short-term memory.
00:40:39.460 Now, there's different kinds of memory,
00:40:41.940 and this is integrated in different ways,
00:40:43.560 but you are, I mean, there's just a transmission time for everything.
00:40:47.460 So you can't be aware of a perception or a sensation
00:40:52.540 the instant it hits your brain,
00:40:55.800 because it doesn't hit your brain in one discrete moment.
00:40:59.380 And so there's a whole time of integration,
00:41:02.480 and so the present moment is this layered,
00:41:08.300 subjectively speaking, it's this layering of memories,
00:41:11.960 even when you are distinguishing the present
00:41:16.120 from what you classically call a memory.
00:41:18.360 And so it's not, it is a genuine mystery
00:41:21.480 why consciousness would be necessary,
00:41:26.200 or what couldn't a machine as complex as a human brain do
00:41:32.060 but for the emergence of this subjective sense,
00:41:36.660 this inner dimension of experience?
00:41:38.600 I don't even know what the solution would look like
00:41:40.880 and whether it would be solved by biologists
00:41:42.900 or by philosophers or by computer scientists.
00:41:47.440 Well, I'm just worried that, yeah,
00:41:49.300 and that is, you've just articulated
00:41:51.620 what philosophers call the hard problem.
00:41:53.840 If you'd like to continue listening to this conversation,
00:41:58.800 you'll need to subscribe at samharris.org.
00:42:01.520 Once you do, you'll get access to all full-length episodes
00:42:03.940 of the Making Sense podcast,
00:42:05.620 along with other subscriber-only content,
00:42:07.940 including bonus episodes and AMAs,
00:42:10.300 and the conversations I've been having
00:42:11.900 on the Waking Up app.
00:42:13.460 The Making Sense podcast is ad-free
00:42:15.240 and relies entirely on listener support.
00:42:18.040 And you can subscribe now at samharris.org.