Making Sense - Sam Harris - April 24, 2015


#8 — Ask Me Anything 1


Episode Stats

Length: 38 minutes
Words per Minute: 159.1247
Word Count: 6,101
Sentence Count: 315
Misogynist Sentences: 1
Hate Speech Sentences: 12


Summary

In this episode of the Making Sense Podcast, Sam Harris answers listener questions solicited on Twitter. He considers how the struggle of atheists for acceptance compares with the civil rights struggles of blacks, gays, and women; whether anti-discrimination law is the right tool for shaping social attitudes; and artificial intelligence, reading his Edge.org piece on the risks of building superintelligent machines. He also fields questions about statism, his exchange with Cenk Uygur of The Young Turks, public speaking and anxiety, Dzogchen meditation and long retreats, free will and sympathy for one's enemies, the form and contents of consciousness, the nationalist right and Islam in Europe, charities worth supporting, and the burden of proof regarding the existence of God.

The podcast runs no ads and is made possible entirely through the support of its subscribers. If you enjoy what we're doing, please consider becoming one at samharris.org, where you'll find full episodes and other subscriber-only content.


Transcript

00:00:00.000 Welcome to the Making Sense Podcast.
00:00:08.820 This is Sam Harris.
00:00:10.880 Just a note to say that if you're hearing this, you are not currently on our subscriber
00:00:14.680 feed and will only be hearing the first part of this conversation.
00:00:18.420 In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at
00:00:22.720 samharris.org.
00:00:24.060 There you'll find our private RSS feed to add to your favorite podcatcher, along with
00:00:28.360 other subscriber-only content.
00:00:30.240 We don't run ads on the podcast, and therefore it's made possible entirely through the support
00:00:34.640 of our subscribers.
00:00:35.880 So if you enjoy what we're doing here, please consider becoming one.
00:00:46.760 For today's episode, I've decided to do an Ask Me Anything podcast.
00:00:50.740 So I solicited questions on Twitter and got some hundreds of them, and I will do my best
00:00:55.820 to answer as many as I can over the next hour or so.
00:00:59.540 And these will, by definition, not be on a theme.
00:01:03.780 How does the struggle of atheists for acceptance compare with that of women, blacks, gays, etc.?
00:01:09.740 How long until true equality arrives?
00:01:13.160 Well, I'm not sure I would want to draw any strict analogy between the civil rights struggles
00:01:17.960 of blacks and gays and women and that of atheists, because while atheism is, as a political identity,
00:01:26.600 more or less a non-starter in American politics at the moment, which is to say that you just
00:01:30.520 cannot have a political career, or certainly no reasonable expectation of one, while being
00:01:36.520 out of the closet as an atheist.
00:01:38.560 Nevertheless, atheists are disproportionately well-educated and well-off financially and powerful.
00:01:45.900 Far more than 5% of the people you meet in journalism or academia or Silicon Valley are atheists.
00:01:54.660 This is just my anecdotal impression, and I don't know of any scientific polling that has been done
00:01:59.200 on this question apart from among scientists, where the vast majority are non-believers,
00:02:05.100 and the proportion of non-believers only increases among the most successful and influential scientists.
00:02:11.840 But I'm reasonably confident that when you are in the company of wealthy, connected, powerful
00:02:19.940 people, internet billionaires and movie stars and the people who are running major financial
00:02:26.560 and academic institutions, you are disproportionately in the presence of people who are atheists.
00:02:34.340 So while I'm as eager as anyone to see atheism get its due, or rather to see reason and common
00:02:41.820 sense get their due in our political discourse, I don't think it's fair to say that atheists
00:02:50.240 have the same kind of civil rights problem that blacks and gays and women traditionally have
00:02:56.480 had in our society.
00:02:57.500 Now, in the Muslim world, things reverse entirely, because of course, to be exposed as an atheist
00:03:05.400 is in many places to live under a death sentence, and that's a problem that the civilized world
00:03:10.300 really has to address.
00:03:13.420 What is your view on laws that prevent people from refusing to hire on the basis of religion?
00:03:19.180 Well here, I'm sure I'm going to stumble into another controversy.
00:03:22.180 I tend to take a libertarian view of questions of this kind, so I think people should be free
00:03:28.940 to embarrass themselves publicly, to destroy their reputations, to be boycotted.
00:03:34.640 So if you want to open a restaurant that only serves red-headed people, I think you should
00:03:39.160 be free to do that.
00:03:40.340 If you only want to serve people over six feet tall, you should be free to do that.
00:03:44.220 And by definition, if you only want to serve Muslims, or you only want to serve whites,
00:03:48.660 or if you only want to serve Jews, if you want a club that excludes everyone but yourself,
00:03:54.220 I think you should be free to do all these things, and people should be free to write
00:03:58.700 about you, picket in front of your store or clubhouse or restaurant.
00:04:03.200 But I think law is too blunt an instrument, and this is not to disregard all of the gains
00:04:08.960 we've made for civil rights based on the laws.
00:04:11.460 But at this point, I think we should probably handle these things through conversation and
00:04:15.860 reputation management, rather than legislate who businesses have to hire or serve.
00:04:21.880 I think if the social attitudes of a business are egregious and truly out of step with those
00:04:29.640 of the community, well, then they will suffer a penalty.
00:04:32.800 And it's only because 50 years ago, the attitudes of the community were so unenlightened that we
00:04:39.060 needed rather heavy-handed laws to ram through a sane and compassionate social agenda.
00:04:45.580 And some might argue that we're still in that situation.
00:04:48.260 I think less so by the hour.
00:04:51.180 And at a certain point, I think law is the wrong mechanism to enforce positive social attitudes.
00:04:56.580 And of course, my enemies will summarize this as Sam Harris thinks that it should be legal
00:05:02.160 to discriminate against blacks and gays and women.
00:05:05.840 Can you say something about artificial intelligence, AI, and your concerns about it?
00:05:12.860 Yeah, well, this is a very interesting topic.
00:05:15.760 The question of how to build artificial intelligence that isn't going to destroy us is something
00:05:22.300 that I've only begun to pay attention to, and it is a rather deep and consequential problem.
00:05:28.400 I went to a conference in Puerto Rico focused on this issue, organized by the Future of Life
00:05:33.420 Institute, and I was brought there by a friend, Elon Musk, who no doubt many of you have heard
00:05:39.800 of.
00:05:40.800 And Elon had recently said publicly that he thought AI was the greatest threat to human survival,
00:05:47.120 perhaps greater than nuclear weapons.
00:05:48.960 And many people took that as an incredibly hyperbolic statement.
00:05:53.920 Now, knowing Elon and knowing how close to the details he's apt to be, I took it as a very
00:06:02.240 interesting diagnosis of a problem, but I wasn't quite sure what I thought about it because
00:06:07.480 I hadn't really spent much time focusing on the progress we've been making in AI and its
00:06:14.160 implications.
00:06:14.660 So I went to this conference in San Juan, held by and for the people who are closest to doing
00:06:20.860 this work.
00:06:21.320 This was not open to the public.
00:06:22.840 I think I was one of maybe two or three interlopers there who just hadn't been invited, but sort
00:06:28.560 of got himself invited.
00:06:29.840 And what was fascinating about that was that this was a collection of people who were very
00:06:34.120 worried, like Elon and others who felt that we have to find some way to pull the brakes,
00:06:39.860 even though that seems somewhat hopeless, to the people who were doing the work most energetically
00:06:47.320 and most wanted to convince others not to worry about having to pull the brakes.
00:06:53.680 And what was interesting there is that what I heard outside this conference and what you
00:06:59.380 hear, let's say, on edge.org or in general discussions about the prospects of making real breakthroughs
00:07:06.860 in artificial intelligence, you hear a time frame of 50 to 100 years before anything terribly
00:07:12.400 scary or terribly interesting is going to happen.
00:07:15.520 In this conference, that was almost never the case.
00:07:18.580 Everyone who was still trying to ensure that they were doing this as safely as possible was
00:07:22.440 still conceding that a time frame of five or ten years admitted of rather alarming progress.
00:07:29.580 And so when I came back from that conference, the edge question for 2015 just happened to
00:07:36.240 be on the topic of AI, so I wrote a short piece distilling what my view now was.
00:07:41.640 Perhaps I'll just read that.
00:07:42.920 It won't take too long and hopefully it won't bore you.
00:07:45.820 Can we avoid a digital apocalypse?
00:07:48.700 It seems increasingly likely that we will one day build machines that possess superhuman intelligence.
00:07:54.000 We need only continue to produce better computers, which we will unless we destroy ourselves
00:07:58.360 or meet our ends some other way.
00:08:00.600 We already know that it's possible for mere matter to acquire, quote,
00:08:04.440 general intelligence, the ability to learn new concepts and employ them in unfamiliar contexts.
00:08:09.920 Because the 1,200 cc's of salty porridge inside our heads has managed it,
00:08:14.540 there's no reason to believe that a suitably advanced digital computer couldn't do the same.
00:08:19.600 It's often said that the near-term goal is to build a machine that possesses, quote,
00:08:23.540 human-level intelligence.
00:08:24.940 But unless we specifically emulate a human brain, with all its limitations, this is a false goal.
00:08:31.620 The computer on which I'm writing these words already possesses superhuman powers of memory
00:08:35.960 and calculation.
00:08:37.140 It also has potential access to most of the world's information.
00:08:40.440 Unless we take extraordinary steps to hobble it, any future artificial general intelligence,
00:08:46.080 known as AGI, will exceed human performance on every task for which it is considered a source
00:08:51.280 of intelligence in the first place.
00:08:53.460 Whether such a machine would necessarily be conscious is an open question.
00:08:57.100 But conscious or not, an AGI might very well develop goals incompatible with our own.
00:09:03.220 Just how sudden and lethal this parting of the ways might be is now a subject of much colorful speculation.
00:09:08.840 So just to make things perfectly clear here, all you have to grant to get your fears up and running
00:09:15.140 is that we will continue to make progress in hardware and software design, unless we destroy ourselves some other way.
00:09:24.380 And that there's nothing magical about the wetware we have running inside our heads,
00:09:29.900 and that an intelligent machine could be built of other material.
00:09:33.600 Once you grant those two things, which I think everyone who has thought about the problem will
00:09:38.880 grant, I can't imagine a scientist not granting that: one, we're going to make progress in computer
00:09:46.520 design unless something terrible happens, and two, that there's nothing magical about biological
00:09:53.020 material where intelligence is concerned.
00:09:56.460 Once you've granted those two propositions, you now will be hard-pressed to find some handhold
00:10:01.740 with which to resist your slide into real concern about where this is all going.
00:10:07.760 So back to the text.
00:10:09.280 One way of glimpsing the coming risk is to imagine what might happen if we accomplished our aims
00:10:13.700 and built a superhuman AGI that behaved exactly as intended.
00:10:18.520 Such a machine would quickly free us from drudgery and even from the inconvenience of doing most
00:10:22.580 intellectual work.
00:10:24.020 What would follow under our current political order?
00:10:27.240 There's no law of economics that guarantees that human beings will find jobs in the presence of
00:10:31.720 every possible technological advance.
00:10:34.540 Once we built the perfect labor-saving device, the cost of manufacturing new devices would
00:10:39.320 approach the cost of raw materials.
00:10:41.340 Absent a willingness to immediately put this new capital at the service of all humanity,
00:10:46.100 a few of us would enjoy unimaginable wealth and the rest would be free to starve.
00:10:51.160 Even in the presence of a truly benign AGI, we could find ourselves slipping back to a state
00:10:56.260 of nature, policed by drones.
00:10:59.220 And what would the Russians or the Chinese do if they learned that some company in Silicon Valley
00:11:03.360 was about to develop a super-intelligent AGI?
00:11:06.740 This machine would, by definition, be capable of waging war, terrestrial and cyber, with unprecedented
00:11:12.220 power.
00:11:12.760 How would our adversaries behave on the brink of such a winner-take-all scenario?
00:11:17.880 Mere rumors of an AGI might cause our species to go berserk.
00:11:21.960 It is sobering to admit that chaos seems a probable outcome, even in the best-case scenario,
00:11:27.240 in which the AGI remained perfectly obedient.
00:11:30.500 But of course, we cannot assume the best-case scenario.
00:11:33.580 In fact, quote,
00:11:34.680 The control problem, the solution to which would guarantee obedience in any advanced AGI,
00:11:40.320 appears quite difficult to solve.
00:11:42.440 Imagine, for instance, that we build a computer that is no more intelligent than the average
00:11:46.560 team of researchers at Stanford or MIT.
00:11:49.420 But because it functions on a digital timescale, it runs a million times faster than the minds
00:11:54.200 that built it.
00:11:55.120 Set it humming for a week and it would perform 20,000 years of human-level intellectual work.
00:11:59.880 What are the chances that such an entity would remain content to take direction from us?
00:12:05.120 And how could we confidently predict the thoughts and actions of an autonomous agent that sees
00:12:10.100 more deeply into the past, present, and future than we do?
00:12:13.980 The fact that we seem to be hastening towards some sort of digital apocalypse poses several
00:12:18.280 intellectual and ethical challenges.
00:12:20.600 For instance, in order to have any hope that a superintelligent AGI would have values commensurate
00:12:25.740 with our own, we would have to instill those values in it, or otherwise get it to emulate
00:12:30.440 us.
00:12:31.440 But whose values should count?
00:12:33.980 Should everyone get a vote in creating the utility function of our new colossus?
00:12:38.320 If nothing else, the invention of an AGI would force us to resolve some very old and boring
00:12:42.920 arguments in moral philosophy.
00:12:45.500 And perhaps I don't need to spell this out any further, but it's interesting that once you
00:12:49.380 imagine having to build values into a superintelligent AGI, you then realize that
00:12:55.720 you need to get straight about what you think is good.
00:12:59.060 And I think this, the advent of this technology would cut through moral relativism like a laser.
00:13:08.900 Who is going to want to engineer into this thing the values of theocracy, you know, traditional
00:13:17.340 religious authoritarianism?
00:13:20.460 You want to build homophobia and intolerance toward free speech into a machine that makes
00:13:27.980 tens of thousands of years of human-level intellectual progress every time it cycles?
00:13:32.940 I don't think so.
00:13:34.360 Even designing self-driving cars presents potential ethical problems that we need to get straight
00:13:41.240 about.
00:13:41.520 Any self-driving car needs some algorithm by which to rank order bad outcomes.
00:13:47.720 So if you want a car that will avoid a child who dashes in front of it in the road, perhaps
00:13:55.520 by driving up on the sidewalk, you also want a car that will avoid the people on the sidewalk
00:14:01.100 or preferentially hit a mailbox instead of a baby carriage, right?
00:14:07.200 So you need some intelligent sorting of outcomes here.
00:14:10.740 Well, these are moral decisions.
00:14:12.320 Do you want a car that is unbiased with respect to the age and size of people or the color
00:14:20.160 of their skin?
00:14:21.080 Would you like a car that was more likely to run over white people than people of color?
00:14:27.400 That may seem like a peculiar question.
00:14:29.380 But if you do psychological tests, say, trolley problem tests, on liberals, and this is the
00:14:34.560 one psychological experiment that I'm aware of where liberals come out looking worse than
00:14:39.680 conservatives reliably, if you test them on whether or not they would be willing to sacrifice
00:14:46.320 one life to save five or one life to save a hundred, and you give subtle clues as to the
00:14:52.520 color of the people involved.
00:14:53.980 If you say that LeBron belongs to the Harlem Boys Choir, and there's some scenario under which
00:15:01.000 he can be sacrificed to save Chip and his friends who study music at Juilliard, they simply won't
00:15:08.820 take a consequentialist approach to the problem.
00:15:11.460 They will not sacrifice a black life to save any number of white lives.
00:15:15.960 Whereas if you reverse the variables, they will sacrifice a white life to save black lives
00:15:20.940 rather reliably.
00:15:22.560 Now, conservatives, strangely, are unbiased in this paradigm, which is to say colorblind.
00:15:27.340 Well, do we like bias here?
00:15:30.460 Do you want a self-driving car that preferentially avoids people of color?
00:15:36.540 Or you have to decide.
00:15:38.360 We either build it one way or the other.
00:15:41.360 So this is an interesting phenomenon where technology is going to force us to admit to
00:15:47.080 ourselves that we know right from wrong in a way that many people imagine isn't possible.
00:15:54.860 Okay, back to the text.
00:15:57.180 However, a true AGI would probably acquire new values, or at least develop novel and perhaps
00:16:02.060 dangerous near-term goals.
00:16:03.680 What steps might a superintelligence take to ensure its continued survival, or access to
00:16:08.880 computational resources?
00:16:10.740 Whether the behavior of such a machine would remain compatible with human flourishing might
00:16:14.740 be the most important question our species ever asks.
00:16:18.240 The problem, however, is that only a few of us seem to be in a position to think this question
00:16:22.260 through.
00:16:23.500 Indeed, the moment of truth might arrive amid circumstances that are disconcertingly informal
00:16:27.760 and inauspicious.
00:16:28.540 Picture ten young men in a room, several of them with undiagnosed Asperger's, drinking
00:16:33.840 Red Bull and wondering whether to flip a switch.
00:16:36.680 Should any single company or research group be able to decide the fate of humanity?
00:16:41.700 The question nearly answers itself.
00:16:44.660 And yet it is beginning to seem likely that some small number of smart people will one
00:16:48.420 day roll these dice, and the temptation will be understandable.
00:16:51.440 We confront problems—Alzheimer's disease, climate change, economic instability—for which
00:16:57.340 superhuman intelligence could offer a solution.
00:17:00.360 In fact, the only thing nearly as scary as building an AGI is the prospect of not building
00:17:05.020 one.
00:17:06.360 Nevertheless, those who are closest to doing this work have the greatest responsibility
00:17:10.260 to anticipate its dangers.
00:17:12.500 Yes, other fields pose extraordinary risks.
00:17:14.680 But the difference between AGI and something like synthetic biology is that in the latter,
00:17:20.560 the most dangerous innovations, such as germline mutation, are not the most tempting, commercially
00:17:26.220 or ethically.
00:17:27.380 With AGI, the most powerful methods, such as recursive self-improvement, are precisely
00:17:32.940 those that entail the most risk.
00:17:34.720 We seem to be in the process of building a god.
00:17:37.680 Now would be a good time to wonder whether it will or even can be a good one.
00:17:41.780 I guess I should probably explain this final notion of recursive self-improvement.
00:17:46.840 The idea is that once you build an AGI that is superhuman, well then the way that it will
00:17:54.060 truly take off is if it is given or develops an ability to improve its own code.
00:18:00.480 Just imagine something, again, that could make literally tens of thousands of years of human-level
00:18:07.660 intellectual progress in days or even minutes, improving itself.
00:18:13.020 Not only learning more, but learning more about how to learn and improving its ability
00:18:18.080 to learn.
00:18:18.980 Then you have this exponential takeoff function where this thing stands in relation to us
00:18:24.860 intellectually, the way we stand in relation to chickens and sea urchins and snails.
00:18:32.040 Now this may sound like a crazy thing to worry about.
00:18:35.680 It isn't.
00:18:36.400 Again, the only assumptions are that we will continue to make progress and that there's
00:18:42.080 nothing magical about biological substrate where intelligence is concerned.
00:18:48.680 And again, I'm agnostic as to whether or not such a machine would by definition be conscious.
00:18:53.760 So let's assume it's not conscious.
00:18:55.860 So what?
00:18:56.380 You're still talking about something that will have the functional power of a god, whether
00:19:01.840 or not the lights are on.
00:19:04.240 So perhaps you got more than you wanted from me on that topic.
00:19:08.520 I like you, but as an atheist, I find statism to be a dangerous form of religion.
00:19:13.200 And I won't paint a billion people as barbarians.
00:19:16.060 Okay, well, there are two axes to grind there.
00:19:18.980 Well, this whole business about statism I find profoundly uninteresting.
00:19:23.100 This is a separate conversation about the problems of U.S. foreign policy, the problems of bureaucracy,
00:19:30.900 the problems of the tyranny of the majority, or the tyranny of empowered minorities, oligarchy.
00:19:39.100 These are all topics that can be spoken about.
00:19:41.860 To compare a powerful state per se with the problem of religion is just to make a hash of everything that's important to talk about here.
00:19:54.180 And the idea that we could do without a powerful state at this point is just preposterous.
00:19:59.020 So if you're an anarchist, you're either 50 or 100 years before your time,
00:20:04.440 notwithstanding what I just said about artificial intelligence, or you're an imbecile.
00:20:08.120 So we need the police, we need the fire department, we need people to pave our roads.
00:20:12.840 We can't privatize all of this stuff.
00:20:15.380 And privatizing it would beget its own problems.
00:20:18.840 So whenever I hear someone say, you worship the religion of the state,
00:20:23.820 I know I'm in the presence of someone who just isn't ready for a conversation about religion,
00:20:28.360 and isn't ready to honestly talk about the degree to which we rely and are wise to rely on the powers of a well-functioning government.
00:20:38.580 Now, insofar as our government doesn't function well, well then we have to change it,
00:20:42.180 we have to resist its overreaching into our lives.
00:20:46.400 But behind this concern about statism is always some confusion about the problem of religion.
00:20:53.660 And, again, this person ends his almost question with,
00:20:57.800 I won't paint a billion people as barbarians.
00:20:59.980 Well, neither will I.
00:21:01.800 And again, when I criticize Islam, I'm criticizing the doctrine of Islam.
00:21:05.720 And insofar as people adhere to it, to the letter, then I get worried.
00:21:10.560 But there'll be much more on this topic when I publish my book with Maajid Nawaz.
00:21:14.340 As I originally said that was happening in June, that's unfortunately been pushed back to October
00:21:19.460 because it is still hard to publish a physical book, apparently.
00:21:23.460 But you will have your fill of my thoughts about how to reform Islam when that comes out.
00:21:29.680 What do you think of Cenk Uygur's and the Young Turks' attack on you and Ayaan recently?
00:21:36.900 Well, I guess I've ceased to think about it.
00:21:39.120 I pushed back against it briefly, saying on Twitter, obviously my three hours with Cenk had been a waste of time.
00:21:46.840 It appears to have been a waste of time, at least for him.
00:21:49.440 I think many people got some benefit from listening to us go round and round
00:21:54.940 and get wrapped around the same axle for three hours.
00:21:57.940 It actually wasn't a waste of time for him because I heard from a former employee there
00:22:02.080 that that was literally the most profitable interview they've ever put on their show.
00:22:06.000 I don't know what he made off of that interview, and I don't begrudge him making money off his show, obviously.
00:22:11.460 But I feel that Cenk now systematically acts in bad faith on this topic.
00:22:17.540 He has made no effort to accurately represent my views.
00:22:23.580 Again, it's child's play to pick a single sentence from something that I've said or written
00:22:29.040 and to hew to a misinterpretation of that sentence and attack me.
00:22:35.620 And I think that the thing I finally realized here, and this is not just a problem with Cenk,
00:22:40.800 it's with all the usual suspects and all of their followers on Twitter,
00:22:45.720 I've just reluctantly begun to accept the fact that when someone hates you,
00:22:51.140 they take so much pleasure from hating you that it's impossible to correct a misunderstanding.
00:22:58.220 That would force your opponent to relinquish some of the pleasure he's taking in hating you.
00:23:03.840 This is an attitude that I think we're all familiar with to some degree.
00:23:07.320 Once you're convinced that somebody is a total asshole,
00:23:11.080 where you've lost any sense that you should give them the benefit of the doubt,
00:23:14.680 and then you see one more transgression from them,
00:23:18.620 another thing that confirms whatever attitude in them you hate,
00:23:23.120 whether they're homophobic or they're racist or they don't believe in climate change or whatever it is.
00:23:29.540 And once that has calcified, that view of that person has calcified in you,
00:23:34.960 and you see yet one further iteration of this thing,
00:23:39.140 well, then you're not inclined to second-guess it.
00:23:41.580 You're not inclined to try to read between the lines.
00:23:44.680 And in fact, if someone shows you that transgression isn't what it seemed,
00:23:49.440 well, then you can be slow to admit that.
00:23:52.380 This is not totally foreign to me.
00:23:54.300 I noticed this in myself.
00:23:56.220 This is something that I do my best to shed.
00:23:59.180 I think it's an extremely unflattering quality of mind.
00:24:02.820 This is not where I want to be caught standing.
00:24:05.240 But my opponents seem to be always standing here.
00:24:07.960 And that makes conversation impossible.
00:24:11.960 Okay.
00:24:12.400 How did you become such a good public speaker?
00:24:16.700 I have a speech class this fall, and I'm sick about it.
00:24:21.320 Well, I certainly wouldn't claim to think that I am such a good public speaker.
00:24:26.120 I think at best I'm an adequate one.
00:24:28.360 And as I wrote on my blog a couple years ago in an article entitled The Silent Crowd,
00:24:36.160 I really did have a problem with this.
00:24:38.180 I was really terrified to speak publicly early in life and overcame it,
00:24:44.420 and overcame it rather quickly just by doing it.
00:24:48.020 Meditation was helpful, but meditation is insufficient for this kind of thing.
00:24:52.200 You really, you have to do the thing you're afraid of.
00:24:55.620 You can't just get yourself into some position of confidence beforehand and hope to then do it without any anxiety.
00:25:04.320 No, you have to be willing to feel the anxiety.
00:25:07.080 And what is anxiety?
00:25:08.160 Anxiety is just a sensation of energy in the body.
00:25:12.140 It has no content, really.
00:25:14.540 It has no philosophical content.
00:25:16.740 It need not have any psychological content.
00:25:18.760 It's like indigestion.
00:25:20.900 You know, you wouldn't read a pattern of painful sensation in your abdomen after a bad meal
00:25:29.220 and imagine that it says something negative about you as a person, right?
00:25:35.120 This is a negative experience that is peripheral to your identity.
00:25:38.740 But something about anxiety suggests that it lands more at the core of who we are.
00:25:44.540 You're a fearful person.
00:25:46.220 But you need not have this relationship to anxiety.
00:25:48.780 Anxiety is a hormonal cascade that you can just become willing to feel and even interested in.
00:25:57.100 And it need not be the impediment to doing the thing that you are anxious about doing.
00:26:02.920 Not at all.
00:26:03.520 And so I go into this in more detail on my blog, but this is just something to get over.
00:26:08.600 It's worth pushing past this and not caring whether you appear anxious while doing it.
00:26:14.940 Just do your thing.
00:26:16.940 And you will eventually realize that you can do it happily.
00:26:20.420 But, you know, some people are natural speakers.
00:26:23.520 They're natural performers.
00:26:24.760 This is what they are comfortable doing.
00:26:27.100 They love to do it.
00:26:28.020 They're loose.
00:26:28.800 They have access to the full bandwidth of their personality in that space.
00:26:34.260 And, you know, I am not that way.
00:26:37.060 And even being comfortable doing it, I'm not that way.
00:26:40.200 It doesn't come naturally.
00:26:41.520 And I'm happy I've fooled at least you.
00:26:44.580 If I'm a good public speaker, it's a statement that I have something interesting to say.
00:26:49.640 If you pay close attention, you'll see that I just kind of drone on in a monotone.
00:26:54.540 And my lack of style is, to some degree, a necessity because I want to approach public speaking very much as a conversation.
00:27:04.340 I get uncomfortable whenever my pattern of speech departs too much from what it would be in a conversation with one person at a dinner table.
00:27:14.000 Now, if you're standing in front of a thousand people, it's going to depart somewhat.
00:27:17.500 It's just the nature of the situation.
00:27:19.280 But I try to be as conversational as possible.
00:27:22.100 And when I'm not and when someone else isn't, it begins to strike me as dishonest.
00:27:28.500 Yet I will grant you that the performance aspect of public speaking allows for what many people appreciate as the best examples of oratory.
00:27:40.620 So you just listen to, you know, Martin Luther King Jr.
00:27:43.760 He is so far from a natural speech pattern.
00:27:48.480 It is pure performance.
00:27:50.840 Just imagine being seated at a table at a dinner party across from someone who was speaking to you the way MLK spoke in his speeches.
00:28:01.540 You would know that you were in the presence of a madman.
00:28:05.660 It would be intolerable, right?
00:28:07.280 It would be terrifying.
00:28:08.240 So that distance between what is normal in conversation and what is dramaturgical in a public speech, I don't want to traverse that too far.
00:28:21.320 I'm not comfortable doing it and I actually tend to find it suspect as a member of the audience.
00:28:27.120 What is really entailed in Dzogchen meditation?
00:28:31.200 Is it the loss of I, that is the self, or does it go beyond that?
00:28:35.780 Well, traditionally speaking, it goes beyond that in certain ways.
00:28:39.320 But I think the core point is what's called non-dual awareness, to lose the sense of subject-object awareness in the present moment and to just rest as open, centerless consciousness and just fully relax into whatever is arising without hope and fear, without praise and blame, without grasping at the pleasant or pushing away the unpleasant.
00:29:05.440 So it's a kind of mindfulness, but it's a mindfulness of there being nothing at all to grasp at as self.
00:29:13.300 So it's, yes, selflessness is the core insight.
00:29:16.080 They don't tend to talk about selflessness.
00:29:18.520 They talk about non-duality.
00:29:19.860 Any suggestions or advice if I want to do two years of silent meditation on retreat?
00:29:27.680 Yeah, well, just don't do it by yourself.
00:29:30.160 You really need guidance if you're going to go into a retreat of any significant length.
00:29:34.700 So find a meditation center where they're doing a practice that you really want to do and find a teacher you really admire and who you trust and then follow their instructions.
00:29:45.800 A couple more questions about meditation.
00:29:49.900 Why do we do it sitting up?
00:29:51.840 If having a straight back is valuable, why not do it lying down?
00:29:55.760 Well, you can do it lying down.
00:29:57.000 It's just, it's harder.
00:29:58.340 We're so deeply conditioned to fall asleep lying down that most people find that meditation is just a precursor to a nap in that case.
00:30:07.220 But it can be a very nice nap.
00:30:08.680 And if you're injured or if you're just tired of sitting, you know, lying down is certainly a reasonable thing to attempt.
00:30:16.120 I just, most people find that it is harder to stay awake.
00:30:19.440 And people often have a problem with sleepiness while sitting up.
00:30:22.660 So that's the reason.
00:30:24.700 I haven't read any of your books, but want to soon.
00:30:27.180 Does your view that there's no free will give you sympathy for your enemies?
00:30:31.300 Yes, it does.
00:30:32.380 I've talked about this a little bit.
00:30:35.120 It does, it is an antidote to hatred.
00:30:37.120 I have a long list of people who I really would hate if I thought they could behave differently than they do.
00:30:46.980 Now, occasionally I'm taken in by the illusion that they could and should be behaving differently.
00:30:51.760 But when I have my wits about me, I realize that I am dealing with people who are going to do what they're going to do.
00:30:58.440 And my efforts to talk sense into them are going to be as ineffectual as they will be.
00:31:04.740 And there's really no place to stand where this was going to be other than it is.
00:31:10.420 And so it really is an antidote to hating some of the dangerously deluded and impossibly smug people
00:31:18.740 I have the misfortune of colliding with on a regular basis.
00:31:22.980 Can the form of human consciousness be distinguished from its contents, or are the two identical?
00:31:30.720 And that's an interesting question.
00:31:31.940 I think it's, insofar as I understand it, there are a couple different ways I can interpret what you've said there.
00:31:37.260 But I think human consciousness clearly has a form, both conscious and unconscious.
00:31:43.120 When you're talking about the contents of consciousness, you're talking about what is actually appearing before the light of consciousness.
00:31:51.200 That is, what is available to attention in each moment, what can be noticed.
00:31:55.680 But there's much that can't be noticed, which is structuring what can.
00:31:59.920 So the contents are dependent upon unconscious processes, which are noticeably human, in that the contents they deliver are human.
00:32:11.700 So, for instance, an example I often cite is our ability to understand and produce language.
00:32:18.880 The ability to follow grammatical rules, to notice when they're broken.
00:32:23.960 And all of these processes are unconscious, and yet this is not something that dogs do, it's not something that chimps do.
00:32:31.580 We're the only ones we know to do it, and all of this gets tuned in a very particular way in each person's case.
00:32:40.260 For instance, I'm totally insensitive to the grammatical rules of Japanese.
00:32:44.640 When Japanese is spoken in my presence, I don't hear much of anything linguistic.
00:32:48.140 So the difference between being an effortless parser of meaning and syntax in English, and being little better than a chimpanzee in the presence of Japanese,
00:32:59.100 that difference is, again, unconscious, yet determining the contents of consciousness.
00:33:04.340 So there are both unconscious and conscious ways in which consciousness, in our case, is demonstrably human.
00:33:12.920 And I don't really think you can talk about the humanness of consciousness beyond that.
00:33:16.900 Because for me, consciousness is simply the fact that it's like something to have an experience of the world.
00:33:24.320 The fact that there's a qualitative character to anything, that's consciousness.
00:33:29.060 And if our computers ever acquire that, well, then our computers will be conscious.
00:33:33.480 What's your opinion of the rise of the new nationalist right in Europe and the issue of Islam there?
00:33:39.500 There's a very unhappy linkage there.
00:33:41.400 The nationalist right has an agenda beyond resisting the immigration of Muslims.
00:33:46.600 But clearly, we have a kind of fascism playing both sides of the board here.
00:33:50.860 And that's a very unhappy situation and a recipe for disaster at a certain point.
00:33:58.240 I think the problem of Islam in Europe is of deep concern now.
00:34:02.900 And especially so probably in France, although it's bad in many countries.
00:34:08.420 You have a level of radicalization and a disinclination to assimilate on the part of far too many people.
00:34:19.700 And it's a problem unlike the situation in the United States for reasons that are purely a matter of historical accident.
00:34:27.980 But I think it's a cause of great concern.
00:34:31.540 And it is, as I said in that article on fascism, it is a double concern that liberals are sleepwalking on this issue.
00:34:41.220 And that to express a concern about Islam in Europe gets you branded as a right winger or a nationalist or a xenophobe.
00:34:51.280 Because these are the only people who have been articulating the problem up to now, with a few notable exceptions like Ayaan Hirsi Ali and Douglas Murray in the UK and Maajid Nawaz, who I've mentioned a lot recently.
00:35:05.640 So it's not all fascists who are talking about the problem of Islamism and jihadism in Europe.
00:35:11.700 But for the most part, liberals have been totally out to lunch on this topic.
00:35:17.140 And one wonders what it will take to get them to come around.
00:35:22.440 Lots of questions here.
00:35:24.040 Apologies for not getting to more than the tiniest fraction of them.
00:35:28.100 There appear to be now hundreds.
00:35:29.840 So what charity organization do you think is doing the best work?
00:35:33.440 There are two charities, unrelated to anything that I'm involved in, that I, by default, give money to, Doctors Without Borders and St. Jude's Children's Hospital.
00:35:44.640 Both do amazing work and work for which there really is no substitute.
00:35:50.020 So, for instance, when people use any of the affiliate links on my website or you see in a blog post where I link to a book, let's say I'm interviewing an author and I link to his book.
00:35:59.440 If you buy his book or anything else on Amazon through that affiliate link, well, then 50% of that royalty goes to charity and rather often it's Doctors Without Borders or St. Jude's.
00:36:12.200 I just think when you're helping people in refugee camps in Africa or close to the site of a famine or natural disaster or civil war, or doing pioneering research on pediatric cancer and never turning any child away at your hospital for want of funds, it's hard to see a better allocation of money than either of those two projects.
00:36:38.500 I reject religion entirely, but I'm curious how you, with complete certainty, know there is no God.
00:36:45.280 What proof do you have?
00:36:47.060 Well, this has the burden of proof reversed.
00:36:50.460 It's not that I have proof that there is no God.
00:36:52.680 I can't prove that there's no Apollo or Zeus or Isis or Shiva.
00:36:57.580 These are all gods who might exist.
00:37:01.640 But, of course, there's no good evidence that they do.
00:37:05.700 And there are many things that suggest that these are all the products of literature.
00:37:12.160 When you're looking on the mythology shelf in a bookstore, you are essentially perusing the graveyard of dead gods.
00:37:20.480 If you'd like to continue listening to this conversation, you'll need to subscribe at SamHarris.org.
00:37:27.740 Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along with other subscriber-only content, including bonus episodes and AMAs and the conversations I've been having on the Waking Up app.
00:37:39.940 The Making Sense podcast is ad-free and relies entirely on listener support.
00:37:44.320 And you can subscribe now at SamHarris.org.