Making Sense - Sam Harris - January 05, 2023


Making Sense of Foundations of Morality | Episode 3 of The Essential Sam Harris


Episode Stats

Length

44 minutes

Words per Minute

157.6

Word Count

7,078

Sentence Count

337



Summary

The goal of this series is to organize, compile, and juxtapose conversations hosted by Sam Harris into specific areas of interest. This is an ongoing effort to construct a coherent overview of Sam's perspectives and arguments, the various explorations and approaches to the topic, the relevant agreements and disagreements, and the pushbacks and evolving thoughts which his guests have advanced. The purpose of these compilations is not to provide a complete picture of any issue, but to entice you to go deeper into these subjects. Along the way, we'll point you to the full episodes with each featured guest, and at the conclusion we'll offer some reading, listening, and watching suggestions which range from fun and lighthearted to densely academic. We don't run ads on the podcast, so it's made possible entirely through the support of our subscribers; if you enjoy what we're doing here, please consider becoming a member of the Making Sense community. You'll hear plenty of crossover into other topics as these dives into the archives unfold, and your thinking about a particular topic may shift as you realize its contingent relationships with others. In this episode of The Essential Sam Harris, we take up the foundations of morality: What does it mean to call something morally good, and why should we care about being good? Can science answer questions of morality? How can we know that one state of the world is morally better than another? These are the questions explored in Making Sense of the Foundations of Morality and in Sam's book The Moral Landscape.


Transcript

00:00:00.000 Welcome to the Making Sense Podcast.
00:00:08.820 This is Sam Harris.
00:00:10.880 Just a note to say that if you're hearing this, you are not currently on our subscriber
00:00:14.680 feed and will only be hearing the first part of this conversation.
00:00:18.440 In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at
00:00:22.720 samharris.org.
00:00:24.140 There you'll find our private RSS feed to add to your favorite podcatcher, along with
00:00:28.360 other subscriber-only content.
00:00:30.520 We don't run ads on the podcast, and therefore it's made possible entirely through the support
00:00:34.640 of our subscribers.
00:00:35.900 So if you enjoy what we're doing here, please consider becoming one.
00:00:49.060 Welcome to The Essential Sam Harris.
00:00:51.480 This is Making Sense of the Foundations of Morality.
00:00:55.440 The goal of this series is to organize, compile, and juxtapose conversations hosted by Sam
00:01:03.520 Harris into specific areas of interest.
00:01:06.480 This is an ongoing effort to construct a coherent overview of Sam's perspectives and arguments,
00:01:12.220 the various explorations and approaches to the topic, the relevant agreements and disagreements,
00:01:18.220 and the pushbacks and evolving thoughts which his guests have advanced.
00:01:21.960 The purpose of these compilations is not to provide a complete picture of any issue, but
00:01:29.060 to entice you to go deeper into these subjects.
00:01:32.480 Along the way, we'll point you to the full episodes with each featured guest, and at the
00:01:37.720 conclusion, we'll offer some reading, listening, and watching suggestions, which range from fun
00:01:43.640 and light to densely academic.
00:01:47.400 One note to keep in mind for this series, Sam has long argued for a unity of knowledge where
00:01:53.260 the barriers between fields of study are viewed as largely unhelpful artifacts of unnecessarily
00:01:58.820 partitioned thought.
00:02:00.660 The pursuit of wisdom and reason in one area of study naturally bleeds into, and greatly
00:02:06.160 affects, others.
00:02:07.360 You'll hear plenty of crossover into other topics as these dives into the archives unfold.
00:02:14.160 And your thinking about a particular topic may shift as you realize its contingent relationships
00:02:19.200 with others.
00:02:20.840 In this topic, you'll hear the natural overlap with theories of free will, political philosophy,
00:02:27.060 violence, belief and unbelief, and more.
00:02:31.080 So, get ready.
00:02:33.360 Let's make sense of the foundations of morality.
00:02:36.400 Sam's most important thesis might be the one we'll be exploring in this compilation.
00:02:47.820 It's possibly his most essential argument to grasp in order to understand his positions
00:02:52.420 in the areas of politics, violence, charity, income inequality, and even atheism and religion.
00:03:00.260 He first set the argument down in book form when he wrote The Moral Landscape in 2010.
00:03:05.580 He also delivered a TED Talk, which compressed the argument's central themes into a 15-minute
00:03:11.820 presentation.
00:03:13.440 That talk was entitled, Can Science Answer Questions of Morality?
00:03:18.340 Naturally, both the book and the video are recommended to pair with this compilation.
00:03:22.480 As we explore Sam's conversations on this subject from the Making Sense archive, we'll be treading
00:03:30.140 into the exhaustively discussed philosophy of morality.
00:03:33.440 There's an endless taxonomy of positions in this field.
00:03:37.720 The ensuing picture can look like a wildly overgrown and gangly family tree, pointing to countless
00:03:44.140 frameworks with names like consequentialism, utilitarianism, virtue ethics, care ethics, constructivism,
00:03:52.700 nihilism, divine command theory, and deontology.
00:03:56.700 But at the base of that tree is a fork that bifurcates the topic fairly sharply.
00:04:03.100 It makes sense for us to start at that primary split and note which limb Sam climbs.
00:04:09.980 Let's label the split with one branch marked as moral realism and the other as its negation,
00:04:17.060 moral anti-realism.
00:04:18.460 The path of moral realism contends that there are such things as objective moral truths.
00:04:26.120 This would mean that, all things being equal, a declaration like the following is objectively
00:04:31.980 true.
00:04:33.440 It is morally better to give food to a starving creature than to withhold the food.
00:04:38.980 It would mean that it's possible for moral statements like this to be right or wrong.
00:04:43.620 And to take it even further, it would mean that the truth of this moral statement would
00:04:49.520 remain true even if everyone were wrong and confused about it.
00:04:54.720 For a moral realist, a statement like,
00:04:57.780 Slavery was morally wrong, is not simply a statement of opinion or the suggestion of a
00:05:03.380 distaste for the practice.
00:05:05.280 Instead, it's a contention that the argument has its foundations outside of culture, personal
00:05:10.660 preference, or historical context, and that slavery was, is, and always will be a moral
00:05:17.700 wrong.
00:05:19.100 In philosophical jargon, you could say that objectively true means that it is true from
00:05:25.600 the view from nowhere.
00:05:28.400 You've likely already gathered that the other branch of the tree, the one labeled moral anti-realism,
00:05:34.960 rejects the entire notion of objective statements in morality.
00:05:38.520 It contends that when it comes to moral statements, we don't have any path to access this so-called
00:05:45.100 view from nowhere, and that moral sentiment is always really a matter of evolved preference,
00:05:51.480 species bias, historical bias, or cultural bias.
00:05:56.300 This branch of ethics declares that the quest for a genuine foundation for our moral sentiments
00:06:01.780 and emotions, that rests outside of our biases, will always result in failure, and that ultimately,
00:06:08.360 all moral sentiments are inescapably subjective, no matter how convincing or widely accepted.
00:06:16.500 Before we go too much further, it's important to note that the outwardly expressed moral attitudes
00:06:22.420 and political positions of realists and anti-realists can strongly cohere.
00:06:28.000 It's entirely possible, even abundantly probable, to find both a realist and an anti-realist arguing
00:06:35.260 that slavery is morally wrong, and to find them both voting for the same political proposition
00:06:41.280 to outlaw the practice.
00:06:43.380 The difference between the two philosophies presents itself when they try to provide their deepest,
00:06:48.580 foundational basis for this moral judgment.
00:06:52.120 The realist claims that slavery being wrong is a kind of objective fact, not necessarily
00:06:58.900 exactly like the facts in mathematics or chemistry, but something a bit like them, or at least strongly
00:07:05.180 informed and dictated by those facts, strong enough to be elevated to a factual, moral truth.
00:07:12.300 The moral anti-realist might agree that slavery is a moral wrong, but declare that ultimately,
00:07:18.580 the foundations for that judgment are anthropocentric biases, evolved emotions, historical context,
00:07:25.560 and strong moral instincts, not anything like a scientific fact.
00:07:30.660 One name you'll hear often in this compilation, and in any discussion on this topic, is David Hume.
00:07:37.820 Hume was a brilliant philosopher from Scotland who did his writing in the 1700s.
00:07:42.600 He formulated what has come to be known as the is-ought distinction, which argued that you can't get an
00:07:49.800 ought from an is. Or, to reword it in philosophical hypothesis form, Hume argued that there is no
00:07:57.880 description of the way the universe is, which tells us how the universe ought to be.
00:08:02.760 This insight is what really fertilizes the entire branch of anti-realism in the field of ethics.
00:08:10.280 You may have already guessed that Sam very confidently moves down the moral realism branch.
00:08:16.720 And while he conceptually agrees with Hume's logic, he considers the confusion that it's caused,
00:08:22.820 and its resulting moral subjectivism and cultural relativism, to be a kind of ethical and political
00:08:28.660 emergency. Sam asserts that Hume's is-ought insight has led many people to conclude that science
00:08:35.620 really has nothing to say about morality. The relativist argument suggests that because science
00:08:41.460 pursues the is-side of Hume's distinction, and morality pursues the ought-side, questions of morality
00:08:48.420 are completely divorced from science and are purely subjective matters for which there is no objective
00:08:54.200 arbiter. Sam points out that this attitude has rendered many otherwise moral and intelligent
00:09:00.800 people mute and blind when it comes to casting judgment on the moral behaviors of others, and
00:09:06.840 especially other cultures. Sam's approach to objective morality allows him to escape this moral paralysis,
00:09:14.900 and, as you can imagine, his resulting utterances have landed him in hot water from time to time.
00:09:20.440 Before we get to our first clip, it's also important to clear something up about Sam's brand of moral
00:09:27.300 realism early so we can avoid a common misperception. Sam's argument in favor of moral realism does not
00:09:35.620 imply that there is only one correct answer to a moral question. It also does not imply that he knows the
00:09:42.120 right answer. It's only a contention that there are such right answers, or, more accurately, that there are
00:09:49.200 right directions to move towards, that it's possible to objectively compare the moral value of two
00:09:56.100 states of being and two states of the universe, and that it is possible to have real, objective
00:10:02.220 confidence in those moral assessments, and that it's therefore possible to make genuine moral progress.
00:10:09.440 But, and this is the very delicate part, it is entirely possible that one must move away from that right
00:10:18.220 direction in order to navigate towards a higher peak of moral states. This is the wrinkle that starts to
00:10:24.240 paint his moral landscape as a kind of mountain hiking adventure, with endless peaks and valleys, foggy
00:10:30.600 hilltops, dangerous caverns, canyons, wrong turns, impassable swamps, and open upward clearings.
00:10:39.440 What Sam argues is that morality, when properly understood, is a navigation problem which requires
00:10:45.980 ever-improving methods to draw better maps, manufacture accurate compasses, and devise a
00:10:52.360 good pair of binoculars so that we can have confidence that we are climbing to higher and
00:10:56.540 higher ground. So, when we brought up our first example to show the split between moral realists
00:11:04.740 and anti-realists, the idea that feeding a starving creature is better than depriving it,
00:11:10.800 we added a tiny four-word phrase in passing to qualify it, all things being equal. But the funny
00:11:18.520 thing about our actual lives and real-world situations is that all things are almost never
00:11:24.220 equal. In an actual situation you might encounter in the world, the food in question may be your last
00:11:30.720 bites, and you'll starve to death if you feed the creature. Or there may be several starving creatures
00:11:36.340 in front of you, and you only have enough food for one of them. Or maybe this creature will devour
00:11:41.880 two other healthy creatures if you feed it. Adding wrinkles like this and playing with all of these
00:11:47.660 crazy variables tends to make things unequal and morally complex. But, in an effort to distill and
00:11:55.080 expound upon different moral frameworks and discover psychological and philosophical insights,
00:12:01.000 philosophers and writers have been conjuring up fun and sometimes diabolical thought experiments in
00:12:06.560 situations like this to try to flatten or equalize certain elements and isolate others.
00:12:13.200 We'll be hearing some fun thought experiments, and some not-so-fun ones, throughout this compilation.
00:12:18.340 So, let's get to our first clip and introduce a famous thought experiment that we'll be returning
00:12:24.360 to frequently. The clip is a conversation with Australian philosopher Peter Singer, who at this
00:12:30.740 point seems to have the descriptor of world's most influential living philosopher as a permanent
00:12:36.260 addendum to his name. We'll begin with what has become a famous simple thought experiment that
00:12:42.460 Singer used in 1971 in Philosophy and Public Affairs, an academic journal that was little
00:12:49.060 known at the time. The thought experiment goes like this.
00:12:55.300 Imagine you have just purchased a nice pair of new shoes, and you're walking by a pond.
00:13:00.820 You know this pond well, and you know its depth and probable dangers. It's very shallow. It only comes
00:13:06.720 up to your waist. Suddenly, you see a small child in the pond flailing for her life and struggling.
00:13:15.120 She's clearly in distress and in imminent danger of drowning.
00:13:19.200 Do you run into the pond and rescue her, knowing that you will muddy your shoes and certainly ruin them?
00:13:26.640 If you're waiting for a more complicated or challenging choice, it's not coming.
00:13:31.760 That's the whole story, and that's the whole thought experiment.
00:13:34.520 Nearly everyone responds by saying,
00:13:37.760 Of course I run into the pond. Who cares about the shoes?
00:13:42.000 Now, Singer takes that answer and suggests that we,
00:13:45.480 and he's speaking mostly about those of us in the affluent world,
00:13:49.440 that we are all the time in a very similar moral position as the pedestrian walking by the pond.
00:13:55.880 Let's say that the shoes cost $90.
00:13:58.840 And let's also say you already had a pair of perfectly usable shoes at home.
00:14:02.740 This purchase was a luxury.
00:14:05.760 Go back to the moment when you were at the shoe store and looking at them on display.
00:14:10.820 What if, instead of making that purchase, you knew that you could donate that $90 to a charity
00:14:16.220 which had displayed solid data that it could use that money, with a very high degree of probability,
00:14:21.600 to save the life of a child in Eritrea who would otherwise soon die?
00:14:26.100 Is choosing to purchase the shoes anyway a choice that is morally equivalent
00:14:31.160 to strolling past the drowning child and keeping your new shoes shiny and clean
00:14:35.760 while she drowns to death in front of you?
00:14:37.800 This arresting question has spawned a swarm of responses, supportive movements, clever challenges,
00:14:47.880 creative edits, defeated frustrations, and counter-considerations.
00:14:52.760 We'll be playing with Singer's shallow pond a good bit throughout this compilation
00:14:56.880 to flesh out Sam's take on it and his particular run at the eternally vexing problem of morality.
00:15:04.540 An obvious distinction to draw between the moment at the pond versus the moment at the shoe store
00:15:09.400 is something like an act of omission versus an act of commission.
00:15:14.640 In other words, is there a difference between failing to act and choosing to act
00:15:19.320 if they result in the same moral outcome?
00:15:21.460 Let's jump into the first clip, where Sam is speaking with Peter Singer in episode 48.
00:15:28.220 What is moral progress?
00:15:31.420 Is there an important moral distinction between acts of omission and acts of commission?
00:15:37.000 We certainly act as though there were.
00:15:40.300 So how does, and your famous shallow pond example put some pressure on this here.
00:15:46.700 So how do you think about the difference between not saving a life that would be very easy for you to save
00:15:53.160 and taking one actively?
00:15:56.160 And this obviously also relates to end-of-life considerations of the sort you mentioned,
00:16:00.640 the difference we seem to hold on to between removing life support
00:16:05.340 and passively letting someone die versus actively killing them,
00:16:09.300 which in many cases might be the more merciful thing to do.
00:16:12.540 Yeah, so my view is that the distinction between killing and letting die
00:16:19.000 or between acts and omissions, it's put in different ways,
00:16:23.100 is not itself of great intrinsic significance.
00:16:28.200 It may be a marker for other things of more significance,
00:16:31.740 like it may be a marker for motives, for instance.
00:16:35.180 So if somebody were to say to me, suppose I say, look, you should give to this effective charity.
00:16:43.140 Let's be specific.
00:16:44.420 You should give to the Against Malaria Foundation because it will distribute bed nets
00:16:48.300 in places where there's a lot of malaria and where children die from malaria.
00:16:53.180 And if you donate what I know you can afford to donate to the Against Malaria Foundation,
00:16:58.440 they will use it to distribute bed nets and you will be saving at least one child's life.
00:17:02.740 And that's factual, I think.
00:17:04.480 That is a real organization and a real example.
00:17:07.340 And let's say the person doesn't do that, right?
00:17:10.020 So then that person has, in one sense, let a child die.
00:17:13.420 Do I think of that person exactly the same as somebody who traveled to Africa,
00:17:18.700 shot a small child and then traveled back to the United States?
00:17:21.840 Of course not.
00:17:22.500 I know that there's a huge psychological difference in that person that many of us are apathetic
00:17:31.860 or don't care enough, don't feel psychologically drawn to help people who we can't even see.
00:17:40.280 But for someone to actually have the malice and the will to travel, to find a child,
00:17:46.160 to kill that child, has to be a completely horrible, depraved person.
00:17:51.380 So sometimes the distinction between acts and omissions will signal something like that.
00:17:56.880 Why did this person go out of their way to kill?
00:17:59.460 Whereas in the other case, they simply didn't do enough to save a life.
00:18:04.260 But then let's look at another case, the medical case that you mentioned.
00:18:09.260 So an infant has been born prematurely and has had a very severe bleeding in the brain,
00:18:17.540 a hemorrhage.
00:18:18.020 The doctors do a scan of the brain.
00:18:20.540 They find that all of the parts of the brain that are associated with consciousness,
00:18:26.440 like the cortex, have been irreversibly destroyed.
00:18:30.440 Now, there's two possible things that might happen in these circumstances.
00:18:36.720 One might be that the doctors, after discussion with the parents, say,
00:18:42.280 look, your child really has a hopeless future.
00:18:44.600 They'll survive if we continue to treat them, but they'll just lie in bed all day and never
00:18:50.860 be able to communicate with anyone, probably never have any conscious experiences at all,
00:18:55.780 have to be fed through a tube and so on.
00:18:58.100 And the doctors will then say, and the parents will usually agree, so we could withdraw the
00:19:02.820 respirator.
00:19:03.600 Your baby is too small to breathe on his own.
00:19:06.960 We can withdraw the respirator and your baby will die.
00:19:09.780 And parents will typically say, if you think that's best, doctor, then I'm okay with that.
00:19:14.800 And the baby will die.
00:19:16.240 Now, that is seen as a letting die, as an allowing to die, not as a killing.
00:19:21.760 On the other hand, it might have happened that because it took some time to carry out the
00:19:27.300 diagnosis, because the baby was particularly vigorous and so on, that the baby no longer needs
00:19:33.460 a respirator.
00:19:34.860 So the prognosis is exactly the same.
00:19:36.880 The baby is never going to communicate in any way, probably never going to be conscious.
00:19:40.660 She's going to have to be fed through a tube and lie in a bed.
00:19:42.960 But you can't bring about the baby's death by withdrawing the respirator.
00:19:48.060 And let's just say that there's nothing else you can do that will bring about the baby's
00:19:51.440 death.
00:19:51.700 The baby is otherwise, apart from this massive and irreparable brain damage, the baby is
00:19:56.860 otherwise healthy.
00:19:57.660 Now, I think that if you're prepared to say that it was justifiable to withdraw the respirator,
00:20:04.380 you ought to be prepared to say it would be justifiable to give the baby a lethal injection
00:20:08.440 so that the baby dies without suffering.
00:20:10.980 There is no moral difference.
00:20:12.520 In both cases, you know exactly what the consequences of your action will be.
00:20:16.620 In both cases, your intention is to bring about the death of the child.
00:20:21.240 Your motivation is equally, I would say, equally good, equally reasonable in both cases.
00:20:28.900 So the means is really irrelevant.
00:20:33.020 But legally, of course, one is murder and the other is, well, maybe it's slightly gray in
00:20:40.260 some countries.
00:20:40.820 But anyway, it's done in every neonatal intensive care unit in every major city in the United
00:20:46.580 States.
00:20:47.680 And nobody ever gets prosecuted for it.
00:20:49.700 So it seems to be legally acceptable.
00:20:54.040 But that's, as I say, that's a case where I would think we ought to be able to accept
00:21:01.020 active steps on the basis of saying it's no different from the other case.
00:21:06.200 And certainly there are cases where the active step is the one that bypasses an immense amount
00:21:11.800 of suffering, right?
00:21:13.240 Where the passive one may...
00:21:14.140 Absolutely.
00:21:14.660 That's right.
00:21:15.140 In other cases where there is some consciousness, not exactly the case I described, but there
00:21:19.580 is some consciousness, I do know of cases where people will say, you know, no, we can't
00:21:24.920 actually take active steps to end life.
00:21:26.500 But if the baby gets pneumonia, we won't give antibiotics.
00:21:30.820 And so then the baby will suffer a lingering death from pneumonia over days or maybe even
00:21:37.480 a couple of weeks, you know, which is a horrible thing and a pointless thing to do if you decided
00:21:41.920 that it's better that the baby should die.
00:21:44.160 You know, why let the baby suffer in this way?
00:21:47.040 I want to go back to the issue of the shallow pond.
00:21:50.620 So you admit that there's a difference.
00:21:52.180 It would take a very different sort of person to go to Africa with the intention of killing
00:21:57.040 someone than merely decline to buy a bed net when told on good information that this would
00:22:05.120 save a human life.
00:22:06.780 Those are very different people.
00:22:08.320 But I think you're saying that it's natural for us to view them as different.
00:22:12.200 And because it requires actually a different psychology to do one versus the other, they
00:22:18.960 are different.
00:22:19.820 But if we abstract away from those differences and talk about public policies and what governments
00:22:25.800 should do, then the act and omission difference shouldn't be morally salient to us anymore.
00:22:33.440 Is that where you're headed with that?
00:22:35.400 I'm not going to say that it shouldn't be at all morally salient because there are questions
00:22:39.020 in what governments do in terms of the examples that they set.
00:22:43.360 But I do think it's very serious that governments allow people to die when they could prevent
00:22:47.900 them, when they have the resources to prevent them.
00:22:51.060 And so I certainly think that the governments of the wealthier nations of the world should
00:22:57.400 be getting together and developing policies to eliminate preventable child deaths and preventable
00:23:05.100 suffering from diseases.
00:23:06.600 They did make a reasonable effort in terms of the Millennium Development Goals to reduce
00:23:12.500 suffering and progress was made.
00:23:14.640 The number of children dying fell quite significantly during that period, as did the number of people
00:23:20.700 in extreme poverty.
00:23:22.380 And that's a good thing.
00:23:23.800 But I'm concerned whether sufficient progress is continuing to be made.
00:23:29.760 I think more progress could have been made even in that period, although some progress was
00:23:34.560 made, and I think we should be doing more.
00:23:37.400 And that applies to governments, but it also applies to individuals.
00:23:41.140 I think all of us who can afford to donate to effective charities ought to be doing that
00:23:49.680 because the governments are not doing enough.
00:23:52.300 How do you view the ethical significance of proximity, if there is any?
00:23:57.640 I mean, obviously, there's an immense psychological significance that the starving person on my
00:24:02.140 doorstep is different, certainly more salient than the starving person in a distant country
00:24:08.040 whose existence I know about, at least in the abstract.
00:24:11.480 Presumably, you think that that difference is far bigger than it should be.
00:24:17.460 But is there any ethical significance to proximity, the problem in your backyard as opposed to the
00:24:23.580 problem an ocean away?
00:24:26.160 Well, I'd say not to proximity in itself, again.
00:24:30.060 We can perhaps be more confident about what we're achieving when things are in our backyard
00:24:35.000 and we actually can see what's happening.
00:24:36.800 We can talk to the people who are affected by it.
00:24:38.700 But we do have very good research now about effective non-profit organizations that are
00:24:45.060 trying to help people far away.
00:24:47.960 So there's organizations like GiveWell that do research on effective charities.
00:24:53.680 There's an organization I founded called The Life You Can Save, and it has a website which
00:25:00.480 lists charities that we've vetted, and some of it draws on GiveWell's research, some of it
00:25:06.360 draws on other research.
00:25:07.340 So that we recommend effective charities.
00:25:10.940 And if you can have a high level of confidence in the effectiveness of what you're doing,
00:25:16.300 then it's not very different morally.
00:25:19.380 As you correctly said, it is very different psychologically.
00:25:22.760 But morally, it's not very different from things that are going on in your backyard.
00:25:28.040 Given that it is so different psychologically, I mean, presumably, if I told you that there's
00:25:32.860 a starving person by my front door today that I just stepped over on the way to this podcast
00:25:37.900 because I was, you know, I'm busy, you would view me with something close to horror and repugnance
00:25:44.680 and would be right to.
00:25:46.200 But if I told you that I got yet another appeal from a good charity, which I didn't act on, you
00:25:54.880 would just view me as a more or less psychologically normal, if somewhat aloof person.
00:26:01.320 Do you view our moral progress personally and collectively as a matter of collapsing that
00:26:08.760 distance as much as psychologically possible so that we really can't put distant suffering
00:26:14.800 out of sight and out of mind?
00:26:16.180 Yes, I do think that's an indicator of progress.
00:26:19.600 And it's, you know, the psychology is understandable, of course.
00:26:23.180 Our ancestors for millennia, for perhaps hundreds of thousands of years, if we go back even,
00:26:29.900 could go back even to social primates before there were humans at all, these ancestors lived
00:26:36.320 in small social groups, face-to-face groups where they knew people and they would help others
00:26:42.580 and cooperate with them in various ways, but they had no relations perhaps even to people
00:26:47.460 who lived across the mountain range in the next valley.
00:26:51.800 And now suddenly, suddenly in terms of evolutionary time anyway, we live in a world where we have
00:26:57.560 instant communications, where we have very rapid delivery of assistance, where we have good
00:27:03.600 ways of working out what is going to help people most effectively.
00:27:07.800 And our psychology has not changed rapidly enough to cope with this.
00:27:18.560 There's an interesting note about Singer's pond analogy and the idea that Sam raised about
00:27:23.780 evaluating the kind of person who would stroll by a child drowning in a pond versus the kind
00:27:29.680 of person who declines to donate to a charity.
00:27:31.860 Singer originally wrote the pond story in an essay about a mass humanitarian crisis in East
00:27:38.780 Bengal in 1971, spurred on by a civil war and a devastating cyclone.
00:27:44.820 He presented the pond to argue for the presence of a moral opportunity, and perhaps for a moral
00:27:50.080 obligation, of wealthy countries to intervene with food, shelter, and rescue.
00:27:55.800 We can map that same character analysis that Sam suggested onto the national level and ask,
00:28:01.860 what kind of country declines to feasibly rescue those in a foreign crisis versus what kind
00:28:08.340 of country declines to offer generic foreign aid absent any acute humanitarian crisis?
00:28:14.820 A screaming child drowning in a pond is an emergency, but the slow drip of individual preventable
00:28:21.360 deaths from hunger, illness, and poverty, and spread across entire continents does not seem
00:28:27.780 to present itself in that way, or to expose the kind of people we are.
00:28:32.840 But shouldn't it?
00:28:34.260 What if we gathered all of those individuals into one location, like a sports stadium, and
00:28:39.500 announced that a bomb would kill them all at midnight unless we easily diffused it?
00:28:44.360 That edit sounds extreme, but it only gathers the location of these preventable deaths to the
00:28:50.300 same venue, and it makes explicit the imminence of their demise.
00:28:54.000 Somehow that makes it feel more like a newsworthy emergency that only a moral monster would ignore.
00:29:01.060 But again, Singer argues that this may actually be the situation that most of us are in today,
00:29:06.780 if we only bothered to notice it.
00:29:09.980 This is the deeply challenging work that the pond analogy does.
00:29:13.660 So, let's stay with that last thread from Sam and Singer's conversation of proximity,
00:29:20.220 and the tension between psychology and moral philosophy.
00:29:24.660 Like all moral dilemmas and thought experiments, you can start to tinker with the variables in
00:29:29.820 certain ways that are designed to highlight how your moral intuitions might shift with each
00:29:34.340 edit.
00:29:35.440 For example, replay the pond analogy.
00:29:37.940 But this time, you see five children drowning instead of one.
00:29:44.160 They're all at different distances from you, spread throughout the pond.
00:29:48.500 You're quite certain that in the time it will take you to reach and rescue one of them,
00:29:52.680 the other four will drown and die.
00:29:55.240 Which do you go for?
00:29:56.940 Assuming you're still willing to ruin your shoes.
00:30:00.720 Maybe you decide that flipping a coin is the best method.
00:30:04.000 But what if one of the children happens to be your child?
00:30:06.700 Do you go for her no matter what?
00:30:09.700 Wading past the cries for help from an unfortunate, unknown child?
00:30:14.460 How about if you knew all of the struggling children,
00:30:17.140 and you know that one of them has a terminal illness,
00:30:19.560 and is unlikely to live another year anyway?
00:30:22.440 Do you avoid going for that child?
00:30:24.980 What if one of the children is known to be showing signs of being a scientific prodigy,
00:30:29.480 and there are high hopes for her future,
00:30:31.860 and she's likely to be a great benefit to humanity?
00:30:34.260 What if you think all of these factors are just too vulgar,
00:30:39.200 and you simply go to whichever one happens to draw you first while you close your eyes?
00:30:44.140 Would that method favor the child who happens to yell the loudest?
00:30:48.320 If we keep our eyes open and just follow our instinct,
00:30:51.840 would we inevitably end up being drawn towards the child who's the cutest?
00:30:55.320 Or even the child who looks a little like us and reminds us of our kin?
00:31:00.100 We can keep playing these kinds of games forever.
00:31:03.340 We could even make it nearly identical to the famous trolley problem,
00:31:07.240 the thought experiment which ties five people to a railroad track,
00:31:11.140 while one person is fastened to a separate track.
00:31:13.500 In that now well-known nightmare,
00:31:16.880 you're given the choice to divert an out-of-control trolley towards the one,
00:31:21.180 rather than the five, by flipping a switch.
00:31:24.320 In our pond, we can imagine that four of the children are clinging to a rapidly deflating life raft,
00:31:30.420 and they could all grab hold of it and be dragged to safety by you,
00:31:33.740 while one isolated child is drowning by himself a hundred feet away.
00:31:37.480 Is there a right choice for problems like these?
00:31:42.040 We're going to go to our second clip to focus on the suggestion
00:31:44.940 that there are right answers to these questions.
00:31:48.500 This guest will argue that our intuitions lead us to actions
00:31:51.920 that are compromised by our evolved psychological biases
00:31:55.260 to favor creatures with which we can empathize.
00:31:58.800 To return to the issue of proximity,
00:32:01.620 it's certainly easier to empathize with someone who's close enough
00:32:04.840 to be in our visual and auditory field
00:32:06.940 and whose screams we can hear,
00:32:09.440 rather than a distant, nameless, faceless, voiceless child.
00:32:14.200 We know that it's also easier to empathize with a single child
00:32:17.740 whose name and story we know
00:32:19.340 over a huge number of distant, nameless children
00:32:22.680 who you'll never meet.
00:32:24.600 In fact, as you'll hear Sam and this next guest point out,
00:32:28.420 this specific aspect of our psychology is even more curious,
00:32:32.480 where our ability to empathize with a specific starving child
00:32:35.760 is reduced when you simply place the same child
00:32:39.040 amongst the company of thousands of others just like him.
00:32:41.780 The next guest is Paul Bloom,
00:32:45.940 a professor of psychology formerly of Yale University
00:32:49.080 and now with the University of Toronto.
00:32:52.380 Bloom has had several wonderful conversations with Sam,
00:32:55.500 and this is their first,
00:32:56.600 which came just after the release of Bloom's book
00:32:59.480 with the provocative title Against Empathy.
00:33:02.240 In it, he argues that our much-ballyhooed capacity for empathy
00:33:07.180 is not the clean moral panacea
00:33:09.760 which it is sometimes advertised to be.
00:33:12.400 In fact, it may often be more of a bug than a feature
00:33:15.600 when it comes to our moral reasoning.
00:33:18.520 Here is Sam with Paul Bloom from episode 14,
00:33:22.420 In Cold Blood.
00:33:23.300 You've come down very much on the,
00:33:28.580 really, a side of a controversy
00:33:30.180 that most people didn't even know existed,
00:33:33.020 which is that empathy, in many cases, is harmful
00:33:36.620 and is not a good piece of software
00:33:40.060 if you want to be a reliable moral actor
00:33:42.480 in normative terms.
00:33:43.720 So tell me about what you've said about empathy
00:33:46.200 and let's get into the details.
00:33:48.360 So I always have to begin
00:33:49.560 with the most boring way ever to begin anything,
00:33:51.740 which is we're talking about terminology
00:33:53.520 because people use the term empathy
00:33:56.240 in all sorts of ways
00:33:57.320 and I think my position is easily misunderstood.
00:34:00.820 If you think, some people think empathy
00:34:02.840 just as a word referring to anything good,
00:34:05.840 compassion, care, love, morality,
00:34:07.840 making the world a better place and so on.
00:34:10.120 Under that construal of empathy,
00:34:11.540 I have nothing against it.
00:34:12.700 I'm not a monster.
00:34:13.620 I mean, I want to make the world a better place.
00:34:15.960 Other people use the term empathy very narrowly
00:34:18.640 to refer to understanding
00:34:20.540 in a cold-blooded way
00:34:22.020 what's going on in the minds of other people,
00:34:24.400 understanding what they think and what they feel.
00:34:26.720 And I'm not against that too, though,
00:34:28.520 and we might want to talk about this.
00:34:30.120 I think it's morally neutral.
00:34:31.820 I think very great and wonderful and kind people
00:34:34.500 have this sort of cognitive empathy,
00:34:37.600 if you want to call it that,
00:34:38.920 but so do con men, seducers, and sadists.
00:34:44.000 Bullies are,
00:34:45.180 one reason why bullies are very good at being bullies
00:34:47.740 is that they exquisitely understand
00:34:49.520 what's going on in the heads of their victims.
00:34:51.540 Yeah, yeah.
00:34:52.360 That's often misunderstood, by the way.
00:34:53.960 We should just footnote that,
00:34:55.180 that this form of cognitive empathy
00:34:57.520 that you've just distinguished
00:34:59.460 from the other form that you're about to describe
00:35:02.100 is something that psychopaths have in spades.
00:35:05.020 When we talk about psychopaths being devoid of empathy,
00:35:08.240 it's not the empathy that allows us
00:35:10.680 to understand another person's experience.
00:35:13.300 That is not something that prototypically evil people lack.
00:35:16.900 In fact, they, as you just said,
00:35:19.000 they use this understanding
00:35:20.380 to be as successfully evil as they can be.
00:35:23.780 That's exactly right.
00:35:24.740 So, you know, another term for cognitive empathy
00:35:28.100 is social intelligence.
00:35:29.880 And I like that way of talking
00:35:31.040 because it captures the point
00:35:32.120 that intelligence is an extraordinary tool.
00:35:35.380 Without it, you know,
00:35:36.480 we couldn't do any great things.
00:35:38.080 But in the hands of somebody with malevolent ends,
00:35:40.460 intelligence could be used to make them a lot worse.
00:35:43.340 And I think that that's,
00:35:44.480 that social intelligence is exactly like that.
00:35:47.660 Mind reading, another term for it,
00:35:50.320 is a tool that could be used any way you want it.
00:35:53.100 And the very best people in the world have tons of it.
00:35:55.800 And so do the very worst people in the world.
00:35:58.520 So the sense of empathy I'm using,
00:36:01.680 and this actually matches what most psychologists
00:36:04.520 and most philosophers,
00:36:06.500 how they use the term,
00:36:08.040 is empathy is in the sense of what Adam Smith
00:36:11.380 and David Hume and other philosophers call sympathy.
00:36:14.800 And what it refers to is feeling what other people feel.
00:36:19.540 So if you're in pain,
00:36:21.160 and I feel empathy for you,
00:36:24.080 I will feel to some degree your pain.
00:36:26.820 If you're humiliated,
00:36:27.940 I will feel your humiliation.
00:36:29.260 If you are happy,
00:36:29.980 I will feel your happiness.
00:36:31.980 And you could see why people are such fans of this.
00:36:35.740 It brings me closer to you.
00:36:37.320 It dissolves the boundaries between me and you.
00:36:40.200 And there's a lot of psychological research showing
00:36:42.360 that if I feel empathy towards you,
00:36:44.720 I'm more likely to help you.
00:36:46.740 Dan Batson has done some wonderful studies on them,
00:36:48.720 and I don't contest that at all.
00:36:50.920 But the problem with empathy,
00:36:52.600 and one of the problems with empathy,
00:36:54.120 there are many,
00:36:55.020 but the main problem is it serves as a spotlight.
00:36:57.880 It zooms me in on a person in the here and now.
00:37:00.680 And as a result, it's biased, it's parochial,
00:37:06.420 it's short-sighted, and it's innumerate.
00:37:10.500 One way I put it is it's because of empathy
00:37:13.260 that governments and societies care so much more
00:37:17.000 about a little girl stuck in a well
00:37:18.900 than about millions or more people
00:37:21.520 suffering and dying through climate change.
00:37:24.500 It's because of empathy, at least in part,
00:37:27.900 that we freak out and panic
00:37:30.040 over mass shootings,
00:37:33.000 which, however horrible,
00:37:35.460 are a tiny proportion of gun homicides in America,
00:37:38.960 0.01% roughly.
00:37:41.240 Yeah.
00:37:41.740 I mean, so if you ask people,
00:37:43.720 they would say mass shootings
00:37:44.720 are the most terrible things there are.
00:37:46.380 And, you know, I live in Connecticut.
00:37:47.880 Newtown's not that far away.
00:37:49.320 After the Sandy Hook killing,
00:37:50.940 people were, including me,
00:37:52.300 were deeply upset.
00:37:54.040 But intellectually,
00:37:55.680 if you could snap your fingers
00:37:57.200 and make all the mass shootings go away forever,
00:37:59.360 and then you did that,
00:38:01.360 nobody would know based on the homicide numbers.
00:38:04.680 Yeah.
00:38:05.520 It's so tiny.
00:38:06.740 So it misdirects us.
00:38:08.360 It causes us to focus on the wrong thing.
00:38:10.920 It causes us to freak out at the suffering of one
00:38:13.540 and ignore the suffering of 100.
00:38:15.500 And in one of your books,
00:38:18.880 I forget which one,
00:38:20.220 you talk about the study
00:38:22.160 where we care more about one than about eight.
00:38:25.580 Yeah.
00:38:25.800 And you say something to the effect of,
00:38:27.480 if there's ever a non...
00:38:28.640 That's Paul Slovic's work.
00:38:30.320 That's right.
00:38:30.840 That's right.
00:38:31.640 Some wonderful studies.
00:38:32.920 And also,
00:38:33.960 researchers like Ritov
00:38:34.900 and other investigators
00:38:36.460 have done this since.
00:38:38.520 And, you know,
00:38:39.500 and you described us
00:38:40.980 that if there's ever
00:38:41.960 a non-normative finding
00:38:44.120 in psychology,
00:38:44.840 that's it.
00:38:45.860 And so,
00:38:46.740 I think we could,
00:38:47.840 I think there's many more examples like this
00:38:49.520 that we could say,
00:38:50.420 we could look and say,
00:38:51.440 and say as rational people,
00:38:54.100 well, you know,
00:38:55.080 a black life matters
00:38:56.380 as much as a white life.
00:38:58.240 The life of an ugly person
00:38:59.780 who doesn't inspire my empathy
00:39:01.800 matters just as much
00:39:03.160 as a beautiful person who does.
00:39:05.040 And the lives of a hundred
00:39:06.460 matter more than the life of one.
00:39:09.180 Especially,
00:39:10.040 and this is the amazingly
00:39:11.900 non-normative finding
00:39:13.900 from Slovic's work,
00:39:15.120 is that especially
00:39:15.800 if those hundred
00:39:16.920 include the one
00:39:18.920 you were caring about.
00:39:20.840 So, you can set up this paradigm
00:39:22.300 where you show
00:39:23.740 a reliable loss of concern
00:39:27.060 when you add people
00:39:28.960 to the group.
00:39:30.600 So, you start with one
00:39:31.460 little girl
00:39:32.100 whose story is
00:39:33.000 very emotionally salient
00:39:34.320 and people care about her
00:39:35.920 to a maximal degree.
00:39:37.820 And then you add her brother
00:39:39.260 to the story
00:39:40.340 and people care a little less.
00:39:41.960 And then you add
00:39:42.680 eight more people
00:39:43.480 to the story
00:39:44.060 keeping the same girl
00:39:45.200 and people's care
00:39:46.440 just drops off a cliff.
00:39:48.200 That's truly amazing.
00:39:49.360 It's not one attractive girl
00:39:50.860 versus a hundred
00:39:52.580 faceless people.
00:39:54.340 It can be the one
00:39:55.340 attractive girl
00:39:56.160 along with the hundred
00:39:57.300 and you care less.
00:39:58.720 It's a magnificent
00:39:59.680 and horrible finding.
00:40:01.540 And, you know,
00:40:02.520 I've long championed
00:40:04.360 the forces of reason
00:40:05.620 and rationality
00:40:06.520 and moral judgment.
00:40:07.340 I think far more
00:40:08.480 than many social psychologists
00:40:09.640 that we're capable of that.
00:40:11.560 And so,
00:40:12.200 there's an interesting
00:40:13.100 duality here.
00:40:14.420 On the one hand,
00:40:15.460 our gut feelings
00:40:16.360 push us
00:40:17.020 towards the one girl
00:40:18.080 and not the hundred,
00:40:19.180 even if the hundred
00:40:19.700 includes the girl.
00:40:21.020 On the other hand,
00:40:22.080 we're smart enough
00:40:22.780 to recognize
00:40:23.540 when we put it
00:40:24.540 in this abstract way
00:40:25.560 that that's a moral mistake.
00:40:27.080 In some way,
00:40:28.940 you could view
00:40:29.280 the moral mistakes
00:40:30.340 caused by empathy
00:40:31.260 as analogous
00:40:32.560 to the mistakes
00:41:33.160 in rationality
00:40:34.140 that people like
00:41:34.800 Danny Kahneman
00:40:35.400 have chronicled.
00:40:36.740 Where you see people
00:40:38.100 just, you know,
00:40:39.000 you get these puzzles
00:40:40.300 and you ignore
00:40:41.020 the base rates
00:40:41.880 and you get things
00:40:42.420 all messed up.
00:40:44.000 And that's,
00:40:44.420 when you step back
00:40:45.520 and look at it
00:40:46.160 and do the math,
00:40:46.940 you realize,
00:40:47.560 wow, that was a mistake.
00:40:49.020 My gut led me
00:40:49.780 in the wrong way.
00:40:51.720 Visual illusions
00:40:52.560 are another case.
00:40:53.280 It looks this way,
00:40:54.500 but it isn't.
00:40:55.140 You take out the ruler
00:40:55.920 and you measure it
00:40:56.660 and although the lines
00:40:57.620 look like they're
00:40:58.140 different lengths,
00:40:58.760 they're the same.
00:40:59.900 So we have this
00:41:00.660 additional capacity
00:41:01.680 to do this,
00:41:02.920 both for things
00:41:04.300 that connect
00:41:04.740 to the external
00:41:05.260 world, like vision,
00:41:06.060 but also for morality
00:41:07.060 where we have standards
00:41:08.040 of reason and consistency.
00:41:09.680 And we could use this
00:41:10.340 to say,
00:41:11.120 wow, our empathy
00:41:11.880 is pushing us
00:41:12.580 in the wrong direction.
00:41:13.760 Yeah.
00:41:13.920 So now,
00:41:14.400 do you see us
00:41:15.460 correcting for this
00:41:16.860 in a way
00:41:17.660 that is adequate
00:41:19.160 to the magnitude
00:41:20.580 of the moral error
00:41:21.640 or is our way
00:41:23.180 of correcting for it
00:41:24.120 more haphazard
00:41:25.380 than that?
00:41:25.980 Our way of correcting
00:41:26.920 this is always haphazard,
00:41:28.660 but the analogy I make
00:41:30.160 is with racism.
00:41:31.860 So we know
00:41:32.880 we have racist biases.
00:41:34.240 Many of us
00:41:34.700 have explicit racist biases,
00:41:36.580 but there's a lot
00:41:37.100 of evidence
00:41:37.500 for implicit racial biases,
00:41:39.280 biases that we don't know
00:41:40.640 we have even,
00:41:41.880 but that influence us
00:41:43.120 in all sorts of ways.
00:41:44.680 So what do you,
00:41:45.220 so suppose,
00:41:45.920 if you think racism
00:41:46.760 is okay,
00:41:47.480 then there's not a problem.
00:41:48.560 But suppose,
00:41:49.360 you know,
00:41:49.580 as you and I do,
00:41:50.380 we think racism is wrong.
00:41:52.020 So what do you do
00:41:53.140 about it?
00:41:53.980 Well,
00:41:54.500 the answer is not
00:41:55.420 you try harder.
00:41:56.840 You know,
00:41:57.100 we know trying hard
00:41:58.440 doesn't work
00:41:59.000 for these sort of biases,
00:42:00.680 but there are
00:42:01.700 different sorts of fixes.
00:42:03.420 So in fact,
00:42:04.080 for biases,
00:42:06.160 often there's
00:42:06.760 technological fixes.
00:42:08.700 One story,
00:42:09.460 this may be apocryphal,
00:42:10.500 but it's a good story,
00:42:11.620 is that symphony orchestras
00:42:13.120 were heavily biased
00:42:14.120 in favor of men
00:42:15.040 because they claimed
00:42:17.040 that, you know,
00:42:17.500 the people making judgments
00:42:18.380 who were both men
00:42:19.140 and women said,
00:42:20.040 men just sound better.
00:42:20.940 They have stronger,
00:42:21.640 more powerful styles.
00:42:23.620 So what they did
00:42:24.560 was they started
00:42:25.180 auditioning people
00:42:26.000 behind a screen.
00:42:27.820 And this is,
00:42:28.540 and then the sex ratio
00:42:30.540 became more normal.
00:42:31.900 So this is an example
00:42:32.920 of you've got a bias,
00:42:35.060 you don't like it,
00:42:36.400 and so you try
00:42:37.060 to fix the world
00:42:38.040 so it doesn't apply.
00:42:39.340 And I can imagine
00:42:39.980 similar things
00:42:40.720 happening with empathy,
00:42:42.760 where you change
00:42:43.520 laws and policies
00:42:44.660 so that empathy
00:42:45.320 plays less of a role.
00:42:46.680 Bloom and Sam
00:42:50.480 agree quite a bit
00:42:51.560 on a lens of morality
00:42:52.700 and how we ought
00:42:53.700 to work to discover
00:42:54.720 and then mitigate
00:42:55.940 our worst built-in
00:42:57.800 psychological impulses
00:42:59.200 when making moral decisions.
00:43:01.540 But to underline
00:43:02.660 the distinction
00:43:03.220 between what Bloom
00:43:04.260 was calling
00:43:04.860 cognitive empathy
00:43:06.220 and other forms
00:43:07.500 of concern for others,
00:43:09.560 let's note the subtitle
00:43:10.740 of Bloom's book,
00:43:11.780 which is
00:43:12.200 The Case for Rational Compassion.
00:43:16.280 Bloom's argument
00:43:17.100 is a fascinating counter
00:43:18.540 to the common advice
00:43:19.700 to trust your gut
00:43:21.160 when it comes
00:43:22.240 to difficult moral decisions.
00:43:25.120 Gut versus reason
00:43:26.260 or heart versus head
00:43:28.200 are colloquial phrases
00:43:29.860 that people use
00:43:30.840 to express the tense boundary
00:43:32.380 between our evolved instincts
00:43:33.900 and our rational moral reasoning.
00:43:36.380 We're going to continue
00:43:37.680 to trace our way
00:43:38.700 along that boundary
00:43:39.680 in this compilation,
00:43:41.260 and this time,
00:43:42.200 we'll jump back
00:43:42.940 towards the philosophical
00:43:43.940 side of things.
00:43:45.920 To return to our initial
00:43:47.580 taxonomies of moral philosophy,
00:43:49.900 we should zoom in
00:43:51.060 on another major fork
00:43:52.300 in the tree.
00:43:53.840 This is the split
00:43:54.780 between consequentialism
00:43:56.200 and deontology.
00:43:58.280 If you'd like to continue
00:43:59.480 listening to this conversation,
00:44:01.100 you'll need to subscribe
00:44:02.100 at samharris.org.
00:44:03.900 Once you do,
00:44:04.520 you'll get access
00:44:05.040 to all full-length episodes
00:44:06.320 of the Making Sense podcast,
00:44:08.000 along with other
00:44:08.540 subscriber-only content,
00:44:10.320 including bonus episodes,
00:44:11.580 and AMAs,
00:44:12.960 and the conversations
00:44:13.720 I've been having
00:44:14.280 on the Waking Up app.
00:44:15.840 The Making Sense podcast
00:44:16.740 is ad-free
00:44:17.620 and relies entirely
00:44:19.020 on listener support.
00:44:20.420 And you can subscribe now
00:44:21.660 at samharris.org.
00:44:24.700 Thank you.