Making Sense - Sam Harris - December 08, 2025


#448 — The Philosophy of Good and Evil


Episode Stats


Length: 24 minutes

Words per minute: 178.1

Word count: 4,377

Sentence count: 245

Harmful content: misogyny, 1 sentence flagged


Summary

Summaries generated with gmurro/bart-large-finetuned-filtered-spotify-podcast-summ.

In this episode, philosopher David Edmonds joins me to talk about his new book, Death in a Shallow Pond: A Philosopher, a Drowning Child, and Strangers in Need. We talk about Peter Singer's thought experiment, the trolley problem, and the role of thought experiments in moral philosophy.

Transcript

Transcript generated with Whisper (turbo).
Misogyny classifications generated with MilaNLProc/bert-base-uncased-ear-misogyny.
00:00:00.000 Welcome to the Making Sense Podcast. This is Sam Harris. Just a note to say that if you're
00:00:11.740 hearing this, you're not currently on our subscriber feed, and will only be hearing
00:00:15.720 the first part of this conversation. In order to access full episodes of the Making Sense
00:00:20.060 Podcast, you'll need to subscribe at samharris.org. We don't run ads on the podcast, and therefore
00:00:26.240 it's made possible entirely through the support of our subscribers. So if you enjoy what we're
00:00:30.200 doing here, please consider becoming one.
00:00:36.640 Hi, I'm here with David Edmonds. David, thanks for joining me again.
00:00:40.060 Thanks for having me back.
00:00:41.480 So David, you have a new book, which I really enjoyed. It's titled Death in a Shallow Pond:
00:00:46.820 A Philosopher, a Drowning Child, and Strangers in Need. And you've written a kind of a short
00:00:53.400 bio of the philosopher Peter Singer, who's also been on the podcast several times, and
00:00:58.480 of the effective altruism movement that he has spawned, along with Will MacAskill and
00:01:04.260 Toby Ord, who've also been on the podcast several times. But it's also a great history of moral
00:01:10.580 philosophy in the analytic tradition. So I just want to track through the book, really, because
00:01:16.640 I think the virtues of effective altruism, as well as the concerns surrounding it, are still
00:01:25.180 worth talking about. And I think just the core concerns of moral philosophy and how we think
00:01:33.060 about doing good in the world are really of eternal interest, because it's not at all clear that we
00:01:38.660 think about these things rationally or effectively or normatively in any other way. But before we
00:01:44.460 jump in, remind people what you do, because you and I have spoken before and you have your own
00:01:49.040 podcast, but where can people find your work generally and what are you tending to focus on
00:01:54.040 these days? Gosh, well, I had a double life as a BBC journalist and a philosopher. I've given up the
00:02:02.460 BBC bit, so it's all philosophy from now on. I've got a podcast called Philosophy Bites, which I make
00:02:09.460 with a colleague, a friend called Nigel Warburton. And yeah, I now write philosophy books. And I'm
00:02:15.160 linked to a centre in Oxford called the Uehiro Institute, which is a centre dedicated to the
00:02:20.420 study of practical ethics, applied ethics. So yeah, those are the various strings to my bow.
00:02:26.300 So why did you write this book? And why did you take the angle you took here?
00:02:31.340 Oh gosh, I mean, there are some prosaic explanations for why I wrote the book. I'd just written this biography
00:02:36.520 of a guy called Derek Parfit, who, as it happens, Peter Singer says is the only genius
00:02:41.180 he ever met. And so I was thinking, I had such fun writing that book, and he was such an extraordinary
00:02:47.360 character. I thought maybe I'll have a go at writing another biography. And Peter Singer is probably the
00:02:52.920 most famous philosopher alive today. So I wrote to Peter and said, how about I write your biography?
00:03:00.420 And he said, no, thank you. So then I thought I'd write a book about the history of consequentialism,
00:03:05.860 which interests me. And that would be a book that covered Bentham and Mill and Sidgwick and all the
00:03:12.460 way up to Parfit and Singer. And then I was sort of daunted by the prospect of that. That was an
00:03:17.520 enormous task. And then I thought what I'll do is I'll cover those subjects just through one thought
00:03:23.520 experiment. I'd written a book about 15 years ago called Would You Kill the Fat Man?, which was a very
00:03:29.340 similar kind of book. Again, it was a biography of probably the most famous thought experiment in
00:03:35.840 moral philosophy, which is the trolley problem. And Peter Singer's thought experiment, which we're
00:03:40.060 going to talk about, I hope, is probably the second most famous thought experiment in moral
00:03:44.580 philosophy. But I would say much more influential than the trolley problem. So anyway, that's what got
00:03:50.060 me into the subject. Yeah. Well, I think we should start with the thought experiment, which,
00:03:56.040 as I've spoken to Peter and other philosophers on this topic before, will be familiar to many people.
00:04:03.140 But I think we can't assume everyone has heard of it. So we should describe the thought experiment.
00:04:09.060 But before we do, perhaps we can discuss thought experiments themselves for a minute or two,
00:04:15.100 because even the act of entertaining them is somewhat controversial. What's the argument
00:04:21.940 against thought experiments? Give me the for and against of what we're about to do here.
00:04:25.860 Okay. Well, thought experiments cover an enormous range of subjects. So there are thought experiments
00:04:32.440 in every area of philosophy. There are thought experiments in the philosophy of mind. There are
00:04:37.000 thought experiments in the philosophy of language. There are thought experiments in epistemology.
00:04:41.840 And there are thought experiments in moral philosophy. And the objections to thought experiments tend to
00:04:47.320 be directed particularly at thought experiments in the moral realm, I would say. So for example,
00:04:54.820 in the area of consciousness, there's a very famous thought experiment called the Chinese room.
00:05:00.740 There's another famous thought experiment when people argue about physicalism and whether everything
00:05:05.400 is physical. And that's a thought experiment called What Mary Knew. And on the whole,
00:05:09.740 I mean, they are contentious and they're very heavily debated, but they don't arouse the kind of
00:05:15.980 suspicion, I think, that many moral thought experiments arouse. And the reason that moral
00:05:21.720 thought experiments arouse suspicion, well, there are many reasons, but one is people just say that our
00:05:27.240 moral intuitions are not built for weird and often wacky scenarios. They're built for normal life,
00:05:35.140 real life. And the problem with thought experiments is that they are often very strange, very artificial,
00:05:41.300 and so we shouldn't trust our intuitions. And I would say that was probably the main objection to
00:05:48.200 them. I mean, the response to that is there's a very good reason why they are artificial. The whole
00:05:53.860 point about a thought experiment is you're trying to separate all the extraneous circumstances and
00:06:00.900 factors that might be getting in the way of our thinking. And you're trying to kind of focus in
00:06:07.200 particular on one area of a problem. So you might have a thought experiment where there are two
00:06:12.980 scenarios which are different, except for the fact that one has a particular factor that the other
00:06:18.760 doesn't have. And the point is to try and work out whether that factor is making a difference or not.
00:06:23.400 And often you can only do that if you create a very artificial world, because the real world is
00:06:30.720 not like that. The real world is just full of music and noise and complications. And so the thought
00:06:37.980 experiment is designed to simplify and clarify and try and get at the nub of a problem.
00:06:43.400 Yeah. I mean, it's a kind of conceptual and even emotional surgery that's being performed. I mean,
00:06:49.140 you change specific variables and you look at the difference in response. And again, as you said,
00:06:58.360 it can often seem highly artificial or unlikely because you're looking for the pure
00:07:04.940 case. You're looking for the corner condition that really does elucidate the moral terrain.
00:07:09.960 I think we should describe both thought experiments here, because I think there's an analogy
00:07:13.860 between a common response to the trolley problem and what's happening in the shallow pond as well.
00:07:20.680 So before we dive into the shallow pond, I guess pun intended, describe the trolley problem case and
00:07:27.060 how it's used. Well, the main trolley problem case goes like this. You are to imagine that there is a
00:07:33.260 runaway train. It's careering down the track. There are five people tied to the track. And in the simple
00:07:41.520 case, you are on the side of the track and there's a switch and you can flick the switch and you can
00:07:47.780 divert the train down a spur. And unfortunately, on that spur, one person is tied to the track. So
00:07:55.940 the question is, should you turn the switch and divert the train away from the five to kill the one?
00:08:02.900 And that was invented by a woman called Philippa Foot in 1967. She was writing about abortion at the
00:08:09.520 time and it was in an article about abortion. And then 20 years later, an American philosopher called
00:08:16.340 Judith Jarvis Thomson comes up with another example. So this one goes like this. You ought to
00:08:21.360 imagine that the train is, again, it's out of control. It's heading down the track. There are
00:08:26.360 five people who once again are tied to the track. This time, there's a different way of saving them.
00:08:31.900 You are standing on a footbridge. You're standing next to, in the original article, it was a fat man.
00:08:40.120 Now for modern sensibilities, it's a man with a heavy rucksack. So you're standing next to a man
00:08:45.440 with a heavy rucksack. You can push the man with a heavy rucksack over the footbridge. And because
00:08:51.280 that rucksack is so heavy, or in the original case, because the man is so fat, he will stop the train
00:08:57.220 and so save the five people. And he would be killed in the process. And the puzzle that Judith
00:09:05.220 Jarvis Thomson asks us to grapple with is that she seems to think that in the first case, you should
00:09:12.340 turn the train to save the five and kill the one. But in the second case, you shouldn't push the fat
00:09:19.180 man or you shouldn't push the man with a heavy rucksack to save the five at the cost of the one.
00:09:24.040 And that was her intuition. And it's been tested all around the world. It's been tested on men.
00:09:29.900 It's been tested on women. It's been tested on the highly educated, on the less educated. It's
00:09:34.060 been tested in different countries. And on the whole, almost everybody thinks that in the first
00:09:40.280 case, it is right to turn the train. And in the second case, it's wrong to push the fat man or the
00:09:45.720 man with a heavy rucksack. And so the puzzle in this thought experiment is to explain why. Because in
00:09:50.820 both cases, you are saving five lives at the cost of one.
00:09:54.800 Yeah. And to be clear, the dissociation here is really extreme. It's something like 95%
00:10:02.240 for and against in both cases. But the groups flip, right? So in the case where you just have
00:10:09.820 to flip a switch, which is this kind of anodyne gesture of touching something mechanical
00:10:14.440 that diverts the train onto the other track, killing the one and saving the five, 95% of people
00:10:19.740 think you should do that. And when you're pushing the man from the footbridge, fat or otherwise,
00:10:25.880 something like 95% think you shouldn't do that because that would be a murder. And I've often
00:10:32.980 thought that there's a kind of a lack of homology between these two cases because, at least in my
00:10:37.720 imagination, people are burning some fuel trying to work out whether pushing the fat man really will
00:10:44.720 stop the train, right? There's kind of an intuitive physics that seems implausible there. But leaving
00:10:48.580 that aside, I think the big difference, which accounts for the difference in behavioral or
00:10:54.680 experimental result, is that when people imagine pushing a person to his death, there's this up
00:11:02.620 close and personal, very affect-driving image of actually touching the person and being the true
00:11:12.880 proximate cause of his death. Whereas in the case of flipping the switch, there's this mechanical
00:11:18.800 intermediary and you're not having to get close to the person who's going to die,
00:11:23.240 much less touch him. And that seems to be an enormous difference. And this is often put forward
00:11:29.120 as a kind of an embarrassment to consequentialism because, you know, the consequences on the surface
00:11:34.400 seem the same. We're just talking about body count. There's a net four lives that are saved.
00:11:38.680 So they should be, on a consequentialist analysis, the same case. But I've always felt that this,
00:11:43.760 and I'm sure we'll cycle back to this topic a few times because I think it's important to get this
00:11:48.340 right. I've always felt that this is just a specious version or at least an incomplete and
00:11:53.660 unimaginative version of consequentialism or what consequentialism could be and should be,
00:11:59.400 which is to have a fuller accounting of all the consequences. So if in fact, it is just
00:12:03.660 fundamentally different experientially for a person to push someone to his death and to
00:12:08.820 flip a switch. And if it's different to live in a society where people behave that way versus the
00:12:14.560 other, well, then that's part of the set of consequences that we have to add to the balance.
00:12:20.680 And I think it is obviously different, and that's what's being teased out in the
00:12:25.180 experiment. So I now recognize that we should probably define consequentialism in order to continue
00:12:31.680 this conversation. So anyway, I just lob that back to you, and perhaps respond, but also give us
00:12:38.320 a précis on consequentialism. Well, consequentialism is the theory that what matters purely are the
00:12:45.440 consequences. So in these two trolley cases, as you say, the consequences of flipping the switch and
00:12:53.240 pushing the man with the heavy rucksack are the same. If you accept the hypothetical example,
00:12:58.060 which is that one person dies and five people are saved. So if you're a pure consequentialist,
00:13:04.520 it looks like there's no difference between these two cases. So there are dozens of these
00:13:10.860 trolley cases in philosophy. There are dozens of scenarios which involve runaway trains. There
00:13:16.280 are tractors. There are all sorts of things going on in these trolley cases. And it's been given a jokey
00:13:21.180 title, which is trolleyology. The study of these trolley cases is trolleyology. And they've studied
00:13:26.180 precisely the thing that you bring up. So the question is, is really the difference just a sort
00:13:32.100 of emotional difference about pushing the fat man as opposed to turning the switch? So they've tested
00:13:38.400 that and they've come up with a very ingenious way of testing it. So what they do is they give people
00:13:44.000 the following scenario. Imagine that the man with the heavy rucksack is on the footbridge, but this time
00:13:50.580 you're standing next to a switch. And if you turn the switch, the man with the heavy rucksack will
00:13:56.500 fall through a trap door and will plummet to the ground. And, once again, that will stop the runaway train
00:14:03.360 from killing the five people. Now, if you are totally right about this, what you should get...
00:14:10.440 I see where this is going. I'd forgotten all these iterations here. And I think that it definitely
00:14:16.040 dissects out the up close and personal, touchy-feely part of it. But what it doesn't change is
00:14:22.260 the fact that the man himself is being manipulated, right? So you're not manipulating the train,
00:14:28.820 you're manipulating the man. And the man is becoming the instrument. His murder is the instrument
00:14:35.300 rather than the effect of flipping the switch. I think that does seem somehow a crucial difference.
00:14:41.020 Right. But that's not a consequentialist difference, right? So what I would just
00:14:46.940 say is, if in fact, I mean, just imagine being these two: in one universe, you flip the switch
00:14:52.320 as 95% of people think you should. And you feel, while it was not pleasant to do,
00:14:58.860 your conscience is totally clear. In another universe, you flip the switch to the trap door
00:15:04.500 and watch this man fall to his death and stop the train. And you can scarcely live with
00:15:10.160 yourself because of, you know, the psychological toxicity of having had that experience. That's
00:15:16.580 part of it. I mean, we can talk more about the reasons why there is a difference there.
00:15:21.020 And I'm happy to hear all your thoughts on that matter. But if there just is in fact a
00:15:25.180 difference, you know, albeit maybe only in 95% of people, that's part of the consequences. And you
00:15:31.800 can imagine the ripples of those consequences spreading to any society that would make policy of a sort that
00:15:38.960 would, you know, enshrine one behavior as normative versus the other. Right. So I mean,
00:15:43.100 this is what, I mean, there are all kinds of strange examples that are hurled at consequentialism
00:15:48.540 that seem to be defeaters of it, which always seem to me to be specious. I mean, one you
00:15:54.380 actually deal with in the book, perhaps the most common one, is the doctor who
00:15:59.240 recognizes he's got five patients who need organ donations, and he's got a perfectly healthy person
00:16:05.020 in his waiting room, just waiting for a checkup. And he decides to euthanize this person and
00:16:10.540 distribute his organs to the waiting five, saving a net four lives. That seems, you know, on this
00:16:16.600 narrow focus on body count, to be acceptable on a consequentialist analysis. But of course it's not,
00:16:21.760 because you have to look at the consequences of what it would be like to live in a society where
00:16:26.540 trust has so totally eroded because we know at any time, even by the doctor who purports to
00:16:33.380 have our wellbeing at heart, we could be casually murdered for the benefit of others. I mean,
00:16:38.900 no one would want to live in that society. It'd be a society of just continuous terror and for good
00:16:43.700 reason. So anyway, that's just my pitch that I've yet to hear, I mean, perhaps you can produce one in
00:16:49.060 this conversation, but I've yet to hear a real argument against consequentialism that takes all
00:16:54.920 consequences, all things considered into account. Right. So in your hospital case where somebody's
00:17:01.780 bopped on the head and their two kidneys and their two lungs and their heart are used to save
00:17:07.480 five patients. So you're obviously right that if that, as it were, got out, then that would be
00:17:14.800 terrifying for everybody. You would never go and visit Auntie Doris in the hospital because you'd
00:17:20.340 think, well, there's a risk that when I go and visit Auntie Doris, the same thing is going to happen
00:17:25.120 to me. Of course, what the philosopher does is they then create a hypothetical example that's
00:17:30.340 just a one-off case. Yeah, it's a one-off and nobody finds out about it. And the person has
00:17:34.700 got no friends and blah, blah, blah. But again, the response to that is, well, we can't really
00:17:39.860 imagine that, you know, our intuitions aren't really coping with that really kind of cocooned example.
00:17:46.580 We're imagining that this news is going to leak out. In the trolley case, I think it's much more
00:17:54.160 complicated. It is true that people would find it more difficult to live with themselves by pushing
00:18:01.260 the fat man or by dropping the fat man through the trap door. But the question is why? And I think the
00:18:08.140 explanation is that people have one very powerful non-consequentialist intuition. And it goes
00:18:16.220 something like this, although they don't articulate it and they're very puzzled by this
00:18:21.540 thought experiment. If you put the following to them, they think, yes, this explains my intuition.
00:18:27.580 So imagine that you push the large man from the footbridge and the large man is wearing a rubber
00:18:34.720 suit. And instead of dying, he bounces off the track and he runs away. So what's your reaction to
00:18:41.720 that case? Your reaction to that case is that's not good because the whole point of pushing him over
00:18:47.980 was so that he would get in the way of the train and save five lives. Now imagine that in the
00:18:54.860 first case, the train is going along and it's going to kill the five people and you flick the switch and
00:19:00.540 it goes down the spur. Now imagine the person on the spur is able to extricate themselves from their
00:19:07.080 ropes and able to run away. How would you feel about that? Well, you'd feel absolutely delighted.
00:19:14.040 And why do you feel delighted? Because you haven't killed the five and you haven't had to kill the one.
00:19:20.220 So the difference between the two cases, and this comes back to the doctrine of double effect that
00:19:24.580 goes all the way back to Thomas Aquinas, is, as you hinted at earlier, that in the fat man case,
00:19:30.240 you are using the fat man as a means to an end. And that's not the case with the spur case. Another
00:19:37.380 way of putting that is you intend to kill the fat man when you push him over the footbridge. You want to
00:19:43.340 kill him. Well, you need him to get in the way. You don't intend to kill the person on the spur.
00:19:48.740 Hmm. Well, yeah, it's interesting. I'm not sure I totally buy that. That all turns
00:19:56.320 on there being an important difference between acting in a way where it seems there's a hundred
00:20:02.680 percent chance of killing a person, but it still being true to say that you don't intend
00:20:08.820 to kill the person. I think the key distinction is the distinction between intending
00:20:14.500 and merely foreseeing. So it's the distinction in the Geneva Convention between attacking a
00:20:21.560 munitions factory... It's a collateral damage issue, right? Exactly. Exactly. So
00:20:27.080 attacking the munitions factory, knowing that a hundred civilians will die, but this munitions
00:20:31.980 factory is so important to the enemy's war effort that the attack on the munitions factory is justified,
00:20:38.360 even though you know that a hundred people, a hundred civilians will die. It's the difference
00:20:42.480 between that and intentionally targeting those 100 civilians. So, yeah, I think I misspoke
00:20:48.780 a moment ago. I do clearly see that distinction. Let me see what's
00:20:55.240 bothering me about this. Well, perhaps it'll come out just in further discussion here around the
00:21:00.480 other thought experiments. Well, let's talk about the shallow pond and kind of fill in more of this
00:21:05.100 picture. And I think we'll cycle back on whether consequentialism has any real retort,
00:21:11.800 because you said a moment ago that this was a non-consequentialist intuition.
00:21:16.980 My deep bias here, and I'll be happy to be disabused of it, is that
00:21:24.360 when you drill down on any strongly held intuition that pushes our morality around and we
00:21:30.580 can't shake it, it is either, at bottom, some intuition about consequences, about, you know,
00:21:35.660 what it would mean to live in a world where this kind of rule was repeated. So it's kind of a rule
00:21:39.680 consequentialism rather than an act consequentialism per se, or we just have to bite the
00:21:45.020 bullet and admit that, okay, this is an illusion. It's some kind of moral illusion, right? So,
00:21:50.980 I mean, there are so many things that we could care about, as we're about to see, and
00:21:55.440 magically don't care about. And it is inscrutable that, even when they're pointed
00:22:01.240 out, we don't feel differently. I mean, the one that always comes to mind for me is, you know,
00:22:05.640 if we just changed our driving laws slightly, I mean, just to slightly inconvenience ourselves,
00:22:10.780 such that we made the speed limit 10 miles an hour lower on every street in the nation. I mean,
00:22:16.320 so just speaking of America here, where we have 40,000 traffic deaths a year reliably, and I
00:22:23.040 don't know how many people are maimed, but you know, 40,000 people are killed outright based on
00:22:27.100 how badly we drive. If we just reduced the speed limit by, you know, let's say 10 miles an hour,
00:22:32.860 we would save thousands of lives. I think there's no question of that. I mean, I'm sure
00:22:37.740 maybe a few people would be inconvenienced in a way that might prove fatal, but that would
00:22:42.080 certainly be massively offset by the number of lives saved. The only real consequence
00:22:47.240 would be that it would be less fun to drive, right?
00:22:52.820 Or we could actually, I mean, even to make it more inscrutable still, we could put governors
00:22:58.020 on all of our cars so that, you know, whatever the car, from a Ferrari on down, it could never
00:23:04.540 exceed the speed limit, right? You could drive however you wanted, but you could just never
00:23:08.100 drive faster than the speed limit. That's technologically feasible. No one would want
00:23:12.120 that, no matter how many lives it would save, because it would be less fun to drive. Somehow
00:23:18.000 we want to carve out the possibility of driving faster than the speed limit, at least
00:23:22.540 sometimes. And yet when you talk about that body count, nobody moves from that point to the obvious
00:23:30.540 conclusion that we're all moral monsters for so callously imperiling the lives of everyone,
00:23:37.480 including our own, really. I mean, there's no identifiable victim in advance. That's part of the
00:23:41.460 problem, I think. But I mean, 40,000 people are guaranteed to die this
00:23:46.540 year in America based on the status quo. How is this acceptable, and how are we not
00:23:53.320 monstrously unethical for accepting it? And somehow the sense that there's even a moral
00:23:58.780 problem here evaporates before I can even get to the end of the sentence. Yeah. So 40,000 is a lot
00:24:03.740 of people. I think there were 58,000 killed in the whole of the Vietnam War, right? So that's
00:24:08.300 a big figure. And oddly, in much of London now, they've reduced the speed
00:24:14.380 limit to 20 miles an hour. If you'd like to continue listening to this conversation, you'll
00:24:19.360 need to subscribe at samharris.org. Once you do, you'll get access to all full length episodes of
00:24:25.220 The Making Sense Podcast. The Making Sense Podcast is ad-free and relies entirely on listener support.
00:24:31.200 And you can subscribe now at samharris.org.