Making Sense - Sam Harris - August 29, 2016


#44 — Being Good and Doing Good


Episode Stats

Length

55 minutes

Words per Minute

179

Word Count

9,978

Sentence Count

422

Misogynist Sentences

2

Hate Speech Sentences

6


Summary

Will McCaskill is an associate professor of philosophy at the University of Oxford and a co-founder of three non-profits based on effective altruist principles: Giving What We Can, 80,000 Hours, and the Center for Effective Altruism, which he also leads as CEO. He is the author of "Doing Good Better: Effective Altruism and a Radical New Way to Make a Difference." In this episode, he and Sam Harris discuss what it means to do good in the world: the failed Play Pump, a merry-go-round water pump celebrated by funders and celebrities that proved worse than the hand pumps it replaced; GiveWell's estimate that roughly $3,400 donated to the Against Malaria Foundation saves a life on average; Peter Singer's drowning-child argument and the competing framings of obligation and opportunity; Will's pledge to give away everything he earns above a threshold of about £20,000 a year in 2009 terms; moral salience and thought experiments such as saving a Picasso versus saving a child; special obligations to family and friends; and the ethics of eating meat. Making Sense is hosted by Sam Harris. The podcast runs no ads and is made possible entirely through the support of subscribers; full episodes and the private RSS feed are available at samharris.org.


Transcript

00:00:00.000 Welcome to the Making Sense Podcast.
00:00:08.820 This is Sam Harris.
00:00:10.880 Just a note to say that if you're hearing this, you are not currently on our subscriber
00:00:14.680 feed and will only be hearing the first part of this conversation.
00:00:18.440 In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at
00:00:22.720 samharris.org.
00:00:24.140 There you'll find our private RSS feed to add to your favorite podcatcher, along with
00:00:28.360 other subscriber-only content.
00:00:30.520 We don't run ads on the podcast, and therefore it's made possible entirely through the support
00:00:34.640 of our subscribers.
00:00:35.880 So if you enjoy what we're doing here, please consider becoming one.
00:00:46.580 Today I'm speaking to William McCaskill.
00:00:50.160 Will is an associate professor in philosophy at Lincoln College, Oxford.
00:00:54.060 He was educated at Cambridge, Princeton, and Oxford.
00:00:57.440 He may in fact be the youngest tenured professor of philosophy in the world, and he's one of
00:01:03.500 the primary voices in a movement in philanthropy known as Effective Altruism, a movement which
00:01:10.140 he started with a friend.
00:01:12.780 And he's the co-founder of three non-profits based on effective altruist principles, Giving
00:01:19.780 What We Can, 80,000 Hours, and the Center for Effective Altruism.
00:01:24.940 He's also the author of a book, which I just started reading, which is really good, and
00:01:30.240 the title is Doing Good Better: Effective Altruism and a Radical New Way to Make a Difference.
00:01:36.200 And there is no question that Will is making a difference.
00:01:39.500 If you don't have two hours to spend on our whole conversation, which I absolutely loved,
00:01:44.720 please listen to the last few minutes of this podcast so that you at least know the tangible
00:01:51.520 effect the conversation had on me.
00:01:54.120 And now I give you Will McCaskill.
00:02:01.920 Well, I'm here with Will McCaskill.
00:02:04.100 Will, thanks for coming on the podcast.
00:02:05.760 Thanks for having me.
00:02:06.380 I first heard about you when you did your appearance on Tim Ferriss's podcast, and that
00:02:12.720 was a great interview, by the way.
00:02:13.900 And I'm now in the habit, sorry, Tim, of poaching your podcast guests.
00:02:20.100 This is the third time I've done this.
00:02:21.440 I did it with Jocko Willink, the Navy SEAL, and Eric Weinstein, the mathematician, VC.
00:02:28.580 And those were both great conversations.
00:02:29.920 And now I have Will here.
00:02:31.900 And so I just, the one thing I do is I try not to recapitulate the interview that was
00:02:37.200 done with Tim.
00:02:38.020 So we will not cover much of the ground you did there.
00:02:40.960 So I recommend that interview because that was fascinating.
00:02:43.260 And you have a fascinating bio, much of which we will ignore because you described it with
00:02:48.900 Tim.
00:02:49.660 But, you know, briefly, just tell me what it is you're doing in the world and how you come
00:02:54.540 to be thinking about the things you think about.
00:02:57.400 Great.
00:02:57.640 Yeah.
00:02:57.800 So I'm a, I wear a couple of hats.
00:03:00.060 I'm associate professor of philosophy at the University of Oxford with a focus on ethics
00:03:05.080 and political philosophy, a little bit of overlap with economics.
00:03:08.680 And I'm also the CEO of the Center for Effective Altruism, which is a nonprofit designed to develop
00:03:15.880 and promote the idea of effective altruism, which is the use of your time and money to do
00:03:20.460 as much good as you possibly can.
00:03:22.260 And using evidence and careful reasoning and high quality thought in order to ensure that
00:03:27.380 when you try to do good, you actually do as much good as possible, whether that's through
00:03:31.520 your charity or through your career or through what you buy, and it helps you choose the
00:03:37.260 causes where you can have the biggest impact.
00:03:38.980 And put that way, it seems like a purely commonsensical approach to doing good in the world.
00:03:48.180 I think as we get into this conversation, for people who are not familiar with your work or the effective altruism movement,
00:03:55.920 they'll be surprised to learn just how edgy certain of your positions are, which is why this will be a fascinating conversation.
00:04:03.120 So I should say, up front though, you have a book entitled Doing Good Better, which I have only started,
00:04:09.380 I regret to say, but it's a very well-written and very interesting book, which I recommend people read.
00:04:15.780 It covers many things we, again, probably won't cover in this conversation.
00:04:20.660 But tell me about the play pump.
00:04:22.760 You start your book with this story, and it really encapsulates much of what is wrong and much of what is potentially right with philanthropy.
00:04:31.620 Yeah, so the play pump was developed in the late 1990s, and it was an idea that really caught the attention of people around the world,
00:04:43.720 but especially in philanthropic development communities.
00:04:47.280 And so the play pump was built in South Africa, and the idea behind it was that it was a way of providing clean water
00:04:55.220 to poor villages that didn't currently have clean water across sub-Saharan Africa and South Africa.
00:05:01.620 It was a combination invention.
00:05:05.380 It was a children's merry-go-round, so children would push this thing, which looked just like a merry-go-round,
00:05:10.900 but the force from the children pushing it would pump clean water up to a reservoir that would provide the clean water for the community.
00:05:20.160 So it looked like a win-win.
00:05:23.020 The children of the village would get their first playground amenity,
00:05:25.600 and the people of the village would get clean water.
00:05:30.760 And, you know, it really took off for that reason.
00:05:33.820 So the media loved to pun on the idea.
00:05:36.380 They said, pumping water is child's play.
00:05:38.060 It's the magic roundabout.
00:05:39.620 It got a huge amount of funding.
00:05:41.500 The first lady at the time, Laura Bush, as part of the Clinton Global Initiative,
00:05:45.540 gave it $17 million in funding to roll this out across sub-Saharan Africa.
00:05:52.520 It won the World Bank Development Marketplace Award for being such an innovative invention.
00:05:58.440 Jay-Z promoted it, Beyonce.
00:06:00.120 Really, it was like, it was the thing within development for a while.
00:06:03.100 And when I first heard about it, I thought, wow, yeah, what an amazing idea.
00:06:07.220 This is great that you can do two things at once, making children happy, but then also providing water.
00:06:13.760 It just seems such a good example of, and everyone, of course, was like very well-intentioned behind it.
00:06:19.340 Yeah, well, I should say, reading that section of your book, which again is the first few pages,
00:06:24.420 the effect on the reader is really perfect because you find yourself on the wrong side of this particular phenomenon
00:06:30.600 because you just think, oh my God, that is the greatest idea ever, right?
00:06:34.340 This is a merry-go-round for kids that has the effect of doing all of this annoying labor
00:06:41.480 that was otherwise done by women pumping these hand pumps.
00:06:45.620 So now continue to the depressing conclusion.
00:06:47.940 Yeah, as you might expect, there's a twist in the story, which is just that simply in reality,
00:06:54.660 the play pump was a terrible idea from the start.
00:06:57.200 So unlike a normal merry-go-round, which spins freely once you push it,
00:07:00.680 in order to pump the clean water, you need constant torque.
00:07:04.380 So actually pushing this thing would be very tiring for the kids.
00:07:08.180 I mean, there were other problems too.
00:07:10.040 Sometimes they'd fall off and break limbs.
00:07:11.700 Sometimes the children would vomit from the spinning.
00:07:13.780 But the main problem was that they would just simply get very tired.
00:07:17.520 They wouldn't want to play on this thing all day.
00:07:19.580 But the community still needed this water.
00:07:21.720 And so it was normally left up to the elderly women of the village
00:07:24.600 to push this brightly colored play pump around and around for all hours of the day.
00:07:30.600 A task they found very undignified and demeaning.
00:07:33.660 And then secondly, it just wasn't even very good as a pump.
00:07:36.440 And often it was replacing very boring, but very functional Zimbabwe hand pumps,
00:07:41.700 which when you actually ask the communities, they preferred.
00:07:44.020 They would pump more water with less effort, at a third of the price.
00:07:47.740 There were a number of other problems too.
00:07:49.660 They would often break down.
00:07:51.880 There had initially been an idea that maintenance would be paid for with billboards on these
00:07:56.400 reservoirs, but none of the advertising companies actually wanted to pay for it.
00:08:01.320 And so these things were often left in disarray and no maintenance would happen to them either.
00:08:06.500 And so this all came to light in a couple of investigations.
00:08:08.960 And thankfully, and in what's actually a very admirable and rare case,
00:08:16.840 the people who are funding this, especially the Case Foundation,
00:08:20.860 acknowledged that this had been a big mistake and then said,
00:08:24.520 yeah, we just made a mistake.
00:08:26.140 We're no longer going to keep funding this.
00:08:28.580 What about the man who had invented or was pushing the idea of the pump?
00:08:32.680 Yeah.
00:08:33.220 So the people who were pushing it, Play Pumps International and the others behind it,
00:08:37.100 continued to go ahead with it.
00:08:38.400 They didn't accept the criticism.
00:08:40.840 This is perhaps a phenomenon you're very familiar with.
00:08:43.700 Yes.
00:08:44.400 And so actually the organization does still continue in a vastly diminished capacity today.
00:08:49.900 They're still producing Play Pumps sponsored by companies like Colgate-Palmolive and Ford Motors.
00:08:55.940 But what is unusual in the world of doing good is that actually these,
00:09:01.680 this actually was investigated, criticism came to light and people were willing to back out.
00:09:06.860 But the lesson from this is just that what seem like, you know, good intentions aren't good enough.
00:09:11.620 What seems like a really good idea, it just seems like, yeah, this is amazing.
00:09:16.020 It's a revolutionary new idea.
00:09:18.100 Actually can just not be very effective at all.
00:09:20.680 It can even do more harm than good.
00:09:22.280 What we need to do if we want to really make a big difference is do the boring work of actually
00:09:28.420 investigating how much does this thing cost, how many people's
00:09:31.680 lives are going to be affected and by how much, what's the evidence behind this.
00:09:36.420 And there are many other things that we could be spending our money on that
00:09:39.440 are much less sexy than the Play Pump, but do vast amounts of good.
00:09:43.360 And that's why it's absolutely crucial if we really do want to use our time and money to make
00:09:47.620 a difference, that we think about what are the different ways we could be spending
00:09:51.440 this time and money.
00:09:53.360 What's the evidence behind the different programs we could be doing?
00:09:56.100 And what's the one that's going to do the very most good?
00:09:58.080 It seems to me there are at least three elements to what you're doing that are very powerful.
00:10:03.620 And the first is the common sense component, which really is not so common as we know, which
00:10:11.900 is just to actually study the results of one's actions and in the spirit of science, see what
00:10:17.680 works and then stop doing what doesn't work.
00:10:19.820 But the other element is you are committed, I know you're personally committed, and to
00:10:26.300 some degree, I guess you can just tell me how much the EA community is also committed
00:10:30.540 explicitly to essentially giving until it hurts.
00:10:33.980 I mean, giving what many people would view as a heroic amount of one's wealth to the poorest
00:10:43.540 people in the world or to the gravest problems in the world.
00:10:47.080 And we'll talk about Peter Singer in a moment, because you've certainly been inspired by him
00:10:52.640 in that regard.
00:10:53.780 And the third component is to no longer be taken in by certain moral illusions, where
00:11:01.440 the thing that is sexiest or most disturbing isn't often the gravest need or doesn't represent
00:11:09.060 the gravest need.
00:11:09.800 And to cut through that in a very unsentimental way.
00:11:14.620 And this is where people's moral intuitions are going to be pushed around.
00:11:18.360 So let's start with the second piece, because I think the first is uncontroversial.
00:11:23.120 We want to know what is actually effective.
00:11:26.260 But how far down the path with Peter Singer do you go in terms of, because I've heard you
00:11:33.360 say, I've watched a few of your talks at this point, I've heard you say things that more
00:11:38.400 or less align you perfectly with Singer's level of commitment, which where he more or less
00:11:43.680 argues, I don't think he has ever recanted this, that you should give every free cent
00:11:51.720 to help the neediest people on earth.
00:11:54.860 It's morally indefensible to have anything like what we would consider a luxury when you're
00:12:01.400 looking at the zero-sum trade-off between spending a dollar on ice cream or whatever it
00:12:06.680 is, and saving yet another life.
00:12:10.140 So just tell me how much you've been inspired by Singer and where you may differ with his
00:12:14.440 take.
00:12:15.480 So I think there's just two framings which are both accurate.
00:12:20.300 So the first is the obligation frame, just how much are we required to give.
00:12:26.460 And Peter Singer argues that we have an obligation to give basically everything we can.
00:12:32.020 And argues for this by saying, well, imagine if you're just walking past a child down in
00:12:36.340 a shallow pond and rescued that child, or failed to rescue that child.
00:12:41.780 Now, that would be morally abominable.
00:12:43.220 What's the difference between that and spending a few thousand dollars on luxury items when that
00:12:50.280 money could have been spent to save a life in a poor country?
00:12:53.660 If you'd try to justify not saving the child in the shallow pond because you were going
00:12:58.520 to ruin your nice suit that cost you a thousand pounds, that would just be, you know, no moral
00:13:04.380 justification at all, nor would it be a justification if it was 10,000 pounds.
00:13:09.380 And so for that reason, he argues, yeah, we have this obligation.
00:13:12.880 There's another framing as well, which we call just excited altruism, which is, I use the
00:13:18.820 story of, imagine if you're walking past a burning building and you run in, you see there's
00:13:23.120 a child there, you kick the door down and, you know, you save that child.
00:13:27.580 On this framing, the thought is just, wouldn't that be amazing?
00:13:30.720 Wouldn't you feel like that was a really special moment in your life?
00:13:33.340 You'd feel like this hero.
00:13:34.580 And imagine if you did that several times in a year, you know, you save one child from
00:13:38.620 a burning building, another time you take a bullet for someone, a third time, you know,
00:13:43.220 you save someone from drowning.
00:13:44.840 You'd think your life was really pretty special and, you know, you'd feel pretty good about
00:13:50.340 yourself.
00:13:51.840 And the truth of the matter is actually, yeah, we can do that every single year.
00:13:55.680 We can do much more than that hero who runs into the burning building and saves that child's
00:14:00.520 life just by deciding to use our money in a different way.
00:14:04.040 And so there's these two framings, obligation and opportunity.
00:14:07.660 I actually just think both are true.
00:14:09.580 Many people in the effective altruism community don't actually agree with the obligation framing.
00:14:13.860 They think they're doing what they do because it's part of their values, but there's no
00:14:17.720 sense in which they're obligated to do it.
00:14:20.320 I actually, I agree with Singer's arguments.
00:14:24.440 I think that certainly if you can help other people to a very significant extent, such as
00:14:30.460 by saving a life, while not sacrificing anything of moral significance, then you're required
00:14:35.640 to do it.
00:14:36.140 I mean, in my own case, I just think the level at which I'm at least approximately just maxing
00:14:43.840 out on how much good I can do is just nowhere close to the level at which I think, wow, this
00:14:50.580 is like a big sacrifice for me.
00:14:52.300 And so perhaps a big sacrifice in financial terms.
00:14:55.100 So, you know, as an academic, I'll be on a good middle class income and I'm planning to
00:15:00.200 give away most of my income over the course of my life.
00:15:02.360 So in financial terms, it looks like a big sacrifice, but in terms of my personal well-being,
00:15:06.940 I don't think it's like that at all.
00:15:09.400 I don't think money is actually a big factor.
00:15:11.540 And if you look in my own personal happiness, if you look at the literature on well-being,
00:15:16.520 this is also the case beyond even quite a low level of about $35,000 per year.
00:15:22.360 The relationship between money and happiness is very small indeed.
00:15:26.020 And on some ways of measuring it, it's non-existent, in fact.
00:15:29.140 And then on the other hand, being part of this community of people who are really trying
00:15:33.580 to make the world better is just very rewarding, actually, just has these positive effects in
00:15:40.840 terms of my own well-being.
00:15:42.140 So the kind of answer is just that, yeah, in theory, I agree, just even if it was the
00:15:47.420 case that, you know, I would think that, yeah, this is a moral requirement and so on.
00:15:51.760 But then in practice, it's actually just not really much of a sacrifice for me, I don't
00:15:55.860 think.
00:15:56.160 Let's linger there, because I have heard you say in response to challenges of the sort
00:16:02.360 that Singer often receives, well, if you can live comfortably and do good, well, then
00:16:07.900 that's great.
00:16:08.340 That's a bonus.
00:16:09.040 There's nothing wrong with living comfortably.
00:16:12.140 And now you have just claimed that you're living comfortably.
00:16:15.460 But in fact, by most people's view, you, I think, so spell out how much do you, what is
00:16:22.320 actually your commitment to giving money away at this point?
00:16:25.020 Yeah, so to giving money, so in 2009, I made a commitment to give everything above
00:16:29.580 £20,000 per year, inflation and purchasing power parity adjusted, relative to Oxford in 2009.
00:16:35.580 Right.
00:16:35.820 So now that's about £24,000; at the current exchange rate, that's something like $33,000.
00:16:41.540 And just to then give everything above that.
00:16:44.020 And not to wait, you do that every year?
00:16:46.400 And not to wait, yeah, I do it every year.
00:16:47.780 And then with my time as well, I just try and spend as much time as I can.
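[Editor's note: As a rough illustration of the arithmetic behind a threshold pledge like the one described above, here is a minimal sketch in Python. The £20,000 (2009) baseline and the roughly £24,000 current figure come from the conversation; the flat inflation factor and the helper names (adjusted_threshold, pledged_donation) are illustrative assumptions, not the exact adjustment method he uses.]

    # A minimal sketch of a "give away everything above a threshold" pledge.
    # The 20,000 GBP (2009) baseline is taken from the conversation; the flat
    # inflation factor below is a simplified stand-in for the inflation and
    # purchasing-power adjustment mentioned above.

    def adjusted_threshold(base_2009_gbp: float, cumulative_inflation: float) -> float:
        """Uprate the 2009 baseline by cumulative inflation since 2009
        (e.g. 0.20 for 20% total inflation)."""
        return base_2009_gbp * (1 + cumulative_inflation)

    def pledged_donation(income_gbp: float, threshold_gbp: float) -> float:
        """Everything earned above the threshold is given away."""
        return max(0.0, income_gbp - threshold_gbp)

    if __name__ == "__main__":
        threshold = adjusted_threshold(20_000, 0.20)  # roughly the 24,000 GBP figure above
        for income in (24_000, 40_000, 100_000):
            print(f"income {income:,} GBP -> donate {pledged_donation(income, threshold):,.0f} GBP")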
00:16:50.900 And do you actually think that would or will scale with vastly increased economic opportunity
00:16:58.620 if you get dragged into a startup next week, where now you're making millions of dollars
00:17:05.100 at some point, you aspire to keep it where you've set it now?
00:17:09.260 Yeah, absolutely.
00:17:10.160 I mean, I think, you know, the amount of money I've been earning over the last year is much
00:17:14.900 greater than when I was a postdoc or PhD student.
00:17:18.180 And in fact, that's just been a plus.
00:17:19.680 I'm happier that I'm able to give away more.
00:17:22.200 I mean, the one worry, the biggest worry I have with my commitment is just value of my
00:17:27.380 time.
00:17:28.000 So there's certain ways you can spend money in order to save time eating out rather than
00:17:32.060 making food for yourself, being more willing to get a taxi places rather than the bus.
00:17:36.220 Yeah, that's interesting.
00:17:37.380 That means I think I'd be making a mistake if my player...
00:17:40.780 So I have a kind of balancing act.
00:17:42.800 One is just because I'm using my time to, you know, build up the Center for Effective Altruism
00:17:47.960 and so on, promote these ideas.
00:17:49.680 I want to ensure that I have as much time as possible to do that.
00:17:52.640 But then at the same time, I don't want to say, oh, well, people should be giving or it's
00:17:56.420 really good for people to be giving their money effectively.
00:17:58.880 But I don't do that because my time is so valuable. That would just seem kind of hypocritical.
00:18:03.860 So I also just want to demonstrate like, yeah, you can do this and just, it's actually just
00:18:08.000 a really good life.
00:18:09.220 It's not nearly as much of a sacrifice as it might seem.
00:18:11.920 Let's linger there for a moment because I think that if you are not following Singer all
00:18:17.880 the way so that the implications of his argument is really that there should be some kind of
00:18:23.460 equilibrium state where you are more or less indistinguishable from the people you're
00:18:29.280 helping at a certain point because you've helped them so much.
00:18:32.060 So you, if, if you are living a comfortable life really at all, I mean, but a comfortable
00:18:37.460 life by Western standards, you are still from Singer's view culpably standing next to the
00:18:45.100 shallow pond watching yet another child drown.
00:18:47.860 And so I'm wondering how you draw that line.
00:18:51.340 And obviously, I mean, you, and needless to say, there's no judgment in this because what
00:18:55.720 the scheme you have just sketched out is already qualifies you for sainthood in most people's
00:19:02.260 worldview at this point.
00:19:03.780 But how, so how do you think about that?
00:19:05.640 Yeah.
00:19:05.780 So why don't I give even more?
00:19:07.280 So I think, I think even on, even if you endorse kind of pure utilitarianism, you just should
00:19:13.280 maximize the amount of good you can do.
00:19:15.200 I think that just for practical reasons, that doesn't mean that you should, um, move all of
00:19:21.360 your, um, you know, keep donating until you're living on $2 per day.
00:19:26.780 Um, not if you're in a rich country, because the opportunities you have to spend, let's just
00:19:34.760 solely focus on money.
00:19:35.720 So there's lots of other ways of doing good.
00:19:37.200 But if you were to say, okay, I'm going to live a normal, um, you know, keep going in
00:19:41.500 my own job and just donate as much as I can until I'm earning like very little.
00:19:45.200 Firstly, I think, you know, that's going to damage your ability to earn more later.
00:19:49.360 It means that, um, there's risks of yourself burning out, um, which is, I think, very significant
00:19:54.440 if you're going to, you know, wear a hair shirt for three years and then completely give up
00:19:58.560 on morality altogether.
00:19:59.820 That's much worse than just donating a more moderate amount, but for the rest of your life.
00:20:04.880 And then also in terms of, yeah, your productivity and your work as well, it's just actually really
00:20:10.660 important to ensure that you've got the right balance between how much you're donating so
00:20:14.360 that you can do it positively.
00:20:15.660 And then finally, in terms of the influence you have on other people, I think if you're
00:20:19.580 able to, you know, if you're able to act as a role model, something that people actually
00:20:23.580 really aspire towards, think, yeah, this is this amazing way to live a life and look at
00:20:27.800 these people are able to donate a very significant amount and still have a really great life.
00:20:31.720 That's much more powerful because it actually might mean that many other people go and do
00:20:35.920 the same thing.
00:20:36.980 And if just one other person does the same thing as you do, you've doubled your life's
00:20:40.000 impact.
00:20:40.600 It's like a very big part of the equation.
00:20:43.320 Um, whereas if you're walking around utterly miserable, just so you can donate that extra,
00:20:48.580 that last cent to fight global poverty, you know, you might seem a little bit like an
00:20:54.140 anti-hero.
00:20:55.240 Um, and I think that's a very important consideration.
00:20:57.600 So I actually think that when it comes to the practical implication of singer's ideas,
00:21:01.720 it doesn't lead you to donate everything above, you know, $2 per day.
00:21:06.800 Um, instead you, you kind of max the optimal amount is actually quite, quite a higher level,
00:21:11.840 which is, um, maximizing the amount of good you'll do over the course of your lifetime,
00:21:16.340 bearing in mind the ability to say, get promotions or change career, earn more, the value of your
00:21:22.100 time, ensuring you're productive and ensuring you're a good role model to other people as well.
00:21:25.760 Um, and so I actually think that, yeah, the case at which I try and maximize my own impact
00:21:31.480 is, you know, way far away from the line at which, uh, I'd think this is really, really
00:21:38.640 a big, you know, a big hardship for me.
00:21:41.040 And I think that's true, at least for many people.
00:21:42.840 Yeah, this is, this is really a fascinating area and it's, it's going to get more fascinating
00:21:48.020 because it's, it just becomes strange the closer you look at it.
00:21:52.060 Now, I, I'm totally convinced by your opportunity framing.
00:21:56.280 And while I had heard it put that way before, your emphasis on that is very attractive and
00:22:04.660 very compelling.
00:22:05.580 And, and so, I mean, just to remind our listeners so that by dint of having the resources you
00:22:11.700 have, and if you're listening to this podcast in any developed country, you almost by definition
00:22:18.340 have vast resources relative to the poorest people on earth.
00:22:23.320 And this puts you in a position to quite literally save the child from the burning building.
00:22:30.100 Any moment you decide to write a check for, I mean, what, what is it that actually in your
00:22:35.100 view is sufficient to save a life using the best charity?
00:22:38.300 Yeah, so the best guess from GiveWell is that donating $3,400 to the Against Malaria Foundation
00:22:44.100 Right.
00:22:44.660 will, statistically speaking, on average, save a life.
00:22:46.920 And they're keen to emphasize that, you know, that's just an estimate.
00:22:50.740 A lot of kind of assumptions go into that and so on, but they're very careful, very skeptical.
00:22:55.360 It's the best, you know, estimate that I know of.
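[Editor's note: To make the cost-per-life arithmetic concrete, here is a minimal sketch in Python that converts a donation into an expected number of statistical lives saved, treating GiveWell's rough $3,400 estimate quoted above as a parameter rather than a precise fact. The $50 million entry anticipates the Picasso thought experiment discussed later in the conversation; all figures are illustrative.]

    # Back-of-the-envelope conversion from donation size to expected statistical
    # lives saved, using the ~$3,400-per-life estimate for the Against Malaria
    # Foundation quoted above (an uncertain estimate, not a precise figure).

    COST_PER_LIFE_USD = 3_400  # GiveWell's rough estimate, per the conversation

    def expected_lives_saved(donation_usd: float, cost_per_life_usd: float = COST_PER_LIFE_USD) -> float:
        return donation_usd / cost_per_life_usd

    if __name__ == "__main__":
        for donation in (3_400, 34_000, 50_000_000):  # the last: a hypothetical $50M painting
            print(f"${donation:,} -> ~{expected_lives_saved(donation):,.0f} statistical lives")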
00:22:58.120 So, so that, that opportunity is always there.
00:23:01.060 And I guess one of the challenges from a philanthropic point of view and just a, the point of view
00:23:08.860 of one's own maximizing one's own psychological well-being is to make that opportunity as salient
00:23:16.320 as possible.
00:23:16.900 Because obviously writing a check doesn't feel like rushing across the street and grabbing
00:23:21.420 the child out of the burning building and then being rewarded by all the thanks you'll
00:23:26.700 get from, from your neighbors.
00:23:28.720 But if you could fully internalize the ethical significance of the act, something like that
00:23:37.300 reward is available to us.
00:23:39.040 At least that's what you're arguing.
00:23:40.840 I'm convinced that is a good way of seeing it.
00:23:44.360 And, and so therefore taking those opportunities more and more and making them more emotionally
00:23:49.920 real seems like a very important project for all of us who have so much.
00:23:55.520 The other side, the Singerian obligation side, I think is fraught
00:24:02.600 with other issues.
00:24:04.340 And so I just want to explore those a little bit.
00:24:07.320 The problem we're dealing with here is that we are beset by many different forms of moral
00:24:13.580 illusion where we effortlessly care about things that are in the scheme of things, not all
00:24:19.660 that important and can't be goaded to care about things that are objectively and subjectively
00:24:26.700 when you actually connect with the lives of the people suffering these conditions, the
00:24:30.380 most important problems on earth.
00:24:31.700 And the classic example is, you know, a girl falls down a well, it's one girl, it's one
00:24:37.320 life.
00:24:38.600 And what you see is wall to wall coverage on every channel for 24 hours, tracing the, the
00:24:44.660 rescue successful or otherwise of this little girl.
00:24:47.380 And yet a genocide could be raging in Sudan and not only do we, can't we be moved to
00:24:54.800 care, we so reliably can't be moved to care that the news organizations just can't bear
00:25:00.580 to cover it.
00:25:01.260 I mean, they, they give us a little bit of it just because it's their obligation, but
00:25:04.280 it's, it's five minutes and they know that that's, that's a losing proposition for them
00:25:09.200 anyway.
00:25:10.020 Yeah.
00:25:10.280 Yeah.
00:25:10.500 So that's, that's the situation we're in.
00:25:12.280 And that seems like a bug, not a feature of our ethical hardware.
00:25:16.360 For me, it exposes an interesting paradox here because, because the, the most disturbing
00:25:25.240 things are not reliably the most harmful in the world and the most harmful things are
00:25:33.260 not reliably the most disturbing.
00:25:35.260 And you can talk about this from the positive or negative sides.
00:25:37.840 We can talk about the goods we don't do, and we can talk about the harms people cause.
00:25:41.940 And so there's an example from my first book, The End of Faith: to find out that your grandfather
00:25:46.960 flew a few bombing missions over Dresden in the war is one thing; to find out that he killed
00:25:54.540 a woman and her children with a shovel is another.
00:25:57.580 Now he undoubtedly would have killed more women and children flying that bombing mission.
00:26:02.780 But given the difference between killing from 30,000 feet by dropping bombs and killing up
00:26:12.540 close and personal, and this is where the paradox comes in, we recognize that it would take a
00:26:17.340 different kind of person with a very different set of internal motives, intentions, and global
00:26:24.060 properties of his mind and emotional life to do the latter versus the former.
00:26:29.900 So a completely ordinary person like ourselves could be, by dint of circumstance, detached
00:26:36.520 enough from the consequences of his actions so as to drop the bombs from the plane.
00:26:42.080 It takes a proper psychopath or somebody who was pushed into psychopathy by his experience
00:26:50.180 to kill people in that way with a shovel.
00:26:53.720 And to flip this back to philanthropy, it is a very different person who throws out the appeal
00:27:01.840 from UNICEF, casually ignoring the fact that he has foregone yet another opportunity to save a life.
00:27:09.280 That person is very different from the person who would pass a child drowning in a shallow pond
00:27:14.300 because he doesn't want to get his shoes wet.
00:27:15.940 And so the utilitarian equation between life and life, which Singer's obligation story rests on,
00:27:27.760 doesn't acknowledge the fact that it really would require a different person to ignore suffering that
00:27:36.420 was that salient or to perpetrate, in the case of creating harms, suffering that's that salient.
00:27:42.720 And yet we're being asked to view them as equivalent for the purpose of parsing the ethics.
00:27:50.540 So I think there's an important distinction between assessing acts and assessing a person,
00:27:56.580 assessing a person's character.
00:27:58.480 And I think normally when we go about doing moral reasoning,
00:28:02.080 most of the time we're talking about people's characters.
00:28:05.020 So is this a good person in general?
00:28:07.360 Can I trust them to do good things in the future?
00:28:09.320 Is it the sort of person I want to associate with?
00:28:11.980 Whereas moral philosophers are often talking about acts.
00:28:16.160 And so I think Singer as well would agree that it's in some sense a much worse person
00:28:21.860 who, like, intentionally kills someone than who just walks past a drowning child.
00:28:27.860 And you'd entirely agree with that.
00:28:29.560 Because in part, the idea is that killing people intentionally is a far greater
00:28:35.360 moral wrong in our society than merely failing to save someone.
00:28:41.020 Right.
00:28:41.200 Although, let me just, the difference between an act of commission and omission,
00:28:45.280 I think it brings in a different variable here.
00:28:47.180 I mean, I agree that that is a difference that we find morally salient.
00:28:51.200 But what I'm talking about here is in both cases, you are declining to help someone on the side of not doing good.
00:28:58.580 And in both cases, in war, you are knowingly killing people.
00:29:02.320 But they're just very different circumstances.
00:29:04.280 So there's different levels of salience.
00:29:06.000 And so I agree that it's kind of, I would also just be very troubled by someone who wasn't moved by
00:29:12.200 the more salient causes of human suffering in some way.
00:29:16.500 But when we think about moral progress, I think it's absolutely crucial to pay particular attention
00:29:21.840 to those causes of suffering that are very kind of mechanized or have the salience stripped away from them.
00:29:27.360 I mean, if you look at the orders that were given to SS guards in terms of descriptions of
00:29:33.060 how to treat Jews in the Holocaust, every step has been taken to remove their humanity,
00:29:39.500 to turn it into completely banal evil.
00:29:42.840 And it's through that almost mechanization of suffering that I think humanity has committed
00:29:48.480 some of the worst wrongs in its history.
00:29:50.900 And I think that that's also going to be true today.
00:29:53.180 So when you look at the practice of factory farming, or if you look at the way we incarcerate people.
00:29:58.220 So, you know, if we saw a country, as happens, that was regularly flogging or inflicting
00:30:06.280 corporal punishment on its criminals, I think that's absolutely barbaric as a practice.
00:30:11.860 But yet putting someone in a prison cell for several years is a worse harm to them.
00:30:16.920 I think it's like considerably worse the punishment we're inflicting on them.
00:30:20.080 But it doesn't give us that same like emotional resonance.
00:30:23.040 And I think insofar as there's this track record throughout human history of people doing
00:30:28.720 absolutely abominable acts, not realizing that it was morally problematic at all, even taking
00:30:34.540 it as common sense, precisely because the ways in which the harm caused had been stripped
00:30:41.540 away, had been made sterile, as with the case of the SS guards.
00:30:46.260 That should give us pause: when there's some case of, you know, extreme harm that has this
00:30:51.120 property of being made sterile, it should make us worry, are we in that situation again?
00:30:55.440 Are we just thinking, oh yeah, this is common sense, normal part of practice, but only because
00:31:00.040 of the way that things have been framed.
00:31:02.680 And the really powerful thought, I think, from, you know, Singer's arguments or thinking
00:31:06.920 about extreme poverty is, well, maybe we're in that situation now with respect to us in
00:31:11.980 the West compared to the global poor.
00:31:13.840 So if we look back to think of Louis XVI or something, or imagine some monarch who's incredibly
00:31:20.600 wealthy with his people starving all around him, I think, God, that's absolutely horrific.
00:31:24.540 It doesn't seem so different from the way that we are at the moment.
00:31:28.660 Everyone in a rich country in the US or UK is in the, basically most of the population
00:31:34.020 are in the richest 10% of the world's population, even once you've taken into account the fact
00:31:39.560 that money goes further overseas.
00:31:41.500 I imagine most of the listeners of this podcast are in the top few percent.
00:31:45.880 If you're earning above $55,000 per year, you're in the richest 1% of the world's population.
00:31:51.480 And this is a very unusual state to be in.
00:31:54.480 It's only in the last 200 years that we've seen such a radical divergence between our rich
00:31:59.700 countries and the poorest countries in the world.
00:32:01.560 So it's not something that our moral intuitions, I think, have really caught up to.
00:32:04.880 But in the spirit of thinking, well, what are the ways in which we could be acting in a
00:32:10.300 way that seems radically wrong from the perspective of future generations, but that we take for
00:32:15.000 common sense?
00:32:16.080 I, you know, I think Singer's definitely put his finger on a possible candidate, which is
00:32:19.980 the fact that we, you know, have what by historical standards and global
00:32:25.220 standards is immense wealth, immense luxury.
00:32:28.320 And it's currently like common sense or normal just to use that on yourself rather than to
00:32:31.940 think of it as, in some sense, resources that really belong to all of humanity.
00:32:38.640 Okay.
00:32:38.860 Well, we're going to keep digging in this particular hole because this is where we are going to
00:32:43.260 reach moral gold.
00:32:44.160 So you did a debate with Giles Frazier for Intelligence Squared, which I was amused to
00:32:50.360 see that I had actually debated him as well.
00:32:53.760 He liked you much more than he liked me, I think, probably because you weren't telling
00:32:56.860 him his religion was bullshit.
00:32:58.580 I can imagine.
00:32:59.300 Giles is a priest.
00:33:01.000 But I thought he raised a very interesting point in your debate and your answer was also
00:33:07.080 interesting.
00:33:07.580 So I'll just take you back to that moment.
00:33:08.940 Again, we're in a burning house with a child who can be saved.
00:33:13.140 But in this house, there is a Picasso on the wall in another room, and you really only have
00:33:18.100 a moment to get out of there with one of these precious objects intact.
00:33:23.920 And Giles suggested that on your view, the Picasso is worth so much that you really should
00:33:31.420 save it because you can sell it and turn it into thousands and thousands of bed nets that
00:33:37.960 will save presumably thousands of lives.
00:33:41.560 I'm not sure that's actually the conversion from bed net to life.
00:33:44.240 But in any case, we're talking about a multi-million dollar painting, let's say a 50 million dollar
00:33:49.500 painting.
00:33:50.520 And your child or the child is just one life.
00:33:54.880 And so he put that to you, I think, expecting, perhaps not, but expecting that that is a knockdown
00:34:02.580 argument against your position.
00:34:04.180 And you just bit the bullet there.
00:34:06.760 So perhaps you want to respond again to that thought experiment.
00:34:10.800 Yeah.
00:34:10.920 So the first question, the first thing to caveat is that Giles is asking this as a philosophical
00:34:16.260 thought experiment.
00:34:17.180 So you strip away extraneous factors like what's the media going to think of this?
00:34:22.140 And what, you know, perhaps also kind of, are you going to be able to live with yourself
00:34:27.000 afterwards as a matter of human psychology and so on?
00:34:29.420 So stripping away those things to just have the pure thought experiment of, well, you can
00:34:33.680 save this child or you can save this Picasso and sell it by bed nets.
00:34:37.560 And the question I asked him was, well, supposing it's just two burning buildings and one there's
00:34:42.600 a single child and in the other burning building there's, let's say it's a hundred children that
00:34:46.940 you could save.
00:34:48.120 And the only way you can save those hundred children is by taking this Picasso off the wall
00:34:52.700 and using it to prop open the door of this other building such that a hundred children
00:34:57.000 can go through, um, what ought you to do?
00:35:00.100 And I think in that case, it's very clear.
00:35:02.080 You ought to save the hundred children, even if you're using this painting as a means to
00:35:06.260 doing so.
00:35:06.960 The fact that that's, you're using that as a means doesn't seem relevant.
00:35:10.980 And the reason I actually quite like this thought experiment is it really shows what a morally
00:35:15.980 unintuitive world we're in.
00:35:17.780 That actually the situation we're in right now is, is that there's a burning building.
00:35:21.380 It's just that it's behind you rather than in front of you and that there is that those
00:35:24.720 hundred children whose lives you can potentially save that are behind you and the, and not
00:35:29.420 salient to you.
00:35:31.100 And so I think like what Giles was wanting to say was that, oh, isn't this very uncompassionate?
00:35:37.620 Um, but actually I think this is just far more sophisticated compassion.
00:35:40.940 The understanding that, um, there are people on the other side of the world who are, whose
00:35:48.100 lives are just as important, who are just as in need as someone who's right in front
00:35:52.360 of you, you know, that shows a much more sophisticated, much more genuine form of compassion than just
00:35:57.840 simply being moved by whatever happens, uh, happens to be more salient to you at the time.
00:36:03.060 And so, yeah, it's a weird conclusion, but the weirdness
00:36:09.600 comes from how weird the world is, how morally unintuitive the world we happen to find ourselves
00:36:13.840 in is, which is that, like, yeah, morally speaking, save the painting and
00:36:19.540 therefore save hundreds of lives.
00:36:22.040 Um, having said that, of course, I wouldn't blame someone.
00:36:23.920 And I think like, again, as we talked about in terms of natural human psychology, it's like
00:36:28.500 perfectly natural to save the person who's right in front of you.
00:36:32.220 Um, so I wouldn't blame anyone for doing that.
00:36:34.080 But if we're talking about moral philosophy and what the morally correct choice is, then, um,
00:36:38.940 uh, I think you just have to say, have to save the hundred.
00:36:42.700 Well, so you wouldn't blame them.
00:36:45.520 And the decision to save the Picasso strikes us as so strange.
00:36:50.440 Those are two, I think two sides of the same coin.
00:36:53.600 I mean, you wouldn't blame them because you're acknowledging how counterintuitive it is to
00:36:58.820 save the Picasso and not, and not the child.
00:37:00.880 And so you're, you're really putting the onus on the world, on the situation, on all the contingent
00:37:05.920 facts of our biology and our, our circumstance that causes us to not be starkly consequentialist
00:37:15.400 in this situation.
00:37:17.300 So my concern there is that that's not, I mean, one of the reasons why I don't tend to
00:37:23.260 call myself a consequentialist, even though I am one, it's because for me, consequentialism
00:37:28.440 or historically consequentialism has been so often associated with just looking at the
00:37:34.860 numbers of bodies on both sides of the balance.
00:37:37.140 And that's how you understand the consequences of an action and judge its, its moral merits,
00:37:42.160 but there's more to it than that.
00:37:44.220 And, and, and Gile, and you just acknowledged that there was more to it than that, but you
00:37:48.060 weren't inclined to put those consequences also into the picture.
00:37:52.960 So like the question of whether you could live with yourself, right?
00:37:56.240 And that's, I think that's the spectrum of effects certainly includes whether you could
00:38:01.500 live with yourself.
00:38:02.440 And it includes, to come back to the, the moral paradox here, it also includes what kind
00:38:08.360 of person you would have to be in order to grab the Picasso and not the child.
00:38:15.580 Given all of the contingent facts we just acknowledged, given how weird the world is, or given how not
00:38:21.900 optimized the world is to produce a person who could just algorithmically always save a hundred
00:38:30.000 lives over one every time, no matter how the decision is couched, we can even make the situation
00:38:36.720 more perverse.
00:38:37.560 So, you know, I have, I have two daughters, I don't have a Picasso on the wall, but certainly
00:38:42.160 I have something that could be sold that could buy enough bed nets to save more than two lives
00:38:48.420 in Africa, right?
00:38:49.520 So the house burns down tonight, I have a choice to grab my daughters or grab this thing, whatever
00:38:55.380 it is.
00:38:56.340 And I reason thusly, having been convinced by you in this podcast, that saving at least
00:39:03.040 three lives in Africa is better than saving my two daughters.
00:39:06.120 And it's only a contingent property of my own biology and its attendant selfishness and
00:39:13.980 my, you know, the drive toward kin selection and all the rest that has caused me to even
00:39:19.380 be biased toward my daughter's lives over the lives of faceless children in Africa.
00:39:25.160 So I'm convinced and I grab the valuable thing and sell it and donate it to GiveWell and it's
00:39:32.380 used for bed nets.
00:39:33.140 And so I think even you, given everything you believe about the ethical imperative of
00:39:40.240 the singer style argument, would be horrified and rightly so that I was capable of doing
00:39:50.540 that.
00:39:50.980 That horror, or at least that distrust of my psychology, I think summarizes an intuition we
00:39:56.540 have about all of the other consequences that are in play here that we haven't thought much
00:40:02.080 about.
00:40:02.740 And again, this is a very complicated area because I see that, I don't know if you know
00:40:06.560 about my analogy with the moral landscape where we can have various peaks of well-being.
00:40:11.900 I could imagine an alternate peak on the moral landscape where we are such beings as to really
00:40:18.280 care about all lives generically as much as we care about our own children's lives.
00:40:23.940 So I feel I love my children, but I actually also feel the same love for people in Africa
00:40:30.620 I've never met and it's just as available to me and therefore my disposition not to privilege
00:40:36.560 my children is not a sign of any lack of connection to them.
00:40:41.240 It's just, you know, I'm, you know, the Dalai Lama squared.
00:40:44.700 I've got that, you know, the Bodhisattva of compassion and I've got that connection everywhere.
00:40:48.620 So I grabbed the Picasso and I can feel good about saving more lives in Africa.
00:40:53.200 But my concern is that let's, let's just acknowledge that is another peak on the moral landscape.
00:40:58.720 But between where we are now, where we love our children more than, than those we haven't
00:41:04.520 met and would be, would view it as an act of pathological altruism to let them burn and
00:41:11.500 just grab the Picasso based on our knowledge that we could do more good with it.
00:41:15.060 If we wanted to become the, the Dalai Lama style altruist, there may be this uncanny or
00:41:22.460 unhappy valley between these two peaks that we would have to traverse where there is some,
00:41:27.140 there would be something sociopathic about an ability to run this calculus and be motivated
00:41:34.020 by it.
00:41:34.900 Yeah.
00:41:35.200 So there's two things to say here.
00:41:36.940 So one thing, yeah, I want to say is that I also don't describe myself as a consequentialist.
00:41:43.600 I think, um, the correct thing to do is to, in a moral decision is have a kind of variety
00:41:49.780 of moral lenses.
00:41:50.680 My PhD was on this topic, um, have a variety of moral lenses and take the action.
00:41:54.600 That's the best kind of compromise between the competing moral views that seem most plausible,
00:42:00.240 some of which are consequentialist, um, others might be non-consequentialist.
00:42:04.500 Um, and I do think that the case, you know, there's, and I just emphasize consequences because
00:42:10.360 everyone should agree that consequences matter and that they're very neglected, um, in terms
00:42:15.000 of the impact that we can have in the world that we're in.
00:42:17.860 And so I think the case where you've got special obligations, it's a family member or daughter
00:42:22.860 or child or a friend is, is very different from the question when it's just, um, strangers.
00:42:28.840 I think it's at least a reasonable moral view to think, I just do have this special obligation
00:42:33.820 to my friends and certainly very deep part of common sense to think I do have a special
00:42:39.100 obligation to my child.
00:42:40.320 And if I can save my child, even if they were right in front of me and the choice was to save
00:42:44.240 one child, my child,
00:42:44.840 or to save 10 strangers,
00:42:47.580 I should save my child.
00:42:49.280 And so I think that's quite a different case.
00:42:52.720 Um, although maybe we should just plant a flag there.
00:42:56.680 Cause I think that's interesting.
00:42:57.480 You know, I don't know what special obligations actually consist in apart from some argument
00:43:03.800 that one, that we just were hardwired that way and it's, it's hard to get over this hard
00:43:08.960 wiring, but two, we are better off for honoring those obligations.
00:43:13.920 And so it does resolve to consequences in some way.
00:43:16.880 If our world would be much better if we ignored those hardwired obligations
00:43:23.060 or that sense of obligation, then I think there's an argument for ignoring them.
00:43:27.040 So we can table that.
00:43:29.120 Yeah.
00:43:29.280 But then the second question, which is very important, um, taking us back to the Picasso
00:43:33.620 is this, and this is a way in which, uh, moral philosophy can often be kind of very confusing
00:43:40.240 to people who don't do it is that moral philosophers do engage in these thought experiments where
00:43:45.480 they say, oh, well, put aside all of these other considerations that are normally irrelevant.
00:43:49.540 And then they expect you to still have a reliable intuition, even in this very, very strange,
00:43:54.560 fantastical case.
00:43:55.380 And so I do think in the case of when you are bringing back factors, like, am I actually
00:44:00.440 psychologically capable of doing this?
00:44:02.140 How am I going to like, think about myself when I like hear like this child screams in
00:44:06.380 my, like late at night, every day, of course, that's incredibly relevant.
00:44:10.780 And then similarly, I think again, the kind of philosopher tends to focus on what acts are
00:44:15.220 the best.
00:44:16.120 Whereas in terms of the life decisions we make, biggest decisions tend to be more like, what's,
00:44:21.740 what person should I be, what are my big projects?
00:44:24.700 Um, and I think cultivating in yourself the sort of, to become the sort of person who's
00:44:29.840 empathetic enough that you won't in this situation simply do the calculations and just go and save
00:44:37.300 the Picasso.
00:44:38.400 You know, I think that might well be right, that you're just going to be a person that'll
00:44:41.860 do more good over the long run, um, if you don't do that.
00:44:45.980 That's why it's kind of, uh, a subtle case where again, you want to distinguish between
00:44:50.120 what's the best kind of character to have.
00:44:52.160 And the best character to have, um, is plausibly one that means you do the wrong
00:44:56.000 thing a number of times.
00:44:57.520 That's very interesting.
00:44:58.420 So let's just, I don't want to interrupt you if you had much more you wanted to say there,
00:45:02.200 because every one of these points is so interesting, but this has been my, my gripe with certain
00:45:08.060 caricatures of utilitarianism or consequentialism, which is so, so the idea that if you can sacrifice
00:45:14.280 one to save five, well then you should, you know, you go into your doctor's office for
00:45:17.900 a checkup and he realizes he's got five other patients who need your organs.
00:45:21.300 So they just grab you and, and steal your organs and, and you now are dead.
00:45:25.800 But if you just look at the, the larger consequence of living in a world where at any moment, any
00:45:33.420 of us could be sacrificed by society to save five others, none of us want to live in that
00:45:39.200 world.
00:45:39.580 And I think for a good reason.
00:45:41.100 And so you have to grapple with a much larger spectrum of effects when you're going to talk
00:45:48.580 about consequences.
00:45:49.200 So you just acknowledged one here, which is that to be the kind of person you want
00:45:54.800 to be, who is really going to do good in the world and be tuned up appropriately,
00:45:59.000 to have the right social connections to other human beings,
00:46:02.160 you may in fact want to be the kind of person who privileges love of one's friends and family
00:46:09.400 over a more generic loving kindness to all human beings.
00:46:14.660 Because if you can't feel those bonds with friends and family, that has moved enough
00:46:20.760 of the moving pieces in your psychology that you're not the kind of person who is going
00:46:24.580 to care about the suffering of others as much as you might.
00:46:27.840 Yeah.
00:46:28.340 So another example of this is a number of people I know who, often as a consequence of this
00:46:35.900 mindset with respect to their dietary choices, acknowledge that
00:46:42.040 animal suffering is really bad,
00:46:43.780 and that non-human animals have real moral status that we should
00:46:48.160 respect, and won't eat some very bad forms of meat, like factory farmed chicken, on those
00:46:53.180 grounds.
00:46:53.700 But for something like beef or lamb, the animals, I think, just have reasonable lives, not amazing
00:47:00.040 lives, but lives that are definitely worth living, better for them than if they'd
00:47:03.120 never existed.
00:47:04.420 You actually think that's true of factory farmed beef, or are you now talking about
00:47:08.800 grass fed, pasture raised beef?
00:47:12.060 I mean, it's harder to factory farm a cow in the way that you can a chicken.
00:47:14.580 You can't treat them nearly as badly. But as for which animals
00:47:20.060 have lives that are worth living or not, it's a really hard question.
00:47:23.000 But there are at least some people who will kind of justify eating that sort of meat because,
00:47:27.600 you know, what you're doing by buying that meat is increasing the demand for a certain
00:47:31.400 type of meat that then means that more animals of that type come into existence.
00:47:35.480 And those animals have good lives.
00:47:37.200 So the question is, well, who's it bad for then?
00:47:39.300 It's not bad for the cow you're eating because that cow's already dead.
00:47:42.360 It's not bad for the cow that you bring into existence because it wouldn't have existed
00:47:46.320 otherwise.
00:47:47.700 But then I just can't imagine, in my own case, psychologically believing both that
00:47:54.100 animal suffering is incredibly important and that you should care a lot about animals,
00:48:00.560 and then also just eating their flesh.
00:48:02.560 I don't think it's a psychological possibility for me.
00:48:05.200 So you're a vegetarian?
00:48:06.220 So I'm vegetarian.
00:48:06.840 I mean, another case, first given to me by Derek Parfit, is this: your
00:48:13.620 grandmother, who you love very much, you've got a very close relationship with her.
00:48:17.960 When she dies, you just throw her in the garbage.
00:48:20.580 The question is, well, who's that bad for?
00:48:23.500 It's not bad for her, because she's no longer with us.
00:48:26.200 But again, as a matter of human psychology, doing that
00:48:32.100 is very inconsistent with what seems like genuine regard for that person.
00:48:37.740 But I think that's worth acknowledging.
00:48:40.100 So it's easy to cash that intuition out again in the form of consequences, in my view, which
00:48:47.860 is, yes, it's not bad for your grandmother because she's presumably not there to experience
00:48:53.280 anything, but the sense that there's something sacred about a human life or the sense that
00:48:59.080 one's love of a person needs to be honored by an appropriate framing of their death, that is good
00:49:08.420 for everyone else who's yet living, right?
00:49:10.460 And if we just chucked our loved ones in the trash, that would have implications for how
00:49:16.160 we feel about them.
00:49:17.080 And how we feel about them is the thing that causes us to recoil from treating them that
00:49:22.060 way once they've died.
00:49:23.840 And this is going to become more and more pressing.
00:49:26.860 These kinds of seemingly impractical bouts of philosophizing will become more and more
00:49:32.020 pressing when we can really alter our emotional lives and moral intuitions to a degree that is
00:49:39.000 very granular.
00:49:40.960 So just imagine you had a pill that allowed you to just no longer be sad, right?
00:49:53.500 So this is the perfectly targeted antidepressant that has no other downside, no side effects.
00:49:53.500 You know, pharmacologically, we may in fact get that lucky, maybe not.
00:49:56.740 But imagine a pill that just, if you're grief-stricken, you take this pill and you are no longer grief-stricken
00:50:03.160 and you can take it in any dose you want.
00:50:04.980 Now, your child has died or your mother has died and you're in grief; then the question
00:50:11.920 is, how long do you want to be sad for?
00:50:13.960 What is normative?
00:50:15.200 Would it be normative, 30 seconds after your child has died,
00:50:21.660 when you may still be in the presence of the body, to just pop that pill in a sufficient
00:50:27.520 dose to be restored to perfect equanimity?
00:50:29.900 I think most of us would feel that that is some version of chucking your grandmother in
00:50:35.600 the trash.
00:50:36.320 It doesn't honor the connection.
00:50:37.920 I mean, what does love mean if you don't shed a tear when the person you love most has died?
00:50:46.160 But finding what is normative there is really a challenge.
00:50:50.820 I don't know that there's any principle we can invoke or that we're ever going to find
00:50:54.780 that is going to convince us that we have found the right point.
00:50:57.720 But whatever feels comfortable at the end of the day there, I think, is encapsulating
00:51:04.380 all of the consequences that we feel or imagine we would feel in those circumstances.
00:51:10.040 And a loss of connection to other people is certainly a consequence that we're worried
00:51:14.300 about.
00:51:14.960 Yeah, absolutely.
00:51:15.820 And also, if you never felt sadness at injury or death to your child,
00:51:21.860 I'm sure humans, as a matter of fact, would therefore take fewer steps to protect
00:51:27.380 their children, and so on; there'd be a whole host of, I think, very bad consequences from
00:51:31.560 that, quite plausibly.
00:51:33.580 And yeah, in terms of the general framing, I agree with your frustrations at critiques of
00:51:39.560 consequentialism that create these cases where all sorts of real-world effects are just
00:51:44.940 abstracted away.
00:51:46.040 I mean, this is true for the question of how much to give as well, where in these debates
00:51:51.700 it's normally imagined that there is just this superhuman person who could give
00:51:55.480 away all of their income, down to the lowest level, and then not have any other areas of their life
00:51:59.680 affected negatively.
00:52:01.600 But that's just a fiction.
00:52:02.960 I mean, I think if someone thinks, well, I should give 10% because if I give more, then
00:52:09.360 I'm likely to burn out.
00:52:10.340 And I'm like, yep, you should be really attentive to your own psychology.
00:52:13.780 And that's a really important consideration, whereas that's the sort of thing that, in arguments
00:52:19.960 about consequentialism, for some reason, the critics and sometimes the proponents tend to
00:52:24.940 miss out; they tend to have very simplified views of the consequences.
00:52:30.260 Again, the problem comes back to the Singer side of this conversation, which is: if you're
00:52:35.980 only giving 10%, then you are still standing next to the shallow pond.
00:52:42.060 It is one of those slippery slope conditions where, once you acknowledge
00:52:47.660 the juxtaposition of your casual indulgence of your wants, any one of which
00:52:57.060 is totally dispensable, you could sacrifice it and you'd be no worse
00:53:02.400 off, alongside the immediate need of someone who's intolerably deprived by dint of pure bad
00:53:11.440 luck.
00:53:11.840 I mean, your privilege is a matter of luck entirely.
00:53:16.860 And all of the variables that produced it are a matter of luck too: no matter how self-made you think you are, you
00:53:22.240 didn't pick your genes.
00:53:24.320 You didn't pick the society into which you were born.
00:53:26.980 You can't account for the fact that you were born in a place that is not now Syria, fractured
00:53:33.180 by the worst civil war in memory.
00:53:35.820 Elon Musk is as self-made as anyone.
00:53:38.660 He can take absolutely no responsibility for his intelligence, his drive, the fact that
00:53:42.500 he could make it to America, that America was stable, and that he did it at a time when there
00:53:46.780 were immense resources to help him do all the stuff that he's doing.
00:53:51.200 And so, again, you're still at the pond, and it feels like the conversation you would want
00:53:58.880 to have with the person who says, well, if I give any more than 10%, it's going to kind
00:54:02.760 of screw me up, and I'm not going to give much, and I'm not going to be happy, I'm not
00:54:06.700 going to be effective.
00:54:07.280 It sounds like there's still more Peter Singer-style therapy to do with that person, which is,
00:54:14.140 well, listen, come on, 12%, 14%, that's really moving you into a zone of discomfort, and there
00:54:20.580 is no stopping point short of, well, listen, I could make more money if you would let me
00:54:28.300 get on an airplane now and fly to the meeting I'm now going to miss because I don't have enough
00:54:33.240 money for a ticket, and then you begin to invoke some of the, I think you call it, earning
00:54:38.840 to give principles, which we can talk about. But unless you're going to bring
00:54:42.940 in other concerns there, where you can be more effective at helping more
00:54:47.300 drowning children in shallow ponds, you don't have an argument against Singer.
00:54:52.720 Yeah, I think, so there's a distinction in consequentialist ethics between what gets called
00:54:59.180 actualism and possibilism, and so supposing you have three options, you can stay home
00:55:06.900 and study, you can stay home and watch the feed, and you can watch the feed.
00:55:11.500 If you'd like to continue listening to this conversation, you'll need to subscribe at
00:55:15.400 samharris.org.
00:55:16.820 Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along with
00:55:21.460 other subscriber-only content, including bonus episodes, AMAs, and the conversations I've
00:55:27.040 been having on the Waking Up app. The Making Sense podcast is ad-free and relies entirely
00:55:32.120 on listener support, and you can subscribe now at samharris.org.