#446 — How to Do the Most Good
Episode Stats
Words per Minute
206.7
Summary
In this episode, Dr. Michael Plant joins me to talk about utilitarianism, consequentialism, and deontology, the competing theories of wellbeing, and why happiness and suffering should be central to how we think about doing good.
Transcript
00:00:00.000
Welcome to the Making Sense Podcast. This is Sam Harris. Just a note to say that if you're
00:00:11.740
hearing this, you're not currently on our subscriber feed, and will only be hearing
00:00:15.720
the first part of this conversation. In order to access full episodes of the Making Sense
00:00:20.060
Podcast, you'll need to subscribe at samharris.org. We don't run ads on the podcast, and therefore
00:00:26.240
it's made possible entirely through the support of our subscribers. So if you enjoy what we're
00:00:30.200
doing here, please consider becoming one. I'm here with Michael Plant. Michael, thanks for joining
00:00:39.320
me. Thanks for having me on. So we were introduced by Peter Singer, and I think you, was he your
00:00:45.080
dissertation advisor? He was. All right, so maybe you can give your background before we jump into
00:00:50.340
the topics of mutual interest. Well, so I'm a philosopher and global happiness researcher.
00:00:56.240
And I kind of got started on this interest aged about 16, when I first came across philosophy. My
00:01:01.720
first lesson on philosophy, I came across the idea of utilitarianism, that we should maximize
00:01:05.980
happiness. And I thought, oh, wow, that's, I don't know if that's the whole story of ethics,
00:01:09.680
but that's a massive story of ethics. You might say it was a waking up moment. And then over the
00:01:16.700
next 20 years, I've kind of pursued two topics. There's this philosophical question of, should we
00:01:20.860
maximize happiness? I mean, I thought that was quite plausible, but lots of people thought it was
00:01:24.580
nuts. So what's going on there? And then this empirical question of, well, how do we do that?
00:01:29.260
You know, what in fact, you know, how can we apply happiness research to finding out what really we
00:01:32.960
ought to do? And I've been kind of pursuing those tracks, and those have taken me to what I'm doing
00:01:37.160
now. So maybe we should define a few terms before we proceed. I mean, a couple will be very easy,
00:01:42.980
and then I think happiness will be very hard. But you just mentioned utilitarianism. How do you
00:01:49.440
define that? And do you differentiate it from consequentialism? And what is the rival metaethical
00:01:56.480
position or positions that, if they exist, I'm uncertain as to whether they actually exist,
00:02:02.320
but we can talk about that. Well, so utilitarianism is, or classical utilitarianism,
00:02:06.320
is the view that what one ought to do is to maximize the sum total of happiness. And then that differs
00:02:11.880
from consequentialism, where consequentialism is one ought to do the most good. So you don't necessarily
00:02:17.580
have to define good in terms of happiness. You can think of that as desires or other sorts of
00:02:22.340
things. And then these kind of consequentialist theories contrast with what are called sort of
00:02:27.360
deontological or kind of common sense ethical theories, where those theories will say, sometimes
00:02:33.400
you should maximize the good, but also there are constraints. There are things you shouldn't do.
00:02:39.040
You know, you shouldn't kill people to, you know, save lives, perhaps. And there are prerogatives.
00:02:43.920
So there are things it maybe would be good for you to do, but you don't have to do. So maybe
00:02:48.980
the utilitarian might say, look, you should give lots and lots of money to charity. And the
00:02:52.900
deontologist would say, well, you know, I recognize it would be better in some way for the, you know,
00:02:57.000
for the world, for people if I did that, but I don't have to do that. I have these kind of,
00:03:00.340
these kind of prerogatives. So that's that kind of lay of the land.
00:03:03.260
Now, do you feel that one of these positions wins? I mean, how would you define your
00:03:08.660
meta-ethics? I mean, I think I've said this before on the podcast, but perhaps you're
00:03:14.900
unaware of it. I do think any sane deontology collapses to some form of consequentialism
00:03:21.480
covertly. I mean, if you say it's not all about maximizing the good, there's some very important
00:03:27.000
principles that we must hold to, like, you know, Kant's categorical imperative or some other
00:03:34.780
deontological principle. To my eye, what is smuggled in there covertly is the claim that
00:03:40.340
that principle is on balance good, right? I mean, like if someone knew that the categorical
00:03:47.220
imperative was guaranteed to produce the worst outcomes across the board, I don't think most
00:03:53.080
deontologists would bite the bullet there and say, yeah, that's what we want, the worst outcomes across
00:03:57.140
the board. They're holding to it because it is on its face intuitively a great way to implement
00:04:03.040
something like rule utilitarianism or rule consequentialism. What are your thoughts on that?
00:04:08.780
So lots of the objections which you might make against utilitarianism,
00:04:12.260
that it's taking maximizing too seriously, are also problems you're likely to find to a lesser
00:04:16.780
degree in non-consequentialist theories. So an example is, you know, sort of a kind of a classic
00:04:21.980
differentiating point would be you shouldn't kill one person to save lives. So you might say,
00:04:27.600
well, you shouldn't kill one person to save five people. And the consequentialist might say,
00:04:30.540
well, look, you probably should do that, you know, assuming that's just sort of a, there's no kind
00:04:34.300
of extra complexity to it. But then if you kind of up the ante and you say, well, what about if you
00:04:38.240
kill one person to save a million lives or a billion lives? Then the moderate consequentialist
00:04:42.600
might think, well, this is outweighed, the kind of normative badness of killing is outweighed by the
00:04:47.940
kind of the goodness of the life saved. So you might think that there's kind of still what's going on
00:04:53.680
under the hood of these deontological theories is there's still kind of some implicit maths going on,
00:04:58.880
like trading off bits and pieces. But, uh, so that's sort of an accusation that consequentialists
00:05:04.180
might make against deontologists, but I mean, deontologists will, will kind of fight back and
00:05:08.620
say, well, actually, look, you don't, I mean, there are kind of conceptions of deontological
00:05:12.000
theories where you, you kind of can't do it exactly like that. And so it's kind of a, there's an open
00:05:17.220
debate, uh, which, you know, is perhaps kind of too much in the weeds, as to whether you can
00:05:21.920
just reduce, um, deontological theories to kind of looking at value plus some kind of other normative
00:05:27.100
principles, and you know, some people think you can, and other people think that, think that you
00:05:30.200
can't. Yeah. The attacks on consequentialism always boil down in my experience to not actually
00:05:37.800
paying attention to the full set of consequences that follow from any action. So when someone says,
00:05:43.880
well, if you're a consequentialist, you should be happy to have your doctor come, you know,
00:05:47.920
you show up to the doctor's office for a checkup, your doctor, knowing that he's got five other
00:05:53.140
patients who could use your organs, he could just come out and, you know, anesthetize
00:05:57.300
you and kill you and transplant your organs into his other patients. And that's a net benefit
00:06:01.380
for the world. Uh, you know, five people get organs and, and one person dies. And that's often
00:06:06.720
put forward or examples like that are often put forward as kind of a knockdown argument against
00:06:10.640
consequentialism. But what people are not adding on the balance there is all of the consequences
00:06:15.560
that follow from such a callous and horrific practice, right? I mean, if all, if everyone
00:06:21.340
knew that at any moment they might be swept off the street and butchered for the benefit
00:06:25.920
of others, I mean, what kind of society would we be living in? And what, you know, what would
00:06:29.440
it be, what would it mean to be a doctor and how would you feel about your doctor and how
00:06:32.860
would the doctor be able to, you know, sleep at night, et cetera. I mean, so the consequences
00:06:36.880
just propagate endlessly from a practice like that. And it's just obviously awful. And no one
00:06:41.780
wants to live in that society for good reason. But again, this is all just a story
00:06:44.800
of consequences. It's not the story of some abstract principle. But anyway, we don't have
00:06:48.500
to get wrapped around that axle. I just wanted to touch on that. So if you're a consequentialist
00:06:54.020
of, of whatever description, what should you care about in the end?
00:06:59.360
Well, there are kind of a few options as to which kind of consequences you're going to say
00:07:03.560
matter. So one which, um, I think any consequentialist is going to buy into is, is wellbeing. So wellbeing
00:07:10.040
is a kind of term of art in philosophy for what ultimately makes someone's life go well for them. There are
00:07:13.980
kind of three canonical theories of wellbeing. You've got hedonism. So happiness is what
00:07:18.160
matters. You've got desire theories where getting what you want is what matters. And
00:07:21.420
then you've got this thing called the objective list where it's usually a few things. Maybe
00:07:25.000
it's, you know, happiness and desires are on there, but it might also be things like
00:07:28.780
truth, beauty, love, achievement. And I think, you know, that's going
00:07:32.720
to be kind of one of the key consequences. You might also think maybe there's, you want
00:07:36.520
to account for kind of equality or justice. It's kind of a, you might think it's a bit
00:07:40.300
of an open question as to whether those are kind of deontological principles or sort
00:07:43.800
of value-based principles. But when I think about this, what kind of motivates
00:07:49.040
my thinking is that I just find it very compelling that when
00:07:53.480
we're thinking about what makes someone's life go well for
00:07:56.640
them, it's their happiness and their suffering. It's kind of the quality of life
00:08:00.480
for them. It's how they, how they feel overall. And this is, I guess it's a, you know, there
00:08:05.040
are some bits of philosophy that think that this is, is kind of a mad theory and kind of
00:08:09.300
Nozick and the experience machine, you know, would you be, if you, if you really believe
00:08:12.540
in happiness, would you plug yourself into a matrix style scenario? But I think in kind
00:08:16.080
of weighing up the three theories of, of wellbeing, I just think that the hedonism, the idea that
00:08:20.660
what, what makes your life go well for you is how you feel overall. I think that's got
00:08:23.980
the, uh, that's kind of got the strongest arguments behind it and that motivates lots
00:08:28.460
Yeah. I mean, so to take Nozick's experience machine refutation of consequentialism, or here,
00:08:34.420
utilitarianism. Again, what he's pressing on there is the intuition, which I think
00:08:40.500
is widely shared by people, that we should have something like a reality bias, right?
00:08:46.220
That you don't want to be, you don't want your state of subjective wellbeing to be totally uncoupled
00:08:52.500
from the reality of your life in the world. You don't want to be in relationship with, um,
00:08:58.460
seeming others who are not in fact others. So you don't want to be hallucinating about everything,
00:09:02.840
right? So this is why you wouldn't want to be in the matrix. If you, in fact, you wouldn't want to
00:09:06.120
be in the matrix. Now I would grant that there are certain conditions under which the matrix
00:09:10.860
becomes more and more tempting and reality becomes less and less so. Right. I mean, we can imagine
00:09:15.580
just some forced choice between a very awful universe that is real and a simulated one, which is
00:09:22.740
perfect. In which case we might begin to wonder, well, what's the point of reality in that case? But
00:09:28.360
I think it's, again, that this is, it's a story of, of yet more consequences at the level of people's
00:09:34.860
experience. I mean, to know that you're, um, you know, let me just imagine, you know, having the
00:09:39.860
best day of your life and you, or years of your life and you're in a relationship with people who
00:09:45.560
are incredibly important to you, who you love and to find out at some point that all of this was a
00:09:50.640
hallucination, right? Which is to say not merely that it's impermanent,
00:09:56.380
which any experienced empirical reality is. We'll all discover that at death, or even just at
00:10:02.280
the end of any hour. But there would be this additional knowledge that it was fake in some
00:10:07.640
sense, right? Like the person you thought you were in the presence of sharing meaning and love with
00:10:13.020
was not a person, right? They had, they had no point of view on you. It was all just a hall of
00:10:17.660
mirrors. I think that we get an icky feeling from that and it's understandable. And that icky
00:10:23.140
feeling translates into a degradation of the wellbeing we would find in that circumstance.
00:10:29.780
But again, I don't think we can press that too far. I think having a, a loose reality
00:10:35.960
bias makes sense, but I think that you could easily argue for ways in which you would want your view
00:10:41.880
of yourself or the world to not be the most brutal, high contrast, you know, right at all times view.
00:10:49.680
If in fact that would prove dysfunctional and corrosive in, in other ways, which I think it's,
00:10:55.660
you know, the, it's pretty easy to see that it might.
00:10:58.540
Yeah. So, I mean, in addition to that, I think a reason not to get into the,
00:11:02.540
into the experience machine is I think we have more responsibilities. If you're just stuck in
00:11:06.100
the experience machine, you can't make a difference to, to anyone else. I also, um,
00:11:10.960
a couple more thoughts. I also think it's sort of amusing that the experience machine is taken
00:11:15.000
as a sort of a slam dunk objection to hedonism. When, you know, if we look at how technology is
00:11:19.880
changing, we are increasingly living in something like the experience machine. I mean, there are
00:11:23.440
some days where like, I don't leave my house. Like I interact with people the whole day,
00:11:27.660
you know, through, through the, the magic of, uh, of the internet and so on. Am I, am I in fact in
00:11:32.980
the experience machine? Right. But anyway, leaving those, uh, those bits to the side, I think a point
00:11:38.280
that's really substantially overlooked is when there's discussion about what wellbeing is, it's
00:11:42.960
often, okay. So the argument is, is happiness the only thing that matters? And then there's this sort
00:11:48.080
of cognitive mistake of thinking, well, if happiness isn't the only thing
00:11:51.260
that matters, then it doesn't actually matter very much. And so I, I often find I have to remind
00:11:55.720
people, even if they are not hedonists and few people are, and that's, you know, that's fine.
00:12:00.020
But look, even if you don't think it's the only thing that matters, you do still think that it matters.
00:12:03.860
If you didn't think that it mattered, you would think that people's suffering and misery
00:12:06.940
didn't matter in and of itself. And that's a very peculiar thought. So it's at least going to be
00:12:12.600
one of the things that matter, or it's going to be very important to whatever else it is that
00:12:16.840
matters intrinsically. So if you're engaging in morality and you're not taking happiness seriously
00:12:21.720
and taking suffering seriously, then you're missing a major, a major part of, um, of what really
00:12:27.520
matters. So what do you do with the fact that happiness and wellbeing are these elastic concepts that
00:12:34.600
are really impossible to define in any kind of closed way, because there's, there are frontiers
00:12:40.640
of happiness and wellbeing that we are gradually exploring. And presumably there are experiences
00:12:46.840
that, uh, we would all recognize that are, you know, better than any we've yet had. And they're
00:12:52.080
sort of out there on the horizon. And we can't, we can't really close our accounts with reality at
00:12:56.320
this point and say, Hey, you know, wellbeing, ultimate human wellbeing is this because a thousand
00:13:00.980
years from now, it may consist of something, you know, that, that we can't even form a concept
00:13:05.940
around presently. And what do you do with the fact that, and this is explicit in many of the
00:13:11.060
objections to the concept of happiness, because it somehow seems thin and doesn't somehow capture
00:13:17.100
everything that's worth wanting. What do you do with the fact that there are certain forms of
00:13:21.300
suffering and stress that seem integral to the deeper reaches of wellbeing, you know, so that it's not,
00:13:29.320
it can't purely be about avoiding pain or avoiding stress or maximizing short-term pleasure, right?
00:13:37.040
I mean, we all know what it's like to, or many of us know what it's like to go to the gym and work
00:13:41.000
out hard. And if you could experience sample that hour, it would be true to say that much of it was
00:13:47.340
excruciating. Uh, and if you were having that experience for some other reason, like if you woke up
00:13:51.820
in the middle of the night and felt the way you felt, you know, doing a deadlift or whatever, you would
00:13:55.980
run straight to the hospital, you know, convinced you're about to die. But because of the
00:14:01.040
context and because of the consequences of spending that hour that way, most people learn to love that
00:14:06.940
experience, even if it's negatively valenced as a matter of, you know, sensation and physiology while
00:14:12.640
having it, how do you define wellbeing or flourishing or happiness to encompass those wrinkles?
00:14:20.420
Yeah. So I think the definitional problems are maybe not so sharp. I mean, in, in kind of philosophy,
00:14:28.100
we just sort of nail them down one way or another. So wellbeing, what makes your life go well for you
00:14:32.980
overall? And then happiness, I just understand as feeling good overall. So it has this intrinsic
00:14:38.940
quality of pleasure. If you don't know what pleasure is, sorry, I don't think I can tell you what that
00:14:43.480
feels like, but that's sort of the, you know, the kind of end of the line. We just sort of recognize
00:14:47.860
there is an intuitive kind of pleasantness, kind of positive or negative valence in our experiences.
00:14:52.920
So then there's this question about the causes of happiness and, you know, what does happiness
00:14:57.040
consist in? So what I think happiness consists in is positive valence experience. And then what are
00:15:01.420
the causes of happiness? Well, you know, that's a, that's an empirical question. You're, you're
00:15:05.160
absolutely right that, you know, our, we can possibly discover lots about what are the causes of
00:15:11.440
happiness and how do they compare to each other over time? And what in fact are the best ways to
00:15:16.260
promote happiness, which hopefully we will come to in due course. On the bit about
00:15:20.400
suffering. Yeah. This comes up quite a bit as like, well, you know, but if you only live the
00:15:24.420
happy life, wouldn't you, this is a bit like the point you're making about kind of consequentialism
00:15:28.000
people say, well, if you, if you only experience happiness, that would in fact not maximize your
00:15:32.500
sum total of happiness over time, because you need the misery to have some happiness. But I mean,
00:15:36.580
I think that's, you know, sort of fine as a fact of the matter. If you're looking at your
00:15:40.360
experiences over time, then you do want some kind of good stuff and some bad stuff.
00:15:45.280
If you're going to, um, you know, have the greatest area under the line. I mean, we, you know, we know
00:15:50.080
this, we, uh, we do things like we take ourselves camping because we know it's going to be a miserable
00:15:55.400
experience so that then we can go back to civilization and enjoy the fruits of civilization.
00:15:59.900
Some of us do. I've stopped camping. I've retired.
00:16:04.840
I mean, you've, you've had the camping experience and maybe that, you know, you can remember,
00:16:10.380
Yeah. Well, so, but do you actually think that? My intuition kind of runs the other way. I don't
00:16:15.460
think we need awful things to compare our happiness to, to recognize that we're happy. I think happiness
00:16:21.640
or human wellbeing could become increasingly refined such that the thing you're comparing the best
00:16:28.280
experience to is like, it's still a very good experience. It's just not, not nearly as good
00:16:33.700
as the best. So there's some version of camping that is better than what 99% of people experience
00:16:39.140
on a day-to-day basis, but which could become the, the reference point if one were needed of
00:16:44.460
comparison to some yet future state that's even more blissful and expansive and creative and
00:16:49.420
beautiful and, and encompassing of depth and intuitions that we, you know, very few people
00:16:56.480
Yeah. So I don't think I agree with you. It's not sort of logically necessary, but if you look at how
00:17:00.620
kind of happiness seems to work for people, it's, uh, it's highly comparative and there's some
00:17:05.000
kind of oddnesses about the things we choose to compare ourselves to and, uh, not others.
00:17:09.420
So I'll kind of, uh, a case in point that's kind of relevant for the moment is in the kind of the
00:17:14.820
Western world, you know, your side of the pond, my side of the pond, we're talking about a cost of
00:17:17.840
living crisis. Okay. And people are sort of feeling like they're, they're feeling the pinch, incomes are
00:17:23.140
going down, things are more expensive, but look, here's sort of another perspective on this.
00:17:27.100
If you earn the median salary in the US, which is like $40,000, you're in the top 2% of the global
00:17:34.360
distribution. And if you think about how many people, I think it's more than that. I thought
00:17:38.620
the, um, you said median, but I think the mean per capita GDP in the US is like 65,000, something
00:17:45.480
like that. It's, I think it's, it is higher than that, but it's higher than the UK. Yeah. I'm, I'm
00:17:49.580
thinking, I'm thinking of the median. I don't, I don't know the mean, uh, GDP. Yeah. I guess,
00:17:53.480
I guess the median wage is considerably lower because there are some very rich people. Yes.
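Since a few numbers get traded back and forth in this exchange, here is a minimal back-of-the-envelope sketch, in Python, of how the "top 2% of the global income distribution" figure and the "richest 0.1% of people who have ever lived" figure that comes up just below fit together. The inputs are the conversation's own rough approximations (a roughly $40,000 US median salary, roughly 120 billion people ever born) plus an assumed present-day population of about 8 billion; none of these are precise statistics.

```python
# Rough back-of-the-envelope check of the global comparison figures discussed here.
# All numbers are approximations: the ~$40,000 median and "top 2%" come from the
# conversation; ~8 billion alive today is an assumed round figure; ~120 billion
# ever born is the estimate mentioned a moment later in the conversation.

median_us_income = 40_000        # rough US median salary (USD)
people_alive_today = 8e9         # assumed: roughly 8 billion people alive now
share_richer_today = 0.02        # "top 2% of the global income distribution"
people_ever_born = 120e9         # ~120 billion people who have ever lived

# How many people alive today are better off than the median US earner?
richer_today = people_alive_today * share_richer_today

# If, generously, everyone in history who was better off than you were alive today,
# your rank among everyone who has ever been born would be roughly:
share_of_everyone_ever = richer_today / people_ever_born

print(f"People richer than you today: ~{richer_today:,.0f}")           # ~160,000,000
print(f"Share of everyone ever born:  ~{share_of_everyone_ever:.2%}")  # ~0.13%, i.e. roughly the "top 0.1%"
```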
00:17:57.100
Yeah. And then, um, if you're looking, uh, not just at the moment, but across time, I mean,
00:18:02.420
you know, when did Homo sapiens become Homo sapiens? But by one estimate,
00:18:07.680
there's like 120 billion people who have ever lived. So if you put those together, if you're
00:18:12.360
alive today and earning the median salary in the US, you're in the top 0.1%, the richest
00:18:17.940
0.1% of people who have ever lived. Yes. And yet what are people talking about? They're
00:18:23.040
saying, ah, it's the cost of living crisis. Things are so expensive. And when I
00:18:27.100
make this point to people, they'll look at me like I'm strange. Well, you know,
00:18:31.100
of course that's not relevant. Like that's not how I think about my life, but you know,
00:18:34.400
that's the kind of curiosity there, that, um, there are certain things we
00:18:39.180
compare our lives to sort of naturally and intuitively, but we could make different
00:18:43.380
comparisons. And so relating to your point, you know, we could, uh, you know, bring ourselves
00:18:47.900
to think of the misery in the world that we are otherwise avoiding. And that would give
00:18:51.480
us, uh, uh, greater happiness. But in fact, you know, we, we're in quite narrow tracks
00:18:56.600
and the kind of, we just compare ourselves to the things which are salient, the people
00:18:59.400
around us. Yeah. And so in practice, maybe you do need that reminding now and then
00:19:04.500
of, of some misfortune that can make you grateful for the rest of your, uh, for the other parts
00:19:08.800
of your life. Well, this issue of comparison, I think runs pretty deep because given that
00:19:13.620
so much of our judgments of our own wellbeing and, and in fact, our experience of whether or not
00:19:20.720
we are flourishing is based on comparison, is based on context. It's based on the, on the
00:19:27.060
cognitive framing that is laid over just the raw sensory experience of being
00:19:34.440
oneself moment to moment. One could ask, I mean, we're going to get
00:19:39.560
into kind of effective altruism and, you know, what problems on
00:19:43.360
earth are worth solving and how we prioritize those things. But if it's a matter of alleviating
00:19:48.580
suffering and alleviating the most excruciating suffering first, presumably and maximizing
00:19:54.620
human wellbeing, maybe it's in fact true to say that the homeless on the streets of San
00:20:00.140
Francisco are suffering more than the poorest of the poor in sub-Saharan Africa or in an
00:20:06.760
Indian village or somewhere where objectively they are more deprived, right? Because there's
00:20:10.500
no one starving to death in San Francisco, whatever their condition. I mean, they could, they
00:20:14.660
might be dying of fentanyl abuse or something else, but there's no one starving to death
00:20:19.680
in America. That's just not a thing because there's just so much food and you can go to
00:20:24.140
a shelter, you can, you can go to a pantry or you can go to a dumpster. I mean, you can
00:20:27.960
get food, but there are places on earth where people still starve to death. Happily, that's
00:20:33.280
less and less the case. And yet, if you imagine the experience of being homeless, you know,
00:20:39.000
right outside of Salesforce Tower or wherever you are in San Francisco, the prospect of comparing
00:20:44.780
the unraveling of your life with the lives that seem to be going on so smoothly all around you
00:20:50.240
suggests to me that it's at least conceivable that that suffering, that mental suffering,
00:20:55.480
the experience of being in that bad condition is worse than much or maybe everything that's going
00:21:01.940
on in objectively poorer parts of the world. How do you think about that?
00:21:06.580
Yeah, I find that extremely plausible and very probably true. Having walked through the streets
00:21:13.360
of San Francisco and also visited some of the poorest bits of the world, yeah, I would imagine
00:21:18.240
If you'd like to continue listening to this conversation, you'll need to subscribe at
00:21:23.720
samharris.org. Once you do, you'll get access to all full-length episodes of the Making Sense
00:21:29.000
podcast. The Making Sense podcast is ad-free and relies entirely on listener support, and you