#305 — Moral Knowledge
Episode Stats
Length
1 hour and 7 minutes
Words per Minute
173.2
Summary
In this episode of the Making Sense podcast, Sam Harris speaks with neuroscientist and writer Erik Hoel about the nature of moral truth and, by implication, the future of effective altruism. They discuss the connection between consequentialism and EA, the problems of implementing academic moral philosophy in the real world, what Sam regards as bad arguments against consequentialism, the implications of AI for our morality, the dangers of moral certainty, whether all moral claims are in fact claims about consequences, the problem of moral fanaticism, why it is so difficult to think about low-probability events, and the collapse of FTX under Sam Bankman-Fried. In his introduction, Sam also reflects on deleting his Twitter account and how leaving the platform has changed his sense of the world and of his place in it; the conversation itself was recorded the day before he deleted the account. Full episodes of the Making Sense podcast are available to subscribers at samharris.org, along with Jay Shapiro's companion series The Essential Sam Harris and its accompanying workbooks and recommendations for further exploration. The podcast runs no ads and is made possible entirely by listener support; if you can't afford a subscription, you can request a free account by emailing support@samharris.org.
Transcript
00:00:00.000
welcome to the making sense podcast this is sam harris just a note to say that if you're hearing
00:00:12.480
this you are not currently on our subscriber feed and will only be hearing the first part
00:00:16.880
of this conversation in order to access full episodes of the making sense podcast you'll need
00:00:21.920
to subscribe at samharris.org there you'll find our private rss feed to add to your favorite
00:00:27.000
podcatcher along with other subscriber only content we don't run ads on the podcast and
00:00:32.480
therefore it's made possible entirely through the support of our subscribers so if you enjoy
00:00:36.500
what we're doing here please consider becoming one
00:00:38.900
okay well a reminder that we're releasing more episodes of the series the essential sam harris
00:00:52.840
created by jay shapiro jay has mined my catalog and put together episodes on specific themes
00:01:02.320
adding his own commentary read by the wonderful megan phelps roper and weaving that together with
00:01:10.280
excerpts from many podcasts on a similar topic we released artificial intelligence about a week ago
00:01:17.880
there will be episodes on consciousness and violence free will belief and unbelief existential threat and
00:01:26.860
nuclear war social media and the information landscape death meditation and eastern spirituality
00:01:34.200
and perhaps other topics beyond that jay has also created workbooks for each episode and recommendations
00:01:41.900
for further exploration and these full episodes and other materials are available to all podcast
00:01:48.840
subscribers as always if you can't afford a subscription you need only send an email to support
00:01:54.620
at samharris.org and request a free account and i remain extremely grateful to have a community of
00:02:02.460
subscribers that support the podcast knowing that others who can't afford to are nevertheless able to get
00:02:10.180
everything for free that really is the business model that makes sense for me in digital media
00:02:15.300
and it's our policy over at waking up too so don't let real concerns about money ever be the reason why
00:02:23.380
you don't get access to my digital work okay well i've been off twitter for about 10 days now
00:02:33.060
and i must say it's been interesting it's almost like i amputated a limb actually i amputated a phantom limb
00:02:43.300
the limb wasn't real and it was mostly delivering signals of pain and disorder but it was also a major presence
00:02:53.060
in my life and it was articulate in ways that i was pretty attached to i could make gestures or seeming gestures
00:03:01.860
that i can now no longer imagine making there's literally no space in which to make those gestures
00:03:08.900
in my life now so there's definitely a sense that something is missing my phone is much less of a
00:03:18.340
presence in my life i've noticed that i sometimes pick it up reflexively and then i think what was i hoping
00:03:24.740
to do with this and my sense of what the world is is different a sense of where i exist in the world
00:03:32.980
is different this might sound completely crazy to those of you who are never obsessed with twitter
00:03:39.620
but twitter had really become my news feed it was my first point of interaction with the world of
00:03:46.740
information each day and now that seems far less than optimal i once went up in a police helicopter
00:03:55.700
and experienced what it was like to have a cop's eye view of a major american city at the time this
00:04:02.260
really was a revelation to me when you're listening to police radio there's always a car chase or shots
00:04:10.660
fired or reports of a rape in progress or some other astounding symptom of societal dysfunction
00:04:19.540
and without a police radio in your life most of that goes away and it's genuinely hard to say which
00:04:26.180
view of a city is more realistic is it more realistic a picture of your life in your city for you to
00:04:34.020
suddenly be told that someone is getting murdered right now a mere four miles from where you're currently
00:04:40.100
drinking your morning cup of coffee is the feeling of horror and helplessness that wells up in you
00:04:46.660
a more accurate lens through which to view the rest of your day or is it distorting of it it does seem
00:04:53.620
possible to misperceive one's world on the basis of actual facts because of what one helplessly does
00:05:02.500
with those facts it's almost like the human mind has its own algorithmically boosted information so
00:05:09.380
misinformation aside and there was obviously a lot of that i now feel like many of the facts i was
00:05:16.180
getting on twitter were distorting my sense of what it is to live in the world as well as my sense of my
00:05:22.980
place in it today's conversation was recorded before i got off twitter so you'll hear it come up
00:05:29.940
briefly actually it was recorded the day before i deleted my account because i did that on thanksgiving day
00:05:37.380
and this was recorded the day before and at a few points you'll hear the residue of how much time i
00:05:43.380
had been spending on twitter that day i complain about it i draw an analogy to it and frankly listening
00:05:50.260
back to this conversation i sound a little more cantankerous than normal this conversation had the
00:05:58.980
character of a debate at times especially in the second half and um listening to it i sound a little
00:06:06.900
bit at the end of my patience and while it had some reference to the disagreement being discussed it was
00:06:14.260
certainly drawing some energy from my collisions on twitter that day anyway today's guest is erik hoel
00:06:22.340
erik is a neuroscientist and writer he was a professor at tufts university but recently left
00:06:30.020
to write full-time he's been a visiting scholar at the institute for advanced study at princeton and a
00:06:37.220
forbes 30 under 30 notable in science he has published a novel titled the revelations and he now writes full-time
00:06:46.900
for his substack which goes by the name of the intrinsic perspective and today we talk about the
00:06:53.220
nature of moral truth and by implication the future of effective altruism we discuss the connection between
00:07:00.820
consequentialism and ea the problems of implementing academic moral philosophy bad arguments against
00:07:08.020
consequentialism or what i deem to be bad arguments the implications of ai for our morality the dangers of
00:07:16.020
moral certainty whether all moral claims are in fact claims about consequences the problem of moral
00:07:23.460
fanaticism why it's so difficult to think about low probability events and other topics anyway i really
00:07:31.540
enjoyed this despite being slightly prickly these are some topics that really are at the core of my interest
00:07:38.180
as well as erik's and now i bring you erik hoel
00:07:48.740
i am here with erik hoel erik thanks for joining me thank you so much sam it's a it's a delight i
00:07:54.740
actually grew up selling your books um um i i grew up in my mom's independent bookstore and uh all through
00:08:03.540
high school which was i think like 2004 or so this was right when the end of faith came out uh-huh
00:08:08.980
and um it sat on the bestseller list for a long time and so i probably sold i don't know 50 maybe
00:08:14.260
even 100 copies of that book i mean i sold that i sold it a lot it was really dominant during that
00:08:18.500
period of time so oh nice nice where was the bookstore or where is the bookstore uh in yeah it's in
00:08:24.820
newburyport massachusetts which is north of boston it's just a uh it's just an independent bookstore
00:08:30.100
up there but it was um yeah it was it was great i i highly recommend growing up in a bookstore if
00:08:36.100
you can get away with it i can only imagine that that would have been my dream at uh really every
00:08:41.220
point from i don't know 12 on that would have been amazing do you guys still have the store we we do
00:08:47.860
actually it survived covid incredibly thanks to the generosity of the local community who who leapt in
00:08:54.340
to support it with a gofundme and it's now going on 50 years amazing it's pretty incredible well
00:09:00.500
let's plug the store what's the name of the store uh the name of the store is is jabberwocky books in
00:09:05.460
newburyport massachusetts i highly recommend checking it out uh jabberwocky as in uh lewis carroll yep
00:09:11.220
cool well that's great i uh i love that story so uh you and i have a ton in common apparently we've
00:09:19.140
never met this is the first time we've spoken i have been reading your essays and uh at least one
00:09:26.180
of your academic papers let's just um summarize your background what what have you been doing since uh
00:09:32.420
since you left that independent bookstore well i originally wanted to be a writer but i became
00:09:38.820
very interested in in college about the science of consciousness which i'm sure you sort of understand
00:09:46.020
uh in the sense of it just being very innately interesting it seemed like a wild west it seemed
00:09:51.700
like there was a lot there that was unexplored and so i became so interested that i went into it and i
00:09:56.500
got a phd and i worked on developing what's probably are arguably the one of the leading theories of
00:10:04.020
consciousness which is integrated information theory now i i think that particular theory has has some
00:10:10.500
particular problems but i think it's it's sort of what a theory of consciousness should look like
00:10:15.700
and i was very lucky to sort of work on it and develop it over my phd but during that time i was
00:10:20.500
still writing and so eventually i that spilled over onto substack and and doing sort of these these
00:10:28.660
newsletters which is almost to me like this emerging literary genre maybe that sounds a bit
00:10:35.300
pretentious but i i really sort of think of it that way um the sort of frictionless form of
00:10:39.940
communication that i really find intriguing um and so that's what i've been devoting a lot of my
00:10:44.420
effort to effort to lately yeah yeah so just to back up a second so you got your phd
00:10:50.180
in neuroscience and um did you do that under tononi yeah i did so i worked i worked with giulio tononi and
00:10:57.460
we were working on this was right around the time when integrated information theory was sort of coming
00:11:04.180
together you know he he's the originator of it but there was sort of this early theory team we called
00:11:10.820
ourselves that was all built on shoring up the foundations and it was a deeply formative again
00:11:18.420
an instance of me just being very very lucky it was a deeply formative experience to work on a really
00:11:24.980
ambitious intellectual project even though now i can sort of see that that like frankly i i don't think
00:11:32.340
that the theory is is probably 100 true i think maybe some aspects of it are true i think some aspects
00:11:37.540
of it are incredibly interesting i think it sort of looks very much like what we want out of the
00:11:41.860
science of consciousness but regardless of that i think as an intellectual project it was incredibly
00:11:47.060
ambitious and intricate and it had just a huge it to go into that environment of really high level
00:11:53.940
science at a frontier when you're 22 is is mind expanding yeah right i mean it was just it was just
00:12:01.460
absolutely mind-blowing and it was a privilege to be to be a part of that yeah yeah well there's so many
00:12:06.020
things we could talk about obviously we can talk about consciousness and free will the the brain uh
00:12:12.980
ai i know we share some concerns there digital media we the you just raised the point of your migration to
00:12:20.580
substack i mean maybe we'll linger on that for a second but there's it's just we could talk about uh
00:12:26.020
we have we have many many hours of ahead of us if we want to cover all those things but there's
00:12:31.060
something else on the agenda here which is more pressing which is the your views about effective
00:12:38.020
altruism and consequentialism which um have been only further crystallized in recent weeks by the um
00:12:46.260
the fall of sam bankman-fried so i think maybe we'll get to some of the other stuff but um we
00:12:52.500
definitely want to talk about moral truth and the larger question of just what it takes to live a good
00:12:59.620
life which you know it really are yeah those are questions which i think are central to everyone's
00:13:05.140
concern whether they think about them explicitly or not but before we jump in let's just just linger
00:13:10.020
for a second on your bio because you you made this jump to substack which really appears in at least in the
00:13:16.980
last what 10 days or so to have actually been a jump you you have you were a professor of
00:13:23.700
of neuroscience at tufts was that correct yeah so i'm resigning my professorship at at tufts in order
00:13:31.220
to to write full-time on my substack the intrinsic perspective yeah and one of the one of the reasons
00:13:37.700
i'm doing it is just that you know it the the medium itself offers uh a huge amount to people who are
00:13:45.540
interested in in multiple subjects right i mean you you surely have sort of felt some of these constraints
00:13:51.540
wherein you know you're really expected to be hyper focused on particular academic problems and you
00:13:58.900
know i do do like technical work and so on but i'm also sort of just more interested in general
00:14:03.700
concepts and there hasn't been you know at least for someone who's who's a writer there hasn't been a
00:14:08.100
great way to to make a living off of that and actually substack is now sort of providing that so i think i
00:14:13.860
can i can do stuff that's as in depth as some of my academic work but sort of do it do it in public
00:14:21.620
and create conversations and i think that that's that's really important and i should seize the
00:14:25.380
opportunity while i can so but why resign your post at tufts what's the i mean what what do most people
00:14:34.660
not understand about academia at this moment that um would make that seem like a an obvious choice
00:14:43.060
because i guess from the outside it might seem somewhat inscrutable i mean why not maintain your
00:14:48.020
professorship continue to be a part of the ivory tower but then write on substack as much as you
00:14:54.020
want or can yeah i i think what is is not quite understood is how focused you have to be on the
00:15:03.780
particular goal posts that are within academia that move you towards tenure track so basically
00:15:11.940
what every professor wants is this this tenure at some major institution and to do that now it's not
00:15:21.220
really just a matter of doing your research right it's a matter of sort of crafting your research so
00:15:27.140
it will receive big governmental grants and the areas in which i work which is like science of
00:15:32.660
consciousness formal mathematically formalizing the notion of emergence these are not areas where
00:15:37.940
there is a huge amount of of funding to begin with right but but beyond that it also means being
00:15:44.260
you know involved with the student body in not just having students but in all sorts of ways like
00:15:50.820
extracurricular activities volunteering taking on you know essentially busy work of editing journals and
00:15:58.180
it involves you sort of citation maxing and paper maxing and uh sitting on all the right committees and
00:16:05.460
and i sort of have tried to avoid doing that and thought maybe i could make a career within academia
00:16:11.620
without really leaning in heavily into all that into sort of the all the goal posts and hoops of
00:16:19.060
academia and i think it's it's effectively just impossible like i've sort of been very lucky to have gotten
00:16:24.420
as as far as i have and the simple truth is is that like you know last year i i published a novel and
00:16:32.900
i've been publishing essays on substack and the simple truth is is that a tenure committee will never
00:16:36.900
sit down and say oh you you wrote a novel and a a bunch of popular essays that's you know just this
00:16:42.660
massive plus for our biology department it it's like totally inscrutable to them and um i've never had
00:16:48.980
anyone in any sort of administrative or hiring or grant giving capacity show anything but like
00:16:55.780
hesitation and and trepidation about about sort of my my my work outside of either direct academic
00:17:03.620
stuff or direct research stuff hmm yeah but has something changed or has that always been the case
00:17:10.020
do you think i think it's it's essentially always been the case it's just that you know i'm not you know
00:17:16.580
my fear is that people think oh you know this is someone hopping on substack as some sort of life
00:17:22.180
raft i i think if substack didn't exist i would sort of happily split the difference and just take the
00:17:28.900
career head and keep writing and and and probably you know not get tenure where i want to get tenure or
00:17:34.660
even if i could but i would still try it but i think substack as a as this sort of emerging genre like
00:17:41.300
you you know you you're an author you've written books and there's a certain sensation at least that
00:17:48.260
that i have and i imagine most authors have at a certain point where when you're publishing a book
00:17:52.340
it's like you're entering this queue behind a line of like massive titans who've all written incredible
00:17:58.660
works and you're sort of offering up you know this sort of meager here's my book i i hope it's sort of
00:18:05.220
at all the lives up to any of this stuff and um and i just don't feel that way and i
00:18:11.300
i just don't feel that way on substack right i feel like oh this is this is new people haven't
00:18:15.540
really done this i mean of course there's been many great essays throughout history but this sort of
00:18:20.500
constant contact of of the the newsletter form and the frictionlessness of it it strikes me as
00:18:25.780
like a new a new genre and i want to sort of explore it yeah i mean the the huge difference is
00:18:30.660
the the cadence of publishing i mean to be able to hit publish on your own schedule and then to see it
00:18:39.220
instantaneously land in the hands and minds of readers or listeners in the case of a podcast
00:18:46.020
that strikes me as genuinely new i mean you know the the rhythm of book publishing now i mean it's
00:18:51.540
been some years since i've been engaged in it and it's really hard for especially for a non-fiction
00:18:56.420
book i guess with a novel it would probably feel differently or this wouldn't be quite the pain
00:19:02.180
point but if you have an argument to make that you think has pressing intellectual and even social
00:19:08.340
importance and it it all relates to issues of the day to spend a year or more crafting that argument
00:19:18.180
and then to wait nearly a year i mean in the usual case it's something like 11 months for that to be
00:19:25.140
published i mean it's just it just seems like a bizarre anachronism at this point and and so as a
00:19:31.080
counterpoint to that substack and podcasts and blogs generally uh you know anything digital that
00:19:38.180
you have that you for which you're the publisher uh it's just a different world yeah absolutely publishing
00:19:43.420
moves at a at a glacial speed and it's funny as well just as someone who grew up as i said selling
00:19:49.600
books i mean there are a lot of people who have moved to reading primarily on their phone
00:19:55.060
and what i don't want is is is reading to to sort of die out right like i i want read i want to
00:20:02.160
have high level you know book level content that that people can read on their phones and and one
00:20:10.040
reason for that is just that when you wake up in the morning what a lot of people do is is check
00:20:14.620
their phones and they'll look through their social media messages and they'll read their emails but
00:20:19.280
they'll also read an essay they'll read an essay with their head right on their pillow and that is
00:20:26.500
so powerful if you can sort of direct that towards things worth attending to and i realized this by
00:20:32.580
looking at my own behavior like i as much as i love books i mean i'm sitting in my office surrounded by
00:20:38.220
you know books uh free books stolen from my mother's bookstore but uh you know as much as i absolutely
00:20:44.480
love books i don't wake up in the morning and and put a book in my face right i wake up in the morning and
00:20:49.280
i check my phone right and and so i realized this and i and i thought well what am i doing
00:20:53.520
right like why why am i putting all this effort into something that yeah i still i still read books
00:20:59.320
but clearly there's this huge open market for um sort of good high level content that you can you can
00:21:05.700
read online or on your computer and i want to bring a lot of the old school sort of literary and
00:21:11.540
scientific qualities i mean that's my hope right is to bring that sort of stuff online but anyways yeah
00:21:17.720
yeah yeah yeah well i i think you're executing on that hope because your substack essays are great
00:21:24.240
and they're quite uh literate and you also have a great artist you're collaborating with i love the
00:21:30.320
illustrations associated with your essays yeah it's a huge amount of fun it's he he does these artistic
00:21:36.040
reactions to the post so he reads draft and then somehow knocks out you know with no direction from me
00:21:42.320
his sort of reaction to it and it's it's just it's it's a lot of fun yeah nice well so let's jump into
00:21:50.080
the the topic at hand because this was kicked off by my having noticed one of your essays on effective
00:21:56.880
altruism and then then um i think i signed up for your substack at that point and then i noticed
00:22:02.480
maybe i was already on it and then you uh wrote a further essay about sam bankman-fried and uh his
00:22:12.000
misadventures so we're going to jump into effective altruism and consequentialism and there are now many
00:22:19.340
discontents perhaps we should define and differentiate those terms first how do you think about ea and
00:22:25.920
consequentialism yeah absolutely i think um effective altruism has been a really interesting
00:22:32.480
intellectual movement you know in my lifetime it's it's sort of made contributing to charity
00:22:39.440
intellectually sexy which i find very admirable and they've brought a lot of attention to causes
00:22:45.980
that are more esoteric but just to give like a very a very basic definition maybe of effective
00:22:53.520
altruism and how i think about it is that you can view it at two levels so the broadest sort of
00:23:00.080
definition is something like money ball but for charities so it's looking at charities and saying
00:23:06.280
how can we make our donations to these charities as effective as possible and and again this is
00:23:12.700
something that immediately people say that sounds really great but there's also you know it comes out
00:23:18.420
of a particular type of moral philosophy so the movement has its origins in a lot of these
00:23:25.620
intellectual thought experiments that are based around utilitarianism and you know where i've
00:23:34.640
sort of criticized the movement is in its sort of taking those sort of thought experiments at too
00:23:41.860
seriously and actually back in august i i wrote i wrote i think the essay that you're referring to and
00:23:47.540
it's not just because i've decided to you know critique randomly effective altruism which at the time was
00:23:53.720
just people you know contributing money to charity like what what's there exactly to to critique about
00:23:59.900
it but they actually put out a call for criticism so he said please we'll pay you to to criticize us
00:24:05.240
again something that's that is very admirable and so you know i ended up writing a couple essays
00:24:11.060
in response to this this call for self-criticism and my worry was that they were taking the maybe
00:24:18.220
the consequentialism you could call it you call utilitarianism a bit too seriously and my worry
00:24:25.720
was that they would kind of scale that up and in a sense the ftx implosion that recently occurred which
00:24:34.040
now over a million people it seems like have lost money in that that occurred perhaps arguably this is
00:24:43.100
arguably in part because of taking some of the deep core philosophical motives of effective
00:24:50.300
altruism too seriously and trying to bring it too much into the real world and and just to give like
00:24:57.140
a definition maybe we should give some definitions here okay so i've said utilitarianism i've said
00:25:01.120
consequentialism so yeah you know very broadly i would say consequentialism is when your theory of
00:25:09.020
morality is based around the consequences of actions or to be strict about it that morality is
00:25:16.440
reducible in some way to only the consequences of actions and utilitarianism is maybe like a specific
00:25:24.980
form of consequentialism people use these terms in a little bit different ways but utilitarianism is
00:25:29.840
kind of a specific form of consequentialism where it's it's saying that the consequences that impact
00:25:37.340
let's just be reductive and say the happiness or pleasure of individuals is sort of all that
00:25:43.180
matters for morality and all of effective altruism originally comes from some moral thought experiments
00:25:52.480
around how to sort of maximize these properties or how to be a utilitarian and i think that that's
00:25:59.620
i think that that's in a sense the the part of the movement that we should take the least seriously
00:26:05.580
and then there's a bunch of other parts of movement that i think are are good and should be emphasized
00:26:10.500
so i just want to sort of make that clear okay great well let me go over that ground one more time
00:26:17.180
just to fill in a few holes because i think i just don't want anyone to be confused about what these terms
00:26:23.240
mean and what we're talking about here so yeah it is in fact descriptively true that many effective
00:26:30.760
altruists are consequentialists and the as you say the the the original inspiration for ea is uh you know
00:26:41.740
arguably uh the thought experiment that peter singer came up with about the shallow pond which has been
00:26:48.900
discussed many times on this podcast but briefly if you were to be walking home one day and you see a
00:26:55.680
child drowning in a shallow pond obviously you would go rush over and save it and if you happen to be
00:27:03.520
wearing some very expensive shoes the thought that you you can't wade into that pond to save the life
00:27:08.760
of a drowning child because you don't want to damage your shoes well that that immediately brands you as
00:27:14.280
some kind of moral monster right anyone who would decline to save the life of a child over let's say a
00:27:20.120
$500 pair of shoes you know just deserves to be exiled from our moral community but as singer
00:27:28.580
pointed out if you flip that around all of us are in the position every day of receiving appeals from
00:27:35.300
valid charities any one of which indicates that we could save the life of a of a drowning child in effect
00:27:43.000
with a mere allocation of let's say $500 but none of us feel that we or anyone else around us
00:27:49.980
who who is declining to send yet another check to yet another organization for this purpose
00:27:56.240
none of us feel that that we or anyone else is a moral monster for not doing that right and yet
00:28:01.520
if you do the math in consequentialist terms it seems like an analogous situation it's just a greater
00:28:07.520
remove the the moral horror of the inequality there is just less salient and so we just we you know we
00:28:15.880
we walk past the pond in effect every day of our lives and we do so with a clear conscience and so
00:28:21.720
it's on the basis of that kind of thought that a few young philosophers were inspired to start this
00:28:29.380
movement effective altruism which you know as you say is i like the analogy to it's essentially money
00:28:34.600
ball for charity you just let's just drill down on what is truly effective and how can we do the most
00:28:41.680
good with the limited resources we have and then there are further arguments about long-termism and
00:28:46.820
and other things that get layered in there and i should say that peter singer and the founders of
00:28:52.540
ea toby ord and will macaskill have been on this podcast and you know in some cases multiple times and
00:28:58.680
um there's a lot that you know i've said about all that well i guess i would make a couple of points
00:29:05.660
here one is that there's no i guess a further definition here you brought in the term utilitarianism
00:29:12.340
so that's the sort of the original form of consequentialism attributed to jeremy bentham and
00:29:18.440
john stuart mill which when it gets discussed in most circles more or less gets equated with some
00:29:26.120
form of hedonism right but people tend to think well this utilitarians really just care about pleasure
00:29:31.940
or happiness in some kind of superficial and impossible to measure way and so there's there
00:29:39.000
are many caricatures of the view that like you should you should avoid pain at all costs there's
00:29:44.500
no you know there's no form of pain that could ever be justified on a utilitarian calculus so there's
00:29:50.580
a lot of confusion about that but i guess the you know to if you if we wanted to keep these terms
00:29:55.620
separate i just tend to collapse everything to consequentialism you could argue that consequentialism as
00:30:01.480
you said is the claim that moral truth which is to say you know questions of right and wrong and good
00:30:07.280
and evil is totally reducible to talk about consequences you know actual or perhaps actual
00:30:14.020
and potential consequences and i would certainly sign on to that you could make the further claim which
00:30:19.960
i've also made is that all of the consequences that really matter in the end have to matter to some
00:30:26.040
conscious mind somewhere at least potentially right so that we we care about the in the end the the
00:30:33.320
conscious states of conscious creatures and you know anything else we say we care about can collapse
00:30:39.720
down to the actual or potential conscious states of conscious creatures so i i i would i've argued for
00:30:45.460
that in in my book the moral landscape and and elsewhere but much of the confusion that comes here
00:30:51.820
is you know as i i think we're going to explore comes down to an inadequate picture of just what
00:30:58.760
counts as a consequence so i want to get into that but i guess the the final point to make here just
00:31:04.980
definitionally is that it seems to me that there's no direct connection or at least not there's no two-way
00:31:12.420
connection maybe there's a one-way connection between effective altruism and consequentialism so which
00:31:17.820
is to say i think you could be an effective altruist and not be a consequentialist though though i would
00:31:23.680
i would agree that probably most effective altruists are consequentialists i mean you could be a
00:31:29.080
fundamentalist christian who just wants to get the the souls of people into heaven and then think about
00:31:36.880
effective altruism in those terms just how can i you know how can i be most effective at accomplishing
00:31:41.580
this particular good that i'm defining in this particular way and you know so i do think ea and
00:31:48.320
consequentialism break apart there although i guess you could say that if that any consequentialist
00:31:53.980
really should be an effective altruist if you're concerned about consequences well then you should be
00:31:59.100
concerned about really tracking what the consequences of your actions or charities actions are and you should
00:32:06.700
care if one charity is doing a hundred times more good you know based on your definition of good
00:32:12.960
than another charity and then that's the charity that should get your your money and and time etc so
00:32:18.660
i don't know if do you have anything do you want to modify that all that no no i i think that that's
00:32:24.080
correct and i and i agree actually that you could sort of separate out the utilitarianism or
00:32:30.140
consequentialism from effective altruism in in some particular ways but i think that where it gets a
00:32:37.220
little bit difficult is that the whole sort of point is this effective part of the altruism so when one
00:32:44.680
makes a judgment about effectiveness they have to be choosing something to maximize or prioritize so
00:32:52.460
you want to be choosing the biggest bang the biggest moral bang for your buck which again strikes me as
00:32:58.680
quite admirable especially when the comparisons that you're making are local so let's say that you set
00:33:04.720
out with your goal of saving lives in africa well maybe there are multiple different charities and some
00:33:11.520
are just orders of magnitude apart in terms of the expected results of just the raw number of lives saved and this
00:33:19.520
is actually a big part of precisely what the effective altruism movement has done it's isolated some of these
00:33:26.120
charities you know there's a couple of them somewhere around like mosquito bed nets and things like
00:33:31.140
that that are just really really effective at saving lives but what if you're comparing things that are
00:33:40.080
very far apart so let's say that you have some money and you want to distribute it between you know
00:33:46.320
inner city arts education versus domestic violence shelters well now it gets it gets a lot harder and it
00:33:55.960
becomes a little bit clearer that what we mean by morality isn't as obviously measurable as something
00:34:04.620
like an effective economic intervention or an effective medical intervention maybe it is to some
00:34:10.180
hypothetical being with like a really perfect good theory of morality and one way to that you know
00:34:17.760
effective altruists essentially get around some of these issues is just to say well actually both of those
00:34:22.700
are essentially a waste of money like you shouldn't really be contributing to inner city arts education
00:34:30.400
or domestic violence shelters you really should be arbitraging your money because your money is going
00:34:34.460
to go so much further somewhere else and again this all sounds good like i i don't um i don't think
00:34:41.840
that this is bad reasoning or anything like that but the issue is is that the more seriously you take this
00:34:48.520
and the more literally you take this what happens is is that it's almost like you begin to instantiate
00:34:53.660
this academic moral philosophy into real life and then it it begins to become vicious in a particular
00:35:00.080
way like why are you donating any money within the united states at all yeah why not put it where it
00:35:07.520
goes much further and that's where people begin to get off the bus to a certain degree right like again no
00:35:14.100
one can blame anyone for maximizing charities but to say that okay wait a minute a dollar will go so
00:35:19.440
much further in africa than it will here so why donate any money to any charity that sort of operates
00:35:25.280
within the u.s and that's where again people begin to say wait wait wait some something is going on here
00:35:31.520
and i think what's going on is that within this maximizing totalizing philosophy you can have
00:35:40.260
this hardcore interpretation of utilitarianism or consequentialism and you can take it really really
00:35:45.280
seriously and if you do i think it can lead to some bad effects just like the way that people who take
00:35:51.260
religious beliefs and i don't want to make the comparison i'm certainly not saying that effective
00:35:56.140
altruism is a religion but in sort of the same behavioral way that people who take religious beliefs
00:36:01.760
really really seriously and they have some sort of access to moral truth and that allows them to
00:36:08.760
strap a bomb to their chest or something and that is this level of sort of fanaticism and i think that
00:36:15.500
if you take academic philosophy too seriously you should sort of take it as interesting and maybe as
00:36:19.680
motivating but you shouldn't really go and try to perfectly instantiate it in the world you should
00:36:24.140
be very wary about that and that's where this sort of arbitrage leads right it's this like taking it
00:36:30.620
really really seriously yeah well that's that's a great place to start i mean this really is the core
00:36:37.780
of the issue and and so i'm going to make a couple of claims here which i think are
00:36:43.080
true and foundational and i would love to get your reaction but before i do that i just want to
00:36:51.380
acknowledge that the issues you just raised are are issues that i've been thinking about and talking
00:36:56.740
about you know all the while defending consequentialism this is really the the fascinating
00:37:02.120
point at which our reasoning about what it means to live a good life and the the practical
00:37:09.340
implementation of that reasoning is um it's just very difficult to work out in practice and i mean so
00:37:17.760
the first thing i would want to claim here is that consequentialism is a theory of moral truth
00:37:25.120
right it's a claim about what it may what it means to say that it that something is morally true
00:37:32.120
that something is really good or really bad it's a claim about value and in the end it's a claim
00:37:39.320
about what it's possible and legitimate to care about but it isn't a decision procedure right it's
00:37:46.140
not a way of doing the math that you just indicated may be impossible to do and there's a distinction i made
00:37:53.280
in the moral landscape between answers in practice and answers in principle and i you know it just should
00:38:00.620
be obvious that there are a wide variety of questions where we know there are answers in
00:38:05.320
principle we know that it's possible to be right or wrong about you know any given claim in this area
00:38:11.500
and what's more to maybe not even know that you're wrong when you in fact you are wrong and yet there may
00:38:18.800
be no way of deciding who is right and who is wrong there and or ever getting the data in hand
00:38:24.980
that could adjudicate a dispute and so the the example i always go to because it's both vivid and
00:38:31.640
and obviously true for people is that the question of you know how many birds are in flight over the
00:38:37.980
surface of the earth right now has an answer right it has a you at one you just think about it for a
00:38:44.280
second and you know it has an answer and that answer is in fact an integer and yet we know we'll never
00:38:50.220
get the data in hand we could not possibly get the data in hand and yet and the data have changed by
00:38:55.680
the time i get to the end of the sentence so there is a right answer there and yet we know no one knows
00:39:00.300
it but it would be ridiculous to have a philosophy where a claim about you know birds and flight would
00:39:08.080
rule out the possibility of there being an answer to a question of you know how many are flying over
00:39:13.280
the surface of the earth simply because we can't we don't know how to measure it right and the first
00:39:17.520
thing many people say about you know any consequentialist claim about moral truth with
00:39:22.560
you know with respect to to well-being say the well-being of conscious creatures which is the
00:39:28.180
formulation i often use the first thing someone will say is well we don't we don't have any way of
00:39:33.740
measuring well-being well that's not actually an argument right i mean it's certainly it may be the
00:39:40.660
beginning of one but it in principle it has no force and as you can see by analogy with with birds
00:39:47.300
but further i would make the claim that any claim that consequentialism is bad right that had that it
00:39:55.200
has repugnant implications is ultimately a claim about unwanted consequences and usually it's it's
00:40:03.940
an unacknowledged claim about consequences and so in my view and you just you inevitably did it in
00:40:10.460
just stating the case against taking academic philosophy too seriously you pointed to all of the terrible
00:40:16.140
effects of doing this right the life negative effects the fact that now you have to feel guilty
00:40:21.800
going to the symphony because it's such a profligate wastage of money and moral resources
00:40:26.540
when you could be saving yet further starving children in africa and so we we recognize we don't
00:40:33.300
want to live in that sort of world right we love art and we love beauty and we love leisure and we're
00:40:38.980
right to love those things and we want to build a civilization that wherein there's such abundance
00:40:43.840
that most people most of the time have the free attention not to just think about genocide and
00:40:50.940
starvation but to think about the beautiful things in life and to live creative lives right and and to
00:40:58.440
have fun right and so if you're going to take the thought experiments of peter singer so seriously
00:41:04.520
that you can no longer have fun that you can no longer play a game of frisbee because that hour spent
00:41:10.140
in the park with your children is objectively a waste of time when held against the starvation and
00:41:18.360
and immiseration of countless strangers in a distant country who you could be helping at right this
00:41:25.180
very moment well we all recognize that that is a some kind of race to the bottom that is perverse
00:41:31.520
that is not it's not giving us the emotional and cognitive resources to build a world worth living in
00:41:39.660
that the very world that the people who are starving in africa would want to be restored to if we could
00:41:45.620
only solve their problems too and so it may in fact be true that when you know when brought into
00:41:52.920
juxtaposition right if you put the starving child at my doorstep well then all right we can no longer play
00:41:59.620
frisbee right so there's a local difference and that is something that it's very difficult to think
00:42:04.800
about in this context and we know we'll get into that but i'm you know the claim i want to make here
00:42:09.560
is that it's not a matter of as i think you said in one of your essays it's not a matter of us just
00:42:14.740
adding some non-consequentialist epicycles into our moral framework it really is in the end getting
00:42:22.720
clearer and clearer about what all the consequences are and what all the possible consequences are of
00:42:29.480
any given rule or action and um yeah yeah so anyway that may i'll stop there but that's that's
00:42:35.520
the those are the kind of the foundational claims i would want to make here yeah absolutely i mean i i
00:42:40.240
think that the danger that i see is not so much someone saying let's maximize well-being right it's
00:42:48.080
more so that someone says let's maximize well-being and i have a really specific definition of well-being
00:42:53.340
that i can give you right now and what ends up often happening is that you can very quickly find
00:43:00.040
because it's all about maximization you can find these edge cases and in a sense moral philosophy
00:43:07.300
operates like this game wherein you're trying to find examples that you know disagree with people's
00:43:16.220
moral intuitions and an example that people often give right would be something like this
00:43:22.420
serial killer surgeon who has five patients on the operating table and he can go out into the streets
00:43:29.400
grab someone off the streets butcher them in an alleyway take their organs and save five people so
00:43:34.200
it's one for five and the difficulty is in sort of specifying something like a definition specific
00:43:41.580
enough that you don't want to do that most people sort of get off the bus with that sort of example
00:43:48.580
and that aspect of utilitarianism is very difficult to do away with you can sort of say that maybe there
00:43:58.280
are long-term effects right so so what people will often say with this example would be well wait if
00:44:03.780
the serial killer surgeon got caught if we lived in a society where people were just being randomly
00:44:08.060
pulled off the streets and murdered this seems like sort of uh this would have a really high levels
00:44:13.640
of anxiety on people or something like that and so the overall net well-being would would decrease
00:44:19.860
or something like that but i think that that's very that's very difficult to sort of defend again once
00:44:26.300
you've chosen something very specific to maximize like lives saved or something like that but
00:44:30.560
that's just that's the mistake of misconstruing consequences because i take this you know this case of the
00:44:36.780
rogue surgeon is in my mind very easy to deal with in consequentialist terms and yet it's off i mean
00:44:43.380
even in your essays you put it forward as a kind of knockdown argument against consequentialism and
00:44:49.520
consequentialism all just obviously has a problem because it can't deal with this hard case but i mean
00:44:56.300
i would just say that you can deal with it in precisely the way that people recoil from it as a
00:45:01.480
defeater to consequentialism that is a sign of what an easy case it is and we all recognize
00:45:07.180
how horrible it would be to live in a world which is to say how how horrible the consequences are that
00:45:14.220
follow from living in such a world none of us would want to live in that world i mean no one wants to
00:45:19.660
live in a world where they or someone they love could at any moment be randomly selected to be murdered
00:45:27.280
and butchered for spare parts right and when and when you would think think of just what sort of
00:45:33.360
mind you would have to have as a doctor to believe that was a way to maximize goodness in this world i
00:45:41.360
mean so just imagine imagine the conscious states of of all doctors as they surveyed their waiting rooms
00:45:47.820
looking for people that they might be able to put to other use than than merely to save their lives right
00:45:55.480
it's just it perverts everything about our social relationships and we're deeply social creatures
00:46:03.820
and states of mind like love and compassion are so valuable to us again because how they directly
00:46:11.840
relate to to this experience of well-being you know again this is a suitcase term in my world which
00:46:19.200
is it can be kind of endlessly expanded but it doesn't doesn't mean it's vacuous it's just that we
00:46:24.420
the horizons of well-being are as yet undiscovered by us but we know that it relates to maximizing
00:46:33.300
something like love and joy and beauty and creativity and compassion and something like minimizing terror
00:46:40.240
and misery and pointless suffering etc and so it's just it just seems like a very easy case when you look
00:46:48.440
closely at what the likely consequences would be and yet there are probably local cases where
00:46:55.660
the situation flips because we really are in extremis right i mean if you think of a case
00:47:01.340
like a like a lifeboat problem right like listen you know the titanic has sunk and now you're on a
00:47:07.480
lifeboat and it can only fit so many people and yet there are more people actually clambering on
00:47:13.700
and you're all going to die if you let everyone on and so i'm sorry but this person is going to get
00:47:20.680
kicked in the face until they stop trying to climb onto the lifeboat because we're no longer
00:47:27.060
normal moral actors in this moment and we'll be able to justify all of this later because this really
00:47:34.420
was a zero-sum contest between everyone's life and the one life right those are situations which
00:47:41.500
people occasionally find themselves in and i and i yes they and they do function by this kind of
00:47:48.500
callous consequentialist calculus uh and they and they're they're they're uncomfortable for a reason
00:47:55.080
but they're uncomfortable for a reason because we get very uncomfortable mapping the the the ethics
00:48:02.460
of extremis onto life as it is in its normal mode right and for good reason right we
00:48:11.400
just and and there i mean there's so much i realize now you know the fire hose of moral
00:48:15.600
philosophy has been trained on you but um i mean there's there's so many edge cases that are
00:48:21.580
that are are worth considering here but again it never gets us out of the picture of talking
00:48:27.880
intelligently and compassionately about consequences and and possible consequences
00:48:33.440
so so i think that there is a certain sort of game that can be played here and this is basically
00:48:39.320
the game that is played by academic moral philosophers who are debating these sorts of issues
00:48:45.540
right and just to to me i think the clearest conception is to say okay we have we have some
00:48:51.560
sort of utilitarian calculation that we want to make for these particular consequentialist calculation
00:48:58.100
let's say for these uh particular cases and so we have the serial killer surgeon and we say okay
00:49:04.300
the first term in this equation is five for one so that seems positive right so it's adding this
00:49:10.520
positive term but then there are these nth order effects right so that then you say well wait a minute
00:49:16.520
if we we can add in the second term and the second term is like the terror that people feel
00:49:21.300
from living in a society wherein they might be randomly butchered right and then the argument is
00:49:28.620
well when you add enough of these higher order effects you know into the equation it still sort
00:49:34.800
of ends up coming out negative thus you know supporting our our dislike or distrust of this
00:49:41.220
uh of serial killer surgeons going around and i think what academic philosophers often do in this case is
00:49:46.780
they say okay so what you've done is you've given me a game where i just have to add in more assumptions
00:49:52.720
in order to make this equation positive or come up positive or negative and the goal would be for
00:49:59.020
the critic to make it come out positive so that utilitarianism recommends the serial killer surgeon
00:50:04.240
and therefore sort of violates our moral intuition and i guess what i think is that there are some ways
00:50:11.680
to do that so an example might be that you say well what if you are a utilitarian and you learn about
00:50:19.020
a serial killer surgeon you know are you supposed to go report them to the police you know well if
00:50:24.960
you did that it would be very bad it would even be bad for utilitarianism itself so you should sort
00:50:30.220
of try to keep it a secret if you can in fact you should sort of support the act by trying to cover up
00:50:35.640
as much of the evidence as possible because now this is still technically maximizing well-being and
00:50:41.800
even if you say well wait a minute there might be some further effects it seems as if there's this sort
00:50:46.960
of game of these longer term effects and not only that as you add nth order effects into this
00:50:54.000
calculation it gets more and more impossible to foresee what the actual values will be there's this
00:51:00.820
great story um that david foster wallace the writer actually quotes at some point which is you know
00:51:07.180
there's this old farmer who lives in a village and uh you know with his son and one day his beloved
00:51:12.860
horse escapes and everyone says oh bad luck and the farmer says who knows and then the the horse
00:51:18.160
comes back and it is it's somehow leading a herd of beautiful horses and everyone says oh great luck
00:51:23.680
who knows right and then his son tries to tame one of the wild horses breaks his leg and uh everyone
00:51:29.820
says oh bad luck and the farmer says who knows and then last instance uh you know the army comes in
00:51:34.880
and drafts every able-bodied man to go serve in you know this horrific i don't know sino world war one
00:51:40.660
conflict where he would certainly die but because his leg's broken he's not drafted and so the the farm
00:51:46.540
the farmer says you know good luck bad luck you know who knows and it seems to me that there's two issues
00:51:52.600
one as this calculation gets longer the terms first of all get get harder and harder to foresee
00:51:59.660
and then second of all they get larger and larger so this is sort of like a function of almost like
00:52:06.000
chaos theory right it's like what you what would seem very strange to me and again maybe it's sort
00:52:11.260
of true from this perspective of like this perfect being who can sort of calculate these things out
00:52:17.280
but once you've sort of specified what you're what you're trying to maximize and set it in our terms
00:52:22.340
you can find examples where it's like well should this visigoth save this baby in the woods
00:52:28.680
well if it does that leads to hitler if the visigoth leaves leaves the baby in the woods you
00:52:33.980
know we never get hitler right and that's because effects sort of expand just like how you know if
00:52:39.560
you go back a thousand years pretty much everyone is is your parent right or 10 000 years or however
00:52:45.020
far you go back but like pretty much everyone living eventually becomes your parent because all the
00:52:49.380
effects get mixed and i think probably causes are sort of similar to that where they just they get
00:52:54.520
mixed together and so you have these massive expected terms and they seem totally defined by what
00:53:02.440
you count as foreseeable i mean you can always say well what was foreseeable and what wasn't foreseeable and i agree like
00:53:06.280
that's certainly a reply but it just seems that when we try to make this stuff really specific the reason i
00:53:13.560
say to be wary about it is not that i think that it's automatically wrong it's that any attempt to try
00:53:19.400
to make it into something very specific and calculable to me almost always appears to be
00:53:24.680
wrong and there are always philosophers in the literature who are pointing out well wait a minute
00:53:29.220
you can't calculate it this way because that leads to this and you can't calculate it that way
00:53:32.580
and i think the effective altruism movement is in a sense trying to do exactly that and while many within the movement do not take it
00:53:39.080
so seriously that they are trying to maximize something that they can sort of
00:53:44.340
specifically quantify some people do and i think sam bankman-fried was one of
00:53:50.920
them and while i cannot personally say that that actually directly led to his actions i think that
00:53:57.940
given the evidence of the case you could reasonably say that it might have contributed that his
00:54:05.260
takes on risk and this notion of maximization and having something very specific in mind that he's
00:54:10.020
trying to maximize i think very well could have led to the ftx implosion and therefore it's an
00:54:16.680
instance of trying to essentially import academic moral philosophy into the real world and just
00:54:21.420
crashing on the rocks of the real world okay well just briefly on sam bankman-fried i would think that
00:54:27.400
what's parsimonious to say at this point about him is that he clearly has a screw loose or at least
00:54:34.760
you know some screws loose precisely where he should have had them turned down you know just in this
00:54:42.280
area of moral responsibility and thinking reasonably about the effects his actions would have or would
00:54:51.240
be likely to have on the lives of other people right and the stuff that's come
00:54:56.520
out since i did my last podcast on him has been pretty unflattering with respect to just
00:55:03.520
how he was thinking about morality and consequences but i mean to come back to the fundamental issue
00:55:09.420
here again consequentialism isn't a decision procedure right it's not a method of arriving
00:55:17.080
at moral truth it's a claim about what moral truth in fact is right and what makes a proposition
00:55:25.860
true so that distinction is enormously important because yeah i fully agree with you that it's
00:55:33.200
surprisingly difficult to believe that you understand what the consequences of anything
00:55:40.600
will be ultimately and there are many reasons for this i mean there's the fact that there are
00:55:45.540
inevitably trade-offs right you do one thing and by definition you at least have opportunity
00:55:51.580
costs incurred by doing that thing and it's impossible to assess counterfactual states of the world
00:55:58.260
right you just don't know what the world line looks like where you did the opposite
00:56:03.860
thing and as you point out in one of your essays many harms and goods are not
00:56:11.040
directly comparable you put it this way in mathematical terms you know the set of all possible experiences is
00:56:17.920
not well ordered right and so it's impossible to say how many broken toes are the equivalent evil to the
00:56:25.860
loss of a person's life right but it seems like in consequentialist terms you should be able to just
00:56:32.920
do the math and just keep adding broken toes and at a certain point it would be
00:56:40.680
quote good in moral terms to sacrifice one innocent human life to prevent a certain number of broken toes
00:56:48.780
in this world right and yeah that just may not be the way the world is for a variety of
00:56:56.460
reasons that we can talk about but i mean it seems our moral intuitions balk at those direct comparisons
00:57:01.720
perhaps for good reasons perhaps for bad reasons i mean we're living in a world where it's not crazy to
00:57:07.640
think that we may ultimately change our moral intuitions and then there has to be some
00:57:13.380
place to stand where we can wonder whether that would be a good thing to do good in terms of consequences
00:57:20.080
right i mean would it be good if we could all take a pill that would rewrite our moral code
00:57:25.560
so that we suddenly thought oh yeah it's a straightforward calculation between broken toes and innocent
00:57:31.060
human life and here's the number right and now we all see the light you know we see the wisdom
00:57:36.900
of thinking in these ways because we've actually changed our moral toolkit by changing our brains
00:57:42.440
would that be good or would that be moral brain damage at the population level that's actually
00:57:49.240
a criticism that people have made of utilitarianism along exactly the lines you're describing where people
00:57:54.720
have basically said again this is sort of a game where i can add a term so what if in the
00:58:00.660
serial killer example i add the term that everyone on earth is a utilitarian and totally buys the fact
00:58:06.700
that you should sacrifice the few to save the many and then that actually ends up being positive and
00:58:12.460
then you can have a society where everyone's going around and it's like oh yeah you know samantha got
00:58:17.180
taken in by one of the serial killer surgeons last month what a tragedy for
00:58:22.180
us but you know it's all for the greater good yeah well that's the vision of morality
00:58:26.840
that i sketch in the moral landscape i mean the reason why i call it the moral landscape is that
00:58:31.540
i envision a space of all possible experience where there are many peaks and many valleys right
00:58:39.060
there are many high spots and not so high spots and some high spots are very far away from
00:58:46.200
what we would consider a local peak and to get there would be a horror show of changes but maybe there are
00:58:54.980
some very weird places where it's possible to inhabit something like a peak of well-being
00:59:01.000
and the example i think i gave is an island of perfectly matched sadists and masochists
00:59:08.420
right you know like is that possible and maybe you know it's a cartoon example right but maybe
00:59:15.220
something like that is possible now i wouldn't want to be there because of all of my moral
00:59:20.100
intuitions that recoil both from sadism and from masochism but with the requisite minds maybe it's
00:59:28.360
possible that you could have a moral toolkit that perfectly fitted you to that kind of world
00:59:37.320
and did not actually close the door to other states of well-being that are in fact
00:59:44.360
required of any peak on that landscape i doubt it in this case but again that's just my moral
00:59:50.900
intuitions doubting it but the problem is with our moral intuitions i mean the
00:59:56.940
general claim i would make here is that there's just no guarantee that our intuitions about morality
01:00:03.740
reliably track whatever moral truths there are i mean they're the only thing we can use yeah and
01:00:10.720
we may one day be able to change them but it's always true to say that we could be wrong
01:00:16.360
and not know it and we might not know what we're missing in fact i mean in my view we're guaranteed
01:00:21.500
not to know what we're missing most of the time and so this just falls into the bin of you know it's
01:00:27.860
just nowhere written that it's easy to be as good as one could be in this life and in fact there may be
01:00:34.960
no way to know how much better one could be in ethical terms and this is both true of
01:00:41.500
us individually and collectively yeah i think that's absolutely right and it's why i personally
01:00:48.440
am very skeptical of moral philosophy and have been advocating for people to take it
01:00:54.560
less seriously and that's because you know you can very quickly get to some very strange places right
01:01:01.420
i mean as an example if you're trying to maximize well-being now again this depends on your
01:01:07.760
definition of well-being so let's take like a relatively reductive one like happiness or something
01:01:12.760
just for ease but if you're trying to do that it seems way easier to do that with ais than
01:01:19.800
with people like you can copy paste an ai right so if you make an ai and it has a good life you just
01:01:24.720
click copy paste you get another ai and you can fit a lot more ais into the universe than you can fit
01:01:31.180
human beings so again maybe there's some notion of well-being that is inaccessible to us or just very difficult to
01:01:38.840
specify and that sort of avoids these sorts of things but i honestly believe
01:01:44.380
and i think this is really getting to the heart of the matter that there are some sections of the
01:01:51.300
effective altruist movement who take that sort of reasoning very seriously and i just
01:01:58.600
strongly disagree with it and let me give an example of this which is william macaskill who i
01:02:03.100
think is a good philosopher and i read and reviewed his latest book and i know you talked to
01:02:08.540
him on the podcast about this book as well but in it i was sort of struck that when he's talking about
01:02:14.780
existential risks and he's talking about things that might end humanity he has this section on ai because
01:02:20.600
he views ai as a threat to humanity and it reads very differently than the other sections on
01:02:27.620
existential risk and that's because he takes great pains to emphasize that in the case of an ai
01:02:34.660
apocalypse civilization would sort of continue as ais and it's very difficult to even read that section
01:02:42.900
without sensing almost some sympathy for this probably because william macaskill has said
01:02:49.160
he accepts a lot of the conclusions of utilitarianism and from a utilitarian
01:02:55.300
perspective it's not necessarily a bad thing in the very long run i mean it's probably very bad when
01:03:01.200
it happens because somehow you have to get rid of all the humans and so on and that sort of
01:03:06.640
reasoning strikes me as almost a little bit dangerous particularly because the effective altruist
01:03:11.960
movement are the ones giving so much money to ai safety right so as much as it's strange to
01:03:19.020
say that people could be overly sympathetic to ais i think we're far enough into the future
01:03:25.400
that it is actually now a legitimate concern well for me everything turns on whether or not
01:03:32.820
these ais are conscious right and whether or not we can ever know with something like certainty
01:03:39.520
that they are right and i think this is a very interesting conversation we could have
01:03:44.400
about the hard problem of consciousness and what's likely to happen to us
01:03:50.360
when we're living in the presence of ai that is passing the turing test and yet we still
01:03:56.720
don't know whether or not anything's conscious and yet it might be claiming to be conscious and we might
01:04:01.240
have built it in such a way that we're helplessly attributing consciousness to it and many of us even
01:04:06.880
philosophers and scientists could lose sight of the problem in the first place like i
01:04:12.700
understand that we used to take the hard problem of consciousness seriously but i just went to westworld
01:04:17.960
and had sex with a robot and killed a few others and i'm pretty sure these things are
01:04:22.780
conscious right and now i'm a murderer it's just that we could lose sight of the problem and still not know
01:04:28.840
what we're dealing with but on the assumption that consciousness arises on the basis of information
01:04:37.160
processing in complex systems and that's still just an assumption although you're
01:04:43.040
on firm ground scientifically if you make it and on the assumption therefore that the emergence of
01:04:48.640
consciousness will in the end be substrate independent again it seems quite rational to make this
01:04:54.980
assumption but it's by no means guaranteed well then it would seem just a matter of time
01:05:00.360
before we intentionally or not implement consciousness in a non-biological
01:05:07.320
system and then the question is what is that consciousness like and what is possible
01:05:13.200
for it and so this is a place where i'm tempted to just bite the bullet
01:05:17.520
of implication here you know however unhappily and acknowledge that if we wind up building ai
01:05:23.880
that is truly conscious and open to a range of conscious experience that far exceeds our own in
01:05:33.040
both you know good and bad directions right which is to say they can be much happier than we could
01:05:37.980
ever be and more creative and more enjoying of beauty and all the rest more compassionate
01:05:44.960
you know just more entangled with reality in beautiful and interesting ways and they can suffer
01:05:51.460
more they can suffer the deprivation of all of that happiness more than we could ever
01:05:54.760
suffer it because we can't even conceive of it because we basically stand in relation to them the
01:06:00.140
way chickens stand in relation to us well if we're ever in that situation i would have to admit
01:06:07.180
that those beings now are more important than we are just as we are more important than chickens and
01:06:12.740
for the same reason and if they turn into utility monsters and start eating us because they like the
01:06:19.880
taste of human the way we like the taste of chicken well then yeah there is a moral hierarchy
01:06:26.360
depicted there and we're not at the top of it anymore and that's fine i mean that's not actually
01:06:32.120
a defeater to my theory of morality it's just that if morality relates to the conscious states of conscious
01:06:38.420
creatures well then you've just given me a conscious creature that's capable of much more important
01:06:44.620
conscious states if you'd like to continue listening to this conversation you'll need to
01:06:52.520
subscribe at samharris.org once you do you'll get access to all full-length episodes of the making
01:06:57.620
sense podcast along with other subscriber only content including bonus episodes and amas and
01:07:04.020
the conversations i've been having on the waking up app the making sense podcast is ad free and relies
01:07:09.440
entirely on listener support and you can subscribe now at samharris.org