#228 — Doing Good
Episode Stats
Length
2 hours and 15 minutes
Words per Minute
171.32133
Summary
In this episode, we revisit a conversation I originally recorded for the Waking Up app, which was released there a couple of weeks ago as a series of lessons. It was so well received that I decided to release it here on the podcast and put it outside the paywall. This seems like a better holiday message than most, and I hope you enjoy it as much as I enjoyed recording it. Today's episode is a companion piece to the conversation I had with the philosopher Will MacAskill four years ago (episode 44), which gets into some of the fundamental issues of ethics and moral philosophy; today's conversation is focused more on the actions we can all take to make the world better. I want to do this systematically, thinking through what it takes to save the most lives, reduce the worst suffering, or mitigate the most catastrophic risks, so as to help solve the worst problems we face more directly than by just talking about them. To that end, I've taken the pledge at Giving What We Can, the effective altruism organization founded by Will MacAskill and Toby Ord, to give a minimum of 10% of my pre-tax income to the most effective charities every year. Hundreds of listeners have now publicly taken that pledge as well, and GiveWell tells me they've received over $500,000 in donations attributable to this podcast, with another $500,000 expected over the next year from listeners who have set up recurring donations. That's $1 million and many lives saved, just as a result of some passing comments I've been making on this podcast. That is awesome! All of this inspired me to share this conversation with you here, and I hope you'll join me. -- Sam Harris
Transcript
00:00:00.000
Welcome to the Making Sense Podcast, this is Sam Harris.
00:00:24.040
Okay, so today I'm bringing you a conversation that I originally recorded for the Waking
00:00:32.140
Up app, and we released it there as a series of separate lessons a couple of weeks back,
00:00:38.520
but the response has been such that I wanted to share it here on the podcast and put it outside the paywall.
00:00:45.000
This seems like a better holiday message than most.
00:00:48.540
As I think many of you know, Waking Up isn't just a meditation app at this point.
00:00:54.420
It's really the place where I do most of my thinking about what it means to live a good life.
00:01:00.040
And this conversation is about generosity, and about how we should think about doing good
00:01:06.320
Increasingly, I'm looking to use this podcast and the Waking Up app to do more than merely
00:01:15.340
That's their primary purpose, obviously, but I want to help solve some of the worst problems
00:01:21.600
we face more directly than just talking about them.
00:01:26.180
And I want to do this systematically, really thinking through what it takes to save the
00:01:32.020
most lives or reduce the worst suffering or mitigate the most catastrophic risks.
00:01:37.880
And to this end, I've taken the pledge over at Giving What We Can, which is the foundation
00:01:43.600
on effective altruism started by the philosophers Will MacAskill and Toby Ord, both of whom have been guests on this podcast.
00:01:50.300
And this pledge is to give a minimum of 10% of one's pre-tax income to the most effective charities.
00:01:58.620
I've also taken the Founders' Pledge, which amounts to the same thing.
00:02:02.280
And I've had Waking Up become one of the first corporations to pledge a minimum of 10% of its profits to charity.
00:02:08.680
And the thinking behind all of this is the subject of today's podcast.
00:02:15.160
Of course, there is a bias against speaking about this sort of thing in public or even
00:02:22.580
It's often believed that it's better to practice one's generosity anonymously because then you
00:02:28.080
can be sure you're doing it for the right reasons.
00:02:30.260
You're not trying to just burnish your reputation.
00:02:32.700
As you'll hear in today's conversation, there are very good reasons to believe that this
00:02:37.800
is just not true and that the imagined moral virtue of anonymity is something we really need to rethink.
00:02:45.080
In fact, I've just learned of the knock-on effects of the few times I have discussed my own giving on this podcast.
00:02:53.300
And they're surprisingly substantial, just to give you a sense of it.
00:02:58.120
Last year, I released an episode titled Knowledge and Redemption, where we discussed the Bard
00:03:05.360
Prison Initiative, based on the PBS documentary that Lynn Novick and Ken Burns did.
00:03:13.720
And at the end, I think I asked you all to consider supporting that work, too.
00:03:17.680
And together, we donated $150,000, based on that one episode alone.
00:03:24.040
I've also occasionally mentioned on the podcast that I donate each month to the Against Malaria Foundation.
00:03:29.540
And it was actually my first podcast conversation with Will MacAskill that convinced me to do this.
00:03:35.020
And I do it through the charity evaluator, GiveWell.org.
00:03:39.280
Well, the good people at GiveWell just told me that they've received over $500,000 in donations
00:03:47.720
And they expect another $500,000 over the next year from podcast listeners who have set up their donations on a recurring basis.
00:03:54.660
So that's $1 million and many lives saved, just as a result of some passing comments I've been making on this podcast. That is awesome!
00:04:04.760
And then I've heard from Will MacAskill's people over at Giving What We Can, where I took their
00:04:11.140
10% pledge, which I haven't spoken about much, but it seems that hundreds of you have also
00:04:19.000
taken that pledge, again, unsolicited by me, but specifically attributing this podcast and these conversations as the inspiration.
00:04:28.020
That's hundreds of people, some of whom may be quite wealthy or will become wealthy, who
00:04:35.380
have now publicly pledged to give a minimum of 10% of their pre-tax income to the most
00:04:42.480
effective charities every year for the rest of their lives.
00:04:49.340
So all of this inspired me to share this conversation from the Waking Up app.
00:04:53.920
Again, this is a fairly structured conversation with the philosopher Will MacAskill.
00:04:58.720
Well, some of you may remember the conversation I had with Will four years ago on the podcast.
00:05:04.280
That was episode number 44, and that's a great companion to today's episode, because it gets
00:05:11.700
into some of the fundamental issues of ethics here.
00:05:15.100
Today's conversation is much more focused on the actions we can all take to make the world
00:05:18.740
better and how we should think about doing that.
00:05:22.000
Will and I challenged some old ideas around giving, and we discussed why they're really worth rethinking.
00:05:30.560
You'll also hear that there's still a lot of moral philosophy to be done in this area.
00:05:35.420
I don't think these issues are fully worked out at all, and that's really exciting, right?
00:05:40.060
There's a lot to talk about here, and there's something for moral philosophers to actually
00:05:44.720
do that might really matter to the future of our species.
00:05:48.080
In particular, I think there's a lot of work to be done on the ethics of wealth inequality,
00:05:53.840
both globally and within the wealthiest societies themselves.
00:05:57.800
And I'm sure I will do many more podcasts on this topic.
00:06:01.260
I suspect that wealth inequality is producing much, if not most, of our political conflict
00:06:07.960
at this point, and it certainly determines what we do with our resources.
00:06:12.740
So I think it's one of the most important topics of our time.
00:06:15.660
Anyway, Will and I cover a lot here, including how to choose causes to support, and how best
00:06:22.380
to think about choosing a career so as to do the most good over the course of one's life.
00:06:26.860
The question that underlies all of this, really, is how can we live a morally beautiful life,
00:06:34.420
which is more and more what I care about, and which the young Will MacAskill is certainly living.
00:06:41.460
Finally, I want to again recognize all of you who have made these donations and
00:06:47.580
pledges, as well as the many of you who have been supporting my work these many years,
00:06:52.780
and also the many of you who have become subscribers to the podcast in the last year.
00:06:57.820
I couldn't be doing any of these things without you, and I certainly look forward to what we're going to do together in the coming year. And now I bring you Will MacAskill.
00:07:25.580
So, I just posted a conversation that you and I had four years ago on my podcast onto Waking
00:07:33.520
up as well, because I thought it was such a useful introduction to many of the issues
00:07:38.320
we're going to talk about, and it was a different conversation because we got into very interesting
00:07:43.420
questions of moral philosophy that I think we probably won't focus on here.
00:07:48.240
So, it just seems like a great background for the series of lessons we're now going to sketch out here.
00:07:56.340
But for those who have not taken the time to listen to that just yet, maybe we should summarize
00:08:03.080
Who are you, Will, and how do you come to have any opinion about altruism, generosity, and what it means to live a good life?
00:08:13.900
So, I grew up in Glasgow, and I was always interested in two things.
00:08:18.340
One was kind of ideas, and then in particular philosophy when I discovered that.
00:08:25.820
So, as a teenager, I volunteered running summer camps for children who were impoverished and
00:08:36.680
But then it was when I came across the arguments of Peter Singer, in particular his arguments
00:08:41.960
that we have the moral obligation to be giving away most of our income to help people in very
00:08:47.360
poor countries, simply because such a move would not be a great burden on us.
00:08:53.020
It would be a financial sacrifice, but not an enormous sacrifice in terms of our quality
00:08:56.540
of life, but could make an enormous difference for hundreds of people around the world.
00:09:02.800
But kind of being human, I didn't really do very much on the basis of those arguments for
00:09:08.340
many years until I came to Oxford, met another philosopher called Toby Ord, who had actually
00:09:14.160
very similar ideas and was planning to give away most of his income over the course of his life.
00:09:19.700
And together we set up an organization called Giving What We Can, which encouraged people
00:09:24.060
to give at least 10% of their income to those organizations they think can do the most good.
00:09:29.520
Sam, I know that you have now taken that 10% pledge, and I'm delighted that that's the case.
00:09:34.720
And since then, this kind of set of ideas that were really just two, you know, very impractical
00:09:42.240
philosophy grad students kind of setting this up, not thinking, you know, I certainly never expected it to become a movement.
00:09:48.580
I was just doing it because I thought it was morally very important.
00:09:51.800
It turned out just a lot of people had had similar sets of ideas.
00:09:56.060
And Giving What We Can acted like a bit of a lightning rod for people all around the world
00:09:59.700
who were motivated to try to do good, but also to do it as effectively as possible.
00:10:05.680
Because at the time we had a set of recommended charities.
00:10:09.100
There was also the organization GiveWell, whose work we leaned extremely heavily on, making
00:10:13.920
recommendations about which charities they thought would do the most good.
00:10:19.180
And effective altruism at the time focused on charity in particular, and in particular
00:10:24.280
focused on doing good for people in extreme poverty.
00:10:29.120
So now most people in the effective altruism community, when they're trying to do good,
00:10:34.020
are doing so via their career in particular, and there's a much broader range of cause areas.
00:10:43.560
And in particular, and I think increasingly, these are issues that might potentially affect future generations.
00:10:50.100
And in particular, kind of risks to the future of civilization at all that Toby talked about
00:10:56.820
And I have a factoid in my memory, which I think I got from your original interview with
00:11:04.720
Am I correct in thinking that you were the youngest philosophy professor at Oxford?
00:11:12.280
So the precise fact is, when I joined the faculty at Oxford, which was at age 28, I'm pretty confident
00:11:18.960
I was the youngest associate professor of philosophy in the world at the time.
00:11:24.700
Well, no doubt you're quickly aging out of that distinction.
00:11:35.320
Well, so it's great to talk to you about these things because, as you know, you've been very influential on me.
00:11:42.400
You directly inspired me to start giving a minimum of 10% of my income to charity.
00:11:47.420
And also to commit waking up as a company to give a minimum of 10% of its profits to charity.
00:11:54.380
But I'm very eager to have this conversation because it still seems to me there's a lot
00:11:58.700
of thinking yet to do about how to approach doing good in the world.
00:12:03.240
There may be some principles that you and I either disagree about or maybe we'll agree
00:12:09.020
that we just don't have good enough intuitions to have a strong opinion one way or another.
00:12:13.520
But it really just seems to me to be territory that can benefit from new ideas and new intuition
00:12:24.340
And I think, you know, as I said, we will have a structured conversation here, which will unfold over a series of lessons.
00:12:31.640
And so this is really an introduction to the conversation that's coming.
00:12:36.020
And all of this relates specifically to this movement you started, Effective Altruism.
00:12:42.120
And we'll get very clear about what that means and what it may yet mean.
00:12:47.440
But this does connect to deeper and broader questions like, how should we think about doing good in the world?
00:12:58.820
And what would it mean to do as much good as possible?
00:13:02.620
And how do those questions connect to questions like, what sort of person should I be?
00:13:09.020
Or what does it mean to live a truly good life?
00:13:12.220
These are questions that lie at the core of moral philosophy and at the core of any person's
00:13:17.440
individual attempt to live an examined life and develop an ethical code and just form
00:13:30.440
I mean, we're all personally attempting to improve our lives, but we're also trying to
00:13:35.640
converge on a common picture of what it would mean for us to be building a world that is making
00:13:44.260
it more and more likely that humanity is moving in the right direction.
00:13:48.340
We have to have a concept of what the goal is here or what a range of suitable goals might be.
00:13:55.020
And we have to have a concept of when we're wandering into moral error, you know, personally and collectively.
00:14:02.320
And talking about the specific act of trying to help people, trying to do good in the world
00:14:10.760
really sharpens up our sense of the stakes here and the opportunities.
00:14:15.860
So I'm really happy to be getting into this with you.
00:14:19.280
Before we get into what effective altruism is, I think we should address a basic skepticism
00:14:27.460
that people have and even very rich people have, perhaps especially rich people have
00:14:34.440
this, is a skepticism about altruism itself and in particular a skepticism about charity.
00:14:43.100
And I think there are some good reasons to be skeptical about charity, at least in a local sense.
00:14:51.260
And I just want to lob you some of these reasons and we can talk about them.
00:14:56.300
Because I meet, I would imagine you've encountered this yourself, I meet some very fortunate people
00:15:02.680
who have immense resources and can do a lot of good in the world who are fundamentally skeptical of charity.
00:15:12.980
And the bad reason here that I always encounter is something we might call the myth of the self-made man.
00:15:19.600
The idea that there's somehow an ethically impregnable position to notice all the ways
00:15:28.000
in which you are responsible for all of your good luck, no matter how distorted this appraisal might be.
00:15:36.480
You weren't born into wealth and you made it all yourself and you don't owe anyone anything.
00:15:41.900
And in fact, giving people less fortunate than yourself any of the resources you've acquired would actually harm them.
00:15:51.280
I mean, you want to teach people to fish, but you don't want to give them fish.
00:15:54.720
There's some Ayn Randian ethic of radical selfishness combined with a vision of capitalism that, you
00:16:03.140
know, wherein free markets can account for, you know, every human problem simply by all of
00:16:09.320
us behaving like atomized selves, seeking our own happiness.
00:16:14.180
It will be no surprise to people who've listened to me that I think there's something deeply wrong with this.
00:16:19.880
But what do you do when someone hits you with this ethical argument that they're self-made
00:16:26.420
and everyone should aspire to also pull themselves up by their own bootstraps?
00:16:32.060
And we falsify something about the project of living a good life by even thinking in terms of charity?
00:16:45.260
So in the first case, the fact that you're a self-made man, I mean, I do disagree with the empirical claim.
00:16:51.980
I can predict 80% of the information about your income just from your place of birth.
00:16:58.340
Whereas, you know, you could be the hardest working Bangladeshi in the world, but if you're
00:17:03.520
born into extreme poverty in Bangladesh, it's going to be very difficult indeed to become rich.
00:17:10.180
But even if we accepted that, the fact that you have rightly earned your money yourself doesn't
00:17:15.860
mean that you don't have any obligations to help other people.
00:17:20.360
So Peter Singer's now very famous thought experiment: you're walking past a shallow pond.
00:17:27.820
You could easily kind of wade in as deep as you like.
00:17:32.420
And you can see that there's a child drowning there.
00:17:35.320
Now, perhaps it's the case that you're an entirely self-made man.
00:17:39.820
Perhaps it's the case that the suit that you wore, you justly bought yourself.
00:17:44.860
But that really seems neither here nor there with respect to whether you ought to try and
00:17:49.320
wade in and save this child who might be drowning.
00:17:52.180
And I think that's just quite an intuitive position.
00:17:54.600
In fact, this ideal of self-actualization, of kind of being the best version of yourself
00:18:00.580
that you can be, which is the kind of admirable version of this otherwise sometimes quite objectionable ideal.
00:18:08.340
I think that, like, part of being a self-actualized, authentically living person is living up to your own values.
00:18:15.580
And for most people in the world, you actually want to be a helpful, altruistic person.
00:18:20.640
Acting in that way is acting in accordance with your deepest values.
00:18:24.400
That is living an authentic and a self-actualized life.
00:18:27.800
And then just on the second point is about whether, well, maybe charity gets in the way.
00:18:33.480
Maybe it's actually harmful because it makes people rely on handouts.
00:18:37.300
Well, here we've got to just think about, you know, there is market failure where in the
00:18:45.040
case of public goods or externalities, markets don't do what they ought to do.
00:18:50.300
And perhaps you want government to step in, provide police or defense and streetlights
00:18:56.920
And even the most kind of hardcore libertarian free market proponent should accept that's the case.
00:19:03.960
But then there's also cases of democratic failure too.
00:19:06.940
So what if the potential people are not protected by functioning democratic governments?
00:19:21.480
They don't, you know, the future generations are disenfranchised.
00:19:24.140
So we shouldn't expect markets or government to be taking appropriate care of those individuals
00:19:30.140
who are disenfranchised by both the market and by even democratic institutions.
00:19:35.160
And so what else is there apart from philanthropy?
00:19:39.440
So I've spoken a lot about the myth of the self-made man whenever I criticize the notion of free will.
00:19:47.700
It's just obvious that however self-made you are, you didn't create the tools.
00:19:56.040
So if you are incredibly intelligent or have an immense capacity for effort, you didn't create those capacities.
00:20:06.000
You didn't pick the environmental influences that determined every subsequent state of your brain.
00:20:16.360
But as Will, you point out, where you were born also was a major variable in your success,
00:20:24.000
You didn't create the good luck not to be born in the middle of a civil war in a place
00:20:28.880
like Congo or Syria or anywhere else, which would be hostile to, you know, many of the opportunities you've had.
00:20:37.120
So there's something, frankly, obscene about not being sensitive to those disparities.
00:20:44.120
And as you point out, living a good life and being the sort of person you are right to
00:20:52.800
want to be has to entail some basic awareness of those facts and a compassionate impulse to
00:21:02.080
make life better for people who are much less fortunate than we are.
00:21:07.700
It's just, if your vision of who you want to be doesn't include being connected to the
00:21:15.720
rest of humanity and having compassion be part of the operating system that orients you
00:21:22.780
toward the shocking suffering of other people, you know, even when it becomes proximate, you
00:21:29.080
know, even when you're walking past Singer's shallow pond and you see someone drowning,
00:21:34.860
you know, we have a word for that orientation and it's sociopathy or psychopathy.
00:21:39.960
It's a false ethic to be so inured to the suffering of other people that you can just decide to
00:21:49.400
kind of close your accounts with even having to pay attention to it and, you know, all under
00:21:53.980
the rubric of being self-made. But, you know, none of this is to deny that in many cases things
00:22:02.140
are better accomplished by business than by charity, right, or by government than by charity,
00:22:07.180
right? So we're not denying any of that. I happen to think that building electric cars that people
00:22:12.460
actually want to drive, you know, may be the biggest contribution to fighting climate change or
00:22:18.400
certainly one of them, and may be better than what many environmental charities have managed to muster.
00:22:23.480
There are different levers to pull here to affect change in the world, but what also can't be
00:22:29.260
denied is that there are cases where giving some of our resources to people or to causes that need
00:22:37.220
them more than we do is the very essence of what it means to do good in the world. That can't be
00:22:43.240
disputed and Singer's shallow pond sharpens it up with a cartoon example, but it's really not such a
00:22:50.840
cartoon when you think about the world we're living in and how much information we now have and how
00:22:57.140
much agency we now have to affect the lives of other people. I mean, we're not isolated the way
00:23:03.340
people were 200 years ago and it is uncontroversial to say that anyone who would walk past a pond
00:23:09.580
and decline to save a drowning child out of concern for his new shoes or his new suit, that person is a
00:23:17.480
moral monster. And none of us want to be that sort of person. And what's more, we're right to not want
00:23:24.380
to be that sort of person. But given our interconnectedness and given how much information
00:23:29.500
we now have about the disparities in luck in this world, we have to recognize that though we're
00:23:36.980
conditioned to act as though people at a distance from us, both in space and in time,
00:23:43.540
matter less than people who are near at hand. If it was ever morally defensible, it's becoming less
00:23:50.160
defensible because the distance is shrinking. We simply have too much information. So that's,
00:23:56.460
there's just so many ponds that are in view right now. And a response to that is, I think, morally
00:24:03.120
important. But in our last conversation, Will, you made a distinction that I think is very significant
00:24:08.360
and it provides a much better framing for thinking about doing good. And it was a distinction between
00:24:14.620
obligation and opportunity. The obligation is Singer's shallow pond argument. You see a child
00:24:22.740
drowning, you really do have a moral obligation to save that child, or there's just no way to maintain
00:24:28.160
your sense that you're a good person if you don't. And then he forces us to recognize that really,
00:24:34.120
we stand in that same relation to many other causes, no matter how distant we imagine them to be.
00:24:41.100
But you favor the opportunity framing of, you know, racing in to save children from a burning house.
00:24:49.500
Imagine how good you would feel doing that successfully. So let's just put that into play
00:24:55.840
here because I think it's a better way to think about this whole project.
00:24:59.580
Yeah, exactly. So as I was suggesting earlier, just if, I mean, for most people around the world,
00:25:07.800
certainly in rich countries, if you look at your own values, well, one of those values is being a good
00:25:13.500
person. And you can see this if you think about examples like there's, you see a building on fire,
00:25:19.640
there's a, you know, a young girl kind of at the window, and you kick the door down and you run in
00:25:24.260
and you rescue that child. Like that moment would stay with you for your entire life. You would reflect
00:25:31.040
on that in your elderly years and think, ah, wow, I actually really did something that was, like, really meaningful.
00:25:37.260
It's just, it's worth lingering there because everyone listening to us knows down to their toes
00:25:44.460
that that would be, if not the defining moment in their life, you know, in the top five,
00:25:52.740
there's just no way that wouldn't be one of the most satisfying experiences. You could live to be
00:25:59.240
150 years old, and that would still be in the top five most satisfying experiences of your life.
00:26:07.660
And given what you're about to say, it's amazing to consider that and how opaque this is to most of
00:26:15.300
us most of the time when we think about the opportunities to do good in the world.
00:26:19.660
Exactly. And yeah, I mean, continuing this, imagine if you did a similar thing kind of several times. So
00:26:25.960
one week, you saved someone from a burning building. The next week, you saved someone from drowning.
00:26:32.560
The month after that, you saw someone having a heart attack, and you performed CPR and saved
00:26:36.440
their life too. You'd think, wow, this is a really special life that I'm living. But the truth is
00:26:42.000
that we have that opportunity to be as much of a moral hero, in fact, much more of a moral hero,
00:26:48.140
every single year of our lives. And we can do that just by targeting our donations to the most
00:26:55.240
effective charities to help those people who are poorest in the world. We could do that too,
00:27:00.760
if you wanted to choose a career that's going to have a really big impact on the lives of others.
00:27:06.440
And so it seems very unintuitive, because we're in a very unusual place in the world.
00:27:12.880
You know, it's only over the last couple of hundred years that there's been such a wild discrepancy
00:27:16.660
between rich countries and poor countries, where people in rich countries have 100 times
00:27:21.680
the income of the poorest people in the world. And where we have the technology to be able to change
00:27:26.880
the lives of people on other sides of the world, let alone the kind of technologies to,
00:27:33.000
you know, imperil the entire future of the human race, such as through nuclear weapons or climate
00:27:38.260
change. And so our moral instincts are just not attuned to that at all. They are just not sensitive
00:27:45.340
to the sheer scale of what an individual is able to achieve if he or she is trying to make a really
00:27:52.860
positive difference in the world. And so when we look at the, you know, history, you look at the
00:27:57.760
heroes, like, think about William Wilberforce or Frederick Douglass or the famous abolitionists,
00:28:05.120
people who kind of campaigned for the end of slavery and the amount of good they did,
00:28:08.940
or other of these kind of great moral leaders, and think, wow, these are like really special people
00:28:13.240
because of the amount they accomplished. I actually think that's just attainable for many,
00:28:17.700
many people around the world. Perhaps, you know, you're not quite going to be
00:28:20.900
someone who can do as much as contributing to the abolition of slavery, but you are someone who
00:28:26.360
can potentially save hundreds or thousands of lives or make a very significant difference to
00:28:32.300
the entire course of the future to come. Well, that's a great place to start. So now we will get
00:28:39.760
into the details. Okay, let's get into effective altruism per se. How do you define it at this point?
00:28:48.620
So the way I define effective altruism is that it's about using evidence and careful reasoning
00:28:55.640
to try to figure out how to do as much good as possible, and then taking action on that basis.
00:29:02.360
And the real focus is on the most good. And that's so important because people don't appreciate just
00:29:09.580
how great the difference and impact between different organizations are. When we've surveyed
00:29:14.560
people, they seem to think that the best organizations are maybe 50% better than typical
00:29:19.840
organizations like charities. But that's not really the way of things. Instead, it's that the best is
00:29:26.980
more like hundreds or thousands of times better than a typical organization. And we just see this
00:29:32.480
across the board when comparing charities, when comparing different sorts of actions.
00:29:36.220
So for global health, you will save hundreds of times as many lives by focusing on anti-malarial
00:29:43.660
bed nets and distributing them than focusing on cancer treatment. In the case of improving the
00:29:48.840
lives of animals in factory farms, you'll help thousands of times more animals by focusing on
00:29:52.820
factory farms than if you try to help animals by focusing on pet shelters.
00:29:57.060
If you look at risks to the future of civilization, man-made risks like novel pandemics are plausibly
00:30:04.260
just a thousand times greater in magnitude than natural risks like asteroids that we might be
00:30:11.100
more familiar with. And that just means that focusing not just on doing some amount of good,
00:30:16.380
but doing the very best is just, it's so important. Because it's easy, yeah, it's easy just not to think
00:30:21.840
about how wild this fact is. So like, imagine if this were true with consumer goods. Say
00:30:28.060
you want a beer. At one store, the beer costs $100. At another, it costs 10 cents. That would
00:30:34.360
just be completely mad. But that's the way things are in the world of trying to do good. It's like a
00:30:40.700
99.9% off sale or 100,000% extra free. By focusing on these best organizations, it's just the best deal
00:30:47.840
you'll ever see in your life. And that's why it's so important for us to highlight this.
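To make the arithmetic behind this point concrete, here is a minimal sketch in Python using purely illustrative placeholder numbers; the per-outcome costs below are hypothetical and are not figures cited in the conversation. The only point is that a fixed budget buys outcomes in proportion to cost-effectiveness, so a 100x difference in cost per outcome means a 100x difference in good done.

```python
# Illustrative cost-effectiveness comparison. All figures are hypothetical
# placeholders, chosen only to show how the ratio, not the donation size,
# determines how much good a fixed budget does.

budget = 10_000  # dollars available to donate

# Hypothetical cost (in dollars) to achieve one unit of good,
# e.g. one life saved or an equivalent outcome.
cost_per_outcome = {
    "typical charity": 350_000,
    "highly effective charity": 3_500,
}

for name, cost in cost_per_outcome.items():
    outcomes = budget / cost
    print(f"{name}: {outcomes:.3f} outcomes from a ${budget:,} budget")

ratio = cost_per_outcome["typical charity"] / cost_per_outcome["highly effective charity"]
print(f"The more effective option does {ratio:.0f}x as much good per dollar.")
```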
00:30:51.840
Okay, so I summarize effective altruism for myself now along these lines. So this is a working
00:31:00.620
definition, but it captures a few of the areas of focus and the difference between solving problems
00:31:09.100
with money and solving problems with your time or your choice of career. In your response to my
00:31:14.820
question, you illustrated a few different areas of focus. So you could be talking about the poorest
00:31:19.300
people in the world, but you could also be talking about long-term risk to all of humanity. So the
00:31:25.620
way I'm thinking about it now is that it's the question of using our time and or money to do one
00:31:32.380
or more of the following things. To save the greatest number of lives, to reduce the most suffering,
00:31:38.840
or to mitigate the worst risks of future death and suffering. So then the question of effectiveness
00:31:45.540
is, as you point out, there's so many different levels of competence and clarity around goals.
00:31:54.320
There may be very effective charities that are targeting the wrong goals, and there are ineffective
00:31:59.560
charities targeting the right ones. And this does lend some credence to the skepticism about charity
00:32:07.240
itself that I referenced earlier. And there's one example here which does a lot of work in illustrating
00:32:14.400
the problem. And this is something that you discuss in your book, Doing Good Better, which I recommend
00:32:18.980
that people read. But remind me about the ill-fated Play Pump. Yeah, so the now infamous Play Pump
00:32:28.880
was a program that got a lot of media coverage in the 2000s and even won a World Bank Development
00:32:38.440
Marketplace award. And the idea was identifying a true problem that many villages in sub-Saharan
00:32:47.240
Africa do not have access to clean drinking water. And its idea was to install a kind of children's
00:32:55.480
merry-go-round. One of the roundabouts, the things you push and then jump on and spin around. And that
00:33:01.480
would harness the power of children's play in order to provide clean water for the world. So by pushing on
00:33:08.200
this merry-go-round, you would pump up water from the ground, and it would act like a hand pump,
00:33:13.640
providing clean water for the village. And so people loved this idea, the media loved it.
00:33:19.000
They said, you know, providing clean water is child's play, or it's the magic roundabout. They loved to
00:33:23.320
put a pun on it. So it was, you know, it was a real hit. But the issue was that it was really a disaster,
00:33:30.200
this development intervention. So none of the local communities were consulted about whether they
00:33:35.400
wanted a pump. They liked the, you know, much cheaper, more productive, easier to use Zimbabwe hand
00:33:43.400
pumps that were sometimes in fact replaced by these play pumps. And moreover, in fact, the play pumps
00:33:51.320
were sufficiently inefficient that one journalist estimated that children would have to play on the pump
00:33:56.760
25 hours per day in order to provide enough water for the local community. But obviously children
00:34:04.040
don't want to play on this merry-go-round all the time. And so it would be left often to the elderly
00:34:09.080
women of the village to push this brightly colored play pump round and round. One of the problems was
00:34:14.440
that it didn't actually function like a merry-go-round where it would gather momentum and keep spinning.
00:34:21.000
It actually was just work to push, right? Well, exactly. The point of a
00:34:26.600
children's merry-go-round is you push it and then you spin. And if it's good, it's very well greased,
00:34:31.480
it spins freely. But you need to be providing energy into the system in order to pump water up
00:34:37.240
from the ground. And so it wouldn't spin freely in the same way. It was enormous amounts of work.
00:34:42.840
Children would find it very tiring. So it was just a fundamental misconception about engineering
00:34:47.720
to deliver this pump in the first place. Yeah, absolutely. And then there's just,
00:34:52.040
why would you think you can just go in and replace something that has already been quite
00:34:56.360
well-optimized to the needs of the local people? It seems quite unlikely. If this was such a good
00:35:02.440
idea, you've got to ask the question, why wasn't it already invented? Why wasn't it already popular?
00:35:06.920
There's not a compelling story about, well, it's a public good or something. There's
00:35:10.600
a reason why it wouldn't have already been developed. And that's, let alone the fact that
00:35:17.720
the main issue in terms of water scarcity for people in the poorest countries is access to
00:35:22.840
clean water rather than access to water. And so instead, organizations like Dispensers for Safe
00:35:28.600
Water, which install chlorine at the point of source, so at these hand pumps, chlorine dispensers
00:35:36.280
that they can easily put into the jerry cans that they use to carry water that sanitizes the water.
00:35:41.160
These are much more effective because that's really the issue is dirty water rather than access to
00:35:46.200
water most of the time. Okay. So this just functions as a clear example of the kinds of
00:35:53.560
things that can happen when the story is better than the reality of a charity. And this, if I recall
00:36:01.240
correctly, there were celebrities that got behind this and they raised, it had to be tens of millions
00:36:07.000
of dollars for the play pump. Even after the fault in the very concept was revealed, they persisted.
00:36:15.720
I mean, they kind of got locked into this project and I can't imagine it persists to this day, but
00:36:22.360
they kept doubling down in the face of the obvious reasons to abandon this project. I mean,
00:36:28.680
this included, you know, kids getting injured on these things and, you know,
00:36:32.040
kids having to be paid to run them. And it was a disaster any way you look at it. So this is the
00:36:39.080
kind of thing that happens in various charitable enterprises. And this is the kind of thing that
00:36:45.560
if you're going to be effective as an altruist, you want to avoid. Yeah, absolutely. And just on the
00:36:52.040
whether they still continue. So I haven't checked in the last few years, but a few years ago when I did,
00:36:57.080
they were still going. And they were funded mainly by corporations like Colgate-Palmolive,
00:37:03.320
and obviously in a much diminished capacity because many of these failures were brought to light.
00:37:08.680
And that was, you know, a good part of the story. But what it does illustrate is a difference between
00:37:13.080
the world of nonprofits and the, you know, business world where in the business world,
00:37:18.120
if you make a really bad product, then, well, at least if the market's functioning well, then
00:37:23.480
the company will go out of business. You just won't be able to sell it because
00:37:27.000
the beneficiaries of the product are also the people paying for it. But in the case of nonprofits,
00:37:32.200
it's very different. The beneficiaries are different from the people paying for the goods.
00:37:35.720
And so there's a disconnect between how well can you fundraise and how good is the
00:37:39.640
program that you're implementing. And so the sad fact is that bad charities don't die, or at least not nearly often enough.
00:37:46.200
Yeah, actually, that brings me to a question about perverse incentives here that I do think
00:37:52.520
animates sort of the more intelligent skepticism. And it is on precisely this point that, you know,
00:38:00.360
charities, good and bad, can be incentivized to merely keep going. I mean, just imagine a charity that
00:38:08.680
solves its problem. It should be that, you know, if you're trying to, let's say, you know,
00:38:14.200
eradicate malaria, you raise hundreds of millions of dollars to that end, what happens to your charity
00:38:21.800
when you actually eradicate malaria? We're not, we're obviously not in that position with respect
00:38:26.760
to malaria, unfortunately. But there are many problems where you can see that charities are
00:38:33.400
never incentivized to acknowledge that significant progress has been made and the progress is such that
00:38:40.600
it calls into question whether this charity should exist for much longer. And, you know, there may be
00:38:47.000
some, but I'm unaware of charities who are explicit about their aspiration to put themselves out of
00:38:53.560
business because they're so effective. Yeah, so I have a great example of this going wrong. So one charity
00:39:01.080
I know of is called ScotsCare and it was set up in the 17th century after the personal union of England
00:39:10.440
and Scotland. And there were many Scots who migrated to London and who were the poor, who were the indigent
00:39:15.880
in London. And so it makes sense for there to be a non-profit helping make sure that poor Scots
00:39:21.640
could, you know, have a livelihood and feed themselves and so on.
00:39:27.080
Is it the case that in the 21st century, poor Scots in London is the biggest global problem?
00:39:34.280
No, it's not. Nonetheless, ScotsCare continues to this day over 300 years later.
00:39:41.240
Are there examples of charities that explicitly would want to put themselves out of business? I mean,
00:39:45.240
Giving What We Can, which you've joined, is one. Our ideal scenario is a situation where the idea that
00:39:52.600
you would join a community because you're donating 10% is just weird, wild. Like if you become
00:39:59.480
vegetarian, very rare that you join the kind of a vegetarian society. Or if you decide not to be a
00:40:05.160
racist or decide not to be a liar, that's not like you join the No Liars Society or the No Racist Society.
00:40:12.600
And so that is what we're aiming for, is a world where it's just so utterly common sense that if
00:40:19.400
you're born into a rich country, you should use a significant proportion of your resources to try
00:40:23.960
and help other people, impartially considered. That the idea of needing a community or needing to be
00:40:30.840
part of this kind of club or broader group of people, that just wouldn't even cross your mind.
00:40:35.800
So the day that Giving What We Can is not needed is a very happy day from my perspective.
00:40:42.680
So let's talk about any misconceptions that people might have about effective altruism,
00:40:48.040
because you know, the truth is I've had some myself, even having prepared to have conversations
00:40:53.480
with you and your colleague Toby Ord, he's also been on the podcast. My first notion of effective
00:41:01.000
altruism was that, very much inspired by Peter Singer's Shallow Pond, that it really was just
00:41:08.360
a matter of focusing on the poorest of the poor in the developing world, almost by definition,
00:41:15.640
and that's kind of the long and the short of it. And you're giving as much as you possibly can
00:41:20.600
sacrifice, but the minimum bar would be 10% of your income. What doesn't that capture about
00:41:27.960
effective altruism? Yeah, thanks for bringing that up, because it is a challenge we faced that
00:41:34.200
the ideas that spread, the most memetic ideas with respect to effective altruism, are not necessarily
00:41:39.320
those that most accurately capture where the movement is, especially today. So as you say,
00:41:46.280
many people think that effective altruism is just about earning as much money as possible to give,
00:41:50.360
to GiveWell-recommended global health and development charities. But I think there's at least
00:41:55.160
three ways in which that misconstrues things. One is the fact that there are just a wide variety of
00:42:01.480
causes that we focus on now. And in fact, among the kind of most engaged people in effective altruism,
00:42:10.120
the biggest focus now is future generations and making sure that things go well for
00:42:16.360
the very many future generations to come, such as by focusing on kind of existential risks that Toby talks
00:42:22.360
about like man-made pandemics, like AI. Animal welfare is another cause area. It's definitely
00:42:30.360
by no means the majority focus, but is a significant minority focus as well. And there's just lots of
00:42:36.040
people trying to get better evidence and understanding of these issues and a variety of other issues too.
00:42:43.800
So voting reform is something that I have funded to an extent and championed to an extent. I'm really
00:42:50.200
interested in more people working on the risk of war over the coming century. And then secondly,
00:42:55.320
there are, as well as donating, which is a very accessible and important way of doing good.
00:43:01.640
There's just a lot of, in fact, the large majority of people within the effective altruism community
00:43:06.920
are trying to make a difference, not primarily via their donations, though often they do donate too,
00:43:12.200
but primarily through their career choice by working in areas like research, policy, activism.
00:43:19.320
And then just as a kind of framing in general, we just really don't think of effective altruism as a
00:43:24.920
set of recommendations, but rather like a research project and methodology. So it's more like the
00:43:31.800
science, you know, aspiring towards the scientific revolution than any particular theory.
00:43:36.840
And what we're really trying to do is to do for the pursuit of good what the scientific
00:43:41.640
revolution did for the pursuit of truth. It's an ambitious goal, but trying to make the pursuit
00:43:47.000
of good this more rigorous, more scientific enterprise. And for that reason, we don't see
00:43:52.440
ourselves as this kind of set of claims, but rather as a living, breathing, and evolving set of ideas.
00:43:59.160
Yeah, yeah. I think it's useful to distinguish at least two levels here. I mean, one is the specific
00:44:07.720
question of whether an individual cause or an individual charity is a good one. And, you know,
00:44:15.320
but by what metric would you even make that judgment? And how do we rank order our priorities? And like,
00:44:23.000
all of that is getting into the weeds of just what we should do with our resources. And obviously that
00:44:29.000
has to be done. And I think the jury is very much out on many of those questions. And I want to,
00:44:35.080
we'll get into those details going forward here. But the profound effect that your work has had on me
00:44:42.840
thus far arrives at this other level of just the stark recognition that I want to do good in the world
00:44:52.440
by default. And I want to engineer my life such that that happens whether I'm inspired or not.
00:45:02.440
The crucial distinction for me has been to see that there's the good feeling we get from philanthropy
00:45:09.240
and doing good. And then there's the actual results in the world. And those two things are only loosely
00:45:15.480
coupled. This is one of the worst things about us that we need to navigate around or at least be
00:45:21.960
aware of as we live our lives. We tend, you know, we, you know, human beings tend not to be the most
00:45:30.280
disturbed by the most harmful things we do. And we tend not to be the most gratified by the most
00:45:35.800
beneficial things we do. And we tend not to be the most frightened by the most dangerous risks we run,
00:45:42.760
right? And so it's just, we're very easily distracted by good stories and, and other bright,
00:45:49.640
shiny objects. And the framing of a problem radically changes our perception of it. So the
00:45:56.440
effect, you know, when, when you came on my podcast four years ago was for me to just realize, okay,
00:46:01.800
well, now we're talking about GiveWell's most effective charities and the Against Malaria Foundation
00:46:08.200
is at the top. I recognize in myself that I'm just not very excited about malaria or bed nets.
00:46:15.240
The problem isn't the sexiest for me. The remedy isn't the sexiest for me. And yet I rationally
00:46:22.040
understand that if I want to save human lives, this is, dollar for dollar, the cheapest way to save a
00:46:31.080
human life. So the epiphany for me is I just want to automate this. And you just give,
00:46:38.040
you know, every month to this charity without having to think about it. And so, you know, that,
00:46:43.400
that is gratifying to me to some degree, but the truth is I almost never think about malaria or the
00:46:50.440
Against Malaria Foundation or anything related to this project. And I'm doing the good anyway,
00:46:57.080
because I just decided to not rely on my moral intuitions day to day and my desire to rid the
00:47:04.520
world of malaria. I just decided to automate it. The recognition is that there's a difference between
00:47:11.880
committing in a way that really takes it offline so that you no longer have to keep being your better
00:47:17.320
self on that topic every day of the week. It's just wiser and more effective to decide in your,
00:47:23.160
in your clearest moment of deliberation, you know, what you want to do, and then just to build the
00:47:27.800
structure to actually do that thing. And that's just one of several distinctions that you have
00:47:33.640
brought into my understanding of how to do good. Yeah, absolutely. I mean, it just, we've got to
00:47:39.240
recognize that we are these fallible, imperfect creatures. Donating is much like, you know, paying
00:47:46.120
your pension or something. It's something you might think, oh, I really ought to do, but it's just hard to
00:47:50.120
get motivated by. And so we need to exploit our own irrationality. And I think that comes in two
00:47:57.320
stages. First, like building up the initial motivation. You can sustain that from, you know,
00:48:03.960
perhaps a feeling of moral outrage or just a real kind of yearning to start to do something. You can get
00:48:10.600
that. So in my own case, when I was deciding how much should I try and commit to, to give away over the
00:48:16.120
course of my life, I just, I looked up images of children suffering from horrific tropical diseases.
00:48:21.960
And that, you know, really stayed with me, kind of gave me that initial motivation. Or I still get
00:48:27.800
that if I read about the many close calls we had, where we almost had a nuclear holocaust over the
00:48:34.200
course of the 20th century. Or if I, you know, learn more history and think about what the world would
00:48:39.880
have been like if the Nazis had won the Second World War and created this global totalitarian state.
00:48:44.760
There's, you know, or fiction, like I was recently reading 1984. And again, this kind of
00:48:50.200
ways of just thinking just how bad and different the world could be, that can really create the
00:48:54.360
sense of like moral urgency. Or just, you know, on the news too, the kind of moral outrages we see
00:49:00.440
all the time. And then the second is how we direct that. And so in your own case, just saying,
00:49:06.440
yes, every time I have a podcast, I donate three and a half thousand dollars and it saves a life.
00:49:11.560
Very good way of doing that. Similarly, you can have a system where every time a paycheck comes
00:49:16.040
in, ten percent of it just, it doesn't even enter your bank account. It just goes to, or at least
00:49:21.800
immediately leaves to go to some effective charity that you've carefully thought about. And there's
00:49:27.320
other hacks too. So public commitments are a really big thing now. I think there's no way I'm backing
00:49:33.400
out of my altruism now. Too much of my identity is wrapped up in that now. So even if someone offered
00:49:40.600
me a million pounds and I could skip town, I wouldn't want to do it. It's part of who I am.
00:49:47.160
It's part of my social relationships. And that's, you know, yeah, that's very powerful too.
00:49:52.360
Actually, in a coming chapter here, I want to push back a little bit on how you are personally
00:50:00.120
approaching giving. Because I think I have some rival intuitions here that I want to see how they
00:50:05.880
survive contact with your sense of how you should live. There's actually a kind of related point here
00:50:11.800
where I'm wondering when we think of causes that meet the test of effective altruism, they still seem
00:50:20.200
to be weighted toward some obvious extremes, right? Like when you look at the value of a marginal
00:50:26.120
dollar in sub-Saharan Africa or Bangladesh, you get so much more of a lift in human well-being for
00:50:34.280
your money than you do or than you seem to in a place like the United States or the UK that, you know,
00:50:41.400
by default you generally have an argument for doing good elsewhere rather than locally. But I'm wondering
00:50:48.520
if this breaks down for a few reasons. So, I mean, just take an example like the problem of
00:50:55.880
homelessness in San Francisco, right? Now, leaving aside the fact that we don't seem to know what to
00:51:01.980
do about homelessness, it appears to be a very hard problem to solve. You can't just build shelters for
00:51:08.100
the mentally ill and substance abusers and call it a day, right? I mean, they quickly find that even
00:51:13.960
they don't want to be in those shelters and, you know, they're back out on the streets and so you
00:51:18.120
have to figure out what services you're going to provide and there's all kinds of bad incentives
00:51:23.000
and moral hazards here that, you know, when you're the one city that does it, well, then you're the
00:51:28.080
city that's attracting the world's homeless. But let's just assume for the sake of argument that we
00:51:33.140
knew how to spend money so that we could solve this problem. Would solving the problem of homelessness
00:51:38.840
in San Francisco stand a chance of rising to near the top of our priorities, in your view?
00:51:46.700
Yeah. So, it would all just depend on, like, the cost to solve homelessness and how that compared
00:51:53.120
with our other opportunities. So, in general, it's going to be the case that the very best opportunities
00:51:58.780
to improve lives are going to be in the poorest countries because the very best ways
00:52:06.000
of helping others have not yet been taken. So, malaria is still rife. It was wiped out in the
00:52:10.600
U.S. and certainly by the early 20th century. It's an easy problem to solve. It's very cheap.
00:52:16.240
And when we look at rich countries, the problems that are still left are, you know, the comparatively
00:52:21.840
harder ones to solve for whatever reason. So, like in the case of homelessness, I'm not, you know,
00:52:28.380
sure about the original source of this fact, but I have been told that, so, yeah, for those who
00:52:33.860
haven't ever lived in the Bay Area, the problem of homelessness is horrific there. There's just
00:52:38.320
people with severe mental health issues, clear substance abuse, just, like, everywhere on the
00:52:45.400
streets. It's just, it's so prevalent. It just amazes me that one of the richest countries in the world
00:52:49.380
and one of the richest places within that country is unable to solve this problem. But I believe,
00:52:55.700
at least, that in terms of funding at the local level, there's about $50,000 spent per homeless
00:53:02.100
person in the Bay Area. And what this suggests is that the problem is not to do with a lack of
00:53:07.560
finances. And so if you were going to contribute more money there, it's unlikely to make an
00:53:11.820
additional difference. Perhaps it's some perverse incentives effect. Perhaps it's government
00:53:16.560
bureaucracy. Perhaps it's some sort of legislation. I don't know. It's not an issue I know enough
00:53:22.520
about. But precisely because the U.S. is so rich, the San Francisco Bay Area is so rich,
00:53:27.400
it's that if this was something where we could turn money into a solution to the problem,
00:53:32.100
it more than likely would have happened already.
00:53:37.060
But that's not to say we'll never find issues in rich countries where you can do an enormous
00:53:40.820
amount of good. So Open Philanthropy, which is kind of a core effective altruist foundation,
00:53:48.680
one of its program areas is criminal justice reform. It started, I believe, about five years ago.
00:53:55.500
And it really did think that the benefits to Americans that it could provide by funding
00:54:03.540
changes to legislation to reduce the absurd rates of over-incarceration in the U.S.,
00:54:10.540
where, for context, the U.S. incarcerates five times as many people as the U.K. does on a per-person
00:54:16.760
basis. And there's a lot of evidence suggesting you could reduce that very significantly without
00:54:21.260
changing rates of crime. It seemed to be comparable to actually the best interventions in the poorest
00:54:28.060
countries. Of course, this has now become an even more well-funded issue. So I believe that
00:54:34.480
they're finding it harder to now, you know, make a difference by funding organizations that wouldn't
00:54:38.660
have otherwise been funded. But this is at least one example where you can get things that come up
00:54:43.640
that just for whatever reason have not yet been funded, kind of new opportunities, where you can do
00:54:48.300
as much good. It's just that I think they're going to be comparatively much harder to find.
00:54:53.440
Yeah, I think that this gets complicated for me when you look at just what we're going to
00:55:00.360
target as a reduction in suffering. I mean, it's very easy to count dead people, right? So if we're
00:55:07.540
just talking about saving lives, that's a pretty easy thing to calculate. If we can save more lives
00:55:13.900
in country X over country Y, well, then it seems like it's a net good to be spending our dollars
00:55:19.960
in country X. But when you think about human suffering, and when you think about how so much
00:55:26.320
of it is comparative, like the despair of being someone who has fallen through the cracks in a city
00:55:35.120
like San Francisco could well be much worse. I mean, I think there's certainly, I don't know what data we
00:55:41.340
have on this, but there's certainly a fair amount of anecdotal testimony about poor people in a country
00:55:47.240
like Bangladesh, while it's obviously terrible to be poor in Bangladesh, and there are many reasons to
00:55:53.840
want to solve that problem. But by comparison, when you look at, you know, homeless people on the streets
00:55:58.820
of San Francisco, they're not nearly as poor as the poorest people in Bangladesh, of course,
00:56:04.360
and nor are they politically oppressed in the same way. I mean, by global standards, they're barely
00:56:10.100
oppressed at all. But it wouldn't surprise me if we could do a complete psychological evaluation or,
00:56:17.220
you know, or just trade places with people in each condition, we would discover that the suffering of
00:56:23.900
a person who is living in one of the richest cities in the world and is homeless and drug addicted and
00:56:33.380
mentally ill or to, you know, pick off that menu of despair is actually the, you know, the worst
00:56:39.320
suffering on earth. And again, we just have to stipulate that we could solve this problem dollar
00:56:45.560
for dollar in a way that, you know, we admit that we don't know how to at the moment. It seems like
00:56:51.740
just tracking, you know, the GDP in each place and the amount of money it would take to
00:56:57.220
deliver a meal or get someone clothing or shelter (the power of the
00:57:02.720
marginal dollar calculation) doesn't necessarily capture the deeper facts of the case, or at least
00:57:09.840
that's my concern. So I'd actually agree with you on the question of, you know, take someone who,
00:57:17.860
yeah, they're mentally unwell, they have drug addictions, they're homeless in the San Francisco
00:57:22.760
Bay Area, how bad is their day? And then take someone living in extreme poverty in India or
00:57:28.900
sub-Saharan Africa, how bad is their typical day? Yeah, I wouldn't want to make a claim that
00:57:36.020
the homeless person in the U.S. has a better life than the extreme poor. You know, I think
00:57:43.040
it's not so hard to just hit rock bottom in terms of human suffering. And I do just think that the
00:57:50.380
homeless in the Bay Area just seem to have like really terrible lives. And so the question,
00:57:56.520
the question of how promising it is as a cause has
00:58:01.980
much more to do with this question of whether the low hanging fruit has already been taken,
00:58:06.640
where, you know, just think about the most sick you've ever been and how horrible that was.
00:58:12.820
And now imagine, you know, enduring that for months, having malaria, for example,
00:58:19.580
and that you could have avoided that for a few dollars. That's like, you know,
00:58:25.760
an incredible fact. And that's where the real difference is, I think, is in kind of the cost
00:58:31.040
to solve a problem, rather than necessarily like the kind of per person suffering. Because while
00:58:37.220
rich countries are in general happier than poorer countries, the worst off people, I mean,
00:58:42.960
especially in the US, which has such a high variance in life outcomes,
00:58:48.400
the lives of the worst off people can easily be much the same.
00:58:51.600
Yeah, I guess there are some other concerns here that I have, and this speaks to
00:58:56.300
a deeper problem with consequentialism, which is our orientation here, you know,
00:59:02.360
not exclusively, and people can mean many things by that term. But there's just a problem in how
00:59:08.480
you keep score, because, you know, obviously, there are bad things that can happen, which have
00:59:14.300
massive silver linings, right, which, you know, have good consequences in the end. And there
00:59:19.080
are apparently good things that happen that actually have bad consequences elsewhere,
00:59:24.240
or in the fullness of time. And it's hard to know when you can actually assess
00:59:28.900
what is true on net, how you get to the bottom line of the
00:59:36.040
consequences of any actions. But like, when I think about the knock on effects of letting
00:59:41.100
a place like San Francisco become a slum, effectively, right, like, you just think of like,
00:59:46.920
the exodus in tech from California, at this moment, I don't know how deep or sustained it'll be. But
00:59:53.820
I've lost count of the number of people in Silicon Valley, who I've heard are leaving California at
01:00:01.500
this point. And the homelessness in San Francisco is very high on the list of reasons why. That strikes
01:00:08.820
me as a bad outcome that has far reaching significance for society. And again, it's the kind of thing that's
01:00:16.960
not captured by just counting bodies or just looking at how cheap it is to buy bed nets.
01:00:25.440
You know, and I'm sort of struggling to find a way of framing this that is fundamentally different from
01:00:30.940
Singer's Shallow Pond that allows for some of the moral intuitions that I think many people have here,
01:00:36.640
which is that there's an intrinsic good in having a civilization that is producing the most
01:00:45.940
abundance possible. I mean, we want a highly technological, creative, beautiful civilization.
01:00:53.800
We want gleaming cities with beautiful architecture. We want institutions that are massively well-funded,
01:01:01.660
producing cures for diseases, rather than just things like bed nets, right? And we want beautiful art.
01:01:10.440
There are things we want, and I think there are things we're right to want that are only compatible
01:01:16.680
with the accumulation of wealth in certain respects. I mean, on Singer's framing,
01:01:26.140
those intuitions are just wrong, or at least they're premature, right? I mean, we have to save the
01:01:31.420
last child in the last pond before we can think about funding the Metropolitan Museum of Art,
01:01:38.620
right, on some level. And many people are allergic to that intuition for reasons that I understand,
01:01:47.000
and I'm not sure that I can defeat Singer's argument here, but I have this image that,
01:01:51.620
I mean, essentially we have a lifeboat problem, right? Like, you and I are in the boat, we're safe,
01:01:56.780
and then the question is, how many people can we pull in to the boat and save as well? And, you know,
01:02:02.380
as with any lifeboat, there's a problem of capacity. We can't save everyone all at once,
01:02:09.340
but we can save many more people than we've saved thus far. But the thing is, we have a
01:02:14.880
fancy lifeboat, right? I mean, civilization itself is a fancy lifeboat, and, you know,
01:02:20.240
there are people drowning, and they're obviously drowning, and we're saving some of them, and you
01:02:24.880
and I are now arguing that we can save many, many more, and we should save many, many more.
01:02:29.160
And anyone listening to us is lucky to be safely in this lifeboat with us. And the boat is not as
01:02:35.220
crowded as it might be, but we do have finite resources in any moment. And the truth is,
01:02:42.700
because it's a fancy lifeboat, you know, we are spending some of those resources on things other
01:02:48.700
than reaching over the side and pulling in the next drowning person. So, you know, there's a bar
01:02:53.540
that serves very good drinks, and, you know, we've got a good internet connection, so we can stream
01:02:58.580
movies. And, you know, while this may seem perverse, again, if you extrapolate from here,
01:03:04.340
you realize that I'm talking about civilization, which is a fancy lifeboat. And there's obviously
01:03:10.000
an argument for spending a lot of time and a lot of money saving people and pulling them in.
01:03:16.680
But I think there's also an argument for making the lifeboat better and better, so that we have
01:03:23.540
more smart, creative people incentivized to spend some time at the edge pulling people in
01:03:30.380
with better tools, tools that they only could have made had they spent time elsewhere in the boat
01:03:36.820
making those tools. And this moves to the larger topic of just how we envision building a good society,
01:03:45.720
even while there are moral emergencies right now somewhere that we need to figure out how to address.
01:03:55.420
Yeah, so this is a crucially important set of questions. So the focus on knock-on effects is
01:04:02.460
very important. So when you, again, let's just take the example of saving a life, you don't just save a
01:04:08.120
life, because that person goes on and does stuff. They make the country richer, perhaps they go and
01:04:13.440
have kids. Perhaps, you know, they will emit CO2. That's a negative consequence. They'll innovate,
01:04:19.900
they'll invent things, maybe they'll create art. There's this huge stream, basically from now until
01:04:24.980
the end of time, of consequences of you doing this thing. And it's quite plausible that the knock-on
01:04:29.600
effects, though much harder to predict, are much bigger effects than the short-term effects,
01:04:34.440
the benefits to the person whose, you know, life you saved or who you've benefited.
01:04:38.300
In the case of homelessness in the Bay Area versus extreme poverty in a poor country, I'd want to say
01:04:47.360
that if we're looking at knock-on effects of one, we want to do the same for both. So, you know,
01:04:51.600
one thing I worry about over the course of the coming, you know, decades, but also even years,
01:04:57.280
is a possibility of a war between India and Pakistan. But it's a fact that rich democratic
01:05:04.100
countries seem not to go to war with each other. So one knock-on effect of, you know, saving
01:05:10.680
lives or helping development in India is perhaps we get to that point where India is rich enough
01:05:16.820
that it's not going to want to go to war because, you know, the cost benefit doesn't pay out in the
01:05:20.980
same way. That would be another kind of potential good knock-on effect. And that's not to say that
01:05:25.620
the knock-on effects favor the extreme poverty intervention compared to the homelessness. It's just
01:05:30.900
that there's so many of them. It's very, very hard to understand how these play out.
01:05:37.760
And I think actually you then mentioned, well, we want to achieve some of the great things. So
01:05:42.880
we want, you know, to achieve the kind of highest apogees of art, of development. I mean,
01:05:51.320
a personal thing I'm sad that I will never get to see is the point in time where we just truly
01:05:56.420
understand science, where we have actually figured out the fundamental laws, especially the fundamental
01:06:01.800
physical laws. But also just, you know, great experiences too. People having, you know, peaks
01:06:08.900
of happiness that, you know, make the very greatest achievements of the present day, the very
01:06:15.480
greatest peaks of joy and ecstasy of the present day, seem basically,
01:06:21.560
almost, you know, insignificant in comparison. That's something that really,
01:06:24.880
I do think is important. But I think for all of those things, once you're then starting to take
01:06:31.120
that seriously and take knock-on effects seriously, that's the sort of reasoning that leads you to
01:06:36.200
start thinking about what I call long-termism, which is the idea that the most important aspect of our
01:06:41.980
actions is the impact we have over the very long run, and will make us want to prioritize things like
01:06:49.140
ensuring we don't have some truly massive catastrophe as a result of a nuclear war or a man-made pandemic
01:06:56.760
that could derail this process of continued economic and technological growth that we seem to be
01:07:03.020
undergoing, or could make us want to avoid certain kinds of just very bad value states, like the lock-in
01:07:09.300
of a global totalitarian regime, another thing that I'm particularly worried about in terms of the
01:07:14.460
future of humanity. Or perhaps it is just that we're worried that technological and economic growth
01:07:20.080
will slow down, and what we want to do is spur, you know, continued innovation into the future.
01:07:28.700
And I think there actually are just really good arguments for that. But I think I would be surprised
01:07:33.420
if, when that is your aim, the best way of doing that goes via some route, such as focusing on
01:07:41.920
homelessness in the Bay Area, rather than trying to kind of aim at those ends more directly.
01:07:47.680
Okay, well, I think we're going to return to this concept of the fancy lifeboat at some point,
01:07:52.640
because I do want to talk about your personal implementation of effective altruism in a subsequent
01:07:58.600
lesson. But for the moment, let's get into the details of how we think about choosing a cause in the
01:08:06.360
next chapter. Okay, so how do we think about choosing specific causes? I've had my own adventures
01:08:14.360
and misadventures with this since I took your pledge. Before we get into the specifics, I just want to
01:08:19.460
point out a really wonderful effect on my psychology that is, I mean, you know, I've always been, I think,
01:08:28.280
by real-world standards, fairly charitable. So giving to organizations that inspire me or who I think
01:08:36.360
are doing good work is not a foreign experience for me. But since connecting with you and now since
01:08:44.480
taking the pledge, I'm now, you know, aggressively charitable. And what this has done to my brain is
01:08:51.180
that there is a pure pleasure in doing this. And there's a kind of virtuous greed to help that gets
01:09:00.920
kindled. And rather than seeing it as an obligation, it really feels like an opportunity. I mean, just you
01:09:07.820
want to run into that building and save the girl at the window. But across the street, there's a boy at
01:09:14.200
the window and you want to run in over there too. And so this is actually a basis for psychological
01:09:20.960
well-being. I mean, it makes me happy to put my attention in this direction. It's the antithesis of
01:09:26.700
feeling like an onerous obligation. So anyway, I'm increasingly sensitive to causes that catch my eye
01:09:33.900
and I want to support. But I'm aware that I am a malfunctioning robot with respect to my own, you
01:09:40.940
know, moral compass. As I said, you know, I know that I'm not as excited about bed nets to stave off
01:09:47.940
malaria as I should be. And I'm, you know, I'm giving to that cause nonetheless, because I just recognize
01:09:54.080
that the analysis is almost certainly sound there. But for me, what's interesting here is when I think
01:10:00.800
about giving to a cause that really doesn't quite meet the test, well, that then achieves the status
01:10:07.180
for me of a kind of guilty pleasure. Like I feel a little guilty that I, you know, I gave that much
01:10:12.880
money to the homeless charity because, you know, Will just told me that that's not going to meet the
01:10:17.560
test. So, okay, that's going to have to be above and beyond the 10% I pledged to the most effective
01:10:23.380
charities. And so just having to differentiate the charitable donations that meet the test and
01:10:29.040
those that don't is an interesting project psychologically. I don't know, it's just, it's
01:10:33.460
just a very different territory than I've ever been with respect to philanthropy. But so this raises the
01:10:39.240
issue. So one of these charities is newly formed, right? So it does not yet have a long track record.
01:10:45.200
I happen to know the people who, or some of the people who created it. How could you fund a new
01:10:50.360
organization with all these other established organizations that have track records that you
01:10:55.440
can assess competing for your attention? The first thing I want to say is just: does this count towards
01:11:01.460
the pledge? One thing I definitely want to disabuse people of is the notion that we think
01:11:07.280
of ourselves as the authority on, like, what is effective. These are our best guesses. We,
01:11:11.660
GiveWell, and other organizations have put enormous amounts of research into this, but these are still
01:11:16.880
estimates. There's plenty of things you can kind of disagree with. And it's actually quite exciting
01:11:21.480
often to have someone come in and start disagreeing with us. Because maybe we're wrong, and that's
01:11:25.380
great. We can change our mind and have better beliefs. And the second thing is that early stage
01:11:30.820
charities absolutely can compete with charities with a more established track record. In just the same
01:11:36.820
way as if you think about financial investment, you know, investing in bonds or the stock market
01:11:42.700
is a way of making a return, but so is investing in startups. And if you had the view that you should
01:11:47.740
never invest in startups, then that would definitely be a mistake. And actually quite a significant
01:11:52.280
proportion of GiveWell's expenditure each year is on early stage non-profits that have the potential
01:11:58.800
in the future to become top recommended charities. And so a set of questions I would ask for any
01:12:05.540
organization I'm looking at is what is the cause that it's focused on? What's the program that it's
01:12:09.800
implementing? And then who are the people who are kind of running that program? But the kind of
01:12:15.920
background is that there's just some things we know do enormous amounts of good and have this enormous
01:12:20.340
amount of evidence for them. And so I feel like we want to be focusing on things where either there's
01:12:26.580
like very promising evidence and we could potentially get more, or it's something where in the nature of
01:12:32.620
the beast we cannot get very high quality evidence, but we have good compelling arguments for thinking
01:12:38.180
that this might be super important. So, you know, funding clean energy innovation, funding, you know,
01:12:44.120
new developments in carbon capture and storage or nuclear power or something. It's not like you can
01:12:48.560
do a randomized controlled trial on that, but I think there's good kind of theoretical arguments for
01:12:52.940
thinking that might be an extremely good way of combating climate change. It's worth bearing in
01:12:57.700
mind that like saying something that is the very best thing you can do with your money is an extremely
01:13:01.760
high bar. So, you know, if there's tens of thousands of possible organizations, there can only be
01:13:07.440
one or two that, you know, have the biggest bang for the buck.
01:13:12.140
All right. Well, it sounds like I'm opening a guilty pleasures fund to run alongside the Waking Up
01:13:18.080
I'm very glad that they're pleasures. I'm glad that you are sufficiently motivated. You know, it's a very
01:13:22.200
good instinct that you find out about these problems in the world, which are really bad and are
01:13:29.260
motivated to want to help them. And so I'm really glad you think of them as pleasures. I don't think
01:13:34.220
you should be beating yourself up, even if it doesn't seem like the very most optimal thing.
01:13:40.320
Yeah, yeah. No, no, I'm not. And in fact, I have an even guiltier pleasure to report, which,
01:13:45.480
you know, at the time I did it, you know, this is, this is not through a charity. This is just a,
01:13:50.980
you know, personal gift, but, and this does connect back to just the kind of lives we want to live and
01:13:58.060
how that informs this whole conversation. I remember I was listening to the, the New York
01:14:03.180
Times Daily podcast, and this was when the COVID pandemic was really peaking in the U.S. and
01:14:10.860
everything seemed to be in free fall. They profiled a couple who had a restaurant in, I think it was in
01:14:19.280
New Orleans, and they have an autistic child. And they were, you know, everyone knows that restaurants
01:14:25.760
were among the first businesses crushed by the pandemic for obvious reasons. And it was just a
01:14:32.420
very affecting portrait of this family trying to figure out how they were going to survive and get
01:14:38.080
their child the help she, I think it was a girl, needed. So it was exactly the little girl fell down
01:14:46.500
the well sort of story compared to the genocide that no one can pay attention to because genocides are
01:14:53.400
just boring. And so I was completely aware of the dynamics of this. Helping these people could not
01:15:01.140
survive comparison with just simply buying yet more bed nets. And yet the truth is I really wanted to
01:15:11.280
help these people, right? So, you know, I just sent them money out of the blue. And it feels like an
01:15:17.960
orientation that, I mean, there are two things here that kind of rise to the defense of this kind of
01:15:24.280
behavior. It feels like an orientation that I want to support in myself because it does seem like a
01:15:34.040
truly virtuous source of mental pleasure. I mean, it's better than almost anything else I do,
01:15:40.700
when spending money selfishly. And psychologically, it's both born of a felt connection and it kind of
01:15:47.020
ramifies that connection. And there's something about just honoring that bug in my moral hardware
01:15:53.060
rather than merely avoiding it. It seems like it's leading to just finding greater
01:16:01.700
happiness in helping people in general, you know, in the most effective ways and in, you know,
01:16:07.300
middling effective ways. Feeling what I felt doing that is part of why I'm talking to you now,
01:16:14.000
trying to truly get my philanthropic house in order, right? So it sort of seems all of a piece
01:16:19.840
here. And I do think we need to figure out how to leverage the salience of connection to other people
01:16:27.560
and the pleasure of doing good. And if we lose sight of that, if we just keep saying that you can spend
01:16:34.660
$2,000 here, which is better than spending $3,000 over there, completely disregarding the experience
01:16:42.180
people are having engaging with the suffering of others, I feel like something is lost. And I guess
01:16:48.680
there's another variable I would throw in here, though, you know, this wasn't an example of it. This
01:16:52.440
wasn't a local problem I was helping to solve. But had it been a local problem, had I been offered the
01:16:58.420
opportunity to help my neighbor, you know, at greater than rational expense, that might have been
01:17:04.200
the right thing to do. I mean, again, it's falling into the guilty pleasure bin here compared to the
01:17:09.240
absolutely optimized, most effective way of relieving suffering. But I don't know, I just feel like
01:17:15.960
there's something lost. If we're not in a position to honor a variable like locality ever, we're not
01:17:23.820
only building the world or affecting the world here, we're building our own minds. We're building
01:17:29.880
the very basis by which we would continue to do good in the world in coming days and weeks and months
01:17:36.360
and years. Yeah. So, I mean, I essentially completely agree with you and think it's really
01:17:41.220
good that you supported that family. And yeah, it reminds me in my own case, something that stayed
01:17:48.060
with me. So I lived in Oakland, California, for a while, in a very poor, predominantly black
01:17:52.980
neighborhood. And I was just out on a run. And a woman kind of comes up to me and, like, asks if I can
01:17:59.440
stop and help for a second. And I thought she was just going to want help like carrying groceries or
01:18:04.460
something and be fine. It turns out she wanted me to move her couch like all the way down the street.
01:18:09.500
It took like two hours. And that was out of my working day as well, because
01:18:15.860
I didn't have lunch. And I just don't regret the use of that time at all. And why is that? And even
01:18:23.040
from a rational perspective, I'm not saying merely that, oh, I shouldn't beat
01:18:28.040
myself up or something. And I think it's because most of the time, we're just not facing the bigger
01:18:34.460
question of, like, what individual action do we do in any particular case, which is what moral
01:18:42.100
philosophy has typically focused on, kind of act consequentialism. That's not typically the decisions
01:18:47.900
we face. We face these much larger decisions, like what career to pursue or something. Sometimes those
01:18:52.740
are more like actions. But we also face the question of just what person to be, what kind of
01:18:58.040
motivations and dispositions do I want to have? And I think the idea of me becoming this like
01:19:04.320
utility maximizing robot that is like utterly cold and calculating all the time,
01:19:10.320
I think is certainly not possible for me, given just the fact that I'm an embodied human being.
01:19:16.520
But also probably not desirable either. I don't think, you know, I don't think that an effective
01:19:22.460
altruism movement would have started had we all been these cold utility maximizing robots.
01:19:27.840
And so I think cultivating a personality such that you do get joy and reward and motivation from
01:19:34.660
being able to help people and get that feedback. And that is like part of what you do in your life,
01:19:39.740
I actually think can be the best way of living a life when you consider your life as a whole.
01:19:45.080
And in particular, doing those things does not necessarily trade off
01:19:51.720
very much at all, and can perhaps even help with the other things that you do. So in your case,
01:19:57.400
you get this reward from supporting this, like, poverty-stricken family with a disabled child,
01:20:03.480
or get reward from helping people in your local community, and I'm presuming you can channel that,
01:20:08.560
and, like, it helps continue the motivation to do things that might seem much more alien or just
01:20:13.900
harder to empathize with. And I think that's okay. I think we should accept that. And that,
01:20:17.940
in fact, should be encouraged. So yeah, I think like, it's very important once we take these ideas
01:20:23.720
outside of the philosophy seminar room and actually try to live them to appreciate the instrumental
01:20:29.240
benefits of doing these kind of everyday actions, as long as it ultimately helps you stand by this
01:20:36.000
commitment to, at least in part, try and do just what we rationally, all things considered, ought to do.
01:20:42.020
Yeah. So you mentioned the variable of time here, and this touches on another misconception about
01:20:48.020
effective altruism, that it's only a matter of giving money to the most effective causes.
01:20:54.080
You spent a lot of time thinking about how to prioritize one's time and think about doing good
01:21:00.100
over the course of one's life based on how one spends one's time. So in our next chapter,
01:21:06.840
let's talk about how a person could think about having a career that helps the world.
01:21:15.300
Okay, so we're going to speak more about the question of giving to various causes and how to
01:21:22.660
do good in the world in terms of sharing the specific resource of money. But we're now talking
01:21:27.440
about one's time. How do you think about time versus money here? And I know you've done a lot of work
01:21:34.720
on the topic of how people can think about having rewarding careers that are net positive. And you
01:21:42.660
have a website, 80,000 hours that you might want to point people to here. So just let's talk about
01:21:49.660
the variable of time and how people can spend it to the benefit of others.
01:21:55.260
Great. So the organization is called 80,000 hours, because that's the typical number of hours that
01:22:02.180
you work in the course of your life, if that's, you know, an approximately 40-year career, working 40
01:22:09.180
hours a week, 50 weeks a year. So we use that to illustrate the fact that your choice of career
01:22:15.800
is probably the, altruistically speaking, the biggest decision you ever make. It's absolutely
01:22:20.660
enormous. Yeah. People spend very little of their time really thinking through that question. I mean,
01:22:27.540
you might think if you go out for dinner, then you spend maybe 1% of the time that you would spend at
01:22:32.060
dinner thinking about where to eat, like a few minutes or something. But spending 1% of 80,000 hours
01:22:37.400
on, you know, your career decision on what you should do, that would be 800 hours, enormous amount of
01:22:43.640
time. But I mean, why did I do philosophy? Well, I, you know, I liked it at school. I could have done
01:22:51.640
maths, but my dad did maths. I wanted to differentiate myself from him. Like I didn't have a very good
01:22:56.520
reasoning process at all because we generally don't, you know, pay this nearly enough attention.
01:23:02.840
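To make the career-hours arithmetic above concrete, here is a minimal sketch in Python (purely illustrative; the 40-year, 40-hours-a-week, 50-weeks-a-year figures are the rough numbers used in the conversation, not precise ones):

    # Rough career-hours arithmetic behind the name "80,000 Hours" (illustrative only).
    years_in_career = 40   # an approximately 40-year working life
    hours_per_week = 40
    weeks_per_year = 50

    career_hours = years_in_career * hours_per_week * weeks_per_year
    print(career_hours)          # 80000 hours in a typical career

    # Spending even 1% of that time on the career decision itself:
    print(0.01 * career_hours)   # 800.0 hours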
And certainly when it comes to doing good, you have an enormous opportunity to have a huge impact
01:23:08.140
through your career. And so what 80,000 hours does via its website, via podcast, and via a small
01:23:15.160
amount of one-on-one advising is try to help people figure out which careers are such that they can have
01:23:22.020
the biggest impact. And, in contrast, you know, the question of what
01:23:26.940
charities I should donate to is exceptionally hard, but this is even harder again, because firstly, you'll be
01:23:33.960
working at many different organizations over the course of your life, probably, not just one. And
01:23:39.000
secondly, of course, there's a question of personal fit. Some people are good at
01:23:43.120
some things and not others. That's a truism. And so how should you think about this? Well, the most
01:23:50.040
important question, I think, is the question of what cause to focus on. And that involves big picture
01:23:55.340
worldview judgments and, you know, philosophical questions too. So we tend to think of the question of
01:24:03.540
cause selection by using a few heuristics. And by a cause, I mean a big problem in
01:24:09.180
the world like climate change or gender inequality or poverty or factory farming, the
01:24:15.180
possibility of pandemics or AI lock-in of values. We look at those causes in terms of how important
01:24:22.240
they are, that is, how many individuals are affected and by how much; how neglected they are, which is how many
01:24:28.900
resources are already going towards them. And then finally, how tractable they are, how much we can
01:24:33.120
make progress in this area. And in significant part, because of those heuristics, that's why
01:24:38.460
effective altruism has chosen the focus areas it has, which include pandemic preparedness,
01:24:44.580
artificial intelligence, climate change, poverty, farm animal welfare, and potentially some others
01:24:50.100
as well, like improving institutional decision-making and some areas in scientific research. And so that's
01:24:56.580
by far the biggest question, I think, because that really shapes the entire direction of your career.
01:25:01.080
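As a minimal sketch of the importance, neglectedness, and tractability heuristic just described, here is one way the comparison could be written down. The scoring rule and every number below are hypothetical placeholders, not ratings from 80,000 hours or any other organization; the point is only the structure of the comparison:

    # Hypothetical illustration of the importance / neglectedness / tractability heuristic.
    # All scores below are made up for the sake of the example.
    from dataclasses import dataclass

    @dataclass
    class Cause:
        name: str
        importance: float     # how many individuals are affected, and by how much
        neglectedness: float  # how few resources already go toward it (higher = more neglected)
        tractability: float   # how much progress additional resources can buy

        def score(self) -> float:
            # One simple way of combining the three factors; other weightings are possible.
            return self.importance * self.neglectedness * self.tractability

    causes = [
        Cause("pandemic preparedness", importance=9, neglectedness=7, tractability=5),
        Cause("climate change", importance=9, neglectedness=3, tractability=6),
        Cause("farm animal welfare", importance=7, neglectedness=8, tractability=6),
    ]
    for cause in sorted(causes, key=Cause.score, reverse=True):
        print(f"{cause.name}: {cause.score():.0f}")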
And I think, you know, depending on the philosophical assumptions you put in,
01:25:06.320
you can get enormous, you know, differences in impact. Like, do you think animals count at all?
01:25:12.540
Or, like, a lot? That would make an enormous difference in terms of whether you ought to be
01:25:17.900
focusing on that. Similarly, like, what weight do you give to future generations versus present
01:25:21.820
generations? Potentially you can do hundreds of times as much good in one cause area as you can in
01:25:27.980
another. Yeah. And then within that, the question of where exactly to focus is going to just depend a
01:25:33.800
lot on the particular cause area where different causes just have different bottlenecks. We tend to
01:25:40.040
find that, you know, working at the best nonprofits is often great. Research is often great, especially
01:25:45.240
in kind of new, more nascent causes like safe development of artificial intelligence or pandemic
01:25:50.600
preparedness. Often you need the research. Policy is often a very good thing to focus on as well.
01:25:57.300
And in some areas, especially where, you know, money is the real bottleneck, then, you know,
01:26:04.060
trying to do good through your donations primarily and therefore trying to take a job that's more
01:26:08.620
lucrative can be the way to go too. Yeah, that's a wrinkle that is kind of counterintuitive to people.
01:26:14.900
The idea that the best way for you to contribute might in fact be to pursue the most lucrative career
01:26:22.640
that you might be especially well-placed to pursue. And it may have no obvious connection to doing good
01:26:31.680
in the world apart from the fact that you are now giving a lot of your resources to the most effective
01:26:38.000
charities. So if you're a rock star or a professional soccer player or just doing something that you love
01:26:44.380
to do and you have other reasons why you want to do it, but you're also making a lot of money that you
01:26:49.920
can then give to great organizations, well then it's hard to argue that your time would be better
01:26:55.520
spent, you know, working in the non-profit sector yourself or doing something where you wouldn't be
01:27:02.240
laying claim to those kinds of resources. Yeah, that's right. So within the
01:27:07.860
effective altruism community, it is now, I think, a minority of people who are trying to
01:27:12.620
do good in their career via the path of what's called earning to give. And again,
01:27:17.520
it depends a lot on the cause area. So what's the, you know, how much money is there relative to
01:27:23.280
the kind of size of the cause already? And, you know, in the case of things like scientific research
01:27:29.640
or AI or pandemic preparedness, there's clearly just like a lot more demand for altruistically
01:27:35.940
minded, sensible, competent people working in these fields than there is money. Whereas in the current
01:27:41.240
case of global health and development, there's just, yeah, there are just these interventions and
01:27:45.980
programs that we could scale up with hundreds of millions, billions of dollars that we just know
01:27:51.160
work very well. And there, money is kind of more of the bottleneck. And so kind of
01:27:56.560
going back to these misconceptions about effective altruism, this idea of earning to give, it's
01:28:01.880
again, it's very memetic. People love how counterintuitive it is. And, you know, it is,
01:28:07.980
it is one of the things we believe, but it's definitely kind of a minority path, especially if you're
01:28:14.000
focused on some of these areas where there already is a lot of potential funding. There, it's
01:28:18.260
more about just how many people can we have working on these areas.
01:28:22.160
This raises another point where the whole culture around charity is not optimized for
01:28:29.740
attracting the greatest talent. We have a double standard here, which many people are aware of. I
01:28:35.700
think it's most clearly brought out by Dan Pallotta. I don't know if you know him. He gave a TED talk
01:28:41.400
on this topic and he organized some of the bike rides across America in support of various causes.
01:28:49.340
I think the main one was AIDS. He might've organized a cancer one as well, but, you know, these are
01:28:54.680
ventures that raised, I think, hundreds of millions of dollars. And I think he was criticized for spending
01:29:01.440
too much on overhead, but, you know, it's a choice where you can spend, you know, less than 5% on
01:29:06.520
overhead and raise $10 million, or you could spend 30% on overhead and raise, you know, $400 million.
01:29:13.300
Which should you do? And it's pretty obvious you should do the latter if you're going to use those
01:29:17.620
resources well. And yet there's a culture that prioritizes having the lowest possible overhead.
01:29:26.060
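The overhead point reduces to one line of arithmetic. A hedged sketch using the rough figures Sam just gave (his approximations of the Pallotta campaigns, not audited numbers):

    # Net money available for the cause = amount raised * (1 - overhead fraction).
    low_overhead_net = 10_000_000 * (1 - 0.05)    # about $9.5 million for the cause
    high_overhead_net = 400_000_000 * (1 - 0.30)  # about $280 million for the cause
    print(low_overhead_net, high_overhead_net)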
And also there's this sense that if you're going to make millions of dollars personally by starting a
01:29:32.940
software company or becoming an actor in Hollywood or whatever it is, there's nothing wrong with that.
01:29:40.420
But if you're making millions of dollars a year running a charity, well, then you're a greedy
01:29:46.060
bastard, right? And the idea that, you know, we wouldn't fault someone for pursuing a comparatively
01:29:53.300
frivolous and even narcissistic career for getting rich in the meantime, but we would fault someone who's
01:30:01.580
trying to cure cancer or save the most vulnerable people on earth for getting rich while doing that.
01:30:08.540
That seems like a bizarre double standard with respect to how we want to incentivize people.
01:30:15.380
I mean, because what we're really demanding is someone come out of the most competitive school,
01:30:22.680
and when faced with the choice of whether or not to work for a hedge fund or work for a charity
01:30:28.300
doing good in the world, they have to also be someone who doesn't care about earning much money.
01:30:36.160
So we're sort of filtering for sainthood, or something like sainthood, among the
01:30:40.860
most competent students at that stage. And that seems less than optimal. I don't know how you view that.
01:30:47.380
Yeah, I think it's a real shame. So, you know, newspapers every year publish rankings of
01:30:54.740
the top paid charity CEOs. And it's, you know, regarded as kind of a scandal, as if the charity is therefore
01:31:00.920
ineffective. But what we should really care about, if we actually care about, you know, the potential
01:31:07.040
beneficiaries, the people we're trying to help, is just how much money are we giving this organization
01:31:12.800
and how much good comes out the other end. And if it's the case that they can achieve more,
01:31:17.680
because they can attract a more experienced and able person to lead the organization by paying more.
01:31:23.600
Now, sure, that's like, it's maybe a sad fact about the world. It would be nice if everyone were able
01:31:28.960
to be maximally motivated purely by altruism. But we know that's not the case. Then if they can achieve
01:31:36.040
more by doing that, then yeah, we should be encouraging them to do that. You know, there's some arguments
01:31:42.140
against like, oh, well, perhaps there's kind of race to the bottom dynamics where if one organization
01:31:48.360
starts paying more, then other organizations need to pay more too. And then you
01:31:53.200
get bloat in the system. I think that's the strongest case for the idea of low overheads
01:31:59.440
when it comes to fundraising. Because if one organization is fundraising, well, perhaps in
01:32:05.540
part, they're increasing the total amount of charitable giving that happens. But they're also
01:32:09.900
probably taking money away from other organizations. And so it can be the case that a general norm of
01:32:15.980
lower overheads when it comes to fundraising is a good one. But when it comes to charity pay,
01:32:21.780
we're obviously just radically far away from that. And yeah, it shows that people are thinking about
01:32:27.480
charity in a kind of fundamentally wrong way, at least, you know, for the effect of altruist purposes
01:32:33.560
we're thinking of, which is not thinking about it in terms of outcomes, but in terms of the virtues
01:32:39.040
you demonstrate or how much you are sacrificing or something. And ultimately, when it comes to these
01:32:44.480
problems that we're facing, these terrible injustices, this horrific suffering, I don't really
01:32:48.900
care whether the person that helps is virtuous or not. I just want the thing. I just want the
01:32:53.000
suffering to stop. I just, I just want people to be helped. And as long as they're not doing harm
01:32:58.480
along the way, I don't think it really matters whether the people are paid a lot or a little.
01:33:04.100
I think we should say something about the other side of this equation, which tends to get
01:33:08.980
emphasized in most people's thinking about being good in the world. And this is kind of
01:33:16.660
the consumer-facing side: not contributing to the obvious harms in a way that is egregious, or,
01:33:24.120
you know, dialing down one's complicity in this unacceptable status quo as much as possible.
01:33:30.680
And so this goes to things like becoming a vegetarian or a vegan or avoiding certain kinds
01:33:37.580
of consumerism based on concern about climate change. There's a long list of causes that people
01:33:43.280
get committed to more in the spirit of negating certain bad behavior or polluting behavior rather than
01:33:52.400
focusing on what they're in fact doing to solve problems or giving to specific organizations.
01:33:59.740
Is there any general lesson to be drawn from the results of these efforts on both fronts? I mean,
01:34:06.840
how much does harm avoidance as a consumer add to the scale of merit here? What's the longest lever to pull?
01:34:16.820
Yeah, so I think there's a few things to say. So right at the start, I mentioned one of the key insights
01:34:23.020
of effective altruism was this idea that different activities can vary by a factor of 100 or 1,000 in
01:34:29.960
terms of how much impact they have. And even within ethical consumerism, I think that happens. So if you
01:34:36.560
want to cut out most animal suffering from your diet, I think you should cut out eggs, chicken and pigs,
01:34:41.520
maybe fish, whereas beef and milk, I think are comparatively small factors. If you want to reduce
01:34:47.640
your carbon footprint, then giving up beef and lamb, reducing transatlantic flights, reducing how much
01:34:54.020
you drive make significant differences, and have dozens of times as much impact as things like recycling or
01:35:00.460
upgrading light bulbs or reusing plastic bags. From the purely consequentialist outcome-based
01:35:06.140
perspective, I think it is systematically the case that these ethical consumerism behaviors
01:35:12.100
are small in terms of their impact compared to the impact that you can do via your donations or via
01:35:18.540
your career. And the reason is just there's a very limited range of things that you can do by changing
01:35:23.960
your consumption behavior. There's just things you are buying anyway, and then you can stop.
01:35:28.620
Whereas if you're donating or you're choosing a career, then you can choose the very most effective
01:35:34.260
things to be doing. So take the case of being vegetarian. So I've been vegetarian for 15 years
01:35:40.900
now. I have no plans of stopping that. But if I think about how many animals I'm helping in the
01:35:47.540
course of a year as a result of being vegetarian, and how does that compare when I'm looking at
01:35:51.920
the effectiveness of the very most effective animal welfare charities, which are typically what
01:35:58.260
are called kind of corporate campaigns. So it turns out the most effective way that we know of
01:36:04.260
for reducing the number of hens in factory farms, laying eggs in just the most
01:36:10.840
atrocious, terrible conditions of suffering, seems to be campaigning for large retailers to change
01:36:17.980
the eggs they purchase in the supply chain. You can actually get a lot of push there.
01:36:22.820
And the figures are just astonishing. It's something like 50 animals that you're preventing the
01:36:30.400
significant torture of for every dollar that you're spending on these campaigns.
01:36:34.920
And so if you just do the maths, like the amount of good you do by becoming vegetarian is equivalent
01:36:40.540
to the amount of good you do by donating a few dollars to these very most effective campaigns.
01:36:45.180
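A minimal sketch of the donation-equivalence comparison Will just made. The animals-per-dollar figure is his rough estimate for corporate campaigns; the number of animals a year of vegetarianism affects is a hypothetical placeholder, inserted only to show how the comparison works:

    # Rough donation-equivalence of a year of vegetarianism (illustrative only).
    animals_spared_per_dollar = 50        # Will's rough figure for corporate campaigns
    animals_affected_per_veg_year = 150   # HYPOTHETICAL placeholder, not a figure from the episode

    equivalent_donation = animals_affected_per_veg_year / animals_spared_per_dollar
    print(f"roughly ${equivalent_donation:.0f} to the most effective campaigns")  # a few dollars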
I think similar is true for reducing your carbon footprint. My current favorite climate change
01:36:51.060
charity, Clean Air Task Force, lobbies the U.S. government to improve its regulations around
01:36:57.440
fossil fuels and promotes energy innovation as well, and I think probably reduces a ton of CO2 for
01:37:03.440
about a dollar. And that means if you're in the U.S., an average U.S. citizen emits about 16 tons of
01:37:10.340
carbon dioxide equivalent. If you did all of these most effective things of cutting out meat and all
01:37:16.060
your transatlantic flights and getting rid of your car and so on, you might be able to reduce that by six
01:37:21.400
tons or so. And that's, you know, the same as giving about six dollars to these most effective
01:37:27.280
charities. And so it just does seem that these are just much more powerful from the perspective of
01:37:32.100
outcomes. The next question philosophically is whether you have some non-consequentialist reason
01:37:37.460
to do these things. And there, I think it differs. So I think the case is much stronger for becoming
01:37:45.600
vegetarian than for climate change. Because if I buy a factory farmed chicken and then donate to a
01:37:54.780
corporate campaign, well, I've probably harmed different chickens. And it seems like that's,
01:38:01.460
you know, you can't offset the harm to one individual by a benefit to another individual.
01:38:05.720
Whereas if I have a lifetime of emissions, but at the same time donate a sufficient amount to
01:38:11.420
climate change charities, I've probably just reduced the total amount of CO2 going into the
01:38:17.400
atmosphere over the course of my lifetime. And there isn't anyone who's harmed, in expectation at
01:38:23.680
least, by the entire course of my life. And so it's not like I'm trading a harm to one person for the
01:38:29.520
benefit to another. But these are quite, these are quite subtle issues when we get onto these kind
01:38:34.480
of non-consequentialist reasons. Yeah, and there are also ways in which the business community and
01:38:40.200
innovation in general can come to the rescue here. So for instance, there's a company,
01:38:46.000
I believe the name is going to be changed, but it was called Memphis Meats, that is spearheading
01:38:52.960
this revolution in what's called cultured meat or clean meat, where they take a single cell from an
01:38:59.500
animal and amplify it. So, you know, no animals are killed in the process of making these steaks or
01:39:05.300
these meatballs or these chicken cutlets, and they're trying to bring this to scale. And I had
01:39:10.860
the CEO, Uma Valeti, on my podcast a couple of years ago and actually invested in the company,
01:39:17.600
along with many other people, and hopefully this will bear fruit. That's an example of something
01:39:22.940
where, though it was unthinkable some years ago, we might suddenly find ourselves living in a world where
01:39:29.380
you can buy steak and hamburger meat and pork and chicken without harming any animals. And it may
01:39:38.080
also have other significant benefits like cutting down on zoonotic viruses and, you know, that connects to
01:39:44.860
the pandemic risk issue. I mean, really, you know, our factory farms are wet markets of
01:39:50.220
another sort, and so it is with climate change. On some level, we're waiting for and expecting technology
01:39:57.740
to come to the rescue here, where you're just bringing down the cost of renewable energy to
01:40:03.240
the point where there is literally no reason to be using fossil fuels or bringing us a new generation
01:40:10.640
of nuclear reactors that don't have any of the downsides of old ones. And again, this does connect
01:40:18.340
to the concern I had around the fancy lifeboat. We have to do the necessary things in our lifeboat
01:40:24.800
that allow for those kinds of breakthroughs, because, you know, those are the, in many cases,
01:40:31.720
the solutions that just fundamentally take away the problem rather than merely mitigate it.
01:40:38.760
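To make the carbon version of that comparison concrete, here is the back-of-the-envelope arithmetic Will gave a few minutes earlier, written out as a sketch. All three figures are his rough estimates from the conversation, not precise values:

    # Back-of-the-envelope carbon comparison (rough estimates from the conversation).
    dollars_per_tonne_averted = 1.0     # rough cost-effectiveness cited for Clean Air Task Force
    avg_us_footprint_tonnes = 16.0      # CO2-equivalent emitted per average American per year
    lifestyle_reduction_tonnes = 6.0    # cutting meat, transatlantic flights, driving, etc.

    print(lifestyle_reduction_tonnes * dollars_per_tonne_averted)  # ~$6 matches the lifestyle changes
    print(avg_us_footprint_tonnes * dollars_per_tonne_averted)     # ~$16 matches the whole footprint, by the same logic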
Yeah. So I totally agree. And in the case of, you know, trying to
01:40:45.600
alleviate animal suffering by as much as possible, I think that, yeah, funding research into clean
01:40:51.380
meats is plausibly the best thing you can do. It's hard to make a comparison with the more direct
01:40:55.720
campaigns, but definitely plausibly the best. In the case of climate change, I've recently been
01:41:00.760
pretty convinced that the most effective thing we can be doing is promoting clean energy innovation.
01:41:07.420
In this case, this is another example of importance versus neglectedness, where you mentioned
01:41:11.860
renewables, and they are a really key part of the solution. But other areas are really notably
01:41:19.160
more neglected. So carbon capture and storage, where you're capturing CO2 as it emerges from
01:41:25.920
fossil fuel power plants, and nuclear power get quite a small amount of funding compared to
01:41:31.260
solar and wind, even though the Intergovernmental Panel on Climate Change thinks that they're also a
01:41:37.300
very large part of the solution. But here, I think the distinction is between focusing on issues in rich
01:41:44.120
countries in order to benefit people in those rich countries, and focusing on them kind of as a means to some other
01:41:49.740
sort of benefit. And so I think it's very often the case that, like, you might be
01:41:55.560
sending money towards things happening in a rich country like the US, but not because you're trying
01:42:00.840
to benefit people in the US, but because you're trying to benefit the world. So maybe you're funding,
01:42:04.760
yeah, this clean meat startup, or you're funding research into low carbon forms of energy. And sure,
01:42:12.140
like that might happen in the US, which is still the world's research leader. That's fairly justified.
01:42:18.020
But the beneficiaries of these things are only partly in the US. It's also,
01:42:23.460
you know, it's global, it's future generations, too. You're kind of influencing, as it were, the
01:42:28.840
people who are in the positions of power who have the most influence over how things are going to go
01:42:33.920
into the future. Okay, so in our next chapter, let's talk about how we build effective altruism
01:42:41.540
into our lives, and just make this as personally actionable for people as we can. Okay, so we've
01:42:50.220
sketched the basic framework of effective altruism, and just how we think about systematically evaluating
01:42:58.180
various causes, how we think about, you know, what would be prioritized, you know, with respect to
01:43:06.080
things like actual outcomes versus a good story. And we've referenced a few things that are sort of
01:43:14.200
now in the effective altruist canon, like giving a minimum of 10% of one's income a year. And that's
01:43:22.520
really, if I'm not mistaken, you just took that as a nice round number that people had some
01:43:29.080
traditional associations with, you know, in religious communities, there's a notion of tithing
01:43:33.740
that amount. And it seemed like not so large as to be impossible to contemplate, but not so small as
01:43:41.320
to be ineffectual. Maybe let's start there. How do you, so am I right in thinking that the 10%
01:43:48.360
number just, it was kind of pulled out of a hat, but seemed like a good starting point, but there's
01:43:53.760
nothing about it that's carved in stone from your point of view? Exactly. It's not, it's not a magic
01:43:58.560
number, but it's just in this Goldilocks zone. Toby originally had the thought that
01:44:05.600
he would be promoting what he calls the further pledge, which is where you just set a cap on your
01:44:10.720
income and give everything above that. But it seems pretty clear, I think, that if he'd been
01:44:15.540
promoting that, well, very few people would have joined him. We do have a number of people who've
01:44:20.440
taken the further pledge, but it's a very small minority of the 5,000 members we have. On the other
01:44:26.620
hand, if we were promoting a 1% pledge, let's say, well, we're probably just not changing people's
01:44:31.540
behavior compared to how much they donate anyway. So in the UK, people donate on average 0.7% of their
01:44:37.640
income. In the US, if you include educational donations and church donations, people donate about
01:44:43.140
2% of their income. So if I was saying, oh, we should donate 1%, probably those people would have
01:44:48.320
been giving 1% anyway. And so we thought 10% is in this Goldilocks zone. And like you say, it has this
01:44:54.440
long history where for generally religious reasons, people much poorer than us in earlier historical epochs
01:45:02.080
have, you know, been able to donate 10%. We also have 10 fingers. It's a nice round number. But, you know,
01:45:09.560
many people who are part of the community donate much more than that. Many people who, you know,
01:45:14.580
are core members of the effective altruism community don't donate that much. They do good
01:45:19.700
via other ways instead. It's interesting to consider the psychology of this because I can imagine many
01:45:27.980
people entertaining the prospect of giving 10% of their money away and feeling, well, I could easily do
01:45:36.100
that if I were rich, but I can't do that now. And I can imagine many rich people thinking, well,
01:45:43.580
that's a lot of money, right? It's like every year after, you know, I'm making a lot of money and you're
01:45:48.940
telling me year after year after year, I'm going to give 10% away. You know, that's millions of dollars
01:45:53.740
a year. So it could be that there's no point on the continuum of earning where, if you're of
01:46:01.640
a certain frame of mind, it's going to seem like a Goldilocks value. You either feel too poor or too
01:46:10.200
rich and there's no sweet spot. Or, you know, to flip that around, you can recognize that however
01:46:17.700
much money you're making, you can always give 10% to the most effective ways of alleviating suffering
01:46:25.520
once you have this epiphany. You can always find those 10% at every point. And if you're not making
01:46:30.640
much money, obviously 10% will be a small amount of money. And if you're making a lot of money,
01:46:36.340
it'll be a large amount. But it's almost always the case that there's 10% of fat there to be found.
01:46:43.800
So yeah, did you have thoughts about just the psychology of someone who feels not immediately
01:46:49.500
comfortable with the idea of making such a commitment? Yeah, I think there's two things
01:46:55.340
I'd like to say to that person. One is the kind of somewhat direct argument, and the second is more
01:47:00.520
pragmatic. The direct one is just that even if you feel like, oh, I could donate that amount if I
01:47:07.280
were rich. Probably you are rich if you're listening to this. So if you're single and you earn $66,000,
01:47:14.900
then you're in the global 1% of the world in terms of income distribution. And what's more,
01:47:22.760
even after donating 10% of your income, you would still be in the richest 1% of the world's population.
01:47:28.500
If you earn $35,000, which we would not think of as being a rich person, even after donating 10%,
01:47:35.300
you'd still be in the richest 5% of the world's population. And learning those facts was very
01:47:39.820
motivating for me when I first started thinking about my giving. So that's kind of direct argument.
01:47:47.100
But the more pragmatic one is to think, well, at most stages in your life, you'll be
01:47:55.080
earning more in the future than you are now. You know, people's incomes tend to increase over time.
01:48:00.280
And you might just reflect, well, how do I feel about money at the moment? And if you feel kind of
01:48:06.680
all right about it, you know, perhaps you're in a situation where you're like, oh no, I'm actually
01:48:10.080
just fairly worried. There's like serious health issues or something. And then it's like, okay,
01:48:13.760
we'll take care of that first. But if you're like, well, actually, you know, life's pretty all right.
01:48:18.080
Don't think additional money will make that much of a difference. Then what you can do is just think,
01:48:22.580
okay, maybe I'm not going to give up to 10% now, but I'll give a very significant proportion of
01:48:29.220
the additional money I make, any future raises. So maybe I give 50% of that amount.
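To make that plan concrete, here is a minimal sketch in Python. The 50% share of each raise is the figure from the conversation; the starting salary and the 4% annual raise are assumed numbers for illustration only.

```python
# Hypothetical illustration of the "give half of every raise" plan.
# Starting salary and raise rate are assumed; the 50% share of each raise
# is the figure mentioned in the conversation.
start_salary = 50_000
raise_rate = 0.04        # assumed 4% raise per year
years = 15

salary = start_salary
giving = 0.0             # annual amount pledged so far
for year in range(1, years + 1):
    new_salary = salary * (1 + raise_rate)
    giving += 0.5 * (new_salary - salary)   # half of this year's raise
    salary = new_salary
    take_home = salary - giving             # never lower than the year before
    print(f"Year {year:2d}: salary {salary:9,.0f}  "
          f"giving {giving:8,.0f} ({giving / salary:.1%})  "
          f"take-home {take_home:9,.0f}")
```

Under these assumed numbers, take-home pay rises every year, yet the giving share passes 10% of total income after roughly six years, which is the point being made here.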
01:48:33.340
Yeah. And that means that you're still increasing the amount you're earning over
01:48:37.780
time. But at the same time, you're, you know, if you do that, then over a few years, you'll probably
01:48:43.860
quite soon end up giving 10% of your overall income. So at no point in this plan do you ever
01:48:49.440
have to go backwards, as it were, living on less. In fact, you're always earning more, but yet you're
01:48:54.420
giving more at the same time. And I've certainly found that in my own life where, you know, I started
01:48:59.660
thinking about giving as a graduate student. So, you know, I now live on more than
01:49:06.160
twice as much as I did when I first started giving, but I'm also able to
01:49:12.400
give, you know, a significant amount of my income. Remind me, how have you approached this
01:49:16.960
personally? Because you haven't taken a minimum 10% pledge, you think of it differently. So what have
01:49:22.620
you done over the years? Yeah. So, you know, I have taken the Giving What We Can pledge, which is
01:49:28.140
10% at any point. And then I also have the intention and plan to donate everything above
01:49:35.080
what is the equivalent of 20,000 pounds per year in Oxford 2009, which is now about 27,000 pounds
01:49:42.520
per year. I've never written this down as like a formal pledge. The reason being that there were just
01:49:48.960
too many possible kind of exceptions. So if I had kids, I'd want to increase that. If there were
01:49:55.020
situations where I thought my ability to do good in the world would be like fairly severely hindered,
01:49:59.420
I'd want to kind of avoid that. But that is the amount that I'm giving at the moment. And it's the
01:50:03.820
amount I plan to give for the rest of my life. Just so I understand it. So you, so you're giving
01:50:08.380
anything you make above 27,000 pounds a year to charity? That's right. Yeah, that's right. Post,
01:50:15.840
post tax. And so my income is a little bit complicated in terms of how you evaluate it
01:50:23.360
because it's my university income, but then also book sales and so on. I think on the most natural,
01:50:28.400
and there's things like speaking engagements I don't take that I could, but I think on the
01:50:32.440
most natural way of doing it, I give a little over 50% of my income. So I want to explore that
01:50:38.080
with you a little bit, because again, I'm returning to our fancy lifeboat and wondering just how fancy it
01:50:43.960
can be in a way that's compatible with the project of doing the most good in the world. And what I
01:50:51.660
detect in myself and in most of the people I meet, and I'm sure this is an intuition that is shared
01:50:58.000
by many of our listeners, is that many people will be reluctant to give up on the aspiration to be wealthy
01:51:08.580
with everything that that implies. You know, obviously they want to work hard and make their
01:51:15.840
money in a way that is good for the world or at least benign. They can follow all of the ethical
01:51:22.060
arguments that would say, you know, right livelihood in some sense is important. But if people really
01:51:30.000
start to succeed in life, I think there's something that will strike many people, if not most, as
01:51:38.780
too abstemious and monkish about the lifestyle you're advertising in choosing to live on that
01:51:48.240
amount of money and give away everything above it, or even just, you know, giving away 50% of
01:51:54.480
one's income. And again, I think this does actually connect with the question of effectiveness. I
01:52:01.300
mean, so like, it's at least possible that you would be more effective if you were wealthy and
01:52:08.240
living with all that, all that that entails, living as a wealthy person. And I mean, just to take by
01:52:13.700
example, someone like Bill Gates, you know, he's obviously the most extreme example I could find
01:52:18.680
because he's, you know, he's one of the wealthiest people on earth. Still, I think he's number two,
01:52:25.380
perhaps. And it's also probably well established now that he's the biggest benefactor of charity in human
01:52:35.460
history, perhaps. The Gates Foundation has been funded to the tune of tens of billions of dollars by him
01:52:42.500
at this point. And so I'm sure he's spent a ton of money on himself and his family, right? I mean,
01:52:48.140
his life is probably filled to the brim with luxury, but his indulgence in luxury is still just a rounding
01:52:55.420
error on the amount of money he's giving away, right? So it's actually hard to run a counterfactual
01:53:00.640
here, but I'd be willing to bet that Gates would be less effective and less wealthy and have less
01:53:08.940
money to give away if he were living like a monk in any sense. And I think maybe more importantly,
01:53:16.560
his life would be a less inspiring example to many other wealthy people. If Bill Gates
01:53:23.200
came out of the closet and said, listen, I'm living on $50,000 a year and giving all my money away to
01:53:30.880
charity, that wouldn't have the same kind of kindling effect I think his life at this point is in fact
01:53:37.700
having, which is you can really have your cake and eat it too. You can be a billionaire who lives in a
01:53:42.860
massive smart house with all the sexy technology, even fly around on a private jet, and be the most
01:53:51.500
charitable person in human history. And if you just think of the value of his time, right? Like if he
01:53:56.480
were living a more abstemious life, and I mean, just imagine the sight of Bill Gates spending an hour
01:54:04.600
trying to save $50 on a new toaster oven, right? You know, bargain hunting, it would be such a colossal waste of
01:54:12.060
his time, given the value of his time. Again, I don't have any specifics really about how to think about this
01:54:20.320
counterfactual. But I do have a general sense that, and actually, this is actually a point you made in our first
01:54:26.520
conversation, I believe, which is, you don't want to be an antihero in any sense, right? You want to, like, if you can
01:54:32.820
inspire only one other person to give at the level that you are giving, you have doubled the good you
01:54:39.620
can do in the world. So on some level, you want your life to be the most compelling advertisement for this
01:54:47.640
whole project. And I'm just wondering if, I mean, for instance, I'm just wondering what changes we would
01:54:53.760
want to make to, you know, Bill Gates' life at this point to make him an even more inspiring advertisement
01:55:03.200
for effective altruism to other very, very wealthy people, right? And I mean, it might be dialing down
01:55:10.240
certain things, but given how much good he's able to do, him buying a fancy car is just, it doesn't even
01:55:18.060
register in terms of actual allocation of resources. So anyway, I pitched that to you.
01:55:24.140
Yeah, terrific. I think, so there's three different strands I think I'd like to pick apart.
01:55:29.000
So the first is whether everyone should be like me. And I really don't want to make the claim. I
01:55:35.500
certainly don't want to say, well, I can do this thing so everyone else can. Because I really just
01:55:39.460
think I am in a position of such utter privilege. So being, you know, born into, you know, a middle
01:55:49.480
class family in a rich country, being privately educated, going to Cambridge, then Oxford, then
01:55:54.920
Cambridge, then Oxford, being like, like tall and male and white and broadly straight. Like, and then
01:56:03.060
also just having kind of inexpensive tastes, like my ideal day involves sitting on a couch and drinking
01:56:09.400
tea and reading some interesting new research, and perhaps going wild swimming. And then,
01:56:17.800
secondly, I also have these amazing benefits in virtue of the work that I do. I have
01:56:25.520
the chance to meet this incredibly varied, interesting array of people. And so I just don't
01:56:32.160
really think I could stand here and say, well, everyone should do the same as me, because I
01:56:36.640
think I've just had it kind of so easy that it doesn't really feel like, you know, if I think
01:56:41.700
about the sacrifices I have made, or the things I found hard over the course of 10 years, that's much
01:56:48.860
more like doing scary things, like being on the Sam Harris podcast, or doing a TED talk, or, you know,
01:56:55.820
meeting, you know, very wealthy or very important people, things that might kind of cause anxiety,
01:57:00.800
much more than the kind of financial side of things. But I recognize there are other people for
01:57:05.300
whom, like money just really matters to them. And I think you just, in part, you're kind of born with
01:57:12.240
a set of preferences and these things, or perhaps they're molded early on in childhood, and you don't
01:57:16.540
necessarily have control over them. So that's kind of me as an individual; that's what I'm trying to convey on that first strand.
01:57:23.160
Second is the time value of money. And this is something I've really wrestled with, because it
01:57:30.680
just is the case that in terms of my personal impact, my donations are just a very small part
01:57:38.800
of that. Because, you know, we have been successful. You know, Giving What We Can has now moved
01:57:44.660
$200 million. There's over one and a half billion dollars of pledge donations. The EA movement as a whole
01:57:50.880
certainly has over $10 billion of assets that kind of will be going out. And then, you know, I'm donating
01:57:56.600
thousands of pounds per year, or, well, maybe tens of thousands of pounds
01:58:02.680
per year. And it's just very clearly quite small on that scale. And so that's definitely
01:58:07.660
something I've wrestled with. I don't think I lose enormous amounts of time. My guess is that it's maybe
01:58:13.000
a couple of days of time a year. I have done some things. So like, you know, via my work, I have an
01:58:20.240
assistant. If I'm doing business trips, like, that counts as expenses rather than my personal money.
01:58:26.080
So that I'm trying to keep it separate. There's some things you can't do. So like,
01:58:30.260
if I live close to my office, you know, I can't count that as a business expense, even though it would
01:58:34.240
shorten my commute. So it's not, like, perfect as a way of doing that. And so I do think there's an
01:58:39.660
argument against that. And I think that is definitely a reason for caution about making
01:58:44.460
kind of a very large commitment. And then the final aspect is, yeah, what sort of message you
01:58:50.820
want to send? And probably my guess is that you just want a bit of market segmentation here, where
01:58:57.020
some people should, you know, some people should perhaps show what can be done. Others should show,
01:59:03.620
well, no, actually, you can have this amazing life while, you know, not having to wear the hair
01:59:08.640
shirt and so on. You know, I think perhaps you could actually convince me that maybe I'm,
01:59:13.520
you know, sending the wrong message and would do more good if I had some other sort of pledge. And
01:59:18.840
maybe you would be right about that. I definitely, when I made these plans, I wasn't thinking through
01:59:24.300
these things quite as carefully as I am now. But I did want to just kind of show a
01:59:29.080
proof of concept. Yeah, I guess I'm, I'm wondering if there's a path through this wilderness that
01:59:35.500
doesn't stigmatize wealth at all. I mean, the end game for me, in the presence of absolute
01:59:42.920
abundance is, you know, everyone gets to live like Bill Gates on some level. If we make it,
01:59:49.760
if we get to the 22nd century, and we've, you know, solved the AI alignment problem, and now we're
01:59:56.300
just pulling wealth out of the ether, I mean, essentially just, we've got Deutsch's universal
02:00:02.120
constructors, you know, building every machine atom by atom, and we can do more or less anything
02:00:07.920
we want. Well, then this can't be based on an ethic where wealth is at all stigmatized. What should
02:00:16.320
have opprobrium attached to it is a total disconnection from the suffering of other people and comfort with
02:00:24.360
the more shocking disparities in wealth that we see all around us. Once a reasonably successful person
02:00:33.200
signs on to the effective altruist ethic, and begins thinking about his or her life in terms of
02:00:42.740
earning to give on some level, there's a flywheel effect here where one's desire to be wealthy
02:00:50.720
actually amplifies one's commitment to giving, so that, like, in part, the reason why you would
02:00:58.720
continue working is because you have an opportunity to give so much money away and do so much good,
02:01:04.960
and it kind of purifies one's earning in the first place. I mean, like, I can imagine, you know,
02:01:11.640
most wealthy people get to a point where they're making enough money so that they don't have to
02:01:16.920
worry about money anymore. And then there's this question, well, why am I making all this money?
02:01:23.000
Why am I still working? And the moment they decide to give a certain amount of money away a year,
02:01:30.140
just algorithmically, then they feel like, well, okay, if this number keeps going up,
02:01:35.820
that is a good thing, right? So, like, I can get out of bed in the morning and know that today,
02:01:40.280
you know, if it's 10%, you know, one day in 10 is given over wholly to solving the worst suffering
02:01:47.400
or saving the most lives or mitigating the worst long-term risk. And if it's 20%, it's, you know,
02:01:53.320
two days out of 10. And if it's 30%, it's three days out of 10. And they could even dial it up. I'm
02:01:58.260
just imagining, let's say somebody is making $10 million a year, and he thinks, okay, I can sign on
02:02:04.980
and give 10% of my income away to charity. That sounds like the right thing to do. And he's
02:02:09.540
persuaded that this should be the minimum, but he then aspires to scale this up as he earns more
02:02:15.060
money. You know, maybe this would be the algorithm, you know, for each million he makes more a year,
02:02:21.340
he just adds a percentage point. So if he's making, you know, $14 million one year, he'll give 14% of
02:02:26.860
his income away. And if it's $50 million, he'll give 50% away, right? And obviously, I mean,
02:02:31.780
if, let's say, the minimum he wants to keep is $9 million a year, well, then once he gets up to,
02:02:37.820
you know, $100 million a year, he can give 91% of that away. But I can imagine being a very wealthy
02:02:45.960
person who, you know, as you're scaling one of these outlier careers, it would be, you know,
02:02:52.820
fairly thrilling to be the person who's making $100 million that year, knowing that you're going to
02:02:59.400
give 91% of that away to the most effective charities. And you might not be the person
02:03:05.100
who would have seen any other logic in driving to that kind of wealth, you know, when you were the
02:03:11.740
person who was making $10 million a year, because $10 million a year was good enough. I mean,
02:03:16.380
obviously, you can live on that, you know, there's nothing materially is going to change for you as
02:03:20.740
you make more money. But because he or she plugged into this earning to give logic, and in some
02:03:28.680
ways, the greater commitment to earning was leveraged by a desire to maintain a wealthy
02:03:36.720
lifestyle, right? It's like, this person does want $9 million a year, right? Every year. But now they're
02:03:44.620
much wealthier than that, and giving much more money away. I'm just trying to figure out how we can
02:03:50.940
capture the imagination of people who would see the example of Bill Gates and say, okay, that's, that's
02:03:58.060
the sweet spot, as opposed to any kind of example that, however subtly, stigmatizes being wealthy in the first place.
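As a rough sketch of the scaling idea described above (the rule itself is my own guess at a formalization; the 10% base, the $10 million starting point, and the $9 million floor are the hypothetical numbers from the conversation):

```python
# Hypothetical formalization of the "one extra percentage point per extra
# million earned" idea, with a floor the giver never dips below.
def giving_rate(income_musd: float,
                base_rate: float = 0.10,    # 10% minimum pledge
                base_income: float = 10.0,  # $10M/yr starting point
                floor: float = 9.0) -> float:
    """Fraction of income given away under this sketch (income in $M/yr)."""
    # One percentage point per $1M earned above the base income.
    rate = base_rate + 0.01 * max(0.0, income_musd - base_income)
    # Never give so much that take-home falls below the chosen floor.
    cap = max(0.0, 1.0 - floor / income_musd)
    return min(rate, cap)

for income in (10, 14, 50, 100):
    r = giving_rate(income)
    print(f"${income}M/yr -> give {r:.0%} (${income * r:.1f}M), "
          f"keep ${income * (1 - r):.1f}M")
```

With these inputs the sketch reproduces the figures mentioned: 14% at $14 million, 50% at $50 million, and 91% at $100 million, where the $9 million floor is what caps the rate.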
02:04:06.560
Yeah, I think these are good points. And it's true, I think the stigma on wealth per se, is not a good
02:04:13.680
thing, where, you know, if you build a company that's doing good stuff, and people like the
02:04:19.540
product, and they get value from it. And so there's enormous, like, surplus, so there's a lot of gains
02:04:24.580
from trade, and you get wealthy as a result of that. That's a good thing. Obviously, there's some
02:04:29.540
people who, like, make enormous amounts of money doing bad things, selling opioids, or building factory
02:04:35.060
farms. But I don't think that's the majority. And I do think it's the case that, you know, it's kind of
02:04:40.020
like optimal taxation theory, but the weird thing is that you're imposing the tax on yourself,
02:04:45.640
where, depending on your psychology, if you, you know, say, I'm going to give 100% as the highest
02:04:52.440
tax rate, well, you're not incentivized to earn anymore. And so the precise amount that you want
02:04:57.400
to give is just quite sensitive to this question of just how much less motivated you're going to be
02:05:04.580
to earn more. So in my own case, you know, it's very clear that the way I'm going
02:05:10.180
to do good is not primarily via my donations. So perhaps this disincentive effect is, you know,
02:05:15.700
not very important. But if my aim were to get as rich as possible, then, well, I'd need to really
02:05:23.020
look inside my own psychology, figure out how much, especially over the entire course of my life,
02:05:28.440
can I be motivated by pure altruism versus self-interest? And I strongly doubt that the
02:05:34.800
kind of optimal tax rate, you know, via my donations, would be 100%. It would be something lower than that.
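To illustrate the self-imposed-tax trade-off being described, here is a toy model. It is entirely assumed for illustration: the earnings response and its elasticity are made up, not anything from the conversation. Higher pledged rates sap earnings, so total donations peak somewhere below 100%.

```python
# Toy "self-imposed tax" model, assumed purely for illustration.
# Earnings fall as the pledged giving rate rises, so the total amount
# donated peaks at a rate strictly below 100%.
def earnings(rate: float, base: float = 1_000_000.0, elasticity: float = 1.5) -> float:
    """Assumed response: the more of each dollar you give away, the less you earn."""
    return base * (1.0 - rate) ** elasticity

def donated(rate: float) -> float:
    return rate * earnings(rate)

best = max((r / 100 for r in range(100)), key=donated)
print(f"Donations peak at a giving rate of about {best:.0%}, "
      f"moving ${donated(best):,.0f} per year in this toy model.")
```

The specific curve is arbitrary; the point is only that under any response where high rates reduce the motivation to earn, the rate that moves the most money sits well below 100%.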
02:05:41.640
That's what I'm kind of fishing for here. And, you know, I, by no means am convinced that I'm right,
02:05:47.320
but I'm just wondering if, in addition to all the other things you want, you know, as revealed in
02:05:53.380
this conversation, you know, for yourself and the world and, you know, acknowledging that you're,
02:05:59.000
you know, your primary contribution to doing good in the world might in fact be your ideas and your
02:06:04.100
ability to get them out there. I mean, like you've had the effect you've had on me and, you know,
02:06:08.200
I'm going to have my effect on, on my audience and, you know, conversations like this have the effect
02:06:13.560
that they have. And so there's no question you are inspiring people to marshal their resources in
02:06:19.420
these directions and think more clearly about these issues. But what if it were also the case
02:06:25.300
that if you secretly really wanted to own a Ferrari, you would actually make different decisions
02:06:34.240
such that, in addition to all the messaging, you would also become a very wealthy person?
02:06:41.860
Yeah. I mean, if it was the case that I was planning to earn to give, well,
02:06:49.100
then I think a very common kind of figure for people who are going to earn to give via
02:06:55.180
entrepreneurship or other high, high earning careers is a 50% figure where they plan to give
02:07:01.660
half of what they earn, at least once they start earning a significant amount. And that has seemed
02:07:06.600
to work pretty well from the people I know. It's also notably the figure that Bill Gates uses for
02:07:11.560
his Giving Pledge, where billionaires can join the Giving Pledge if they give at least 50% of their wealth.
02:07:20.020
For most who take that pledge, if I'm not mistaken, it's pushed off to the end of their life, right?
02:07:26.020
They're just imagining they're going to give it upon their death to charity, right?
02:07:29.360
So you are allowed to do that. I know, I don't know exactly the proportions. It varies. Like
02:07:35.100
the tech, tech founders tend to give earlier than other sorts of people. I'm also not just
02:07:41.560
a little bit confused about what pledging 50% of your wealth means. So if I'm a billionaire one
02:07:48.760
year and then lose half my money and I've got $500 million the next year, do I have to give half of
02:07:55.880
that? Or do I have to give half of the amount when I pledge, which would have been all my money?
02:08:01.020
Anyway, confuses me a little bit, the details of it. But it is the case that, yeah, you can
02:08:05.740
fulfill your pledge completely in the giving pledge by donating entirely after your death.
02:08:10.840
And there are questions about how much people actually fulfill these pledges too.
02:08:15.980
But then, yeah, and I think, and I really do want to say like, that's also just quite
02:08:20.680
reasonable. Different people have different attitudes to money. I think it's a very rare
02:08:24.760
person indeed that can be entirely motivated by pure altruism. You know, because we're talking about motivation
02:08:31.160
over decades, and we're talking about every single day, being motivated at all
02:08:36.000
times by pure altruism, I think that's very hard. And so if someone instead wants to pick,
02:08:42.460
you know, percentage number and aim to that, that seems like a sensible way to go. And in particular,
02:08:48.260
you want to be sustainable, where if it's the case that moving from, I don't know, 50% to 60%
02:08:54.660
means that actually your desire to do all of this kind of burns out and you go and do
02:09:00.820
something else. That's fairly bad indeed. And you want to be someone who's like, you know,
02:09:06.420
I think the right attitude you want to have towards giving is not to be someone where it's like,
02:09:11.800
oh yeah, I'm giving this amount, but it's just so hard, and it really makes my life
02:09:16.000
unpleasant. That is, you know, not an inspiring message. Julia Wise, this wonderful member of
02:09:23.000
the effective altruism community, has this wonderful post called Cheerfully, where she talks
02:09:27.100
about having kids and thinking about that as a question and says that, no, what you want to be
02:09:32.460
is this model, this ideal where you're doing what you're doing and you're saying, yeah, my life is
02:09:37.640
great. I'm able to do this and I'm still having a really wonderful life. That's certainly how I feel
02:09:42.600
about my life. And I think for many people who are going into these higher earning careers saying,
02:09:48.040
yeah, I'm donating 50% and my life is still like absolutely awesome. In fact, it's better
02:09:52.000
as a result of the amount I'm donating. That's the sweet spot, I think, that you want to hit.
02:09:56.900
Hmm. There's another issue here around how public to be around one's giving. And so, you know,
02:10:03.320
you and I are having a public conversation about all of this. And this is just by its very nature
02:10:10.940
violating a norm that we've all inherited, or at least a pseudo-norm, around generosity and altruism,
02:10:19.680
which suggests that the highest form of generosity is to give anonymously. There's a Bible verse
02:10:28.160
around this. You don't want to wear your virtue on your sleeve. You don't want to advertise your
02:10:34.140
generosity because that conveys this message that you're doing it for reasons of self-aggrandizement.
02:10:41.760
You're doing it to enhance your reputation. You want your name on the side of the building.
02:10:46.080
Whereas if you were really just connected to the cause of doing good, you would do all of this
02:10:52.320
silently and people would find out after your death, or maybe they would never find out that
02:10:56.620
you were the one who had secretly donated millions of dollars to cure some terrible disease or to buy
02:11:03.820
bed nets. And yet, you know, you and I by association here have flipped that ethic on its head
02:11:13.900
because it seems to be important to change people's thinking around all of the issues we've been
02:11:21.660
discussing. And the only way to do that is to really discuss them. And what's more, we're leveraging
02:11:27.700
a concern about reputation kind of from the opposite side in recognizing that taking a pledge has
02:11:35.660
psychological consequences, right? I mean, when you publicly commit to do something that not only
02:11:41.220
advertises to people that this is the sort of project a human being can become enamored of,
02:11:46.340
you then have a reputational cost to worry about if you decide that you're going to renege on your
02:11:51.860
offer. So talk for a few minutes about the significance of talking about any of this in the first place.
02:12:01.280
Yeah. So I think the public aspect is very important. And it's for the reason you mentioned earlier
02:12:06.100
that take the amount of good that you're going to do in your life via donations, and then just think,
02:12:12.020
can I convince one other person to do the same? If so, you've doubled your impact. You've done your
02:12:17.660
life's work over again. And I think possibly people can do that many times over, at least in the world
02:12:23.040
today, by being this kind of inspirational role model for others. And so I think this religious
02:12:28.960
tradition where, no, you shouldn't show the generosity you're doing. You should keep that secret.
02:12:35.040
I think that looks pretty bad from an outcome-oriented perspective. And I think you need
02:12:40.540
to be careful about how you're doing it. You want to be effective in your communication as well as
02:12:44.840
your giving. Where, you know, it was very notable that Peter Singer had these arguments around giving
02:12:50.020
for almost four decades with comparatively little uptake, certainly compared to the last 10 years
02:12:57.360
of the effective altruism movement. And, you know, my best hypothesis is that it's the move from a framing that
02:13:04.560
appeals primarily to guilt, which is, you know, a low-arousal motivation, where you don't often get up and
02:13:10.760
start really doing things on the basis of guilt, to inspiration instead, saying like, no, this is an
02:13:15.740
amazing opportunity we have. And so this is a norm that I just really want to change. You know, in the
02:13:21.580
long run, I would like it to be a part of common sense morality that you use a significant part of
02:13:26.760
your resources to help other people. And we will only get there. We will only have that sort of
02:13:31.800
cultural change if people are public about what they're doing and able to say, yeah, this is
02:13:36.100
something I'm doing. I'm proud of it. I think you should consider doing it too. This is the world I want to see.
02:13:41.360
Hmm. Well, Will, you have certainly gotten the ball rolling in my life. And it's something I'm
02:13:48.580
immensely grateful for. And I think this is a good place to leave it. I know there will be questions
02:13:54.560
and perhaps we can build out further lessons just based on frequently asked questions that come in
02:14:02.120
in response to what we've said here. But I think that'll be the right way to proceed. So for the
02:14:07.840
meantime, thank you for doing this because I think you're aware of how many people you're affecting,
02:14:13.680
but it's still early days. And, you know, I think it'll be very interesting to see where all this goes
02:14:19.820
because I know what it's like to experience a tipping point around these issues personally. And I have
02:14:27.920
to think that many people listening to us will have a similar experience one day or another, and you
02:14:34.240
will have occasioned it. So thank you for what you're doing.
02:14:36.760
Well, thank you for taking the pledge and getting involved. And yeah, I'm excited to see how