In this episode of the Making Sense Podcast, Will MacAskill joins me to talk about the 10th-anniversary edition of his book, "Doing Good Better," a guide to effective altruism in the 21st century. We talk about the impact of EA's ideas, how EA has grown in the last 10 years, and why it might not have all the wisdom it thinks it does.
00:01:17.460Well, the last 10 years have been eventful for EA.
00:01:20.440I think the last time we spoke, we dealt with much of the controversy around Sam Bankman-
00:01:25.100Fried and FTX and all of that brain damage. Is there more to say about that? I mean, how is the
00:01:30.840EA movement slash community doing now? And what has been the net effect of all of that?
00:01:37.460Yeah, I think the main thing to say is that obviously that was a huge hit. It was like a
00:01:42.460huge knockback. But now, if you're looking at the influence of the ideas, you know, really what
00:01:48.740matters, then there's been an enormous restoration of growth. So if you look at, for example,
00:01:56.780how much money is being moved to effective nonprofits, that figure, I mean, it actually
00:02:02.000grew just kind of steadily, even through these periods of drama and cryptocurrency and so on.
00:02:11.320But over the last year, best guess is it grew about 50% and is closing in on $2 billion a year now.
00:02:17.280And that's not just from a small number of large donors. Actually, it's across large donors and small donors and so on. Similarly, if you look at Giving What We Can members, so people who pledged 10% of their income, that had year-on-year growth of about 20% or 30%. Similarly, if you look at people engaging with effective altruism as a movement via conferences and so on, that is also growing really quite healthily.
00:02:43.760So I think the overall story is that, yeah, that was a huge hit, but the underlying ideas are very good. That means that maybe things are a little bit less in the public eye, but people are still being convinced by the importance of giving more and giving more effectively or using their career to do good. And I think that's got momentum all of its own.
00:03:03.500Right. Well, let's talk about those pieces. I mean, for me, the biggest change in my life that I ascribe to effective altruism in general, and your influence in particular, has been the pledge. I mean, just deciding in advance to give a certain amount of money or a certain percentage of money, in this case, 10% of pre-tax earnings.
00:03:23.280Just knowing that on some level that money isn't even mine when it comes in the door, because it's been pre-committed to causes that seem important, that's just an enormous kind of psychological change and just a life benefit.
00:03:38.700And it's just, you know, I've discussed this with you before, but I mean, it's just fun and virtuous and it just seems good all the way around.
00:03:45.040One place where I remain uncertain whether EA has all the wisdom it should have to inform the
00:03:51.780conversation is around just what constitutes effectiveness. I mean, how we think about that,
00:03:56.100like the list of causes that are on the menu if you're EA versus, you know, causes that are
00:04:01.600almost by definition not on the menu. I think in your current thinking, you're arguing that we
00:04:06.240should expand the footprint of philanthropic targets beyond what is traditionally thought of
00:04:11.780as obvious EA causes. Maybe let's just start there. So when people think about effective altruism
00:04:18.020and its causes, what is the short list of causes that are obviously on the menu? Yeah, so firstly,
00:04:25.200thanks for bringing up your 10% pledge. And one of the amazing things, looking back at, you know,
00:04:31.600the last 10 years, including our first podcast, which was 10 years ago, has been the impact of you taking
00:04:38.240the pledge and being public about it, where we're now up to 1,200 10% pledges that
00:04:43.540have come from people who follow this podcast. And over $30 million of donations have moved.
00:04:50.540So we're talking about thousands of lives saved there, which is pretty cool.
00:04:54.500Yeah. Hopefully we haven't undone those benefits with something else I've done on the podcast.
00:04:59.220Fingers crossed. But in terms of, yeah, areas of focus. So a huge one is global health and
00:05:06.020development. And still, that's where most of the philanthropic money that gets directed goes.
00:05:13.280So just maybe I'll touch each of these as you send it over the net. The obvious cynical retort
00:05:19.480to the wisdom of that is people should be more concerned about suffering that's close to home.
00:05:26.140You know, America is kind of retrenching now under the influence of not just the orange menace
00:05:33.980in the Oval Office, but lots of people who, if they weren't EA, were EA-adjacent in Silicon
00:05:41.460Valley. I mean, all the tech bros who kind of went MAGA are, to my eye at least, building a
00:05:47.020kind of iron wall of cynicism against many of the values that you've just begun to articulate. And
00:05:54.960one brick in this wall is certainly this notion that philanthropy doesn't really work.
00:05:59.760sending money to Africa is just kind of foolish. You might be helping people, some identifiable
00:06:07.660people, but we've really DOGE'd all this so effectively now, under the wisdom of Elon and
00:06:13.340his incel cult, that we just saw that these are all just criminals who are wasting our
00:06:19.620money over there with USAID. The money should be used at home, and it should be used, for the most
00:06:24.780part... I mean, philanthropy is just a boondoggle. We should just be building businesses that are
00:06:29.120effective in solving problems that we want to solve. And this seems to be the genius of Silicon
00:06:33.260Valley and its top people now. So this first claim that global health is such an obvious target and
00:06:40.260that the differential value of every dollar over there is so much more than it is here that, you
00:06:47.700know, you can do so much more good with a single dollar in sub-Saharan Africa than you can
00:06:51.980in Menlo Park. There's the argument. But what do you say in the face of the cynicism?
00:06:59.120Yeah. I mean, I think this rise in cynicism is a terrible shame. And in fact, I think it will
00:07:05.720probably result in hundreds of thousands or millions of lives lost. So here are some things
00:07:11.340that are true. Building companies- Let me just, sorry to interrupt you again, but let me just add
00:07:16.840that you might've seen the Lancet study that suggests that Elon's dismantling of USAID will
00:07:22.980cause 14 million people to die unnecessarily in the next five years from infectious disease.
00:07:29.1204.5 million of whom are under the age of five. Now, those numbers, I mean,
00:07:34.860I would bet my life that the tech bros will be, you know, frankly, incredulous when they hear
00:07:39.580those numbers, but let's discount them by a factor of 10. I mean, let's say it's only 1.4
00:07:43.960million people, 450,000 under the age of five, right? It's still an enormous evil.
00:07:48.600Mind-boggling numbers. Exactly. And so it's true that building companies can be a great way of
00:07:54.360improving the world. It's also true that much aid can be ineffective, even sometimes harmful.
00:08:00.500That is just not true for the most effective global health and development interventions,
00:08:05.120which have saved hundreds of millions of lives over the course of the last 50 years.
00:08:09.960Even the leading aid skeptics like Bill Easterly will proactively say, of course,
00:08:14.700I'm not talking about global health. That has had enormous benefits. And when you look at
00:08:20.680the most effective organizations, you can show with high quality evidence, randomized controlled
00:08:26.540trials, that these save lives. And in fact, the donations that have gone via GiveWell have saved
00:08:32.080hundreds of thousands, best guess, over 340,000 lives now. This is at a cost of about $5,000 per
00:08:39.720life. Whereas in the United States, a typical, you know, good, low cost to give someone one
00:08:47.360year of life is about $50,000. So you're looking at, in the United States, giving someone
00:08:53.820an extra month of life for $5,000 or saving a child's life for $5,000 in a poorer country.
00:09:00.680Right. Right. Okay. So global health, what's the next area?
00:09:04.320The next big one is animal welfare, in particular farm animal welfare, where every year about 90
00:09:10.860billion animals are raised in factory farms and slaughtered. And the conditions they live in are
00:09:17.160truly atrocious. These are the worst-off animals in the world, such that, in fact, I think when
00:09:22.560those animals die, that's the best thing that happens to them, because their lives are so full
00:09:25.820of suffering. And there are things we can do to have enormous impact. So organizations
00:09:32.380within the kind of broader effective altruism ecosystem championed and then funded corporate
00:09:37.900cage-free campaigns, going to big retailers and restaurant chains and advocating for them to cut
00:09:44.000out the use of eggs from caged hens. And there were many pledges to do so. 92% of those pledges
00:09:50.700have been fulfilled. Now, every year in the United States alone, there are three billion chickens
00:09:55.860that would have been brought up in caged confinement that instead have at least
00:10:01.260somewhat significantly better lives. And that was on the basis of really quite small amounts of
00:10:06.480money. We're talking about tens of millions of dollars for these campaigns. So if you're
00:10:12.000concerned about the well-being of non-human animals and what are just the worst-off creatures
00:10:16.980on the planet, well, the amount of impact you can have per life there is just absolutely enormous.
00:10:23.380I think that factory farming is one of the worst atrocities that humanity is committing today,
00:10:28.360and sadly it's getting worse every year, but we can make this extraordinarily large impact on it
00:10:34.060in absolute terms. So yeah, so this is one area where perhaps my own cynicism creeps in. I worry
00:10:39.320that any focus on suffering beyond human suffering risks confusing enough people so as to damage
00:10:47.180their commitment to these principles. So, I mean, there's zero defense of factory farming
00:10:51.680coming from me here, but when I see a philosopher who's clearly EA or EA-adjacent arguing on behalf
00:11:00.440of the welfare of shrimp and claiming that maybe the worst atrocity perpetrated by humans is all
00:11:07.480of the mistreatment of shrimp because they exist in such numbers and, you know, live such terrible
00:11:12.680lives. One imagines, though I don't really have strong intuitions about what it's like to be a
00:11:17.700shrimp. I just feel like those kinds of arguments, and this is where the kind of vegan
00:11:23.380dogmatism can come in, like you can occasionally find a vegan who's arguing that
00:11:27.100we need to actually do something, you know, about the state of nature so as to protect the rabbits
00:11:32.080from the foxes, those kinds of arguments, it begins to look like a reductio ad absurdum of the whole
00:11:37.460enterprise. I mean, you're like, okay, okay. You know, I feel like people then declare on some
00:11:42.240level ethical bankruptcy. They say like, okay, I'm just going to worry about me and my family
00:11:45.340and my friends and figure out what to do on the weekends because these philosophers have gone
00:11:49.380crazy. They're telling me that I have to worry about shrimp now. And I worry that the same thing
00:11:53.140is now in the offing. We'll talk about this when we talk about AI, when we start talking about the
00:11:57.560possible suffering of digital minds. Now, I'm not actually prejudging the intellectual case you can
00:12:03.460make for the plausible suffering of shrimp or the likely suffering of some digital minds,
00:12:09.320if not now, then in the future. But I just think if we're going to push the conversation
00:12:15.980to a place where we're asking people to care about how NVIDIA's latest chips feel, you know,
00:12:22.560in some configuration, it's going to be... again, whatever is true, and I remain agnostic as to what is
00:12:28.540true or will be true once we build more powerful AI, I just think even the Dalai Lama is
00:12:34.340not going to be able to shed a tear about digital minds. That's an epistemological boundary, but
00:12:39.680even if it's not epistemological, I think it's an emotional boundary for most people, at least for
00:12:43.980the longest time. Okay, great. So lots to unpack there. And so I actually personally, I'm not
00:12:50.680convinced by the shrimp argument. But the thing I want to defend is people really taking ethical ideas,
00:13:00.960including quite weird-seeming ethical ideas, seriously, and trying to reason them through
00:13:05.160for themselves, where perhaps, you know, there are some groups which should be just really
00:13:12.440thinking about PR and how ideas will be received and kind of trying to build some kind of broad
00:13:18.180coalition on that basis. But I think some people just need to be trying to figure out just actually
00:13:22.900what is moral reality at the moment? What might we be missing? So there's this historical period
00:13:29.340that I got very obsessed with in writing my last book, which is the early Quakers,
00:13:34.960which led to the British abolitionist movement. It actually led to the abolition of the slave trade
00:13:41.580and then of slave-owning globally, in fact, of chattel slavery. And boy, those people were weird.
00:13:48.180At the time, I mean, the idea that it would be immoral to own slaves
00:13:53.080was regarded as laughable, to say nothing of the fact that many of them were vegetarian.
00:16:57.220These protect against regular, normal pandemics like we've seen throughout history, but they also protect against novel pandemics where we have the ability now to create and build new viruses, new pathogens.
00:17:11.880At the moment, that ability is constrained to people with sufficient skills in a handful of labs, but the equipment needed to do so is not that expensive and it's getting cheaper all the time.
00:17:25.060The knowledge needed to do so is becoming more and more democratized. And this is something that we really want to get ahead of because it's really not that unlikely to me, maybe I'd say one in three, that we will just see waves and waves of new pandemics as a result of people tinkering with viruses in their, you know, ultimately in their basement and it leaking out.
00:17:50.420So you're imagining just, like, endless lab leaks, or you're imagining that plus
00:17:56.080biological terrorism? I'm thinking the most likely thing is lab leaks, where obviously there's this
00:18:01.760big debate about COVID, but let's just put that to the side. Leaks of viruses from labs are just
00:18:07.020extremely common. In fact, on average, I think, for every hundred person-years of people working
00:18:13.500in even the highest-security labs, a virus leaks out. So in the United Kingdom, the foot-
00:18:18.540and-mouth disease outbreak, which I remember as a kid, seeing just millions of cow carcasses being
00:18:23.880burned, was the result of a lab leak, where the same lab, in fact,
00:18:28.980leaked the virus two weeks after getting reprimanded for leaking it before. It's actually just very
00:18:35.040hard to contain viruses. And so small mistakes can lead to leaks, but yeah, it could be that,
00:18:40.620but in even worst-case scenarios, yeah, bioterror attacks, or just the threat of them. So North
00:18:47.460Korea could have a lot more bargaining power on the world stage if it could credibly say,
00:18:54.020and is in fact reckless enough to say, well, I have these bioweapons, we could release them. Yes,
00:18:59.460we would suffer mass casualties too, but I'm the dictator. I don't mind so much.
00:19:05.360Okay. Well, so what else is on the list beyond pandemics now?
00:19:09.640Yeah. And then the biggest one, I would say, as a kind of final category, though there are many
00:19:14.980other categories too, including scientific development, scientific innovation, certain kinds
00:19:19.560of sensible pro-growth policymaking as well, is issues around AI, where again,
00:19:25.860this has been a worry for many years. It was regarded as, you know, when I wrote Doing Good Better,
00:19:30.780utterly sci-fi, you know, something for the year 2100 perhaps, but not for now.
00:19:35.880Amazing how that changed, yeah. I mean, I still remember that. Like when I gave my AI talk at TED,
00:19:40.520which was exactly 10 years ago, I remember, I mean, I just, as a kind of rhetorical device,
00:19:46.420just said, for argument's sake, let's say we're not going to get there for 50 years, right? But
00:19:50.800I remember when I said that, I wasn't predicting that timeframe, but it seemed totally plausible
00:20:01.440to think it might take 50 years. No one is talking in terms of 50 years now, as far as I can tell.
00:20:07.180Yeah, exactly. And I was the same. I just had these huge error bars on when AI could come,
00:20:07.180And it's been a lot faster than I expected.
00:20:10.980I think you sent me a link, or in one of your articles there was a link, to this stat that,
00:20:15.980as of like 2022, AI researchers forecast that it wouldn't be until... it was something like,
00:20:26.120you know, 5% or 2% of AI researchers thought that AI would win the math Olympiad in like...
00:20:38.820So both machine learning experts and forecasters have all been taken by surprise by just how fast progress has been, in particular on domains related to reasoning.
00:20:53.080And we're now in this very strange situation where actually the progress in AI capabilities
00:21:00.440is remarkably stable over time, what I would describe as stable exponential progress,
00:21:08.900where there are gains in how much computing power is just being thrown at AI for training,
00:21:15.800for experimentation, for inference. There are gains in algorithmic efficiency. So how much
00:21:21.220of a punch can you get from that computing power? And then when you look at how does AI perform,
00:21:27.340whether that's on benchmarks or in terms of the time horizon of human equivalent tasks,
00:21:34.560so a task that might take a human three minutes or 30 minutes or three hours,
00:21:38.660that just occupies this relatively smooth exponential trend where at the moment AI
00:21:47.100for software engineering can do tasks that a human would typically take a few hours to do,
00:21:52.320that seems to be doubling something like every four to six months. So, you know, think about
00:21:59.480it: in a year's time, maybe you've got AI that can do what would take a human a week; the year after
00:22:05.500that, it will be a month; and so on. And so that really changes the dynamic of how to think about AI,
00:22:11.320whereas 10 years ago, it was much more based on kind of abstract arguments,
00:22:15.400how do agents behave and so on in general. Now we can do experiments on AI systems
00:22:22.280to get a sense of how they act, what the risks are, what the potential benefits are.
00:22:27.560And we can have a lot more confidence than we used to be able to have on when certain
00:22:33.160capabilities are coming. And in particular, the really scary point in time is when
00:22:39.080the AI loop feeds back on itself, and you are able to automate via AI the process of doing AI
00:22:46.180research itself. And there are good arguments, and my organization has done some deep-dive
00:22:51.780investigation into this question, for thinking that you get this big leap forward in capability
00:22:57.120at that point in time. All right, we're going to jump into AI in a minute. I think that'll be the
00:23:01.060entire second half of our conversation. But you used a phrase a moment ago that caught my attention.
00:23:06.740You said something about pro-growth policy, and it just flagged for me that almost invariably our discussion about ethics, and our discussion about EA in particular, is kind of negatively valenced.
00:23:20.300We're just talking about the risks that need to be mitigated, the suffering that needs to be alleviated.
00:23:25.400But there's this other side of the question always, when you're talking about human flourishing, we also need to think about the positive goods that remain unactualized, and a failure to actualize them is also another cost, right?
00:23:39.640And I think I've seen people argue that, in many respects, it
00:23:46.600could be a larger cost. I mean, I think there's an asymmetry in our thinking
00:23:50.580and in our experience where suffering gets weighted more heavily, which is to say that
00:23:56.380the worst pains are worse than the best pleasures are good. Yeah. Right. However
00:24:02.700you want to grammatically finish that sentence. But I do think, I mean, when you think about what's
00:24:07.180possible for us on the good side of the ledger, you know, I mean, we know
00:24:14.060nothing about the horizons of the good, really. I mean, how good could human life be? And,
00:24:18.660you know, how can we weigh the opportunity costs of the present? I mean,
00:24:23.300the things we're doing now that prevent us from actually exploring, you know, the deeper
00:24:29.020reaches of human flourishing, and the ability to make a society that allows us to
00:24:35.060spend time there, as opposed to just putting out fires and figuring out how not to kill one another,
00:24:41.440that's also part of the calculus. Absolutely. So medicine often has this idea that
00:24:47.300it just wants to restore normal functioning. And the point of medicine is to, if someone is below
00:24:53.760normal, we'll get them back to normal. But it doesn't care at all about going from normal to
00:24:57.600very good. Yeah. So you're not going to be in the Olympics. We just want to get you out of bed.
00:25:00.420Yeah. Except what counts as normal functioning obviously changes over time. And it is true,
00:25:05.840I think, that in the world today for present day people, you can often have more of an impact by
00:25:11.780preventing suffering than by kind of enhancing people to have even more well-being. But that's
00:25:19.820a contingent fact. And I do think that future generations will look back at our lives today
00:25:27.200and think, oh my God, they missed out. They didn't have... and then insert goods like X and Y and
00:25:33.080Z. In the same way, you know, take our lives and imagine a different society where no one
00:25:38.480experienced love. And you'd think, wow, how impoverished that society would be because of this
00:25:44.440absence of a good. And so I do think that when we're looking towards the future, we should be
00:25:50.740trying to think, yeah, not merely just how can we eliminate obvious causes of suffering, but
00:25:55.980actually, how can we perhaps have a life that's, you know, radically better than today, where
00:26:01.460the best days in my life are hundreds of times better than a typical day? I would like more of
00:26:07.940that. I would like more of that for everyone. Right. Yeah, so I do think in those terms a lot. When I
00:26:12.500look at the kinds of things that capture our attention, certainly in politics these days,
00:26:17.340I do view almost everything as an opportunity cost. And so this actually brings me back to my
00:26:23.860initial question and concern around EA: specifying how we think about effectiveness.
00:26:29.780I mean, so the E in EA is effectiveness, effective altruism. And insofar as there's a bias toward
00:26:37.340the quantifiable and a bias toward hitting the targets that we just described, things like
00:26:43.860global health or pandemic risk, et cetera, or just existential risk more generally, I worry that
00:26:48.920we're sort of blind to obvious problems, you know, the intervention into which would be
00:26:56.820hard to quantify, certainly in advance, but which are blocking everything. I mean, like,
00:27:01.020if you could imagine a project, you know, and this doesn't even sound like an
00:27:05.340expensive one, if we could have done something in advance to inoculate the tech-bro-slash-
00:27:11.580manosphere podcasters against the charms of Trump and Trumpism, right? I mean, you know,
00:27:16.040It's like Joe Rogan and the All-In podcast and Theo Von and all these guys who put Trump on for hours at a stretch and didn't ask him a single skeptical question and just normalized his idiocy and dishonesty for just a vast audience.
00:27:30.340I mean, I think it's not too much to think that, you know, since he only won by whatever, 1.5 percent, that was among the many things that perhaps overdetermined his victory.
00:27:41.140And then that wouldn't have happened. And then you just look at what an opportunity cost our current politics is, and, you know, America's current retreat from the world, our disavowal of values. I mean, all the values we're talking about on this podcast, America as a country has completely disavowed them. I mean, we don't care what other nations do. We certainly don't care about climate change. I mean, there might be five people on Earth now who have the bandwidth to think about climate change.
00:28:03.840we don't care about nuclear proliferation. And I think we're, you know, our retreat from the world
00:28:07.820is going to usher in a new era of that. So that if you're talking about existential risk, you know,
00:28:12.800that seems like a bad thing. I mentioned Elon and his DOGE-ing, you know, and if the Lancet is even
00:28:19.260remotely right about how many people will needlessly die as a result of that alone, I mean, that's,
00:28:23.880again, that was all downstream of a bunch of dummies talking to Trump in ways that could
00:28:29.660have easily been prevented if they only knew to prevent them. But like, that's not a project,
00:28:35.660and it's not the most realistic thing that you would target with philanthropy, but it is the
00:28:39.380kind of thing that, you know, if you could have gotten your hands around that lever, that's
00:28:44.120arguably more important than anything that's on GiveWell's website right now, right? Given the
00:28:49.120opportunity costs we're looking at in the unraveling of American values and American
00:28:54.440politics. So I'm just wondering how you think about being charitable and allocating
00:29:01.700resources in the context of problems that often have that shape. Just like, you know,
00:29:06.740the shape of what social media is doing to us, and to our capacity to cooperate
00:29:12.140to solve any problem. If you'd like to continue listening to this conversation,
00:29:17.160you'll need to subscribe at samharris.org. Once you do, you'll get access to all full-length
00:29:22.840episodes of the Making Sense podcast. The Making Sense podcast is ad-free and relies
00:29:27.960entirely on listener support. And you can subscribe now at samharris.org.