Making Sense - Sam Harris - March 30, 2026


#467 — EA, AI, and the End of Work


Episode Stats

Length

29 minutes

Words per Minute

176.6

Word Count

5,218

Sentence Count

248

Misogynist Sentences

3

Hate Speech Sentences

3


Summary

Summaries generated with gmurro/bart-large-finetuned-filtered-spotify-podcast-summ.

In this episode of the Making Sense Podcast, Will MacAskill joins me to talk about the 10th-anniversary edition of his book "Doing Good Better." We talk about how effective altruism has fared and grown over the last 10 years, the causes it prioritizes, and why it might not have all the wisdom it thinks it does.

Transcript

Transcript generated with Whisper (turbo).
Misogyny classifications generated with MilaNLProc/bert-base-uncased-ear-misogyny.
Hate speech classifications generated with facebook/roberta-hate-speech-dynabench-r4-target.
00:00:00.000 Welcome to the Making Sense Podcast. This is Sam Harris. Just a note to say that if you're
00:00:11.740 hearing this, you're not currently on our subscriber feed, and you'll only be hearing
00:00:15.720 the first part of this conversation. In order to access full episodes of the Making Sense
00:00:20.060 Podcast, you'll need to subscribe at samharris.org. We don't run ads on the podcast, and therefore
00:00:26.240 it's made possible entirely through the support of our subscribers. So if you enjoy what we're
00:00:30.200 doing here, please consider becoming one.
00:00:36.640 Will MacAskill, thanks for joining me again on the podcast.
00:00:39.560 It's great to be back on.
00:00:40.540 Yeah, I don't know how many times this is, but it's many.
00:00:42.820 Yeah, I think this is maybe number four on the main podcast.
00:00:45.500 Yeah, yeah, awesome. Well, you are my go-to guy on so many ethical questions, with
00:00:51.480 effective altruism being the frame under which we think about these things.
00:00:55.980 The 10th anniversary of your book, Doing Good Better, has just come upon us.
00:01:01.060 There's a new edition.
00:01:01.960 That's right.
00:01:02.920 And what's changed about the actual text?
00:01:06.400 So in the text, the statistics are updated and there's a new foreword, which is responding
00:01:11.880 to some objections and reflecting a little bit on the last 10 years' growth in some of
00:01:17.000 these ideas.
00:01:17.460 Well, the last 10 years have been eventful for EA.
00:01:20.440 I think the last time we spoke, we dealt with much of the controversy around Sam Bankman-
00:01:25.100 Fried and FTX and all of that brain damage. Is there more to say about that? I mean, how is the
00:01:30.840 EA movement slash community doing now? And what has been the net effect of all of that?
00:01:37.460 Yeah, I think the main thing to say is that obviously that was a huge hit. It was like a
00:01:42.460 huge knockback. But now if you're looking at influence of the ideas, you know, really what
00:01:48.740 matters, then there's been an enormous restoration of growth. So if you look at, for example,
00:01:56.780 how much money is being moved to effective nonprofits, that figure, I mean, it actually
00:02:02.000 grew just kind of steadily, even through these periods of drama and cryptocurrency and so on.
00:02:11.320 But over the last year, best guess is it grew about 50% and is closing in on $2 billion a year now.
00:02:17.280 And that's not just from a small number of large donors. Actually, it's across large donors and small donors and so on. Similarly, if you look at Giving What We Can members, so people who pledged 10% of their income, that had year-on-year growth of about 20% or 30%. Similarly, if you look at people engaging with effective altruism as a movement via conferences and so on, that is also growing really quite healthily.
00:02:43.760 So I think the overall story is that, yeah, that was a huge hit, but the underlying ideas are very good. That means that maybe things are a little bit less in the public eye, but people are still being convinced by the importance of giving more and giving more effectively or using their career to do good. And I think that's got momentum all of its own.
00:03:03.500 Right. Well, let's talk about those pieces. I mean, for me, the biggest change in my life that I ascribe to effective altruism in general, and your influence in particular, has been the pledge. I mean, just deciding in advance to give a certain amount of money or a certain percentage of money, in this case, 10% of pre-tax earnings.
00:03:23.280 Just knowing that on some level that money isn't even mine when it comes in the door because it's been pre-committed to causes that seem important, that's just an enormous kind of psychological change and just a quality-of-life benefit.
00:03:38.700 And it's just, you know, I've discussed this with you before, but I mean, it's just fun and virtuous and it just seems good all the way around.
00:03:45.040 The places where I remain uncertain whether EA has all the wisdom it should have to inform the
00:03:51.780 conversation are around just what constitutes effectiveness. I mean, how we think about that,
00:03:56.100 like the list of causes that are on the menu if you're EA versus, you know, causes that are
00:04:01.600 almost by definition not on the menu. I think in your current thinking, you're arguing that we
00:04:06.240 should expand the footprint of philanthropic targets beyond what is traditionally thought of
00:04:11.780 as obvious EA causes. Maybe let's just start there. So when people think about effective altruism
00:04:18.020 and its causes, what is the short list of causes that are obviously on the menu? Yeah, so firstly,
00:04:25.200 thanks for bringing up your 10% pledge, and one of the amazing things looking back at, you know,
00:04:31.600 the last 10 years, including from our first podcast, which was 10 years ago, was the impact that you taking
00:04:38.240 the pledge and being public about it has had, where we're now up to 1,200 10% pledges that
00:04:43.540 have come from people who follow this podcast. And over $30 million of donations have moved.
00:04:50.540 So we're talking about thousands of lives saved there, which is pretty cool.
00:04:54.500 Yeah. Hopefully we haven't undone those benefits with something else I've done on the podcast.
00:04:59.220 Fingers crossed. But in terms of, yeah, areas of focus. So a huge one is global health and
00:05:06.020 development. And still, that's where most of the philanthropic money that gets directed goes.
00:05:13.280 So just maybe I'll touch each of these as you send it over the net. The obvious cynical retort
00:05:19.480 to the wisdom of that is people should be more concerned about suffering that's close to home.
00:05:26.140 You know, America is kind of retrenching now under the influence of not just the orange menace
00:05:33.980 in the Oval Office, but lots of people who, if they weren't EA, were EA-adjacent in Silicon
00:05:41.460 Valley. I mean, all the tech bros who kind of went MAGA are, to my eye at least, building a
00:05:47.020 kind of iron wall of cynicism against many of the values that you've just begun to articulate. And
00:05:54.960 one brick in this wall is certainly this notion that philanthropy doesn't really work.
00:05:59.760 sending money to Africa is just kind of foolish. You might be helping people, some identifiable
00:06:07.660 people, but we've really DOGE'd all this so effectively now under the wisdom of Elon and
00:06:13.340 his incel cult that we just saw that all of this, these are just all criminals who are wasting our
00:06:19.620 money over there with USAID. The money should be used at home and it should be used for the most
00:06:24.780 part. I mean, philanthropy is just a boondoggle. We should be just building businesses that are
00:06:29.120 effective in solving problems that we want to solve. And this seems to be the genius of Silicon
00:06:33.260 Valley and its top people now. So this first claim that global health is such an obvious target and
00:06:40.260 that the differential value of every dollar over there is so much more than it is here that, you
00:06:47.700 know, you can do so much more good with a single dollar in sub-Saharan Africa than you can
00:06:51.980 in Menlo Park. That's the argument. But what do you say in the face of the cynicism?
00:06:59.120 Yeah. I mean, I think this rise in cynicism is a terrible shame. And in fact, I think it will
00:07:05.720 probably result in hundreds of thousands or millions of lives lost. So here are some things
00:07:11.340 that are true. Building companies- Let me just, sorry to interrupt you again, but let me just add
00:07:16.840 that you might've seen the Lancet study that suggests that Elon's dismantling of USAID will
00:07:22.980 cause 14 million people to die unnecessarily in the next five years from infectious disease.
00:07:29.120 4.5 million of whom are under the age of five. Now, I mean, those numbers, I think, I mean,
00:07:34.860 I would bet my life that the tech bros will be, you know, frankly, incredulous when they hear
00:07:39.580 those numbers, but let's discount them by a factor of 10. I mean, let's say it's only 1.4
00:07:43.960 million people, 450,000 under the age of five, right? It's still enough evil.
00:07:48.600 Mind-boggling numbers. Exactly. And so it's true that building companies can be a great way of
00:07:54.360 improving the world. It's also true that much aid can be ineffective, even sometimes harmful.
00:08:00.500 That is just not true for the most effective global health and development interventions,
00:08:05.120 which have saved hundreds of millions of lives over the course of the last 50 years.
00:08:09.960 Even the leading aid skeptics like Bill Easterly will proactively say, of course,
00:08:14.700 I'm not talking about global health. That has had enormous benefits. And when you look at
00:08:20.680 the most effective organizations, you can show with high-quality evidence, randomized controlled
00:08:26.540 trials, that these save lives. And in fact, the donations that have gone via GiveWell have saved
00:08:32.080 hundreds of thousands, best guess, over 340,000 lives now. This is at a cost of about $5,000 per
00:08:39.720 life. Whereas in the United States, a typical or, you know, good cost, low cost to give someone one
00:08:47.360 year of life is about $50,000. So you're looking at kind of in the United States, giving someone
00:08:53.820 an extra month of life for $5,000 or saving a child's life for $5,000 in a poorer country.
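A minimal back-of-envelope sketch of the comparison being drawn here, for readers who want to check the arithmetic. The $5,000-per-life and $50,000-per-life-year figures are the ones Will cites above; the number of life-years gained per child saved is an illustrative assumption, not a GiveWell estimate.

```python
# Back-of-envelope comparison of the cost-effectiveness figures cited above.
# ASSUMED_LIFE_YEARS_GAINED is an illustrative assumption, not from the transcript.

US_COST_PER_LIFE_YEAR = 50_000   # "good value" US intervention, dollars per year of life
GLOBAL_COST_PER_LIFE = 5_000     # top GiveWell-recommended charities, dollars per life saved
ASSUMED_LIFE_YEARS_GAINED = 50   # assumed life-years gained when a child's life is saved

us_months_bought_by_5k = 12 * GLOBAL_COST_PER_LIFE / US_COST_PER_LIFE_YEAR
global_cost_per_life_year = GLOBAL_COST_PER_LIFE / ASSUMED_LIFE_YEARS_GAINED

print(f"$5,000 in the US buys roughly {us_months_bought_by_5k:.1f} months of life")
print(f"$5,000 to a top global health charity buys roughly one life, "
      f"i.e. about ${global_cost_per_life_year:.0f} per life-year")
print(f"That is roughly {US_COST_PER_LIFE_YEAR / global_cost_per_life_year:.0f}x "
      f"more life-years per dollar")
```

Under these assumptions the gap works out to roughly 500x more life-years per dollar, which is the "extra month of life versus a child's life for the same $5,000" contrast made above.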
00:09:00.680 Right. Right. Okay. So global health, what's the next area?
00:09:04.320 The next big one is animal welfare, in particular farm animal welfare, where every year about 90
00:09:10.860 billion animals are raised in factory farms and slaughtered. And the conditions they live in are
00:09:17.160 truly atrocious. These are the worst off animals in the world, such that in fact, I think when
00:09:22.560 those animals die, that's the best thing that happened to them because their lives are full
00:09:25.820 of such suffering. And there are things we can do to have enormous impact. So organizations
00:09:32.380 within the kind of broader effective altruism ecosystem championed and then funded corporate
00:09:37.900 cage-free campaigns, going to big retailers and restaurant chains and advocating for them to cut
00:09:44.000 out the use of eggs from caged hens. And there were many pledges to do so. 92% of those pledges
00:09:50.700 have been fulfilled. Now, every year in the United States alone, there are three billion chickens
00:09:55.860 that would have been brought up in caged confinement that instead have at least
00:10:01.260 somewhat significantly better lives. And that was on the basis of really quite small amounts of
00:10:06.480 money. We're talking about tens of millions of dollars for these campaigns. So if you're
00:10:12.000 concerned about the well-being of non-human animals and what are just the worst-off creatures
00:10:16.980 on the planet, well, the amount of impact you can have per life there is just absolutely enormous.
00:10:23.380 I think that factory farming is one of the worst atrocities that humanity is committing today,
00:10:28.360 and sadly it's getting worse every year, but we can make this extraordinarily large impact on it
00:10:34.060 in absolute terms. So yeah, so this is one area where perhaps my own cynicism creeps in. I worry
00:10:39.320 that any focus on suffering beyond human suffering, it risks confusing enough people so as to damage
00:10:47.180 people's commitment to these principles. So, I mean, there's zero defense of factory farming
00:10:51.680 coming from me here, but when I see a philosopher who's clearly EA or EA-adjacent arguing on behalf
00:11:00.440 of the welfare of shrimp and claiming that maybe the worst atrocity perpetrated by humans is all
00:11:07.480 of the mistreatment of shrimp because they exist in such numbers and, you know, live such terrible
00:11:12.680 lives. One imagines, though I don't really have strong intuitions about what it's like to be a
00:11:17.700 shrimp. I just feel like those kinds of arguments, and this is where kind of the kind of vegan
00:11:23.380 dogmatism can come in, like that you can occasionally find a vegan who's arguing that
00:11:27.100 we need to actually do something, you know, with the state of nature so to protect the rabbits from
00:11:32.080 the Fox's kind of arguments, this begins to look like a reductio ad absurdum of just the whole
00:11:37.460 enterprise. I mean, you're like, okay, okay. You know, I feel like people then declare on some
00:11:42.240 level ethical bankruptcy. They say like, okay, I'm just going to worry about me and my family
00:11:45.340 and my friends and figure out what to do on the weekends because these philosophers have gone
00:11:49.380 crazy. They're telling me that I have to worry about shrimp now. And I worry that the same thing
00:11:53.140 is now in the offing. We'll talk about this when we talk about AI, when we start talking about the
00:11:57.560 possible suffering of digital minds. Now, I'm not actually prejudging the intellectual case you can
00:12:03.460 make for the plausible suffering of shrimp or the likely suffering of some digital minds,
00:12:09.320 if not now, then in the future. But I just think if we're going to push the conversation
00:12:15.980 to a place where we're asking people to care about how NVIDIA's latest chips feel, you know,
00:12:22.560 in some configuration, it's going to be, again, whatever is true (and I remain agnostic as to what is
00:12:28.540 true or will be true once we build more powerful AI), I mean, I just think even the Dalai Lama is
00:12:34.340 not going to be able to shed a tear about digital minds. That's an epistemological boundary, but
00:12:39.680 even if it's not epistemological, I think it's an emotional boundary for most people, at least for
00:12:43.980 the longest time. Okay, great. So lots to unpack there. And so I actually personally, I'm not
00:12:50.680 convinced by the shrimp argument. But the thing I want to defend is people really taking ethical,
00:13:00.960 including quite weird-seeming ethical ideas seriously, and trying to reason that through
00:13:05.160 for themselves, where perhaps, you know, there are some groups which should be just really
00:13:12.440 thinking about PR and how ideas will be received and kind of trying to build some kind of broad
00:13:18.180 coalition on that basis. But I think some people just need to be trying to figure out just actually
00:13:22.900 what is moral reality at the moment? What might we be missing? So there's this historical period
00:13:29.340 that I got very obsessed with in writing my last book, which is the early Quakers,
00:13:34.960 which led to the British abolitionist movement. It actually led to the abolition of the slave trade
00:13:41.580 and then of slave owning globally, in fact, for chattel slavery. And boy, those people were weird.
00:13:48.180 But early on, like at the time, I mean, the idea that it would be immoral to own slaves
00:13:53.080 was regarded as laughable, let alone many of them were vegetarian.
00:13:57.220 And that's just absurd.
00:13:59.260 What next?
00:13:59.880 They'll be saying that women should have the vote.
00:14:02.200 They should be pacifists, which they also were.
00:14:05.560 And looking back at ideas that we now think of as utterly morally common sense,
00:14:11.920 like equal rights for women or like the idea that it's utterly immoral to own slaves,
00:14:18.180 let alone the completely absurd things like men having sex with men or something.
00:14:23.360 These are things you would have been mocked for,
00:14:25.220 maybe even regarded as kind of repulsive, you know,
00:14:28.240 opprobrious, for suggesting.
00:14:29.560 You can also add to that the picture that was given to us by Descartes and others
00:14:33.200 that, you know, animals as complex as, you know, dogs and apes
00:14:37.600 could experience no pain, right?
00:14:39.260 So they would just vivisect dogs by nailing their feet to boards
00:14:42.560 and then just performing, you know, surgery on them while alive.
00:14:45.420 Yeah. Or even torturing cats for entertainment was a reasonably popular practice. So we have this
00:14:52.980 long track record of humanity getting morality wrong really quite badly. And those people who
00:15:01.000 pushed early on for those changes were what I call moral weirdos. And I think at least some
00:15:09.700 groups need to be in the business of really trying to figure this out. And maybe that means that lots
00:15:15.440 of people will say, okay, I'm into effective giving, but not effective altruism. That comes
00:15:19.140 with all this baggage. And then I'm like, I don't really mind about labels. I don't really mind.
00:15:23.700 Then maybe, yeah, there are other people that can just take some parts and leave others. But
00:15:28.500 I think this kind of cauldron of ideas and intellectual and like moral exploration and
00:15:35.180 seriousness, including when it comes to esoteric ideas like shrimp or like digital minds or perhaps
00:15:40.960 something else, I think is something important and something I, you know, I really would like
00:15:45.540 to protect, in fact. Right. Okay. So you've got global health and animal welfare. What else is
00:15:50.720 canonical? Yeah. Yeah. So another is pandemic preparedness, which, you know, again,
00:15:57.880 in writing this book and thinking about the last 10 years, when I was first on the podcast,
00:16:02.900 Uh, you know, we had these more speculative areas like pandemic preparedness and AI.
00:16:07.200 Who knows if that's going to happen.
00:16:08.460 Exactly.
00:16:08.940 On either count.
00:16:10.120 Yeah.
00:16:10.680 And, you know, that's something I'm personally particularly excited about because it's just
00:16:14.700 the things that we can do are so slam dunk.
00:16:18.580 And even despite a pandemic that killed tens of millions of people, caused trillions of
00:16:25.000 dollars of damages, you know, what sort of lessons did the world learn?
00:16:29.580 Maybe people became more skeptical of vaccines.
00:16:33.020 Yeah, yeah, yeah.
00:16:34.080 Yet there are things that could absorb,
00:16:36.740 you know, take a lot of money,
00:16:38.140 not enormous amounts globally,
00:16:39.440 but hundreds of millions to billions.
00:16:41.360 We could have mask stockpiles.
00:16:43.760 We could build and deploy lighting
00:16:46.720 that kind of sterilizes the air.
00:16:48.300 Often these things look good,
00:16:49.460 even if you're just concerned
00:16:50.420 about the economic impacts of colds.
00:16:52.580 Right, right.
00:16:53.200 We could be monitoring wastewater
00:16:54.820 for any sort of new viruses.
00:16:57.220 These protect against regular, normal pandemics like we've seen throughout history, but they also protect against novel pandemics where we have the ability now to create and build new viruses, new pathogens.
00:17:11.880 At the moment, that ability is constrained to people with sufficient skills in a handful of labs, but the equipment needed to do so is not that expensive and it's getting cheaper all the time.
00:17:25.060 The knowledge needed to do so is becoming more and more democratized. And this is something that we really want to get ahead of because it's really not that unlikely to me, maybe I'd say one in three, that we will just see waves and waves of new pandemics as a result of people tinkering with viruses in their, you know, ultimately in their basement and it leaking out.
00:17:50.420 So you're imagining just like endless lab leaks, or you're imagining that plus
00:17:56.080 biological terrorism. I'm thinking the most likely thing is lab leaks where obviously there's this
00:18:01.760 big debate about COVID, but let's just put that to the side. Leaks of viruses from labs are just
00:18:07.020 extremely common. In fact, they average, I think for every hundred person years of people working
00:18:13.500 in at least the highest security labs, a virus leaks out. So in the United Kingdom, the foot
00:18:18.540 and mouth disease, which I remember as a kid seeing just millions of like cow carcasses being
00:18:23.880 burned. That was, um, the result of a lab leak where the same lab, in fact,
00:18:28.980 leaked the virus two weeks after getting reprimanded for leaking it before. Um, it's actually just very
00:18:35.040 hard to contain viruses. And so small mistakes can lead to leaks, but yeah, it could be that,
00:18:40.620 but in even worst-case scenarios, yeah, bioterror attacks or just the threats of that. So North
00:18:47.460 Korea could have a lot more bargaining power on the world stage if it could credibly say,
00:18:54.020 and is in fact reckless enough to say, well, I have these bioweapons, we could release them. Yes,
00:18:59.460 we would suffer mass casualties too, but I'm the dictator. I don't mind so much.
00:19:05.360 Okay. Well, so what else is on the list beyond pandemics now?
00:19:09.640 Yeah. And then the biggest one I would say, as a kind of final category, though there are many
00:19:14.980 other categories too, including kind of scientific development, scientific innovation, certain kind
00:19:19.560 of pro-growth, like sensible pro-growth policymaking as well, is issues around AI, where again,
00:19:25.860 this has been a worry for many years; it was regarded as, you know, when I wrote Doing Good Better,
00:19:30.780 utterly sci-fi, you know, something for the year 2100 perhaps, but not for now.
00:19:35.880 Amazing how that changed, yeah. I mean, I still remember that. Like when I gave my AI talk at TED,
00:19:40.520 which was exactly 10 years ago, I remember, I mean, I just, as a kind of rhetorical device,
00:19:46.420 just said, for argument's sake, let's say we're not going to get there for 50 years, right? But
00:19:50.800 I remember when I said that, I wasn't predicting that timeframe, but it seemed totally plausible
00:19:56.420 to think it might take 50 years. There's no one talking in terms of 50 years, as far as I can tell
00:20:01.440 now. Yeah, exactly. And I was the same. I just had these huge error bars on when AI could come,
00:20:07.180 And it's been a lot faster than I expected.
00:20:10.980 I think you sent me a link or in one of your articles, there was a link to this stat that
00:20:15.980 as of like 2022, it was something like, you know, only
00:20:26.120 5% or 2% of AI researchers thought that AI would win the math Olympiad
00:20:32.340 by like 2025.
00:20:33.120 I mean, it was just not, I mean, it was a total outlier position, but that's exactly what happened.
00:20:38.240 Yeah, exactly.
00:20:38.820 So both machine learning experts and forecasters have all been taken by surprise by just how fast progress has been, in particular on domains related to reasoning.
00:20:50.960 So mathematics, coding, and so on.
00:20:53.080 And we're now in this very strange situation where actually the progress in AI capabilities
00:21:00.440 is remarkably stable over time, which is what I would say is stable exponential progress,
00:21:08.900 where there are gains in how much computing power are just being thrown at AI for training,
00:21:15.800 for experimentation, for inference. There are gains in algorithmic efficiency. So how much
00:21:21.220 of a punch can you get from that computing power? And then when you look at how does AI perform,
00:21:27.340 whether that's on benchmarks or in terms of the time horizon of human equivalent tasks,
00:21:34.560 so a task that might take a human three minutes or 30 minutes or three hours,
00:21:38.660 that just occupies this relatively smooth exponential trend where at the moment AI
00:21:47.100 for software engineering can do tasks that a human would typically take a few hours to do,
00:21:52.320 that seems to be doubling something like every four to six months. So, you know, think about
00:21:59.480 it: in a year's time, maybe you've got AI that can do what it would take a human a week; the year after
00:22:05.500 that, a month, and so on. And so that really changes the dynamic of how to think about AI,
00:22:11.320 whereas 10 years ago, it was much more based on kind of abstract arguments,
00:22:15.400 how do agents behave and so on in general. Now we can do experiments on AI systems
00:22:22.280 to get a sense of how they act, what the risks are, what the potential benefits are.
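A minimal sketch of the extrapolation described above, for readers who want to play with the numbers. The four-to-six-month doubling time is the figure Will cites; the starting task length and the doubling time used here are illustrative assumptions, and the projected horizons are sensitive to both.

```python
# Sketch of the "task horizon doubling" extrapolation described above.
# Assumptions (illustrative, not measurements): AI can currently complete software tasks
# that take a human about 3 hours, and that horizon doubles roughly every 5 months.

def task_horizon_hours(months_from_now: float,
                       start_hours: float = 3.0,
                       doubling_months: float = 5.0) -> float:
    """Human-equivalent task length the AI can handle, assuming smooth exponential growth."""
    return start_hours * 2 ** (months_from_now / doubling_months)

for months in (0, 12, 24, 36):
    hours = task_horizon_hours(months)
    print(f"in {months:2d} months: ~{hours:7.0f} human-hours (~{hours / 40:5.1f} work-weeks)")
```

With these particular parameters the curve grows a bit more slowly than the "a week's worth of work within a year" figure in the conversation; reaching that pace requires a shorter doubling time or a longer current horizon, which is exactly why the projections are so sensitive to those two inputs.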
00:22:27.560 And we can have a lot more confidence than we used to be able to have on when certain
00:22:33.160 capabilities are coming. And in particular, the really scary point in time is when
00:22:39.080 the AI loop feeds back on itself, and you are able to automate via AI the process of doing AI
00:22:46.180 research itself. And there are good arguments, and my organization has done some kind of deep dive
00:22:51.780 investigation into this question for thinking that you get this big leap forward in capability
00:22:57.120 at that point in time. All right, we're going to jump into AI in a minute. I think that'll be the
00:23:01.060 entire second half of our conversation. But you used a phrase a moment ago that caught my attention.
00:23:06.740 You said something about positive growth, and it just flagged for me that almost invariably our discussion about ethics and our discussion about EA in particular is kind of negatively valenced.
00:23:20.300 We're just talking about the risks that need to be mitigated, the suffering that needs to be alleviated.
00:23:25.400 But there's this other side of the question always, when you're talking about human flourishing, we also need to think about the positive goods that remain unactualized, and a failure to actualize them is also another cost, right?
00:23:39.640 And I think, I've seen people argue that, um, in many respects it
00:23:46.600 could be a larger cost. I mean, I think there's an asymmetry in our thinking
00:23:50.580 and in our experience where suffering gets weighted more heavily, which is to say that
00:23:56.380 the worst pains are worse than the best pleasures are good. Yeah. Right. However
00:24:02.700 you want to grammatically finish that sentence. But I do think, I mean, when you think about what's
00:24:07.180 possible for us on the good side of the ledger and how, you know, I mean, just, we know
00:24:14.060 nothing about the horizons of the good, really. I mean, how good could human life be? And
00:24:18.660 what are the, you know, how can we weight the opportunity costs of the present? I mean,
00:24:23.300 the things we're doing now that prevent us from actually exploring, you know, the deeper
00:24:29.020 reaches of human flourishing and the ability to make a society that allows for us to
00:24:35.060 spend time there as opposed to just putting out fires and figuring out how not to kill one another,
00:24:41.440 that's also part of the calculus. Absolutely. So medicine often has this idea that
00:24:47.300 it just wants to restore normal functioning. And the point of medicine is to, if someone is below
00:24:53.760 normal, we'll get them back to normal. But it doesn't care at all about going from normal to
00:24:57.600 very good. Yeah. So you're not going to be in the Olympics. We just want to get you out of bed.
00:25:00.420 Yeah. Except what counts as normal functioning obviously changes over time. And it is true,
00:25:05.840 I think, that in the world today for present day people, you can often have more of an impact by
00:25:11.780 preventing suffering than by kind of enhancing people to have even more well-being. But that's
00:25:19.820 a contingent fact. And I do think that future generations will look back at our lives today
00:25:27.200 and think, oh my God, they missed out. They didn't have, and then insert goods like X and Y and
00:25:33.080 Z in the same way as, you know, take our lives and imagine a different society where no one
00:25:38.480 experienced love. And you'd think, wow, that's how impoverished that society would be because of this
00:25:44.440 absence of a good. And so I do think that when we're looking towards the future, we should be
00:25:50.740 trying to think, yeah, not merely just how can we eliminate obvious causes of suffering, but
00:25:55.980 actually how can we perhaps have a life that's, you know, radically better than today, where
00:26:01.460 the best days in my life are hundreds of times better than a typical day. I would like more of
00:26:07.940 that. I would like more of that for everyone. Right. Yeah, so I do think in those terms a lot
00:26:12.500 when I look at the kinds of things that capture our attention, certainly in politics these days,
00:26:17.340 I do view almost everything as an opportunity cost. And so this actually brings me back to my
00:26:23.860 initial question and concern around EA in specifying how we think about effectiveness.
00:26:29.780 I mean, so the E in EA is effectiveness, effective altruism. And insofar as there's a bias toward
00:26:37.340 the quantifiable and a bias toward hitting the targets that we just described, things like
00:26:43.860 global health or pandemic risk, et cetera, or just existential risk more generally, I worry that
00:26:48.920 we're sort of blind to obvious problems that are, you know, the intervention into which would be
00:26:56.820 hard to quantify, certainly in advance, but they're blocking everything. I mean, like,
00:27:01.020 if you could imagine a project that would have, you know, and this doesn't even sound like an
00:27:05.340 expensive one, but if we could have done something in advance to have inoculated the tech bro slash
00:27:11.580 Manosphere podcasters against the charms of Trump and Trumpism, right? I mean, you know,
00:27:16.040 It's like Joe Rogan and the All-In podcast and Theo Von and all these guys who put Trump on for hours at a stretch and didn't ask him a single skeptical question and just normalized his idiocy and dishonesty for just a vast audience.
00:27:30.340 I mean, I think it's not too much to think that that, you know, since he only won by whatever, 1.5 percent, that was among the many things that perhaps overdetermined his victory.
00:27:40.140 That was one of those things.
00:27:41.140 And then that wouldn't have happened. And then you just look at what an opportunity cost our current politics and, you know, America's current retreat from the world, our disavowal of values. I mean, all the values we're talking about in this podcast, America as a country has completely disavowed them. I mean, we don't care what other nations do. We certainly don't care about climate change. I mean, there might be five people on Earth now who have the bandwidth to think about climate change.
00:28:03.840 we don't care about nuclear proliferation. And I think we're, you know, our retreat from the world
00:28:07.820 is going to usher in a new era of that. So that if you're talking about existential risk, you know,
00:28:12.800 that seems like a bad thing. The, I mentioned Elon and his doging, you know, if the Lancet is even
00:28:19.260 remotely right over how many people will needlessly die as a result of that alone. I mean, that's,
00:28:23.880 again, that was all downstream of a bunch of dummies talking to Trump in ways that could
00:28:29.660 have easily been prevented if they only knew to prevent them. But like, that's not a project if,
00:28:35.660 and it's not the most realistic thing that you would target with philanthropy, but it is the
00:28:39.380 kind of thing that, you know, if you could have gotten your hands around that lever, that's
00:28:44.120 arguably more important than anything that's on GiveWell's website right now, right? Given the
00:28:49.120 opportunity costs we're looking at in the unraveling of American values and American
00:28:54.440 politics. So I just, I'm wondering how you think about being charitable and, and allocating
00:29:01.700 resources in, in the context of problems that's often have that shape. Just like, you know,
00:29:06.740 the shape of what social media is doing to us and, and, and the, our capacity to cooperate
00:29:12.140 about it to solve any problem. If you'd like to continue listening to this conversation,
00:29:17.160 you'll need to subscribe at samharris.org. Once you do, you'll get access to all full-length
00:29:22.840 episodes of the Making Sense podcast. The Making Sense podcast is ad-free and relies
00:29:27.960 entirely on listener support. And you can subscribe now at samharris.org.