Making Sense - Sam Harris - March 12, 2019


#150 — The Map of Misunderstanding


Episode Stats

Length

49 minutes

Words per Minute

148.2

Word Count

7,374

Sentence Count

439

Misogynist Sentences

1

Hate Speech Sentences

2


Summary

In this episode, recorded at a sold-out event at the Beacon Theater in New York a couple of weeks earlier, Sam Harris speaks with Dr. Daniel Kahneman, who received the Nobel Prize in Economics in 2002 for the work on decision-making under uncertainty he did with Amos Tversky. They discuss the replication crisis in science, the automatic and deliberative modes of thought Kahneman calls systems one and two, the failures of intuition (including expert intuition) and the conditions under which it can be trusted, the power of framing, moral illusions, anticipated regret, the asymmetry between threats and opportunities, the utility of worrying, and the remembering self versus the experiencing self, much of which is summarized in Kahneman's book Thinking, Fast and Slow. The podcast runs no ads and is made possible entirely by listener support; to access full episodes and other subscriber-only content, please consider subscribing at samharris.org.


Transcript

00:00:00.000 Welcome to the Making Sense Podcast.
00:00:08.820 This is Sam Harris.
00:00:10.880 Just a note to say that if you're hearing this, you are not currently on our subscriber
00:00:14.680 feed and will only be hearing the first part of this conversation.
00:00:18.420 In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at
00:00:22.720 samharris.org.
00:00:24.060 There you'll find our private RSS feed to add to your favorite podcatcher, along with
00:00:28.360 other subscriber-only content.
00:00:30.520 We don't run ads on the podcast, and therefore it's made possible entirely through the support
00:00:34.640 of our subscribers.
00:00:35.880 So if you enjoy what we're doing here, please consider becoming one.
00:00:46.900 Welcome to the Making Sense Podcast.
00:00:48.920 This is Sam Harris.
00:00:50.920 Well, today I'm bringing you the audio from my live event with Danny Kahneman at the Beacon
00:00:55.160 Theater in New York a couple of weeks back.
00:00:57.440 This was a sold-out event in a very cool old theater.
00:01:02.920 I'd actually never been to the Beacon before, but it has a storied history in music and comedy.
00:01:09.880 Anyway, it was a great pleasure to share the stage with Danny.
00:01:13.200 Daniel Kahneman, as you may know, is an emeritus professor of psychology at Princeton University
00:01:17.980 and also an emeritus professor of public affairs at Princeton's Woodrow Wilson School of Public
00:01:24.800 and International Affairs.
00:01:26.540 He received the Nobel Prize in Economics in 2002 for the work he did on decision-making
00:01:32.880 under uncertainty with Amos Tversky.
00:01:36.420 Unfortunately, Tversky died in 1996, and he was a legendary figure who would have certainly
00:01:42.980 shared the Nobel Prize with Danny had he lived longer.
00:01:46.340 They don't give the Nobel posthumously.
00:01:49.720 In any case, I think it's uncontroversial to say that Danny has been the most influential
00:01:53.900 living psychologist for many years now, but he's perhaps best known in the general public
00:02:00.980 for his book Thinking Fast and Slow, which summarizes much of the work he did with Tversky.
00:02:06.800 Michael Lewis also recently wrote a biography of the Kahneman-Tversky collaboration, and that
00:02:14.220 is called The Undoing Project.
00:02:16.520 Anyway, Danny and I covered a lot of ground at The Beacon.
00:02:19.720 We discussed the replication crisis in science, systems one and two, which is to say automatic
00:02:27.380 and unconscious cognitive processes and more conscious and deliberative ones.
00:02:32.860 We talk about the failure of intuition, even expert intuitions, the power of framing, moral
00:02:40.900 illusions, anticipated regret, the asymmetry between threats and opportunities, the utility
00:02:47.780 of worrying, removing obstacles to wanted behaviors, the remembering self versus the experiencing
00:02:54.960 self, improving the quality of gossip, and many other topics.
00:02:59.920 Anyway, Danny has a fascinating mind, and I think you'll find this a very good introduction
00:03:05.920 to his thinking.
00:03:08.180 Of course, if you want more, his book Thinking Fast and Slow also awaits you if you haven't
00:03:13.740 read it.
00:03:15.000 And now I bring you Daniel Kahneman.
00:03:17.100 Thank you.
00:03:25.100 Well, well, thank you all for coming.
00:03:33.860 Really an honor to be here.
00:03:35.660 Danny, it's a special honor to be here with you, so thank you for coming.
00:03:39.080 My pleasure.
00:03:39.620 It's often said, and rarely true, that a guest needs no introduction, but in your case that
00:03:51.940 is virtually true.
00:03:54.140 We're going to talk about your work throughout, so people will, for the one person who doesn't
00:03:58.780 know who you are, you will understand at the end of the hour.
00:04:02.160 But I guess by way of introduction, I just want to ask, but what is the worst thing about
00:04:07.140 winning the Nobel Prize?
00:04:09.140 That's a hard question, actually.
00:04:14.900 There weren't many downsides to it.
00:04:19.400 Okay, well, nobody wants to hear your problems, Dan.
00:04:26.900 So, how would you, how do you think about your body of work?
00:04:31.380 How do you summarize the intellectual problems you have tried to get your hands around?
00:04:35.780 You know, it's been just a series of problems that occurred that I worked on.
00:04:42.400 There was no big program.
00:04:43.940 When you look back, of course, I mean, you see patterns and you see ideas that have been with
00:04:49.780 you for a long time.
00:04:50.840 But there was really no plan.
00:04:52.800 I was, you know, you follow, you follow things, you follow ideas, you follow things that you
00:05:00.180 take a fancy to.
00:05:01.660 Really, that's a story of my intellectual life.
00:05:04.680 It's just one thing after another.
00:05:07.060 Judging from the outside, it seems to me that you have told us much of what we now think we
00:05:12.080 know about cognitive bias and cognitive illusion.
00:05:17.260 And really, the picture is of human ignorance having a kind of structure.
00:05:24.360 It's not just that we get things wrong.
00:05:26.740 We get things reliably wrong.
00:05:28.260 And because of that, whole groups, markets, societies can get things wrong because the
00:05:35.100 errors don't cancel themselves out.
00:05:37.080 I mean, bias becomes systematic.
00:05:39.340 And that obviously has implications that touch more or less everything we care about.
00:05:45.000 Let's just, I want to track through your work, you know, as presented in your now famous
00:05:51.440 and well-read book, Thinking Fast and Slow.
00:05:54.340 And I just want to try to tease out what should be significant for all of us at this moment.
00:05:59.880 Because, you know, human unreason, unfortunately, becomes more and more relevant, it seems.
00:06:04.740 And we don't get over these problems.
00:06:07.600 And I guess I wanted just to begin to ask you about a problem that's very close to home now,
00:06:13.820 what is called the replication crisis or reproducibility crisis in science, in particular social sciences
00:06:20.300 and in particular psychology, and for those in the room who are not aware of what has happened
00:06:25.940 and how dire this seems, it seems that when you go back to even some of the most celebrated
00:06:31.520 studies in psychology, their reproducibility is on the order of 50-60% in the best case.
00:06:39.840 So there was one study done in, that took 21 papers from Nature and Science, which are the
00:06:45.220 most highly regarded journals, and reproduced only 13 of them.
00:06:51.280 And so let's talk about the problem we faced in even doing science in the first place.
00:06:56.980 Well, I mean, you know, the key problem and the reason that this happens is that research
00:07:04.860 is expensive.
00:07:06.440 And it's expensive personally, and it's expensive in terms of money.
00:07:10.680 And so you want it to succeed.
00:07:13.540 So when you're a researcher, you know what you want to find.
00:07:17.160 And that creates biases that you're not fully aware of.
00:07:21.700 And I think a lot of this is simply self-delusion.
00:07:25.640 That is, you know, there is a concept that's known as p-hacking, which is people very honestly
00:07:33.600 deluding themselves about what they find.
00:07:36.500 And there are several tricks of the trade that, you know, people know about them.
00:07:42.240 You are going to do an experiment.
00:07:44.180 So instead of having one dependent variable where you predict the outcome, you take two
00:07:49.040 dependent variables.
00:07:50.340 And then if one of them doesn't work, you stay with the one that does work.
00:07:55.240 You do that and things like that a few times, then it's almost guaranteed that your research
00:08:03.000 will not be replicable.
00:08:03.960 And that happens, it was first discovered in medicine.
00:08:07.440 I mean, it's more important in medicine than it is in psychology, where somebody famously
00:08:12.420 said that most published research in medicine is false.
00:08:17.100 And a fair amount of published psychological research is false, too.
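As an illustration of the "two dependent variables" trick Kahneman describes, the sketch below simulates experiments in which nothing real is going on and counts a study as a finding if any measured outcome reaches significance; the sample sizes, number of outcomes, and threshold are assumptions chosen for illustration, not figures from the episode:

```python
# Minimal simulation of selective reporting under a true null (illustrative only).
import math
import random
import statistics

def two_sample_p(a, b):
    # Two-sided p-value for a difference in means, normal approximation (fine for n = 50).
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def experiment(n=50, num_outcomes=2):
    # The null is true: treatment and control come from the same distribution.
    # Return True if *any* of the measured outcomes looks "significant".
    for _ in range(num_outcomes):
        treatment = [random.gauss(0, 1) for _ in range(n)]
        control = [random.gauss(0, 1) for _ in range(n)]
        if two_sample_p(treatment, control) < 0.05:
            return True
    return False

random.seed(0)
trials = 5000
for k in (1, 2, 4):
    rate = sum(experiment(num_outcomes=k) for _ in range(trials)) / trials
    print(f"{k} outcome(s): false-positive rate ~ {rate:.1%}")
# Roughly 5% with one outcome, ~10% with two, ~19% with four (independent outcomes
# assumed): quietly keeping "the one that works" multiplies the chance of a
# spurious, non-replicable finding.
```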
00:08:23.080 Yeah, but even some of the most celebrated results in psychology, like priming and the marshmallow
00:08:29.160 test and...
00:08:29.960 Well, yeah, I mean, it's not only, it's actually, they get celebrated in part because they are
00:08:39.160 surprising.
00:08:40.160 Yeah.
00:08:40.460 And the rule is, you know, the more surprising the result is, the less likely it is to be
00:08:46.380 true.
00:08:47.600 And so that's how celebrated results get to be non-replicable.
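The logic behind "the more surprising, the less likely to be true" can be made explicit with Bayes' rule: a surprising hypothesis starts with a low prior probability, so even a statistically significant result leaves it unlikely. The priors, power, and alpha below are assumed values for illustration, not figures from the episode:

```python
# P(hypothesis is true | result is significant), by Bayes' rule.
def prob_true_given_significant(prior, power=0.8, alpha=0.05):
    true_positives = power * prior          # real effects correctly detected
    false_positives = alpha * (1 - prior)   # null effects that look significant
    return true_positives / (true_positives + false_positives)

for prior in (0.5, 0.1, 0.02):  # from a plausible hypothesis to a very surprising one
    print(f"prior {prior:.0%}: P(true | significant) = {prob_true_given_significant(prior):.0%}")
# prior 50%: ~94%, prior 10%: ~64%, prior 2%: ~25%
```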
00:08:53.220 Right.
00:08:53.880 Well, and the scariest thing I heard, I don't know how robust this study was, but someone
00:08:59.540 did a study on trying to replicate unpublished studies and found that they replicated better
00:09:06.140 than published studies.
00:09:07.560 Did you hear this?
00:09:08.160 I don't think that's replicable.
00:09:09.960 Oh, yeah.
00:09:10.340 Okay.
00:09:10.700 Yeah.
00:09:11.640 Let's hope not.
00:09:12.560 Let's talk about system one and two.
00:09:17.180 These are the structures that give us so much of our, what can be a dispiriting picture of
00:09:23.020 human rationality.
00:09:24.780 Summarize for us, what are these two systems you talk about?
00:09:29.320 I mean, before starting with anything else, there are clearly two ways that ideas come to
00:09:34.720 mind.
00:09:35.080 I mean, so if I say two plus two, then an idea comes to your mind.
00:09:40.460 You haven't asked for it.
00:09:42.160 You're completely passive, basically.
00:09:44.280 Something happens in your memory.
00:09:46.280 If I ask you to multiply, you know, 24 by 17 or something like that, you have to work
00:09:51.700 to get that idea.
00:09:52.820 So it's that dichotomy between the associative effortless and the effortful.
00:09:59.940 And that is phenomenologically obvious.
00:10:02.960 You start from there.
00:10:05.080 And how you describe it and whether you choose to describe it in terms of systems, as I did,
00:10:12.100 or in other terms, that's already a theoretical choice.
00:10:16.320 And in my view, theory is less important than the basic observation of, you know, that there
00:10:22.800 are two ways for ideas to come to mind.
00:10:25.000 And then you have to describe it in a way that could be useful.
00:10:31.280 And what I mean by that is you have to describe the phenomena in a way that will cause, help
00:10:39.380 researchers have good ideas about facts and about experiments to run.
00:10:43.980 And the system one and system two was, it's not my, not my dichotomy and even not my terminology.
00:10:52.620 And in fact, it's a terminology that many people object to, but I chose it quite deliberately.
00:10:59.180 What are the liabilities?
00:11:00.560 Because people in your book, you try to guard against various misunderstandings of this.
00:11:05.820 Well, yes, I mean, you know, there is a rule that you're taught fairly early in psychology,
00:11:11.640 which is never to invoke what is called homunculi, which are little people in your head whose
00:11:17.780 behavior explain your behavior or explain the behavior of people.
00:11:21.220 That's a no-no.
00:11:22.680 And system one and system two are really homunculi.
00:11:25.500 So I knew what I was doing when I picked those.
00:11:30.020 And, but the reason I did was that system one and system two are agents.
00:11:36.820 They have personalities.
00:11:38.580 And it turns out that the mind is very good at forming pictures and images of agents that
00:11:45.320 have intentions and propensities and traits and they're active.
00:11:51.360 And it's just easy to get your mind around that.
00:11:54.260 And that's why I picked that terminology, which many people find sort of objectionable because
00:12:00.040 they're really not agents in the head.
00:12:03.020 It's just a very useful way to think about it, I think.
00:12:05.980 So there's no analogy to be drawn between a classical, psychological, even Freudian picture
00:12:13.860 of the conscious and the unconscious.
00:12:15.820 How do you think about consciousness and everything that precedes it in light of modern psychology?
00:12:22.300 It's clearly related in the sense that what I call system one activities, the automatic
00:12:28.400 ones, one characteristic they have is that you're completely unconscious of the process
00:12:33.720 that produces them.
00:12:35.140 You just get, you know, you get the results.
00:12:37.160 You get four when you hear two plus two.
00:12:39.460 Right.
00:12:39.620 In system two activities, you're often conscious of the process.
00:12:43.980 You know what you're doing when you're calculating.
00:12:46.660 You know what you're doing when you're searching for something in memory.
00:12:51.360 So clearly, consciousness and system two tend to go together.
00:12:56.080 It's not a perfect, you know, and who knows what consciousness is anyway.
00:13:00.060 But they tend to go together.
00:13:02.480 And system one is much more likely to be unconscious and automatic.
00:13:06.500 Neither system is a perfect guide toward tracking reality.
00:13:10.740 But system one is, it's very effective in many cases.
00:13:16.220 Otherwise, it wouldn't have evolved the way it has.
00:13:18.600 But I guess maybe let's start with a picture of where our intuitions are reliable and where
00:13:26.420 they reliably fail.
00:13:28.000 How do you think about the utility of intuition?
00:13:30.120 I'll say first about system one, that our representation of the world, most of what we
00:13:36.280 know about the world, is in system one.
00:13:39.140 We're not aware of it.
00:13:40.340 So that we're going along in life with producing expectations or being surprised or not being
00:13:50.220 surprised by what happens.
00:13:51.720 All of this is automatic.
00:13:53.020 We're not aware of it.
00:13:54.080 So most of our thinking, system one thinking, most of what goes on in our mind, goes on,
00:13:59.980 and we're not aware of it.
00:14:01.300 So that's, and intuition is defined as, you know, knowing or rather thinking that you know
00:14:08.620 something without knowing why you know it or without knowing where it comes from.
00:14:13.620 And, and, and it's fairly clear, actually, I mean, that's a digression, but there is a
00:14:20.520 guy named Gary Klein, a psychologist who really doesn't like anything that I do.
00:14:25.460 And he, he, he, how does your system one feel about that?
00:14:30.460 I like Gary a lot, actually.
00:14:32.520 So, but he believes in intuition and in expert intuition, and he's a great believer in, and
00:14:40.380 he has beautiful data showing, beautiful observations of expert intuition.
00:14:46.500 So he and I, I invited him, actually, to try and figure out our differences, because obviously
00:14:51.900 I'm a skeptic.
00:14:53.300 So where is intuition marvelous, and where is it flawed?
00:14:57.020 And we worked about, we worked for six years before we came up with something, and we published
00:15:03.000 an article called The Failure to Disagree, because, in fact, there is a fairly clear boundary
00:15:09.460 about when you can trust your intuitions and when you can't.
00:15:13.000 And, and I think that's summarized in three conditions.
00:15:16.600 The first one is, the world has to be regular enough.
00:15:20.880 I mean, first of all, intuition is recognition.
00:15:24.380 And that's, Herbert Simon said that.
00:15:26.960 You have an intuition, it's just like recognizing, you know, that it's like a child's recognizing
00:15:32.460 what a dog is.
00:15:34.540 It's immediate.
00:15:36.960 Now, in order to recognize patterns and reality, which is, which is what true intuitions are,
00:15:43.900 the world has to be regular enough so that there are regularities to be picked up.
00:15:49.180 But then you have to have enough exposure to those regularities to have a chance to learn
00:15:55.260 them.
00:15:56.200 And third, it turns out that intuition depends critically on the time between when you're
00:16:03.780 making a guess and a judgment and when you get feedback about it.
00:16:07.940 The feedback has to be rapid.
00:16:10.100 And if those three conditions are satisfied, then eventually people develop intuition so that
00:16:15.840 a chess player's, chess is a prime example where all three conditions are satisfied.
00:16:21.360 So after, you know, many hours, I don't know, 10,000 or not, but many hours, a chess player
00:16:27.440 will have intuitions.
00:16:29.040 All the ideas, all the moves that come to his or her mind are going to be strong moves.
00:16:34.640 That's intuition.
00:16:36.380 Right.
00:16:36.500 So the picture is one of intuition, I mean, they're intuitions that are more innate than
00:16:42.520 others, or we're so primed to learn certain things innately that no one remembers learning
00:16:48.440 these things, you know, recognizing a human face, say.
00:16:52.100 But much of what you're calling intuition was at one point learned.
00:16:56.760 So intuition is trainable.
00:16:58.740 There are experts in various domains, chess being a very clear one, that develop what we
00:17:04.440 consider to be expert intuitions.
00:17:06.780 And yet much of the story of the blind spots in our rationality is a story of the failure
00:17:14.440 of expert intuition.
00:17:16.180 So where do you see the frontier of trainability here?
00:17:19.400 I mean, I think that what happens is that when those conditions are not satisfied, people
00:17:25.780 have intuitions too.
00:17:27.620 That is, you know, they have ideas that come to their mind with high confidence and they
00:17:31.720 think they're right.
00:17:32.460 And so the main thing...
00:17:35.380 I've met these people.
00:17:36.180 Yeah.
00:17:36.500 I mean, you know, we've all met them and we see them in the mirror and, you know, that's...
00:17:43.520 So it turns out you can have intuitions for bad reasons, you know.
00:17:50.860 So all it takes is a thought that comes to your mind automatically and with high confidence
00:17:56.700 and you'll think that it's an intuition and you'll trust it.
00:18:00.080 But the correlation between confidence and accuracy is not high.
00:18:06.440 That's, you know, one of the saddest things about the human condition.
00:18:10.100 You can be very confident in ideas and the correlation.
00:18:14.640 You shouldn't trust your confidence.
00:18:15.960 Well, so that's just, you know, yes, a depressing but fascinating fact that the signature of a
00:18:26.140 high probability that you are correct is what you feel while uttering that sentence.
00:18:31.920 I mean, psychologically, confidence is the marker of your credence in whatever proposition
00:18:38.100 it is you're entertaining and yet we know they can become totally uncoupled and often
00:18:43.880 are uncoupled.
00:18:45.940 Given what you know or think you know scientifically, how much of that bleeds back into your life
00:18:51.500 and changes your epistemic attitude?
00:18:56.100 Mine personally?
00:18:57.000 Do you hedge your bet?
00:18:58.860 How is Danny Kahneman different given what he has understood about science?
00:19:03.540 Not at all.
00:19:04.460 Not at all?
00:19:04.740 I mean, I mean, it's even more depressing than I thought.
00:19:09.220 You know, in terms of thinking, you know, my intuitions being better than
00:19:14.820 they were, no.
00:19:16.120 And furthermore, I have to confess, I'm also very overconfident.
00:19:20.080 So, even that I haven't learned.
00:19:22.900 Right.
00:19:23.120 So, it's hard to get rid of those things.
00:19:25.960 You're just issuing a long string of apologies?
00:19:28.380 I mean, how do you get through life?
00:19:29.680 Because you should know better.
00:19:31.200 If anyone should know better, you should know better.
00:19:33.080 Yeah, but I don't really feel guilty about it.
00:19:35.720 So, I have to...
00:19:37.500 So, how hopeful are you that we can improve?
00:19:43.720 How hopeful are you that an individual can improve?
00:19:46.560 And how hopeful are you that we can design systems of conversation and incentives that
00:19:52.160 can make some future generation find us more or less unrecognizable in our stupidity and...
00:20:00.120 Well, you know, I should preface by saying that I'm not an optimist in general, but I'm
00:20:05.740 certainly not an optimist about those questions.
00:20:09.060 I don't think that...
00:20:11.480 You know, I'm a case study because I've been studying that stuff for more than 50 years,
00:20:15.640 and I don't think that my intuitions have really significantly improved.
00:20:19.640 I can catch sometimes, and that's important.
00:20:24.000 I can catch, recognize a situation as one in which I'm likely to be making a mistake.
00:20:31.360 And this is the way that people protect themselves against visual illusions.
00:20:35.600 You can see the illusions, and there's no way you can not see it.
00:20:39.600 But you can recognize that this is likely to be an illusion, so don't trust my eyes, take
00:20:44.880 out the ruler.
00:20:46.120 There is an equivalent.
00:20:47.860 You know, there is a similar thing goes on with cognitive illusions.
00:20:51.620 Sometimes you know that your intuitions, your confident thought, is unlikely to be true.
00:21:00.460 That's quite rare.
00:21:01.540 It doesn't happen a lot.
00:21:02.520 I don't think that I've become, you know, in any significant way, smarter because of studying
00:21:08.640 errors of cognition.
00:21:11.380 Right.
00:21:13.020 Okay, let me just absorb that for a second.
00:21:16.120 What you must thirst for on some levels is that this understanding of ourselves can be
00:21:24.560 made useful or more useful than it is, because the consequences are absolutely dire, right?
00:21:31.540 I mean, our decision-making is, one could argue, the most important thing on Earth, certainly
00:21:37.020 with respect to human well-being, right?
00:21:39.780 I mean, how we negotiate nuclear test ban treaties, right?
00:21:44.120 I mean, like everything from that on down, this is all human conversation, human intuition,
00:21:50.460 errors of judgment, pretensions of knowledge, and sometimes we get it right.
00:21:55.600 And the delta there is extraordinarily consequential.
00:21:58.980 So if I told you that we, over the course of the next 30 years, made astonishing progress
00:22:06.600 on this front, right?
00:22:08.540 So that we, our generation, looks like, you know, bumbling medieval characters compared
00:22:16.480 to what our children or grandchildren begin to see as a new norm, how did we get there?
00:22:22.920 You don't get there.
00:22:23.900 You know, I mean, that's, you know, it's the same as if you told me, will our perceptual
00:22:29.640 system be very different in 60 years?
00:22:32.420 And I don't think so.
00:22:33.980 Let's take one of these biases or sources of bias that you have found.
00:22:38.800 I mean, the power of framing, right?
00:22:40.680 We know that if you frame a problem in terms of loss or you frame the same problem in terms
00:22:46.240 of gains, you get a very different set of preferences from people because people are so averse to loss.
00:22:51.700 So the knowledge of that fact, let's say you're a surgeon, right?
00:22:55.580 And you're recommending or at least, you know, proffering a surgery for a condition to your
00:23:01.820 patients who you have a, you know, you have taken a Hippocratic oath to do no harm.
00:23:06.400 And you know, because you read Danny Kahneman's book, that if you put the possibility of outcome
00:23:12.760 in terms of mortality rates versus survival rates, you are going to be moving several dials
00:23:18.900 in your patient's head one way or the other reliably, can you conceive of us ever agreeing
00:23:24.720 that there's a right answer there, like in terms of what is the ethical duty to frame this correctly?
00:23:30.080 Is there a correct framing or are we just going to keep rolling the dice?
00:23:33.860 Well, I mean, this is a lot of questions at once.
00:23:39.480 In the first place, you know, when you're talking about framing, the person who is subject to
00:23:48.300 the framing, I mean, so you have a surgeon framing something for a patient.
00:23:52.620 First of all, the patient is going to be completely unaware of the fact that there is an alternative
00:23:57.480 frame.
00:23:58.000 That's why it works.
00:23:59.920 It works because you see one thing and you accept the formulation as it is given.
00:24:06.680 So that's why framing works.
00:24:10.560 Now, whether there is a true or not true answer, so I should, let me mention the sort of the
00:24:17.880 canonical problem, which actually my late colleague Amos Sversky invented.
00:24:22.660 So in one formulation, you have a choice between, well, there is a disease that's going to cause
00:24:30.920 600 deaths unless something is done.
00:24:34.840 And you have your choice between saving 400 people or a two-third probability of saving 600.
00:24:41.920 Or alternatively, other people get the other framing that you have a choice between...
00:24:49.580 Killing 200 people.
00:24:51.180 Killing 200 people for sure, and not allowing them to die, and a one-third probability that
00:25:00.520 600 people will die.
00:25:02.180 Is there a correct answer?
00:25:03.700 Is there a correct frame?
00:25:05.620 Now, the interesting thing is people, depending on which frame you presented to them, they make
00:25:10.800 very different choices.
00:25:11.960 But now you confront them with the fact that here you've been inconsistent.
00:25:20.720 And some people will deny it, but you can convince them this is really the same problem.
00:25:27.420 You know, if you save 400, then 200 will die.
00:25:31.120 And then what happens is they're dumbfounded.
00:25:34.240 That is, there are no intuitions.
00:25:36.260 We have clear intuitions about what to do with gains.
00:25:41.780 We have clear intuitions about what to do with losses.
00:25:45.900 And when you strip it from that language with which we have intuition, we have no idea what to do.
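For anyone who wants to check the arithmetic, the two frames as stated here (600 people at risk, "save 400" versus "200 die") describe exactly the same options; a few lines make the equivalence explicit:

```python
# The disease problem in both frames, using the numbers given in the conversation.
TOTAL = 600

# Gain frame: save 400 for sure, or a 2/3 chance of saving all 600.
sure_gain_saved = 400
risky_gain_saved = (2 / 3) * 600 + (1 / 3) * 0

# Loss frame: 200 die for sure, or a 1/3 chance that all 600 die.
sure_loss_saved = TOTAL - 200
risky_loss_saved = (2 / 3) * (TOTAL - 0) + (1 / 3) * (TOTAL - 600)

print(sure_gain_saved, sure_loss_saved)    # 400 400     -> the same sure option
print(risky_gain_saved, risky_loss_saved)  # 400.0 400.0 -> the same gamble
# Identical outcomes, different wording ("saved" vs. "die"), and yet choices
# typically flip between the two frames.
```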
00:25:53.060 So, you know, what is better when you stop to think about, you know, stop thinking about saving or about dying?
00:26:00.900 Well, actually, I've forgotten, if that research was ever done, I forgot what the results were.
00:26:05.960 Has the third condition been compared to the first two?
00:26:09.040 What do people do when you give them both framings and dumbfound them?
00:26:13.980 I mean, you know...
00:26:15.960 Where do the percentages go with respect to...
00:26:18.660 This is not something that, you know, we've done formally, but I can tell you that I'm dumbfounded.
00:26:24.520 That is, I have absolutely no idea.
00:26:26.500 You know, I have the same intuitions as everybody else.
00:26:30.500 You know, when it's in the gains, I want to save lives.
00:26:33.140 And when it's in the losses, I don't want people to die.
00:26:36.180 So, but that's where the intuitions are.
00:26:40.040 When you're talking to me about 600 more people staying alive with a probability two-thirds,
00:26:46.740 or, you know, when you're talking about numbers of people living, I have absolutely no intuitions about that.
00:26:52.860 So, that is quite common in ethical problems and in moral problems, that they're frame-dependent.
00:27:00.620 And when you strip the frames away, people are left without a moral intuition.
00:27:06.020 Well, and this is incredibly consequential in when you're thinking about human suffering.
00:27:10.600 So, your colleague, Paul Slovic, has done these brilliant experiments where he's shown that
00:27:16.320 if you ask people to support a charity, you talk about, you know, a famine in Africa, say,
00:27:22.800 and you show them one little girl attached to a very salient and heartbreaking narrative
00:27:28.480 about, you know, how much she's suffering, you get the maximum charitable response.
00:27:34.080 But then you go to another group and you show that same one little girl and tell her story,
00:27:38.480 but you give her a brother and the response diminishes.
00:27:41.800 And if you go to another group and you give them the little girl and her brother,
00:27:46.980 and then you say, in addition to the suffering of these two gorgeous kids,
00:27:52.280 there are 500,000 suffering children behind them suffering the same famine,
00:27:58.280 then the altruistic response goes to the floor.
00:28:01.140 It's precisely the opposite of what we understand system two should be normative, right?
00:28:08.080 The bigger the problem, the more concerned and charitable we should be.
00:28:12.940 So, to take that case, there's a way to correct for this at the level of tax codes
00:28:18.840 and levels of foreign aid and which problems to target.
00:28:22.560 We know that we are emotionally gamed by the salient personal story
00:28:28.440 and more or less morally blind to statistics and raw numbers.
00:28:33.060 I mean, there's another piece of work that you did which shows that people are so innumerate
00:28:38.140 with respect to the magnitude of problems that they will more or less pay the same amount
00:28:43.340 whether they're saving 2,000 lives, 20,000 lives, or 200,000 lives.
00:28:47.580 Yeah.
00:28:48.660 Because basically, and that's a system one characteristic,
00:28:52.820 basically you're saving one life.
00:28:55.800 You're thinking, you have an image, you have stories,
00:28:58.400 and this is what system one works on.
00:29:00.360 And this is where emotions are about.
00:29:03.040 They're about stories.
00:29:04.500 They're not about numbers.
00:29:06.480 So, it's always about stories.
00:29:08.920 And what happens when you have 500,000, you have lost a story.
00:29:13.120 A story, to be vivid, has to be about an individual case.
00:29:17.700 And when you dilute it by adding cases, you dilute the emotion.
00:29:22.140 Now, what you're describing in terms of the moral response to this is no longer an emotional response.
00:29:32.700 And this is already, you know, this is cognitive morality.
00:29:36.820 This is not emotional morality.
00:29:39.360 You have disconnected from the emotion.
00:29:41.680 You know that it's better to save 500,000 than 5,000,
00:29:46.440 even if you don't feel better about saving 500,000.
00:29:51.380 So, this is passing on to system two.
00:29:55.460 This is passing on to the cognitive system, the responsibility for action.
00:30:00.740 And you don't think that handoff can be made in a durable way?
00:30:06.340 I think it has to be made by policymakers.
00:30:09.680 And policymakers, you know, we hire some people to think about numbers
00:30:14.220 and to think about it in those ways.
00:30:17.300 But if you want to convince people that this needs to be done,
00:30:22.500 you need to convince them by telling them stories about individuals,
00:30:25.840 because numbers just don't catch the imagination of people.
00:30:31.280 What does the phrase cognitive ease mean in your work?
00:30:36.120 Well, it means that some ideas come very easily to mind
00:30:42.780 and others come with greater and greater difficulty to the point of.
00:30:47.660 So, that's what cognitive...
00:30:50.680 It's also called fluency.
00:30:53.380 Right.
00:30:53.540 It's, you know, what's easy to think about.
00:30:57.600 And there is a correlation between fluency and pleasantness, apparently,
00:31:03.300 that pleasant things are more fluent.
00:31:05.400 They come more easily.
00:31:07.240 Not always more easily, but yes, they're more fluent.
00:31:10.900 And fluency is pleasant.
00:31:13.280 So, there is that interaction between fluency and pleasure,
00:31:16.320 which I hope replicates.
00:31:18.220 So, the picture I get is of, I don't know if you reference this in your book,
00:31:25.980 I can't remember, but what happens, what we know from, you know, split-brain studies,
00:31:30.380 that for the most part, the left linguistic hemisphere confabulates.
00:31:35.280 It's continually manufacturing discursive stories that ring true to it.
00:31:41.860 And there's, in the case of actual neurological confabulation,
00:31:47.460 there's no reality testing going on.
00:31:50.200 There's nothing.
00:31:50.560 It's just, it's telling a story that is being believed.
00:31:53.440 But it seems to me that most of us are in a similar mode most of the time.
00:31:59.600 There's a very lazy reality testing mechanism coming online.
00:32:04.820 And it's just easy to take your own word for it most of the time.
00:32:12.020 I think this is really, as you say, this is a normal state.
00:32:15.860 The normal state is that we're telling ourselves stories.
00:32:19.640 We're telling ourselves stories to explain why we believe in things.
00:32:24.120 More often than not, retrospectively, in a way that bears no relationship to the system one,
00:32:29.660 bottom-up reasons why we feel this way.
00:32:31.880 But, you know, for me, the example that was formative is what happened with post-hypnotic suggestions.
00:32:42.540 So you put somebody under hypnosis and you tell them, you know,
00:32:46.900 when I clap my hands, you will feel very warm and you'll open a window.
00:32:52.440 And you clap your hands and they get up and open a window.
00:32:57.140 And they know why they opened the window.
00:32:59.420 And it has nothing to do with the suggestion.
00:33:01.260 It comes with a story.
00:33:03.560 They felt really warm and uncomfortable and they needed air and they opened the window.
00:33:09.200 Actually, in this case, you know, the cause.
00:33:11.760 The cause was the hand was clapped.
00:33:14.740 Is that going to replicate?
00:33:16.320 That one replicates, I'm pretty sure.
00:33:18.840 You know, I hope so.
00:33:21.020 Yeah, I'm sure.
00:33:23.360 Do you have a favorite cognitive error or bias?
00:33:26.940 Which of your ugly children do you like the most?
00:33:32.980 Well, yeah, I think, I mean, it's not the simplest to explain.
00:33:40.620 But my favorite one is sort of extreme predictions.
00:33:43.960 When you have very weak evidence and on the basis of very weak evidence,
00:33:48.920 you draw extreme conclusions.
00:33:51.160 I call it, technically, it's called non-regressive prediction.
00:33:55.460 And it's my favorite.
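A minimal sketch of what "non-regressive" means, with assumed numbers and assuming the impression and the outcome are measured on the same scale: the intuitive prediction simply matches the extreme impression, while a regressive prediction shrinks it toward the mean in proportion to how predictive the evidence actually is.

```python
# Assumed numbers for illustration; none of these come from the episode.
mean_performance = 100   # average later performance, in arbitrary units
impression = 130         # an unusually strong impression formed from weak evidence
validity = 0.2           # assumed correlation between the impression and the outcome

# Non-regressive ("matching") prediction: treat weak evidence as if it were
# perfectly diagnostic and predict something as extreme as the impression itself.
matching_prediction = impression

# Regressive prediction: shrink the extreme impression toward the mean in
# proportion to the evidence's actual predictive validity.
regressive_prediction = mean_performance + validity * (impression - mean_performance)

print(matching_prediction)    # 130
print(regressive_prediction)  # 106.0 -- much less extreme, as weak evidence warrants
```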
00:33:56.620 All right.
00:33:57.820 Where do you see it appearing?
00:33:59.420 Is there an example of it that you have seen?
00:34:02.500 You see it all over the place.
00:34:06.580 But when, you know, one very obvious situation is in job interviews.
00:34:12.040 So, you know, you interview someone and you have a very clear idea of how they will perform.
00:34:18.440 And even when you are told that your ideas are worthless because, in fact,
00:34:22.480 you cannot predict performance or can predict it only very poorly, it doesn't affect it.
00:34:27.080 Next time you interview the person, you have the same confidence.
00:34:32.240 Interview somebody else.
00:34:33.800 I mean, that's something that I discovered very early in my career.
00:34:37.660 I was an officer in the Israeli army as a draftee.
00:34:43.320 And I was interviewing candidates for officer training.
00:34:47.460 And I discovered that I had that uncanny power to know who will be a good officer and who won't be.
00:34:55.520 And I really could tell, you know, interviewing people.
00:34:59.060 I knew their character.
00:35:00.800 You get that sense of, you know, confident knowledge.
00:35:04.700 And then, you know, then the statistics showed that actually we couldn't predict anything.
00:35:11.820 And yet the confidence remained.
00:35:14.580 It's very strange.
00:35:16.540 Right.
00:35:17.620 Well, so there must be a solution for that.
00:35:20.320 Some people following your work must recommend that you either don't do interviews or heavily discount them, right?
00:35:27.540 Yeah, that's absolutely true.
00:35:29.980 Don't do interviews, mostly.
00:35:32.240 Right.
00:35:33.860 And don't do interviews in particular because if you run an interview, you will trust it too much.
00:35:39.960 So there have been many cases, you know, studies, I don't know about many, but there have been studies in which you have candidates, you have a lot of information about them.
00:35:54.420 And then if you add an interview, it makes your predictions worse, especially if the interviewer is the one who makes the final decision.
00:36:03.940 Because when you interview, this is so much more vivid than all the other information you have that you put way too much weight on it.
00:36:12.740 Is that also a story about just the power of face-to-face interaction?
00:36:17.420 It's face-to-face interaction.
00:36:19.660 It's immediate.
00:36:21.300 You know, anything that you experience is, you know, is very different from being told about it.
00:36:27.620 And, you know, as scientists, one of the remarkable things that I know is how much more I trust my results than anybody else's.
00:36:37.160 Right.
00:36:37.500 So, and that's true of everybody I know.
00:36:41.240 You know, we trust our own results.
00:36:43.640 Why?
00:36:44.420 No reason.
00:36:49.380 All right, then let's talk about regret.
00:36:51.560 Okay.
00:36:52.600 What is the power of regret in our lives?
00:36:56.840 How do you think about regret?
00:37:00.540 Well, I think regret is an interesting emotion.
00:37:07.920 And it's a special case of an emotion that has to do with counterfactual thinking.
00:37:14.340 That is, regret is not about something that happened.
00:37:17.140 It's about something that could have happened but didn't.
00:37:19.800 And I don't know about regret itself, but anticipated regret, the anticipation of regret, plays an important role in lots of decisions.
00:37:32.040 That is, there's a decision and you tell yourself, well, if I don't do this and, you know, and it happens, then how will I feel?
00:37:40.620 That expectation of regret is very powerful.
00:37:43.720 And it's well known in financial decisions and a lot of other decisions.
00:37:50.680 And it's connected to loss aversion as well, right?
00:37:53.420 Well, I mean, it's a form of loss.
00:37:55.220 It's a form of loss.
00:37:56.320 And it's quite vivid that you're able to anticipate how you will feel if something happens.
00:38:06.340 And that becomes very salient.
00:38:08.300 Well, does the asymmetry with respect to how we view losses and gains make sense, ultimately?
00:38:17.440 I mean, I think at some point in your work you talk about an evolutionary rationale for it because suffering is worse than pleasure is good, essentially,
00:38:28.740 because there's a survival advantage for those who are making greater efforts to avoid suffering.
00:38:33.580 But it also just seems like there's, if you put in the balance of possibility the worst possible misery and the greatest possible pleasure,
00:38:43.780 I mean, if I told you we could have the night we're going to have tonight and it will be a normal night of conversation,
00:38:51.020 or there's a part of the evening where I can give you the worst possible misery for a half hour,
00:38:57.040 followed by the best possible pleasure.
00:39:00.680 Let's have a conversation.
00:39:01.780 Yeah, let's just get a cheeseburger and a Diet Coke.
00:39:06.600 The prospect of suffering in this universe seems to overwhelm the prospect of happiness or well-being.
00:39:13.820 I know you put a lot of thought into the power of sequencing.
00:39:17.380 I can imagine that feeling the misery first and the pleasure second would be better than the reverse.
00:39:23.260 Much.
00:39:23.900 But it's not going to be enough to make it seem like a good choice, I would imagine.
00:39:28.120 How do you think of this asymmetry between pleasure and pain?
00:39:31.420 You know, the basic asymmetry is between threats and opportunities, and threats are more immediate.
00:39:39.200 And so in many situations, it's not true everywhere, there are situations where opportunities are very rare.
00:39:49.280 But threats are immediate, and they have to be dealt with immediately, so the priority of threats over opportunities must be built in by a large evolutionary loop.
00:40:00.000 But do you think we could extract an ethical norm from this asymmetry?
00:40:06.740 For instance, could it be true to say that it is more important to alleviate suffering than to provide pleasure if we had some way to calibrate the magnitude of each?
00:42:19.740 Well, in the first, we did a study, Dick Thaler and Jack Knetsch and I did a study a long time ago, about intuitions about fairness.
00:40:28.100 And it's absolutely clear that that asymmetry rules intuitions about fairness.
00:40:34.480 That is, there is a very powerful rule of fairness that people identify with, not to cause losses.
00:40:44.040 That is, you have to have a very good reason to inflict a loss on someone.
00:40:48.460 The injunction to share your gains is much weaker.
00:40:54.820 So that asymmetry, what we call the rights that people have, quite frequently the negative rights that people have, is the right not to have losses inflicted on you.
00:41:06.240 So there are powerful moral intuitions that go in that direction.
00:41:11.520 And the second question that you asked, because that was a compound question about well-being, yeah, I mean, I think, you know, in recent decades, there's tremendous emphasis on happiness and the search for happiness and the responsibility of governments to make citizens happy and so on.
00:41:32.680 And one of my doubts about this line of thinking is that I think that preventing misery is a much better and more important objective than promoting happiness.
00:41:46.220 And so the happiness movement, I have my doubts about on those grounds.
00:41:53.840 Given what you've said, it's hard to ever be sure that you've found solid ground here.
00:41:59.820 So there's the intuition that you just cited that people have a very strong reaction to imposed losses that they don't have to unshared gains, right?
00:42:10.660 You do something that robs me of something I thought I had.
00:42:16.000 I'm going to feel much worse about that than just the knowledge that you didn't share some abundance that I never had in the first place.
00:42:21.740 But it seems that we could just be a conversation away from standing somewhere that makes that asymmetry look ridiculous, analogous to the Asian disease problem, right?
00:42:36.060 Like it's a framing effect that we may have an evolutionary story to tell about why we're here, but given some opportunity to be happy in this world, it could seem counterproductive.
00:42:48.240 I say this already being anchored to your intuition.
00:42:51.800 I share this situation.
00:42:53.000 Yeah, I think that, you know, in philosophical debates about morality and well-being, there are really two ways of thinking about it.
00:43:06.180 And there is one way about when you're thinking of final states and what everybody will have.
00:43:12.220 And so you have, and there there is a powerful intuition that you want people more or less to be equal, or at least not to be too different.
00:43:21.360 But there is another way of thinking about it, which is given the situation and the state of society, how much redistribution do you want to impose?
00:43:32.540 And there there is an asymmetry because you are taking from some people and giving it to others.
00:43:37.160 And you don't get to the same point.
00:43:40.500 So we have powerful moral intuitions of two kinds, and they're not internally consistent.
00:43:46.840 And loss aversion has a great deal to do with that.
00:43:49.920 So given that there are many things we want and don't want, and we want and don't want them strongly,
00:43:56.640 and we are all moving individually and collectively into an uncertain future where there are threats and opportunities,
00:44:05.040 and we're trying to find our way, how do you think about worrying?
00:44:08.820 What is the advantage of worrying?
00:44:11.200 If there was a way to just not worry, is that an optimal strategy?
00:44:15.700 I think the Dalai Lama most recently articulated this in a meme, but this no doubt predates him.
00:44:21.900 Take the thing you're worried about, right?
00:44:23.880 Either there's something you can do about it or not.
00:44:26.140 If there's something you can do about it, well, then do that thing.
00:44:28.600 If you can't do anything about it, well, then why worry?
00:44:31.340 Because you're just going to suffer twice, right?
00:44:33.760 How do you think about worry, given your work here?
00:44:37.380 Well, I don't think my work leads to any particular conclusions about this.
00:44:42.380 I mean, the Dalai Lama is obviously right.
00:44:44.520 I mean, you know, why worry?
00:44:46.900 But...
00:44:47.300 Some people are going to tweet that, and it's not going to work out well for you.
00:44:49.840 On the other hand, I would like to see people worry a fair amount about the future, and even
00:44:59.720 because you don't know right now whether or not you'll be able to do anything about it.
00:45:04.720 Right.
00:45:05.160 I mean...
00:45:05.680 Maybe worry.
00:45:06.980 The only way to get enough activation energy into the system to actually motivate them to
00:45:11.880 do something is to worry.
00:45:13.300 You know, one of the problems, for example, when you're thinking of climate change, one
00:45:18.300 of the problems is you can't make people worry about something that is so abstract and distant.
00:45:24.060 Yeah.
00:45:24.460 And, you know, if you make people worry enough, things would change.
00:45:29.220 But there is...
00:45:31.160 Scientists are incapable of making the public worry sufficiently about that problem.
00:45:35.920 And to steal a technique that you just recommended, if you could make a personal story out of it,
00:45:43.040 that would sell the problem much more effectively.
00:45:45.780 It just...
00:45:46.260 Climate change is a very difficult thing to personalize.
00:45:48.680 It's very difficult to personalize, and it's not immediate.
00:45:52.480 So it's...
00:45:53.800 It really...
00:45:54.420 Climate change is the worst problem, in a way.
00:45:58.040 The problem that we're least well-equipped to deal with, because it's remote, it's abstract,
00:46:03.880 and it's not a clear and present danger.
00:46:10.840 I mean, a meteorite, you know, coming to Earth, that would mobilize people.
00:46:15.340 Climate change is a much more difficult problem to deal with, and worry is part of that story.
00:46:23.460 It's interesting that a meteorite would be different.
00:46:27.660 I mean, even if you put it far enough out there, so you have an Earth-crossing asteroid
00:46:31.900 in 75 years, there would still be some counsel of uncertainty.
00:46:38.240 People would say, well, we can't be 100% sure that something isn't going to happen in the next
00:46:43.500 75 years that will divert this asteroid.
00:46:46.900 Other people will say, well, surely we're going to come up with some technology that would be
00:46:52.440 onerously costly for us to invent now, but 20 years from now could be trivially easy for us to
00:46:57.740 invent, so why steal anything from anyone's pocketbook now to deal with it?
00:47:03.380 You could run some of the same arguments, but there's something, the problem is crystallized
00:47:06.980 in a way that climate change isn't.
00:47:07.400 The difference is there is a story about the asteroid.
00:47:11.100 You have a clear image of what happens if it hits, and the image is a lot clearer than climate change.
00:47:19.360 So, one generic issue here is the power of framing.
00:47:27.640 I mean, we are now increasingly becoming students of the power of framing, but we are not, we
00:47:35.860 should just be able to come up with a list of the problems we have every reason to believe
00:47:42.000 are real and significant, and sort those problems by the variable of, this is the set of problems
00:47:50.540 that we are, we know that we are very unlikely to feel an emotional response to, right?
00:47:57.760 We are just, we are not wired to appreciate, to be motivated by what we rationally understand
00:48:03.280 in these areas, and then take the cognitive step of deliberately focusing on those problems.
00:48:12.100 If we did that, if everyone in this room did that, what we're then left with is a political
00:48:17.100 problem of selling this attitude toward the rest of the matter.
00:48:20.320 I mean, you know, you used a tricky word there, and the word is we.
00:48:25.020 Who is we?
00:48:26.300 I mean, you know, in that story, who is we?
00:48:29.940 So, you are talking about a group of people, possibly political leaders, who are making
00:48:37.620 a decision on behalf of the population that, in a sense, they treat like children who do
00:48:43.000 not understand the problem.
00:48:45.700 I mean, it's quite difficult.
00:48:47.840 Surely you can't be talking about our current political leaders.
00:48:50.980 No, I'm not.
00:48:52.040 But it's actually, I find it difficult to see how democracies can effectively deal with
00:49:00.060 a problem like climate change.
00:49:02.260 I mean, you know, if I had to guess, I would say China is more likely to come up with effective
00:49:08.980 solutions than the West, because they're authoritarian.
00:49:12.740 If you'd like to continue listening to this conversation, you'll need to subscribe at
00:49:24.220 SamHarris.org.
00:49:25.740 Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along
00:49:30.140 with other subscriber-only content, including bonus episodes and AMAs, and the conversations
00:49:35.640 I've been having on the Waking Up app.
00:49:37.080 The Making Sense podcast is ad-free and relies entirely on listener support, and you can
00:49:42.820 subscribe now at SamHarris.org.