#150 — The Map of Misunderstanding
Episode Stats
Words per Minute
148.2
Summary
In this episode, Sam Harris speaks with Dr. Daniel Kahneman about his Nobel Prize in Economics, why he won it, and what it's like to be a Nobel laureate. He also talks about the replication crisis in science, the power of cognitive illusions, and the problems he and Amos Tversky tackled in their groundbreaking work on decision-making under uncertainty. And, of course, he discusses his book Thinking, Fast and Slow, and why we should all be trying to figure out why we get things wrong so often, and how we can fix it. This episode was recorded at a sold-out event at the Beacon Theatre in New York. We don't run ads on the podcast, and it's made possible entirely through the support of our listeners, so if you enjoy what we're doing here, please consider becoming a subscriber.
Transcript
00:00:10.880
Just a note to say that if you're hearing this, you are not currently on our subscriber
00:00:14.680
feed and will only be hearing the first part of this conversation.
00:00:18.420
In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at
00:00:24.060
There you'll find our private RSS feed to add to your favorite podcatcher, along with other subscriber-only content.
00:00:30.520
We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers.
00:00:35.880
So if you enjoy what we're doing here, please consider becoming one.
00:00:50.920
Well, today I'm bringing you the audio from my live event with Danny Kahneman at the Beacon Theatre in New York.
00:00:57.440
This was a sold-out event in a very cool old theater.
00:01:02.920
I'd actually never been to the Beacon before, but it has a storied history in music and comedy.
00:01:09.880
Anyway, it was a great pleasure to share the stage with Danny.
00:01:13.200
Daniel Kahneman, as you may know, is an emeritus professor of psychology at Princeton University
00:01:17.980
and also an emeritus professor of public affairs at Princeton's Woodrow Wilson School of Public and International Affairs.
00:01:26.540
He received the Nobel Prize in Economics in 2002 for the work he did on decision-making under uncertainty with his longtime collaborator, Amos Tversky.
00:01:36.420
Unfortunately, Tversky died in 1996, and he was a legendary figure who would have certainly
00:01:42.980
shared the Nobel Prize with Danny had he lived longer.
00:01:49.720
In any case, I think it's uncontroversial to say that Danny has been the most influential
00:01:53.900
living psychologist for many years now, but he's perhaps best known in the general public
00:02:00.980
for his book Thinking, Fast and Slow, which summarizes much of the work he did with Tversky.
00:02:06.800
Michael Lewis also recently wrote a biography of the Kahneman-Tversky collaboration, The Undoing Project.
00:02:16.520
Anyway, Danny and I covered a lot of ground at The Beacon.
00:02:19.720
We discussed the replication crisis in science, systems one and two, which is to say automatic
00:02:27.380
and unconscious cognitive processes and more conscious and deliberative ones.
00:02:32.860
We talk about the failure of intuition, even expert intuitions, the power of framing, moral
00:02:40.900
illusions, anticipated regret, the asymmetry between threats and opportunities, the utility
00:02:47.780
of worrying, removing obstacles to wanted behaviors, the remembering self versus the experiencing
00:02:54.960
self, improving the quality of gossip, and many other topics.
00:02:59.920
Anyway, Danny has a fascinating mind, and I think you'll find this a very good introduction to his thinking.
00:03:08.180
Of course, if you want more, his book Thinking, Fast and Slow also awaits you if you haven't read it.
00:03:35.660
Danny, it's a special honor to be here with you, so thank you for coming.
00:03:39.620
It's often said, and rarely true, that a guest needs no introduction, but in your case that is actually true.
00:03:54.140
We're going to talk about your work throughout, so, for the one person here who doesn't
00:03:58.780
know who you are, you will understand at the end of the hour.
00:04:02.160
But I guess by way of introduction, I just want to ask: what is the worst thing about winning the Nobel Prize?
00:04:19.400
Okay, well, nobody wants to hear your problems, Dan.
00:04:26.900
So, how do you think about your body of work?
00:04:31.380
How do you summarize the intellectual problems you have tried to get your hands around?
00:04:35.780
You know, it's just been a series of problems that came up and that I worked on.
00:04:43.940
When you look back, of course, I mean, you see patterns and you see ideas that have been with me all along.
00:04:52.800
You know, you follow things, you follow ideas, you follow things that you find interesting.
00:05:01.660
Really, that's a story of my intellectual life.
00:05:07.060
Judging from the outside, it seems to me that you have told us much of what we now think we
00:05:12.080
know about cognitive bias and cognitive illusion.
00:05:17.260
And really, the picture is of human ignorance having a kind of structure.
00:05:28.260
And because of that, whole groups, markets, societies can get things wrong, because the errors are systematic rather than random.
00:05:39.340
And that obviously has implications that touch more or less everything we care about.
00:05:45.000
Let's just, I want to track through your work, you know, as presented in your now famous book, Thinking, Fast and Slow.
00:05:54.340
And I just want to try to tease out what should be significant for all of us at this moment.
00:05:59.880
Because, you know, human unreason, unfortunately, becomes more and more relevant, it seems.
00:06:07.600
And I guess I wanted just to begin to ask you about a problem that's very close to home now,
00:06:13.820
what is called the replication crisis or reproducibility crisis in science, in particular social sciences
00:06:20.300
and in particular psychology, and for those in the room who are not aware of what has happened
00:06:25.940
and how dire this seems, it seems that when you go back to even some of the most celebrated
00:06:31.520
studies in psychology, their reproducibility is on the order of 50-60% in the best case.
00:06:39.840
So there was one study that took 21 papers from Nature and Science, which are the
00:06:45.220
most highly regarded journals, and reproduced only 13 of them.
00:06:51.280
And so let's talk about the problem we faced in even doing science in the first place.
00:06:56.980
Well, I mean, you know, the key problem and the reason that this happens is that research is expensive.
00:07:06.440
And it's expensive personally, and it's expensive in terms of money.
00:07:13.540
So when you're a researcher, you know what you want to find.
00:07:17.160
And that creates biases that you're not fully aware of.
00:07:21.700
And I think a lot of this is simply self-delusion.
00:07:25.640
That is, you know, there is a concept that's known as p-hacking, which is people very honestly deluding themselves.
00:07:36.500
And there are several tricks of the trade that, you know, people know about.
00:07:44.180
So instead of having one dependent variable where you predict the outcome, you take two
00:07:50.340
And then if one of them doesn't work, you stay with the one that does work.
00:07:55.240
You do that and things like that a few times, then it's almost guaranteed that your research will come out significant.
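To make the arithmetic concrete, here is a minimal simulation of the trick Kahneman describes, with assumed sample sizes and the conventional p < .05 threshold (none of these numbers come from the episode): even when no real effect exists, testing two outcome variables and keeping whichever one "works" nearly doubles the false-positive rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_study(n=30, n_dvs=2):
    """Run one two-group study in which the null is true, with n_dvs
    independent dependent variables; report the study as 'significant'
    if ANY variable reaches p < .05 (keeping the one that works)."""
    for _ in range(n_dvs):
        control = rng.normal(size=n)
        treatment = rng.normal(size=n)  # same distribution: no real effect
        if stats.ttest_ind(control, treatment).pvalue < 0.05:
            return True
    return False

for dvs in (1, 2, 4):
    rate = sum(one_study(n_dvs=dvs) for _ in range(10_000)) / 10_000
    print(f"{dvs} dependent variable(s): false-positive rate ~ {rate:.3f}")
# ~0.05, ~0.10, ~0.19: a few honest-looking "outs" make a spurious
# significant result almost guaranteed across a handful of studies.
```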
00:08:03.960
And that happens, it was first discovered in medicine.
00:08:07.440
I mean, it's more important in medicine than it is in psychology, where somebody famously
00:08:12.420
said that most published research in medicine is false.
00:08:17.100
And a fair amount of published psychological research is false, too.
00:08:23.080
Yeah, but even some of the most celebrated results in psychology, like priming and the marshmallow test, have failed to replicate.
00:08:29.960
Well, yeah, I mean, it's not only that; actually, they get celebrated in part because they are surprising.
00:08:40.460
And the rule is, you know, the more surprising the result is, the less likely it is to be
00:08:47.600
And so that's how celebrated results get to be non-replicable.
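Both claims here, that most published findings in a field can be false and that surprising results replicate worst, fall out of base-rate arithmetic. A sketch with illustrative numbers (assumptions, not figures from the episode):

```python
# Positive predictive value of a "significant" finding.
prior = 0.10   # assumed fraction of tested hypotheses that are actually true
power = 0.50   # assumed chance a real effect comes out significant
alpha = 0.05   # chance a null effect comes out significant by luck

true_hits = prior * power            # 0.050
false_hits = (1 - prior) * alpha     # 0.045
print(true_hits / (true_hits + false_hits))   # ~0.53: barely better than a coin flip

# A surprising hypothesis is one with a low prior, and p-hacking raises
# the effective alpha. Either change pushes most "findings" into falsehood:
prior, alpha = 0.02, 0.20
print((prior * power) / (prior * power + (1 - prior) * alpha))  # ~0.05
```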
00:08:53.880
Well, and the scariest thing I heard, I don't know how robust this study was, but someone
00:08:59.540
did a study on trying to replicate unpublished studies and found that they replicated better than the published ones.
00:09:17.180
These are the structures that give us so much of our, what can be a dispiriting picture of the human mind.
00:09:24.780
Summarize for us, what are these two systems you talk about?
00:09:29.320
I mean, before starting with anything else, there are clearly two ways that ideas come to mind.
00:09:35.080
I mean, so if I say two plus two, then an idea comes to your mind.
00:09:46.280
If I ask you to multiply, you know, 24 by 17 or something like that, you have to work for it.
00:09:52.820
So it's that dichotomy between the associative, effortless kind of thinking and the effortful kind.
00:10:05.080
And how you describe it and whether you choose to describe it in terms of systems, as I did,
00:10:12.100
or in other terms, that's already a theoretical choice.
00:10:16.320
And in my view, theory is less important than the basic observation, you know, that there are these two types of processes.
00:10:25.000
And then you have to describe it in a way that could be useful.
00:10:31.280
And what I mean by that is you have to describe the phenomena in a way that will cause, help
00:10:39.380
researchers have good ideas about facts and about experiments to run.
00:10:43.980
And system one and system two, it's not my dichotomy and not even my terminology.
00:10:52.620
And in fact, it's a terminology that many people object to, but I chose it quite deliberately.
00:11:00.560
In your book, you try to guard against various misunderstandings of this.
00:11:05.820
Well, yes, I mean, you know, there is a rule that you're taught fairly early in psychology,
00:11:11.640
which is never to invoke what are called homunculi, which are little people in your head whose
00:11:17.780
behavior explains your behavior or explains the behavior of people.
00:11:22.680
And system one and system two are really homunculi.
00:11:25.500
So I knew what I was doing when I picked those.
00:11:30.020
But the reason I did was that system one and system two are agents.
00:11:38.580
And it turns out that the mind is very good at forming pictures and images of agents that
00:11:45.320
have intentions and propensities and traits and they're active.
00:11:51.360
And it's just easy to get your mind around that.
00:11:54.260
And that's why I picked that terminology, which many people find sort of objectionable because it turns the systems into little agents.
00:12:03.020
It's just a very useful way to think about it, I think.
00:12:05.980
So there's no analogy to be drawn between this and a classical psychological, even Freudian, picture of the mind?
00:12:15.820
How do you think about consciousness and everything that precedes it in light of modern psychology?
00:12:22.300
It's clearly related in the sense that what I call system one activities, the automatic
00:12:28.400
ones, one characteristic they have is that you're completely unconscious of the process
00:12:39.620
In system two activities, you're often conscious of the process.
00:12:43.980
You know what you're doing when you're calculating.
00:12:46.660
You know what you're doing when you're searching for something in memory.
00:12:51.360
So clearly, consciousness and system two tend to go together.
00:12:56.080
It's not a perfect correlation, you know, and who knows what consciousness is anyway.
00:13:02.480
And system one is much more likely to be unconscious and automatic.
00:13:06.500
Neither system is a perfect guide toward tracking reality.
00:13:10.740
But system one is very effective in many cases.
00:13:16.220
Otherwise, it wouldn't have evolved the way it has.
00:13:18.600
But I guess maybe let's start with a picture of where our intuitions are reliable and where they're not.
00:13:28.000
How do you think about the utility of intuition?
00:13:30.120
I'll say first about system one that our representation of the world, most of what we know, is produced automatically.
00:13:40.340
So that we're going along in life producing expectations and being surprised or not being surprised.
00:13:54.080
So most of our thinking, system one thinking, most of what goes on in our mind, goes on unconsciously.
00:14:01.300
So that's, and intuition is defined as, you know, knowing or rather thinking that you know
00:14:08.620
something without knowing why you know it or without knowing where it comes from.
00:14:13.620
And it's fairly clear, actually, I mean, this is a digression, but there is a
00:14:20.520
guy named Gary Klein, a psychologist who really doesn't like anything that I do.
00:14:25.460
And he, he... How does your system one feel about that?
00:14:32.520
So, but he believes in intuition and in expert intuition, he's a great believer in it, and
00:14:40.380
he has beautiful data showing, beautiful observations of expert intuition.
00:14:46.500
So he and I, I invited him, actually, to try and figure out our differences, because obviously we disagreed.
00:14:53.300
So where is intuition marvelous, and where is it flawed?
00:14:57.020
And we worked about, we worked for six years before we came up with something, and we published
00:15:03.000
an article called A Failure to Disagree, because, in fact, there is a fairly clear boundary
00:15:09.460
about when you can trust your intuitions and when you can't.
00:15:13.000
And, and I think that's summarized in three conditions.
00:15:16.600
The first one is, the world has to be regular enough.
00:15:20.880
I mean, first of all, intuition is recognition.
00:15:26.960
You have an intuition, and it's just like recognizing, you know, the way a child recognizes familiar things.
00:15:36.960
Now, in order to recognize patterns and reality, which is, which is what true intuitions are,
00:15:43.900
the world has to be regular enough so that there are regularities to be picked up.
00:15:49.180
But then you have to have enough exposure to those regularities to have a chance to learn them.
00:15:56.200
And third, it turns out that intuition depends critically on the time between when you're
00:16:03.780
making a guess and a judgment and when you get feedback about it.
00:16:10.100
And if those three conditions are satisfied, then eventually people develop intuition.
00:16:15.840
Chess is a prime example where all three conditions are satisfied.
00:16:21.360
So after, you know, many hours, I don't know whether 10,000 or not, but many hours, a chess player has strong intuitions.
00:16:29.040
All the ideas, all the moves that come to his or her mind are going to be strong moves.
00:16:36.500
So the picture is one of intuition, I mean, there are intuitions that are more innate than
00:16:42.520
others, or we're so primed to learn certain things innately that no one remembers learning
00:16:48.440
these things, you know, recognizing a human face, say.
00:16:52.100
But much of what you're calling intuition was at one point learned.
00:16:58.740
There are experts in various domains, chess being a very clear one, that develop what we can call genuine expert intuition.
00:17:06.780
And yet much of the story of the blind spots in our rationality is a story of the failure of intuition.
00:17:16.180
So where do you see the frontier of trainability here?
00:17:19.400
I mean, I think that what happens is that when those conditions are not satisfied, people develop intuitions anyway.
00:17:27.620
That is, you know, they have ideas that come to their mind with high confidence and they trust them.
00:17:36.500
I mean, you know, we've all met them and we see them in the mirror and, you know, that's...
00:17:43.520
So it turns out you can have intuitions for bad reasons, you know.
00:17:50.860
So all it takes is a thought that comes to your mind automatically and with high confidence
00:17:56.700
and you'll think that it's an intuition and you'll trust it.
00:18:00.080
But the correlation between confidence and accuracy is not high.
00:18:06.440
That's, you know, one of the saddest things about the human condition.
00:18:10.100
You can be very confident in ideas, and the correlation with accuracy just isn't there.
00:18:15.960
Well, so that's, you know, yes, a depressing but fascinating fact: that the signature of a
00:18:26.140
high probability that you are correct is what you feel while uttering that sentence.
00:18:31.920
I mean, psychologically, confidence is the marker of your credence in whatever proposition
00:18:38.100
it is you're entertaining, and yet we know they can become totally uncoupled, and often are.
00:18:45.940
Given what you know, or think you know, scientifically, how much of that bleeds back into your life?
00:18:58.860
How is Danny Kahneman different given what he has understood about science?
00:19:04.740
I mean, it's even more depressing than I thought.
00:19:09.220
You know, in terms of thinking that my intuitions are better than other people's, they aren't.
00:19:16.120
And furthermore, I have to confess, I'm also very overconfident.
00:19:25.960
You're just issuing a long string of apologies?
00:19:31.200
If anyone should know better, you should know better.
00:19:43.720
How hopeful are you that an individual can improve?
00:19:46.560
And how hopeful are you that we can design systems of conversation and incentives that
00:19:52.160
can make some future generation find us more or less unrecognizable in our stupidity and...
00:20:00.120
Well, you know, I should preface by saying that I'm not an optimist in general, but I'm
00:20:05.740
certainly not an optimist about those questions.
00:20:11.480
You know, I'm a case study because I've been studying that stuff for more than 50 years,
00:20:15.640
and I don't think that my intuitions have really significantly improved.
00:20:24.000
I can catch it, recognize a situation as one in which I'm likely to be making a mistake.
00:20:31.360
And this is the way that people protect themselves against visual illusions.
00:20:35.600
You can see the illusion, and there's no way you can not see it.
00:20:39.600
But you can recognize that this is likely to be an illusion, so don't trust my eyes, take a measurement.
00:20:47.860
You know, a similar thing goes on with cognitive illusions.
00:20:51.620
Sometimes you know that your intuitions, your confident thought, is unlikely to be true.
00:21:02.520
I don't think that I've become, you know, in any significant way, smarter because of studying all this.
00:21:16.120
What you must thirst for on some level is that this understanding of ourselves can be
00:21:24.560
made useful or more useful than it is, because the consequences are absolutely dire, right?
00:21:31.540
I mean, our decision-making is, one could argue, the most important thing on Earth, certainly collectively.
00:21:39.780
I mean, how we negotiate nuclear test ban treaties, right?
00:21:44.120
I mean, like everything from that on down, this is all human conversation, human intuition,
00:21:50.460
errors of judgment, pretensions of knowledge, and sometimes we get it right.
00:21:55.600
And the delta there is extraordinarily consequential.
00:21:58.980
So if I told you that we, over the course of the next 30 years, made astonishing progress on this front.
00:22:08.540
So that we, our generation, looks like, you know, bumbling medieval characters compared
00:22:16.480
to what our children or grandchildren begin to see as a new norm, how did we get there?
00:22:23.900
You know, I mean, that's, you know, it's the same as if you told me, will our perceptual illusions ever disappear?
00:22:33.980
Let's take one of these biases or sources of bias that you have found.
00:22:40.680
We know that if you frame a problem in terms of loss or you frame the same problem in terms
00:22:46.240
of gains, you get a very different set of preferences from people because people are so averse to loss.
00:22:51.700
So the knowledge of that fact, let's say you're a surgeon, right?
00:22:55.580
And you're recommending or at least, you know, proffering a surgery for a condition to your
00:23:01.820
patients, toward whom, you know, you have taken a Hippocratic oath to do no harm.
00:23:06.400
And you know, because you read Danny Kahneman's book, that if you put the possibility of outcome
00:23:12.760
in terms of mortality rates versus survival rates, you are going to be moving several dials
00:23:18.900
in your patient's head one way or the other reliably, can you conceive of us ever agreeing
00:23:24.720
that there's a right answer there, like in terms of what is the ethical duty to frame this correctly?
00:23:30.080
Is there a correct framing or are we just going to keep rolling the dice?
00:23:33.860
Well, I mean, this is a lot of questions at once.
00:23:39.480
In the first place, you know, when you're talking about framing, the person who is subject to
00:23:48.300
the framing, I mean, so you have a surgeon framing something for a patient.
00:23:52.620
First of all, the patient is going to be completely unaware of the fact that there is an alternative framing.
00:23:59.920
It works because you see one thing and you accept the formulation as it is given.
00:24:10.560
Now, whether there is a true or not true answer, let me mention the sort of
00:24:17.880
canonical problem, which actually my late colleague Amos Tversky invented.
00:24:22.660
So in one formulation, you have a choice between, well, there is a disease that is expected to kill 600 people.
00:24:34.840
And you have your choice between saving 400 people for sure or a two-thirds probability of saving 600.
00:24:41.920
Or alternatively, other people get the other framing that you have a choice between...
00:24:51.180
Losing 200 people for sure, or a one-third probability that all 600 will die.
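For the record, the two frames as stated on stage describe the same options. A quick check of the arithmetic (assuming 600 lives at risk throughout, as in the canonical version of the problem):

```python
# The disease problem in both frames; 600 lives at risk in every option.
TOTAL = 600

# Gain frame: save 400 for sure, or a 2/3 chance of saving all 600.
sure_saved = 400
gamble_saved = (2 / 3) * 600 + (1 / 3) * 0            # expected lives saved

# Loss frame: 200 die for sure, or a 1/3 chance that all 600 die.
sure_saved_loss = TOTAL - 200                          # 200 dead = 400 saved
gamble_saved_loss = TOTAL - ((1 / 3) * 600 + (2 / 3) * 0)  # expected deaths = 200

print(sure_saved, gamble_saved)            # 400 400.0
print(sure_saved_loss, gamble_saved_loss)  # 400 400.0
# Identical outcomes and identical expected values; only the wording
# differs, yet people tend to pick the sure thing in the gain frame
# and the gamble in the loss frame.
```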
00:25:05.620
Now, the interesting thing is, depending on which frame you presented to them, people make different choices.
00:25:11.960
But now you confront them with the fact that here you've been inconsistent.
00:25:20.720
And some people will deny it, but you can convince them this is really the same problem.
00:25:36.260
We have clear intuitions about what to do with gains.
00:25:41.780
We have clear intuitions about what to do with losses.
00:25:45.900
And when you strip it of that language, about which we have intuitions, we have no idea what to do.
00:25:53.060
So, you know, what is better when you stop to think about, you know, stop thinking about saving or about dying?
00:26:00.900
Well, actually, I've forgotten, if that research was ever done, I forgot what the results were.
00:26:05.960
Has the third condition been compared to the first two?
00:26:09.040
What do people do when you give them both framings and dumbfound them?
00:26:18.660
This is not something that, you know, we've done formally, but I can tell you that I'm dumbfounded.
00:26:26.500
You know, I have the same intuitions as everybody else.
00:26:30.500
You know, when it's in the gains, I want to save lives.
00:26:33.140
And when it's in the losses, I don't want people to die.
00:26:40.040
When you're talking to me about 600 more people staying alive with a probability of two-thirds,
00:26:46.740
or, you know, when you're talking about numbers of people living, I have absolutely no intuitions about that.
00:26:52.860
So, that is quite common in ethical problems and in moral problems, that they're frame-dependent.
00:27:00.620
And when you strip the frames away, people are left without a moral intuition.
00:27:06.020
Well, and this is incredibly consequential when you're thinking about human suffering.
00:27:10.600
So, your colleague, Paul Slovic, has done these brilliant experiments where he's shown that
00:27:16.320
if you ask people to support a charity, you talk about, you know, a famine in Africa, say,
00:27:22.800
and you show them one little girl attached to a very salient and heartbreaking narrative
00:27:28.480
about, you know, how much she's suffering, you get the maximum charitable response.
00:27:34.080
But then you go to another group and you show that same one little girl and tell her story,
00:27:38.480
but you give her a brother and the response diminishes.
00:27:41.800
And if you go to another group and you give them the little girl and her brother,
00:27:46.980
and then you say, in addition to the suffering of these two gorgeous kids,
00:27:52.280
there are 500,000 children behind them suffering the same famine,
00:27:58.280
then the altruistic response goes to the floor.
00:28:01.140
It's precisely the opposite of what we understand to be normative from the standpoint of system two, right?
00:28:08.080
The bigger the problem, the more concerned and charitable we should be.
00:28:12.940
So, to take that case, there's a way to correct for this at the level of tax codes
00:28:18.840
and levels of foreign aid and which problems to target.
00:28:22.560
We know that we are emotionally gamed by the salient personal story
00:28:28.440
and more or less morally blind to statistics and raw numbers.
00:28:33.060
I mean, there's another piece of work that you did which shows that people are so innumerate
00:28:38.140
with respect to the magnitude of problems that they will more or less pay the same amount
00:28:43.340
whether they're saving 2,000 lives, 20,000 lives, or 200,000 lives.
00:28:48.660
Because basically, and that's a system one characteristic, you're thinking in terms of a single case.
00:28:55.800
You're thinking, you have an image, you have stories, you have individuals in mind.
00:29:08.920
And what happens when you have 500,000, you have lost a story.
00:29:13.120
A story, to be vivid, has to be about an individual case.
00:29:17.700
And when you dilute it by adding cases, you dilute the emotion.
00:29:22.140
Now, what you're describing in terms of the moral response to this is no longer an emotional response.
00:29:32.700
And this is already, you know, this is cognitive morality.
00:29:41.680
You know that it's better to save 500,000 than 5,000,
00:29:46.440
even if you don't feel better about saving 500,000.
00:29:55.460
This is passing on to the cognitive system the responsibility for action.
00:30:00.740
And you don't think that handoff can be made in a durable way?
00:30:09.680
And policymakers, you know, we hire some people to think about numbers for us.
00:30:17.300
But if you want to convince people that this needs to be done,
00:30:22.500
you need to convince them by telling them stories about individuals,
00:30:25.840
because numbers just don't catch the imagination of people.
00:30:31.280
What does the phrase cognitive ease mean in your work?
00:30:36.120
Well, it means that some ideas come very easily to mind
00:30:42.780
and others come with greater and greater difficulty, to the point of not coming at all.
00:30:57.600
And there is a correlation between fluency and pleasantness, apparently,
00:31:07.240
Not always more easily, but yes, they're more fluent.
00:31:13.280
So, there is that interaction between fluency and pleasure.
00:31:18.220
So, the picture I get is, and I don't know if you reference this in your book,
00:31:25.980
I can't remember, but what we know from, you know, split-brain studies, is
00:31:30.380
that for the most part, the left linguistic hemisphere confabulates.
00:31:35.280
It's continually manufacturing discursive stories that ring true to it.
00:31:41.860
And, in the case of actual neurological confabulation, there's no intent to deceive.
00:31:50.560
It's just, it's telling a story that is being believed.
00:31:53.440
But it seems to me that most of us are in a similar mode most of the time.
00:31:59.600
There's a very lazy reality testing mechanism coming online.
00:32:04.820
And it's just easy to take your own word for it most of the time.
00:32:12.020
I think this is really, as you say, this is a normal state.
00:32:15.860
The normal state is that we're telling ourselves stories.
00:32:19.640
We're telling ourselves stories to explain why we believe in things.
00:32:24.120
More often than not, retrospectively, in a way that bears no relationship to the system one processes that actually produced the belief.
00:32:31.880
But, you know, for me, the example that was formative is what happened with post-hypnotic suggestions.
00:32:42.540
So you put somebody under hypnosis and you tell them, you know,
00:32:46.900
when I clap my hands, you will feel very warm and you'll open a window.
00:32:52.440
And you clap your hands and they get up and open a window.
00:33:03.560
And when you ask them why, they say they felt really warm and uncomfortable and they needed air, so they opened the window.
00:33:23.360
Do you have a favorite cognitive error or bias?
00:33:26.940
Which of your ugly children do you like the most?
00:33:32.980
Well, yeah, I think, I mean, it's not the simplest to explain.
00:33:40.620
But my favorite one is sort of extreme predictions.
00:33:43.960
When you have very weak evidence and, on the basis of that very weak evidence, you make extreme predictions with great confidence.
00:33:51.160
Technically, it's called non-regressive prediction.
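What "non-regressive" means can be shown in two lines. A sketch with assumed numbers (the r = 0.3 validity is an illustration, not a figure from the episode): a statistically sound prediction shrinks toward the mean in proportion to how well the evidence actually predicts the outcome, while intuition skips the shrinking.

```python
# Regressive vs. non-regressive prediction, with illustrative numbers.
mean, sd = 100.0, 15.0   # hypothetical outcome scale
evidence_z = 2.0         # the evidence looks two SDs above average
r = 0.30                 # assumed correlation between evidence and outcome

non_regressive = mean + evidence_z * sd       # 130.0: as if r were 1.0
regressive = mean + r * evidence_z * sd       # 109.0: shrink extremity by r

print(non_regressive, regressive)
# Weak evidence licenses only a mild departure from the average, yet it
# produces the same confident, extreme guess that strong evidence would.
```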
00:34:06.580
But when, you know, one very obvious situation is in job interviews.
00:34:12.040
So, you know, you interview someone and you have a very clear idea of how they will perform.
00:34:18.440
And even when you are told that your ideas are worthless because, in fact,
00:34:22.480
you cannot predict performance, or can predict it only very poorly, it doesn't affect your confidence.
00:34:27.080
Next time you interview the person, you have the same confidence.
00:34:33.800
I mean, that's something that I discovered very early in my career.
00:34:37.660
I was an officer in the Israeli army as a draftee.
00:34:43.320
And I was interviewing candidates for officer training.
00:34:47.460
And I discovered that I had that uncanny power to know who will be a good officer and who won't be.
00:34:55.520
And I really could tell, you know, interviewing people.
00:35:00.800
You get that sense of, you know, confident knowledge.
00:35:04.700
And then, you know, then the statistics showed that actually we couldn't predict anything.
00:35:20.320
Some people following your work must recommend that you either don't do interviews or heavily discount them, right?
00:35:33.860
And don't do interviews in particular because if you run an interview, you will trust it too much.
00:35:39.960
So there have been, you know, studies, I don't know about many, but there have been studies in which you have candidates and a lot of information about them.
00:35:54.420
And then if you add an interview, it makes your predictions worse, especially if the interviewer is the one who makes the final decision.
00:36:03.940
Because when you interview, this is so much more vivid than all the other information you have that you put way too much weight on it.
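A sketch of how that can happen, with assumed validities rather than numbers from the studies Kahneman mentions: suppose a candidate's file predicts performance at about r = 0.6, the interview impression predicts it at about r = 0.2, and the vividness of the interview inflates its weight in the final judgment.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000  # simulated candidates

# Hypothetical validities: the dossier tracks true performance fairly
# well, the interview impression only weakly.
performance = rng.normal(size=n)
dossier = 0.6 * performance + 0.8 * rng.normal(size=n)     # r ~ 0.6
interview = 0.2 * performance + 0.98 * rng.normal(size=n)  # r ~ 0.2

def validity(w):
    """Correlation of the final judgment with true performance when
    the interview receives weight w and the dossier weight 1 - w."""
    judgment = (1 - w) * dossier + w * interview
    return np.corrcoef(judgment, performance)[0, 1]

for w in (0.0, 0.5, 0.8):
    print(f"interview weight {w:.1f}: validity r = {validity(w):.2f}")
# ~0.60, ~0.53, ~0.33: the more the vivid interview crowds out the rest
# of the file, the worse the final prediction gets.
```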
00:36:12.740
Is that also a story about just the power of face-to-face interaction?
00:36:21.300
You know, anything that you experience is, you know, is very different from being told about it.
00:36:27.620
And, you know, as scientists, one of the remarkable things that I know is how much more I trust my results than anybody else's.
00:37:00.540
Well, I think regret is an interesting emotion.
00:37:07.920
And it's a special case of an emotion that has to do with counterfactual thinking.
00:37:14.340
That is, regret is not about something that happened.
00:37:17.140
It's about something that could have happened but didn't.
00:37:19.800
And I don't know about regret itself, but anticipated regret, the anticipation of regret, plays an important role in lots of decisions.
00:37:32.040
That is, there's a decision and you tell yourself, well, if I don't do this and, you know, and it happens, then how will I feel?
00:37:43.720
And it's well known in financial decisions and a lot of other decisions.
00:37:50.680
And it's connected to loss aversion as well, right?
00:37:56.320
And it's quite vivid that you're able to anticipate how you will feel if something happens.
00:38:08.300
Well, does the asymmetry with respect to how we view losses and gains make sense, ultimately?
00:38:17.440
I mean, I think at some point in your work you talk about an evolutionary rationale for it because suffering is worse than pleasure is good, essentially,
00:38:28.740
because there's a survival advantage for those who are making greater efforts to avoid suffering.
00:38:33.580
But it also just seems like there's, if you put in the balance of possibility the worst possible misery and the greatest possible pleasure,
00:38:43.780
I mean, if I told you we could have the night we're going to have tonight and it will be a normal night of conversation,
00:38:51.020
or there's a part of the evening where I can give you the worst possible misery for a half hour and then the greatest possible pleasure, which would you choose?
00:39:01.780
Yeah, let's just get a cheeseburger and a Diet Coke.
00:39:06.600
The prospect of suffering in this universe seems to overwhelm the prospect of happiness or well-being.
00:39:13.820
I know you put a lot of thought into the power of sequencing.
00:39:17.380
I can imagine that feeling the misery first and the pleasure second would be better than the reverse.
00:39:23.900
But it's not going to be enough to make it seem like a good choice, I would imagine.
00:39:28.120
How do you think of this asymmetry between pleasure and pain?
00:39:31.420
You know, the basic asymmetry is between threats and opportunities, and threats are more immediate.
00:39:39.200
And so in many situations, it's not true everywhere, there are situations where opportunities are very rare.
00:39:49.280
But threats are immediate, and they have to be dealt with immediately, so the priority of threats over opportunities must be built in by evolution.
00:40:00.000
But do you think we could extract an ethical norm from this asymmetry?
00:40:06.740
For instance, could it be true to say that it is more important to alleviate suffering than to provide pleasure if we had some way to calibrate the magnitude of each?
00:40:19.740
Well, in the first place, we did a study, Dick Thaler and Jack Knetsch and I did a study a long time ago, about intuitions about fairness.
00:40:28.100
And it's absolutely clear that that asymmetry rules intuitions about fairness.
00:40:34.480
That is, there is a very powerful rule of fairness that people identify with, not to cause losses.
00:40:44.040
That is, you have to have a very good reason to inflict a loss on someone.
00:40:48.460
The injunction to share your gains is much weaker.
00:40:54.820
So that asymmetry underlies what we call the rights that people have; quite frequently, the rights people have are negative rights, the right not to have losses inflicted on you.
00:41:06.240
So there are powerful moral intuitions that go in that direction.
00:41:11.520
And the second question that you asked, because that was a compound question about well-being, yeah, I mean, I think, you know, in recent decades, there's tremendous emphasis on happiness and the search for happiness and the responsibility of governments to make citizens happy and so on.
00:41:32.680
And one of my doubts about this line of thinking is that I think that preventing misery is a much better and more important objective than promoting happiness.
00:41:46.220
And so the happiness movement, I have my doubts about on those grounds.
00:41:53.840
Given what you've said, it's hard to ever be sure that you've found solid ground here.
00:41:59.820
So there's the intuition that you just cited that people have a very strong reaction to imposed losses that they don't have to unshared gains, right?
00:42:10.660
You do something that robs me of something I thought I had.
00:42:16.000
I'm going to feel much worse about that than just the knowledge that you didn't share some abundance that I never had in the first place.
00:42:21.740
But it seems that we could just be a conversation away from standing somewhere that makes that asymmetry look ridiculous, analogous to the Asian disease problem, right?
00:42:36.060
Like it's a framing effect that we may have an evolutionary story to tell about why we're here, but given some opportunity to be happy in this world, it could seem counterproductive.
00:42:48.240
I say this already being anchored to your intuition.
00:42:53.000
Yeah, I think that, you know, in philosophical debates about morality and well-being, there are really two ways of thinking about it.
00:43:06.180
And there is one way of thinking about final states and what everybody will have.
00:43:12.220
And there, there is a powerful intuition that you want people more or less to be equal, or at least not to be too different.
00:43:21.360
But there is another way of thinking about it, which is given the situation and the state of society, how much redistribution do you want to impose?
00:43:32.540
And there, there is an asymmetry, because you are taking from some people and giving to others.
00:43:40.500
So we have powerful moral intuitions of two kinds, and they're not internally consistent.
00:43:46.840
And loss aversion has a great deal to do with that.
00:43:49.920
So given that there are many things we want and don't want, and we want and don't want them strongly,
00:43:56.640
and we are all moving individually and collectively into an uncertain future where there are threats and opportunities,
00:44:05.040
and we're trying to find our way, how do you think about worrying?
00:44:11.200
If there was a way to just not worry, is that an optimal strategy?
00:44:15.700
I think the Dalai Lama most recently articulated this in a meme, but this no doubt predates him.
00:44:23.880
Either there's something you can do about it or not.
00:44:26.140
If there's something you can do about it, well, then do that thing.
00:44:28.600
If you can't do anything about it, well, then why worry?
00:44:31.340
Because you're just going to suffer twice, right?
00:44:33.760
How do you think about worry, given your work here?
00:44:37.380
Well, I don't think my work leads to any particular conclusions about this.
00:44:47.300
Some people are going to tweet that, and it's not going to work out well for you.
00:44:49.840
On the other hand, I would like to see people worry a fair amount about the future, and even about things they can't yet act on,
00:44:59.720
because you don't know right now whether or not you'll be able to do anything about it.
00:45:06.980
The only way to get enough activation energy into the system to actually motivate people to act is worry.
00:45:13.300
You know, one of the problems, for example, when you're thinking of climate change, one
00:45:18.300
of the problems is you can't make people worry about something that is so abstract and distant.
00:45:24.460
And, you know, if you made people worry enough, things would change.
00:45:31.160
Scientists are incapable of making the public worry sufficiently about that problem.
00:45:35.920
And to steal a technique that you just recommended, if you could make a personal story out of it,
00:45:43.040
that would sell the problem much more effectively.
00:45:46.260
Climate change is a very difficult thing to personalize.
00:45:48.680
It's very difficult to personalize, and it's not immediate.
00:45:58.040
The problem that we're least well-equipped to deal with, because it's remote and it's abstract.
00:46:10.840
I mean, a meteorite, you know, coming to Earth, that would mobilize people.
00:46:15.340
Climate change is a much more difficult problem to deal with, and worry is part of that story.
00:46:23.460
It's interesting that a meteorite would be different.
00:46:27.660
I mean, even if you put it far enough out there, so you have an Earth-crossing asteroid
00:46:31.900
arriving in 75 years, there would still be some counsel of uncertainty.
00:46:38.240
People would say, well, we can't be 100% sure that something isn't going to happen in the next 75 years anyway.
00:46:46.900
Other people will say, well, surely we're going to come up with some technology that would be
00:46:52.440
onerously costly for us to invent now, but 20 years from now could be trivially easy for us to
00:46:57.740
invent, so why steal anything from anyone's pocketbook now to deal with it?
00:47:03.380
You could run some of the same arguments, but there's something different, the problem is crystallized in a way climate change isn't.
00:47:07.400
The difference is there is a story about the asteroid.
00:47:11.100
You have a clear image of what happens if it hits, and the image is a lot clearer than climate change.
00:47:19.360
So, one generic issue here is the power of framing.
00:47:27.640
I mean, we are now increasingly becoming students of the power of framing, but we are not yet applying what we've learned. We
00:47:35.860
should just be able to come up with a list of the problems we have every reason to believe
00:47:42.000
are real and significant, and sort those problems by the variable of, this is the set of problems
00:47:50.540
that we know we are very unlikely to feel an emotional response to, right?
00:47:57.760
We are just, we are not wired to appreciate, to be motivated by what we rationally understand
00:48:03.280
in these areas, and then take the cognitive step of deliberately focusing on those problems.
00:48:12.100
If we did that, if everyone in this room did that, what we're then left with is a political
00:48:17.100
problem of selling this attitude to everyone else.
00:48:20.320
I mean, you know, you used a tricky word there, and the word is we.
00:48:29.940
So, you are talking about a group of people, possibly political leaders, who are making
00:48:37.620
a decision on behalf of the population that, in a sense, they treat like children who do not know what is good for them.
00:48:47.840
Surely you can't be talking about our current political leaders.
00:48:52.040
But actually, I find it difficult to see how democracies can effectively deal with a problem like this.
00:49:02.260
I mean, you know, if I had to guess, I would say China is more likely to come up with effective
00:49:08.980
solutions than the West, because they're authoritarian.
00:49:12.740
If you'd like to continue listening to this conversation, you'll need to subscribe at
00:49:25.740
Once you do, you'll get access to all full-length episodes of the Making Sense podcast, along
00:49:30.140
with other subscriber-only content, including bonus episodes and AMAs, and the conversations
00:49:37.080
The Making Sense podcast is ad-free and relies entirely on listener support, and you can support it by becoming a subscriber.