#119 — Hidden Motives
Episode Stats
Words per Minute
182.33
Summary
In this episode, I present the audio from a live event I did in Denver with Robin Hanson, co-author with Kevin Simler of The Elephant in the Brain: Hidden Motives in Everyday Life. We talk about selfishness, hypocrisy, norms and norm violations, cheating, self-deception, the evolutionary logic of conversation, social status, signaling and counter-signaling, common knowledge, and much more. Unfortunately, the audio is a little wonky, and there are a few moments where things cut out, but it's not that bad; once you start listening, you will acclimate to it. I really enjoyed this conversation with Robin, who is a professor of economics at George Mason University and a research associate with the Future of Humanity Institute, which you might know focuses on existential risk and other big topics of ethical importance. I hope you enjoy it. It was a good event!
Timestamps:
4:03 - What was the project of the book?
5:04 - Personal hypocrisy vs. institutional blindness
6:47 - Norms and meta-norms
8:45 - You are your own press secretary
9:12 - Split-brain experiments and confabulation
12:57 - The evolutionary logic of self-deception
14:11 - Cheating
14:58 - Speed limits and the right to cheat
16:11 - Is it bad for us to know this?
18:35 - Education as signaling
22:11 - Common knowledge
25:37 - The hidden motives of conversation
Transcript
00:00:00.000
Today, I am presenting the audio from the event I did in Denver with Robin Hanson.
00:00:22.740
Robin's a professor of economics at George Mason University, and he's the author with
00:00:26.860
Kevin Simler of a very interesting book titled The Elephant in the Brain, Hidden Motives in
00:00:32.340
Everyday Life. I give more of his bio from the stage, but I really enjoyed this conversation
00:00:38.440
with Robin. We spoke about all the related issues here of selfishness and hypocrisy and norms and
00:00:45.300
norm violations, cheating, deception, self-deception, the evolutionary logic of conversation,
00:00:53.340
social status, signaling and counter-signaling, common knowledge. There's many interesting
00:00:58.180
topics here. I enjoyed the event. Unfortunately, the audio is a little wonky. We are completely
00:01:03.880
at the mercy of whatever recording we get from these venues, and there are a few moments where
00:01:11.080
things cut out. It's a little echoey. It's not that bad. Once you start listening, you will acclimate
00:01:15.920
to it, but it was a good conversation. And so now I bring you Robin Hanson.
00:01:20.920
Ladies and gentlemen, please welcome Sam Harris.
00:01:29.640
Well, thank you all for coming out. Really, it's amazing to see you all or see some fraction
00:01:54.280
of you. I'm going to jump right into this. We have a very interesting conversation ahead
00:01:59.300
of us because I have a great guest. He is a professor of economics at George Mason University.
00:02:05.260
He is also a research associate with the Future of Humanity Institute, which you might know focuses
00:02:11.700
on existential risk and other big topics of ethical importance. He has a PhD in social science
00:02:18.860
from Caltech, a master's in physics and the philosophy of science. He did nine years of
00:02:25.120
research with Lockheed and NASA, studying mostly artificial intelligence and also Bayesian statistics.
00:02:33.280
And he's recognized for his contributions in economics and especially in prediction markets,
00:02:39.420
but he's made contributions in many other fields. And he has written a fascinating book,
00:02:44.600
which unfortunately is not for sale here today, but you should all buy this book because it is
00:02:49.680
amazingly accessible and he touches on so many interesting topics. That book is
00:02:54.960
The Elephant in the Brain, Hidden Motives in Everyday Life. Please welcome Robin Hanson.
00:03:12.380
So your reputation for being interesting precedes you.
00:03:19.600
So, there are many things we can talk about, as you know, but I want to focus
00:03:25.820
on your book and I want to move in a kind of a linear way through your book because your book is
00:03:29.900
just so rich, and I don't think we will do the book justice, but we will try. The book is really
00:03:36.300
a kind of a sustained meditation on selfishness and hypocrisy. We have these ideas about why we do
00:03:45.900
things and then we have all of the evidence accruing for the real reasons why we do these things and the
00:03:51.940
mismatch there is rather harrowing to consider. And your book is just an unvarnished look at that.
00:03:58.060
So I want to tour through this, but perhaps before we get to some of these specific topics,
00:04:03.300
how do you view the project of your book? What were you up to in writing? I should say that you have a
00:04:07.700
coauthor on the book, Kevin Simler, who's not here tonight, but what were you doing writing this book?
00:04:13.940
This is what I wish I had known when I started my social science career
00:04:17.880
many years ago. I started out in physics and then went into computer science. And in those areas,
00:04:25.320
I noticed that people were really eager for innovation. And then I seemed to see that in
00:04:29.720
social science, there were even bigger innovations possible. And so I moved there and then I was
00:04:34.200
puzzled to find that people were not very interested in innovations compared to the other areas.
00:04:39.800
And I kept also finding other puzzles in social science, ways in which our usual theories don't
00:04:44.600
make sense of what's going on. And my book, our book is an attempt to explain a lot of the major
00:04:51.640
puzzles in social science and a lack of interest in innovation. And one of the conclusions is that
00:04:57.640
we're just doing policy analysis wrong. But first we have to get into the basics here.
00:05:04.040
Well, there are really two levels to it. There's how you as a person might think
00:05:10.360
about these things, at the level of personal hypocrisy and the mismatch between what your
00:05:14.840
motives actually are and what you may think they are. And then there's the fact that institutions
00:05:20.760
have this kind of blindness, where the institutions think they're about something
00:05:25.560
and, upon analysis, they seem not to be, an institution like medicine or a university.
00:05:31.000
So what's the basic problem? Why is there this mismatch between what we think we're doing
00:05:37.560
and what we're actually doing? So if you've read many psychology books, you're familiar with the
00:05:42.280
idea that people are not always honest about what they're doing and why. And you might find that
00:05:48.120
trite and kind of boring by now, because of course we all know that, but so far people haven't taken
00:05:53.800
that insight to our major social institutions. And that's what we think is new and original about our
00:05:59.000
book. We say that not only are you not always honest about whether you like to go to the opera
00:06:04.120
with your spouse or whether you enjoy playing with and cleaning up after your kids, you're also not
00:06:09.400
honest with yourself about why you go to school and why you go to the doctor and why you vote,
00:06:14.200
and why you do art. That is, these deviations between what we think we're doing
00:06:22.120
and our actual motives infect many major social institutions, and they therefore
00:06:29.880
should make us reconsider the basics of what these institutions are for and therefore why we
00:06:35.240
support them and whether we should subsidize them and how we should structure them and everything.
00:06:39.320
Right. So unlike many conversations I have here, I have a list of nouns that are kind
00:06:47.480
of a ladder through which we could walk. Let's start with norms and what you call
00:06:54.760
meta-norms. So what is a norm, why do we have them, and what does it mean
00:07:01.080
to protect them or to fail to protect them? So animals like chimpanzees and most other social
00:07:07.400
animals, they have a complicated social world and they pay attention to each other and they reward
00:07:11.800
others for helping them and punish others if they hurt them. So they, they have many regular behaviors,
00:07:16.840
but humans uniquely have norms in the sense that we have a rule of what you're supposed to do or not
00:07:21.800
supposed to do. And if somebody else sees you breaking the norm, it's a rule that they're
00:07:26.200
supposed to do something about it. They're supposed to tell other people that you've broken a norm and
00:07:30.600
then try to find a way to make you stop breaking the norms. And so humans have these norms about what
00:07:36.440
we're supposed to do or not supposed to do. And many of these norms are quite common around the world.
00:07:41.000
We're supposed to help each other. We're supposed to not brag, not be violent to each other.
00:07:47.240
We're supposed to make group decisions together by consensus. And we're not even supposed to have
00:07:52.120
subgroup coalitions, people who are aligned against the others. These are just common human
00:07:55.800
norms. And many of these norms are expressed in terms of motives. So there's a rule that we're not
00:08:01.720
supposed to hit each other on purpose. It's okay to hit accidentally, but not on purpose. And so,
00:08:08.680
because our ancestors had these norms and they were so important, their social world was their
00:08:14.040
main world. We developed these big brains that we have mainly, apparently, for social reasons,
00:08:18.920
to deal with our social world. And we have the biggest brains of all,
00:08:23.320
so our social world must have been the most complicated. Norms were a big part of
00:08:27.960
this world. And so we have this part of our brain that's all the time thinking about what we're doing
00:08:33.960
and trying to explain why we're following good motives. And that's, in a sense, the conscious part
00:08:40.280
of your mind. You are the conscious part of your mind and you aren't necessarily the one in charge
00:08:45.640
of your mind. There's this idea that instead of, say, being the president or the king, you're the
00:08:50.200
press secretary. You don't actually know why you do things, but you're supposed to make up a good
00:08:55.800
excuse. And you do that. You're constantly looking at what you're doing and asking yourself, what would
00:09:02.600
be a good explanation for this thing I'm doing? And you're good at that. You're good at coming up with
00:09:07.800
excuses for what you're doing. But you don't actually know what you're actually doing. You don't realize
00:09:12.360
that you don't know. Yeah. And this is a very robust but not really celebrated neurological
00:09:21.320
finding. And it becomes horribly elaborated in people who have had what's called a split-brain procedure,
00:09:27.640
where as a remedy for grand mal seizures, you can cut the corpus callosum, which connects the two
00:09:34.840
hemispheres of the brain. And that prevents the seizure activity from moving from one hemisphere
00:09:38.920
to the other. And what people have found, going back now many decades, is that the left hemisphere,
00:09:46.760
in most people the linguistically agile one, confabulates reasons for doing things when those reasons are
00:09:54.040
brought out in an experimental paradigm. Those reasons are just manifestly not so. So you can present
00:09:59.320
the right hemisphere of the brain with a demand, like, you know, get up and walk toward the door.
00:10:05.320
And then you can ask the linguistically competent left hemisphere, why are you walking toward the door?
00:10:11.720
And it will confabulate a reason, like, I want to get a Coke. This is a result from a classic
00:10:16.600
experiment, which I think you cite in your book. These experiments were done by Roger Sperry, Michael
00:10:22.280
Gazzaniga, and Eran Zaidel. And the left hemisphere just continually completes the picture linguistically
00:10:31.240
without any apparent awareness that those claims are out of register. They're based on nothing.
00:10:37.160
This is what the word confabulate means: to just make up a reason out of
00:10:43.080
whole cloth. And it seems that, though most of us have not had our brains split, we have an
00:10:50.200
ability to give a post-hoc rationalization for why we did things, which certainly in an experimental
00:10:57.880
paradigm can be shown to really have no relationship to the proximate cause of our actions. And it is
00:11:03.320
embarrassing if caught on video. So we're living with this fact that we are our own press secretary,
00:11:13.000
giving, at minimum, the most benign, but often the most grandiose and apparently noble
00:11:20.440
rationale for why we're doing what we're doing. And yet evolution and other modes of logic suggest that
00:11:26.520
that isn't the reason for why we do much of what we do. Well, since you are in the habit of just making
00:11:31.640
up excuses, that means you could be wrong a lot, but it doesn't mean you are wrong a lot. Maybe you
00:11:36.840
are mostly right, even though you wouldn't know if you were wrong. So we have to go further than
00:11:41.640
just the possibility that you're being wrong to decide you're wrong. So we have to wonder, well,
00:11:47.560
how sure can you be, about most of your activity, whether the reason you have is the real one? Now,
00:11:53.480
one thing to be clear about is almost any area of life, like going to school or going to the doctor,
00:11:58.120
is big and complicated, the world's complicated, so a great many motives are relevant. And if we
00:12:03.080
average over people, surely thousands of different motives are relevant for almost everything we're
00:12:07.480
doing. And so what we have to be asking here is what is the main motive? What's the most dominant
00:12:12.360
motive? Not what's the only motive? So, I mean, just as an example, if you say the dog ate my homework
00:12:18.920
as an excuse, that only works because sometimes dogs eat homework. If dogs didn't exist,
00:12:24.040
it wouldn't make much sense. Dragon ate my homework doesn't work. So these things that we come up
00:12:29.880
with as excuses for our behavior, they only work as excuses because sometimes they're true. They
00:12:34.200
have an element of truth. So we're not going to say that your usual motive isn't at all applicable.
00:12:39.640
The claim is just, it's not as true as you think. And you're not saying that no one has noble motives.
00:12:45.240
Exactly. So there is real altruism, there's real nobility. There's real all of these things,
00:12:50.200
exactly. Sometimes people get up to get a Coke. Yes. But in addition, there are evolutionary
00:12:57.880
reasons why we would be self-deceived about our motives. And this is based in large part
00:13:04.040
on the work of Robert Trivers, who's done a lot of work on self-deception and the evolutionary
00:13:08.280
logic there: we are better at deceiving others. We're better at getting away with norm violations
00:13:15.960
if we, in fact, are not aware that our press secretary is not telling the truth,
00:13:21.640
which is to say that if we, in fact, are self-deceived, we are better deceivers.
00:13:25.400
So if we want to lie, it's better not to know we're lying because then it seems sincere.
00:13:35.240
The easy way to seem sincere is to be sincere, even if you're wrong.
00:13:38.680
There's a famous Seinfeld episode, I believe: you're not lying if you believe it.
00:13:49.000
I should say, and this is something you and I should probably talk about, that
00:13:54.040
the jury's out as to whether or not knowing any of what we're about to talk about is good for you.
00:14:01.240
There's a psychological experiment being performed on you and you have not consented.
00:14:04.680
Memory wipe pills will be available after the session.
00:14:11.640
So let's take cheating. Cheating is a classic norm violation.
00:14:17.720
There's reason to think that our brains have evolved in large measure both to cheat and to detect
00:14:23.240
cheating in others. How do you think about cheating in your line of work?
00:14:28.120
Well, cheating is again violating norms. And so we want to live in a community where the
00:14:34.600
norms are enforced and we also want ourselves to be exceptions to the rules.
00:14:40.120
So, for example, you know, most criminals actually think crime is a bad thing.
00:14:45.320
They just think that their particular acts don't quite count as crimes. So we all basically would
00:14:52.120
like to make exceptions for ourselves. So the question is how and one of the ways we can do it
00:14:55.880
is to not be very honest with ourselves about what we're doing.
00:14:58.280
This may not be relevant, but it just put me in mind of it. I've never understood why
00:15:06.360
no one remarks on what simply reducing our speed limits would do
00:15:11.480
in terms of saving lives. We could save tens of thousands of lives a year.
00:15:16.680
But if we made cars that could not exceed the speed limit,
00:15:20.440
that would guarantee that no one would exceed it. But no one would want that. No one
00:15:25.640
who thinks that we should have speed limits would want a car that would
00:15:31.640
slavishly follow the speed limit. How do you make sense of that? Is that just synonymous with wanting
00:15:38.200
the right to cheat on the speed limit? I mean, are we all imagining some emergency where you have to
00:15:43.080
speed past the speed limit? So the whole theme here is that in your head, you think you want
00:15:49.160
things. So in your head, you think you want to enforce speed limits. With your actual actions,
00:15:54.360
you don't. You want to speed. And there's a contradiction there. And you don't want to look
00:15:59.240
at that contradiction. So you look away. And that's the elephant in your brain. As you know,
00:16:03.720
the elephant in the room is the thing we all know is there that you don't want to look at. And the
00:16:06.920
elephant in your brain is this contradiction between what you say you want and what you actually
00:16:11.480
do. So let's actually raise this issue now: whether this line of thinking or this
00:16:18.600
analysis has a downside. So if in fact, it's true that we are better fit to our social environment
00:16:27.240
with a certain amount of ignorance with respect to our own motives, so that it's optimal,
00:16:33.240
there's an attractor of optimal fitness which entails some measure of self-deception.
00:16:39.880
And you, in the process of writing this book, and all of us, in the process
00:16:43.880
of talking about it, are to some degree undeceiving ourselves about these things. Why isn't that bad
00:16:51.800
for us? And is it worth worrying whether it's bad for us? So apparently evolution constructed
00:16:58.200
you to be ignorant about why you do things. It thought, yes, it might be useful if you know why
00:17:02.680
you do things, but that's to be traded off against all these other benefits of not knowing why you do
00:17:06.760
things. So you were constructed not to know. If the situation you're in in the modern world is
00:17:12.120
much like the situation evolution anticipated for you, then not knowing is probably in your personal
00:17:17.800
interest. You're probably better off going on with the usual sort of ignorance that the rest of
00:17:22.280
us have had, and acting that way, because you'll get along that way, and that's what evolution
00:17:26.760
anticipated for you. Now, evolution couldn't think of everything, so you could be in an environment
00:17:31.320
today which is not something evolution might have anticipated, or you might be in an unusual
00:17:35.480
situation. For example, you might be a salesperson or a manager, the sort of person for whom it's really
00:17:40.760
important to understand people's motives and to be able to read them and understand what's going on.
00:17:44.920
You also might be a nerd, like myself. Whereas most people can just intuitively read the social
00:17:51.000
world around them and do the right thing. Some of us can't, and some of us need more conscious
00:17:54.920
analysis in the world to figure out what's going on, and so you may appreciate this more cynical
00:17:59.400
conscious analysis, even if it has some disadvantages. But most importantly... As a self-help seminar,
00:18:06.120
I think that's not going to sell tickets. Not that you'd be nerds or anything, but some of us are.
00:18:14.520
But I also just think if you're going to be a specialist in policy analysis, if you're going to stand up and
00:18:19.240
say, I have studied education or medicine and I have thought about what changes would be better,
00:18:24.680
it's more your responsibility to know what's actually going on in those worlds, even if it costs
00:18:29.320
you some degree of social awkwardness to know that. I think at least social analysts and policy analysts
00:18:35.720
should understand these things. Yeah, so let's take an institutional example. Take education. What is it
00:18:41.640
that we're deceived about with respect to education? So again, just to be clear, just because you might be
00:18:47.240
deceived about many things doesn't mean you are. So I need to walk you through arguments to convince you
00:18:51.560
that in fact, in each area, your motive isn't what you think it is. Now, my colleague, a beloved
00:18:57.480
colleague, Bryan Caplan, has a book just out called The Case Against Education, and he goes through a whole
00:19:02.840
book-length treatment of this. Our chapter is just a summary of that, but a summary is sufficient.
00:19:08.440
The summary is: when you ask people why they go to school, if they are answering in a public
00:19:13.640
speech or in a letter of application, say, they will tell you the usual story is to learn the material
00:19:18.520
so that you can become a more useful person later. That's our standard story about school.
00:19:23.800
And there are a number of puzzles in education that don't make sense under that theory, and I'm going
00:19:28.120
to offer another theory that makes more sense of it. Some of these puzzles are you don't actually
00:19:33.000
learn very much at school. Most of the stuff you learn isn't very useful.
00:19:37.080
Yet people are paid more even when they don't learn useful things. So, bartenders who go to college make
00:19:44.840
more than bartenders who only go to high school. You do make more for more years of school in terms of
00:19:50.680
your wages, but the last year of high school and the last year of college is worth as much as the
00:19:54.680
other three years combined. But you don't learn more in the last year of high school or college.
00:20:00.600
I went to Stanford for a while for free without registering or applying simply by walking in and
00:20:05.560
sitting in on classes. One of the professors gave me a letter of recommendation based on my performance.
00:20:11.640
Nobody tries to stop you from doing that. Why? You can get the very best education for free if you
00:20:17.320
don't want a credential. That calls into question the idea that you're there for the learning
00:20:21.800
as opposed to something else. So, the alternative theory is that you're there to show off and to
00:20:29.080
gain a credential that shows that you are worthy of showing off. That is, you are smart, conscientious,
00:20:33.960
conformist. You're willing to do the sorts of things that they ask you to do. You take ambiguous
00:20:39.400
instructions with long deadlines and consistently, over several years, complete
00:20:45.800
mildly boring assignments. Great preparation for future workplaces.
00:20:52.920
And by the end, you have shown that and that's something employers value. And that's a standard
00:20:57.880
plausible explanation for education. Most of you will find that plausible unless you are an education
00:21:03.080
policy expert, in which case you will be offended and search for another explanation. So, in most of
00:21:10.280
these areas, most of you will nod and say, yeah, that makes sense, unless this is your precious area.
00:21:14.840
For all of us, there is something precious in our lives, something sacred. And for that,
00:21:18.680
we will be more reluctant to accept one of these more cynical explanations of what's going on there.
00:21:23.320
But as long as education isn't sacred for you, you'll probably nod and say, yeah, you don't learn
00:21:27.560
much in school. But so now, what is signal and what is noise there? Are employers wrong to value those
00:21:33.640
things? What should people do differently as a result of understanding this about the status quo?
00:21:40.200
Individually, you shouldn't do anything different. If you want to convince an
00:21:42.920
employer in our world that you have what it takes, you do need to go to school, jump through the hoops,
00:21:48.040
and perform well. And in fact, you might do that better if you aren't aware that you're just doing
00:21:52.280
arbitrary things to show off to an employer. That may be demotivating for you. You might be better off
00:21:56.680
pretending to yourself and believing that you're learning something that will be useful.
00:22:01.960
But the point is, you are showing that you have a characteristic, not creating a characteristic.
00:22:06.120
The school isn't changing you. It's distinguishing you. It's like certifying you as different.
00:22:11.320
So now, what's the role of common knowledge in some of these situations? You should define what
00:22:18.040
common knowledge is. It's not commonly known what common knowledge is.
00:22:21.320
So think about cheating. He asked about cheating. And think of the rule that you're not supposed
00:22:26.520
to drink alcohol in public. This is a rule. And there are people who are supposed to enforce
00:22:31.320
this rule, the police. And you might think this, of course, is relatively easy to enforce.
00:22:38.520
But think of the example of people putting an alcoholic beverage inside a paper bag and drinking
00:22:43.800
it outside. This happens. Now, ask yourself, how hard could it be for the police to know that you're
00:22:50.760
drinking alcohol if you're drinking from some bottle in a paper bag outdoors? Of course they know.
00:22:56.440
But you're giving them an excuse to look the other way. That is, that's not common knowledge. We
00:23:02.760
don't know that we all know that we all know that it's alcohol. Somebody could be fooled and that's
00:23:08.600
enough to pretend that you don't know. So this is why it's actually much easier to cheat in many ways
00:23:15.080
than you might have thought. We have all these rules and we're supposed to enforce them, but we're
00:23:19.000
not very eager to enforce them. We'd rather go about our business and ignore the rule violations. And so
00:23:24.600
a rule violation needs to be kind of blatant. And other people need to see us see the rule violation,
00:23:29.880
and then we kind of feel forced to do something about it. But if it's not blatant, it's not something
00:23:35.720
we all can see and know that we know, then you might prefer to pretend you didn't see. And many of you
00:23:41.320
probably have seen things that are not supposed to happen as you walk by the street. And you just
00:23:45.480
keep walking, hoping that nobody saw that you saw it, because then you could pretend you didn't see it
00:23:50.360
and go about your business, because it would be a pain and trouble to stop and try to enforce the rules.
00:23:54.440
Yeah. Well, also, there's so much about our social lives where we know there's a subtext to what's
00:24:03.240
going on. But if that subtext ever became explicit, it would destroy the basis of trust or a good
00:24:10.280
feeling. Or like if you said to someone, I'm only inviting you over to dinner tonight because you
00:24:16.200
invited me last time and I needed to reciprocate. Right, exactly. You know, that's why we're having
00:24:21.400
this dinner. On some level, we all know that's going on, but to make it explicit is sort
00:24:29.240
of antithetical to being friends with people. Right. So there's often many levels of what's going on.
00:24:38.200
And in fact, we expect to see that in movies and stories. So if somebody, as an actor, was given a
00:24:46.760
script and the script said, you're at a romantic dinner with somebody else, and the two of you are
00:24:51.880
there talking to each other, and what you're saying to each other is, I love you, I love you too,
00:24:55.960
this is great, we're having a wonderful relationship, this is a wonderful restaurant,
00:24:59.320
isn't this a great night? The actor will tell you, I can't act that. Because there's just one level
00:25:05.560
there, and that doesn't seem plausible at all. We expect in a scene like that there to be multiple
00:25:10.600
levels. That is, there's the surface level of I love you, isn't this great, and something else
00:25:14.920
must be going on. And the actor will actually look for another level so they can act the scene.
00:25:19.800
I'm afraid you'll leave me, so I'm trying to make sure you don't. Or I'm thinking of leaving you,
00:25:23.560
and so I'm trying to let you off nice. Something to make there be two levels of motives, because that's
00:25:28.360
what we expect to see out of actors and scenes. So, at some level, we kind of know
00:25:33.720
that people are quite often pretending one motive and really acting on another.
00:25:37.640
There's one chapter in your book on conversation, which I found fascinating,
00:25:42.920
because conversation is fairly mysterious in terms of the mismatch between what we think is going on
00:25:49.160
and what is actually going on, and why it would be valued in an evolutionary sense. So let's talk about
00:25:53.880
what most people think is going on during a conversation, and what seems to actually be
00:25:58.200
going on. So we're going meta here, because of course this is a conversation here, and we will
00:26:02.600
try to pretend that this isn't true about our conversation, because that's the social exception.
00:26:09.080
The jig is up. Exactly. So the usual story, if you ask why are you talking to your friend,
00:26:15.800
why did you spend an hour talking, why didn't you do the dishes or something useful,
00:26:18.920
you might say, well, we're exchanging information. We each have information, the other person doesn't,
00:26:26.920
and by talking and exchanging information, we can all know more. And this is the standard rationale
00:26:31.800
for most of our conversations. What I'm about to tell you applies not just to personal
00:26:35.880
conversation, but also applies to our news media conversations, to academic conversations and
00:26:40.600
journals. In all of them, the standard rationale is information. That's why you read the newspaper,
00:26:45.160
of course right, to get more information. Now, there are many features of our conversations
00:26:49.880
that don't fit very well with this explanation. That's, again, my main argument here is to show
00:26:54.920
you the detailed puzzles that don't fit with the explanation, then offer you another explanation
00:26:59.400
that fits better. So some of the puzzles here are, if it was about exchanging information,
00:27:05.240
we would keep track of debts. I might say, well, I've told you three useful things so far,
00:27:09.720
you haven't told me any of the useful things. It's your turn. We would be more eager to listen
00:27:15.160
than to talk. When it was our turn to talk, we would sigh: okay, fine, I'll tell you something.
00:27:19.960
We would be searching for the most valuable things to tell each other, the things that mattered most
00:27:23.480
to each other, and we would talk about important things instead of the trivialities that we usually
00:27:27.240
fill our conversations with. And it would be fine to jump from topic to topic as long as we were
00:27:32.040
saying something valuable and important, because the point is to communicate information.
00:27:36.280
But as you know, the usual norm of conversation is to slowly drift from topic to topic, none of
00:27:40.760
which need to be very important, but each time we should say something relevant to that topic.
00:27:45.960
Now, an alternative explanation to sharing information, in this theory, is that we are
00:27:49.880
showing off our backpack of tools and resources that we can bring to bear on any topic
00:27:56.600
you dare to offer. So it's important that the conversation meander in a way none of us can control,
00:28:04.520
so that we are each challenged to come up with something relevant to whatever that is.
00:28:10.520
And by impressing you with knowing something, having a friend or a resource, having a tip,
00:28:15.160
having some experience that's relevant to whatever you bring up, I show you that if you and I stay
00:28:19.800
allies and associates in the future, whatever problems you have, I'll have something relevant.
00:28:24.520
I'm ready for you with resources that would be useful to you, because look what I can do no matter
00:28:29.240
what conversation topic comes up. Yeah, well, the mismatch between desire to listen and desire to
00:28:36.200
talk is pretty, I mean, I think that's the one that people will find very salient, because if it was
00:28:41.400
really about just getting information, we would be massively biased toward listening. We would be
00:28:47.880
stingy with, I mean, we would be pricing out all of our disclosures. We'd have much bigger ears and
00:28:53.560
smaller mouths. Yeah. So then how do you think about gossip and reputation management and what's
00:29:02.200
happening in that space? We do, in fact, exchange information. Yeah. So again, it works as an excuse
00:29:08.440
because it's partly true. We do exchange information and it is somewhat valuable. It's just not the main
00:29:13.400
thing we're doing. But often, well, the information we're exchanging is meta to the actual apparent
00:29:19.560
topic. As you may know, indirectly through what people say, they tell you other things like bragging
00:29:27.000
about themselves indirectly by telling you about their great vacation in some expensive rare place.
00:29:33.720
And they talk about each other, often in the guise of, you know, saying what's been happening. But we are
00:29:40.120
very interested in knowing about each other and evaluating each other. And so part of what's going
00:29:45.800
on when we're impressing people is we're not only impressing the people who immediately see us, we're
00:29:49.560
impressing the other people who will hear about it indirectly. And so it's important that we impress
00:29:55.000
other people in ways that can transfer through that gossip to the other people who will hear about it.
00:30:00.280
And we are trying to avoid negative gossip or negative reputation of things that would make us look
00:30:05.240
bad. And this is, you know, a basic explanation for why a lot of decisions in the world are really
00:30:10.200
quite shallow. So for example, as an employer, you might look at an employee and say, this potential
00:30:17.320
employee looks really good. Yes, they don't have a college degree, but they don't need a college
00:30:21.480
degree for this. And I can tell they could do the job. But then you might think to yourself,
00:30:24.760
yes, but other people will hear that I hired this person. And they will notice that this person
00:30:28.920
doesn't have a college degree, and they will gossip about it. And then I might look bad for having
00:30:33.560
hired someone without a college degree, and maybe I just don't want to take that chance. So even if I
00:30:37.480
know that this person could do the job, I still, because I'm trying to impress this wider audience
00:30:43.080
who will gossip about it, I am pushed to make shallower choices based on less than I know.
00:30:49.800
Is there anything that you do differently in this area based on having thought about this? Do you view
00:30:55.160
gossip as a negative character trait that should be discouraged in yourself, or do you just see it as
00:31:03.560
inevitable or socially useful as a way of correcting misaligned reputations?
00:31:10.120
I understand and appreciate gossip as playing an important human role. As a natural nerd,
00:31:15.880
I'm not as inclined or interested in it personally, but that's my failing, not the world's.
00:31:23.720
So is social status the main metric to which all of this is pegged? Is that what we're
00:31:32.760
concerned about as subtext virtually all the time? It's one of the things, but it's actually less than
00:31:38.840
people might think. So if you're forced to admit you're showing off, often the thing you want to
00:31:47.960
admit to showing off is how great you are. That is how smart or conscientious or careful, how knowledgeable,
00:31:57.160
but plausibly at least half of what you're doing in showing off is showing off loyalty,
00:32:02.920
not ability. And so perhaps we push medicine in order to show that we care about people. We
00:32:09.640
participate in politics to show that we're loyal to our side. We do a lot of things to show loyalty,
00:32:15.000
and that is not something we're as eager to admit. Because of course, by trying to be loyal, we are
00:32:21.320
showing some degree of submission to those we are trying to...
00:32:24.360
Yeah, so that is a somewhat craven motive to sign on to. I'm being loyal, right?
00:32:33.320
But why is that? In fact, loyalty is a virtue that we acknowledge.
00:32:37.320
So humans actually have two different kinds of status, and it's suspicious and noticeable that we
00:32:42.200
don't make the distinction very often and we merge them together. There's dominance and prestige.
00:32:46.920
Dominance is more having power over someone, and prestige is earning respect. And the difference
00:32:53.800
of these actually show up in where your eyes go and how you look. When somebody has dominance
00:32:58.760
over you, you are not supposed to look them in the eye. Looking them in the eye shows defiance.
00:33:02.680
But if somebody has prestige, you are supposed to look at them. Like us, presumably, up here:
00:33:07.960
you are looking at us, we are claiming we have prestige, and you're not supposed to look away from us.
00:33:12.760
I wish you wouldn't put it that way. Yes, well... How embarrassing.
00:33:17.720
And so people want to get prestige and they don't want to admit to accepting dominance or submitting
00:33:25.320
to dominance, but of course we do. And so... But do people admit to wanting prestige?
00:33:30.920
More so. They might admit to accepting prestige, although not to seeking it, of course. Now,
00:33:37.720
in ancient history, most societies had kings and their neighbors had tyrants.
00:33:45.320
The tyrants, the bad guys over there, had dominance, and those people were
00:33:49.400
submitting to dominance and what a terrible thing they had to suffer. But we have a king who has
00:33:53.160
prestige and it's okay for us to look up to and obey our king because he's worthy of the status.
00:33:58.040
And so, this is often how people come to terms with their bosses. So, from a distance,
00:34:05.640
people say how terrible it is that we all obey our bosses at work. But each of us at work often
00:34:10.920
makes peace with that by saying, well, my boss is okay. He's earned that right to be in that role. And
00:34:16.360
I'm okay with doing what he says. Right. So now, I don't want to spend a lot of time on politics,
00:34:22.440
but obviously, everything you've written about is relevant to politics. And as I was reading the
00:34:28.840
book, it seems somewhat mysterious to me that in the current moment, someone like Trump seems to
00:34:35.320
violate more or less every rule you mention in your book. I mean, the things we've evolved not to do,
00:34:41.560
or not to do brazenly, like brag, right? Or lie without any hope of being believed.
00:34:46.760
Or advertise our most crass motives in place of more noble ones that could have been plausible,
00:34:59.080
right? He seems to get away with all of this. So, how do you explain the success of what's
00:35:06.600
essentially the anti-evolutionary algorithm? Sure. So, let's start with something called
00:35:14.680
countersignaling. So, ordinarily, if you have an acquaintance and you are trying to show that you
00:35:22.360
like an acquaintance, you will do things like, you know, smiling at them, being polite, flattering them,
00:35:28.520
opening the door for them, offering them some food. Those are ways we ordinarily show someone
00:35:33.640
that we are trying to be friendly. When you have really close friends, however, often you go out of
00:35:40.680
your way to do the opposite. You insult them. You trip them. You don't show up for some meeting.
00:35:47.560
Why do you do the opposite for a close friend? So, that's in part to show that you are more than an
00:35:53.080
acquaintance. Once it's clear that you are at least an acquaintance, people might wonder,
00:35:58.360
how close are we? And doing the opposite can show that you are more than an acquaintance. A paper that
00:36:06.200
discussed this was called "Too Cool for School." As you know, many students try to show how studious,
00:36:11.480
how good they are at school by studying hard and doing well. And then some students try to show
00:36:15.880
that they can ace everything without trying. And that's, again, countersignaling. So he's managed
00:36:22.440
to convince half the country that he's their best friend by revealing all of these. But remember,
00:36:27.800
politics is about loyalty signaling. And at one level, we might all want politicians who are high
00:36:35.000
status. We might all want politicians who are articulate and tall and went to a good school
00:36:39.560
and smart and say all the right polite things and be handsome, etc. And so, in general, we would all
00:36:45.560
want the same thing there. But if you want to show that your side is different and you are being
00:36:51.480
especially loyal to your side, you may have to go against the norm. So, as you may know, around
00:36:57.880
the election of Trump, there was a subset of our society who felt neglected, who felt that their
00:37:04.440
voice was not being heard and that the political establishment was not catering to them. And so,
00:37:10.920
Trump stood up and said, I will cater to you. And he went out of his way to show loyalty to that group
00:37:18.840
by counter-signaling in many ways, by doing the opposite of what an ordinary politician might do
00:37:24.520
to appeal to everyone, to show I really am appealing to you and you in particular. And I'm going out of
00:37:30.280
my way to raise the cost of appealing to other people to appeal more to you to show that I really
00:37:35.560
am loyal to you. And he did convince that group that he was unusually loyal to them and they voted for
00:37:41.720
him on that basis. And he has successfully counter-signaled his way into the presidency. The rest of the
00:37:47.720
world and other people are saying, but this is not the usual leader. And of course, the people who
00:37:51.400
voted for him said, yes, that's exactly how I knew he was trustworthy on my side, is that he
00:37:57.000
counter-signaled the usual signals of overall political competence. But we often do that to signal
00:38:07.080
loyalty. We often go out of our way to pay costs to signal loyalty. So, one of our chapters is on
00:38:11.240
religion, a topic that I know my guest host up here has written a lot about. And one of the standard
00:38:17.080
stories about religion is, you may agree to unusual rituals and to believe strange things
00:38:24.600
in order to convince people that the people you share those rituals and strange beliefs with, that
00:38:29.320
you are tied to them. And that it will be expensive for you to leave them and that they
00:38:33.560
therefore rely on you. Actually, going back to Trump just for a second, viewing a lot of this
00:38:39.960
through the lens of loyalty explains a few other things. Because when you look at how
00:38:45.400
people in his inner circle, or people who have to function like his press secretary,
00:38:51.400
try, with as brave a face as possible, to put some positive construal on his lying or his
00:38:59.800
mistakes or his misrepresentations of fact. That does function as a kind of loyalty test.
00:39:06.680
I mean, when people with real reputations have to get out there and,
00:39:11.480
you know, put both feet in their mouths, so as to pretend that the president didn't do likewise,
00:39:17.640
it is a kind of, I mean, it looks like a bizarre hazing ritual from here, but
00:39:23.080
it does signal loyalty. But again, those people across the border, they have tyrants and we have kings.
00:39:29.800
It's easy to criticize the other side for being excessively loyal and submissive, but
00:39:34.440
this happens all across the political spectrum. It's not just on the Trump side.
00:39:43.160
I don't know what that "yeah" meant. I wasn't accepting that at all.
00:39:49.080
That was confabulation, because you were wondering. So, I know you're an economist by day.
00:39:55.800
Let's spend a few minutes on incentives, because it seems to me that
00:39:58.520
many of the problems in our lives are the result not of bad people doing bad things, because they're
00:40:06.200
bad, but all of us good people, or more or less good people, struggling to function in systems
00:40:15.400
where the incentives are not aligned so as to get us to do the right thing most of the time,
00:40:20.520
or make it easy enough to do the right thing. I think you have a line in your book about
00:40:24.120
incentives being like the wind. I mean, you can decide to row into it, or you can tack against it,
00:40:30.520
but it's better to have the wind at your back. And so, how do you think about incentives, and what's the
00:40:35.720
low-hanging fruit here? What is it that we could be doing differently in any area of consequence?
00:40:41.080
So, our book is about motives, and money, power, and respect are things we have as motives,
00:40:50.280
and incentives are often aligned with those things. So, we often do the things that give us more of
00:40:57.640
these things we want. But we'd rather not admit that those are our highest priorities, and so we're
00:41:05.400
usually reluctant to overtly just do what it takes to get the money or respect. So, in most areas of
00:41:12.360
life, we have to put some sort of gloss of some higher motive that we have to pretend we're trying
00:41:17.640
to do. And that means often very direct, simple incentive schemes don't work. They're too obvious.
00:41:24.200
You know, just like your incentive to reciprocate the dinner,
00:41:29.080
that's an incentive you have, but you have also an incentive to not admit that too directly.
00:41:36.040
Because otherwise, you would force them to admit that they mainly wanted the reciprocation. And so,
00:41:43.160
this is an issue with incentives: many of the problems we have in the world happen because we
00:41:49.800
have insufficient incentives to do the right thing, but often that's because we don't want to admit how
00:41:55.880
important incentives are, so we don't want to admit that we need incentives, so we don't restructure
00:42:00.200
things to produce the incentives we need. And because we want to pretend we don't need the
00:42:05.400
incentives. So, for example, your doctor's incentive to give you the best treatment can often be
00:42:11.000
compromised by the fact that they might, under one incentive system, just want to treat you more
00:42:15.480
just because they get paid every time they treat you; or, under another incentive system, want to treat
00:42:20.040
you less because they have to pay out of their pocket every time they treat you. In either case,
00:42:24.280
their incentives might not be well aligned with you, but you could have set up some sort of more
00:42:30.200
direct incentive system where, you know, they had a stake in your health, but you might not be
00:42:35.320
comfortable with asking for that because that might show that you didn't trust your doctor. You might
00:42:40.920
rather on the surface pretend like you trust your doctor and they trust you and you have a nice,
00:42:45.400
comfortable relationship. This is also a problem in financial investment, actually. An awful lot of
00:42:51.880
people invest an awful lot in intermediaries who take a lot of money, but don't offer that much
00:42:57.800
in return. And people often just like the relationship they have with the intermediary,
00:43:02.840
and they don't want a distrusting relationship that would have some explicit stronger financial
00:43:07.240
incentives, so they accept a weak relationship. People often want to feel like they have a
00:43:13.080
relationship and that relationship is degraded by the idea that you might not have trusted them.
00:43:20.920
I'm a researcher in academia and most money comes in the form of grants where they say apply for the
00:43:27.160
grant and then they might give you the grant. We've long known that prizes are often more effective. A
00:43:31.720
prize is where they say if you do the following thing then we'll give you this much money.
00:43:35.880
And a prize can give stronger incentives for people to do things, but a prize is less trusting.
00:43:40.200
And you as the granting agency often want to just form a relationship with someone and then take
00:43:45.560
credit for them as if we were buddies. And this prize sort of makes an arm's-length distance where
00:43:51.720
clearly I don't trust you if I'm going to only pay you if you do this measurable thing. And so we'd
00:43:57.160
rather have this closer relationship than to have a stronger incentive. Is there a meta level to many
00:44:03.000
of these considerations where it can be reasonable to not follow the purely rational line through all
00:44:10.680
of these problems? It sounds like what would happen is if we took all of this to heart we would try to
00:44:18.680
bootstrap ourselves to some new norms that paid better dividends, or that seem more rational
00:44:25.240
economically or otherwise, or in terms of like health outcomes. And yet given human nature we might
00:44:32.680
find the price of anchoring ourselves to those new norms to be unacceptable for one reason or another.
00:44:37.320
So the way I would summarize this is to say our usual institutions let us pretend to be trying to get
00:44:46.680
the thing we pretend to want while actually under the surface giving us the things we actually want.
00:44:55.000
Policy analysts typically try to design policy reforms that would give us more of the things
00:45:00.760
we pretend to want. And we're usually uninterested in that because we know we don't actually want more of the things
00:45:06.680
we pretend we want. If you could design a policy reform that let us continue to pretend to get the things
00:45:14.680
we pretend to want while actually getting more of what we actually want, we'd like that, but we can't admit it.
00:45:20.360
If we stumble into it we'll stay there. But if the policy analysts were just to out loud say,
00:45:26.520
well, this is a system that will give you more of this thing, which is what you actually want but won't admit,
00:45:30.440
right? We don't want to admit it, and then we won't want to embrace that. So yes, what we want to do
00:45:36.200
is pay for the appearance of the thing we're pretending to want and we're often paying a lot for that appearance.
00:45:41.960
Hmm. I would love to see a transcript of what you've just said there.
00:45:49.640
So I'm going to ask you some rapid fire kind of bonus questions here. I want to leave a lot of
00:45:56.280
time for Q&A because though conversation isn't about just exchanging information, you have a lot of
00:46:02.520
information to exchange and I want to get the audience involved. But if you had one piece of advice for a
00:46:10.200
person who wanted to succeed in your area of work, what would that be?
00:46:15.720
I am an intellectual and my measure of success would be insight. There are other measures of
00:46:23.400
success. You could have a prestigious position, you could make a lot of money, you could get a lot of
00:46:28.200
friends. But if the measure of success is insight, then a number of strategies, one of which is just to
00:46:35.400
look for neglected areas. So as we talked about in conversation, there's a strong norm in ordinary
00:46:40.600
conversation to follow the conversation, to talk about what everybody else is talking about. And
00:46:45.400
academics do that, news media does that, and we do that in ordinary conversations with people. But
00:46:50.680
for intellectual contribution, if you jump right in on what everybody else is talking about, your
00:46:55.960
chances of making a large impact are pretty small. You're adding a small amount to what everybody else is
00:47:00.280
talking about. If you go talk about what somebody else isn't talking about, find something important
00:47:04.440
but neglected, your contribution can be quite large, even if you're not especially brilliant
00:47:09.320
or well-tooled. And so one very simple heuristic, if you want to produce intellectual insight, is just
00:47:14.600
to look at what other people aren't looking at that seems important and hope that later on they'll
00:47:19.320
come around to your topic and realize that you did make a contribution. But how long would you stay in
00:47:24.200
that important area, waiting for people to come around? You don't have to stay forever, just
00:47:29.480
long enough to make a contribution, and then you can go off looking for another area to make a
00:47:33.160
contribution to. What, if anything, do you wish you had done differently in your 20s, 30s, or 40s? You
00:47:39.320
can pick the relevant decade. Well, I wandered around a bit, much like Sam, in that I started my PhD
00:47:48.920
program at the age of 34 with two kids, aged zero and two. It's a relatively late start. That was,
00:47:54.680
in some sense, the price for continuing to switch, because other areas seemed to actually be more
00:47:59.400
objectively important and have more promise. But as I said before, this book that I'm out with here
00:48:08.920
is summarizing the thing I wish I would have known at the beginning of that social science career, which
00:48:12.920
is that we are just often not honest with ourselves about our motives. So the thing I'm most known for
00:48:19.480
actually is something called prediction markets, betting markets on important topics. And they do
00:48:23.960
work well, and they give people something they say they want, which is more accurate estimates and
00:48:28.120
information on important topics. And it turns out people are usually not very interested in them,
00:48:33.720
even though you can show over and over again in many ways that they work and they're cheap, etc.
00:48:38.600
And part of why I didn't realize that that would happen is I took people at their word for what they
00:48:44.680
want. So you wish you hadn't spent so much time on prediction markets? Well, I wish I would have
00:48:49.000
understood the constraint that people are not honest about what they want and thought about that
00:48:52.760
constraint when I was initially trying to design institutions. So I've read many other ideas and
00:48:58.840
worked on ideas for reforming politics and medical purchasing and information aggregation, etc. And in
00:49:04.920
each case, I assumed the usual story about what we're trying to do and worked out a better answer.
00:49:11.960
And we actually can not only work out better answers, we can show them not only in math,
00:49:16.040
but in lab experiments and field experiments, we do actually know many ways to make the world better
00:49:19.880
substantially. And the world's not interested, most of them, because we know how to make the world
00:49:24.120
better according to the thing that people say they want, to learn more at school, to get healthier at
00:49:28.760
the hospital, to get better policy in politics. But in fact, emotionally, in their hearts, people kind of know that's not what they
00:49:34.920
want. And so they're not interested. So I wish I would have known that 20 years ago. And this book is
00:49:40.280
hopefully for somebody earlier in their career; somebody can pick this up. You might know a 20-year-old who's
00:49:45.320
been saying for a while, everybody's bullshitting, nobody's telling the truth. Where can I find out
00:49:50.280
what's really going on? I'm hoping our book can be that book.
00:49:53.240
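[An editorial aside on the prediction markets mentioned here: the market mechanism most associated with Hanson's prediction-market work is the logarithmic market scoring rule (LMSR), in which an automated market maker quotes prices that can be read directly as probability estimates. The sketch below is illustrative only; the function names and the liquidity parameter `b` are our choices, not anything stated in the episode.]

```python
import math

def lmsr_cost(quantities, b=100.0):
    # Market maker cost function: C(q) = b * ln(sum_i exp(q_i / b)).
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    # Instantaneous price of each outcome share; prices sum to 1
    # and can be read as the market's probability estimates.
    weights = [math.exp(q / b) for q in quantities]
    total = sum(weights)
    return [w / total for w in weights]

def buy(quantities, outcome, shares, b=100.0):
    # A trader pays C(q_after) - C(q_before) to buy `shares` of `outcome`.
    new_q = list(quantities)
    new_q[outcome] += shares
    return new_q, lmsr_cost(new_q, b) - lmsr_cost(quantities, b)

# A two-outcome market starts at even odds.
q = [0.0, 0.0]
print(lmsr_prices(q))      # → [0.5, 0.5]
q, cost = buy(q, 0, 50.0)  # someone bets on outcome 0
print(lmsr_prices(q))      # outcome 0 is now priced above 0.5
```

[The larger `b` is, the more money it takes to move the price, so the sponsor can tune how much it subsidizes traders for revealing information. This is one way to see Hanson's point: the mechanism demonstrably aggregates beliefs into "more accurate estimates on important topics," which is the thing people say they want.]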
So 10 years from now, what do you think you'll regret doing too much of or too little of at this point in your life?
00:50:01.640
I mean, if I knew that, I would presumably be doing something different.
00:50:07.000
Do you actually think that's true? Isn't that just one of the problems? For instance, you know
00:50:14.440
you want to lose weight, you know how to lose weight, but you still can't get the ding-dong out of your
00:50:18.200
hand? The major issue would be if I'm neglecting the long run for the short run, right? I don't
00:50:22.200
know if I am. But yes, if I am neglecting the long run, then I would regret not investing more in
00:50:27.480
the long run. But I am primarily investing in this long run effort to produce intellectual insight.
00:50:33.640
And I actually think there are scale economies in that. So the more fields you learn, the more
00:50:39.880
mental models and tools you have for learning new fields. So you can actually learn new fields faster
00:50:45.480
the more fields you know. So if your intellectual project is to learn many fields and then find ways
00:50:49.320
to combine the insights from them together, that's something you continue to do more and better
00:50:53.960
as you get older. And so I'm enjoying that wave and not thinking it's over at all.
00:51:00.200
What negative experience, one that you would not wish to repeat, has been most valuable to you?
00:51:07.240
Most valuable to me, negative experience? Or changed you for the better.
00:51:13.080
But it's got to be negative. You wouldn't want to repeat it.
00:51:15.400
Well, so early in my academic career, I sort of really just failed to do the simple standard
00:51:26.440
thing of applying to the best colleges. I'm not sure what went wrong, but somehow my family and I
00:51:31.480
just did not go through the process of applying to good colleges far away. We just
00:51:35.640
sent me to the local college, which was easy for me, too easy compared to my colleagues. So I had lots
00:51:42.760
of free time. So perhaps I might have thought I should have gone to a more challenging college,
00:51:47.960
and then people would have challenged me. But that made me who I am in the sense that with all that
00:51:52.600
free time, I just started studying stuff on my own. I sort of made up my own topics and made up my own
00:51:58.360
questions and just started going in and working on things. And so actually, I was a physics
00:52:03.480
undergraduate major, and the first two years of physics classes are going over all the major topics.
00:52:08.040
And then the last two years go over all the major topics again with more math.
00:52:13.320
And I had all these questions that the math was not answering. And so what I did in the last two
00:52:18.200
years of college was to just play with the equations, just rearrange them, try different ways. And by
00:52:24.280
spending the semester rearranging the equations, I could ace the exams. But I didn't do any of the homework.
00:52:30.920
And so professors who had a formula, like so much percentage homework, so much percentage exams,
00:52:34.920
they, you know, didn't know what to do with me exactly. And so I got low grades in some classes,
00:52:39.640
although, you know, people were willing to give me letters of recommendation. But basically,
00:52:43.560
that formed me. That is, I became the person who didn't just do what I was told. I wasn't following
00:52:50.440
a path that people had laid for me. And I wasn't just going along, learning the things I was supposed
00:52:54.280
to learn. I was just making up my own problems and my own questions and working them out for myself.
00:52:59.320
And in the end, that has some advantages, but I'm not sure that was best overall.
00:53:05.160
I'm going to put that in the bragging category.
00:53:17.080
What worries you most about our collective future?
00:53:20.520
We are collectively ignorant compared to what we could be. We are a vast population, a vast world,
00:53:29.800
a lot of smart people, very capable people. We have many great tools. And we just don't pull that
00:53:35.240
together into a consensus that we can use very well. We fail to do something we could do quite easily.
00:53:41.960
My work on prediction markets was one attempt to try to create an institution which would allow us to
00:53:46.840
collect what we know together effectively and efficiently. And it would work if anybody was
00:53:50.200
interested. But we're not very interested. And so part of my intellectual work is just to try to
00:53:55.080
diagnose why we aren't interested, as part of understanding how we could do better. And I think
00:53:59.320
this fact that we're all trying to show off to each other is part of it. So if I ask, well, what's
00:54:03.160
going wrong with our showing off? I would say the problem is we are showing off to audiences that are
00:54:08.200
too ignorant. That is, if we focused on a really smart audience, a really knowledgeable audience,
00:54:15.080
you're trying to show off to them, then we would be forced to show off in better ways.
00:54:20.200
So for example, we haven't talked much about it, but basically I've said medicine is mostly about
00:54:24.680
showing that we care rather than helping people to get healthy. So when grandma's sick, you make sure
00:54:30.360
she gets expensive medical treatment, the sort that everybody would say is the reasonable thing to do,
00:54:34.600
even if it's not actually very effective. But as long as your audience doesn't know it's not very
00:54:40.200
effective, they will still give you credit for being caring about grandma. If your audience knew that the
00:54:44.920
medicine you were pushing hurt her instead of helping her, they would not consider you as such
00:54:49.080
a caring person. So the more that our audience knows about what actually works and has what effects,
00:54:54.600
the more we would all be pushed to do things that actually had good effects as part of the process
00:54:59.960
of trying to show off and show that we care. Similarly in politics. Actually, before we move on to that,
00:55:04.840
say more about the mismatch in medicine. How is it that we know, or how is it that you think you know
00:55:10.440
that it's more about caring than about results? So again, the structure is a set of puzzles that
00:55:15.880
don't make sense from the usual point of view. So it turns out we have data on variations in health
00:55:22.680
and variations in medicine and there's almost no relationship. That is geographic areas that spend
00:55:28.360
more on medicine or have people do more doctor visits, those areas are not healthier. We also even
00:55:33.640
have randomized experiments where some people have been given randomly a low price of medicine and they
00:55:37.800
consume more and other people have a high price and they consume less and then there's no difference
00:55:43.480
in health between these groups. So at a very basic level, there's very little if any correlation
00:55:49.240
between health and medicine. Not only that, there are other things that correlate strongly with health
00:55:53.960
that people show very little interest in. Well, there must be a lower bound to that though,
00:55:58.680
because some medicine is life-saving clearly, right? Where are you putting the line between... Well,
00:56:04.040
it's not a line. That is, there's a whole mix of medicine and some of the stuff helps and that
00:56:07.640
means other stuff hurts. So if you could just get the stuff that helps and avoid the stuff that hurts,
00:56:13.880
then you could do better. But people show relatively little interest in doing that. And so some
00:56:19.000
medicine hurts, not only does it do zero, it on average hurts. We are not interested in exercise, air
00:56:27.160
quality... But what's the measure of people not being interested in the information that would allow
00:56:31.800
them to get better medicine? We have experiments and studies where people have been given access
00:56:39.560
to information and asked whether they would be willing to pay much for it, or even just given it, to see
00:56:43.560
if it affects their behavior. And consistently, if you give people privately information about the quality
00:56:48.840
of medicine, they just aren't interested and don't act on it. And they won't pay for it, right?
00:56:53.880
They certainly won't pay for it, exactly. So there was a study of people about to undergo heart surgery,
00:56:59.880
where a few percent of people undergoing heart surgery die. So that means you face a few percent
00:57:04.920
risk of death. That should be a serious situation. They said, we have statistics on the local surgeons
00:57:09.800
and the local hospitals in terms of what the percentage is of those patients dying there. And
00:57:14.200
it varies by quite a bit, twice as high in some places as in others. Would you like this information?
00:57:21.000
Only eight percent were willing to pay 50 bucks.
00:57:23.160
And those who were just given the information didn't act on it.
00:57:28.360
Why is it that I think that everyone I know is in the eight percent?
00:57:32.040
Well, that's what they're pretending. A way to understand this is to think about
00:57:37.640
Valentine's, which happened recently. On Valentine's, it's traditional to try to show that you care
00:57:42.280
about someone by, say, buying them a box of chocolates. Now, when you do this, do you ask how hungry they are
00:57:47.560
when you think about how large a box to buy? No. Plausibly, you need to buy as much chocolate as
00:57:54.040
it takes to show you care more than somebody else, regardless of how hungry they are, which is like
00:57:58.680
medicine. We just give people a lot of medicine, even though the extra medicine isn't very useful.
00:58:03.160
And if you ask, well, how do I know which quality of chocolate to get? You know that you need to give
00:58:07.240
a quality of chocolate, a signal that's a common signal of quality. If you happen to privately know that
00:58:12.600
this is a great kind of chocolate, or they happen to privately know a certain kind of thing is a
00:58:16.840
great kind of chocolate, that won't actually affect whether you interpret this as a generous
00:58:20.920
act. The interpretation of generosity is based on a common signal of quality. So if medicine is
00:58:26.440
a way to show that we care, then similarly, what we want is common signals of quality. We aren't
00:58:31.320
actually very interested in private signals of quality of medicine, which is what we actually see.
00:58:44.040
If you could solve just one mystery or problem...
00:58:49.640
If you'd like to continue listening to this podcast, you'll need to subscribe at
00:58:53.160
samharris.org. You'll get access to all full-length episodes of the Making Sense podcast,
00:58:58.200
and to other subscriber-only content, including bonus episodes and AMAs,
00:59:03.000
and the conversations I've been having on the Waking Up app. The Making Sense podcast is ad-free,
00:59:08.040
and relies entirely on listener support. And you can subscribe now at samharris.org.