How to Decide
Summary
Annie Duke is a former professional poker player. She's now a decision-making expert and strategist, and her latest book is How to Decide: Simple Tools for Making Better Choices. In this episode, she talks about how to overcome hindsight bias, how to figure out the probabilities for things that seem difficult to predict, and the importance of embracing an archer's mindset when making decisions.
Transcript
00:00:00.000
Brett McKay here and welcome to another edition of the Art of Manliness podcast. We all make
00:00:11.360
many decisions every single day from little ones like what to eat for breakfast to big
00:00:15.200
ones like whether to take a new job. Given how regularly we're deciding, we certainly
00:00:18.960
have a vested interest in getting better at this skill. But how do we do so? How can
00:00:22.600
we get better at making big choices and spend less time dithering over the insignificant
00:00:26.400
minutiae that often overwhelm our mental bandwidth? And why didn't anyone teach us how
00:00:29.840
to do this stuff to begin with? My guest today has written a book that offers an education
00:00:33.440
in a subject matter many of us missed out on. Her name is Annie Duke. She's a former professional
00:00:37.240
poker player and is now a decision-making expert and strategist. And her latest book is How to
00:00:41.700
Decide: Simple Tools for Making Better Choices. Today on the show, Annie shares many of those
00:00:45.680
practical tools beginning with how to overcome hindsight bias and resulting, which is our
00:00:49.580
tendency to judge decisions based on their outcomes by doing something called knowledge
00:00:53.080
tracking. We then discuss how to figure out the probabilities for things that seem difficult
00:00:56.820
to predict and the importance of embracing an archer's mindset when making decisions. We
00:01:00.860
then get into when you should make decisions slowly, when you can speed up, how to employ
00:01:04.540
the only option test when making a choice, and why when a decision is hard, it's actually
00:01:08.700
really easy. After the show's over, check out our show notes at aom.is/howtodecide.
00:01:25.460
Thanks for having me back. I'm so excited. This is when we talked, what was it? Gosh,
00:01:34.480
Yeah. That was absolutely one of my favorite podcasts that I did last time. So I'm so excited to be back.
00:01:39.960
Well, same here. Thinking in Bets is the book we talked about previously. It's a book that
00:01:43.620
I keep coming back to; even though I read it several years ago, I still think about the ideas.
00:01:48.620
You got a new book out though. It's a follow-up called How to Decide: Simple Tools for Making Better
00:01:53.140
Choices. And this book is basically a workbook, I would describe it, of the tools
00:01:59.220
you talked about in a more theoretical way in Thinking in Bets, but showing people how to actually use them.
00:02:06.660
And as I was reading this, I was thinking, how come no one ever told me this stuff before?
00:02:12.140
Because, right? I mean, like we make decisions all the time, small ones, really big ones, but
00:02:17.620
no one ever sits you down and says, here's how you can make a good decision. Like, why
00:02:22.480
is that? Like, why don't we get taught explicitly how to do something we do every day?
00:02:27.440
Yeah. So, you know, I think this is a very deep question. So just as background, I co-founded
00:02:31.960
an organization called the Alliance for Decision Education. And we're actually trying to tackle
00:02:36.200
exactly this conundrum that you're pointing out, which is why don't we teach decision education
00:02:43.340
to, like, K-12 students? You know, when I talk to people and I ask them, you know, did
00:02:49.640
you ever have an explicit class on decision-making? You know, if anybody has, it would have been
00:02:56.560
in college. And only if you were pursuing, like, certain types of majors.
00:03:01.760
Right. I took a philosophy of decision-making class in college.
00:03:06.420
Exactly. So nobody really teaches you how to make a good decision, which is kind of strange.
00:03:12.600
So, you know, I have some theories about it. So I'll just throw a couple things out there.
00:03:17.960
One is, you know, our educational system is set up actually from way back when, from when
00:03:24.700
England obviously was very colonialist. They had, you know, people who were far and wide
00:03:32.100
and everybody needed to sort of learn very specific skills, like how to sail. So everybody had to be
00:03:39.580
taught the same thing and decision-making actually wasn't something that they were trying to teach
00:03:44.440
because they wanted people, people needed to all be doing kind of the same thing and have the same
00:03:48.640
skills. So for example, trigonometry is in there, not just because it's really good for sailing,
00:03:55.120
but also because it's hard and it doesn't directly feel like it's practical or connects to anything.
00:04:01.660
So it was meant as a screener that would tell people you're going to go on to sort of great
00:04:06.100
things because you had the grit to be able to get through trigonometry, and the rest of you aren't.
00:04:10.660
And then that translated into American education with tracking. So trigonometry is literally a way
00:04:16.740
to clear out the people who aren't willing to work really hard at things that don't have any purpose,
00:04:23.640
which is kind of a weird thing to have in the school system. So our curriculum today is not
00:04:30.240
really designed for today. It's designed for a long time ago. I think that's problem number one.
00:04:35.780
Problem number two, I think is that it's kind of like walking, right? Like you've been walking your
00:04:42.640
whole life and it would never occur to you that you should take a class on how to walk.
00:04:46.660
And everybody has been making decisions their whole lives. And so the idea, I think, that
00:04:53.880
you would maybe be bad at that, that it would be really good if you had a class that really taught
00:05:00.120
you how to make a good decision. I don't think that it's really intuitive. I don't think parents in
00:05:04.940
general think they're poor decision makers. I think that they probably think they're pretty qualified
00:05:08.820
to teach their children. And even when you look at the history of science, it wasn't until,
00:05:14.720
you know, Kahneman and Tversky and Richard Thaler came along, and people like Barry Staw,
00:05:19.460
where they started saying people aren't perfectly rational. If you give people the information
00:05:26.180
that they need and let them make decisions, they actually aren't going to make decisions that are
00:05:30.680
necessarily really rational. And the ways that they're irrational are actually quite predictable.
00:05:35.760
And this was, you know, in the seventies, this was heresy within science and economics.
00:05:42.720
Up until then, the assumption was, you know, a rational actor. So we didn't really even start to
00:05:49.500
figure out the ways in which people are bad at decision-making until the seventies. And then it
00:05:55.940
wasn't really widely accepted until, gosh, you know, it started to gain some traction into the eighties
00:06:01.180
and nineties. And then obviously in the last two decades, people have really become wise to this.
00:06:06.460
And then you have, you know, I think it was 2011 that Thinking, Fast and Slow came out and then the
00:06:12.240
general public really started to get it. So we've been pretty behind the curve on this. And so the fact
00:06:17.560
that it hasn't gotten into the school system is maybe not that surprising. So that's kind of what we're
00:06:23.680
trying to do at the Alliance for Decision Education is kind of catch K through 12 education up with where
00:06:29.560
the science is and actually where business is, because business has really accepted that this is
00:06:34.840
something that they need to work on. Well, let's talk about some of these tools you highlight in
00:06:38.540
the book. And the first one, we talked about this a bit in Thinking in Bets, but I think it's a really,
00:06:43.520
when I learned about this concept, it really, it's changed the way I think about how I interact
00:06:48.900
with the world and how I think about the world. And it's this idea of resulting. So what is resulting?
00:06:54.220
And then how does that get in the way of us making good decisions?
00:06:59.980
Yeah. So it's, that is a concept that has really taken hold from Thinking in Bets. And I'm quite
00:07:06.520
pleased because I think it's a really important concept for, to start understanding kind of where
00:07:10.980
our decision-making goes wrong. So what we want to think about is like, how do we actually learn to
00:07:16.160
become good decision-makers? And it seems obvious that the way that we do that is from experience.
00:07:20.540
So, you know, you make decisions, you get outcomes of the decisions, and then you sort of tie those
00:07:26.340
feedback loops together, then that helps you become better at making decisions. That would be
00:07:31.040
what one would hope. But resulting actually really gets in the way. So this is what resulting is. It's
00:07:36.860
basically what it sounds like. You look at an outcome and depending on the quality of the outcome,
00:07:42.560
was it good? Was it bad? Did you win? Did you lose? You then use that outcome, the quality of the
00:07:48.580
outcome, to work back to the quality of the decision. So the decision that I opened Thinking in Bets with
00:07:56.420
is Pete Carroll in the Super Bowl in 2015. He's obviously not playing, he's coaching. And he's
00:08:03.340
against the Patriots. And there are 26 seconds left in the game. So obviously it's fourth quarter.
00:08:10.960
They're on the one-yard line of the Patriots. It's second down. They have only one timeout. This is
00:08:17.060
actually a really difficult situation for them because they're down by four. So they need to be
00:08:21.360
able to score a touchdown. They can't just kick a field goal. They obviously have three downs that
00:08:25.880
they could do that in, second, third, and fourth down. But they only have 26 seconds left. So this is
00:08:31.700
quite a hard problem here because you have a clock management problem, given that you only have one
00:08:35.460
timeout. So everybody expects Pete Carroll to have Russell Wilson hand the ball off to Marshawn Lynch.
00:08:41.020
Great running back. He doesn't do that. He has Russell Wilson pass. Russell Wilson passes to the
00:08:46.560
right corner of the end zone and the ball is intercepted by Malcolm Butler. And everybody goes
00:08:51.720
nuts that this is absolutely the worst play in Super Bowl history. In fact, USA Today, the headline
00:08:58.220
that they had the next day was that it was the worst play call in NFL history, in all of NFL history.
00:09:03.600
Now, this is a really classic case of resulting because what you can do, I mean, if you go look
00:09:10.080
at any of the articles that were written at the time, like the USA Today article, for example,
00:09:14.120
it's pretty statistics free. So it doesn't tell you kind of what you need to know in order to determine
00:09:20.160
whether that was a good decision or not. Things like how likely was it that Marshawn Lynch was going
00:09:25.700
to score? Or more importantly, how likely was it that the ball was going to get intercepted?
00:09:29.540
But I could tell you those things. Marshawn Lynch was going to score about 20% of the time. It's
00:09:35.780
actually lower than people think it was. The ball was going to get intercepted less than 2% of the
00:09:40.680
time. But I don't really need to do that. All I need to do is do a thought experiment with you,
00:09:45.500
which is imagine it's the same situation, 26 seconds left in the Super Bowl against the dreaded
00:09:51.560
Patriots. They're on the one yard line down by four. Pete Carroll has one timeout, does this really
00:09:57.560
unexpected thing? And he calls for a pass play and the ball is actually complete for the game-winning
00:10:03.840
touchdown. So they catch the ball, game-winning touchdown. I'll just ask you, like, what do the
00:10:09.360
headlines look like the next day? Greatest play, gutsy, amazing. That's right. So all of a sudden,
00:10:16.540
weirdly, USA Today doesn't say it's the worst play in Super Bowl history. So in both cases,
00:10:22.140
whether the ball is complete or intercepted, people make an assumption about what the decision
00:10:29.360
quality is. If it's intercepted, they say the decision quality is terrible. If it's complete
00:10:35.220
for the game-winning touchdown, defeating the Patriots and denying them their fourth Super Bowl
00:10:40.500
ring at the time, you know, then it's the greatest play in Super Bowl history. And that's why he's going
00:10:45.380
to go to the Hall of Fame. But here's where we can see that this is an error because the decision is
00:10:51.640
the decision, right? There's math that goes into it. I told you a little bit about it. Marshawn Lynch
00:10:56.960
is only going to score about 20% of the time. Remember, he's in a compressed part of the field
00:11:01.060
on the one yard line. So there's a lot of Patriots there to stop him. The ball is only going to get
00:11:05.940
intercepted less than 2% of the time. There's some other things that go into that. Like if you pass,
00:11:11.360
you're more likely to get three plays off instead of two, which one would assume you'd want against
00:11:15.920
the Patriots. And those are the things that we should care about as we're trying to determine
00:11:20.260
what the quality of the decision is. But the problem for us as decision makers is that if I
00:11:26.140
were to go through that, and I just went through a little bit of it, but if I were to go through the
00:11:29.480
whole thing, it's very complicated, right? Like you have to understand statistics and
00:11:35.320
probability, what that does in terms of win probability, depending on the choice that you make. You need some
00:11:40.120
options theory in there so that you can understand, you know, what the value is of having the extra play
00:11:45.680
and how you might actually get to that. So there's a lot of conditionals in there as well. It's just,
00:11:51.060
it's complicated. And this is sort of what we face when we're trying to look back on our
00:11:56.740
decisions. It's complicated. That's whether it's the Super Bowl and the last play of the Super Bowl
00:12:02.800
or trying to hire somebody based on just a CV, a few interviews and some references. These things are
00:12:08.880
very complex. So in order to simplify, what we do is we say, well, I know what the result was.
00:12:16.720
The ball was intercepted. So therefore it must've been a terrible play. I know what the result was.
00:12:22.580
It was the game winning touchdown. Therefore it must've been a great play. And you can see why it
00:12:27.660
has to do with that sort of how complex getting to the decision quality is because we don't do this
00:12:33.420
kind of resulting as much when the decision quality is really clear. Like if I go through a green light
00:12:39.480
and I get in an accident, you don't tell me going through a green light was a bad decision.
00:12:44.740
But that's because it's super, like we've already decided that. It's the rules of the road, right?
00:12:51.120
This is getting into two plus two equals four, as opposed to some kind of, you know, strange,
00:12:55.580
like linear algebra or something like that. Right. So, so it's a way that we kind of simplify
00:13:02.120
the world when we shouldn't, that really messes our decision-making up.
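The numbers Annie cites make it possible to sanity-check the call with a back-of-the-envelope model. This is only an illustrative sketch: the roughly 20% run figure and the sub-2% interception figure are from the episode, but the pass-completion-for-touchdown rate and the assumption that passing first preserves a third snap are my own simplifications, not anything Annie states.

```python
# Back-of-the-envelope model of the pass-vs-run call on second and goal.
# P_RUN_TD and P_INT are from the episode; P_PASS_TD and the clock model
# (pass first => three snaps, run first => two) are illustrative assumptions.
P_RUN_TD = 0.20    # Lynch scores on a single run from the one-yard line
P_INT = 0.02       # pass is intercepted ("less than 2% of the time")
P_PASS_TD = 0.25   # assumed: pass is complete for a touchdown

def p_score_run_run():
    """P(touchdown) running twice: with 26 seconds and one timeout,
    assume only two snaps fit."""
    return 1 - (1 - P_RUN_TD) ** 2

def p_score_pass_first():
    """P(touchdown) passing first: an incompletion stops the clock,
    leaving two more snaps to run."""
    p_incomplete = 1 - P_PASS_TD - P_INT  # incompletion: drive continues
    return P_PASS_TD + p_incomplete * p_score_run_run()

print(f"run, run:   {p_score_run_run():.3f}")
print(f"pass first: {p_score_pass_first():.3f}")
```

Under these assumptions, passing first scores roughly 51% of the time versus 36% for two runs, and the sub-2% interception risk barely dents that edge; the point is the structure of the comparison, not the exact numbers.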
00:13:06.440
Gotcha. So what resulting does, it prevents you from learning whether you're actually making good
00:13:10.320
decisions. Like you could be making a decision that's terrible. Like the process, what you're
00:13:14.680
thinking about is just absolutely terrible, but you get good outcomes because of just plain dumb luck.
00:13:20.540
But you think to yourself, well, I'm making a great decision. And really eventually it's going to
00:13:25.360
come back to bite you in the butt, but you'll never know that because you're just looking at the outcome.
00:13:29.820
Yeah. It's, it's actually, I would say it's actually worse than that. So you do some things,
00:13:36.600
you get some great results from it. You decide that the decisions that led to that great result
00:13:43.780
were amazing. Then you do those things again, and maybe now you don't get such great results from it,
00:13:50.360
but now you get caught in motivated reasoning where you start to say, well,
00:13:55.180
I know the decision quality was good. So this must just be bad luck because it would be really,
00:14:02.100
really hard for you to think that the success that you'd had in the past from the decision-making
00:14:06.400
that you'd done previously was not actually because you make great decisions. That doesn't
00:14:11.800
really fit well with the way that we want to think about ourselves. And we want to think about
00:14:16.700
ourselves in a positive way. So part of what happens with
00:14:22.020
motivated reasoning is that we'll start to sort of look for reasons that we can maintain that our
00:14:27.440
decisions were good. And this becomes really problematic as we're thinking about ourselves.
00:14:33.860
So when we look at other people, we do pretty straight resulting, which is if there's a good
00:14:38.360
outcome, it's from a good decision. If there's a bad outcome, it's from a bad decision. But when we're
00:14:43.040
actually thinking about ourselves, we have this real need to maintain a positive self-image. And part of
00:14:48.900
that obviously is I'm a good decision-maker. I do good things and bad things aren't my fault.
00:14:53.920
And because of the presence of luck, now we get into trouble, which is we have some success. We believe
00:15:00.320
that our decisions were good. Maybe they're very low quality though. And eventually they're going to,
00:15:04.720
as you said, bite us in the butt. But when they start to bite us in the butt, we start to blame bad luck.
00:15:12.600
Okay. So with resulting, we judge the quality of our decision based on its outcome. And part of
00:15:17.480
overcoming that is trying to objectively separate out what was actually luck and what was skill in the
00:15:22.520
decision. But then there's another bias connected to resulting, which is hindsight bias. And that's
00:15:28.120
where you think the outcome of something was more predictable than it was. And your memory actually
00:15:33.280
can get distorted. So when you're looking back on something, you think you really knew all along
00:15:38.540
how it was going to turn out. And there's a tool you suggest using called knowledge tracking that can
00:15:43.480
help with both of these things. So can you walk us through knowledge tracking?
00:15:47.980
When you're thinking about a decision, actually think to yourself, what did I know at the time of
00:15:53.220
the decision? What revealed itself after the fact? Those things that revealed themselves after the fact,
00:15:58.120
were they knowable beforehand? If they were knowable beforehand, you still aren't done though.
00:16:04.460
You want to move on to two other questions. Could I afford to get that information?
00:16:07.760
So, you know, as an example, I think I have an example in the book. You've only lived in the
00:16:13.820
South your whole life. You get offered a great job in Boston and you're trying to decide whether
00:16:17.980
to move there because you're concerned about whether you like the weather. You go up for a
00:16:22.380
couple of days in February just to kind of check it out. Doesn't seem so bad. The job's a great
00:16:28.320
opportunity. So you move there and it turns out that you hate it. Your first winter, it's just like
00:16:34.280
brutal and you end up moving back to the South. So this would be a good example where knowledge
00:16:40.780
tracking would be really helpful. Like what did I know at the time? Well, I knew that New England had
00:16:45.840
bad, you know, winters. I wanted to kind of try to figure out if I liked it or not. So I went up there
00:16:53.020
for a couple of days in February. It didn't seem so bad. And so I took the job. What revealed itself
00:16:58.000
after the fact? Well, it turns out that when I had to endure a whole New England winter, I hated it.
00:17:02.880
So now you can say to yourself, was that knowable beforehand? The answer is yes.
00:17:07.100
Right? Like I could have gone and spent a winter up in New England before deciding,
00:17:11.960
except that I couldn't afford to do that because the job wouldn't have still been available to me.
00:17:17.640
So that's just kind of like, then you just sort of shrug your shoulders and notice that that's a
00:17:22.400
case where hindsight bias really happens, where like your friends are going to be like, I knew you'd
00:17:26.040
hate it there. And you're going to say to yourself, I should have known I would have hated it.
00:17:29.700
But this is actually a really helpful tool to sort of get you away from that. Because what you can
00:17:35.100
see is, well, of course I couldn't have known that I would have hated it. I hated it because I couldn't
00:17:38.840
have been up in New England for a whole winter to be able to find it out. Then you can ask yourself
00:17:45.240
the next question, which is, even if I couldn't have known it beforehand, either because it wasn't
00:17:49.760
knowable or because I couldn't afford to go find this information out, could I use that in my
00:17:56.680
decision process going forward? And then another tool that you can use, and this is in retrospect,
00:18:02.020
is actually to try to recreate for yourself, what are the possible things that could have occurred?
00:18:08.740
Right? So if you're thinking about, for example, like the job in Boston,
00:18:13.000
to actually go back and try to get yourself away from that feeling of inevitability that resulting
00:18:17.700
creates and hindsight bias creates that obviously it was going to work out horribly. And so therefore,
00:18:23.580
I should have known it, which is kind of what ends up happening. And instead say, let me try to
00:18:28.340
remember, like at the time that I was thinking about the job, what were all the different ways
00:18:32.020
it could have turned out? Right? And you can think like, I could have loved the job and become a
00:18:36.640
winter nut who then goes skiing. I could have hated the job, but loved Boston so much that I ended up
00:18:41.440
staying in Boston and found another job. Right? You know, it could have been okay.
00:18:46.920
Like the job could have been okay and Boston would have been fine. And I could have spent a few years
00:18:51.320
there and ended up, you know, moving back kind of on my own terms. I could have loved the job,
00:18:57.040
but hated the weather so much that I left, or I could have loved the job, but hated the weather,
00:19:01.220
but felt it was worthwhile to stay. You know, and we can see that once we start to do that,
00:19:05.520
we start to realize like, no, the thing that happened wasn't inevitable. There were all sorts
00:19:10.660
of ways that this could have turned out. So that's kind of like, it's, those are sort of the first
00:19:15.220
three chapters of my book, you know, of How to Decide, the follow-up, are trying to help you with
00:19:20.300
these retrospective problems. How do I look back on a decision and actually start to dig down
00:19:26.040
into the decision quality without falling into these traps where I start to, you know,
00:19:32.060
do resulting or hindsight bias or whatever. And hopefully those tools are pretty clear.
00:19:36.220
The knowledge tracking tool in particular, I think it's quite powerful.
00:19:39.200
Yeah. I like that. I'm actually going to start doing that now: when I'm making a decision,
00:19:42.500
like here's the information I'm using to make that decision, like actually write that down explicitly.
00:19:47.220
Yeah. So that actually brings up a really good point. If we're looking at something
00:19:50.020
in retrospect where we don't have any record of what we thought at the time or what the
00:19:55.540
information that we had was, or what the process that we went through, who we talked to, who we
00:20:01.400
asked for advice, what they thought, any of that stuff, that then becomes really hard. As we go back
00:20:06.260
and try to do these reconstructions, you know, we try to do this knowledge tracking and try to
00:20:11.200
figure out what we knew at the time, or we try to reconstruct sort of these simple
00:20:16.520
decision trees of like, here's the decision I'm thinking about what are the different outcomes that
00:20:20.300
could occur. It's just hard to do in retrospect. So the big lesson of that first section of the book
00:20:25.440
is do it beforehand. That a really great decision process is going to have you explicitly creating
00:20:33.060
some sort of evidentiary record of what your beliefs are, what your rationale or thesis is for the reason
00:20:39.720
that you're thinking about doing something, what you think the different outcomes might be, how likely
00:20:45.500
you think those are. It's going to have you interacting with other people to get their
00:20:50.060
viewpoint on it in order to improve the quality of the knowledge that's going into the decision
00:20:54.880
that you make. And that's all going to, you're going to have a record of that so that when you do get
00:20:59.600
the results, you can look back and you can say, what was my rationale? What did I believe at the
00:21:05.680
time? What was the information that I had? What did I go find out? And then you can actually ask
00:21:11.640
yourself these questions in a much clearer way that's going to allow you to close these feedback
00:21:15.440
loops in a more objective way. It won't be completely objective, but it's going to be a lot better
00:21:19.560
than it would be otherwise. And this now allows you to create really good learning loops that's
00:21:25.700
actually going to improve your decision-making going forward in a much, much faster way. It's going to
00:21:30.980
do a lot more heavy lifting. We're going to take a quick break for a word from our sponsors.
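The "evidentiary record" Annie describes can be as lightweight as a structured journal entry written down before the outcome is known. A minimal sketch, assuming a plain Python journal (the field names here are my own, not a format from the book):

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """One decision-journal entry, captured at decision time."""
    decision: str
    rationale: str                 # thesis for why this seems like the right call
    known_at_the_time: list[str]   # beliefs and information actually in hand
    outcomes: dict[str, float]     # possible outcome -> estimated probability
    advisors: list[str] = field(default_factory=list)  # whose viewpoints were consulted

record = DecisionRecord(
    decision="Take the job in Boston",
    rationale="Great opportunity; a two-day February visit suggested winter is bearable",
    known_at_the_time=["Lived only in the South", "Boston winters are reputedly harsh"],
    outcomes={"love job and city": 0.4, "love job, hate winter": 0.3,
              "dislike job": 0.2, "move back within a year": 0.1},
)

# When results arrive, compare them against this record instead of against
# (hindsight-biased) memory. The probability guesses should sum to one.
assert abs(sum(record.outcomes.values()) - 1.0) < 1e-9
```

The exact schema matters far less than the habit: anything written down at decision time closes the feedback loop far more objectively than reconstruction after the fact.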
00:21:38.940
So we've talked about analyzing past decisions so we can make better decisions, but then you also
00:21:42.740
lay out like, how do you make a good decision that you haven't made yet? And you basically lay out
00:21:48.160
this process where you look at preferences, payoffs, and probabilities. You make a decision tree. So
00:21:52.860
what you do is you say, here's my decision. And then you are going to list out reasonably all the
00:21:58.680
potential possible outcomes. So like, you know, you can't, obviously you're not going to be able to do
00:22:03.020
every single possible outcome, but reasonable outcomes. It's like the example you gave of
00:22:07.080
moving from the South to Boston for a job, you know, possible outcomes. And you had to look at
00:22:12.220
the payoffs, the upsides and the downsides, like the pluses and the negatives. So if you move
00:22:16.780
from the South to Boston for this job, it could be: you love the job and love the weather, love the
00:22:21.660
job but hate the weather, hate the job but love the weather, or hate both. I mean, that's
00:22:26.680
how we're kind of doing it. So you're going to do that. But I thought the really interesting part
00:22:30.680
of this process is figuring out probabilities and humans are really bad at this for the most part,
00:22:36.520
describing probabilities. So, okay, how do you do that? Like, how do you figure out
00:22:40.620
with the decision of, you know, moving from the South to Boston, like the probability of
00:22:47.240
whether you're going to love the job and love the weather, or hate the job? Like, how do you
00:22:52.460
assign something for something you don't even know? Yeah. So this actually brings up something
00:22:57.160
really deep. I mean, the answer is you guess. And it takes a particular type of mindset to
00:23:05.080
be willing to do that. And the mindset is to say, look, no guess is ever like super random,
00:23:13.940
pretty much about anything. It's about figuring out when we think about that, you know, we have that
00:23:19.480
distinction between a guess and an educated guess. It's about how much education can I get into it?
00:23:26.100
Because if I can get a little bit more educated into the guess than I would have if I
00:23:31.500
hadn't tried, that's actually going to really improve my decision-making. So any
00:23:36.760
decision is really just a prediction about the future, right? If I'm thinking about moving to
00:23:41.100
Boston, I'm making some predictions about what's going to make me happiest in the future.
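The preferences-payoffs-probabilities framing Brett outlines can be made concrete with a toy expected-value calculation. The probabilities and the -10..10 "happiness" payoffs below are invented for illustration; nothing in the episode assigns these numbers:

```python
# Toy decision tree for "move to Boston": each outcome gets a guessed
# probability and a payoff on a made-up -10..10 happiness scale.
outcomes = {
    "love job, love weather": {"p": 0.25, "payoff": 9},
    "love job, hate weather": {"p": 0.35, "payoff": 3},
    "hate job, love weather": {"p": 0.15, "payoff": -2},
    "hate job, hate weather": {"p": 0.25, "payoff": -8},
}

# The probability guesses over the listed outcomes should sum to one.
assert abs(sum(o["p"] for o in outcomes.values()) - 1.0) < 1e-9

expected_value = sum(o["p"] * o["payoff"] for o in outcomes.values())
print(f"expected value of moving: {expected_value:+.2f}")
```

With these made-up numbers the move comes out slightly positive, but modest shifts in the guesses flip the sign, which is exactly why getting "more educated into the guess" pays off.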
00:23:44.540
Now, the future is always going to be cloudy to us as mere mortals, but if you're willing to make
00:23:50.500
some guesses, then you can get to a point where it's less cloudy. And even though it's not going
00:23:56.520
to be a perfectly clear picture where you're going to know exactly, less cloudy actually does a lot of
00:24:03.980
work. It gets you pretty far. So let's think about what I mean by this, that all guesses
00:24:09.100
are educated guesses, that there really isn't anything where you should say, like, I don't know,
00:24:15.120
period, right? It should be, what do I know is really what you should be asking. So I'll give you,
00:24:20.220
I'll give you an example. Okay. So I have a new puppy. He's sleeping next to me. You've never seen
00:24:30.860
this puppy, correct? Correct. Okay. How much do you think this puppy weighs?
00:24:35.800
Um, 15 pounds. Okay. And how, what's the lowest amount you think the puppy weighs?
00:24:42.600
Five pounds. And what's the most amount you think the puppy weighs? 25.
00:24:47.520
Okay. So that's great. Uh, so you can't see the puppy. We're not on a video system. I haven't
00:24:54.220
showed you a picture of the puppy. You don't even know what breed the puppy is. Don't know. Yeah.
00:24:58.300
You don't know how old the puppy is. You just know it's a puppy. Right.
00:25:01.600
So you know a little bit of something about how old it is, but not a lot. And you just gave me a
00:25:05.700
really good guess. And by the way, your lower bound and upper bound captured exactly the weight.
00:25:09.620
So you gave a lower bound of five and an upper bound of 25. He's 10 pounds.
00:25:13.960
Okay. So you actually did your job. You found the right answer. It was in that range.
00:25:18.400
Now, was your point forecast of 15 exactly right? No, but it was pretty darn close when you think about
00:25:23.540
the weights of all things, right? Like if you thought like, what's the full range of things
00:25:29.020
could weigh, it's zero to, I don't know, how much does the earth weigh, right? Like millions
00:25:34.960
of pounds, trillions of pounds. So what we've just discovered is that when I asked you to guess at the
00:25:41.060
weight of this dog, even though you've never seen the dog, you started, I assume to recruit,
00:25:46.640
well, what do I know about puppies? Right? Like, well, it's a puppy. So would she describe a dog
00:25:53.720
that's over six months as a puppy? Maybe, but probably not. I just got the puppy. So that means
00:25:59.980
the puppy's probably young, right? How much do puppies in general weigh? How much do dogs in general
00:26:05.940
weigh? How much do they compare to other things? And even if I asked you something like, which I assume
00:26:11.880
you don't know the answer to, maybe you do, like what's the distance between the earth and Jupiter?
00:26:17.260
Yeah. I have no clue. I would say, uh, 2 million miles.
00:26:22.260
Right. So lower, lower bound and upper bound. Yeah. Lower bound 2 million, upper bound 10 million.
00:26:27.580
Okay. So, um, I would probably, so I know a little something more than you do, right? So I know that
00:26:35.160
the sun is 93 million miles away. Oh, wow. Right. But notice though, even so you knew it had to be in the
00:26:43.300
millions. So did you get it exactly right? No, but, but you cleared away a lot of the possibilities
00:26:50.680
because you understood that it at least had to have a million in front of it.
00:26:55.280
And that's, that's actually a really big improvement over not having tried it at all. So when you think
00:27:04.100
about like, what's the likelihood that I'm gonna, you know, enjoy the weather in Boston?
00:27:11.900
The answer isn't, I have no clue because you know things about yourself. You've experienced some
00:27:21.820
cold weather, right? Maybe you went up there in February for a couple of days. So you tried to
00:27:28.420
get some more education into the guess, and you didn't think it was so bad. So, you know, when you go up
00:27:35.140
there, you recognize like, okay, like, um, I think that I'm probably likely enough to like it, that I'm
00:27:42.120
willing to do this. And are you going to get an exact answer? Of course not. But if you can get close, it
00:27:48.880
matters. So I talk about this as like the archer's mindset that we're really focused on the bullseye and we
00:27:55.720
feel like we've got to get to the bullseye. And what that causes us to do is either claim we
00:28:00.180
have the bullseye when we don't, right? Or not try at all because we recognize we won't be able to hit
00:28:05.540
it. And instead we need to have more of an archer's mindset. In archery, you have a target.
00:28:12.040
And while you might be aiming at the bullseye, you also get points for hitting the target.
00:28:17.260
And so we want to realize that in decision-making, we get points for hitting the target.
00:28:21.100
And then beyond that, we get a lot of points for actually defining the size of the target
00:28:26.980
because that does a lot of work for us. So like in the puppy example, you said your lower bound was
00:28:31.100
five pounds and your upper bound was 25 pounds for the weight of this puppy, right? Well, that tells me
00:28:36.520
something about how uncertain you are. When I asked you for the lower bound on Jupiter and the upper
00:28:41.800
bound on Jupiter, this was a much wider range. Why? Because you're less certain, you have less
00:28:47.280
knowledge about the relationship between earth and Jupiter than you do between, you know, the lower
00:28:54.060
bound of the weight of this puppy and the upper bound of the weight of this puppy, which, which you
00:28:57.240
know a lot more about. We can take that to the extreme. If I asked you, what, what's your birthday?
00:29:03.100
You would give me the bullseye because you know it.
00:29:07.760
Gotcha. So yeah. So you know more than you think you do.
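The point-forecast-plus-bounds idea in this exchange can be sketched in code. This is a hypothetical illustration (the `Estimate` class and its method names are my own, not something from the book):

```python
from dataclasses import dataclass

@dataclass
class Estimate:
    """A bullseye guess plus a lower and upper bound (the target)."""
    point: float
    low: float
    high: float

    def width(self) -> float:
        # A wider range signals less certainty about the quantity.
        return self.high - self.low

    def hits_target(self, actual: float) -> bool:
        # You did your job if the true value lands inside the range.
        return self.low <= actual <= self.high

# Brett's guesses from the conversation:
puppy = Estimate(point=15, low=5, high=25)          # pounds
jupiter = Estimate(point=2e6, low=2e6, high=10e6)   # miles

print(puppy.hits_target(10))            # the puppy weighs 10 lb -> True
print(puppy.width() < jupiter.width())  # narrower range, more knowledge -> True
```

The width of the range is doing the communicative work Annie describes: the Jupiter range is far wider than the puppy range because Brett knows far less about it, and a known fact like your own birthday collapses to a zero-width range.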
00:29:11.580
Gotcha. All right. So yeah, so you assign a probability, and you even recommend, like,
00:29:16.260
don't just settle for likely or not likely, like actually put a percentage on it. Cause that'll
00:29:23.680
The reason why we don't want to use words that, so there's all these words that describe probability
00:29:29.600
that we use every day, like likely, always, never, real possibility. Like we say that, like,
00:29:34.520
do you want to go out to the movies this weekend? Yeah. I think that's a good possibility.
00:29:38.220
Right. Like, so, so that would be a way that we throw this around, but you, you can think about,
00:29:42.220
we do this like in a hiring process, right? Like, what do you think of that?
00:29:46.240
I think there's a good possibility they'd be great. So we use these kinds of terms all the
00:29:50.960
time. Well, the reason why we don't want to is kind of twofold. One is that we want to be able
00:29:55.860
to circle back and actually close those feedback loops. And if we use these kinds of mushy terms,
00:29:59.680
it's hard for us to do that because the, these terms have pretty broad meanings. And when you
00:30:04.640
actually ask people, like if you survey people on something like real possibility, which is a term we
00:30:09.200
use a lot and you say, Hey, you know, when you use that word, like what probability do you actually
00:30:15.040
intend? You get a range from about 15% to 90%. So that should be the first clue that there's a
00:30:23.420
problem with those words. So the first problem has to do with that feedback loop is that when you go
00:30:27.180
back and you try to close your own feedback loops, and you've said that something's a real
00:30:30.720
possibility, you can kind of mush around in there, and your motivated reasoning is going to get
00:30:35.540
there. Right. Because you can say, well, I said it was a real possibility. I didn't say it was like,
00:30:41.060
so if like, for example, if I hire someone and they turn out to be great, I get to say, yeah,
00:30:47.960
I told you it was a real possibility they'd be great. And then if I hire someone and they turn
00:30:52.480
out to be poor, I get to say, well, I told you it was a real possibility they'd be great, but I
00:30:55.980
didn't say it was a hundred percent. So, okay, what does that help you with? So that's kind of
00:31:01.820
problem number one. Problem number two, the thing that it doesn't
00:31:06.080
help you with is actually a very broad problem in decision-making, which is, so when we think about
00:31:12.160
problems like confirmation bias, which is just, I notice information that confirms the things I
00:31:17.960
believe to be true. And I don't notice information that disconfirms the beliefs that I have or something
00:31:24.680
like availability bias, which is that I judge things to be more frequent that I have interacted with
00:31:30.160
quite a bit or that are more vivid for me to recall. So when we take those, what you can see
00:31:34.880
is those are all me problems, right? They're all things that have to do with me trying to affirm my
00:31:40.040
beliefs or quirks of my own memory or my own experiences, the way that I've interacted with
00:31:45.280
the world and what my motivations are about the way that I'm reasoning about the world. So Kahneman
00:31:48.940
would call this the inside view, that when we're reasoning about the world, we reason about it from the
00:31:55.120
inside view. In other words, driven by our own experiences, our own knowledge, our own
00:31:59.560
perspectives on the world and the mental models that we sort of, you know, apply to the world. And
00:32:05.060
that's actually where most of the bias is living, is in the inside view. So the antidote to
00:32:11.600
the inside view is the outside view, which is essentially one of two things. It's what's true
00:32:17.000
of the world in general. So an example of that would be a base rate, which is just how often does
00:32:22.100
something happen in a situation similar to the one that I'm considering? So I'll give you an example of a
00:32:27.660
base rate. So let's say it's when coronavirus doesn't exist yet, and you're thinking about
00:32:32.360
opening a restaurant in a certain area, and you think that you have a 90% chance of being successful
00:32:38.440
by the end of the first year. So that would be your inside view guess. You think pretty well of
00:32:44.360
yourself. You probably have some overconfidence. You're probably cherry picking some data that's kind
00:32:50.640
of getting you to that conclusion. Again, not on purpose, but because that's what we naturally do.
00:32:55.760
But the base rate, this would be getting to the outside view, would be to say,
00:32:59.860
well, how often do restaurants succeed within the first year in general in my area? And if you look
00:33:08.580
that up, what you would find out is that the percentage of restaurants that are open after
00:33:13.000
the first year is 40%. So we can see how that helps to discipline the inside view. If I think it's 90%
00:33:20.140
and the world says it's 40%, I ought to rethink my 90% number. So that's one way to do it. But
00:33:26.700
another way to do it, and this is a great way to do it, is to actually get other people's perspectives
00:33:31.400
on your situation. Because other people can be looking at your situation, and they can think
00:33:36.660
very different things than you do about it. This is even if they have the exact same data,
00:33:41.160
they may model the data differently than you. This is even if they have modeled the data exactly the
00:33:47.340
same as you, they may think that you're supposed to do different things about it, given what the
00:33:52.040
model tells you. So it's really, really good to get other people's perspective on the situation that
00:33:58.420
you're considering, on the decision that you're considering. Well, in order to do that, you have
00:34:03.780
to actually communicate clearly to other people what it is that you think, what it is that you believe
00:34:12.060
to be true of the world. And this is where terms like real possibility really become a problem.
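The ambiguity Annie describes can be made concrete with a small sketch. Only the roughly 15% to 90% spread for "real possibility" comes from the survey she cites; the other ranges here are illustrative assumptions:

```python
# Ranges of probabilities people report intending when they use each word.
# "real possibility" reflects the ~15%-90% survey spread mentioned above;
# the rest are illustrative guesses, not survey data.
WORD_RANGES = {
    "never":            (0.00, 0.05),
    "unlikely":         (0.05, 0.35),
    "real possibility": (0.15, 0.90),
    "likely":           (0.55, 0.90),
    "always":           (0.95, 1.00),
}

def hidden_disagreement(word: str, mine: float, yours: float) -> bool:
    """True when both numeric beliefs fit the same vague word, so two
    people can 'agree' verbally while actually disagreeing a lot."""
    low, high = WORD_RANGES[word]
    return low <= mine <= high and low <= yours <= high and mine != yours

# I mean 60%, you mean 20%: we'd both happily say "real possibility",
# and the disagreement stays hidden until we use numbers.
print(hidden_disagreement("real possibility", 0.60, 0.20))  # True
```

A numeric estimate like "55%" leaves no such room to hide, which is exactly why the bullseye-plus-bounds form invites useful disagreement.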
00:34:17.420
Because if I tell you that something is a real possibility, it's so unclear that we know what
00:34:25.960
that means, right? You might think it means 20%, and I might think it means 60%. And we could think
00:34:32.420
that we totally agree that it's a real possibility this candidate could do well. And you may be thinking
00:34:37.820
it's a 20% chance, and I may be thinking it's a 60% chance, and we actually disagree. But we can't
00:34:43.640
find it out because we haven't expressed what we believe with any type of precision. So that's where
00:34:51.420
this idea of giving essentially a bullseye estimate, right, which would be like, I think it's a 55% chance,
00:35:00.480
and then giving a lower and upper bound now becomes really valuable because it's what gets people
00:35:05.860
involved in the conversation, right? So if I give the bullseye estimate, like I think it's a 55%
00:35:12.100
chance, you know exactly what I mean. And you know whether you agree with that. And then when I give
00:35:17.160
the lower and the upper bound, I tell you how certain I am about it. So I'm giving some sense to
00:35:22.200
you of what my target area is. And what's really wonderful when you do that, when you say, well,
00:35:28.400
I, you know, I think the puppy is 15 pounds with a lower bound of five pounds and an upper bound
00:35:33.840
of 25 pounds is that you have said very clearly what your beliefs are in a way that you have
00:35:38.560
actually invited me into the conversation. Inherent in that lower and upper bound is a
00:35:45.620
question of, can you help me with this? I'm telling you how much certainty I have. I've obviously thought
00:35:51.860
about it. I am telling you something that's quite precise. And is there a way that you could help me
00:35:58.360
narrow the range? And that creates really great decision-making, great conversations, because you
00:36:04.440
actually are very clear about what the conversation is and you're maximizing your access to the
00:36:09.460
outside view by doing so. So this is a pretty involved process. And I imagine when you first
00:36:14.680
start doing it, it'll take a long time, but I imagine the more you do it, you kind of, it becomes
00:36:18.880
like a skill, it becomes like intuitive. So we've talked about how to improve the quality of our
00:36:23.500
decisions. The other issue with decision-making that people have is the amount of time, like the
00:36:29.340
bandwidth, people spend on making them; they just agonize, paralysis by analysis. But one of my favorite
00:36:34.480
sections, you give some sort of like hacks to short circuit that paralysis by analysis. Can you share a
00:36:41.400
few that you think are really powerful that people can start using today and actually see a profound
00:36:45.620
change in how much time they're spending on decisions? Yeah, absolutely. So yeah, so I want to say
00:36:51.380
like when I talk about this stuff and I say like, oh, you know, you should build out these decision
00:36:55.020
trees and you should do this knowledge tracking and you should be, you know, thinking about the
00:36:59.560
probability of different things happening. You know, the response is like, how am I ever going to make
00:37:03.200
a decision again? This is going to make me go so slow. And I just want to remind people that I was a
00:37:09.140
poker player. And obviously at the poker table, you have to, you know, you're making decisions very,
00:37:15.300
very quickly and you're iterating them a lot. I obviously don't think that you need to go really slow
00:37:20.640
on every single decision. So a couple of things on that. One is that if you do understand what a
00:37:24.960
robust decision process looks like, this is actually going to help you speed up your decisions
00:37:29.080
because you're going to be able to home in on the things that matter instead of spending your time
00:37:33.040
spinning your wheels, thinking about things that don't matter. So this just is more efficient because
00:37:37.580
it tells you what you should care about. Number one. Number two is kind of like with riding a bike,
00:37:43.380
you have to kind of understand it in a slow way or driving a car. Like this is what the gas pedal
00:37:47.680
does. And this is what the brake pedal does. And if I turn the steering wheel this way,
00:37:51.700
this is what happens. And before you can actually put that into a more, you know, automatic, quicker
00:37:57.720
process. So understanding what a really robust decision process looks like will tell you what
00:38:04.520
the heart of the matter is, but it will also help you to speed up just because you kind of understand
00:38:08.480
what it would look like in its fullest form. But as far as most of the decisions that you're making,
00:38:15.220
generally, we kind of get it a little bit backwards. With some decisions that we should
00:38:21.000
be taking like quite a bit of time on, we'll often just go really fast. I think partly because we know
00:38:26.440
it's complicated. And so we sort of give up in that sense of, I don't want to guess because I don't
00:38:30.320
know. And so therefore I'm just going to go with my gut. And there's a wide variety of decisions where
00:38:34.900
we actually go quite slowly, where we should actually be speeding up. And it's because we're not
00:38:40.820
thinking about the type of decision that we're facing very well. So in order to figure out when
00:38:46.000
you can go fast, what you're essentially figuring out is, if I go really quickly, the time that I'm
00:38:55.040
saving comes at a cost. And that cost is that I'm probably going to increase my error rate.
00:39:01.320
So the decision is just going to be less exact. It's going to be less accurate.
00:39:04.920
And if I increase my error rate, that means that I may get a bad outcome more often than I would
00:39:10.600
have if I'd taken more time. And so once we understand that there's this
00:39:14.820
trade-off between time and accuracy, then we can figure out when we can go fast and when we should
00:39:22.200
slow down. And it's when we can tolerate higher probability of a bad outcome. So let's think about
00:39:30.040
when we can do that. Because that's kind of the broad framework that we want to think about it
00:39:33.640
through. And we can think about this through two things. One is impact and one is optionality.
00:39:37.480
So I'll give you an example of a type of decision that people take a very long time with. And I'm
00:39:42.560
sure you've experienced this. So when we all used to go to restaurants, you probably know the person
00:39:48.200
who would sit with the menu. And they'd be like staring at the menu, asking the waitstaff for their
00:39:57.080
opinion, asking every single person at the table what they were going to order, agonizing over it.
00:40:05.640
And when it came time to order, they'd be like, let me go last. Do you know that person?
00:40:14.080
You did philosophy. I'm guessing you go faster.
00:40:18.920
Yeah. So, but you know that person and those people are very, very common.
00:40:23.080
So it turns out that actually, if you look at the statistics, when you take together what to
00:40:29.480
wear, which I think is probably a faster decision in the pandemic, because it's like sweatpants and
00:40:32.940
then something that looks decent on top, but whatever. But what to wear, what to watch on
00:40:38.300
Netflix and what to eat, people are taking about six to seven hours of work-week time per year on those decisions.
00:40:51.320
That's a lot of time. And I believe that the reason that they do that actually is related to what we
00:40:57.160
talked about with, you know, why people sort of guess in these situations where it's really
00:41:02.520
complicated or are unwilling to guess, they just go with their gut. And it's because when you're
00:41:06.940
thinking about something like ordering off a menu, I think it feels very solvable, right?
00:41:11.480
In kind of the know thyself sense, right? Like you should be able to figure out what your own
00:41:15.300
preferences are. And you know a lot about food. And if you just asked the wait staff and you looked
00:41:21.220
at a few more pictures of the dishes on Yelp, you should be able to get this decision 'right.'
00:41:25.680
And I'm going to put it in air quotes, right? Because it feels like a pretty simple decision
00:41:30.400
that's about your own preferences and you should be able to get it right. And you're kind of in fear
00:41:35.100
of, you know, that moment where you've tried to decide between the chicken and the fish and you
00:41:41.720
get the fish and it's yucky. And what do you say immediately when that happens? I made a mistake.
00:41:48.480
I should have ordered the chicken. That's what you say immediately. But if we go back to that
00:41:53.560
hindsight bias and resulting problem, that's just hindsight bias and resulting,
00:41:58.280
right? Because obviously, absent a time machine, there's no way for you to know that that fish
00:42:03.520
wasn't going to be very good. What you knew is that you like chicken and fish. They both seem
00:42:07.540
pretty good to you. You looked at the preparations and whatever. So it's weird to call that a mistake
00:42:14.400
when the food comes back poorly. And it's weird to say, I should have ordered the other thing,
00:42:19.800
or I should have known to do that. But that's what we do. So we're thinking very short term.
00:42:23.700
So what we want to do is actually think about what is the long-term impact of that decision going
00:42:29.360
awry. So I'll just ask you. So you get crappy fish. We have a meal and the fish is yucky.
00:42:37.020
And you're sad because it was gross. And now I catch you in a year. So it's a year later.
00:42:42.700
And I say to you, Brett, how's your year been? And I'll just ask you that. How's your year been?
00:42:48.960
It's been, all things considered, it's been pretty, it's been all right.
00:42:52.580
Yeah. All things considered. That's, yeah, that's my answer too. All things considered,
00:42:56.100
it's been okay. So that, do you remember that fish that you had when we were in that restaurant
00:43:00.280
a year ago and it was kind of gross? I don't even remember that.
00:43:03.740
Right. Does it have any effect on your happiness today?
00:43:09.140
Right. So, right. So what if I catch you in a month?
00:43:14.920
Same thing. I would have been like, I don't remember. I would have forgotten about it
00:43:19.040
or I just wouldn't even have been thinking about it. Yeah.
00:43:21.240
Or how about a week? I mean, you've had 21 meals since then.
00:43:26.200
No, wouldn't even have been thinking about it. No, no. If it was really expensive,
00:43:32.280
I might still be a little miffed about it.
00:43:36.720
Okay. So, so this, this particular exercise of how would I, you know, does it affect my
00:43:41.820
happiness in a year? Does it affect my happiness in a month? Does it affect my happiness in a week?
00:43:45.880
It's called the happiness test. And the reason that we want to do this, this is a way for us to
00:43:52.680
go have a conversation with the future version of ourself so that the future version of ourself
00:43:56.760
can say, Hey, by the way, that decision makes no difference to you. You can get bad fish.
00:44:02.320
I don't care. Here I am a year later, and it doesn't matter to me. And happiness
00:44:07.780
here is meant as a proxy for whatever the goals are that you're trying to achieve. You know,
00:44:13.380
happiness, you know, assuming when we reach our goals, one assumes that we're happier.
00:44:18.340
So that's why I use happiness as kind of a proxy. So this is a really good test to apply
00:44:23.640
to figure out if I can go fast or should go slow. So when you feel yourself hung up on that decision
00:44:28.280
and you're taking a lot of time with it, just say, am I going to care about this in a year?
00:44:33.000
Am I going to care about this in a month? Am I going to care about this in a week?
00:44:36.520
And the sooner that you're not going to care about it, whether it turns out well or poorly,
00:44:41.660
because if the fish is great and I see you in a year, it also didn't affect your happiness at all.
00:44:46.500
This is just low impact all around. The shorter the time period in which you're not going to care
00:44:50.620
how it turns out, the faster you should go. So this is the first thing. This has to do with
00:44:56.260
impact. This decision is low impact. Good or bad, it doesn't matter. Now, in the case of ordering
00:45:05.140
off a menu, it's particularly low impact because it's also what we would call a repeating option.
00:45:09.880
So remember I said to you, when I see you in a week, you've had 21 meals since then,
00:45:14.760
assuming you eat three times a day. So that's what I'm referring to there is it's a repeating option.
00:45:20.880
So even if your lunch is bad, you get to go have something for dinner. So you
00:45:24.720
basically get to try again pretty quickly. And that's going to be true also of like what to
00:45:29.140
wear or like what to watch on Netflix. Dating, right? If a date, if a date goes poorly, so what?
00:45:37.520
It's just a date and it's a repeating option. You can get right back on Tinder or Bumble or whatever
00:45:44.720
your app is that you like, and you can click and go on a date with somebody else.
00:45:49.760
So that's also a repeated option. Choosing classes in college, it's a repeated option.
00:45:55.260
You get to do it a lot. So when we're repeating options, we can go a little bit faster with those
00:45:59.620
decisions. And when they're low impact, we can go pretty fast. So those are kind of the two
00:46:04.060
things on the impact side. Now there's another framework that we want to think about, which is
00:46:10.460
optionality, which has to do with how easy is it for me to quit the thing that I'm doing and go and
00:46:18.700
do something else. So people may have heard, you know, Jeff Bezos talking about type one or type
00:46:24.320
two decisions or two-way door decisions versus one-way door decisions. And this is basically what
00:46:30.000
he's getting at. The easier it is for me to quit something, the faster I can go because I'm going to
00:46:35.980
be more tolerant of getting a bad outcome because I can switch. I can just quit and go do something
00:46:41.980
else. So this is actually a really, really important concept for great decision-making.
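The two frameworks in this stretch of the conversation, impact (via the happiness test and repeating options) and optionality (one-way versus two-way doors), can be combined into a rough triage sketch. The function and its thresholds are my own illustration of the idea, not a formula from the book:

```python
def decision_speed(care_horizon_days: int, repeating: bool, reversible: bool) -> str:
    """Rough triage of how much deliberation a decision deserves.

    care_horizon_days: the happiness test, i.e. how long until a bad
        outcome stops affecting you (a week? a month? a year?).
    repeating: do you get to make this kind of choice again soon?
    reversible: is it a two-way door you can easily back out of?
    """
    if care_horizon_days <= 7 or (repeating and reversible):
        return "go fast: gut is fine"
    if reversible:
        return "moderate: some analysis, then commit"
    return "go slow: high impact and hard to unwind"

print(decision_speed(1, repeating=True, reversible=True))      # ordering off a menu
print(decision_speed(365, repeating=False, reversible=False))  # a senior hire, marrying
```

The intern-versus-senior-hire example maps directly onto this: the intern choice is both lower impact and easier to unwind, so it lands in the fast lane, while the senior hire deserves the deliberation time.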
00:46:50.100
So we can apply that. Like if you're in a hiring situation, the difference between hiring an intern
00:46:56.340
versus hiring someone who's quite senior, it's going to be harder on the company, on you to unwind a
00:47:05.640
relationship with someone who's senior. So that is a less reversible decision. It also happens to be
00:47:10.860
higher impact, right? So we've got sort of both of those things working together. Whereas an intern,
00:47:16.160
if the intern doesn't work out well, that's relatively low impact, but it's also very easy
00:47:20.360
to unwind. It's not hard to, you know, part ways with an intern in the same way that it can be very
00:47:27.800
difficult to part ways with someone who's quite senior. So that there we can sort of see, like,
00:47:33.760
how should we be spending our decision-making time? And what you'll see, like people will spend,
00:47:37.280
like they'll find a couple of interns who look like they might be really great for the one job,
00:47:41.940
and they'll just agonize over that decision. Which one should I choose? I don't know. I don't know
00:47:46.500
what to do. But this is a decision which you actually should be going pretty fast on, certainly
00:47:51.540
compared to how much time you might spend on the more senior person. So that you can see how this
00:47:56.400
kind of allows us to allocate our time, right? Dating versus marrying is another example of that.
00:48:02.260
It's pretty easy to quit a date. People do it sometimes in the middle of the dates.
00:48:06.940
People do it sometimes before they actually sit down. They look across the room. I don't think
00:48:10.680
that's very nice, but people will do that. I don't recommend it. I think you should at least talk to
00:48:16.040
the person. But marrying, obviously, that is much harder to unwind. It's more difficult to actually
00:48:21.100
quit that decision. So that's one of the things that we want to think about. And then you can
00:48:26.040
start to take that and say, well, now I could actually take that into my decision-making life
00:48:30.700
and use that as a strategy. This would be called decision stacking, which is when I think that I'm
00:48:35.640
going to be facing a very big decision, what could I do that would give me a lot of information that
00:48:41.200
would help me to improve the quality, to improve the educated guesses I'm making that are going into
00:48:46.260
that bigger decision? What could I do now that's pretty low impact and pretty reversible that would
00:48:50.980
help me with that bigger decision? So like as an example, if you're thinking about moving to a
00:48:56.240
new city or moving to a new neighborhood, renting first before you buy. That's an example of decision
00:49:04.180
stacking. Agile software development is an example of decision stacking. I'm not going to do like a
00:49:11.040
large batch software development where I'm going to have to roll this out to my whole customer base
00:49:16.620
all at once. I'm going to do a small test that's pretty beta to a few people and see how they like
00:49:26.120
it, that little handful of people. And then if it doesn't work out, it's not a big deal. I can just
00:49:31.540
roll the code back. I didn't somehow upend my whole customer base. So you're lowering impact and making it
00:49:36.540
much more reversible. Another really good example of this type of decision stacking strategy would be
00:49:42.200
pop-up stores, right? If I'm thinking about releasing a new consumer packaged good, for example,
00:49:48.320
and I don't know whether people are going to respond to it and I'd like to test it maybe in a
00:49:54.740
new city or something like that. I don't want to sign a year-long lease or try to put this across all
00:49:59.640
Whole Foods or something like that. I can just do a pop-up store where the impact of that not working
00:50:05.940
out is not so great. If I do find something good about it, then that's awesome. I can get some
00:50:10.840
information out of that and it's very easy to reverse and shut down because I haven't made any
00:50:15.240
long-term commitments and I'm doing this in a very small way. So this becomes actually really
00:50:19.840
powerful that it doesn't just help you to figure out how to speed your decisions up, but it also
00:50:24.700
gives you a decision strategy for improving the quality of those long-term commitments that you're
00:50:29.900
going to have to make by trying to figure out how to do some lower impact, more reversible decisions
00:50:35.500
beforehand in order to get you the information that you want for that longer-term decision,
00:50:40.040
which is basically what dating is. It's decision stacking.
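The agile example of decision stacking can be sketched as a staged release: run the small, reversible experiment first, and only commit to the full rollout if it works. The helper names and the 0.5 satisfaction threshold here are hypothetical:

```python
def staged_rollout(deploy, cohorts, threshold=0.5):
    """Decision stacking: a cheap, reversible test before the big commitment.

    deploy: callable taking a cohort name and returning a satisfaction
        score in [0, 1] (a hypothetical measurement).
    cohorts: user groups, smallest (beta) first.
    """
    beta, *rest = cohorts
    score = deploy(beta)  # low impact, easy to roll back
    if score < threshold:
        return "rolled back after beta test"
    for cohort in rest:   # the small test bought information; commit further
        deploy(cohort)
    return "full rollout"

scores = {"beta": 0.8, "region A": 0.7, "everyone": 0.75}
print(staged_rollout(scores.get, ["beta", "region A", "everyone"]))  # full rollout
```

Renting before buying and the pop-up store follow the same shape: the early step is chosen precisely because it is low impact and reversible, and its only job is to inform the later, harder-to-unwind decision.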
00:50:45.580
So another tool you use that I started using right away after I read about it is if you're
00:50:50.740
agonizing between like two or three options, like you give the example, well, should I go to
00:50:54.420
Paris or Rome? Like, ah, which one should I do? And you just make this super simple way to frame
00:50:59.800
it. It's like, well, if you could only go to Rome, would you be happy? Yeah, I would be happy if that's
00:51:05.540
the only place I could go this year. Well, if you go to Paris, would you be happy if that's the only
00:51:09.160
place you go this year? Yeah, that would, I'd be fine. I was like, well, then either one would be
00:51:12.940
a good decision. So just pick, just flip a coin basically.
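The only option test Brett describes reduces to a tiny sketch (a hypothetical helper; `random.choice` plays the role of the coin flip):

```python
import random

def only_option_test(options, would_be_happy):
    """Ask of each option: if this were the only one available, would I
    be happy with it? If yes for all of them, the options are effectively
    identical from your vantage point, so just pick one and move on."""
    if options and all(would_be_happy(o) for o in options):
        return random.choice(options)  # the 'flip a coin' step
    return None  # some option fails on its own; the choice actually matters

pick = only_option_test(["Paris", "Rome"], lambda city: True)
print(pick in ("Paris", "Rome"))  # True either way the coin lands
```

A `None` result is the informative case: if one option would not satisfy you on its own, the decision is not a coin flip and deserves real deliberation.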
00:51:17.960
Yeah. So, so people actually will ask me like, when is it okay to go with your gut?
00:51:22.040
And this is the situation where I say it's fine. And the reason why it's fine is because it doesn't
00:51:27.020
matter. So I'm a big fan of using your gut to make decisions when it makes no difference.
00:51:32.320
So use your gut to order off a menu is totally fine, but this is a good example. So, so obviously if
00:51:38.180
you're thinking about like a European vacation, you're trying to decide between Paris and Rome,
00:51:42.280
this doesn't pass the happiness test. If you have a crappy Rome vacation and I see you in a year,
00:51:48.820
it probably did affect your happiness over the year. It's not a repeating option for most people.
00:51:54.020
They can't just like go on another vacation in the next month, right? This clearly has big downside.
00:51:59.020
It's very expensive. You can't really reverse it. Oh, I don't like Rome. Let me go somewhere else.
00:52:04.460
Immediately I'll just abandon it and be somewhere else. I mean, you know, it doesn't sort of
00:52:08.700
satisfy all that stuff that would tell you that you can go fast. This is clearly something that's
00:52:13.000
very high impact. So, but what happens to us is that when we sort of get into these decisions and
00:52:18.600
we get a couple of options and the category of the decision is high impact. My big vacation for the
00:52:25.360
year, this is very high impact. Now we sort of get down into, we've got two options that we're
00:52:30.980
trying to decide between in that category of thing that's going to really matter. And now like,
00:52:37.020
you know what everybody does. It's like you're on TripAdvisor, you're looking at every single
00:52:41.660
review, you're asking anybody, you know, who's been to either Paris or Rome. And it's just like
00:52:50.020
total anxiety. But what you just pointed out with the only option test is that what's hanging you up
00:52:56.960
about this decision, what makes this decision feel so hard, is that the options are, from your
00:53:04.100
vantage point, identical. And what I mean by from your vantage point is the vantage point where you
00:53:10.420
aren't omniscient and you don't have a time machine. This is very, very important. You cannot see
00:53:17.480
how those vacations would go in some kind of perfect sense where the image is crystal clear.
00:53:24.000
It's going to be cloudy, right? Same as the Boston problem. You don't have all the information you
00:53:31.080
need about either place. But from the standpoint of the information that you do have and what you
00:53:38.040
know about your preferences and what you can afford and the time that you have to go, Paris and Rome are
00:53:44.220
kind of identical to each other, given the acuity that you have on this issue, right? So they're the
00:53:52.000
same. And the way that you can find out they're the same is through the only option test. If Paris were
00:53:57.600
the only option that I had, would I be really, really happy with it? Of course, the answer is
00:54:02.320
yes. If Rome were the only option that I had, would I be really, really happy with it? Of course,
00:54:07.080
the answer is yes. So this brings up a really important decision process, which is when a decision
00:54:11.960
is hard, it means it's easy. Meaning when a decision is hard in this particular way, that you have two
00:54:17.120
options that you can't decide between. What that means is it's really easy because what that is telling
00:54:22.120
you is that the options are identical, that there is really no difference between the two. So whichever
00:54:27.900
one you choose, it's probably a pretty good option, which is what you get to through the only option
00:54:33.420
test. So we can go back to that intern problem, right? You have two interns that you're thinking
00:54:38.860
about for one job. They both seem really great. And now you're agonizing about which one you should
00:54:44.640
hire. But you should just step back and say, if intern A was the only person I could hire,
00:54:49.060
would I be ecstatic to have this person as the person to fill the job? Yes. If intern B were the,
00:54:55.340
you know, the only person that I had available to hire, would I be ecstatic to have that person in
00:54:59.800
the job? Yes. Okay, well then you're done. And you can flip a coin. You can go with your gut. You
00:55:04.500
could ask someone else to decide. I don't care. But don't take any more time on the decision.
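Annie's only option test is mechanical enough to sketch in code. Here's a minimal, hypothetical Python version (the function name and the example data are illustrative assumptions, not anything from the book):

```python
import random

def passes_only_option_test(options, would_be_happy):
    """Return True if every option would satisfy you on its own.

    would_be_happy(option) answers the question: "If this were my
    ONLY option, would I be really, really happy with it?"
    """
    return all(would_be_happy(opt) for opt in options)

# Hypothetical example: the Paris-vs-Rome vacation decision,
# where you'd be happy with either city.
options = ["Paris", "Rome"]
happy_with = lambda city: city in {"Paris", "Rome"}

if passes_only_option_test(options, happy_with):
    # The options are effectively identical from your vantage point,
    # so stop deliberating: flip a coin (or go with your gut).
    choice = random.choice(options)
```

The point of the sketch is that once every remaining option clears the bar, further deliberation adds anxiety but no value, so a random pick is fine.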
00:55:09.520
And then we can take a step back from this and say,
00:55:13.680
what is this telling us? And what it's telling us is that decisions are generally thresholding
00:55:18.320
problems. That there's a sorting process, which is of all the options that I have available to me,
00:55:24.440
which of them satisfies the requirements that I have for thinking that this is something that I
00:55:30.080
would want to choose. So that's the sorting process, which means I need to get it above a
00:55:34.760
certain threshold where this is going to be reasonable for me to choose. In the case of a
00:55:39.940
European vacation, this is the amount of money that I have to spend. I'd like there to be great
00:55:44.220
architecture. I'd like the food to be amazing. I want there to be history. I'd like it to be a
00:55:50.040
place where I can walk everywhere, as an example. Okay, so now you've figured out, here are my
00:55:56.300
requirements. I want to look at the options that I have of places that I can go and figure out what
00:56:00.900
meets that threshold. Once something is above that threshold, you're done with the sorting and now
00:56:06.020
you're in the picking part. Picking between options that all have met the threshold, that have all
00:56:12.600
met that sorting process and satisfied that sorting process. And once you're in the picking part of the
00:56:18.400
decision, flip a coin. I love it. It's really, really helpful. Well, Annie, this has been a great
00:56:23.660
conversation. Where can people go to learn more about the book? Okay, so annieduke.com is a great place
00:56:29.840
to go find out about me because that has kind of all things Annie Duke on it. You can find my books
00:56:35.120
there. You can find video of me talking. You can get links to podcasts that I've done. There's also a
00:56:40.680
contact form. I actually love hearing from people who've heard me speak or have read my work. In fact,
00:56:46.000
How to Decide was born of conversations with people who contacted me because what I found after I put out
00:56:53.460
Thinking in Bets was that people were asking me, you know, okay, I see what you're saying in Thinking
00:56:58.260
in Bets about uncertainty and the way it really frustrates our decisions. How would I actually make great
00:57:03.860
decisions? Like what are the tools that I could use? What would the process look like? How can I
00:57:08.100
think about these things in a clearer way? And I just realized they were asking me to write something
00:57:12.340
that was more how-to. And so I did. So I find it very helpful when people actually reach out to me.
00:57:18.180
So please don't be afraid to do that. So that's one place you can find me. I'm on Twitter @AnnieDuke.
00:57:23.540
And then the last thing is, you know, I would love it if people would check out the Alliance for
00:57:27.220
Decision Education, kind of rolling back to the beginning of the conversation.
00:57:30.120
We really think it's an emergency right now to get decision education into K-12. And I, you know,
00:57:37.860
when we think about the information ecosystem that we live in, you know, and how much information is
00:57:42.600
coming in and, you know, there's disinformation and a real ability to fall into serious echo chambers,
00:57:50.520
become extremized. There's this necessity to be able to navigate the world and figure out what's true
00:57:57.760
and then figure out what to do about it, both in the information ecosystem that we're living in,
00:58:04.080
But also, you know, right now, when we think about what's happening in terms of career trajectories
00:58:09.140
and technology and what jobs are going to look like in five years or 10 years, there's so much
00:58:14.420
around that. And it's changing so rapidly that, you know, equipping our youth to be able to sort of
00:58:20.480
navigate that changing landscape, we think is just incredibly important. And as you pointed out,
00:58:24.980
this is not something that's taught in school. And so at the Alliance, we're really trying to
00:58:29.360
change that. So I would love it if people would check the Alliance for Decision Education out.
00:58:33.720
Fantastic. Well, Annie Duke, thanks for your time. It's been a pleasure.
00:58:39.680
My guest today was Annie Duke. She's the author of the book, How to Decide. It's available on
00:58:42.940
amazon.com and bookstores everywhere. You can find out more information about her work at her website,
00:58:46.900
annieduke.com. Also check out our show notes at aom.is slash how to decide. There you'll find links to resources
00:58:52.260
where we delve deeper into this topic. Well, that wraps up another edition of the
00:59:02.880
AOM podcast. Check out our website at artofmanliness.com where you can find our podcast
00:59:06.440
archives, as well as thousands of articles written over the years about pretty much anything you can
00:59:09.620
think of. And if you'd like to enjoy ad-free episodes of the AOM podcast, you can do so at
00:59:12.960
Stitcher Premium. Head over to stitcherpremium.com, sign up, use code MANLINESS at checkout for a free
00:59:17.200
month trial. Once you're signed up, download the Stitcher app on Android or iOS, and you can start
00:59:20.760
enjoying ad-free episodes of the AOM podcast. And if you haven't done so already, I'd appreciate
00:59:24.160
if you take one minute to give us a review on Apple Podcasts or Stitcher. It helps out a lot.
00:59:27.360
And if you've done that already, thank you. Please consider sharing the show with a friend
00:59:30.420
or family member who you think would get something out of it. As always, thank you for the continued
00:59:33.660
support. Until next time, this is Brett McKay, reminding you not only to listen to the AOM
00:59:36.500
podcast, but put what you've heard into action.