#471: Using Mental Models to Make Better Decisions
Episode Stats
Summary
Shane Parrish is a former Canadian intelligence officer and the owner of the website Farnam Street, which publishes articles about better thinking and decision making and is read by Wall Street investors, Silicon Valley entrepreneurs, and leaders across domains. We begin our conversation discussing how Shane's background as an intelligence officer got him thinking hard about hard thinking, and why the musings of investors Warren Buffett and Charlie Munger have had a big influence on his approach to decision making. Shane then shares his overarching decision-making philosophy and explains what mental models are and why they're powerful tools for making better decisions.
Transcript
00:00:00.000
Brett McKay here and welcome to another edition of the Art of Manliness podcast. We live in
00:00:18.940
a complex, fast-changing world. Thriving in this world requires one to make fast decisions
00:00:23.660
with incomplete information. But how do you do that without making too many mistakes?
00:00:27.420
My guest argues that one key is stockpiling your cognitive toolbox with lots of mental
00:00:31.920
models. His name is Shane Parrish. He's a former Canadian intelligence officer and the owner
00:00:35.520
of the website Farnam Street, which publishes articles about better thinking and decision
00:00:39.160
making and is read by Wall Street investors, Silicon Valley entrepreneurs, and leaders
00:00:43.000
across domains. We begin our conversation discussing how Shane's background as an intelligence officer
00:00:47.400
got him thinking hard about hard thinking and why the musings of investors Warren Buffett
00:00:51.620
and Charlie Munger have had a big influence on his approach to decision making. Shane
00:00:55.520
then shares his overarching decision making philosophy and explains what mental models
00:00:58.820
are and why they're powerful tools to make better decisions. We then discuss why you should
00:01:02.620
focus on being consistently not stupid instead of trying to be consistently brilliant, and tactics
00:01:07.640
you can use to make better decisions. After the show's over, check out our show notes at
00:01:11.360
aom.is/farnamstreet. Shane Parrish, welcome to the show.
00:01:27.740
So you head up a website called Farnam Street. I don't know how I discovered you. It was a long,
00:01:34.000
it was a couple of years ago and I've been following it religiously ever since then because I love it.
00:01:38.020
It's a website, a learning community dedicated to learning how to think better, make better
00:01:44.660
decisions. But what's crazy is this thing, I mean, it's read by all these Wall Street investors
00:01:51.040
and leaders across fields. So how did this, how did Farnam Street start and like become this
00:01:55.820
phenomenon? Well, in 2007, I made a decision that probably impacted the lives of a lot of other
00:02:03.000
people. And I remember leaving work. And at the time I worked for an intelligence agency
00:02:07.040
and it was about 2am and I was walking home and I was struggling because I didn't know if I had made
00:02:12.640
the right decision. And I went into work the next day on about three hours of sleep because I stayed
00:02:17.480
up all night. And you know, the stakes are high, right? You have your country, you have people in
00:02:22.920
theater who are making decisions based on what you're doing. You have decisions that you're making
00:02:27.380
that affect them. You have your team, you have their families, you have your organization,
00:02:32.260
you have your country's relationship with other countries and all of that. You're making a call
00:02:36.000
on a judgment call, you know, in the wee hours of the morning after not a lot of sleep.
00:02:40.400
And I went in the next morning and I said, Hey, I went to my boss and I was like, I don't know if I'm
00:02:44.980
making these decisions right. I mean, they're working out, but I don't know if I'm doing it right.
00:02:49.320
I don't know if I'm comfortable, like that I've thought about everything. I might be missing
00:02:52.820
something. And he just laughed at me and said, you know, everybody's in the same boat and sort
00:02:58.380
of shrugged it off. And I remember going home that day going like, I think people deserve better.
00:03:03.020
And I started just doing a deep dive into how to make decisions and like, how do we learn about
00:03:08.100
the world that we're living in? And I went back and I ended up doing my MBA and the MBA proved
00:03:15.200
relatively useless in my case. I think in part because, you know, I had six, seven years of work
00:03:21.520
experience at that point, which is really probably 14 because I was working 12 to 14 hours a day,
00:03:26.880
six days a week. And you just have this different view of the world when you've worked that much
00:03:32.580
and you've done, you know, sort of all the different jobs that I've done and had all the
00:03:36.140
responsibilities I had. And the world's not simple. It's complicated and it's interconnected.
00:03:41.040
And, you know, the MBA is very much like read this chapter and apply this to this case study.
00:03:45.620
And, you know, it oversimplifies things to a degree that is unhelpful. And while I was doing my MBA,
00:03:51.320
I said, well, if I'm not going to learn while I'm doing my MBA, I might as well learn on my own.
00:03:55.840
So I created this website. And at the time it was called 68131.blogger.com, which we don't own
00:04:02.340
anymore, but that was the website. And the reason that it was called that is 68131 is the zip code
00:04:08.580
for Berkshire Hathaway. And the website is an homage to Charlie Munger and Warren Buffett and their way
00:04:14.680
of thinking. And I figured nobody would type in five digits for a website. Like at the time,
00:04:19.620
this is sort of unheard of. And I didn't want a password on the site, so I didn't choose a better
00:04:23.900
domain. And then I just started keeping track of what I was learning, right? And I started with a
00:04:28.840
lot of academic stuff because I figured I would never have access to academic journals. I stopped
00:04:35.100
doing my homework for my MBA because it became formulaic, right? I knew what they wanted to hear.
00:04:39.880
I knew how they wanted things phrased. I knew how simple the world was to them.
00:04:43.860
And so I just kind of like banged out essays that they wanted, but you didn't have to really study
00:04:49.380
a lot. So my study became self-study. And I started reading the letters of Berkshire Hathaway. I started
00:04:53.840
reading everything I could on Charlie Munger. And I'm wondering silently in the background why these
00:04:59.920
two guys, who in Omaha, Nebraska, have created, by all accounts, one of the biggest business successes
00:05:05.780
in history, think about the world in such a complicated, interconnected way. And why aren't I
00:05:12.140
learning that at my MBA? And then as I started to look into it, a lot of these successful people
00:05:16.360
that I admire, sort of Steve Jobs and Elon Musk and all of these people, they think about the world
00:05:22.360
in this very messy sort of way. I mean, they have a way to bring it back to first principles or to walk
00:05:28.520
around a problem in a three-dimensional way, but they realize that it's interconnected and every action
00:05:32.940
that you do has a consequence to it. And I thought, man, this is a much better way to learn.
00:05:38.500
So I just started the website, started writing about it. It was anonymous because I worked at an
00:05:43.080
intelligence agency. I wasn't exactly about to put my name on a website. And slowly, I don't know why
00:05:49.720
or how, but people started to discover the website. At first it was like one person and you can kind of
00:05:55.340
see like one person following you on your RSS feed at the time. And then I think it was like two years
00:06:00.720
and I had 500 and I was like, oh my God, like, this is crazy. Like how did 500 people find this website?
00:06:06.220
And it was 2013, I think, when we, I became unanonymous at 25,000 readers. And that was a big sort of like
00:06:17.880
And that, is that when you changed it to Farnam Street?
00:06:20.140
Yeah, because everybody's sort of like, it was like Farnam Street dot something dot,
00:06:25.940
like, I don't know, at the time it was like an easier to type version, but it was still weird.
00:06:30.020
And then we went to farnamstreetblog.com that year. I became unanonymous. And I think we started
00:06:36.100
the email list all in the same sort of like year. And that was a major sort of like inflection point
00:06:43.280
for us about what we were doing and what I was doing. And I was still working full time for the
00:06:48.300
intelligence agency at the time, but we started to get this audience. And our audience at the time
00:06:53.740
was probably 80% Wall Street. And I would say it's a lot less Wall Street on a percentage basis now,
00:07:00.020
but the three main audiences we have are probably Wall Street, Silicon Valley, and professional sports.
00:07:07.740
That's really interesting, professional sports. Well, we can talk about that later.
00:07:10.700
So Farnam Street, that's the address of Berkshire Hathaway, correct?
00:07:13.340
Right. So that's the street in Omaha, Nebraska, where Warren Buffett lives and works.
00:07:18.080
And it's where the headquarters for Berkshire Hathaway is.
00:07:21.540
Okay. So before we dig into what you write about, I want to backtrack to talk about how
00:07:26.640
that moment when you made that decision, when you're working for the intelligence agency,
00:07:30.860
and you're like, boy, I don't know if I made the right decision.
00:07:33.320
Like before that time, it sounds like you had a moment where you took a step back and you started
00:07:37.460
thinking, like doing metacognition, right? Thinking about how you think. But before that,
00:07:42.600
how were you making decisions? Or is it just sort of, okay, on the fly, what were you doing?
00:07:47.260
Yeah. If you think about it, like, I mean, I started August 28th, 2001. Two weeks later,
00:07:54.340
September 11th happened. And I think like, I don't know, three days after that, I was promoted.
00:08:00.200
And it had nothing to do with me. It had nothing to do with my skills. It had nothing to do with
00:08:04.780
sort of like how good I was as a person. It just had, I was in the right place at the right time to
00:08:09.660
take on a leadership role. And nobody, nobody ever taught me how to make decisions. Nobody in school
00:08:16.180
taught me like how to look at a problem in a three-dimensional way and walk around it from
00:08:20.520
different perspectives and like all the perspectives in the room. And nobody at work taught me how to
00:08:26.580
do that either. It's sort of like you're expected to figure it out and you end up with this ad hoc
00:08:30.980
process, which often works, right? But when it doesn't work, it's hard to diagnose why it doesn't
00:08:36.660
work. And then it's hard to compensate for your errors through a process. And we all have strengths
00:08:41.620
and weaknesses. And ideally, we want to have a repeatable process that we can use that changes as
00:08:48.220
the environment changes, but adapts to our strengths and weaknesses. So it accounts for them, or at least
00:08:52.780
allows for us to take into account where we are naturally prone to make good decisions or bad
00:08:58.760
decisions, or we're naturally prone to overconfidence in a certain scenario. And so then we want to
00:09:03.920
structure something in, if possible, to reduce the biases that we might have in that sort of way.
00:09:12.000
And I think like, you don't want to do that for every decision possible. I mean, we don't want to,
00:09:16.760
you can't, sometimes you have to make split second decisions. And that becomes more about
00:09:20.620
preparation and pattern matching and thinking through second order consequences. But often you have
00:09:27.000
a lot of time to make decisions. And a lot of time can be like 30 minutes. And you want to sort of
00:09:31.800
structure your thinking, and not a lot of people do. And I think that's one of the reasons that we
00:09:35.400
don't get better at making decisions, is we always bring a slightly different approach to the table
00:09:40.340
for how we're going to decide. Whereas if we sat down and we had some sort of process,
00:09:45.200
it doesn't have to be formal. But that process can be like, what are the variables that govern the
00:09:50.220
situation? How do those variables interact with each other? And how might I be fooling myself? I mean,
00:09:55.200
it can be as simple as that. And it can be more complicated, depending on your strengths and
00:09:59.860
weaknesses and the type of decision you're making. Okay. And we'll get into those specifics
00:10:03.300
here in a bit. So let's talk about Charlie Munger. So this is a guy that you were drawn to when you
00:10:08.660
first started thinking about these things while you're doing your MBA. For those who aren't familiar
00:10:12.660
with Munger, what does he do? I mean, he works at Berkshire Hathaway, but you don't hear too much
00:10:17.000
about him because Warren Buffett is the guy that gets most of the attention. Yeah, Buffett gets a lot of
00:10:21.700
the attention. I mean, Munger is an irreverent billionaire at this point. He's the vice chairman of
00:10:26.740
Berkshire Hathaway. And he just has this very unique, almost Richard Feynman-esque view of the
00:10:33.800
world and a bit of wit to him in a way that I find intellectually stimulating, right? Like the
00:10:41.740
world is complicated. I want to read about it. I want to understand that things interact. And if I
00:10:48.020
only understand one portion of the world, I'm not going to understand what's going to happen when I
00:10:52.080
make a decision. And he's very detailed and nuanced about how he thinks about things and how
00:10:58.520
he builds what he calls a latticework of mental models. And I think that that really resonated
00:11:04.280
with me while I was in school because I started seeing each chapter as not something that stands
00:11:09.480
alone in and of itself, sort of like each idea in business school, but something that interconnects
00:11:16.140
with every other part of the world. And then it became, oh, I can just add this to my latticework,
00:11:21.360
my framework. But the next time I make a decision, I'm not going to make it just based on this new
00:11:25.400
model I have. I'm going to incorporate this old model, or I'm going to see if this old model
00:11:29.200
incorporates. And then I'm going to check that. And now I have a better, more accurate view of the
00:11:34.880
world. You can think of it sort of like tracing paper, right? If you draw lines on each sheet of
00:11:40.720
paper, each sheet of paper gives you a view into the world. But if you put those papers on top of each
00:11:45.380
other, well, now you might be able to see what the picture actually is. And that's what we're doing,
00:11:50.920
right? We're struggling to sort of like go through the world and make these decisions. And if you think
00:11:56.420
about what we do when we make decisions, a lot of us make poor initial decisions. And then we spend so
00:12:01.800
much time correcting that. And it could just be like a miscommunication. It could be that we didn't
00:12:06.640
think of the second order consequences. It could be that we didn't have the right models in our head
00:12:12.260
to accurately see the problem for what it was. So we didn't know what to do. So we're slightly
00:12:15.960
off course. But then we spend a ton of time chasing that down, which causes stress and anxiety. And
00:12:21.300
it's part of the reason that we've worked so long. And there's a different approach to that. And one
00:12:26.100
of the different approaches is like, can I learn about the world or intelligently prepare for the
00:12:31.140
decisions I'm likely to make? And what does that intelligent preparation look like? How do I make it a
00:12:35.480
little bit less about luck and make it more about what's within my control?
00:12:40.180
Yeah. And one of the things I love about Charlie Munger is, as you said, he's very nuanced and it's
00:12:44.580
very sophisticated, his thinking. But the way he explains his thinking process, it's very folksy.
00:12:50.020
It's very simple. And you're like, whenever you read something, you're like, oh yeah,
00:12:53.600
that makes perfect sense. Why didn't I think of that before?
00:12:56.560
Yeah. It's so hard to disagree with him, even when he's controversial, right? Like one of his
00:13:00.160
opinions is that the US shouldn't be selling their oil. They should be keeping it and importing oil
00:13:04.600
because oil is cheap and it's a finite resource. And if you think about that at the start, you're
00:13:09.720
like, well, that doesn't make sense. But the more you dig into it, you're like, oh, that probably
00:13:13.320
actually, if you take a different time horizon, that might actually be the best decision that a
00:13:21.200
Right. And we'll get into some more Mungerisms here in a bit. So before we get into specific,
00:13:26.700
I don't know, heuristics or hacks or whatever you want to call them to make decisions, because I think
00:13:30.660
that's what a lot of people want first. They want tactics. Let's talk about
00:13:34.460
overarching principles that you use that guide pretty much everything. It's like meta principles,
00:13:41.020
like first principles that you use to guide decisions in your own life. Or whenever you
00:13:45.660
consult someone or coach someone, what do you tell them?
00:13:49.340
Well, so we have five principles listed on the website that we have, which is fs.blog
00:13:54.820
slash principles. And it's kind of just a guiding framework for what we can think about, right?
00:14:00.580
And the first one is direction over speed. And the concept there is if you're pointed in the
00:14:06.580
wrong direction, it doesn't matter how fast you're traveling, right? Inversely, if you're locked into
00:14:11.840
your desired destination, all progress is positive, no matter how slow or small it seems, right? You're
00:14:17.440
going to reach your goal eventually. And if you think about this as a lot of us spend a lot of time
00:14:23.240
on speed. And not only do we have subtle cues and organizations that we want to signal to other
00:14:29.080
people that we're working fast, that we're busy, that we're, we're doing things, but we don't
00:14:33.480
actually stop and take time and think about like, where are we going? I might be really busy in these
00:14:38.580
meetings, but does that mean we're actually making progress? Or does it mean like, I just have this
00:14:42.600
endless calendar of meetings? Like, does it actually contribute to the work, right? And if you think of
00:14:48.740
velocity, velocity is a concept in physics that not only has speed involved in it, but it has
00:14:54.920
displacement, right? So it has a vector associated with it. Whereas speed is just, it's just fast.
00:15:00.600
Like if you think of a plane leaving New York and going to LA, well, one plane leaves New York and
00:15:05.220
starts flying around in circles and the other plane leaves New York and it's headed for LA, right?
00:15:09.760
They're both flying at the same speed, but one of them is going to their destination and the other is
00:15:14.880
just flying around. It's going just as fast. And I think that concept is something that we have to
00:15:20.220
keep in mind, not only in our personal lives and our relationships, but in the workplace.
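As an aside, the speed-versus-velocity distinction Shane draws here is easy to make concrete in code. The sketch below is purely illustrative (the speeds, headings, and the `displacement` helper are invented for this example): two planes fly at the same speed for the same amount of time, but only the one holding a direction accumulates displacement toward its destination.

```python
# Illustrative sketch of speed vs. velocity: speed is a scalar,
# velocity has a direction, and only directed motion builds up
# net displacement. All numbers here are made up for illustration.
import math

def displacement(headings_deg, speed=500.0, hours_per_leg=1.0):
    """Net distance from the origin after flying each heading in turn."""
    x = y = 0.0
    for h in headings_deg:
        x += speed * hours_per_leg * math.cos(math.radians(h))
        y += speed * hours_per_leg * math.sin(math.radians(h))
    return math.hypot(x, y)

# Plane A holds one heading for four hours; Plane B keeps changing
# direction, effectively flying in a loop at the exact same speed.
straight = displacement([270, 270, 270, 270])
circling = displacement([0, 90, 180, 270])

print(round(straight, 1), round(circling, 1))  # 2000.0 0.0
```

Same speed, same hours in the air, radically different progress, which is the whole point of "direction over speed."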
00:15:25.760
The second principle that we talk about on the website is live deliberately. And we settle into
00:15:32.600
habits and we simply live the same year over and over again, right? We're waiting for some future
00:15:39.880
event to occur before we start living. Like we're waiting for something to
00:15:44.760
happen and we're not conscious about the decisions that we're making. We're not conscious about who we
00:15:49.380
spend our time with. We're just defaulting to what we've done in the past. And so while we wait for
00:15:54.700
a raise or maybe a career opportunity or ideal relationship, I mean, life is passing us by and life is so
00:16:02.640
fragile. And I think we forget that there is nothing more fragile than life. I remember I was in
00:16:09.640
Hawaii this year, and somebody drowned on the beach and died right in front of
00:16:17.440
me. And I was like, I was like crying and I was like, Oh my God, like this person is the same age
00:16:22.960
as me. They look fit and healthy, just like me and their life is over. And, you know, maybe they
00:16:29.300
had an aneurysm while they were swimming, or a heart attack. I don't know the medical sort of
00:16:33.940
like reason of this, but I was like, man, life can go at any point in time. And if you realize that
00:16:40.500
and you recognize it, you can start setting aside time today to pursue your dreams, right? You can
00:16:46.960
start today to learn the things that you'd like to know. You can reach out today and repair a
00:16:52.800
relationship that you want to repair. You can jettison this dead weight that's holding you down
00:16:58.580
and you can be more free, but to do that, you have to be conscious. So living deliberately is
00:17:03.460
about awareness and purposeful action. The third thing that we talk about on the website from a
00:17:09.280
principles point of view is thoughtful opinions held loosely, right? So the common refrain is strong
00:17:15.700
opinions held loosely, but we prefer thoughtful, right? Because often like you have to look at where
00:17:21.360
we get our opinions. So how do you respond when you're faced with facts that contradict a long-held
00:17:26.520
belief of yours? I mean, you should have your ego wrapped up in outcomes and not necessarily in you
00:17:33.100
being right. And I think that's the key to that principle, right? You want to update your knowledge.
00:17:37.640
You want to update your database, your mental repository of information, with new facts.
00:17:43.420
And I think the fourth principle we talk about is principles outlive tactics. And the example we use
00:17:51.260
on the website is football, but another example is sort of the chef and the line cook, right? So a line
00:17:58.240
cook is really good at maybe following a recipe, but they don't necessarily know how the ingredients
00:18:03.100
interact with one another to form a recipe. And they don't necessarily know what that recipe
00:18:09.300
is intended to do. So when something goes wrong, they might not be able to sort of understand
00:18:15.780
what's happening. And so we want to understand things, right? We want to understand not only the
00:18:24.000
what, which is tactics, we want to understand the how. Sometimes we can get the results we want
00:18:30.560
through tactics, but if you want results in a changing environment, you must also understand the
00:18:37.340
why, right? By understanding the principles that shape reality, you understand the why. And alternatively,
00:18:42.880
like another way to view this is tactics might get you what you want, but if you're not a principled
00:18:49.780
person, you might sort of like end up wanting to redo your life. And if you think that sounds crazy,
00:18:57.220
there's this great example. And this time of year is perfect, right? A Christmas Carol by Charles
00:19:02.360
Dickens, right? So you had Ebenezer Scrooge and he wanted to be the richest man in the city. He wanted
00:19:10.840
to be respected. He wanted to be well-known. And I think that he did all of those things,
00:19:16.200
but he did it in a way that was mutually exclusive from things that really mattered. And I've seen this
00:19:21.800
play out over and over again, right? I used to work directly for the deputy minister
00:19:27.400
in the intelligence agency. And you see this sort of stuff happen where people get to their position
00:19:33.940
of power through tactics and then maybe want a redo at the end of their career. Maybe those tactics
00:19:40.760
are mutually exclusive from the relationships that they want after. And the fifth sort of principle
00:19:46.760
we talk about is owning your actions, right? And it's incredibly difficult to do. We're not programmed
00:19:51.860
to expose our egos or to make ourselves vulnerable when we make mistakes or do something stupid.
00:19:56.980
But one of the most powerful ways that I've discovered in life to make giant leaps forward is
00:20:02.400
not only accept that we'll screw up, but actually seek out like, how do we correct this? How do we,
00:20:08.660
how do we get better the next time we're going to do this? It's mostly through refusing to accept
00:20:14.420
ownership of our mistakes that we protect our ego. We protect our worldview. We protect that we're,
00:20:21.340
we're not complicit in why this went wrong, right? Those things prevent us from learning. And we
00:20:27.180
don't want to, we don't want to be prevented from learning. I think it was Stephen Covey who said that
00:20:32.400
proactive people don't blame circumstances, conditions, or conditioning for their behavior.
00:20:38.240
We want to take ownership for our decisions and our lives. And there is an element of luck. There's
00:20:45.060
a lot of elements of luck that happen in the world, right? Like what country you're born in,
00:20:50.160
what your socioeconomic status is when you're born, what your parents are like. You don't control
00:20:56.360
any of that. But at some point you grab the steering wheel and you're, you might not be
00:21:02.260
the next Kanye. And, you know, maybe that's an unfair comparison, but there's a version of you
00:21:07.060
that's on a trajectory. And what you should be focused on is like, how do I maximize my own
00:21:12.200
personal trajectory given where I should be, right? Given where I could be. And I think one of the ways
00:21:17.060
that we do that is we try to go to bed smarter every day. So I want to go back to that principle three,
00:21:22.080
thoughtful opinions held loosely. Because that's related to a Mungerism that really resonated with
00:21:27.380
me. He has this idea that people should have fewer opinions, that you shouldn't have an
00:21:33.440
opinion until you can argue the other side of the argument as well as they can. And then you
00:21:40.380
can earn your opinion. Is that kind of what you were going for there? Yeah. I mean, we call it the
00:21:45.500
work required to have an opinion, right? And so often what I used to see when I managed a lot of
00:21:51.640
people in organizations was that people come in, they would have this really strong opinion, but they
00:21:57.460
wouldn't have really thought about the other side of it. And so they would have a ton of their own
00:22:01.660
ego involved in it. And one of the ways that I used to reduce ego, and it doesn't eliminate it, is I
00:22:07.720
would assign people a role in the meeting. So you would argue for or against it, right? And then your ego
00:22:13.720
comes into like, I'm really going to argue against it, even if I believe in it, because I want to look,
00:22:18.440
you know, like I know what I'm doing, and I'm confident, and I've thought about it. And I
00:22:21.380
wouldn't tell people what role they would have before the meeting. And that was just a way to
00:22:25.600
encourage people to do the homework that they need to do before they can come up with an opinion.
00:22:30.440
And it helps you think about a problem in a three-dimensional way. You should be able to
00:22:34.400
sit down and say, here are the common counter arguments about what I think. And here's what I think
00:22:39.820
about those counter arguments. And you should be able to have that discussion with
00:22:43.620
yourself. And I think that intellectual playfulness, the intellectual curiosity needed to do that is
00:22:49.720
difficult. And you can't do that for everything, right? Sometimes you have to let other people
00:22:54.620
have, you know, think for you. And you can't think about everything, but you have to
00:22:58.840
acknowledge that, you know, maybe that's not your opinion. Maybe that's just an idea instead of what
00:23:04.880
should be done. Yeah. And I imagine the internet makes having thoughtful opinions difficult because the
00:23:10.060
internet rewards strong opinions, right? That shock people or are very upfront.
00:23:17.160
I think we're looking for sort of abstractions or heuristics or tactics, and we're not looking for
00:23:23.420
like how those are created. And if you think about how we learn, you know, a lot of what we consume
00:23:28.400
online is sort of other people's abstractions, right? Like our principles would be a great example of
00:23:33.700
that. Those are abstractions that I've created over years that I think are valuable. And if you
00:23:39.740
read those, you might understand them and you might be like, oh, this makes a lot of sense.
00:23:44.680
And just like when you read a website, that's like the four things you need to do to master office
00:23:49.420
politics. And those tactics probably do make sense. But what you're missing is sort of the reflection
00:23:54.940
that went into those abstractions. And what you're missing from the reflection is the experience that led to
00:24:00.760
that reflection. And so you're missing a lot of fluency and a lot of details that we commonly don't
00:24:07.500
get. We skim over as readers or, you know, consumers of information, but it's through that,
00:24:13.560
that we make reflections. It's through those details that we understand when something is likely to work
00:24:18.820
and when it's not likely to work. And I think that that is when we draw our own abstractions. And so if
00:24:24.540
we're reading other people's abstractions or we're consuming information from other people, we're trying to
00:24:29.620
consume an experience from other people. What we really want to do to improve our learning is ask
00:24:34.700
them like, how did they come up with that? What variables did they consider relevant? How do those
00:24:40.200
variables interact with each other? And then we can actually start to learn about the situation
00:24:43.800
because now they're going to give us the context that we need to draw our own abstractions or at the
00:24:48.500
very least learn when those abstractions are more likely to serve us and when they're more likely to
00:24:54.260
So we've talked about principles, like first principles here. Let's take a step down
00:24:59.200
and talk about how we can look at the world before we actually make decisions. And something you have
00:25:06.020
written about extensively is this idea of mental models. So for those who aren't familiar, what are
00:25:10.900
mental models and how can they help us see the world better?
00:25:14.580
So mental models describe the way the world works, right? They shape how we think,
00:25:19.160
how we understand and how we form beliefs. They're largely subconscious, right? They operate below
00:25:25.540
the surface. We're not generally aware that we're using them at all, but we are. They're the reason
00:25:31.180
that when we look at a problem, we pick these
00:25:37.860
variables as the ones that matter and those as irrelevant. They're how we infer causality. They're how we match patterns.
00:25:44.520
And they're sort of like how we reason, right? And if you think about it, a mental model is simply a
00:25:50.680
representation of how something works. We can't keep all the details of the world in our brains,
00:25:57.020
so we use models to simplify something that's more complex into something that's
00:26:03.140
organizable and understandable. And gravity is a great example of a mental model, right? And one example
00:26:08.740
of how that works, and it's super simple, but if you're holding a pen like I am right now,
00:26:13.160
and I tell you I'm going to drop this, and I ask you what happens, well, you know what happens.
00:26:18.020
And if you hear a click and you see my hand open, you can also retrospectively try to figure out what
00:26:23.100
happened because you understand this concept of gravity. But if I told you to calculate the
00:26:27.180
terminal velocity of something that was falling, most of us wouldn't be able to do that. So we have
00:26:31.920
this concept of gravity and it's useful. We don't necessarily need to know all the details about it,
00:26:37.260
right? We don't need to know that this pen is going to accelerate at 9.8 meters per second squared.
00:26:42.100
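To make the point concrete: the qualitative model ("dropped things fall") is what we actually use, while the quantitative detail is easy to compute on the rare occasions we need it. A minimal sketch (not from the episode, just an illustration) using the 9.8 m/s² figure Shane alludes to:

```python
import math

G = 9.8  # gravitational acceleration near Earth's surface, m/s^2

def fall_time(height_m: float) -> float:
    """Time for an object to fall height_m meters, ignoring air resistance:
    h = (1/2) * g * t^2  =>  t = sqrt(2h / g)."""
    return math.sqrt(2 * height_m / G)

# The mental model: "the pen drops." The detail, if we ever need it:
# a pen falling 1.5 m from a desk hits the floor in about 0.55 seconds.
print(round(fall_time(1.5), 2))
```

The model of gravity does the everyday work; the arithmetic is there only for the edge cases.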
That's not going to help us at all. But we understand that if we drop the pen, what's
00:26:48.340
going to happen? And so the idea with sort of mental models is how do we focus our time on
00:26:54.600
learning mental models that are less likely to change over time so that our knowledge becomes
00:26:59.220
cumulative? And how do we develop a framework for making decisions that incorporates these mental
00:27:06.540
models, right? How do we think better? And if you think about thinking, the quality of your
00:27:11.580
thinking is proportional to the models that you have in your head and their usefulness in the
00:27:16.700
situation at hand, right? So the more models you have, you can think of it as a toolbox, the bigger
00:27:21.800
your mental toolbox, the more likely you are to have the right model to see reality in any given
00:27:27.520
situation. And when it comes to improving your ability to make decisions, the variety of models that
00:27:35.200
you have matters, right? Most of us, though, if you think about it, we're specialists. We go through
00:27:40.720
high school and we start specializing, increasingly, over and over again, right? So you go
00:27:44.840
into a track, science or arts, you go into advanced or non-advanced, then you go to college or university
00:27:50.280
and you get more specialized. You might get the first year, which is a little more multidisciplinary,
00:27:55.920
but increasingly like you live in this sort of domain that you're in. So by default, a typical engineer
00:28:01.380
will think in systems. A psychologist will think in terms of incentives and a biologist might think
00:28:09.180
in terms of evolution, but it's only by putting these disciplines together in our head that we
00:28:14.240
can walk around a problem in a three-dimensional way, right? If we're only looking at the problem
00:28:19.040
one way, we've got a blind spot and blind spots are how we get into trouble. And so if you think about
00:28:25.060
a botanist sort of like looking at a forest, they may focus on the ecosystem. An environmentalist
00:28:30.280
may see the impact of climate change, whereas a forestry engineer might see the state of tree
00:28:35.960
growth. The business person might see the value of the timber and how much it's going to cost to
00:28:40.360
extract it. None of those people are wrong, but none of those views are able to describe the full
00:28:47.900
scope of the forest, right? So mental models are about how do we develop those models that we need
00:28:53.880
in our head to get a better view of reality. And I think that we don't get enough of that through
00:28:59.500
college, university, or our own sort of learning. And what we've tried to do is develop a system where
00:29:06.180
we talk about unchanging mental models that help give you the big ideas of the world.
00:29:11.620
And Munger had said that there's hundreds of mental models, but a
00:29:17.520
relatively few of them carry the bulk of the weight in terms of making better decisions. And you can get
00:29:22.780
esoteric ones, just like you can have a chisel in your toolbox that you might pull out on occasion,
00:29:27.680
but you'll use your hammer a lot more. So there's tools that are more common than other tools that
00:29:33.100
help us think and solve problems. We're going to take a quick break for your word from our sponsors.
00:29:38.680
And now back to the show. So what are some examples of those sort of long lasting ones that you use on
00:29:44.480
a regular basis to make decisions? Well, I think one of my favorites is sort of like the map is not
00:29:49.940
the territory, right? And the concept there is the map of reality is not reality. The best maps are
00:29:55.540
imperfect. Even mental models are imperfect, right? That's because they're reductions of what
00:30:00.780
they represent. And if a map were to represent the territory with perfect fidelity, it would no
00:30:06.700
longer be a reduction, right? And it wouldn't be useful to us if it wasn't a reduction. But a map
00:30:12.380
can also be a snapshot of a point in time representing something that no longer exists. And this is an
00:30:17.600
important model because we run businesses off maps. We use financial statements to evaluate whether one of
00:30:24.320
our investments is doing good. Well, the financial statements are a map that doesn't represent what's
00:30:28.880
actually happening in the business. You can look to Enron as a perfect example of that. Like the
00:30:33.780
financial statements leading up to the bankruptcy, you know, didn't represent
00:30:39.300
the territory of what was actually happening in Enron. If you think about the business that we're in,
00:30:45.640
you can think about email lists. Well, the size of your email list is a map, but it doesn't represent
00:30:50.420
the territory. It doesn't tell you about the open rates. It doesn't tell you about the engagement.
00:30:53.660
It doesn't tell you whether people care about whether they receive the email, or whether they'd miss it
00:30:58.280
if they didn't get it. And just thinking about dashboards and how we run business, we have to run
00:31:05.040
businesses on heuristics. But the more that we run businesses on heuristics, the less in touch we are
00:31:10.520
with the territory, right? The less we see what's actually happening. And we want to keep grounded.
00:31:15.580
We want to keep an eye on what the territory really looks like because we want to know when the
00:31:20.920
territory shifts. Because a shift in the territory, a shift in the environment, a shift in the conditions
00:31:26.480
under which we're operating and the way that we're operating might mean that our map is outdated.
00:31:31.620
And if we're using the wrong map, we're going to get to the wrong destination.
00:31:35.580
Another one that I really like is sort of second order thinking, right? Which is one we used at the
00:31:39.940
intelligence agency all the time, right? Almost everybody can anticipate the immediate results of
00:31:45.660
their actions. But that's kind of first order thinking. And it's pretty easy and it's safe.
00:31:51.260
And it's a way to ensure that you kind of get the same results as everybody else. But second order
00:31:56.200
thinking is thinking further ahead and thinking holistically. It requires us to not only consider
00:32:01.940
our actions and their immediate consequences, but the subsequent effects of those actions as well.
00:32:07.580
And failing to consider the second and third order effects can unleash disaster. If you think about
00:32:11.980
running a business or doing something in life, you want to think about something where the first
00:32:16.900
order consequences are negative, but the second, third, fourth, fifth, sixth order consequences are
00:32:23.160
positive. And the reason that you want to look at those things specifically is because there's not
00:32:27.920
going to be a lot of people who do those things, right? If you think about delayed gratification is a
00:32:32.760
great example of sort of like a first order negative, second order highly likely positive, third order
00:32:39.280
highly likely positive, saving for retirement, another example, right? Like you're suffering now
00:32:44.040
to do something for a later benefit. And those are things that you want to think about, not only in the
00:32:49.160
context of business, like what pain am I willing to suffer now? What can I do now that I know is going
00:32:55.000
to be negative in the short term and visibly negative? That's important, right? You want people to see how
00:33:00.060
negative it is. But if I think about the second, third and fourth order consequences, those are
00:33:05.780
positive consequences. And even better if they're not super visible positive consequences. And then
00:33:12.360
you can start to do things from a competitive point of view that other people can't do and they won't
00:33:16.740
be able to copy and they won't understand what you're doing. And I think those things are really
00:33:21.580
just different ways of seeing the world, right?
00:33:24.780
Yeah. And so there's lots more we can talk about. They're all on the website. We'll send people links
00:33:28.740
there so they can go check them out. One of the other interesting things I've read that Munger talks
00:33:33.620
about is that he's a voracious reader. He's reading about economics. He's reading about
00:33:38.100
philosophy. He's reading biology. He's reading behavior psychology. And what I find interesting
00:33:43.660
is that he'll sometimes find ways, and as he's reading, he's developing these mental models and
00:33:47.660
he'll find ways to apply a mental model, say, from the realm of biology that you would never think to
00:33:53.700
apply to business, but he does that, right? Yeah, definitely. We learn this sort of domain
00:34:00.100
dependence in school, which is really interesting, right? So you get presented with a physics
00:34:04.140
problem in physics class, and you use an almost algorithmic approach to solve it. They're
00:34:10.880
going to give you a problem that looks like a certain way, and you're going to take this formula
00:34:15.160
that you used and you're going to apply it. And we're not focused on sort of like a core understanding
00:34:20.180
of the underlying concepts. And we're not focused on how those concepts might apply outside of
00:34:25.160
biology, outside of physics, outside of chemistry, outside of math. Like probabilistic thinking is a
00:34:30.720
great example of just probability applied to thinking, right? And a lot of people don't even
00:34:35.500
view sort of our thinking as probabilistic, but inherently it is probabilistic. We're just trying to
00:34:41.080
create better probabilities. And I think that we do ourselves a disservice when we learn about these
00:34:46.000
topics and we learn about them in such a one-dimensional way, because the real world doesn't present you
00:34:50.820
problems that look like your grade 10 chemistry test. They're going to present you problems where
00:34:56.380
the information you learned in grade 10 might be valuable for you to apply, but you're not going
00:35:02.420
to see it because you're not thinking about it in that way. And I think if we learn about all of these
00:35:08.060
basic concepts and we just take a look at like how they might apply in different situations. And I think
00:35:12.520
Munger sort of has been a champion on that. And Peter Kaufman is another one. And Peter Bevelin,
00:35:17.860
those three in particular have been really good at, here are some core concepts and here are how they
00:35:24.260
apply outside of these domains in which they've been presented or how we can think about them.
00:35:29.420
And most of the time those examples are fairly esoteric or specific, but they give you a sense
00:35:34.480
for like how you can think about evolution. An example of evolution
00:35:40.640
applied to business would be, you know, things evolve and we have these mutations and those mutations
00:35:45.620
sort of get beneficial selection in a certain environment. We think in organizations that we
00:35:52.980
don't want to try something that's failed again, but that's a really simplistic example. I mean,
00:35:57.580
when you go to somebody and you're like, I have this idea and they're like, oh, that failed. Like
00:36:00.780
we've tried that. And that is a really common thing. I talk to my friends who work in organizations
00:36:05.760
that happens all the time. What you're missing though, is the environment in which it failed.
00:36:10.300
You're not talking about that. You're not talking about, did the reason it fail change? Will it
00:36:16.900
succeed now? Nature is blind in terms of gene mutations. It just keeps trying the same experiments
00:36:22.340
over and over again. And it ends up with different results, right? A trait that is valuable today
00:36:27.720
might've been one that is way less valuable hundreds of thousands of years ago. And that's something that
00:36:34.160
we can apply to business. And you can think about it, it just requires a few extra seconds that you're
00:36:39.940
not dismissing it out of hand and you're going, oh, that failed because of this, but this reason
00:36:45.140
is no longer there. So maybe it will work now. And that allows us to experiment better. And that's
00:36:50.280
an example of like how we can apply evolution to business. So it sounds like the way you develop
00:36:55.420
mental models is reading a lot and just putting these things into practice. I mean, what have you
00:36:59.760
found the best way to develop these mental models? I think like reading and just thinking about
00:37:04.300
like, could this apply in a different scenario is a great example of that. But I mean,
00:37:09.380
we try to distill them for other people because we realize that not everybody has a ton of time
00:37:14.180
to sort of like put this effort into reading biology textbooks or, you know, reading as much
00:37:20.000
as we do. And so we're just trying to like, here's a model. Here's how you can apply it in different
00:37:25.060
ways. And we're going to add to it later, but we give you sort of like 80% of it, and you do the
00:37:31.460
extra work. The problem is, if we give you the whole model, you won't actually learn anything. You need to
00:37:36.220
do a little bit of mental work. You need to ask, like, how does it apply to me? How does this apply to a
00:37:40.600
situation that I'm facing? How can I use this? Are there other circumstances? And it's those questions
00:37:45.300
that create the reflection for you personally. And that reflection leads you to sort of like your
00:37:50.180
own abstraction or where it's going to be useful and where it's going to hinder you. And I think that
00:37:54.980
like the big problem with mental models, I think the world would be just a much better place if we
00:38:00.620
all had a base level of education that included like all the big ideas from most of the major
00:38:06.120
disciplines, not the new novel stuff, like the stuff that doesn't change, right? Like incentives
00:38:11.220
and psychology and randomness and sort of like numeracy and evolution and power laws and systems
00:38:18.960
thinking and feedback loops and chaos dynamics. And, you know, those are the things that we want to
00:38:23.460
think about. And those are the things that we want to learn. And those are the things that you learn
00:38:27.920
in a particular domain, but you don't necessarily learn as a general education. And then we also
00:38:32.140
want to overlay that with sort of what we call the general thinking concepts, which are just tools
00:38:37.520
that allow us to think through problems in a different way. And we already talked about a
00:38:41.900
couple of them, right? The map is not the territory and sort of second order thinking are just ways
00:38:46.760
that we think about problems in a different way. You can also add thought experiment, right? Which
00:38:51.140
Einstein is famous for. And the way that I landed on this was, you know, I did a lot of computer
00:38:57.020
programming. And so you end up with this concept called a sandbox and thought experiments are
00:39:01.980
really like a sandbox, right? You run this experiment and it can't really wreck this system,
00:39:06.400
but you're trying to think about what will happen. And it's in a contained sort of unit.
00:39:11.060
And that was how Einstein came up with relativity, right? And I think there's a lot to that.
00:39:16.160
Thought experiments also help us with second order thinking. They help us think in first
00:39:20.100
principles. They help us think probabilistically. And all these things sort of reinforce each other.
00:39:25.000
So the more of them you have, the better you're able to see reality. And the better you're able to
00:39:29.600
see reality, the fewer blind spots you're going to have. And the fewer blind spots you have,
00:39:36.100
Yeah. Another guy who did something similar to what Munger and these other guys are doing was
00:39:40.320
John Boyd, the military strategist with his OODA loop. And he had sort of a similar idea of
00:39:46.520
mental models. But I love this idea that he had. He only published one piece of written
00:39:53.000
work, right? In his whole life, it's called Destruction and Creation. And he had this idea
00:39:57.060
that you can take what I guess he'd call mental models, and you can take parts of them
00:40:01.340
from each other and then combine them together to start something new. So you destruct,
00:40:06.360
and then you create something new. So that's another level you can take with these mental
00:40:10.060
models: not just use them discretely by themselves, but actually start mashing them together
00:40:16.200
Yeah, totally. And you can also use other people's mental models against them, right?
00:40:21.480
If you're in a military or you work for an intelligence agency, you want to think about
00:40:25.680
the cultural influences that affect people's mental models. You want to think about their
00:40:29.820
genetic heritage. You want to think about their ability to analyze and synthesize and how they're
00:40:34.740
likely to use new information. And you want to think about how they're combining models and how
00:40:40.840
they're taught in schools to combine models. And if you think about organizations and diversity,
00:40:44.900
you also want to think through the diversity at a different level. Like it gives a
00:41:50.180
different meaning to diversity, right? Diversity becomes like applying mental models in a different
00:40:54.760
way. Diversity comes from different backgrounds, different socioeconomic status, different lives,
00:40:59.360
different... But so often we're getting less and less diverse in organizations. We hire similar
00:41:04.360
sort of backgrounds, similar people, and then more and more we're training them in very similar
00:41:09.300
ways. And so they go for it. Everybody wants to be promoted. And so they get into an
00:41:14.200
organization. They're like, what's my path to promotion? And it used to be like, you'd be like,
00:41:19.220
it's your first day, buddy, like calm down. But now it's kind of expected. We do this where we give
00:41:25.200
people a path to promotion. But what we're doing on that path is we're creating a checklist. We're
00:41:29.100
creating a checklist of people who are going to, A, have the same mental
00:41:34.260
models and B, they're going to combine them in the same way. So all of those people are more likely
00:41:38.960
in the future to look at a problem in the exact same way. And I think Boyd's concept of
00:41:43.380
almost combinatory play applied to mental models is really good.
00:41:48.300
So let's get into making decisions. So first of all, let's talk about why really, really smart
00:41:55.080
people can sometimes make really, really bad decisions. Is it just incorrect mental models or
00:42:02.540
Well, I mean, like think about what we were talking about earlier a little bit with the
00:42:06.700
information overload and sort of like how we consume information. It's really easy
00:42:12.820
to tell when we're physically overloaded. Like if we go to the gym together and I put too much weight
00:42:18.220
on a bench press, you're just not going to be able to lift it. And you know that you're physically
00:42:22.860
overloaded, but it's really hard to tell when we're cognitively overloaded. And when we're
00:42:27.060
cognitively overloaded, we tend to take shortcuts. Our brain wants to optimize for,
00:42:32.540
this applies to everybody, right? It wants to optimize for the best solution that fits
00:42:37.120
what we have immediately in our minds. And the more busy we are, the more hurried we are,
00:42:42.640
the less we're going to have in our minds, the less that decision has to satisfy, which is also
00:42:46.860
more likely to mean that decision is not good, especially if it's not a common decision that
00:42:50.720
we're making. And so I think we get led astray in a couple of ways, right? One is just we're
00:42:55.540
overloaded, we're overworked, we're overtired. And one of the reasons that all of that happens,
00:42:59.540
which is really weird and perverse, is that we just make poor initial decisions. And the
00:43:04.660
consequence of poor initial decisions is that we have to spend more time correcting those decisions,
00:43:08.720
which increases our anxiety and our stress. And one of the ways that we can get out of this sort
00:43:14.460
of like spiral is to counterintuitively just slow down, right? Actually schedule thinking time.
00:43:20.740
I mean, that would be one way that we would improve decisions dramatically. And most of the people I know
00:43:27.200
who make really good decisions on a consistent basis do that. And they're not people that you
00:43:32.180
would think typically have time, right? There are people like Patrick Collison who run Stripe and
00:43:36.520
Toby who runs Shopify. And those people make time to make decisions. They make time to think about
00:43:41.940
problems and they think about problems in a different way. And I think that that's really
00:43:45.660
important. And then you also counterintuitively, you want to do something that's first order negative,
00:43:50.880
second order positive. We talked about that earlier, which is like you want to intelligently
00:43:55.400
prepare to make decisions. What are the decisions that you're likely to be making in the next year
00:44:01.000
or two? What information do you need now in advance of those decisions that's going to allow you to
00:44:07.020
make better decisions? And I think too often we go searching for information at the point when we're
00:44:12.400
making a decision. And what happens is we just end up in this weird state, right? And the weird state
00:44:17.440
is that we're seeking out information when we need it. So we're more likely to overvalue the information
00:44:21.820
to begin with. But that information is also commonly known, right? So it's going to almost
00:44:26.940
guarantee a mediocre decision. And that might be great. A mediocre decision might be really good if
00:44:33.760
we don't know what we're doing. We sort of like want to follow the common wisdom because that's going
00:44:38.100
to lead to average performance. And if we do know what we're doing, we want to know when to deviate
00:44:43.840
because deviation when we're not following others is going to lead to outperformance. But too often we
00:45:49.460
sort of like don't know what we're doing and we deviate. And that leads to like what I call the
00:44:54.180
lottery ticket. And it's like the Hail Mary pass in football. It might be successful and it might
00:44:59.280
not. And if it is, it's not repeatable and you have no idea why it worked. And if it doesn't work,
00:45:04.800
well, you just sort of absolve yourself, saying that you didn't know what you were doing. So you don't
00:45:08.600
actually get better. It's the worst quadrant to sort of like be in if you were to map that out on a
00:45:14.360
two-by-two matrix. And I think that we all suffer from these things. So the keys are like slow down.
00:45:20.760
It seems counterintuitive. You might have to work a little bit longer at first,
00:45:24.680
make better initial decisions. That's going to free up a lot of your time. That time,
00:45:29.440
use that time to invest in intelligently preparing to make better decisions. That's going to vary
00:45:34.820
depending on the type of career you have, the type of field that you're in. But you can start by
00:45:39.340
understanding the big mental models that exist in the world, right? What are the 101 biggest ideas
00:45:45.200
that I would have learned if I did a university education in just sort of like the basic ideas of
00:45:50.180
each discipline? And then think about how those things apply to your specific field, your specific
00:45:55.840
problems, and then get more esoteric, right? Like what information do I need to seek out to make
00:46:00.880
better decisions in my niche? And then you want to take time to incorporate that, find it. And not a lot
00:46:06.460
of people are going to do that. And slowly over time, you'll be able to leverage those decisions
00:46:10.600
into more and more responsibilities. At first, it's going to be small. You might have an incremental
00:46:15.460
advantage over somebody else in making a decision, but it might be barely perceptible, if that.
00:46:22.040
But over time, as you make more and more consistently better decisions, you get more and more
00:46:27.280
responsibilities. As you get more and more responsibilities, that leverage starts to kick in.
00:46:31.560
And now that little advantage turns into a bigger advantage.
00:46:36.520
So it sounds like decision-making, a lot of the work isn't on the front end, right? It's not
00:46:41.780
actually when you make the decision. It's just getting the information, thinking, using
00:46:46.720
mental models to look at the problem in a 3D way. And then when it comes time to actually making the
00:46:52.000
decision, I mean, is it pretty easy at that point?
00:46:54.320
Well, I mean, that's a really interesting question because I think if you understand the problem,
00:47:00.900
it's really easy to know what to do. And one of the indications that you don't understand the
00:47:06.000
problem, or you don't understand the trade-offs, or you don't understand what you're optimizing for,
00:47:10.600
you don't understand the situation the way you want to, is that you get stuck in this sort of
00:47:14.560
paralysis of information overload or seeking out information at the time of making a decision
00:47:21.040
in the hopes that it's just going to satisfy you. And that's a good state to be in. You just have to
00:47:26.640
be aware of it, right? None of these states are good or bad by default. Sometimes they serve you,
00:47:32.400
and sometimes they don't serve you. And your goal as a thoughtful sort of practitioner of decision-making
00:47:38.040
is to understand, when is this likely to serve me? And when is this likely to hurt me? And do I have
00:47:43.460
to deviate? And do I have to have a different process for this situation in particular? Like if
00:47:48.060
you're picking toothpaste, it doesn't really matter. The consequences of a bad decision are easily
00:47:52.840
remedied, right? But if you're making a consequential, irreversible decision, you want
00:47:59.280
to approach that problem differently. And what you don't want to be doing is like Googling other
00:48:03.500
people's thinking. You don't want to be Googling sort of like information because you're going to
00:48:07.880
overvalue it. And when you overvalue it, you're going to take risks that you probably shouldn't take.
00:48:13.480
And anything on the first page of Google is probably like commonly known, right? So you're not even
00:48:18.120
getting an information advantage over other people. So you have to think about all of those things when
00:48:22.700
you're making a decision. I know it sounds like a lot, but it becomes a bit of a habit after a while.
00:48:28.440
So whenever you make these decisions, one of my favorite Mungerisms is this idea of like,
00:48:33.140
you know, try... The goal in life is to try to be consistently not stupid instead of trying to
00:48:38.020
be very intelligent. Because a lot of people, they just, they focus on making really, really
00:48:41.500
brilliant decisions. But oftentimes they do that at the expense of avoiding really dumb decisions.
00:48:47.920
Yeah. Think about like most of the concepts that we learn to look at the world with
00:48:52.460
are on a risk basis. So the tools that we have to evaluate situations are based on risk. Like it's
00:48:58.240
like roulette, right? You know how many slots there are, you know, the odds it's going to land on any
00:49:02.880
particular slot, assuming a random wheel. But life isn't really about risk. It's more about uncertainty.
00:49:10.420
And uncertainty by its very nature means we might not know all the possible outcomes. And if we don't
00:49:15.200
know all the possible outcomes, there's no way we know the probability of each individual outcome.
00:49:19.320
So we have this idea of what we see and what we think is likely to happen,
00:49:23.100
but we don't really know how accurate that view is. And so one of the ways to make that view more
00:49:28.820
accurate is to take the inversion of that, which is like, what are the outcomes that I want to avoid?
00:49:34.440
And what can I be doing now to avoid those outcomes? And if I can avoid those outcomes,
00:49:39.200
well, now I'm more likely to get to the outcome I want. And I think working backwards is really,
00:49:44.980
really hard for people to do. And if you think about in meetings, like we had this quote a while
00:49:50.680
ago, which is avoiding stupidity is easier than seeking brilliance. But I came up with that while
00:49:57.580
I worked at the intelligence agency and it was a really apt sort of quote because I was in meetings
00:50:02.660
all the time where people are trying to like one-up each other in their brilliance and insightfulness
00:50:08.440
and sort of like complicated view of a situation. But often the best decision really just when you're
00:50:15.460
dealing with an uncertain environment is like, okay, well, what are the things that would be really
00:50:20.680
bad? How do we eliminate those from happening? And if we can eliminate all the bad outcomes,
00:50:25.500
well, we're only left with good outcomes. And you can think about the problem forwards and sort of
00:50:30.060
backwards. And I think that that gives you a much more holistic view of the situation,
00:50:34.640
which leads to a better understanding, which leads to a better initial decision.
00:50:39.260
Yeah. I think one of the examples Munger gives is of like good heuristics or rules that will
00:50:43.620
prevent you from doing stupid things. And if you just follow those, like you'll have a pretty good
00:50:48.320
life. Like the 10 commandments from the Bible, right? If you can go without killing anybody, without any
00:50:54.480
envy, without committing adultery, without lying. Like, if you avoid those things and the consequences that
00:51:00.580
come with those things, like your life is going to be pretty good. And then everything else is just
00:51:05.460
Like avoid leverage, right? Like financial leverage, avoid, you know, alcohol and substance
00:51:11.220
abuse. And if you think about it in a decision-making context, there's other things you can do to sort
00:51:16.140
of like prime the environment, which is like, get a good night's sleep. Take time to think about the
00:51:21.480
problem. Don't be rushed. I mean, when you look at sources of stupidity or where we're likely to be
00:51:26.500
stupid, it's often when we're rushed, when we're switching context really quickly, when we haven't got a lot
00:51:32.220
of sleep, when we have something important to do. And I think that just slowing down and being like,
00:51:37.400
what are the basics? Let's get the basics right. And what do I control and what don't I control,
00:51:41.880
right? To a large extent, you control how much sleep you get. To a large extent, you control
00:51:46.160
whether you're rushed or not. Even if you work for an organization, I mean, you control a lot of your
00:51:50.520
time, a lot more of your time than you think you do. And the higher up you get in an organization,
00:51:55.400
one of the weird things I found is I controlled less and less of my time, the higher I got.
00:51:59.220
And I thought that was really weird when I almost needed more and more control over my time
00:52:04.400
and not less and less because the decisions have more and more consequences. And you're expected
00:52:09.140
to kind of context switch eight to 10 times over the course of a day and make large decisions that
00:52:16.200
affect a lot of people. And you're not really given a lot of time to think about that. And I think that
00:52:21.700
those are things that you want to start thinking about. What are those variables that we can get right?
00:52:24.980
What things prevent us from, you know, or get in the way or encourage is probably a better way to
00:52:30.700
look at that. What things encourage stupidity or encourage bad decisions?
00:52:36.760
Yeah. And then you're better off than, you know, a lot of people just by doing that,
00:52:42.780
Yeah, you don't have to be brilliant. Just don't be stupid. So beyond taking a multidisciplinary
00:52:47.200
approach to decisions, have you come across any like tried and true tactics or checklists that you recommend?
00:52:56.560
I think Munger's like, Munger came up with this and most people have never even heard of it,
00:53:01.000
but he came up with a very simple framework, which I call the Munger two-step, which is
00:53:04.980
look at the situation. Do I understand it? Okay. If I don't understand it, that's one path away from it.
00:53:12.820
In that path, you want to go seek out somebody who does understand it ideally. If I do understand it,
00:53:18.440
I know what variables matter and I know how those variables interact.
00:53:21.240
And then the second sort of step to this decision making is how might I be fooling myself? What are
00:53:27.940
the ways that I might be tricking myself into thinking that I'm right about this? And I think
00:53:33.200
that that is a very simple heuristic and framework that people can start with. And one of the mistakes
00:53:38.540
that I see people make is like, I don't know what I'm doing, but I don't recognize it. So it's super
00:53:42.700
important that you recognize it, right? Again, tying your ego to outcomes and not to you
00:53:47.920
personally being right enables you to see the world much more clearly than other people.
00:53:53.420
And so when you're able to go to somebody else, the mistake that most of us make,
00:53:57.780
maybe we have to make a decision in an area that we're not an expert in,
00:54:01.180
is that we ask people what they would do. We go to the auto dealership and we ask them like,
00:54:07.200
what should we fix on our car? And of course they have their own incentives and we don't learn
00:54:12.580
anything when they tell us. Or we go to somebody and we say, how should I pick a doctor? And we go
00:54:17.360
to a doctor friend of ours and we ask them like, what doctor would you pick? What we should ask them
00:54:21.800
is like, what variables would you consider relevant when you pick a doctor? Because now we're actually
00:54:27.160
learning. Now, the next time I have to pick a doctor, I have an idea of what those variables are,
00:54:32.020
which is better than just somebody telling me what to do. But we're so busy and we're so sort of like
00:54:38.180
starving for meaning in our life that we just, sometimes we coast, right? We ask people like,
00:54:44.020
who would you pick as a doctor? And then we're not actually taking advantage of an opportunity to
00:54:47.900
learn. It might take five extra minutes to learn something, but you're going to learn something
00:54:51.460
that applies over the course of your life. That's a great example of something that might be first
00:54:56.100
order negative, second order positive. So I imagine on that second part of the Munger two-step,
00:55:01.700
like going, you know, figuring out how you're, how you could be fooling yourself,
00:55:05.560
like having a list of biases that exist out there that we know of and just walking through it,
00:55:11.300
check by check, it's like, am I, is this bias in effect here? Is this bias in effect
00:55:15.120
there? And then, you know, answer those questions and you kind of get a better idea if you're fooling
00:55:18.880
yourself or not. I have like a bit of mixed feelings on that. Like, I think that the more
00:55:24.560
intelligent you are, the better the story you're going to tell yourself about why that bias doesn't
00:55:30.760
apply in this particular situation. I think biases are great at explaining
00:55:34.180
why our minds trick us. I think that we need to structure things more physically in our environment
00:55:42.320
or with a process, structured thinking to sort of account for biases, right? Whether we have
00:55:50.740
reminders about what to do, whether we sort of like have this informal process that we adjust based on
00:55:57.360
the type of decision-making that we're doing. And I think that we want to incorporate that.
00:56:02.840
We also want to incorporate other people's views that are very diverse and different from us.
00:56:08.200
And I think that that's going to allow us to sort of like get out of this. And we really,
00:56:12.060
I mean, the ultimate one is just attach your ego to outcomes, not your ego to your opinion or your
00:56:16.840
idea of being the one that's adopted. And that's going to enable you to just see clearer what's
00:56:22.060
happening in the world. And I think ultimately that's what we want to do. We want to understand
00:56:26.480
the situation. It was Wittgenstein who said to understand the problem is to know what to do.
00:56:32.420
And I imagine too, besides detaching your ego from your decision, like also detaching yourself
00:56:37.660
from results might help because sometimes you can make a good decision, like the right decision,
00:56:41.360
but the results are bad because of factors that you had no control over.
00:56:45.320
Yeah. I think a lot of times you have to play a repeatable game, right? And that
00:56:51.860
repeatable game is like, how do I calibrate? Like, is my judgment of the fact that I made this right
00:56:56.340
decision, correct? And you have to be self-aware enough to be like, oh, I consistently think I'm
00:57:01.920
right, but I'm getting bad outcomes. Well, there's something wrong either with your view of the world,
00:57:06.260
or with how the decision is being implemented. There's something that, you know, there's a flag there that
00:57:11.140
you need to look at. It doesn't mean that you're wrong. It doesn't mean that you made a bad decision,
00:57:15.080
but it does mean that there's something there for you to look at. Too often, it's really easy just
00:57:19.540
to convince ourselves that we, we did the best we could. We made the best decision or, you know,
00:57:25.600
given the information we had, that was, that was all I would decide. But I used to ask people
00:57:29.440
at the intelligence agency, like, what information did you have, what information did you use to make that
00:57:35.180
decision? Show me. And people would just come up with stuff and they would come up with it post hoc.
00:57:39.580
And then that's how we started creating decision journals, right? Which is like, no,
00:57:43.020
you're going to record this at the time you make the decision. We're going to see like,
00:57:46.920
this is how I can judge your judgment. This is how I can be comfortable trusting you to make
00:57:51.760
decisions. I need to see the way that you think. I need to see the variables that you consider
00:57:56.540
relevant. And together, we're going to hone your judgment. And if you're consistently missing
00:58:00.620
something, it's my job as your boss or peer to sort of like point that out so that we can come to
00:58:05.060
better decisions together. And if we have to structurally process that, maybe your decision journal
00:58:10.380
includes a flag for, Hey, are you, uh, are you considering a large enough sample size? Because
00:58:15.740
you have a bias towards small sample sizes and just that alone, like you have to fill that in.
00:58:20.940
It's not a checklist. It's something you have to fill out. You have to explain and you have to do it in
00:58:24.800
your own handwriting. And we were able to pretty dramatically raise the, I think the quality of
00:58:30.940
the decisions we made. And is there some place people can go to learn about those decision journals?
00:58:35.840
Yeah. If you just Google decision journal or just go to fs.blog
00:58:40.280
slash DJ for decision journal, you'll, we have a template online that we use. We'll be updating
00:58:46.820
that soon. Uh, we're working with the special forces to come up with a different, a slightly
00:58:53.700
That's awesome. Well, Shane, this has been a great conversation and there's like so much more we could
00:58:57.540
talk about. We could probably devote like entire episodes to like individual mental models.
00:59:01.720
So people can go to fs.blog to find out more about what you do.
00:59:09.640
Fantastic. Well, Shane Parrish, thanks so much for your time. It's been an absolute pleasure.
00:59:12.300
Thanks, man. Really appreciated the conversation.
00:59:14.240
Well, that wraps up another edition of the AOM podcast. Head over to artofmanliness.com where
00:59:30.980
you'll find thousands of thorough, well-researched articles on personal finances, style, life,
00:59:35.560
social skills, you name it. It's there. And if you haven't done so already, I'd appreciate if you
00:59:39.000
take one minute to give us a review on iTunes or Stitcher. It helps out a lot. And if you've done that
00:59:42.940
already, thank you. Please consider sharing the show with a friend or family member you think
00:59:46.820
would get something out of it. Until next time, this is Brett McKay encouraging you to not only
00:59:50.320
listen to the AOM podcast, but to put what you've learned into action.