#155 — Mental Models
Episode Stats
Words per Minute
177.9
Summary
Shane Parrish is a blogger and podcaster. He has a background in computer science, worked for many years at the Canadian equivalent of the U.S. National Security Agency, and briefly worked with the NSA itself. Now a full-time digital media person, he spends a lot of time thinking about thinking, and in this episode he and Sam Harris discuss what Parrish calls mental models: his years in intelligence, online privacy and foreign-built infrastructure, the influence of Warren Buffett and Charlie Munger, why knowing about cognitive biases rarely prevents them, the map/territory distinction, the trouble with arbitrary projections, first-principles thinking, base rates and survivorship bias, and the value of thought experiments. The conversation has a lot in common with the one Sam had with Daniel Kahneman about reasoning under uncertainty, but you'll find it very different as well.

The Making Sense Podcast is ad-free and relies entirely on listener support. Full-length episodes, AMAs, and other subscriber-only content are available at samharris.org, and anyone who truly can't afford a subscription can email info at samharris.org for free access; the same policy applies to the Waking Up app, via info at wakingup.com. Sam also mentions the live AMA town hall in Los Angeles and his upcoming event with Mingyur Rinpoche at the Wiltern in July; details are at samharris.org/events.
Transcript
00:00:10.880
Just a note to say that if you're hearing this, you are not currently on our subscriber
00:00:14.680
feed and will only be hearing the first part of this conversation.
00:00:18.440
In order to access full episodes of the Making Sense Podcast, you'll need to subscribe at samharris.org.
00:00:24.140
There you'll find our private RSS feed to add to your favorite podcatcher, along with
00:00:30.520
We don't run ads on the podcast, and therefore it's made possible entirely through the support of our subscribers.
00:00:35.880
So if you enjoy what we're doing here, please consider becoming one.
00:00:51.200
As I mentioned in the last housekeeping, there is a subscription policy change happening
00:00:57.960
on the podcast, and this will be going into effect on Wednesday, May 1st.
00:01:04.700
So in order to have access to subscriber-only content on my website, you will need an active subscription.
00:01:13.660
This means that those of you who never subscribed monthly, or whose subscriptions have lapsed,
00:01:19.380
and this includes anyone who used to support the podcast through Patreon, will need to start
00:01:29.140
Now, as always, if you cannot afford to support the podcast, you know I don't want money to
00:01:34.780
be the reason why you can't get access to my content.
00:01:38.380
So if you really can't afford a monthly subscription, you need only email us at info at samharris.org,
00:01:46.780
But you will need to either subscribe or send us that email in order to get behind the paywall
00:01:58.980
And that includes access to the live town hall, the Ask Me Anything episode of the podcast
00:02:06.320
That's in Los Angeles, and that will be videotaped and streamed live on my website at 8 o'clock
00:02:17.780
This is an experiment, and if it works, we may do all of our AMA episodes this way.
00:02:23.580
We will see what value is added with a live audience.
00:02:27.360
Anyway, those tickets sold out, I think, in 20 minutes.
00:02:33.280
I look forward to meeting you all, and we should have fun.
00:02:37.720
Again, the video will be streamed live on the website, and a final cut will be posted there.
00:02:44.300
So if you're in some time zone totally out of sync with Los Angeles, you need not worry.
00:02:53.660
As always, reviews of the podcast on iTunes are very helpful.
00:03:04.160
Those affect our visibility in the app store, and therefore help determine how many people
00:03:11.980
And again, I've got to say, releasing this app has been extremely gratifying.
00:03:16.880
Honestly, it's the one thing I've done where there is no distance between my intentions
00:03:22.060
and the apparent effect of what I have produced out in the world.
00:03:26.320
As you know, the app is continually under development and only getting better with your input.
00:03:36.260
And as you know, our policy for subscription to the app is also the same.
00:03:41.820
If you actually can't afford it, just send an email to info at wakingup.com, and we will
00:03:51.100
And if your luck hasn't changed at the end of that year, send us another email.
00:03:55.460
I believe there are some seats left for my event at the Wiltern in Los Angeles with Mingyur Rinpoche
00:04:05.480
You can find out about that on my website at samharris.org forward slash events.
00:04:11.060
That is the first event associated with the Waking Up app, and that event is being co-sponsored
00:04:29.660
His website is Farnam Street, at fs.blog, and his podcast is The Knowledge Project.
00:04:40.260
There was recently a profile in the New York Times about him that brought him into greater
00:04:45.980
He has a background in computer science, and he worked for many years in the Canadian equivalent of the NSA.
00:04:54.920
In fact, he briefly worked with the NSA as well.
00:04:58.400
But now he is a full-time digital media person, and he spent a lot of time thinking about thinking.
00:05:05.180
And we talk a lot about what he calls mental models.
00:05:08.340
This conversation has a lot in common with the conversation I had with Danny Kahneman about
00:05:15.020
But I think you'll find it very different as well.
00:05:17.720
Anyway, without further delay, I bring you Shane Parrish.
00:05:31.360
So we're doing this in your hotel lobby, hence the ambient city vibe.
00:05:36.980
This is a non-studio sound, but it's an experiment.
00:05:41.820
I think we probably share a significant audience, and many people will know who you are.
00:05:45.800
But you run the Farnam Street blog, and you have your own podcast, The Knowledge Project.
00:05:53.000
We've interviewed some of the same people, so we have many interests in common.
00:05:57.760
But there was a great New York Times profile on you, which I think brought you to the attention
00:06:02.540
So let's just jump into a kind of potted history of your background, because you came into this
00:06:13.220
You started in, was it cybersecurity specifically, is your background?
00:06:21.340
So I started work August 28, 2001 for an intelligence agency.
00:06:27.520
And then September 11th happened two weeks later.
00:06:30.400
And I worked in, I guess you could say, cybersecurity in one way or another for, I guess, 15 years.
00:06:38.100
Is that something you can talk about, or are you bound by laws of Canadian espionage that
00:06:42.480
will make that part of the conversation very short?
00:06:46.760
We can't talk about it too much in terms of specifics.
00:06:49.840
I think we can talk about general things around cybersecurity or maybe privacy issues.
00:06:57.280
There's a lot of stuff out there now with Snowden and everything.
00:06:59.940
So I think people have a fairly good insight into what goes on inside intelligence agencies.
00:07:06.300
So you were in computer science and got into cybersecurity right like two weeks before September 11th?
00:07:19.200
Well, we didn't even have a sign on the building as of August 28th.
00:07:22.720
And by Christmas that year, we actually had a sign we existed.
00:07:27.300
So just to contextualize for people, I worked for the Canadian version of the NSA.
00:07:32.860
And it just it was a really amazing time to be working there.
00:07:37.000
I mean, it was unfortunate the events that sort of led to our increased visibility and
00:07:44.080
But with that said, it was we went from, I don't know, 500 people to 2000 or so when I
00:07:53.300
You know, I ended up doing a job that I wasn't really hired to do, but I love doing.
00:07:57.660
And it was a good way to sort of give back to Canada and the country that I was born in.
00:08:02.840
My parents were in the military, so we live coast to coast.
00:08:05.620
I ended up working in the States for a little bit at NSA for a short time.
00:08:10.160
And then most of my other time has been in Ottawa.
00:08:18.080
Because this could have been an artifact of what the New York Times did to you, but
00:08:21.240
there seemed to be a real emphasis on how popular your blog and podcast are among the financial
00:08:29.160
We have three main audiences for our sort of blog and podcast, which is Wall Street,
00:08:36.540
And the way that it started was I took some time to go back to school, I think around 2008,
00:08:42.660
2009 to do an MBA and quickly realized that I wasn't going to learn what I was trying to
00:08:49.920
I wanted to learn how to make better decisions because I was doing operations and I was making
00:08:57.160
And I felt like there was an obligation on my part to get better at making decisions.
00:09:03.280
And it's not that there's no sort of like skill that is making decisions better.
00:09:08.300
It's a whole bunch of sub skills that you have to learn and apply.
00:09:12.700
So I went back to school to try to get better at some of that stuff and quickly realized that
00:09:17.080
the MBA wasn't going to teach me what I needed to know.
00:09:19.460
And so I started a website called 68131.blogger.com, I think.
00:09:25.140
And that's the zip code for Berkshire Hathaway.
00:09:27.140
And the reason that I did that was the site was an homage to Charlie Munger and Warren Buffett,
00:09:31.840
who were actually giving me things that I could think about and put into practice about
00:09:35.920
how to see the world differently, how to make better decisions.
00:09:42.100
And the reason that we used 68131 was because I didn't think anybody would type it in at the
00:09:47.100
It wasn't meant for anybody else's consumption.
00:09:49.140
It's more like a personal online notepad for my own edification and connecting ideas.
00:09:55.620
And then it just, I don't know, it took off from there.
00:09:58.020
It wasn't anything conscious, like it was not, we didn't try to reach Wall Street or
00:10:03.780
It wasn't even like, it didn't have my name on it because I was working for an intelligence
00:10:06.440
agency and they wouldn't sort of let me put my name on it.
00:10:09.660
You took time off of doing intelligence to get an MBA with the intention of going back
00:10:15.180
to intelligence, being better equipped to make decisions, or were you getting out of intelligence
00:10:19.520
I did full-time MBA studies and full-time work at the same time.
00:10:23.900
So I switched jobs to take a less demanding job in the organization while I did that.
00:10:28.720
And the intent was always to go back and sort of like see what options were available.
00:10:36.600
How do you view the current panic around online privacy and just what is happening to us based
00:10:44.960
I can imagine you have a few thoughts on what we are doing with our data, what's being done
00:10:50.000
with our data, how cavalier we are with these lives of transparency we're leading now.
00:10:55.360
I think it's something that we need to be aware of and make conscious choices around.
00:10:59.940
And I don't think there's a historical precedent where we can look back and sort of use that
00:11:05.200
as a guide because the environment's changing so quickly.
00:11:07.720
I think one of the big things that are going to dominate over the next 10 to 20 years is
00:11:13.120
online privacy and sort of the question about whether we're going to let foreign companies
00:11:22.320
And I think those questions are, they're not necessarily resolvable.
00:11:29.820
You can use DuckDuckGo or, but you also want these valuable services that are being provided.
00:11:36.000
I think we need to come to some sort of understanding about what that information
00:11:41.660
that we're giving away is in a transparent way.
00:11:46.240
I also think that there's an interesting, if you think about it, one of the questions that
00:11:53.460
I think is relevant is, do these companies get a cumulative advantage from having this
00:12:00.180
And so is Google better at search because we use it?
00:12:04.660
And the more we use it, the better they get at search, which means that it's much harder
00:12:11.060
As these algorithms get better and they're trained with more and more data, it becomes
00:12:15.740
harder and harder for the person in the garage to compete.
00:12:19.220
And then you end up having to compete with capital and not necessarily technology.
00:12:24.600
And I think that changes sort of the landscape of what we're seeing in the market today.
00:12:31.080
So I think maybe it's a case where history has always been the same, where big companies
00:12:39.540
But I think that it's a little bit different this time in the sense that these companies
00:12:47.160
They have a huge influence over regulatory frameworks.
00:12:51.520
The harder or more regulated they become, almost the more barriers to entry you'll get for
00:12:58.400
Where do you come down on the question of having a foreign company build critical infrastructure?
00:13:08.660
And I think one of the ways that you can think through that question is, if we were to go back
00:13:15.220
to World War II or something, to what extent would we want another country building our tanks?
00:13:19.900
To what extent do we want to be dependent on another?
00:13:25.680
So to what extent do we want to be dependent on another country?
00:13:30.800
Even if we have good relations right now, I think one of the questions we ask is, are
00:13:34.800
we always going to have good relations with these countries?
00:13:41.120
And we can't, again, looking backwards, it's hard to find historical precedents where we can
00:13:46.380
clearly say what could happen, but I think that the variability in outcomes is high and
00:13:54.760
we're focused maybe on short-term optimization over long-term survival.
00:14:01.600
This is one of these places where it feels like the market fails us because it's just, in the
00:14:09.420
abstract, you can understand why you would want a free market for more or less everything,
00:14:15.560
but it's just so easy to see what could go wrong here.
00:14:18.660
If you have China or some quasi-hostile foreign power, or at least a foreign power that is probably
00:14:27.320
best viewed as a competitor, and it's very easy to see how we could be really in an open
00:14:33.500
state of war at some point in the future, there's no other way to look at it.
00:14:37.460
If they were going to put something malicious into the system, they would have the power to
00:14:42.780
And it doesn't have to be war in a physical sense.
00:14:48.640
Stealing IP, which we know they do with abandon.
00:14:51.440
And so one of the ways that we think to address this, and I'm speaking of we as people, not
00:14:56.680
we as my intelligence background, is, okay, well, we'll set up a lab, and we'll review
00:15:01.460
your source code, and then we'll verify that it compiles and the checksums match, and then we'll
00:15:07.240
deploy it into our infrastructure as a means to sort of reduce the risk.
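As a rough illustration of the checksum step described here (not anything drawn from an actual agency process), verifying that the artifact a vendor ships is byte-for-byte the same as the build produced from the reviewed source can come down to comparing cryptographic hashes. The file paths below are hypothetical; this is a minimal sketch, assuming the reviewed build is reproducible.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths: the artifact compiled from the reviewed source
# versus the artifact the vendor actually delivered.
reviewed = sha256_of("build/reviewed_firmware.bin")
shipped = sha256_of("vendor/shipped_firmware.bin")

if reviewed == shipped:
    print("Checksums match: the shipped binary is the one that was reviewed.")
else:
    print("Checksum mismatch: the shipped binary differs from the reviewed build.")
```

The point Parrish goes on to make is that this kind of check, however sound, breaks down under time pressure, as with an emergency zero-day patch.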
00:15:11.900
And I think that there's problems inherent with that, one of which is that logic errors in
00:15:16.240
computer code are extremely hard to pick up on.
00:15:19.060
But the one that stands out a little bit more to me would be, what if there was a zero
00:15:25.560
day found, and a zero day, for people who don't know, is a vulnerability that's not
00:15:29.320
patched, that becomes available, that's found in the code of this infrastructure.
00:15:35.120
So the phrase zero day means you have zero days to fix this.
00:15:39.800
There's nothing you can really do other than unplug your system to prevent it.
00:15:44.260
And so they issue a patch, and does that patch go through this long process of code review?
00:15:50.960
And you can quickly see circumstances where you would be forced into deploying something,
00:15:56.120
even under this regime of labs and stuff, where you would end up with stuff that you
00:16:05.140
And that's not to say that any nation would do that.
00:16:08.980
It's, do you want to be put in a position where you have to think about that?
00:16:16.360
It sounds like you were inspired by Berkshire Hathaway, by Warren Buffett and Charlie Munger.
00:16:26.300
Or is it just, you're just a fan based on reading their stuff?
00:16:29.820
I mean, they're people who've influenced my thinking a lot.
00:16:34.180
The website, Farnam Street, is named after the street in Omaha where they have their headquarters
00:16:40.940
And I think it's just interesting to me when I was doing my MBA and I was sort of thinking
00:16:44.640
about this, it's, you sort of learn the, you had Daniel Kahneman on recently.
00:16:52.300
So you learn these cognitive biases that are great at explaining why we make mistakes.
00:16:58.100
And you have sort of Michael Porter and his five forces theory of business competition.
00:17:04.120
And I found it really interesting that these two guys in Omaha, Nebraska, or I guess one's
00:17:09.760
in Pasadena, Charlie Munger is in Pasadena, California, but these two guys took that work
00:17:14.760
and they made it practical and useful and used it to make better decisions in the real
00:17:20.380
world over a wide variety of companies and businesses.
00:17:26.080
And that's how I really got interested in them and their thinking.
00:17:29.500
Well, it was interesting that that conversation with Danny at one point, so for those who aren't
00:17:35.320
aware, Daniel Kahneman is one of the fathers of what has become behavioral economics, but
00:17:41.080
decision theory, prospect theory was part of that.
00:17:45.920
The work he did with Amos Tversky, for which Danny won the Nobel Prize in economics, revealed
00:17:53.260
how bad we are at reasoning through various decisions.
00:17:56.920
We have heuristics where we make certain decisions under uncertainty, and many of these heuristics
00:18:07.920
And one thing that surprised me in my conversation with Danny is, I mean, he's the godfather of
00:18:16.820
And yet, when asked how much he's internalized this, how much better he is at not falling prey
00:18:23.340
to bad intuitions or making bad decisions or decisions that will, in hindsight, prove to
00:18:30.640
He claimed more or less to be as bad at this as anyone else, or that all of his knowledge
00:18:36.540
hasn't really paid dividends in his practical reasoning.
00:18:41.400
But I get the sense that you're not quite in that same boat.
00:18:45.060
How do you view yourself as a decision-maker based on everything you've thought about and
00:18:49.980
I think it's really interesting that he said that, and I was going to bring that up, that
00:18:53.800
he basically said, I've studied this my whole life, and I feel like I'm no better at avoiding
00:18:59.820
And I think what that means is cognitive biases are really great retrospectively at explaining
00:19:06.680
And they're not so great before in terms of avoiding maybe the pitfalls of those things.
00:19:13.400
And the way that I typically see people try to address it is they create a checklist of, oh, I'm going to write down overconfidence.
00:19:24.120
I'm going to write down, you know, sample size bias.
00:19:26.420
And then the problem with that is the more intelligent you are, the better the story you're going to
00:19:31.200
tell yourself about why that doesn't apply in this particular situation.
00:19:35.040
It's almost like you've made your decision and then you're rationalizing it, but you're
00:19:38.620
going through this checklist, so you're going to create overconfidence in terms of your decision
00:19:44.840
This is a point that Jonathan Haidt and Michael Shermer and other connoisseurs of faulty reasoning have made.
00:19:52.700
Haidt puts it this way, that we reason rather often more like lawyers than like people who
00:19:58.980
are actually trying to get at the truth, where we're doing some internal PR, trying to convince
00:20:04.660
ourselves and others why our gut intuitions actually make sense.
00:20:09.340
And that's, and the problem is the smarter you are, the better you are at doing that.
00:20:14.200
And on some level, the better you are at fooling yourself.
00:20:19.620
We're trying to protect our ego and it's not a conscious thing.
00:20:23.040
We're not sort of like meta thinking about protecting our ego.
00:20:26.100
We're just unconsciously trying to protect our view of the world and our interpretation of
00:20:33.080
And we're willing to take a less optimal outcome in part because we can excuse it away after.
00:20:41.320
And, you know, it becomes really interesting when you start thinking about what are the
00:20:45.260
things that I can do in foresight to make better decisions?
00:20:48.760
One of which we sort of alluded to this earlier, like there is no meta decision-making skill
00:20:57.580
There's a subset of skills that apply in a particular situation and tools.
00:21:01.960
And those are the things that we want to learn, right?
00:21:07.300
I think it was Herbert Simon who said there's no meta skill of sort of like problem solver.
00:21:12.160
And what there is, is there's people who bring particular skills that are relevant and then
00:21:17.020
they deploy that schema to a particular problem and they can see things and chunk things in
00:21:21.520
a way that other people can't see or chunk and make better decisions based on that.
00:21:26.800
And that's only relevant if the environment hasn't changed from where they've honed their
00:21:30.620
expertise or they've acquired that sort of like mental models, if you will, of like how
00:21:36.120
the world works and the variables that interact.
00:21:38.440
And I think one of the interesting things that my sort of study of Buffett and Munger has picked
00:21:44.200
up on is they've deployed this and they've made a lot of sort of money in the process.
00:21:48.820
But one of the things that they've done is they've stayed away from a lot of companies
00:21:55.360
And I think one of the reasons they do that is that gives them a better lens.
00:21:59.020
So my knowledge becomes cumulative instead of like having to reacquire it all the time.
00:22:03.220
If I'm trying to understand the technology behind Google, well, that's changing every day.
00:22:07.240
But if I'm trying to understand the technology behind a dry cleaner, the dry cleaner or Burlington
00:22:15.120
So my knowledge as I'm learning becomes additive and cumulative.
00:22:19.320
And so I think in those cases, your schema, your mental schema is more likely to be correct.
00:22:25.420
So what do you do differently in your personal life or in your professional life as a result
00:22:31.720
of all the, all the study you've done about decision-making?
00:22:35.260
Well, one thing that I do that I don't think a lot of people do is I rarely make a decision
00:22:43.200
I rarely feel the need to sort of like sit down and decide something to demonstrate to
00:22:47.540
other people that I'm in control or that I'm a decision maker.
00:22:52.060
I'll often take 20 minutes or 30 minutes and go for a walk and actually just try to think
00:22:59.280
And the way that I conceptualize this in my mind is like you have a problem or situation
00:23:03.840
and you just want to walk around it from a three-dimensional point of view.
00:23:08.880
What does it look like through different lenses of the world?
00:23:15.280
Can you think of an example of a decision where you would, this is one thing Danny Kahneman
00:23:20.560
said, is that if he's better at anything now, it's that he's more alert to the situations
00:23:29.720
He's more likely to make an error and perhaps can take a little more time.
00:23:36.680
Well, we were talking about sort of allowing foreign companies into your infrastructure.
00:23:41.760
That would be an example of where you can think through the problem from different lenses,
00:23:46.180
The immediate sort of response is, oh, it's cheaper.
00:23:52.240
And then you start, the longer you rag on that problem, the longer you work through it,
00:23:58.100
the more implications you can see as to the outside.
00:24:01.020
But you can also think about it in terms of, one of the ways that I think about this is
00:24:08.240
A lot of life is sort of optimized for financial maximization.
00:24:15.760
I think that it's actually good to have a lot of margin of safety in terms of your financial
00:24:22.600
Interest rates aren't always going to be there.
00:24:26.320
But historically, if you want to look out into the future, we could have a situation where
00:24:35.200
So when I'm making decisions on finances, it's not necessarily just optimizing the short
00:24:40.560
It's optimizing over a wide variety of outcomes.
00:24:43.680
And I think when you start to take time to think about decisions, you don't necessarily
00:24:47.920
need to have more cognitive horsepower than other people to make better decisions.
00:24:52.640
You just have to think through a wider variety of situations and circumstances.
00:24:56.280
It's almost like you're doing a Monte Carlo simulation in your head, where you're just
00:25:00.040
thinking about what are the extent of the possible outcomes, where am I likely to end up on a
00:25:06.780
probabilistic basis, and are there outcomes that are unacceptable to me, in which case
00:25:11.420
I want to avoid those outcomes and invert the problem.
00:25:15.160
And then if you can avoid all the bad outcomes, you're likely to end up with good problems or
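A minimal sketch of the "Monte Carlo simulation in your head" idea, in Python. The outcome distribution and the "unacceptable" threshold below are made-up numbers purely for illustration; the point is to look at the downside tail of a decision, not just its average.

```python
import random

def simulate_outcome() -> float:
    """One hypothetical draw of a decision's financial outcome (illustrative numbers)."""
    if random.random() < 0.05:            # rare bad scenario
        return random.uniform(-100.0, -40.0)
    return random.gauss(10.0, 5.0)        # typical scenario: modest gain

N = 100_000
outcomes = [simulate_outcome() for _ in range(N)]

average = sum(outcomes) / N
ruin_threshold = -50.0                    # hypothetical "unacceptable" outcome
p_ruin = sum(o < ruin_threshold for o in outcomes) / N

print(f"Average outcome: {average:.1f}")
print(f"Probability of an unacceptable outcome: {p_ruin:.1%}")
# Even with a positive average, a non-trivial chance of ruin may be
# reason enough to invert the problem and avoid the decision entirely.
```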
00:25:20.280
So maybe we should just run through some of your mental models, because your blog, for
00:25:24.460
those who haven't seen it, is just an absolute arsenal of short essays on what you and others
00:25:36.580
And these are both explicitly relevant to decision making of the sorts that Danny Kahneman has
00:25:43.200
spoken about, but also just ideas and memes that you think everyone should have in their
00:25:48.060
cognitive toolkit, whether they relate to biology or finance or probability or just many topics.
00:25:59.860
The best example of that is online dating, right?
00:26:03.040
So you get a profile of a person that is the map, and then you meet the person, and they're
00:26:14.360
We use maps in businesses like strategic plans.
00:26:17.160
We use balance sheets and income statements as maps of what's happening in the business.
00:26:22.640
They're an abstraction of it, but they don't represent every nuance and detail in the business.
00:26:28.320
And we need maps to operate, because our brains can't handle that amount of detail.
00:26:31.960
We have to have a map, and we can't have a map with perfect fidelity of the thing that
00:26:38.720
But territories change, and if the map becomes the goal in and of itself, you lose track of
00:26:46.160
So when I say online dating is the best way to conceptually, it's the quickest way to conceptually
00:26:51.780
Where you have a profile, a person is presenting a view of themselves.
00:26:58.660
And then you go meet them, and you talk with them, and they're nothing like their profile.
00:27:01.960
Or their interests don't line up with their profile.
00:27:04.380
So you base your decision to meet them on a map, and then when you sort of met them,
00:27:08.180
you're dealing with the territory, and it's a different proposition.
00:27:11.200
And I think that we just need to be aware of when we're dealing with a map, and if you're
00:27:16.100
running a business or a team, you want to be touching the territory, right?
00:27:26.500
And Enron would be another example of sort of like a map-territory problem before they
00:27:31.600
Everybody was reading the map, and the map was saying...
00:27:39.260
So the maps can deceive you, and they can lie to you.
00:27:41.780
And your job, to the extent that you're an investor, is to sort of like understand the
00:27:46.960
territory and understand what's going on at a different level.
00:27:51.580
I ran into this recently when somebody was urging me to make a few business projections.
00:27:59.580
This is now a map of the future where, you know, like growth targets with respect to a
00:28:05.440
business, and maybe there's some context where this makes sense for people to do, but it just
00:28:14.580
And I just was thinking of what are the consequences of making this up?
00:28:18.840
So you posit whatever it is, you know, 20% growth over some period of time, and that is
00:28:24.840
being put forward as some criterion of success, and yet you don't know, you don't in fact know
00:28:31.140
So it made no sense to me to be anchored to that number.
00:28:35.400
It made no sense to imagine that we should be happy with that number or depressed not
00:28:42.080
to have reached it, because it's just plucked out of thin air.
00:28:45.240
If you could have 10x something, why would you be happy with 5x?
00:28:49.920
And if 5xing something is in fact impossible, why would you be disappointed with 4x, right?
00:28:56.560
So it's like all of this is made up, you're basically creating a psychological experiment for yourself
00:29:00.560
where you're either going to feel good or bad based on this confabulation that you did
00:29:07.540
Maybe there's more to it than I understand, but it just seemed like a crazy use of intelligence.
00:29:12.180
On a one-off basis, projections are sort of, as you said, they're dangerous, right?
00:29:17.180
So you can also start working towards the projection and not do the obvious best thing to do because
00:29:24.460
And then on a recurring basis where you work for an organization or a body or entity that
00:29:30.940
sort of like is consistently making projections, very few of those organizations go
00:29:36.440
back and calibrate the individuals making those projections.
00:29:40.180
I mean, we used to have people who would make projections in a very sort of rote fashion.
00:29:46.200
They knew which projections would get accepted and they also knew that there was no consequences
00:29:50.300
to sort of like pulling those projections out of their ass.
00:29:53.640
And so if there are no consequences and you're not sort of held to account for your projections,
00:29:57.980
you also have no way to calibrate the person making the projections.
00:30:02.020
Is this person more accurate than another person at these projections?
00:30:05.980
And then an interesting question would be, what makes them more accurate than other people?
00:30:10.220
And can we use that information to make better decisions?
00:30:12.920
And it's also, you're aiming at an arbitrary target, right?
00:30:17.380
So if the projection is 20% growth and that's what's going to satisfy you because you put
00:30:22.880
that target on the wall, my question is, why not just do the best things you should be
00:30:29.700
doing for, in this case, we're talking about a business, do those best things and see what
00:30:36.100
So like, why aim at an arbitrary target that doesn't take into account the higher level
00:30:42.400
thinking of just what are the best things you should be doing for this business?
00:30:45.600
We don't make projections on our happiness, right?
00:30:48.460
It's not going to be like, I'm going to be 15% more happy next year.
00:30:52.000
We do it with finances and numbers because it tends to be a little easier, but I think it
00:31:02.500
Yeah, I mean, Elon Musk is sort of like the, the recent example of that, but it's breaking
00:31:10.300
And one of the things that the intelligence agency that we had to do a lot of was solve
00:31:14.280
problems that are sort of like ungoogleable where people haven't really solved them before
00:31:20.720
And you get constrained into thinking about things through your particular lens.
00:31:25.440
So your discipline, if you went through computer science or engineering or arts or HR, and we
00:31:31.160
were so fortunate to have a wide variety of people there, but one of the things that sort
00:31:35.600
of got us out of what we had been doing. The other constraint is what you've done before,
00:31:40.860
So you're, you're beholden to improve upon what already exists versus, I wouldn't say
00:31:52.320
So you bring all this baggage with you, but if you actually stop and pause on the problem
00:31:56.880
for a second and think about, well, what are the actual physical constraints of the world?
00:32:01.620
What are the building blocks that I'm dealing with?
00:32:03.480
What are the limitations, like the actual limitations, not what exists today?
00:32:08.120
And then you can sort of rethink the problem in terms of how you want to solve it.
00:32:17.760
So the organization can't do it, but it sort of like gets you into this, out of this incremental
00:32:23.580
improvement state and more seeing the problem more fundamentally.
00:32:27.040
And I think that's where we see a lot of disruption in the world is, you know, I think it was Peter
00:32:34.120
And if you think of innovation as possibly having two types of different innovation, one being
00:32:39.260
incremental improvement and one being sort of like a fundamental change.
00:32:42.940
I think the fundamental change is coming when we tend to think through problems from a
00:32:47.680
first principle basis and take a different approach to them within the boundaries of what
00:32:53.900
Whereas the incremental improvement is we look at something and we just move the widget
00:32:56.980
faster, and they're both valuable in an organization.
00:33:01.160
I think it's just a lot easier to do the incremental improvement.
00:33:04.720
And so if you think of optics and promotions and how sort of the internal dynamics of an organization
00:33:09.780
work, it becomes a lot less risky to do the incremental improvement than think about things
00:33:15.540
through a first principles basis and what's possible.
00:33:18.500
Yeah, I guess that's somewhat in tension with another mental model you have here, or at least
00:33:28.220
It's often the, well, first let's explore what that, what that means.
00:33:33.520
What do you, what do you mean by doing no harm?
00:33:38.600
Yeah, so we're sort of like prone to demonstrate value in an organization, right?
00:33:46.140
We're, we're prone to having this bias towards action, this bias towards doing something and
00:33:53.760
And often when we do that, we have a knee jerk reaction.
00:34:02.080
We don't necessarily solve the fundamental problem.
00:34:05.280
And a great example of this is sort of, if you think about software and you have a problem
00:34:10.700
with a software, hypothetically, you're using an HR software at work, you have a problem
00:34:16.840
And that problem is, you know, people can't take vacation leave through that software.
00:34:22.120
They have to manage and track it through an Excel spreadsheet.
00:34:24.740
And so you're put in charge of solving this problem.
00:34:28.680
And so you go out in the world and you look for software that can solve this particular
00:34:35.020
problem where you can track vacation and you implement this new software, but you don't
00:34:40.020
realize that the software has created other problems.
00:34:43.760
You don't realize that like you've just changed one problem for another.
00:34:47.060
And the problems that you're getting now could be a lot worse than the ones that you're dealing
00:34:51.460
The tension I saw there is that the Via Negativa model would counsel a kind of
00:34:58.180
conservatism, right, or an incrementalism, where it's like rather than tear up the whole
00:35:03.980
approach by the roots and reinvent it, you do just want to shave off inefficiencies or find
00:35:11.520
other ways of optimizing what has worked in the past rather than completely rethink it.
00:35:18.580
You mentioned Elon, you know, yesterday he successfully launched his Falcon Heavy rocket and landed the boosters.
00:35:27.940
So this fundamental change of, you know, thinking of rocket launches as something that should
00:35:33.360
be totally reusable and you've got to figure out how to land these things, land the first
00:35:38.620
It's, you know, on its face, sounds like a crazy idea, but once you set that goal based
00:35:43.020
on rethinking the first principles of the whole enterprise, now we've discovered there's a
00:35:47.620
solution, but that requires such a vast use of resources to rethink something so fundamental
00:35:58.860
I mean, obviously this is a, the goal here is to cut the costs and to make it a bigger industry,
00:36:03.900
but it's easy to see that you could have gone down that path and for a very long time for
00:36:08.800
Elon, it looked like he was going down this path to a waiting cliff, right?
00:36:19.220
Like watching rockets launch and sort of like re-land and then re-deploy is...
00:36:26.320
There are a few things which every time you see them, you don't really habituate to how
00:36:36.000
I mean, and this is footage that I'm sure at some point we will become jaded enough
00:36:40.460
to say, well, that's, of course, that's the way that's supposed to work, but watching those
00:36:44.980
boosters land perfectly in unison, it just looks like a science fiction movie from the
00:36:51.820
eighties that, you know, was just preposterous.
00:36:54.180
And then when you, when you think you, you sort of alluded to why that happened, right?
00:36:57.640
When he's being interviewed, I remember him talking about it in the sense of, I just thought
00:37:02.480
about what was possible and I thought it was possible, it was physically possible to reuse
00:37:07.880
And so he thought about the problem in a different way and he has a very great ability to attract
00:37:14.920
not only capital, but people to working on those problems and the result can be amazing.
00:37:20.980
But it's also important to note that not all of those results are amazing.
00:37:25.580
I mean, we see this sort of like SpaceX's of the world and we probably don't see the hundreds
00:37:30.940
or thousands of companies that rethink the problem as well and fail.
00:37:35.200
But I mean, that's how we make incremental progress as a society.
00:37:39.420
But that is, I guess that's probably another mental model you have written about.
00:37:44.100
There's a survivorship bias that we're constantly being advertised the evidence of only those success
00:37:52.020
stories and we're not given any true indication of the ocean of failures that is behind many of
00:38:01.760
I mean, I guess this also connects to another model, which is just understanding base rates.
00:38:06.100
I mean, just how many new businesses succeed, for instance, or how it's like this is not
00:38:10.020
something that you necessarily understand when you calculate the probability that any new
00:38:21.160
So we think, you know, the restaurant we're opening or the podcast we're launching or the
00:38:26.840
app we're doing or sort of the new business that we're sort of endeavoring to undertake
00:38:32.160
is going to be successful because we're involved in it.
00:38:35.380
But everybody has that view and the success rates are, you know, abysmal, especially after
00:38:42.420
If you ask people whether their marriage is going to be successful, if they're sort of like
00:38:46.420
on day one and embarking on that, they're of course going to say, like, we're not going
00:38:49.860
to fall victim to this "50% of marriages dissolve" sort of base rate.
00:38:55.740
You need to factor in that outside view in terms of making decisions.
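One simple way to make that "outside view" concrete is to shrink an optimistic inside-view estimate toward the base rate. The 20% base rate and the weights below are illustrative assumptions, not figures from the conversation; this is only a sketch of the adjustment, not a prescribed formula.

```python
def blended_estimate(inside_view: float, base_rate: float, weight_on_inside: float) -> float:
    """Weighted average of a personal (inside-view) estimate and the population base rate."""
    return weight_on_inside * inside_view + (1 - weight_on_inside) * base_rate

# Hypothetical numbers: a founder feels 90% sure their new restaurant will survive,
# while the assumed base rate of survival for new restaurants is ~20%.
inside_view = 0.90
base_rate = 0.20

for w in (0.1, 0.3, 0.5):
    estimate = blended_estimate(inside_view, base_rate, w)
    print(f"weight on inside view = {w:.1f} -> blended estimate = {estimate:.2f}")
```

The less evidence you have that your situation really differs from the reference class, the less weight the inside view deserves.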
00:39:01.260
Maybe it's best not to do it in matters of love, right?
00:39:05.200
And maybe it's best to make a more emotional decision there, I think.
00:39:08.500
Well, in that having a positive bias or an optimism bias could actually be a self-fulfilling prophecy.
00:39:21.600
I mean, it's just that the positive attitude has to count for something in various contexts.
00:39:26.540
I think this desire to be purely rational all of the time in every decision that we make
00:39:32.500
might actually be a disservice because it would sort of take people like Elon and why
00:39:42.340
And it would sort of dissuade us from doing that.
00:39:44.400
We need some sort of emotional component to our decision making.
00:39:47.680
It's just a matter of determining when it's serving us and when it's hurting us.
00:39:51.940
And I think that that would be the more accurate view of how you think about that.
00:39:55.760
So thought experiments, how do you think about thought experiments?
00:40:01.660
The phrase now for me is fairly charged because I am the victim of having used thought experiments
00:40:09.880
on controversial topics that did not get received like they were thought experiments.
00:40:17.680
This is something that I got being a student of philosophy, where just to look for any kind
00:40:25.160
of ground truth, especially morally, you want to think of the corner cases.
00:40:30.320
You want to think of conditions where you've simplified a real world scenario so that you
00:40:37.060
can discover whether or not you actually have an argument against or for the thing you think
00:40:42.440
So probably the clearest case for me is thinking about the ethics of torture.
00:40:47.860
It's a fascinating and consequential argument to be had about whether torture is ever ethical.
00:40:56.160
And it's by no means straightforward when you line it up against the other things we accept
00:41:02.420
without blinking our eyes, which on paper seem worse than torture as you line them up.
00:41:12.740
But in order to have that conversation, you talk about ticking bomb scenarios, right, which
00:41:22.620
And in the purest cases, they don't happen at all.
00:41:25.920
But the issue is if you actually want to get down to bedrock, if you want to understand whether
00:41:30.840
you can make an ethical argument against the use of torture in all cases, you need the clearest
00:41:37.080
You need to say, OK, let's take out all the variables.
00:41:39.220
Let's take out the uncertainty, for instance, of a person's guilt, right?
00:41:42.860
So we know the person we have is guilty, right?
00:41:45.640
We know that, you know, we caught him with his heart.
00:41:52.020
And we can see, you know, the kinds of nefarious things he's been planning.
00:41:55.320
And, you know, we see the plans for the nuclear device that he claims is hidden in the middle of a city.
00:42:03.360
So you need the purified case, not because that's the likely case.
00:42:08.920
But let's just figure out if we actually have an argument against the use of torture in all
00:42:14.520
cases, because that would be immensely clarifying.
00:42:17.020
Because if we solve that, then we know, OK, we're never tempted to make an exception to this
00:42:22.760
rule, right, because we've thought it through in the clearest case, where we know the person
00:42:27.600
We know they've got a nuclear bomb in the middle of a city.
00:42:32.140
There's no other methods we can use to get the intelligence.
00:42:35.280
You distill it down to the case where even good people would be the most tempted to resort
00:42:43.220
to torture, then see if you have an argument against it.
00:42:46.240
But what happens when you have conversations like that is that then people, rather than
00:42:52.740
receive them in the spirit of ethical inquiry for the purpose of charting a course in the
00:43:00.100
future politically, they put a journalistic or political lens on it from the start, right?
00:43:05.740
And so, I mean, even a clearer case, and this is a case I haven't actually used, but this
00:43:09.680
is the kind of thing that one would routinely do in a philosophy seminar.
00:43:13.660
You say, OK, well, why can't we eat babies, right?
00:43:16.120
So like babies, there are unwanted children in the world.
00:43:22.240
Now, it's not that the person who's raising that example has an interest in eating babies.
00:43:29.300
It's just, this is like a laser focus on moral bedrock, right?
00:43:37.180
And it's instructive that some, you know, some people will find it difficult to even argue
00:43:44.400
I mean, some people will feel like they need to resort to a holy book revealed by an invisible
00:43:48.700
god in order to get you some bedrock where you can stand so as to not eat babies.
00:43:53.040
And so it is an engine of interesting and morally rich conversation.
00:43:59.700
Now, obviously, not all thought experiments deal with ethically fraught territory.
00:44:03.300
But I do find that the concept of a thought experiment has been stigmatized because it is synonymous
00:44:11.320
with or thought to be synonymous with not making contact with the real world.
00:44:17.100
You're basically creating the straw man case that you're then going to use to guide you
00:44:25.620
A couple of comments, just as you were talking there, one of the things that I found myself
00:44:33.400
thinking as you were talking is, how do we find out about what we think on an issue?
00:44:38.780
How do we find out where we land on a particular issue?
00:44:43.080
And so we're expected to have these fully formed opinions.
00:44:48.040
We're expected to have these fully thought out.
00:44:50.560
And we have really, I would argue, it's sort of increasingly difficult to have conversations
00:45:02.640
Like, can you imagine sort of like the outrage that would ensue about, you know, having this
00:45:08.300
debate on Twitter or just trying to figure out where you land?
00:45:12.360
So you put this out there and then the feedback would be like the media would be all over you.
00:45:22.580
This is why I'm tempted to delete my Twitter account on a monthly basis.
00:45:25.380
Aren't we better off having this safe space, like almost like a sandbox where we can play
00:45:31.340
with ideas, where we can explore things, where they don't have to infect us.
00:45:41.640
I have taken great pains to insulate it against the normal commercial pressures.
00:45:47.560
As you know, maybe we'll talk about that at some point.
00:45:49.340
But another example occurs to me that a guest brought up who I believe you've also had on
00:45:54.520
your podcast, Will MacAskill, the ethicist, who's just fantastic.
00:45:59.340
And he was talking about the ethics of, you know, running into a burning building to save a child.
00:46:07.780
But if you run into that burning building and on your way to the child's bedroom, you
00:46:13.280
discover that there's a Picasso on the wall and you could also save that and, you know,
00:46:20.140
And we use the, you know, the $75 million or whatever you get from that sale to save many more lives.
00:46:28.260
And if there were really a zero-sum contest between the money or the child, at minimum,
00:46:34.040
that's an interesting ethical, apparent ethical dilemma to sort through, right?
00:46:38.560
Now, it seems we have a very strong intuition that you would be a psychopath to grab a painting
00:46:46.700
But, of course, the choice is never really presented to us in that form.
00:46:54.240
When you look at just the decision for a news organization to spend 24 hours covering a
00:47:02.020
story about a single suffering person as opposed to a genocide that is raging in some distant
00:47:08.500
country, it's just the way we marshal our resources, you know, the single compelling case
00:47:13.900
that causes the massive judgment as opposed to the statistics of vast human suffering that
00:47:24.440
This is how we can discover and correct for moral bugs that are actually of great consequence.
00:47:29.780
We need a mechanism to sort of have these conversations, and I think it's going away.
00:47:36.320
And as of right now, I mean, the only safe, guaranteed safe space you ever have is just inside your own head.
00:47:42.840
But in the future, we might even see that go away as technology increasingly sort of like
00:47:50.300
And then what happens is like the minority report might become real, right?
00:47:54.000
Where you think of somebody cuts you off and you're like, I want to kill them.
00:47:58.360
And all of a sudden you're arrested because you had this thought.
00:48:01.740
And I think like we're at the very, we're in a very interesting time for thinking where
00:48:08.980
Like you, you can't go out being Sam Harris and say something.
00:48:15.360
But I mean, a lot of people with such a public profile can't come out with a controversial
00:48:19.900
idea because the backlash on them is going to be so huge.
00:48:24.040
And I think as a society, we need a way to sort of maybe press it.
00:48:28.560
If you'd like to continue listening to this conversation, you'll need to subscribe at samharris.org.
00:48:36.600
Once you do, you'll get access to all full length episodes of the Making Sense podcast,
00:48:40.860
along with other subscriber only content, including bonus episodes and AMAs and the
00:48:46.120
conversations I've been having on the Waking Up app.
00:48:48.040
The Making Sense podcast is ad free and relies entirely on listener support.