#649: Thinking for Yourself in an Age of Outsourced Expertise
Summary
In an age where endless streams of data and options are available, it can feel like every choice from what TV show to watch to how to invest your money ought to be optimized. And yet making any choice, much less an ideal one, can seem completely overwhelming. So how do we figure out what to do? Well, much of the time we don't. Instead, we outsource our thinking to technology, experts and set protocols. This, my guest today says, is where some real problems start.
Transcript
00:00:00.000
Brett McKay here and welcome to another edition of the Art of Manliness podcast. In an age
00:00:11.340
where endless streams of data, options and information are available, it can feel like
00:00:14.780
every choice from what TV show to watch to how to invest your money ought to be optimized.
00:00:18.980
And yet making any choice, much less an ideal one, can seem completely overwhelming. So
00:00:23.140
how do we figure out what to do? Well, much of the time we don't. Instead, we outsource
00:00:26.700
our thinking to technology, experts and set protocols. This, my guest today says, is
00:00:30.760
where some real problems start. His name is Dr. Vikram Mansharamani, and he's a Harvard lecturer
00:00:35.220
who studies future trends and risk, as well as the author of Think for Yourself, Restoring
00:00:39.160
Common Sense in an Age of Experts and Artificial Intelligence. Today on the show, Vikram explains
00:00:43.400
how our increasingly complex lives have led us to increasingly rely on algorithms, specialists
00:00:47.680
and checklists to make decisions, even though experts are best suited to disentangling complications
00:00:52.420
rather than complexities. We talk about the difference between the two. We then discuss
00:00:55.840
the issues that can thereby arise in relying on expert advice, including the siloing of
00:00:59.640
information and the application of misdirected focus. Once we diagnose the problem, we then
00:01:04.280
turn to the solution and how we can harness the good that technology and experts can provide
00:01:08.160
without undermining our ability to still think for ourselves by doing things like asking experts
00:01:12.500
about their incentives, knowing our own goals, triangulating opinions and crossing silos.
00:01:17.260
And we end our conversation with how the serendipitous discovery of perspectives that can come
00:01:20.740
from flipping through a paper magazine and browsing a bookstore can be part of restoring self-reliant
00:01:24.980
thinking in the 21st century. After the show's over, check out our show notes at aom.is
00:01:43.360
Thanks for having me, Brett. Thrilled to be with you.
00:01:45.580
Well, you just came out with a new book called Think for Yourself, Restoring Common Sense in
00:01:49.620
an Age of Experts and Artificial Intelligence. And in this book, you're making the case that in the past
00:01:54.040
few decades, lay people, like just regular folks, have been increasingly outsourcing their thinking
00:01:59.940
to experts and even technology. And I'm sure we'll dig deep into some examples, but just off the top,
00:02:06.660
sort of give us a big picture view. What are some examples that you've seen where you see people
00:02:11.120
just outsourcing their thinking to experts and technology?
00:02:14.800
Sure. So, well, Brett, I think it starts off with the idea that we're drowning in information
00:02:19.520
and data. And then the result has been more and more decisions are being put in front of us to be
00:02:26.500
made. And so we have too many choices. And we think because there are more choices, that there is a
00:02:32.620
perfect answer, that there's an optimal decision to be made. We also know we can't make that by
00:02:38.460
ourselves, that we don't have enough information to do so. Or frankly, we have too much information
00:02:42.540
and we need some help. So we turn to expertise. And expertise can be embodied in the form of
00:02:48.180
technology, human beings, i.e. experts, or even checklists and rules. And so just think about the
00:02:54.660
GPS device that many of us use when we navigate our way around town. A lot of us have stopped
00:02:59.560
thinking about maps. We don't actually know sometimes where we are, but we'll listen to this
00:03:04.740
little voice that tells us make a left up here in 300 yards, stay in the right lane, merge left,
00:03:10.480
etc. So that's a great example where we've stopped thinking about the dynamics of geography
00:03:16.100
and our path within it. And we've ended up just outsourcing our thinking to the GPS device.
00:03:21.820
Well, another example of outsourcing our thinking to technology is algorithms, right? You go to Amazon
00:03:26.180
and instead of thinking, oh, do I really like this book? You just rely on the algorithms
00:03:30.580
that Amazon says, yeah, you're going to like this book. So shut up and buy the book.
00:03:35.520
That's right. Think about it. You used to go to a bookstore, at least I used to. I don't know how old
00:03:39.460
the listeners are, but I used to go to bookstores where I would browse the shelves. I went looking
00:03:44.480
for a topic, possibly a specific book, and I would find adjacent titles, other things. And it was this
00:03:50.880
somewhat unorganized search process that often proved fortuitous.
00:03:57.180
Nowadays, you end up in these little echo chambers. You expressed interest in a book on
00:04:02.240
baseball strategy. Amazon's going to recommend a lot of baseball strategy books to you over time.
00:04:07.680
And they're going to get more and more specific because that's more and more likely based on
00:04:11.860
your revealed preference of having purchased that book or that topic. And so, yeah, I think that
00:04:17.280
those algorithms are conscientiously managing where we pay attention.
00:04:22.140
And what sorts of problems can pop up when we over-rely on experts and technology?
00:04:26.840
Well, think about it this way. Experts in technologies live in silos. And we live in the real world where
00:04:33.600
there's a context outside of those silos. But when you rely only on the siloed information,
00:04:39.620
you're not seeing the big picture. You're not seeing the context. And so, what you get is
00:04:45.180
information that's optimized in a particular domain, but may not be optimized for you or your
00:04:51.240
overall context. So, I think the biggest problem is the silo effect, if you will.
00:04:56.580
And when did you start noticing this outsourcing of thinking was starting to cause problems?
00:05:00.960
Well, it really has to do with my first book. My first book was about financial bubbles.
00:05:07.800
What I realized was economists and those that were very narrow and focused, i.e. those who were deep
00:05:13.620
and specialized, sometimes missed what was deemed very obvious to the layperson. And what I realized
00:05:20.280
was actually a multi-lens, multidisciplinary view could help you identify dynamics that a single
00:05:27.340
perspective might miss. And so, what I realized was actually every perspective was limited,
00:05:33.400
biased, and incomplete. Combine that with the fact that we often outsource to people who have really
00:05:39.140
deep focus and expertise. And what you realize is you're outsourcing your thinking to incomplete
00:05:44.380
perspectives. So, why do that? Or if you're going to do that, maybe consult multiple perspectives.
00:05:51.000
So, you know, we can triangulate by comparing the insights of one expert with those of another
00:05:56.200
expert, and another, and another, to really get some sense of what
00:06:01.240
the problem really is about. So, you mentioned earlier, one of the reasons why we started to rely
00:06:06.000
more on experts is that we're just flooded, inundated with information. There's so many choices.
00:06:12.040
But not only that, I think everyone's experienced information overload, which is why they go to Google
00:06:16.260
and they look for, you know, they Google like best whatever for my kid. We'll talk about optimization
00:06:21.180
here in a bit. But besides being flooded with information, things have gotten more complex. Like
00:06:27.340
the information we have to work with is much more complex. What does that look like? I mean, let's flesh
00:06:32.040
that out for us a little bit. Like, what does the increasing complexity look like? And how is it pushing
00:06:36.620
us towards the, you know, towards experts and technology?
00:06:40.520
Yeah. So, I think there's a little nuance here, Brett, that I'd love to make sure I clarify for the
00:06:45.320
listeners. And that has to do with the terminology. And so, let me first describe what I think are a
00:06:51.400
couple of different types of problems and environments that we may be facing. The first
00:06:56.260
is a simple environment or a simple problem. And that's one where there is a clear cause and effect.
00:07:02.420
This is a problem that could be solved with automation very quickly, software, a spreadsheet.
00:07:08.320
Think about how to calculate the interest on a credit card balance. There's a spreadsheet that says,
00:07:12.820
here's your average balance, here's the interest rate, there's your interest payment.
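As a concrete illustration of the "simple problem" described here, the spreadsheet calculation might look like this in code. The balance and rate figures below are illustrative assumptions, not numbers from the episode:

```python
# A minimal sketch of the credit-card interest calculation: a clear,
# one-step cause and effect, exactly the kind of problem a spreadsheet
# (or a few lines of code) solves instantly.

def monthly_interest(average_balance: float, annual_rate: float) -> float:
    """Interest owed for one billing cycle: average balance times the monthly rate."""
    return average_balance * (annual_rate / 12)

# Hypothetical figures for illustration: $1,200 average balance at 18% APR.
owed = monthly_interest(average_balance=1200.00, annual_rate=0.18)
print(f"Interest this month: ${owed:.2f}")  # prints "Interest this month: $18.00"
```

Note this ignores real-world wrinkles like daily compounding or grace periods; the point is only that the cause-and-effect chain is short and fully mechanical.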
00:07:16.640
Alternatively, you can get to something that I would call complicated. Complicated environments
00:07:21.620
or problems are ones where there is, in fact, a clear cause and effect, but it takes an expert to
00:07:28.900
help you identify it because it's layered in multiple different causes and effects.
00:07:34.940
So, think about the fact that your car didn't start this morning. Maybe you were more,
00:07:39.040
you know, astute on this matter than I would be, but I would likely seek assistance, especially as
00:07:44.200
these cars have become more technologically sophisticated. Is it the starter? Is it the
00:07:47.760
alternator? Is it the ignition? Is it the battery? Is it there? What is the problem? I don't know.
00:07:52.240
There is a problem. It didn't start. It takes an expert mechanic, someone who understands and can
00:07:58.200
disentangle all those causes and effects to get to it. This is the domain, what I call complicated,
00:08:04.200
that experts really thrive within. The minute you cross the threshold from complicated into complex,
00:08:12.020
what we have are emergent phenomena. This is where causes and effects are not clearly linked
00:08:17.940
or identifiable. And it's because there's just too many moving parts. This is the domain of social
00:08:24.080
dynamics, right? When you have lots of individuals thinking for lots of reasons, different thoughts,
00:08:28.680
and interacting to produce behaviors that emerge. So it's an emergent phenomenon. It's in this domain
00:08:35.520
that our instincts are to lead us towards experts who promise us salvation, who can solve these
00:08:41.680
problems. But these are not problems that are solvable. These are problems that are understandable
00:08:46.740
and that we can try to get our arms around, but there are no solutions. And so when you employ an expert
00:08:52.640
who's skilled at helping us navigate a complicated dynamic in a complex dynamic, what you find is
00:09:00.140
you've brought a man with a hammer to a situation where there may or may not be a nail, but he's
00:09:06.000
going to find that nail. And so that's the domain of complexity where I would suggest it really does
00:09:11.580
make sense to use multiple experts or multiple perspectives to really get your arms around the type
00:09:16.420
of problem you're facing. Well, the example of how an expert, even though they're an expert and
00:09:21.160
they're very knowledgeable about the area, they still can't solve the problem of complexity. I mean,
00:09:25.700
this is your domain, financial advisors. You talk about that, but basically the track record for
00:09:29.980
financial advisors isn't great. Yeah. Look, I think the financial advice community, it's hard to really
00:09:38.900
gauge whether it's great or not, right? Because even the assumption that financial advice has been
00:09:44.860
suboptimal is usually measured against some optimization logic of, oh, we want to have the maximum blank.
00:09:53.060
Well, what if instead a true financial advisor understood what their client's needs were and
00:10:00.200
increased the probability of achieving those rather than trying to maximize just some theoretical
00:10:07.420
objective such as, oh, we just want to produce the max return. Well, the max return comes with some risk.
00:10:13.240
So what if instead the person said, I want to make sure I have enough money for my kid's tuition when
00:10:19.160
he goes to college in three years time and it's this much money? Great. Then we're going to increase
00:10:24.140
the probability of achieving that number rather than just objectively try to maximize in this ambiguous
00:10:31.360
way. So it's not clear to me that financial advisors are unproductive or useless. I think they're
00:10:37.500
probably very productive and very useful. The key is really taking the time for financial advisors to
00:10:42.780
step out of their own little silo of maximize returns, maximize returns, and understand client
00:10:49.120
needs. And you're also seeing, I'd say in the past 10 years, you've seen companies pop up promising
00:10:55.760
that they can use artificial intelligence to solve these complex problems. The idea is that you can get
00:11:01.420
these supercomputers thinking about things and they can see all the different possibilities in these
00:11:05.740
emergent properties. But what do you think? Is that actually going to do anything? Or is it just,
00:11:10.140
can that help us solve problems using technology?
00:11:14.420
Yeah, look, technology has forever, as far as I can tell, promised us the salvation into a utopia
00:11:21.780
where everything is knowable and everything is optimizable. The problem is technology, at least so
00:11:27.940
far, has been designed by humans that have limitations and biases and other issues. And those get embodied in
00:11:35.300
the very technologies that are produced by humans. So when you get to the domain of artificial intelligence
00:11:40.780
or machine learning, where they are trying to learn from themselves, the possibilities really could be
00:11:46.780
endless, but they're not anywhere near. The concept of artificial general intelligence, you know, a computer
00:11:53.540
or software that can actually think and understand common sense dynamics, doesn't seem anywhere imminent,
00:12:00.300
to me, at least. So that's one dynamic, I would say. But it also, if you think about even something
00:12:06.000
that's becoming increasingly popular as a topic, like autonomous driving, if a car is driving,
00:12:12.460
let's say the car is about to hit, this is a common problem in the discussion of fairness and some of the
00:12:18.340
decision-making literature. But if the car is going to hit either a person on the right side of the road
00:12:26.240
that has a baby in a carriage and is pushing it down the sidewalk, or two old people on the left
00:12:32.360
side of the road that are walking with their canes down the street, and that's it. It has to choose
00:12:37.460
one of those two. Which one should it hit? I mean, those types of ethical problems that emerge in the
00:12:44.280
software are things that humans have grappled with, but the software doesn't know how to grapple with
00:12:48.960
that. The software is going to deal with it however it's been programmed to. And so you have these
00:12:54.540
ethical considerations that emerge. I mean, the truth is software embodies values. Algorithms embody
00:13:01.840
the values of the people that design them. And so there you go. That's the problem.
00:13:06.040
That's the problem. All right. So besides increasing information, besides increasing complexity,
00:13:10.920
you said another reason that we're starting to turn more towards experts to help us solve our
00:13:15.420
problems is this desire to optimize everything. What does that look like? And why do you think we're
00:13:20.200
trying to be the best at everything? Yeah. Well, look, I mean, think about,
00:13:24.420
I'll give you a great example that I did mention in the book, which is my wife and I would sit down
00:13:28.700
after a long week and we'll plop ourselves down on the couch and we'll say, all right,
00:13:33.700
let's just watch something. And, you know, maybe she's had a week where she's in the mood at this
00:13:39.540
point for an action movie. Maybe I was thinking more, you know, it was really like a heavy week. I want a
00:13:45.300
comedy. We're convinced because there's, I don't know, a million movies on demand available between
00:13:52.240
Xfinity, Hulu, Netflix. I mean, you go down Apple TV, what have you, Amazon Prime. I mean,
00:13:59.800
there's got to be a movie that can thread that needle perfectly, right? There's got to be.
00:14:04.580
There's so many movies. Why wouldn't there be? Of course there is. And our mood is perfectly
00:14:10.220
suited to that exact movie. But finding it is a non-trivial task. And the truth is we
00:14:17.960
probably won't. And so what ends up happening is we think because there are so many choices
00:14:23.400
out there in the world that we get effectively paralyzed because we know that an optimum,
00:14:29.840
an optimal perfect decision probably exists, but we can't find it. How do we find that movie?
00:14:36.000
How do I know? I mean, oh my God, you would think I'd have to consult Rotten Tomatoes. I'd have to
00:14:40.460
consult different movie critics. I'd have to find this. I'd have to find the genres. Look,
00:14:45.100
the stakes are not high enough to do all that, but we're left with this low-grade anxiety.
00:14:49.920
And the result is we're probably going to be unsatisfied because of all that choice.
00:14:55.620
And so rather than sort of empowering us that having all this choice and we can find whatever we want,
00:15:00.740
we end up with this low-grade regret. Ah God, that movie wasn't perfect. There was probably
00:15:05.100
something better. It's this fear of missing out. Fear of missing out on the perfect choice.
00:15:11.440
So we often hear about this FOMO in a lot of walks of life, but the fear of missing out on the perfect
00:15:16.760
choice exists in many domains. And so the result is, well, let me go with the algorithm suggestion.
00:15:23.700
Netflix thinks based on my prior watching that I'll like this. Let me try it, right? Or based on these
00:15:30.140
decisions I've made in the past, the expert believes, my financial advisor thinks that I'm
00:15:34.400
very risk averse. They're not going to put me in Zoom stock because it's volatile, even though it
00:15:39.680
went up. Oh God, it was volatile. What have you. And so the fear of missing out on that perfect decision
00:15:45.880
that is elusively promised constantly by the explosion in choice and opportunities really leaves
00:15:53.660
us with this tendency to run headlong into the arms of experts and technologies.
00:15:59.080
Well, yeah, it sounds like what you're saying is that the technology actually
00:16:01.960
encourages us to think that way, because there's data with everything. So you can
00:16:07.000
see popularity, and the data comes immediately. I mean, I've had instances where I've
00:16:11.240
tried to get whatever I, whatever some company said was the best. Then I get it. I'm like, this wasn't
00:16:15.980
that great. This is the problem. This is FOMO invading all walks of life effectively, right? I mean,
00:16:22.300
think about this, but I went and I got to get a drink at Starbucks and, you know, it's kind of
00:16:26.860
feeling like a coconut milk latte, but they had this picture up there. It says real popular. The
00:16:31.440
app suggested that, you know, dollars off. If I get this other drink, this mocha, something
00:16:36.300
trying to manage my calories. I'm worried about fat. I'm trying to optimize, you know, the caffeine to,
00:16:42.300
I don't know, the caffeine to carb ratio, some weird thing that someone somewhere said is
00:16:46.760
important. And so I get it and yeah, it's okay. But there was a perfection
00:16:52.300
promised at one point. And by the way, this goes headlong into conflict with economic thinking.
00:16:59.300
Economic thinking has often said more choice is always better. It can't be worse, right?
00:17:07.000
I can ask you, Brett, do you like an apple or an orange? You say, I like an apple. I say, great.
00:17:10.740
Do you like an apple, an orange, or a pear? Well, now you either like the pear or you can still like
00:17:15.520
the apple, right? The orange is never going to be better. What we find with humans is, is it apple or
00:17:20.960
orange or pear? And then you introduce a banana and they say, I like the, I like the orange.
00:17:26.080
You say, wait, hold on a second. Why did you like the orange? Now you'd like the apple more than the
00:17:30.000
orange. The apple's still there. Now I introduce a banana and now you like the orange. What happened?
00:17:36.540
Well, it turns out choice is confusing and we drown in these sort of decisions. We get paralyzed. I mean,
00:17:42.540
you hear about analysis paralysis, this choice paralysis. There's been wonderful research that shows
00:17:48.280
after a certain point, more options actually paralyze rather than empower. And that's what
00:17:55.640
we're finding. Right. And so that's why we decide to go to Google, just Google, tell me what the best
00:17:59.640
thing is to buy or Netflix, tell me the best show to watch because I don't want to make the choice.
00:18:03.400
Yeah, it's easier. And by the way, in some walks of life, I would tell you,
00:18:07.080
you shouldn't think for yourself. You should just blindly follow what's suggested, right? When the stakes are low,
00:18:12.340
why do I need to try to optimize the movie I'm going to watch with my wife on a
00:18:17.060
Friday evening, right? Why should I do that? It's an hour-and-a-half to two-hour
00:18:22.660
commitment. And in fact, if I didn't think of it as something to optimize, but something instead that
00:18:27.580
I tried to satisfice, I'd probably enjoy it more. Right. Okay. But then when the stakes are high,
00:18:33.820
you don't want to just rely on the expert or the technology. That's right. All right. No,
00:18:38.720
that's exactly right. You don't want to rely blindly when the stakes are high. Think of it this way.
00:18:42.620
If you had to make a medical decision about a procedure that was somewhat risky,
00:18:48.480
that maybe had some side effects, that balancing act is going to be a little more difficult, and that's
00:18:53.780
one where I'd encourage you to think for yourself. Well, something that I thought was interesting
00:18:57.980
too, you argue in the book is that this information overload, this information complexity, this too
00:19:03.020
many choices is not only affecting lay people, but it's also causing problems for the experts and the
00:19:08.980
technology we rely on. How so? What's going on there? Well, think about the experts, right? They
00:19:14.540
live in silos and they may in fact, because they live in silos, not have an appreciation for where
00:19:21.800
their work is useful or not useful. And so I encourage the experts to also take a step back and see the big
00:19:29.140
picture. You know, one example I've used in the book and I often talk about is imagine if you went to
00:19:35.440
your cardiologist and she says to you, look, you're doing great. Your health is fabulous. However,
00:19:42.960
I'm noticing your cholesterol levels rise a little bit. It's a little concerning. What I really want to
00:19:48.780
do is put you on a statin to lower your cholesterol levels. By the way, don't worry about it. Statins are
00:19:55.360
completely safe and proven to work. I myself as a cardiologist take a statin. Most of my medical
00:20:02.380
school peers are on statins. In fact, every doctor in this practice is taking a statin.
00:20:09.720
We really recommend you take a statin. It works. And so you go ahead and take a statin. Later that
00:20:15.960
year, you come back, you get tested and lo and behold, your cholesterol levels have fallen.
00:20:20.320
Fabulous, right? She did her job. You can claim victory. And we know with good, you know, pretty serious,
00:20:27.120
good research that high levels of blood cholesterol are associated with higher risk of heart attack.
00:20:32.860
And she just lowered your blood level of cholesterol through a statin. Great. So that's a good thing.
00:20:38.600
However, now you walk down the hall and you go see an endocrinologist and he tells you, Brett,
00:20:45.680
you know what? You're doing great. Health is looking good. Except I'm seeing signs of prediabetes.
00:20:51.880
It looks like you're developing insulin resistance. And in fact, I think we're going to have to address
00:20:57.400
this because something's not right. There's a warning here and I'm worried because diabetes comes
00:21:02.840
with an elevated risk of heart attack. And so now we've crossed the silo away from the cardiologist
00:21:09.360
to the endocrinologist and we're seeing the exact opposite impact. Why is that? Because the way a statin
00:21:14.400
works is it interferes with enzymes that impact insulin production, et cetera. And so it interferes with the
00:21:20.820
system. The fact that lower cholesterol is good for you is true, all else equal, but all else wasn't
00:21:27.540
equal. You took a statin, a foreign object that interferes with other things. And so there's an
00:21:31.980
example where crossing silos may result in a different insight than living within a silo.
00:21:37.860
So, you know, I think it's useful for experts to look beyond their own silos as well.
00:21:42.400
We're going to take a quick break for a word from our sponsors.
00:21:49.960
Well, and besides experts in technology, you've mentioned something else that can cause us to
00:21:54.360
not really think for ourselves, and that's rules or procedures or, you know, sort of the bureaucracy.
00:21:58.960
Any examples of that causing us to be blind to different options?
00:22:03.200
Yeah. Look, I mean, sometimes checklists are useful. They've proven extraordinarily useful in reducing
00:22:07.940
surgical error. They've proven extraordinarily useful in aviation where, you know, pilots will go down a
00:22:14.000
checklist to double check everything, et cetera. It's a means to minimize sort of that complacency that
00:22:21.760
comes in with regular repeated actions. And the complacency sometimes increases the error rate. So
00:22:28.260
use the checklist and you reduce the error rate. However, what happens when we blindly rely on
00:22:35.940
checklists or protocols is we stop thinking. And that's a problem. There's a story in the book
00:22:41.860
where I talk about a checklist that was used to determine whether a patient should be removed off
00:22:46.580
of a blood thinner. And the checklist said, yes, he should be removed off the blood thinner. And so
00:22:52.020
this patient stopped taking the blood thinner and later had a stroke. Well, it turns out one item that
00:22:57.700
wasn't in the checklist was family history of strokes. And so this doctor took
00:23:03.680
him off the blood thinner saying, oh, the checklist found no reason to stay on the blood thinner.
00:23:08.980
However, this person's father had a stroke at the very age that he was. And so, you know,
00:23:16.160
if you'd use a little common sense and not relied blindly on the checklist, you might've had a
00:23:21.500
different recommended course of action. So there's an example, again, sorry, from the medical world; we're
00:23:25.280
sticking with medical examples, but you know, it's also true even within aviation, you know,
00:23:29.500
Captain Sully Sullenberger, the famous US Airways pilot who landed on the Hudson. You know, there was a
00:23:34.500
checklist in the plane for what to do when you lose thrust in both engines. There was a checklist for
00:23:42.400
that. However, that checklist was designed to be followed if you were at 35,000 feet cruising at 600
00:23:49.380
miles an hour. He was at 3,000 or 4,000 feet and hadn't reached an altitude that allowed him to glide
00:23:56.800
very far. And so he put the checklist aside and he thought for himself, the result was a good outcome.
00:24:03.800
Well, better than it would have been, I guess, is one way of thinking. So yeah, there's a couple
00:24:09.480
Well, so it sounds like what all these things, experts, technology, checklist, procedures, what
00:24:13.320
they all do, one of the things they do is they direct our focus to a specific area, causing us not to
00:24:22.280
That's right. Yep. No, that's exactly right. It's about focus management. In fact, one of the things
00:24:26.780
I often suggest is that we need to be mindful about where we're focusing because the experts
00:24:33.420
in technologies are like spotlights and they're shining a spotlight for us in terms of where to
00:24:39.100
look or what to pay attention to. When in reality, the insight may exist in the shadows. You know, we
00:24:44.740
talked about the cardiologist and cholesterol, but I have to ask, like, why would you care about
00:24:48.660
cholesterol? I mean, do you care about cholesterol, Brett? No. I don't think you should care about
00:24:52.780
cholesterol. In fact, I don't know why I should care about cholesterol. You care about cholesterol
00:24:56.920
because it might impact your heart attack risk. Well, shouldn't you just care about heart attack
00:25:02.480
risk rather than cholesterol? Why are we focused on cholesterol?
00:25:06.800
Well, another example from the medical, because I guess that's an area where it's complex, a lot of
00:25:11.580
information that affects men is the prostate-specific antigen test. It's this idea you could detect
00:25:17.780
prostate cancer really early by taking this test and you think that's a good thing, but it actually
00:25:23.180
ended up causing a bunch of problems. Yeah. So that's a really, it's actually a tragic story,
00:25:28.560
I think, on many levels for lots of men. So Dr. Ablin designed this test. He was a University of
00:25:35.660
Arizona professor. And effectively what happened was this was a test to manage. Originally, I think the
00:25:42.220
intent was to manage people that had been, because of symptoms, identified as having prostate cancer.
00:25:49.100
And so you use the PSA test to see in their blood levels, perhaps how that cancer was progressing and
00:25:55.240
what to do. But the way you found out if a person had prostate cancer was they showed up with symptoms
00:26:02.180
or there was some identifiable physical means to say, okay, there's a problem here.
00:26:07.700
Well, what ends up happening is big business, big pharma, or not pharma, but big medicine,
00:26:13.740
if you will, takes over and they start using this test as an identifier of prostate cancer.
00:26:19.300
Well, it turns out most men will die with prostate cancer, but very few men will die because of prostate
00:26:29.020
cancer. And so it turns out that there's actually, you know, with age, there's a greater preponderance of
00:26:35.000
prostate cancer. So the PSA test gets hijacked. And so there's more people starting to rely on the
00:26:41.560
PSA test as a screening tool rather than a management tool. And so suddenly this becomes
00:26:46.680
the focus. Urologists around the country, around the world start saying, let's get a PSA test score
00:26:51.040
to see whether there's a tendency or an issue of potential prostate cancer. And then they end up
00:26:56.540
looking more. And then when they look more, they find more. And when they find more, they treat more.
00:27:01.960
And the result was at one point, Dr. Ablin, who designed the test, the scientist who came up with
00:27:07.900
it, ended up writing a New York Times op-ed. It was the most read New York Times op-ed that year
00:27:12.840
that said something that was called "The Great Prostate Mistake" or something like that, where he
00:27:17.900
said, listen, I'm sorry. This test was not designed for use in this way. The result is millions of men
00:27:26.340
have undergone treatments for an issue that may never have bothered them, an issue that may never
00:27:33.520
have actually produced any identifiable impact on their life. And so, you know, there's, in fact,
00:27:40.640
he then had a book-length treatment called The Great Prostate Hoax, where he talked about how,
00:27:46.680
and he starts it off with an apology to men saying, you know, there's been millions of men who are
00:27:51.260
probably incontinent or impotent because of procedures that might have been deemed unnecessary
00:27:56.860
because of over-reliance on this one indicator. Right. And so that's an example, again, like
00:28:02.100
misdirected focus. Like it, yeah, it just, it made you blind to the bigger picture and just
00:28:07.520
made you hyper-focused on one thing. That's right.
00:28:11.060
Well, another area that you talk about, and this is sort of your domain of business and finance,
00:28:15.460
where misdirected focus, where you're siloed and you're just paying attention to specific
00:28:21.300
things can actually hurt businesses in the way they promote people. And this, uh, you talk about
00:28:28.140
the Peter principle. Oh, thanks for asking, Brett. This is one of, I think it's a genuinely comical
00:28:35.100
manifestation of this problem of misdirected focus. Well, it's like, so for those, yeah, for those
00:28:40.120
who aren't familiar, what is the Peter principle and how does misdirected focus lead to the Peter
00:28:43.480
principle? Yeah. So the Peter principle is, so there was this book in the 1960s, I think,
00:28:49.280
written by Laurence Peter and there was somebody else, but anyway, it's called The Peter Principle.
00:28:53.440
And what he found was he went around and was just frustrated by large organizations and bureaucracies.
00:29:00.560
And what he did was he said, well, why are people getting promoted? Why is this person in the job?
00:29:04.640
Why is this person staying in the job? And he did some research and looked into it.
00:29:08.280
And what he found is really at some level, I mean, it caused me to chuckle when I first read it.
00:29:13.480
And then when you think about it, it's quite profound. He said, well, it turns out people
00:29:17.880
get promoted by doing well in their current job. And the result is you keep getting promoted if you
00:29:26.100
do well. Seemingly logical, right? The next question he asks is, when do people stop getting
00:29:32.520
promoted? They stop getting promoted when they're doing poorly in their job. And he calls that,
00:29:38.720
that that person has reached their, quote, level of incompetence, unquote. And so the result is
00:29:45.100
eventually over time, an organization is filled with people that reach their level of incompetence
00:29:52.300
and therefore nothing gets done. And so you can laugh about that because you're like, oh my God,
00:29:58.000
obviously if this person's really great at customer service, they're going to get promoted to run the
00:30:01.660
customer service team. Well, that's a different skill, managing people rather than managing customers.
00:30:06.580
And if that person's really good at managing that group of people, they get promoted to managing a
00:30:12.340
bigger, different operation, et cetera, and they'll get migrated up. And so the misdirected focus that I
00:30:18.940
highlight is the Peter Principle suggests that people get promoted by how they're doing in their
00:30:23.800
current job. When in fact, you should really look to promote them based on how they might do in their
00:30:29.920
next job, not the job they're in. You may find an underperformer in your business who's at a
00:30:36.120
particular level who once promoted may excel. Likewise, you can find someone who's doing really
00:30:42.140
well in their current position that if you promote them, they'll really struggle. And so the focus on how
00:30:47.900
someone's doing in their current job to determine whether they will do well in their next job really
00:30:53.660
doesn't actually make a lot of sense. I mean, it's a rewarding mechanism, but it's not a mechanism that
00:30:59.820
actually lines up people with the skills they need for the job they're being asked to do.
00:31:05.160
Gotcha. Okay. So in that case, if you're working in a corporation where you determine promotions,
00:31:10.060
you don't just look at how well they're performing at the job. Look at the bigger picture
00:31:12.920
of that person and see if they would do well where they're at right now or in a higher position.
00:31:18.380
Yeah. Look, yeah. Wayne Gretzky's, I don't actually know if it's Wayne Gretzky or Wayne
00:31:23.200
Gretzky's father. There's been debates on this, but there was a quote that came out of the Gretzky
00:31:27.780
family, which was one should skate to where the puck is going, not to where the puck is, right? So
00:31:35.200
evaluating people by how they're doing in their current role is looking at where the puck is. We want
00:31:41.800
to know where the puck is going. If I promote Brett to this other job, will he do well? That is
00:31:47.480
independent of how he's doing in his current job. All right. So we've talked about the problem.
00:31:51.840
We have all these experts, technology, rules and procedures that direct our attention and sometimes
00:31:56.580
to our detriment. Let's talk about how we can overcome that and be a little bit more self-reliant
00:32:01.740
in the 21st century. And let's talk about that managing focus. So it seems like the first step
00:32:05.860
is just wresting control of your focus from experts, technology, and not just completely outsourcing
00:32:12.060
that thinking to them. But how do you do that when you have all these things, these algorithms,
00:32:17.060
these experts and books and TV telling you, here's what you need to do. How do you wrest control
00:32:21.460
and start managing your attention for yourself? Yeah, it's hard to do. Let's be honest. So first
00:32:27.220
of all, it takes effort. But let me actually clarify one thing that I want to make sure comes across
00:32:31.720
here. I am by no means suggesting we shouldn't listen to experts. I am not bashing experts. What
00:32:39.040
I'm suggesting is we have for far too long bounced like a ping pong ball between complete deferral to
00:32:47.960
experts, which I think is problematic. That's where we don't think for ourselves and we blindly
00:32:51.920
outsource. But we've also bounced to the other extreme, which is complete dismissal of experts,
00:32:57.380
which I also think is wrong. What we need to do is keep experts in their spot, in their place.
00:33:03.180
We are the main actors. Experts are supporting actors. So we can take their insight. In fact,
00:33:10.360
in the book, I say, keep experts on tap, not on top. For that reason, I think there's a role for
00:33:16.920
experts and we want to rely on them and we want to tap into them and we want to get insight from them
00:33:22.200
and extract value from them without completely blindly outsourcing to them. Now, one exception,
00:33:28.460
in fact, again, in the book, I mention an example of a Stanford University professor, Baba Shiv,
00:33:34.220
who realized that he and his wife had a cancer diagnosis and they decided they were going to
00:33:39.800
take the backseat. They were going to do what the experts told them to do blindly. Now, they realized
00:33:45.760
that it was emotional and so they mindfully decided to give up control. So one, being mindful. And number
00:33:52.880
two, they then spent more time figuring out who they would give up the control to. So they were
00:33:58.020
mindful of who they outsourced to and they were mindful of the very outsourcing act. So I'm okay
00:34:03.380
with people outsourcing your thinking. I just want you to do it mindfully and intentionally
00:34:08.360
rather than subconsciously or just reflexively.
00:34:13.020
Gotcha. And so it sounds okay. If you make that decision to outsource your thinking and rely on
00:34:18.220
experts, which you should, I guess one of the things you do, maybe you start asking, what is this person
00:34:22.420
missing because of their expertise, something that's not even in their silo?
00:34:28.540
Bingo. That's exactly right. Ask questions of where the information is relevant and where it might not
00:34:33.540
be relevant or what the insight is based on. So, you know, I know it's hard to ask experts. You feel
00:34:40.320
like there's a status dynamic, et cetera. But when you're interfacing with an expert, I think it's
00:34:44.940
eminently reasonable to ask questions about how that expert came to the conclusion that they've come to,
00:34:51.440
why they're recommending it to you and how it applies to your specific context and your specific
00:34:58.760
problem or objective. So I think that's a very reasonable conversation to have. And an expert
00:35:03.900
should be willing to guide you to understand why they're coming to this conclusion.
00:35:08.960
Right. So that even involves like asking about their incentives. I mean, that could be uncomfortable.
00:35:12.840
It's like, well, do you make money if you tell me to do this thing?
00:35:18.680
It's uncomfortable, but worth doing. You know, there's a quote in the book. I think it's from
00:35:24.280
Warren Buffett, but you know, don't ever ask a barber if you need a haircut.
00:35:28.260
It's sort of the logic. I think that captures it, right? That sort of gets at it.
00:35:33.560
Well, and besides, okay, just being more mindful of when you're ceding control and also being mindful
00:35:39.720
of the experts, their limitations or the technology's limitations. You also say another important thing
00:35:46.640
to be self-reliant in the 21st century is just actually knowing what you're trying to do,
00:35:51.660
And like, it seems like very basic, but how do you think people like,
00:35:54.360
do they just not think about that? Like, why don't you think people think about what their actual
00:35:57.660
goals are when they decide to cede control over to an expert or technology?
00:36:01.240
Well, it's not that you don't understand your goal. It's just that you let your goal be
00:36:04.580
subservient to the expert's objectives, right? So ultimately, you know, think of the cardiologist
00:36:10.220
example. The cardiologist is an expert in heart health. You are worried about your wellness,
00:36:15.480
your longevity, your risk of heart attack is part of that. Ultimately, we can even ask,
00:36:20.220
why do you care about a heart attack? If it doesn't kill you, I mean, yeah, you don't want to have it
00:36:23.440
because there's risks, complications, et cetera. But ultimately, you care about living a life
00:36:28.400
healthily. You want to stay well. And so, you know, your objective really should be driving
00:36:34.520
your interactions with experts. Again, think of yourself as an artist putting together a mosaic
00:36:40.900
and experts have the tiles. There's different tiles, there's different shapes, a different color,
00:36:45.940
different texture. You put them together based on your objectives. So, you know, again,
00:36:51.860
I think of it as the experts are pieces. You know where you want to go with this whole thing.
00:36:57.500
So take what you need from them. It also sounds like too, as you're working with an expert,
00:37:02.940
you have to look at results. And if the results, you're not getting the results you wanted or
00:37:07.240
desired, well, then you got to change course and maybe find another expert or do something else.
00:37:11.900
Yeah. Or interface with them. I mean, look, I'm not suggesting experts are bad. Maybe they don't
00:37:15.120
understand your objectives. Maybe there hasn't been a clarity of communication.
00:37:18.760
Right. That's a good point. The expert might be working on different assumptions than
00:37:22.640
you are. And so, another tactic you recommend is just, you mentioned this earlier, just triangulate.
00:37:27.920
Instead of just relying on one, you know, get a second or third, sometimes fourth opinion.
00:37:32.260
Yeah. And don't hesitate to cross silos in those opinions that you seek, right? So,
00:37:37.480
you know, I'll give you an example. Well, in fact, we talked about a cardiologist. So when you go
00:37:40.960
to your cardiologist, she tells you to take a statin. Why not ask your endocrinologist what they
00:37:45.200
think about you taking a statin? The obvious assumption is, well, that's not their domain.
00:37:49.880
That's not their silo. That's not their area of expertise. Why would I ask them? Well, because
00:37:53.900
it might interact with them in some way. They may have a unique insight or perspective on this.
00:37:59.680
Oh, actually, Brett, we've seen people who take statins. It turns out they have an elevated risk
00:38:03.140
of diabetes. Whoa, really? I don't want that. Okay. Let me re-engage with the cardiologist with this
00:38:09.580
insight. So part of the triangulation logic is an acceptance and admission that every perspective
00:38:17.180
is biased, limited, and incomplete. So don't just rely on one. And that's really what I mean when I
00:38:24.380
say triangulate, which is, you know, in the domain of financial bubbles where I've spent some time
00:38:28.400
thinking and writing, you know, an economic perspective leads you to one insight, but a
00:38:32.960
psychological perspective may lead you to another. And you add into it a political perspective,
00:38:37.720
a credit perspective, a herd behavior, or even, you know, what you find is that, oh,
00:38:43.460
cultural perspective, you get a different view than you would through any one particular lens.
00:38:50.700
And it reminds me of this, I don't know if it's a parable, but there's an often quoted story of the
00:38:55.320
six blind men that stumble upon an elephant, right? So the one man, you know, grabs the leg and he says,
00:39:01.800
oh, this is a tree trunk. It's definitely a tree that we've stumbled upon. Another one grabs the
00:39:07.100
tail and says, whoa, hold on a second. What we have here is a snake. And it's only through the
00:39:13.420
integration of multiple perspectives that the group would be able to determine that they are in fact
00:39:19.120
encountering an elephant. And it's the same way with a large portion of the uncertainty we face in
00:39:24.600
our lives, whether it's in medical, financial, or other domains, is that it really requires integration
00:39:30.800
of multiple perspectives to get our arms around what we're facing, the problems, and even the
00:39:35.480
potential solutions. Well, it sounds like too, besides getting a breadth of opinions from different
00:39:40.200
experts, you know, being self-reliant in the 21st century and knowing how to handle expert knowledge
00:39:44.680
requires the person, you know, you yourself to develop a breadth of knowledge. Like we read widely,
00:39:50.280
have multiple perspectives. Yeah, look, I think breadth is really important at the individual level
00:39:55.760
so that you can understand the limitations and boundaries of the silos in which experts and
00:40:01.660
specialists live. And so reading widely, yes, is important, but it's also just developing an
00:40:07.020
awareness. And these are simple things that can be done to give you that awareness. You know, let's
00:40:13.840
just talk about reading information and the news, for instance, right? I mean, a lot of people will
00:40:18.240
now, because of technology, tunnel in based on existing, you know, searches or filters or alerts.
00:40:27.420
They'll be told, oh, there's something you're in the, uh, I don't know, you're in the aerospace
00:40:31.260
industry. Great. Here's this 737 max problem. And so you get news on that, your alerts come in every
00:40:36.940
day and you get your, your industry newsletter and it comes to you and you read it. Whereas if you took
00:40:43.500
a physical newspaper or magazine and you flipped through it, you will be exposing yourself to different
00:40:51.020
ideas in different domains that may be adjacent, et cetera. And you'll just be more aware,
00:40:56.060
right? I mean, if you think about what's happening in business, reading the Wall Street Journal,
00:41:00.640
physical edition, rather than the algorithmically influenced, alert-driven online version, you know,
00:41:08.460
I think there's some value in that and in exposing you to breadth. And I consistently will
00:41:13.260
read, for instance, The Economist magazine cover to cover in physical form. And I do that because
00:41:19.280
just even flipping pages, even if it's not interesting or not a topic, I have real depth of
00:41:25.760
interest in, I may, I may get some value out of seeing the headline and even reading the first
00:41:31.420
paragraph of it. And it's very quick, but it gives me an awareness and a breadth of, uh, of exposure
00:41:37.100
that I wouldn't get if I just said, all right, I want to know about US-China relations. Let me just
00:41:40.980
read about that. Well, Vikram, this has been a great conversation. Where can people go to learn
00:41:44.880
more about the book and your work? Sure. I think my website's probably the best spot,
00:41:49.080
which is just www.mansharamani.com, and that's M-A-N-S-H-A-R-A-M-A-N-I, or I'm on LinkedIn and
00:41:59.060
Twitter as well. And you can find me there. Fantastic. Well, Vikram, thanks so much for your time.
00:42:02.580
It's been a pleasure. Thanks, Brett. I've enjoyed the conversation. My guest here is Dr. Vikram
00:42:06.460
Mansharamani. He's the author of the book, Think for Yourself. It's available on amazon.com and bookstores
00:42:10.620
everywhere. Check out our show notes at aom.is slash think for yourself where you can find links to
00:42:14.360
resources where we delve deeper into this topic. Well, that wraps up another edition of the
00:42:25.160
AOM podcast. Check out our website at artofmanliness.com where you can find our podcast
00:42:28.540
archives. Well, there's thousands of articles we've written over the years about pretty much
00:42:31.320
anything you can think of. And if you'd like to enjoy ad-free episodes of the AOM podcast,
00:42:34.540
you can do so on Stitcher Premium. Head over to stitcherpremium.com, sign up, use code manliness
00:42:38.740
at checkout for a free month trial. Once you're signed up, download the Stitcher app on Android or iOS,
00:42:42.800
and you can start enjoying ad-free episodes of the AOM podcast. And if you haven't done so already,
00:42:46.480
I'd appreciate it if you take one minute to give us a review on Apple Podcasts or Stitcher. It helps
00:42:49.940
out a lot. And if you've done that already, thank you. Please consider sharing the show with a friend
00:42:53.800
or family member who you think would get something out of it. As always, thank you for the
00:42:56.860
continued support. Until next time, this is Brett McKay, reminding you not only listen to the AOM
00:43:00.180
podcast, but put what you've heard into action.