The Peter Attia Drive - April 27, 2026


#389 - Thinking scientifically: why it's hard, why it matters, and a practical toolkit


Episode Stats

Length: 53 minutes
Words per minute: 146
Word count: 7,872
Sentence count: 501
Harmful content: 1 sentence flagged for toxicity, 3 sentences flagged for hate speech


Summary

Summaries generated with gmurro/bart-large-finetuned-filtered-spotify-podcast-summ.

Transcript

Transcript generated with Whisper (turbo).
Toxicity classifications generated with s-nlp/roberta_toxicity_classifier.
Hate speech classifications generated with facebook/roberta-hate-speech-dynabench-r4-target.
00:00:00.000 Hey, everyone. Welcome to the Drive podcast. I'm your host, Peter Attia. This podcast,
00:00:16.540 my website, and my weekly newsletter all focus on the goal of translating the science of longevity
00:00:21.520 into something accessible for everyone. Our goal is to provide the best content in health and
00:00:26.720 wellness. And we've established a great team of analysts to make this happen. It is extremely
00:00:31.660 important to me to provide all of this content without relying on paid ads to do this. Our work
00:00:36.960 is made entirely possible by our members. And in return, we offer exclusive member only content
00:00:42.720 and benefits above and beyond what is available for free. If you want to take your knowledge of
00:00:47.940 this space to the next level, it's our goal to ensure members get back much more than the price
00:00:53.200 of a subscription. If you want to learn more about the benefits of our premium membership,
00:00:58.040 head over to peteratiamd.com forward slash subscribe.
00:01:04.260 Welcome to a special episode of The Drive. In this episode, I step back to share how my thinking
00:01:09.660 is evolving around a topic that sits actually well upstream of almost everything else we discuss
00:01:17.240 on this podcast, which is how to think scientifically. Now I get asked this question
00:01:23.040 all the time. And frankly, I don't think until this episode, I really had a comprehensive way
00:01:31.360 to approach this important topic. So this is really an introspective episode about why scientific
00:01:39.180 thinking is so difficult for us as a species, why it matters more than ever in an environment
00:01:45.780 flooded with misinformation, and what each of us can do to get better at separating what we want
00:01:52.080 to be true from what the evidence actually suggests. So without further delay, I hope
00:01:57.120 you'll enjoy this episode of The Drive. Today, I want to talk about a skill that sits upstream
00:02:08.040 of nearly every decision you make about health, policy, risk, and even how to evaluate other
00:02:15.060 people in this space. I want to talk about how to think scientifically. By that, I don't mean
00:02:21.960 how to run a lab or memorize statistics. I mean how to evaluate claims, how to update your beliefs
00:02:29.760 when the evidence changes, and how to figure out who to trust when you can't do the analysis
00:02:35.980 yourself, which as we're going to come to appreciate is often. If you get good at that,
00:02:42.200 you put yourself in the position to make better decisions than somebody who simply knows more facts
00:02:48.060 but doesn't know how to weigh them. We're going to cover four things here today. First, what
00:02:53.180 scientific thinking actually is beyond, you know, spending time in a lab. Second, why it's so hard
00:03:00.380 for us, which has less to do with intelligence than you might expect. Third, what you can do
00:03:06.560 as an individual to get better at it. And fourth, how to find people you can trust when you can't
00:03:13.540 do the analysis yourself, which as I said a second ago is going to be most of the time for most
00:03:19.460 people. One idea is going to thread through this entire episode and I want to put it on the table
00:03:25.440 right now. The goal of thinking scientifically is not simply to be right. It's to be less wrong
00:03:32.740 over time. Science is a process built around that principle, and what I want to do today
00:03:39.160 is help you engage with it more skillfully. This is a topic I get asked about very often,
00:03:46.820 and I think honestly until now I haven't had a great consolidated approach for laying it out.
00:03:54.280 So let's start with what we actually mean when we say think scientifically. There's a common
00:04:00.680 misconception that scientific thinking is something scientists do in labs and the rest of us
00:04:05.500 just receive in the form of results. But that's not what I'm talking about at all.
00:04:10.140 Thinking scientifically is a way of engaging with claims about the world, any claims, not just ones
00:04:16.860 that come with a citation attached. At its core, it means generating hypotheses, possible explanations
00:04:24.260 for why something might be the way it is or how something works. It means testing those hypotheses
00:04:31.200 against experimental evidence. It means updating your beliefs when the evidence changes. And it
00:04:39.140 means tolerating uncertainty throughout this entire process. It means separating what you want
00:04:45.860 to be true from what the evidence suggests is true and recognizing, really recognizing how often
00:04:52.060 those two things are in tension. As Richard Feynman, someone we're going to refer to a few
00:04:57.380 times today, one of the greatest scientific thinkers in history once said, the first principle
00:05:03.120 is not to fool yourself, and you are the easiest person to fool. Scientific thinking means being
00:05:10.940 more invested in the process that produced a conclusion than in the conclusion itself.
00:05:17.140 Again, I want you to say that again with me because that is not intuitive. Scientific thinking
00:05:22.280 means being more invested in the process that produced a conclusion than in the conclusion
00:05:28.380 itself. Most of us evaluate claims by asking, is it true? A scientific thinker asks a different
00:05:35.820 set of questions first. How did they arrive at this? What's the evidence? How strong is it?
00:05:40.940 What are the alternative hypotheses or explanations? And scientific thinking means
00:05:46.960 understanding that I don't know and it depends are often the most honest available answers.
00:05:55.460 This idea, I don't know, is critically important, and I don't think we discuss it often enough.
00:06:02.220 In many ways, I don't know can always be the first answer to any scientific question.
00:06:07.600 The second answer is then our best understanding based on the available evidence. But out of ease
00:06:14.120 and out of confidence and out of trying to avoid sounding like a broken record, we often just skip
00:06:19.560 straight to the second answer. We drop the uncertainty. And when we do that, we lose
00:06:24.420 something essential. We lose the thing that makes scientific thinking scientific. Because here's the
00:06:30.780 thing. One of the most useful ways to think about science, especially in medicine, is by focusing
00:06:37.200 on two of its core functions. The first is ruling things out, and the second, getting less wrong
00:06:45.680 over time. There's a famous saying within scientific research, often attributed to George
00:06:51.440 Box, but honestly I've seen it attributed to 20 other people, all models are wrong, but some are
00:06:57.800 useful. More often than not, we are not proving a claim in some final absolute sense. We are
00:07:05.180 comparing explanations, testing predictions, and gradually gaining confidence in the ones that
00:07:12.300 survive contact with data. We rule things out one by one until we're left with the explanation we
00:07:19.240 can't rule out. And then we make a logical leap. We say we've eliminated every other possibility
00:07:25.300 we can think of, so we have growing confidence that this explanation is correct. But notice
00:07:31.660 the qualifier here. Every other possibility we can think of. It's not a proof. Hard proof
00:07:39.120 only exists in formal logic and mathematics, where we can demonstrate within a set of defined
00:07:44.640 rules that something must be true within those defined rules. The rest of science relies
00:07:50.540 on experimentation, trying to discover what the rules are, doing our best to accept or reject
00:07:57.220 rules based on available evidence, and deducing the best possible explanations within the landscapes
00:08:04.040 we've identified. This is fundamentally different from a derived proof. Now, I started my academic
00:08:12.120 career in mathematics, and we spent a lot of time working on proofs. And it was very difficult for
00:08:19.520 me when I transitioned from mathematics to medicine, which was so fundamentally messy.
00:08:26.620 Instead, we rely on our best approximations of reality, but true certainty is not even
00:08:32.900 on the table. Our models are, at their core, probabilities built on probabilities. They
00:08:39.180 aren't proof. They're simply the best we've got. Sometimes that distinction barely matters
00:08:45.100 in practice. Take gravity. The idea that objects with mass attract each other is an empirically
00:08:52.840 derived theory, one that is not derived from a true mathematical proof, but has experimentally
00:08:59.880 out-competed all other explanations and provided countless verified predictions. This theory,
00:09:07.300 first proposed by Isaac Newton in 1687, has proven so successful that future iterations
00:09:13.640 didn't destroy it. They refined it with incredible implications. At the turn of the 20th century,
00:09:20.620 Einstein proposed that gravity doesn't only lead to objects attracting each other. Gravity quite
00:09:27.200 literally bends time and space, slowing and speeding up the passage of time. A proposition
00:09:34.040 that was, you know, quite frankly, insane sounding at the time and insane sounding maybe now,
00:09:39.600 except that it works. For example, due to the incredible speed of man-made satellites circling
00:09:45.460 the Earth, together with their distance reducing the pull of Earth's gravity, the satellite systems
00:09:52.560 responsible for GPS have to adjust their clocks by 38 microseconds every day. Time literally passes
00:10:00.880 at a different rate on these satellites. And without these adjustments, which are predicted by Einstein's
00:10:07.160 theory of relativity, our GPS system would misalign by 8 meters per minute or 11 kilometers
00:10:15.640 per day. From principles discovered by experimentation, we have satellites orbiting
00:10:21.180 over our heads, perfectly combating the pull of Earth's gravity to rotate around the planet
00:10:26.520 near endlessly. On these satellites, we adjust their clocks using experimentally derived
00:10:31.820 principles of the time dilation due to gravity, instead of sending these hunks of metal crashing
00:10:38.300 onto Earth or getting data from them that errs on the order of kilometers. Our theory of gravity
00:10:45.000 permits the coordinated movement of hundreds of satellites over decades, able to pinpoint
00:10:49.540 exactly where your room is in your house or where you forgot your phone. Now, do we know everything
00:10:56.380 about gravity? No. In fact, combining our best theory of gravity with our best theory of particle
00:11:02.380 physics is one of the greatest unsolved scientific problems of our day. But have we discovered enough
00:11:09.400 about gravity to be useful? Undeniably. And confidence in our models isn't restricted to
00:11:15.960 physics, although admittedly that's where the highest confidence tends to concentrate.
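A quick aside for anyone who wants to check the GPS arithmetic above: a receiver computes position from signal travel times, so a clock error turns into a ranging error of roughly the speed of light times that error. A minimal sketch in Python, assuming only the 38-microseconds-per-day figure quoted above:

C = 299_792_458                # speed of light, m/s
CLOCK_DRIFT_PER_DAY = 38e-6    # net relativistic clock drift quoted above, seconds per day

error_per_day_m = C * CLOCK_DRIFT_PER_DAY        # ranging error ~ c * clock error
error_per_minute_m = error_per_day_m / (24 * 60)

print(f"Error per day:    {error_per_day_m / 1000:.1f} km")  # ~11.4 km
print(f"Error per minute: {error_per_minute_m:.1f} m")       # ~7.9 m

That one multiplication recovers the roughly 8 meters per minute, or 11 kilometers per day, quoted above.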
00:11:20.200 But take something that impacts biology. Take smoking. We have overwhelming evidence that
00:11:25.480 smoking causes cancer. The epidemiologic data, with its enormous hazard ratios, the mechanistic
00:11:33.020 understanding, the dose-response relationships, taken together, they've ruled out any plausible
00:11:38.680 alternative. At that point, is it really so different to say we've proven it? Not much.
00:11:45.020 When the evidence is so overwhelming, the gap between our least wrong model and capital T
00:11:51.840 true becomes vanishingly small. But most questions in medicine don't look like that.
00:11:58.560 Most live in the middle. The evidence is suggestive, sometimes highly suggestive,
00:12:03.200 but imperfect. The model is useful, but incomplete. And the conclusions are right
00:12:09.140 enough to act on now, but not final. Dietary cholesterol is a good example. For decades,
00:12:14.940 the accepted answer was straightforward. The cholesterol we eat raises the cholesterol in
00:12:19.740 our blood, which raises cardiovascular risk. This was treated as settled. Dietary guidelines
00:12:25.340 were built around it. Eggs became the enemy. And the evidence did point in that direction.
00:12:31.120 It wasn't fabricated and it wasn't a conspiracy, but it was incomplete. The relationship turned
00:12:35.880 out to be far more complex and far more individualized than the simple causal chain
00:12:40.920 suggested. If the field had held onto that finding as our best current model, rather than settled,
00:12:46.640 the guidelines might have been updated much sooner. Now here's the part that's conceptually
00:12:52.460 maybe even affectively difficult. The implication is this. Some of the guidance that exists right
00:12:59.500 now, today, as we're having this discussion, is going to turn out to be as incomplete as eggs
00:13:05.720 and cholesterol. Some of the guidance that you and I believe in and follow. Now the kicker is
00:13:11.840 I don't know which parts. Nobody does yet, at least not at a conscious level. But the history
00:13:17.160 of science tells us that some of what we currently treat as settled simply is not.
00:13:22.860 We have to live with that. We have to make decisions based on the best available knowledge
00:13:28.000 while staying aware that the best available truth is rarely the whole truth. It means holding room
00:13:34.360 for doubt and living confidently anyway. It requires being a walking contradiction.
00:13:41.400 The good news is that thinking scientifically works precisely because it's built to address
00:13:46.900 its own imperfections. The system has self-correcting baked in, as long as you don't
00:13:53.380 freeze your conclusions in place and defend them like territory. And this ability can be honed.
00:13:59.800 It's not a talent or a personality trait per se. It's a discipline, a practice that requires effort,
00:14:06.060 humility, and repetition. You can train it. You can get meaningfully better. That said,
00:14:13.680 scientific thinking is a practice. It's not an achievement. Good thinking now doesn't guarantee
00:14:19.580 good thinking later. It's something you have to keep doing or it atrophies. Most of us think we
00:14:25.420 already do this, but most of us are wrong about how consistently and effectively we do it.
00:14:31.140 So let's talk about why. Here's the core thesis. Thinking scientifically is not just hard,
00:14:36.640 it's unnatural. And I mean that in a very literal biological sense. We're primates. That means we
00:14:44.440 have been evolved as social animals for roughly 50 million years. For the vast majority of that
00:14:50.660 time, your survival depended on your standing within a social group. If the group accepted you,
00:14:57.240 you had access to food, mates, and protection. If the group rejected you, you were in real danger.
00:15:04.600 Exile wasn't an inconvenience, it was often a death sentence. Social belonging was a survival
00:15:11.400 imperative and our brains were shaped over tens of millions of years to be exquisitely good at
00:15:18.120 navigating social environments, reading faces, building alliances, signaling loyalty, maintaining
00:15:25.080 status. This is what our cognition was optimized for. We do not just use social skills, we
00:15:32.680 fundamentally rely on social groups and social information. We are also an intelligent species,
00:15:39.640 but most learning for most of our history happened through imitation or through language,
00:15:45.560 and both of these are intrinsically social. You learn from members of your group, be it watching
00:15:51.640 or listening. The information you receive is filtered through trust, status, and identity.
00:15:58.360 Even reading a study alone in your office, by the very act of using language, is participating in
00:16:05.080 a social system of knowledge. Now, here's where the timeline narrows. The first stone tools are
00:16:12.420 around 3 million years old. Homo sapiens, which is what we are, showed up about 250,000 years ago.
00:16:21.640 Formal logic was systematized roughly 2,500 years ago, and a formal system of empiricism,
00:16:29.740 the basis of the scientific method, is maybe 400 years old. That's it. A few hundred years
00:16:38.540 of empiricism built on top of 50 million years of primate social cognition.
00:16:44.840 Logic and hypothesis testing are not our default state, and can even be at odds with our fundamental
00:16:52.540 sociability. We form groups, form identities around these groups, and let group membership
00:16:58.540 shape what we believe and how we interpret evidence. Social information can and will
00:17:04.820 override logical information. This isn't a bug that only shows up in uneducated people,
00:17:10.780 it's a basic feature of human cognition. Evolution shaped our brains, but evolution
00:17:17.420 works on things that are good enough. Good enough to survive as a hunter-gatherer was good enough
00:17:23.500 for evolution. We weren't shaped to be the ultimate logicians. We were shaped to be good
00:17:29.680 enough logicians to outcompete other animals, to access resources other animals couldn't,
00:17:35.000 to make fast decisions in uncertain environments, and to do it within an intricate social structure.
00:17:40.780 That's a very different optimization target than figure out what's inviolably true.
00:17:48.000 The pursuit of science requires almost the opposite, holding multiple hypotheses,
00:17:54.740 tolerating uncertainty for years, understanding counterintuitive concepts like conditional
00:18:00.960 probability and effect size, willingness to change beliefs that are deeply held,
00:18:05.820 and socially costly to abandon.
00:18:07.960 When you frame it that way, the real question isn't why scientific thinking is hard. It's frankly,
00:18:14.380 how do we manage it at all? It's more amazing that we can do this than it is surprising
00:18:20.800 that we struggle with it. And yet we do manage it. Now, if there were a second thesis within
00:18:27.680 this section, it would be this. Despite the limitations of our biology, despite our pull
00:18:33.920 toward fast, social, identity-protective reasoning, we have invented, notice I use the word invented,
00:18:42.560 a remarkable set of corrective tools. But importantly, we built structures, formal
00:18:49.480 structures, specifically designed to counteract our natural tendencies. Peer review, blind
00:18:57.280 experiments, pre-registration of hypotheses, statistical frameworks. These aren't just tools.
00:19:05.840 I think of them as prosthetics for objectivity. They exist precisely because we've recognized,
00:19:12.920 at some point collectively, that we couldn't trust our own unassisted judgment. And instead
00:19:19.800 of giving up, we engineered workarounds. Think about what a double-blinded clinical trial
00:19:27.060 actually is. It is an explicit admission that even well-trained experts can't be trusted
00:19:34.780 to evaluate outcomes without being influenced by what they hope to find. So we remove this
00:19:41.600 information. We build a system that assumes we're biased and corrects for it. Science also
00:19:47.600 institutionalized productive disagreement. Peer review is adversarial by design. The norm of
00:19:55.640 replication, the idea that your finding doesn't count until someone else can reproduce it, says
00:20:01.600 something remarkable. We don't trust any one of us, but we trust the process. And at the individual
00:20:08.780 level, we can train ourselves to be better, to engage in science as a process rather than a set
00:20:15.580 of facts to accept or reject. It is slow and it is humbling, but it works.
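To picture one of those prosthetics concretely, here is a schematic sketch of the logic behind the double-blind allocation described a moment ago. Everything in it is illustrative, a toy version of the idea rather than a real trial system: a sealed, randomized key maps participants to arms, everyone records outcomes against IDs alone, and the key is opened only at analysis time.

import random

def allocate(participants, seed):
    """Balanced random assignment; the returned key stays sealed until unblinding."""
    rng = random.Random(seed)
    arms = ["treatment", "placebo"] * (len(participants) // 2)
    rng.shuffle(arms)
    return dict(zip(participants, arms))

sealed_key = allocate(["P001", "P002", "P003", "P004"], seed=7)

# During the trial, clinicians and patients see participant IDs only,
# so hopes about the outcome can't leak into the measurements.
outcomes = {"P001": 0.8, "P002": 0.5, "P003": 0.9, "P004": 0.4}

# Only after all outcomes are recorded is the key opened and data grouped by arm.
for pid, score in outcomes.items():
    print(pid, sealed_key[pid], score)

The point is structural: the system, not the investigator's self-control, is what removes the bias.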
00:20:22.520 All right. So far, we've been talking about the structural level, how science as an institution
00:20:29.860 has built systems to overcome our natural cognitive limitations. Now I want to shift gears
00:20:36.460 and talk about something that's probably more directly relevant to most of you. How we as
00:20:42.140 individuals, even if we're not professionally interacting with the scientific method on a
00:20:46.960 daily basis ourselves, can integrate scientific principles into our daily lives. We're going to
00:20:53.640 look at five ideas. And while they're all important tools, we're going to really dive into the fifth,
00:21:01.360 how to approach outsourcing our thinking when necessary. First, let's start with the idea that
00:21:08.060 we want to treat certainty as a cue to slow down. When you encounter a claim and you feel certain
00:21:15.080 about it, treat that certainty as a signal to pause. Certainty is a feeling, not an indicator
00:21:21.620 of truth. Your brain generates it for all sorts of reasons that we've talked about. Social consensus,
00:21:26.900 emotional resonance, familiarity, repetition, the confidence of the speaker. None of those
00:21:32.240 have anything to do with whether the claim is correct. When you notice certainty, ask yourself,
00:21:37.660 why do I believe this? If the answer is social or based on identity, everyone in my feed agrees,
00:21:43.940 the person sounded confident, people I identify with believe this, I want this to be true,
00:21:49.100 that's a red flag. It doesn't mean the claim is wrong, but it means your basis for believing it
00:21:54.940 is social, not evidential. If the answer is that you've seen the data, you've understood it,
00:22:01.840 you've considered alternatives, and you still find this conclusion most compelling,
00:22:05.440 you're in a much better place. Now, here's the recursive part that makes this so powerful.
00:22:11.560 Asking yourself this question in the first place is step one, and the more you do it,
00:22:16.540 the more honestly you'll do it. If you start by being certain that you're always logical,
00:22:22.100 you'll eventually learn to ask yourself why you're so certain you're being logical.
00:22:26.760 The questioning deepens over time. It's hard to say when one masters this skill,
00:22:32.500 because frankly, I believe it looks different for different people. But this is the process.
00:22:37.820 Question your certainty. Question your questioning. Find the certainty and uncertainty and build
00:22:43.160 comfort in that uncomfortable space. You won't become perfect at this. I know I sure haven't.
00:22:49.960 But we can get monumentally better. We can get better at knowing what we do not know.
00:22:55.340 and that awareness is worth a lot. Okay, second, judge the process, not just the conclusion.
00:23:03.560 When someone makes a claim, most of us instinctively evaluate the conclusion. Is this true?
00:23:08.920 Do I agree? That's natural, but it's not the first question to ask. The first question should be,
00:23:15.560 how did this person arrive at this? What evidence? How strong? What alternatives were considered?
00:23:21.740 What do critics say? And have they engaged with those criticisms?
00:23:26.780 When you start asking these questions, something shifts. You stop evaluating claims as things to
00:23:33.000 agree or disagree with, and you start evaluating them as products of a process. And the quality
00:23:39.260 of that process tells you far more than the conclusion itself. A good process can produce
00:23:45.220 a wrong conclusion. That happens in science all the time. But a bad process that happens to produce
00:23:51.240 a right conclusion is not something to trust because it got there by accident and it won't
00:23:56.620 be reliable going forward. Engaging with the process, how did we get this conclusion, is the key.
00:24:04.280 Now, let me make this concrete with a tangible example. Detox cleanses, juice cleanses,
00:24:10.940 supplement protocols, products that claim to remove toxins from your body. It starts with a
00:24:17.040 real observation. You don't feel as good as you used to. Maybe you're tired, your digestion is
00:24:22.260 off, your skin is feeling dull, and we all know there are chemicals, pollutants, and food additives
00:24:28.300 in our environment that aren't doing us any favors. Okay, all of this is true. The basis is real.
00:24:34.300 That's what lures us in. Then comes the conclusion. Drink this, stop eating that, take this capsule,
00:24:41.080 and those things go away. The process failure is the absence of everything in between.
00:24:47.740 How specifically does this product remove toxins? Which toxins? Were they measured before and after?
00:24:55.300 How were they measured? What was the control? What's the mechanism by which the juice or
00:25:00.920 supplement binds to, mobilizes, and eliminates a specific harmful substance from your body?
00:25:07.160 Almost every time the answer to those questions is silence or vague gesturing at flushing and
00:25:13.820 purifying. No real mechanism and no real study. A conclusion has been asserted with no specific
00:25:21.100 hypothesis being tested, no mechanism being described, no blinding, no control. It's a leap
00:25:28.560 straight from a real observation, a real problem, to a marketed solution with none of the work done
00:25:34.860 in the middle. We can even be tricked by a lived experience here. Maybe your headaches do go away
00:25:41.240 when you consume nothing but lemon juice for three days. Maybe your skin does clear up or your
00:25:46.480 digestion feels better. Maybe there is a real effect, but what confidence do we have in its
00:25:52.400 cause? By dramatically altering our diet, we alter numerous features of our physiology.
00:25:58.340 By definition, we're eating different foods, drinking different fluids, we're changing the
00:26:02.420 inputs our bodies metabolize, giving rise to how we feel and think. How do we know,
00:26:08.720 without an appropriate process, that a toxin was purged from the body rather than a toxin was
00:26:15.220 removed from your diet? Even if the effect is real, its explanation can miss the mark entirely
00:26:20.900 and get you stuck repeatedly enduring three to seven days of intentional starvation in some
00:26:27.720 proprietary placebo for years when cutting some element of your diet, some processed food, for
00:26:34.840 example, or portion size is actually doing the work. Without a controlled investigation, we're
00:26:40.420 fed a conclusion, but not the means to judge its validity. Now, detox cleanses might feel like an
00:26:47.180 easy target, but the argument structure, real observations straight to confident conclusions,
00:26:52.220 can be far more subtle than a bottle of green juice. It shows up in supplement marketing,
00:26:58.480 in wellness claims, and in things that sound much more sophisticated than a cleanse.
00:27:04.060 What we're training ourselves to notice is the jump, problem to conclusion, with nothing rigorous
00:27:11.000 connecting them. And while we're on supplements, here's a related example of why process questions
00:27:16.800 matter. You'll often see supplement companies claim their products are third-party tested,
00:27:22.160 and that sounds reassuring. But if you ask a process question, what specifically are they
00:27:27.480 testing for? The answer is often simply heavy metal contamination, which means you aren't
00:27:33.160 getting assurance that your ashwagandha capsule contains ashwagandha. You're getting, at best,
00:27:38.800 assurance that your ashwagandha capsule doesn't contain toxic levels of lead.
00:27:43.160 It's not that lead contamination isn't a problem, nor is it an overt lie that the product was tested,
00:27:49.380 but by omitting the step where you question the process, asking what the third-party testing was
00:27:54.880 for specifically, we can be lulled into a sense of confidence that the testing process, in reality,
00:28:01.460 wasn't even designed to provide. This is what evaluating processes looks like in everyday life,
00:28:07.960 not necessarily reading clinical trials per se, just pausing long enough to ask how someone got
00:28:14.440 from the problem to the conclusion and noticing when the answer is they didn't bother or they
00:28:21.040 didn't bother to do it right. Okay. Third, notice when identity is doing your thinking.
00:28:28.260 This is a hard one. Maybe the hardest on the list. Coalitional thinking is our default mode for all
00:28:35.460 of the reasons I described earlier. It is hardwired into our DNA motherboard, and it can be the enemy
00:28:42.480 of scientific thinking. No group is always right. No political group, no activist group, no scientific
00:28:49.900 group. If you find yourself believing that your team has the right answer on every issue, that's
00:28:56.320 not a sign that you found the right team. It's a sign that your group identity is doing your
00:29:01.420 thinking for you. There's a great line from the movie Men in Black. A person is smart, people are
00:29:07.380 dumb, panicky, dangerous animals, and you know it. Individual thinking can be remarkably rational.
00:29:15.780 Groups driven by identity often aren't. The discipline is to consider arguments on their
00:29:21.960 merits, not based on where they're coming from. That means engaging with arguments from people
00:29:27.720 you generally disagree with, and questioning arguments from people you generally trust.
00:29:33.620 Let me give you two examples. Most of us are familiar with Galileo and the heliocentric
00:29:38.720 model. Galileo presented evidence that the earth revolves around the sun. But this conflicted with
00:29:46.220 Aristotle's physics, which the church had adopted as essentially doctrine. Galileo was tried,
00:29:53.480 forced to recant, and spent the rest of his life under house arrest. The evidence didn't matter
00:29:59.720 because the conclusion threatened the identity and authority of the institution evaluating it.
00:30:05.980 This example is famous, but it isn't terribly relatable. The example I find even more instructive
00:30:13.300 comes from inside the medical community itself, and it's actually something I wrote about
00:30:17.720 in Outlive. In the 1840s, Ignis Semmelweis was working in the maternity ward of the Vienna
00:30:25.600 General Hospital. The hospital had two clinics, one staffed by doctors and one by midwives.
00:30:32.460 Mortality from childbed fever in the doctor's clinic was roughly five times higher than in
00:30:39.480 the midwife clinic, and Semmelweis wanted to understand why. He systematically ruled out
00:30:44.920 explanations, birthing positions, the route of the clinic's priest, until a colleague died after
00:30:52.520 cutting his finger during an autopsy on one of the childbed fever patients. And the colleague
00:30:59.120 died of symptoms identical to childbed fever. This was the light bulb moment for Semmelweis.
00:31:06.360 The dead bodies were carrying something, and doctors were carrying that material from autopsies
00:31:13.620 directly to deliveries. Midwives didn't perform autopsies, so they couldn't carry material from
00:31:21.320 the autopsy to the maternity patient. He required physicians to wash their hands with chlorinated
00:31:27.700 lime before working with maternity ward patients, and mortality dropped from 18% to under 2%.
00:31:35.620 In some months, all the way to zero. And yet, the medical establishment rejected it.
00:31:44.340 Now, this is where it gets really interesting for our purpose, because the rejection wasn't
00:31:48.680 purely religious or political. It wasn't just that doctors didn't want to believe it. Germ theory
00:31:55.040 didn't exist yet. They had what sounded like a legitimate scientific objection. The dominant
00:32:01.340 theory of disease transmission was miasma, the idea that disease was caused by bad air,
00:32:08.860 by noxious fumes. Now under that framework, the idea that invisible material on your hands could
00:32:15.580 transmit disease didn't make any sense. You can't wash bad miasmas off your hands. Semmelweis had
00:32:22.800 gone through the right process, found the right conclusion, but he couldn't explain why his
00:32:28.360 intervention worked in terms that fit any accepted theory of the time. His findings let doctors tell
00:32:35.440 themselves, and each other, that they were rejecting Semmelweis on scientific grounds.
00:32:41.900 His conclusions couldn't fit the prevailing theory. But layered underneath that objection
00:32:47.640 was something much more primal. Accepting Semmelweis's data meant accepting that doctors
00:32:53.980 had been killing their patients. That their own hands, the instruments of healing, the symbols
00:32:59.920 of their professional identity, had been vectors of death. That was an identity-level threat.
00:33:06.280 And the stated scientific objection gave cover to the unstated identity defense.
00:33:13.180 That pattern, identity-based motivation hiding behind scientific-sounding skepticism,
00:33:18.740 undoubtedly happens today. The lesson isn't that you should distrust doctors. It's that even
00:33:25.600 trained experts can resist evidence when accepting it threatens identity, status, or the story they've
00:33:32.300 been telling themselves, even in the face of overwhelming evidence, even when the process
00:33:37.620 is right. That's how powerful the pull of identity can be. Becoming aware of it is difficult,
00:33:43.900 but critical for scientific decision-making. Okay, the fourth one, don't confuse criticism
00:33:50.920 with understanding. This one is practical and I think underappreciated. In science,
00:33:57.400 we need to respect the asymmetry between building knowledge and attempts to discredit it.
00:34:04.080 It is vastly easier to criticize a study than it is to design and run one. It's vastly easier
00:34:10.200 to poke holes in evidence than it is to generate evidence. This is just a structural fact about how
00:34:16.420 science works. Every study can be criticized. I mean that literally. Show me any study ever
00:34:22.260 published and I, or any expert in that field, can find a legitimate methodological concern.
00:34:28.520 The sample size could have been bigger, the follow-up period could have been longer,
00:34:32.000 the control group wasn't perfectly matched, there's residual confounding, the primary endpoint
00:34:37.540 was a surrogate. The population studied doesn't quite generalize the way the authors suggest.
00:34:43.360 Those are real concerns and they matter, but they apply to everything. So the question is not,
00:34:49.460 can this study be criticized? The answer to that question is always yes. The question is,
00:34:55.940 is this study informative despite its limitations? And answering that question requires a kind of
00:35:03.340 judgment, a willingness to synthesize, to weigh evidence, to say this isn't perfect but it moves
00:35:12.020 the needle, that pure criticism doesn't require. There's a concept called Brandolini's Law,
00:35:19.680 the bullshit asymmetry principle, which says the energy needed to refute bullshit is an order of
00:35:26.960 magnitude larger than what's needed to produce it. Someone committed to casting doubt can always
00:35:35.000 outrun someone committed to building understanding. As the saying often attributed to Mark Twain goes, a lie can travel halfway
00:35:42.100 around the world while the truth is putting on its shoes. When hot-button issues arise,
00:35:48.840 the goal of science is to move the field forward, gathering better evidence, improving models.
00:35:55.920 Be wary of people who only criticize and never synthesize.
00:36:00.860 It's not that scientists should be shielded from the public,
00:36:03.580 but the goal should be focusing on what is built, what data exists,
00:36:08.640 and how this helps us get closer to the truth,
00:36:11.760 not playing whack-a-mole with whatever mess seems to be sticking to the wall this week.
00:36:17.480 Okay, the fifth and final principle, outsource your thinking carefully.
00:36:23.440 It is important to recognize that no human being can exist in the modern world given the sheer
00:36:30.780 expansion of knowledge without relying on the expertise of others. This has nothing to do with
00:36:37.900 intelligence. It is simply not possible. It is metaphysically impossible for any one individual
00:36:45.180 to be a true expert across all domains. And every day each of us makes decisions, from boarding an
00:36:52.700 airplane to navigating a complex system that depend on the judgment and competence of people
00:36:58.700 whose domain knowledge exceeds our own. There are effectively infinite examples of this reliance,
00:37:05.600 which makes the central question unavoidable. How do we decide in whom to place our trust?
00:37:13.600 Before we dive in, I'll take a moment to step back and tell you where I want to take us.
00:37:18.740 The goal here is for you to build what I think of as a personal board of advisors. For any topic
00:37:25.860 that matters to you, identify two or three people or outlets whose judgment you trust,
00:37:31.200 and be honest about why you trust them. When you find these people, have them help you cut through
00:37:37.460 the noise. When I'm evaluating whether or not to trust someone on a scientific topic, I run through
00:37:43.560 a set of questions. Not necessarily formally, not with a clipboard or an Excel sheet, but these are
00:37:49.560 the things I'm thinking about when I'm reading a paper, listening to a podcast, or watching someone
00:37:54.840 on YouTube. I'm thinking about it in three layers. First, who is this person? Second, how are they
00:38:01.660 thinking? Third, what should make me cautious? And we'll walk through each of these layers.
00:38:09.140 Okay, layer one.
00:38:10.360 I ask, who is this person?
00:38:12.860 What is their actual expertise?
00:38:15.380 We'll get this out of the way first.
00:38:16.900 Credentials aren't conclusive, but they're a meaningful starting point.
00:38:21.220 If someone has a PhD in molecular biology and they're talking about a molecular biology
00:38:26.660 finding, the starting probability that they know what they are talking about is higher
00:38:32.740 than for someone who learned about it from a YouTube video last week.
00:38:36.880 This is not elitism.
00:38:38.380 It's called Bayesian reasoning. Credentials set a prior, but that prior should be updated based on
00:38:45.840 how the person actually reasons. Do they show their work? Do they engage with criticism? Do
00:38:50.720 they acknowledge what they don't know? The worst mistake is dismissing the importance of credentials
00:38:56.140 entirely. The second worst mistake is treating them as conclusive, which is to say that credentials
00:39:02.140 aren't the whole story. With or without them, the question is the same. Has this person done the
00:39:07.020 work? Are they deeply embedded in the field, or are they weighing in on something they're
00:39:13.020 passingly familiar with? How long have they been at it? What is their track record? And it's worth
00:39:19.160 remembering, nobody is an expert in everything. There are several examples where a Nobel laureate
00:39:24.740 in one field goes on to make outlandish and conspiratorial claims that contradict every single
00:39:30.520 expert in a different field. In fact, this was the case with Kary Mullis, the inventor of PCR
00:39:36.140 and winner of the Nobel Prize in Chemistry, who denied that HIV was the causal virus in AIDS.
00:39:42.680 His ideas drove policymaking in South Africa in the early 2000s, and likely led to the death of
00:39:50.220 more than a quarter of a million people. The people you trust on one topic may be completely
00:39:56.120 out of their depth in another. Someone can be the right person to trust in one domain,
00:40:01.100 but the board of advisors needs other people to fill in the gaps.
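To make the Bayesian framing concrete, here is a toy sketch. The prior and the likelihood ratios below are invented for illustration, not measured quantities; the shape of the reasoning is the point. Credentials set a starting probability, and each observed behavior, showing reasoning, steelmanning critics, selling products, multiplies the odds up or down:

def bayes_update(prior, likelihood_ratios):
    """Bayes' rule in odds form: posterior odds = prior odds times the product of the ratios."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

prior = 0.70  # hypothetical starting credibility: credentialed expert inside their own field

observations = [
    3.0,  # shows their reasoning and anchors claims to data (supports trust)
    2.0,  # engages the strongest version of opposing arguments (supports trust)
    0.2,  # every pitch ends in an affiliate link (erodes trust)
]

print(f"Posterior credibility: {bayes_update(prior, observations):.2f}")  # ~0.74

The exact numbers are arbitrary; what the structure captures is that no single input, the credential included, is conclusive on its own.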
00:40:06.060 Another big question to ask when judging someone's credibility is how they approach
00:40:10.860 presentations. Are they explaining or performing? Science uses a lot of technical language or jargon
00:40:17.960 that isn't necessarily a part of normal day-to-day language. How the jargon is used matters. Using
00:40:25.140 jargon without context is kind of a hallmark of an attempt to mislead a listener, deploying
00:40:31.140 technical terms to impress an audience rather than inform them. It is the performance of sounding
00:40:37.360 scientific in an attempt to appear credible rather than using technical terms to
00:40:42.980 add precision to complex ideas. We use jargon on this podcast, but we try to use it to bring you
00:40:49.680 with us not to create a gate. If someone is hiding behind jargon instead of using it to elevate their
00:40:55.900 audience, they could very well be hiding something in hopes the audience won't catch it. Credentials
00:41:02.180 and familiarity with technical language are good on paper, but how they're utilized, how this person
00:41:08.120 interacts with their field and with their audience are the deciding factors in building trust.
00:41:15.100 Which brings us to the second layer, how are they thinking? Now this layer is of course critical,
00:41:21.380 but it's a bit more nuanced because again, science is a process. We want to know the person we are
00:41:27.160 listening to is engaging with a process, not simply a series of conclusions. So the first
00:42:33.160 question here is, do they show their reasoning? Not just the final answer, but how they got there,
00:41:39.160 why they believe what they believe, what evidence they're relying on, and what alternatives
00:41:45.720 they've considered. Transparent reasoning is one of the clearest signals of someone worth
00:41:51.040 listening to. We also want to know how they treat disagreement. Pay attention to how an expert
00:41:56.900 talks about people they disagree with. A steelmanner presents the strongest version of the
00:42:02.640 opposing argument and then explains why they still disagree. A strawmanner presents the
00:42:08.840 weakest version of the position so that it's easy to knock down. If someone consistently engages with the best
00:42:14.720 versions of the other side, they're doing real intellectual work, and they're far more likely
00:42:20.760 to update when the opposing case gets stronger. If they only attack the weakest version,
00:42:27.060 they're performing, and not seriously engaging with the fundamental possibility that their
00:42:33.100 conclusions could be wrong. And to know how they're reaching their conclusions,
00:42:38.020 how they are engaging with disagreements, we ask, are their opinions anchored to data?
00:42:45.800 Let's go back to Richard Feynman, one of my favorite thinkers, but this time in his own
00:42:51.020 words from a lecture that he gave probably sometime in the 1960s.
00:42:55.400 Now I'm going to discuss how we would look for a new law. In general, we look for a new law by the following process. First, we guess it. Then we, don't laugh, that's really true. Then we compute the consequences of the guess to see what, if this is right, if this law that we guessed is right, we see what it would imply.
00:43:16.900 and then we compare those computation results to nature.
00:43:22.400 Or we say compare it to experiment or experience.
00:43:27.720 Compare it directly with observation to see if it works.
00:43:34.460 If it disagrees with experiment, it's wrong.
00:43:40.940 In that simple statement is the key to science.
00:43:44.140 It doesn't make any difference how beautiful your guess is.
00:43:46.880 It doesn't make any difference how smart you are who made the guess or what his name is.
00:43:51.000 If it disagrees with experiment, it's wrong.
00:43:54.460 That's all there is to it.
00:43:56.520 Concepts matter.
00:43:58.440 Mechanisms matter.
00:44:00.300 Hypotheses matter.
00:44:02.260 But data are the anchor.
00:44:04.240 If someone relies mostly on gut, charisma, or an elegant story that hasn't been tested,
00:44:09.880 that's a red flag.
00:44:11.480 On the other hand, if they can cite studies, discuss limitations, and locate key results
00:44:16.440 within a larger body of evidence, that tells you they've done the work.
00:44:21.260 Data and experimentation are the currency of scientific research.
00:44:25.260 Without understanding what data exists and how those data were collected,
00:44:29.400 we are missing the bedrock upon which scientific conclusions are built.
00:44:34.800 Finally, do they acknowledge uncertainty and have they changed their mind?
00:44:39.960 These two go together.
00:44:42.140 Someone who tells you what they don't know, not just what they do know, is operating with
00:44:47.080 the right kind of humility.
00:44:48.800 And someone who has publicly changed their mind, who has said, I used to think X,
00:44:55.020 now I think Y, here's what changed, that's one of the strongest signals available.
00:45:01.820 The social cost of changing your mind publicly is enormous.
00:45:06.700 Doing it anyway tells you this person cares more about getting it right
00:45:11.340 than being perceived as an infallible authority.
00:45:15.480 These people, the people who care about getting it right,
00:45:18.780 are the people we want to trust.
00:45:21.000 With these positive signs to look for, it's also worth considering the opposite.
00:45:27.120 This is our third layer.
00:45:29.060 What should make you cautious?
00:45:31.540 Or what are the red flags to look for when building our board of advisors?
00:45:36.880 First up is asking, how do they make money?
00:45:39.980 What is their reward?
00:45:41.940 Does this person's value come from providing insight and truth by being consistently
00:45:46.880 right and useful over time, or do they make money by selling products or engagement?
00:45:53.080 If a person's scientific pitch always ends in a link to their company or their
00:45:58.700 affiliate link to their supplement or a promo code, you are listening to an
00:46:03.400 advertisement, not science.
00:46:05.940 Now, sometimes scientists endorse or make deals with products they genuinely believe in.
00:46:10.340 That absolutely happens. But if someone is always making money off a product rather than off
00:46:16.860 education, that's a major red flag. Their financial incentive is not aligned with your
00:46:22.560 well-being. It's aligned with your purchasing behavior. Those are not the same thing.
00:46:27.380 The more insidious version of this is engagement-based incentives. If someone is always
00:46:33.880 contrarian, always telling you everyone is lying except them, especially on platforms like TikTok
00:46:40.160 and YouTube and Instagram, their business model is your engagement. What gets engagement? Outrage,
00:46:47.700 contrarianism, the feeling that you're getting secret knowledge and that if you're not listening
00:46:52.820 to them, you're getting duped. Their product isn't getting truth to you. It's your attention, sold
00:46:59.120 to advertisers. Another key question when looking for who to trust is to ask, is their position
00:47:06.360 consistent with the weight of the evidence? Here is where I want to talk about scientific consensus
00:47:11.200 and how to best interact with it. Scientific consensus is not a vote. It's not a popularity
00:47:18.040 contest among scientists. Consensus forms when the evidence in a particular area becomes so
00:47:24.520 overwhelming that virtually all qualified people who genuinely engage with it arrive at the same
00:47:31.440 conclusion. This does not make it infallible. Consensus has been wrong before and will be
00:47:38.540 wrong again. But the prior probability that the consensus position is accurate is much,
00:47:45.140 much higher than the prior probability that any individual dissenter is correct.
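Sketched in the same odds form as the earlier trust example, with made-up numbers chosen only to show the asymmetry: a consensus-level prior barely moves in response to rhetoric, but sufficiently strong new data can overturn it.

prior = 0.99                      # confidence in a well-established consensus position
prior_odds = prior / (1 - prior)  # 99:1 in favor of the consensus

# The likelihood ratio says how strongly a dissenter's evidence favors their view.
for label, lr in [("rhetoric alone", 1.2), ("strong new data", 500.0)]:
    posterior_odds = prior_odds / lr
    posterior = posterior_odds / (1 + posterior_odds)
    print(f"{label:15s} -> confidence in consensus: {posterior:.2f}")  # 0.99 vs 0.17

The arithmetic is trivial, but it shows why a heavy prior demands heavy evidence.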
00:47:51.620 Now, countering consensus is part of the scientific process. It's very important. It's
00:47:57.580 how science moves forward. But, and this is the key, it should be built on critiques of data
00:48:04.760 interpretation, or on new data, or on identification of missing pieces of information.
00:48:11.420 Consensus is built on data. It takes data, therefore, to change the consensus. If someone
00:48:18.060 is opposing the consensus based on vibes or ideology, or the claim that everyone else is
00:48:24.020 corrupt or compromised, that's not science. That's identity. It is, again, performative.
00:48:31.500 And for our final red flag, is this person always right and everyone else always wrong?
00:48:38.580 Dissent is a normal part of the scientific process, but is the consensus really always
00:48:44.300 trying to trick us? And is one person or group really the only clear thinker in a sea of
00:48:50.160 charlatans? The reality is that we are all wrong to some degree. The difference is that serious
00:48:57.520 thinkers, serious scientists work to become less wrong over time. There is not always a boogeyman.
00:49:06.160 There is not always a conspiracy. In science, there is always more data to collect. And I want
00:49:12.600 to address something really directly here. This same fact that science updates and sometimes
00:49:19.460 gets things wrong, gets used as a weapon against science itself. You'll hear people say,
00:49:26.060 they used to say eggs were bad, so why should I trust them about anything? That sounds compelling
00:49:31.160 for about three seconds. Yes, science got the egg cholesterol story wrong, but it updated it.
00:49:38.260 That is the system working. It finds weaknesses in the armor and seeks to repair or replace it.
00:49:45.640 But the person who uses that to argue that all guidance is equally unreliable,
00:49:52.160 well, that's just nonsense. That's throwing the baby out with the bathwater.
00:49:56.980 Some things have been tested so exhaustively that the remaining uncertainty is vanishingly small.
00:50:03.680 And treating all scientific conclusions as equally shaky isn't skepticism. It's a performance of
00:50:10.160 skepticism designed to let you ignore what you find inconvenient. It's an attempt to rope you
00:50:17.280 into their camp, circumventing your ability to critically evaluate individual claims on their own
00:50:23.600 strengths and weaknesses. We will continue to make mistakes, but we update with data. If you
00:50:30.760 have a legitimate challenge, show me the data. Science will follow. That's what it's built to
00:50:37.560 do. The existence of past errors isn't evidence that all current conclusions are wrong. It's
00:50:44.440 evidence that the process works. If I had to distill this into a single practice, it would be
00:50:51.360 this. Before you accept a claim from anyone, run through these questions. Who is this person? How
00:50:57.820 are they thinking? And are there any red flags that should make me cautious? You won't run through
00:51:03.180 every sub-question every time, but the more often you do, the better your filters get.
00:51:08.780 And as a final point, look for people who reason the way science reasons. People who say things
00:51:14.640 like, if X were true, we'd expect to see Y, but you don't see Y, so X becomes less likely, and Z
00:51:20.660 is our best current explanation. That logical structure, ruling things out, building confidence
00:51:27.260 in what survives is the heartbeat of good science. It isn't the only thing that makes a good
00:51:32.840 scientist, and it isn't how good scientists always talk about science, but when you see it,
00:51:38.400 you'll know: ruling things out is the argument structure of someone deeply familiar with the
00:51:44.420 scientific process. And even when they're trying to simplify the narrative, they can't help but
00:51:50.120 slip in the fundamentals. All right, let's land this plane. You do not need to become a scientist
00:51:58.320 to think more scientifically. You need to get better at three key things. Noticing when certainty
00:52:06.540 and identity are misleading you. Judging the quality of a process rather than just the
00:52:12.320 conclusions it's producing. And choosing who to trust when you can't do the analysis yourself.
00:52:18.400 The goal is not perfect certainty. That isn't what science can offer us. It offers us a disciplined way to become less wrong through time. The goal is better calibration, better judgment, and a willingness to update. And that's a goal every one of us can work toward.
00:52:40.720 Thank you for listening to this week's episode of The Drive. Head over to peteratiamd.com
00:52:47.740 forward slash show notes if you want to dig deeper into this episode. You can also find me
00:52:53.760 on YouTube, Instagram, and Twitter, all with the handle peteratiamd. You can also leave us
00:52:59.340 a review on Apple Podcasts or whatever podcast player you use. This podcast is for general
00:53:05.360 informational purposes only and does not constitute the practice of medicine, nursing,
00:53:09.260 or other professional healthcare services, including the giving of medical advice.
00:53:14.080 No doctor-patient relationship is formed. The use of this information and the materials linked to
00:53:19.720 this podcast is at the user's own risk. The content on this podcast is not intended to be
00:53:25.160 a substitute for professional medical advice, diagnosis, or treatment. Users should not disregard
00:53:30.400 or delay in obtaining medical advice for any medical condition they have, and they should seek
00:53:34.960 the assistance of their healthcare professionals for any such conditions.
00:53:39.060 Finally, I take all conflicts of interest very seriously. For all of my disclosures and the
00:53:44.200 companies I invest in or advise, please visit peteratiamd.com forward slash about where I keep
00:53:51.460 an up-to-date and active list of all disclosures.