00:38:38.380It's called Bayesian reasoning. Credentials set a prior, but that prior should be updated based on
00:38:45.840how the person actually reasons. Do they show their work? Do they engage with criticism? Do
00:38:50.720they acknowledge what they don't know? The worst mistake is dismissing the importance of credentials
00:38:56.140entirely. The second worst mistake is treating them as conclusive, which is to say that credentials
00:39:02.140aren't the whole story. With or without them, the question is the same. Has this person done the
00:39:07.020work? Are they deeply embedded in the field, or are they weighing in on something they're
00:39:13.020passingly familiar with? How long have they been at it? What is their track record? And it's worth
00:39:19.160remembering, nobody is an expert in everything. There are several examples where a Nobel laureate
00:39:24.740in one field goes on to make outlandish and conspiratorial claims that contradict every single
00:39:30.520expert in a different field. In fact, this was the case with Kary Mullis, the inventor of PCR
00:39:36.140and winner of the Nobel Prize in Chemistry, who denied that HIV was the virus that causes AIDS.
00:39:42.680His ideas drove policymaking in South Africa in the early 2000s, and likely led to the death of
00:39:50.220more than a quarter of a million people. The people you trust on one topic may be completely
00:39:56.120out of their depth in another. Someone can be the right person to trust in one domain,
00:40:01.100but the board of advisors needs other people to fill in the gaps.
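The Bayesian framing from the top of this section can be sketched in a few lines of code. This is a toy illustration of my own, not anything from the episode: the probabilities and likelihood ratios are made-up numbers chosen only to show the mechanics of starting from a credential-based prior and updating it as you watch how someone reasons.

```python
# Toy Bayesian update: credentials set a prior on "this person is reliable,"
# and each observed behavior (showing work, dodging criticism, admitting
# uncertainty) updates it. All numbers here are illustrative assumptions.

def update(prior: float, likelihood_ratio: float) -> float:
    """Update P(reliable) given evidence with the stated likelihood ratio
    (how much more likely this evidence is if the person is reliable)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Credentials alone: a strong but not conclusive prior.
p = 0.70

# Observed behaviors and assumed likelihood ratios (made-up values):
evidence = {
    "shows their work": 3.0,          # more expected from a reliable source
    "dodges criticism": 0.25,         # evidence against reliability
    "admits what they don't know": 2.0,
}

for behavior, lr in evidence.items():
    p = update(p, lr)
    print(f"after '{behavior}': P(reliable) = {p:.2f}")
```

The point of the sketch is the shape of the process, not the numbers: credentials move you off 50/50, but no single credential survives contact with consistently bad reasoning, and no lack of one survives consistently good reasoning.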
00:40:06.060Another big question to ask when judging someone's credibility is how they approach
00:40:10.860presentations. Are they explaining or performing? Science uses a lot of technical language or jargon
00:40:17.960that isn't necessarily a part of normal day-to-day language. How the jargon is used matters. Using
00:40:25.140jargon without context is kind of a hallmark of an attempt to mislead a listener, deploying
00:40:31.140technical terms to impress an audience rather than inform them. It is the performance of being
00:40:37.360scientific sounding in an attempt to appear credible rather than using technical terms to
00:40:42.980add precision to complex ideas. We use jargon on this podcast, but we try to use it to bring you
00:40:49.680with us, not to create a gate. If someone is hiding behind jargon instead of using it to elevate their
00:40:55.900audience, they could very well be hiding something in hopes the audience won't catch it. Credentials
00:41:02.180and familiarity with technical language are good on paper, but how they're utilized, how this person
00:41:08.120interacts with their field and with their audience are the deciding factors in building trust.
00:41:15.100Which brings us to the second layer, how are they thinking? Now this layer is of course critical,
00:41:21.380but it's a bit more nuanced because again, science is a process. We want to know the person we are
00:41:27.160listening to is engaging with a process, not simply a series of conclusions. So the first
00:41:33.160question here is, do they show their reasoning? Not just the final answer, but how they got there:
00:41:39.160why they believe what they believe, what evidence they're relying on, and what alternatives
00:41:45.720they've considered. Transparent reasoning is one of the clearest signals of someone worth
00:41:51.040listening to. We also want to know how they treat disagreement. Pay attention to how an expert
00:41:56.900talks about people they disagree with. Someone who steelmans presents the strongest version of the
00:42:02.640opposing argument and then explains why they still disagree. Someone who strawmans presents the
00:42:08.840weakest version so that it's easy to knock down. If someone consistently engages with the best
00:42:14.720versions of the other side, they're doing real intellectual work, and they're far more likely
00:42:20.760to update when the opposing case gets stronger. If they only attack the weakest version,
00:42:27.060they're performing, and not seriously engaging with the fundamental possibility that their
00:42:33.100conclusions could be wrong. And to know how they're reaching their conclusions,
00:42:38.020how they are engaging with disagreements, we ask, are their opinions anchored to data?
00:42:45.800Let's go back to Richard Feynman, one of my favorite thinkers, but this time in his own
00:42:51.020words from a lecture that he gave probably sometime in the 1960s.
00:42:55.400Now I'm going to discuss how we would look for a new law. In general, we look for a new law by the following process. First, we guess it. Then we, don't laugh, that's really true. Then we compute the consequences of the guess to see what, if this is right, if this law that we guessed is right, we see what it would imply.
00:43:16.900and then we compare those computation results to nature.
00:43:22.400Or we say compare it to experiment or experience.
00:43:27.720Compare it directly with observation to see if it works.
00:43:34.460If it disagrees with experiment, it's wrong.
00:43:40.940In that simple statement is the key to science.
00:43:44.140It doesn't make any difference how beautiful your guess is.
00:43:46.880It doesn't make any difference how smart you are, who made the guess, or what his name is.
00:43:51.000If it disagrees with experiment, it's wrong.
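Feynman's guess, compute, compare loop is essentially an algorithm, and it can be sketched as one. This is my own toy rendering, not Feynman's: the "law" is a guess about free fall, the observations are invented numbers, and the tolerance stands in for real measurement error.

```python
# A minimal sketch of Feynman's loop: guess a law, compute what it implies,
# compare with observation, and discard it if they disagree. The "law" here
# is a toy guess about free fall; all numbers are illustrative.

def guessed_law(t: float) -> float:
    """Our guess: distance fallen after t seconds is 4.9 * t**2 meters."""
    return 4.9 * t ** 2

def disagrees(predicted: float, observed: float, tolerance: float = 0.5) -> bool:
    """Stand-in for measurement error: beyond tolerance counts as disagreement."""
    return abs(predicted - observed) > tolerance

# Pretend these (time, distance) pairs came from an experiment.
observations = [(1.0, 4.8), (2.0, 19.7), (3.0, 44.3)]

# Compare the computed consequences of the guess against nature.
wrong = any(disagrees(guessed_law(t), d) for t, d in observations)
print("the guess is wrong" if wrong else "the guess survives, for now")
```

Note the asymmetry Feynman is pointing at: surviving the comparison never proves the guess; it only fails, for now, to disprove it. One clear disagreement with experiment is enough to kill it, no matter how beautiful it is.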
00:46:05.940Now, sometimes scientists endorse or make deals with products they genuinely believe in.
00:46:10.340That absolutely happens. But if someone is always making money off a product rather than off
00:46:16.860education, that's a major red flag. Their financial incentive is not aligned with your
00:46:22.560well-being. It's aligned with your purchasing behavior. Those are not the same thing.
00:46:27.380The more insidious version of this is engagement-based incentives. If someone is always
00:46:33.880contrarian, always telling you everyone is lying except them, especially on platforms like TikTok
00:46:40.160and YouTube and Instagram, their business model is your engagement. What gets engagement? Outrage,
00:46:47.700contrarianism, the feeling that you're getting secret knowledge and that if you're not listening
00:46:52.820to them, you're getting duped. Their product isn't delivering truth to you. It's selling your attention
00:46:59.120to advertisers. Another key question when looking for who to trust is to ask, is their position
00:47:06.360consistent with the weight of the evidence? Here is where I want to talk about scientific consensus
00:47:11.200and how to best interact with it. Scientific consensus is not a vote. It's not a popularity
00:47:18.040contest among scientists. Consensus forms when the evidence in a particular area becomes so
00:47:24.520overwhelming that virtually all qualified people who genuinely engage with it arrive at the same
00:47:31.440conclusion. This does not make it infallible. Consensus has been wrong before and will be
00:47:38.540wrong again. But the prior probability that the consensus position is accurate is much,
00:47:45.140much higher than the prior probability that any individual dissenter is correct.
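One way to see why a broad consensus carries a much higher prior than any lone dissenter is a Condorcet-style toy simulation. This is my illustration, not the episode's, and it leans on a simplifying assumption flagged in the comments: if each qualified evaluator independently reads the evidence correctly even 70% of the time, the majority of a large group is right far more often than any single evaluator.

```python
import random

# Toy Condorcet-style simulation: each of N evaluators independently reads
# the evidence correctly with probability p. How often is the majority right,
# versus a single evaluator? (Independence is a simplifying assumption; real
# scientists share data and methods, which weakens but rarely reverses this.)

random.seed(0)

def majority_correct(n_evaluators: int, p_correct: float,
                     trials: int = 10_000) -> float:
    """Estimate how often a simple majority reaches the right conclusion."""
    hits = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p_correct
                            for _ in range(n_evaluators))
        if correct_votes > n_evaluators / 2:
            hits += 1
    return hits / trials

print(f"single evaluator: {majority_correct(1, 0.7):.2f}")   # roughly 0.70
print(f"majority of 51:   {majority_correct(51, 0.7):.2f}")  # close to 1.00
```

That gap between the individual and the aggregate is the prior the dissenter has to overcome, which is exactly why overturning a consensus takes data rather than a confident personality.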
00:47:51.620Now, countering consensus is part of the scientific process. It's very important. It's
00:47:57.580how science moves forward. But, and this is the key, it should be built on critiques of data
00:48:04.760interpretation, or on new data, or on identification of missing pieces of information.
00:48:11.420Consensus is built on data. It takes data, therefore, to change the consensus. If someone
00:48:18.060is opposing the consensus based on vibes or ideology, or the claim that everyone else is
00:48:24.020corrupt or compromised, that's not science. That's identity. It is, again, performative.
00:48:31.500And for our final red flag, is this person always right and everyone else always wrong?
00:48:38.580Dissent is a normal part of the scientific process, but is the consensus really always
00:48:44.300trying to trick us? And is one person or group really the only clear thinker in a sea of
00:48:50.160charlatans? The reality is that we are all wrong to some degree. The difference is that serious
00:48:57.520thinkers, serious scientists work to become less wrong over time. There is not always a boogeyman.
00:49:06.160There is not always a conspiracy. In science, there is always more data to collect. And I want
00:49:12.600to address something really directly here. This same fact that science updates and sometimes
00:49:19.460gets things wrong, gets used as a weapon against science itself. You'll hear people say,
00:49:26.060"They used to say eggs were bad, so why should I trust them about anything?" That sounds compelling
00:49:31.160for about three seconds. Yes, science got the egg cholesterol story wrong, but it updated it.
00:49:38.260That is the system working. It finds weaknesses in the armor and seeks to repair or replace it.
00:49:45.640But the person who uses that to argue that all guidance is equally unreliable,
00:49:52.160well, that's just nonsense. That's throwing the baby out with the bathwater.
00:49:56.980Some things have been tested so exhaustively that the remaining uncertainty is vanishingly small.
00:50:03.680And treating all scientific conclusions as equally shaky isn't skepticism. It's a performance of
00:50:10.160skepticism designed to let you ignore what you find inconvenient. It's an attempt to rope you
00:47:17.280into their camp, circumventing your ability to critically evaluate individual claims on their own
00:47:23.600strengths and weaknesses. We will continue to make mistakes, but we update with data. If you
00:47:30.760have a legitimate challenge, show me the data. Science will follow. That's what it's built to
00:50:37.560do. The existence of past errors isn't evidence that all current conclusions are wrong. It's
00:50:44.440evidence that the process works. If I had to distill this into a single practice, it would be
00:50:51.360this. Before you accept a claim from anyone, run through these questions. Who is this person? How
00:50:57.820are they thinking? And are there any red flags that should make me cautious? You won't run through
00:51:03.180every sub-question every time, but the more often you do, the better your filters get.
00:51:08.780And as a final point, look for people who reason the way science reasons. People who say things
00:51:14.640like, if X were true, we'd expect to see Y, but you don't see Y, so X becomes less likely, and Z
00:51:20.660is our best current explanation. That logical structure, ruling things out, building confidence
00:51:27.260in what survives is the heartbeat of good science. It isn't the only thing that makes a good
00:51:32.840scientist, and it isn't how good scientists always talk about science, but when you see it,
00:51:38.400you'll know it. Ruling things out is the argument structure of someone deeply familiar with the
00:51:44.420scientific process. And even when they're trying to simplify the narrative, they can't help but
00:51:50.120slip in the fundamentals. All right, let's land this plane. You do not need to become a scientist
00:51:58.320to think more scientifically. You need to get better at three key things. Noticing when certainty
00:52:06.540and identity are misleading you. Judging the quality of a process rather than just the
00:52:12.320conclusions it's producing. And choosing who to trust when you can't do the analysis yourself.
00:52:18.400The goal is not perfect certainty. That isn't what science can offer us. It offers us a disciplined way to become less wrong through time. The goal is better calibration, better judgment, and a willingness to update. And that's a goal every one of us can work toward.
00:52:40.720Thank you for listening to this week's episode of The Drive. Head over to peteratiamd.com
00:52:47.740forward slash show notes if you want to dig deeper into this episode. You can also find me
00:52:53.760on YouTube, Instagram, and Twitter, all with the handle peteratiamd. You can also leave us
00:52:59.340a review on Apple Podcasts or whatever podcast player you use. This podcast is for general
00:53:05.360informational purposes only and does not constitute the practice of medicine, nursing,
00:53:09.260or other professional healthcare services, including the giving of medical advice.
00:53:14.080No doctor-patient relationship is formed. The use of this information and the materials linked to
00:53:19.720this podcast is at the user's own risk. The content on this podcast is not intended to be
00:53:25.160a substitute for professional medical advice, diagnosis, or treatment. Users should not disregard
00:53:30.400or delay in obtaining medical advice for any medical condition they have, and they should seek
00:53:34.960the assistance of their healthcare professionals for any such conditions.
00:53:39.060Finally, I take all conflicts of interest very seriously. For all of my disclosures and the
00:53:44.200companies I invest in or advise, please visit peteratiamd.com forward slash about where I keep
00:53:51.460an up-to-date and active list of all disclosures.