#188 - AMA #30: How to Read and Understand Scientific Studies
Episode Stats
Words per Minute
180.06
Summary
In this episode, Dr. Bob Kaplan and I discuss the process of studying studies, the strengths and limitations of each type of study, and how to make sense of the ever-changing landscape of scientific literature. This episode is a bit longer than normal, and we contemplated breaking it into two because it's so long, but I think our audience can handle this.
Transcript
00:00:00.000
Hey everyone, welcome to a sneak peek, ask me anything, or AMA, episode of The Drive podcast.
00:00:16.500
I'm your host, Peter Attia. At the end of this short episode, I'll explain how you can
00:00:20.460
access the AMA episodes in full, along with a ton of other membership benefits we've created,
00:00:25.440
or you can learn more now by going to peteratiamd.com forward slash subscribe.
00:00:31.140
So without further delay, here's today's sneak peek of the ask me anything episode.
00:00:39.280
Welcome to another ask me anything episode, AMA number 30. I'm once again joined by Bob Kaplan.
00:00:44.840
In today's episode, we discuss all things around studying studies. If you've listened to this
00:00:50.180
podcast or read any of my weekly emails, you know that I place a large emphasis on being able to
00:00:54.280
sift through the noise and find the signal when it comes to various studies and papers that are
00:00:58.280
published, both in the studies themselves and, unfortunately, a lot of the time in the
00:01:01.900
media reports on them. So if you follow the news, you know there's no shortage of articles that
00:01:05.520
either contradict each other, seem too good to be true, or don't make any sense. It can be hard to
00:01:09.480
make sense of all this. So what do you do? Well, as some of you may know, we've written a series on this
00:01:14.240
called studying studies, but we've also tried to tackle some of the bigger things in this AMA.
00:01:18.640
And we combed through a lot of the questions that many of you have been asking over the previous
00:01:21.640
months and years that relate to this topic. And I think we have enough of them here that we were
00:01:24.980
able to put together a solid episode. This episode's a bit longer than normal. We contemplated breaking
00:01:29.740
it into two because it's so long, but I think our audience can handle this. And so at least now you
00:01:35.200
have in one place the all-singing, all-dancing discussion of how to study studies. So in this
00:01:40.580
discussion, we talk about a bunch of things. What are they? What is the process for a study to go from
00:01:43.660
an idea to a design to execution? What are the different types of studies out there? And what do they
00:01:48.140
mean? What are the strengths and limitations of each of them? How do clinical trials work
00:01:51.980
specifically for drugs? What are the common pitfalls of observational studies that you
00:01:55.880
should be looking for? What questions should you be asking about a study to figure out how rigorous it
00:02:00.500
was? What does it really mean when a study is statistically significant? Why do some studies
00:02:05.380
never get published? What is my process for reading scientific papers? So if you're a subscriber and you
00:02:11.220
want to watch the video of this podcast, you can see it on the show notes page. I think that's
00:02:15.180
valuable for this episode because I do refer to some tables, though this is still certainly
00:02:20.060
amenable to audio only. If you're not a subscriber, you can watch the sneak peek of this on our YouTube
00:02:25.200
page. So without further delay, I hope you enjoy AMA number 30. Hey, Bob, how are you, man? Looking
00:02:35.720
pretty studious there in the library today. Hey, Peter. Thanks very much. Yeah, just getting some
00:02:40.760
reading in before the podcast. This is going to be a pretty good one because as you may recall about,
00:02:46.600
I don't know, four or five months ago, maybe longer, I was on a podcast with Tim Ferriss and
00:02:50.600
I don't know how it came up, but I do remember somehow it came up that we had spent a lot of
00:02:55.260
time writing this series, Studying Studies. And God, that's been four years ago, I think. But we didn't
00:03:02.560
really have something more digestible for folks on how to make sense of the ever-changing landscape of
00:03:09.240
scientific literature and how to kind of distinguish between the signal and the noise of the research
00:03:14.760
news cycle. And I remember after that, Tim and I went out for dinner and he kept pressing me on,
00:03:19.500
well, what can I do to get better at this process? Are there newsletters I should be subscribing to
00:03:24.920
and things like that? And while I'm sure that there are, I didn't know what they were off the top of my
00:03:28.180
head. And so I think what we've done here, when I say we, I mean you, what you have done here is
00:03:33.360
aggregate all the questions that have come in over the past year, basically that pertain to
00:03:40.400
understanding the structure of science. I looked through the questions last week and I was pretty
00:03:46.000
excited. I think it's going to be a sweet discussion and I hope this serves as an amazing
00:03:49.960
primer for people to really understand the process of scientific experiments and everything from how
00:03:57.160
studies are published and obviously what some of the limitations are. So anything else you want to add
00:04:01.200
to that, Bob, before we jump in? I agree. I think it's a fun topic. We get so many of these questions
00:04:05.640
that we end up referring, or at least I do, to the website, where we'll point readers to one of the parts of
00:04:11.940
the Studying Studies series. But I think sometimes just talking about it and explaining it can help a lot. So I think
00:04:18.060
this will be really useful as far as like a question and answer session rather than just reading a
00:04:23.280
blog. I don't think this displaces that other stuff. I think we go into probably more detail on some
00:04:28.340
things there, but I also think we're going to cover things here that aren't covered there. So
00:04:31.540
depending on how you like to get your info, this could be fun. So where do you want to start?
00:04:36.620
We have, again, a lot of questions, but I think this question gets to the core of, I think,
00:04:40.860
what we're trying to do here, which is how can a user or a person who has no scientific background
00:04:46.640
better understand studies that they read in the news or in the publications to know if the findings
00:04:52.080
are solid or not, especially in today's age where you can easily see two studies that contradict
00:04:56.780
each other. Coffee's good. Coffee's bad. Eggs are good. Eggs are bad. So I thought we could run
00:05:02.660
through a bunch of questions with the first one that we got here is what is the process for a study
00:05:08.980
to go from an idea to design and execution? This is a great question. In theory, it should start
00:05:16.480
with a hypothesis. Good science is generally hypothesis driven. I think the cleanest way to think
00:05:24.420
about that is to take the position that there is no relationship between two phenomena. We would call
00:05:33.500
this sort of a null hypothesis. So my hypothesis might be that drinking coffee makes your eyes turn
00:05:43.160
darker. So I would have to state that hypothesis and then I would have to frame it in a way that says
00:05:49.320
my null hypothesis is that when you drink coffee, your eyes do not change in color in any way, shape,
00:05:57.520
or form. And that would imply that the alternative hypothesis is that when you drink coffee, your eyes
00:06:04.980
do change color. You can already see, by the way, that there's nuance to this because am I specifying
00:06:12.220
what color it changes to? Does it get darker? Does it get lighter? Does it change to blue, green? Does it just
00:06:17.720
get the darker shade of whatever it is? But let's put that aside for a moment and just say that you
00:06:22.140
will have this null hypothesis and you will have this alternative hypothesis. And to be able to
00:06:27.200
formulate that cleanly is sort of the first step here. The second thing, of course, is to construct
00:06:32.600
an experimental design. How are you going to test that hypothesis? As we're going to talk about,
00:06:38.380
a really, really elegant way to test this is using a randomized controlled experiment. If it's possible
00:06:44.640
to blind it, we'll talk about what that means. You'll have to decide, well, how long should we
00:06:49.620
make people drink coffee? How frequently should they drink coffee? How are we going to measure eye
00:06:53.880
color? These are the questions that come down to experimental design. You then have to determine
00:06:59.060
a very important variable, which is how many subjects will you have? And of course, that will depend on a
00:07:05.600
number of things, including how many arms you will have in this study. But it comes down to doing
00:07:10.280
something that's called a power analysis. And this is so important that we're going to spend some time
00:07:14.220
talking about it today, although I won't talk about it right now. If this study involves human
00:07:19.640
subjects or animal subjects, you will have to get something called an institutional review board to
00:07:25.320
approve the ethics of the study. So you'll have to get that IRB approval. You'll have to determine what
00:07:31.380
your primary and secondary outcomes are, get the protocol approved, develop a plan for statistics,
00:07:36.620
and then pre-register the study. All of these things happen before you do the study. And of
00:07:42.260
course, in parallel to this, you have to have funding. So those are kind of the steps that go
00:07:47.440
into doing an experimental study. And what we're going to talk about, I think in a minute, is that
00:07:53.440
there are some studies that are not experimental, where some of these steps are obviously skipped.
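As a rough illustration of the power analysis step mentioned above, here is a minimal Python sketch (standard library only) of a normal-approximation sample-size calculation for comparing two proportions. The function name and the default alpha/power values are illustrative assumptions, not anything specified in the episode.

```python
import math
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate subjects needed per arm to detect a difference between
    two proportions (normal approximation, two-sided test)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # quantile corresponding to the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)
```

Note how the required sample size shrinks as the expected effect grows and grows as the desired power rises, which is why the power analysis has to happen before enrollment, not after.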
00:07:58.740
Yeah. One of the questions we got was, what are the different types of studies out there? And what do
00:08:02.460
they mean, for example, observational study versus a randomized controlled study? What are the
00:08:07.740
different types of studies? I think broadly speaking, you can break studies into three
00:08:15.760
categories. One would be observational studies. We'll bifurcate those or trifurcate those in a minute.
00:08:22.380
Then you can have experimental studies. And then you can have basically summations of and or reviews of
00:08:32.800
and or analyses of studies of any type. Let's kind of start at the bottom of that pyramid.
00:08:39.200
I think you actually have a figure that I don't like very much, but-
00:08:42.940
I was going to say, I thought it was one of your favorites.
00:08:45.040
Yeah, I can't stand it. I'll tell you what I like about the figure. I like the color schema because
00:08:49.200
my boys are so obsessed with rainbows that if I show them this figure, they're going to be really
00:08:56.060
happy. So let's pull up said rainbow figure. Okay. Got it.
00:09:01.340
Okay. So you can see these buckets here. And again, at the level of talking about them,
00:09:06.560
I think this makes sense. What I don't agree with the pyramid for Bob is that it puts a hierarchy in
00:09:12.400
place that suggests a meta-analysis is better than a randomized control trial, which is not necessarily
00:09:18.220
true, but let's just kind of go through what each of these things mean. So looking at the
00:09:22.280
observational studies, an individual case report. The first or second paper I ever wrote in my life,
00:09:27.500
when I was in medical school, was an individual case report. There was a patient who had come into
00:09:32.820
clinic when I was at the NIH. This was a patient with metastatic melanoma and their calcium was sky high,
00:09:41.620
dangerously high, in fact. And obviously our first assumption was that this patient had metastatic
00:09:48.120
disease to their bone and that they were lysing bone and calcium was leaching into their bloodstream.
00:09:53.860
Turned out that wasn't the case at all. It turned out they had something that had not been previously
00:09:58.260
reported in patients with melanoma, which was that they had developed this parathyroid hormone-related
00:10:04.800
hormone in response to their melanoma. This is a hormone that exists normally, but it doesn't exist
00:10:11.080
in this format. And so their cancer was causing them to have more of this hormone that was causing
00:10:17.740
them to raise their calcium level. It was interesting because it had never been reported before in the
00:10:21.820
literature. And so I wrote this up. This was an individual case report. Is there any value in that?
00:10:29.200
Sure. There's some value in that. The next time a patient with melanoma shows up to clinic
00:10:33.260
and their calcium is sky high and someone goes to the literature to search for it, they'll see that
00:10:37.860
report and it will hopefully save them time in getting to the diagnosis.
00:10:43.180
Your mentor and friend, Steve Rosenberg, I think of him when I think of individual case reports.
00:10:48.920
I think if you listen to the podcast, he talks about this, but a lot of what motivated him early
00:10:53.560
on, I think were just a couple of cases. I think it gets back to that first question too,
00:10:57.440
about the process for a study to go to an idea to design to execution, which is to have a hypothesis,
00:11:02.660
you need to make an observation. And so you make an observation, you say, hmm, that's strange.
00:11:07.680
And I think that that's what individual case reports can represent sometimes is this is an
00:11:11.300
interesting observation. It's hypothesis generating for the most part, but it really
00:11:15.360
might kickstart a larger trial or it might kickstart a career. You never know.
00:11:20.940
Exactly. Now, of course, it's not going to be generalizable. I can't make any statement about
00:11:25.900
the frequency of this in the broader subset of patients. And obviously I can't make any comment
00:11:31.620
about any intervention that may or may not change the outcome of this. So that gets us to kind of
00:11:36.660
our next thing, which is like a case series or set of studies. So here you're basically doing the same
00:11:45.620
thing, but in plural effectively. You wouldn't just look at one patient. You would say, well,
00:11:52.100
I've now been looking back at my clinical practice and I've had 27 patients over the last 40 years
00:12:02.260
that have demonstrated this very unusual finding. And another example of this, going back to the
00:12:08.480
Steve Rosenberg case, would be one could write a paper that looks at all spontaneous regressions of
00:12:13.920
cancer. Obviously, spontaneous regressions of cancer are incredibly rare, but there are certainly
00:12:19.140
enough of them that one could write a case series. So now let's consider cohort studies. So cohort
00:12:25.000
studies are larger studies and they can be retrospective or they can be prospective. So I'll
00:12:29.960
give you an example of both. So a retrospective observational cohort study would be, let's go back
00:12:37.960
and look at all the people who have used saunas for the last 10 years and look at how they're doing today
00:12:48.000
relative to people who didn't use saunas over the last 10 years. So it's retrospective. We're looking
00:12:54.580
backwards. It's observational. We're not doing anything, right? We're not telling these people
00:12:59.540
to do this or telling those people to do that. And the hope when you do this is that you're going to
00:13:03.860
see some sort of pattern. Undoubtedly, you will see a pattern. Of course, the question is,
00:13:07.700
will you be able to establish causality in that pattern? Cohort studies can just as easily,
00:13:13.120
although more time-consumingly, be prospective. So you could say, I want to follow people over the
00:13:21.400
next five years, 10 years who use saunas and compare them to a similar number of people who
00:13:28.340
don't. And now in a forward-looking fashion, we're going to be examining the other behaviors
00:13:35.660
of these people and ultimately what their outcomes are. Do they have different rates of death,
00:13:39.140
heart disease, cancer, Alzheimer's disease, other metrics of health that we might be interested in?
00:13:43.740
Again, we're not intervening. There's not an experiment per se. We're just observing,
00:13:49.360
but now we're doing it as we march forward through time. So this brings us to the kind of the next
00:13:54.400
layer of this pyramid, which is the experimental studies. We divide these into randomized versus non-randomized.
00:14:02.200
And of course, this idea of randomization is going to be a very important one as we go through this.
00:14:07.300
So a non-randomized trial sometimes gets referred to as an open label trial where you take two groups
00:14:15.980
of people and you give one of them a treatment and you give the other one either a placebo or a
00:14:20.360
different treatment, but you don't randomize them. There's a reason that they're in that group.
00:14:25.260
So you might say, we want to study the effect of a certain antibiotic on a person that comes in
00:14:34.720
and we're going to take all the people that come in who look a certain way. Maybe they have a fever of
00:14:44.160
a certain level or a white blood cell count of a certain level. We're going to give them the
00:14:48.180
antibiotic, and the people who come in but don't have those exact signs or symptoms, we're
00:14:54.600
not going to give the antibiotic to, and we're going to follow them. That's kind of a lame example.
00:14:59.020
You could do the same sort of thing with surgical interventions. We're going to try to ask the
00:15:04.280
question, is surgery better than antibiotics for appendicitis or suspected appendicitis?
00:15:10.660
But we don't randomize the people to the choice. There's some other factor that is going to determine
00:15:17.480
whether or not we do that. As you can see, that's going to have a lot of limitations because
00:15:21.580
presumably there's a reason you're making that decision and that reason will undoubtedly introduce
00:15:26.800
bias. So of course, the gold standard that we always talk about is a randomized control trial
00:15:32.720
where whatever question you want to study, you study it, but you attempt to take all bias out of
00:15:40.880
it by randomly assigning people into the treatment groups, the two or more treatment groups. We'll talk
00:15:46.820
about things like blinding later because you can obviously get into more and more rigor when you do
00:15:52.140
this. But before we leave the kind of experimental side, anything you want to add to that, Bob?
00:15:56.800
I would add, so non-randomized controlled trials, maybe another example, illustrative example,
00:16:01.640
I think with non-randomized controlled trials might be you have patients maybe making a decision
00:16:06.680
beforehand, which we'll get into selection bias, but they might want to go on a statin, let's say,
00:16:11.120
and then you give them a choice and the other ones might want to go on some other drug like
00:16:14.440
ezetimibe. They're basically selecting themselves into two groups, but you could compare those two
00:16:19.860
groups and see how they do, but it hasn't been randomized. There's a lot of bias that can go into that.
00:16:24.800
There could be a lot of reasons why one group is selecting a particular treatment over the other.
00:16:29.820
And so that's why I think when we get to randomized trials that shows the power of randomization.
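To make the idea of randomization concrete, here is a minimal, hypothetical Python sketch of balanced 1:1 assignment. The function name and arm labels are illustrative only; real trials use more elaborate schemes (block randomization, stratification), but the core idea is the same: group membership is decided by chance, not by any characteristic of the patient.

```python
import random

def randomize(subject_ids, arms=("treatment", "control"), seed=None):
    """Simple balanced randomization: shuffle the subjects, then deal them
    round-robin into the arms so group sizes stay (nearly) equal."""
    rng = random.Random(seed)  # seed only for reproducibility in examples
    shuffled = list(subject_ids)
    rng.shuffle(shuffled)
    assignment = {arm: [] for arm in arms}
    for i, subject in enumerate(shuffled):
        assignment[arms[i % len(arms)]].append(subject)
    return assignment
```

Because the shuffle is random, any factor that might bias the outcome (age, disease severity, motivation) is expected to be spread evenly across the arms.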
00:16:35.300
Yeah, exactly. We don't need to go back to the figure, but people might recall that at the top
00:16:39.220
of that pyramid was systematic reviews and meta-analyses. Let's just talk about meta-analyses
00:16:43.460
since they are probably the most powerful. So this is a statistical technique where you can combine
00:16:47.980
data from multiple studies that are attempting to look at the same question, basically. So each
00:16:54.340
study gets a relative weighting and the weighting of a study is sort of a function of its precision.
00:17:00.060
It depends a little bit on sample size, other events in the study, larger studies, which have
00:17:04.900
smaller standard errors are given more weight than smaller studies with larger standard errors,
00:17:08.700
for example. You'll know you're looking at a meta-analysis. We should have had a figure for
00:17:12.640
this, but I'll describe it the best I can. They usually have a figure somewhere in there that will
00:17:18.640
show across rows all of the studies. So let's say there's 10 studies included in the meta-analysis,
00:17:24.860
and then they'll have the hazard ratios for each of the studies. So they'll represent them usually as
00:17:31.780
little triangles. The triangle will represent the 95% confidence interval of what the hazard ratio is,
00:17:39.800
which we'll talk about a hazard ratio, but it's basically a marker of the risk. And you'll see all 10
00:17:44.740
studies, and then they'll show you the final summation of them at the bottom, which of course you wouldn't
00:17:49.360
be able to deduce looking at the figure, but it takes into account that mathematical weighting. So on the
00:17:54.020
surface, meta-analyses seem really, really great, because if one trial, one randomized trial is good,
00:18:02.040
10 must be better. I know I've said this before probably three or four times over the past few years on the
00:18:07.100
podcast, but as James Yang, one of the smartest people I ever met when I was both a student and
00:18:12.500
fellow at NCI, once said during a journal club about a meta-analysis that was being presented,
00:18:17.580
he said something to the effect of, a thousand sows' ears makes not a pearl necklace. And that's just an
00:18:24.080
eloquent way to say garbage in, garbage out. So if you do a meta-analysis of a bunch of garbage
00:18:29.800
studies, you get a garbage meta-analysis. It can't clean garbage; it can only aggregate it. So a
00:18:38.180
meta-analysis of great randomized control trials will produce a great meta-analysis.
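The weighting described above, where larger studies with smaller standard errors count for more, is, in the simplest fixed-effect case, inverse-variance weighting. Here is a minimal Python sketch, under the assumption that the estimates are already on a suitable scale (hazard ratios would first be log-transformed); the function name is illustrative.

```python
def fixed_effect_pool(estimates, std_errors):
    """Fixed-effect meta-analysis via inverse-variance weighting:
    each study's weight is 1 / SE^2, so large, precise studies
    (small standard errors) dominate the pooled estimate."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5  # standard error of the pooled estimate
    return pooled, pooled_se
```

Notice that the math only combines what it is given: feed it ten imprecise, biased estimates and it will dutifully return a tight-looking pooled number, which is exactly the garbage-in, garbage-out problem.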
00:18:44.380
They try to control for garbage, the researchers and the investigators. But I think to your point
00:18:48.440
with the pearl necklace, imagine if you had say 10 trials and nine of them are garbage. One of them
00:18:54.860
is really good, really rigorous, randomized control trial. And you're looking at the top of the
00:18:59.560
pyramid and you're saying, well, meta-analysis is the best. We should be looking at this meta-analysis.
00:19:03.760
Meanwhile, you've got that one randomized control trial that actually is worth its salt. It's
00:19:08.920
rigorous, et cetera, that I would say, if you had the option, I think you probably would rely more
00:19:13.040
on that one randomized control trial, which is lower on the pyramid. So I think that's probably,
00:19:18.400
I think you've told me one of your hangups with the pyramid, because it's not necessarily
00:19:21.800
top of the pyramid is going to be some meta-analysis of randomized control trials.
00:19:26.780
That's right. Yeah. I don't want to suggest meta-analyses are not
00:19:29.460
great. What I want to suggest is you can't just take a meta-analysis as gospel without actually
00:19:36.060
looking at each study. You don't get a pass on examining each of the constituent studies within
00:19:41.460
a meta-analysis; that's really the point I think we want to make here. There's one thing in here that
00:19:45.880
isn't represented, but we had a few questions about it. I think a couple. People were asking about
00:19:51.260
what's the difference between a phase three and a phase two or a phase one clinical trial?
00:19:57.940
Yes. So here we're talking about human clinical trials. This phraseology is used by the FDA here
00:20:05.640
in the United States. And typically the world does tend to follow in lockstep, but not always with
00:20:11.380
kind of the FDA's process. So if you go way, way, way back, you have an interesting idea. You have a
00:20:17.920
drug that you think is, or a molecule that you think will have some benefit. Think of it as a cancer
00:20:25.300
therapeutic. You've done some interesting experiments in animals, maybe started with some
00:20:31.400
mice and you went up to some rats and maybe even you've done something in primates. And now you're
00:20:37.100
really committed to this as the success of this and the safety of this in animals looks good. So it's
00:20:42.720
both safe and efficacious in animals. And you now decide you want to foray into the human space. Well,
00:20:48.860
the first thing you have to do is file for something called an IND, an investigational new
00:20:53.960
drug application. So after you do all of this preclinical work, you have to file this IND with the
00:21:00.260
FDA. And that basically sets your intention of testing this as a drug in humans. And the first
00:21:07.520
phase of that, which is called phase one is geared specifically to dose escalate this drug from a
00:21:15.660
very, very low level to determine what the toxicity is across a range of doses that will hopefully have
00:21:23.740
efficacy. These are typically very small studies, usually less than a hundred people. They're typically
00:21:30.600
done in cohorts. So you might say, well, the first 12 people are going to be at 0.1 milligrams
00:21:37.260
per kilogram. And assuming we see no adverse effects there, we'll go up to 0.15 milligrams
00:21:44.020
per kilogram for the next 12 people. And if we have no issues there, we'll escalate it to 0.25.
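That cohort-by-cohort escalation logic can be sketched in a few lines of Python. Everything here (the function name, the rule of stopping at the first cohort with any disallowed adverse events) is a simplified illustration of the idea, not a real dose-finding design such as 3+3 or CRM.

```python
def escalate(dose_levels, adverse_events_at, max_events=0):
    """Walk dose levels from lowest to highest, dosing one cohort at each
    level. Stop at the first level whose cohort reports more adverse events
    than allowed; return the highest tolerated dose (None if even the
    lowest level fails)."""
    tolerated = None
    for dose in sorted(dose_levels):
        if adverse_events_at(dose) > max_events:
            break  # toxicity threshold exceeded; do not escalate further
        tolerated = dose
    return tolerated
```

The key point the sketch captures is that phase one is sequential and safety-driven: each cohort's outcome gates whether the next, higher dose is given at all.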
00:21:50.240
You'll notice, Bob, I said nothing in there about does the drug work. These are going to be patients
00:21:55.660
with cancer. If this is a drug that's being sought as a treatment for colon cancer, these are going to be
00:22:01.020
patients that all have colon cancer. They're often going to be patients who have metastatic colon cancer.
00:22:06.200
So these are going to be patients who have progressed through all other standard treatments
00:22:12.860
and who are basically saying, look, sign me up for this clinical trial. I realized that this first phase
00:22:19.520
is not going to be necessarily giving me a high enough dose that I could experience a benefit
00:22:25.200
and that you're really only looking to make sure that this drug doesn't hurt me. But nevertheless,
00:22:30.700
I want to participate in this trial. If the drug gets through phase one safely, then it goes to phase
00:22:37.520
two. And the goal of phase two is to continue to evaluate for safety, but also to start to look for
00:22:44.820
efficacy. But this is done in an open label fashion. What that means is they're not randomizing
00:22:52.080
patients to one drug versus the other typically. They can, but usually it's now we think we know
00:22:59.820
one or two doses that are going to produce efficacy. They were deemed safe in the phase one.
00:23:06.300
We're now going to take patients and give them this drug and look for an effect. And a lot of times,
00:23:12.460
if there's no control arm in the study, you're going to compare to the natural history. So let's assume
00:23:18.220
that we know that patients with metastatic colon cancer, on standard of care, have a median
00:23:24.980
survival of X months. Well, we're going to give these patients this drug and see if that extends
00:23:29.700
it anymore. And of course you could do this with a control arm, but now it adds the number of patients
00:23:33.380
to the study. So again, typically very small studies can be in the 20, 30, 40, 50 range, maybe up to a few
00:23:41.460
hundred people. And that one, Peter, I think is probably a good example of if you have the
00:23:46.620
non-randomization, this might be a case where say it's an immunotherapy and people know about the
00:23:51.580
immunotherapy and it's been really effective. It's approved for a particular cancer, let's say.
00:23:55.800
And there are a lot of people that know about it and there are cancer patients that know about it and
00:23:59.600
they want to get that treatment, but it's not approved. They're talking to their doctor,
00:24:03.280
maybe they're online. They might enroll in one of these trials because they really want to try the
00:24:07.660
drug and maybe they might believe in it more than some other treatment.
00:24:11.420
Yep. There are lots of things that can introduce bias to a phase two if it does not have randomization.
00:24:17.220
Again, the goal would be to still randomize in phase two because you really do want to tease out
00:24:22.240
efficacy. So if a compound succeeds in phase two, which means it continues to show no significant
00:24:31.240
adverse safety effects, which by the way, doesn't mean it doesn't have side effects. Every treatment has
00:24:35.940
side effects. It's just that it doesn't have side effects that are deemed unacceptable for the risk
00:24:42.120
profile of the patient and it shows efficacy. So really you have to have these two things.
00:24:47.020
You then proceed to phase three. Here a phase three is a really rigorous trial. This is a huge step up.
00:24:53.760
It's typically a log step up in the number of patients. You're talking potentially thousands of
00:24:59.340
patients here. And this is absolutely a placebo controlled trial or not necessarily placebo, but it
00:25:08.340
can be standard of care versus standard of care plus this new agent. But it is randomized whenever
00:25:14.440
possible. It is blinded. And with drugs, that's always possible. And these are typically longer
00:25:19.760
studies because you have so much more sample size. You're going to potentially pick up side effects
00:25:25.900
that weren't there in the first place. And of course, now you really have that gold standard for
00:25:30.940
measuring efficacy. And it's on the basis of the phase one, phase two, and mostly phase three data
00:25:36.880
that a drug will get approved or not approved for broad use, which leads to a fourth phase, which is a
00:25:45.020
post-marketing study. So phase four studies take place after the drug has been approved and they're used
00:25:53.180
to basically get additional information because once a drug is approved, you now have more people
00:25:58.780
taking it. And they may also be using this to look at other indications for the drug. We talked about
00:26:04.640
this recently, right? A phase four trial with semaglutide being used to look at obesity versus
00:26:12.100
its original phase three trials, which were looking at diabetes. The drug's already been approved. This
00:26:17.700
study isn't being done to ask the question, should semaglutide be on the market? No, it's on the market.
00:26:22.720
It's basically expanding the indication for semaglutide in this case so that insurance
00:26:27.120
companies would actually pay for it for a new indication. But given the size and the number
00:26:31.700
of these studies, you're also looking for, hey, is there another side effect here that we missed
00:26:36.860
in the phase three? Right. And it might be the particular population that might have a different
00:26:42.180
risk profile. You might have a different threshold. That's right. Because you're not doing this in
00:26:46.200
patients with type two diabetes. You're doing this in patients who explicitly don't have diabetes,
00:26:49.760
but have obesity, different patients. Are we going to see something different here? So yeah. So
00:26:54.260
anyway, that's the long and short of phases one, two, three, and four.
00:26:59.120
Okay. So going back to observational studies, are there any things that you look for in particular
00:27:04.640
that will increase or decrease your confidence in it, whether that's a pearl necklace or garbage?
00:27:09.180
Thank you for listening to today's sneak peek AMA episode of The Drive. If you're interested in
00:27:15.020
hearing the complete version of this AMA, you'll want to become a member. We created a membership
00:27:19.700
program to bring you more in-depth exclusive content without relying on paid ads. Membership
00:27:26.040
benefits are many and beyond the complete episodes of the AMA each month. They include the following
00:27:31.200
ridiculously comprehensive podcast show notes that detail every topic, paper, person, and thing we
00:27:37.640
discuss on each episode of The Drive. Access to our private podcast feed. The Qualies, which are
00:27:44.120
super short podcast episodes, typically less than five minutes, released every Tuesday through Friday,
00:27:49.060
which highlight the best questions, topics, and tactics discussed on previous episodes of The Drive.
00:27:53.940
This is particularly important for those of you who haven't heard all of the back episodes.
00:27:59.100
It becomes a great way to go back and filter and decide which ones you want to listen to in detail.
00:28:04.320
Really steep discount codes for products I use and believe in, but for which I don't get paid to
00:28:09.220
endorse. And benefits that we continue to add over time. If you want to learn more and access these
00:28:14.920
member-only benefits, head over to peteratiamd.com forward slash subscribe. Lastly, if you're already a
00:28:22.380
member, but you're hearing this, it means you haven't downloaded our member-only podcast feed,
00:28:26.540
where you can get the full access to the AMA and you don't have to listen to this. You can download that
00:28:31.940
at peteratiamd.com forward slash members. You can find me on Twitter, Instagram, Facebook, all with the
00:28:40.080
ID, peteratiamd. You can also leave us a review on Apple Podcasts or whatever podcast player you listen
00:28:46.900
on. This podcast is for general informational purposes only and does not constitute the practice of
00:28:52.280
medicine, nursing, or other professional healthcare services, including the giving of medical advice.
00:28:58.560
No doctor-patient relationship is formed. The use of this information and the materials linked to this
00:29:03.740
podcast is at the user's own risk. The content on this podcast is not intended to be a substitute for
00:29:10.160
professional medical advice, diagnosis, or treatment. Users should not disregard or delay in obtaining
00:29:17.180
medical advice for any medical condition they have, and they should seek the assistance of their
00:29:22.340
healthcare professionals for any such conditions. Finally, I take conflicts of interest very seriously.
00:29:28.820
For all of my disclosures and the companies I invest in or advise, please visit peteratiamd.com
00:29:35.780
forward slash about where I keep an up-to-date and active list of such companies.