RFK Jr. The Defender - January 04, 2023


Science vs Plausibility with Dr Harvey Risch


Episode Stats

Length

42 minutes

Words per Minute

161.8

Word Count

6,956

Sentence Count

366

Hate Speech Sentences

1


Summary

Dr. Harvey Risch is a practicing epidemiologist with more than 40 years of research experience who has served as a peer reviewer for more than 60 scientific and medical journals. He has been an associate editor of the Journal of the National Cancer Institute since 2000, was a member of the Board of Editors of the American Journal of Epidemiology from 2014 to 2020, and was an editor of the International Journal of Cancer in 2008. He is a professor emeritus of epidemiology at the Yale School of Public Health and has been a regular guest on this podcast since the beginning of the COVID-19 pandemic. In this episode, we talk about his early career, how he became one of the most prominent epidemiologists of the early pandemic, and his recent article in the Brownstone Institute journal on the decline of evidence-based medicine. We also discuss why "consensus" is not the same thing as science, why open debate is essential, and why we should all be fighting to defend good science against bad. A must-listen for anyone interested in epidemiology, public health, and the role of evidence in public policy. Thank you for listening and supporting the podcast.


Transcript

00:00:00.000 Hey, everybody.
00:00:00.000 I have today, I'm really, really happy.
00:00:03.000 I have one of my favorite people, and he's been a regular guest on my podcast since the beginning of the pandemic, which means since the beginning of this podcast.
00:00:12.000 Dr.
00:00:13.000 Harvey Risch is a professor emeritus now of epidemiology at Yale School of Public Health.
00:00:19.000 He is a practicing epidemiologist with more than 40 years of research experience.
00:00:24.000 He has been a member of the Society for Epidemiologic Research since 1982.
00:00:29.000 And there's a list here that I cannot read of every other award and honor and association you're involved in.
00:00:37.000 But I'm just going to read one paragraph.
00:00:40.000 Dr.
00:00:40.000 Risch has published approximately 400 peer-reviewed original research papers in high-gravitas scientific journals, and he has an h-index.
00:00:52.000 For people who know what that is, this is incredibly impressive, of 104, with some 48,000 publication citations to date.
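[As an aside for readers: the h-index mentioned here has a simple definition, sketched below in a few lines of illustrative Python. This is editorial clarification, not code from the episode.]

```python
def h_index(citations):
    """Largest h such that the author has h papers with at least h citations each."""
    cites = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:   # the paper at this rank still has enough citations
            h = rank
        else:
            break
    return h

# A toy citation list: four papers have at least 4 citations each.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

So an h-index of 104 means 104 papers each cited at least 104 times, which is why it is singled out as remarkable.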
00:01:02.000 He has served as a peer reviewer on more than 60 scientific and medical journals.
00:01:07.000 Dr.
00:01:08.000 Risch has been an associate editor of the Journal of the National Cancer Institute since 2000.
00:01:13.000 He's a member of the Board of Editors of the American Journal of Epidemiology from 2014 to 2020 and an editor of International Journal of Cancer in 2008.
00:01:22.000 And that's just a little tiny tip of the iceberg from his curriculum vitae.
00:01:28.000 But those are the reasons that he was so dangerous to the medical establishment when he started talking about the problems with denying early access to treatments like ivermectin and hydroxychloroquine, because he was impossible to dismiss.
00:01:46.000 Nobody could claim that he was not one of the leading epidemiologists, and they tried to attack him, but there was really no legitimate way, and they kind of just had to ignore him and brush him under the rug.
00:01:57.000 For those of us who were fighting in the shadows and the total darkness, obscurity and exile and pariahship, your courageous intervention at the beginning of the pandemic, your decision to join the fight, was really kind of monumental in the battle formation.
00:02:15.000 You, Didier Raoult from France, ultimately Robert Malone, Ed Dowd and a number of other people who were, you know, part of the medical establishment or the political or financial establishment, who sort of crossed the lines and brought their...
00:02:38.000 I've worked with you on many projects and I consider you my friend but also just somebody I enjoy being around because you have an amazing mind.
00:02:51.000 I wanted to have you on tonight.
00:02:53.000 Because you wrote an article, really what I consider a seminal piece in the Brownstone Institute Journal, which is about the kind of decline of evidence-based medicine.
00:03:04.000 And it was a really interesting article because you go back a long, long way to sort of trace the disappearance.
00:03:12.000 One of the great lines that you use is, without debate, science is nothing more than propaganda.
00:03:20.000 You know, that's just a summary of the time that we live in right now.
00:03:23.000 We are living in the middle of a giant PSYOP where we're all being propagandized and people have forgotten that you're supposed to have a conversation.
00:03:32.000 You're supposed to have a debate, and without it, anybody who tells you what the scientific consensus says is not telling you something that is probably true.
00:03:45.000 Anyway, let's hear it.
00:03:47.000 So tell us what the reaction has been to your Brownstone Institute paper.
00:03:55.000 Well, actually, it's great to be with you.
00:03:57.000 And I wish that my wife had heard half the stuff that you said.
00:04:02.000 By the way, I'm looking at the background.
00:04:05.000 You're either in a motel or you're in a mental institution.
00:04:08.000 I hope it's a motel.
00:04:11.000 It's close to a motel.
00:04:12.000 It's a furnished apartment.
00:04:14.000 So I'm just happy here.
00:04:16.000 I wrote this essay on plausibility versus science.
00:04:20.000 And it's been bugging me for about the last two years that Dr.
00:04:26.000 Fauci comes out and says, this is science, I'm science, science, science, science, science with capital letters, science with quotation marks, everything science.
00:04:35.000 Whereas I, as a scientist, I'm not seeing any evidence.
00:04:38.000 There's no evidence ever quoted.
00:04:40.000 It's just proclamations that it's all science.
00:04:42.000 And you're right that we scientists always come away bruised from our interactions with each other, because that's all we do is debate each other.
00:04:52.000 And, you know, when you're in a debate with somebody who's good, you're equal, you're likely to get bruised trying to defend what you think is right.
00:05:01.000 But that is the way science works, and that's how science moves forward.
00:05:06.000 Michael Crichton famously said that if you ever see a consensus among scientists, that they all believe the same thing, you should hold on to your wallet and run in the other direction.
00:05:21.000 That science is not what people opine.
00:05:26.000 But it's the data and the experiments that they do.
00:05:29.000 And the essence of science is not the theories.
00:05:34.000 We use theories all the time, but theories are not science.
00:05:37.000 Theories motivate science.
00:05:39.000 So Dr.
00:05:40.000 Fauci and the public health administration have been putting out theories from beginning to end with no evidence for them, and that's not science.
00:05:48.000 Science is taking those theories and doing empirical studies, either experiments or observational studies, and looking at the results, and if they support the theory, good.
00:05:59.000 If they refute the theory, then we update the theory, modify it, or throw it out, and do more studies, and repeat the cycle.
00:06:06.000 And that is the essence of science.
00:06:08.000 It's looking at observational data in support or refutation of theories, as well as debating those experiments and observations with colleagues, as to the validity of what they show.
00:06:22.000 And that's really what science is about, not the proclamation that I am the science.
00:06:27.000 You know, that has nothing to do with science.
00:06:30.000 Now, the issue of plausibility is a little more interesting because plausibility is basically throwing out a theory.
00:06:36.000 So when somebody tells you that masks work to prevent infection transmission, that's a theory.
00:06:44.000 But it's not science until you actually go measure it.
00:06:47.000 And the studies that have measured it haven't supported that theory.
00:06:50.000 So here we are with a policy based on a theory that has essentially no empirical data in support of the policy or the theory.
00:06:58.000 And so my problem with this is that I do the science.
00:07:02.000 I don't care about the policy much, except in the COVID era when it was all forced on us.
00:07:06.000 But I just do the science and try to interpret what nature says.
00:07:11.000 As I say, I translate nature into English.
00:07:15.000 And that's all there is for me as to looking at the truth of objective information for the things that we believe scientifically.
00:07:22.000 In COVID, I started that with looking at outpatient treatment and various drugs that were available at the time.
00:07:28.000 And that's gone on to include all the other things that we've dealt with, including the vaccines, their potential benefits and risks and efficacy or no efficacy and All that stuff that we've been doing all have scientific aspects and have had some degree of scientific study.
00:07:45.000 But one of the things that you have to realize about doing science is, as my MPH students say about epidemiology, it's nitpicky.
00:07:54.000 The details matter.
00:07:56.000 And the details matter when you look at what the study actually studied and what conclusions you can actually draw.
00:08:03.000 And it's crucial to get into the details to see that what the study purports to study is not actually what you wanted it to study or what the authors wanted it to study.
00:08:12.000 And we can see that very clearly in the original studies of the vaccine efficacy, where the endpoints in the original randomized trials involved the study participants getting infections with COVID.
00:08:25.000 Well, infections with COVID might matter to them, but it doesn't matter to me.
00:08:30.000 What matters to me is transmission of the infection.
00:08:33.000 And the study didn't study transmission or spread.
00:08:37.000 All it studied is whether people who got vaccinated got more or less infections than the people who didn't.
00:08:43.000 And it's an inference.
00:08:45.000 It's a theory that if you reduce the risk of infection, you reduce the risk of transmission.
00:08:50.000 But that assumes all the infections are symptomatic, that people knew that they got infected, or that they were tested adequately.
00:08:56.000 And all of those are details that fall by the wayside when you draw a policy conclusion that the vaccines reduce spread when you haven't ever tested for that fact.
00:09:05.000 So this is the kind of issues of science versus plausibility that we've been dealing with for three years now, and I as a scientist am just not willing to sit there and listen to plausibility and the claim that it represents science.
00:09:18.000 You know, my experience with it, you know, I got kind of sucked into the vaccine issue many, many years ago.
00:09:25.000 And the first, you know, real red flag that I saw that I recognize is that evidence-based medicine had been thrown out the window.
00:09:35.000 To me, evidence-based medicine means that you, you know, I'm a simple-minded person.
00:09:40.000 You test a placebo against the intervention.
00:09:46.000 In a large enough group to look at all the outcomes, and in a similarly situated placebo group to the study group, and then you wait and you look for long-term outcomes because we know that all medicines have some long-term injuries, injuries with long diagnostic horizons or long incubation periods.
00:10:08.000 You won't see those right away. The way the vaccine cartel, so to speak, looks at these issues is they only look at one issue.
00:10:16.000 Will this intervention produce an antibody response?
00:10:20.000 And if it does, it gets a license if the antibody response is sufficiently robust and durable.
00:10:28.000 And nobody's looking at the long-term outcomes.
00:10:33.000 And one of the things that vaccinologists learn very early on is that when you're using dead virus vaccines, that they seldom are able to produce a durable and robust vaccine.
00:10:45.000 antibody response.
00:10:46.000 But if you add something horrendously toxic to the vaccine, like aluminum or mercury or squalene, you can get that response.
00:10:55.000 The body reacts and it associates that toxin with the antigen, with the viral antigen.
00:11:00.000 So the next time it sees it, it, you know, essentially it goes to DEFCON 1.
00:11:05.000 It has a five-alarm fire.
00:11:07.000 But the reason the vaccinologists and toxicologists don't get along is that the toxicologists kept asking this inconvenient question.
00:11:15.000 What happens to the aluminum after it induces that response?
00:11:21.000 And what is the fate in the body?
00:11:23.000 Where does it go?
00:11:24.000 Is it excreted?
00:11:26.000 Does it go into the brain?
00:11:27.000 Does it go into the organs?
00:11:28.000 Does it cross the blood-brain barrier?
00:11:30.000 And the vaccinologists don't want to answer that question.
00:11:33.000 And the shocking thing that I found out is that not a single one of the 72 vaccines now recommended for American children by the CDC has ever been tested in a pre-licensing placebo-controlled trial using a true placebo.
00:11:50.000 And I said this for many years, and Anthony Fauci was publicly saying that's not true.
00:11:55.000 But when I met him, I asked him to show me one of those studies, and he was unable to.
00:12:00.000 And then we sued him and HHS, and after a year of sandbagging, on the courthouse steps, they handed us a letter acknowledging that, which is on our website, acknowledging that not a single one of those vaccines has ever been tested in a pre-licensing study, in a placebo-controlled study.
00:12:22.000 It's not surprising that when pharma interests are determinative of what our regulatory agencies do, that they will push for the easiest path forward and the one with the least bumps or risks in it.
00:12:39.000 And unfortunately, FDA and its CDC partner have been all too obliging.
00:13:02.000 For the COVID vaccines, just point out to anybody listening what all of the things that you mentioned are as problems.
00:13:10.000 Not to mention that randomized trials are usually pretty terrible for looking at adverse events because they're not long enough for the adverse events of serious consequence to actually accrue.
00:13:21.000 As you pointed out, it could take seven or ten years or even longer to know what some of those serious adverse events eventually are.
00:13:29.000 And the Pfizer trial, for example, two weeks after the trial data were unblinded and the results reported, they converted all of the placebo subjects to active arm vaccination subjects, making it impossible to do a comparison of the adverse events in any longer time frame.
00:13:48.000 The other thing is that...
00:13:50.000 I think it was two months, is that correct?
00:13:53.000 Right, right, could be.
00:13:54.000 And the other thing is that...
00:13:58.000 These trials are never large enough to look at the adverse events.
00:14:04.000 They're never statistically significant for adverse events because their sample size calculations for statistical significance are based on the expected outcome of interest, which is the vaccine, you know, infection or whatever the outcome is, reduction with the vaccine.
00:14:21.000 So, for example, in the Pfizer trial, there were some 44,000 participants, which you would think means it's a large study.
00:14:30.000 But in fact, the study only had 162 infections in the placebo group, which is okay, statistically speaking, but only eight in the vaccine group.
00:14:42.000 And eight individuals as outcome events in a randomized trial are not randomized because eight doesn't get...
00:14:48.000 Random numbers don't work until you get to 50 or 100 people.
00:14:52.000 Then everything works out in the balance.
00:14:55.000 Like flipping a coin 50 or 100 times, you get close to 50-50.
00:14:59.000 But at 8, you don't.
00:15:00.000 You could get, you know, 6 and 2 easily enough, or 5 and 3.
00:15:05.000 And that's not close to 50-50.
00:15:06.000 And that in itself is already a violation of randomization should that happen, which is pretty likely.
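[Editor's note: Dr. Risch's point about small outcome counts can be checked with an exact binomial calculation. The sketch below is illustrative and assumes perfect randomization, under which each outcome event is a fair coin flip between the two arms; it is not code from the episode.]

```python
from math import comb

def split_prob(n, lo, hi):
    """P(lo <= X <= hi) for X ~ Binomial(n, 0.5): the chance that, of n
    outcome events, between lo and hi land in one arm under randomization."""
    return sum(comb(n, k) for k in range(lo, hi + 1)) / 2 ** n

# With only 8 outcome events, a split as lopsided as 6-2 (or worse)
# happens ~= 0.289 of the time, i.e. almost a third of trials.
p8 = 1 - split_prob(8, 3, 5)

# With 100 events, the same proportional imbalance (75-25 or worse)
# is vanishingly rare.
p100 = 1 - split_prob(100, 26, 74)

print(f"8 events, split >= 6-2:    {p8:.3f}")
print(f"100 events, split >= 75-25: {p100:.1e}")
```

This is the "coin flipped 8 times versus 100 times" intuition made exact: randomization only balances the arms once the number of outcome events is reasonably large.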
00:15:14.000 So, if there are only that many outcome events, think how many fewer adverse events of a serious nature are likely to have happened, you know, in those people.
00:15:24.000 It's relatively small in terms of the two months or four months or whatever the duration of the trial actually was.
00:15:31.000 It's just not long enough and large enough to see what the real spectrum of adverse events is.
00:15:37.000 And especially now that we've seen with Ed Dowd and the insurance company data, That the peak of the all-cause mortality seems to occur five to six months after vaccine rollout.
00:15:49.000 So we're talking about, you know, that kind of time period, even just to get to the all-cause mortality results, let alone longer-term things like cancers and chronic cardiovascular disease and so on, that might be affected by these vaccines.
00:16:03.000 But we have no way of telling, except for the early people On that spectrum.
00:16:09.000 And because of that, it's too short and too few.
00:16:13.000 The numbers are too few to make up for the too short period of time, even with 44,000 people tested.
00:16:18.000 Yeah, a couple of times you've called me.
00:16:21.000 I'm using Pfizer's data against Pfizer because, you know, Pfizer's data showed that in all-cause mortality, they gave the vaccine to 22,000 people and they gave the placebo to 22,000 people.
00:16:36.000 And in the vaccine group, there were 21 deaths.
00:16:40.000 And in the placebo group, there were 17 deaths over that six-month period.
00:16:45.000 And so I said, that means that if you get the vaccine, you're 23% more likely to die in the next six months, which is a fair interpretation of the data.
00:16:56.000 But you said to me, you can't even say that because there were so few people.
00:17:00.000 The data is completely worthless.
00:17:03.000 Well, it's very worthless, if not completely worthless.
00:17:06.000 The problem is that numbers like 17 and 21 tell you only a little bit, but they have high variability.
00:17:14.000 16 or 17 could have been 19 or could have been 13.
00:17:17.000 That's just random fluctuations.
00:17:19.000 And so we don't really know how much more the adverse events in the vaccinated people were compared to the placebo people.
00:17:27.000 Yes, the best estimate is that the vaccine conveyed more adverse events than the placebo.
00:17:33.000 But are we convinced that that is the reality?
00:17:35.000 No, there's just not enough to be able to say that in a statistically convincing way based on such small numbers.
00:17:40.000 You know, on your own personal journey, did you start out as a kind of idealist, believing that government and, you know, the public health officials would actually be swayed by good science, and then you gradually, you know, had that idealism eroded? So I think yes and no, mostly yes.
00:18:04.000 I've thought all along for many decades that governments have an obligation to lie to the population.
00:18:12.000 In order to protect the population to a certain degree, that the job of the government overall is to protect the country, and with some nuance, of course.
00:18:21.000 But if that entails having to lie to protect the country for national security interests, then that may be warranted.
00:18:29.000 It doesn't apply for the government to lie to protect their own personal interests or their administrative interests or something like that.
00:18:37.000 So, that's why we have national secrets that, you know, that the government knows lots of stuff about lots of countries and we have spies all over the world and all that stuff and all that's kept secret because it's the government's job to protect us through those kinds of tools.
00:18:52.000 However, when you flip that into public health paternalism, then you're no longer...
00:18:59.000 in the realm of the government actually doing good for you, for the population.
00:19:03.000 We don't know that, and in fact, it's blown up in our faces, you know, in the last three years that the lying to the public, the manipulation of data, the fake studies and fake analyses that have come out of CDC, the fake principles that have been used by the FDA, is all a form of lying to the population to achieve a goal that is not the ultimate health of the population.
00:19:27.000 By any means.
00:19:28.000 The goal is to sell a very simple-minded, very simplistic plan of accomplishing public health, no matter what the consequences, no matter what the innocent casualties are along the way.
00:19:46.000 And that's just not how I view public health.
00:19:49.000 Public health is not something that disregards our rights, you know, as humans, our natural rights, our constitutional rights, and so on.
00:20:00.000 It works with those rights to optimize public health decisions for each person.
00:20:05.000 But for example, in medical practice, medical practice is not a one-size-fits-all for everybody and you just read the script and you don't need doctors to do that.
00:20:13.000 Medical practice evaluates each person in the person's current condition, status, everything about their history, their family history, their current history, where they are in an illness, what they're at risk for, and figures out what things work and what might not.
00:20:29.000 That individualized medicine is how public health should work on a much larger scale.
00:20:34.000 To the degree that it becomes practical.
00:20:37.000 So to say that everybody must take a vaccine because the vaccine might work for some people is ridiculous.
00:20:42.000 And to say that, well, it's too complicated to see that some people might have natural immunity and don't need the vaccine and we would have to test for that and that testing costs too much and therefore we're not going to do it or it's too complicated to do and therefore we're not going to do it is absurd.
00:20:55.000 Because the whole point, why would you have hundreds of thousands of people doing contact tracing for an infectious disease that is impossible to be contact traced, wasting all of those resources when you can't figure out that if people have natural immunity from infection, they don't need to be vaccinated in order to accomplish the same infection resistance or better infection resistance.
00:21:16.000 So all of those kinds of lies have been self-serving, but not public health serving.
00:21:21.000 And I'm really at a loss over how badly public health has done in this whole era.
00:21:27.000 And I'm really kind of ashamed of my colleagues over that.
00:21:31.000 When do you think it got really bad?
00:21:34.000 Well, I think it started out bad.
00:21:36.000 I mean, as a scientist, you know, you do research studies, and you write the papers, and sometimes they get media attention.
00:21:44.000 And I, from experience from lots of years doing that, I know that reporters typically don't get things all that right.
00:21:52.000 And so I always offered...
00:21:54.000 them the chance to read over what they're going to put out before they do, just to help them.
00:21:58.000 You know, I put it in terms of their benefit, not mine, but of course my interest is just making sure that I'm quoted properly at least.
00:22:06.000 And so almost every media report has gotten things wrong in one way or another.
00:22:12.000 So when I got into writing the paper in the American Journal of Epidemiology in May of 2020, and I started seeing pushback from reporters saying things that I didn't say or invoking studies that were irrelevant.
00:22:26.000 I was talking about outpatient treatment.
00:22:28.000 They were talking about treatment of hospital patients, which is absurd when you're talking about outpatient disease because they're totally different diseases.
00:22:36.000 I started seeing this thinking, oh, well, the reporters are being sloppy like they've always been.
00:22:40.000 They weren't paying attention to details of the things that I said.
00:22:43.000 But then this continued and got worse.
00:22:46.000 And it finally dawned on me that this was a systematic campaign to push back against the idea that these outpatient treatments actually did work, which they did.
00:22:55.000 And once I realized that, I had to accept that there was some plan in the government, in industry, in public health, that was rejecting empirical information and study results for some other goal that was unstated and one can only hypothesize about.
00:23:16.000 And what kind of pushback have you gotten on this Brownstone Institute article?
00:23:22.000 Which should be a kind of fairly innocuous point that we need evidence-based medicine.
00:23:28.000 Well, basically, I meant it to be provocative.
00:23:31.000 I said, basically, that evidence-based medicine was not actually properly evidence-based from the start.
00:23:37.000 In other words, that this is not a real field, that this field inserted itself into medical research with the idea of capturing the narrative without actually providing something that was a substantial improvement.
00:23:50.000 And the reason I said that was that David Sackett, who was really the founder of evidence-based medicine with Gordon Guyatt, was asked to define what evidence-based medicine was.
00:24:01.000 This is a term that Guyatt had coined in 1991.
00:24:06.000 And Sackett said, well, it's the use of a doctor's clinical expertise with the best external evidence, meaning the best other scientific evidence.
00:24:17.000 And I pointed out that both of these things were wrong.
00:24:20.000 That a doctor with his or her own empirical clinical evidence is not representative of all clinical evidence studying the same thing.
00:24:30.000 Each doctor gets their little share of the patients that they see compared to all the patients with a particular disease or using particular drugs.
00:24:37.000 And the only way to know whether the evidence is valid is to somehow summarize all of that kind of clinical experience into a summary that forms a body of knowledge about the clinical aspects of the evidence.
00:24:51.000 At the same time, Sir Austin Bradford Hill in 1965 wrote the definitive essay on reviewing observational study data for causation.
00:25:02.000 He said that you see an association between, say, a drug and a disease that is beyond the play of chance, meaning statistically significant beyond some quibbling level of significance.
00:25:15.000 How do you acquire evidence to know whether this is a causal association or not?
00:25:20.000 And he sketched out nine aspects of reasoning, ways of collecting all sorts of evidence, all kinds of evidence that would bear on the association to try to reason for it.
00:25:29.000 This is how scientists today use evidence to draw conclusions about causation.
00:25:36.000 Not by cherry picking the so-called best evidence.
00:25:38.000 Cherry picking best evidence leads to subjective decisions.
00:25:41.000 It involves lots of subconscious motivations, if not more obvious, corrupt motivations, and is not a representative way of addressing the scientific evidence.
00:25:52.000 Sure, if some study is fatally flawed, if you can show that a study is fatally flawed and comes up with a completely wrong result because of something that was untoward in the design or analysis of the study, you can throw it out.
00:26:03.000 That makes sense.
00:26:04.000 But even weak studies should be included because empirical studies have shown that trying to put weights, weighting studies more heavily if they're defined as so-called good studies and weighting them lower if they're defined as so-called weak studies does nothing to improve the accuracy of the combined study results.
00:26:21.000 It only adds more noise.
00:26:23.000 And so there's no real point of doing that.
00:26:25.000 You just have to take all the studies and try to evaluate them in both a subjective way and a quantitative way the best you can.
00:26:32.000 And that is how one, in the real world, in the real scientific world, draws inferences about causation.
00:26:39.000 So this is messy.
00:26:40.000 It is subjective.
00:26:41.000 It requires kind of messy...
00:26:45.000 evaluation of the pros and cons of everything that you know about in the evidence and not some click formula.
00:26:52.000 You just click on a box and it tells you yes or no.
00:26:54.000 It's a true association or not.
00:26:57.000 We've dumbed down almost everything over the last 40 years.
00:27:01.000 As I say, grade inflation and we want grades on everything and all that stuff.
00:27:06.000 Science does not work by checkboxes.
00:27:08.000 It works by reasoning.
00:27:09.000 The reasoning is difficult.
00:27:11.000 It's messy.
00:27:12.000 It's argumentative.
00:27:13.000 This is why scientists argue with each other all the time, because of the relative value that you put, whether you think consistency between studies is more important than how big the associations are in the studies, or vice versa.
00:27:26.000 These are all the points that scientists argue about, and that's why there's so much debate.
00:27:29.000 But that's how real science progresses with all of these arguments.
00:27:34.000 And so when Sackett said, you cherry pick, he didn't use the word cherry pick, he said, you pick the best science, that was an idealistic, plausible argument that's just wrong.
00:27:42.000 You have to pick all the evidence, or essentially all the evidence, and somehow lump it together and make sense of it, and that's how science progresses.
00:27:49.000 So evidence-based medicine didn't really start out on a good footing for those of us scientists who really were thinking about how you evaluate evidence to make the case about something in nature or a drug. But what happened after that is that randomized trials then got assigned the pinnacle of the evidence pyramid.
00:28:11.000 So they put randomized trials on the top, and then non-randomized but controlled trials below that, and observational studies below that, and so on.
00:28:20.000 And if you read Hill's essay from 1965, you'll see that he doesn't discuss this at all.
00:28:26.000 In fact, he doesn't discuss the quality of individual studies anywhere in that essay.
00:28:30.000 That's not one of his reasoning points.
00:28:33.000 And the reason for that is this is, again, very subjective.
00:28:35.000 And there is no a priori reason why randomized trials are at the top of this pinnacle of the pyramid, because, as I said, they're not necessarily large enough for the randomization to have done much.
00:28:50.000 If you have a randomized trial with 200 outcome events in one group and 100 in the other, then you have a randomized trial that has some hope of being meaningful.
00:29:01.000 But one with 162 versus 8 is not meaningful.
00:29:05.000 And the bigger problem than that is that we epidemiologists who don't do randomized trials, who do controlled trials but that are not randomized, and who do case control and cohort studies and so on, know that diseases have lots of different components to their causation.
00:29:20.000 And so we measure everything in the world that we can think of that matters to how the diseases occur.
00:29:26.000 And then in the analyses, we adjust for all that stuff.
00:29:29.000 The adjustments may not be perfect, but they at least tend to reduce the effect of what we call confounding, which is a kind of bias where the association you're interested in, between drug A and outcome Y, is distorted because some factor, say genetic tendency B, conveys a lot of the disease mechanism and is also associated with who actually takes drug A.
00:29:52.000 So we know what we have to measure.
00:29:55.000 Most diseases now have reached a state of scientific maturity where we know a lot about them and what has to be measured in order to control for the things that affect their relationships.
00:30:04.000 And so we measure them and adjust for them.
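The confounding mechanism just described can be illustrated with a small made-up numerical example (all probabilities and the factor names here are invented for illustration, not taken from the interview): a factor B that makes people both likelier to take drug A and likelier to develop outcome Y creates a crude association between A and Y even when A does nothing, and stratifying on B removes it.

```python
# Illustrative sketch of confounding (invented numbers, not from the interview).
# Drug A has NO true effect on outcome Y, but factor B both raises the risk
# of Y and makes people more likely to take A.

P_B1 = 0.5                         # prevalence of factor B
P_A_given_B = {1: 0.8, 0: 0.2}     # B-positive people take drug A more often
P_Y_given_B = {1: 0.4, 0: 0.1}     # B raises outcome risk; A itself does nothing

def p_y_given_a(a):
    """P(Y=1 | A=a), marginalizing over the confounder B."""
    num = den = 0.0
    for b in (0, 1):
        p_b = P_B1 if b else 1 - P_B1
        p_a = P_A_given_B[b] if a else 1 - P_A_given_B[b]
        num += p_b * p_a * P_Y_given_B[b]
        den += p_b * p_a
    return num / den

# Crude comparison: risk among A-takers vs non-takers, ignoring B.
crude_rr = p_y_given_a(1) / p_y_given_a(0)
print(f"crude risk ratio: {crude_rr:.3f}")   # well above 1 despite no true effect

# Within each stratum of B, A and Y are independent by construction, so the
# stratum-specific risk ratio is exactly 1: adjusting for B removes the bias.
```

Here the crude risk ratio comes out around 2.1 even though the drug does nothing, purely because B loads the drug group with higher-risk people; this is the bias that measuring and adjusting is meant to reduce.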
00:30:06.000 Whereas in randomized trials, the investigators assume that, because of randomization, they don't have to measure anything.
00:30:12.000 All they have to do is look at the trial group versus the outcome, and that's enough to know.
00:30:18.000 And I'm saying that in trials that have small numbers of outcomes, the randomization didn't work.
00:30:24.000 In the 162 cases of infection that occurred in the placebo arm of the Pfizer initial trial, it was probably approximately 81 males and 81 females.
00:30:38.000 But in the outcome group of eight in the vaccinated arm, you don't know that there were four males and four females.
00:30:49.000 It could easily have been six males and two females or seven males and one female.
00:30:52.000 And so because of that, the randomization isn't good enough to show you that gender is balanced in the treatment arms. This is what I'm saying: randomized trials with small numbers of outcomes have not solved the problem that randomization was initially conceived for, which is to remove bias from confounding by unmeasured variables, unmeasured confounders.
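The arithmetic behind this small-numbers point can be checked with exact binomial probabilities (a minimal sketch, assuming each outcome case is independently equally likely to be male or female; the function name is ours, not from the interview):

```python
from math import comb

def split_probability(n_events, n_males, p=0.5):
    """Exact binomial probability that n_events split as n_males males,
    assuming each event is independently male with probability p."""
    return comb(n_events, n_males) * p**n_males * (1 - p)**(n_events - n_males)

# With only 8 outcome events, an exact 4/4 split is the single most likely
# outcome, yet it occurs barely a quarter of the time.
p_exact_4_4 = split_probability(8, 4)                       # 70/256 ~ 0.273
p_6_2_or_worse = sum(split_probability(8, k) for k in (0, 1, 2, 6, 7, 8))

# With 162 events, the male fraction is almost certain to land near 50%.
p_40_60_of_162 = sum(split_probability(162, k) for k in range(65, 98))

print(f"8 events, exactly 4/4 split:   {p_exact_4_4:.3f}")
print(f"8 events, 6/2 or more skewed:  {p_6_2_or_worse:.3f}")
print(f"162 events, 40%-60% male:      {p_40_60_of_162:.3f}")
```

With 8 events, a 6/2 or more skewed sex split is about as likely as an exact 4/4 split, while with 162 events the split is almost guaranteed to fall between 40% and 60%, which is the contrast the speaker is drawing.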
00:31:14.000 And this is the problem: those investigators just don't adjust for the things that they need to adjust for, because they think the randomization worked when it didn't.
00:31:22.000 And so this is the reason why these randomized trials should not necessarily be at the pinnacle of the evidence pyramid.
00:31:29.000 And this has been dumbed down even further by the FDA proclaiming that it will only accept randomized trial evidence before it approves anything.
00:31:38.000 This is absurd.
00:31:39.000 We've been through this before.
00:31:41.000 In 2016, Congress passed the 21st Century Cures Act, which among other things says that regulatory agencies shall use all forms of relevant evidence, not just randomized trials.
00:31:52.000 They did this because the FDA was holding out for randomized trials in cancer treatments, and cancer patients were demanding access to other treatments that the FDA was refusing to approve because they were repurposed or generic medications.
00:32:07.000 So this has been a problem for some time, and the fact that the FDA was still holding to this false position that only randomized trials count as evidence, when even Congress had dealt with this six years previously, is just absurd and violates the real science of everything we do.
00:32:25.000 And also the Cochrane Collaboration did really a massive study on whether randomized trials were more reliable than other forms of evidence, and they found out they weren't.
00:32:39.000 Yeah, that's right.
00:32:40.000 The Cochrane Collaboration, for people who don't know, is a collaboration of about 30,000 independent scientists who convened to do essentially oversight on pharmaceutical industry-sponsored science that was landing in the journals.
00:32:59.000 People in the scientific community felt they needed somebody looking over the shoulders of those studies and making sure they weren't all fixed.
00:33:11.000 And the Cochrane Collaboration itself has become a little bit absurd in recent years due to accepting contributions from the Gates Foundation, etc.
00:33:22.000 But they're generally looked upon as the most reliable, I would say, referee for clinical trials and for peer-reviewed science.
00:33:32.000 And correct me if I'm wrong.
00:33:34.000 I think that's right up to maybe four or five years ago.
00:33:39.000 The collaboration was kind of hijacked by a subgroup that was, I think, biased the way you said.
00:33:45.000 It kicked out the original scientists who had been in support of the original principles of objective review.
00:33:53.000 The paper in question was written by Engelmeyer and colleagues and compared thousands and thousands of individual randomized trials to their non-randomized counterparts.
00:34:03.000 It was a meta-analysis of 14 meta-analyses, and it showed that the non-randomized versus the randomized trials differed on average by 8%.
00:34:13.000 That's it, 8%.
00:34:14.000 Basically showing that high-quality non-randomized but controlled trials do perfectly well for most diseases that we study.
00:34:22.000 You know, you didn't really answer the other question I asked you because I think we got lost, but what has been the pushback from the scientific community against your article?
00:34:33.000 I mean, who have you heard from?
00:34:35.000 I've heard from a lot of colleagues in support.
00:34:38.000 I've heard from two who had initial differences of opinion, and then I basically laid out all the details, as I've mentioned to you now; it's a bit wonky for lay people.
00:34:52.000 But nevertheless, as in many things in science, the details matter.
00:34:57.000 And so when you go through the issue of how well randomization works, they were finally compelled to accept that what I've said is true.
00:35:06.000 I didn't really make this up.
00:35:07.000 I had this epiphany about small numbers in the outcome groups.
00:35:10.000 But this is discussed in a whole long article by Angus Deaton and Cartwright.
00:35:16.000 Angus Deaton is a Nobel Prize-winning economist and statistician at Princeton.
00:35:20.000 And this is a long article from 2018 on the pros and cons of randomized trials.
00:35:27.000 And he says it as well: randomization doesn't cure all ills; it has to be good randomization, which means it has to be enough randomization. And many trials that we see today do not satisfy that.
00:35:41.000 It's a kind of quietly moved goalpost: the claim was that big randomized trials work, but now the trials are not quite big.
00:35:49.000 I've had this in cohort studies, too: you see a cohort study of some cancer outcome, say, that comes out of the Harvard consortium or the cohort studies consortium.
00:36:00.000 Explain to people what a cohort study is.
00:36:03.000 Sure.
00:36:04.000 A cohort study is where you identify, say, 80,000 people at some point in their lives, and you interview them with questionnaires and measurements and blood samples and what have you.
00:36:15.000 And then you track them.
00:36:16.000 You follow them forward, interviewing them again, or at least finding out their vital status, their cancer status, and things like that, on an annual basis, at least.
00:36:24.000 And you do this for 5, 10, 15, 20 years.
00:36:27.000 There have been a number of these.
00:36:28.000 The Framingham study was a cardiovascular disease study that did this for a long time.
00:36:33.000 Then the Harvard cohorts, the Nurses' Health Study.
00:36:36.000 There's lots of these cohort studies.
00:36:37.000 And these cohort studies...
00:36:43.000 Just so people know, a randomized controlled trial is when you have a group, like we were talking about before, that gets the placebo and another similarly situated group that gets the intervention, the drug, and then you look at their outcomes over hopefully a 10 or 12 year period.
00:37:05.000 Well, so that is a form of a cohort study.
00:37:07.000 It's a cohort study where you determine who gets exposed by assigning people to treatment groups.
00:37:14.000 And if you randomize them, it's a randomized controlled cohort study.
00:37:18.000 A randomized controlled trial is a cohort study.
00:37:20.000 Most of the time, if there's a fixed follow-up time (for example, you follow them for three months and see what's happened over the three months), it doesn't matter when the events happen.
00:37:29.000 But if some of the events happen in three months and others happen in three years, then you're more concerned about who had which event early versus late and things like that.
00:37:38.000 And that's when cohort studies become more statistically powerful, in that they tell you not just what happens, but when it happens.
00:37:46.000 So the cohort study has been the underlying design principle for randomized trials, but it also forms one kind of epidemiologic evidence that we use, separate from case-control studies, which are a backwards way of looking at the same kinds of associations.
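The distinction just drawn, between counting what happens and also accounting for when it happens, can be sketched with a toy cohort (all numbers are invented for illustration; this is a simplified sketch, not a real analysis):

```python
# Hypothetical mini-cohort: each record is (months of follow-up, event occurred?).
# Numbers are invented to show the contrast between the two measures.
exposed   = [(3, True), (12, True), (24, False), (36, False)]
unexposed = [(30, True), (36, False), (36, False), (36, False)]

def cumulative_incidence(records):
    """Fraction of people with the event, ignoring when it happened."""
    return sum(e for _, e in records) / len(records)

def incidence_rate(records):
    """Events per person-month at risk: uses the timing information too."""
    events = sum(e for _, e in records)
    person_months = sum(t for t, _ in records)
    return events / person_months

# The simple risk comparison is 2/4 vs 1/4, but the exposed events happened
# much earlier, which only the person-time rate comparison captures.
risk_ratio = cumulative_incidence(exposed) / cumulative_incidence(unexposed)
rate_ratio = incidence_rate(exposed) / incidence_rate(unexposed)
print(f"risk ratio (what happened):     {risk_ratio:.2f}")
print(f"rate ratio (what and when):     {rate_ratio:.2f}")
```

In this toy example the rate ratio comes out larger than the risk ratio precisely because the exposed events occurred early in follow-up, which is the extra statistical power of time-to-event analysis the speaker is describing.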
00:38:03.000 Let me ask you just one last kind of train of question, which is about you personally.
00:38:08.000 And I've actually tried to talk to you about this before.
00:38:12.000 What do you think made you...
00:38:15.000 Sort of indifferent to public opinion, ready to...
00:38:18.000 You did something very courageous on this issue and it made a huge difference, but it also cost you a lot.
00:38:26.000 And, you know, from the beginning of my relationship with you, you just always seemed indifferent about the fact that people didn't...
00:38:34.000 You lost friendships and you had a lot of people angry at you.
00:38:38.000 Has that been something throughout your career or is it just that you're kind of, you were at the end of your career and, you know, they couldn't really hurt you anymore?
00:38:47.000 So I think you're right.
00:38:48.000 I had a little sweet spot here of when I was kind of more free to just say what I thought, you know, base it on my scientific understandings of things.
00:38:58.000 Yes, I was due to retire this year anyway, so perhaps Yale didn't think it was worthwhile making a big brouhaha over it.
00:39:08.000 I'm not in medical practice, so I have no licensure risks in that regard.
00:39:15.000 And...
00:39:16.000 You are a medical doctor, though.
00:39:20.000 Yes.
00:39:20.000 After I graduated from medical school, I went and got a PhD in mathematical modeling of infectious epidemics.
00:39:27.000 So I wasn't at risk from licensure, from specialty qualifications, for anything like that.
00:39:34.000 So I did not take seriously the colleagues who didn't know what they were doing and were writing an op-ed here or there (not many) saying that I didn't know what I was doing.
00:39:45.000 You know, first of all, they have the right to academic freedom just like I do.
00:39:48.000 They can write anything they want as long as it's not slanderous, you know.
00:39:51.000 And I don't really care, because one thing you learn in this is that if people are really earnest in disagreeing with you, then it's worth discussing it with them, seeing what they think, how it's right or wrong.
00:40:06.000 If they're right and I'm wrong, then I figure out why I went astray and how to pull back and go in the better direction.
00:40:13.000 That's what science and we're supposed to be doing.
00:40:16.000 But people who are scoffers, who are just there to say no, no, no, who don't provide any counter-evidence but just say the WHO says, the FDA says, the CDC says, and never provide any evidence...
00:40:29.000 That's not science.
00:40:30.000 That's not evidence.
00:40:31.000 That's not reasoning.
00:40:32.000 That's just scoffing.
00:40:33.000 And so I just don't take them on because it ends up being he said, she said, and nobody wins that, and there's no point.
00:40:39.000 It's a waste of time.
00:40:40.000 And it doesn't defeat the arguments that I'm making because there's no science involved in it.
00:40:47.000 So the only discussions that I've really entertained are ones that actually have scientists who are willing to talk about things.
00:40:54.000 And, you know, a really interesting example of that, there's been very few, very few people on the public health side who have been willing to debate the issues that we've all been contending with for the last three years.
00:41:06.000 The one exception to that is my dean, who's now retired as dean, Stan Vermund at Yale, who debated with Dr. Jay Bhattacharya.
00:41:16.000 He did this in New York at a special symposium a few weeks ago.
00:41:21.000 I wasn't there, but what was reported to me was that they had a very fair and objective presentation of each side of the debate.
00:41:31.000 Each one in earnest trying to provide their understanding of scientific evidence.
00:41:37.000 And what was reported to me is they actually held a vote at the end of this debate, and the vote was 80-20 in favor of Dr. Bhattacharya.
00:41:46.000 So that's the only evidence that I've seen, the only instance that I've seen of actual real public debates on the science.
00:41:54.000 As you know, Steve Kirsch has put out offers of a million dollars to debate anybody from the government agency side on anything that he's talked about, the vaccines, early treatment, or whatever, and he's had no takers.
00:42:08.000 Nobody's even willing to come to debate him.
00:42:10.000 That's been the problem, that when you have a policy that is indefensible, that you know you can't win the debate, you don't debate.
00:42:19.000 You just censor.
00:42:20.000 And that's what we've faced for the last three years.
00:42:22.000 And the only debate that I hear from the other side really consists of what you said, which are appeals to authority.
00:42:29.000 The WHO says it, the CDC says it, the FDA says it, and that, of course, is a logical fallacy.
00:42:36.000 Right.
00:42:36.000 As Karl Popper said in the early 1950s, studies about what scientists believe have no relationship to studies about how nature behaves.
00:42:47.000 All right.
00:42:48.000 Professor Harvey Risch, thank you so much for joining us, for interrupting your vacation.
00:42:53.000 Please say hi to your family and give them my love.
00:42:55.000 Likewise.
00:42:56.000 Thank you.
00:42:56.000 Thank you so much.
00:42:57.000 Thank you.
00:42:58.000 It's a pleasure to be with you.