#309 ‒ AI in medicine: its potential to revolutionize disease prediction, diagnosis, and outcomes, causes for concern in medicine and beyond, and more | Isaac Kohane, M.D., Ph.D.
Episode Stats
Length
1 hour and 55 minutes
Words per minute
163.2
Summary
In this episode, Dr. Zach Kohane joins me to talk about the evolution of artificial intelligence (AI) from science fiction to real-world applications. We talk about where AI is today, where it has been in the past, and where it could be in the future.
Transcript
00:00:00.000
Hey, everyone. Welcome to the Drive podcast. I'm your host, Peter Attia. This podcast,
00:00:16.540
my website, and my weekly newsletter all focus on the goal of translating the science of longevity
00:00:21.520
into something accessible for everyone. Our goal is to provide the best content in health and
00:00:26.720
wellness, and we've established a great team of analysts to make this happen. It is extremely
00:00:31.660
important to me to provide all of this content without relying on paid ads. To do this, our work
00:00:36.960
is made entirely possible by our members, and in return, we offer exclusive member-only content
00:00:42.700
and benefits above and beyond what is available for free. If you want to take your knowledge of
00:00:47.940
this space to the next level, it's our goal to ensure members get back much more than the price
00:00:53.200
of the subscription. If you want to learn more about the benefits of our premium membership,
00:00:57.980
head over to peterattiamd.com/subscribe.
00:01:04.180
My guest this week is Isaac Kohane, who goes by Zach. Zach is a physician scientist and chair
00:01:10.180
of the Department of Biomedical Informatics at Harvard Medical School, and he's an associate
00:01:14.460
professor of medicine at the Brigham and Women's Hospital. Zach has published several hundred papers
00:01:18.940
in the medical literature and authored the widely used books Microarrays for an Integrative Genomics
00:01:24.040
and The AI Revolution in Medicine: GPT-4 and Beyond. He is also the editor-in-chief of the newly launched
00:01:30.660
New England Journal of Medicine AI. In this episode, we talk about the evolution of AI. It wasn't really
00:01:37.280
clear to me until we did this interview that we're really in the third generation of AI, and Zach has been
00:01:41.780
a part of both the second and obviously the current generation. We talk about AI's abilities to impact
00:01:47.560
medicine today. In other words, where is it having an impact? And where will it have an impact in the
00:01:52.820
near term? What seems very likely? And of course, we talk about what the future can hold. And obviously
00:01:58.280
here, you're starting to think a little bit about the difference between science fiction and potentially
00:02:02.300
where we hope it could go. Very interesting podcast for me, really a topic I know so little about,
00:02:08.120
which tend to be some of my favorite episodes. So without further delay, please enjoy my conversation
00:02:13.100
with Zach Kohane. Well, Zach, thank you so much for joining me today. This is a topic that's highly
00:02:23.480
relevant and one that I've wanted to talk about for some time, but wasn't sure who to speak with. And
00:02:29.280
we eventually kind of found our way to you. So again, thanks for making the time and sharing your
00:02:33.580
expertise. Give folks a little bit of a sense of your background. What was your path through medical
00:02:38.780
school and training? It was not a very typical path. No. So what happened was I grew up in
00:02:44.760
Switzerland. Nobody in my family was a doctor. I come to the United States, decide to major in biology.
00:02:52.340
And then I get nerd-sniped by computing back in the late seventies. And so I minor in
00:02:59.900
computer science, but I still complete my degree in biology and I go to medical school. And then in the
00:03:06.820
middle of medical school's first year, I realized, holy smokes, this is not what I expected. It's a
00:03:12.380
noble profession, but it's not a science. It's an art. It's not a science. And I thought I was going
00:03:17.580
into science. And so I bail out for a while to do a PhD in computer science. And this is during the
00:03:24.960
1980s now, early 1980s. And it's a heyday of AI. It's actually a second heyday. We're going through the
00:03:32.660
third heyday. And it was a time of great promise. And with the retrospectoscope, it's very clear that it
00:03:41.600
was not going to be successful. There was a lot of over-promising. There is today, but unlike today,
00:03:47.780
we had not released it to the public. It was not actually working in the way that we thought it
00:03:53.940
would go on to work. And it certainly didn't scale. It was a very interesting period. And
00:04:00.420
my thesis advisor, Peter Szolovits, a professor at MIT, said, Zach, you should finish your clinical
00:04:06.620
training because I'm not getting a lot of respect from clinicians. And so to bring rational decision
00:04:13.280
making to the clinic, you really want to finish your clinical training. And so I finished medical
00:04:19.040
school, did a residency in pediatrics and pediatric endocrinology, which was actually extremely
00:04:25.180
enjoyable. But when I was done, I restarted my research in computing, started a lab at Children's
00:04:32.580
Hospital in Boston, and then a center of biomedical informatics at the medical school. Like in almost
00:04:39.840
every other endeavor, getting money gets attention from the powers that be. And so I was getting a lot
00:04:46.560
of grants. And so they asked me to start the center and then eventually a new department of
00:04:52.840
biomedical informatics that I'm the chair of. We have now 16 professors or assistant professors
00:04:58.900
of biomedical informatics. Then I had been involved in a lot of machine learning projects,
00:05:06.120
but like everybody else, I was taken by surprise, except perhaps a little bit earlier than most,
00:05:10.520
by large language models. I got a call from Peter Lee in October of '22. And actually I didn't get a
00:05:17.140
call. It was an email right out of a Michael Crichton novel. It said, Zach, if you'll answer
00:05:22.780
the phone, I can't tell you what it's about, but it'd be well worth your while. And so I get a call
00:05:29.440
from Peter Lee and I knew him from before. He was a professor of computer science at CMU and also
00:05:36.000
department chair there. And then he went to ARPA and then he went to Microsoft and he tells me about
00:05:40.800
GPT-4. And this was before any of us had heard about ChatGPT, which was initially GPT-3.5. He tells
00:05:48.040
me about GPT-4 and he gets me early access to it when no one else knows that it exists. Only a few
00:05:55.400
people do. And I start trying it against hard cases. One I just remember from my
00:06:03.200
training: I get called down to the nursery. It's a child with a small phallus and a hole at the base
00:06:10.540
of the phallus and they can't palpate testicles and they want to know what to do because I'm a
00:06:16.560
pediatric endocrinologist. And so I asked GPT-4, what would you do? What are you thinking about?
00:06:22.940
And it runs me through the whole workup of these very rare cases of ambiguous genitalia. In this case,
00:06:30.860
it was congenital adrenal hyperplasia where the making of excess androgens during pregnancy and
00:06:38.420
then subsequently in birth causes the clitoris to swell into the glans of the phallus
00:06:46.360
and the labia minora to fuse to form the shaft of what looks like a penis. But there's no testicles,
00:06:55.920
there's ovaries. And so there's a whole endocrine workup with genetic tests, hormonal tests,
00:07:04.620
ultrasound, and it does it all. And it blows my mind. It really blows my mind because very few of us in
00:07:12.720
computer science really thought that these large language models would scale up the way they do.
00:07:18.420
It was just not expected. And talking to Bill Gates about this after Peter Lee introduced me to the
00:07:26.500
problem, he told me that in Microsoft Research, a lot of his fanciest computer
00:07:32.160
scientists did not expect this. But the line engineers at Microsoft were just watching the
00:07:37.660
scale up, you know, GPT 0, 1, 2, and they just saw it was going to keep on scaling up with the size of
00:07:44.600
the data and with the size of the model. And they said, yeah, of course, it's going to achieve this
00:07:49.520
kind of expertise. But the rest of us, I think because we value our own intellects so much, we
00:07:56.160
couldn't imagine how we'd get that kind of conversational expertise just by scaling up the
00:08:03.340
model and the data set. Well, Zach, that's actually kind of a perfect introduction to how I want to
00:08:09.460
think about this today, which is to say, look, there's nobody listening to us who hasn't heard
00:08:13.700
the term AI, and yet virtually no one really understands what is going on. So if we want to
00:08:19.280
talk about how AI can change medicine, I think we have to first invest some serious bandwidth in
00:08:26.200
understanding AI. Now, you alluded to the fact that when you were doing your PhD in the early 80s,
00:08:30.240
you were in the second generation of AI, which leads me to assume that the first generation was
00:08:36.460
shortly following World War II. And that's probably why someone by the name of Alan Turing
00:08:41.820
has his name on something called the Turing test. So maybe you can talk us through what Alan Turing
00:08:48.300
posited, what the Turing test was proposed to be, and really what Gen 1 AI was. We don't have to
00:08:54.740
spend too much time on it, but clearly it didn't work. But let's maybe talk a little bit about
00:08:58.740
the postulates around it and what it was. After World War II, we had computing machines.
00:09:07.000
And anybody who was a serious computer scientist could see that you could have these processes
00:09:13.920
that could generate other processes. And you could see how these processes could take inputs
00:09:19.800
and become more sophisticated. And as a result, shortly after World War II, we actually had artificial
00:09:30.960
neural networks, the perceptron, which was modeled, roughly speaking, on the ideas of a neuron that could
00:09:40.740
take inputs from the environment and then have certain expectations.
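[Editor's note: the perceptron described here, weighted inputs, a threshold, and a weight update when a prediction misses, can be sketched in a few lines. This Python sketch is illustrative only; the OR task, learning rate, and epoch count are invented for the example, not from the episode.]

```python
# A minimal perceptron: fire if the weighted sum of inputs crosses a
# threshold, and nudge the weights whenever a prediction is wrong.

def predict(weights, bias, x):
    # Step activation: output 1 if the weighted sum exceeds the threshold.
    total = sum(w * xi for w, xi in zip(weights, x))
    return 1 if total + bias > 0 else 0

def train(samples, labels, epochs=20, lr=0.1):
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            error = y - predict(weights, bias, x)  # -1, 0, or +1
            # Update each incoming weight in proportion to its input.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn logical OR, a function a single perceptron can represent.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 1]
w, b = train(X, y)
print([predict(w, b, x) for x in X])  # [0, 1, 1, 1]
```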
00:09:47.120
And if you updated the neurons as to what was going on, it would update the weights going into
00:09:55.640
that artificial neuron. And so going back to Turing, he just came up with a test that said,
00:10:03.420
essentially, if a computational entity could maintain, essentially, its side of the conversation
00:10:10.820
without revealing that it was a computer and that others would mistake it for a human, then for all
00:10:19.780
intents and purposes, that would be intelligent behavior. And there's been all sorts of additional
00:10:26.900
constraints put on it. And one of the hallmarks of AI, frankly, is that it keeps on moving the goalposts
00:10:34.840
of what we consider to be intelligent behavior. If you had told someone in the 60s that the world chess
00:10:43.360
masters were going to be beaten by a computer program, they'd say, well, that's AI, really, that's AI.
00:10:49.820
And then when Kasparov was beaten by Deep Blue, by the IBM machine, people said, well, it's just doing
00:10:57.260
search very well. It's searching through all the possible moves in the future. It also knows all the
00:11:02.540
grandmaster moves. It has a huge encyclopedic store of all the different grandmaster moves.
00:11:07.900
And this is not really intelligent behavior. If you told people it could recognize human faces
00:11:14.640
and find your grandmother in any picture on the internet, they'd say, well, that's
00:11:20.920
intelligence. And of course, when we did it, no, that was not intelligent. And then when we said it
00:11:28.280
could write a rap poem about Peter Attia based on your webpage, and it did that, well, that would be
00:11:37.700
intelligent, that would be creative. But then if you said it's doing it based on having created a
00:11:44.220
computational model based on all the text ever generated by human beings, as much as we can gather,
00:11:50.940
which is one to six terabytes of data. And this computational model basically is predicting what is
00:11:57.860
the next word that it's going to say, not just the next word, but of the millions of words that could be,
00:12:02.880
what are the probabilities of that next word? That is what's generating that rap. There's
00:12:08.720
people who are arguing that's not intelligence. So the goalposts around the Turing test keep getting
00:12:14.980
moved. So I just have to say that I no longer find that an interesting topic, because what matters is what it's
00:12:22.000
actually doing. And whether you want to call it intelligent or not, that's up to you. It's like
00:12:26.480
discussing whether is a dog intelligent, is a baby intelligent before it can recognize
00:12:33.620
constancy of objects. Initially, for babies, if you hide something from them, it's gone, and when it comes back,
00:12:39.880
it's a surprise. But at some point early on, they learn there's constancy of objects, even when they
00:12:44.720
don't see them. There's this spectrum of intelligent behavior. And I'd just like to remind myself that
00:12:53.780
there's a very simple computational model predicting the next word called a Markov model.
00:13:00.260
And several years ago, people were studying songbirds, and they were able to predict the full
00:13:07.000
song, the next note, and the next note of the songbird, just using a very simple Markov model.
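[Editor's note: a next-word Markov model of the kind mentioned here can be sketched with bigram counts: for each word, count what follows it, then predict the most frequent successor. The training sentence below is invented for illustration.]

```python
# A toy bigram Markov model: count successors, predict the most common one.
from collections import Counter, defaultdict

def fit_bigrams(words):
    successors = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        successors[a][b] += 1
    return successors

def predict_next(successors, word):
    # The most probable next word given the current one.
    return successors[word].most_common(1)[0][0]

corpus = "the bird sings the song and the bird sings again".split()
model = fit_bigrams(corpus)
print(predict_next(model, "bird"))  # "sings"
print(predict_next(model, "the"))   # "bird" (it follows "the" twice, "song" once)
```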
00:13:14.240
So from that perspective, I know we think that we're all very smart, but the fact that you and I,
00:13:20.680
without thinking too hard about it, can come up with fluid speech. Okay, so the model's now
00:13:25.000
a trillion parameters. It's not a simple Markov model, but it's still a model. And perhaps later,
00:13:31.140
we'll talk about how this plays into, unfortunately, the late Kahneman's notions of thinking fast and
00:13:37.820
thinking slow. And his notion of system one, which is this sort of pattern recognition,
00:13:42.100
which is very much similar to what I think we're seeing here. And system two, which is the more
00:13:46.920
deliberate and much more conscious kind of thinking that we pride ourselves on. But a lot of what we
00:13:53.200
do is this sort of reflexive, very fast pattern recognition.
00:13:58.840
So if we go back to World War II, that's to your point where we saw basically rule-based computing
00:14:05.380
come of age. And anybody who's gone back and watched movies about the Manhattan Project,
00:14:11.140
or the decoding of all sorts of things that took place, Enigma, for example. Again, that's straight
00:14:18.780
rules-based computational power. And obviously that can only go so far. But it seems
00:14:25.420
that there was a long hiatus before we went from there to kind of like maybe what some have called
00:14:32.620
context-based computation, what your Siri does or Alexa, which is a step quite beyond that. And then
00:14:42.300
of course, you would go from there to what you've already talked about, Deep Blue or Watson, where you have
00:14:48.640
computers that are probably going even one step further. And then of course, where we are now,
00:14:54.880
which is GPT-4. I want to talk a little bit about the computational side of that. But more what I want
00:15:00.860
to get at is this idea that there seems to be a very non-linear pace at which this is happening.
00:15:07.600
And I hear your point. I'd never thought of it that way. I hear your point about the goalpost moving,
00:15:12.840
but I think your instinct around majoring in the right thing is also relevant, which is let's focus
00:15:20.780
less on the fact that we're never quite hitting the asymptote definitionally. Let's look at the actual
00:15:26.380
output and it is staggeringly different. So what was it that was taking place during the period of your PhD,
00:15:33.200
what you're calling wave two of AI? What was the objective and where was the failure?
00:15:38.800
So the objective was in the first era, you wrote computer programs in assembler language or in languages
00:15:45.960
like Fortran. And there was a limit of what you could do. You had to be a real computational
00:15:52.580
programmer to do something in that mode. In wave two in the 1970s, we came up with these rule-based
00:16:01.700
systems where we stated rules in what looked like English. If there is a patient who has a fever
00:16:08.400
and you get an isolate from the lab and that bacteria in the isolate is gram-positive,
00:16:14.700
then you might have a streptococcal infection with a probability of so-and-so. And these rule-based
00:16:21.580
systems, which you're now programming in the level of human knowledge, not in computer code,
00:16:28.600
the problem with that was several fold. A, you're going to generate tens of thousands of these rules
00:16:33.720
And these rules would interact in ways that you could not anticipate. And we did not know
00:16:39.460
enough. And we could not pull out of human beings the right probabilities. And what is the right
00:16:46.460
probability if you have a fever and you don't see anything on the blood test? What else is going on?
00:16:54.920
And there's a large set of possibilities. And getting all those rules out of human beings ended up
00:16:59.580
being extremely expensive and the results were not stable. And for that reason, because we didn't
00:17:06.280
have much data online, we could not go to the next step, which is have data to actually drive these
00:17:13.340
models. What were the data sources then? Books, textbooks, and journals as interpreted by human
00:17:21.460
experts. That's why some of these were called expert systems, because they were derived from
00:17:26.760
introspection by experts who would then come up with the rules, with the probabilities. And some of the
00:17:35.100
early work, like for example, there was a program called MYCIN, led by Ted Shortliffe out of Stanford, who
00:17:41.540
developed an antibiotic advisor that was a set of rules based on what he and his colleagues sussed out from
00:17:52.280
the different infectious disease textbooks and infectious disease experts. And it stayed only up
00:17:58.820
to date as long as they kept on looking at the literature, adding rules, fine-tuning it. There's
00:18:05.060
an interaction between two rules that was not desirable. Then you had to adjust that. Very labor-intensive.
00:18:12.120
And then if there's a new thing, you'd have to add some new rules. If AIDS happened, you'd have to say,
00:18:19.180
oh, there's this new pathogen. I have to make a bunch of rules. The probability is going to be
00:18:25.200
different if you're an IV drug abuser or if you're a male homosexual. And so it was very, very hard
00:18:32.040
to keep up. And in fact, people didn't. What was the language that it was programmed in? Was this
00:18:37.000
Fortran? No, no. These were so-called rule-based systems. And so the languages, for example,
00:18:42.720
the one for MYCIN was called EMYCIN, essential MYCIN. So these looked like English.
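[Editor's note: a rule of the fever-and-gram-stain kind described above can be sketched as data plus a matcher. The rule contents and the certainty value here are invented for illustration; real MYCIN rules and their certainty factors were considerably more elaborate.]

```python
# One MYCIN-style rule: an IF-part of findings to match and a THEN-part
# with a conclusion and a certainty. Values are illustrative only.

rule = {
    "if": {"fever": True, "isolate": True, "gram_stain": "positive"},
    "then": ("streptococcal_infection", 0.7),  # conclusion, certainty
}

def apply_rule(rule, findings):
    # Fire the rule only if every condition matches the patient findings.
    if all(findings.get(k) == v for k, v in rule["if"].items()):
        return rule["then"]
    return None

patient = {"fever": True, "isolate": True, "gram_stain": "positive"}
print(apply_rule(rule, patient))  # ('streptococcal_infection', 0.7)
```

With thousands of such rules firing off each other's conclusions, the unanticipated interactions Zach describes become easy to imagine.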
00:18:50.300
Super labor-intensive. And there's no way you could keep it up to date. And at that time,
00:18:56.060
there was no electronic medical records. They were all paper records. So not informed by what was
00:18:59.940
going on in the clinic. Three revolutions had to happen in order for us to have what we have today.
00:19:08.120
And that's why I think we had such a quantum jump recently.
00:19:13.300
Before we get to that, that's the exciting question, but I just want to go back to the
00:19:16.820
Gen 2. Were there other industries that were having more success than medicine? Were there
00:19:21.660
applications in the military? Were there applications elsewhere in government where
00:19:28.400
Yes. So there was a company which was a remnant of... Back in the 1970s, there were a whole bunch of
00:19:35.900
computer companies around what we called Route 128 in Boston. And these were companies that were famous
00:19:42.280
back then, like Wang Computer, like Digital Equipment Corporation. And it's a very sad story for Boston
00:19:49.260
because that was before Silicon Valley got its cluster of computer companies around it. And one of the
00:19:57.560
companies, Digital Equipment Corporation, built a program called R1. And R1 was an expert in configuring
00:20:05.380
the minicomputers that you ordered. So you wanted some capabilities and it would actually
00:20:09.760
configure all the individual components, the processors, the disk, and it would know about
00:20:16.520
all the exceptions and what you needed to know, what cabling, what memory configuration, all that was
00:20:22.520
done. And it basically replaced several individuals who had that very, very rare knowledge to configure
00:20:29.720
their systems. It was also used in several government logistics efforts. But even those efforts, although
00:20:37.740
they were successful and used commercially, were limited because it turns out human beings, once you
00:20:43.160
got to about three, four, five, six thousand rules, no single human being could keep track of all the ways
00:20:50.480
these rules could work. We used to call this the complexity barrier, that these rules would interact in
00:20:57.520
unexpected ways and you'd get incorrect answers, things that were not commonsensical, because you had
00:21:07.220
not actually captured everything about the real world. And so it was very narrowly focused. And if the
00:21:15.380
expertise was a little bit outside the area of focus, if let's say it was an infectious disease program
00:21:20.960
and there was a little bit of influence from the cardiac status of the patient and you had not
00:21:28.020
accurately modeled that, its performance would degrade rapidly. Similarly, if there was at Digital
00:21:35.220
Equipment a new model with a completely different part that had not been included, and there were some
00:21:43.200
dependencies that were not modeled, it would degrade in performance. So these systems were very brittle,
00:21:48.460
did not show common sense. They had expert behavior, but it was very narrowly done. There were applications
00:21:55.600
in medicine back then that survived till today. For example, already back then we had these systems
00:22:02.040
doing interpretation of ECGs pretty competently, at least a first pass until they would be reviewed by
00:22:10.320
an expert cardiologist. There's also a program that interpreted what's called serum protein
00:22:15.540
electrophoresis, where you look at proteins separated out by an electric gradient to make a diagnosis,
00:22:23.260
let's say of myeloma or other protein disorders. And those also were deployed clinically, but they only
00:22:30.920
worked very much in narrow areas. They were by no stretch of the imagination general-purpose reasoning systems.
00:22:37.900
So let's get back to the three things. There are three things that have taken the relative failures of first and second
00:22:46.000
attempts at AI and got us to where we are today. I can guess what they are, but let's just have you walk us through them.
00:22:53.540
The first one was just lots of data. And we needed to have a lot of online data to be able to develop models of
00:23:03.920
interesting performance and quality. So ImageNet was one of the first such data sets, collections of millions of
00:23:13.080
images with annotations, importantly. This has a cat in it. This has a dog in it. This is a blueberry muffin. This has a
00:23:20.320
human in it. And having that was absolutely essential to allow us to train the first very successful neural network
00:23:30.320
models. And so having those large data sets was extremely important. The other, the equivalent in
00:23:39.760
medicine, is that we did not have a lot of textual information about medicine until PubMed went online. So all the
00:23:50.520
literature, medical literature, at least we have an abstract of it in PubMed. Plus we have for a subset of it that's open
00:23:58.380
access because the government has paid for it through grants. There's something called PubMed Central, which has the full
00:24:04.100
text. So all of a sudden that has opened up over the last 10 years. And then, after Obama
00:24:12.660
signed the HITECH Act, electronic health records, which also ruined the lives of many doctors, happened to
00:24:19.540
generate a lot of text for use in these systems. So that's large amounts of data being generated online. The second was
00:24:28.160
the neural network models themselves. So the perceptron that I mentioned that was developed
00:24:33.240
not too long after World War II was shown by one of the pioneers of AI, Marvin Minsky, to have fundamental
00:24:41.700
limitations in that it could not do certain mathematical functions like what's called an exclusive-or
00:24:47.760
gate. Because of that, people said these neural networks are not going to scale. But there were a few true
00:24:53.360
believers who kept on pushing and making more and more advanced architectures and those multi-level deep
00:25:03.060
neural networks. So instead of having one neural network, we layer on top of one neural network, another
00:25:09.860
one, and another one, and another one, so that the output of the first layer gets propagated up to the
00:25:16.540
second layer of neurons to the third layer and fourth layer and so on.
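[Editor's note: Minsky's exclusive-or limitation, and the multi-level fix, can be shown concretely. The weights below are set by hand rather than learned, just to show that two stacked layers of step-activation neurons can compute XOR where a single perceptron cannot.]

```python
# XOR from two layers: a hidden OR neuron and a hidden NAND neuron,
# combined by an output AND neuron. Weights are hand-picked.

def neuron(weights, bias, x):
    # Step-activation neuron: fire if the weighted sum crosses zero.
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def xor(x1, x2):
    h1 = neuron([1, 1], -0.5, (x1, x2))    # OR of the inputs
    h2 = neuron([-1, -1], 1.5, (x1, x2))   # NAND of the inputs
    return neuron([1, 1], -1.5, (h1, h2))  # AND of the hidden layer

print([xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```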
00:25:20.780
And I'm sorry, was this a theoretical mathematical breakthrough or a technological breakthrough?
00:25:26.420
Both. It was both, because the insight that we could actually come up with all the
00:25:33.540
mathematical functions that we needed, that we could simulate them with these multi-level networks,
00:25:38.440
was a theoretical insight, but we would have never made anything out of it if not for the
00:25:43.880
fact of sweaty teenagers, mostly teenage boys, playing video games. In order to have first-person shooters
00:25:53.120
capable of rendering pictures of aliens or monsters in high resolution, 24-bit color,
00:26:04.900
60 frames per second, we needed to have processors, very parallel processors,
00:26:10.700
that would allow you to do the linear algebra that allows you to calculate what was going to be the
00:26:17.580
intensity of color on every dot of the screen at 60 frames per second.
00:26:22.320
And that's literally just because of the matrix multiplication
00:26:25.080
math that's required to do this. You have N by M matrices that are so big, and you're crossing and
00:26:34.940
Huge matrices. And it turns out that's something that can be run in parallel. So you want to have
00:26:40.940
multiple parallel processors capable of rendering those images, again, at 60 frames per second. So
00:26:47.840
basically, millions of pixels on your screen being rendered at 24 or 32-bit color. And in order to do
00:26:55.180
that, you need to have that linear algebra that you just referred to being run in parallel.
00:26:59.460
And so these parallel processors called graphical processing units, GPUs, were developed. And the GPUs
00:27:09.780
were developed by several companies. And some of them stayed in business, some didn't, but they were
00:27:14.960
absolutely essential to the success of video games. Now, it then occurred to many smart mathematicians and
00:27:21.900
computer scientists that the same linear algebra that was used to drive that computation for images
00:27:29.320
could also be used to calculate the weights of the edges between the neurons in a neural network.
00:27:37.860
So the mathematics of updating the weights in response to stimuli, let's say, of a neural network,
00:27:45.520
updating of those weights can be done all in linear algebra. And if you have this processor, so a typical
00:27:55.140
computer has a central processing unit. So that's one processing unit. A GPU has tens of thousands of
00:28:04.840
processors that do this one very simple thing, linear algebra. And so this parallelism that
00:28:12.420
typically only supercomputers would have, available on your simple PC because you needed to show the graphics
00:28:20.260
at 60 frames per second, gave us all of a sudden these commodity chips that allowed us to calculate
00:28:26.460
the performance of these multi-level neural networks. So that theoretical breakthrough was the second
00:28:31.740
part, but would not have happened without the actual implementation capability that we had with the GPUs.
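[Editor's note: the linear algebra being described, a layer's outputs and its weight updates as matrix products, can be sketched with NumPy. All shapes, values, and the squared-error loss are illustrative choices, not from the episode.]

```python
# Both the forward pass of a layer and its gradient update are matrix
# products, which is exactly the operation GPUs run in parallel.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 128))         # a batch of 64 inputs, 128 features
W = rng.standard_normal((128, 32)) * 0.01  # one layer: 128 -> 32 units
y = rng.standard_normal((64, 32))          # targets, invented for the sketch

# Forward pass: every output entry is an independent dot product,
# so all 64 * 32 of them can be computed in parallel.
out = X @ W

# One gradient step for a squared-error loss: the weight update is
# itself a matrix product.
grad = X.T @ (out - y) / len(X)
W -= 0.1 * grad

print(out.shape, grad.shape)  # (64, 32) (128, 32)
```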
00:28:40.600
And so NVIDIA would be the most successful example of this, presumably?
00:28:45.680
It was not the first, but it's definitely the most successful example. And there's a variety of
00:28:49.820
reasons why it was successful and created an ecosystem of implementers who built their neural
00:28:56.380
network deep learning systems on top of the NVIDIA architecture.
00:29:02.000
Would you go back and look at the calendar and say this was the year or quarter when there was
00:29:06.500
escape velocity achieved there? Yeah. So it was probably around 2012 when there was an ongoing
00:29:13.340
contest every year saying who has the best image recognition software. And these deep neural networks
00:29:22.060
running off GPUs were able to outperform significantly all their other competitors
00:29:29.400
in image recognition in 2012. That's very clearly when everybody just woke up and said, whoa, we knew about
00:29:36.560
neural networks. We didn't realize that these convolutional neural networks were going to be
00:29:41.980
this effective. And it seems that the only thing that's going to stop us is computational speed and the size
00:29:50.440
of our data sets. That moved things along very fast in the imaging space, with consequences very soon in
00:29:59.620
medicine. It was only six years later that we saw journal articles about recognition of retinopathy,
00:30:08.380
diseases affecting the retina, the back of your eye, in diabetes. And a paper coming out of all places
00:30:15.360
from Google saying we can recognize different stages of retinopathy based on the images of the back of
00:30:23.020
the eye. And that also was a wake-up call because yes, part of the goalpost moving is great that we
00:30:28.660
could recognize cats and dogs in web pages. But now all of a sudden, this thing that we thought was
00:30:35.880
specialized human expertise could be done by that same stack of software, if you just gave it enough
00:30:43.380
cases of these retinopathies, it would actually work well. And furthermore, what was wild was that
00:30:50.120
there's something called transfer learning, where you tune up these networks, get them to recognize
00:30:55.800
cats and dogs. And in the process of recognizing cats and dogs, it learns how to recognize little
00:31:01.040
circles and lines and fuzziness and so on. You did a lot better in training up the neural network
00:31:08.560
first on the entire set of images and then on the retinas than if you just went straight to,
00:31:15.180
I'm just going to train on the retinas. And so that transfer learning was impressive.
00:31:20.980
And then the other thing as a doctor was impressive to many of us. I was actually asked to write an
00:31:26.200
editorial for the Journal of the American Medical Association in 2018 when a Google article was
00:31:32.560
written. What was impressive to us was that what was the main role of doctors in that publication?
00:31:39.120
It was just twofold. One was to just label the images that were used for training. This is
00:31:45.700
a retinopathy. It's not retinopathy. And then to serve as judges of its performance. And that was it.
00:31:53.580
The rest of it was computer scientists working with GPUs and images, tuning it. And that was it.
00:32:00.480
Didn't look anything like medical school. And you were having expert level recognition of
00:32:05.840
retinopathy. That was a wake-up call. You've alluded to the 2017 paper by Google,
00:32:14.080
Attention is All That is Needed, I think is the title of the paper. Attention is All You Need.
00:32:19.060
That's not what I'm referring to. I'm also referring to a 2018 paper in JAMA.
00:32:24.700
You're talking about the great paper, Attention is All You Need. That was about the invention of the
00:32:29.320
transformer, which is a specific type of neural network architecture. I was talking about these
00:32:35.540
were vanilla, fairly vanilla convolutional neural networks, the same one that can detect dogs and
00:32:41.560
cats. The big medical application was retinopathy in 2018. Except for computer scientists, no one noticed
00:32:47.940
the Attention Is All You Need paper. And Google had this wonderful paper that said, you know,
00:32:56.520
if we recognize not just text that co-locates together, because previously, so we're going to
00:33:03.740
get away from images for a second. There was this notion that I can recognize a lot of similarities
00:33:11.520
in text. If I see which words occur together, I can implicate the meaning of a word by the company
00:33:18.780
it keeps. And so if I see this word and it has around it, kingdom, crown, throne, it's about a king.
00:33:30.080
And similarly for queen and so on. That kind of association in which we created what was called
00:33:37.420
embedding vectors, which, just in plain English, is a string of numbers that says, for any given word,
00:33:45.920
what's the probability? How often do these other words co-occur with it? And just using those
00:33:51.960
embeddings, those vectors, those lists of numbers that describe the co-occurrence of other words,
00:33:59.700
we were able to do a lot of what's called natural language processing, which you're looking at text
00:34:04.600
and saying, this is what it means. This is what's going on. But then in the 2017 paper,
00:34:11.220
they actually took a next step, which was the insight that where exactly the word we
00:34:19.380
were focusing on sat in the sentence, what came before and after it, the actual ordering,
00:34:24.980
mattered, not just the simple co-occurrence. Knowing what position that word was in the sentence
00:34:32.480
actually made a difference. That paper showed the performance went way up in terms of recognition.
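The "company it keeps" idea from a moment ago can be sketched with plain co-occurrence counts. This is a deliberately crude stand-in for learned embedding vectors, using an invented three-sentence corpus: words that share context ("king" and "queen") end up with similar vectors.

```python
from collections import Counter
from math import sqrt

# Toy corpus: the meaning of a word is implied by the company it keeps.
sentences = [
    "the king wore the crown in the throne room",
    "the queen wore the crown near the throne",
    "the dog chased the ball in the park",
]
vocab = sorted({w for s in sentences for w in s.split()})

def embedding(word):
    """Co-occurrence vector: how often each vocabulary word appears in the
    same sentence as `word` (a crude stand-in for a learned embedding)."""
    counts = Counter()
    for s in sentences:
        words = s.split()
        if word in words:
            counts.update(w for w in words if w != word)
    return [counts[v] for v in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

king, queen, dog = embedding("king"), embedding("queen"), embedding("dog")
print(cosine(king, queen) > cosine(king, dog))  # True: shared royal context
```

Note what this sketch deliberately lacks: word order. That is exactly the information the transformer's positional treatment adds on top of co-occurrence.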
00:34:40.340
And that transformer architecture that came from that paper made it clear for a number of
00:34:47.780
researchers, not me, that if you scaled that transformer architecture up to a larger model
00:34:54.680
so that the position dependence and this vector was learned across many, many more texts,
00:35:03.120
the whole internet, you could train it to do various tasks. This transformer model, which is called
00:35:08.200
the pre-trained model. So I apologize, I find it very boring to talk about because unless I'm working
00:35:14.080
with fellow nerds. But this transformer, this pre-trained model, you can think of it as the equivalent of an
00:35:20.180
equation with multiple variables. In the case of GPT-4, we think it's about a trillion variables.
00:35:27.540
It's like an equation where you have a number in front of each variable, a coefficient,
00:35:31.840
that's about a trillion long. And this model can be used for various purposes. One is the chatbot
00:35:41.900
purpose, which is given this sequence of words, what is the next word that's going to be said?
00:35:47.860
Now, that's not the only thing you could use this model for, but that's, turns out to have been
00:35:53.040
the breakthrough application of the transformer model for text.
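That next-word view of a language model can be sketched with a toy bigram table. The counts here play the role, in extreme miniature, of the trillion learned coefficients; the corpus is invented for illustration.

```python
from collections import Counter, defaultdict

# A chatbot, at its simplest, answers: given this sequence of words,
# what is the next word? A bigram count table is the tiniest such model.
corpus = (
    "the patient has a fever . the patient has a cough . "
    "the patient needs a test ."
).split()

next_counts = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    next_counts[word][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    return next_counts[word].most_common(1)[0][0]

print(predict_next("patient"))  # has   (seen twice, vs. "needs" once)
print(predict_next("the"))      # patient
```

A transformer replaces the one-word lookup with the whole preceding sequence, positions included, but the training objective is the same: predict the next token.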
00:35:57.460
Just to round out what you said earlier, Zach, would you say that is the third thing that enabled
00:36:02.300
this third wave of AI, the transformer? It was not what I was thinking about. For me,
00:36:07.400
I was thinking of the real breakthrough in data-driven AI. I put around the 2012 era. This is
00:36:13.520
yet another, if you talk to me in 2018, I would have already told you we're in a new heyday and
00:36:20.340
everybody would agree with you. There was a lot of excitement about AI just because of the image
00:36:25.300
recognition capabilities. This was an additional capability that's beyond what many of us were
00:36:32.740
expecting just from the scale-up of the neural network. The three, just to make sure I'm consistent,
00:36:39.280
was large data sets, multi-level neural networks, aka deep neural networks, and the GPU infrastructure.
00:36:46.440
That brought us well through 2012 to 2018. The 2017 blip that became what we now know to be
00:36:58.760
this whole large language model transformer architecture, that development, unanticipated
00:37:04.920
for many of us, but that was already on the heels of an ascendant AI era. There were already billions of
00:37:10.760
dollars of frothy investment in frothy companies, some of which did well and many of which did not
00:37:17.800
do so well. The transformer architecture has revolutionized many parts of the human condition,
00:37:24.520
I think, but it was already part of what I think of as the third wave. There's something about GPT where I feel
00:37:34.040
like most people by the time GPT-3 came out or certainly by 3.5, this was now outside of the
00:37:41.880
purview of computer scientists, people in the industry who were investing in it. This was now
00:37:48.920
becoming as much a verb as Google was in probably the early 2000s. There were clearly people who knew
00:37:57.580
what Google was in 96 and 97, but by 2000, everybody knew what Google was, right? Something about GPT 3.5
00:38:05.900
or 4 was kind of the tipping point where I don't think you can not know what it is at this point.
00:38:11.900
I don't know if that's relevant to the story, meaning does that sort of speak to what trajectory
00:38:19.180
we're on now? The other thing that I think, Zach, has become so audible in the past year
00:38:26.140
is the elevation in the discussion of how to regulate this thing, which seems like something
00:38:34.540
you would only argue about if you felt that there were a chance for this thing to be harmful to us
00:38:41.980
in some way that we do not yet perceive. So what can you say about that? Because that's obviously a nod
00:38:48.380
to the technical evolution of AI, that very serious people are having discussions about
00:38:56.620
pausing, moratoriums, regulations. There was no public discussion of that in the 80s,
00:39:02.060
which may have spoken to the fact that in the 80s, it just wasn't powerful enough to pose a threat.
00:39:06.620
So can you maybe give us a sense of what people are debating now? What is the smart, sensible,
00:39:13.420
reasonable argument on both sides of this? And let's just have you decide what the two sides are.
00:39:19.340
I'm assuming one side says, pedal to the metal. Let's go forth on development. Don't regulate this.
00:39:25.260
Let's just go nuts. The other side is, no, we need to have some brakes and barriers.
00:39:30.300
Not quite that. So you're absolutely right that chatbots have now become a commonly used noun. And that
00:39:36.940
probably happened with the emergence of GPT 3.5. And that appeared around, I think, December of 2022.
00:39:44.700
But now, yes, because out of the box, that pre-trained model I told you about could tell
00:39:51.020
you things like, how do I kill myself? How do I manufacture a toxin? It could allow you to do a lot
00:39:58.300
of harmful things. So there was that level of concern. We can talk about what's been done about
00:40:04.860
those first order efforts. Then there's been a group of scientists who interestingly went from
00:40:14.300
saying, we'll never actually get general intelligence from this particular architecture to saying, oh my
00:40:21.420
gosh, this technology is able to make inferences in a way that I had not anticipated. And now I'm so worried
00:40:30.300
that either because it's malevolent or just because it's trying to do something that has bad side effects
00:40:37.020
for humanity, it presents an existential threat. Now, on the other side, I don't believe is anybody saying,
00:40:44.220
let's just go heads down and let's see how fast we can get to artificial general intelligence.
00:40:51.020
Or if they do think that, they're not saying it openly.
00:40:54.060
Can you just define AGI, Zach? I think we've all heard the term, but is there a quasi-accepted
00:41:00.060
definition? First of all, there's not. And I hate myself for even bringing it up because it starts-
00:41:05.100
I was going to bring it up before you, anyway, it was inevitable.
00:41:08.380
That was an unfortunate slip because artificial general intelligence means a lot of things to a
00:41:13.580
lot of people. And I slipped because I think it's, again, a moving target and it's very much
00:41:19.820
in the eye of the beholder. There's a guy called Eliezer Yudkowsky, one of the so-called doomers.
00:41:24.620
And he comes up with great scenarios of how a sufficiently intelligent system could figure out
00:41:33.340
how to persuade human beings to do bad things or take control of our infrastructure to bring down our
00:41:40.780
communications infrastructure or airplanes out of the sky. And we can talk about whether that's
00:41:45.580
relevant or not. And on the other side, we have, let's say, OpenAI and Google.
00:41:50.780
But what was fascinating to me is that OpenAI, which working with Microsoft generated GPT-4,
00:41:58.060
we're not saying publicly at all, let's not regulate it. In fact, they were saying,
00:42:03.100
please regulate me. Sam Altman went on a world tour where he said, we should be very concerned about
00:42:08.540
this. We should regulate AI. And he was before Congress saying, we should regulate AI. And so,
00:42:15.980
I feel a bit churlish about saying this because Sam was kind enough to write the foreword to the book I
00:42:21.260
wrote with Peter Lee and Kerry Goldberg on GPT-4 and the revolution in medicine. But I was wondering,
00:42:29.580
why were they insisting so much on regulation? And there's two interpretations. One is just a sincere,
00:42:35.820
and it could very well be that. Sincere wish that it be regulated so we check these machines,
00:42:41.500
these programs to make sure they don't actually do anything harmful. The other possibility,
00:42:46.060
unfortunately, is something called regulatory lock-in, which means I'm a very well-funded company,
00:42:51.820
and we're going to create regulations with Congress about what is required, which boxes do you have to
00:42:57.660
check in order to be allowed to run. If you're a small company, you're not going to have a
00:43:03.660
bevy of lawyers with big checks to comply with all the regulatory requirements. And so, I think Sam is,
00:43:13.180
I don't know him personally, I imagine he's a very well-motivated individual. But whether it's for
00:43:19.100
the reason of regulatory lock-in or for genuine concern, there has not been any statements of,
00:43:26.460
let's go heads down. They do say, let's be regulated. Now, having said that, before you even
00:43:32.780
go with a doomer scenario, I think there is someone just as potentially evil that we have to worry about,
00:43:38.380
another intelligence, and that's human beings. And how do human beings use these great tools?
00:43:44.540
So, just as we know for a fact that some of the earliest users of GPT-4 were high schoolers trying
00:43:52.940
to do their homework and solve hard puzzles given to them, we also know that various parties have
00:43:59.740
been using the amazing text generation and interactive capabilities of these programs to
00:44:06.380
spread misinformation via chatbots, and there's a variety of malign things that could be done by
00:44:13.580
third parties using these engines. And I think that's, for me, the clear and present danger today,
00:44:20.140
which is how do individuals decide to use these general purpose programs?
00:44:27.180
If you look at what's going on in the Ukraine-Russian war, I see more and more autonomous
00:44:34.380
vehicles flying and carrying weaponry and dropping bombs. And we see in our own military a lot more
00:44:44.620
autonomous drones with greater and greater autonomous capabilities. Those are purpose-built
00:44:51.980
to actually do dangerous things. And a lot of science fiction fans will refer to Skynet from the
00:45:01.820
Terminator series, but we're literally building it right now.
00:45:05.260
In the Terminator, Zach, they kind of refer to a moment, I don't remember the year, like 1997 or
00:45:11.900
something. And I think they talk about how Skynet became, quote, self-aware. And somehow when it became
00:45:17.340
self-aware, it just decided to destroy humans. Is self-aware movie speak for AGI? Like, what do you
00:45:25.100
think self-aware means in more technical terms? Or is it super intelligence? There's so many terms here,
00:45:32.300
and I don't know what they mean. Okay. So self-awareness means a process by which the
00:45:38.620
intelligent entity can look back, look inwardly at its own processes and recognize itself. Now, that's
00:45:46.060
very hand-wavy, but Douglas Hofstadter has probably done the most thoughtful and clear writing about what
00:45:56.140
self-awareness means. I will not do it justice, but if you really want to read a wonderful book that
00:46:02.460
spends a whole book trying to explain it, it's called I Am a Strange Loop. And in I Am a Strange
00:46:08.540
Loop, he explains how, if you have enough processing power and you can represent the processes that
00:46:17.660
constitute you, essentially have models of your own processes, in other words, you're able to look at what
00:46:23.340
you're thinking, you may have some sense of self-awareness. There's a bit of an act of faith
00:46:28.300
on that. Many AI researchers don't buy that definition. There's a difference between self-awareness
00:46:35.500
and actual raw intelligence. You can imagine a super powerful computer that would predict everything
00:46:43.740
that was going to happen around you and was not aware of itself as an entity. The fact remains,
00:46:49.180
you do need to have a minimal level of intelligence to be able to be self-aware. So a fly may not be
00:46:56.540
self-aware. It just goes and finds good-smelling poop and does whatever it's programmed to do on that.
00:47:04.220
But dogs have some self-awareness and awareness of their surroundings. They don't have perfect
00:47:11.900
self-awareness, like they don't recognize themselves in the mirror and they'll bark at that. Birds will
00:47:17.180
recognize themselves in mirrors. We recognize ourselves in many, many ways. So there is some
00:47:23.580
correlation between intelligence and self-awareness, but these are not necessarily dependent functions.
00:47:28.700
So what I'm hearing you say is, look, there are clear and present dangers associated with current
00:47:34.780
best AI tools in that humans can use them for nefarious purposes. It seems to me that the most scalable
00:47:43.100
example of that is still relatively small in that it's not existential threat to our species large,
00:47:50.780
correct? Well, yes and no. If I was trying to do gain of function research with a virus,
00:47:58.140
good point, I could use these tools very effectively. Yeah. That's a great example.
00:48:03.980
There's this disconnect and perhaps you understand the disconnect better than I do.
00:48:07.580
There's those real existential threats. And then there's this more fuzzy thing that we're worried
00:48:14.780
about correctly about bias, incorrect decisions, hallucinations. We can get into what that might be
00:48:22.140
and our use in the everyday of human condition. And there's concerns about mistakes that might be
00:48:28.220
made. There's concerns about displacement of workers, just as automation displaced a whole other series
00:48:37.420
of workers. Now that we have something that works in the knowledge industry automatically, just as
00:48:43.660
we're replacing a lot of copy editors and illustrators with AI, where's that going to stop? It's now much
00:48:50.700
more in the white collar space. And so there is concern around the harm that could be generated there.
00:48:57.260
In the medical domain, are we getting good advice? Are we getting bad advice? Whose interests are being
00:49:02.940
optimized in these various decision procedures? That's another level that doesn't quite rise at
00:49:08.540
all to the level of extinction events. But a lot of policymakers and the public seem to be concerned
00:49:14.860
about it. Those are fair points. Let's now talk about that state of play within medicine. So I liked
00:49:20.700
your first example, almost one we take for granted, but you go and get an EKG at the doctor's office. This was
00:49:25.820
true 30 years ago, just as it is today. You get a pretty darn good readout. It's going to tell you if
00:49:31.500
you have an AV block. It's going to tell you if you have a bundle branch block. Put it this way,
00:49:36.540
they read EKGs better than I do. That's not saying much anymore, but they do. What was the next area
00:49:42.300
where we could see this? It seems to me that radiology is a field of medicine, which is of course,
00:49:48.940
image pixel based medicine that would be the most logical next place to see AI do good work. What
00:50:00.140
is the current state of AI in radiology? In all the visual based medical specialties,
00:50:07.740
it looks like AI can do as well as many experts. So what are the image appreciation subspecialties?
00:50:19.020
Pathology, when you're looking at slices of tissue under the microscope. Radiology,
00:50:23.020
where you're looking at x-rays or MRIs. Dermatology, where you're looking at pictures of the skin.
00:50:31.580
So in all those visual based specialties, the computer programs are doing by themselves as well
00:50:42.300
as many experts, but they're not replacing the doctors because that image recognition process
00:50:50.460
is only part of their job. Now, to be fair to your point, in radiology, we already today,
00:50:58.220
before AI in many hospitals would send x-rays by satellite to Australia or India where they would
00:51:06.220
be read overnight by a doctor or a specially trained person who had never seen the patient.
00:51:12.060
And then the reports filed back to us because they're 12 hours away from us overnight, we'd have
00:51:17.660
the results of those reads. And that same kind of function can be done automatically by AI. So that's
00:51:24.140
Let me dig into that a little bit more. So let's start with a relatively simple
00:51:29.740
type of image, such as a mammogram or a chest x-ray. So it's a single image. I mean, I guess
00:51:35.980
with a chest x-ray, you'll get an AP and a lateral, but let's just say you're looking at an AP
00:51:40.780
or a single mammogram. A radiologist will look at that. A radiologist will have clinical information
00:51:47.500
as well. So they will know why this patient presented in the case of the chest x-ray,
00:51:52.940
for example, in the ER in the middle of the night. Were they short of breath?
00:51:56.300
Do they have a fever? Do they have a previous x-ray I can compare it to? All sorts of information.
00:52:02.620
Are we not at the point now where all of that information could be given to the AI to enhance
00:52:09.420
the pre-test probability of whatever diagnosis it comes to?
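The pre-test-to-post-test updating invoked here is classic Bayes. A minimal sketch with hypothetical numbers (the sensitivity, specificity, and pre-test probability below are illustrative, not tied to any real test):

```python
def post_test_probability(pre_test_prob, sensitivity, specificity, positive=True):
    """Update a pre-test probability given a test result, via likelihood ratios."""
    if positive:
        likelihood_ratio = sensitivity / (1 - specificity)
    else:
        likelihood_ratio = (1 - sensitivity) / specificity
    pre_odds = pre_test_prob / (1 - pre_test_prob)   # probability -> odds
    post_odds = pre_odds * likelihood_ratio           # Bayes update in odds form
    return post_odds / (1 + post_odds)                # odds -> probability

# Hypothetical: a finding with 90% sensitivity and 95% specificity,
# applied to a 10% pre-test probability.
p = post_test_probability(0.10, sensitivity=0.90, specificity=0.95)
print(round(p, 2))  # 0.67
```

This is the arithmetic a clinical context (symptoms, fever, prior films) feeds into: richer context shifts the pre-test probability before the image is even read.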
00:52:12.860
I am delighted when you say pre-test probability. Don't talk dirty around me.
00:52:18.780
Yep. So you just said a lot, because what you just said actually went beyond what the straight
00:52:26.060
convolutional neural networks would do, because they actually could not replace radiologists,
00:52:30.380
because they could not do a good job of taking into account the previous history of the patient.
00:52:36.540
And it's required the emergence of transformers, where you can have multimodality. You have both
00:52:43.980
the image and the text. Now, they're going to do better than many, many radiologists today.
00:52:52.460
There is, I don't think, any threat yet to radiologists as a job. One of the most irritating
00:52:58.220
predictions to doctors was by Geoffrey Hinton, one of the intellectual leaders of neural network
00:53:03.900
architecture. He said, I think it was in 2016, I have this approximately wrong, but in six years,
00:53:09.740
we would have no need for radiologists. And that was just clearly wrong. And the reason it was
00:53:16.220
wrong is A, they did not have these capabilities that we just talked about, about understanding
00:53:21.180
about the clinical context. But it's also the fact that we just don't have enough radiologists.
00:53:27.180
To actually do the work. So if you look at American medicine, I'll let you shut me down.
00:53:33.980
But if you look at the residency programs, we're not getting enough radiologists out. We have an
00:53:42.060
overabundance of applicants for interventional radiology. They're making a lot of money. It's
00:53:47.100
high prestige. But straight up radiology readers, not enough of them. Primary care doctors, I go around
00:53:54.620
medical schools and ask who's becoming a primary care doctor. Almost nobody. So primary care is
00:53:59.660
disappearing in the United States. In fact, Mass General and Brigham announced officially they're
00:54:05.020
not seeing primary care patients. People are still going to dermatology and they're still going to
00:54:10.060
plastic surgery. What I did, pediatric endocrinology, half of the slots nationally are not being filled.
00:54:17.500
Pediatric developmental disorders like autism, those slots, half of them filled.
00:54:23.740
There's a huge gap emerging in the available expertise. So it's not what we thought it was
00:54:33.180
going to be that we had a surplus of doctors that had to be replaced. It's just we have a surplus in
00:54:39.980
a few focused areas which are very popular. And then for all the work of primary care and primary
00:54:46.140
prevention, kind of stuff that you're interested in, we have almost no doctors available.
00:54:50.540
Yeah, let's go back to the radiologist for a second because, again, I'm fixated on this one
00:54:55.660
because it seems like the most, well, the closest one to address. And again, if you're saying, look,
00:55:01.500
we have a dearth of imaging radiologists who are able to work the emergency rooms, urgent care clinics,
00:55:08.140
and hospitals, wouldn't that be the first place we would want to apply our best of imaging recognition
00:55:15.820
with our super powerful GPUs and now plug them into our transformers with our language models
00:55:23.260
so that I can get clinical history, medical past history, previous images, current images,
00:55:30.380
and you don't have to send it to a radiologist in Australia to read it, who then has to send it back
00:55:35.900
to a radiologist here to check. Like, if we're just trying to fill a gap, that gap should be fillable,
00:55:40.780
shouldn't it? And that's exactly where it is being filled. And what keeps distracting me in this
00:55:46.380
conversation is that there's a whole other group of users of these AIs that we're not talking about,
00:55:52.620
which is the patients. And previously, none of these tools were available to patients. With the release
00:55:58.700
of GPT-3.5 and 4, and now Gemini and Claude 3, they're being used by patients all the time in ways
00:56:06.540
that we had not anticipated. Let me give you an example. So there's a child who was having trouble
00:56:14.780
walking, having trouble chewing, and then started having intractable headaches. Mom brought him to
00:56:21.980
multiple doctors, they did multiple imaging studies, no diagnosis, kept on being in intractable
00:56:28.540
pain. She just typed into GPT-4 all the reports and asked GPT-4, what's the diagnosis? And GPT-4
00:56:36.220
said, tethered cord syndrome. She then went with all the imaging studies to a neurosurgeon and said,
00:56:42.220
what is this? He looked at it and said, tethered cord syndrome. And we have such an epidemic of
00:56:48.460
misdiagnosis and undiagnosed patients. Part of my background that I'll just mention briefly,
00:56:55.580
I'm the principal investigator of the coordinating center of something called the Undiagnosed Diseases Network.
00:56:59.740
It's a network with 12 academic hospitals down the West Coast from University of Washington,
00:57:05.580
Stanford, UCLA, to Baylor, up the East Coast, Harvard hospitals, NIH. And we see a few thousand
00:57:12.300
patients every year. And these are patients who have been undiagnosed and they're in pain. That's just a
00:57:17.980
small fraction of those who are undiagnosed. And yes, we bring to bear a whole bunch of computational
00:57:23.020
techniques and genomic sequencing to actually be able to help these individuals. But it's very clear
00:57:29.100
that there's a much larger burden out there of misdiagnosed individuals.
00:57:33.340
But the question for you, Zach, which is, does it surprise you that in that example, the mother
00:57:37.980
was the one that went to GPT-4 and inputted that? I mean, she had presumably been to many physicians
00:57:45.100
along the way. Were you surprised that one of the physicians along the way hadn't been the one to say,
00:57:51.260
gee, I don't know, but let's see what this GPT-4 thing can do?
00:57:54.460
Most clinicians I know do not have what I used to call the Google reflex. I remember when I was
00:58:02.620
on the wards and we had a child with dysmorphology, they look different. And I said to the fellows,
00:58:10.700
this is after residency, what is the diagnosis? And they said, I don't know, I don't know. I said,
00:58:16.140
he has this and this and this finding. What's the diagnosis? And I said, how would you find out?
00:58:20.780
They had no idea. I just said, let's take what I just said and type it into Google.
00:58:24.620
In the top three responses, there was the diagnosis. And that reflex, which they do
00:58:31.100
use in a civilian life, they did not have in the clinic. And doctors are in a very unhappy position
00:58:38.220
these days. They're really being driven very, very hard. And they're being told to use certain
00:58:44.140
technological tools. They're being turned into data entry clerks. They don't have the Google reflex.
00:58:49.740
They don't have the reflex, who has the time to look up a journal article? They don't do the Google
00:58:55.500
reflex. Even less, do they have the, let's look at the patient's history and see what GPT-4 would
00:59:02.380
come up with. I was gratified to see early on doctors saying, wow, look, I just took the patient
00:59:09.260
history, plugged into GPT-4 and said, write me a letter of prior authorization. And they were
00:59:14.380
actually tweeting about doing this, which on the one hand, I was very, very pleased for them
00:59:19.180
because it was saving them five minutes to write that letter to the insurance company saying,
00:59:24.300
please authorize my patient for this procedure. I was not pleased for them because if you use ChatGPT,
00:59:30.460
you're using a program that is covered by open AI, as opposed to a version of GPT-4 that is being run
00:59:38.860
on protected Azure cloud by Microsoft, which is HIPAA covered. For those of you, the audience
00:59:45.260
doesn't know, HIPAA is the legal framework under which we protect patient privacy. And if you violate
00:59:50.940
it, you can be fined and even go to prison. So in other words, if a physician wants to put any
00:59:57.260
information into GPT-4, they better not identify it. That's right. So they just plunked in a patient
01:00:04.300
note into ChatGPT. That's a HIPAA violation. If they use the Microsoft version of it, which is HIPAA
01:00:10.620
compliant, it's not. So they were using it to improve their lives. The doctors were using it
01:00:15.580
for improving the business, the administrative part of healthcare, which is incredibly important.
01:00:20.140
But by and large, only a few doctors use it for diagnostic acumen.
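The privacy point here is worth making concrete. Below is a minimal, assumption-laden sketch of scrubbing a few obvious identifiers before pasting a note into a chatbot that is not covered by a HIPAA agreement; the example note is invented, and real HIPAA de-identification covers 18 identifier categories and requires far more than these regexes.

```python
import re

# Crude patterns for a few obvious identifiers. This is illustrative only:
# it would miss names, addresses, and countless other identifiers.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),
    (re.compile(r"\bMRN[:# ]*\d+\b"), "[MRN]"),
]

def scrub(note: str) -> str:
    """Replace each matched identifier with a placeholder token."""
    for pattern, token in PATTERNS:
        note = pattern.sub(token, note)
    return note

note = "Seen 3/14/2024, MRN: 123456, call 617-555-0199 re: chest x-ray."
print(scrub(note))  # Seen [DATE], [MRN], call [PHONE] re: chest x-ray.
```

Even with scrubbing like this, the safe route remains the HIPAA-covered deployment the conversation describes, not the consumer chatbot.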
01:00:26.620
And then what about more involved radiology? So obviously a plain film is one of the more
01:00:32.460
straightforward things to do, although it's far from straightforward, as anybody knows who's
01:00:36.460
stared at a chest x-ray. But once we start to look at three-dimensional images, such as
01:00:41.420
cross-sectional images, CT scans, MRIs, or even more complicated images like ultrasound and things of
01:00:48.220
that nature, what is the current state of the art with respect to AI in the assistance of reading them?
01:00:57.580
So that's the very exciting news, which is, remember how I said it was important to have
01:01:03.340
a lot of data, one of the three ingredients in the breakthrough. So all of a sudden having
01:01:07.740
a lot of data around, for example, echocardiograms, the ultrasounds of your heart. Normally it takes
01:01:14.300
a lot of training to interpret those images correctly. So there is a recent study from the
01:01:20.700
EchoCLIP group, led, I think, out of UCLA. And they took a million echocardiograms and a million
01:01:30.380
textual reports and essentially trained the model, both to create those embeddings I talked about
01:01:39.020
Just to make sure people understand what we're talking about, this is not, here's a picture of
01:01:44.700
a cat, here's a description, cat. When you put the image in, you're putting a video in. Now you're
01:01:51.420
putting a multi-dimensional video because you have time scale, you have Doppler effects. This is a very
01:02:01.180
It's a very complicated video and it's three-dimensional and it's weird views from different angles.
01:02:08.300
And it's dependent on the user. In other words, the tech, the radiology tech can be good or bad.
01:02:20.780
The echo tech does not have medical school debt. They don't have to go to medical school. They don't
01:02:25.660
have to learn calculus. They don't have to learn physical chemistry, all the hoops that you have to
01:02:29.740
go through in medical school. You don't have the attitudinal debt of doctors. So in two years,
01:02:34.460
they get all those skills and they actually do a pretty good job.
01:02:37.020
They do a fantastic job. But my point is, their skill is very much an important determinant of
01:02:44.300
Yes. But what we still require these days is a cardiologist to then read it and interpret it.
01:02:50.460
Right. That's sort of where I'm going, by the way, is we're going to get rid of the
01:02:52.940
cardiologist before we get rid of the technician.
01:02:55.500
We're on the same page. My target in this conversation is nurse practitioners
01:03:00.220
and physician assistants with these tools can replace a lot of expert clinicians.
01:03:06.140
And there is a big open question. What is the real job for doctors in 10 years from now?
01:03:13.100
And I don't think we know the answer to that because you fast forward to the conversation just now.
01:03:18.780
Excellent. Well, let's think about it. We still haven't come to proceduralists.
01:03:22.780
So we still have to talk about the interventional radiologist, the interventional cardiologist,
01:03:26.460
and the surgeon. We can talk about the role of the surgeon and the da Vinci robot in a moment.
01:03:31.500
But I think what we're doing is we're kind of identifying the pecking order of physicians.
01:03:36.700
And let's not even think about it through the lens of replacement. Let's start with the lens of
01:03:40.780
augmentation, which is the radiologist can be the most easily augmented, the pathologist,
01:03:47.900
the dermatologist, the cardiologist who's looking at echoes and EKGs and stress tests. People who are
01:03:56.460
interpreting visual data and using visual data will be the most easily augmented. The second tranche of
01:04:03.500
that will be people who are interpreting language data plus visual data. So now we're talking about
01:04:09.020
your internist, your pediatrician, where you have to interpret symptoms and combine them with laboratory
01:04:15.260
values and combine it with a story and an image. Is that a fair assessment in terms of tier?
01:04:21.260
Absolutely a fair assessment. My only quibble, it's not a quibble, I'm going to keep on going back to
01:04:25.820
this, is in a place where we don't have primary care. The Association of American Medical Colleges
01:04:31.580
estimates that by 2035, that's only 11 years from now, we'll be missing on the order of 50,000 primary
01:04:36.940
care doctors. As I told you, I can't get primary care at the Brigham or at MGH today. And in the absence of
01:04:43.420
that, you have to ask yourself, how can we replace these absent primary care practitioners with
01:04:50.220
nurse practitioners with physician assistants augmented by these AIs? Because there's literally
01:04:57.980
no doctor to replace. So tell me, Zach, where are we technologically on that augmentation? If NVIDIA
01:05:06.060
never came out with another chip, if they literally said, you know what, we are only interested in building
01:05:12.860
golf simulators, and we're done with the progress of this, and this is as good as it's going to get. Do we have
01:05:19.980
good enough GPUs, good enough multi-layer neural networks, that all you need is more data and training sets, that we
01:05:28.780
could now do the augmentation that has been described by us in the last five minutes?
01:05:33.740
The short answer is yes. Let me make it very concrete. Most concierge services cost in Boston
01:05:39.900
somewhere between $5,000 and $20,000 a year. But there's a very low-cost concierge service that I'm
01:05:46.060
just amazed has not done the following: One Medical. One Medical was acquired by Amazon.
01:05:51.500
And they have a lot of nurse practitioners in there. And you can make an appointment,
01:05:55.100
and you can text with them. I believe that those individuals could be helped in ordering the right
01:06:02.380
imaging studies, the right EKGs, the right medications, and assess your continuing heart failure,
01:06:11.980
and only decide in the very few cases that you need to see a specialist cardiologist or a specialist
01:06:19.660
endocrinologist today. It would just be a matter of making the current models better, evaluating them,
01:06:26.540
because not all models are equal. A big question for us, this is the regulatory question, which is,
01:06:32.140
which ones do a better job? And they're not all equal. I don't think we need technological breakthroughs
01:06:38.860
to just make the current set of paraprofessionals work at the level of entry-level doctors. Let me quickly
01:06:47.180
say the old very bad joke. What do you call the medical student who graduates at the bottom of
01:06:52.300
his class? Doctor. And so if you could just merely get the bottom 50% of doctors to be as good as the
01:07:01.580
top 50%, that would be transformative for healthcare. Now, there are other superhuman capabilities that we
01:07:10.220
can go towards, and we can talk about if we want, that do require the next generation of algorithms,
01:07:18.060
NVIDIA architectures, and data sets. But if everything stopped now, we could already transform medicine.
01:07:24.300
It's just a matter of the sweat equity to create the models, figure out how to include them in the
01:07:30.780
workflow, how to pay for them, how to create a reimbursement system, and a business model that
01:07:37.420
works for our society. But there's no technological barrier.
01:07:42.940
In my mind, everything we've talked about is take the best case example of medicine today
01:07:49.900
and augment it with AI such that you can raise everyone's level of care to that of the best,
01:07:56.300
best, no gaps, and it's scaled out. Okay, now let's talk about another problem, which is where do you
01:08:04.780
see the potential for AI in solving problems that we can't even solve on the best day at the best
01:08:13.740
hospitals with the best doctors? So let me give you an example. We can't really diagnose Alzheimer's
01:08:21.180
disease until it appears to be at a point that for all intents and purposes is irreversible.
01:08:29.580
Maybe on a good day, we can halt progression really, really early in a patient with just a whiff of MCI,
01:08:36.300
mild cognitive impairment, maybe with an early amyloid detection and an anti-amyloid drug.
01:08:43.100
But is it science fiction to imagine that there will be a day when an AI could listen to a person's
01:08:48.620
voice, watch the movements of their eyes, study the movements of their gait, and predict 20 years
01:08:57.020
in advance when a person is staring down the barrel of a neurodegenerative disease and act at a time
01:09:03.260
when maybe we could actually reverse it? How science fiction-y is that?
01:09:07.580
I don't believe it's science fiction at all. Do you know that looking at retinas today, images of retina,
01:09:13.660
straightforward convolutional neural network, not even ones that involve transformers,
01:09:17.820
can already tell you by looking at your retina, not just whether you have retinal disease,
01:09:23.020
but if you have hypertension, if you're a male, if you're female, how old you are,
01:09:28.700
and some estimate of your longevity. And that's just looking at the back of your eye
01:09:33.580
and seeing enough data. I was a small player in a study that appeared in Nature in 2005 with Bruce
01:09:40.540
Yankner. We were looking at the frontal lobes of individuals who had died for a variety of reasons,
01:09:47.500
often in accidents of various ages. And we saw, bad news for people like me, that after age 40,
01:09:54.380
your transcriptome, the genes that are switched on, fell off a cliff. Thirty percent of your transcriptome
01:10:00.860
went down. And so there seemed to be a big difference in the expression of genes around age 40,
01:10:07.900
but there was one 90-year-old who looked like the young guy. So maybe there's hope for some of us.
01:10:12.140
But then I thought about it afterwards, and there were other things that actually have much smoother
01:10:16.380
functions, which don't have quite a fall off, like our skin. So our skin ages. In fact, all our organs
01:10:23.980
age and they age at different rates. You're saying that the transcriptome of the skin,
01:10:28.940
you did not see this cliff-like effect at a given age, the way you saw it in the frontal cortex.
01:10:34.540
So different organs age at different rates, but having the right data sets and the ability to see
01:10:42.540
nuances that we don't notice makes it very clear to me that the early detection part, no problem.
01:10:49.580
It can be very straightforward. The treatment part, we can talk about it as well. But again,
01:10:54.940
we had early on, from the very famous Framingham Heart Study, a predictor of when you were going to have
01:11:01.500
heart disease based on just a handful of variables. Now we have these artificial intelligence models
01:11:07.500
that, based on hundreds of variables, can predict various other diseases. And it will do Alzheimer's,
01:11:16.220
I believe, very soon. I think you'll be able to see a combination of
01:11:21.500
gait, speech patterns, picture of your body, picture of your skin, and eye movements. Like you said,
01:11:29.740
will be a very accurate predictor. We just published, by the way recently, speaking about eyes,
01:11:34.300
a very nice study where in a car, just by looking at the driver, it can figure out what your blood sugar is.
01:11:42.460
Because diabetics have sometimes been unable to get driver's licenses because of the worry about
01:11:49.260
them passing out because of hypoglycemia. So there was a very nice study that showed that you could
01:11:52.940
just, by having cameras pointed at the eyes, actually figure out what the blood
01:11:57.500
sugar is. So that kind of detection is, I think, fairly straightforward. It's a different question
01:12:03.740
about what you can do about it. Before we go to the what you can do about it,
01:12:07.100
I just want to go a little deeper on the predictive side. You brought up the Framingham model or the
01:12:12.220
multi-ethnic study on atherosclerosis, the MESA model. These are the two most popular models by far
01:12:17.100
looking at a major adverse cardiac event risk prediction. But you needed something else to
01:12:21.820
build those models, which was enough time to see the outcome. In the Framingham cohort,
01:12:27.100
which was the late 70s and early 80s, you then had the Framingham offspring cohort.
01:12:31.740
And then you had to be able to follow these people with their LDL-C and HDL-C and triglycerides.
01:12:36.700
And later, eventually, they incorporated calcium scores. So if today we said, look,
01:12:43.100
we want to be able to predict 30-year mortality, which is something no model can do today,
01:12:50.700
this is a big pet peeve of mine, is we generally talk about cardiovascular disease through the lens
01:12:55.580
of 10-year risk, which I think is ridiculous. We should talk about lifetime risk. But I would
01:13:00.220
settle for 30-year risk, frankly. And if we had a 30-year risk model where we could take
01:13:07.100
many more inputs, I would absolutely love to be looking at the retina. I believe, by the way,
01:13:12.540
Zach, that retinal examination should be a part of medicine today for everybody.
01:13:17.180
I would take a retinal exam over a hemoglobin A1C all day, every day. I'd never look at another A1C
01:13:24.620
again if I could see the retina of every one of my patients. But my point is, even if, effectively, today
01:13:30.540
we could define the data set, and let's overdo it, and we can prune things later. But we want to see
01:13:35.820
these 50 things in everybody to predict every disease. Is there any way to get around the fact
01:13:41.340
that we're going to need 30 years to see this come to fruition in terms of watching how the story plays
01:13:46.300
out? Or are we basically going to say, no, we're going to do this over five years? It won't be that
01:13:51.340
useful because a five-year predictor basically means you're already catching people in the throes of
01:13:55.100
the disease. I'll say three words, electronic health records. So that turns out not to be the
01:14:02.060
answer in the United States. Why? Because in the United States, we move around. We don't stay in any
01:14:09.020
given healthcare system that long. So very rarely will I have all the measurements made on you,
01:14:15.820
Peter, all your glycohemoglobins, all your blood pressures, all your clinic visits, all the imaging
01:14:21.180
studies that you've had. However, that's not the case in Israel, for example. In Israel, they have
01:14:28.460
these HMOs, health maintenance organizations. And one of them, Clalit, I have a good relationship with
01:14:35.020
because they published all the big COVID studies looking at the efficacy of the vaccine. And why
01:14:42.540
could they do that? Because they had the whole population available. And they have about 20,
01:14:48.220
25 years worth of data on all their patients in detail and family relationships. So if you have
01:14:55.980
that kind of data, and Kaiser Permanente also has that kind of data, I think you can actually come
01:15:02.060
close. But you're not going to be able to get retina, gait, voice, because we still have to get those
01:15:07.900
prospectively. I'm going to claim that there are proxies, rough proxies: for gait, falls.
01:15:15.900
And for hearing problems, visits to the audiologist. Now, these are noisier measurements. And so,
01:15:23.820
those of us who are data junkies, like I am, always keep mumbling to ourselves, perfect is the
01:15:30.060
enemy of good. Waiting 30 years to have the perfect data set is not the right answer to help patients
01:15:36.300
now. And there are things that we could know now that are knowable today that we just don't know
01:15:42.860
because we haven't bothered to look. I'll give you a quick example. I did a study of autism using
01:15:49.180
electronic health records, maybe 15 years ago. And I saw there was a lot of GI problems. And I
01:15:55.660
talked to a pediatric expert, and they were a little bit dismissive. They said, brain bad,
01:16:00.940
tummy hurt. I've seen a lot of inflammatory bowel disease. It just doesn't make sense to me that this
01:16:06.220
is somehow an effect of brain function. To make a long story short, we did a massive study,
01:16:11.660
looking at tens of thousands of individuals. And sure enough, we found subgroups of patients who
01:16:15.820
had immunological problems associated with their autism, and they had type 1 diabetes,
01:16:21.500
inflammatory bowel disease, lots of infections. Those were knowable, but they were not known. And I had,
01:16:27.020
frankly, parents coming to me more thankful than for anything else I had ever done for them
01:16:30.860
clinically, because I was telling these parents, they weren't hallucinating that these kids have
01:16:35.420
these problems. They just weren't being recognized by medicine because no one had the big wide angle
01:16:41.340
to see these trends. So, without knowing the field of Alzheimer's the way I do other fields,
01:16:48.140
I bet you there are trends in Alzheimer's that you can pick up today by looking at enough patients
01:16:53.660
that you'll find some that have more frontotemporal components, some that have more affective
01:16:58.780
components, some that have more of an infectious and immunological component. Those are knowable
01:17:04.220
today. Zach, you've already alluded to the fact that we're dealing with a customer, if the physician
01:17:11.580
is the customer, who is not necessarily the most tech-forward customer. And truthfully, like many customers
01:17:20.060
of AI, runs the risk of being marginalized by the technology if the technology gets good enough.
01:17:25.900
And yet, you need the customer to access the patient to make the data system better,
01:17:33.900
to make the training set better. So, how do you see the interplay over the next decade of that dynamic?
01:17:42.620
That's the right question. Because in order for these AI models to work, you need a lot of data,
01:17:47.900
a lot of patients. Where is that data going to come from? So, there are some healthcare systems,
01:17:52.860
like the Mayo Clinic, who think they can get enough data in that fashion. There are some data
01:18:00.940
companies that are trying to get relationships with healthcare systems where they can get de-identified
01:18:06.540
data. I'm betting on something else. There is a trend where consumers are going to have increased
01:18:13.580
access to their own data. The 21st Century Cures Act was passed by Congress, and it said that patients
01:18:20.220
should be given access to their own data programmatically. Now, they're not expecting
01:18:24.860
your grandmother to write a program to access the data programmatically, but by having a right to it,
01:18:30.540
it enables others to do so. So, for example, Apple has something called Apple Health. It has this big
01:18:36.060
heart icon on it. If you're at one of the 800 hospitals that they've already hooked up with,
01:18:40.780
Mass General or Brigham and Women's, and you're a patient there, if you authenticate yourself to it,
01:18:45.260
if you give it your username and password, it will download into your iPhone, your labs, your meds,
01:18:51.900
your diagnoses, your procedures, as well as all the wearable stuff, your blood pressure that you get
01:18:58.220
as an outpatient, and various other forms of data. That's already happening now. There's not a lot of
01:19:04.140
companies that are taking advantage of that, but right now that data is available on tens of millions
01:19:08.780
of Americans. Isn't it interesting, Zach, how unfriendly that data is in its current form? I'll
01:19:15.420
give you just a silly example in our practice. So, if we send a patient to LabCorp or Boston Heart or
01:19:21.340
Pick Your Favorite Lab, and we want to generate our own internal reports based on those where we want
01:19:28.940
to do some analysis on that, lay out trend sheets, we have to use our own internal software. It's almost
01:19:37.420
impossible to scrape those data out of the labs because they're sending you PDF reports. Their
01:19:44.860
APIs are garbage. Nothing about this is user-friendly. So, even if you have the My Heart thing or whatever,
01:19:53.020
the My Health thing come on your phone, it's not navigable. It's not searchable. It doesn't show you
01:19:58.540
trends over time. Like, is there a more user-hostile industry from a data perspective than the health
01:20:04.140
industry right now? No, no. And there's a good reason why, because they're keeping you captive.
01:20:10.860
But Peter, the good news is you're speaking to a real nerd. Let me tell you two ways where we could
01:20:17.020
solve your problem. One, if it's in that Apple Health thing, someone can actually write a program,
01:20:22.140
an app on the iPhone, which will take those data as numbers and not have to scrape it. And it could run
01:20:28.620
it through your own trending programs. You could actually use it directly. Also, Gemini and GPT-4,
01:20:34.540
you can actually give it those PDFs. And actually, with the right prompting, it will actually take
01:20:41.100
those data and turn them into tabular spreadsheets. We can't do that because of HIPAA, correct?
01:20:47.340
If the patient gets it from the patient portal, absolutely, you can do that.
01:20:50.700
The patient can do that, but I can't use a patient's data that way.
01:20:54.380
If the patient gives it to you, absolutely. Really? Oh, yes. But it's not de-identified.
01:21:00.940
It doesn't matter. If a patient says, Peter, you can take my 50 LabCorp reports for the last 10 years,
01:21:09.260
and you can run them through ChatGPT to scrape it out and give me an Excel spreadsheet that will
01:21:15.900
perfectly tabularize everything that we can then run into our model to build trends and look for
01:21:21.100
things. I didn't think that was doable, actually. So it's not doable through ChatGPT because your
01:21:26.220
lawyers would say, Peter, you're going to get a million dollars in fines from HIPAA. I'm not a
01:21:31.100
shill for Microsoft. I don't own any stock. But if you do GPT on the Azure cloud that's HIPAA protected,
01:21:37.740
you absolutely can use it with patient consent. 100% you can do it. GPT is being used
01:21:43.660
with patient data out of Stanford right now. Epic's using GPT-4, and it's absolutely legitimately
01:21:51.980
usable by you. People don't understand that. We've now just totally bypassed OCRs.
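The workflow Zach describes, prompting a model to emit the lab values as rows and then loading the reply into a spreadsheet, can be sketched roughly as follows. The prompt wording and the pipe-delimited reply format here are illustrative assumptions, not any vendor's actual output; a real pipeline would send the prompt plus the report text to a HIPAA-compliant model endpoint.

```python
# Sketch of the "PDF report -> tabular data" step. The prompt text and the
# pipe-delimited reply format are hypothetical, for illustration only.
EXTRACTION_PROMPT = (
    "From the attached lab report, list every analyte as one line of "
    "'name | value | unit | date', with no other text."
)

def parse_model_reply(reply: str) -> list:
    """Parse a pipe-delimited model reply into spreadsheet-ready rows."""
    rows = []
    for line in reply.strip().splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) != 4:
            continue  # skip any header lines or extra chatter
        name, value, unit, date = parts
        try:
            rows.append({"name": name, "value": float(value),
                         "unit": unit, "date": date})
        except ValueError:
            continue  # non-numeric result (e.g. "positive"); handle separately
    return rows

# A canned reply standing in for the model's output:
canned_reply = "LDL-C | 102 | mg/dL | 2023-04-01\nHbA1c | 5.4 | % | 2023-04-01"
rows = parse_model_reply(canned_reply)
```

Once the values are numeric rows rather than PDF pixels, the trending and internal reporting Peter describes becomes ordinary spreadsheet work.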
01:21:57.420
We do not need to waste our time with, for people not in the acronyms, optical character recognition,
01:22:02.460
which is 15 years ago what we were trying to do to scrape this data. Peter, let me tell you,
01:22:08.940
there's New England Journal of Medicine. I'm on the editorial board there. And we just published
01:22:12.700
three months ago, a picture of the week: the back of this 72-year-old. And it looks like a bunch of red
01:22:18.620
marks. To me, it looks like somebody scratched themselves. And it says, blah, blah, blah. They
01:22:23.100
had trouble sleeping. This is the image of the week. Image of the week. And I took that whole thing,
01:22:27.820
and I took out one important fact, and then gave it to GPT-4, the image and the text. And it came up with
01:22:35.500
the two things I thought it would be, either bleomycin toxicity, which I don't know what
01:22:39.660
that looks like, or shiitake mushroom toxicity. What I'd removed is the fact that the guy had eaten
01:22:47.100
mushrooms the day before. So this thing got it just by looking at the picture.
01:22:55.900
I don't think most doctors know this, Zach. I don't think most doctors understand. First of all,
01:23:01.820
I can't tell you how many times I get a rash. Well, I try to send a picture to my doctor,
01:23:06.700
or my kid gets a rash, and I'm trying to send a picture to their pediatrician,
01:23:10.700
and they don't know what it is. And it's like, we're rubbing two sticks together,
01:23:14.380
and you're telling me about the Zippo lighter. Yes. And that's what I'm saying is patients without
01:23:19.260
primary care doctors. I know I keep repeating myself. They understand that they have a Zippo
01:23:22.860
lighter. Rather than waiting three months because of a rash or their symptoms, they say, I'll use this Zippo lighter.
01:23:28.540
It's better than no doctor for sure, and maybe better than some. That's now. Let me quickly illustrate it. I don't
01:23:34.460
know squat about the FDA. And so I pulled down from the FDA the adverse event reporting files.
01:23:41.180
It's a big zip file, compressed file. And I said to GPT-4, please analyze this data. And it says,
01:23:47.260
unzipping. Based on this table, I think this is about the adverse events, and this is the
01:23:51.900
locations. What do you want to know? I say, tell me what adverse events for disease-modifying
01:23:57.740
drugs for arthritis. It says, oh, to do that, I'll have to join these two tables.
01:24:02.220
And it just does it. It creates its own Python code. It does it, and it gives me a report.
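The table join that GPT-4 writes for itself in that session can be sketched in plain Python. The column names, drug names, and reactions below are hypothetical stand-ins for the real FAERS files, just to show the shape of the generated code.

```python
import csv
import io
from collections import defaultdict

# Tiny stand-ins for the FAERS drug and reaction tables; the column names
# and values here are hypothetical, for illustration only.
drug_csv = "primaryid,drugname\n1,METHOTREXATE\n2,ADALIMUMAB\n3,ASPIRIN\n"
reac_csv = "primaryid,reaction\n1,NAUSEA\n2,INJECTION SITE PAIN\n3,TINNITUS\n"

def load(text):
    """Read a CSV string into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def inner_join(left, right, key):
    """Join two row lists on a shared key column -- the step the model's
    generated code performs before filtering for a drug class."""
    index = defaultdict(list)
    for row in right:
        index[row[key]].append(row)
    return [{**l, **r} for l in left for r in index[l[key]]]

joined = inner_join(load(drug_csv), load(reac_csv), "primaryid")
dmards = {"METHOTREXATE", "ADALIMUMAB"}  # hypothetical disease-modifying drugs
events = [row["reaction"] for row in joined if row["drugname"] in dmards]
```

The point of the anecdote is that the model generates and runs this kind of glue code on its own, from a plain-English question about a zip file it has never seen.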
01:24:08.140
Is this a part of medical education now? You're at Harvard, right? You're at one of the three best
01:24:12.620
medical schools in the United States, arguably in the world. Is this an integral part of the
01:24:17.660
education of medical students today? Do they spend as much time on this as they do histology,
01:24:23.500
where I spent a thousand hours looking at slides under a microscope that I've never once tried to
01:24:29.740
understand? Again, I don't want to say there wasn't a value in doing that. There was,
01:24:34.620
and I'm grateful for having done it. But I want to understand the relative balance of education.
01:24:40.060
It's like the stethoscope. Arguably, we should be using things other than the stethoscope.
01:24:44.540
Let me make sure I don't get fired, or at least beaten severely, by telling you that George Daley,
01:24:50.220
our dean of the medical school, has said explicitly he wants to change all of medical education.
01:24:54.940
So these learnings are infused throughout the four years, but it's going to take some doing.
01:25:01.180
Let's now move on to the next piece of medicine. So we've gone from purely the recognition image-based
01:25:09.980
to how do I combine image with voice, story, text. You've made a very compelling case that we don't
01:25:17.820
need any more technological breakthroughs to augment those. It's purely a data set problem at this point,
01:25:22.780
and a willingness. Let's now move to the procedural. Is there, in our lifetimes,
01:25:28.540
say, Zach, the probability that if you need to have a radical prostatectomy, which currently,
01:25:35.980
by the way, is never done open. This is a procedure that the da Vinci, a robot, has revolutionized.
01:25:41.580
There's no blood loss anymore. When I was a resident, this was one of the bloodiest operations
01:25:46.380
we did. It was the only operation, by the way, for which we had the patients donate their own blood
01:25:52.300
two months ahead of time. That's how guaranteed it was that they were going to need blood transfusions,
01:25:57.580
so we just said, to hell with it. Come in a couple of months before, give your own blood,
01:26:01.180
because you're going to need at least two units following this procedure.
01:26:05.260
Today, it's insane how successful this operation is, in large part because of the robot.
01:26:11.180
But the surgeon needs to move the robot. Are we getting to the point where that could change?
01:26:17.340
So let me tell you where we are today. Today, there's been studies where it's collected a bunch
01:26:21.660
of YouTube videos of surgery and trained up one of these general models. So it says, oh, they're
01:26:29.020
putting on the scalpel to cut this ligament. And by the way, that's too close to the blood vessel.
01:26:35.580
They should move it a little bit to the side. That's already happening. Based on what we're seeing with
01:26:40.780
robotics in the general world, I think the da Vinci controlled by a robot in 10 years is a very safe bet.
01:26:50.940
It's a very safe bet. In some ways, 10 years is nothing.
01:26:54.620
It's nothing. But it's a very safe bet. The fact is, right now, I can do a better job,
01:26:59.500
by the way, just to go back to our previous discussion, giving you a genetic diagnosis
01:27:04.780
based on your findings than any primary care provider interpreting a genomic test.
01:27:10.860
So are you using that example, Zach, because it's a huge data problem? In other words, that's obvious
01:27:17.900
that you would be able to do that because the amount of data, I mean, there's 3 billion base pairs
01:27:22.780
to be analyzed. So of course, you're going to do a better job.
01:27:27.100
Yeah, yeah. But you're saying surgery is a data problem because you can turn it into a pixel problem?
01:27:36.620
That's it. Remember, there's a lot of degrees of freedom in moving a car around traffic. And by the
01:27:42.300
way, lives are on the line there too. Now, medicine is not the only job where lives are at stake.
01:27:49.740
Driving a ton of metal at 60 miles per hour in traffic is also putting lives at stake. And last
01:27:57.100
time I looked, there are several manufacturers who are saying that, for some appreciable fraction
01:28:05.020
of that effort, they're controlling multiple degrees of freedom with a robot.
01:28:08.620
Yeah. I very recently spoke with somebody, I won't name the company, I suppose, but it's one
01:28:15.260
of the companies that's deep in the space of autonomous vehicles. And they very boldly stated,
01:28:21.820
they made a pretty compelling case for it, that if every vehicle on the road was at their level of
01:28:27.100
technology and autonomous driving, you wouldn't have fatalities anymore. But the key was that every
01:28:32.780
vehicle had to be at that level. I don't know if you know enough about that field, but does that
01:28:36.620
sense check to you? Well, first of all, I'm a terrible driver. I am a better driver. It's not
01:28:42.140
for an ad, but the fact is I'm a better driver because I'm now in a Tesla, because I'm a terrible
01:28:46.300
driver. And there's actually a very good message for medicine, because I will paraphrase this.
01:28:51.900
I knew enough to know that I need to jiggle the steering wheel when I'm driving with a Tesla,
01:28:56.060
because otherwise it will assume that I'm just zoning out. But what I didn't realize is this,
01:29:01.260
I'm very bad. I'll pick up my phone and I'll look at it. I didn't realize it was looking at me and
01:29:06.060
it says, Zach, put down the phone. So I, okay, I put it down. Three minutes later,
01:29:10.460
I pick it up again and it says, okay, that's it. I'm switching off autopilot. So it switches off
01:29:16.140
autopilot and now I have to pay attention, full attention. Then I get home and it says, all right,
01:29:21.740
that was bad. You do that four more times. I'm switching off autopilot until the next software
01:29:27.820
update. And the reason I mentioned that is it takes a certain amount of confidence to do that
01:29:33.180
to your customer base saying, I'm switching off the thing that they bought me for. In medicine,
01:29:38.780
how likely is it that we're going to fall asleep at the wheel if we have an AI thinking for us?
01:29:43.980
It's a real issue. We know for a fact, for example, back in the nineties,
01:29:47.740
that for doses of a drug like ondansetron, where people would talk endlessly about how frequently
01:29:52.860
you should be given it with what dose. The moment you put it in the order entry system,
01:29:56.540
95% of doctors would just use the default there. And so how in medicine are we going to keep doctors
01:30:03.100
awake at the wheel? And will we dare to do the kind of challenges that I just described the car
01:30:09.340
doing? So just to get back to it, I do believe because of what I've seen with autonomy and robots
01:30:16.940
that, as fancy as we think that is, controlling a da Vinci robot will probably have fewer
01:30:23.980
bad outcomes. Every once in a while, someone nicks something and you have to go into full
01:30:29.340
surgery or they go home and they die on the way home because they exsanguinate. I think it's just
01:30:34.620
going to be safer. It's just unbelievable for me to wrap my head around that. But truthfully,
01:30:41.500
it's impossible for me to wrap my head around what's already happened. So I guess I'll try to
01:30:45.740
retain the humility that says I reserve the right to be startled. Again, there are certain things that
01:30:51.980
seem much easier than others. Like I have an easier time believing we're going to be able to replace
01:30:56.140
interventional cardiologists where the number of degrees of freedom, the complexity and the
01:31:02.060
relationship between what the image shows, what the cath shows and what the input is, the stent,
01:31:09.180
that gap is much narrower. Yeah, I can see a bridge to that. But when you talk about doing a Whipple
01:31:14.460
procedure, when you talk about what it means to cell by cell take a tumor off the superior mesenteric
01:31:22.380
vessels, I'm thinking, oh my God. Since we're on record, I'm going to say, I'm talking about your
01:31:28.380
routine prostate removal. Yeah. Within 10 years, I would take that bet today. Wow. Let's go one layer further.
01:31:36.940
Sure. Let's talk about mental health. This is a field of medicine today that I would also argue
01:31:42.940
is grossly underserved. Everything you've said to date resonates. I completely agree from my own
01:31:49.980
experience that the resources in pediatrics and primary care, I mean, these things are unfortunate
01:31:55.980
at the moment. At Harvard, I think, 60% of undergraduates are getting some sort of mental
01:32:01.180
health support, and it's completely outstripping all the resources available to the university health
01:32:06.780
services. And so we have to outsource some of our mental health. And this is a very richly endowed
01:32:11.660
university. In general, we don't have the resources. So here we live in a world where I think the evidence
01:32:18.300
is very clear that when a person is depressed, when a person is anxious, when a person has any sort of
01:32:24.580
mental or emotional illness, pharmacotherapy plays a role, but it can't displace psychotherapy.
01:32:30.100
You have to be able to put these two things together. And the data would suggest that the
01:32:35.660
knowledge of your psychotherapist is important, but it's less important than the rapport you can
01:32:41.700
generate with that individual. Now, based on that, do you believe that the most sacred, protected,
01:32:49.620
if you want to use that term, profession within all of medicine will then be psychiatry?
01:32:54.980
I'd like to think that if I had a psychiatric GPT speaking to me,
01:33:00.100
I wouldn't think that it understood me. On the other hand, back in the 1960s or 70s,
01:33:08.900
there was a program called Eliza and it was a simple pattern matching program. It would just emulate
01:33:14.740
what's called a Rogerian therapist: "I really hate my mother." "Why do you say you hate your mother?"
01:33:21.140
"Oh, it's because I don't like the way she fed me." "What is it about the way she fed you?" Just very,
01:33:26.980
very simple pattern matching. And this Eliza program, which was developed by Joe Weizenbaum at MIT,
01:33:34.660
his own secretary would lock herself in her office to have sessions with this thing because it's
01:33:44.420
Yeah. And it turns out that there's a large group of patients who actually would rather have a non-human,
01:33:51.380
non-judgmental person who remembers what they've said from last time, shows empathy verbally. Again,
01:33:58.500
I wrote this book with Peter Lee and Peter Lee made a big deal in the book about how GPT-4 is showing
01:34:05.220
empathy. In the book, I argued with him that this is not that big a deal. And I said, I remember from
01:34:11.140
medical school being told that some of the most popular doctors are popular because they're very
01:34:16.980
deep empaths, not necessarily the best doctors. And so I said, you know, for certain things,
01:34:22.100
that's just me. I could imagine a lot of, for example, cognitive behavioral therapy being done
01:34:29.380
and found acceptable by a subset of human beings. It wouldn't be for me. I'd say,
01:34:34.100
I'm just speaking to some stupid program. But if it's giving you insight into yourself and it is
01:34:39.220
based on the wisdom culled from millions of patients, who's to say that it's worse? And it's certainly not
01:34:48.420
So Zach, you're born probably just after the first AI boom. You come of age, intellectually,
01:34:58.740
academically in the second. And now in the mature part of your career, when you're at the height of
01:35:05.780
your esteem, you're riding the wave of this third version, which I don't think anybody would argue
01:35:13.220
is going anywhere. As you look out over the next decade, and we'll start with medicine,
01:35:20.180
what are you most excited about? And what are you most afraid of with respect to AI?
01:35:25.780
Specifically with regard to medicine, what I'm most concerned about is how it could be used by
01:35:33.300
the medical establishment to keep things the way they are, to pour concrete over practices.
01:35:39.700
What I'm most excited about is alternative business models, young doctors who create businesses
01:35:48.260
outside the mold of hospitals. Hospitals are these very, very complex entities.
01:35:55.300
They make billions of dollars, some of the bigger ones, but with very small margins, one to 2%.
01:36:01.220
When you have huge revenue but very small margins, you're going to be very risk averse.
01:36:06.180
And you're not going to want to change. And so what I'm excited about is the opportunity for new
01:36:13.380
businesses and new ways of delivering to patients insights that are data-driven. What I'm worried
01:36:19.940
about is hospitals doing a bunch of information blocking and regulations that will make it harder for
01:36:28.820
these new businesses to get created. Understandably, they don't want to be disrupted. That's the danger.
01:36:34.260
In that latter case or that case that you're afraid of, Zach, can patients themselves work around the
01:36:42.180
hospitals with these new companies, these disruptive companies and say, look, we have the legal framework
01:36:48.980
that says, I own my data as a patient. I own my data. Believe me, we know this in our practice.
01:36:54.340
Just because our patients own the data doesn't make it easy to get. There is no aspect of my practice
01:37:00.180
that is more miserable and more inefficient than data acquisition from hospitals. It's actually
01:37:07.140
absolutely comical. And I do pay hundreds of dollars to get the data of my patients with rare and unknown
01:37:13.940
diseases in this network extracted from the hospitals because it's worth it to pay someone to do that
01:37:19.780
extraction. Yeah. But now I'm telling you it is doable.
01:37:23.860
So you're saying because of that, are you confident that the legal framework for patients to have their
01:37:29.540
data coupled with AI and companies, do you think that that will be a sufficient hedge against your fear?
01:37:37.060
I think that unlike my 10-year prostatectomy by robot prediction, I'm not as certain, but I would
01:37:44.020
give better than 50% odds that in the next 10 years, there'll be a company, at least one company,
01:37:50.100
that figures out how to use that patient's right to access through dirty APIs, using AI to clean it up,
01:37:59.300
provide decision support with human doctors or health professionals to create alternative
01:38:05.460
businesses. I am convinced because the demand is there. And I think that you'll see companies that
01:38:12.820
are even willing to put themselves at risk. What I mean by that, are willing to take the medical risk
01:38:18.100
on that if they do better than a certain level of performance, they get paid more. And if they do worse, they get paid less.
01:38:26.660
I believe there are companies that are going to be in that space, but I say that with caution because I don't want to
01:38:32.420
underestimate the medical establishment's ability to squish threats. So we'll see.
01:38:38.100
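The cleanup layer Zach imagines sitting on top of "dirty" patient-access APIs might look, in toy form, like the sketch below. The field names, date formats, and record shape are invented for illustration; real patient-access APIs such as FHIR are far richer:

```python
from datetime import datetime

# Toy sketch: records for the same patient arrive from different
# hospitals in inconsistent shapes, and a cleanup layer normalizes them
# into one consistent record format.
DATE_FORMATS = ["%Y-%m-%d", "%m/%d/%Y", "%d %b %Y"]

def normalize_date(raw: str) -> str:
    """Coerce a date string in any known export format to ISO 8601."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(raw, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError(f"unrecognized date: {raw!r}")

def clean_record(record: dict) -> dict:
    """Normalize one exported record into a consistent shape."""
    return {
        "patient": record.get("patient") or record.get("pt_name", "").strip(),
        "date": normalize_date(record["date"]),
        "code": record.get("code", "").upper(),
    }
```

In practice this deterministic layer is exactly the part a language model could replace or extend, since the mess in real exports is far less regular than three date formats.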
Okay. Now let's just pivot to AI outside of medicine. Same question. What are you most afraid
01:38:44.340
of over the next decade? So maybe we're not talking about self-awareness and Skynet, but next decade,
01:38:51.700
what are you most afraid of? And what are you most excited about?
01:38:55.140
What I'm most afraid of is a lot of the ills of social networks being magnified by use of these
01:39:06.500
AIs to further accelerate cognitive chaos and vitriol that fills our social experiences on the net.
01:39:17.620
It could be used to accelerate them. So that's my biggest fear.
01:39:21.300
I saw an article two weeks ago about an individual. I can't remember if they were
01:39:26.580
currently in or formerly part of the FBI. And they stated that they believed, I think it was
01:39:32.100
somewhere between 75 and 90% of quote unquote individuals on social media were not in fact
01:39:38.500
individuals. I don't know if you spend enough time on social media to have a point of view on that.
01:39:43.060
Unfortunately, I have to admit to the fact that my daughter, who's now 20 years old,
01:39:47.620
it must have been four years ago, she bought me a mug that says on it, Twitter addict. I spend enough time.
01:39:53.380
I would not be surprised if some large fraction are bots, and it could get worse. And it's going to be
01:39:59.380
harder to actually distinguish reality from human beings, harder and harder and harder.
01:40:05.780
That's the real problem. We are fundamentally social animals. And if we cannot understand our social
01:40:12.980
context in most of our interactions, it's going to make us crazy, or I should say crazier. On the most
01:40:21.300
positive side, I think that these tools can be used to expand the creative expression of all
01:40:28.740
people. If you're a poor driver like me, I'm going to be a better driver. If you're a lousy musician,
01:40:36.580
but have a great ear, you're going to be able to express yourself musically in ways that you could not do
01:40:41.300
before. I think you're going to see filmmakers who were never meant to be filmmakers before
01:40:46.820
express themselves. I think human expression is going to be expanded because, just like the printing
01:40:53.700
press allowed all sorts of... In fact, it's a good analogy because the printing press also created a
01:40:58.180
bunch of wars because it allowed people to make clear their opposition to the church and so on,
01:41:03.300
enabled a number of bad things to happen. But it allowed also expression of all literature in ways
01:41:08.420
that would have not been possible without the printing press. I'm looking forward to human
01:41:12.740
expression and creativity. I can't imagine you haven't played with some of the picture generation
01:41:17.460
or music generation capabilities of AI, or if you haven't, I strongly recommend. You're going to be
01:41:22.260
amazed. I have not. I am ashamed maybe to admit my interactions with AI are limited to really ChatGPT-4
01:41:29.860
and basically problem solving. Solve this problem for me. And by the way, I think I'm doing it at a very
01:41:35.860
JV level. I could really up my game there. Just before we started this podcast, I thought of a
01:41:40.900
problem. I've been asking my assistant to solve because A, I don't have the time to solve it and
01:41:45.860
I'm not even sure how I would solve it. It would take me a long time. I've been asking her to solve
01:41:49.700
it and it's actually pretty hard. And then I realized, oh my God, why am I not asking ChatGPT-4
01:41:54.340
to do it? So I just started typing in the question. It's a bit of an elaborate question. As soon as we're
01:41:59.620
done with this podcast, I'll probably go right back to it, but I haven't done anything creatively with it.
01:42:03.540
What I will say is what does this mean for human greatness? So right now, if you look at a book
01:42:12.560
that's been written and someone who's won a Pulitzer Prize, you sort of recognize like, I don't know if
01:42:17.800
you read Sid Mukherjee, right? He's one of my favorite writers when it comes to writing about science and
01:42:22.960
medicine. When I read something that Sid has written, I think to myself, there's a reason that he is so
01:42:30.180
special. He and he almost alone can do something we can't do. I've written a book. It doesn't matter.
01:42:38.500
I could write a hundred books. I'll never write like Sid and that's okay. I'm no worse a person.
01:42:43.860
I'm no worse a person than Sid, but he has a special gift that I can appreciate just as we could all
01:42:50.120
appreciate watching an exceptional athlete or an exceptional artist or musician. Does it mean anything
01:42:56.080
if that line becomes blurred? That's the right question. And yes, Sid writes like poetry.
01:43:04.440
Here's an answer which I don't like. I've heard many times people said, oh, you know that Deep Blue
01:43:10.620
beat Kasparov in chess, but chess is more popular than it ever was, even though we know that the best
01:43:17.580
chess players in the world are computers. So that's one answer. I don't like that answer at all.
01:43:22.720
Yeah. Because if we create Sid GPT and it wrote Alzheimer's, the Second Greatest Malady,
01:43:31.180
in full Sid style, and it was not Sid, but with just the same empathic family references.
01:43:39.060
Right. The weaving of history with story with science. Yeah.
01:43:42.060
If it did that and it was just a computer, how would you feel about it, Peter?
01:43:45.120
I mean, Zach, you are asking the jugular question. I would enjoy it, I think, just as much,
01:43:51.840
but I don't know who I would praise. Maybe I have in me a weakness slash tendency to want to idolize.
01:44:00.180
You know, I'm not a religious person, so my idols aren't religious, but I do tend to love to see
01:44:06.340
greatness. I love to look at someone who wrote something who's amazing and say, that amazes me.
01:44:11.720
I love to be able to look at the best driver in the history of Formula One and study everything
01:44:17.900
about what they did to make them so great. So I'm not sure what it means in terms of that.
01:44:24.560
I grew up in Switzerland, in Geneva. And even though I have this American accent,
01:44:28.860
both my parents were from Poland. And so the reason I have an American accent is I went to
01:44:33.180
international school with a lot of Americans. All I read was whatever my dad would get me from
01:44:37.900
England in science fiction. So I'm a big science fiction fan. So let me go science fiction on you
01:44:43.080
to answer this question. It's not going to be in 10 years, but it could be in 50 years.
01:44:48.240
You'll have idols and idols will be, yes, Greg Orovich wrote a great novel, but you know,
01:44:53.780
AI 521, their understanding of the human condition is wonderful. I cry when I read their novels.
01:45:01.160
They'll be a part of the ecosystem. They'll be entities among us, whether they are self-aware or not,
01:45:06.440
will become a philosophical question. Let's not go down that narrow path, that disgusting rabbit hole
01:45:11.540
where I wonder, does Peter actually have consciousness or not? Does he have the same
01:45:15.820
processes as I do? We won't know that about these, or maybe we will, but will it matter
01:45:21.060
if they're just among us? And they'll have brands, they'll have companies around them.
01:45:26.420
They'll be superstars. And there'll be Dr. Fubar from Kansas, trained in Ayurvedic medicine,
01:45:37.580
the key person for alternative medicine, not a human, but we love what they do.
01:45:43.260
Okay. Last question. How long until, from at least an intellectual perspective, we are immortal?
01:45:51.400
So if I died today, my children will not have access to my thoughts and musings any longer.
01:46:00.940
Will there be a point at which during my lifetime, an AI can be trained to be identical to me,
01:46:10.320
at least from a goalpost perspective, to the point where after my death, my children could say,
01:46:17.980
dad, what should I do about this situation? And it can answer them in a way that I would have.
01:46:25.940
It's a great question because that was an early business plan that was
01:46:29.940
generated shortly after GPT-4 came out. In fact, I was talking very briefly to Mark Cuban
01:46:36.240
because he saw GPT-4. I think he got trademarks or copyrights on his voice,
01:46:42.180
all his work and likeness so that someone could not create a Mark who responded in all the ways he
01:46:50.440
does. And I'll tell you that it sounds crazy, but there's a company called rewind.ai. And I have
01:46:58.580
it running right now. And everything that appears on my screen, it's recording. Every sound that it
01:47:06.720
hears, it's recording. And if characters appear on the screen, it'll OCR them. If a voice appears, it transcribes it.
01:47:13.460
And then if I have a question, I say, when did I speak with Peter Attia? They'll find it for me.
01:47:18.540
I'll say, who was I talking about AI and Alzheimer's? And they'll find this video on a timeline.
01:47:35.320
Because A, it compresses it down in real time using Apple Silicon. And second of all,
01:47:40.360
you and I are old, and we don't realize that gigabytes are not big on a standard Mac that has
01:47:46.880
a terabyte. That's a thousand gigabytes. And so you can compress audio immensely. It's actually not
01:47:54.120
taking video. It's just taking multiple snapshots every time the screen changes by a certain amount.
01:47:59.020
Yeah. It's not trying to get video resolution per se.
01:48:02.020
No. And it's doing it. And I can see a timeline. It's quite remarkable. And so that is enough,
01:48:09.240
in my opinion, data so that with enough conversations like this, someone could create
01:48:14.840
a pretty good approximation of at least public Zach.
01:48:18.780
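The snapshot-on-change scheme Zach describes can be sketched roughly as follows. The byte-level frame comparison and the 10% threshold are illustrative assumptions, not details of Rewind's actual implementation:

```python
# Sketch of snapshot-on-change recording: instead of storing video,
# keep a frame only when it differs from the last kept frame by more
# than a threshold. Frames are modeled as equal-length byte strings.

def changed_fraction(a: bytes, b: bytes) -> float:
    """Fraction of bytes that differ between two equal-sized frames."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def keep_snapshots(frames, threshold=0.10):
    """Return only the frames that differ enough from the last kept one."""
    kept = []
    for frame in frames:
        if not kept or changed_fraction(kept[-1], frame) > threshold:
            kept.append(frame)
    return kept
```

A mostly static screen then collapses to a handful of snapshots, which is why hours of "recording" can fit comfortably on a terabyte drive.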
So then the next question is, is Zach willing to have Rewind AI on a recording device, his phone
01:48:26.660
with him 24 seven in his private moments, in his intimate moments, when he is arguing with his
01:48:33.400
wife, when he's upset at his kids, when he's having the most amazing experience with his postdoc. Like
01:48:40.120
if you think about the entire range of experiences we have from the good, the bad, the ugly, those are
01:48:45.800
probably necessary if we want to formulate the essence of ourselves. Do you envision a day in which
01:48:51.420
people can say, look, I'm willing to take the risks associated with that. And there are clear risks
01:48:55.900
associated with doing that, but I'm willing to take those risks in order to have this legacy?
01:49:05.300
I think it's actually pretty creepy to come back from the dead to talk to your children.
01:49:09.600
So I actually have other goals. Here's where I take it. We are being monitored all the time.
01:49:14.740
We have iPhones, we have Alexa devices. I don't know what is actually being stored, by whom, and for what.
01:49:20.600
And people are going to use this data in ways that we do or don't know. I feel it's us,
01:49:26.140
the little guy, if we have our own copy and we can say, well, actually, look, this is what I said then.
01:49:33.480
Yeah. That was taken out of context. And I can do it. I have an assistant that can
01:49:37.980
just find it, find exactly what I said, and find all the times I said it. I think that's good. I think it's
01:49:45.240
messing with your kids' heads to have you come back from the dead and give advice, even though
01:49:49.820
they might be tempted. Technically, I think it's going to be not that difficult. And again,
01:49:55.080
speaking about Rewind AI, again, I have no stake in them. I think I might have paid them for a license
01:50:01.260
to run on my computer, but the microphone is always on. So when I'm talking to students in
01:50:07.100
my office, it's taking that down. So there are some moments in my life where I don't want to be
01:50:12.860
on record. There are big chunks of my life that are actually being stored this way.
01:50:16.780
Well, Zach, this has been a very interesting discussion. I've learned a lot.
01:50:20.100
I probably came into this discussion with about the same level of knowledge, maybe slightly more
01:50:25.540
than the average person, but clearly not much more on just the general principles of AI, the
01:50:30.760
evolution of AI. A lot surprises me, but nothing surprises me more
01:50:36.780
than the timescale that you've painted for the evolution within my particular field and your
01:50:43.120
particular field, which is medicine. I had no clue that we were getting this close to that level of
01:50:53.260
intelligence. Peter, if I were you, this is not an offer because I'm too busy, but you're a capable guy
01:50:59.260
and you have a great network. If I was running the clinic that you're running, I would take advantage
01:51:03.360
of now. I would get those videos and those sounds and get all my patients with, of course, their
01:51:11.300
consent to be part of this and to actually follow their progress, not just the way they report it,
01:51:17.560
but by their gait, by the way they look. You can do great things in what you're doing and advance the
01:51:24.200
state of the art. You're asking who's going to do it. You're doing some interesting things. You could be
01:51:29.180
pushing the envelope using these technologies as just another very smart, comprehensive assistant.
01:51:36.200
Zach, you've given me a lot to think about. I'm grateful for your time and obviously for your
01:51:40.640
insight and years of dedication that have allowed us to be sitting here having this discussion.
01:51:45.320
Thank you very much. It was a great pleasure. Thank you for your time.
01:51:48.900
Thank you for listening to this week's episode of The Drive. It's extremely important to me to
01:51:53.880
provide all of this content without relying on paid ads. To do this, our work is made entirely
01:51:58.860
possible by our members. And in return, we offer exclusive member-only content and benefits above and
01:52:05.500
beyond what is available for free. So if you want to take your knowledge of this space to the next
01:52:09.820
level, it's our goal to ensure members get back much more than the price of the subscription.
01:52:14.980
Premium membership includes several benefits. First, comprehensive podcast show notes that detail
01:52:21.100
every topic, paper, person, and thing that we discuss in each episode. And the word on the street is
01:52:26.940
nobody's show notes rival ours. Second, monthly Ask Me Anything or AMA episodes. These episodes are
01:52:34.820
comprised of detailed responses to subscriber questions typically focused on a single topic
01:52:39.680
and are designed to offer a great deal of clarity and detail on topics of special interest to our
01:52:45.040
members. You'll also get access to the show notes for these episodes, of course. Third, delivery of our
01:52:50.940
premium newsletter, which is put together by our dedicated team of research analysts. This newsletter
01:52:56.320
covers a wide range of topics related to longevity and provides much more detail than our free weekly
01:53:02.560
newsletter. Fourth, access to our private podcast feed that provides you with access to every episode,
01:53:09.440
including AMAs, sans the spiel you're listening to now and in your regular podcast feed. Fifth,
01:53:16.360
the Qualies, an additional member-only podcast we put together that serves as a highlight reel featuring
01:53:22.560
the best excerpts from previous episodes of The Drive. This is a great way to catch up on previous episodes
01:53:28.280
without having to go back and listen to each one of them. And finally, other benefits that are added
01:53:33.100
along the way. If you want to learn more and access these member-only benefits, you can head over to
01:53:38.660
peteratiamd.com forward slash subscribe. You can also find me on YouTube, Instagram, and Twitter,
01:53:45.600
all with the handle peteratiamd. You can also leave us a review on Apple Podcasts or whatever podcast
01:53:52.160
player you use. This podcast is for general informational purposes only and does not
01:53:57.380
constitute the practice of medicine, nursing, or other professional healthcare services,
01:54:01.320
including the giving of medical advice. No doctor-patient relationship is formed. The use
01:54:07.140
of this information and the materials linked to this podcast is at the user's own risk. The content
01:54:12.920
on this podcast is not intended to be a substitute for professional medical advice, diagnosis, or treatment.
01:54:18.360
Users should not disregard or delay in obtaining medical advice for any medical condition they
01:54:23.560
have, and they should seek the assistance of their healthcare professionals for any such conditions.
01:54:28.960
Finally, I take all conflicts of interest very seriously. For all of my disclosures and the
01:54:34.080
companies I invest in or advise, please visit peteratiamd.com forward slash about where I keep an up-to-date