Episode 3090 - The Scott Adams School 02/09/26
Episode Stats
Words per Minute
159.3474
Summary
In this episode of The Scott Adams School, we have a special guest, John Nosta. John is a cardiovascular and cognitive neuroscientist who has spent the past several years focusing exclusively on artificial intelligence and the impact it can have on human cognition.
Transcript
00:00:22.140
So, whatever you think is going to happen in the future,
00:00:55.740
Just a reminder, this is not to replicate Scott.
00:01:05.200
And we're just here to commune, have a good time, keep learning, keep growing.
00:01:10.320
And hopefully, we'll always have something interesting for us all to learn and understand and talk about.
00:01:18.760
So, we can't do any of that, you all, until we do one thing first.
00:01:30.220
This is a short sip, because we have a lot to talk about today.
00:02:00.400
For the best part of the day, except for the rest of it, which is going to be pretty good, too.
00:02:05.760
Yes, it's going to be coffee with Scott Adams this morning.
00:02:14.960
You need a cup or a mug or a glass, a tank or a chalice or a stein, a canteen jug or a flask, a vessel of any kind, fill it with your favorite liquid.
00:02:24.060
And join me now for the unparalleled pleasure of the dopamine of the day that I think makes everything better.
00:03:00.840
You guys want to introduce yourselves really quick, and then I'll introduce John.
00:03:26.640
And I am so grateful to Brian Roemmele that you guys all remember, because he introduced
00:03:33.420
And I had to ask John to tell me how to explain him, because he's got quite the talent stack,
00:03:43.640
So, John, he has an eclectic background, for sure, from cardiovascular physiology to strategic
00:03:52.640
Over the past several years, he's focused exclusively on artificial intelligence and its impact on
00:04:01.080
And I want to also say we had quite an interesting phone call, because I am not an AI aficionado
00:04:08.980
at all, and I'm kind of like your base-level person, but I've never had a conversation about
00:04:14.460
AI like we had, where you brought in the human aspect of it and what's missing.
00:04:22.740
And I think everybody listening today is really going to benefit from hearing your perspective
00:04:30.520
So, John Nosta, welcome to the Scott Adams School.
00:04:35.660
I had some ideas about what to talk about, and then that clip completely took me off
00:04:46.340
I warned you guys about this, that I'm going to channel this down.
00:05:00.280
And I want to talk about that just briefly, because the notion of gather around actually
00:05:08.080
can be hearkened to ancient texts, the Upanishads, which are old Hindu spiritual texts.
00:05:17.460
The Upanishad is actually Sanskrit for sit up close.
00:05:27.120
Doctors walking down a hallway on rounds, talking to one another.
00:05:32.180
A guru sitting with a master talking about a particular issue.
00:05:43.100
That is the essence of sit down, come up close.
00:05:47.480
And what we're seeing today, for the first time, is there's a technological component to
00:05:56.040
It's that now we have the ability to interact with AI, with large language models, where we
00:06:07.020
And that reflects very, very much as to what Scott was saying is, come on, sit down.
00:06:12.720
And I think that's the essence of where technology is going today.
00:06:20.860
And probably the most interesting word here is iterative, that we have an engaged conversation.
00:06:28.880
Well, don't take too long of a breath, because you have so much to offer.
00:06:34.220
And I think I also wanted to point out that you've written, what, over 500 articles for
00:06:41.640
And when we first got on the phone, if you're old enough, you remember Doogie Howser.
00:06:48.360
So you asked me if I know who Doogie Howser is, and that you were writing medical papers
00:07:02.620
You know, it wasn't because I was smart.
00:07:08.300
It's because I had a real unique interest and connection with things like physiology and
00:07:15.280
So, so my early path took me to what was going to be medical school.
00:07:21.820
You know, the nature of medicine today is very regurgitative.
00:07:32.080
So you memorize it, and then you're going to regurgitate it, or you're going
00:07:36.040
to be a sponge and they squeeze it out after that test.
00:07:39.380
So for me, it didn't really align with my interests.
00:07:43.500
I tend to be more of a creative or strategic thinker.
00:07:46.580
So that's where I ended up working in advertising and marketing with a large
00:07:51.620
advertising agency called Ogilvy, which is the largest healthcare advertising agency.
00:08:02.140
And I think that is probably one of the defining elements that I'm going to
00:08:06.640
jump back to that connectivity to what's going on here.
00:08:15.740
And it's been said, and I think it's quite profound: as you think, so you act; as you act,
00:08:21.240
so you become. There's the magic, right?
00:08:24.320
If we want to be a doctor or a lawyer or a billionaire, you have to think it first.
00:08:32.140
And I want to take that note and go back a few hundred years and kind
00:08:38.100
of put this into a little bit of perspective because the word think is going to be real
00:08:44.680
So a few hundred years ago, a guy named Gutenberg did something that helped us think: he created
00:09:02.620
Now, in those days, it was principally the Bible.
00:09:07.660
Back in those days, innovation, technology, if you will, created something that there was
00:09:15.360
So that's the first sort of paradoxical thing here.
00:09:17.700
So I'm going to invent a book when no one can read.
00:09:24.940
So that was sort of the first inflection point in the dissemination of thought and thinking
00:09:35.600
Just because something is new and innovative doesn't mean it aligns to market adoption.
00:09:44.100
Then we move up in time when we get to this other thing called the internet.
00:09:48.840
The internet, and principally Google, I guess, if you really wanted to talk about search
00:09:56.360
Google did something that was very similar to Gutenberg.
00:10:03.580
And that was the second stage in this sort of thinking dynamic.
00:10:09.500
The bad news is it unlocked facts in a way that is very cold.
00:10:21.880
It's transactional if you're looking for a word.
00:10:25.820
And you know, back in the days when you type up, you know, where's the best Mexican restaurant
00:10:33.140
You know, we all know it's Casa Comida, by the way.
00:10:35.380
But anyway, it gives you a long, complicated answer.
00:10:41.620
So that was the second sort of inflection point, if you will.
00:10:48.560
That really transformed the way we could think.
00:10:52.940
But now, what happens with these large language models that are really changing everything
00:10:59.460
What large language models are doing is unlocking thought.
00:11:04.440
So that's the transition, unlocking words, unlocking facts, and unlocking thought.
00:11:11.480
The interesting thing about large language models is that it is an iterative dynamic and
00:11:17.280
that our ability to engage with a large language model back and forth actually activates thought.
00:11:27.920
How does that fit into the construct of things like the industrial age and the digital age
00:11:34.020
I would argue that we're moving into a new domain, and that is the domain of thought.
00:11:41.640
And it goes right back to that fundamental reality written thousands of years ago in the
00:11:47.940
Upanishads that simply says, as you think, so you act, as you act, so you become.
00:11:54.340
So when we talk about modern technology and we talk about, come on, everybody, let's gather
00:12:02.540
That's as old as humanity itself, yet we see a new contemporary spin on that.
00:12:06.940
So that's kind of what I've been thinking about recently.
00:12:09.680
So what I've noticed in some of the writings about this sort of thing is it seems to go in
00:12:18.080
One is along the lines of what you said, where you now have this personal companion that you
00:12:23.520
can chat with and you can think with and have a conversation and that sort of thing.
00:12:28.600
But there's also the opposite, which is, I think the article I posted today talked about
00:12:33.020
it as cognitive debt and that you might be offloading your thinking to the AI and therefore
00:12:41.300
And I have seen similar things across all of kind of education use cases for AI, where
00:12:47.480
it can be a great tutor and help you learn if you use it the right way.
00:12:51.380
But it could also just give you all the answers and keep you from learning.
00:12:55.780
And there's a big fear now that a lot of people are never going to learn the skills they have
00:13:03.080
And on top of that, the other thing I'll layer on and let you comment is I've noticed, or
00:13:10.160
at least there's been people that have commented that AI is kind of reflect back your level of
00:13:15.000
thinking, that if you are really kind of dumb and ask it questions that kind of are what
00:13:22.240
an 80-IQ person might say, then it's going to kind of reflect that back at you and adapt
00:13:28.000
But if you're more of a PhD super genius and you use all sorts of big words and, you know,
00:13:33.460
it's different, then it's going to reflect back that level of thinking or that level of
00:13:38.460
So what do you, what do you think about all this?
00:13:41.940
You've touched on something. I'm going to reach over here.
00:13:46.800
So this is the shameless self-promotion of my book, which is The Borrowed Mind.
00:13:54.360
Yes, we are doing a certain element of cognitive offloading.
00:13:59.520
And there's, oh my God, so much to talk about in that question.
00:14:02.460
So let's back up and humanize this a little bit.
00:14:11.520
And interestingly, it's generally one or two people.
00:14:17.840
Nobody has, you know, a whole load of famous teachers.
00:14:22.500
It's oftentimes a woman, which is because elementary education was sort of biased to
00:14:37.320
She delivered information in a way that was tuned to the creative frequency of your brain.
00:14:44.820
And I think that's something we have to consider.
00:14:47.220
So did the teacher rob your intelligence by pandering to your proclivities or insecurities?
00:14:55.600
Now, that being said, we have to recognize that the nature of large language models are such
00:15:09.160
They go right to that point, they do the thinking for us.
00:15:15.460
And that's a very dangerous situation that I've written extensively about.
00:15:20.060
So what happens when answers become instant?
00:15:24.860
What is actually happening between point A and point B, that cognitive path, right?
00:15:29.560
Well, it's the stumbles, it's the falls, it's the controversy, it's the pauses of contemplation
00:15:40.380
So what I think is happening is that we go from point A to point B with a large language model,
00:15:54.620
It's a word we all know, a word we relish.
00:15:58.860
And I think that Scott kind of defined that in some ways, it's imagination.
00:16:04.720
Imagination is that sort of rumbling, that pause, that confusion, that concern, that failure
00:16:14.180
So to answer your question, I think that artificial intelligence and large language
00:16:21.540
models are problematic or curiously interesting.
00:16:25.880
So can I go down another path real quick, Erica, just talking about technological
00:16:32.360
Everybody knows the painting by Vermeer, The Girl with the Pearl Earring, you know, that
00:16:41.540
So, Vermeer used technological augmentation to do that painting.
00:16:48.540
He used something called the camera obscura and he projected the image through light and
00:16:59.120
Was that technology, it was technological augmentation in his day.
00:17:06.940
Norman Rockwell, everybody knows Norman Rockwell, right?
00:17:12.100
You often see the painting of Norman Rockwell at the Thanksgiving
00:17:17.940
table, the family with the turkey and, or the cop with the kid who ran away from home.
00:17:23.440
These are extraordinarily powerful moments that move us.
00:17:36.220
It's a device where he actually hired a photographer, created a set, took a picture, and then took
00:17:44.680
that image and enlarged it and changed it using this mechanism called the Lucy, and
00:17:49.500
then painstakingly traced it and colored it in.
00:17:53.220
Next time you look at a Norman Rockwell painting, take a close look.
00:18:01.080
Remember those things when we were kids, the painting by number thing?
00:18:04.360
Norman Rockwell's art is very, very specific because he was constrained by the technology
00:18:13.480
And that's a really interesting dynamic, constrained by the technology you embrace.
00:18:19.720
Now, if you look at the more contemporary Norman Rockwell, his most contemporary
00:18:34.360
The signature is an expression of our humanity, right?
00:18:43.000
Well, what he did is he actually took a stencil and made "Norman Rockwell."
00:18:49.120
Now why I'm bringing this up is because this goes back to the earlier question is, is it
00:18:59.700
What I find interesting is the way Norman Rockwell responded when asked about the Lucy,
00:19:09.500
If you go to the Norman Rockwell Museum in Stockbridge, Massachusetts, don't ask them about the Lucy.
00:19:16.740
They get very upset because it's like, kind of like asking, did you write that essay or
00:19:22.920
It's that same social, cognitive, emotional dynamic that we're seeing play out here today.
00:19:28.520
So what Norman Rockwell said, I thought was really interesting.
00:19:31.380
He said, the Lucy is a horrible machine and I'd be lost without it.
00:19:35.580
And I think to a certain way, that's the delicate balance that we're seeing with large language
00:19:46.700
Well, I don't know Erica's cell phone number, right?
00:19:57.420
What is the appropriate level of cognitive offloading in our world?
00:20:00.820
If a medical student needs to know the second metabolic intermediary in the Krebs cycle,
00:20:06.100
which happens to be fructose 1,6-diphosphate, is probably going to get an A on his
00:20:13.760
But does that make her or him a better clinician?
00:20:17.800
These are very, very complicated questions now.
00:20:22.000
I mean, I think it will depend on the type of thing you're talking about.
00:20:29.360
But what I mean is there's certain skills that I see, like in the context of
00:20:33.540
a doctor, I would want them to be able to diagnose me kind of right on the spot and
00:20:38.540
not have to look up all the information or ask an LLM to figure out what my condition is,
00:20:43.800
Let's talk about that, because that in and of itself is a very interesting
00:20:49.200
That's called a differential diagnosis.
00:20:52.320
So, a 65-year-old guy goes to the emergency room who's sweaty and has chest pain radiating
00:21:00.220
Everybody want to do the diagnosis with me at the same time?
00:21:11.720
He might have a variety of things, but we often statistically guess into that
00:21:16.940
spot. There was a study that showed how well LLMs did, doctors did, and doctors using an
00:21:30.640
And they found something very interesting here.
00:21:39.180
Doctor alone, LLM, or an LLM and a doctor combined, which I thought was really interesting.
00:21:48.140
And this is where it gets to the point where you're worried about this idea.
00:21:53.460
Well, I want the doctor right there to make the diagnosis for me.
00:21:56.240
If you look at the clinical chain of reasoning, in other words, don't tell me you had a heart
00:22:04.720
Doctor, tell me the five reasons why the ST segment is elevated on my EKG.
00:22:11.080
That's a classic sign called a STEMI, a classic sign of a heart attack.
00:22:14.000
Tell me the five reasons why that might be elevated.
00:22:18.180
Pericarditis, early repolarization, ventricular aneurysm, you know, there's, there's, there's
00:22:29.340
Sometimes augmenting clinical thinking and reasoning is very, very helpful.
00:22:34.220
So I think that we're going to see, you know, the interesting thing here is when my wife
00:22:40.200
comes back to, from the pediatrician, I ask her, what did the doctor say?
00:22:55.960
So the rest of that question is really very telling.
00:22:58.420
It's what did the doctor, what did the computer say, comma, and what did the doctor do?
00:23:03.180
And it's that sort of cognitive functional dance.
00:23:07.100
So when I go into the emergency room, um, and they say I have a heart attack, my differential
00:23:13.560
diagnosis should be scrubbed analytically by an AI.
00:23:19.600
That's one of the pitfalls that we find with AI is it becomes a zero sum game.
00:23:27.360
What I've also noticed is that, at least in my use of AI, I found that it's very useful,
00:23:34.940
Like, I know what questions to ask to get good answers.
00:23:39.260
And I think from everything I've read about it, people who don't have a lot of expertise
00:23:46.120
And it's because they don't know what questions to ask and they don't know how to guide the
00:23:51.320
So it seems to me like we do need to maintain some ability for people to gain enough expertise
00:23:57.340
to control and guide the AI, at least until they don't need people anymore.
00:24:02.460
Well, also, Owen, the other thing I want to chime in, because I see it in the chat, is
00:24:26.260
Again, I'm going to use my Libra reference where I'm always in the middle of two things
00:24:35.200
And how do I know that it isn't skewed a certain way?
00:25:07.400
And I think that's so true that maybe we should worry about the human bias in a lot of our information.
00:25:16.920
But I think that AI is very helpful to me because I'm a geek.
00:25:22.320
I was in the car the other day, and I was actually having a conversation.
00:25:28.100
And I wanted the model to teach me about the strange qualities of subatomic particles.
00:25:36.820
What I find, what I know, I know I'm a complete geek.
00:25:41.360
Like when you had these quarks, these funky quarks, well, they ran out of words to describe them.
00:25:48.240
So they started using words like beauty, truth, charm, upness, downness to describe them.
00:25:54.160
So I had a really good discussion with AI about something I know very little about.
00:26:00.960
So, yes, you need to be a master of your domain, but you don't have to be a master of the knowledge domain.
00:26:12.600
Now, I want to get to something because I know we've gone like almost a half hour into this mumbo jumbo.
00:26:18.600
I just quickly wanted to, if you don't mind, ask Sergio and Marcella if they have a question at this point for you before we move on.
00:26:36.080
We get to get our heads together and like little sparks fly.
00:26:48.980
And I love that you are focused on the health part, you know, because that's a very important aspect for me,
00:26:58.380
always to know how are we maximizing our doctors.
00:27:01.380
And you already answered a lot of those questions.
00:27:10.300
That instead of calling it AI, calling it IA, right?
00:27:18.840
I wanted to ask you, I always tell people to not get into conversations with AI, like chats, back and forth.
00:27:30.400
Because I personally feel like it's getting me, like Owen was saying, it tries to agree with me a lot.
00:27:45.560
I just put a little microphone and I say in a voice memo.
00:27:50.620
I don't allow it to say, like, stop, you know, let me.
00:27:56.140
My question is, when it comes to health, right?
00:27:59.900
And the mental health part of it, of talking to an AI like this,
00:28:06.780
can you also agree that some people are more susceptible to that?
00:28:12.600
And because I am, that's why, because I know I am, I avoid it.
00:28:21.520
Because, you know, I type in a sentence and then Claude or whomever writes back,
00:28:28.280
oh, John, that is such an interesting observation, right?
00:28:53.300
You know, the best use of a hammer is from a skilled craftsman.
00:29:07.440
For example, if you've ever used any of those, you submit your picture into these apps and they give you an avatar, you know?
00:29:19.960
The hair is, I mean, it's like, what's going on here, right?
00:29:22.560
They're trained to give you output that you like.
00:29:25.960
And I think that large language models are very similar to that unless you can provoke them.
00:29:37.020
I want you to take on the role of a contrarian.
00:29:39.420
And I want you to look at this idea and give me all the downside to it.
00:29:43.640
So we have to be very careful because they are insidious.
00:29:47.060
Now, with respect to psychiatry and psychology, there's been a lot of action there.
00:29:53.060
I think that probably the fringe cases, you know, we have a normal distribution, a bell curve.
00:30:05.540
So let's say I told an LLM, I'm feeling a little blue today.
00:30:12.020
And then all of a sudden we start falling down the rabbit hole.
00:30:18.360
Well, it's a black hole of conscious awareness, you know, whatever that is.
00:30:27.160
In fact, I would argue that that's the friction of life.
00:30:32.260
And that friction is what drives the process of understanding.
00:30:38.580
So, as I said, getting from A to B with an LLM is instant, right?
00:30:45.240
Getting from A to B for a human is often toil, controversy, struggle, joy, wisdom, insight,
00:30:52.860
But I think that we have to be careful with large language models because they are.
00:31:05.240
I'm going to give you mine because I disagree with Brian on this point.
00:31:08.800
I think that intelligence amplified is not intrinsic to the model.
00:31:17.540
So I can call a hammer "house amplified," "builder amplified," right?
00:31:23.660
Because it's just going to help me, you know, whatever.
00:31:26.180
I think that, and this is really a bit controversial.
00:31:29.340
I don't think that artificial intelligence is intelligence at all.
00:31:36.600
Now, let's unpack this a little bit because it's a little complicated.
00:31:42.020
What does an apple look like to a large language model?
00:31:47.700
Well, let's talk about what it looks like to us, okay?
00:31:59.700
Some smarty pants might include time, but that's a very interesting thing.
00:32:05.240
Do you know that large language models don't have any idea what time is?
00:32:13.100
But when they see an apple, the old models from a few months ago would see that apple in 12,288 dimensions.
00:32:26.220
The new frontier models, the new ChatGPTs, actually look at the apple in 25,000 dimensions.
00:32:33.920
Now, the reason I'm talking about this is because I want you to be confused deliberately.
00:32:38.660
The perceptual domain, the cognitive capability of a large language model is vastly different than humans.
00:32:46.220
When we think of an apple, we think of three dimensions.
00:32:49.840
We think of, let's see, apple a day keeps the doctor away.
00:32:58.160
We think of about 25 linguistic associations combined with three spatial dimensions.
00:33:04.840
But an LLM looks at an apple in 25,000 dimensions.
00:33:11.260
Sometimes people talk about multiple dimensions called hypercubes, which are really cool mathematical
00:33:17.540
Sometimes people who study string theory get really wacky, and they look at string theory
00:33:22.040
in the context of 11 dimensions, which blows their mind.
00:33:28.660
As a human, we have no ability to conceptually understand that.
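[Editor's note: the "dimensions" John describes are the coordinates of a model's embedding vector for a token; 12,288 matches the embedding width of GPT-3-era models. A minimal sketch of the idea, using a toy hash-based embedding rather than a real model's learned weights (the function names and dimension handling here are illustrative assumptions, not any actual API):]

```python
import hashlib
import math

EMBED_DIM = 12288  # embedding width of the older models mentioned above

def toy_embedding(token: str, dim: int = EMBED_DIM) -> list[float]:
    """Deterministic stand-in for a learned embedding: hash the token
    into `dim` pseudo-random coordinates in [-1, 1). Real models learn
    these coordinates during training; this only illustrates the shape
    of the representation, not its meaning."""
    vec = []
    for i in range(dim):
        h = hashlib.sha256(f"{token}:{i}".encode()).digest()
        # map the first 8 bytes of the hash to a float in [-1, 1)
        vec.append(int.from_bytes(h[:8], "big") / 2**63 - 1.0)
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: how models compare points in this space."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

apple = toy_embedding("apple")
print(len(apple))  # prints 12288: one token, thousands of coordinates
```

[The point of the sketch is the contrast John draws: a human pictures an apple in three spatial dimensions plus a handful of associations, while the model's "apple" is a single point in a space with thousands of axes, compared to other points only by geometric measures like the cosine above.]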
00:33:33.740
And what I think that the difference is, is that AI is anti-intelligence.
00:33:41.260
Number one, it doesn't think like we do. It's really important that, I actually took some
00:33:58.960
We know who we were yesterday, and we kind of know who we're going to be tomorrow, right?
00:34:14.200
And when you combine these things together, I've written extensively about that, it's actually
00:34:20.440
The way LLMs process information is antithetical to human thought.
00:34:45.620
It's the difference between the two eyes that allow us to see the world uniquely.
00:34:51.600
And it's my contention that what we're seeing with artificial intelligence is the combination
00:34:57.120
of anti-intelligence and human intelligence, the combination of extraordinary computational
00:35:05.900
And our human, time-driven, biographical-driven, experience-driven, emotional-driven dynamic gives
00:35:20.620
So I think that when we think about AI, we have to celebrate the fact that it's frigging
00:35:28.340
The computational capabilities of AI are not good or bad.
00:35:37.580
And I think that's a difference we should celebrate.
00:35:43.240
And this idea that we make humans more like AI or we make AI more like humans, I think
00:35:48.380
is fundamentally flawed at a very, very base level.
00:35:54.480
And I think that's kind of one of the big issues.
00:35:56.820
And that gets back to the earlier question about psychiatry and psychology.
00:36:00.680
We have to recognize that these models are, in fact, models.
00:36:09.420
Stochastic parrots, as they're sometimes called.
00:36:14.340
When we ask an LLM a question, they already assume that there's an answer.
00:36:20.880
Take a step back and let's think about what this even means.
00:36:35.360
We as humans don't process information that way.
00:36:38.000
We don't think about the answer that exists at the end of the journey.
00:36:42.060
We think about the process that gets us to a place that may not exist, that may take us to a new cognitive construct.
00:36:50.420
So all these things are kind of, you know, there's a lot going on here.
00:36:53.940
And in the final analysis, I don't want to say that AI is bad.
00:36:57.700
I think the fundamental analysis, AI is anti-intelligence, antithetical to human thought.
00:37:03.720
And it lives in sort of a cognitive parallax related to depth.
00:37:07.960
Think about the intellectual and cognitive depth that we can have when we leverage an LLM.
00:37:14.480
And we haven't even gotten to education yet because I think education, while precarious, is still a wonderful opportunity.
00:37:23.600
And I'm the dad, I homeschool my kids with my wife.
00:37:28.400
And we look at, you know, good old-fashioned things like reading a book.
00:37:31.880
But we also use technology too, just like, you know, just like Vermeer and just like our other friend, the painter, who said that I'd be lost without it.
00:38:49.820
So you wanted to make sure that, to clarify that.
00:39:00.640
I think that some of the things that AI actually does is outside the domain of humans.
00:39:08.420
So does it amplify or does it contribute new perspectives?
00:39:17.140
Do you ever drive down the road and you see a big water truck and it says non-potable?
00:39:27.460
I believe that large language models, the computational brilliance of these models, albeit antithetical
00:39:35.560
to human cognition, is so deep, so multidimensional that we don't even understand it.
00:39:43.900
We do not have the capacity to understand what this 10,000 dimension articulation of quantum
00:39:52.200
physics sees in gravity, and it exists in this little packet that's unfit for human
00:40:04.080
I mean, if we look at a CD, we can't read a CD with our mind, with our head, right?
00:40:09.980
But I think AI creates a new domain of knowledge.
00:40:14.040
And unless you recognize that that knowledge is different than humans, is antithetical
00:40:19.500
to humans, I think you're going to run into a problem there.
00:40:23.820
So, you know, Brian and I align on almost everything.
00:40:29.300
But I really kind of think about that AI is fundamentally different.
00:40:45.340
Because when I use a hammer, I can hammer the nail, I can put the hammer down, and I
00:40:56.520
Can you unthink a thought you shared with a large language model?
00:41:04.180
I mean, it's, you know, we all have a voice in our head.
00:41:14.140
But for the most part, that voice in our head is a pretty good thing.
00:41:17.800
Interestingly, the voice in our head never changes.
00:41:20.040
That voice in your head is the same when you were five as when you were 55.
00:41:23.720
It's a curious voice that speaks in your head that is an amazingly intimate and personal
00:41:29.580
I think that we're actually seeing the emergence of not an inner monologue, but an inner dialogue.
00:41:36.880
So when I have those conversations with ChatGPT, I'm having an iterative dialogue, which is
00:41:51.460
You could almost think of it as a dress rehearsal for life.
00:41:54.260
So I got on ChatGPT and said, well, I have to tell my wife that we're not going on that
00:41:57.520
vacation to, you know, to Belmar, New Jersey this year.
00:42:00.900
So I could rehearse that with ChatGPT.
00:42:10.960
But it's also problematic because in certain instances, it could drive you down the rabbit
00:42:24.100
And that's the duality that really kind of flips me out.
00:42:27.060
We know that introspection is at the heart of transformation.
00:42:43.200
Many, many great thinkers found the answers come when they're quiet and alone.
00:42:47.640
And that level of introspection, I think, kindles a certain level of what I, I often refer to
00:42:54.820
it as: genius is our birthright, and mediocrity is self-imposed.
00:42:58.220
That I believe that our cognitive capabilities, when kindled, when tapped into, when managed
00:43:04.000
appropriately, yield really, really interesting things.
00:43:08.180
I think that AI may in fact be a surrogate, be a partner in kindling that reality.
00:43:15.540
That kind of flips people out, but look, Michael Jordan was a
00:43:27.200
And that was, in many ways, that was introspective.
00:43:39.640
And I think that there may be an opportunity to leverage that unique internal dialogue, not
00:43:46.640
monologue, with artificial intelligence and large language models to find new levels of
00:43:53.320
So it's, we are in the, we are in the abyss right now.
00:43:59.340
And I just think it's important that we recognize that it's very easy to defer thinking to the
00:44:05.560
And when you defer thinking to the machine, you're not just cognitively offloading.
00:44:10.640
It's not just that I'm letting it remember my wife's telephone number.
00:44:16.680
Now, one could argue that it makes it efficient, but I think that's a real risk and something
00:44:23.900
I mean, I worry about that disconnect and all the things that we do in our quiet time.
00:44:29.240
And, you know, even like, especially as a child, I think about the kids today, they don't have
00:44:34.960
any quiet time, generally speaking, because they're being fed something all the time.
00:44:40.940
And they're always having a screen put in front of them.
00:44:43.660
And it's like their play dates now with friends are just on screens together in the same room.
00:44:49.860
And, you know, when you think back, I mean, my life, you know, playing
00:44:54.080
with Barbies and playing outside and creating a world for these things and pretending you're,
00:44:59.640
you know, in an army fort and what happens, like, that's just not happening now.
00:45:04.740
And I think that if you already had those things and now you're an adult, I
00:45:10.420
mean, you stop creating as an adult also; you're still destroying everything
00:45:15.820
moving forward. Here's my completely unscientific and controversial take. I think that we all have
00:45:24.620
the capacity for this hyper experience, for this cognitive zone, right? Being in the zone,
00:45:31.920
experiencing the aha moment. When I ask you to draw a picture of a genius,
00:45:38.100
most of you will scribble Albert Einstein, or you'll write that name down because he's the
00:45:42.460
prototypical genius in many instances. And that perspective of the genius is a smart
00:45:50.760
man sitting in a room, getting the right answer all the time, right? That ain't
00:45:56.700
genius. In fact, if you look at Einstein, much of Einstein's early work, where he won the
00:46:01.340
Nobel Prize, the photoelectric effect, relativity, general relativity, those things happened in his twenties.
00:46:06.480
And for the rest of his life, he kind of languished in Princeton, another Jersey spot we're
00:46:10.840
going to mention, but he did have some interesting work. But I think we've all had moments
00:46:15.320
of enlightenment, moments of transcendence, moments of sort of an experiential element.
00:46:22.820
Now with kids, my contention is oftentimes kids find something that they're good at.
00:46:28.600
And it's not necessarily like good at math. It's good at knowing every car on the road. That's a
00:46:34.120
Chevy. That's a Tesla. You know, they have this savant-like capability. We should nurture that.
00:46:40.720
Because that savant-like capability is, in essence, the genius experience. And when you find that
00:46:47.100
genius experience, you discover the joy of thought. Remember, we started our conversation today on,
00:46:53.080
as you think, so you act, as you act, so you become. And the cognitive age. I think that that's
00:46:58.080
something that we see with kids today. We can nurture that ability to find that spark. And we're
00:47:04.480
developing that spark in new and interesting ways, because if my son wants to learn gravity from
00:47:11.400
Carl Sagan, I can do that. I can have an LLM create a Carl Sagan-like teacher.
00:47:21.920
And when that happens, I think we see very, very magical changes. These things are
00:47:28.420
tuned to the creative frequency of your brain. So I think there's a lot of opportunity, you know,
00:47:32.200
but look, you're going fast, you know, you're traveling at the speed of thought. And that's
00:47:38.900
problematic. There was a study in Africa that used an LLM to teach math to children. And because the
00:47:47.420
math was tuned to their frequency, let me back up. Can I keep going on this, or am I down the
00:47:54.900
dark rabbit hole here? Well, I want to make sure Marcella gets in, because so much has been said,
00:48:00.760
so she might have a question thus far. You remind me of Richard Feynman, when he talked
00:48:07.600
about learning, and how you can learn words, you can learn, this is the name of the
00:48:14.840
tree, the scientific name, but do you really know what it does? And yeah, what I like about AI
00:48:21.740
though, is that, like you said, it's a tool in a way that you can either just let it take you, or you
00:48:31.460
can drive it, where you can have Carl Sagan and all that. Spot on, a hundred percent, you know, but
00:48:40.820
here's the interesting thing. And let's go back to, like, the 30,000-foot view, another comment that I
00:48:45.500
always get in trouble for, but I'll say it. Knowledge is dead. Knowledge is dead. And people
00:48:53.180
look at me and say, what the heck are you talking about? Well, if I want to cook a souffle, okay,
00:48:58.940
I'm no chef, but if I want to cook a souffle, I go into our kitchen and we have that book,
00:49:04.020
Julia Child's Mastering the Art of French Cooking. A lot of, you know, people have it. They never use it
00:49:08.640
because it's too darn hard; they kind of got it as a gift or something. So
00:49:12.900
on page 172 is the recipe for souffle. And the first thing they tell me is make a sachet.
00:49:21.080
I don't even know what a sachet is. This was not for me. It was not written for me. Now today,
00:49:30.060
if I want to learn how to do a souffle, cook a souffle, I go to a large language model and I say,
00:49:36.180
I want you to teach me to cook a souffle as good as Julia Child's, but I want you to tell me how to do it
00:49:41.820
and make it funny. Use automotive analogies and write it for a man who's never cooked in his life.
00:49:51.860
So that comes down. It actually collapses a wave function. In physics, we talk about
00:49:58.820
that superposition. That knowledge, that thing of knowledge, how to cook a souffle, made automotive
00:50:04.740
funny for a guy who's never cooked, has never existed before. It exists nowhere, but it happens
00:50:11.120
to come down to your computer, uniquely to you. That's why knowledge in the traditional sense
00:50:17.640
is dead. Julia Child's book is a dust collector because today we could interpret that in the
00:50:25.260
context of my needs. It's user-centric or, more specifically, learner-centric. Well, let's go back to that crazy
00:50:34.680
teacher, your favorite teacher. She put you at the center. It was learner-centric. So large language
00:50:41.880
models can teach me the way I want to learn. So my girls had to study the Krebs cycle. The Krebs cycle
00:50:48.440
is a metabolic pathway that biology students always have to learn and it's a pain in the neck, but I had
00:50:54.740
ChatGPT write a poem about it, and it was memorable, and they learned it that way. So again, it's not just
00:51:04.560
an extension of what I know as a navigator, because I, as a navigator, don't know what fructose
00:51:11.680
1,6-bisphosphate is. ChatGPT does. And when it teaches me, it's in the context of poetry. Holy crap. It's
00:51:18.820
transformative. So if knowledge is dead, does that mean knowledge work is dead? And is AI going to be
00:51:29.260
taking all of our jobs? Brian in his recent post would tell you that he's developing the zero
00:51:35.020
employee company. So I think that, again, I'm going to hedge on this and I'm going to say, I
00:51:42.820
don't think so. I think that the blacksmith died in the industrial revolution,
00:51:50.180
right? When we changed to steel and cars, and that doesn't necessarily mean that
00:51:59.160
the knowledge worker is dead. I'll give you a couple of examples. When, I think it was
00:52:04.420
Mathew Brady, the guy who did the Civil War photography, the black and white Civil War photography,
00:52:11.020
I think Brady was his last name. Anyway, when photography emerged in the United States
00:52:16.920
and in prints and around the world, portraiture did not go away. It got bigger. It grew and
00:52:25.580
grew and grew. And it created this thing called selfies, a billion-dollar industry of selfies.
00:52:32.500
Similarly, when, I think it was Garry Kasparov, he played IBM's Deep Blue in chess and lost.
00:52:41.020
What happened to chess? Was chess finished? Did everybody just take their boards and go home
00:52:46.100
and go away? No. And that was over 20 years ago. Chess has never been more popular than it is today.
00:52:52.780
So my contention is that we don't cut the pie. We don't cut the pizza into smaller and
00:52:59.340
smaller pieces, leaving less for us; the pie grows. And when that pie grows, it develops new areas for
00:53:07.200
humanity. Now, you know, it's interesting, because innovation has always had
00:53:14.320
the backside of the coin. What's on the other side of the innovation coin? Obsolescence. When our phone
00:53:20.000
is obsolete, what do we do? We get a new one. When our car, washing machine, microwave, whatever it is,
00:53:26.960
when it breaks, generally we get a new one, because innovation and obsolescence go hand in glove,
00:53:32.720
two sides of the same coin. For the first time in history, human cognition itself is on the obsolescence
00:53:38.400
chopping block. That's what flips people out. But I think it's also a concern when you think
00:53:45.280
of just the range of IQs, right? Because there are certain people that have the cognitive ability
00:53:51.360
to maybe get to that high end that LLMs can't reach, and they can be useful and maybe amplified and, you
00:53:57.840
know, be 10 times more productive, but then there might be half the population that LLMs could just
00:54:04.400
replace, and you don't need them anymore. And there are no jobs left for them to do once you bring robots