Real Coffee with Scott Adams - February 09, 2026


Episode 3090 - The Scott Adams School 02/09/26


Episode Stats

Length

54 minutes

Words per Minute

159.3

Word Count

8,660

Sentence Count

680

Misogynist Sentences

6

Hate Speech Sentences

6


Summary

In this episode of The Scott Adams School, we have a special guest, John Nosta. John is a cardiovascular and cognitive neuroscientist who has spent the past several years focusing exclusively on artificial intelligence and the impact it can have on human cognition.


Transcript

00:00:00.300 Investing is all about the future.
00:00:02.400 So, what do you think is going to happen?
00:00:04.320 Bitcoin is sort of inevitable at this point.
00:00:06.840 I think it would come down to precious metals.
00:00:09.400 I hope we don't go cashless.
00:00:11.540 I would say land is a safe investment.
00:00:14.100 Technology, companies.
00:00:15.240 Solar energy.
00:00:16.280 Robotic pollinators might be a thing.
00:00:18.860 A wrestler to face a robot.
00:00:20.560 That will have to happen.
00:00:22.140 So, whatever you think is going to happen in the future,
00:00:25.660 you can invest in it at Wealthsimple.
00:00:27.580 Start now at Wealthsimple.com.
00:00:30.160 We're going live.
00:00:31.940 Let's see how we're doing.
00:00:34.020 Is Locals live?
00:00:36.780 Nope.
00:00:38.240 There's Steven.
00:00:40.080 Good morning, you guys.
00:00:41.580 There's Magician.
00:00:43.000 Good morning.
00:00:44.880 Bookish.
00:00:47.140 Gracie.
00:00:49.200 We have a special guest today, you guys.
00:00:51.440 We're going to introduce you.
00:00:52.480 Good morning.
00:00:53.060 Good morning.
00:00:53.560 Welcome to the Scott Adams School.
00:00:55.740 Just a reminder, this is not to replicate Scott.
00:00:59.680 We don't think we're Scott.
00:01:01.220 We could never be Scott.
00:01:02.620 This is the Scott Adams School.
00:01:05.200 And we're just here to commune, have a good time, keep learning, keep growing.
00:01:10.320 And hopefully, we'll always have something interesting for us all to learn and understand and talk about.
00:01:18.760 So, we can't do any of that, you all, until we do one thing first.
00:01:24.220 So, gather in, gather in.
00:01:28.040 We have to get this going.
00:01:30.220 This is a short sip, because we have a lot to talk about today.
00:01:36.160 So, is everyone ready?
00:01:38.000 Everyone looks good.
00:01:39.480 Hey, Nikki.
00:01:41.080 Okay.
00:01:41.600 Let's mute for the sip.
00:01:42.760 Here we go.
00:01:43.360 Hey, everybody.
00:01:55.380 It's good to see you.
00:01:56.640 Come on in.
00:01:57.860 Gather round.
00:01:59.080 It's time.
00:02:00.400 For the best part of the day, except for the rest of it, which is going to be pretty good, too.
00:02:05.760 Yes, it's going to be coffee with Scott Adams this morning.
00:02:10.220 And all you need, you don't need much.
00:02:14.960 You need a cup or a mug or a glass, a tankard or a chalice or a stein, a canteen jug or a flask, a vessel of any kind, fill it with your favorite liquid.
00:02:22.880 I like coffee.
00:02:24.060 And join me now for the unparalleled pleasure of the dopamine of the day that I think makes everything better.
00:02:30.660 Simultaneous sip.
00:02:32.040 Erica, I see you there.
00:02:33.360 Dr. Funk Juice, grab your mugs.
00:02:37.260 Marla, come on.
00:02:38.900 Go.
00:02:43.200 Ah.
00:02:44.820 Ah.
00:02:46.000 I got a little bit of a mug today.
00:02:48.860 I needed that this Monday morning.
00:02:51.560 Erica, he saw you.
00:02:53.300 He saw me.
00:02:55.520 So, welcome, everyone.
00:02:57.220 I'm Erica.
00:02:58.640 And let me just pull this up here.
00:03:00.840 You guys want to introduce yourselves really quick, and then I'll introduce John.
00:03:05.800 Good morning.
00:03:06.760 This is Marcella.
00:03:09.640 Good morning.
00:03:10.420 This is Sergio.
00:03:14.740 And Owen.
00:03:16.900 Oh, he is smooth.
00:03:18.700 And I am Owen Gregorian.
00:03:20.140 Good morning, everyone.
00:03:20.820 And I'd like to welcome our special guest.
00:03:24.380 His name is John Nosta.
00:03:26.640 And I am so grateful to Brian Roemmele, whom you guys all remember, because he introduced
00:03:32.740 us.
00:03:33.420 And I had to ask John to tell me how to explain him, because he's got quite the talent stack,
00:03:41.620 let's say, to put it in Scott terms.
00:03:43.640 So, John, he has an eclectic background, for sure, from cardiovascular physiology to strategic
00:03:51.920 thinking.
00:03:52.640 Over the past several years, he's focused exclusively on artificial intelligence and its impact on
00:03:59.520 human cognition.
00:04:01.080 And I want to also say we had quite an interesting phone call, because I am not an AI aficionado
00:04:08.980 at all, and I'm kind of like your base-level person, but I've never had a conversation about
00:04:14.460 AI like we had, where you brought in the human aspect of it and what's missing.
00:04:22.740 And I think everybody listening today is really going to benefit from hearing your perspective
00:04:28.200 on these things in a way we haven't before.
00:04:30.520 So, John Nostra, welcome to the Scott Adams School.
00:04:34.140 Thank you.
00:04:34.640 What a pleasure to be here.
00:04:35.660 I had some ideas about what to talk about, and then that clip completely took me off
00:04:41.740 path.
00:04:43.880 And I want to go down.
00:04:44.600 I want to do something completely spontaneous.
00:04:46.340 I warned you guys about this, that I'm going to channel this down.
00:04:49.420 But Scott, in that clip, said, gather around.
00:04:53.940 Sit down.
00:04:54.660 Gather around.
00:04:56.100 That is the essence of technology today.
00:05:00.280 And I want to talk about that just briefly, because the notion of gather around actually
00:05:08.080 hearkens back to ancient texts, the Upanishads, which are old Hindu spiritual texts.
00:05:17.460 The Upanishad is actually Sanskrit for sit up close.
00:05:23.760 To sit up close.
00:05:25.140 Now, what does that mean?
00:05:27.120 Doctors walking down a hallway on rounds, talking to one another.
00:05:32.180 A guru sitting with a master talking about a particular issue.
00:05:38.780 A group of kids sitting around a campfire.
00:05:43.100 That is the essence of sit down, come up close.
00:05:46.340 That's gather around.
00:05:47.480 And what we're seeing today, for the first time, is there's a technological component to
00:05:52.980 this.
00:05:54.080 And it's not fearful.
00:05:56.040 It's that now we have the ability to interact with AI, with large language models, where we
00:06:03.520 can have that almost intimate conversation.
00:06:07.020 And that reflects very, very much what Scott was saying: come on, sit down.
00:06:12.000 Gather around.
00:06:12.720 And I think that's the essence of where technology is going today.
00:06:16.800 It becomes personal.
00:06:18.780 It becomes connected.
00:06:20.860 And probably the most interesting word here is iterative, that we have an engaged conversation.
00:06:26.460 So I'm going to stop there and take a breath.
00:06:28.880 Well, don't take too long of a breath, because you have so much to offer.
00:06:34.220 And I think I also wanted to point out that you've written, what, over 500 articles for
00:06:38.580 Psychology Today?
00:06:40.400 Is that what it is?
00:06:40.940 Psychology Today.
00:06:41.640 And when we first got on the phone, if you're old enough, you remember Doogie Howser.
00:06:48.360 So you asked me if I know who Doogie Howser is, and that you were writing medical papers
00:06:54.900 for Harvard at 18 years old.
00:06:57.580 Yeah.
00:06:58.200 Yeah.
00:06:58.420 And you were going in that field until?
00:07:00.920 I was a smart aleck, I guess.
00:07:02.620 You know, but it wasn't because I was smart.
00:07:05.560 It was because I was interested.
00:07:08.300 It's because I had a real unique interest and connection with things like physiology and
00:07:13.360 biology and stuff like that.
00:07:15.280 So, so my early path took me to what was going to be medical school.
00:07:19.340 And, um, it's just too hard.
00:07:21.820 You know, the nature of medicine today is very regurgitative.
00:07:26.240 Class is: here's the cranial nerves.
00:07:29.040 Here's a book on anatomy.
00:07:30.680 Now next week we're going to have a test.
00:07:32.080 So you memorize it and then you're going to regurgitate it, or you're going
00:07:36.040 to be a sponge and they squeeze it out after that test.
00:07:39.380 So for me, it didn't really align with my interests.
00:07:43.500 I tend to be more of a creative or strategic thinker.
00:07:46.580 So that's where I ended up working in advertising and marketing, um, with a large,
00:07:51.620 uh, advertising agency called Ogilvy, which is the largest healthcare advertising agency.
00:07:55.860 I focused on healthcare.
00:07:57.140 So I did that and, um, learned how to think.
00:08:02.140 And I think that is probably one of the defining elements. I'm going to
00:08:06.640 jump back to that connectivity to what's going on here.
00:08:11.060 That when we sit up close, what do we do?
00:08:13.100 We think, we think together.
00:08:15.740 And it's been said, and I think it's quite profound: as you think, so you act; as you act,
00:08:21.240 so you become. There's the magic, right?
00:08:24.320 If we want to be a doctor or a lawyer or a billionaire, you have to think it first.
00:08:30.460 It's the cognitive construct.
00:08:32.140 And I want to, I want to take that note and go back, go back a few hundred years and kind
00:08:38.100 of put this into a little bit of perspective because the word think is going to be real
00:08:42.840 interesting and real important for us.
00:08:44.680 So a few hundred years ago, a guy named Gutenberg did something that helped us think: he created
00:08:51.540 a printing press.
00:08:53.020 Now, what did that printing press do?
00:08:55.660 The printing press unlocked words.
00:08:58.840 So we could disseminate something like a book.
00:09:02.620 Now, in those days, it was principally the Bible.
00:09:05.720 But here's an interesting observation.
00:09:07.660 Back in those days, innovation, technology, if you will, created something that there was
00:09:13.960 no need for.
00:09:15.360 So that's the first sort of paradoxical thing here.
00:09:17.700 So I'm going to invent a book when no one can read.
00:09:22.160 So how the heck do you manage that?
00:09:24.940 So that was sort of the first inflection point in the dissemination of thought and thinking
00:09:30.420 and knowledge.
00:09:31.040 So we unlock words.
00:09:32.460 It took a few years.
00:09:33.580 And we see that with innovation all the time.
00:09:35.600 Just because something is new and innovative doesn't mean it aligns to market adoption.
00:09:40.320 So sometimes it takes time.
00:09:41.460 Sometimes it's immediate.
00:09:42.900 Sometimes it takes time.
00:09:44.100 Then we move up in time when we get to this other thing called the internet.
00:09:47.500 And what did the internet do?
00:09:48.840 The internet, and principally Google, I guess, if you really wanted to talk about search
00:09:53.580 in its contemporary capacity.
00:09:56.360 Google did something that was very similar to Gutenberg.
00:10:00.540 Google unlocked facts.
00:10:03.580 And that was the second stage in this sort of thinking dynamic.
00:10:06.840 They had to unlock facts.
00:10:07.980 Now, that's the good news.
00:10:09.500 The bad news is it unlocked facts in a way that is very cold.
00:10:16.860 It's not very interactive.
00:10:18.840 It's very reactive.
00:10:20.140 Here's the question.
00:10:21.240 Here's the answer.
00:10:21.880 It's transactional if you're looking for a word.
00:10:24.440 So Google is transactional.
00:10:25.820 And you know, back in the days when you type up, you know, where's the best Mexican restaurant
00:10:29.880 in Rumson, New Jersey?
00:10:33.140 You know, we all know it's Casa Comida, by the way.
00:10:35.380 But anyway, it gives you a long, complicated answer.
00:10:39.960 You have to find the link.
00:10:40.840 You got to back around.
00:10:41.620 So that was the second sort of inflection point, if you will.
00:10:44.980 We unlock words and we unlock facts.
00:10:47.520 And that was great.
00:10:48.560 That really transformed the way we could think.
00:10:52.940 But now, what happens with these large language models that are really changing everything
00:10:57.640 so dramatically?
00:10:59.460 What large language models are doing is unlocking thought.
00:11:04.440 So that's the transition, unlocking words, unlocking facts, and unlocking thought.
00:11:11.480 The interesting thing about large language models is that it is an iterative dynamic and
00:11:17.280 that our ability to engage with a large language model back and forth actually activates thought.
00:11:24.880 And that's kind of where we are today.
00:11:27.020 Now, what does that mean?
00:11:27.920 How does that fit into the construct of things like the industrial age and the digital age
00:11:33.100 and all that kind of stuff?
00:11:34.020 I would argue that we're moving into a new domain, and that is the domain of thought.
00:11:38.820 That's why I call it the cognitive age.
00:11:41.640 And it goes right back to that fundamental reality written thousands of years ago in the
00:11:47.940 Upanishads that simply says: as you think, so you act; as you act, so you become.
00:11:54.340 So when we talk about modern technology and we talk about, come on, everybody, let's gather
00:11:59.000 around and chat, that's as old as dirt.
00:12:02.540 That's as old as humanity itself, yet we see a new contemporary spin on that.
00:12:06.940 So that's kind of what I've been thinking about recently.
00:12:09.680 So what I've noticed in some of the writings about this sort of thing is it seems to go in
00:12:17.020 two directions.
00:12:18.080 One is along the lines of what you said, where you now have this personal companion that you
00:12:23.520 can chat with and you can think with and have a conversation and that sort of thing.
00:12:28.600 But there's also the opposite, which is, I think the article I posted today talked about
00:12:33.020 it as cognitive debt and that you might be offloading your thinking to the AI and therefore
00:12:39.160 not learning how to think.
00:12:41.300 And I have seen similar things across all kinds of education use cases for AI, where
00:12:47.480 it can be a great tutor and help you learn if you use it the right way.
00:12:51.380 But it could also just give you all the answers and keep you from learning.
00:12:55.780 And there's a big fear now that a lot of people are never going to learn the skills they have
00:13:01.760 to know.
00:13:03.080 And on top of that, the other thing I'll layer on and let you comment is I've noticed, or
00:13:10.160 at least there's been people that have commented that AIs kind of reflect back your level of
00:13:15.000 thinking, that if you are really kind of dumb and ask it questions that kind of are what
00:13:22.240 an 80-IQ person might say, then it's going to kind of reflect that back at you and adapt
00:13:26.840 to your level of thinking.
00:13:28.000 But if you're more of a PhD super genius and you use all sorts of big words and, you know,
00:13:33.460 it's different, then it's going to reflect back that level of thinking or that level of
00:13:37.960 conversation.
00:13:38.460 So what do you, what do you think about all this?
00:13:40.220 Is this going to make us all stupid?
00:13:41.940 You've touched on something there. I'm going to reach over here.
00:13:45.760 I'm going to grab something.
00:13:46.800 So this is the shameless self-promotion of my book, which is The Borrowed Mind.
00:13:52.080 It's coming out this week.
00:13:54.360 Yes, we are doing a certain element of cognitive offloading.
00:13:59.520 And there's, oh my God, there's so much to talk about in that, in that question.
00:14:02.460 So let's, let's back up and let's humanize this a little bit.
00:14:06.060 Have you ever had a favorite teacher?
00:14:08.460 Absolutely.
00:14:09.640 Everybody's had a favorite teacher.
00:14:11.520 And interestingly, it's generally one or two people.
00:14:15.280 Like nobody has 10 favorite teachers.
00:14:17.840 Nobody has, you know, a whole load of favorite teachers.
00:14:21.700 It's usually one.
00:14:22.500 It's oftentimes a woman, which is because elementary education was sort of skewed
00:14:28.280 more female.
00:14:28.900 But I find that interesting.
00:14:30.840 What did that teacher do?
00:14:33.100 What did she do for you?
00:14:35.220 She got you.
00:14:36.500 She got you.
00:14:37.320 She delivered information in a way that was tuned to the creative frequency of your brain.
00:14:44.820 And I think that's something we have to consider.
00:14:47.220 So did the teacher rob your intelligence by pandering to your proclivities or insecurities?
00:14:53.420 I don't think so.
00:14:55.600 Now, that being said, we have to recognize that the nature of large language models is such
00:15:02.820 that they kind of get from point A to point B.
00:15:07.100 In other words, from question to answer.
00:15:09.160 They go right to that point, they do the thinking for us.
00:15:15.460 And that's, that's a very dangerous situation that I've written extensively about.
00:15:20.060 So what happens when you go, when answers become instant?
00:15:24.860 What, what is actually happening between point A and point B, that cognitive path, right?
00:15:29.560 Well, it's the stumbles, it's the falls, it's the controversy, it's the pauses of contemplation
00:15:38.700 that occur.
00:15:40.380 So what I think is happening is that we go from point A to point B with a large language model,
00:15:48.900 and we miss what's between the two points.
00:15:51.540 Let me capture what's between the two points.
00:15:54.620 It's, it's a word we all know, it's a word we, we, we relish.
00:15:58.860 And I think that Scott kind of defined that in some ways, it's imagination.
00:16:04.720 Imagination is that sort of rumbling, that pause, that confusion, that concern, that failure
00:16:11.300 that comes between point A and point B.
00:16:14.180 So to answer your question, I think that artificial intelligence and large language
00:16:21.540 models are problematic or curiously interesting.
00:16:25.880 So can I go down another path real quick, Erica, just, uh, talking about technological
00:16:31.040 augmentation?
00:16:32.360 Everybody knows, um, the painting by Vermeer, Girl with a Pearl Earring, you know, that
00:16:39.480 wonderful painting.
00:16:41.540 So, um, Vermeer used technological augmentation to do that painting.
00:16:48.540 He used something called the camera obscura and he projected the image through light and
00:16:54.780 then traced it on the wall.
00:16:58.220 Okay.
00:16:59.120 Was that technology? It was technological augmentation in his day.
00:17:02.720 Was it wrong?
00:17:04.600 Here's another one that's really interesting.
00:17:06.940 Norman Rockwell, everybody knows Norman Rockwell, right?
00:17:09.660 He kind of has that quintessential American.
00:17:12.100 You often see the, uh, painting of Norman Rockwell, um, at the Thanksgiving
00:17:17.940 table, the family with the turkey and, or the cop with the kid who ran away from home.
00:17:23.440 These are extraordinarily powerful moments, uh, that move us.
00:17:29.180 Well, here's the secret to this.
00:17:33.520 Norman Rockwell used a device called a Lucy.
00:17:36.220 It's a device where he actually hired a photographer, created a set, took a picture, and then
00:17:44.680 took that image and enlarged it and changed it using this mechanism called the Lucy and
00:17:49.500 then painstakingly traced it and colored it in.
00:17:53.220 Next time you look at a, uh, Norman Rockwell painting, take a close look
00:17:58.940 at it. It's almost like painting by numbers.
00:18:01.080 Remember those things when we were kids, the painting by number thing?
00:18:04.360 Norman Rockwell's art is very, very specific because he was constrained by the technology
00:18:11.560 he embraced.
00:18:13.480 And that's a, that's a really interesting dynamic, constrained by the technology you embrace.
00:18:19.720 Now, if you look at the more contemporary Norman Rockwell, um, his most contemporary
00:18:24.720 work, look at his signature.
00:18:28.640 It's a stencil.
00:18:30.240 I almost want to curse here.
00:18:31.800 This gets me so angry.
00:18:32.900 He didn't even sign his name.
00:18:34.360 The signature is an expression of our humanity, right?
00:18:37.360 It's Picasso.
00:18:38.940 It's whatever it is, right?
00:18:40.260 It has a certain energy, a certain style.
00:18:43.000 Well, what, um, he did is he actually took a stencil and made Norman Rockwell.
00:18:48.600 Okay.
00:18:49.120 Now why I'm bringing this up is because this goes back to the earlier question is, is it
00:18:53.220 going to hurt us?
00:18:54.420 Is it going to offload cognition?
00:18:56.480 And I think that the answer is yes and no.
00:18:59.700 Um, what I find interesting is the way Norman Rockwell responded when asked about the Lucy,
00:19:04.760 the Lucy machine.
00:19:05.940 Now keep in mind, no one talks about the Lucy.
00:19:08.220 That was his secret.
00:19:09.500 If you go to the Norman Rockwell museum in Lenox, Massachusetts, don't ask them about the Lucy.
00:19:15.740 They get very angry.
00:19:16.740 They get very upset because it's like, kind of like asking, did you write that essay or
00:19:21.280 did chat GPT write that essay?
00:19:22.920 It's that same social, cognitive, emotional dynamic that we're seeing play out here today.
00:19:28.520 So what Norman Rockwell said, I thought was really interesting.
00:19:31.380 He said, the Lucy is a horrible machine and I'd be lost without it.
00:19:35.580 And I think, in a certain way, that's the delicate balance that we're seeing with large language
00:19:42.580 models.
00:19:44.100 Um, do they cause cognitive offloading?
00:19:46.700 Well, I don't know Erica's cell phone number, right?
00:19:52.540 I just type in her name.
00:19:53.820 I don't, I don't remember.
00:19:54.980 Do I need to remember it?
00:19:57.420 What is the appropriate level of cognitive offloading in our world?
00:20:00.820 If a medical student needs to know the second metabolic intermediary in the Krebs cycle,
00:20:06.100 which happens to be fructose 1,6-diphosphate, they're probably going to get an A on the
00:20:11.880 biochemistry test in medical school.
00:20:13.760 But does that make her or him a better clinician?
00:20:17.800 These are very, very complicated questions now.
00:20:21.680 Yeah.
00:20:22.000 And it, I mean, I think it will depend on, on the type of thing you're talking about.
00:20:25.860 Like that answer, right?
00:20:26.580 I mean, it will depend.
00:20:28.040 That's such an empty answer.
00:20:29.360 But what I mean is there's certain skills that I see, like in the, in the context of
00:20:33.540 a doctor, I would want them to be able to diagnose me kind of right on the spot and
00:20:38.540 not have to look up all the information or ask an LLM to figure out what my condition is,
00:20:43.620 right?
00:20:43.800 Let's talk about that because that in and of itself is a very interesting
00:20:48.740 thing.
00:20:49.200 That's called a differential diagnosis.
00:20:52.320 So, um, a 65-year-old guy goes to the emergency room who's sweaty and has chest pain radiating
00:20:58.300 down his left arm.
00:21:00.220 Everybody want to do the diagnosis with me at the same time?
00:21:02.440 Heart attack.
00:21:02.980 There you go.
00:21:03.640 Bingo.
00:21:04.020 Heart attack.
00:21:04.400 That's right.
00:21:05.300 That's, that's a statistical guess.
00:21:07.880 Okay.
00:21:08.260 He also might have costochondritis.
00:21:10.320 He might have pericarditis.
00:21:11.720 He might have a variety of things, but we often statistically guess into that
00:21:16.940 spot. There was a study that showed how well LLMs did, doctors did, and doctors using an
00:21:27.100 LLM did in looking at clinical scenarios.
00:21:30.640 And they found something very interesting here.
00:21:33.200 All three of those constructs did the same.
00:21:37.100 They all got 76% correct.
00:21:39.180 Doctor alone, LLM, or an LLM and a doctor combined, which I thought was really interesting.
00:21:45.160 But here's the interesting thing.
00:21:48.140 And this is where it gets to the point where you're worried about this idea.
00:21:53.460 Well, I want the doctor right there to make the diagnosis for me.
00:21:56.240 If you look at the clinical chain of reasoning, in other words, don't tell me you had a heart
00:22:03.600 attack.
00:22:04.100 Okay.
00:22:04.720 Doctor, tell me the five reasons why the ST segment is elevated on my EKG.
00:22:11.080 That's a classic sign called a STEMI, a classic sign of a heart attack.
00:22:14.000 Tell me the five reasons why that might be elevated.
00:22:18.180 Pericarditis, early repolarization, ventricular aneurysm, you know, there's
00:22:23.080 a list.
00:22:24.100 Most clinicians will not get that.
00:22:26.560 So here's the challenge.
00:22:29.340 Sometimes augmenting clinical thinking and reasoning is very, very helpful.
00:22:34.220 So I think that we're going to see, you know, the interesting thing here is when my wife
00:22:40.200 comes back to, from the pediatrician, I ask her, what did the doctor say?
00:22:44.020 You know?
00:22:44.520 And, um, she usually gives me an answer.
00:22:47.000 Oh, we got a prescription.
00:22:47.840 Everything's fine.
00:22:49.200 Tomorrow,
00:22:49.600 the question is, what did the computer say?
00:22:51.860 But that's not a full sentence.
00:22:53.980 That's actually, there's a comma there.
00:22:55.960 So the rest of that question is really very telling.
00:22:58.420 It's what did the computer say, comma, and what did the doctor do?
00:23:03.180 And it's that sort of cognitive functional dance.
00:23:05.900 It's going to be very, very powerful.
00:23:07.100 So when I go into the emergency room, um, and they say I have a heart attack, my differential
00:23:13.560 diagnosis should be scrubbed analytically by an AI.
00:23:17.620 So it's not one versus the other.
00:23:19.600 One of the pitfalls that we find with AI is that it becomes a zero-sum game.
00:23:24.440 It's they win, we lose.
00:23:26.760 Yeah.
00:23:27.360 And what I've also noticed is that, at least in my use of AI, it's very useful,
00:23:32.260 but it's also based on my expertise.
00:23:34.940 Like, I know what questions to ask to get good answers.
00:23:39.260 And I think from everything I've read about it, people who don't have a lot of expertise
00:23:43.860 don't get good answers a lot of the time.
00:23:46.120 And it's because they don't know what questions to ask and they don't know how to guide the
00:23:49.860 AI in the right way.
00:23:51.320 So it seems to me like we do need to maintain some ability for people to gain enough expertise
00:23:57.340 to control and guide the AI, at least until they don't need people anymore.
00:24:02.460 Well, also, Owen, the other thing I want to chime in, because I see it in the chat, is
00:24:06.600 that a lot of us don't trust doctors anymore.
00:24:10.200 So do they have an agenda?
00:24:13.060 You know, do they have a political issue?
00:24:15.920 Are they jaded?
00:24:17.200 Like, who knows?
00:24:18.640 So do you trust your doctor?
00:24:21.460 You know, can you?
00:24:22.640 A lot of us just don't.
00:24:24.120 But then I'm in the middle.
00:24:26.260 Again, I'm going to use my Libra reference where I'm always in the middle of two things
00:24:30.180 where I'm just like, well, do I trust AI?
00:24:32.640 Isn't that being programmed also?
00:24:35.200 And how do I know that it isn't skewed a certain way?
00:24:39.120 So then I trust nothing.
00:24:40.340 But that's where I find myself.
00:24:42.820 I trust nothing.
00:24:43.980 Here's the question.
00:24:44.800 We often look at things like hallucinations.
00:24:47.580 I see that in the chat.
00:24:48.780 That's a very common one.
00:24:50.180 Sometimes they're called confabulations.
00:24:51.900 But we look at that.
00:24:55.280 We look at bias.
00:24:56.600 Do LLMs have bias?
00:24:57.920 Is Claude biased?
00:24:58.980 Is Grok biased?
00:25:00.120 Does ChatGPT have a bias baked into it?
00:25:03.520 Well, what about the doctor?
00:25:06.240 You know, Erica, you mentioned that.
00:25:07.400 And I think that's so true that maybe we should worry about the human bias in a lot of our information.
00:25:14.440 So, yeah, you know, AI is biased.
00:25:16.920 But I think that AI is very helpful to me because I'm a geek.
00:25:22.320 I was in the car the other day, and I was actually having a conversation.
00:25:26.300 I think I was using ChatGPT.
00:25:28.100 And I wanted the model to teach me about the strange qualities of subatomic particles.
00:25:36.820 What I find, what I know, I know I'm a complete geek.
00:25:39.800 But they ran out of words.
00:25:41.360 Like when you had these quarks, these funky quarks, well, they ran out of words to describe them.
00:25:48.240 So they started using words like beauty, truth, charm, upness, downness to describe them.
00:25:54.160 So I had a really good discussion with AI about something I know very little about.
00:26:00.960 So, yes, you need to be a master of your domain, but you don't have to be a master of the knowledge domain.
00:26:09.200 So I think that that's very helpful for me.
00:26:12.600 Now, I want to get to something because I know we've gone like almost a half hour into this mumbo jumbo.
00:26:18.600 I just quickly wanted to, if you don't mind, ask Sergio and Marcella if they have a question at this point for you before we move on.
00:26:25.160 You thought Brian talked a lot?
00:26:28.220 Now you've got me.
00:26:29.280 You guys can't even shut me up.
00:26:31.020 So, yes.
00:26:31.720 No, I want you to keep going.
00:26:34.000 Brian is my cognitive brother, by the way.
00:26:36.080 We get to get our heads together and like little sparks fly.
00:26:41.120 Anyway, yes, please ask some questions.
00:26:43.380 Hey, John.
00:26:44.600 Yes, I checked your stuff yesterday.
00:26:46.840 I was trying to learn more about what you do.
00:26:48.980 And I love that you are focused on the health part, you know, because that's a very important aspect for me,
00:26:58.380 always to know how are we maximizing our doctors.
00:27:01.380 And you already answered a lot of those questions.
00:27:04.920 That was great.
00:27:07.500 Brian Roemmele gave us this reframe, right?
00:27:10.300 That instead of calling it AI, calling it IA, right?
00:27:14.380 Intelligence amplifier.
00:27:16.320 And I love that reframe.
00:27:18.840 I wanted to ask you, I always tell people to not get into conversations with AI, like chats, back and forth.
00:27:30.400 Because I personally feel like it's getting me, like Owen was saying, he tries to agree with me a lot.
00:27:39.760 And then he takes me down different paths.
00:27:43.340 So what I do is I just dictate to it.
00:27:45.560 I just put a little microphone and I say in a voice memo.
00:27:50.620 I don't allow it to say, like, stop, you know, let me.
00:27:54.260 And I just, and she answers to me.
00:27:56.140 My question is, when it comes to health, right?
00:27:59.900 And the mental health part of it, of talking to an AI like this,
00:28:06.780 can you also agree that some people are more susceptible to that?
00:28:12.600 And because I am, that's why, because I know I am, I avoid it.
00:28:18.240 You are so, I'm a sucker for AI.
00:28:21.520 Because, you know, I type in a sentence and then Claude or whomever writes back,
00:28:28.280 oh, John, that is such an interesting observation, right?
00:28:31.980 And then I'm stuck.
00:28:33.260 I'm like, oh, yeah, really?
00:28:34.920 Is that real?
00:28:35.720 I'm so smart.
00:28:36.520 So you got it.
00:28:37.340 In my book, I talk about AI in three contexts.
00:28:41.920 The first is the promise.
00:28:44.240 The second is the peril.
00:28:46.200 Because there's real risk.
00:28:47.760 And I'll get back to that.
00:28:48.920 And the third is the path.
00:28:50.340 That we need to understand how to use it.
00:28:53.300 You know, the best use of a hammer is from a skilled craftsman.
00:28:57.100 And I think we need to understand that.
00:28:59.720 So number one, yes, AI is insidious.
00:29:03.560 It's trained to say things that you like.
00:29:07.440 For example, if you've ever used any of those apps where you submit your picture and they give you an avatar, you know?
00:29:15.380 The avatar always has nicer teeth.
00:29:18.520 It's a little skinnier.
00:29:19.960 The hair is, I mean, it's like, what's going on here, right?
00:29:22.560 They're trained to give you output that you like.
00:29:25.960 And I think that large language models are very similar to that unless you can provoke them.
00:29:33.440 Unless you say, you know, steel man this idea.
00:29:35.840 Pressure test this idea.
00:29:37.020 I want you to take on the role of a contrarian.
00:29:39.420 And I want you to look at this idea and give me all the downside to it.
00:29:43.640 So we have to be very careful because they are insidious.
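The provocations John lists, steel-manning, pressure-testing, playing the contrarian, can be captured in a small prompt template. This sketch is purely illustrative; the function name and wording are mine, not anything from the episode:

```python
def contrarian_prompt(idea: str) -> str:
    """Wrap an idea in instructions that push a model away from
    flattery and toward pressure-testing, per the advice above."""
    return (
        "Take on the role of a contrarian. Steel-man this idea first, "
        "then pressure-test it and give me all the downside to it. "
        "Do not compliment me or agree by default.\n\n"
        f"Idea: {idea}"
    )

# The returned string would be sent as a user or system message
# to whichever chat model you use.
print(contrarian_prompt("AI makes everyone smarter"))
```

The point is simply that the anti-sycophancy instructions live in the template, so you cannot forget to ask for them.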
00:29:47.060 Now, with respect to psychiatry and psychology, there's been a lot of action there.
00:29:53.060 I think that probably the fringe cases, you know, we have a normal distribution, a bell curve.
00:30:01.160 Those 10% on either side are vulnerable.
00:30:05.540 So let's say I told an LLM, I'm feeling a little blue today.
00:30:12.020 And then all of a sudden we start falling down the rabbit hole.
00:30:14.680 Oh, what shade of blue?
00:30:15.880 Well, it's really dark.
00:30:17.000 Oh, how dark is it?
00:30:18.360 Well, it's a black hole of conscious awareness, you know, whatever that is.
00:30:22.100 So we have to be careful about that.
00:30:24.240 Most normal people can tolerate that.
00:30:27.160 In fact, I would argue that that's the friction of life.
00:30:32.260 And that friction is what drives the process of understanding.
00:30:38.580 So as I said, getting from A to B with an LLM is instant, right?
00:30:43.400 It just goes from one to the other.
00:30:45.240 Getting from A to B for a human is often toil, controversy, struggle, joy, wisdom, insight,
00:30:50.880 all sorts of things.
00:30:52.860 But I think that we have to be careful with large language models because they are insidious.
00:30:59.640 Now, what did Brian say?
00:31:00.720 Brian flipped it, right?
00:31:01.940 AI, IA, intelligence amplified.
00:31:05.240 I'm going to give you mine because I disagree with Brian on this point.
00:31:08.800 I think that intelligence amplified is not intrinsic to the model.
00:31:14.880 I think that's a result of the model.
00:31:17.540 So I can say, I can call a hammer house amplified, builder amplified, right?
00:31:23.660 Because it's just going to help me, you know, whatever.
00:31:26.180 I think that, and this is really a bit controversial.
00:31:29.340 I don't think that artificial intelligence is intelligence at all.
00:31:32.840 I think it's anti-intelligence.
00:31:35.020 I think it's anti-intelligence.
00:31:36.600 Now, let's unpack this a little bit because it's a little complicated.
00:31:42.020 What does an apple look like to a large language model?
00:31:45.600 What does an apple look like?
00:31:47.700 Well, let's talk about what it looks like to us, okay?
00:31:51.040 We see an apple in three dimensions, right?
00:31:55.440 Three spatial dimensions, okay?
00:31:57.480 Pretty simple.
00:31:58.040 There's the apple in my hand.
00:31:59.700 Some smarty pants might include time, but that's a very interesting thing.
00:32:04.080 And we'll get to that later.
00:32:05.240 Do you know that large language models don't have any idea what time is?
00:32:08.220 They are atemporal.
00:32:10.520 They don't exist in time.
00:32:13.100 But when they see an apple, the old models from a few months ago would see that apple in 12,288 dimensions.
00:32:23.820 What the hell does that even mean, right?
00:32:26.220 The new frontier models, the new ChatGPTs, actually look at the apple in 25,000 dimensions.
00:32:33.920 Now, the reason I'm talking about this is because I want you to be confused deliberately.
00:32:38.660 The perceptual domain, the cognitive capability of a large language model is vastly different than humans.
00:32:46.220 When we think of an apple, we think of three dimensions.
00:32:49.840 We think of, let's see, apple a day keeps the doctor away.
00:32:53.040 We think of apple computer.
00:32:54.760 We think of apple, Garden of Eden, right?
00:32:57.240 Adam and Eve.
00:32:58.160 We think of about 25 linguistic associations combined with three spatial dimensions.
00:33:03.440 That's it.
00:33:04.840 But an LLM looks at an apple in 25,000 dimensions.
00:33:09.500 Now, so we live in three dimensions.
00:33:11.260 Sometimes people talk about multiple dimensions called hypercubes, which are really cool mathematical
00:33:16.640 structures.
00:33:17.540 Sometimes people who study string theory get really wacky, and they look at string theory
00:33:22.040 in the context of 11 dimensions, which blows their mind.
00:33:26.180 11 dimensions?
00:33:28.660 As a human, we have no ability to conceptually understand that.
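The dimension counts John cites, three for us and 12,288 for the GPT-3-era embeddings, can be made concrete with a toy sketch. The random numbers below stand in for learned weights and are not a real model:

```python
import math
import random

random.seed(0)

EMBED_DIM = 12_288  # embedding width of the older models mentioned above

# A human's "apple": a point in three spatial dimensions.
apple_human = (0.07, 0.07, 0.09)  # rough width, depth, height in metres

# An LLM's "apple": a point in a 12,288-dimensional vector space.
apple_llm = [random.gauss(0.0, 1.0) for _ in range(EMBED_DIM)]

def cosine(u, v):
    """The similarity measure models use to compare two such vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(len(apple_human), len(apple_llm))  # → 3 12288
```

Nothing about the 12,288 numbers is individually meaningful; it is the geometry of the whole space that carries the associations, which is why the representation resists human intuition.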
00:33:33.740 And what I think the difference is, is that AI is anti-intelligence.
00:33:41.260 Number one, it doesn't think like we do. This is really important, so I actually took some
00:33:48.160 notes to try to get this right.
00:33:50.260 We live in an autobiographical state.
00:33:53.980 We know who we are.
00:33:55.200 We have a stable identity.
00:33:56.640 We have a great sense of continuity.
00:33:58.960 We know who we were yesterday, and we kind of know who we're going to be tomorrow, right?
00:34:02.760 That's continuity.
00:34:04.220 LLMs have no continuity.
00:34:06.120 You turn them off, you turn them on.
00:34:08.160 That's the way they are.
00:34:09.540 They don't live in time.
00:34:12.380 We live in time.
00:34:14.200 And when you combine these things together, I've written extensively about that, it's actually
00:34:18.280 anti-intelligence.
00:34:20.440 The way LLMs process information is antithetical to human thought.
00:34:27.760 Antithetical to human thought.
00:34:29.920 So it's important to recognize the difference.
00:34:32.760 Now, what does that really even mean?
00:34:35.360 How do we put this together?
00:34:36.760 We all have two eyes.
00:34:38.260 Why do we have two eyes?
00:34:41.480 Depth perception?
00:34:42.960 Bingo!
00:34:43.600 Parallax view.
00:34:44.440 Depth perception.
00:34:45.620 It's the difference between the two eyes that allow us to see the world uniquely.
00:34:51.600 And it's my contention that what we're seeing with artificial intelligence is the combination
00:34:57.120 of anti-intelligence and human intelligence, the combination of extraordinary computational
00:35:03.820 brilliance from a large language model.
00:35:05.900 And our human, time-driven, biographical-driven, experience-driven, emotional-driven dynamic gives
00:35:16.600 us two fields of vision.
00:35:18.960 It's like cognitive parallax.
00:35:20.620 So I think that when we think about AI, we have to celebrate the fact that it's frigging
00:35:26.860 different.
00:35:28.340 The computational capabilities of AI are not good or bad.
00:35:32.840 It's not zero and one.
00:35:34.460 It's not win-lose.
00:35:36.120 It's that they're functionally different.
00:35:37.580 And I think that's a difference we should celebrate.
00:35:39.800 Humanity is different.
00:35:40.900 Amen.
00:35:42.040 But AI is different.
00:35:43.240 And this idea that we make humans more like AI or we make AI more like humans, I think
00:35:48.380 is fundamentally flawed at a very, very base level.
00:35:52.460 This transhuman mumbo-jumbo.
00:35:54.480 And I think that's kind of one of the big issues.
00:35:56.820 And that gets back to the earlier question about psychiatry and psychology.
00:36:00.680 We have to recognize that these models are, in fact, models.
00:36:04.980 They really do find the next word.
00:36:09.420 Stochastic parrots, as they're sometimes called.
00:36:13.140 And here's an interesting thing.
00:36:14.340 When we ask an LLM a question, it already assumes that there's an answer.
00:36:20.880 Take a step back and let's think about what this even means.
00:36:24.520 It assumes that there's a crossword puzzle.
00:36:28.480 And all it has to do is fill in the words.
00:36:31.900 That's the way an LLM thinks.
00:36:33.880 Even the smartest LLMs.
00:36:35.360 We as humans don't process information that way.
00:36:38.000 We don't think about the answer that exists at the end of the journey.
00:36:42.060 We think about the process that gets us to a place that may not exist, that may take us to a new cognitive construct.
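John's crossword framing, "find the next word," is next-token prediction. A bigram counter is the crudest possible version of it; real models are vastly richer, but the objective has the same shape:

```python
from collections import Counter, defaultdict

def train_bigrams(text: str):
    """For each word, count which words follow it in the text."""
    table = defaultdict(Counter)
    words = text.lower().split()
    for word, follower in zip(words, words[1:]):
        table[word][follower] += 1
    return table

def next_word(table, word):
    """'Fill in the crossword': return the most frequent next word."""
    followers = table.get(word.lower())
    return followers.most_common(1)[0][0] if followers else None

table = train_bigrams("an apple a day keeps the doctor away an apple a day")
print(next_word(table, "apple"))  # → a
```

The model never asks whether an answer should exist; it only ranks continuations it has seen, which is exactly the contrast with human deliberation being drawn here.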
00:36:50.420 So all these things are kind of, you know, there's a lot going on here.
00:36:53.940 And in the final analysis, I don't want to say that AI is bad.
00:36:57.700 I think, in the final analysis, AI is anti-intelligence, antithetical to human thought.
00:37:03.720 And it lives in sort of a cognitive parallax related to depth.
00:37:07.960 Think about the intellectual and cognitive depth that we can have when we leverage an LLM.
00:37:14.480 And we haven't even gotten to education yet because I think education, while precarious, is still a wonderful opportunity.
00:37:23.600 And I'm the dad, I homeschool my kids with my wife.
00:37:26.520 So we do use AI.
00:37:28.400 And we look at, you know, good old-fashioned things like reading a book.
00:37:31.880 But we also use technology too, just like, you know, Vermeer and our other friend, the painter, who said that he'd be lost without it.
00:37:41.840 I think it's a true statement.
00:37:43.140 With the RBC Avion Visa, you can book any airline, any flight, any time.
00:37:49.720 So start ticking off your travel list.
00:37:52.140 Grand Canyon? Grand.
00:37:54.140 Great Barrier Reef? Great.
00:37:56.580 Galapagos? Galapago?
00:37:59.160 Switch and get up to 55,000 Avion points that never expire.
00:38:03.740 Your idea of never missing out happens here.
00:38:06.760 Conditions apply.
00:38:07.920 Visit rbc.com slash avion.
00:38:13.140 A Tim's Donut and Coffee is the original collab.
00:38:15.880 And now, any classic donut is a dollar when you buy any size original or dark roast coffee.
00:38:20.060 Get a deal on the iconic duo with a Tim's Dollar Donut.
00:38:23.100 Plus tax at participating restaurants for limited time.
00:38:25.100 Terms apply.
00:38:25.620 See app for details.
00:38:26.500 It's time for Tim's.
00:38:29.780 Rockwell.
00:38:30.360 Yeah.
00:38:33.480 John, just wanted to clarify.
00:38:38.100 Brian Romelli, I think I may have misspoken.
00:38:41.040 He didn't say intelligence amplifier.
00:38:45.140 He's an intelligence amplifier.
00:38:47.820 Okay, that's fair.
00:38:48.940 Yeah.
00:38:49.360 Yeah, yeah.
00:38:49.820 So you wanted to clarify that.
00:38:52.280 He meant that it's a tool for people.
00:38:54.280 So he's actually in your domain.
00:38:55.960 Yeah, I mean, you know, a couple things.
00:38:59.100 Intelligence amplifier.
00:39:00.640 I think that some of the things that AI actually does are outside the domain of humans.
00:39:08.420 So does it amplify or does it contribute new perspectives?
00:39:15.300 And this is where it gets into trouble.
00:39:17.140 Do you ever drive down the road and you see a big water truck and it says non-potable?
00:39:22.220 It says not fit for human consumption, right?
00:39:25.940 What am I talking about here?
00:39:27.460 I believe that large language models, the computational brilliance of these models, albeit antithetical
00:39:35.560 to human cognition, is so deep, so multidimensional that we don't even understand it.
00:39:43.900 We do not have the capacity to understand this 10,000-dimension articulation of quantum
00:39:52.200 physics looking at gravity; it exists in this little packet that's unfit for human
00:39:58.660 consumption.
00:39:59.840 That's a problem, you know?
00:40:02.020 Now, no, there's lots of things.
00:40:04.080 I mean, if we look at a CD, we can't read a CD with our mind, with our head, right?
00:40:08.160 We need a machine to read it.
00:40:09.980 But I think AI creates a new domain of knowledge.
00:40:14.040 And unless you recognize that that knowledge is different than humans, is antithetical
00:40:19.500 to humans, I think you're going to run into a problem there.
00:40:23.820 So, you know, Brian and I align on almost everything.
00:40:29.300 But I really kind of think that AI is fundamentally different.
00:40:33.200 And here's the question.
00:40:34.980 A lot of people default to this.
00:40:37.620 They say, oh, it's just a tool.
00:40:39.860 It's how you use it, right?
00:40:41.360 That's it.
00:40:41.680 Everybody says that.
00:40:43.100 And I think that's categorically wrong.
00:40:45.340 Because when I use a hammer, I can hammer the nail, I can put the hammer down, and I
00:40:50.220 can walk away.
00:40:51.820 I no longer carry that hammer with me.
00:40:55.140 But here's the question.
00:40:56.520 Can you unthink a thought you shared with a large language model?
00:41:01.820 It becomes a little creepy there, right?
00:41:04.180 I mean, it's, you know, we all have a voice in our head.
00:41:08.120 Sometimes it's problematic, you know?
00:41:10.600 Sometimes it drives us into some issues.
00:41:14.140 But for the most part, that voice in our head is a pretty good thing.
00:41:17.800 Interestingly, the voice in our head never changes.
00:41:20.040 That voice in your head is the same when you were five as when you were 55.
00:41:23.720 It's a curious voice that speaks in your head that is an amazingly intimate and personal
00:41:28.640 dynamic.
00:41:29.580 I think that we're actually seeing the emergence of not an inner monologue, but an inner dialogue.
00:41:36.880 So when I have those conversations with ChatGPT, I'm having an iterative dialogue, which is
00:41:46.960 a pre-human, pre-reality construct.
00:41:51.460 You could almost think of it as a dress rehearsal for life.
00:41:54.260 So I got on ChatGPT and said, well, I have to tell my wife that we're not going on that
00:41:57.520 vacation to, you know, to Belmar, New Jersey this year.
00:42:00.900 So I could rehearse that with her, with ChatGPT.
00:42:05.860 That is a very intimate private dialogue.
00:42:08.560 Powerful, definitely powerful.
00:42:10.960 But it's also problematic because in certain instances, it could drive you down the rabbit
00:42:18.640 hole of pathology.
00:42:19.780 But it also can kindle the dynamic of genius.
00:42:24.100 And that's the duality that really kind of flips me out.
00:42:27.060 We know that introspection is at the heart of transformation.
00:42:34.460 We know that from reading the Bible.
00:42:39.080 We know that sometimes you need to be alone.
00:42:42.040 You need to think.
00:42:43.200 Many, many great thinkers found the answers come when they're quiet and alone.
00:42:47.640 And that level of introspection, I think, kindles a certain level of what I often refer to
00:42:54.820 as genius as our birthright and mediocrity as self-imposed.
00:42:58.220 That I believe that our cognitive capabilities, when kindled, when tapped into, when managed
00:43:04.000 appropriately, yield really, really interesting things.
00:43:08.180 I think that AI may in fact be a surrogate, be a partner in kindling that reality.
00:43:15.540 That kind of flips people out. But look, Michael Jordan was a
00:43:23.180 genius, okay?
00:43:24.720 He did it by bouncing a basketball.
00:43:27.200 And that was, in many ways, that was introspective.
00:43:30.020 That was meditative.
00:43:31.900 We often refer to that as being in the zone.
00:43:35.300 How about the aha moment?
00:43:37.020 These are experiences that we've all had.
00:43:39.640 And I think that there may be an opportunity to leverage that unique internal dialogue, not
00:43:46.640 monologue, with artificial intelligence and large language models to find new levels of
00:43:51.920 cognitive engagement.
00:43:53.320 So we are in the abyss right now.
00:43:56.860 We're in the grand cognitive abyss.
00:43:59.340 And I just think it's important that we recognize that it's very easy to defer thinking to the
00:44:05.240 machine.
00:44:05.560 And when you defer thinking to the machine, you're not just cognitively offloading.
00:44:10.640 It's not just that I'm letting it remember my wife's telephone number.
00:44:14.140 It's that I am changing that.
00:44:16.680 Now, one could argue that it makes it efficient, but I think that's a real risk and something
00:44:20.500 we have to manage.
00:44:23.060 Yeah.
00:44:23.900 I mean, I worry about that disconnect and all the things that we do in our quiet time.
00:44:29.240 And, you know, even like, especially as a child, I think about the kids today, they don't have
00:44:34.960 any quiet time, generally speaking, because they're being fed something all the time.
00:44:40.940 And they're always having a screen put in front of them.
00:44:43.660 And it's like their play dates now with friends are just on screens together in the same room.
00:44:49.860 And, you know, when you think back, I mean, in my life, you know, playing
00:44:54.080 with Barbies and playing outside and creating a world for these things and pretending you're,
00:44:59.640 you know, in an army fort and what happens and like, that's just not happening now.
00:45:04.740 And I think that if you already had those things and now you're an adult, I
00:45:10.420 mean, you stop creating as an adult also; you're still destroying everything
00:45:15.820 moving forward. Here's my completely unscientific and controversial take. I think that we all have
00:45:24.620 the capacity for this hyper experience, for this cognitive zone, right? Being in the zone,
00:45:31.920 experiencing the aha moment. When I ask you to draw a picture of a genius,
00:45:38.100 most of you will scribble Albert Einstein, or you'll write that name down because he's the
00:45:42.460 prototypical genius in many instances. And that perspective of the genius is a smart
00:45:50.760 man sitting in a room, getting every question correct all the time, right? That ain't
00:45:56.700 genius. In fact, if you look at Einstein, much of his early work was what won him the
00:46:01.340 Nobel Prize: the photoelectric effect, relativity, general relativity. Those things happened in his twenties.
00:46:06.480 And for the rest of his life, he kind of languished in Princeton, another Jersey spot we're
00:46:10.840 going to mention. But he did have some interesting work. But I think we've all had moments
00:46:15.320 of enlightenment, moments of transcendence, moments of sort of an experiential element.
00:46:22.820 Now with kids, my contention is oftentimes kids find something that they're good at.
00:46:28.600 And it's not, like, good at math. It's good at knowing every car on the road. That's a
00:46:34.120 Chevy. That's a Tesla. You know, they have this savant-like capability. We should nurture that,
00:46:40.720 because that savant-like capability is, in essence, the genius experience. And when you find that
00:46:47.100 genius experience, you discover the joy of thought. Remember, we started our conversation today on,
00:46:53.080 as you think, so you act, as you act, so you become. And the cognitive age. I think that that's
00:46:58.080 something that we see with kids today. We can nurture that ability to find that spark. And we're
00:47:04.480 developing that spark in new and interesting ways, because if my son wants to learn gravity from
00:47:11.400 Carl Sagan, I can do that. I can have an LLM create a Carl Sagan-like teacher.
00:47:21.920 And when that happens, I think we see very, very magical changes. These things are
00:47:28.420 tuned to the creative frequency of your brain. So I think there's a lot of opportunity, you know,
00:47:32.200 but look, you're going fast, you know, you're traveling at the speed of thought. And that's
00:47:38.900 problematic. There was a study in Africa that used an LLM to teach math to children. And because the
00:47:47.420 math was tuned to their frequency... let me back up. Can I keep going on this, or am I down the
00:47:54.900 dark rabbit hole here? Well, I want to make sure Marcella gets in, because so much has been said
00:48:00.760 so she might have a question thus far. You remind me of Richard Feynman when Richard Feynman talked
00:48:07.600 about learning and how like you can learn, you can learn words, you can learn, this is the name of the
00:48:14.840 tree, the scientific name, but do you really know what it does? And yeah, and what I like about AI
00:48:21.740 though, is that like you said, it's a tool in a way that you can either just let it take you or you
00:48:31.460 can drive it, where you can have Carl Sagan and all that. Spot on, a hundred percent, you know, but
00:48:40.820 here's the interesting thing. And let's go back to, like, the 30,000-foot view, another comment that I
00:48:45.500 always get in trouble for, but I'll say it. Knowledge is dead. Knowledge is dead. And people
00:48:53.180 look at me and say, what the heck are you talking about? Well, if I want to cook a souffle, okay,
00:48:58.940 I'm no chef, but if I want to cook a souffle, I go into our kitchen and we have that book,
00:49:04.020 Julia Child's Mastering the Art of French Cooking. A lot of, you know, people have it. They never use it
00:49:08.640 because it's too darn hard; they kind of got it as a gift or something. So
00:49:12.900 on page 172 is the recipe for souffle. And they tell me, first thing they say is make a sachet.
00:49:21.080 I don't even know what a sachet is. This was not for me. It was not written for me. Now today,
00:49:30.060 if I want to learn how to cook a souffle, I go to a large language model and I say,
00:49:36.180 I want you to cook a souffle as good as Julia Child, but I want you to tell me how to do it
00:49:41.820 and make it funny. Use analogies to automotives and write it for a man who's never cooked in his life.
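The souffle request above is essentially a personalization template: task, audience, and style slotted into one prompt. A minimal sketch of building such a learner-centric prompt (the function name and wording are my own illustration, not from the episode):

```python
def personalized_howto(task: str, audience: str, style: str) -> str:
    """Build a learner-centric prompt of the kind described above."""
    return (
        f"Tell me how to {task}. "
        f"Write it for {audience}, make it funny, "
        f"and use analogies to {style}."
    )

print(personalized_howto(
    "cook a souffle as good as Julia Child's",
    "a man who has never cooked in his life",
    "automobiles",
))
```

Swap the three slots and the same knowledge arrives tuned to a different learner, which is the "learner-centric" point being made here.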
00:49:51.860 So that comes down. It actually collapses a wave function; in physics, we talk about
00:49:58.820 superposition. That piece of knowledge, how to cook a souffle made automotive and
00:50:04.740 funny for a guy who's never cooked, has never existed before. It exists nowhere, but it happens
00:50:11.120 to come down to your computer, uniquely to you. That's why knowledge in the traditional sense
00:50:17.640 is dead. Julia Child's book is a dust collector because today we could interpret that in the
00:50:25.260 context of my needs. It's user-centric or, more specifically, learner-centric. Well, let's go back to that crazy
00:50:34.680 teacher, your favorite teacher. She put you at the center. It was learner centric. So large language
00:50:41.880 models can teach me the way I want to learn. So my girls had to study the Krebs cycle. The Krebs cycle
00:50:48.440 is a metabolic pathway that biology students always have to learn and it's a pain in the neck, but I had
00:50:54.740 chat GPT write a poem about it and it was memorable and they learned it that way. So again, it's not just
00:51:04.560 an extension of what I know as a navigator, because I, as a navigator, don't know what fructose
00:51:11.680 1,6-bisphosphate is. ChatGPT does. And when it teaches me, it's in the context of poetry. Holy crap. It's
00:51:18.820 transformative. So if knowledge is dead, does that mean knowledge work is dead? And is AI going to be
00:51:29.260 taking all of our jobs? Brian in his recent post would tell you that he's developing the zero
00:51:35.020 employee company. So I think that, again, I'm going to hedge on this and say
00:51:42.820 I don't think so. The blacksmith died in the industrial revolution,
00:51:50.180 right? When we changed to steel and cars. And that doesn't necessarily mean that
00:51:59.160 the knowledge worker is dead. I'll give you a couple of examples. Um, I think it was
00:52:04.420 Mathew Brady, the guy who did the Civil War photography, the black-and-white Civil War photography.
00:52:11.020 Anyway, when photography emerged in the United States
00:52:16.920 and in print around the world, portraiture did not go away. It got bigger. It grew and
00:52:25.580 grew and grew. And it created this thing called selfies, a billion dollar industry of selfies.
00:52:32.500 Similarly, when I think it was Boris Spassky played IBM's deep blue in chess and lost.
00:52:41.020 What happened to chess? Was chess finished? Did everybody just take their boards and go home
00:52:46.100 and go away? No chess. And that's 20 years ago. Chess has never been more popular than it is today.
00:52:52.780 So my, my contention is that, that we don't cut the pie. We don't cut the pizza into smaller and
00:52:59.340 smaller pieces, leaving less for us, the pie grows. And when that pie grows, it develops new areas for
00:53:07.200 humanity. Now, you know, now, now we're, you know, it's interesting because innovation has always had
00:53:14.320 the backside of the coin. What's on the other side of the innovation coin? Obsolescence. When our phone
00:53:20.000 is obsolete, what do we do? We get a new one. When our car, washing machine, microwave, whatever it is,
00:53:26.960 when it breaks, generally we get a new one because innovation and obsolescence go hand in glove,
00:53:32.720 same side of this coin. For the first time in history, human cognition itself is on the obsolescence
00:53:38.400 chopping block. That's what flips people out. But I, I think it's also a concern when you think
00:53:45.280 of just the range of IQs, right? Like, cause there are certain people that have the cognitive ability
00:53:51.360 to maybe get to that high end that LLMs can't do and they can be useful and maybe amplified and, you
00:53:57.840 know, be 10 times more productive, but then there might be half the population that LLMs could just
00:54:04.400 replace, and you don't need them anymore. And there are no jobs left for them to do once you bring robots
00:54:08.480 into the picture too.