Co-Intelligence — Using AI to Think Better, Create More, and Live Smarter
Episode Stats
Words per Minute
223.8
Summary
Ethan Mollick is a professor at the Wharton School and the author of Co-Intelligence: Living and Working with AI. Today, he explains the impact of the rise of AI and why we should learn to utilize tools like ChatGPT as a collaborator, a co-worker, co-teacher, and coach.
Transcript
00:00:00.000
Brett McKay here, and welcome to another edition of the Art of Manliness podcast.
00:00:11.540
The era of artificially intelligent large language models is upon us and isn't going away.
00:00:16.480
Rather, AI tools like ChatGPT are only going to get better and better and affect more and
00:00:21.000
more areas of human life. If you haven't yet felt both amazed and unsettled by these technologies,
00:00:25.520
you probably haven't explored their true capabilities. My guest today will explain
00:00:29.880
why everyone should spend at least 10 hours experimenting with these chatbots, what it
00:00:33.760
means to live in an age where AI can pass the bar exam, beat humans at complex tests, and
00:00:38.300
even make us question our own creative abilities, what AI might mean for the future of work and
00:00:42.240
education, and how to use these tools to enhance rather than detract from your humanity. Ethan
00:00:47.760
Mollick is a professor at the Wharton School and the author of Co-Intelligence: Living
00:00:51.980
and Working with AI. Today on the show, Ethan explains the impact of the rise of AI and
00:00:56.860
why we should learn to utilize tools like ChatGPT as a collaborator, a co-worker, co-teacher,
00:01:02.140
co-researcher, and coach. He offers practical insights into harnessing AI to complement your
00:01:06.800
own thinking, remove tedious tasks from your workday, and amplify your productivity. We'll
00:01:11.660
also explore how to craft effective prompts for large language models, maximize the potential,
00:01:16.100
and thoughtfully navigate what may be the most profound technological shift of our lifetimes.
00:01:20.440
After the show is over, check out our show notes at aom.is/ai.
00:01:39.940
So I'm sure everyone listening to this episode has heard about or even used what's called artificial
00:01:45.360
intelligence. We'll talk about the difference between that and large language models; ChatGPT
00:01:51.120
is the most popular one. But I think popularly, when people use the phrase artificial intelligence,
00:01:56.140
they probably use it without really understanding what it means. You see AI this and AI that,
00:02:00.940
this has AI. When computer scientists talk about artificial intelligence, what do they mean by it?
00:02:07.100
So it is the world's worst label, like one of many of them, because it actually came from the 1950s
00:02:12.880
originally, and it has many different meanings. The two biggest meanings recently were these: before ChatGPT's
00:02:20.240
use, when you heard artificial intelligence being used, we were talking about machine learning,
00:02:24.040
which are ways that computers can recognize patterns in data and make predictions about what
00:02:28.400
comes next. So if I have all this weather data, I can predict what the weather is going to be
00:02:32.380
tomorrow. If I have all this data about where people order products, I can figure out where to put my
00:02:36.740
warehouse. If I have all this data on what movies people watch, I can use that to predict what movie you
00:02:41.080
might like given your watching history. So this form, you may have heard of big data, or data is the new oil,
00:02:46.220
or algorithms. All of that was what we'd call AI through most of the 2010s.
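Those pattern-prediction examples can be sketched as a toy nearest-neighbor recommender. Everything here (the users, movies, ratings, and the similarity formula) is invented purely for illustration and is far simpler than any real system:

```python
# Toy illustration (not any real recommender): suggest a movie for a user
# by finding the most similar other user and recommending what they liked.

ratings = {
    # user: {movie: rating from 1 to 5}
    "ann": {"Alien": 5, "Heat": 4, "Up": 1},
    "bob": {"Alien": 5, "Heat": 5, "Up": 2, "Tron": 4},
    "cat_fan": {"Up": 5, "Tron": 1},
}

def similarity(a, b):
    """Agreement on commonly rated movies, scaled to 0..1."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return sum(5 - abs(a[m] - b[m]) for m in shared) / (5 * len(shared))

def recommend(user):
    """Suggest the best unseen movie from the most similar other user."""
    others = [u for u in ratings if u != user]
    nearest = max(others, key=lambda u: similarity(ratings[user], ratings[u]))
    unseen = {m: r for m, r in ratings[nearest].items() if m not in ratings[user]}
    return max(unseen, key=unseen.get) if unseen else None

print(recommend("ann"))  # bob is most similar to ann, and bob liked Tron
```

The same count-and-compare logic, scaled up to millions of users and fancier math, is the "machine learning" era of AI that Mollick describes.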
00:02:53.000
And then OpenAI introduced ChatGPT and large language models became a big deal. Those use the
00:03:00.000
same techniques as are used in the other forms of machine learning, but they apply them to human
00:03:06.560
language. And it turns out that creates a whole bunch of really interesting new use cases. So AI has
00:03:11.300
meant many different things as a result. Okay, so let's talk about large language models or LLMs,
00:03:16.560
because I think when most people think about AI these days, that's typically what they're thinking
00:03:20.840
about. So we mentioned ChatGPT, then there's Claude, Gemini, Perplexity. How do these things work?
00:03:27.480
Like whenever you type something into ChatGPT, what's going on on the other end that gives you
00:03:32.820
whatever it spits out? So the right way to think about this is that like we don't actually know all the
00:03:39.400
details. We know technically how they work, but we don't know why they're as good as they are.
00:03:42.720
Technically how they work is you basically give this machine learning system all the language that
00:03:48.660
you can get your hands on. And so like the initial data sets these things trained on was all Wikipedia,
00:03:53.140
lots of the web, every public domain book, but also like lots of weird stuff. Like there's lots of
00:03:57.840
semi-pirated Harry Potter fan fiction in there. Also all of Enron, the energy company that went under
00:04:03.460
for financial fraud, all of their emails went in because those were freely available.
00:04:06.480
And so there's this vast amount of data. And then the AI goes through a process of learning
00:04:11.160
the relationships between words or parts of words called tokens using all this data. So it figures
00:04:15.880
out how patterns of language work. And it does that through complex statistical calculations and
00:04:21.720
it figures that on its own. So when you actually use these systems, what it's doing is doing all
00:04:26.740
this complex math to figure out what the next most likely word or token in the sentence is going to be.
00:04:32.180
So it's basically like the world's fanciest autocomplete that happens to be right a lot of the time.
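As a very loose sketch of that "fanciest autocomplete" idea, here is a toy bigram model that predicts the next word from counts. Real LLMs use neural networks over tokens rather than word counts, so this is only an analogy:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(following, word):
    """Return the most frequently observed next word, or None if unseen."""
    candidates = following.get(word.lower())
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

corpus = "the cat sat on the mat the cat chased the dog the cat ran"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

Swap the word counts for a neural network trained on trillions of tokens and you have, very roughly, the "complex statistical calculations" described above.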
00:04:36.480
Okay. But it can also create images. Like you can do that with ChatGPT and some of these other
00:04:41.160
LLMs. So what's going on there? Like how does that work?
00:04:44.920
So that's a really interesting situation because as of the time we're recording this,
00:04:50.120
there's been actually a very big change. So prior to the last week or two,
00:04:54.280
the way that AI's generated images tended to be something called the diffusion model,
00:04:58.620
which is kind of unrelated to large language models. And it involves
00:05:01.480
taking random static and then kind of carving it away until you get an image.
00:05:06.080
And those models, which we've all seen sort of operate, tend to produce a lot of distortion.
00:05:10.660
So they didn't do language very well, and they're not really that smart.
00:05:15.780
And so when AIs were creating images, they were prompting one of these diffusion models to make
00:05:19.480
an image for them. That all changed in the last week or so because two different systems,
00:05:25.340
OpenAI's GPT-4o and Google's Gemini gained the ability to create images directly.
00:05:34.540
So now what the AI does is, remember we talked about how it creates language by adding one word
00:05:38.660
after another, one token after another. It now can do that with images. Basically,
00:05:42.460
it's painting little patches of images. And just like words, it can create images or voice or any
00:05:47.300
other thing that way. So when it makes an image now, it can actually make it accurately.
00:05:50.420
So there's been a huge change in a very short period of time.
00:05:53.660
Okay. We'll dig more into how people are using this on a practical basis. But let's talk about
00:05:59.400
the different LLMs that are out there. So there's the popular ones, ChatGPT, that's run by OpenAI.
00:06:05.340
There's Claude, there's Gemini. What's the difference between these different large language models?
00:06:11.700
So there's a lot of things that are different between them that probably don't matter that much
00:06:16.680
because they're all evolving pretty quickly. So the most important thing to think about if you're
00:06:21.420
thinking about which AI to use is they all have different features, but they're all adding features
00:06:24.880
all the time and converging. What matters is that, at least when you're trying
00:06:29.280
to do hard problems, you're using the largest, most capable AI you have access to. We call these
00:06:32.880
frontier models. So ChatGPT has a lot of options available, like
00:06:40.560
o1, and their most recent models tend to be better. So if you are listening to this and you last used
00:06:45.460
AI 18 months ago or 12 months ago and thought, okay, it doesn't do that well right now. It makes
00:06:50.660
a lot of mistakes. All of those things change as models get bigger. So as models get bigger,
00:06:55.340
they get smarter at everything and more capable at everything. We call this the scaling law.
00:06:59.840
And as a result, you want to have access to a tool that is actively being developed. So you have a very
00:07:04.460
large model. So Anthropic's Claude, OpenAI's ChatGPT, and Google through their
00:07:08.460
Gemini system are all very good choices because they all have a lot of options about what they
00:07:13.040
can do and very big recent models to use. So researchers have given a lot of tests to these
00:07:18.840
LLMs, the kind of tests that a human would take, try to figure out how good these things are. So how
00:07:24.700
do the models do? So we're getting to the point where it's getting hard to test these things. So
00:07:30.320
to give you one example, there is a famous test that's used to evaluate these models
00:07:34.020
called the GPQA, which stands for Google-Proof Q&A, of all things.
00:07:39.060
And it's designed so that a human PhD student using Google and giving a half hour or more to
00:07:44.760
answer each question will get around 31% right outside their area of expertise. And inside their
00:07:49.420
area of expertise, they'll get around 82% right. So with Google, access to tools, that's what they get
00:07:55.220
right. What's happened very recently is until like last summer, the average AI was getting around,
00:08:02.100
you know, 35%. So better than a human outside their expertise, which is pretty impressive,
00:08:05.900
but not as good as a human expert. As of late this fall and into this spring, now the models are
00:08:11.660
performing better than humans at that test. So they're getting 84% or 85%, beating humans at this.
00:08:17.260
So they've gotten so good at tests that we've had to create new tests. So the most famous of these is
00:08:21.280
something called Humanity's Last Exam, where a company put together a bunch of human experts in
00:08:26.800
everything ranging from like archaeology and foreign languages to biochemistry to math.
00:08:31.940
And they've all created really hard problems. Professors have created hard problems that they
00:08:34.980
couldn't solve or, you know, that they would have trouble solving themselves. And when that came out
00:08:39.720
in January, the best models were getting around 2 or 3% right. Now they're getting between 18 and
00:08:44.740
28% right just about six or eight weeks later. So they're doing really well on exams.
00:08:49.940
Yeah. And ChatGPT, when it takes the bar exam, it's passing it. When it takes the AP exam in biology
00:08:56.900
and history and psychology, it's scoring fours and fives. So I mean, yeah, it's really, it's really
00:09:02.800
impressive. Yeah. I mean, we're in a place where the AI will beat most humans in most tests.
00:09:08.280
So going back to this idea of how AI works, like a fancy autocomplete, like, so what's going on? Like
00:09:13.180
if you give it a question, how is it figuring out the answers? Just saying, well, the probability based on
00:09:18.340
this question is, you know, this answer, is that what's going on? So two things are happening.
00:09:23.500
The comforting thing that's happening sometimes is that they cheat, right? So they've already seen
00:09:28.240
these questions so they can predict the next answer because it's already been in the data set before.
00:09:32.320
But we find that if we create new questions, the AI has never seen before, they still get things
00:09:36.400
right. And the truth is, this is where we're not a hundred percent sure why they're as good as they
00:09:40.580
are at this. We're actually trying to understand that right now. So we know how these systems work
00:09:44.160
technically, but we don't actually understand why they're as creative and good and persuasive and
00:09:49.140
interesting as they are. We don't have great theories on that yet. People listening to this
00:09:53.600
who have kept apace of computer science, they've probably heard of the Turing test. For those who
00:10:00.320
aren't familiar with the Turing test, what is that? And have these large language models passed the
00:10:06.240
Turing test? So the Turing test is one of a series of kind of mediocre measures of what makes
00:10:13.160
artificial intelligence "artificial intelligence" that we used to use to judge the quality of AI
00:10:18.340
because it didn't matter. No AI came close to it. So the Turing test is this test by the guy who
00:10:23.220
actually came up with the name for artificial intelligence, which is Alan Turing, who was a
00:10:28.000
famous World War II scientist. And he came up with the idea of what he called the imitation game. So
00:10:32.640
you may have even seen the movie about this. But the idea is that if you talk to an AI via typing
00:10:38.640
and you talk to a human, could you tell which was the AI and which was the human in natural
00:10:43.600
conversation? Until very recently, the idea of this was kind of laughable, right? That you could spend
00:10:48.380
time talking to a computer, you would know it was computer. And in some ways, it's become kind of
00:10:53.360
irrelevant because I think everybody thinks that they could be fooled by AI and they can be. So the
00:10:57.280
Turing test seems pretty decisively passed. In fact, what's pretty funny is that at this point,
00:11:02.400
humans in some small studies actually are more likely to judge the AI as human than human as human.
00:11:08.640
So we're still figuring this out. But I think the Turing test is passed.
00:11:12.820
So AI, these large language models, they're really good at a lot of things. What are the
00:11:16.620
limitations that these LLMs have right now? And what do they not do well?
00:11:21.620
It's a good question because that's changing all the time. We have this concept in our research,
00:11:25.180
we call the jagged frontier, which is AI is good at some things you wouldn't expect and bad at some
00:11:29.480
things you wouldn't expect. So until very recently, for example, you could ask the AI to write a sonnet for
00:11:34.380
you about strawberries, where every line starts with a vowel and has to also include
00:11:38.600
a line about space travel and you'd get a pretty good sonnet. But if you asked it to write a 25
00:11:44.900
word sentence about strawberries, or even count the number of R's in strawberries, it would get that
00:11:49.200
wrong. So the AI has these weird weak spots and weird strong spots. Now, the other thing is,
00:11:55.060
this is always changing. So that R test, "how many R's are there in strawberry," worked really well until
00:12:00.120
January 2025. And now it doesn't work anymore because the AIs are good enough that they can count the
00:12:05.460
number of R's in strawberry. So this is an evolving standard.
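One common explanation for that counting weak spot is tokenization: the model operates on subword chunks rather than individual letters. Here is a toy sketch; the token split shown is hypothetical, and real tokenizers differ:

```python
# Hypothetical subword split (real tokenizers differ): the model never
# "sees" individual letters, which is one common explanation for why
# letter-counting used to trip these systems up.
tokens = ["str", "aw", "berry"]   # roughly what a model might see
word = "".join(tokens)

print(word.count("r"))                 # trivial at the character level: 3
print([t.count("r") for t in tokens])  # per-token counts the model would
                                       # have to learn to add up: [1, 0, 2]
```

A human (or Python) works on the letters directly; the model has to infer letter facts about chunks it only ever sees as opaque units, which is why this sat on the wrong side of the jagged frontier for so long.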
00:12:10.080
I'm sure people who've been keeping on top of what's going on with large language models have
00:12:14.520
heard of this idea of hallucinations. What are those and is that still happening?
00:12:18.500
So remember, what an AI is doing is predicting the next word in a sentence. It's not looking things up
00:12:23.360
in a database. It's just, it's predicting. And so oftentimes what it predicts as the next word in
00:12:29.880
a sentence may not be true. So if you ask it, especially older models, if you ask them like
00:12:34.220
a book I've written, it might make up the title of a different book that could be something I wrote
00:12:38.120
because it's predicting something that's likely to be true, but it doesn't know whether it's true or
00:12:41.820
not. We call these hallucinations. They're basically errors the AI makes, but they're kind of really
00:12:46.360
pernicious or dangerous errors because the AI makes things up that sound real, right? If you ever
00:12:52.380
ask for a citation or quote, it's really good at making up quotes that like, I bet you Abraham Lincoln
00:12:56.140
did say that, but he never did. So it's not just like an obvious error. Like it makes something up
00:13:00.200
like, you know, Abraham Lincoln said, the robots will rise and murder us all. It will say something
00:13:04.340
that sounds like an Abraham Lincoln quote. So we call those things hallucinations. There's sort of good
00:13:09.160
news and bad news about hallucinations, which is that they're kind of how AI works. It's always making something up;
00:13:14.240
that's the only way it works. It's always generating, with probability, the next word in the sentence.
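That "generating with probability" mechanic can be sketched with a toy temperature-based sampler. The token scores below are invented for illustration; real models score tens of thousands of tokens at each step:

```python
import math
import random

def sample_next(logits, temperature, rng):
    """Sample the next token: temperature 0 always picks the top choice;
    higher temperature flattens the distribution so unlikely tokens
    (the raw material of both creativity and hallucination) can appear."""
    if temperature == 0:
        return max(logits, key=logits.get)
    weights = {t: math.exp(score / temperature) for t, score in logits.items()}
    r = rng.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # fallback for floating-point edge cases

# Invented scores a model might assign to the word after "Lincoln said":
logits = {'"Four': 3.0, '"Government': 2.0, '"Robots': -1.0}
rng = random.Random(0)
print(sample_next(logits, 0, rng))  # deterministic: the top-scoring token
```

At temperature 0 the sampler is boring but repeatable; turn the temperature up and it starts producing plausible-sounding but lower-probability continuations, which is the trade-off described above.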
00:13:18.800
So it's always kind of hallucinating. The fact that hallucinations are right so much of the time
00:13:21.960
is kind of weird. And also it's what makes the AI creative. If it wasn't making stuff up some of
00:13:25.960
the time, the answer would be very boring and the text is very boring. So it's very hard to get rid
00:13:30.480
of hallucinations entirely, but as AIs get bigger and better, they hallucinate less. So just last week,
00:13:36.460
a new study out looked at hallucination rates on the AI answering questions about New England
00:13:41.120
Journal of Medicine medical vignettes. And the hallucination rates used to be 25% of the
00:13:45.860
vignettes that it talked about were hallucinated. Now the latest models like o1 Pro are hallucinating
00:13:50.280
0% of the time. So that is changing over time. That doesn't mean hallucinations go away. But again,
00:13:58.220
Yeah. I remember a couple of years ago, I wrote this article for our website called,
00:14:03.120
why are dumbbells called dumbbells? And I wanted to see what chat GPT had to say. So I asked it.
00:14:09.440
I think this was maybe ChatGPT 3.5 when I asked it. And it just gave me this nonsense answer. It was
00:14:17.080
like, well, dumbbells are called dumbbells because Lord Dumbbell in 1772, blah, blah, blah. And I mean,
00:14:23.140
it was well-written. And if you didn't know why dumbbells are actually called dumbbells, you'd think,
00:14:28.240
okay, this sounds like a reasonable answer, but there's no Lord Dumbbell. That was totally made up.
00:14:33.360
And I just typed the same question in now. So I'm using ChatGPT 4.05. It actually gave me a
00:14:41.180
closer answer about why dumbbells are called dumbbells. So yeah, that's a perfect example
00:14:47.440
That's right. And it's a great story. And you kind of want it sometimes to tell you the Lord
00:14:51.220
Dumbbell story because otherwise it wouldn't be interesting or fun or come up with creative
00:14:54.980
ideas. And these systems are actually creative, which sort of goes back to when you asked me
00:14:58.660
the question, what are they bad at? People want to hear the answer that they're bad at creativity,
00:15:02.040
for example, or bad at emotion, except that they aren't. So that's what makes it kind of
00:15:08.780
Yeah. There's like creativity tests that they've run on the LLMs and they do pretty well on those
00:15:16.600
Yeah. I mean, there's some colleagues of mine at Wharton who run a famous entrepreneurship class
00:15:21.020
where they teach design thinking. One of the professors involved actually wrote the textbook
00:15:24.580
on product development and they had their students generate 200 startup ideas. They had GPT 4,
00:15:30.260
which was the model at the time, generate 200 startup ideas. And then outside human judges
00:15:34.540
judged the ideas by willingness to pay. Of the top 40 ideas, as judged by those humans, 35 came from the
00:15:40.080
AI, only five from the humans in the room. Wow.
00:15:42.240
It's pretty typical of what we see, which is that AI is pretty good at creative ideas,
00:15:45.680
and it beats most people at coming up with creative ideas. If you're really creative,
00:15:49.000
you'll be more creative than the AI. But for a lot of people, you should start almost every
00:15:52.660
ideation process by writing down your own ideas first and then asking the AI to come up with ideas for you.
00:15:56.660
So before we get into the potential benefits of AI, let's talk about the concerns people have
00:16:00.880
about it. So in your research about artificial intelligence, and you're talking to companies,
00:16:06.160
educators, what are the biggest concerns people have about artificial intelligence?
00:16:12.680
It's a great question. I mean, there's a lot of concerns. So first off, just to put this in context,
00:16:17.100
we consider AI to be ironically a GPT, which in this case stands for general purpose technology.
00:16:22.580
So these are those rare technologies that come around once in a generation or two, like
00:16:26.620
computers and the internet or steam power, that transform everything in ways good or bad.
00:16:31.460
So there's lots of effects when you have a general purpose technology that are good or bad.
00:16:35.740
So we could talk in detail about all the little effects, right? I mean, they may not be that
00:16:38.680
little to you, right? You can make fake images of people, you can convince people to give up
00:16:43.460
their money. There's all kinds of effects that might be negative, job impacts, other stuff.
00:16:48.020
A lot of AI researchers are also worried about long-term issues. So they're also concerned about
00:16:52.260
what they call existential threats. The idea that what if an AI is powerful enough that it tries
00:16:57.200
to control the world or kill everybody on earth? Or what happens if people can use AI to create
00:17:02.740
weapons of mass destruction? So there's sort of these two levels of worry. There's a worry about
00:17:06.520
the kind of impacts that are already happening in the world. And then there's worries that either
00:17:10.180
some people dismiss as science fiction or other people think are very plausible that AI might be
00:17:16.040
On that existential threat, there's this idea that the AI might become sentient. You hear about that.
00:17:20.720
Is that an actual, like, people actually think that's going to happen, potentially?
00:17:24.780
I don't think anyone knows. We don't have a good sense of where things are going. And I think
00:17:28.780
people's predictions are often off. And I think you don't even need sentience. We don't even know
00:17:34.080
what sentience is, but we don't even need sentience to have this kind of danger, right? The classic
00:17:38.060
example of the AI gone wild is called the paperclip problem, which is if you imagine you have an AI
00:17:44.060
that's programmed or given the goal of making as many paperclips as possible, it's part of paperclip
00:17:49.140
factory. And this is the first AI to become semi-sentient or self-controlled. It becomes
00:17:54.700
super smart, but still has the goal of making paperclips. Well, the only thing that's standing
00:17:59.540
in its way is the fact that not everything is a paperclip. So it figures out ways to manipulate
00:18:04.380
the stock market to make more money so it can instruct humans to build machines that will mine
00:18:08.760
the earth to find more metal for paperclips. And along the way, a human tries to shut it off.
00:18:13.140
So it kills all the humans, incidentally, without worrying about it, and turns them into paperclips.
00:18:16.940
Because why would it take the risk that it gets shut off and it can't make enough paperclips?
00:18:21.080
So all it does is make paperclips without caring about humans one way or another.
00:18:24.740
So that's sort of this model of AI superintelligence. But again,
00:18:28.960
nobody knows whether this stuff is real or not, or just science fiction.
00:18:32.580
You write in the book that when people start using LLMs, like ChatGPT or Claude,
00:18:38.000
they'll have three sleepless nights. Why is that?
00:18:40.660
So this is an existentially weird thing. I mean, it is very hard to use these systems
00:18:47.140
and really use them. I find, by the way, a lot of people kind of bounce off them precisely because
00:18:51.440
they feel like this kind of dread and they sort of walk away. But like, you've got a system that
00:18:55.780
seems to think like a human being who can answer questions for you, who can often do parts of your
00:19:00.660
job for you, that can write really well, that can be fun to talk to, that seems creative.
00:19:05.040
And like, these are things humans did. Like, no one else did this. There was no other animal that
00:19:09.640
did this. And it really can provoke this feeling of like, what does it mean to think? What's it
00:19:13.620
mean to be alive? What will I do for a living given that this is, you know, I don't know if
00:19:17.920
you've seen Notebook LM create podcasts right on demand. Like you start to worry, like, what does
00:19:22.220
this mean if this gets good enough? What does it mean for my kids' jobs, for my job? And I think
00:19:26.220
that that creates, you know, it's some excitement, but also some real anxiety.
00:19:30.980
No, I agree. If you haven't had those sleepless nights while using AI, it's because you
00:19:34.900
haven't used it enough or gone deep with it. Because, you know, both my wife and I, we have
00:19:39.260
the podcast, but we also write for a living. That's what we've done for the past 17 years.
00:19:43.700
And sometimes, you know, we'll go to ChatGPT and ChatGPT will spit out something, and we're like, that was
00:19:49.120
really good. Like, why am I here? What am I doing? Or the Notebook LM, I've used that. So I've used
00:19:56.020
Notebook LM to help me organize my notes, kind of create outlines and things like that. And I've used
00:20:01.580
that podcast feature. And it sounds just like two people having a back and forth conversation,
00:20:09.120
And you could jump in with a call-in, by the way. There's a call-in button now.
00:20:12.240
And this will only get better. And so that is this existential moment of like, you know,
00:20:16.780
I also write for a living. And, you know, of the AIs right now, the best writer is still probably
00:20:20.980
Claude, although some of them are getting better. And like, it's kind of crazy. Like,
00:20:25.280
I ask it for feedback on my writing and it has really good insights. You know, I write everything
00:20:29.460
myself, but then I do ask the AI, what am I missing here for a general audience? And sometimes
00:20:33.320
it's like, this would be really good to tighten up this paragraph. I'm like, oh, that's really
00:20:36.720
good advice. And I've had editors for years. And like, it is weird to have this AI be so good
00:20:45.100
You call AI a co-intelligence. What do you mean by that?
00:20:49.320
So as of right now, the most effective way to use AI is as a human working with it. Now,
00:20:56.420
that doesn't mean that it isn't better than us at some things. But part of what you need to think
00:21:01.600
about is how to use AI to do better at what you do, to do more of what you love. So it's not,
00:21:08.280
you know, you're not handing over your thinking to it. You're working with it to solve problems and
00:21:12.000
address things. And one of the really cool things about AI is it's just pretty good at filling in
00:21:16.860
your gaps, right? So we all have jobs that we have to do a lot of things at. Take the example of a
00:21:21.680
doctor. So to be a good doctor, you have to be good at, you know, at doing diagnosis. You have
00:21:27.420
to be probably good at hand skills and being able to manipulate the patient, figure out what's going
00:21:30.780
on. You have to probably be good at giving good bedside manner. You're probably managing a staff.
00:21:35.520
You have to do that. You have to keep up on medical research. You have to probably be a social worker
00:21:39.400
for some of your employees and your patients. No one's going to be good at all of those things.
00:21:43.760
And probably nobody likes all of those things. The things you're bad at, you probably like least.
00:21:47.080
So those are things the AI can help you most with. So you can concentrate on the things you
00:21:50.200
like to do most. The question is whether this maintains itself in the long term. But for right
00:21:53.820
now, AI really is a thing to work with to achieve more than it is something that replaces you.
00:22:00.720
So in the book, you provide four guidelines for using AI. The first is always invite AI to the
00:22:06.220
table. So what does that look like in practice? And why do you recommend doing that?
00:22:09.820
So one of the things we've talked about is the idea that with AI, you need to know what it's good or
00:22:16.440
bad at. And it's often hard to figure that out in advance. And it's often uncomfortable
00:22:20.040
to figure that out. So you kind of have to force yourself to do it. And the easiest way to do it
00:22:23.800
is to use AI in an area you have expertise in. So the magic number seems to be around 10 hours of
00:22:28.880
use. And if you use AI for 10 hours to try and do everything at your job you ethically
00:22:33.100
can with AI, then you're going to find pretty quickly where it can help you, where it could help you
00:22:39.040
if you learn to use it better, where it's not that useful, and where it might be
00:22:43.380
heading. And that lets you become good at using AI. So it's hard to give you rules that make
00:22:48.240
you great at AI use other than use AI in your job and you will figure it out. So the first rule and
00:22:53.440
the rule that I think has become the most useful for people is just use it. If you haven't put 10
00:22:57.300
hours in because you're avoiding it for some reason, you just need to do it.
00:23:01.300
The second guideline is be the human in the loop. What do you mean by that?
00:23:04.740
So this is an idea from control systems that there should always be a human making decisions.
00:23:09.680
I'm using it a little more loosely than that, which is that you want to figure out how you integrate AI
00:23:15.700
into your work in a way that increases your own importance and control and agency over your own
00:23:20.000
life. So you don't want to give up important things or important thinking to the AI. You want to use it
00:23:24.600
to support what you do to do it better. Oftentimes when people start using AI, they find out it's good
00:23:29.060
at some stuff that they actually thought they were good at and the AI is better than them. That is an okay
00:23:33.260
thing to come to a conclusion of. And you then figure out how do I use this in a way that enhances
00:23:37.760
my own agency and control and doesn't give it up.
00:23:41.460
Yeah. I like to think of going back to that co-intelligence idea when I'm working with an
00:23:45.580
LLM, I imagine myself like Winston Churchill, who, when he was a writer, you
00:23:51.860
know, Winston Churchill was a big writer, wrote histories, he'd have a team of research
00:23:55.880
assistants. So I kind of think of it like, I'm Winston Churchill and the LLMs are like my research
00:24:00.480
assistants. They go out and find things for me, compile things, summarize things. Then I take a look
00:24:05.680
at it and like, okay, now I'm going to take this stuff and write things out myself.
00:24:10.800
I love that analogy of the research team. I mean, that's how I use it in my book for the same kind
00:24:14.840
of purposes. Like I got feedback from it and, you know, did my jokes land in this section? It's not
00:24:19.520
that great at humor, but it actually is pretty good at reading humor. You know, when I got stuck,
00:24:22.760
give me 30 versions of how to end the sentence. You know, did I summarize this research paper properly?
00:24:27.460
So that kind of team of supporters is a really helpful way to think about things.
00:24:31.580
Yeah. And then also, I mean, I'm still, you know, I know that these LLMs are really good
00:24:37.080
at things, but I still don't trust it completely because like you say, same thing as like with a
00:24:40.980
person. Like, even when I delegate a task to a person, I trust, but I got to
00:24:44.960
verify, right? Like, well, you gave me this answer, let me make sure that's right.
00:24:49.500
Yeah. I mean, I think that that's exactly right. You should be nervous about this because in the same
00:24:54.460
way you kind of are nervous about a person, but you also kind of learn its idiosyncrasies,
00:24:57.580
right? So you learn, oh, it's actually pretty good at these tasks and I can pay less
00:25:01.340
attention, but this one I'm going to be very nervous about.
00:25:04.100
Yeah. So the third guideline is treat AI like a person. I think this goes back to our earlier discussion.
00:25:10.180
Well, a little bit. It's also just general advice. So I think a lot of people think about
00:25:14.660
AI as, you know, software and it is software, but software shouldn't argue with you. It shouldn't
00:25:20.500
make stuff up. It shouldn't try and solve your marital issues when you're discussing things with it.
00:25:24.520
It shouldn't give you different answers every time, but AI does all of those things.
00:25:28.000
And what turns out to be a pretty good model, even though it's not a person,
00:25:31.840
is if you treat it like a human being, you are 90% of the way there to prompting it. In fact,
00:25:37.000
we've actually found some evidence that computer programmers are worse at
00:25:40.700
using AI than non-programmers, because they want it to work like software code. But if you treat it like
00:25:45.120
a person in the same ways you've been discussing here, right? What's it good at? What do I trust it
00:25:49.160
for? What's its personality? If you use different models, you'll find Claude has a different personality
00:25:52.880
than GPT-4, which has a different personality than GPT-4.5. And so treating it like a person
00:25:57.860
gets you a large part of the way there and also demystifies this a bit. And so if you're a good
00:26:02.500
manager, if you're a good teacher, if you're a good parent, you're probably going to be pretty good
00:26:06.520
at using AI. Well, I imagine people who are hearing this are thinking, well, AI is not a person.
00:26:12.040
And it's ethically questionable to tell humans to treat this code like a living person. What's your
00:26:17.880
response to that? You're absolutely right. And that battle is lost. So one of the first things
00:26:22.920
people talked about in computer science is that it's unethical to anthropomorphize AI, to treat
00:26:28.360
AI like a person. And yet every single computer scientist does that anyway, right? We anthropomorphize
00:26:34.860
everything around us, right? Ships are "she," you know, we curse the weather like a person or
00:26:39.420
name storms. Like we do this anyway. So I think it's really important to emphasize that it is not a person.
00:26:44.560
This is a technique, but for better or for worse, all the AI companies are very happy to blur the
00:26:49.440
line. So a lot of the models have voice modes where they talk to you like a person. They all
00:26:52.720
talk in first person. They're happy to tell you stories about their own lives, even though they
00:26:56.480
don't have lives. So I think it is important to remember this is a product. It's a software product.
00:27:02.220
So view this as a tip for getting things done, but don't forget that you are talking to software.
00:27:08.600
Yeah. I think the danger of anthropomorphizing it, just treating it like an actual human being,
00:27:12.960
I mean, that you are seeing that at an extreme level where people are actually developing
00:27:16.560
like emotional relationships with artificial intelligence and like, that's not good.
00:27:23.200
I agree. I mean, I think it's inevitable, you know, but not good, right? Now there is some
00:27:28.600
evidence early on that, for people who have these relationships with AI,
00:27:32.280
it may help them psychologically. We're still unclear, but some early papers suggested that
00:27:35.900
that may actually be the case for people who are desperately lonely. We don't know.
00:27:38.700
But I mean, as a general rule, I would be nervous about treating a technology like a person emotionally
00:27:43.660
or having an attachment to it emotionally. It is software in the end. But, you know, I think that
00:27:48.440
we can recognize both things are true, that there is a limit at which this becomes unhealthy to do,
00:27:53.920
but as a useful tip or mental model, there's value in that.
00:27:58.460
Yeah. I know in my use of these different LLMs, like treating it like a human, I don't,
00:28:03.460
maybe I treat it like an alien almost, like it's human, but not. I don't know.
00:28:07.420
So anyways, I've noticed that if it gives me like a bad answer, I'm like, that was, that's a bad
00:28:12.500
answer. If I'm kind of mean to it, if I'm like, I'm like a stern boss, that was, that was not a good
00:28:18.240
answer. That was terrible. I know you can do better, do better. And like, it does better when I give it that feedback.
00:28:23.480
Yes. I mean, so it turns out that, you know, giving it clear feedback, like a stern boss is actually
00:28:29.580
very valuable. Now, the sternness or politeness doesn't matter much. We put out a
00:28:34.320
study a couple of weeks ago that found that being very polite to the AI had very mixed effects: on some
00:28:38.760
questions, it would actually be more accurate at math if you were very polite,
00:28:42.780
but on other questions, if you were very polite, it would be less accurate at math. So I don't worry
00:28:45.960
so much about things like politeness per se, although most people are polite to AI because they kind of
00:28:49.900
fall into that. It feels like a person, but I think you hit a very big secret there, which is
00:28:55.300
the interaction. It gives you a bad answer. You don't walk away. You say, this is what you did wrong,
00:28:59.900
do better. And it will do better. Not so much because you're being stern to it, but because
00:29:05.140
you're acting like an actual manager, right, or boss, or parent? You're saying,
00:29:09.000
this is what's wrong, please fix it, or just fix it. You don't have to say the please. And you get better results.
00:29:14.700
With the idea of being polite to the AI, it's definitely weird because the AI, it's always,
00:29:21.140
it's typically really affirmative, even when it's giving you a critique. And because it's being nice to
00:29:27.380
you, you feel like you need to be nice back to it. And I've noticed that sometimes when it gives me
00:29:32.800
a really good answer to a question I asked it, I feel this impulse to tell it, oh, hey, thanks.
00:29:39.080
That was really helpful. That was great. And then you think, wait a minute, this is weird. What does
00:29:43.500
it mean to feel gratitude for a machine? Yeah, it can be a mind trip sometimes.
00:29:50.120
It is. And it's really hard to be rude to these things, especially when you use a voice mode
00:29:53.380
and it's being like, hey, how are you doing today? Like you want to answer it and you are being
00:29:57.340
tricked. So, I mean, that's why this "treat it like a human" is a technique for using AI. It is not an emotional relationship.
00:30:05.880
Gotcha. Yeah. Treat it like a human when interacting with it, but not emotionally.
00:30:11.680
Yeah. The fourth guideline for AI is assume this is the worst AI you will ever use. Why is that a good rule of thumb?
00:30:19.020
Probably the most accurate thing I said in the book, we talked about test scores earlier.
00:30:22.260
These systems are getting better faster than I expected a year ago. There's been a whole
00:30:27.220
bunch of innovations that have made development happen faster. And, you know, I know enough
00:30:32.040
about what's happening inside the AI labs themselves to say, like, I don't think most of them expect
00:30:36.820
the development to end anytime soon. So, you should assume that if AI can't do something
00:30:41.720
now, that it's probably worth checking in a month or two to see if it can do it then.
00:30:46.120
You know, we're talking about writing. I mean, that's something I've been paying a lot
00:30:48.400
of attention to as somebody who writes a lot also, right? That's my job, both as a professor
00:30:52.160
and as a blogger, or as somebody who's on social media a lot. And, you know, a year ago, AI's
00:30:57.300
writing was absolute crap. And now when I use Claude, you know, like you said, it sometimes
00:31:01.400
comes up with a turn of phrase where you're like, Ooh, this is pretty good. You were talking about
00:31:04.020
using GPT-4.5. Like you could feel that model writes better and like, it's clever. And so
00:31:09.680
there is this idea that things that were impossible stop being impossible.
00:31:15.120
We're going to take a quick break for a word from our sponsors.
00:31:17.240
And now back to the show. So I think there's this fear that, okay, you know, the AI, it
00:31:27.460
can just do anything and humans are cooked. Like we're done. So there's no point of knowing
00:31:34.180
anything because the AI knows everything. But studies have found that people with a humanities
00:31:40.540
background, you know, they know a lot of history, philosophy, art, you know, things
00:31:45.340
like that are actually able to make the most of AI. Why is that?
00:31:51.220
So AI systems are trained on our collective knowledge. The data that goes into building
00:31:55.720
the statistical models comes from everything humanity has ever written essentially. And
00:32:00.760
all the art that goes into this comes from not just, you know, the most recent animations
00:32:04.720
or what, you know, Simpsons or Studio Ghibli or whatever, but also from the entire history
00:32:08.780
of art for humanity. And part of what you can be successful at, like there was a sort
00:32:13.260
of second caveat to "treat the AI like a person," which is to also tell it what kind of
00:32:16.880
person it is. You can invoke styles, personas, approaches. Think about this like you are,
00:32:22.540
you know, Mark Antony. Think about this as if you were Machiavelli and you get very different
00:32:27.180
kinds of answers because you're priming the AI to find different statistical connections
00:32:31.160
than before. So if you have a wide set of knowledge to draw from, like if you think about
00:32:36.000
AI art, everybody knows about Studio Ghibli or the Simpsons or Muppets style. But if
00:32:41.540
you know German Expressionism and Baroque paintings and, you know, classic 1970s
00:32:46.800
slasher posters, like you can get the AI to work in those kinds of styles. And that gives
00:32:51.740
you edges that other people don't have because you can create things that are different than
00:32:55.960
what other people see, get different perspectives than other people. So having that wide knowledge
00:33:02.340
I've noticed that. So I have a humanities background and I have found that I just get
00:33:08.840
a lot out of it because like I can make connections in my head and then I can prompt the LLM with
00:33:15.280
this, you know, like here's this weird connection I want to make. Is there any connection there
00:33:19.220
or how can we make that connection? And I imagine if you didn't have that background, you can't
00:33:22.560
do that. Like the AI is only as good as the prompt or the information you give it. And if you
00:33:27.940
don't have anything to give it, you're just going to get kind of mediocre results.
00:33:30.880
Yeah. I mean, it's getting easier to prompt, right? So there's not that many tricks to it,
00:33:34.860
but there is this kind of core truth you're pointing out. And it comes down not just to
00:33:41.780
that first prompt, but to the interaction. The fact that you can see the results and be like,
00:33:41.780
this is dull, I'd like more variation in the sentences, or, you know, I told you to write this
00:33:47.720
as if you were, you know, Stephen King, but I didn't want you to add
00:33:52.720
so many horror elements. So like, let's take those out, right? It's an interactive experience where if you
00:33:57.340
know connections and web, that's what the AI is, it's a connection machine, you'll be more effective
00:34:01.300
at using it yourself. So we've talked about treating LLMs like a person. And I think a lot of
00:34:05.960
people don't realize that because LLMs are trained on how humans think and write, if you talk to it,
00:34:13.300
not like a blunt Google search, but more like a person, you get better results. But beyond that
00:34:18.440
general advice, are there any other tips for prompt construction so you get better results?
00:34:23.180
Yeah, there's four things that sort of research backs up to do. And the first is really boring,
00:34:28.920
which is be direct. If a human intern would be confused by your instructions, the AI will be too.
00:34:34.820
So you want to be direct about what you want. I need a report for this circumstance, you know,
00:34:38.240
for this reason, you know, and that gets better results. So be very direct about what you want.
00:34:43.020
The second thing you want to do is that you want to give the AI context. So the more context it has,
00:34:50.500
the better. Context can be, here are some documents I like, but it can also be
00:34:55.080
things like act like this kind of person, or this is going to be used in this kind of way. The more
00:34:59.840
context the AI gets, the better off it is. The third is what's called chain of thought prompting.
00:35:05.400
This turns out to be a very powerful technique. And it's actually become a key way that AI has
00:35:10.060
improved, in that the newest models of AI do this automatically. So it's no longer as important
00:35:13.900
to do chain of thought, but it used to be the most useful way to do this, which is you literally have
00:35:18.020
the AI think step by step. First, do this: you know, come up with 300 ideas for an article,
00:35:22.840
then rate the ideas on a scale of one to 10, then pick the top five, then consolidate them
00:35:29.220
together into a new paragraph. Now write the document. That step-by-step reasoning both
00:35:34.720
makes the AI work better. But if you think about how AIs work, right, they're just
00:35:38.000
predicting one word at a time. They don't have a chance to pause and think. So the way they think
00:35:42.720
is by writing. So if you have them write a bunch before giving you an answer, they're going to end up
00:35:46.820
with better answers. So chain of thought makes them write out some stuff and go through a logical
00:35:50.480
process. It also makes it easier to figure out what's going right or wrong. And the fourth tip
00:35:54.160
is called few shot. Give the AI examples of the kinds of things you want to see that are either
00:35:59.140
good or bad, and it will deliver things that are more like the examples. Okay. Yeah. I think the
00:36:04.840
earlier tip of just telling the AI what you want it to be can be really useful. So I used this
00:36:12.000
the other day. So for the past couple of years, I've had like this pain in the back of my knee
00:36:16.440
from squatting, from powerlifting, and it's gotten better. But I've gone to an orthopedic
00:36:21.440
surgeon, did an MRI, and they're like, well, nothing's going on there. Went to a physical
00:36:25.440
therapist, and he really didn't know what was going on. And so just the other day, I was like,
00:36:30.660
I haven't asked ChatGPT this. What would ChatGPT say? So I told it, I want you to be the world's
00:36:35.500
best physical therapist slash orthopedic surgeon. I don't know if this is actually very good,
00:36:39.920
but I said, that's what I want you to be. Here's the situation I have. I took a picture
00:36:44.060
and had it pointed to like where the pain was in the back of my knee. Here's when I experienced the
00:36:48.480
pain and et cetera, like what's going on there? And then generate a rehab protocol. And it generated
00:36:54.840
this rehab protocol that I started doing some of the exercises, and it actually feels like it's
00:36:59.280
working because I can feel it in the spot that it's been hurting. And I haven't been able to do that
00:37:03.780
with like the advice that my physical therapist gave me. I mean, listen, I think, you know,
00:37:09.920
with all the qualifiers around this, that if you're not using an AI for a second opinion,
00:37:15.260
like, cheap second opinions are super easy, and you absolutely should be doing it. Like all the
00:37:19.240
research shows it's a pretty good doctor, right? Do not throw out your doctor for this yet. But like
00:37:23.340
that exact kind of use, I've used it for the same thing where I hurt my shoulder, you know,
00:37:26.820
and I'm like, tell me what the issues could be. And it's not bad, right? It's certainly better than
00:37:30.800
searching Google for this stuff. And the research on medicine shows it works pretty well. And the idea
00:37:35.740
that you gave it the context it needed, what you actually did there was you gave it both a context and a
00:37:39.440
persona, act like this person. That's a very reasonable way to start that. That's part of
00:37:43.940
the advice in the book, tell it what kind of person it is. And then you gave it all the background,
00:37:47.440
including, I love that you gave it the picture with the arrow pointing to it because these things
00:37:50.540
could see images. And so giving it that context made it more accurate, just like what a person
00:37:55.260
would. Like, you can put in your medical history and numbers. Again, I would not use this as your
00:37:59.780
only physician, but as a backup to empower yourself, it's incredibly powerful.
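The prompting tips from this part of the conversation (tell it what kind of person it is, give it context, be direct, have it work step by step, show examples) can be sketched as a small prompt-assembly helper. This is an illustrative sketch only, not anything from the interview; the function and field names are hypothetical, and the actual model call is omitted:

```python
def build_prompt(task, persona=None, context=None, steps=None, examples=None):
    """Assemble a prompt from the tips discussed: persona, context,
    a direct task, step-by-step instructions, and few-shot examples.
    Pure string assembly; sending it to a model is left to the caller."""
    parts = []
    if persona:
        # "Tell it what kind of person it is."
        parts.append(f"You are {persona}.")
    if context:
        # The more background the model has, the better.
        parts.append(f"Context: {context}")
    # Be direct about exactly what you want.
    parts.append(task)
    if steps:
        # Chain-of-thought: spell out the steps to work through.
        parts.append("Work through this step by step:")
        parts.extend(f"{i}. {step}" for i, step in enumerate(steps, 1))
    if examples:
        # Few-shot: show the kind of output you want back.
        parts.append("Examples of the kind of output I want:")
        parts.extend(f"- {example}" for example in examples)
    return "\n".join(parts)

print(build_prompt(
    task="Draft a rehab protocol for pain behind my knee when squatting.",
    persona="the world's best physical therapist",
    context="MRI was clean; the pain appears at the bottom of a heavy squat.",
    steps=["List likely causes", "Rank them by fit", "Propose exercises"],
))
```

As the conversation notes, the first prompt is only a start; the real gains come from reacting to the output and asking for revisions.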
00:38:03.840
Okay. So let's talk about some practical ways you've been seeing companies and educators
00:38:09.240
use AI. Let's talk about work first. So what are some, you know, brass tacks ways people can use AI
00:38:16.560
in their work? And we've kind of mentioned some things, but what are some things that a general
00:38:20.980
worker, maybe someone who's in management or something like that, how can they use AI for their
00:38:27.000
So it's pretty good for advice. There's a really nice study that shows that, of all people,
00:39:31.000
small business entrepreneurs in Kenya who were already performing well and
00:39:34.720
who just got advice from the AI had their
00:39:38.980
profits increase 12 to 18% just from that advice; those performing badly didn't have the resources to do anything with it. So it's pretty good at giving you advice or
00:38:43.180
helping you talk through issues. It's obviously pretty good at writing and reading. Like it's
00:38:47.500
pretty good at summarizing an entire meeting and telling you what action points people can
00:38:51.300
take. Increasingly, if you use the deep research modes, it writes an incredibly good market
00:38:56.420
research report. There might still be some errors, but it's a great starting point. It can save you
00:38:59.660
20 or 30 hours of work. And those deep research modes are available right now in Gemini, OpenAI's
00:39:05.380
ChatGPT, and in Grok from Elon Musk's xAI. But those deep research modes are very, very useful.
00:39:13.340
I mean, I've worked with them with lawyers and accountants, and they're also very impressed
00:39:16.580
by the results. It's very good even if you can't code. I build little coding tools all the time:
00:39:20.820
help me work through the financials here by building an interactive spreadsheet for me.
00:39:24.460
So you have to experiment. That's the 10 hours thing, but there's a lot of use cases.
00:39:27.940
The thing I tend to point out to people in a work environment is two things. One is you
00:39:33.360
will know what it's good or bad for pretty quickly because you're an expert at your own
00:39:36.160
job. So if you're like, this is not good for that, great, you've learned something. If
00:39:40.020
it is good, you often know how to give feedback to make it even better. The second thing I
00:39:44.060
would tell people about using AI at work is the thing you have to overcome is this idea
00:39:49.440
of working with a human, where you can only get so many answers. I think you should take a maximalist
00:39:53.660
approach to working with AI. Don't ask it for one way to write this email. Ask it for
00:39:58.460
30 and then pick the best one. Don't ask it for one idea. Ask for 200. It doesn't get
00:40:03.340
tired. It will never get annoyed at you. So part of what the value of it is, is this abundance.
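This maximalist approach, ask for many candidates and keep the best rather than accepting the first answer, can be pictured as a generate-then-select loop. A minimal sketch, where `generate` and `score` are hypothetical stand-ins for a real model call and whatever quality measure you trust:

```python
def best_of_n(generate, score, n=30):
    """Maximalist pattern from the conversation: request n candidates
    and keep only the highest-scoring one. `generate` and `score` are
    placeholders supplied by the caller, not any specific model API."""
    candidates = [generate() for _ in range(n)]
    return max(candidates, key=score)

# Toy stand-ins: "drafts" are strings, scored by length as a crude
# proxy for specificity.
drafts = iter([
    "Thanks for your email.",
    "Thanks for your email; I can meet Tuesday at 3pm to discuss Q3.",
    "Thanks, talk soon.",
])
best = best_of_n(generate=lambda: next(drafts), score=len, n=3)
print(best)  # the most detailed draft wins under this toy score
```

The point is the abundance: the model doesn't get tired, so selection over many drafts is nearly free compared to asking a human thirty times.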
00:40:08.660
You also talk about in the book how you got to figure out how to decide what to delegate
00:40:12.540
to the AI and which tasks you should keep doing yourself. So is there a rubric you use to decide?
00:40:18.520
So I think part of it is about personal responsibility and ethics. What do you think you ethically
00:40:23.380
have to keep for yourself to do? Like for example, we actually know from research that AI is a
00:40:27.820
better grader than I am. But I don't use the AI to do grading on papers, even though it's
00:40:31.540
better because I feel like my job as a professor is that I am providing the feedback, right?
00:40:36.900
Or if I'm using teaching assistants or something, I would delegate to those humans. But I don't
00:40:41.380
use AI to do that, even though it could do a better job. On the other hand, there are things
00:40:46.120
I know the AI is not going to be great at where I know I have to take over. And I know
00:40:49.420
that because I've spent my 10 hours working with AI. So I think it's either ethical or practical.
00:40:56.760
Gotcha. And I think too, with this idea of thinking about AI in your work, I've read about
00:41:03.040
this. Maybe you talked about this in your book too, I can't remember. But you now are
00:41:06.260
in the position where everyone who has access to AI can do a lot of jobs at an 80% level.
00:41:12.260
Whereas it used to be, if you were bad at writing a memo or doing other kinds of tasks,
00:41:17.040
then your career is going to be kind of stunted. But with AI, you can write a pretty decent memo,
00:41:21.560
but everyone else can also write pretty decent memos. So now it's like, okay, if AI can get
00:41:29.060
everybody 80% of the way on the more basic stuff, then you got to figure out how to do the other 20%
00:41:36.040
stuff super well. And that's what's going to separate you from the pack, is if you can do that extra
00:41:41.460
20%. So you got to ask, what can I add to get all the way there? And that's often the hardest part.
00:41:48.780
So I'll push back a little, because I think when I say it does 80% at an 80% level, that's not always
00:41:54.300
the easy part. Sometimes it actually does the hard part, and it's very good at that. I think the
00:41:58.420
question is how you attach it together and how you work together with it. And focusing the areas
00:42:02.640
where you're definitely better than AI. I think about this a lot. I'm a former entrepreneur myself.
00:42:06.700
I teach entrepreneurship classes at Wharton, fund companies, work with companies. And one of the
00:42:12.560
things that's really interesting about being an entrepreneur is you generally are really good at
00:42:16.320
one or two things, and you suck at everything else. But you have to do all that other stuff to do the
00:42:19.920
one or two things you're good at. So you're really good at coding. You're really good at running a
00:42:23.180
podcast like this. You write compelling content nobody else is able to write. But you also have to
00:42:28.400
keep the books and fill out forms and give your employees performance reviews and all the other stuff
00:42:33.700
that comes with running a business that you may not be good at. Writing emails, writing marketing
00:42:38.800
material. So the idea is that if the AI does that as well as an 80th percentile person, it's not bad,
00:42:45.280
right? That was stuff you were doing at the 20th percentile. So that lets you focus on the things you
00:42:49.620
do really well and give up the stuff you don't do well. That makes sense. Are there any specific prompts you'd recommend?
00:42:56.860
So there's a whole bunch of things you could think about. I find one really good thing is
00:43:02.360
to ask the AI to have arguments on your behalf, like what are some pros and cons of this? Another
00:43:08.400
really nice piece of advice is think about frameworks I can use to address this problem.
00:43:13.860
Examples of frameworks might be things like a two by two matrix or a strategy matrix. And give me two
00:43:19.780
different frameworks that I can use to think about this problem and tell me what those frameworks
00:43:24.880
would say. So you can force the AI to kind of think like a high-end consultant on those kinds of problems.
00:43:29.620
But how do you think AI will affect more creative work? Like what role do you think humans can play
00:43:34.640
in a world where AI can create pretty good art, write good copy, even do a podcast? Like, where do humans fit in?
00:43:44.780
So I think if AI stayed at the level where it is right now, it's quite good, but it's not as good
00:43:50.200
at podcasts as you. I'm trying to butter you up for good editing here. You know,
00:43:54.260
I don't think it's as good a professor as me, right? Or as good a writer as a good
00:43:58.320
writer. I think, in general, whatever you're best at, you're probably better
00:44:02.140
than AI. The question is whether that stays the same, right? It hasn't, right? Next year,
00:44:07.140
it's going to be better than it is now. At some point it might, you know, it might be a better
00:44:10.620
podcaster than you. It might be a better professor than me, right? Better at writing research papers
00:44:14.100
or whatever else. And I think that becomes the big question. What do we do in that world?
00:44:18.140
And that's a decision we get to make in some ways. Like AI is something being developed, but it's not
00:44:22.360
something that we don't have any control over. And what I worry about is when people just sort of
00:44:26.180
throw up their hands and be like, well, AI does stuff. Like, what do we want the future to look
00:44:29.520
like? We get to make decisions about that. Yeah. I mean, so, I mean, you've talked about
00:44:32.940
how you're still using AI in your own creative process. Like when you write,
00:44:37.720
you know, you're trying to figure out how to end a sentence and you're just thinking,
00:44:40.940
thinking, thinking, and nothing's coming to you. So you ask the AI, well, you know, what are 30
00:44:45.000
different ways I could end this? And it spits some things out. Then you're like,
00:44:48.640
oh, well, that's a good one. Or you start mish-mashing, you know, kind of different sentences
00:44:53.680
that it spit out to you to get a good one. I mean, that kind of method of working with it,
00:44:59.040
that co-intelligence piece is ultimately the message here. At least for right now, at the level
00:45:03.500
AI is around, it has weaknesses and your ability to use it as a starter for information, as a fill-in,
00:45:10.160
as a way to get more done, right? So, you know, maybe there's a world where the AI is
00:45:14.920
very good at podcasting and you develop a way so that it's doing personalized podcasts for everyone
00:45:18.860
who downloads one, right? So in this model, you're hearing the two of us talk, but we're
00:45:23.600
talking specifically about the issues that you, listener X, are experiencing. I mean, there are
00:45:29.840
future models of more ambitious worlds where if everyone has a thousand PhDs, what do you do with
00:45:34.980
those? So I don't think this takes away all choice and agency for us. It does make us rethink how we work.
00:45:40.320
Okay. So we've talked about using AI at work. Let's talk about using AI at school, and you're a
00:45:46.040
professor. So you've got a front row seat to see how this is all playing out. But before we talk
00:45:51.780
about some of the potential upsides of AI in the classroom, let's talk about the disruption. It
00:45:56.800
seems like AI has pretty much blown up homework. Like it's caused the homework apocalypse. You know,
00:46:03.440
like when a student gets an assignment, they can just go to AI, say, AI, write me an essay. AI,
00:46:08.700
write, you know, here's a picture of a math problem, the calculus problem, solve it. So what
00:46:13.800
do we do in a world where students can just get the answer right from AI? I mean, is school over?
00:46:20.920
So, I mean, right now it's absolute chaos, right? As of last July, 70% of undergrads and 70% of K-12
00:46:28.580
students were using AI for quote unquote help with homework. So everyone's using it. AI detectors don't
00:46:33.320
work, by the way. All of them have a high false positive rate. Some people just write like AI and they
00:46:37.460
get accused all the time of using AI and they could never prove they didn't use it.
00:46:42.060
Yeah. Like AI uses the word delve a lot. And before AI, I'd use that word. I'd use the word
00:46:48.960
delve, and now I can't use delve anymore because it's kind of an AI thing. And I don't want people to think I'm using AI.
00:46:56.060
Well, what's actually pretty funny is there's a statistical analysis that shows
00:47:00.180
that the use of delve is dropped off dramatically because the models no longer say delve that much
00:47:05.000
and no humans want to use it anymore, right? So it's very funny to react negatively to it,
00:47:09.120
but you can't ever prove that you're not using AI, right? I've just kind of given up. Like,
00:47:12.520
I mean, what you end up doing is leaving spelling errors or something like that and hoping that that
00:47:16.360
proves it. But I mean, you're facing the exact same problem we all are, right? You could be accused
00:47:20.060
of using AI anytime. You can't prove it. So teachers really have two choices. Choice number one is the
00:47:25.600
same thing we dealt with in math classes after the calculator in the 1970s, which is you go
00:47:31.780
back to basics and you say, listen, do the homework or don't do the homework. The homework
00:47:35.920
helps you with tests. In class, we're going to have active learning. I'm going to ask you questions
00:47:39.840
about the essays you wrote. You're going to do in-class assignments. You're going to do in-class
00:47:43.340
blue book tests. And that's a completely reasonable way to respond to AI in the short term. That's
00:47:47.700
exactly what we did in math classes, right? Like you do the math homework, it might be great. It might
00:47:51.440
not be great. But the big deal are the tests you do in class. And we could do that for other things
00:47:54.940
like writing. We just don't. The second option is you transform how you're teaching. And like my class
00:47:59.400
are 100% AI based. Everything you do involves AI stuff. So you teach AI that pretends to be a bad
00:48:04.360
student. You co-write a case with it. The AI grills you about problems. Because I teach
00:48:08.440
entrepreneurship, we're also able to do incredibly impossible assignments like, you know, come up
00:48:12.500
with a new idea and launch a working product by the end of the week. We can do things we didn't do
00:48:16.480
before. So we'll figure this out. But schools are definitely in chaos right now.
00:48:21.400
Well, I think going back to that idea, that point you made that people with humanities degrees
00:48:25.400
or a humanities background do better with AI? I mean, I think this makes the case that we still
00:48:31.720
have got to teach young people general knowledge. That becomes more important
00:48:36.680
if you want to actually make this AI useful. Absolutely. General knowledge is more
00:48:42.400
important than ever. Expertise is more important than ever. And we can teach people this. I mean,
00:48:46.780
we really can. They're in the classroom already. And the most effective way of teaching has always
00:48:50.960
been active learning where people are doing things actively in the classroom and not just
00:48:54.760
hearing a lecture. So the trend even before AI was, how do we create flipped classrooms where
00:49:00.360
you watch videos of lectures or read textbooks outside of class, then in class you apply that
00:49:05.600
knowledge. That kind of approach is very AI proof. And there's lots of ways we can use AI to make
00:49:09.820
learning more engaging. I've been building games and simulations where you basically, you know,
00:49:14.500
you don't just learn how to negotiate. There's an AI you negotiate with. And that turns out to be
00:49:17.940
really easy to build. You can use AI to do all kinds of really interesting teaching things. There's a set of
00:49:22.840
research out of Harvard that shows an AI tutor improves performance on test scores. There's another
00:49:28.440
big study by the World Bank in Nigeria that shows that six weeks of after-school AI tutoring with
00:49:34.040
teachers in the room produced big learning gains. It's actually important to have teachers involved because students, when they just
00:49:37.520
use AI themselves to learn, it turns out they don't learn very well at all. They just kind of cheat and
00:49:41.660
don't realize they're not learning and they do worse on tests. But if you make it part of assignments
00:49:45.260
and teachers work with you on this, then you actually get huge increases in learning outcomes.
00:49:49.420
So there is like a really good future where AI supercharges learning, makes it more personal,
00:49:53.760
makes it better. And I think we're close to that. It's just, you can't just say to your kids,
00:49:58.240
use AI and it'll all work out because that's not actually the case. Like learning requires effort
00:50:03.060
and letting AI skip that effort actually can hurt you. So we have a lot of potential for the future,
00:50:07.800
but also a lot of misconceptions and sort of thinking to do about how to use this properly.
00:50:12.380
Something my wife and I discussed quite a bit, since we're writers and then we look at like
00:50:16.540
what AI can do with writing. It's like, is there even a point for like my kids learning grammar and
00:50:21.280
how to diagram a sentence and whatnot? You're a writer. Is there still a case to be made to learn
00:50:26.500
those fundamentals of writing in the world where ChatGPT can just spit out something for you?
00:50:31.860
I mean, again, I think that the key is really building true expertise. And I think that
00:50:36.120
what this hopefully does is sharpen things for us. You know, math classes became a lot more
00:50:41.240
organized after the calculator because people had to actually think about what do we want people to
00:50:44.660
learn? Like how much should they learn to do multiplication, division by hand? And what's
00:50:48.980
that valuable for? And when should they switch over to using calculators? And I think we can do the
00:50:52.540
same thing with writing education. I mean, I understand that it kind of sucks, right? Like
00:50:55.700
essays used to be a great way to do things for teachers. They could just assign essays and assume
00:50:59.620
people learned. A lot of people didn't learn or were already cheating. By the way, prior to ChatGPT,
00:51:03.800
there were 40,000 people in Kenya whose full-time job was writing essays for American college students.
00:51:08.420
So this isn't a new problem. So I do think we need to learn how to, I mean, whether diagramming
00:51:14.540
sentences is the right approach or just writing a lot with creative prompts, I think
00:51:19.680
writing remains really important because we want people to learn to be good writers and readers.
00:51:24.380
And that's what school is for. But we have to start approaching this a different way. We can't
00:51:27.960
just assume we give people a take-home assignment, an essay, and they're learning something from it.
00:51:31.560
But that also hasn't been true for a long time since the internet came out. People were already cheating.
00:51:35.840
So I think we have to face the fact that, you know, this is something we have to learn about
00:51:40.580
how to do better and actively work to do better. Any advice for parents who, maybe they've
00:51:46.660
got kids in middle school, high school, and they're seeing their kids use AI for their homework,
00:51:52.840
for homework help. Any advice on guiding them so they can use it not just as a way to
00:51:58.360
cheat and just get the answer and get the homework done, but like, oh, we can actually enhance your
00:52:01.740
learning. What are some like prompts or some guidelines for that?
00:52:05.040
So we have a bunch of free prompts that you can use and you can find those at Generative AI Lab
00:52:10.360
at Wharton. And there's a prompt list that you can use of tutor prompts. But aside from those,
00:52:15.520
I don't think prompts are really, you know, they're important. But I think the real key is thinking
00:52:19.120
about as a parent how to use it. So for example, when you want to give your kids, you know, homework
00:52:23.700
help, don't let them use AI or try and suggest they don't use AI. But what you do is you actually take
00:52:29.600
your phone and you take a picture of that calculus problem and you ask the AI, explain this to me in a way
00:52:34.340
that I can teach my kid how to do this, and here's what they're good at or bad at. Or even better, have
00:52:38.480
an ongoing conversation where it knows the strength and weakness of your kids. When your kids do use
00:52:42.480
AI, ask it to give practice help for quizzes. Generate problems for me for AP social studies in
00:52:47.940
this unit and quiz me on what I know or don't know. Like the key is that it has to be effortful work.
00:52:53.880
So if they're just getting answers from the AI, they're not getting anything valuable. If they're
00:52:57.100
being quizzed by the AI, they're asking questions and getting answers back. They're indulging their
00:53:01.620
curiosity. You're the one using this to help you become a better teacher. We all are, you know,
00:53:06.020
amateur teachers to our kids on lots of topics. And I mean, I can't remember calculus, but the AI
00:53:10.920
does. And you could use those tools to do this, but it's like any other form of media or experience,
00:53:16.700
you need to be an active parent. And I think even if you don't have kids and you're just an adult and
00:53:21.920
you want to continue your education, I think AI can be a really powerful co-learner or co-teacher with
00:53:28.780
you. I've been using it in my own sort of personal reading right now. I'm reading Invisible Man by
00:53:33.700
Ralph Ellison, read it back in high school, decided to read it again as a middle-aged guy. And I've
00:53:39.680
been reading it along with AI. So I'll finish a chapter and I'll go to the AI and say,
00:53:44.280
hey, you're an American literature professor. I want to talk to you about Ralph Ellison's Invisible
00:53:48.000
Man. Let's talk about chapter three. And it says, okay, yeah, here's chapter three. Here's what
00:53:52.600
happens. But then I'll just start asking it more and more questions, kind of drill down into more and more
00:53:58.200
specific questions like, you know, what do you think is going on in this line? What does that
00:54:01.820
mean? And it starts spitting out ideas. And it just, I mean, it just helps. It just gets me
00:54:07.140
thinking about the text in a deeper way. And that, by the way, the co-thinking partner thing is often
00:54:13.680
important. I spoke to a quantum physicist at Harvard and he said his best ideas come from talking to the
00:54:19.420
AI. And I'm like, is it good at quantum physics? He said, no, no, not really. But it's very good at
00:54:23.740
asking me good questions and getting me to think. And I think you're sort of spotting like the most
00:54:27.920
ultimate form of co-intelligence is we just don't have, even with a, you know, a supportive spouse
00:54:32.460
who's doing the same work that you're doing and is, you know, and is intellectually engaged with you,
00:54:37.140
we still lack thinking partners in the world, right? Like, so it can help you spur your own
00:54:41.340
thinking. I love your examples. They show what happens when you get comfortable with the system
00:54:46.240
and you start to think about how can I use AI to help. And what I love is all the examples you've
00:54:50.520
given, about getting help with your writing or, you know, help with this
00:54:53.700
reading project is about having it supplement your thinking and not replace it.
00:54:59.160
Yeah, that's, that's the way I think it's supplementing, not replacing. So what do you
00:55:02.080
think is the future of AI? Where do you see it going?
00:55:05.440
So I think it's worth noting something, which is the big thing that's happened over the last few
00:55:09.700
months is there's been a couple of technical breakthroughs in AI that make it much smarter,
00:55:14.000
that are pretty easy to implement that people have been doing. So these are called reasoners,
00:55:17.880
models that think before answering questions. Turns out that makes the AIs a lot smarter.
00:55:21.420
And as a result of that, plus a few other breakthroughs, when you talk to people at
00:55:24.940
the AI labs, and they talk about this publicly too, they genuinely believe that in the next
00:55:28.880
couple of years, two to three years, they might be able to achieve AGI, artificial general
00:55:33.480
intelligence, a machine smarter than a human at every intellectual task. I don't know if
00:55:37.120
they're right. Nobody knows if they're right. They might be, you know, high on their own
00:55:40.380
supply, but they believe that this is true. The message you take away from that is that
00:55:44.420
these systems will keep getting better. So I think there's an advantage to kind of learning what
00:55:48.840
they're good or bad at right now. But I also think we need to be flexible. The future is
00:55:52.560
changing. I mean, it's a very good time to be an entrepreneur. It's a very good time to try and
00:55:55.860
learn more about the world. It's a very good time to use this in your job to become much more
00:56:00.100
successful because a lot of people don't realize what these things could do yet. But I don't know
00:56:04.520
what the future holds in the long term. I think these systems will keep getting smarter.
00:56:07.720
They'll still be jagged, not great at everything, but they are getting smarter.
00:56:11.660
Well, Ethan, this has been a great conversation. Where can people go to learn more about the book and your work?
00:56:14.820
So I've got a free Substack called One Useful Thing at oneusefulthing.org. That is probably the best way to keep up to date
00:56:20.720
on AI. My book is available at every major bookstore. It's called Co-Intelligence. And I
00:56:26.040
think that's a fun read also. And I am very active on social media on Twitter and Bluesky and LinkedIn.
00:56:34.000
Fantastic. Well, Ethan Mollick, thanks for your time. It's been a pleasure.
00:56:38.060
My guest today was Ethan Mollick. He's the author of the book Co-Intelligence. It's available on
00:56:41.740
amazon.com and in bookstores everywhere. You can learn more about his work at oneusefulthing.org.
00:56:46.440
Also check out our show notes at aom.is slash AI, where you can find links to resources where we delve
00:56:50.560
deeper into the topic. Well, that wraps up another edition of the AOM podcast. Make sure to check
00:57:02.420
out our website at artofmanliness.com, where you can find our podcast archives and make sure to sign up for our
00:57:06.620
newsletter. It's called Dying Breed. You can sign up at dyingbreed.net. It's a great way to support the
00:57:11.280
show directly. And if you haven't done so already, I'd appreciate it if you take one
00:57:14.380
minute to give us a review on Apple Podcasts or Spotify. It helps out a lot. And if you've
00:57:17.260
done that already, thank you. Please consider sharing the show with a friend or family member
00:57:20.860
who you think would get something out of it. As always, thank you for the continued support. Until
00:57:24.260
next time, it's Brett McKay, reminding you to not only listen to the podcast, but to put what you've heard into action.