On C2C: Will AI be the end of critical thinking?
Summary
In this episode, we're joined by cognitive scientist and lecturer at Dalhousie University and contributor to C2C Journal, Christopher Snook, to talk about the dangers of artificial intelligence (AI) in the real world.
Transcript
00:00:00.000
I have not been as fearful of artificial intelligence as some people have, because I don't think
00:00:13.620
that human intelligence has often served as well, but that's a bit of a glib joke to start
00:00:18.080
off what is a serious discussion, which is what AI is doing to discourse and to thought.
00:00:24.260
Now, we haven't talked a lot about AI on this show.
00:00:27.280
I've kind of been waiting for the right angle and the right opportunity, and I should say
00:00:31.560
I've been one of these people that has sort of enjoyed the novelty of it.
00:00:34.900
ChatGPT came out and you got the ability to just have a quick conversation with this
00:00:40.760
thing and have it give you some response to a question. And there's a program that I've
00:00:46.320
had some fun with called MidJourney, which will create AI-generated images from whatever you ask for.
00:00:55.340
The one that I did, I won't show you because I wasn't thrilled with it, but I asked for
00:00:58.620
like childhood photos of Fidel Castro pushing young Justin Trudeau on a swing, but the AI
00:01:04.560
was getting Fidel Castro and Justin Trudeau's faces mixed up, which maybe makes it smarter than we think.
00:01:10.380
And then I also had some fun this morning, and I asked to get like some photographs of
00:01:14.760
Chrystia Freeland driving, so maybe we can throw those up.
00:01:21.500
I thought that speed demon, Chrystia Freeland, fresh off the heels of getting her ticket, deserved these.
00:01:27.760
That's her basically road racing down some Alberta highway.
00:01:31.620
I like the one on the bottom right myself, although it looks a little terrifying.
00:01:37.780
You can see in the back right there, it looks to be Centre Block that she's just like leaving behind.
00:01:46.080
She's really flying there so much that she needs the space helmet.
00:01:49.860
She's putting on so many miles and going so fast, she has ascended off the ground.
00:01:56.420
But for all the fun that AI offers, and yes, there is some, it also has very serious implications.
00:02:03.620
And those implications we have not really fully explored because despite the fact that this
00:02:08.320
technology has been in development for many, many years, it really seems it's only been
00:02:12.940
in the last year that people have started to grapple with the real-world implications.
00:02:18.500
And, you know, we see this in academia, where universities, which have had to focus on detecting
00:02:23.940
plagiarism, now have this new problem: did students just create something original
00:02:29.380
by entering a few prompts for their essay assignments into ChatGPT or whatnot?
00:02:36.180
There's a great piece in C2C Journal by Christopher Snook about this called AI,
00:02:41.280
The Destruction of Thought and the End of the Humanities.
00:02:45.000
He is a lecturer with Dalhousie University and a contributor to C2C Journal.
00:02:49.800
And he joins us now, not an AI-generated version, but the man himself.
00:02:59.740
So let's start first off with where your issue is with this.
00:03:04.060
Why are you concerned about AI in the context here?
00:03:07.600
Yeah, I can, I suppose I can answer that in a fairly simple way.
00:03:11.280
As you've already indicated, there's a great deal of joy maybe to be had with playing with these tools.
00:03:18.800
But at the simplest level, I suppose maybe I could say two things.
00:03:22.940
One would be that AI introduces, I'm a humanities teacher, so AI-generated content introduces
00:03:29.520
into the university and into students' lives very easy possibilities of escaping from a certain
00:03:37.240
kind of reflection that may be essential to their development within the context of the humanities.
00:03:43.040
But secondly, I think I have a pretty significant concern that AI is actually indicative in many
00:03:48.320
respects of a much longer trend in humanities education in Canada that has fairly uncritically
00:03:55.820
assimilated new technological developments without reflecting on their consequences for pedagogy.
00:04:06.760
And, you know, one thing that I always recall, even from my own time in university, is how
00:04:14.060
demanding essays were. I would do better at them now, but they were very challenging, because you can't really cheat
00:04:17.900
your way through an essay unless you're actually cheating and plagiarizing and whatnot.
00:04:25.400
You can't Google the answer to the question when you basically have to show your work.
00:04:31.340
And certainly in an academic context, AI has huge implications for that, because all of
00:04:37.040
a sudden someone else could do the thinking for you.
00:04:39.660
I could just give this machine a bunch of different data points and say, formulate an argument for me.
00:04:45.920
And that's something, I mean, I've talked to professors who have already been complaining
00:04:51.100
about the decline in critical thinking in universities.
00:04:53.440
And now we've added this other tool, which maybe can be used for good, but also can further
00:04:58.920
erode people having to come up with these skills on their own.
00:05:04.200
Maybe if I kind of talk about some of the points
00:05:08.000
in the article, that may be helpful for at least giving people a sense of where my concern comes from.
00:05:13.480
So my concern really grew out of two things that I saw in the university last year.
00:05:17.640
So the first was, I mean, a remarkable amount of energy and anxiety around the appearance of ChatGPT:
00:05:25.020
sort of large language models that can produce texts fairly competently, increasingly competently,
00:05:32.080
for students with very, very little to no work on their side.
00:05:36.020
So there's a huge amount of anxiety, as you pointed to, Andrew, earlier in your introduction.
00:05:42.840
It's different; it's not even plagiarism in any recognizable sense.
00:05:46.120
So it's just allowing AI to generate texts from the information it's gathered
00:05:51.980
from the internet, through its chatbot interface.
00:05:54.840
So there was a huge conversation about this in the university.
00:05:57.880
And what I noted was that primarily that conversation was focused on questions of use.
00:06:02.200
And so I spend some time teaching engineers, though I teach humanities to engineers, and they...
00:06:09.180
I was going to say, that seems like a very difficult challenge for you.
00:06:12.380
It's a hard sell, it's a hard sell, but they're very patient.
00:06:14.840
And they tolerate this sort of required course on, effectively on the history of technologies.
00:06:20.080
And one of the key things that I've been thinking about since teaching this course is
00:06:24.440
what Neil Postman simply observes, which is that the introduction of every new technology changes everything.
00:06:31.620
But he sort of coined the idea, or helped kind of articulate the idea that every technology
00:06:37.220
shifts the world ecologically, in much the same way that an ecosystem is changed if a new species is introduced into it.
00:06:44.900
So it's not just that we have all of a sudden AI, but rather that the whole world shifts
00:06:49.280
around the availability of these new technologies.
00:06:52.180
And embedded in these technologies are certain assumptions about what it is to be human.
00:06:56.940
And so it's a bit of a rambling response to your previous comment.
00:07:01.800
But what I noticed in the university over the last six or seven months is that the conversation has been divided.
00:07:10.240
There have been some people who are sort of diametrically opposed to the appearance of AI.
00:07:17.780
I kind of tend in that direction, certainly for the humanities.
00:07:21.100
Others who are much more supportive of the use of AI in various ways to facilitate writing.
00:07:26.980
But regardless of where one stands on the use of AI, I've noticed that very few
00:07:32.360
people are asking deeper questions, such as what kind of world does AI produce and what
00:07:37.420
kind of worldview or what sort of assumptions are built into the technology?
00:07:42.180
And it's there that I think universities need to really be careful about the implementation
00:07:46.000
of AI, partly because I think AI is actually a bit apocalyptic; that is, it kind of
00:07:52.900
reveals something about the nature of higher education in Canada that's been developing for some time.
00:07:57.740
And we could talk about that sort of narrowing of viewpoint diversity in different aspects of university life.
00:08:03.560
But also, the other thing that I think has been missed is that AI-generated text
00:08:11.000
pushes against certain proclaimed moral positions that the university has adopted.
00:08:19.620
One aspect that springs to mind, and you address it in the piece, in one section anyway, is bias.
00:08:27.660
And, you know, facts, in theory, are neutral, and they do not have a political persuasion.
00:08:33.040
It's the assembly of facts and the composition of various facts that you can use to sort of
00:08:38.140
demonstrate something that is a bit more biased.
00:08:40.740
And one thing we've seen in AI is how it's providing, it's doing the thinking for you in theory.
00:08:47.600
But the problem with that, among others, is that it is producing a biased outcome.
00:08:55.520
I mean, when I was first playing around with ChatGPT, I had a debate with the machine about what a woman is.
00:09:00.060
So the joke was on me.
00:09:02.540
And it was interesting seeing this machine twist itself into all of these logical knots.
00:09:09.400
But it was actually quite terrifying how it started giving me the talking points I would
00:09:13.980
expect if I were having this conversation with some university diversity administrator.
00:09:17.980
And it started telling me about inclusivity and tolerance and how women can come in all forms.
00:09:22.720
And there is something there in which AI is basically telling people that there is one
00:09:29.360
way to construct a thought when it does this, that you aren't actually able to assemble facts for yourself.
00:09:39.900
And certainly the studies have varied and in some cases disagree a little bit with one
00:09:44.140
another about where the biases are found in AI technologies.
00:09:48.180
Though there have been some studies that have tried to argue that,
00:09:51.980
because some of the early scraping of the internet focused primarily on Reddit sites,
00:09:57.760
there was a kind of conservative or male bias somehow in the technology.
00:10:03.700
But it seems fairly clear to me now that the technology is biased,
00:10:08.480
I think, in the other direction, in terms of the kinds of sources that it's recycling when it generates responses.
00:10:18.120
So, and this is one of the things, maybe a way of thinking about AI in a
00:10:22.200
broader context, or sort of moving back from the technology to think about what it has to tell us.
00:10:27.740
I began the article in C2C really just by reflecting on the fact that
00:10:31.760
a kind of formulaic response to the work of pedagogy has become characteristic of universities today.
00:10:41.200
And it's in part indicated through the demand that applicants for university positions
00:10:46.220
complete diversity, equity, and inclusion statements as part of the application process.
00:10:52.280
And kind of famously, or, if you follow these sorts of stories, infamously, depending on what
00:10:58.500
one thinks about all these things, a professor in the United States asked ChatGPT to produce one.
00:11:05.860
And he was astounded to see, just as you've described, the speed with which ChatGPT was
00:11:12.060
able to reproduce all of the talking points and all of the assumptions of a fairly kind of
00:11:17.720
middle-of-the-road Canadian higher education position on issues of diversity, inclusion, and equity.
00:11:25.880
To my mind, what that revealed was that we're sort of beginning to operate in the university
00:11:30.140
in a world that is, at the very least, fairly formulaic in its expectations of whatever diversity,
00:11:38.320
equity, and inclusion may be about.
00:11:41.920
So it was from there, really, in the article that I wanted to try to see or to explore how
00:11:48.440
is it that the technologies that are available to us now in the university and that have been
00:11:52.740
slowly growing in their implementation in the university over the last 15 or 20 years
00:11:59.760
may actually be accelerated by the advent of AI, but may also be, in a certain sense,
00:12:08.340
pointing towards AI, that is to say, pointing towards a world in which a kind of formulaic
00:12:13.980
regurgitation of information becomes a kind of normative expectation of students, even before the advent of AI.
00:12:24.120
So just to sort of maybe make a connection with what you found when you poked and prodded
00:12:27.800
ChatGPT, that it tends to kind of produce fairly predictable results relative to certain politically loaded questions.
00:12:34.720
Well, there is also this. I mean, the most cogent defense I hear of AI, I guess, is that
00:12:42.860
AI is little more than a mirror to the existing world.
00:12:46.600
I mean, AI is not really formulating any materials that it's not drawing from the trove of existing human content.
00:12:54.400
Now, obviously, individual inputs can be manipulated, and we also have terms of use that govern it.
00:13:00.480
I'm trying to bring us away from the use discussion of this that you were talking about earlier,
00:13:04.560
but I guess in that sense, is this just reflecting an existing problem, or is this making it worse?
00:13:14.520
I would say there might be two things to say about that.
00:13:17.220
On the one hand, from my perspective, and this was maybe my concern with the conversation so far
00:13:22.900
in higher education in Canada about AI, its preoccupation with questions of use has really prevented
00:13:28.620
people from asking a much slower and more difficult question, which is to say, is this actually a
00:13:34.500
benefit, or is it simply a reflection of the world we're in, or is it making things worse?
00:13:40.640
So I think that that deeper question about the kind of ecosystem consequences, or cultural consequences, has largely been missed.
00:13:50.780
So that is to say, AI at some level is a kind of metaphor, in much the same way one might think
00:13:57.300
about COVID. We can think about it as an illness, but we can also think about the
00:14:02.840
COVID response, at least, as a bit of a metaphor for our contemporary cultural moment.
00:14:10.160
But I would say from the perspective of pedagogy, AI raises some very deep questions that, to
00:14:15.700
my mind, intensify problems that were already present.
00:14:18.620
So it's not so much that it simply introduces a newness that's radical, but that it intensifies certain existing problems.
00:14:26.800
So one of those problems, I think, is connected to the use of devices generally for humanities education.
00:14:36.680
So one of the things I think many of us have experienced is the extent to which screens and
00:14:41.880
screen reading and iPhones or cell phones, the extent to which they actually produce in us
00:14:46.500
kind of habits of scanning, a kind of hyper-attention, what one scholar calls forms of hyper-attention,
00:14:51.140
not focused attention or contemplative reflection, but a kind of hyper-attention that actually
00:14:57.100
tends to kind of lead us towards a certain kind of rashness in our decision-making.
00:15:02.260
So that's one deep and profound concern I have, especially when institutions seem to be dominated
00:15:07.380
by certain sets of political commitments that ought themselves to be subject to serious scrutiny.
00:15:14.460
So if we're in an environment where there's certain assumptions about what political positions
00:15:18.580
are normative, it needs to be the case that those can be thought about deeply and reflectively.
00:15:23.920
And if we're using technologies that limit that capacity, then we're in a little bit of trouble.
00:15:28.660
But the other one, for me, from my perspective as a teacher, is this. I'm affiliated
00:15:33.260
with the classics department, and I've spent a lot of time in the last
00:15:38.000
years teaching Augustine, a sort of famous foundational voice for the Western world.
00:15:43.340
And Augustine is one of many thinkers who highlights the fundamental role of memory in the
00:15:49.140
constitution of our personalities, the sort of crucial role that memory plays.
00:15:53.040
And it's kind of essential to technologies like ChatGPT that we offload or offshore the
00:15:59.580
faculty of memory to the technological device, right?
00:16:05.140
So I don't struggle with Augustine or Dante or Homer or any of those things.
00:16:09.680
I let ChatGPT do the struggling, in a certain sense.
00:16:12.660
I mean, it's not really struggling, but I let it do the amalgamation of opinion and
00:16:16.440
information, and I'm left passive in the process.
00:16:20.100
So in that sense, I think the technologies actually inhibit the kind of interior dialogue
00:16:25.320
that's fundamental to education, but that's also fundamental to being a free person in the world.
00:16:30.800
Hannah Arendt, I think, points this up very, very powerfully in her reflections on totalitarianism.
00:16:35.740
If we can't have a dialogue with ourselves, if we're pulled out of ourselves endlessly
00:16:40.200
and offshore even our memory, we lose the ability to actually be free agents in the world.
00:16:47.180
So these are some of the things that I'm very concerned about at the level of pedagogy,
00:16:50.740
which is why I tend to a pretty puritanical relationship, I suppose, to AI when it comes to teaching.
00:16:58.040
I recognize AI has different applications in different contexts.
00:17:02.260
It's funny, at the risk of oversimplifying it, I think of, you know, a movie that I've
00:17:09.240
been meaning to watch. I could find out what happens in that movie in about 60 seconds by just reading a plot summary.
00:17:16.720
I watch the movie because there is something in that process.
00:17:19.800
You feel, you see, you learn, you get insights.
00:17:22.800
It's the same as why, you know, despite the fact that I may not have taken this advice when
00:17:26.120
I was in high school, reading the Coles Notes of something is not the same as reading the book itself.
00:17:32.120
I mean, I could ask ChatGPT to, you know, give me some bullet points that I can bring
00:17:36.240
up in tutorial about, you know, Plato's cave or something.
00:17:42.100
And I also wonder, I mean, to appeal to your department, the classics: if you were to input
00:17:46.660
into ChatGPT the most beautiful works of classical literature that you'd ever seen
00:17:52.940
and said, create something like this, could it do that in your view?
00:17:57.240
Could it create the beauty that we have seen from all of these people from thousands of years ago?
00:18:04.780
That's a very hotly debated topic, as you may know.
00:18:06.980
Of course, in the world of visual arts, someone was recently awarded a prize, right,
00:18:11.220
for an artificially produced image, and there are, of course, all kinds of very
00:18:19.400
deep ethical questions around AI and its accumulation of information and how that happens.
00:18:25.520
But, you know, from my perspective, for me, the answer to that question was really given by Nick Cave.
00:18:31.760
Nick Cave, the Australian singer-songwriter, was asked this question.
00:18:36.980
A fan sent him a poem that ChatGPT had written when he asked ChatGPT to write a poem or a song in the style of Nick Cave.
00:18:46.480
And Nick Cave's response was to say that even if it were a good song, and he refused
00:18:52.600
to concede that it was even a good imitation, the problem is that ChatGPT, artificial
00:18:59.060
intelligence, has been nowhere and suffered nothing.
00:19:02.020
And to be human in the world at all, as someone like Jordan Peterson is constantly reminding
00:19:07.720
us, is to suffer and, out of that suffering, to sort of produce meaning in the world.
00:19:14.180
And I've been fairly persuaded by Nick Cave that no matter how close an approximation one
00:19:19.780
might be able to produce artificially, the fact that the technology has itself been nowhere
00:19:25.440
and suffered nothing means that that material can have very little consequence for me as
00:19:30.680
someone who lives in the world with all of its fragility.
00:19:33.300
So, yeah, so I guess that would be my answer, which, I mean, is maybe not the best answer.
00:19:41.040
And now I'm like geeking out on this topic myself.
00:19:43.860
So I think we'll have to have you back on in another show.
00:19:46.160
But, you know, I remember when I did tutorials in various classes in university, the one thing
00:19:52.060
that was always so critical when you were understanding a work was to understand the author and the context.
00:19:59.940
And even if the author is some professor who's still alive, understanding how that professor
00:20:04.560
came about. You read, for example, a dissertation and you say, oh, well, now you see where this person is coming from.
00:20:11.960
And with ChatGPT, that context is eroded because there is no human context or it's an amalgamation
00:20:19.460
of, you know, 150 human contexts that you don't actually know about and can't see.
00:20:23.680
So I think that's a less elegant way of describing what you've shared from Nick Cave there.
00:20:30.120
The piece in C2C Journal is AI, The Destruction of Thought and the End of the Humanities by Christopher Snook.
00:20:36.700
And they also have another part of this series written by Gleb Lysik, who we had on the show
00:20:41.080
a couple of weeks ago about something else entirely.
00:20:45.960
Good to talk to a real human in this day and age.
00:20:50.600
Thanks for listening to The Andrew Lawton Show.
00:20:53.180
Support the program by donating to True North at www.tnc.news.