00:01:00.000Welcome to Canada's most irreverent talk show.
00:01:17.180This is The Andrew Lawton Show, brought to you by True North.
00:01:30.000And it is great to have you aboard. I am just going to give a caveat at the beginning of the
00:01:42.160program here that I may not make it through to the end. There has been thunder
00:01:48.440and lightning around my home all day. The power went out once today, now, that was this morning, and it hasn't
00:01:53.100happened again just yet, so I will knock on wood and hope that we can make it through the show.
00:01:59.020And one thing that I will point out here is that we are continuing to see the decline of free speech all over the Western world.
00:02:11.160And certainly Canada is not immune to this problem.
00:02:15.140And we're going to be talking about this in the context of Jordan Peterson and his trial, essentially, by the Ontario College of Psychologists, which has subjected him, or attempted to subject him, to mandatory re-education for the alleged offense of tweeting any number of controversial things, one of which was retweeting Pierre Poilievre.
00:08:15.600Like the fact that, you know, all the media, including you, are talking about this today is to sort of send a warning shot to other people in various professions that they better watch what they say and make sure it's politically correct or else they might face discipline.
00:08:31.080We were talking about this a little bit yesterday in the context of medical associations and how physicians colleges have had a very significant crackdown, I think, on their individual members' right to speak freely on COVID issues.
00:08:46.960And whether we're talking about law societies, medical colleges, the psychologists' college for Jordan Peterson, I mean, anything.
00:08:54.460It could be the regulatory colleges for massage therapists, for psychotherapists.
00:08:59.140These bodies are all tremendously powerful because you don't really have a right to operate in your chosen field without being a member of them. And I'm just wondering in general, if this is an area of law that has been fairly well established in which these organizations have this much power, or if this is kind of a recent advent in judicial reviews.
00:09:19.700Yeah, so it's come up a lot more lately, and part of that has to do with the pandemic.
00:09:26.160Basically, there's always been this idea that regulators can
00:09:33.960correct speech, that they can take care of, sort of, the reputation of the profession in the
00:09:39.440eyes of the public. And, you know, we at the Canadian Constitution Foundation don't disagree
00:09:44.740with that. The regulators do have some role, established in law, to prevent harm to
00:09:53.140people who are, for example, patients of Jordan Peterson. We don't take issue with that. But
00:09:58.340it seems to have come up a lot lately that just things people say on Twitter or
00:10:03.540whatever are being policed. For example, it came up a lot with nurses during
00:10:09.460the pandemic, you know, nurses that opposed vaccine mandates, maybe because they've been
00:10:13.760exposed to vaccine mandates with the flu shot for decades and don't like them, or nurses that
00:10:18.840criticized mask mandates. And so it's not a completely new area of law, but it's coming up
00:10:25.800more and more. Well, and I should point out too, that we're not even talking about, in some cases,
00:10:32.040tweets that are, I would argue, outside of what a psychologist who's not Jordan Peterson
00:10:37.800could or should be weighing in on. I mean, one of the tweets they brought up was him commenting on
00:10:42.580the transgender actor Elliot Page, formerly Ellen Page. And, you know, the transgender issue is
00:10:48.920entirely fraught within psychology. One of the big debates we have in medicine right now is how
00:10:53.820to treat especially children that start identifying by a different gender than what their biological
00:10:59.440sex would dictate. So the idea that a psychologist cannot have heterodox opinions on the transgender
00:11:04.860issue and express them is, I think, insanely, insanely offensive to what professionals should
00:11:11.840be encouraged to do, which is debate live issues and hot issues in their field.
00:11:17.580Yeah, that's right. And it's probably even worse than people realize in this particular case,
00:11:22.340because what Jordan Peterson said about the actor Elliot Page, formerly Ellen Page, was
00:11:30.400that he just used their original pronoun, the one they had used for decades, because in Peterson's view,
00:11:37.580and, you know, I don't necessarily agree with him, but in his view, it's bad for the
00:11:43.400patient to sort of allow them to choose their pronoun, because that's sort
00:11:50.300of engaging in a delusion, and they're better off if they don't do that. So, you know, you can be on
00:11:55.820one side of that issue or the other, but if anybody should be able to talk about that,
00:12:00.020it's psychologists, right? Yeah. And I'm curious where you think this is going to go,
00:12:05.160because I know Jordan Peterson has been fairly unrepentant on this. And I know that the CCF is
00:12:10.000not representing him. You're intervening. So you don't, I believe, get to appeal on his behalf.
00:12:15.380But is your sense that this will go to an appeal and that if so, it could actually have a strong
00:12:20.760basis of success? Yeah, I think so. I think it's pretty clear that Jordan Peterson wants
00:12:28.020to appeal. So he'll take this to the Court of Appeal. He'll ask them if it's something that
00:12:35.400they should hear. And if they agree to hear it, then you'll get a three-member panel of the Court
00:12:42.780of Appeal most likely deciding whether this divisional court, which is what put out the
00:12:49.460judicial review decision today, whether they got that right. Yeah. And I should say, Jordan Peterson
00:12:54.460tweeted about this, and he vowed to make every aspect of this public. And we will see what
00:13:00.840happens when utter transparency is the rule. And I think if the college gets its way and Jordan
00:13:06.380Peterson has to be subjected to this, I think they may find themselves on the losing end of that,
00:13:11.380much like when Ezra Levant many years ago filmed the entire proceedings before the Alberta Human
00:13:17.560Rights Tribunal. So I think this is very much something that people will want to see. Josh DeHaas, counsel
00:13:23.480with the Canadian Constitution Foundation. Good to have you on. Thanks for doing it.
00:13:28.800All right. Thanks again, Josh. And I should say on Jordan Peterson, look, I'm with Josh on this. I
00:13:33.800believe there is a role for regulatory colleges because the only thing worse than regulatory
00:13:39.680colleges deciding who can or can't be a member is some government bureaucrat doing it in which you
00:13:45.340know that the rules are going to be even more ridiculous. But the point of this is for these
00:13:50.180organizations to basically say, we as psychologists know what makes a good psychologist and we know
00:13:57.200what makes a competent psychologist. They are not meant to be authorities governing the entirety
00:14:02.600of your lives. But we've seen that become the case. Certainly lawyers and law societies have
00:14:07.920no doubt familiarized themselves with this because of all these character laws. I mean,
00:14:12.060I mentioned Ezra a moment ago. Ezra was once threatened, and I don't want to get the facts
00:14:16.980wrong, I think it was, I don't know, seven, eight years ago, maybe it was more recently,
00:14:21.140with the suspension of his law license in Alberta for speaking ill of another lawyer. Now, I think
00:14:26.840most people in this country could probably speak ill of a great many lawyers, but he was going to
00:14:31.560be charged or prosecuted in the sense, in the broad sense, not the literal criminal sense,
00:14:37.500by this law society. And he ended up just giving up his right to practice law, which is quite sad
00:14:42.720that especially with how much Ezra has to spend on lawyers, it's quite sad that he ended up giving
00:14:47.440up that, but he really had to. And I think time and time again, we're seeing doctors that are
00:14:54.620wanting to speak up about vaccine mandates and harms of lockdown that are being faced with
00:15:00.340punishment from their college. We now see the College of Psychologists cracking down
00:15:04.100on Jordan Peterson for being Jordan Peterson. That's the point here is that I could find you
00:15:10.140some insane doctors, psychologists, lawyers that are on one side of the political spectrum that
00:15:18.440will be completely untouchable by the regulatory colleges. Where's Nili Kaplan-Myrth's witch hunt
00:15:25.980by the college for all of the nonsense she spouts about why we all need to wear like, you know,
00:15:30.66017 masks if you're in the bathroom alone or something like that. All of this is exactly why
00:15:36.060these organizations should focus on their core mandate, which is, are you competent and are you
00:15:40.820engaging in something that is fitting with this profession? When they start legislating your
00:15:45.860private tweets, it is this creep of censorship into all of these different areas of what is
00:15:51.320supposed to be civil society and what is supposed to be our ability as individuals to engage in open
00:15:58.040debate because it's not politically neutral. They try to say, well, the issue is with the tone.
00:16:04.000It's like all the people that say the thing they really didn't like about Donald Trump was the tone of his speech as opposed to the content.
00:16:10.960No, they just don't like the guy and they latch on to whatever they can latch on to, which is what these censors are doing with Jordan Peterson.
00:16:17.660They are latching on to the tone because, oh, maybe he made a little barb.
00:16:21.700I mean, one of the comments was where someone was complaining about world overpopulation and he said, you're free to leave.
00:16:27.800And now you've got all these people drawing the worst faith interpretation of that that they possibly can saying, oh, is this a psychologist counseling suicide? Is this a psychologist making light of suicide? No, it's not.
00:16:43.100It's a guy responding to a flippant tweet with a flippant and, I'll say, entirely clever retort.
00:16:51.300When he talks about Elliot Page, who was a woman for the entirety of Elliot Page's career
00:16:56.520until one day Elliot Page decided Elliot Page was a man and we are all supposed to go along
00:17:01.240with it, when he made a comment about how a woman got her breasts removed, which is
00:17:06.420what his take on the Elliot Page transition was, that is a comment that you may agree
00:17:11.860with or disagree with, you may find to be none of his business, or you may find it to be entirely
00:17:16.200legitimate discourse. But the point is that is not for a regulatory college to decide.
00:17:22.640And the court has done what courts in Canada do, which is exact a level of deference to censors,
00:17:28.920a level of deference to what is increasingly an authoritarian attitude and approach to speech,
00:17:35.220in which Jordan Peterson, I mean, you know what, he said, I think it was yesterday on the eve of
00:17:40.200this decision. Good luck to the college for wanting to keep up its prosecution of him. And in all
00:17:46.300honesty, you can't fight a guy like that who has the means, the temerity and the desire to fight
00:17:52.820back and expect that it is going to end well for you. So his response there, which we put up on the
00:17:58.840screen a few moments ago, is that he is going to make every aspect of this public. He also said
00:18:04.140yesterday that he stands by his words and would change nothing. So this is not a guy that is
00:18:09.500giving that little glimpse of weakness that they're going to pounce on and say, well, you admit
00:18:13.840that it was wrong. No, he's saying I did absolutely nothing wrong and I'm prepared to have it out,
00:18:18.700not in one of your stupid little star chambers, but in open court, so to speak. And, you know,
00:18:23.940I talk about Mark Steyn a lot, who has been a lion of free speech in Canada for many, many years.
00:18:30.300And when he was going through his free speech trial in British Columbia about, oh, I don't know,
00:18:35.220nine years ago or so, it was so paramount to his lawyer, Julian Porter, and to him that it be a
00:18:41.920public hearing, that it be something that is not dealt with in chambers, but it is something that
00:18:46.960is dealt with in open court, because it was so important for people to see how this process is
00:18:52.160weaponized against people. And that is exactly the case of Jordan Peterson. So I will be tuning in
00:18:58.100with great enthusiasm to every single step along the way, and I hope you will be as well.
00:19:05.220Speaking of compelled speech, I was talking yesterday about Catherine McKenna. She is the former environment minister in this country. I always skip over infrastructure. She was also the infrastructure minister in this country. So if a highway is crumbling, it might have been her fault.
00:19:20.240But the thing about Catherine McKenna is that she had basically blamed conservatives and people who opposed the carbon tax for the wildfires in the Northwest Territory and in British Columbia.
00:19:34.520Northwest Territories, sorry, it's plural, not singular. But she blamed "arsonists," by which she meant those who oppose the carbon tax.
00:19:41.400Now, Catherine McKenna has also been rather unrepentant on this file.
00:19:45.640she was going on and on about this. And she posted, I should have given Sean this picture
00:19:50.740because it was a fun one. She did a tweet earlier, which looks like it's the kind of thing that would
00:19:54.760have come from like a satire account in which she said that she made a pinky promise when she was
00:19:59.860environment minister with children that she was going to save the planet or something. And then
00:20:04.600she had like all these pictures of her doing pinky promises with children, which to be honest,
00:20:09.500you know, if more politicians did pinky promises, maybe the country would be in a bit better of a
00:20:14.540place than it is now. But Catherine McKenna has doubled, tripled, quadrupled, quintupled,
00:20:18.920sextupled, heptupled, octupled, nonupled, and decupled down over the course of the last couple
00:20:26.180of days. And her most recent tweet, which I had to bring up today, is that she wants re-education
00:20:32.100camp for conservative politicians. She says, we need a mandatory climate science lesson for
00:20:39.240Conservative politicians and premiers, as well as cost to the lives and livelihoods of Canadians
00:20:45.140from climate change and the economics of the clean transition. Otherwise, Canadians pay the price.
00:20:50.900It's absurd, but that's where we're at. Now, I couldn't resist taking the obvious cheap shot
00:20:55.240there. And I said that perhaps we could just start with economics lessons for Liberal members of
00:21:00.540Parliament. And if they don't pass, they don't get to take their seats, which means we'd have a very
00:21:05.040empty House of Commons. So I'll give you your climate change education for conservatives if
00:21:10.760you give me basic economics, free market capitalist economics for liberals, and we'll see who comes
00:21:16.960out better than that. But basically, we are seeing the end of debate. And when people talk about
00:21:23.500education in this context and how you need to be mandatorily educated, it's not about having an
00:21:30.440education. It's about having the wherewithal to say what you want to say in the eyes of the
00:21:36.280censors because censorship isn't enough. They have to compel certain speech and that is exactly the
00:21:41.360direction that things are headed and people like Jordan Peterson should push back and if Catherine
00:21:46.440McKenna ever gets her wish we should be pushing back on that as well. One thing I will say here
00:21:51.580is that all of this is evidence of why I have not been as fearful of artificial intelligence as
00:21:58.300some people have, because I don't think that human intelligence has often served us well.
00:22:01.760But that's a bit of a glib joke to start off what is a serious discussion, which is what
00:22:06.220AI is doing to discourse and to thought.
00:22:10.320Now, we haven't talked a lot about AI on this show.
00:22:13.460I've kind of been waiting for the right angle and the right opportunity.
00:22:16.620And I should say, I've been one of these people that has sort of enjoyed the novelty of it.
00:22:20.940When ChatGPT has come up and you get the ability to just have a quick conversation with this
00:22:26.820thing and have it give you some response to a question. And there's a program that I've had
00:22:32.640some fun with called Midjourney, which will create AI-generated images. And you can give it
00:22:39.020a whole bunch of prompts. I've had a lot of fun with this one. The one that I did, I won't show
00:22:43.000you because I wasn't thrilled with it, but I asked for like childhood photos of Fidel Castro pushing
00:22:47.740young Justin Trudeau on a swing. But the AI was getting Fidel Castro and Justin Trudeau's faces
00:22:53.000mixed up, which maybe makes it smarter than humans. Who knows? And then I also had some fun
00:22:58.100this morning and I asked to get like some photographs of Chrystia Freeland driving.
00:23:02.380So maybe we can throw those up. Yeah, there we go. These are the samples it gave me. I thought
00:23:07.940that speed demon Chrystia Freeland, fresh off the heels of getting her ticket for going however many
00:23:12.620kilometers over. That's her basically road racing down some Alberta highway. I like the one on the
00:23:18.900bottom right myself, although it looks a little terrifying. That one, it looks to be in Ottawa
00:23:23.500though. You can see in the back right there, it looks to be Centre Block that she's just like
00:23:28.360leaving in the dust there. The one in the top right is good. It's a little aspirational. She's
00:23:32.540really flying there so much that she needs the space helmet. She's putting on so many miles and
00:23:37.360going so fast. She has ascended off the ground. So take from that what you will. But for all the
00:23:43.220fun that AI offers. And yes, there is some. It also has very serious implications. And those
00:23:50.240implications we have not really fully explored because despite the fact that this technology
00:23:54.860has been in development for many, many years, it really seems it's only been in the last year
00:23:59.980that people have started to grapple with the real world implications of it. And, you know,
00:24:04.960we see this in academia where universities which have had to focus on detecting plagiarism now
00:24:10.960have this new problem, which is, did students just create something original by entering a few
00:24:16.520prompts for their essay assignments into ChatGPT or whatnot? There's a great piece in C2C
00:24:24.380Journal by Christopher Snook about this called AI, the Destruction of Thought and the End of
00:24:30.080the Humanities. He is a lecturer with Dalhousie University and a contributor to C2C Journal. And
00:24:36.080He joins us now, not an AI-generated version, but the man himself.
00:25:04.140But at the simplest level, I suppose maybe I could say two things.
00:25:09.020One would be that AI introduces, I'm a humanities teacher, so AI-generated content introduces into the university and into students' lives very easy possibilities of escaping from a certain kind of reflection that may be essential to their development within the context of the humanities historically.
00:25:28.480But secondly, I think I have a pretty significant concern that AI is actually indicative in many respects of a much longer trend in humanities education in Canada that has fairly uncritically assimilated new technological developments without reflecting on their consequences for pedagogy and education.
00:25:49.060That's quite an interesting approach to this. And, you know, one thing I always recall, even
00:25:54.960from my own time in university, is that essays were very challenging. I would do better at
00:26:00.960them now, but they were very challenging because you can't really cheat your way through an
00:26:05.400essay, unless you're actually cheating and plagiarizing and whatnot, because it's not just
00:26:10.220about knowing the facts. You can't Google the answer to the question when you basically have
00:26:14.180to show your work and show how you arrived at something. And certainly in an academic context,
00:26:19.740AI has huge implications for that because all of a sudden someone else could do the thinking with
00:26:25.540you. I could just give this machine a bunch of different data points and say, formulate an
00:26:31.120argument for me. And that's something, I mean, I've talked to professors who have already been
00:26:36.720complaining about the decline in critical thinking in universities. And now we've added this other
00:26:41.240tool which maybe can be used for good but also can further erode people having to come up with
00:26:47.720these skills on their own. Yeah, what I've tried to do in the article, I mean, maybe if I kind of
00:26:53.140talk about some of the points in the article, that may be helpful for at least giving
00:26:56.840people a sense of where my concern lies. So my concern really grew out of two things that I saw
00:27:02.340in the university last year. The first was, I mean, a remarkable amount of energy and anxiety
00:27:07.160around the appearance of things like ChatGPT, right? Sort of large language models that can
00:27:13.960produce texts fairly competently, increasingly competently, for students with very little
00:27:20.600to no work on their side. So there was a huge amount of anxiety, as you pointed to, Andrew,
00:27:25.320in your introduction to this conversation. It's different. It's not even
00:27:30.360plagiarism in any recognizable sense. It's just allowing AI to generate texts from the
00:27:36.440information it's gathered from the internet.
00:27:40.680So there's a huge conversation about this in the university, and what I noted was that primarily
00:27:45.800that conversation was focused on questions of use. And so I spend some time teaching engineers;
00:27:50.120I teach humanities to engineers, and they very patiently kind of tolerate this course.
00:27:55.560That seems like a very difficult challenge for you. It's a hard
00:27:59.560sell, but they're very patient, and they tolerate this sort of required course on, effectively,
00:28:04.840the history of technologies. And one of the key things that I've been thinking about since teaching
00:28:09.240this course is what Neil Postman simply observes, which is that the introduction of every new
00:28:14.360technology doesn't simply give a new tool to humans. He sort of coined, or
00:28:20.520helped articulate, the idea that every technology shifts the world ecologically, in much
00:28:26.360the same way that an ecosystem is changed if a new species is introduced, right? So it's not just that
00:28:31.880we all of a sudden have AI, but rather that the whole world shifts around the availability of
00:28:36.600these new technologies, and embedded in these technologies are certain assumptions about what
00:28:41.560it is to be human. And so, it's a bit of a rambling response to your previous comment, but what
00:28:48.120I noticed in the university over the last six or seven months is that the conversation has been
00:28:53.720almost exclusively focused on use. There have been some people who are sort of diametrically
00:29:00.200opposed to the appearance of AI in any form in the university, and I kind of tend in that direction,
00:29:05.480certainly for the humanities; others who are much more supportive of the use of AI in various ways
00:29:11.880to facilitate writing. But regardless of where one stands on the use of AI, I've noticed that
00:29:17.800very few people are asking deeper questions, such as, what kind of world does AI produce, and what
00:29:23.400kind of worldview, what sort of assumptions, are built into the technology? And it's there
00:29:28.680that I think universities need to really be careful about the implementation of AI, partly because
00:29:34.200I think AI is actually a bit apocalyptic, that is, it kind of reveals something about the
00:29:40.440nature of higher education in Canada that's been developing for years, and we could talk about that
00:29:44.600sort of narrowing of viewpoint diversity across different aspects of university
00:29:48.680life. But also, I think the other thing that has been missed is that
00:29:54.280AI-generated text pushes against certain proclaimed positions, moral positions that
00:30:02.720the university has adopted in the last, maybe in the last decade. One aspect that this springs to
00:30:08.360mind, and you address it in the piece, in one section anyway, notably, is the idea of bias.
00:30:13.440And, you know, facts, in theory, are neutral, and they do not have a political persuasion. It's the
00:30:19.440assembly of facts and the composition of various facts that you can use to sort of demonstrate
00:30:24.560something that is a bit more biased. And one thing we've seen in AI is how it's providing,
00:30:31.340it's doing the thinking for you in theory. But the problem with that, among others, is that
00:30:36.700it is producing a biased outcome. It's producing a biased response. I mean, once, when I was
00:30:42.880first playing around with ChatGPT, I had a debate with the machine, so the joke was on me, about what a woman
00:30:47.700is. And it was interesting seeing this machine twist itself into all of these logical knots
00:30:53.140about trying to answer this question. But it was actually quite terrifying how it started giving me
00:30:58.920the talking points I would expect if I were having this with some university diversity
00:31:03.100administrator. And it started telling me about inclusivity and tolerance and women can come in
00:31:07.980any forms. And there is something there in which AI is basically telling people that there is one
00:31:15.420way to construct a thought when it does this, that you aren't actually able to assemble facts
00:31:21.380into different worldviews. Yeah, I mean, I think that that's true. And certainly the studies have
00:31:27.220varied and in some cases disagree a little bit with one another about where the biases are found
00:31:32.920in AI technologies. Though some people have, there's been some studies that have tried to
00:31:36.600argue that there's, because some of the early scraping of the internet focused primarily on
00:31:42.640on reddit sites that there was a kind of conservative or male bias somehow in in in
00:31:47.680the uh in the technology but it seems to me fairly clear now that the technology seems to be biased
00:31:53.280fairly clearly i think in the other direction in terms of um uh the kinds of sources that
00:32:00.560it's recycling when it's when it's producing kind of mash-up texts um so i began the article and
00:32:05.520this is one of the things maybe a way of thinking about ai in a broader context or sort of moving
00:32:09.920back from the technology to think about what it has to say about universities, I began
00:32:14.400the article in C2C really just by reflecting on the fact that a kind of formulaic response
00:32:21.680to the work of pedagogy has become characteristic of universities generally in Canada, and it's
00:32:27.520in part indicated through the demand that applicants for university positions complete
00:32:33.440diversity, equity, and inclusion statements, diversity statements as part of their application packages.
00:32:39.920And kind of famously, or, if you follow these sorts of stories, infamously, depending on what one thinks about all these things, a professor in the United States asked ChatGPT to produce a diversity document just last year.
00:32:51.740And he was astounded to see, just as you've described, the speed with which ChatGPT was able to reproduce all of the talking points and all of the assumptions of a fairly kind of middle-of-the-road Canadian higher education position on issues of diversity, inclusion, and equity.
00:33:11.620To my mind, what that revealed was that we're sort of beginning to operate in the university
00:33:16.260in a world that is, at the very least, formulaic in its expectations of whatever diversity,
00:33:24.180inclusion, and equity may be about. So it was from there,
00:33:29.780really, in the article, that I wanted to try to explore how it is that the technologies
00:33:35.860that are available to us now in the university, and that have been slowly growing
00:33:40.020in their implementation over the last 15 or 20 years, may actually be
00:33:49.540both accelerated by the advent of AI but may also, in a certain sense, be pointing towards AI, that is
00:33:56.260to say, pointing towards a world in which a kind of formulaic regurgitation of information becomes
00:34:02.180a kind of normative expectation of students, even in the context of their degrees. So that's sort
00:35:05.920then this was maybe my concern with the conversation so far in higher education in
00:35:10.080canada about ai its preoccupation with questions of use has really prevented people from asking
00:35:15.920a much slower and more difficult question which is to say is this actually a benefit or is it is
00:35:22.240it uh simply a reflection of the world we're in or is it making things worse so i think that that
00:35:27.680deeper question about the um the kind of ecosystem consequences or cultural consequences of ai
00:35:33.680is not really um being asked um so that is to say ai at some level is a kind of metaphor in much the
00:35:42.320same way one might think about um covet as a kind of we can think about it as a as a as an illness
00:35:47.600but we can also think about covet response at least as a bit of a metaphor of our contemporary
00:35:52.320culture cultural moment um so there is that mirroring back but i would say from the perspective
00:35:57.520of pedagogy ai raises some some very deep questions that to my mind intensify problems
00:36:03.600that were already present so so it's not it's not so much that it simply introduces a newness
00:36:08.400that's radical but intensifies certain very particular problems so one of those problems i
00:36:14.000think is that is uh connected to the to the use of devices generally for humanities education
00:36:21.360in particular so um one of the things i think many of us have experienced is the extent to which
00:36:27.120screens and screen reading and iphones or cell phones at the extent to which they actually
00:36:31.840produce in us kind of habits of scanning a kind of hyper attention what one scholar calls forms
00:36:36.400of hyper attention not focused attention or contemplative reflection but a kind of hyper
00:36:42.080attention that actually tends to kind of lead us towards a certain kind of rashness in our
00:36:46.960decision making so that's one a deep and profound concern i have especially when institutions seem
00:36:52.640to be dominated by certain sets of political commitments that ought themselves to be subject
00:36:58.160to serious reflection and consideration right so if we're in an environment where there's certain
00:37:02.720assumptions about what political positions are normative it needs to be the case that those can
00:37:07.360be thought about deeply and reflectively and if we're using technologies that limit that capacity
00:37:12.560then we're in a little bit of danger but the other one for me from the perspective of a
00:37:17.040teacher is that i'm affiliated with the classics department and i
00:37:22.560spend a lot of time the last number of years teaching augustine a famous foundational
00:37:27.120voice for the western world and augustine is one of many thinkers who highlights the
00:37:33.440fundamental role of memory in the constitution of our personalities this sort of crucial role
00:37:37.760that memory plays and it's kind of essential to technologies like ChatGPT that we offload or
00:37:43.520offshore the faculty of memory to the technological device, right? It does the work for us, right? So
00:37:51.340I don't struggle with Augustine or Dante or Homer or any of those things. I let ChatGPT do the
00:37:57.480struggle in a certain sense. I mean, it's not really struggling, but I let it do the amalgamation
00:38:01.200of opinion-making and information, and I'm left passive in that response. So in that sense,
00:38:06.880I think the technologies actually inhibit the kind of interior dialogue that's fundamental
00:38:12.560to education, but that's also fundamental to being a free person in the world. Hannah Arendt,
00:38:17.460I think, points this up very, very powerfully in her reflections on totalitarianism. If we
00:38:22.140can't have a dialogue with ourselves, if we're pulled out of ourselves endlessly and offshore
00:38:28.020even our memory, we lose the ability to actually be free agents in the world. So these are some
00:38:34.100of the things that I'm very concerned about at the level of pedagogy, which is why I tend to
00:38:39.480a pretty puritanical, I suppose, relationship to AI, at least when it comes to humanities
00:38:43.760classrooms. I recognize AI has different applications in different contexts.
00:38:48.280It's funny, at the risk of oversimplifying it, I think of, you know, a movie that I've watched
00:38:53.340that, you know, say is two hours long. I could find out what happens in that movie in about 60
00:38:58.860seconds by just reading a plot synopsis on Wikipedia. But I don't do that. I watch the
00:39:03.340movie because there is something in that process. You feel, you see, you learn, you get insights.
00:39:08.740It's the same as why, you know, despite the fact that I may not have taken this advice when I was in high school, reading the Coles Notes of something is not the same as reading the thing itself.
00:39:18.180I mean, I could get ChatGPT to say, you know, give me some bullet points that I can bring up in tutorial about, you know, the cave or something.
00:39:25.040But that doesn't mean I've done that. So you're quite right.
00:39:27.980And I also wonder, I mean, to appeal to your department, the classics, if you were to input into ChatGPT the most beautiful works of literature of classics that you'd ever seen and said, create something like this, could it do that in your view?
00:39:43.320Could it create the beauty that we have seen from all of these people thousands of years ago?
00:39:49.640Yeah, that's a very interesting question.
00:39:50.840That's a very hotly debated topic, as you may know.
00:39:53.040Of course, in the world of visual arts, someone recently was awarded a prize, right, in the visual arts for an artificially produced image.
00:40:03.960All kinds of, of course, very deep ethical questions around AI and its accumulation of information and how that happens.
00:40:11.600But, you know, from my perspective, for me, the answer to that question was really given quite beautifully by Nick Cave recently.
00:40:17.820Nick Cave, the Australian singer-songwriter, was asked this question.
00:40:23.160A fan sent him a poem that ChatGPT had written when he asked ChatGPT,
00:40:29.280write me a poem or a song in the style of Nick Cave.
00:40:32.540And Nick Cave's response was to say that even if it were a good song,
00:40:37.480though Nick Cave refused to concede that it was a good imitation,
00:40:41.300he said that the problem is that ChatGPT, artificial intelligence,
00:40:45.780has been nowhere and suffered nothing. And to be human in the world at all, as someone like
00:40:51.740Jordan Peterson is constantly reminding us, is to suffer, and out of that suffering to
00:40:57.180produce meaning in the world and in our lives. And I've been fairly persuaded by Nick Cave that
00:41:03.400no matter how close the approximation one might be able to artificially reproduce the fact that
00:41:09.000the technology has itself been nowhere and suffered nothing means that that material can
00:41:14.520have very little consequence for me as someone who lives in the world with all of its fragility.
00:41:19.560So yeah, so I guess that would be my answer, which, I mean, maybe isn't the best
00:41:25.240answer, but. No, it is interesting. Now I'm geeking out on this topic myself. So I think we'll
00:41:30.480have to have you back on in another show. But I remember when I did tutorials in various classes
00:41:36.500in university, the one thing that was always so critical when you were understanding a work was
00:41:41.920to understand the author and the context in which they wrote a particular work. And even if the
00:41:46.520author is some professor who's still alive, understanding how that professor came about,
00:41:51.180you read, for example, a dissertation and you say, oh, well, this was an environmental historian.
00:41:56.740Why were they writing about this issue? And with ChatGPT, that context is eroded because there is
00:42:02.260no human context or it's an amalgamation of 150 human contexts that you don't actually know about