Juno News - August 23, 2023


Jordan Peterson has to go to reeducation camp for his tweets


Episode Stats

Length

44 minutes

Words per Minute

173.23323

Word Count

7,737

Sentence Count

266

Misogynist Sentences

10

Hate Speech Sentences

1
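
Note on the words-per-minute figure: it appears to be computed from the exact audio duration rather than the rounded 44-minute length shown above (7,737 / 44 would give roughly 175.8, not 173.23). Working backwards, 7,737 / 173.23323 implies a runtime of about 44.66 minutes, i.e. roughly 44:40. A minimal sketch of the arithmetic in Python, where the exact duration is an assumption inferred from the listed values:

# Hypothetical reconstruction of the words-per-minute stat.
# duration_minutes is an assumption inferred from the listed WPM;
# the page displays only the rounded 44-minute figure.
word_count = 7737
duration_minutes = 44.6635

wpm = word_count / duration_minutes
print(round(wpm, 5))  # approximately 173.23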


Summary

Summaries generated with gmurro/bart-large-finetuned-filtered-spotify-podcast-summ.
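
The summary itself would presumably be produced by running transcript text through that model. A minimal sketch of how that might look, assuming the Hugging Face transformers summarization pipeline and a naive fixed-size chunking strategy (both are assumptions about this page's actual pipeline, not a description of it):

from transformers import pipeline

# Summarization model named above; chunk size and generation lengths are assumptions.
summarizer = pipeline(
    "summarization",
    model="gmurro/bart-large-finetuned-filtered-spotify-podcast-summ",
)

def summarize_transcript(text: str) -> str:
    # BART-style models cap input around 1,024 tokens, so a long transcript
    # must be split; a naive character-based split is used here for illustration.
    chunks = [text[i:i + 3500] for i in range(0, len(text), 3500)]
    parts = [
        summarizer(chunk, max_length=150, min_length=30, truncation=True)[0]["summary_text"]
        for chunk in chunks
    ]
    return " ".join(parts)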

Transcript

Transcript generated with Whisper (turbo).
Misogyny classifications generated with MilaNLProc/bert-base-uncased-ear-misogyny.
Hate speech classifications generated with facebook/roberta-hate-speech-dynabench-r4-target.
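
A sketch of how the transcript and the per-sentence counts in the stats above might be produced with these models, assuming the openai-whisper package and the Hugging Face text-classification pipeline. The audio filename, the use of Whisper segments as sentence-like units, and the positive label strings are all assumptions; the real labels come from each model's config on the Hub:

import whisper
from transformers import pipeline

# Whisper (turbo), as noted above; "episode.mp3" is a placeholder filename.
asr = whisper.load_model("turbo")
result = asr.transcribe("episode.mp3")
sentences = [segment["text"].strip() for segment in result["segments"]]

misogyny_clf = pipeline(
    "text-classification", model="MilaNLProc/bert-base-uncased-ear-misogyny"
)
hate_clf = pipeline(
    "text-classification", model="facebook/roberta-hate-speech-dynabench-r4-target"
)

# Count flagged sentences; the label strings below are assumptions.
misogynist_count = sum(
    1 for s in sentences if misogyny_clf(s, truncation=True)[0]["label"] == "misogynist"
)
hate_count = sum(
    1 for s in sentences if hate_clf(s, truncation=True)[0]["label"] == "hate"
)
print(misogynist_count, hate_count)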
00:01:00.000 Welcome to Canada's most irreverent talk show.
00:01:17.180 This is The Andrew Lawton Show, brought to you by True North.
00:01:30.000 And it is great to have you aboard. I am just going to give a caveat at the beginning of the
00:01:42.160 program here that I may not make it through to the end. There has been, around my home, thunder
00:01:48.440 and lightning all day. The power went out once today, now that was this morning, and it hasn't
00:01:53.100 happened again just yet, so I will knock on wood and hope that we can make it through the show.
00:01:59.020 And one thing that I will point out here is that we are still continuing to see the decline of free speech all over the Western world.
00:02:11.160 And certainly Canada is not immune to this problem.
00:02:15.140 And we're going to be talking about this in the context of Jordan Peterson and his trial, essentially, by the Ontario College of Psychologists, which has subjected him or attempted to subject him to mandatory re-education for the alleged offense of tweeting any number of controversial things, one of which was retweeting Pierre Poilievre.
00:02:37.900 Ooh, very, very scary stuff indeed.
00:02:41.620 So we'll talk about that.
00:02:42.640 Also, I have to, just on a side note here, mention Chrystia Freeland and her escapades in Alberta.
00:02:50.040 Chrystia Freeland, who thinks she can just cycle everywhere, but somehow was clocked going, according
00:02:55.260 to Keean Bexte's report, 142 kilometers an hour through rural Alberta. Now, you can't blame her, because
00:03:00.400 it's very difficult for Chrystia Freeland to be in rural Alberta, so she needs to go as fast as
00:03:05.320 humanly possible so she can get the heck out of there. So have some sympathy for the Toronto Davos
00:03:10.280 elite, wanting to make her way out of Alberta as quick as humanly possible. All of that is aside
00:03:17.080 from the big story of the day, though Jordan Peterson having a ruling against him in court
00:03:23.060 in Ontario. Now, just to avoid getting totally into the legal weeds here, he brought an application
00:03:28.440 for judicial review of the Ontario College of Psychologists decision about that mandatory
00:03:35.100 re-education training, and the court ultimately ruled that the college was within its purview
00:03:40.260 to impose such a requirement on Peterson. They basically said that, yeah, he's still allowed to
00:03:46.000 talk, he's allowed to tweet, so it's not really affecting his right to freedom of expression. The
00:03:51.980 college is, you know, trying to reduce harm and all of that stuff. A lot of implications of this
00:03:57.120 we need to dig into here. I want to welcome to the show Josh DeHaas, who is a counsel for the
00:04:02.780 Canadian Constitution Foundation, which was an intervener in this case. Josh, it's good to talk
00:04:08.440 to you. Thanks for coming on today. Happy to be here. So let's start first off with what was
00:04:14.260 really at stake here, because obviously the college is a private organ, but it's a creature
00:04:19.440 of statute at the same time. He can't be a psychologist in Ontario without being a member
00:04:24.780 of this college. So what was really the main thrust of the argument here? Yeah, so the College
00:04:32.720 of Psychologists, they're a professional regulator, so they're private in a sense, but, like
00:04:36.960 you say, they're implementing the government's regulations, so they do have to
00:04:44.400 abide by the Charter, and that means that they have to consider Jordan Peterson's
00:04:50.080 freedom of speech. And so what was at stake here was essentially whether professionals, whether it's
00:04:56.080 the psychologists in Jordan Peterson's case or, you know, lawyers like me, doctors, nurses, anyone
00:05:03.040 regulated by one of these professional regulator bodies, whether what they say on
00:05:08.240 social media, whether it's, you know, political commentary or cultural commentary, whether that
00:05:13.600 sort of off-duty conduct can lead to discipline in their professional capacity. So basically, what's
00:05:21.760 at stake is the ability of any professional in Canada to participate in debates and say
00:05:27.760 sometimes controversial or politically incorrect things. And what the court seems to have decided
00:05:32.900 here is that it's okay for the college to force people into re-education if they put out mean
00:05:40.960 tweets, basically. I found there to be a fair bit of, I don't want to speak ill of judges entirely
00:05:49.120 because I don't want to put you in an awkward situation as a relatively fresh lawyer, but
00:05:52.880 there seemed to be some very strange revisionism in some of the arguments here. I mean, one thing
00:05:58.080 that I found noteworthy, when you bring up discipline, is the court saying that this was not
00:06:01.840 a disciplinary order. And I wonder what sort of technicality they were hinging that on.
00:06:08.360 Yeah. So that caught my eye too. This is an argument that the college made essentially that
00:06:15.560 you know, he didn't face any actual discipline, this is just sort of
00:06:20.920 a remediation. And so it's just sort of this, you know, minor step where they just said, you know,
00:06:27.400 meet with some coach, and this coach is going to teach you, you know, a world-famous psychologist,
00:06:33.320 formerly a professor at U of T, who's been in all kinds of different media, they're going to
00:06:38.680 teach you how to, you know, speak professionally when you're out there in the public. So it's
00:06:44.040 a very strange argument. I thought they made that argument because, you know, maybe that would
00:06:49.480 lead the court to say, oh, we can't really decide this, it's premature in some sense. But
00:06:56.840 I don't really understand that either. I think, you know, if you are forced to spend your own money
00:07:02.280 to hire a coach who's going to teach you what you're allowed to say and how you're allowed to
00:07:06.520 say it, I'm pretty sure that counts as discipline. Yes, and certainly when the consequences of not
00:07:13.400 complying will carry discipline. It is a step towards discipline because basically they're
00:07:19.180 not even requiring this of everyone. They're going to him directly and taking issue with his tweets
00:07:24.040 and saying that you've warranted this sort of re-education. And it's not even where you could
00:07:28.720 say it's just about box checking, or once you've gone through the course, you can just carry on
00:07:32.540 doing what you're doing. They're effectively saying there is a correct way for you to conduct
00:07:36.380 yourself and you're not doing it. Yeah, that's right. And what's interesting too here is the
00:07:42.880 court, you know, when the court's considering whether a Charter right like free expression
00:07:46.860 has been violated, it has to be, if the government, and in this case the regulator, is going to
00:07:52.020 limit that, it has to be minimally impairing of that right. So I think this sort of allowed the
00:07:57.600 court to say, well, you know, it's not really discipline, it's just this initial sort of step
00:08:01.680 that could potentially one day lead to discipline, it's no big deal, it's just
00:08:07.120 meeting with a coach for several months and having them teach you, you know, how to speak properly.
00:08:13.260 But it has really big implications.
00:08:15.600 Like the fact that, you know, all the media, including you, are talking about this today is to sort of send a warning shot to other people in various professions that they better watch what they say and make sure it's politically correct or else they might face discipline.
00:08:31.080 We were talking about this a little bit yesterday in the context of medical associations and how physicians' colleges have had a very significant crackdown, I think, on their individual members' right to speak freely on COVID issues.
00:08:46.960 And whether we're talking about law societies, medical colleges, the psychologists' college for Jordan Peterson, I mean, anything.
00:08:54.460 It could be the regulatory colleges for massage therapists, for psychotherapists.
00:08:58.600 It doesn't matter.
00:08:59.140 These bodies are all tremendously powerful because you don't really have a right to operate in your chosen field without being a member of them. And I'm just wondering in general, if this is an area of law that has been fairly well established in which these organizations have this much power, or if this is kind of a recent advent in judicial reviews.
00:09:19.700 Yeah, so it's come up a lot more lately, and part of that has to do with the pandemic.
00:09:26.160 Basically, you know, there's always been this idea that regulators can, you know,
00:09:33.960 correct speech, they can take care of sort of the image of the profession in the
00:09:39.440 eyes of the public. And, you know, we at the Canadian Constitution Foundation, we don't disagree
00:09:44.740 with that. The regulators do have some role established in law to prevent harm to,
00:09:53.140 you know, people who are, for example, patients of Jordan Peterson. We don't take issue with that. But
00:09:58.340 it seemed to come up a lot lately that just things people say, you know, on Twitter or
00:10:03.540 whatever, are being policed. You know, for example, it came up a lot with nurses during
00:10:09.460 the pandemic, you know, nurses that opposed vaccine mandates, maybe because they've been
00:10:13.760 exposed to vaccine mandates with the flu shot for decades and don't like them, or nurses that
00:10:18.840 criticized mask mandates. And so it's not a completely new area of law, but it's coming up
00:10:25.800 more and more. Well, and I should point out too, that we're not even talking about, in some cases,
00:10:32.040 tweets that are, I would argue, outside of what a psychologist who's not Jordan Peterson
00:10:37.800 could or should be weighing in on. I mean, one of the tweets they brought up was him commenting on
00:10:42.580 the transgender actor Elliot Page, formerly Ellen Page. And, you know, the transgender issue is
00:10:48.920 entirely fraught within psychology. One of the big debates we have in medicine right now is how
00:10:53.820 to treat especially children that start identifying by a different gender than what their biological
00:10:59.440 sex would dictate. So the idea that a psychologist cannot have heterodox opinions on the transgender
00:11:04.860 issue and express them is, I think, insanely, insanely offensive to what professionals should
00:11:11.840 be encouraged to do, which is debate live issues and hot issues in their field.
00:11:17.580 Yeah, that's right. And it's probably even worse than people realize in this particular case,
00:11:22.340 because what Jordan Peterson said about the actor Elliot Page, formerly Ellen Page, was
00:11:30.400 just used their original pronoun that they used for decades, because in Peterson's view,
00:11:37.580 and, you know, I don't necessarily agree with him, but in his view, it's, you know, it's bad for the
00:11:43.400 patient to sort of allow them to choose their pronoun, because in his view that's sort
00:11:50.300 of engaging in a delusion, and they're better off if they don't do that. So, you know, you can be on
00:11:55.820 one side of that issue or the other, but if anybody should be able to talk about that, it's
00:12:00.020 psychologists, right? Yeah. And I'm curious where you think this is going to go,
00:12:05.160 because I know Jordan Peterson has been fairly unrepentant on this. And I know that the CCF is
00:12:10.000 not representing him. You're intervening. So you don't, I believe, get to appeal on his behalf.
00:12:15.380 But is your sense that this will go to an appeal and that if so, it could actually have a strong
00:12:20.760 basis of success? Yeah, I think so. I think it's pretty clear that Jordan Peterson, he does intend
00:12:28.020 to appeal. So he'll take this to the Court of Appeal. He'll ask them if it's something that
00:12:35.400 they should hear. And if they agree to hear it, then you'll get a three-member panel of the Court
00:12:42.780 of Appeal most likely deciding whether this divisional court, which is what put out the
00:12:49.460 judicial review decision today, whether they got that right. Yeah. And I should say, Jordan Peterson
00:12:54.460 tweeted about this, and he vowed to make every aspect of this public. And we will see what
00:13:00.840 happens when utter transparency is the rule. And I think if the college gets its way and Jordan
00:13:06.380 Peterson has to be subjected to this, I think they may find themselves on the losing end of that,
00:13:11.380 much like when Ezra Levant many years ago filmed the entire proceedings before the Alberta Human
00:13:17.560 Rights Tribunal. So I think it's very much something that people want to see. Josh DeHaas, counsel
00:13:23.480 with the Canadian Constitution Foundation. Good to have you on. Thanks for doing it.
00:13:27.700 Thanks so much, Andrew.
00:13:28.800 All right. Thanks again, Josh. And I should say on Jordan Peterson, look, I'm with Josh on this. I
00:13:33.800 believe there is a role for regulatory colleges because the only thing worse than regulatory
00:13:39.680 colleges deciding who can or can't be a member is some government bureaucrat doing it in which you
00:13:45.340 know that the rules are going to be even more ridiculous. But the point of this is for these
00:13:50.180 organizations to basically say, we as psychologists know what makes a good psychologist and we know
00:13:57.200 what makes a competent psychologist. They are not meant to be authorities governing the entirety
00:14:02.600 of your lives. But we've seen that become the case. Certainly lawyers and law societies have
00:14:07.920 no doubt familiarized themselves with this because of all these character laws. I mean,
00:14:12.060 I mentioned Ezra a moment ago. Ezra was once threatened, and I don't want to get the facts
00:14:16.980 wrong, I think it was, I don't know, seven, eight years ago, maybe it was more recently,
00:14:21.140 with the suspension of his law license in Alberta for speaking ill of another lawyer. Now, I think
00:14:26.840 most people in this country could probably speak ill of a great many lawyers, but he was going to
00:14:31.560 be charged or prosecuted in the sense, in the broad sense, not the literal criminal sense,
00:14:37.500 by this law society. And he ended up just giving up his right to practice law, which is quite sad
00:14:42.720 that especially with how much Ezra has to spend on lawyers, it's quite sad that he ended up giving
00:14:47.440 up that, but he really had to. And I think time and time again, we're seeing doctors that are
00:14:54.620 wanting to speak up about vaccine mandates and harms of lockdown that are being faced with
00:15:00.340 punishment from their college. We now see the College of Psychologists cracking down
00:15:04.100 on Jordan Peterson for being Jordan Peterson. That's the point here is that I could find you
00:15:10.140 some insane doctors, psychologists, lawyers that are on one side of the political spectrum that
00:15:18.440 will be completely untouchable by the regulatory colleges. Where's Nili Kaplan-Myrth's witch hunt
00:15:25.980 by the college for all of the nonsense she spouts about why we all need to wear like, you know,
00:15:30.660 17 masks if you're in the bathroom alone or something like that. All of this is exactly why
00:15:36.060 these organizations should focus on their core mandate, which is, are you competent and are you
00:15:40.820 engaging in something that is fitting with this profession? When they start legislating your
00:15:45.860 private tweets, it is this creep of censorship into all of these different areas of what is
00:15:51.320 supposed to be civil society and what is supposed to be our ability as individuals to engage in open
00:15:58.040 debate because it's not politically neutral. They try to say, well, the issue is with the tone.
00:16:04.000 It's like all the people that say the thing they really didn't like about Donald Trump was the tone of his speech as opposed to the content.
00:16:10.960 No, they just don't like the guy and they latch on to whatever they can latch on to, which is what these censors are doing with Jordan Peterson.
00:16:17.660 They are latching on to the tone because, oh, maybe he made a little barb.
00:16:21.700 I mean, one of the comments was where someone was complaining about world overpopulation and he said, you're free to leave.
00:16:27.800 And now you've got all these people drawing the worst faith interpretation of that that they possibly can saying, oh, is this a psychologist counseling suicide? Is this a psychologist making light of suicide? No, it's not.
00:16:43.100 It's a guy responding to a flippant tweet with a flippant and I'll say entirely clever
00:16:48.780 and legitimate tweet of his own.
00:16:51.300 When he talks about Elliot Page, who was a woman for the entirety of Elliot Page's career
00:16:56.520 until one day Elliot Page decided Elliot Page was a man and we are all supposed to go along
00:17:01.240 with it, when he made a comment about how a woman got her breasts removed, which is
00:17:06.420 what his take on the Elliot Page transition was, that is a comment that you may agree
00:17:11.860 with or disagree with, you may find to be none of his business, or you may find it to be entirely
00:17:16.200 legitimate discourse. But the point is that is not for a regulatory college to decide.
00:17:22.640 And the court has done what courts in Canada do, which is exact a level of deference to censors,
00:17:28.920 a level of deference to what is increasingly an authoritarian attitude and approach to speech,
00:17:35.220 in which Jordan Peterson, I mean, you know what, he said, I think it was yesterday on the eve of
00:17:40.200 this decision. Good luck to the college for wanting to keep up its prosecution of him. And in all
00:17:46.300 honesty, you can't fight a guy like that who has the means, the temerity and the desire to fight
00:17:52.820 back and expect that it is going to end well for you. So his response there, which we put up on the
00:17:58.840 screen a few moments ago, is that he is going to make every aspect of this public. He also said
00:18:04.140 yesterday that he stands by his words and would change nothing. So this is not a guy that is
00:18:09.500 giving that little glimpse of weakness that they're going to pounce on and say, well, you admit
00:18:13.840 that it was wrong. No, he's saying I did absolutely nothing wrong and I'm prepared to have it out,
00:18:18.700 not in one of your stupid little star chambers, but in open court, so to speak. And, you know,
00:18:23.940 I talk about Mark Stein a lot, who has been a lion of free speech in Canada for many, many years.
00:18:30.300 And when he was going through his free speech trial in British Columbia about, oh, I don't know,
00:18:35.220 nine years ago or so, it was so paramount to his lawyer, Julian Porter, and to him that it be a
00:18:41.920 public hearing, that it be something that is not dealt with in chambers, but it is something that
00:18:46.960 is dealt with in open court, because it was so important for people to see how this process is
00:18:52.160 weaponized against people. And that is exactly the case of Jordan Peterson. So I will be tuning in
00:18:58.100 with great enthusiasm to every single step along the way, and I hope you will be as well.
00:19:05.220 Speaking of compelled speech, I was talking yesterday about Catherine McKenna. She is the former environment minister in this country. I always skip over infrastructure. She was also the infrastructure minister in this country. So if a highway is crumbling, it might have been her fault.
00:19:20.240 But the thing about Catherine McKenna is that she had basically blamed conservatives and people who opposed the carbon tax for the wildfires in the Northwest Territory and in British Columbia.
00:19:34.520 Northwest Territory, sorry, it's plural, not singular, but she blamed arsonists, which she said are those who oppose carbon tax.
00:19:41.400 Now, Catherine McKenna has also been rather unrepentant on this file.
00:19:45.640 She was going on and on about this. And she posted, I should have given Sean this picture
00:19:50.740 because it was a fun one. She did a tweet earlier, which looks like it's the kind of thing that would
00:19:54.760 have come from like a satire account in which she said that she made a pinky promise when she was
00:19:59.860 environment minister with children that she was going to save the planet or something. And then
00:20:04.600 she had like all these pictures of her doing pinky promises with children, which to be honest,
00:20:09.500 you know, if more politicians did pinky promises, maybe the country would be in a bit better of a
00:20:14.540 place than it is now. But Catherine McKenna has doubled, tripled, quadrupled, quintupled,
00:20:18.920 sextupled, heptupled, octupled, nonupled, and decupled down over the course of the last couple
00:20:26.180 of days. And her most recent tweet, which I had to bring up today, is that she wants re-education
00:20:32.100 camp for conservative politicians. She says, we need a mandatory climate science lesson for
00:20:39.240 Conservative politicians and premiers, as well as cost to the lives and livelihoods of Canadians
00:20:45.140 from climate change and the economics of the clean transition. Otherwise, Canadians pay the price.
00:20:50.900 It's absurd, but that's where we're at. Now, I couldn't resist taking the obvious cheap shot
00:20:55.240 there. And I said that perhaps we could just start with economics lessons for Liberal members of
00:21:00.540 Parliament. And if they don't pass, they don't get to take their seats, which means we'd have a very
00:21:05.040 empty House of Commons. So I'll give you your climate change education for conservatives if
00:21:10.760 you give me basic economics, free market capitalist economics for liberals, and we'll see who comes
00:21:16.960 out better than that. But basically, we are seeing the end of debate. And when people talk about
00:21:23.500 education in this context and how you need to be mandatorily educated, it's not about having an
00:21:30.440 education. It's about having the wherewithal to say what you want to say in the eyes of the
00:21:36.280 censors because censorship isn't enough. They have to compel certain speech and that is exactly the
00:21:41.360 direction that things are headed and people like Jordan Peterson should push back and if Catherine
00:21:46.440 McKenna ever gets her wish we should be pushing back on that as well. One thing I will say here
00:21:51.580 is that all of this is evidence of why I have not been as fearful of artificial intelligence as
00:21:58.300 some people have, because I don't think that human intelligence has served us all that well.
00:22:01.760 But that's a bit of a glib joke to start off what is a serious discussion, which is what
00:22:06.220 AI is doing to discourse and to thought.
00:22:10.320 Now, we haven't talked a lot about AI on this show.
00:22:13.460 I've kind of been waiting for the right angle and the right opportunity.
00:22:16.620 And I should say, I've been one of these people that has sort of enjoyed the novelty of it.
00:22:20.940 When ChatGPT has come up and you get the ability to just have a quick conversation with this
00:22:26.820 thing and have it give you some response to a question. And there's a program that I've had
00:22:32.640 some fun with called Midjourney, which will create AI-generated images. And you can give it
00:22:39.020 a whole bunch of prompts. I've had a lot of fun with this one. The one that I did, I won't show
00:22:43.000 you because I wasn't thrilled with it, but I asked for like childhood photos of Fidel Castro pushing
00:22:47.740 young Justin Trudeau on a swing. But the AI was getting Fidel Castro and Justin Trudeau's faces
00:22:53.000 mixed up, which maybe makes it smarter than humans. Who knows? And then I also had some fun
00:22:58.100 this morning and I asked to get like some photographs of Chrystia Freeland driving.
00:23:02.380 So maybe we can throw those up. Yeah, there we go. These are the samples it gave me. I thought
00:23:07.940 that speed demon Chrystia Freeland, fresh off the heels of getting her ticket for going however many
00:23:12.620 kilometers over. That's her basically road racing down some Alberta highway. I like the one on the
00:23:18.900 bottom right myself, although it looks a little terrifying. That one, it looks to be in Ottawa
00:23:23.500 though. You can see in the back right there, it looks to be Centre Block that she's just like
00:23:28.360 leaving in the dust there. The one in the top right is good. It's a little aspirational. She's
00:23:32.540 really flying there so much that she needs the space helmet. She's putting on so many miles and
00:23:37.360 going so fast. She has ascended off the ground. So take from that what you will. But for all the
00:23:43.220 fun that AI offers. And yes, there is some. It also has very serious implications. And those
00:23:50.240 implications we have not really fully explored because despite the fact that this technology
00:23:54.860 has been in development for many, many years, it really seems it's only been in the last year
00:23:59.980 that people have started to grapple with the real world implications of it. And, you know,
00:24:04.960 we see this in academia where universities which have had to focus on detecting plagiarism now
00:24:10.960 have this new problem, which is, did students just create something original by entering a few
00:24:16.520 prompts for their essay assignments into ChatGPT or whatnot? There's a great piece in C2C
00:24:24.380 Journal by Christopher Snook about this called AI, the Destruction of Thought and the End of
00:24:30.080 the Humanities. He is a lecturer with Dalhousie University and a contributor to C2C Journal. And
he joins us now, not an AI-generated version, but the man himself.
00:24:40.540 Good to talk to you, Christopher.
00:24:41.540 Thanks for coming on.
00:24:43.100 Thank you, Andrew.
00:24:43.960 Thank you so much for the invitation.
00:24:45.820 So let's start first off with where your issue is with this.
00:24:50.140 Why are you concerned about AI in the context here?
00:24:53.860 Yeah, I suppose I can answer that in a fairly simple way.
00:24:58.480 As you've already indicated, there's a great deal of joy maybe to be had with playing with
00:25:03.080 sort of AI applications.
00:25:04.140 But at the simplest level, I suppose maybe I could say two things.
00:25:09.020 One would be that AI introduces, I'm a humanities teacher, so AI-generated content introduces into the university and into students' lives very easy possibilities of escaping from a certain kind of reflection that may be essential to their development within the context of the humanities historically.
00:25:28.480 But secondly, I think I have a pretty significant concern that AI is actually indicative in many respects of a much longer trend in humanities education in Canada that has fairly uncritically assimilated new technological developments without reflecting on their consequences for pedagogy and education.
00:25:49.060 That's quite an interesting approach to this. And, you know, one thing that I always recall, even
00:25:54.960 from my own time in university, is that essays were very challenging. I would do better at
00:26:00.960 them now, but they were very challenging because you can't really cheat your way through an
00:26:05.400 essay, unless you're actually cheating and plagiarizing and whatnot, because it's not just
00:26:10.220 about knowing the facts. You can't Google the answer to the question when you basically have
00:26:14.180 to show your work and show how you arrived at something. And certainly in an academic context,
00:26:19.740 AI has huge implications for that because all of a sudden someone else could do the thinking with
00:26:25.540 you. I could just give this machine a bunch of different data points and say, formulate an
00:26:31.120 argument for me. And that's something, I mean, I've talked to professors who have already been
00:26:36.720 complaining about the decline in critical thinking in universities. And now we've added this other
00:26:41.240 tool which maybe can be used for good but also can further erode people having to come up with
00:26:47.720 these skills on their own. Yeah, what I've tried to do in the article, I mean, maybe if I kind of
00:26:53.140 talk about some of the points in the article, that may be helpful for at least giving
00:26:56.840 people a sense of where my concern lies. So my concern really grew out of two things that I saw
00:27:02.340 in the university last year. So the first was, I mean, a remarkable amount of energy and anxiety
00:27:07.160 around the appearance of things like ChatGPT, right, sort of large language models that can
00:27:13.960 produce texts fairly competently, increasingly competently, for students with very little
00:27:20.600 to no work on their side. So there's a huge amount of anxiety, as you pointed to, Andrew, earlier in
00:27:25.320 your introduction to this conversation. It's different, it's not even
00:27:30.360 plagiarism in any recognizable sense, it's just allowing AI to generate texts from the
00:27:36.440 information it's kind of gathered through its chat box on the internet.
00:27:40.680 So there's a huge conversation about this in the university, and what I noted was that primarily
00:27:45.800 that conversation was focused on questions of use. And so I spend some time teaching engineers, though
00:27:50.120 I teach humanities to engineers, and they very patiently kind of tolerate this course.
00:27:55.560 That seems like a very difficult challenge for you. It's a hard sell, it's a hard
00:27:59.560 sell, but they're very patient, and they tolerate this sort of required course on, effectively, on
00:28:04.840 the history of technologies. And one of the key things that I've been thinking about since teaching
00:28:09.240 this course is what Neil Postman simply observes, which is that the introduction of every new
00:28:14.360 technology doesn't simply give a new tool to humans. But he sort of coined the idea, or
00:28:20.520 helped kind of articulate the idea, that every technology shifts the world ecologically, in much
00:28:26.360 the same way that an ecosystem is changed if a new species is introduced, right? So it's not just that
00:28:31.880 we have all of a sudden AI, but rather that the whole world shifts around the availability of
00:28:36.600 these new technologies, and embedded in these technologies are certain assumptions about what
00:28:41.560 it is to be human. And so it's a bit of a rambling response to your previous comment, but what
00:28:48.120 I noticed in the university over the last six or seven months is that the conversation has been
00:28:53.720 almost exclusively focused on use. There have been some people who are sort of diametrically
00:29:00.200 opposed to the appearance of AI in any form in the university. I kind of tend in that direction,
00:29:05.480 certainly for the humanities. Others are much more supportive of the use of AI in various ways
00:29:11.880 to facilitate writing. But regardless of where one stands on use of AI, I've noticed that
00:29:17.800 very few people are asking deeper questions, such as what kind of world does AI produce, and what
00:29:23.400 kind of worldview or what sort of assumptions are built into the technology. And it's there
00:29:28.680 that I think universities need to really be careful about the implementation of AI, partly because
00:29:34.200 I think AI is actually a bit apocalyptic, that is, it kind of reveals something about the
00:29:40.440 nature of higher education in Canada that's been developing for years, and we could talk about that
00:29:44.600 sort of narrowing of viewpoint diversity in different aspects of university
00:29:48.680 life. But also, I think the other thing that has been missed is that
00:29:54.280 AI-generated text pushes against certain proclaimed positions, moral positions, that
00:30:02.720 the university has adopted in the last, maybe in the last decade. One aspect that this springs to
00:30:08.360 mind, and you address it in the piece, in one section anyway, notably, is the idea of bias.
00:30:13.440 And, you know, facts, in theory, are neutral, and they do not have a political persuasion. It's the
00:30:19.440 assembly of facts and the composition of various facts that you can use to sort of demonstrate
00:30:24.560 something that is a bit more biased. And one thing we've seen in AI is how it's providing,
00:30:31.340 it's doing the thinking for you in theory. But the problem with that, among others, is that
00:30:36.700 it is producing a biased outcome. It's producing a biased response. I mean, I had once when I was
00:30:42.880 first playing around with ChatGPT, a debate with a machine. So the joke was on me about what a woman
00:30:47.700 is. And it was interesting seeing this machine twist itself into all of these many logical knots
00:30:53.140 about trying to answer this question. But it was actually quite terrifying how it started giving me
00:30:58.920 the talking points I would expect if I were having this with some university diversity
00:31:03.100 administrator. And it started telling me about inclusivity and tolerance and women can come in
00:31:07.980 any forms. And there is something there in which AI is basically telling people that there is one
00:31:15.420 way to construct a thought when it does this, that you aren't actually able to assemble facts
00:31:21.380 into different worldviews. Yeah, I mean, I think that that's true. And certainly the studies have
00:31:27.220 varied and in some cases disagree a little bit with one another about where the biases are found
00:31:32.920 in AI technologies. Though some people have, there's been some studies that have tried to
00:31:36.600 argue that there's, because some of the early scraping of the internet focused primarily on
00:31:42.640 Reddit sites, that there was a kind of conservative or male bias somehow in
00:31:47.680 the technology. But it seems to me fairly clear now that the technology seems to be biased
00:31:53.280 fairly clearly, I think, in the other direction, in terms of the kinds of sources that
00:32:00.560 it's recycling when it's producing kind of mash-up texts. So I began the article, and
00:32:05.520 this is one of the things, maybe a way of thinking about AI in a broader context, or sort of moving
00:32:09.920 back from the technology to think about what it has to say about universities. I began
00:32:14.400 the article in C2C really just by reflecting on the fact that a kind of formulaic response
00:32:21.680 to the work of pedagogy has become characteristic of universities generally in Canada, and it's
00:32:27.520 in part indicated through the demand that applicants for university positions complete
00:32:33.440 diversity, equity, and inclusion statements, diversity statements as part of their application packages.
00:32:39.920 And kind of famously, or if you follow these sort of stories infamously, depends on what one thinks about all these things, a professor in the United States asked ChatGPT to produce a diversity document just last year.
00:32:51.740 And he was astounded to see, just as you've described, the speed with which ChatGPT was able to reproduce all of the talking points and all of the assumptions of a fairly kind of middle-of-the-road Canadian higher education position on issues of diversity, inclusion, and equity.
00:33:11.620 To my mind, what that revealed was that we're sort of beginning to operate in the university
00:33:16.260 in a world that is fairly, at the very least, formulaic in its expectations of whatever diversity,
00:33:24.180 inclusion, and equity may be about, or diversity, equity, and inclusion. So it was from there,
00:33:29.780 really, in the article that I wanted to try to see or to explore how is it that the technologies
00:33:35.860 that are available to us now in the university, and that have been slowly growing
00:33:40.020 in their implementation in the university over the last 15 or 20 years, may actually be
00:33:49.540 both accelerated by the advent of AI but may also be, in a certain sense, pointing towards AI, that is
00:33:56.260 to say, pointing towards a world in which a kind of formulaic regurgitation of information becomes
00:34:02.180 a kind of normative expectation of students, even in the context of their degrees. So that's
00:34:08.500 sort of where the article began.
00:34:10.180 So just to sort of maybe make a connection
00:34:11.700 with what you found when you poked and prodded ChatGPT,
00:34:15.240 that it tends to kind of produce
00:34:16.980 fairly predictable results relative to certain questions.
00:34:21.820 Well, there is also to this,
00:34:23.920 I mean, the most, I guess, cogent defense I hear of AI
00:34:28.220 is that AI is little more than a mirror
00:34:30.900 to the existing world.
00:34:32.640 I mean, AI is not really formulating its own materials
00:34:36.720 that it's not drawing from the trove of inputs.
00:34:40.460 Now, obviously, individual inputs can be manipulated,
00:34:43.020 and we also have terms of use that govern it.
00:34:46.520 I'm trying to bring us away from the use discussion of this
00:34:49.240 that you were talking about earlier.
00:34:50.760 But I guess in that sense,
00:34:52.580 is this just reflecting an existing problem,
00:34:55.600 or is this making it worse?
00:34:59.020 Right, yeah, that's a very good question.
00:35:00.580 I would say from, there might be two things to say about that.
00:35:03.280 On the one hand, from my perspective,
00:35:05.920 then this was maybe my concern with the conversation so far in higher education in
00:35:10.080 Canada about AI: its preoccupation with questions of use has really prevented people from asking
00:35:15.920 a much slower and more difficult question, which is to say, is this actually a benefit, or is
00:35:22.240 it simply a reflection of the world we're in, or is it making things worse? So I think that that
00:35:27.680 deeper question about the kind of ecosystem consequences or cultural consequences of AI
00:35:33.680 is not really being asked. So that is to say, AI at some level is a kind of metaphor, in much the
00:35:42.320 same way one might think about COVID: we can think about it as an illness,
00:35:47.600 but we can also think about the COVID response, at least, as a bit of a metaphor of our contemporary
00:35:52.320 cultural moment. So there is that mirroring back. But I would say, from the perspective
00:35:57.520 of pedagogy, AI raises some very deep questions that to my mind intensify problems
00:36:03.600 that were already present. So it's not so much that it simply introduces a newness
00:36:08.400 that's radical, but it intensifies certain very particular problems. So one of those problems, I
00:36:14.000 think, is connected to the use of devices generally for humanities education
00:36:21.360 in particular. So one of the things I think many of us have experienced is the extent to which
00:36:27.120 screens and screen reading and iPhones or cell phones, the extent to which they actually
00:36:31.840 produce in us kind of habits of scanning, a kind of hyper-attention, what one scholar calls forms
00:36:36.400 of hyper-attention, not focused attention or contemplative reflection, but a kind of hyper-
00:36:42.080 attention that actually tends to kind of lead us towards a certain kind of rashness in our
00:36:46.960 decision making. So that's one deep and profound concern I have, especially when institutions seem
00:36:52.640 to be dominated by certain sets of political commitments that ought themselves to be subject
00:36:58.160 to serious reflection and consideration, right? So if we're in an environment where there's certain
00:37:02.720 assumptions about what political positions are normative, it needs to be the case that those can
00:37:07.360 be thought about deeply and reflectively, and if we're using technologies that limit that capacity,
00:37:12.560 then we're in a little bit of danger. But the other one, I think, for me, from the perspective as a
00:37:17.040 teacher, is that, and this would be, I mean, I'm affiliated with the classics department, and I
00:37:22.560 spend a lot of time the last number of years teaching Augustine, a sort of famous foundational
00:37:27.120 voice for the Western world. And Augustine is one of many thinkers who highlights the
00:37:33.440 fundamental role of memory in the constitution of our personalities, this sort of crucial role
00:37:37.760 that memory plays. And it's kind of essential to technologies like ChatGPT that we offload, or
00:37:43.520 offshore, the faculty of memory to the technological device, right? It does the work for us, right? So
00:37:51.340 I don't struggle with Augustine or Dante or Homer or any of those things. I let ChatGPT do the
00:37:57.480 struggle in a certain sense. I mean, it's not really struggling, but I let it do the amalgamation
00:38:01.200 of opinion making, information, and I'm left passive in that response. So in that sense,
00:38:06.880 I think the technologies actually inhibit the kind of interior dialogue that's fundamental
00:38:12.560 to education, but that's also fundamental to being a free person in the world. Hannah Arendt,
00:38:17.460 I think, points this up very, very powerfully in her reflections on totalitarianism. If we
00:38:22.140 can't have a dialogue with ourselves, if we're pulled out of ourselves endlessly and offshore,
00:38:28.020 even our memory, we lose the ability to actually be free agents in the world. So these are some
00:38:34.100 of the things that I'm very concerned about at the level of pedagogy, which is why I tend to a
pretty puritanical, I suppose, relationship to AI when it comes, at least, to humanities
classrooms. I recognize AI has different applications in different contexts.
00:38:48.280 It's funny, at the risk of oversimplifying it, I think of, you know, a movie that I've watched
00:38:53.340 that, you know, say is two hours long. I could find out what happens in that movie in about 60
00:38:58.860 seconds by just reading a plot synopsis on Wikipedia. But I don't do that. I watch the
00:39:03.340 movie because there is something in that process. You feel, you see, you learn, you get insights.
It's the same as why, you know, despite the fact that I may not have taken this advice when I was in high school, reading the Coles Notes of something is not the same as reading the thing itself.
I mean, I could get ChatGPT to say, you know, give me some bullet points that I can bring up in tutorial about, you know, the Cave or something.
00:39:25.040 But that doesn't mean I've done that. So you're quite right.
00:39:27.980 And I also wonder, I mean, to appeal to your department, the classics, if you were to input into ChatGPT the most beautiful works of literature of classics that you'd ever seen and said, create something like this, could it do that in your view?
00:39:43.320 Could it create the beauty that we have seen from all of these people thousands of years ago?
00:39:49.640 Yeah, that's a very interesting question.
00:39:50.840 That's a very hotly debated topic, as you may know.
Of course, in the world of visual arts, someone recently was awarded a prize, right, for an artificially produced image.
00:40:03.960 All kinds of, of course, very deep ethical questions around AI and its accumulation of information and how that happens.
00:40:11.600 But, you know, from my perspective, for me, the answer to that question was really given quite beautifully by Nick Cave recently.
00:40:17.820 Nick Cave, the Australian singer-songwriter, was asked this question.
00:40:23.160 A fan sent him a poem that ChatGPT had written when he asked ChatGPT,
00:40:29.280 write me a poem or a song in the style of Nick Cave.
00:40:32.540 And Nick Cave's response was to say that even if it were a good song,
00:40:37.480 which Nick Cave refused to conceive that it was a good imitation,
00:40:41.300 he said that the problem is that ChatGPT, artificial intelligence,
00:40:45.780 has been nowhere and suffered nothing, and to be human in the world at all, as someone like
00:40:51.740 Jordan Peterson is constantly reminding us, is to suffer, and out of that suffering to sort
00:40:57.180 of produce meaning in the world and in our lives. And I've been fairly persuaded by Nick Cave that,
00:41:03.400 no matter how close the approximation one might be able to artificially reproduce, the fact that
00:41:09.000 the technology has itself been nowhere and suffered nothing means that that material can
00:41:14.520 have very little consequence for me as someone who lives in the world with all of its fragility.
00:41:19.560 So yeah, so I guess that would be my answer, which isn't, I mean, yeah, maybe not the best
00:41:25.240 answer, but. No, it is interesting. Now I'm like geeking out on this topic myself. So I think we'll
00:41:30.480 have to have you back on in another show. But I remember when I did tutorials in various classes
00:41:36.500 in university, the one thing that was always so critical when you were understanding a work was
00:41:41.920 to understand the author and the context in which they wrote a particular work. And even if the
00:41:46.520 author is some professor who's still alive, understanding how that professor came about,
00:41:51.180 you read, for example, a dissertation and you say, oh, well, this was an environmental historian.
00:41:56.740 Why were they writing about this issue? And with ChatGPT, that context is eroded because there is
00:42:02.260 no human context or it's an amalgamation of 150 human contexts that you don't actually know about
00:42:08.960 and can't see.
00:42:09.740 So I think that's a less elegant way
00:42:11.820 of describing what you've shared
00:42:13.340 from Nick Cave there.
00:42:14.720 And I thank you for it.
00:42:16.180 The piece in C2C Journal is
00:42:18.040 AI, the Destruction of Thought
00:42:19.760 and the End of the Humanities
00:42:21.280 by Christopher Snook.
00:42:22.760 And they also have another part
00:42:24.180 of this series written by Gleb Lysak,
00:42:26.320 who we had on the show
00:42:27.140 a couple of weeks ago,
00:42:28.480 but something else entirely.
00:42:30.280 Christopher, thanks so much for coming on.
00:42:32.020 Good to talk to a real human
00:42:33.360 in this day and age.
00:42:34.840 Thank you, Andrew, very much.
00:42:35.940 I appreciate it a great deal.
00:42:37.060 Thank you so much.
00:42:37.600 All right.
00:42:37.900 Thank you very much.
00:42:38.960 That does it for us for today.
00:42:41.220 Do check out C2C Journal.
00:42:42.700 They always have lots of great stuff
00:42:43.800 and we've got a great relationship with them
00:42:45.820 and try to highlight as much of it as we can on this show.
00:42:48.920 We will be back on Friday
00:42:50.640 with a special look at the trans issue.
00:42:53.660 You won't want to miss that.
00:42:54.780 We've got Linda Blade on and a couple of others
00:42:57.020 that I hope you will enjoy the insights of.
00:42:59.640 Again, not AI-generated insights.
00:43:01.520 Not that there's anything wrong with it,
00:43:03.320 although there is a little bit wrong with it.
00:43:05.020 If you want to support independent media,
00:43:06.660 which I hope you do,
00:43:07.740 you can do so at donate.tnc.news, donate.tnc.news. And again, we are in this climate up against
00:43:15.080 tremendous regulations. We have tech platforms responding to government regulation, and we all
00:43:21.500 have to make a point of supporting the work that we value. So hope you'll do that. We will talk to
00:43:26.380 you soon. Thank you. God bless and good day to you all.