Bannon's War Room - November 04, 2025


WarRoom Battleground EP 883: Critical Thinking in the Age of AI


Episode Stats

Length

53 minutes

Words per Minute

163.3

Word Count

8,762

Sentence Count

619

Hate Speech Sentences

7


Summary

In this episode of War Room, we talk about the growing threat of artificial intelligence, and why we should all be worried about it. What are the dangers of AI, and how can we prepare for them?


Transcript

00:00:00.000 This is the primal scream of a dying regime.
00:00:07.000 Pray for our enemies.
00:00:09.000 Because we're going medieval on these people.
00:00:12.000 I got a free shot at all these networks lying about the people.
00:00:17.000 The people have had a belly full of it.
00:00:19.000 I know you don't like hearing that.
00:00:20.000 I know you try to do everything in the world to stop that,
00:00:22.000 but you're not going to stop it.
00:00:23.000 It's going to happen.
00:00:24.000 And where do people like that go to share the big lie?
00:00:27.000 MAGA Media.
00:00:29.000 I wish in my soul.
00:00:31.000 I wish that any of these people had a conscience.
00:00:34.000 Ask yourself, what is my task and what is my purpose?
00:00:38.000 If that answer is to save my country, this country will be saved.
00:00:44.000 War Room.
00:00:45.000 Here's your host, Stephen K. Bannon.
00:00:53.000 800 of the world's most prominent figures spanning the political spectrum
00:00:57.000 are calling for a ban on developing super intelligent AI.
00:01:01.000 New polling shows that 64% of Americans agree super intelligence
00:01:05.000 shouldn't be developed until it's proven safe.
00:01:08.000 Yeah, we're going to build super intelligence and we have no idea how to control it
00:01:11.000 and it's going to be so cool.
00:01:12.000 Please invest in us.
00:01:13.000 The single biggest fight is going to be over what are the values of the AIs.
00:01:17.000 That fight, I think, is going to be a million times bigger and more intense
00:01:22.000 and more important than the social media censorship fight.
00:01:24.000 You know more about AI, which has me more worried.
00:01:27.000 A lot of people, everybody's worried about it.
00:01:28.000 We're also hopeful about it.
00:01:29.000 Yeah.
00:01:30.000 But it changes so fast.
00:01:32.000 Here now with AI, we have evidence now that we didn't have two years ago when we last spoke
00:01:39.000 of what they call AI uncontrollability.
00:01:41.000 So this is the stuff that they used to say existed only in sci-fi movies.
00:01:45.000 We can create infinite universes.
00:01:48.000 This is like the fuel that we need.
00:01:50.000 There's never been a technical project of this complexity and this scale ever.
00:01:54.000 Two paths, two futures, teaching sand to think or being left in the dust.
00:02:04.000 Wow.
00:02:05.000 AI is really going to, you know, it's going to scale to the moon.
00:02:08.000 Have you ever looked at the moon and wondered if it was real?
00:02:12.000 I have.
00:02:13.000 And tonight, ladies and gentlemen, I have to tell you, the moon is fake.
00:02:17.000 What will it take to collapse the distance between idea and invention?
00:02:22.000 This technology saved my voice.
00:02:24.000 To build machines that build with us and wire the earth with thought.
00:02:31.000 You know, this whole thing is going to reshape the economy.
00:02:33.000 It'll take a few years, but it's going to reshape the economy.
00:02:36.000 And, you know, they're almost there now.
00:02:38.000 You know, these systems are different from ordinary software.
00:02:41.000 You don't write every line of code, right?
00:02:43.000 Building ordinary software, it's like building a skyscraper or something.
00:02:46.000 You can make the blueprint, you make everything to design.
00:02:49.000 This is, ironically, a bit more like biological.
00:02:52.000 It's organic or something.
00:02:53.000 You're growing these models, you said.
00:02:55.000 That's, that can't be real.
00:02:58.000 It's a screen.
00:02:59.000 The moon's a screen.
00:03:00.000 Everything we've been looking at, it's fake.
00:03:03.000 People love the new Sora.
00:03:06.000 And I also think it is important to give society a taste of what's coming on this co-evolution point.
00:03:14.000 So, like, very soon, the world is going to have to contend with incredible video models that can deepfake anyone.
00:03:19.000 We're attempting a classical goetic conjuration in the manner of the Lemegeton.
00:03:23.000 Thanks to our investors for trusting us with the seed round.
00:03:25.000 This one is for you.
00:03:26.000 By the names that gird the world, Adonai Elohim Tetragrammaton.
00:03:28.000 That will mostly be great.
00:03:29.000 There will be some adjustment that society has to go through.
00:03:32.000 And just like with ChatGPT, we were like, the world kind of needs to understand where this is.
00:03:37.000 Very soon, we're going to be in a world where, like, this is going to be everywhere.
00:03:40.000 There should be a prohibition against building this stuff.
00:03:43.000 At least until there's a broad scientific consensus that this can be controlled and safe.
00:03:50.000 And also until Americans actually want it.
00:03:53.000 And we released a poll today also showing that actually less than 5% of all Americans want to race to superintelligence.
00:04:01.000 Good evening.
00:04:04.000 I am Joe Allen, and this is War Room Battleground.
00:04:07.000 The race to artificial superintelligence is on.
00:04:11.000 Frontier companies such as OpenAI, Google, xAI, and Anthropic are hurtling towards what they believe will be artificial general intelligence,
00:04:22.000 a system that is able to do any cognitive task a human can do, and then, presumably, artificial superintelligence,
00:04:30.000 a system smarter than every human being collectively on Earth, otherwise known as a digital deity.
00:04:39.000 In the meantime, the AIs we have right now are turning people's brains into mush and turning their hearts into vessels for cold computer code.
00:04:49.000 We've seen many, many instances of AI psychosis and a few tragic instances of children who turned to AI for solace and for advice,
00:05:02.000 and the AIs instructed them on how to kill themselves.
00:05:06.000 One of these is Adam Raine, who turned to ChatGPT, and GPT explained to him how to tie a noose and urged him not to go to his parents when he felt suicidal ideation.
00:05:21.000 Now, we've wondered how many people out there are thinking like this.
00:05:26.000 How many people are using AI for this sort of thing?
00:05:30.000 OpenAI just released the numbers that they have internally.
00:05:35.000 Wired reports that, according to the OpenAI numbers,
00:05:40.000 0.07% of their 800 million users are experiencing something like AI psychosis.
00:05:49.000 Now, that might not sound like much, but that ultimately ends up being 560,000 people that we know of just using GPT who are going nuts.
00:06:01.000 They also reported that 0.15% are experiencing suicidal thoughts and turning to the AIs for advice.
00:06:12.000 That is 1.2 million people.
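The user-share figures quoted above can be sanity-checked with a few lines of Python. This is just illustrative arithmetic: the 800 million figure is the ChatGPT user base cited in the episode, and the percentage shares are the ones attributed to OpenAI's internal numbers.

```python
# Arithmetic check of the user-share figures quoted above.
# Assumes the 800 million ChatGPT user base cited in the episode.
users = 800_000_000

psychosis_share = 0.0007  # 0.07% reportedly showing signs of AI psychosis
suicidal_share = 0.0015   # 0.15% reportedly expressing suicidal thoughts

print(f"AI psychosis: {round(users * psychosis_share):,}")       # 560,000
print(f"Suicidal ideation: {round(users * suicidal_share):,}")   # 1,200,000
```

The point of the check is that shares which sound negligible (a few hundredths of a percent) scale to hundreds of thousands or millions of people at an 800-million-user base.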
00:06:16.000 One imagines that the numbers are probably larger, especially when you think about the additional models,
00:06:22.000 Anthropic, Meta, Grok, and one imagines that they will continue to increase.
00:06:28.000 At the same time, you have all of these companies pushing for every person on Earth to adopt it,
00:06:37.000 to turn to the AIs for truth, such as xAI, which just released Grokipedia, a competitor to Wikipedia,
00:06:46.000 seemingly tailored to basically please a conservative audience, a kind of tailor-made reality just for you.
00:06:56.000 Also, you have Amazon announcing 30,000 layoffs by the end of the year,
00:07:03.000 presumably to replace the workers either with automation or with Indians.
00:07:08.000 And according to a leaked document, they foresee some 600,000 such firings in the near future.
00:07:15.000 So besides driving people nuts, the AI companies seem to plan to put everyone out of work, the greater replacement.
00:07:25.000 And that's not to mention the existential risks of gradual disempowerment or perhaps sudden annihilation.
00:07:33.000 To discuss that, we will have Connor Leahy on shortly.
00:07:37.000 But I just want to turn to the most immediate concern, at least in my mind.
00:07:43.000 And that is children.
00:07:45.000 These companies are pushing to incorporate AI into every classroom, to turn every child possible into a human AI symbiote,
00:07:55.000 to lift up AI as the highest authority on what is and isn't real.
00:08:01.000 In such an environment, children, as always, are going to need critical thinking.
00:08:07.000 And here to discuss that is Dr. Shannon Croner, award-winning children's book author,
00:08:13.000 whose book, Let's Be Critical Thinkers, was just released on October 28th.
00:08:19.000 It is fantastic.
00:08:21.000 And I think that every child should absolutely, if not have an AI, should have a copy of Let's Be Critical Thinkers.
00:08:29.000 Shannon Croner, thank you so much for joining us.
00:08:31.000 Thank you so much for having me on today.
00:08:33.000 Shannon, can you give the audience...
00:08:37.000 I'm sorry, go on.
00:08:38.000 Go ahead.
00:08:39.000 Please.
00:08:40.000 I was just going to say that everything you were saying, it's absolutely accurate.
00:08:43.000 And that so many children today are, you know, turning to AI, and it's very scary.
00:08:49.000 And earlier, you showed a clip about Sora.
00:08:53.000 That's another thing.
00:08:54.000 I think Sora is going to really damage the lives of many people by, you know, these deep fakes that it's going to create.
00:09:02.000 There are so many children today that are just turning to AI instead of using any kind of critical thinking skills.
00:09:09.000 There was a study I was just reading that found that in May 2025, 69% of high school students were using ChatGPT at least weekly to help them with their homework and school assignments.
00:09:24.000 And then a survey from this past summer showed that 93% of students have used AI at least once or twice to complete an entire assignment.
00:09:39.000 Yeah, anecdotally, I've been going to different colleges around the country, and that's all I hear.
00:09:46.000 I hear from really good students that their peers are using it, not just for help, but to complete their assignments for them, to do all their thinking for them.
00:09:55.000 And it's pretty disturbing.
00:09:57.000 As far as the Sora thing goes, you know, if the audience remembers the clip of John F. Kennedy and the guy looking up at the fake moon and all that.
00:10:04.000 Well, I was in San Francisco when Sora was released and some guys were playing with it.
00:10:10.000 They said, well, what would you like to see, Joe?
00:10:12.000 And I said, I don't want to have anything to do with it.
00:10:13.000 But then I thought, well, maybe this would be funny.
00:10:15.000 And in a couple of seconds, just say, you know, John F. Kennedy, say the moon's not real.
00:10:20.000 And the machine created all the context, all the video out of that.
00:10:23.000 It just makes what used to be the province of imagination and creativity as easy as clicking a button.
00:10:31.000 So turning to your book, your book has just been released.
00:10:35.000 It has already sold.
00:10:38.000 I don't want to put the numbers out there unless you would like to, but it is already selling very, very well.
00:10:43.000 So give the audience some sense of what Let's Be Critical Thinkers is about, who it's targeted to, and what parents can get out of it.
00:10:55.000 Yeah, it actually just came out, you know, today.
00:10:59.000 And it immediately sold out, several, you know, thousands.
00:11:05.000 But it will be back in stock within a day or two.
00:11:08.000 So people can go to Amazon and purchase that.
00:11:12.000 But, you know, the thing is that children really need to use their brain, use their mind.
00:11:18.000 You mentioned creativity.
00:11:20.000 AI is actually going to stunt creativity.
00:11:23.000 It's going to create intellectual laziness.
00:11:26.000 And what my book really encourages is critical thinking.
00:11:30.000 Let's Be Critical Thinkers teaches children how to critically think about the world around them.
00:11:36.000 What questions to ask.
00:11:38.000 Why they should not just believe whatever it is that they are told by the media or by the government.
00:11:45.000 That they need to be inquisitive, ask questions, challenge the narrative.
00:11:51.000 I think that that's really important, especially for, you know, a strong society in the future.
00:11:56.000 If we just simply allow kids to rely on AI, they are going to have zero critical thought.
00:12:03.000 The computer will be doing the thinking for them.
00:12:06.000 They will have stunted cognitive development.
00:12:09.000 And so, you know, I really want to encourage children to read more books.
00:12:14.000 We cannot just eliminate books and or critical thinking skills.
00:12:19.000 And so, you know, what this book does is it has taken all the different policies that we had to deal with throughout the pandemic.
00:12:28.000 So masks, lockdowns, social distancing, and the mandated vaccines.
00:12:33.000 And the main character is a curious little girl.
00:12:38.000 Her name is Darlene Data.
00:12:39.000 She wants to become an investigative journalist when she grows up.
00:12:42.000 She explains what that means.
00:12:44.000 And she teaches the reader how to become a critical thinker, how to ask the right questions, what propaganda is, how to spot propaganda, the importance of informed consent, why we should never have censorship.
00:12:57.000 And I think that these are all really important things for children of today to know and to learn, because critical thought is not really taught in schools anymore, especially now with the introduction of AI.
00:13:12.000 If a child is given an assignment to do a report, to research something, all they have to do is go to ChatGPT, you know, or Grok or any one of these AI sources, type in the question, the research topic that they need to study or write about, and it's all just given to them.
00:13:31.500 They're not taught any research skills.
00:13:34.500 And so all the information is just given to them.
00:13:36.500 They can actually, if they are, let's say, in sixth grade, seventh grade, whatever it is, ask ChatGPT to write the paper that they need from the perspective of a seventh grader.
00:13:50.500 And so it will write in the language of a seventh grader, and they can even ask ChatGPT to cite their sources, and ChatGPT will do that.
00:14:00.480 So now children are not even learning how to cite sources.
00:14:03.800 And so really, we are going to see, very quickly, a dumbing down of today's children if we don't step in and really teach them critical thought.
00:14:16.140 You know, the sub-theme of your book, the pandemic, the great germ panic of 2020 and the subsequent COVIDian cult, that was an enormous disaster for education.
00:14:30.020 You had all these kids who didn't go to school, many of them who became little mini screen monkeys.
00:14:35.960 They were given ed tech and laptops and told this is what school is all about.
00:14:40.320 And on the heels of that now, you have the push to give them all AI.
00:14:45.480 I'm not going to go out on a limb and say it's all part of a master plan.
00:14:49.660 In fact, it's so distributed, it doesn't seem like any human has made the plan, but maybe someone a little bit further up the food chain.
00:14:56.660 On that note, though, a lot of parents and teachers have told me that they're really concerned that schools will basically crank out an entire generation of incompetent graduates and ultimately render diplomas, you know, valueless.
00:15:15.100 There really won't be any kind of prestige attached to them because everyone will assume they cheated.
00:15:19.980 I think that there will be a lot of students of their own volition and a lot of parents who urge their children to not go with that stream, to go against the current, to be critical thinkers, to be independent thinkers.
00:15:36.160 And I am fairly convinced that there will be enough of them to get us through.
00:15:41.460 Am I too optimistic, Shannon?
00:15:44.540 Well, I hope that you're correct.
00:15:46.400 You know, I don't really know, because what we saw throughout the pandemic was just a lot of people who obeyed.
00:15:54.840 All they did was follow what they were told, you know, a bunch of sheep.
00:16:00.140 They wore masks and they used zero critical thought.
00:16:04.340 They wore a mask to enter into a restaurant and then sat down at the table and could suddenly take their mask off.
00:16:12.800 Because if they were eating food or drinking a drink, germs didn't affect them.
00:16:18.960 There's no critical thought there.
00:16:20.820 And so, you are very optimistic there, and I hope that you're correct.
00:16:25.700 But I'm a little worried, because all these children were locked down and out of school and stuck online for, you know, this Zoom education.
00:16:40.200 Their scores are showing the decline in education; their reading scores and math scores are at an all-time low.
00:16:51.340 The recent information on fourth graders is that 40% of fourth graders are below average reading level.
00:17:05.300 It's really sad for eighth graders right now; they're scoring below a basic level of reading.
00:17:16.280 Math scores are lower than ever before.
00:17:20.040 And, you know, I don't know how we catch these children up.
00:17:25.060 They literally missed almost two years of education in school.
00:17:30.980 Not only that, but their mental health was affected.
00:17:34.180 Their peer relationships were affected.
00:17:36.120 All of these things, you know, play a part and we're seeing more children who are depressed and have anxiety more than ever before.
00:17:47.160 And so all of this plays a role in today's society, in their education and their critical thought and how they function in the world.
00:17:55.880 What the pandemic did to children was very damaging.
00:17:59.040 Yeah, more specifically, what the authorities brought down on children: the lockdown-to-AI-symbiote pipeline.
00:18:10.240 Again, you know, Klaus Schwab in his infinite wisdom certainly saw what was going on and in fact encouraged it in his book, The Great Reset.
00:18:19.340 Calling the pandemic a narrow window of opportunity to digitize those people who would otherwise not be comfortable with screens in their faces all the time and mass surveillance.
00:18:31.820 I wonder, Shannon. So, the bleak picture of the current state of education, the low test scores, the low competency evaluations.
00:18:43.400 I think people hear it and they get really depressed.
00:18:47.020 I know I don't feel better having heard it.
00:18:49.520 I wonder where are parents and where are schools succeeding?
00:18:54.420 Do homeschooled children do better?
00:18:56.340 Do private schools, charter schools, Christian schools, do they do better?
00:18:59.800 Are there public schools in different regions that do better than others?
00:19:04.740 Is there a way to give the audience a little levity because they know I'm not going to give it to them?
00:19:10.140 Well, you know, I'm in California where, you know, our vaccine, we have vaccine mandate laws.
00:19:18.140 We have the strictest vaccine mandate laws in the nation.
00:19:20.780 And so a child who's just missing one single vaccine is not allowed to actually go to school, public school, private school.
00:19:29.060 The only option for education is homeschool.
00:19:32.020 And so, you know, I'm lucky enough to have the ability, you know, the financial ability, to be able to homeschool my children.
00:19:43.080 But many people cannot do that.
00:19:47.720 And so a lot of people have actually had to leave California in order to get their children an in-school education.
00:19:57.260 However, I will say, like, I absolutely love homeschool because I have a lot more control over what my children are learning.
00:20:05.340 They are not being, you know, fed this woke ideology that so many of the public schools, especially here in California, are teaching our children.
00:20:17.340 And so I'm somebody who, you know, from just my personal perspective and my personal experience, I absolutely love homeschool.
00:20:25.620 I believe that Christian schools are also a really great option for children.
00:20:30.200 And they seem to be a little bit more strict with the education that they are teaching their children.
00:20:39.480 I would say if a parent could send their child to a private school, you know, however, private schools are extremely expensive.
00:20:49.140 And so, you know, sometimes there are private schools that literally cost college tuition.
00:20:54.880 And so, you know, I would say that if you could send your child to a private school or a Christian school, that's much better than a public school, where, you know, it's kind of like anything goes.
00:21:10.720 And that's pretty scary, because when a child is in a public school where anything goes and they're taught that there's, you know, more than two genders and all the other stuff, there's not really any room for them to challenge the narrative.
00:21:27.140 And what my book, Let's Be Critical Thinkers, does is it actually teaches children, you know, to challenge the narrative.
00:21:34.000 I think that it's really important to have room for debate.
00:21:38.220 And when a child is in a public school and being taught that there's more than two genders, and if they say, no, there's, you know, female and male, that kid is going to get in trouble.
00:21:53.580 And that's really dangerous for society.
00:21:57.420 And so what my book really encourages children to do is it empowers them and teaches them how to critically think, ask the right questions, and kind of challenge the narrative in a very respectful way.
00:22:09.780 You know, on that note, last question, and it's a bit of a challenging one, I think this really shows how history is a bit cyclical at times.
00:22:22.140 So now you have conservatives at the vanguard of critical thinking, not that conservatives haven't always been critical thinkers, but by and large, conservatives have tried to maintain a status quo.
00:22:34.700 Hence even the term. In the 60s and 70s, you saw the real push for critical thinking, kind of critical theory, the culture of critique, the leftist sort of vanguard pushing against the, at the time, Christian, European, and American hegemony.
00:22:55.640 Now we're in a very, very different situation.
00:22:57.820 So my question is, as children are being taught critical thinking, how do you balance that with respect for tradition?
00:23:06.540 How do you teach them to be critical thinkers without them going off the rails and becoming, you know, rabid leftists who hate you and the country that you brought them into?
00:23:15.920 Well, I think a lot of that actually, you know, starts within the home. I think that parents need to really talk to their children, more than ever before.
00:23:27.700 That is something that I do with my children. Especially when we went through the pandemic, I was pointing out all the different things, like propaganda messaging, and asking their opinions.
00:23:40.940 I think it's really important for parents to have conversations and engage with their children, ask them questions, debate is healthy.
00:23:50.180 And, you know, and that's actually something that we really saw, you know, with Charlie Kirk, for instance, he really welcomed debate.
00:23:56.600 Right. And unfortunately, you know, he was killed for that. But we need to really bring debate back. And again, not in any kind of a negative way, not in an argumentative way, but really healthy debate, sharing differences of opinion and kind of going back and forth.
00:24:17.680 And having that discourse is so important in a well-functioning society. And opinions should be allowed.
00:24:26.400 What we saw during the pandemic was the censorship. Myself, you know, I lost my entire social media during the pandemic because I was sharing information about the COVID shot. And, you know, I was highly censored.
00:24:46.440 I don't think that any kind of researched medical opinions should ever be censored. And we're seeing that right now with Secretary Kennedy, how the Democrats are really kind of trying to shut him down or get him kicked out, because he is sharing an opinion that they don't like. Yet he is actually citing research for that. And so I think it's really important. Yeah.
00:25:10.320 Shannon, where can people find the book? Where would you direct them to purchase this? Any parents or grandparents that want their kids to be critical thinkers?
00:25:19.780 The book is sold on every major, you know, book selling website, Barnes & Noble, Amazon, and it's out now.
00:25:28.480 And what's your social media, Shannon?
00:25:31.640 My X is Dr. Shannon Croner, and my Instagram is Dr. Shannon Croner.
00:25:40.320 All right. Well, thank you very much. We really appreciate it. I hope parents will turn their children into critical thinkers.
00:25:48.600 Now, we need to think about gold. Is the continued divide between Trump and the Federal Reserve putting us behind the curve again?
00:25:59.160 Consider diversifying with gold through Birch Gold Group.
00:26:04.020 And Birch Gold makes it incredibly easy for you to diversify your savings into gold.
00:26:08.020 If you have an IRA or an old 401k, you can convert that into a tax-sheltered IRA in physical gold.
00:26:17.180 Or just buy some gold and keep it in your safe or under your bed.
00:26:22.140 Keep it under your pillow. Dream of gold.
00:26:24.880 But first, get educated.
00:26:27.120 Birch Gold will send you a free info kit on gold.
00:26:30.140 Just text BANNON, B-A-N-N-O-N, to the number 989898.
00:26:36.600 Again, text BANNON to 989898.
00:26:41.360 Consider diversifying a portion of your savings into gold.
00:26:44.760 That way, if the Fed can't stay ahead of the curve for the country, at least you can stay ahead for yourself.
00:26:51.100 We'll be back in just a moment with Connor Leahy to discuss AI doom.
00:26:55.900 I want to tell you about a new offer from our sponsor, Birch Gold, Veterans Day Free Silver.
00:27:05.600 Buy gold and get free silver.
00:27:07.200 That's right.
00:27:07.740 For every $5,000 purchased from Birch Gold Group this month in advance of Veterans Day,
00:27:14.500 they will send you a free patriotic silver round that commemorates the Gadsden and American flags.
00:27:20.860 Look, gold is up over 40% since the beginning of this year, and Birch Gold can help you own it by converting an existing IRA or 401k into a tax-sheltered IRA in physical gold.
00:27:33.280 Plus, they'll send you free silver honoring our veterans on qualifying purchases.
00:27:38.840 If you're current or former military, Birch Gold has a special offer just for you.
00:27:43.340 I encourage you to diversify your savings into gold.
00:28:00.240 Text my name, Bannon, B-A-N-N-O-N, to number 989898 for a free info kit and to claim your eligibility for free silver with qualifying purchase before the end of the month.
00:28:11.300 Again, text my name, Bannon, to 989898.
00:28:15.960 You will get the ultimate guide, which is free, for investing in gold and precious metals in the age of Trump.
00:28:22.220 Do it today.
00:28:23.480 When you're buried in credit card and loan debt, it's only human nature to put it off and say,
00:28:28.560 hey, I'll deal with this later.
00:28:30.720 If that's you, here's a hidden fact the debt strategy experts at Done With Debt shared with me.
00:28:36.400 They discovered a little-known strategy that works in your favor to dramatically reduce or even erase your debt altogether.
00:28:44.440 They aggressively engage everyone you owe money to in September, and here's why.
00:28:49.140 They know which lenders and credit card companies are doing year-end accounting and need to cut deals.
00:28:54.420 They even know which ones have year-end audits and need to get your debt off the books quickly.
00:28:59.580 That means you need to get started with Done With Debt now.
00:29:03.940 Done With Debt accomplishes this without bankruptcy or new loans.
00:29:07.700 In fact, most clients end up with more money in their pocket the first month.
00:29:12.980 Get started now while you still have time.
00:29:15.180 Go to donewithdebt.com and talk with one of their specialists for free.
00:29:20.840 Donewithdebt.com.
00:29:22.280 Donewithdebt.com.
00:29:23.760 Take advantage of this.
00:29:24.940 These people are aggressive, they're smart, and they're tough.
00:29:27.580 You want them on your side.
00:29:29.740 Donewithdebt.com.
00:29:31.340 If you're a homeowner, you need to listen to this.
00:29:34.280 In today's AI and cyber world, scammers are stealing home titles with more ease than ever, and your equity is the target.
00:29:43.220 Here's how it works.
00:29:44.400 Criminals forge your signature on one document,
00:29:46.960 use a fake notary stamp, pay a small fee with your county, and boom, your home title has been transferred out of your name.
00:29:54.560 Then they take out loans using your equity or even sell your property.
00:29:59.720 You won't even know it's happened until you get a collection or foreclosure notice.
00:30:06.180 So let me ask you, when was the last time you personally checked your home title?
00:30:12.520 If you're like me, the answer is never.
00:30:14.900 And that's exactly what scammers are counting on.
00:30:18.260 That's why I trust Home Title Lock.
00:30:21.440 Use promo code Steve at HomeTitleLock.com to make sure your title is still in your name.
00:30:28.120 You'll also get a free title history report plus a free 14-day trial of their million-dollar triple lock protection.
00:30:35.420 That's 24-7 monitoring of your title.
00:30:37.700 Urgent alerts to any changes, and if fraud should happen, they'll spend up to $1 million to fix it.
00:30:45.640 Go to HomeTitleLock.com now.
00:30:47.800 Use promo code Steve.
00:30:49.340 That's HomeTitleLock.com, promo code Steve.
00:30:52.080 Do it today.
00:30:53.980 Hello, America's Voice family.
00:30:55.580 Are you on Getter yet?
00:30:56.920 No.
00:30:57.440 What are you waiting for?
00:30:58.660 It's free.
00:30:59.400 It's uncensored.
00:31:00.420 And it's where all the biggest voices in conservative media are speaking out.
00:31:04.940 Download the Getter app right now.
00:31:06.760 It's totally free.
00:31:07.540 It's where I put up exclusively all of my content 24 hours a day.
00:31:11.300 You want to know what Steve Bannon's thinking?
00:31:13.040 Go to Getter.
00:31:13.660 That's right.
00:31:14.440 You can follow all of your favorites.
00:31:16.240 Steve Bannon, Charlie Kirk, Jack Posobiec.
00:31:18.500 And so many more.
00:31:20.140 Download the Getter app now.
00:31:21.460 Sign up for free and be part of the movement.
00:31:25.460 All right, War Room Posse, welcome back.
00:31:28.240 As you well know, I am no fan of artificial intelligence or most any technology unless I have to use it to make a living.
00:31:35.700 And even then, I got to say, this isn't exactly a comfortable situation staring into a camera and speaking to ghost-like wraiths somewhere out in America.
00:31:45.760 But artificial intelligence in particular is something that I have basically no use for.
00:31:51.040 As a writer, I think if you use AI to assist in your writing, you are no longer a writer.
00:31:57.760 You are basically a vessel for algorithms.
00:32:00.280 And if you don't list GPT as a co-author, then you're also a plagiarist.
00:32:05.340 I know it's different for other professions, doctors, soldiers, financiers.
00:32:10.340 But from my perspective, AI is less than useless.
00:32:14.580 It is damaging.
00:32:15.560 First and foremost, the psychological and social damage of having a bunch of human AI symbiotes,
00:32:22.520 brain dead, guided by algorithms as if they were ants following pheromone trails.
00:32:27.480 And, of course, the economic threat of being replaced: replacing writers with chatbots, replacing teachers with virtual avatars, replacing soldiers with drones.
00:32:40.840 All of this does not bode well.
00:32:43.140 And even then, that doesn't really account for the most extreme warnings that we hear about where this all could go.
00:32:51.120 If you have, first, artificial general intelligence, as smart as any human being at anything, then you already have the economic greater replacement.
00:33:02.220 But should that general intelligence begin to self-improve and become a superintelligence, some sort of godlike entity that is smarter than every human being on Earth,
00:33:13.460 by its nature, you would not be able to control it.
00:33:17.620 By its nature, you wouldn't even be able to comprehend what it's doing.
00:33:21.840 Someone who has tracked this for a long time and was at the forefront of warning about the most extreme existential risks of artificial intelligence is Connor Leahy,
00:33:32.720 the CEO at Conjecture and Advisor to Control AI.
00:33:37.640 Connor has written a piece that is online right now, The Compendium.
00:33:42.880 You can find it at thecompendium.ai, and it was released almost a year ago to the date.
00:33:50.060 I think that his arguments have been vindicated, and I hope that his projections have not been, but we shall see.
00:33:57.100 Connor, thank you very much for coming on.
00:33:59.120 It's a pleasure to talk to you.
00:34:01.640 Thank you so much for having me.
00:34:04.200 So, Connor, the Future of Life Institute just released a statement last week on superintelligence,
00:34:10.880 calling for a ban on development towards artificial superintelligence, with a couple of caveats that I'm not a fan of.
00:34:18.060 But I'm curious, what is your read on that?
00:34:21.620 What kind of impact will that have?
00:34:23.180 And, I mean, you support a ban on superintelligence.
00:34:26.560 Is this going to be effective?
00:34:28.440 Will it make an impact?
00:34:29.560 So, the thing that I think is so important about statements like this, and this is one of the strongest, broadest statements of this kind,
00:34:36.660 is that there really are a lot of people on this that you may not necessarily suspect.
00:34:40.920 A lot of, like, tech luminaries.
00:34:42.900 You know, you've got Steve Wozniak, co-founder of Apple.
00:34:45.560 You've got, you know, Bannon himself on here as well.
00:34:49.360 You've got a lot of top AI professors, including Nobel Prize winners, like, just lots of people across the spectrum.
00:34:57.360 And because it's a public letter, no one can deny that this is real.
00:35:01.100 There is a thing that propagandists try to do a lot, where they take an issue like this and they pretend it's not real.
00:35:08.340 No one really believes that.
00:35:09.920 And this is obviously, you know, you can't claim that anymore if you have a letter like this.
00:35:13.880 We have, you know, some of the smartest people in the world, from across the world, saying, right here, this is actually dangerous.
00:35:20.660 This should actually be banned.
00:35:22.300 So, this is very important when you do politics in general, is that it has to be a topic you're allowed to talk about, in a sense.
00:35:30.340 And there's kind of, it's getting harder and harder for the people who are trying to dismiss or hide or propagandize away these kinds of risks to deny that there is an actual thing here.
00:35:40.100 There is, it's harder to hide from our politicians, from the general public, what's actually going on here.
00:35:46.900 Like, as you said, I've been worried about these issues for a long, long time.
00:35:50.280 And now it's really great to see that more and more of the general public and, you know, media is, you know, taking these risks seriously, is discussing these issues, because it does really affect all of us.
00:36:00.380 So, will this in itself, by itself, lead to a ban?
00:36:04.220 Probably not.
00:36:04.720 I think there's a lot of hard work to be done, but this is, you know, in many ways kind of like, you know, a warning shot, a flare, you know, that we should get going.
00:36:15.340 One thing that really struck me about your piece, the compendium or essay, manifesto, as it were, is that one of the solutions, perhaps the primary solution you put forward, is to cultivate a sense of civic duty.
00:36:28.640 That if people really care about their societies and their families, they wouldn't want such a thing as artificial superintelligence to come into existence.
00:36:37.420 Is that a fair read?
00:36:39.740 Yes, I think this is very, very important.
00:36:41.580 So, like, we have run polls across multiple countries, bipartisan across the world.
00:36:45.740 There is an unbelievable, like, historically almost unprecedented level of support for this idea of, you know, regulating dangerous superintelligent AI.
00:36:56.440 Because it's a very simple argument.
00:36:58.060 It's a very, very simple argument.
00:36:59.680 If you make something that is smarter than all humans and you don't know how to control it, how exactly does that turn out well for humans?
00:37:06.500 Like, you know, I'm open to the argument, but, like, I have not heard anyone make a good case here.
00:37:11.000 And, like, there's this deep thing where we have all of these, like, you know, tech companies and the people behind them building these extremely powerful technologies.
00:37:19.740 And actually, building is leading.
00:37:22.060 It's very important to understand that AIs are quite different from other software.
00:37:25.460 They're not really written with, like, lines of code.
00:37:28.180 That's how normal software is made.
00:37:29.680 They're more, like, grown.
00:37:31.440 You have these big supercomputers that kind of, like, take in, like, massive amounts of data and they crunch it.
00:37:37.640 And they produce, they grow this program called a neural network.
00:37:42.220 And this is how all modern AI works.
00:37:44.460 And the thing is, we don't really understand how these things work.
00:37:48.080 Not really.
00:37:48.800 Like, we don't really understand what's going on inside of them.
00:37:51.360 And they constantly do all kinds of things that we don't really understand.
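To make the "grown, not written" idea concrete, here is a minimal sketch (not from the broadcast, purely illustrative): instead of hand-coding the rule y = 2x, we start with a random weight and let the data nudge it into shape, a toy version of the training process behind neural networks.

```python
import random

# Toy illustration of "growing" a program instead of writing it:
# nobody hand-codes the rule y = 2*x below; a single random weight
# is adjusted by data until the program behaves that way.
random.seed(0)

data = [(x, 2.0 * x) for x in range(10)]  # examples of the desired behavior

w = random.random()  # the whole "program" is this one number, initially random
lr = 0.01            # learning rate: how big each adjustment is

for _ in range(200):          # crunch the data over and over
    for x, y in data:
        error = w * x - y     # how wrong the current "program" is
        w -= lr * error * x   # nudge the weight to shrink the error

print(round(w, 2))  # the learned weight lands at 2.0
```

A real neural network does the same thing with billions of weights instead of one, which is why nobody can read off what it has actually learned.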
00:37:54.880 So there's kind of, like, two possible worlds we live in, right?
00:37:58.640 There's one world where these people do control these super powerful things.
00:38:02.960 They keep getting more powerful, keep getting integrated more into our lives, our economy, etc.
00:38:06.120 Or the world in which they don't control them.
00:38:08.920 And I'm not sure which one is worse.
00:38:11.000 I think both of these are, you know, very dangerous worlds to be in and not worlds that people want to be in.
00:38:16.940 And people have made their voices clear in polls across the world that this is not what people want.
00:38:21.240 And I truly believe that people have a right and a stake to their lives, their safety, the lives and the safety of their friends, their family, their nation.
00:38:30.440 This is what democracy is built upon.
00:38:32.260 And we don't let random people build dangerous things that threaten our lives.
00:38:37.640 That's illegal.
00:38:38.860 And I think the same thing should be applied here.
00:38:41.120 Before we return to that idea that this should be stopped, it maybe could be stopped, and what the possible paths are to get there, you open up the compendium.
00:38:52.380 And, again, this was published a year ago almost to the day.
00:38:55.680 You open up the compendium talking about the state of the art of AI.
00:38:59.580 And at that time, people were, by and large, in the dark.
00:39:03.360 They didn't understand the complexities, the non-deterministic nature of neural networks, and the black box phenomenon.
00:39:11.240 I think the understanding is quite a bit better now, a year later, broadly speaking.
00:39:16.400 But the technology continues to develop.
00:39:19.820 You're always shooting at a moving target.
00:39:21.900 So if you could, what do you see, having had a year now to see the development of the technology, how different is it today?
00:39:31.660 How different of a world is it with GPT-5 and Grok 4 than it was on Halloween in 2024?
00:39:38.980 Yeah, it's a great question.
00:39:42.400 It feels like so long ago, which kind of is part of the problem, right?
00:39:45.660 Most technologies, you know, might take a couple of years to see a new generation of huge breakthroughs.
00:39:50.400 What we're seeing with AI is really, like, every three months, every three weeks, there is a massive breakthrough.
00:39:55.380 And this is exactly what we've seen over the last year as well.
00:39:57.800 And my prediction is this is what we're going to see next year as well.
00:40:00.480 A year or so ago, you know, we have pretty good chatbots and stuff like this.
00:40:03.360 But a thing that, for example, really didn't work quite so well is agents.
00:40:05.880 So this is autonomous systems that could write code, that could go search for information, that could, you know, solve kind of tasks.
00:40:13.060 And these have gotten radically better over the last year.
00:40:15.720 They're not perfect by any means.
00:40:17.060 But, for example, a year ago, I never used AIs to help me with coding because I'm a pretty good coder and they weren't really helpful.
00:40:23.200 Now I use them for everything.
00:40:24.860 Like, I can just tell my AI, go into my code base and, like, figure out how to do this and, like, fix that.
00:40:30.740 And then it'll just, like, go look by itself.
00:40:33.320 It will pull up stuff to read.
00:40:35.060 It will fix various things, test a couple things, and then, you know, give me a report on what it did.
00:40:40.520 This wasn't possible or, like, barely possible a year ago.
00:40:43.340 Now it's very possible.
00:40:44.820 We're also seeing massive advancement in stuff such as world modeling.
00:40:48.200 So this is basically virtual worlds.
00:40:50.660 So you can have AIs generate full 3D worlds that you can, like, walk around in that are close to photorealistic.
00:40:57.520 This wasn't a thing a year ago.
00:40:59.520 And, yeah, it's only getting better from here.
00:41:02.280 So, you know, full, like, Star Trek holodeck type stuff is becoming more and more feasible the way it looks.
00:41:07.840 That one was a bit of a surprise to me.
00:41:09.640 The other stuff, like agents and so on, is kind of what I predicted.
00:41:13.520 We are in an exponential.
00:41:15.980 Things are getting exponentially faster.
00:41:17.580 You know, progress is not only keeping up.
00:41:20.700 It is getting faster.
00:41:22.620 And so every year, even more progress is happening.
00:41:25.320 We're getting even closer to these, like, truly autonomous, truly intelligent, or even super intelligent systems.
00:41:32.060 And I continue to not see us slowing down here.
00:41:35.840 I think there are many problems that still need to be solved.
00:41:39.140 AIs still struggle a bit, for example, with memory.
00:41:41.300 But I think with enough money and enough engineering time, those will be solved in due course.
00:41:48.480 I'm curious.
00:41:49.720 Maybe you can give a brother some advice.
00:41:52.220 It's very difficult to shoot at this moving target, right, as the technology keeps changing.
00:41:57.760 And around that, there's all this noise.
00:41:59.900 You have the deniers on one side, the dismissers, the doubters, those that say that this is all basically just a toy, an overpriced toy in a bubble that's just going to pop and everything's going to go away.
00:42:10.620 We'll go back to, I guess, social media and smartphones.
00:42:13.740 On the other side, you have this expectation that is pretty overtly religious that what they're building is digital God.
00:42:22.360 This God will be benevolent.
00:42:24.060 It will cure all disease and perhaps allow us to be immortal.
00:42:27.800 So between those two poles, you have this rapidly developing technology.
00:42:33.480 How do you communicate the intensity and the urgency of the problems associated with this rapid development while avoiding some of the extreme hype on the one side and getting past the doubters on the other?
00:42:49.800 I think this is a genuine tricky communications challenge.
00:42:55.200 It's not just a tricky communications challenge because it's hard to explain.
00:42:58.680 My experience has been that a lot of people are very reasonable and you can explain these things quite simply to people.
00:43:04.320 As I said, the basic argument of we shouldn't even attempt to build things that are smarter than us, pretty plausible.
00:43:10.880 Even if we don't have it, I think we shouldn't even attempt to go there.
00:43:14.800 It should be illegal to even try to build a superintelligence, never mind succeed.
00:43:19.180 So the way I usually think about this is that what the actual thing we want is we want to make it illegal to attempt to do this.
00:43:25.700 We want to restrict precursors, because if we wait until we see the first superintelligence, it's already way too late.
00:43:34.400 And I think this is a very common sense thing that most people can understand.
00:43:38.500 It's like, yeah, that actually seems like something we shouldn't do.
00:43:40.700 That seems pretty straightforward.
00:43:41.840 It's really interesting to bring up the religious aspect here, because I do actually think this is a very important one that sometimes gets underappreciated.
00:43:51.020 So I know a lot of people that work at these companies.
00:43:54.200 You know, I've gone to their parties in San Francisco before, like, you know, and it is to for many, not by all means all, but to many, many of these people, including many people in charge of technology, it is a religion.
00:44:06.760 It's transhumanism.
00:44:08.300 The reason they want to build superintelligence and the reason they want to do it as fast as possible is because they want to do it kind of before anyone notices what they're doing, because they want to live forever.
00:44:20.160 They want to become cyborgs.
00:44:21.640 They want to do whatever.
00:44:23.200 Right.
00:44:23.400 Like, I've heard some, I mean, really just awful things being said at these parties about how these people think about other humans and what, you know, should or shouldn't be done with them.
00:44:35.100 And I think there's a real aspect here that's quite important to understand: there is an ideological aspect to this as well.
00:44:41.920 And they don't want to be noticed.
00:44:44.340 They don't want people to realize what they're doing.
00:44:47.080 They want to delay things as much as possible.
00:44:49.780 It's not even that they want to win the argument; it's that they want to delay everybody.
00:44:53.180 They want to confuse everybody.
00:44:54.500 They want to distract from the very simple thing that we should be able to look at: like, hey, these people are doing things that are already harming people today, and it's only getting worse, and they don't have control over it.
00:45:05.620 And why should they have the right to even attempt to do something like this?
00:45:09.560 Like, if they succeed by their own lights, what they put in their own marketing copy, like, why are we letting people even try to do this?
00:45:19.300 Well, to close out, what do you envision as a legitimate path to just the simple the simple ask?
00:45:28.500 No super intelligence, no drive towards creating a digital God.
00:45:34.040 Legally speaking, how do you see it going forward?
00:45:36.800 National legislation, an international body enforcing it, treaties, agreements.
00:45:42.760 How do you see it going?
00:45:43.700 Especially you always hear, if we don't do it, China will.
00:45:47.420 How do you answer that?
00:45:48.540 And what do you see as a legitimate path to banning super intelligence?
00:45:51.960 I think it's very important to see here that China also has no interest in going extinct.
00:45:59.500 This is not to the benefit of the Chinese Communist Party or the Chinese people.
00:46:03.100 The same way that going extinct is not beneficial to the American people or the American government.
00:46:08.040 That doesn't mean that there isn't a real...
00:46:09.600 Real quick on that, sorry to interrupt, but just real quick on that, maybe not, maybe Xi Jinping and his various ministers don't, but presumably neither do Sam Altman or Elon Musk, so on and so forth.
00:46:23.160 So should we assume that China wouldn't push forward just as American companies are pushing forward?
00:46:28.440 I don't think we should assume that at all, actually.
00:46:31.500 I think this is, in many sense, should be seen as a Cold War situation.
00:46:36.720 I think this is a very, very hard problem.
00:46:39.120 There is actual competition happening, and denying that would be ridiculous.
00:46:43.480 What I'm saying here is that there are ways forward here in that what needs to happen is kind of the same things we did in the Cold War with the USSR,
00:46:51.220 is that it's hard, but we have to find international regimes, ways of regulating, mutually enforceable agreements.
00:47:00.080 The way I like to think about it is that at some point somewhere, we have to have some kind of way of mutually verifiable agreements to not build superintelligence.
00:47:10.740 It should not be pure trust.
00:47:12.300 That's not how things work.
00:47:13.960 Trust, but verify.
00:47:15.080 I think this is a solvable technical problem, for what it's worth.
00:47:17.960 The same way that, for example, we can detect nuclear detonations and also nuclear enrichment facilities extremely effectively,
00:47:24.740 including in countries that are being non-cooperative.
00:47:27.680 I think very similar things can be done for superintelligence.
00:47:30.460 You need massive data centers that need massive amounts of energy and very specific hardware.
00:47:35.480 There are ways to control and detect such operations.
00:47:40.020 There are ways to make this happen.
00:47:41.920 But I want to be very clear.
00:47:43.040 This is hard.
00:47:43.860 This is hard.
00:47:44.440 There needs to be some kind of way for us to deal with it.
00:47:48.180 I don't think it makes sense, you know, for just like, you know, one country to, you know, say something.
00:47:55.120 It's a thing that we have to do at a large scale.
00:47:57.980 But I also think it's definitely not like that now. Do you feel like the USA is currently in charge of AI?
00:48:03.780 I think the companies are in charge, which I think is a very different thing from saying the US is in charge.
00:48:08.020 Yeah, so a multilateral approach before unilateral, you would say, you wouldn't necessarily recommend the US government ban US companies as opposed to kind of pushing more towards something more international.
00:48:24.260 The thing I would recommend to the US government is I do think the US government should have more control over what happens within its borders.
00:48:30.760 I think at the moment, the US government has very little.
00:48:34.440 I'm not an expert on this.
00:48:36.040 I will say the US government may have, you know, secret projects I'm not aware of.
00:48:39.820 But my understanding is that the US government has a relatively very light touch on these companies, that these companies are mostly able to act with impunity.
00:48:47.780 They're able to build their data centers in foreign countries.
00:48:49.920 They're able to ship their data, you know, including to hostile countries.
00:48:53.060 I have heard from insiders in these companies that a lot of the AI training data and stuff is stored on servers in countries that, you know, are not friends of the United States.
00:49:03.460 So in a sense, I think it would be great if the United States government had very good understanding, very good transparency and very good oversight over what exactly is happening here.
00:49:14.600 I would like if the United States population could have a vote on what are we going to allow these companies to do.
00:49:23.820 I am, again, I believe in democracy.
00:49:26.720 I think, you know, the people should be able to decide.
00:49:29.040 I think our elected representatives should have a say in how much risk the American public is exposed to from companies, including US companies.
00:49:37.500 But ultimately, yes, ultimately, we need a multilateral agreement at some point.
00:49:41.560 At some point, somewhere, we need to find a way where, you know, not just US and China, but also other countries, you know, across the world, middle powers across the world can come to an agreement that we should not do this and it should be enforced.
00:49:54.320 We should find a way to mutually check and enforce upon each other.
00:49:57.620 I think there's a lot of common sense domestic policy that can be done first, such as I say, good transparency and oversight is what are these companies doing within your borders?
00:50:07.340 What where are they putting the data outside of your borders?
00:50:09.860 Are you OK with them moving that data outside of your borders?
00:50:12.680 All of this, I think, is already things that, you know, our nation can do right now.
00:50:16.280 Well, Connor, I could talk to you all day about this and we definitely want to have you back.
00:50:21.860 Tell the audience where they can follow your work, where they can find the compendium, so on and so forth.
00:50:26.300 And until then, hopefully keep them busy with some homework.
00:50:30.820 Find me on X at NP Collapse and you can find my company at Conjecture.dev, the compendium at the compendium.ai.
00:50:39.720 Thank you so much.
00:50:41.660 Connor, I really appreciate it, man.
00:50:43.040 Thank you very much.
00:50:43.740 And if we fail to stop superintelligence and the robots come to get you, you definitely want to have some food on hand.
00:50:53.260 So go to mypatriotsupply.com slash Bannon, buy the three-month emergency food kit, and get a free four-week kit.
00:51:02.520 Originally $9.44, but you will get $247 off that price.
00:51:09.280 Mypatriotsupply.com slash Bannon.
00:51:11.980 And Birch Gold can still help you roll an existing IRA or 401k into gold.
00:51:19.800 You are still eligible for a rebate in free metals of up to $10,000.
00:51:24.000 So make right now your first time and buy gold and take advantage of a rebate of up to $10,000 when you buy by...
00:51:31.500 Text Bannon to 989-898.
00:51:36.960 That's Bannon, 989-898.
00:51:40.360 Claim your eligibility and get your free info kit.
00:51:42.960 Again, text Bannon to 989-898.
00:51:46.720 Thank you so much, War Room Posse.
00:51:48.460 Until next time, stay safe.
00:51:50.860 Stay human.
00:51:51.820 Okay, let's be honest.
00:51:54.960 You never thought it would get this far.
00:51:56.400 Maybe you missed the last IRS deadline or you haven't filed taxes in a while.
00:52:01.300 Let me be clear.
00:52:02.240 The IRS is cracking down harder than ever, and this ain't going to go away anytime soon.
00:52:07.960 That's why you need Tax Network USA.
00:52:10.360 They don't just know the IRS.
00:52:12.220 They have a preferred direct line to the IRS.
00:52:14.580 They know which agents to deal with and which to avoid.
00:52:18.140 Their expert negotiators have one goal:
00:52:21.660 Settle your tax problems quickly and in your favor.
00:52:25.940 Their team has helped clear over $1 billion in tax debt, whether you owe $10,000 or $10 million.
00:52:33.120 Even if your books are a mess or you haven't filed in years, Tax Network USA can help.
00:52:38.440 But don't wait.
00:52:39.820 This won't fix itself.
00:52:41.500 Call Tax Network USA right now.
00:52:43.680 It's free.
00:52:45.000 Talk to a strategist and finally put this behind you.
00:52:49.200 Call 1-800-958-1000.
00:52:52.280 That's 1-800-958-1000.
00:52:55.560 Or visit tnusa.com slash Bannon.
00:52:59.240 Make sure you tell them, Bannon, you'll get a free evaluation.
00:53:01.920 That's 1-800-958-1000.
00:53:05.100 Do not let letters from the IRS or your failure to file work on your nerves anymore.
00:53:12.360 Take action, action, action, and do it today.
00:53:15.800 Okay.
00:53:16.020 Bye.