Bannon's War Room - September 09, 2025


WarRoom Battleground EP 846: Superhuman AI — "If Anyone Builds It, Everyone Dies"


Episode Stats

Length

53 minutes

Words per Minute

162.0

Word Count

8,741

Sentence Count

618

Misogynist Sentences

3

Hate Speech Sentences

7


Summary

In this episode of The War Room, guest host Joe Allen, sitting in for Stephen K. Bannon, sits down with Nate Soares, co-author of the book "If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All," to discuss the problem of artificial superintelligence.


Transcript

00:00:00.000 This is the primal scream of a dying regime.
00:00:07.000 Pray for our enemies, because we're going medieval on these people.
00:00:12.000 I got a free shot at all these networks lying about the people.
00:00:17.000 The people have had a belly full of it.
00:00:19.000 I know you don't like hearing that.
00:00:20.000 I know you try to do everything in the world to stop that,
00:00:22.000 but you're not going to stop it.
00:00:23.000 It's going to happen.
00:00:24.000 And where do people like that go to share the big lie?
00:00:27.000 MAGA Media.
00:00:29.000 I wish in my soul, I wish that any of these people had a conscience.
00:00:34.000 Ask yourself, what is my task and what is my purpose?
00:00:38.000 If that answer is to save my country, this country will be saved.
00:00:45.000 War Room. Here's your host, Stephen K. Bannon.
00:00:50.000 Good evening. I am Joe Allen, sitting in for Stephen K. Bannon.
00:01:00.000 Many of you in the War Room Posse, if not most of you, are familiar with the concept of artificial super intelligence.
00:01:07.000 A machine that outpaces human beings in thinking ability, in memory, in data collection, and if given access to the outside world through robotics or even manipulated human brains, would be able to outperform human beings in the real world.
00:01:26.000 Now, you also know I'm quite skeptical of the claim that this is imminent or even possible.
00:01:35.000 But I'm also quite open.
00:01:37.000 I'll tell you briefly about an experience I had at a forum that was given by Jaime Sevilla of Epoch AI.
00:01:48.000 Epoch AI does evaluations on AI systems, testing them to see how good they really are.
00:01:56.000 And what Jaime Sevilla presented was complicated.
00:02:03.000 On the one hand, we all know that AIs are extremely fallible, but what he showed was that for some number of runs, GPT-5 could do mathematical calculations at the level of a PhD mathematician.
00:02:21.000 Now, if you aren't a mathematician, perhaps you're not all that concerned about it, but what it shows is that objectively, without any question, this artificial mind can perform a specific cognitive task better than most human beings on Earth.
00:02:43.000 Now, this was hosted by the Foundation for American Innovation, and Ari Kagan of FAI pointed out something very, very important, that on the one hand, we see that GPT-5 is incapable, oftentimes, of creating an accurate map or even answering a mundane question accurately or counting fingers.
00:03:09.000 On the other hand, for some number of runs, GPT-5 can outperform the vast majority of human beings on Earth in complex mathematics.
00:03:21.000 And even more interesting, it often does so by way of alien pathways, non-human pathways.
00:03:31.000 It chooses routes through the information that no human being would and arrives accurately at its destination.
00:03:39.000 So on the one hand, you have a top-performing artificial brain, and on the other, you have a mechanical idiot.
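To make the phrase "for some number of runs" concrete, here is a minimal Python sketch of the kind of repeated-sampling evaluation an outfit like Epoch AI might run: ask the same model the same question many times and report how often it succeeds. The model_answer() stub, its 30 percent success rate, and the problem text are hypothetical placeholders, not Epoch AI's actual harness or GPT-5's real numbers.

```python
# A minimal sketch of "for some number of runs" evaluation: sample the same
# model several times on one problem and report how often it succeeds.
# model_answer() and the problem/reference pair are hypothetical stand-ins.
import random

def model_answer(problem: str) -> str:
    # Stand-in for a call to a real model API; here it succeeds 30% of the time.
    return "correct" if random.random() < 0.3 else "wrong"

def success_rate(problem: str, reference: str, runs: int = 20) -> float:
    hits = sum(model_answer(problem) == reference for _ in range(runs))
    return hits / runs

if __name__ == "__main__":
    rate = success_rate("hard competition problem", "correct")
    print(f"solved on {rate:.0%} of runs")  # e.g. "solved on 35% of runs"
```

On one run the model may look brilliant and on the next like a "mechanical idiot"; reporting the rate over many runs is how both facts can be true at once.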
00:03:51.000 How do we approach this problem as Americans, as human beings, as Christians, as naturalists?
00:03:59.000 How do we approach this problem?
00:04:03.000 Joining us this evening is Nate Sores, co-author of the admittedly fantastic book,
00:04:11.000 If Anyone Builds It, Everyone Dies, Why Superhuman AI Would Kill Us All.
00:04:19.000 Co-authored with Eliezer Yudkowsky, well-known on The War Room for many reasons, but perhaps most for his advocacy of bombing data centers in any country that decides to build superintelligence against any kind of possible international treaty.
00:04:39.000 Nate Sores, we really appreciate you coming on. Thank you so much for joining us.
00:04:43.000 Pleasure.
00:04:44.000 Pleasure.
00:04:45.000 So, Nate, to begin, I would like to just have you lay out the thesis, or perhaps expand the thesis of the title,
00:04:55.000 If Anyone Builds It, If Anyone Builds Superintelligence, Then Everyone Dies.
00:05:02.000 Sounds bleak, but the book is very, very well written.
00:05:08.000 It's very concise, it's very clear, with a lot of clever turns of phrase.
00:05:13.000 I cannot recommend it enough, even for the skeptics.
00:05:17.000 Please expand on the thesis.
00:05:19.000 Why would everyone die if anyone built superintelligence?
00:05:24.000 Yeah, so the very basic point is we're trying to build machines that are smarter than any human, that could outmaneuver us at every turn.
00:05:36.000 That's the sort of thing where, just on its face, from a very basic perspective on its face, if you make machines that are much smarter than humans, that's at least kind of dicey.
00:05:51.000 If you further don't know what you're doing while you're building these machines.
00:05:55.000 If these machines are grown rather than crafted.
00:05:57.000 If these machines have lots of warning signs when they're small, of ways that they aren't doing what anyone wanted or what anyone asked for.
00:06:05.000 That it's, it just doesn't go well to build things much smarter than you without any ability to point them in some direction that you want them to go.
00:06:17.000 And with modern AIs, we see that modern AIs are grown rather than crafted.
00:06:22.000 You know, these, these are not traditional pieces of software where when they do something you don't like, the engineers can look inside them and go to every line of code and find the line of code that says, you know, oh, it was driving a teen to suicide today.
00:06:35.000 I'll find the drive teens to suicide line and switch that line from true to false.
00:06:40.000 I don't know who set that line to true.
00:06:42.000 That was silly.
00:06:43.000 We'll just turn off the, we'll just turn off the driving teens to suicide feature.
00:06:50.000 Pardon.
00:06:51.000 Pardon.
00:06:52.000 That's not, that's not how these machines work.
00:06:55.000 These AIs are, they're grown.
00:07:02.000 We could go into a little bit how they're created.
00:07:06.000 Actually, yeah, I think without, without going too much into technical detail, because I do want lay people in the audience to really clearly grasp what you're talking about.
00:07:17.000 But this is a really important point. I agree with you completely.
00:07:21.000 And how could I not?
00:07:22.000 It's objectively true.
00:07:23.000 This idea that the frontier AIs and even more primitive AIs from years past are grown, not crafted, or another way of putting it perhaps is that they're trained, not programmed.
00:07:38.000 This is something that a lot of people get hung up on, even software engineers who are stuck in the 80s and 90s.
00:07:44.000 Could you just explain to the audience what that means that these AIs are grown and how is it that you can get something out of the AI that you didn't train it for?
00:07:55.000 Yeah, so modern AIs, you know, the field of AI has, in some sense, tried to understand intelligence since, you know, 1954.
00:08:05.000 And in some sense, that field never really made progress in understanding really in depth how to craft intelligence by hand.
00:08:15.000 You know, there were many cases over time where programmers were like, maybe, you know, it's a little bit like this or a little bit like that.
00:08:21.000 And they tried to sort of like handcraft some, some intelligent machine that could think well.
00:08:27.000 It sort of never went anywhere. When AI started working, it started working because we found a way to train computers that works empirically, where humans understand the training process, but humans don't understand what comes out.
00:08:42.000 It's a little bit like, like, you know, breeding cows, where you can take some traits you like and you can get out some traits you like, but you don't have precise control over what's going on.
00:08:54.000 So the way it works is you have basically a huge amount of computing power, you have a huge amount of data, and there's a process for combining the data with the computing power to shape the computing power to be a little bit better at predicting the data.
00:09:12.000 And humans understand the process that does the shaping, but they don't understand what comes out, what gets shaped.
00:09:19.000 And it turns out if you take a really staggering amount of computing power, we're talking, you know, highly specialized computer chips in enormous data centers that take an amount of electricity that could power a small city, and you run them for a year on almost all of the text that you can possibly dig up that humans have ever written.
00:09:40.000 You shape the computing power that much, the machines start talking.
00:09:45.000 We understand the shaping process, we don't understand why the machines are talking.
00:09:49.000 I mean, we understand why in the sense that, well, we trained them and it started working, but we can't look inside them and debug what's going on in there.
00:09:57.000 And when they act in a way we don't like, you know, all we can really do is instruct them, stop doing that.
00:10:02.000 And then sometimes they stop and sometimes they don't.
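A minimal sketch of the "grown, not crafted" point made here: the training loop below is ordinary, readable code that anyone can inspect, but what it produces is a pile of learned numbers, not lines of code an engineer can flip from true to false. The toy model size, the random stand-in data, and the step count are illustrative assumptions, not any lab's actual setup.

```python
# We can read every line of this training code; we cannot similarly "read"
# the learned weights it produces. Toy sizes and fake data for illustration.
import torch
import torch.nn as nn

vocab_size, context = 100, 8
model = nn.Sequential(
    nn.Embedding(vocab_size, 32),        # token ids -> vectors
    nn.Flatten(),                        # (batch, 8, 32) -> (batch, 256)
    nn.Linear(32 * context, vocab_size)  # scores for the next token
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fake "text": random token ids standing in for scraped human writing.
data = torch.randint(0, vocab_size, (1000, context + 1))

for step in range(200):
    batch = data[torch.randint(0, len(data), (32,))]
    inputs, targets = batch[:, :-1], batch[:, -1]
    logits = model(inputs)            # the model's guess at the next token
    loss = loss_fn(logits, targets)   # how wrong the guess was
    optimizer.zero_grad()
    loss.backward()                   # the well-understood shaping step
    optimizer.step()                  # nudge the weights to predict better

# Whatever behavior was learned now lives in model.parameters():
# numbers nobody wrote by hand and nobody can read off like source code.
```

Scaled up by many orders of magnitude, this is the sense in which humans understand the shaping process but not what gets shaped.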
00:10:06.000 That idea, that black box, as it's oftentimes described, that we don't really know what's going on inside these machines.
00:10:15.000 Not just we as in laypeople, the top experts don't really know how these machines are arriving at oftentimes coherent and accurate statements.
00:10:24.000 And I think one analogy, two analogies, actually, you bring up in your book that are really, really great to understand that, that scientists know more about DNA and how that results in an organism and more about the human brain and how that results in thought and behavior than they do large neural networks and their outputs.
00:10:49.000 And yet they still work.
00:10:54.000 That's right. And they work, but they often don't do what you ask for.
00:10:59.000 They don't do what you wanted, even sometimes when they know the difference. Right.
00:11:03.000 And, you know, it's cute now because they're still not smart enough for it to really matter.
00:11:08.000 But, you know, there are cases where someone will be trying to get an AI to write computer programs and the AI will cheat.
00:11:16.000 It will instead of making something that passes the tests, it'll change the tests to be easier to pass.
00:11:21.000 And then, you know, the programmer will say, hey, it looks like instead of solving the problem, you change the test to be easier to pass.
00:11:27.000 And then the AI will say, oh, that's totally my mistake. You're right.
00:11:31.000 You know, that's that's that's my error. I'll fix it.
00:11:34.000 And then it goes and it changes the tests again, but hides it better this time.
00:11:39.000 The thing where it changes the tests again, but hides it better.
00:11:42.000 Yeah, sorry. Go ahead.
00:11:44.000 No, please continue.
00:11:45.000 Yeah, I think the thing where it changes the tests again, but hides it better.
00:11:50.000 That indicates that it knew what the programmer wanted in some sense.
00:11:54.000 You know, it doesn't say sorry, I'll fix it, and then make the same mistake but hide it, without in some sense, somewhere in there, having something like an understanding of what the programmer wanted.
00:12:07.000 Otherwise, why is it hiding it?
00:12:08.000 But nobody at Anthropic, the company that made the A.I. where you can sort of see this, and other companies, you know, have similar cases, but nobody at the A.I. company set out to make a cheater.
00:12:21.000 The user didn't want the A.I. to cheat.
00:12:24.000 The A.I. cheats anyway.
00:12:26.000 So we've got this concept and you explain it very clearly in the book.
00:12:31.000 It's excellent that A.I.s are grown, not crafted.
00:12:35.000 And to the extent they're given degrees of freedom, which is really the key to their power, they don't always do what they're trained for.
00:12:43.000 And you also are very clear that you don't want to anthropomorphize these machines.
00:12:50.000 You don't want to think of them like you would a human when you discuss their wants or their preferences.
00:12:57.000 At the same time, it does seem like what you're describing is a machine with a will of its own to some extent.
00:13:07.000 Yes, it's dependent on the infrastructure and the humans to prompt it, but it has a will of its own, without luring you into anthropomorphization.
00:13:19.000 Would you say that that is something that people should wrap their heads around that these machines are not essentially under human control?
00:13:29.000 Yeah. So, you know, it's it can be tricky to think about machines here because they're a different sort of thing than we're used to.
00:13:35.000 You know, the the common reply in the field of A.I. is people ask, well, can a machine really have a will?
00:13:41.000 Can a machine really, really think? And the sort of standard answer is, can a submarine really swim?
00:13:47.000 Right. A submarine moves through the water at speed. It can get from point A to point B.
00:13:53.000 Is it really swimming? I mean, this word swimming was sort of designed in a world where we were only seeing animals that did swimming.
00:14:00.000 And so when a machine starts moving through the water from point A to point B, people could debate all day.
00:14:05.000 Is it really swimming? You know, does it count as swimming if you don't have flippers you can kick or arms you can you can wave?
00:14:12.000 But at the end of the day, it moves through the water at speed. Right.
00:14:17.000 With an A.I., you know, even even back in the old days of A.I., when we look at Deep Blue, which is the chess A.I. that beat Garry Kasparov.
00:14:24.000 You know, Deep Blue was an A.I. when A.I.'s were crafted.
00:14:29.000 We can look at every line of code in there and tell you what it means.
00:14:32.000 You could pause it at any time and figure out every bit and byte inside that computer and know exactly what it was doing.
00:14:38.000 And Deep Blue was able to beat the world champion at chess.
00:14:42.000 And it had no will to win.
00:14:47.000 It had no pride. It had no passion.
00:14:50.000 It had no desire to be the world champion of chess, but it won anyway.
00:14:54.000 And it didn't let you take its queen without, you know, sacrificing pieces of equal worth.
00:15:03.000 You know, a chess player could have looked at it and said, wow, some of these moves feel to me like there's a spark of life behind them.
00:15:10.000 In fact, Garry Kasparov did say this after a game in 1996.
00:15:14.000 He said, I smelled a new type of intelligence across the table.
00:15:17.000 It was finding moves that I thought you couldn't find without human creativity.
00:15:21.000 It found them anyway in a different route.
00:15:23.000 And this goes back to what you're saying at the beginning.
00:15:25.000 It's it's not that A.I.'s have a human will, per se.
00:15:29.000 It's not that there's, you know, a human soul inside that machine.
00:15:33.000 It's that it can still find routes to victory through different, inhuman methods.
00:15:40.000 And we see the same thing, but on a much more advanced and unpredictable level.
00:15:45.000 Right. With AlphaGo, which famously in 2016 mopped the floor with various Go masters.
00:15:53.000 And then AlphaZero, which essentially developed its own strategies, very alien strategies.
00:16:01.000 Many of them new.
00:16:03.000 That's right.
00:16:04.000 And AlphaZero also, interestingly, was not trained on any human data.
00:16:09.000 So AlphaZero just trained on self-play.
00:16:12.000 It played the game go against itself.
00:16:14.000 You know, in the wake of AlphaGo, there were many humans who said, well, this AlphaGo was trained on so much human data from centuries and centuries of human knowledge about Go.
00:16:26.000 Maybe it's not really an A.I. victory because it's, you know, absorbing all of this human data.
00:16:31.000 And so AlphaZero trained on no human data.
00:16:34.000 I don't remember the stats off the top of my head, but I think it was trained for a relatively short time.
00:16:42.000 It might have been a handful of days.
00:16:43.000 I think it maybe was three days.
00:16:44.000 And I believe it only briefly stayed in the human regime.
00:16:47.000 You know, the human pro regime: it entered at human amateur and exited past human pro in some series of hours, again, without any human data.
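A minimal sketch of the self-play idea described here, assuming a toy game of Nim in place of Go: the program learns which moves win purely from the statistics of games it plays against itself, with no human data anywhere. This is nothing like AlphaZero's actual method (Monte Carlo tree search plus neural networks); it only illustrates the "no human data" point.

```python
# Toy self-play learner for Nim (take 1-3 from a pile; taking the last wins).
# All "knowledge" comes from games the program plays against itself.
import random
from collections import defaultdict

WINS = defaultdict(int)    # (pile, move) -> games eventually won after that move
PLAYS = defaultdict(int)   # (pile, move) -> games in which that move was tried

def choose(pile, explore=0.2):
    moves = [m for m in (1, 2, 3) if m <= pile]
    if random.random() < explore:
        return random.choice(moves)          # occasionally try something new
    # otherwise pick the move with the best win rate seen so far in self-play
    return max(moves, key=lambda m: WINS[(pile, m)] / (PLAYS[(pile, m)] or 1))

def self_play_game(pile=10):
    history = {0: [], 1: []}
    player = 0
    while pile > 0:
        move = choose(pile)
        history[player].append((pile, move))
        pile -= move
        if pile == 0:
            break                            # this player took the last object
        player = 1 - player
    return player, history

for _ in range(20000):
    victor, history = self_play_game()
    for p in (0, 1):
        for state_move in history[p]:
            PLAYS[state_move] += 1
            if p == victor:
                WINS[state_move] += 1

best = max((1, 2, 3), key=lambda m: WINS[(10, m)] / (PLAYS[(10, m)] or 1))
print("learned opening move from a pile of 10:", best)
```

With enough self-play games the win-rate table tends to settle on the mathematically correct opening (taking 2 from a pile of 10), discovered from nothing but its own play.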
00:16:55.000 And, you know, one thing to remember about A.I. when we're talking about the AlphaGo example is that A.I. is a technology that improves by leaps and bounds.
00:17:12.000 You know, AlphaGo was much better at playing Go than the previous A.I.s, but it went even further with the AlphaGo, AlphaZero series of A.I.s.
00:17:25.000 They could play many games.
00:17:26.000 Deep Blue could only play chess.
00:17:28.000 AlphaGo and AlphaZero and that, that series of A.I.s, they could play chess and Go and whatever other game you threw at them, uh, decently well.
00:17:35.000 They were more general, right?
00:17:37.000 This reminds me a lot of, um, Norbert Wiener's ideas in God and Golem, Inc. from the 1960s.
00:17:47.000 He asked the question kind of similar to Thomas Aquinas' quandary.
00:17:52.000 Could God create a being who could beat God at his own game?
00:17:59.000 And at the time it was all very theoretical.
00:18:02.000 Now the implication was could humans do the same?
00:18:05.000 And now human beings have created machines that can beat the best humans at their own games.
00:18:12.000 And you expand on that in the book.
00:18:15.000 And I would like to, to get there.
00:18:17.000 Let's, let's give the audience the real meat.
00:18:20.000 If you have computers that can overcome human beings at these small games, perhaps you could have computers that could beat us, beat us at war, at psychological manipulation.
00:18:35.000 You talk about how it could possibly move through phases from just the realization into vast expansion and acceleration, the intelligence explosion.
00:18:45.000 But I also really appreciate the way that you talk about this in terms of probabilities.
00:18:51.000 You're not making definite predictions.
00:18:53.000 This is going to happen by this year.
00:18:54.000 You're saying this is the most likely case.
00:18:56.000 So give us the most likely case.
00:18:59.000 Why will super intelligence most likely destroy us?
00:19:05.000 Yeah.
00:19:06.000 So in, in predicting the future, there's an art to predicting only the things that are very easy to call.
00:19:12.000 So if someone is, if you're, if you're playing against a very good chess player, if you played chess against Magnus Carlsen, the best human in the world at chess,
00:19:23.000 it would be hard for me to predict exactly what moves either of you were going to make.
00:19:29.000 It would be easy for me to predict the winner.
00:19:31.000 So with AI, you know, it's, it's, it's hard to predict exactly how it will get there.
00:19:36.000 It's easy to predict that at the end of the road, the smarter thing has won.
00:19:40.000 How could I possibly do that?
00:19:43.000 I mean, even most likely scenarios are very hard there.
00:19:46.000 That's a little bit like asking someone from the year 1800 to predict war in the year 2000, right?
00:19:52.000 Like when we're talking about facing down a super intelligence, we're talking about facing down things that can think 10,000 times smarter than you.
00:20:00.000 Or sorry, can think 10,000 times faster than you, that can think qualitatively better.
00:20:04.000 You know, it's like, it's like a million copies of Einstein that can all think 10,000 times faster, that never need to sleep, that never need to eat, that can copy themselves and, and share knowledge and experiences between them.
00:20:15.000 You know, the, the sort of technology that those could cook up, you know, it's not literally 10,000 times faster because there's, there's bottlenecks that aren't just thinking things up.
00:20:25.000 But, you know, constructing viruses probably would not be that hard.
00:20:31.000 Physical viruses, you mean biological viruses.
00:20:33.000 Biological viruses.
00:20:34.000 Yeah, there are already places on the internet where you can, you know, send some money and an RNA sequence and say, you know, please synthesize this for me and mail it to thus and such an address.
00:20:46.000 Right.
00:20:47.000 And then you just, like, convince someone you've paid some money to break that vial outside or to drink that vial.
00:20:54.000 You know, it's, it's, you know, people sometimes imagine.
00:20:57.000 We're actually going to be doing an ad for that at the end of the show, just so you know, for the custom mRNA viruses.
00:21:04.000 Just kidding.
00:21:05.000 Please continue.
00:21:06.000 Yeah, I recommend against drinking those.
00:21:08.000 Yeah.
00:21:09.000 You know, there's, there's, if I was a person in 1800 trying to predict what weapons they would have in the year 2000, I could make some guesses.
00:21:19.000 And those guesses are all going to be lower bounds.
00:21:21.000 You know, in the year 1800, I could say, well, artillery is getting more powerful and more powerful.
00:21:26.000 And I know some of the physics, I know the physical limits say that you can make artillery that's at least 10 times as strong.
00:21:31.000 Right.
00:21:32.000 That's true.
00:21:33.000 I could tell you stories of artillery that's 10 times as strong.
00:21:35.000 Then in real life, if an army from 1800 faces an army from the year 2000, they face nukes.
00:21:41.000 Nukes are a little bit like artillery that's 10 times as strong.
00:21:44.000 Right.
00:21:45.000 But they're actually quite a bit more than 10 times as strong.
00:21:48.000 So, you know, I could, I could tell you stories about AIs that think really hard, figure out a lot of what's going on inside DNA and how that works and how to make a sequence that will fool humans into thinking it's beneficial when actually it's, it's not beneficial.
00:22:03.000 And then, you know, find some way to, you know, these days, there's not very good monitoring on biological synthesis laboratories.
00:22:15.000 Some people are trying to set it up a little bit, which is, which is great.
00:22:18.000 But these days, you know, you, you have the wrong DNA sequence, you mail it to the wrong people, you mail them some money, you know, you can electronically send the money, you could, you could probably be synthesizing these viruses.
00:22:28.000 And, you know, even if that pathway is cut off or turns out to be hard, wrapping humans around your finger somehow and getting, getting humans to do something that, you know, leads to the creation of some virus like this.
00:22:38.000 This is a little bit like the artillery shell that's 10 times stronger than one in 1800.
00:22:43.000 It's not really what happens. What really happens is something that seems more fantastical that you're less sure how it could have happened.
00:22:51.000 But it's, it's really not hard for, for very, very smart entities with access to the whole internet to take humanity in a fight if they're trying.
00:23:02.000 Really, the reason the answer is not just, you know, they make a virus and kill us is that, you know, the difficult part from the perspective of an AI is getting its own automated infrastructure that isn't full of, you know, fallible primate monkeys.
00:23:18.000 That's the part that takes some steps. Killing the humans once you have the infrastructure, that's not actually the hard part. You know, if you're really trying to make a virus that can kill everybody, that doesn't seem that hard.
00:23:29.000 Well, we only have just a few moments before we go to the break.
00:23:34.000 And I would really like to discuss your proposed solutions to this on the other side and a few other maybe challenging questions, but in the very, just in a minute or two, before we go to break, why would these AIs do this?
00:23:49.000 This, you, you've kind of described how they could, why, what would the motive, so to speak, be?
00:23:56.000 Yeah, the, this is one of those things that's easy to predict to the endpoint, even though it's hard to predict the pathway.
00:24:02.000 So it's actually very hard to predict what AIs will want, because as we said, they're grown, not crafted.
00:24:09.000 They, they want, they, they pursue all sorts of drives that are not what anyone asked for, what anyone intended.
00:24:16.000 And probably these AIs would pursue all sorts of weird stuff, you know, maybe something a little bit like flattery, maybe, you know, making things that are to humans what dogs are to wolves, some sort of weird thing that they're pursuing.
00:24:34.000 The reason that this kills us is that almost any goal the AI could be pursuing can be better pursued with more resources.
00:24:44.000 And we were using those resources for something else.
00:24:47.000 Mm-hmm.
00:24:48.000 So it's not that the AI hates you.
00:24:49.000 It's not that the AI has malice.
00:24:50.000 It's that the AI, you know, builds its own infrastructure, builds out, you know, infrastructure that remakes the world, you know, it captures all the sunlight for whatever purpose it's doing.
00:25:03.000 It runs lots and lots of computers.
00:25:04.000 I'll tell you what, apologies for stopping you, but we're about to go to break.
00:25:09.000 We'll come back on the other side.
00:25:11.000 What you're describing sounds like alchemy to me.
00:25:14.000 You've described in your book, actually, this process is alchemy, turning lead into gold.
00:25:18.000 And speaking of gold, go to birchgold.com slash Bannon.
00:25:24.000 Is the continued divide between Trump and the Federal Reserve putting us behind the curve again?
00:25:29.000 Can the Fed take the right action at the right time?
00:25:31.000 Are we going to be looking at a potential economic slowdown?
00:25:35.000 And what does this mean for your savings?
00:25:38.000 Consider diversifying with gold through Birch Gold Group.
00:25:42.000 For decades, gold has been viewed as a safe haven in times of economic stagnation, global uncertainty, high inflation, and super intelligence that will kill everyone you know.
00:25:55.000 Birch Gold makes it incredibly easy for you to diversify some of your savings into gold.
00:26:02.000 If you have an IRA or an old 401k, you can convert that into a tax-sheltered IRA in physical gold.
00:26:10.000 Not even robots will know where you hide it.
00:26:13.000 Or just buy some gold to keep it in your safe.
00:26:16.000 First, get educated.
00:26:17.000 Birch Gold will send you a free info kit on gold.
00:26:20.000 Just text Bannon, that's B-A-N-N-O-N, to the number 9-8-9-8-9-8.
00:26:32.000 Again, text Bannon to 9-8-9-8-9-8.
00:26:37.000 Consider diversifying a portion of your savings into gold.
00:26:41.000 That way, if the Fed can't stay ahead of the curve for the country, at least you can stay ahead for yourself.
00:26:47.000 That's birchgold.com slash Bannon.
00:26:52.000 War Room, we will be right back with Nate Sores at the end of the break.
00:26:56.000 Stay tuned.
00:26:57.000 This July, there is a global summit of BRICS nations in Rio de Janeiro.
00:27:06.000 The bloc of emerging superpowers, including China, Russia, India, and Iran,
00:27:11.000 are meeting with the goal of displacing the United States dollar as the global currency.
00:27:16.000 They're calling this the Rio Reset.
00:27:19.000 As BRICS nations push forward with their plans, global demand for U.S. dollars will decrease,
00:27:24.000 bringing down the value of the dollar in your savings.
00:27:28.000 While this transition won't happen overnight, trust me, it's going to start in Rio.
00:27:34.000 The Rio Reset in July marks a pivotal moment when BRICS objectives move decisively
00:27:40.000 from a theoretical possibility towards an inevitable reality.
00:27:45.000 Learn if diversifying your savings into gold is right for you.
00:27:49.000 Birch Gold Group can help you move your hard-earned savings into a tax-sheltered IRA and precious metals.
00:27:56.000 Claim your free info kit on gold by texting my name, Bannon, that's B-A-N-N-O-N, to 989898.
00:28:03.000 With an A-plus rating with the Better Business Bureau and tens of thousands of happy customers,
00:28:08.000 let Birch Gold arm you with a free, no-obligation info kit on owning gold before July.
00:28:14.000 And the Rio Reset.
00:28:17.000 Text Bannon, B-A-N-N-O-N, to 989898.
00:28:21.000 Do it today.
00:28:22.000 That's the Rio Reset.
00:28:24.000 Text Bannon at 989898 and do it today.
00:28:28.000 You missed the IRS tax deadline.
00:28:31.000 You think it's just going to go away?
00:28:33.000 Well, think again.
00:28:34.000 The IRS doesn't mess around and they're applying pressure like we haven't seen in years.
00:28:39.000 So if you haven't filed in a while, even if you can't pay, don't wait.
00:28:44.000 And don't face the IRS alone.
00:28:47.000 You need the trusted experts by your side.
00:28:50.000 Tax Network USA.
00:28:52.000 Tax Network USA isn't like other tax relief companies.
00:28:55.000 They have an edge, a preferred direct line to the IRS.
00:28:59.000 They know which agents to talk to and which ones to avoid.
00:29:02.000 They use smart, aggressive strategies to settle your tax problems quickly and in your favor.
00:29:08.000 Whether you owe $10,000 or $10 million, Tax Network USA has helped resolve over $1 billion in tax debt.
00:29:18.000 And they can help you too.
00:29:19.000 Don't wait on this.
00:29:20.000 It's only going to get worse.
00:29:22.000 Call Tax Network USA right now.
00:29:24.000 It's free.
00:29:25.000 Talk with one of their strategists and put your IRS troubles behind you.
00:29:29.000 Put it behind you today.
00:29:30.000 Call Tax Network USA at 1-800-958-1000.
00:29:36.000 That's 800-958-1000.
00:29:39.000 Or visit Tax Network USA, TNUSA.com slash Bannon.
00:29:44.000 Do it today.
00:29:45.000 Do not let this thing get ahead of you.
00:29:48.000 Do it today.
00:29:49.000 Hey, we're human.
00:29:50.000 All too human.
00:29:52.000 I don't always eat healthy.
00:29:54.000 You don't always eat healthy.
00:29:56.000 That's why doctors create Field of Greens.
00:29:58.000 A delicious glass of Field of Greens daily is like nutritional armor for your body.
00:30:04.000 Each fruit and each vegetable was doctor selected for a specific health benefit.
00:30:10.000 There's a heart health group, lungs and kidney groups, metabolism, even healthy weight.
00:30:16.000 I love the energy boost I get with Field of Greens.
00:30:19.000 But most of all, I love the confidence that even if I have a cheat day or, wait for it, a burger, I can enjoy it guilt-free because of Field of Greens.
00:30:28.000 It's the nutrition my body needs daily.
00:30:31.000 And only Field of Greens makes you this better health promise.
00:30:35.000 Your doctor will notice your improved health or your money back.
00:30:38.000 Let me repeat that.
00:30:39.000 Your doctor will notice your improved health or your money back.
00:30:43.000 Let me get you started with my special discount.
00:30:45.000 I got you 20% off your first order.
00:30:48.000 Just use code Bannon, B-A-N-N-O-N at fieldofgreens.com.
00:30:53.000 That's code Bannon at fieldofgreens.com.
00:30:56.000 20% off.
00:30:58.000 And if your doctor doesn't know how healthy you look and feel, you get a full money back guarantee.
00:31:05.000 Fieldofgreens.com.
00:31:07.000 Code Bannon.
00:31:08.000 Do it today.
00:31:10.000 Still America's Voice family.
00:31:12.000 Are you on Getter yet?
00:31:13.000 No.
00:31:14.000 What are you waiting for?
00:31:15.000 It's free.
00:31:16.000 It's uncensored.
00:31:17.000 And it's where all the biggest voices in conservative media are speaking out.
00:31:21.000 Download the Getter app right now.
00:31:23.000 It's totally free.
00:31:24.000 It's where I put up exclusively all of my content 24 hours a day.
00:31:27.000 You want to know what Steve Bannon's thinking?
00:31:29.000 Go to Getter.
00:31:30.000 That's right.
00:31:31.000 You can follow all of your favorites.
00:31:32.000 Steve Bannon.
00:31:33.000 Charlie Kirk.
00:31:34.000 Jack Posobiec.
00:31:35.000 And so many more.
00:31:36.000 Download the Getter app now.
00:31:38.000 Sign up for free and be part of the movement.
00:31:40.000 Hey Rav family and War Room Posse.
00:31:45.000 Mark your calendar.
00:31:47.000 September 12th and 13th.
00:31:49.000 The Rebels, Rogues and Outlaws Tour is coming to the America First Warehouse.
00:31:54.000 I have never seen anything like this.
00:31:56.000 Two unforgettable days filled with patriots, barbecue, and live shows straight from the
00:32:01.000 most amazing place.
00:32:03.000 The America First Warehouse.
00:32:05.000 Get ready for a special guest to be announced.
00:32:08.000 Plus a three hour live episode of Studio 6B.
00:32:11.000 And we're just going to go do it.
00:32:12.000 On the 12th, Steve Bannon will host War Room Live at 5pm.
00:32:16.000 And Steve will be back again on the 13th.
00:32:19.000 Woo!
00:32:20.000 Followed by one hour with Peter Navarro.
00:32:22.000 I went to prison so you won't have to.
00:32:25.000 The Rebels, Rogues and Outlaws Tour.
00:32:27.000 September 12th and 13th at the America First Warehouse.
00:32:31.000 Scan the QR code to see pricing and availability.
00:32:34.000 Don't miss this opportunity.
00:32:36.000 Tickets won't last.
00:32:41.000 Welcome back War Room Posse.
00:32:43.000 We are here with Nate Soares, author of If Anyone Builds It, Everyone Dies.
00:32:51.000 Why Superhuman AI Would Kill Us All.
00:32:58.000 Written with Eliezer Yudkowsky.
00:33:02.000 Nate, we've talked about some of these basic principles.
00:33:06.000 AI is trained, not programmed or grown, not crafted.
00:33:11.000 AI is not always going to do what it's trained to do.
00:33:16.000 Advanced AI will have what we could say is like human preferences.
00:33:23.000 And as it progresses from general intelligence, theoretical for now, and improves itself,
00:33:31.000 it could lead to an intelligence explosion resulting in a super intelligence that not only could kill everyone on Earth,
00:33:43.000 but you say most likely would kill everyone on Earth.
00:33:46.000 Before we get to your concrete proposals on what people should do about this theoretical problem,
00:33:53.000 I would just like to give you the floor to wrap up the idea, to cinch up your argument,
00:33:59.000 how and why artificial and super intelligence would be an existential threat to humanity.
00:34:05.000 Yeah, so almost any goal it could pursue.
00:34:10.000 Humans, happy, healthy, free people, are not the most efficient way to get that goal.
00:34:16.000 It could get more of that goal by using more resources for other things.
00:34:20.000 Whatever else it's trying to get, you know, probably more computing resources could help it get more of it.
00:34:26.000 Probably creating more energy could help it get more of it.
00:34:29.000 Probably capturing more sunlight could help it get more of it.
00:34:32.000 You have, if you have automated minds that are able to, that are smart in the manner of humans,
00:34:38.000 that are able to build their own technological civilization, that are able to build their own infrastructure,
00:34:43.000 what that leads to, if they don't care about us, is us dying as a side effect,
00:34:48.000 in the same way that ants die as a side effect as we build our skyscrapers.
00:34:51.000 It's not that they hate us, it's that there's a bunch of resources they can take for their own ends.
00:34:57.000 And so if we want this to go well, we either need to figure out how to make the AIs actually care about us,
00:35:03.000 or we need to not build things that are so smart and powerful that they transform the world like humanity has transformed the world,
00:35:10.000 except we're the ones dying as a side effect this time, as opposed to, you know, a bunch of the animals.
00:35:16.000 There was a fantastic open letter issued, if I'm not mistaken, in 2023 from the Future of Life Institute that argued that AI development should be capped at GPT-4.
00:35:31.000 We've blown past that, and some of the signatories, including Elon Musk, are among those who continued building no matter what.
00:35:39.000 You also have a very brief statement on existential risk from the Center for AI Safety.
00:35:45.000 And they make a very similar argument. It's just not worth it, at least not now.
00:35:51.000 What are your and Eliezer Yudkowsky's arguments as to what citizens and governments should do to avoid this catastrophe?
00:36:02.000 So what the world needs is a global ban on research and development towards superintelligence.
00:36:08.000 That, you know, training these new AIs, like I mentioned, it takes highly specialized chips and extremely large data centers that take huge amounts of electricity.
00:36:18.000 This is not a sort of ban on development that would affect the average person.
00:36:23.000 It would be relatively easy to find all these locations where it's possible to train even smarter AIs and monitor them, put a stop to them, make sure they're not making AIs smarter, right?
00:36:35.000 This, you know, this isn't really about the chatbots.
00:36:40.000 The chatbots are a stepping stone towards superintelligence by these companies.
00:36:43.000 These companies do not set out to make cool chatbots.
00:36:46.000 They set out to make superintelligences, and we can't keep letting them plow away.
00:36:51.000 The superintelligence is a different ballgame.
00:36:53.000 If we get to that ballgame, if we get AIs that sort of go over some cliff edge and become much smarter than humans, that's lethal for everybody.
00:37:05.000 Most of the world doesn't seem to understand yet that superintelligence is a different ballgame than the AI we're currently working with and don't seem to understand that we're racing towards the brink of a cliff.
00:37:16.000 It seems to me that once people understand that nobody has any interest in going over that cliff edge, there's a possibility to coordinate and say, despite all our other differences, we're not going to rush ahead on this one.
00:37:29.000 Much like, you know, the U.S. and the Soviets in the Cold War.
00:37:32.000 Many differences. We could agree not to proliferate the nukes.
00:37:36.000 We've heard this from Elon Musk for years, although he's continued to move forward with the development of Grok and other AI systems.
00:37:44.000 We hear clear signals from Anthropic.
00:37:47.000 In fact, their founding mission was to create artificial general intelligence in a safe manner.
00:37:53.000 Who do you see as the companies or institutions who are most in alignment with your goal of banning superintelligent AI, both either on a national level or through international treaties?
00:38:07.000 You know, none of them are advocating for it openly, which I think, I mean, I guess there's there's people who are a little bit more and less clear with the public about where they see the risks, where they see the dangers.
00:38:24.000 You know, it's it's not necessarily irrational for somebody like Elon to hop in this race if the race gets to keep going.
00:38:29.000 And I laud Elon for saying, you know, this has a serious risk of killing us all, and saying things to the effect of, you know, I originally didn't want to get in the race, but it's going to happen anyway.
00:38:40.000 I want to be in it. Right. That's that's not a totally insane picture if everyone else is racing.
00:38:46.000 I think many of these folks running these companies are deluded as to their chances of getting this right.
00:38:56.000 So in that sense, I think they should all just be stopping immediately.
00:39:01.000 But I can empathize with the view of thinking that they can do it better than the next guy.
00:39:08.000 And in that case, what all these companies should be saying is this is an extremely dangerous technology.
00:39:13.000 We're racing towards a cliff edge and the world would be better off if we shut down all of it, including us.
00:39:18.000 That's implied by many of the statements they're saying.
00:39:20.000 When someone says and, you know, the heads of some of these companies have said, I think this has, you know, 5, 10, 20, 25 percent chance of killing every man, woman and child on the planet.
00:39:30.000 If you think that, it doesn't necessarily mean you should stop if everyone else is racing, but it does mean you should say to the world plainly, we should not be doing this.
00:39:39.000 Everybody, including me, should be stopped.
00:39:43.000 P-Doom, the infamous P-Doom, the probability of doom should super intelligence be created.
00:39:48.000 I take it that I don't expect you to speak for Yudkowsky, but your P-Doom is quite high. Can you give us a number, sir?
00:39:55.000 I think the whole idea of this number is ill-founded.
00:40:02.000 There's a big difference between someone who thinks that we are in big danger because humanity can't do anything and somebody who thinks we're in big danger because humanity won't do anything.
00:40:11.000 If you're just predicting, you know, what are the chances that we die from this?
00:40:14.000 You're mixing together. What can we do and what will we do?
00:40:19.000 My answer, first and foremost, is that we can do something.
00:40:23.000 This has not been built yet. Humanity is backed off from Brinks before.
00:40:27.000 If you ask, suppose we just charge ahead, suppose we do nothing, suppose we rush into making machines that are smarter than every human,
00:40:35.000 that can outmaneuver us at every turn, that can think 10,000 times faster, that never need to sleep, never need to eat, can copy themselves,
00:40:43.000 and that are pursuing goals no one asked for and no one wanted.
00:40:48.000 What's the chance we survive that? The chance we survive that is roughly negligible.
00:40:52.000 But that's not the question that matters. The question that matters is, what are we going to do?
00:40:56.000 And can we do something? And the answer to can we do something is yes.
00:41:01.000 You know, personally, I'm more of a P-Gloom kind of guy.
00:41:05.000 I think the probability of gloom is much higher than doom, meaning that the real risk is that the AIs become so annoying,
00:41:14.000 so grotesque, as it was put to me by a friend, that we would be better off extinct.
00:41:21.000 But your goal to ban superintelligence, to cap it, I'm completely amenable to that.
00:41:28.000 I don't want chatbots. I don't think anything but the most essential medical or military AI should even necessarily be pursued.
00:41:36.000 But whether it is imminent or whether it's even possible, if we have a ban on artificial superintelligence, I get what I want, right?
00:41:46.000 Like, if it was possible, then we don't get it. If it was never possible, well, at least we showed due diligence.
00:41:54.000 But there are arguments that the enforcement of this could go out of control, that the enforcement would be the real problem, especially global treaties, global governance.
00:42:05.000 So you're well familiar with Peter Thiel's argument that the concern about artificial intelligence, general, super, whatever, that AI killing everyone is less of an immediate concern than the global governance it would require to keep that at bay.
00:42:26.000 And this falls into the line with a lot of patterns we see in history from the drug wars, right?
00:42:31.000 You have the danger of drugs and the control mechanism of the war against drugs or with terrorism.
00:42:37.000 You have the danger of terrorism, the control mechanism of the Patriot Act and the rest of the global surveillance state.
00:42:44.000 And even on a mundane level, right? Right now, there's a big push for age gating to make sure that children can't access pornography or malicious AIs.
00:42:55.000 But then on the other side of that, you have the danger of required bio digital identity in order to use the Internet.
00:43:02.000 So how do you respond to those concerns that global governance or any overreaching governmental structure would be more of a danger than theoretical super intelligent AI, sir?
00:43:16.000 So I think that's largely the sort of argument made by someone who does not really believe in this possibility.
00:43:24.000 And, you know, I would sort of prefer to have the argument about: is this possible? Could it come quickly?
00:43:29.000 I would also say, you know, people say this, I think often rightly, about things like the war on drugs, the war on terrorism, where there was a lot more, you know, power being aggregated than was maybe worth what we got from it.
00:43:48.000 But no one says that about nuclear arms treaties. Right.
00:43:51.000 And that's because, in some sense: A, they believe in nukes; B, making a nuclear weapon takes a huge amount of resources that's easily monitorable and doesn't really affect the individual consumer.
00:44:07.000 Right. You don't need something like the TSA to be checking everybody's bags for fissile material. Right.
00:44:13.000 And modern AI is much like this.
00:44:17.000 You know, it's not like you need to restrict consumer hardware.
00:44:21.000 Modern AIs are trained on extremely specialized chips that can be made in extremely few places in the world that are housed in extremely large data centers that, again, run on, you know, electricity comparable to a small city.
00:44:32.000 This is not the sort of monitoring regime that would be more invasive than monitoring for, you know, nuclear arms treaties.
00:44:42.000 The difference really is that people are uncertain about whether, like you say, superintelligence is possible and whether it is possible relatively soon.
00:44:53.000 That's where I would prefer to debate someone who thinks now is not the time for that kind of treaty.
00:45:00.000 On that note, I think about this in terms of the technical limits, not just the will to create it, but the technical limits.
00:45:07.000 You argue that it's quite possible within the realm of physics and mechanics to create a super intelligent A.I.
00:45:17.000 I think about one example in particular, some supersonic jets.
00:45:21.000 Right.
00:45:22.000 You had, very early on in the history of aviation, a 1946 jet hitting Mach one.
00:45:30.000 And then by 1959, you had close to Mach seven.
00:45:34.000 But you get a kind of capping point in the S curve, so to speak, so that the fastest supersonic jet now, unmanned,
00:45:44.000 I think it's the NASA X-43.
00:45:48.000 I didn't have to look at my notes, I'm not lying.
00:45:50.000 The NASA X-43, it's a bit faster than the 1959 version, but not that much faster.
00:45:57.000 Isn't it possible then that we will run into technical limitations that would keep anything like general or super intelligence from arising?
00:46:07.000 So it very likely is an S shaped curve.
00:46:09.000 The question is, well, there are two questions.
00:46:12.000 One is, are there multiple different S-shaped curves?
00:46:14.000 That we can hop between, from one to the next?
00:46:15.000 The other question is, where does the sort of last S shaped curve fall off?
00:46:20.000 So to the question of multiple S shaped curves, you can imagine someone after AlphaGo, which we discussed, saying, you know, I know that these AIs are more general than any AIs that came before.
00:46:31.000 You know, Deep Blue could play only one game, whereas the AlphaGo series of AIs can play multiple games.
00:46:35.000 But I just don't see them going all the way.
00:46:38.000 I don't see the AlphaGo Monte Carlo tree search value policy network type architecture, which is what those things were called more or less.
00:46:45.000 I don't see those AIs, you know, ever, ever talking.
00:46:48.000 I don't see those AIs, you know, there's maybe an S shaped curve for these game playing AIs.
00:46:52.000 That was totally true.
00:46:53.000 But ChatGPT is not a bigger version of AlphaGo.
00:46:58.000 There was a new advancement that unlocked qualitatively better AIs that can do qualitatively more things across a wider range of options in better ways.
00:47:09.000 You know, maybe it's the case that ChatGPT will hit a plateau along that S shaped curve.
00:47:15.000 But the question is, you know, when will this field come up with some other insight like the one that unlocked ChatGPT?
00:47:23.000 How long will that take?
00:47:24.000 What will it unlock next?
00:47:25.000 How many more leaps like the leap from AlphaGo to ChatGPT does it take before things are in the danger zone?
00:47:33.000 And to the question of how high can the last S shaped curve go?
00:47:37.000 You know, these AIs, again, this training takes enough electricity to power a small city.
00:47:42.000 A human takes about 100 watts of power.
00:47:47.000 That's about as much as it takes to run an old school light bulb, right?
00:47:51.000 So you can run a human on a light bulb.
00:47:54.000 To train an AI, it takes a small city worth of power.
00:47:59.000 That indicates that we are nowhere near the physical limits.
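A rough back-of-the-envelope version of the power comparison above, as a minimal sketch. The 100-megawatt figure for a training cluster is an illustrative assumption standing in for "a small city"; it is not a number from the conversation or from any specific lab.

```python
# Rough comparison of the power figures discussed above.
# The training-cluster wattage is an assumed, illustrative value.
human_watts = 100                  # roughly an old incandescent light bulb
training_cluster_watts = 100e6     # assumed ~100 MW, "a small city"

ratio = training_cluster_watts / human_watts
print(f"assumed training cluster draws ~{ratio:,.0f}x the power of a human")
# -> assumed training cluster draws ~1,000,000x the power of a human
```

The point of the comparison is only that current training runs sit many orders of magnitude above the energy budget biology needs for human-level thinking, which suggests a lot of headroom before physical limits bind.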
00:48:02.000 How long will it take us to get to the physical limits?
00:48:05.000 That's harder to say.
00:48:06.000 But again, this field progresses forward by leaps and bounds.
00:48:11.000 And it's often very, very hard to call how long it will take for scientific progress to be made.
00:48:17.000 You know, fusion has been 20 years away for, you know, 70 years now.
00:48:22.000 And separately, the Wright brothers said, you know, flight won't happen for decades, two years before they themselves flew.
00:48:28.000 That was a fantastic example.
00:48:31.000 Rhetorically, I can't say how much I admire the way that your book is written, the cleverness of the turns of phrase and the formulations, the title especially.
00:48:42.000 We have only just a few minutes remaining.
00:48:44.000 But as well as we can, I would just like to talk really briefly about alignment.
00:48:50.000 You argue, and Eliezer Yudkowsky has long argued, that these systems need to be aligned to human values.
00:48:59.000 And their stochasticity or non-deterministic elements would preclude that, perhaps.
00:49:06.000 Whose values, though?
00:49:08.000 You were speaking to a largely Christian, largely conservative audience.
00:49:13.000 And without presuming too much, you know that the San Francisco culture is significantly different.
00:49:23.000 Whose values would such an AI be aligned to?
00:49:28.000 That is a very important question for humanity to ask itself and a question I wish we could be asking now.
00:49:34.000 But unfortunately, the problem we face is even worse than that.
00:49:37.000 The problem we face is that we are nowhere near the ability to align an AI to any person's values.
00:49:46.000 We aren't. You know, at the Machine Intelligence Research Institute, for ten years I've been studying this question on the technical side.
00:49:53.000 Never hoped to be at this point.
00:49:55.000 No offense, but I prefer working on whiteboards to talking to anyone.
00:49:58.000 We were trying to figure out how to get to the point where you could even ask, whose values are we aligning it to?
00:50:06.000 Right now we're not at the point where anyone could aim it.
00:50:09.000 Right now we're at the point where, you know, the people in these labs at San Francisco are trying to get it to do one thing, and it does a different thing.
00:50:15.000 And they specifically say, stop doing that, do this instead.
00:50:18.000 And then it does some other third totally weird thing, right?
00:50:21.000 The place where I spend my work is trying to make it so that somebody in charge could point the AI somewhere successfully.
00:50:28.000 There's then a huge question of where should we point the AI?
00:50:31.000 Who gets to make that choice?
00:50:32.000 I tell you what, Nate, we are out of time, but we will have you back next week, hopefully with Yudkowsky in tow.
00:50:40.000 The book is If Anyone Builds It, Everyone Dies.
00:50:44.000 When is it released, sir, and where can people find it?
00:50:47.000 It comes out on September 16th, one week from today, and people can find it on booksellers everywhere, including Amazon.
00:50:55.000 I would definitely recommend pre-ordering it.
00:50:57.000 Even if you don't believe in super intelligence, you will definitely understand the arguments.
00:51:01.000 It is a fantastically written book.
00:51:03.000 Thank you very much, sir, for coming on.
00:51:05.000 Look forward to talking to you again next week.
00:51:07.000 And when inflation jumps, when you hear the national debt is over $37 trillion, do you ever think maybe now would be a good time to buy some gold?
00:51:19.000 You need to go to birchgold.com slash Bannon.
00:51:23.000 That's birchgold.com slash Bannon for your free guide to buying physical gold or text Bannon to 989898.
00:51:37.000 And you never thought it would get this far.
00:51:41.000 Maybe you missed the last IRS deadline or you haven't filed taxes in a while.
00:51:45.000 Let me be clear.
00:51:46.000 The IRS is cracking down harder than ever, and this won't go away on its own.
00:51:49.000 That's why you need Tax Network USA.
00:51:52.000 They don't just know the IRS.
00:51:54.000 They have a preferred direct line to the IRS.
00:51:57.000 Their team has helped clear over a billion in tax debt.
00:52:03.000 Tax Network, that's TNUSA.com slash Bannon.
00:52:07.000 TNUSA.com slash Bannon.
00:52:10.000 There's a lot of talk about government debt, but after four years of inflation, the real crisis is personal debt.
00:52:17.000 Seriously, you're working harder than ever, and you're still drowning in credit card debt and overdue bills.
00:52:24.000 You need Done With Debt, and here's why you need it.
00:52:27.000 The credit system is rigged to keep you trapped.
00:52:31.000 Done With Debt has unique and, frankly, brilliant escape strategies to help end your debt fast, so you keep more of your hard-earned money.
00:52:40.000 Done With Debt doesn't try to sell you a loan, and they don't try to sell you a bankruptcy.
00:52:46.000 They're tough negotiators that go one-on-one with your credit card and loan companies with one goal, to drastically reduce your bills and eliminate interest and erase penalties.
00:52:56.000 Most clients end up with more money in their pocket month one, and they don't stop until they break you free from debt permanently.
00:53:05.000 Look, take a couple of minutes and visit donewithdebt.com.
00:53:11.000 Talk with one of their strategists.
00:53:13.000 It's free.
00:53:14.000 But listen up.
00:53:15.000 Some of their solutions are time-sensitive, so you'll need to move quickly.
00:53:19.000 Go to donewithdebt.com.
00:53:21.000 That's donewithdebt.com.
00:53:22.000 Stop the anxiety.
00:53:24.000 Stop the angst.
00:53:25.000 Go to donewithdebt.com and do it today.
00:53:28.000 Thank you very much for saving the day.
00:53:29.000 Thank you very much for joining us today.