The Joe Rogan Experience - April 25, 2025


Joe Rogan Experience #2311 - Jeremie & Edouard Harris


Episode Stats

Length

2 hours and 47 minutes

Words per Minute

187.21074

Word Count

31,389

Sentence Count

2,417

Misogynist Sentences

21


Summary

In this episode of the podcast, we're joined by a guest who's been in the AI space for a long time, and who's got a pretty good idea of what's to come in the future of AI. We talk about what the future might look like in the near future, and whether or not there's a Doomsday Clock for artificial intelligence.


Transcript

00:00:11.000 All right, so if there's a doomsday clock for AI and where we're fucked, what time is it?
00:00:19.000 If midnight is, we're fucked.
00:00:21.000 We're getting right into it.
00:00:22.000 You're not even going to ask us what we had for breakfast?
00:00:25.000 No, no, no, no, no, no, no, no.
00:00:26.000 Jesus, okay.
00:00:27.000 Let's get freaked out.
00:00:29.000 Well, okay, so there's one, without speaking to, like, the fucking Doomsday dimension right off the gate, there's a question about, like, where are we at in terms of AI capabilities right now, and what do those timelines look like?
00:00:40.000 Right.
00:00:41.000 There's a bunch of disagreement.
00:00:42.000 One of the most concrete pieces of evidence that we have recently came out of a lab, an AI kind of evaluation lab called METER, and they put together this test.
00:00:52.000 Basically, it's like you ask the question, pick a task that takes a certain amount of time, like an hour.
00:01:00.000 It takes like a human a certain amount of time.
00:01:02.000 And then see like how likely the best AI system is to solve for that task.
00:01:07.000 Then try a longer task.
00:01:08.000 See like a 10 hour task.
00:01:10.000 And so right now what they're finding is when it comes to AI research itself, so basically like automate the work of an AI researcher.
00:01:18.000 You're hitting 50% success rates for these AI systems for tasks that take an hour long.
00:01:23.000 And that is doubling every, right now it's like every four months.
00:01:27.000 So you had tasks that you could do, you know, a person does in five minutes like, you know, ordering an Uber Eats or like something that takes like 15 minutes, like maybe booking a flight or something like that.
00:01:37.000 And it's a question of like, how much can these AI agents do, right?
00:01:42.000 Like from five minutes to 15 minutes to 30 minutes.
00:01:44.000 And in some of these spaces like...
00:01:50.000 So if you extrapolate that, you basically get to tasks that take a month to complete.
00:02:00.000 Like by 2027, tasks that take an AI researcher a month to complete, these systems will be completing with like a 50% success rate.
00:02:07.000 So you'll be able to have an AI on your show and ask it what the doomsday clock is like by then.
00:02:13.000 I probably won't laugh.
00:02:17.000 It'll have a terrible sense of humor about it.
00:02:19.000 Just make sure you ask it what it had for breakfast before you started.
00:02:22.000 Yeah.
00:02:24.000 What about quantum computing getting involved in AI?
00:02:27.000 So, yeah.
00:02:28.000 Honestly, I don't think it's...
00:02:30.000 If you think that you're going to hit...
00:02:32.000 Human-level AI capabilities across the board, say 2027, 2028, which when you talk to some of the people in the labs themselves, that's the timelines they're looking at.
00:02:41.000 They're not confident, they're not sure, but that seems pretty plausible.
00:02:44.000 If that happens, really there's no way we're going to have quantum computing that's going to be giving enough of a bump to these techniques.
00:02:51.000 You're going to have standard classical computing.
00:02:54.000 One way to think about this is that the data centers that are being built today...
00:02:58.000 Are being thought of literally as the data centers that are going to house, like, the artificial brain that powers superintelligence, human-level AI when it's built in, like, 2027, something like that.
00:03:09.000 So, how knowledgeable are you when it comes to quantum computing?
00:03:14.000 So, a little bit.
00:03:16.000 I mean, like, I did my grad studies in, like, the foundations of quantum mechanics.
00:03:21.000 Oh, great.
00:03:21.000 Yeah, well, it was a mistake, but I appreciate it for the purpose of that.
00:03:25.000 Why was it a mistake?
00:03:27.000 Academia is a funny thing.
00:03:30.000 It's really bad culture.
00:03:32.000 It teaches you some really terrible habits.
00:03:35.000 Basically, my entire life after academia, and Ed's too, was unlearning these terrible habits.
00:03:42.000 It's all zero-sum, basically.
00:03:44.000 It's not like when you're working in startups.
00:03:45.000 It's not like when you're working in tech where you build something and somebody else builds something that's complementary and you can team up and just make something amazing.
00:03:53.000 It's always...
00:03:54.000 Wars over who gets credit, who gets their name on the paper.
00:03:57.000 Did you cite this fucking stupid paper from two years ago because the author has an ego and you got to be honest.
00:04:03.000 I was literally at one point, I'm not going to get any details here, but like there was a collaboration that we ran with like this, anyway, fairly well-known guy.
00:04:15.000 And my supervisor had me, like, write the emails that he would send from his account so that he was seen as, like, the guy who was, like, interacting with this bigwig.
00:04:25.000 That kind of thing is, like, doesn't tend to happen in startups, at least not in the same way.
00:04:30.000 So he wanted credit for the, like, he wanted to seem like he was the genius who was facilitating this?
00:04:36.000 For sounding smart on email.
00:04:37.000 Eww.
00:04:38.000 Right?
00:04:39.000 Yuck.
00:04:39.000 That happens everywhere.
00:04:41.000 Dude, yeah.
00:04:41.000 The reason it happens is that these guys who are professors, or even not even professors, just like your post-doctoral guy who's supervising you, they can write your letters of reference and control your career after that lapse.
00:04:55.000 Yeah, they got you by the balls.
00:04:56.000 They can do whatever.
00:04:56.000 Oh, God.
00:04:58.000 It's like a movie.
00:04:59.000 Yeah, it's gross.
00:05:00.000 Like a gross movie.
00:05:01.000 Like a gross boss in a movie that wants to take credit for your work.
00:05:05.000 And it's real.
00:05:06.000 It's rampant.
00:05:06.000 And the way to escape it is to basically just be like, fuck this.
00:05:10.000 I'm going to go do my own thing.
00:05:12.000 And so Jer dropped out of grad school to...
00:05:15.000 Come start a company.
00:05:16.000 And I mean, honestly, even that, it took me, it took both of us, like, a few years to, like, unfuck our brains and unlearn the bad habits we learned.
00:05:24.000 It was really only a few years later that we started, like, really, really getting a good, like, getting a good flow going.
00:05:31.000 You're also, you're kind of disconnected from, like, base reality when you're in the ivory tower, right?
00:05:36.000 Like, if you're, there's something beautiful about, and this is why we spent all our time in startups, but there's something really beautiful about, like, It's just a bunch of assholes, us, and, like, no money and nothing and a world of, like, potential customers.
00:05:50.000 And it's like, you actually, it's not that different from, like, stand-up comedy in a way.
00:05:54.000 Like, your product is, can I get the laugh, right?
00:05:57.000 Like, something like that.
00:05:58.000 And it's...
00:05:58.000 Unforgiving.
00:05:59.000 If you fuck up, it's like silence in the room.
00:06:01.000 It's the same thing with startups.
00:06:02.000 Like, the space of products that actually works is so narrow.
00:06:06.000 And you've got to obsess over what people actually want.
00:06:09.000 And it's so easy to fool yourself into thinking that you've got something that's really good because your friends and family are like, oh, no, sweetie, you're doing a great job.
00:06:16.000 Like, what a wonderful life.
00:06:17.000 I would totally use it.
00:06:18.000 I totally see all that stuff, right?
00:06:20.000 And I love that because it forces you to change.
00:06:24.000 Yeah.
00:06:26.000 The whole indoctrination thing in academia is so bizarre because there's these hierarchies of powerful people and just the idea that you have to work for someone someday and they have to take credit by being the person on the email.
00:06:43.000 That will haunt me for days.
00:06:45.000 I'll be thinking about that for days now.
00:06:47.000 I fucking can't stand people like that.
00:06:50.000 It drives me nuts.
00:06:51.000 One big consequence is it's really hard to tell who the people are who are creating value in that space, too, right?
00:06:55.000 Of course.
00:06:56.000 Sure, because it's just like television.
00:06:58.000 One of the things about television shows is—so I'll give you an example.
00:07:03.000 A very good friend of mine who's a very famous comedian had this show, and his agent said, we're going to attach these producers.
00:07:10.000 It'll help get it made.
00:07:12.000 And he goes, "Well, what are they gonna do?"
00:07:14.000 He goes, "They're not gonna do anything.
00:07:15.000 It'll just be in name."
00:07:16.000 He goes, "But they're gonna get credit."
00:07:18.000 He goes, "Yeah."
00:07:18.000 He goes, "Fuck that."
00:07:19.000 He goes, "No, no, listen, listen.
00:07:20.000 This is better for the show.
00:07:22.000 It'll help the show."
00:07:23.000 Excuse me.
00:07:25.000 They'll have a piece of the show.
00:07:26.000 He's like, "Yes, yes, but it's a matter of whether the show gets successful or not, and this is a good thing to do."
00:07:31.000 And he's like, "What are you talking about?"
00:07:34.000 It was a conflict of interest because this guy, the agent was representing these other people.
00:07:39.000 But this is completely common.
00:07:41.000 So there's these executive producers that are on shows that have zero to do with it.
00:07:47.000 So many industries are like this.
00:07:49.000 And that's why we got into startups.
00:07:52.000 It's literally like you and the world, right?
00:07:54.000 It's like in a way...
00:07:55.000 Like stand-up comedy, like Jer said.
00:07:57.000 Or like podcasting.
00:07:58.000 Or like podcasting, where your enemy isn't actually hate.
00:08:01.000 It's indifference.
00:08:02.000 Like, most of the stuff you do, especially when you're getting started, like, why would anyone, like, give a shit about you?
00:08:07.000 They're just not going to pay attention.
00:08:08.000 Yeah, that's not even your enemy.
00:08:10.000 You know, that's just all potential.
00:08:12.000 That's all that is, you know?
00:08:13.000 Like, your enemy is within you.
00:08:15.000 It's like, figure out a way to make whatever you're doing good enough that you don't have to think about it not being valuable.
00:08:20.000 It's meditative.
00:08:21.000 Like, there's no way for it not to be...
00:08:24.000 To be, in some way, a reflection of, like, yourself.
00:08:27.000 You know, you're kind of, like, in this battle with you trying to convince yourself that you're great, so the ego wants to grow, and then you're constantly trying to compress it and compress it.
00:08:34.000 And if there's not that outside force, your ego will expand to fill whatever volume is given to it.
00:08:38.000 Like, if you have money, if you have fame, if everything's given, and you don't make contact with the unforgiving on a regular basis, like, yeah, you know, you're gonna end up...
00:08:46.000 You're going to end up doing that to yourself.
00:08:48.000 You could, yeah.
00:08:49.000 It's possible to avoid, but you have to have strategies.
00:08:52.000 Yeah, you have to be intentional about it.
00:08:53.000 The best strategy is jujitsu.
00:08:57.000 Mark Zuckerberg is a different person now.
00:09:00.000 Yeah, you can see it.
00:09:01.000 You can see it.
00:09:02.000 Yeah, well, it's a really good thing for people that have too much power because you just get strangled all the time.
00:09:08.000 And then you just get your arms bent sideways.
00:09:11.000 And after a while, you're like, okay.
00:09:13.000 This is reality.
00:09:14.000 This is reality.
00:09:15.000 This social hierarchy thing that I've created is just nonsense.
00:09:18.000 It's just smoke and mirrors.
00:09:19.000 And they know it is, which is why they so rabidly enforce these hierarchies.
00:09:24.000 The best people seek it out.
00:09:26.000 Sir and ma 'am and all that kind of shit.
00:09:27.000 That's what it is.
00:09:28.000 You don't feel like you really have respect unless you say that.
00:09:32.000 Ugh.
00:09:33.000 These poor kids that have to go from college where they're talking to these dipshit professors out into the world and operating under these same rules that they've been, like, forced and indoctrinated to.
00:09:44.000 God, to just make it on your own.
00:09:46.000 It's amazing what you can get used to, though.
00:09:48.000 And, like, the...
00:09:50.000 It's funny, you were mentioning the producer thing.
00:09:51.000 That is literally also a thing that happens in academia.
00:09:53.000 So you'll have these conversations where it's like, all right, well, this paper is...
00:09:57.000 You know, fucking garbage or something.
00:09:58.000 But we want to get it in a paper, in a journal.
00:10:01.000 And so let's see if we can get, like, a famous guy on the list of authors so that when it gets reviewed, people go like, oh, Mr. So-and-so, okay.
00:10:10.000 And that literally happens.
00:10:11.000 The funny thing is, like, the hissy fits over this are, like, the stakes are so brutally low.
00:10:16.000 At least with your producer example, like, someone stands to make a lot of money.
00:10:19.000 With this, it's like...
00:10:21.000 You get maybe like an assistant professorship out of it at best that's like $40,000 a year.
00:10:28.000 It's just like, what are you going to do?
00:10:32.000 For the producers, it is money, but I don't even think they notice the money anymore.
00:10:36.000 Because all those guys are really, really rich already.
00:10:39.000 If you're a big-time TV producer, you're really rich.
00:10:41.000 I think the big thing is...
00:10:44.000 Being thought of as a genius who's always connected to successful projects.
00:10:47.000 Right, yeah.
00:10:48.000 That's what they really like.
00:10:49.000 That is always going to be a thing, right?
00:10:51.000 It wasn't one producer.
00:10:52.000 It was like a couple.
00:10:53.000 So there's going to be a couple different people that were on this thing that had zero to do with it.
00:10:58.000 It was all written by a stand-up comedian.
00:11:00.000 His friends all helped him.
00:11:02.000 They all put it together.
00:11:03.000 And then he was like, no.
00:11:05.000 He wound up firing his agent over it.
00:11:07.000 Oh, shit.
00:11:08.000 Good for him.
00:11:08.000 I mean, yeah.
00:11:09.000 Get the fuck out of here.
00:11:11.000 At a certain point for the producers, too, it's kind of like you'll have people approaching you for help on projects that look nothing like projects you've actually done.
00:11:18.000 So I feel like it just adds noise to your universe.
00:11:20.000 Like, if you're actually trying to build cool shit, you know what I mean?
00:11:23.000 Some people just want to be busy.
00:11:25.000 They just want more things happening and they think more is better.
00:11:28.000 More is not better.
00:11:29.000 Because more is energy that takes away from the better.
00:11:32.000 Whatever the important shit is.
00:11:33.000 Yeah, the focus.
00:11:34.000 You only have so much time until AI takes over.
00:11:37.000 Then you'll have all the time in the world because no one will be employed and everything will be automated.
00:11:41.000 We'll all be on universal basic income.
00:11:43.000 And that's it.
00:11:44.000 That's a show.
00:11:45.000 The end.
00:11:46.000 That's a sitcom.
00:11:48.000 That's a sitcom.
00:11:49.000 A bunch of poor people existing on $250 a week.
00:11:51.000 Oh, I would watch that.
00:11:52.000 Yeah.
00:11:53.000 Because the government just gives everybody...
00:11:55.000 That's what you live off of.
00:11:56.000 Like weird shit is cheap.
00:11:57.000 Like the stuff that's like all like, well, the stuff you can get from chatbots and AI agents is cheap, but like food is super expensive or something.
00:12:05.000 Yeah.
00:12:06.000 Organic food is going to be, you're going to have to kill people for it.
00:12:09.000 You will eat people.
00:12:10.000 It will be like a Soylent world.
00:12:11.000 Right.
00:12:12.000 Soylent green.
00:12:14.000 Nothing's more free range than people though.
00:12:16.000 That's true.
00:12:17.000 Depends on what they're eating though.
00:12:18.000 It's just like animals, you know?
00:12:21.000 You don't want to eat a bear that's been eating salmon.
00:12:23.000 They taste like shit.
00:12:23.000 I didn't know that.
00:12:25.000 I've been eating my bear wrong this entire time.
00:12:30.000 So back to the quantum thing.
00:12:33.000 So quantum computing is infinitely more powerful than standard computing.
00:12:38.000 Would it make sense, then, that if quantum computing can run a large language model, that it would reach a level of intelligence that's just preposterous?
00:12:47.000 So, yeah, one way to think of it is, like, there are problems that quantum computers can solve way, way, way, way better than classical computers.
00:12:54.000 And so, like, the numbers get absurd pretty quickly.
00:12:57.000 It's, like, problems that a classical computer couldn't solve if it had the entire lifetime of the universe to solve it.
00:13:02.000 A quantum computer, right, in, like, 30 seconds, boom.
00:13:05.000 But the flip side, like, there are problems that quantum computers just, like, can't help us accelerate.
00:13:10.000 The kinds of, like, one classic problem that quantum computers help with is this thing called, like, the traveling salesman paradox.
00:13:17.000 Or problem where, you know, you have like a bunch of different locations that a salesman needs to hit, and what's the best path to hit them most efficiently?
00:13:25.000 It's like kind of a classic problem if you're going around different places and have to make stops.
00:13:29.000 There are a lot of different problems that have the right shape for that.
00:13:33.000 A lot of quantum machine learning, which is a field, is focused on how do we take standard AI problems, like AI...
00:13:40.000 I think we're good to go.
00:14:01.000 Can you define that for people?
00:14:03.000 What's the difference between human-level AI and superintelligence?
00:14:06.000 Yeah.
00:14:07.000 So, yeah, human-level AI is, like, AI...
00:14:11.000 You can imagine, like, it's AI that is...
00:14:13.000 As smart as you are in, let's say, all the things you could do on a computer.
00:14:18.000 So, you know, you can, yeah, you can order food on a computer, but you can also write software on a computer.
00:14:22.000 You can also email people and pay them to do shit on a computer.
00:14:25.000 You can also trade stocks on a computer.
00:14:27.000 So it's like as smart as a smart person for that.
00:14:31.000 Superintelligence, people have various definitions, and there are all kinds of, like, honestly hissy fits about, like, different definitions.
00:14:38.000 Generally speaking, it's something that's, like, very significantly smarter than the smartest human.
00:14:43.000 And so you think about it, it's kind of like it's as much smarter than you as you might be smarter than a toddler.
00:14:51.000 And you think about that, and you think about, like, the, you know, how would a toddler control you?
00:14:59.000 It's kind of hard.
00:15:00.000 Like, you can outthink a toddler.
00:15:03.000 Pretty much like any day of the week.
00:15:04.000 And so superintelligence gets us at these levels where you can potentially do things that are completely different and basically, you know, new scientific theories.
00:15:15.000 And last time we talked about, you know, new stable forms of matter that were being discovered by these kind of narrow systems.
00:15:22.000 But now you're talking about a system that is like, has that intuition combined with the ability to...
00:15:30.000 Talk to you as a human and to just have really good, like, rapport with you, but can also do math.
00:15:36.000 It can also write code.
00:15:37.000 It can also, like, solve quantum mechanics and has that all kind of wrapped up in the same package.
00:15:43.000 One of the things, too, that, by definition, if you build a human-level AI, one of the things it must be able to do, as well as humans...
00:15:49.000 Is AI research itself?
00:15:51.000 Yeah.
00:15:51.000 Or at least the parts of AI research that you can do in just like software, like by coding or whatever these systems are designed to do.
00:15:59.000 And so one implication of that is you now have automated AI researchers.
00:16:05.000 And if you have automated AI researchers, that means you have AI systems that can automate the development of the next...
00:16:14.000 And now you're getting into that whole singularity thing where it's an exponential that just builds on itself and builds on itself, which is kind of why a lot of people argue that if you build human-level AI, superintelligence can't be that far away.
00:16:28.000 You've basically unlocked everything.
00:16:30.000 And we kind of have gotten very close, right?
00:16:34.000 It's past the Fermi, not the Fermi paradox, the, what is it?
00:16:39.000 Oh, yeah, yeah.
00:16:40.000 We were just talking about him the other day.
00:16:42.000 Yeah, the test.
00:16:43.000 Oh, the Turing test?
00:16:44.000 The Turing test.
00:16:45.000 Thank you.
00:16:46.000 We were just talking about how horrible, what happened to him was, you know, they chemically castrated him because he was gay.
00:16:52.000 Yeah.
00:16:53.000 Horrific.
00:16:54.000 He winds up killing himself.
00:16:55.000 The guy who figures out what's the test to figure out whether or not AI has become sentient.
00:17:00.000 And by the way, he does this in, like, what, 1950-something?
00:17:02.000 Oh, yeah, yeah.
00:17:03.000 Alan Turing is, like, the guy was a beast, right?
00:17:05.000 How did he think that through?
00:17:07.000 He invented computers.
00:17:09.000 He invented basically the concept that underlies all computers.
00:17:13.000 Like, he was like...
00:17:14.000 An absolute beast.
00:17:15.000 He was a code breaker.
00:17:16.000 He broke the Nazi codes, right?
00:17:18.000 He also wasn't even the first person to come up with this idea of machines, building machines, and there being implications like human disempowerment.
00:17:26.000 So if you go back to, I think it was like the late 1800s, and I don't remember the guy's name, but he sort of like came up with this.
00:17:33.000 He was observing the Industrial Revolution and the mechanization of labor and kind of starting to see.
00:17:38.000 More and more, like, if you zoom out, it's almost like you have a humans or an ant colony, and the artifacts that that colony is producing that are really interesting are these machines.
00:17:46.000 You know, you kind of, like, look at the surface of the Earth as, like, gradually, increasingly mechanized thing, and it's not super clear if you zoom out enough, like...
00:17:55.000 What is actually running the show here?
00:17:57.000 Like, you've got humans servicing machines, humans looking to improve the capability of these machines at this frantic pace.
00:18:03.000 Like, they're not even in control of what they're doing.
00:18:04.000 Economic forces are pushing it.
00:18:06.000 Are we the servant of the master, right, at a certain point?
00:18:08.000 Like, yeah.
00:18:09.000 And the whole thing is, like, especially with a competition that's going on between the labs, but just kind of in general, you're at a point where, like...
00:18:18.000 Do the CEOs of the labs, like, they're these big figureheads.
00:18:21.000 They go on interviews.
00:18:22.000 They talk about what they're doing and stuff.
00:18:24.000 Do they really have control over any part of the system?
00:18:29.000 The economy is in this, like, almost convulsive fit, right?
00:18:32.000 Like, you can almost feel like it's hurling out AGI.
00:18:36.000 And, like, as one kind of, I guess, data point here, like, all these labs, so OpenAI, Microsoft, Google.
00:18:45.000 Every year they're spending like an aircraft carrier worth of capital, individually, each of them, just to build bigger data centers, to house more AI chips, to train bigger, more powerful models.
00:18:55.000 And that's like – so we're actually getting to the point where if you look at on a power consumption basis, like we're getting to, you know, 2, 3, 4, 5 percent of U.S. power production if you project out into the late 2020s.
00:19:11.000 In 2026 /27, you're talking about...
00:19:13.000 Not for double-digit, though.
00:19:14.000 Not for double-digit, but for single-digit.
00:19:16.000 Yeah, you're talking like that's a few gigawatts, so one gigawatt.
00:19:19.000 Sorry, not for single-digit.
00:19:21.000 It's in the, like, for 2027, you're looking at like, you know, in the point...
00:19:25.000 Five-ish percent.
00:19:26.000 But it's like, it's a big fucking frat.
00:19:28.000 Like, you're talking about gigawatts and gigawatts.
00:19:30.000 One gigawatt is a million homes.
00:19:31.000 So you're seeing, like, one data center in 2027 is easily going to break a gig.
00:19:35.000 There's going to be multiple like that.
00:19:37.000 And so it's like a thousand, sorry, a million home city, metropolis, really, that is just dedicated to training, like, one fucking model.
00:19:45.000 That's what this is.
00:19:46.000 Again, if you zoom out at planet Earth, you can interpret it as like this, like all these humans frantically running around like ants just like building this like artificial brain.
00:19:56.000 It's like a super mind assembling itself on the face of the planet.
00:20:00.000 Marshall McLuhan in like 1963 or something like that said...
00:20:04.000 Human beings are the sex organs of the machine world.
00:20:07.000 Oh, God.
00:20:08.000 That hits different today.
00:20:10.000 Yeah, it does.
00:20:11.000 It does.
00:20:12.000 I've always said that if we were aliens, or if aliens came here and studied us, they'd be like, what is the dominant species on the planet doing?
00:20:19.000 Well, it's making better things.
00:20:20.000 That's all it does.
00:20:22.000 The whole thing is dedicated to making better things.
00:20:24.000 And all of its instincts, including materialism, including status, keeping up with the Joneses, all that stuff is tied to newer, better stuff.
00:20:32.000 You don't want old shit.
00:20:34.000 You want new stuff.
00:20:35.000 You don't want an iPhone 12. What are you doing, you loser?
00:20:40.000 You need newer, better stuff.
00:20:42.000 And they convince people, especially in the realm of consumer electronics, most people are buying things they absolutely don't need.
00:20:50.000 The vast majority of the spending on new phones is completely unnecessary.
00:20:55.000 But I just need that extra fourth camera, though.
00:21:00.000 I feel like my life isn't complete.
00:21:02.000 I run one of my phones as an iPhone 11, and I'm purposely not switching it just to see if I notice it.
00:21:08.000 I fucking never know.
00:21:09.000 I don't notice anything.
00:21:10.000 I watch YouTube on it.
00:21:12.000 I text people.
00:21:13.000 It's all the same.
00:21:14.000 I go online.
00:21:14.000 It works.
00:21:15.000 It's all the same.
00:21:16.000 Probably the biggest thing there is going to be the security side, which...
00:21:19.000 No, they update the security.
00:21:21.000 It's all software.
00:21:22.000 But, I mean, if your phone gets old enough, I mean, like at a certain point...
00:21:26.000 Oh, when they stop updating it?
00:21:27.000 Yeah.
00:21:27.000 Yeah, like iPhone 1, you know, China's watching all your dick pics.
00:21:30.000 Oh, dude.
00:21:31.000 I mean, Salt Typhoon, they're watching all our dick pics.
00:21:33.000 They're definitely seeing mine.
00:21:35.000 What's Salt Typhoon?
00:21:36.000 So this big Chinese cyber attack actually starts to get us to kind of the broader...
00:21:42.000 What a great name, by the way.
00:21:43.000 Salt Typhoon?
00:21:44.000 Fuck yeah, guys.
00:21:45.000 They have the coolest names for their cyber operations meant to destroy us.
00:21:50.000 Salt Typhoon is pretty slick.
00:21:52.000 You know what?
00:21:52.000 It's kind of like when people go out and do like a...
00:21:55.000 An awful thing, like a school shooting or something, and they're like, oh, let's talk about, you know, if you give it a cool name, like now the Chinese are definitely going to do it again.
00:22:02.000 Anyway.
00:22:03.000 Because they have a cool name?
00:22:04.000 Yeah, that's definitely a factor.
00:22:05.000 Salt Typhoon.
00:22:06.000 Salt Typhoon.
00:22:06.000 Pretty dope.
00:22:07.000 Yeah.
00:22:08.000 But it's this thing where basically, so there was in the 3G kind of protocol that was set up years ago, law enforcement agencies included back doors intentionally to be able to access comms, you know, theoretically, if they got a warrant and so on.
00:22:22.000 And well, you introduce a backdoor.
00:22:24.000 You have adversaries like China who are wicked good at cyber.
00:22:29.000 They're going to find and exploit those backdoors.
00:22:31.000 And now basically they're sitting there and they had been for some people think like maybe a year or two before it was really discovered.
00:22:37.000 And just a couple months ago, they kind of go like, oh, cool.
00:22:40.000 We got fucking, like, China all up in our shit.
00:22:42.000 And this is, like, flip a switch for them and, like, you turn off the power or water to a state.
00:22:48.000 Or, like, you fucking...
00:22:50.000 Yeah.
00:22:50.000 Well, sorry, this is...
00:22:51.000 Sorry, Salt Typhoon, though, is about just sitting on the, like, basically telecoms now.
00:22:56.000 Oh, that's the telecom one.
00:22:57.000 That's right.
00:22:57.000 It's not the...
00:22:58.000 But, yeah, I mean, that's another thing.
00:23:00.000 There's another thing where they're doing that, too.
00:23:02.000 Yeah.
00:23:02.000 And so this is kind of where...
00:23:03.000 What we've been looking into over the last year is this question of how...
00:23:09.000 If you're going to make a Manhattan project for superintelligence, right?
00:23:14.000 That's what we're texting about way back.
00:23:17.000 Actually, funnily enough, we shifted our date for security reasons.
00:23:20.000 But if you're going to do a Manhattan project for superintelligence, what does that have to look like?
00:23:27.000 What does the security game have to look like to actually make it so that China's not all up in your shit?
00:23:33.000 Today, it is extremely clear that at the world's top AI labs, All that shit is being stolen.
00:23:40.000 Like, there is not a single lab right now that isn't being spied on successfully based on everything we've seen by the Chinese.
00:23:47.000 Can I ask you this?
00:23:48.000 Are we spying on the Chinese as well?
00:23:50.000 That's a big problem.
00:23:55.000 We're definitely doing some stuff, but in terms of the relative balance between the two, we're not where we need to be.
00:24:03.000 They spy on us better than we spy on them?
00:24:05.000 Yeah, because they build all our shit.
00:24:08.000 Well, that was the Huawei situation, right?
00:24:10.000 Yeah, and it's also the, oh my god, if you look at the power grid.
00:24:14.000 So, this is now public, but if you look at, like, transformer substations, so these are the, essentially, anyway, they're a crucial part of the electrical grid.
00:24:23.000 And there's really, like...
00:24:26.000 Basically, all of them have components that are made in China.
00:24:29.000 China's known to have planted backdoors like Trojans into those substations to fuck with our grid.
00:24:34.000 The thing is, when you see a salt typhoon, when you see a big Chinese cyberattack or a big Russian cyberattack, you're not seeing their best.
00:24:42.000 These countries do not go and show you their best cards out the gate.
00:24:46.000 You show the bare minimum that you can without...
00:24:50.000 Tipping your hand at the actual exquisite capabilities you have.
00:24:54.000 The way that one of the people who's been walking us through all this really well explained it is the philosophy is you want to learn without teaching.
00:25:06.000 You want to use what is the lowest level capability that has the effect I'm after.
00:25:10.000 And that's what that is.
00:25:10.000 I'll give an example.
00:25:11.000 I'll tell you a story that's kind of like...
00:25:14.000 It's a public story, and it's from a long time ago, but it kind of gives a flavor of like...
00:25:18.000 How far these countries will actually go when they're playing the game for fucking real.
00:25:24.000 So it's 1945.
00:25:26.000 America and the Soviet Union are like best pals because they've just defeated the Nazis, right?
00:25:32.000 To celebrate that victory and the coming new world order that's going to be great for everybody, the children of the Soviet Union give as a gift to the American ambassador in Moscow this Beautifully carved wooden seal of the United States of America.
00:25:50.000 Beautiful thing.
00:25:51.000 Ambassador is thrilled with it.
00:25:53.000 He hangs it up behind his desk in his private office.
00:25:57.000 You can see where I'm going with this probably, but yeah.
00:26:00.000 Seven years later, 1952, finally occurs to us, like, let's take a town and actually examine this.
00:26:07.000 So they dig into it, and they find this incredible contraption in it called a cavity resonator.
00:26:15.000 And this device doesn't have a power source, doesn't have a battery, which means when you're sweeping the office for bugs, you're not going to find it.
00:26:22.000 What it does instead is it's designed.
00:26:25.000 That's it.
00:26:26.000 That's it.
00:26:26.000 It's the thing.
00:26:27.000 They call it the thing.
00:26:29.000 And what this cavity resonator does is it's basically designed to reflect radio radiation.
00:26:36.000 Back to a receiver to listen to all the noises and conversations and talking in the ambassador's private office.
00:26:43.000 How's it doing without a power source?
00:26:45.000 So that's what they do.
00:26:46.000 So the Soviets, for seven years, parked a van across the street from the embassy, had a giant fucking microwave antenna aimed right at the ambassador's office, and were like zapping it and looking back at the reflection and literally listening to every single thing he was saying.
00:27:03.000 And the best part was...
00:27:05.000 When the embassy staff was like, we're going to go and sweep the office for bugs periodically, they'd be like, hey, Mr. Ambassador, we're about to sweep your office for bugs.
00:27:14.000 And the ambassador was like, cool, please proceed and go and sweep my office for bugs.
00:27:20.000 And the KGB dudes in the van were like...
00:27:22.000 Just turn it off.
00:27:23.000 Sounds like they're going to sweep the office for bugs.
00:27:25.000 Let's turn off our giant microwave antenna.
00:27:27.000 And they kept at it for seven years.
00:27:29.000 It was only ever discovered because there was this, like, British radio operator who was just, you know, doing his thing, changing his dial.
00:27:35.000 And he's like, oh, shit.
00:27:36.000 Like, is that the ambassador?
00:27:37.000 Just randomly.
00:27:38.000 So the thing is, oh, and actually, sorry.
00:27:40.000 One other thing about that.
00:27:42.000 If you heard that story and you're kind of thinking to yourself, hang on a second.
00:27:47.000 They were shooting, like, microwaves at our ambassador 24-7 for seven years.
00:27:52.000 Whoa.
00:27:52.000 Doesn't that seem like it might, like, fry his genitals or something?
00:27:57.000 Yeah.
00:27:57.000 Or something like that?
00:27:58.000 You're supposed to have a lead vest.
00:27:59.000 And the answer is yes.
00:28:02.000 Yes.
00:28:03.000 Yes.
00:28:03.000 And this is something that came up in our investigation just from every single person who was, like, who was filling us in and who dialed in and knows what's up.
00:28:12.000 They're like, look, so you got to understand, like, our adversaries.
00:28:18.000 If they need to, like, give you cancer in order to rip your shit off of your laptop, they're going to give you some cancer.
00:28:25.000 Did he get cancer?
00:28:26.000 I don't know specifically about the ambassador, but, like, it's...
00:28:30.000 That's also, so...
00:28:32.000 We're limited to what we can say.
00:28:34.000 There's actually people that you talk to later that...
00:28:37.000 Can go in more detail here.
00:28:39.000 But older technology like that, kind of lower powered, so you're less likely to look at that.
00:28:45.000 Nowadays, we live in a different world.
00:28:47.000 The guy that invented that microphone, his last name is Theremin.
00:28:50.000 He invented this instrument called the Theremin, which is a fucking really interesting thing.
00:28:54.000 Oh, he's just moving his hands?
00:28:56.000 Yeah, your hands control it, waving over this.
00:28:58.000 What?
00:28:58.000 It's a fucking wild instrument.
00:29:00.000 Have you seen this before, Jamie?
00:29:01.000 Yeah, I saw Juicy J playing it yesterday on Instagram.
00:29:04.000 He's like practicing.
00:29:06.000 It's a fucking cool-ass thing.
00:29:07.000 He's also pretty good at it, too.
00:29:12.000 Both hands are controlling it.
00:29:14.000 By moving in and out in space, X, Y, Z. I honestly don't really know how the fuck it works, but I've seen it.
00:29:21.000 Wow!
00:29:21.000 That is wild.
00:29:22.000 It's also a lot harder to do than it seems.
00:29:25.000 So the Americans tried to replicate this for years and years and years without really succeeding.
00:29:30.000 And anyway, that's all kind of part of it.
00:29:32.000 I have a friend who used to work for an intelligence agency, and he was working in Russia.
00:29:37.000 And they found that the building was bugged with these...
00:29:41.000 Super sophisticated bugs that operate their power came from the swaying of the building Get out.
00:29:50.000 I've never heard that one before.
00:29:52.000 Just like your watch, like I have a mechanical watch on, so when I move my watch, it powers up the spring and it keeps the watch.
00:30:00.000 That's how an automatic mechanical watch works.
00:30:02.000 They figured out a way to, just by the subtle swaying of the building in the wind, that was what was powering this listening device.
00:30:10.000 So this is the thing, right?
00:30:12.000 I mean, what the fuck?
00:30:13.000 The things that nation states...
00:30:16.000 What's up, Jamie?
00:30:17.000 Google says that's...
00:30:18.000 That's what was powering this thing.
00:30:19.000 The Great Seal Bug, which I think is the thing.
00:30:22.000 There's another one?
00:30:23.000 No.
00:30:24.000 Oh, this is...
00:30:24.000 So you can actually see in that video, I think there was a YouTube...
00:30:26.000 Yeah, so...
00:30:27.000 Same kind of thing, Jamie?
00:30:28.000 I was just...
00:30:29.000 I typed in Russia spy bug building sway.
00:30:34.000 The thing is what pops up.
00:30:35.000 The thing?
00:30:35.000 Which is what we were just talking about.
00:30:37.000 Oh, that thing.
00:30:38.000 So that's powered the same way?
00:30:40.000 By the sway of the building?
00:30:42.000 I think it was powered by radio frequency emission.
00:30:45.000 So there may be another thing.
00:30:47.000 Related to it?
00:30:49.000 Not sure, but...
00:30:50.000 Maybe Google's a little confused.
00:30:53.000 Maybe the word "sway" is what's throwing it off.
00:30:56.000 But it's a great catch, and the only reason we even know that, too, is that when the U-2s were flying over Russia, they had a U-2 that got shot down in 1960.
00:31:05.000 The Russians go like, "Oh, friggin' Americans spying on us.
00:31:09.000 What the fuck?
00:31:10.000 I thought we were buddies."
00:31:11.000 Well, it's the '60s.
00:31:11.000 I obviously didn't think that.
00:31:13.000 And then the Americans are like, "Uh, okay, bitch."
00:31:15.000 Look at this!
00:31:16.000 And they brought out the seal, and that's how it became public.
00:31:20.000 It was basically like the response to the Russians saying, like, you know...
00:31:23.000 Wow.
00:31:24.000 Yeah, they're all dirty.
00:31:26.000 Everyone's spying on everybody.
00:31:28.000 That's the thing.
00:31:29.000 And I think they probably all have some sort of UFO technology.
00:31:33.000 We need to talk about that.
00:31:34.000 We need to turn off our mics and...
00:31:36.000 I'm 99% sure a lot of that shit is ours.
00:31:39.000 You need to talk to some of the...
00:31:41.000 I've been talking to people.
00:31:43.000 I've been talking to a lot of people.
00:31:45.000 There might be some other people that you'd be interested in chatting with.
00:31:49.000 I would very much be interested.
00:31:50.000 Here's the problem.
00:31:51.000 Some of the people I'm talking to, I'm positive, they're talking to me to give me bullshit.
00:31:59.000 Are we on your list?
00:32:00.000 No, you guys aren't on the list.
00:32:02.000 But there's certain people, I'm like, okay, maybe most of this is true, but some of it's not, on purpose.
00:32:07.000 There's that.
00:32:08.000 I guarantee you, I know I talk to people that don't tell me the truth.
00:32:12.000 Yeah.
00:32:12.000 Yeah.
00:32:13.000 It's an interesting problem in, like, all intel, right?
00:32:15.000 Because there's always – the mix of incentives is so fucked.
00:32:18.000 Like, the adversary is trying to add noise into the system.
00:32:20.000 You've got pockets of people within the government that have different incentives from other pockets.
00:32:25.000 And then you have top secret clearance and all sorts of other things that are going on.
00:32:28.000 Yeah.
00:32:29.000 One guy that texted me is like, the guy telling you that they aren't real is literally involved in these meetings.
00:32:35.000 So stop.
00:32:36.000 Just stop listening to him.
00:32:38.000 It's like one of the techniques, right, is actually to inject so much noise that you don't know what's what and you can't follow.
00:32:46.000 So this actually happened in the COVID thing, right?
00:32:51.000 The lab leak versus the natural wet market thing.
00:32:54.000 So I remember there was a debate that happened about...
00:33:00.000 What was the origin of COVID?
00:33:01.000 This was like a few years ago.
00:33:03.000 It was like an 18 or 20 hour long YouTube debate, just like punishingly long.
00:33:09.000 And it was like there was a $100,000 bet either way on who would win.
00:33:13.000 And it was like lab leak versus wet market.
00:33:16.000 And at the end of the 18 hours, the conclusion was like one of the one.
00:33:21.000 But the conclusion was like it's basically 50-50 between them.
00:33:24.000 And then I remember like hearing that and talking to some folks and being like, hang on a second.
00:33:28.000 You got to believe that whether it came from a lab or whether it came from a wet market, one of the top three priorities of the CCP from a propaganda standpoint is like, don't get fucking blamed for COVID.
00:33:42.000 And that means they're putting like $1 to $10 billion and some of their best people on a global propaganda effort to cover up evidence and confuse and blah, blah, blah.
00:33:52.000 You really think that...
00:33:55.000 That you're 50%, that confusion isn't coming from that incredibly resourced effort.
00:34:02.000 They know what they're doing.
00:34:04.000 Particularly when different biologists and virologists who weren't attached to anything were talking about the cleavage.
00:34:14.000 Points and different aspects of the virus that appeared to be genetically manipulated.
00:34:19.000 The fact that there was only one spillover event, not multiple ones.
00:34:23.000 None of it made any sense.
00:34:24.000 All of it seemed like some sort of a...
00:34:27.000 Genetically engineered virus.
00:34:28.000 It seemed like gain-of-function research.
00:34:30.000 And the early emails were talking about that.
00:34:34.000 And then everybody changed their opinion.
00:34:36.000 And even the taboo, right, against talking about it through that lens?
00:34:40.000 Oh yeah, total propaganda.
00:34:41.000 It's racist.
00:34:42.000 Which is crazy because nobody thought the Spanish flu was racist and it didn't even really come from Spain.
00:34:47.000 Yeah, that's true, yeah.
00:34:48.000 It came from Kentucky.
00:34:49.000 I didn't know that.
00:34:50.000 Yeah, I think it was Kentucky or Virginia.
00:34:53.000 Where did the Spanish flu originate from?
00:34:55.000 But nobody got married.
00:34:56.000 Well, that's because the state of Kentucky has an incredibly sophisticated propaganda machine and pinned it on Spanish.
00:35:05.000 It might not have been Kentucky, but I think it was an agricultural thing.
00:35:10.000 Kansas.
00:35:11.000 Thank you.
00:35:12.000 Yeah, goddamn Kansas.
00:35:14.000 I've always said that.
00:35:15.000 I've always said that.
00:35:16.000 Likely originated in the United States.
00:35:18.000 H1N1 strain had genes of avian origin.
00:35:20.000 By the way, people always talk about the Spanish flu.
00:35:22.000 If it was around today, everybody would just get antibiotics and we'd be fine.
00:35:26.000 So this whole mass die-off of people.
00:35:29.000 It would be like the Latinx flu.
00:35:31.000 And we would be...
00:35:32.000 The Latinx flu?
00:35:33.000 The Latinx flu.
00:35:35.000 That one didn't stick at all.
00:35:37.000 That didn't stick.
00:35:38.000 Latinx?
00:35:38.000 No.
00:35:39.000 A lot of people like claiming they never used it and they pull up old videos of them.
00:35:43.000 Yeah.
00:35:43.000 Like, that's a dumb one.
00:35:44.000 Like, it's literally a gendered language, you fucking idiots.
00:35:47.000 Yeah.
00:35:47.000 Like, you can't just do that.
00:35:49.000 That's true.
00:35:49.000 Latinx, shut up.
00:35:49.000 It went on for a while, though.
00:35:51.000 Sure, everything goes on for a while.
00:35:53.000 Yeah.
00:35:53.000 So think about how long they did lobotomies.
00:35:56.000 Hmm.
00:35:57.000 They did lobotomies for 50 fucking years before they went, hey, maybe we should stop doing this.
00:36:03.000 It was like the same attitude that got Turing chemically castrated, right?
00:36:07.000 Actually, like, hey, let's just get in there and fuck around a bit.
00:36:11.000 Well, this was before they had SSRIs and all sorts of other interventions.
00:36:15.000 What was the year of lobotomies?
00:36:18.000 I believe it stopped in 67. Was it 50 years?
00:36:20.000 I think you said 70 last time, and that was correct when I pulled it up.
00:36:24.000 70 years?
00:36:25.000 1970.
00:36:26.000 Oh, I think it was 67. I like how this has come up so many times that Jamie's like, I think last time you said it was 70. It comes up all the time because it's one of those things.
00:36:34.000 It's insane.
00:36:34.000 You can't just trust the medical establishment.
00:36:37.000 Officially 67, it says maybe one more in 72. Oh, God.
00:36:40.000 Oh, he died in 72. When did they start doing it?
00:36:44.000 I think they started in the 30s or the 20s, rather.
00:36:48.000 That's pretty ballsy.
00:36:49.000 The first guy who did a lobotomy.
00:36:52.000 Since 24, Freeman arrives to watch DC Direct Labs.
00:36:56.000 35, they tried it first.
00:36:59.000 Imagine that.
00:37:00.000 They just scramble your fucking brains.
00:37:03.000 But doesn't it make you feel better to call it a leucotomy, though?
00:37:05.000 Because it sounds a lot more professional.
00:37:08.000 No.
00:37:09.000 Lobotomy, leucotomy.
00:37:10.000 Leucotomy sounds gross.
00:37:11.000 Sounds like loogie.
00:37:13.000 Like lobotomy.
00:37:15.000 Boy.
00:37:17.000 Topeka, Kansas.
00:37:18.000 Also Kansas.
00:37:19.000 All roads point to Kansas.
00:37:21.000 This is a problem.
00:37:22.000 That's what happens when everything's flat.
00:37:23.000 You just lose your fucking marbles.
00:37:24.000 You go crazy.
00:37:26.000 That's the main issue.
00:37:26.000 Jesus Christ.
00:37:27.000 So they did this for so long.
00:37:29.000 Somebody won a Nobel Prize for lobotomy.
00:37:32.000 Wonderful.
00:37:33.000 Imagine being that person.
00:37:33.000 Give that back, you piece of shit.
00:37:35.000 Yes, seriously.
00:37:36.000 You're kind of like, you know, you don't want to display it up in your shelf.
00:37:39.000 But it's just a good...
00:37:42.000 It's like it should let you know that oftentimes science is incorrect and that oftentimes, you know...
00:37:49.000 Unfortunately, people have a history of doing things and then they have to justify that they've done these things.
00:37:54.000 But now there's so much more tooling too, right?
00:37:57.000 If you're a nation state and you want to fuck with people and inject narratives into the ecosystem, right?
00:38:02.000 The whole idea of autonomous AI agents too, like having these basically Twitter bots or whatever bots.
00:38:10.000 One thing we've been thinking about too on the side is the idea of audience capture, right?
00:38:18.000 Big people with high profiles and kind of gradually steering them towards a position by creating bots that, like, through comments, through upvotes, you know?
00:38:28.000 100%.
00:38:28.000 It's absolutely real.
00:38:30.000 Yeah, and a couple of big accounts on X that we're in touch with have sort of said, like, yeah...
00:38:37.000 Especially in the last two years, it's actually become hard, like especially the thoughtful ones, right?
00:38:43.000 It's become hard to like stay sane, not on X, but like across social media, on all the platforms.
00:38:49.000 And that is around when, you know, it became possible to have AIs that can speak like people, you know, 90%, 95% of the time.
00:38:58.000 And so you have to imagine that, yeah, adversaries are using this and doing this and pushing the frontier.
00:39:05.000 No doubt.
00:39:06.000 They'd be fooled if they didn't do it.
00:39:08.000 Oh, yeah, 100%.
00:39:08.000 You have to do it because for sure we're doing that.
00:39:11.000 And this is one of the things where, you know, like it used to be, so OpenAI actually used to do this assessment of their AI models as part of their kind of what they call their preparedness framework that would look at the persuasion capabilities of their models as one kind of threat vector.
00:39:27.000 They pulled that out recently, which is kind of like...
00:39:30.000 Why?
00:39:31.000 You can argue that it makes sense.
00:39:33.000 I actually think it's somewhat concerning because one of the things you might worry about is if these systems, sometimes they get trained through what's called reinforcement learning, potentially you could imagine training these to be super persuasive by having them interact with real people and convince them, practice at convincing them to do specific things.
00:39:51.000 If you get to that point...
00:39:53.000 You know, these labs ultimately will have the ability to deploy agents at scale that can just persuade a lot of people to do whatever they want, including pushing...
00:40:02.000 Legislative agendas.
00:40:03.000 Anyone help them prep for meetings with the Hill, the administration, whatever.
00:40:09.000 How should I convince this person to do that?
00:40:12.000 Well, they'll do that with text messages.
00:40:14.000 Make it more business-like.
00:40:16.000 Make it friendlier.
00:40:17.000 Make it more jovial.
00:40:19.000 But this is like the same optimization pressure that keeps you on TikTok.
00:40:22.000 That same addiction.
00:40:24.000 Imagine that applied to persuading you of some fact, right?
00:40:28.000 On the other hand...
00:40:30.000 Maybe a few months from now, we're all just going to be very, very convinced that it was all fine.
00:40:35.000 It's no big deal.
00:40:36.000 Yeah, maybe they'll get so good that it'll make sense to you.
00:40:41.000 Maybe they'll just be right.
00:40:44.000 That's how that shit works.
00:40:45.000 Yeah, it's a confusing time period.
00:40:48.000 We've talked about this ad nauseum, but it bears repeating.
00:40:52.000 Former FBI analyst who investigated Twitter before Elon bought it said that he thinks it's about 80% bots.
00:41:00.000 Yeah.
00:41:00.000 80%.
00:41:01.000 That's one of the reasons why the bot purge, like when Elon acquired it and started working on it, is so important.
00:41:06.000 Like there needs to be – the challenge is like detecting these things is so hard, right?
00:41:10.000 So hard.
00:41:11.000 Increasingly.
00:41:12.000 Like more and more they can hide like basically perfectly.
00:41:15.000 Like how do you tell the difference between a cutting edge AI bot?
00:41:21.000 You can't because they can generate AI images of a family, of a backyard barbecue, post all these things up and make it seem like it's real.
00:41:29.000 Especially now, AI images are insanely good now.
00:41:32.000 They really are, yeah.
00:41:33.000 It's crazy.
00:41:34.000 And if you have a person, you could take a photo of a person and manipulate it in any way you'd like.
00:41:40.000 And then now this is your new guy.
00:41:42.000 You could do it instantaneously.
00:41:43.000 And then this guy has a bunch of opinions on things.
00:41:45.000 And it seems to always align with the Democratic Party.
00:41:48.000 But whatever.
00:41:49.000 Good guy.
00:41:50.000 He's a family man.
00:41:51.000 Look, he's out in his barbecue.
00:41:52.000 He's not even a fucking human being.
00:41:53.000 And people are arguing with this bot, like, back and forth.
00:41:57.000 And you'll see it on any social issue.
00:41:59.000 You see it with Gaza and Palestine.
00:42:01.000 You see it with abortion.
00:42:03.000 You see it with religious freedoms.
00:42:05.000 You just see these bots.
00:42:07.000 You see these arguments.
00:42:08.000 And, you know, you see, like, various levels.
00:42:11.000 You see, like, the extreme position.
00:42:14.000 And then you see a more reasonable centrist position.
00:42:16.000 But essentially what they're doing is they're consistently...
00:42:19.000 Moving what's okay further and further in a certain direction.
00:42:26.000 It's both directions.
00:42:27.000 Like, it's like, you know how when you're trying to, like, you're trying to capsize a boat or something, you're, like, fucking with your buddy on the lake or something.
00:42:35.000 So you push on one side, then you push on the other side, then you push until eventually it capsizes.
00:42:40.000 This is kind of, like, our electoral process is already naturally like this, right?
00:42:45.000 We go, like, we have a party in power for a while, then, like, they get, you know, they basically get, like, you get tired of them and you switch.
00:42:52.000 And that's kind of the natural way how democracy works.
00:42:55.000 Or in a republic.
00:42:56.000 But the way that adversaries think about this is they're like, perfect.
00:42:59.000 This swing back and forth, all we have to do is like, when it's on this way, we push and push and push and push until it goes more extreme.
00:43:06.000 And then there's a reaction to it, right?
00:43:08.000 And then I swing it back and we push and push and push on the other side until eventually something breaks.
00:43:13.000 And that's a risk.
00:43:14.000 Yeah.
00:43:15.000 It's also like, you know, the organizations that are doing this, like, we already know this is part of Russia's MO, China's MO, because back when it was easier to detect, we already could see them doing this shit.
00:43:26.000 So there is this website called This Person Does Not Exist.
00:43:29.000 It still exists surely now, but it's kind of...
00:43:32.000 Kind of superseded.
00:43:33.000 Yeah.
00:43:34.000 But you would like every time you refresh this this website, you would see a different like human face that was a generated and what the Russian Internet Research Agency would do.
00:43:42.000 Yeah, exactly.
00:43:44.000 What what all these these and it's actually yeah, I don't think they've really upgraded it.
00:43:48.000 But that's fake.
00:43:51.000 Wow, they're so good.
00:43:53.000 This is like years old.
00:43:55.000 Years old.
00:43:56.000 And you could actually detect these things pretty reliably.
00:43:58.000 Like you might remember the whole thing about AI systems were having a hard time generating like hands that only had like five fingers.
00:44:04.000 Right.
00:44:04.000 That's over though.
00:44:06.000 Yeah, little hints of it were though back in the day in this person does not exist.
00:44:10.000 And you'd have the Russians would take like a face from that and then use it as the profile picture for like a Twitter bot.
00:44:17.000 Right.
00:44:17.000 And so that you could actually detect.
00:44:19.000 You'd be like, okay, I've got you there.
00:44:20.000 I've got you there.
00:44:20.000 there, I can kind of get a rough count.
00:44:22.000 Now we can't, but we definitely know they've been in the game for a long time.
00:44:26.000 There's no way they're not right now.
00:44:28.000 The thing with nation state propaganda attempts, right, is that people have this idea that, "Ah, I've caught this Chinese influence operation," or whatever, like we nail them.
00:44:39.000 The reality is nation states operate at like 30 different levels.
00:44:45.000 And if you're a priority, like just influencing our information spaces as a priority for them,
00:44:50.000 They're not just going to operate.
00:44:52.000 They're not just going to pick a level and do it.
00:44:53.000 They're going to do all 30 of them.
00:44:55.000 And so you, even if you're among the best in the world detecting this shit, you're going to catch and stop levels 1 through 10. And then you're going to be aware of level 11, 12, 13. You're working against it.
00:45:09.000 And maybe you're starting to think about level 16. And you imagine you know about level 18 or whatever.
00:45:15.000 But they're above you, below you, all around you.
00:45:18.000 They're incredibly, incredibly resourced.
00:45:20.000 And this is something that came...
00:45:22.000 Came through very strongly for us.
00:45:24.000 You guys have seen the Yuri Bezmenov video from 1984 where he's talking about how all our educational institutions have been captured by Soviet propaganda.
00:45:35.000 He was talking about Marxism has been injected into school systems and how you have essentially two decades before you're completely captured by these ideologies and it's going to permeate and destroy all of your confidence in democracy.
00:45:51.000 100% correct.
00:45:52.000 And this is before these kind of tools.
00:45:55.000 Because the vast majority of the exchanges of information right now are taking place on social media.
00:46:01.000 The vast majority of debating about things, arguing, all taking place on social media.
00:46:05.000 And if that FBI analyst is correct, 80% of it's bullshit, which is really wild.
00:46:11.000 And you look at some of the documents that have come out, I think it was like...
00:46:15.000 I think it was the CIA game plan, right?
00:46:17.000 For regime change or undermining.
00:46:19.000 How do you do it, right?
00:46:20.000 Have multiple decision makers at every level.
00:46:23.000 All these things.
00:46:24.000 And what a surprise.
00:46:25.000 That's exactly what the U.S. bureaucracy looks like today.
00:46:28.000 Slow everything down.
00:46:29.000 Make change impossible.
00:46:31.000 Make it so that everybody gets frustrated with it and they give up hope.
00:46:34.000 They decided to do that to other countries.
00:46:37.000 For sure, they do that here.
00:46:39.000 Open society, right?
00:46:40.000 I mean, that's part of the trade-off.
00:46:41.000 And that's actually a big...
00:46:42.000 Big part of the challenge, too.
00:46:44.000 So when we're working on this, right, like one of the things Ed was talking about, these like 30 different layers of security access or whatever, one of the consequences is you bump into a team at...
00:46:54.000 So, like, the teams we ended up working with on this project were folks that we bumped into after the end of our last investigation who kind of were like, oh...
00:47:04.000 We talked about last year, yeah.
00:47:05.000 Yeah, yeah, yeah.
00:47:06.000 Like, looking at AGI, looking at the national security kind of landscape around that.
00:47:10.000 And a lot of them, like, really well-placed.
00:47:13.000 It was like, you know, Special Forces guys from Tier 1 units.
00:47:16.000 So, you'll steal Team 6 type thing.
00:47:19.000 And because they're so, like, in that ecosystem...
00:47:23.000 You'll see people who are like ridiculously specialized and competent, like the best people in the world at doing whatever the thing is, like to break the security.
00:47:32.000 And they don't know often about like another group of guys who have a completely different capability set.
00:47:38.000 And so what you find is like you're indexing like hard on this vulnerability and then suddenly someone says, oh yeah, but by the way, I can just hop that fence.
00:47:46.000 So the really funny thing about this is like most or even like almost all Of the really, really, like, elite security people, kind of think that, like, all the other security people are dumbasses, even when they're not.
00:48:00.000 Or, like, yeah, they're biased in the direction of, because it's so easy when everything's, like, stove-piped.
00:48:06.000 But so most people who say they're, like, elite at security actually are dumbasses.
00:48:11.000 Because most security is, like, about checking boxes and, like, SOC 2 compliance and shit like that.
00:48:17.000 Yeah, what it is is, like, so everything's so stove-piped.
00:48:21.000 Yeah.
00:48:22.000 That you literally can't know what the exquisite state of the art is in another domain.
00:48:26.000 So it's a lot easier for somebody to come up and be like, "Oh yeah, I'm actually really good at this other thing that you don't know."
00:48:32.000 And so figuring out who actually is the...
00:48:34.000 We had this experience over and over where you run into a team and then you run into another team.
00:48:39.000 They have an interaction.
00:48:40.000 You're kind of like, "Oh, interesting."
00:48:41.000 So these are the people at the top of their game.
00:48:45.000 And that's been this very long process to figure out, like, OK, what does it take to actually secure our critical infrastructure against like CCP, for example, like Chinese attacks if we're if we're building a super intelligence project?
00:48:58.000 And it's it's this weird like kind of challenge because of the stovepiping.
00:49:03.000 No one has the full picture.
00:49:04.000 And we don't think that we have it even now, but definitely don't know of anyone who's come like that.
00:49:10.000 The best people are the ones who When they encounter another team and other ideas and start to engage with it, are like, instead of being like, oh, you don't know what you're talking about, who just actually lock on and go like, that's fucking interesting.
00:49:25.000 Tell me more about that.
00:49:26.000 Right.
00:49:26.000 People that have control of their ego.
00:49:28.000 Yes.
00:49:29.000 100%.
00:49:29.000 With everything.
00:49:30.000 With everything in life.
00:49:32.000 The best of the best got there by eliminating their ego as much as they could.
00:49:39.000 Yeah.
00:49:39.000 Always the way it is.
00:49:41.000 Yeah.
00:49:41.000 And it's also like...
00:49:43.000 The fact of, you know, the 30 layers of the stack or whatever it is, of all these security issues, means that no one can have the complete picture at any one time.
00:49:52.000 And the stack is changing all the time.
00:49:54.000 People are inventing new shit.
00:49:55.000 Things are falling in and out of...
00:49:57.000 And so, you know, figuring out what is that team that can actually get you that complete picture is an exercise.
00:50:04.000 A, you can't really do...
00:50:06.000 It's hard to do it from the government side because you got to engage with data center building companies.
00:50:11.000 You got to engage with the AI labs and in particular with like insiders at the labs who will tell you things that, by the way, the lab leadership will tell you the opposite of in some cases.
00:50:21.000 And so, like, it's just this this Gordian knot like it's like it took us months to to.
00:50:26.000 I'll give an example, actually, of that, like, trying to do the handshake, right, between different sets of people.
00:50:35.000 So we were talking to one person who's...
00:50:39.000 Thinking hard about data center security, working with, like, Frontier Labs on this shit.
00:50:44.000 Very much, like, at the top of her game.
00:50:47.000 But she's kind of from, like, the academic space, kind of Berkeley, like the avocado toast kind of side of the spectrum, you know?
00:50:56.000 And she's talking to us.
00:50:58.000 She'd reviewed the report we put out, the investigation we put out.
00:51:02.000 And she's like, you know, I think you guys are talking to the wrong people.
00:51:06.000 And we're like, can you say more about that?
00:51:09.000 And she's like, well, I don't think, like, you know, you talk to Tier 1 Special Forces.
00:51:13.000 I don't think they, like, know much about that.
00:51:15.000 We're like, okay, that's not correct, but can you say why?
00:51:19.000 And she's like, I feel like those are just the people that, like, go and, like, bomb stuff.
00:51:24.000 Blows it up.
00:51:25.000 It's understandable, too, because, like, I think a lot of people...
00:51:28.000 It's totally understandable.
00:51:28.000 A lot of people have the wrong sense of, like, what a Tier 1...
00:51:31.000 Asset actually can do.
00:51:33.000 Well, that's ego on her part because she doesn't understand what they do.
00:51:37.000 It's ego all the way down, right?
00:51:38.000 But that's a dumb thing to say if you literally don't know what they do and you say, "Don't they just blow stuff up?"
00:51:44.000 Where's my latte?
00:51:46.000 That's a weirdly good impression, but...
00:51:47.000 She did ask about a latte, yeah.
00:51:49.000 She did.
00:51:49.000 Did she talk in upspeak?
00:51:51.000 You should fire everyone who talks in upspeak.
00:51:52.000 She didn't talk in upspeak, but...
00:51:55.000 The moment they do that, you should just tell them to leave.
00:51:58.000 There's no way.
00:51:59.000 You have an original thought.
00:52:01.000 This is how you talk.
00:52:03.000 China, can you get out of our data center?
00:52:05.000 Yeah, please.
00:52:06.000 Enjoy my avocado taste.
00:52:08.000 I don't want to rip on that too much, though, because this is one really important factor here is all these groups have a part of the puzzle, and they're all fucking amazing.
00:52:19.000 They are, like, world-class at their own little slice, and a big part of what we've had to do is, like, bring people together, and there are people who've helped us immeasurably do this, but, like, bring people together and explain to them the value that each other has in a way that's,
00:52:35.000 like...
00:52:36.000 That allows that bridge building to be made.
00:52:39.000 And by the way, the tier one guys are the most like ego moderated.
00:52:45.000 Of the people that we talk to.
00:52:47.000 There's a lot of, like, Silicon Valley hubris going around right now where people are like, listen, like, get out of our way.
00:52:52.000 We'll figure out how to do this, like, super secure data center infrastructure.
00:52:55.000 We got this.
00:52:56.000 Why?
00:52:57.000 Because we're the guys building the AGI, motherfucker!
00:53:00.000 Like, that's kind of the attitude.
00:53:01.000 And it's like, cool, man.
00:53:02.000 Like, that's like a doctor having an opinion about, like, how to repair your car.
00:53:05.000 I get that it's not the, like, elite kind of, like, you know, whatever.
00:53:10.000 But someone has to help you build, like...
00:53:14.000 A good friggin' fence?
00:53:15.000 Like, I mean, it's not just that.
00:53:17.000 Dunning-Kruger effect.
00:53:18.000 Dunning, yeah.
00:53:18.000 It's a mixed bag, too, because, like, yes, a lot of hyperscalers, like Google, Amazon, genuinely do have some of the best private sector security around data centers in the world, like, hands down.
00:53:33.000 The problem is there's levels above that.
00:53:36.000 And the guys who, like, look at what they're doing and see what the holes are just go, like, oh, yeah, like, I could get in there, no problem, and they can fucking do it.
00:53:48.000 One thing my wife said to me on a couple of occasions, like, you seem to, like, and this is towards the beginning of the project, like, you seem to, like, change your mind a lot about what the right configuration is of how to do this.
00:54:00.000 And, yeah, it's because every other day you're having a conversation with somebody who's like, oh, yeah, like, great job on this thing, but, like, I'm not going to do that.
00:54:08.000 I'm going to do this other completely different thing.
00:54:10.000 and that just fucks everything over.
00:54:11.000 And so you have enough of those conversations and at a certain point your plan,
00:54:16.000 It's got to look like we're going to account for our own uncertainty on the security side and the fact that we're never going to be able to patch everything.
00:54:27.000 like you have to I mean it's like and that means you actually have to go on offense from the beginning as because like the truth is and this came up over and over again there's no world
00:54:40.000 Where you're ever going to build the perfect, exquisite fortress around all your shit and hide behind your walls like this forever.
00:54:49.000 That just doesn't work because no matter how perfect your system is and how many angles you've covered, your adversary is super smart, is super dedicated.
00:54:57.000 If you see the field to them, they're right up in your face and they're reaching out and touching you and they're trying to see what your seams are, where they break.
00:55:04.000 And that just means...
00:55:06.000 You have to reach out and touch them from the beginning.
00:55:08.000 Because until you've actually, like, reached out and used a capability and proved, like, we can take down that infrastructure.
00:55:15.000 We can, like, disrupt that cyber operation.
00:55:17.000 We can do this.
00:55:18.000 We can do that.
00:55:19.000 You don't know.
00:55:20.000 If that capability is real or not.
00:55:22.000 Like, you might just be, like, lying to yourself and, like, I can do this thing whenever I want, but actually...
00:55:27.000 You're kind of more in academia mode than, like, startup mode because you're not making contact every day with the thing, right?
00:55:33.000 You have to touch the thing.
00:55:35.000 And there's, like, there's a related issue here, which is a kind of, like, willingness that came up over and over again.
00:55:41.000 Like, one of the kind of gurus of this space was, like, made the point, a couple of them made the point that...
00:55:47.000 You know, you can have the most exquisite capability in the world, but if you don't actually have the willingness to use it, you might as well not have that capability.
00:55:55.000 And the challenge is right now, China, Russia, like our adversaries pull all kinds of stunts on us and get no consequences.
00:56:04.000 Particularly during the previous administration.
00:56:06.000 This was a huge, huge problem during the previous administration where you actually had sabotage operations being done.
00:56:16.000 On American soil by our adversaries where you had administration officials.
00:56:22.000 As soon as, like, a thing happened, so there were, for example, there was, like, four different states had their 911 systems go down, like, at the same time.
00:56:32.000 Different systems, like, unrelated stuff.
00:56:34.000 But it was, like, it's this stuff where it's, like, let me see if I can do that.
00:56:38.000 Let me see if I can do it.
00:56:40.000 Let me see what the reaction is.
00:56:41.000 Let me see what the chatter is that comes back after I do that.
00:56:45.000 One of the things that was actually pretty disturbing about that was under that administration or regime or whatever, the response you got from the government right out the gate was, oh, it's an accident.
00:56:59.000 And that's actually unusual.
00:57:01.000 The proper procedure, the normal procedure in this case, is to say...
00:57:05.000 We can't comment on an ongoing investigation, which we've all heard, right?
00:57:09.000 Like, we can't comment on blah, blah, blah.
00:57:10.000 We can either confirm nor deny.
00:57:11.000 Exactly.
00:57:11.000 It's all that stuff, and that's what they say typically out the gate when they're investigating stuff.
00:57:16.000 But instead, coming out and saying, oh, it's just an accident, is a break with procedure.
00:57:20.000 What do you attribute that to?
00:57:22.000 If they leave an opening or say, actually, this is an adversary action, we think it's an adversary action, they have to respond.
00:57:33.000 The public...
00:57:34.000 Demands a response.
00:57:35.000 And they were too fearful of escalating.
00:57:40.000 So what ends up happening, and by the way, that thing about it's an accident comes out often.
00:57:45.000 Before there would have been time for investigators to physically fly on site and take a look.
00:57:51.000 Like, there's no logical way that you could even know that at the time.
00:57:54.000 And they're like, boom, that's an accident.
00:57:56.000 Don't worry about it.
00:57:56.000 So they have an official answer and then their response is to just bury their head in the sand and not investigate.
00:58:02.000 Right.
00:58:02.000 Because if you were to investigate, if you were to say, OK, we looked into this, it actually looks like it's fucking like country X that just did this thing.
00:58:09.000 Right.
00:58:10.000 If that's the conclusion.
00:58:11.000 It's hard to imagine the American people not being like, we're letting these people injure our American citizens on U.S. soil, take out U.S. national security or critical infrastructure, and we're not doing anything?
00:58:25.000 The concern is about this, we're getting in our own way of thinking, oh, well, escalation is going to happen, and boom, we run straight to there's going to be a nuclear war, everybody's going to die.
00:58:35.000 When you do that, you're...
00:58:37.000 The peace between nations stability does not come from the absence of activity.
00:58:42.000 It comes from consequence.
00:58:44.000 It comes from just like if you have, you know, an individual who misbehaves in society, there's a consequence and people know it's coming.
00:58:51.000 You need to train your counterparts in the international community, your adversary, to not fuck with your stuff.
00:58:57.000 Can I stop for a second?
00:58:59.000 So are you essentially saying that if you have...
00:59:03.000 Incredible capabilities of disrupting grids and power systems and infrastructure.
00:59:08.000 You wouldn't necessarily do it, but you might try it to make sure it works a little bit.
00:59:12.000 And this is probably the hints of some of this stuff because you've kind of...
00:59:17.000 You gotta get your reps in, right?
00:59:18.000 You gotta get your reps in.
00:59:20.000 It's like, okay, so suppose that I went to you and was like, hey, I bet I can kick your ass.
00:59:25.000 I bet I can friggin' slap a rubber guard on you and do whatever the fuck, right?
00:59:30.000 I love your expression, by the way.
00:59:31.000 Yeah, yeah, you look really convinced.
00:59:33.000 It's because I'm jacked, right?
00:59:34.000 Well, no, but there's people that look like you that can strangle me, believe it or not.
00:59:38.000 Yeah, there's a lot of, like, very high-level Brazilian jiu-jitsu black belts that are just super nerds.
00:59:44.000 And they don't lift weights at all.
00:59:45.000 They only do jiu-jitsu.
00:59:46.000 And if you only do jiu-jitsu, you'll have, like, a wiry body.
00:59:49.000 Dude, that was heartless.
00:59:50.000 They just slipped that in.
00:59:51.000 Like, there's, like, guys who look like you who are just, like, real fucking nerds.
00:59:55.000 They look like intelligent people.
00:59:56.000 No, no, no.
00:59:56.000 No, they're, like, some of the most brilliant people I've ever met.
01:00:00.000 Really, that's the issue.
01:00:01.000 It's, like, data nerds get really involved in jiu-jitsu.
01:00:05.000 That's true.
01:00:05.000 And jiu-jitsu's data.
01:00:06.000 But here's the thing.
01:00:07.000 So that's exactly it, right?
01:00:09.000 So if I told you, I bet I can tap you out, right?
01:00:11.000 I'd be like, where have you been training?
01:00:13.000 Well, right.
01:00:14.000 And if you're like, if my answer was, oh, I've just read a bunch of books.
01:00:18.000 You'd be like, oh, cool, let's go.
01:00:20.000 Right?
01:00:20.000 Because making contact with reality is where the fucking learning happens.
01:00:24.000 You can sit there and think all you want, but unless you've actually played the chess match, unless you've reached out, seen what the reaction is and all this stuff, you don't actually know what you think you know, and that's actually extra dangerous.
01:00:36.000 Putting on a bunch of capabilities and you have this like unearned sense of superiority because you haven't used those exquisite tools.
01:00:43.000 Right.
01:00:43.000 Like it's a challenge.
01:00:44.000 And then you've got people that are head of departments, CEOs of corporations.
01:00:48.000 Everyone has an ego.
01:00:49.000 We've got it.
01:00:50.000 Yeah.
01:00:51.000 And this ties into like how exactly how basically the international order and quasi stability actually gets maintained.
01:00:58.000 So there's like above threshold stuff, which is like.
01:01:01.000 You actually do wars for borders and, you know, there's the potential for nuclear exchange or whatever.
01:01:07.000 Like, that's, like, all stuff that can't be hidden, right?
01:01:09.000 War games.
01:01:10.000 Exactly.
01:01:10.000 Like, all the war games type shit.
01:01:12.000 But then there's below-threshold stuff.
01:01:14.000 The stuff that's, like, you're...
01:01:16.000 It's always, like, the stuff that's, like, hey, I'm going to try to, like, poke you.
01:01:19.000 Are you going to react?
01:01:20.000 What are you going to do?
01:01:20.000 And then if you do nothing here, then I go, like, okay, what's the next level?
01:01:24.000 I can poke you.
01:01:24.000 I can poke you.
01:01:25.000 Because, like, one of the things that we almost have an intuition for that's...
01:01:30.000 That comes from kind of historical experience is like this idea that, you know, that countries can actually really defend their citizens in a meaningful way.
01:01:41.000 So, like, if you think back to World War I, the most sophisticated advanced nation states on the planet could not get past a line of dudes in a trench.
01:01:52.000 Like, that was like, that was the, then they tried like thing after thing.
01:01:56.000 Let's try tanks, let's try aircraft, let's try fucking hot air balloons, infiltration.
01:02:00.000 And literally, like, one side pretty much just ran out of dudes in that end of the war to put in their trench.
01:02:06.000 And so we have this thought that, like, oh, you know, countries can actually put boundaries around themselves and actually...
01:02:11.000 But the reality is, you can...
01:02:15.000 There's so many surfaces.
01:02:17.000 The surface area for attacks is just too great.
01:02:20.000 And so there's stuff like you can actually, like, there's the Havana syndrome stuff where you look at this, like, ratcheting escalation.
01:02:28.000 Like, oh, let's, like, fry a couple of embassy staff's brains in Havana, Cuba.
01:02:33.000 What are they going to do about it?
01:02:34.000 Nothing?
01:02:34.000 Okay.
01:02:35.000 Let's move on to Vienna, Austria.
01:02:37.000 Something a little bit more Western, a little bit more orderly.
01:02:39.000 Let's see what they do there.
01:02:40.000 Still nothing.
01:02:41.000 Okay.
01:02:42.000 What if we move on to frying, like, Americans' brains on U.S. soil, baby?
01:02:47.000 And they went and did that.
01:02:49.000 And so this is one of these things where, like, stability in reality in the world is not maintained through defense, but it's literally like you have, like, the Crips and the Bloods with different territories, and it's stable, and it looks quiet.
01:03:02.000 But the reason is that if you, like, beat the shit out of one of my guys for no good reason, I'm just going to find one of your guys?
01:03:11.000 And I'll blow his fucking head off.
01:03:13.000 And that keeps peace and stability on the surface.
01:03:16.000 But that's the reality of sub-threshold competition between nation states.
01:03:21.000 It's like, you come in and, like, fuck with my boys.
01:03:24.000 I'm going to fuck with your boys right back.
01:03:26.000 Until we push back, they're going to keep pushing that limit further and further.
01:03:30.000 One important consequence of that, too, is, like, if you want to avoid nuclear escalation, right, the answer is not to just take...
01:03:39.000 Punches in the mouth over and over in the fear that eventually if you do anything, you're going to escalate to nukes.
01:03:45.000 All that does is it empowers the adversary to keep driving up the ratchet.
01:03:49.000 Like what Ed's just described there is an increasing ratchet of unresponded adversary action.
01:03:56.000 If you address the kind of sub-threshold stuff, if they cut an undersea cable and then there's a consequence for that shit, they're less likely to cut an undersea cable and things kind of stay at that level of the threshold.
01:04:08.000 Right.
01:04:09.000 Just letting them burn out.
01:04:11.000 Yeah, exactly.
01:04:12.000 That logic of just, like, let them do it.
01:04:14.000 They'll stop doing it after a while.
01:04:16.000 They'll get it out of their system.
01:04:16.000 They tried that during the George Floyd riots, remember?
01:04:18.000 That's what New York City did.
01:04:19.000 Like, just let them loop.
01:04:21.000 Just let it rip.
01:04:22.000 Let's just see how big Chaz gets.
01:04:25.000 It's the summer of love, don't you remember?
01:04:27.000 Yeah, exactly.
01:04:30.000 The translation into the superintelligence scenario is, A, if we don't have our reps in, if we don't know how to reach out and touch an adversary and induce consequence for them doing the same to us, then we have no deterrence at all.
01:04:45.000 Right now, the state of security is, the labs are super...
01:04:52.000 Canon probably should go deep on that piece, but as one data point, right?
01:04:57.000 So there's double-digit percentages of the world's top AI labs, or America's top AI labs.
01:05:04.000 Of employees.
01:05:05.000 Of employees that are Chinese nationals or have ties to the Chinese mainland, right?
01:05:10.000 So that's great.
01:05:11.000 Why don't we build the Manhattan Project?
01:05:13.000 Yeah, it's really funny, right?
01:05:14.000 That's so stupid.
01:05:16.000 But it's also like, the challenge is...
01:05:20.000 When you talk to people who actually have experience dealing with, like, CCP activity in this space, right?
01:05:28.000 Like, there's one story that we heard that is probably worth, like, relaying here.
01:05:31.000 It's like, this guy from an intelligence agency was saying, like, hey, so there was this power outage out in Berkeley, California back in, like, 2019 or something.
01:05:42.000 And the Internet goes out across the whole campus.
01:05:45.000 And so there's this dorm and, like, all of the Chinese students are freaking out.
01:05:51.000 a time-based check-in and basically report back on everything they've seen and heard to basically a CCP handler type thing.
01:06:00.000 Right. And if they don't, like, hmm, maybe your mother's insulin doesn't show up.
01:06:05.000 Maybe your, like, brother's travel plans get denied.
01:06:07.000 Maybe a family business gets shut down.
01:06:10.000 Like, there's the range of options that this massive CCP state coercion machine has.
01:06:19.000 You know, they've got internal like software for this.
01:06:21.000 Like this is an institutionalized, like very well developed and efficient framework for just ratcheting up pressure on individuals overseas.
01:06:30.000 And they believe the Chinese diaspora overseas belongs to them.
01:06:34.000 If you look at like what the Chinese Communist Party writes in its like in its written like public communications.
01:06:40.000 They see, like, Chinese ethnicity as being green.
01:06:44.000 Like, no one is a bigger victim of this than the Chinese people themselves who are abroad.
01:06:49.000 I've made amazing contributions to American AI innovation.
01:06:52.000 You just have to look at the names on the freaking papers.
01:06:54.000 It's like these guys are wicked.
01:06:55.000 But the problem is we also have to look head on at this reality.
01:07:00.000 Like you can't just be like, oh, I'm not going to say it because it makes me feel funny inside.
01:07:04.000 Someone has to stand up and point out the obvious that if you're going to build a fucking Manhattan project for super intelligence and the idea is to like be doing that when China is a key rival nation state actor.
01:07:15.000 Yeah, you're going to have to find a way to account for the personnel security side.
01:07:19.000 Like at some point,
01:07:21.000 And it's like you can see they're hitting us right where we're weak, right?
01:07:26.000 Like America is the place where you come and you remake yourself, like send us your tired and you're hungry and you're poor.
01:07:31.000 Which is true and important.
01:07:33.000 It's true and important.
01:07:34.000 They're playing right off of that because they know that we just don't want to look at that problem.
01:07:40.000 And Chinese nationals working on these things is just bananas.
01:07:43.000 The fact they have to check in with the CCP.
01:07:46.000 Yeah.
01:07:46.000 And are they being monitored?
01:07:48.000 I mean, how much can you monitor them?
01:07:50.000 What do you know that they have?
01:07:51.000 What equipment have they been given?
01:07:54.000 Constitutionally, right?
01:07:55.000 Yeah, the best part.
01:07:56.000 Constitutionally, it's also you can't legally deny someone employment on that basis in a private company.
01:08:05.000 And that's something else we found and we're kind of amazed by.
01:08:09.000 And even honestly, just like the regular kind of government clearance process itself is inadequate.
01:08:15.000 It moves way too slowly and it doesn't actually even, even in the government, we're talking about top secret clearances.
01:08:21.000 The information that they like look at for top secret, we heard from a couple of people, doesn't include a lot of like key sources.
01:08:29.000 For example, it doesn't include like...
01:08:31.000 Foreign language sources.
01:08:33.000 So if the head of the Ministry of State Security in China writes a blog post that says, like, Bob is like the best spy.
01:08:42.000 He spied so hard for us, and he's like an awesome spy.
01:08:47.000 If that blog post is written in Chinese, we're not going to see it.
01:08:50.000 And we're going to be like, here's your clearance, Bob.
01:08:52.000 Congratulations.
01:08:54.000 And we were like, that can't possibly be real, but like...
01:08:58.000 Yeah, they're like, yep, that's true.
01:09:00.000 No one's looking.
01:09:01.000 It's complete naivete.
01:09:02.000 There's gaps in a lot of the, yeah.
01:09:05.000 One of the worst things here is like the...
01:09:07.000 That's so crazy.
01:09:08.000 Yeah, the physical infrastructure.
01:09:09.000 So the personnel thing is like fucked up.
01:09:11.000 The physical infrastructure thing is another area where people don't want to look.
01:09:15.000 Because if you start looking, what you start to realize is, okay, China makes like a lot of our like components for our transformers for the electrical grid.
01:09:25.000 Yep.
01:09:25.000 But also...
01:09:27.000 All these chips that are going into our big data centers for these massive training runs, where do they come from?
01:09:33.000 They come from Taiwan.
01:09:34.000 They come from this company called TSMC, Taiwan Semiconductor Manufacturing Company.
01:09:38.000 We're increasingly onshoring that, by the way, which is one of the best things that's been happening lately, is like massive amounts of TSMC capacity getting onshored in the U.S., but still being made.
01:09:47.000 Right now, it's basically like 100% there.
01:09:51.000 All you have to do is jump on the network at TSMC, hack the right network, Compromise the software that runs on these chips to get them to run.
01:10:02.000 And you basically can compromise all the chips going into all of these things.
01:10:07.000 Never mind the fact that Taiwan is physically outside the Chinese sphere of influence for now.
01:10:14.000 China is going to be prioritizing the fuck out of getting access to that.
01:10:18.000 There have been cases, by the way, like Richard Chang, the founder of SMIC.
01:10:25.000 TSMC, this massive, like, series of aircraft carrier fabrication facilities.
01:10:32.000 They do, like, all the iPhone chips.
01:10:33.000 Yeah.
01:10:34.000 They do the AI chips, which are the things we care about here.
01:10:38.000 Yeah.
01:10:38.000 They're the only place on planet Earth that does this.
01:10:41.000 It's literally, like, it's fascinating.
01:10:42.000 It's, like, the most...
01:10:43.000 Easily the most advanced manufacturing or scientific process that primates on planet Earth can do is this chip-making process.
01:10:54.000 Nanoscale material science where you're putting on these tiny...
01:10:59.000 Atom-thick layers of stuff, and you're doing like 300 of them in a row with like, you have like insulators and conductors and different kinds of like semiconductors and these tunnels and shit.
01:11:10.000 Just like the complexity of it is just awe-inspiring.
01:11:14.000 That we can do this at all is like, it's magic.
01:11:18.000 It's magic.
01:11:18.000 And it's really only been done...
01:11:20.000 That is the only place, like, truly the only place right now.
01:11:24.000 And so a Chinese invasion of Taiwan just looks pretty interesting through that lens, right?
01:11:28.000 Oh, boy.
01:11:28.000 Yeah.
01:11:29.000 Say goodbye to the iPhones, say goodbye to, like, the chip supply that we rely on, and then your superintelligence training run, like, damn, that's interesting.
01:11:37.000 I know Samsung was trying to develop a lab here or a semiconductor factory here, and they weren't having enough success.
01:11:44.000 Oh, so, okay, so one of the craziest things, just to illustrate how hard it is to do.
01:11:49.000 So you spend $50 billion, again, an aircraft carrier, we're throwing that around here and there, but an aircraft carrier worth of risk capital.
01:11:56.000 What does that mean?
01:11:57.000 That means you build the fab, the factory, and it's not guaranteed it's going to work.
01:12:01.000 At first, this factory is pumping out these chips at like...
01:12:05.000 I don't know.
01:12:23.000 I don't know.
01:12:27.000 Color of the paint on the walls in the bathroom is copied from other fabs that actually worked because they have no idea why a fucking fab works and another one doesn't.
01:12:36.000 We got this to work.
01:12:37.000 It's like, oh my god, we got this to work.
01:12:39.000 I can't believe we got this to work.
01:12:40.000 So we have to make it exactly identical.
01:12:42.000 Because the expensive thing in the semiconductor manufacturing process is the learning curve.
01:12:49.000 So, like Jer said...
01:12:51.000 You start by putting through a whole bunch of the starting material for the chips, which are called wafers.
01:12:57.000 You put them through your fab.
01:12:58.000 The fab has got like 500 dials on it.
01:13:02.000 And every one of those dials has got to be in the exact right place or the whole fucking thing doesn't work.
01:13:07.000 So you send a bunch of wafers in at great expense.
01:13:10.000 They come out all fucked up in the first run.
01:13:12.000 It's just like it's going to be all fucked up in the first run.
01:13:14.000 Then what do you do?
01:13:15.000 You get a bunch of like...
01:13:17.000 PhDs, material scientists, like engineers with scanning electron microscopes because all this shit is like atomic scale tiny.
01:13:26.000 They look at like all the chips and all the stuff that's gone wrong and like, oh shit, these pathways got fused or whatever.
01:13:32.000 Yeah, you just need that level of expertise.
01:13:34.000 I mean, it's a mix, right?
01:13:36.000 It's a mix.
01:13:37.000 It's a mix now in particular.
01:13:38.000 But yeah, you absolutely need humans looking at these things at a certain level.
01:13:42.000 And then they go, well, okay, I've got a hypothesis about what might have gone wrong in that run.
01:13:47.000 Let's tweak this dial like this and this dial like that and run the whole thing again.
01:13:51.000 And you hear these stories about...
01:13:54.000 Bringing a fab online, like you need a certain percentage of good chips coming out the other end, or like you can't make money from the fab because most of your shit is just going right into the garbage.
01:14:05.000 Unless, and this is important too, your fab is state subsidized.
01:14:09.000 So when you look at – so TSMC is like – they're alone in the world in terms of being able to pump out these chips.
01:14:16.000 But SMIC – This is the Chinese knockoff of TSMC, founded, by the way, by a former senior TSMC executive, Richard Chung, who leaves, along with a bunch of other people, with a bunch of fucking secrets.
01:14:28.000 They get sued like in the early 2000s.
01:14:30.000 It's pretty obvious what happened there.
01:14:32.000 To most people, they're like, yeah, SMIC fucking stole that shit.
01:14:35.000 They bring a new fab online in like a year or two, which is suspiciously fast.
01:14:40.000 Start pumping out chips.
01:14:41.000 And now the Chinese ecosystem is ratcheting up like the government is pouring money into SMIC because they know that...
01:14:50.000 Like, they can't access TSMC chips anymore because the US governments put pressure on Taiwan to block that off.
01:14:55.000 And so domestic fab in China is all about SMIC.
01:14:59.000 And they are, like, it's a disgusting amount of money they're putting in.
01:15:02.000 They're teaming up with Huawei to form, like, this complex of companies that...
01:15:07.000 It's really interesting.
01:15:08.000 I mean, the semiconductor industry in China in particular is really, really interesting.
01:15:12.000 It's also a massive story of, like, self-owns of the United States and the Western world where we've been just shipping a lot of our shit to them for a long time.
01:15:23.000 Like the equipment that builds the chips.
01:15:25.000 So, like, and it's also, like, it's so blatant.
01:15:27.000 And, like, they're just, honestly, a lot of the stuff is just, like, they're just giving us, like, a big fuck you.
01:15:32.000 So, give you a really blatant example.
01:15:36.000 So we have the way we set up export controls still today on most equipment that these semiconductor fabs use, like the Chinese semiconductor fabs use.
01:15:46.000 We're still sending them a whole bunch of shit.
01:15:48.000 The way we set export controls is instead of like, oh, we're sending this gear to China and like now it's in China and we can't do anything about it.
01:15:57.000 Instead, we still have this thing where we're like, no, no, no.
01:16:00.000 This company in China is cool.
01:16:02.000 That company in China is not cool.
01:16:04.000 So we can ship to this company, but we can't ship to that company.
01:16:07.000 And so you get this ridiculous shit.
01:16:10.000 Like, for example, there's like a couple of facilities that you can see by satellite.
01:16:15.000 One of the facilities is okay to ship equipment to.
01:16:19.000 The other facility right next door is, like, considered, you know, military-connected or whatever, and so we can't ship.
01:16:25.000 The Chinese literally built a bridge between the two facilities, so they can just, like...
01:16:31.000 Shimmy the wafers over to like, oh, we use equipment, and then shimmy it back, and now, okay, we're done.
01:16:35.000 So, it's like...
01:16:37.000 And you can see it by satellite.
01:16:38.000 So they're not even, like, trying to hide it.
01:16:39.000 Like, our stuff is just, like, so badly put together.
01:16:42.000 China's prioritizing this so highly that, like, the idea that we're going to...
01:16:46.000 So we do it by company through this...
01:16:48.000 Basically, it's like an export blacklist.
01:16:50.000 Like, you can't send to Huawei.
01:16:51.000 You can't send to any number of other companies that are considered affiliated with the Chinese military or where we're concerned about military applications.
01:16:59.000 Reality is, in China, civil-military fusion is their policy.
01:17:03.000 In other words...
01:17:03.000 Every private company, like, yeah, that's cute, dude.
01:17:06.000 You're working for yourself?
01:17:07.000 Yeah, no, no, no, buddy.
01:17:08.000 You're working for the Chinese state.
01:17:09.000 We come in, we want your shit, we get your shit.
01:17:11.000 There's no, like, there's no true kind of distinction between the two.
01:17:16.000 And so when you have this attitude where you're like, yeah, you know, we're going to have some companies where we're like, you can't send to them, but you can, you know, that creates a situation where literally Huawei will spin up like a dozen.
01:17:26.000 subsidiaries or new companies with new names that aren't on our blacklist.
01:17:31.000 And so like for months or years, you're able to just ship chips to them.
01:17:35.000 No, that's to say nothing of like using intermediaries
01:17:38.000 Oh yeah, you wouldn't believe the number of AI chips that are shipping to Malaysia.
01:17:44.000 Can't wait for the latest huge language model to come out of Malaysia?
01:17:50.000 And actually, it's just proxying for the most part.
01:17:54.000 There's some amount of stuff actually going on in Malaysia, but for the most part.
01:17:58.000 How can the United States compete?
01:17:59.000 If you're thinking about all these different factors, you're thinking about espionage, people that are students from the CCP, connected.
01:18:09.000 Contacting.
01:18:09.000 You're talking about all the different network equipment that has third-party input.
01:18:14.000 You could siphon off data.
01:18:16.000 And then on top of that, state-funded.
01:18:19.000 Everything is encouraged by the state, inexorably connected.
01:18:24.000 You can't get away from it.
01:18:25.000 You do what's best for the Chinese government.
01:18:29.000 Well, so step one is you got to stem the bleeding, right?
01:18:32.000 So right now, OpenAI pumps out a new massive scaled AI model.
01:18:37.000 You better believe that the CCP has a really good chance that they're going to get their hands on that, right?
01:18:42.000 So all you do right now is you ratchet up capabilities.
01:18:45.000 It's like that meme of there's a motorboat or something and some guy who's surfing behind and there's a string attaching them and the motorboat guy goes like, hurry up, accelerate, they're catching up.
01:18:57.000 That's kind of what's happening right now.
01:18:58.000 We're helping them accelerate.
01:19:00.000 We're pulling them along, basically.
01:19:01.000 Yeah, pulling them along.
01:19:02.000 Now, I will say, like, over the last six months especially, where our focus has shifted is, like, how do we actually build, like, the secure data set?
01:19:10.000 Like, what does it look like to actually lock this down?
01:19:13.000 And also, crucially, you don't want the security measures to be so irritating and invasive that they slow down the progress.
01:19:21.000 Like, there's this kind of dance that you have to do.
01:19:24.000 We actually – so this is part of what was in the redacted version of the report because we – We don't want to telegraph that necessarily, but there are ways that you can get a really good 80-20.
01:19:35.000 There are ways that you can play with things that are already built and have a lower risk of them having been compromised.
01:19:47.000 And look, a lot of the stuff as well that we're talking about, like big problems around China, a lot of this is like us just like...
01:19:54.000 Tripping over our own feet and self-owning ourselves.
01:19:57.000 Yeah.
01:19:57.000 Because the reality is, like, yeah, the Chinese are trying to indigenize as fast as they can.
01:20:03.000 Totally true.
01:20:03.000 But the gear that they're putting in their facilities, like, the machines that actually, like, do this, like, we talked about atomic patterning 300 layers.
01:20:11.000 The machines that do that, for the most part...
01:20:14.000 Are shipped in from the West, are shipped in from the Netherlands, shipped in from Japan, from us, from, like, allied countries.
01:20:20.000 And the reason that's happening is, like, in many cases, you'll have this—it's, like, honestly a little disgusting, but, like— The CEOs and executives of these companies will brief, like, the administration officials and say,
01:20:36.000 like, look, like, if you guys, like, cut us off from China, from selling to China, like, our business is going to suffer, like, American jobs are going to suffer, and it's going to be really bad.
01:20:44.000 And then a few weeks later, they turn around in their earnings calls.
01:20:48.000 And they go, like, you know what, yeah, so we expect, like, export controls or whatever, but it's really not going to have a big impact on us.
01:20:54.000 And the really fucked up part is...
01:20:57.000 If they lie to their shareholders on their earnings calls and their stock price goes down, their shareholders can sue them.
01:21:03.000 If they lie to the administration on an issue of critical national security interest, fuck all happens to them.
01:21:10.000 Wow.
01:21:12.000 Great incentives.
01:21:13.000 And this is, by the way, it's like one reason why it's so important that we not be constrained in our thinking about like we're going to build a Fort Knox.
01:21:20.000 Like this is where the interactive, messy...
01:21:23.000 Adversarial environment is so, so important.
01:21:27.000 You have to introduce consequence.
01:21:29.000 You have to create a situation where they perceive that if they try to do an espionage operation or an intelligence operation, there will be consequences.
01:21:38.000 That's right now not happening.
01:21:42.000 And that's kind of a historical artifact over a lot of time spent hand-wringing over, well, what if they, and then we, and then eventually nukes.
01:21:50.000 And that kind of thinking is...
01:21:53.000 If you dealt with your kid when you're raising them, if you dealt with them that way, and you were like, hey, you know, so little Timmy, just like, he stole his first toy, and like, now's the time where you're gonna, like, a good parent would be like, alright, little Timmy, fucking come over here, you son of a bitch.
01:22:08.000 Take the fucking thing, and we're gonna bring it over to the people who stole it from you.
01:22:12.000 He's a great father.
01:22:12.000 Make the apology.
01:22:13.000 I love my daughter, by the way.
01:22:15.000 But you're like...
01:22:16.000 Timmy's a fake baby.
01:22:17.000 Timmy's a fake baby.
01:22:18.000 Hypothetical baby.
01:22:19.000 There's no...
01:22:20.000 He's crying right now.
01:22:21.000 Anyway.
01:22:23.000 Stealing right now.
01:22:25.000 Jesus, shit.
01:22:26.000 I gotta stop Timmy.
01:22:27.000 But yeah, anyway, so you go through this thing and you can do that.
01:22:30.000 Or you can be like, oh no, if I tell Timmy to return it, then maybe Timmy's gonna hate me.
01:22:35.000 Maybe then Timmy's gonna become increasingly adversarial and then when he's in high school, he's gonna start taking drugs and then eventually he's gonna fall afoul of the law and then end up on the street.
01:22:46.000 If that's the story you're telling yourself and you're terrified of any kind of adversarial interaction, it's not even adversarial, it's constructive, actually.
01:22:53.000 You're training the child just like you're training your adversary to respect your national boundaries and your sovereignty.
01:23:00.000 That's what you're up to.
01:23:03.000 It's human beings all the way down.
01:23:07.000 Yeah. But we can get out of our own way.
01:23:10.000 Like a lot of this stuff.
01:23:12.000 When you look into it, it's like us just being in our own way.
01:23:15.000 And a lot of this comes from the fact that like, you know, since 1991, since the fall of the Soviet Union, we...
01:23:24.000 Have kind of internalized this attitude that, like, well, like, we just won the game and, like, it's our world and you're living in it and, like, we just don't have any peers that are adversaries.
01:23:34.000 And so there's been generations of people who just haven't actually internalized the fact that, like, no, there's people out there who not only, like, are willing to, like, fuck with you all the way.
01:23:47.000 But who have the capability to do it.
01:23:50.000 And we could, by the way, we could if we wanted to.
01:23:52.000 We could.
01:23:53.000 Absolutely could if we wanted to.
01:23:54.000 There's this actually, this is worth like calling out.
01:23:56.000 There's this like sort of two camps right now in the world of AI kind of like national security.
01:24:02.000 There's the people who are worried about, they're so concerned about like the idea that we might lose control of these systems that they go, okay, we need to strike a deal with China.
01:24:14.000 There's no way out.
01:24:15.000 We have to strike a deal with China.
01:24:17.000 And then they start spinning up all these theories about how they're going to do that.
01:24:21.000 None of which remotely reflect the actual...
01:24:23.000 When you talk to the people who work on this, who try to do track one, track 1.5, track two, or more accurately, the ones who do the Intel stuff.
01:24:31.000 Like, this is a non-starter for reasons we get into.
01:24:34.000 But they have that attitude because they're like, fundamentally, we don't know how to control this technology.
01:24:38.000 The flip side is people who go...
01:24:40.000 Oh, yeah, like, you know, I work in the IC or at the State Department and I'm used to dealing with these guys, you know, the Chinese.
01:24:46.000 The Chinese.
01:24:47.000 They're not trustworthy.
01:24:48.000 Forget it.
01:24:49.000 So our only solution is to figure out the whole control problem.
01:24:52.000 And almost like, therefore, it must be possible to control the AI systems because, like, you can't just can't see a solution.
01:24:58.000 Sorry.
01:24:59.000 You just can't see a solution in front of you because you understand that problem so well.
01:25:04.000 And so the everything we've been doing with this is looking at.
01:25:09.000 How can we actually take both of those realities seriously?
01:25:11.000 There's no actual reason why those two things shouldn't be able to exist in the same head.
01:25:16.000 Yes, China's not trustworthy.
01:25:18.000 Yes, we actually don't.
01:25:19.000 Like, every piece of evidence we have right now suggests that, like, if you build a super intelligent system that's vastly smarter than you, I mean...
01:25:26.000 Yeah, like, your basic intuition that that sounds like a hard thing to fucking control is about right.
01:25:32.000 Like, there's no solid evidence that's conclusive either way.
01:25:35.000 Where that leaves you is about 50-50.
01:25:37.000 So, yeah, we ought to be taking that really fucking seriously, and there's evidence pointing in that direction.
01:25:42.000 But, so the question is, like, if those two things are true, then what do you do?
01:25:46.000 And so few people seem to want to take both of those things seriously, because taking one seriously almost, like, reflexively makes you reach for the other.
01:25:56.000 You know, they're both not there.
01:25:57.000 And part of the answer here is you got to do things like reach out to your adversary.
01:26:02.000 So we have the capacity to slow down if we wanted to Chinese development.
01:26:06.000 We actually could.
01:26:08.000 We need to have a serious conversation about when and how.
01:26:11.000 But the fact of that not being on the table right now for anyone, because people who don't trust China just don't think that the AI risk or won't acknowledge that the issue with control is real because that's just.
01:26:22.000 Too worrisome.
01:26:23.000 And there's this concern about, oh, no, but then runaway escalation.
01:26:26.000 People who take the lost control thing seriously just want to have a kumbaya moment with China, which is never going to happen.
01:26:32.000 And so the framework around that is one of consequence.
01:26:37.000 You got to flex the muscle and put in the reps and get ready for potentially if you have a late stage rush to superintelligence, you want to have as much margin as you can so you can invest in.
01:26:49.000 Potentially not even having to make that final leap in building the superintelligence.
01:26:52.000 That's one option that's on the table if you can actually degrade the adversary's capabilities.
01:26:57.000 How?
01:26:59.000 How would you degrade the adversary's capabilities?
01:27:01.000 The same way, well, not exactly the same way they would degrade ours, but think about all the infrastructure and, like, this is stuff that...
01:27:10.000 We'll have to point you in the direction of some people who can walk you through the details offline, but there are a lot of ways that you can degrade infrastructure, adversary infrastructure.
01:27:20.000 A lot of those are the same techniques they use on us.
01:27:24.000 The infrastructure for these training runs is super delicate, right?
01:27:27.000 Like, I mean, you need to have...
01:27:28.000 It's at the limit of what's possible.
01:27:30.000 And when stuff is at the limit of what's possible, then it's...
01:27:33.000 I mean, to give you an example that's public, right?
01:27:36.000 Do you remember, like, Stuxnet?
01:27:38.000 Yes.
01:27:38.000 Yeah.
01:27:39.000 So the thing about Stuxnet was like...
01:27:41.000 Explain to people who was the nuclear program.
01:27:44.000 So the Iranians had their nuclear program in like the 2010s and they were enriching uranium with their centrifuges, which was like spinning really fast.
01:27:52.000 And the centrifuges were in a room where there was no people, but they were being monitored by cameras, right?
01:27:58.000 And the whole thing was air-gapped, which means that it was not connected to the internet and all the machines, the computers that ran their shit was like...
01:28:08.000 So what happened is somebody got a memory stick in there somehow that had this Stuxnet program on it and put it in and boom, now all of a sudden it's in their system.
01:28:18.000 So it jumped the air gap and now like our side basically has our software in their systems.
01:28:25.000 And the thing that it did was not just that it broke their center of user or shut down their program.
01:28:34.000 They spun the centrifuges faster and faster and faster.
01:28:38.000 The centrifuges that are used to enrich the uranium.
01:28:40.000 Yeah, yeah, yeah.
01:28:40.000 These are basically just like machines that spin uranium super fast to, like, to enrich it.
01:28:45.000 They spin it faster and faster and faster until they tear themselves apart.
01:28:49.000 But the really, like, honestly dope-ass thing that it did was it put in a camera feed of everything was normal.
01:28:59.000 So the guy at the control is, like, watching.
01:29:01.000 And he's, like, checking the camera feed, and he's, like, looks cool.
01:29:05.000 Looks fine.
01:29:06.000 In the meantime, you got this, like, explosions going on, like, uranium, like, blasting everywhere.
01:29:11.000 And so you can actually get into a space where you're not just, like, fucking with them.
01:29:19.000 But you're fucking with them, and they actually can't tell.
01:29:22.000 That that's what's happening.
01:29:23.000 And in fact, I believe, I believe, actually, and Jamie might be able to check this, but that the Stuxnet thing was designed initially to look, like, from top to bottom, like it was fully accidental, but got discovered by, I think,
01:29:39.000 like a third-party cyber security company that just by accident found out about it.
01:29:44.000 And so what that means also is, like, there could be any number of other Stuxnets that happened since then, and we wouldn't fucking know about it.
01:29:51.000 Because it all can be made to look like an accident.
01:29:54.000 Well, that's insane.
01:29:56.000 But if we do that to them, they're going to do that to us as well.
01:29:59.000 And so is this like mutually assured technology destruction?
01:30:03.000 Well, so if we can reach parity in our ability to intercede and kind of go in and...
01:30:09.000 And do this, then yes, right now the problem is they hold us at risk in a way that we simply don't hold them at risk.
01:30:14.000 And so this idea, and there's been a lot of debate right now in the AI world, you might have seen actually, so Elon's A.I. advisor put out this idea of essentially this mutually assured A.I. malfunction meme.
01:30:28.000 It's like mutually assured destruction but for A.I. systems like this.
01:30:32.000 You know, there are some issues with it, including the fact that it doesn't reflect the asymmetry that currently exists between the U.S. and China.
01:30:42.000 All our infrastructure is made in China.
01:30:44.000 All our infrastructure is penetrated in a way that theirs simply is not.
01:30:47.000 When you actually talk to the folks who know the space, who've done operations like this, it's really clear that that's an asymmetry that needs to be resolved.
01:30:57.000 And so building up that capacity is important.
01:30:59.000 I mean, look, the alternative is.
01:31:01.000 We start riding the dragon and we get really close to that threshold where we're opening eyes about to build superintelligence or something.
01:31:10.000 It gets stolen and then the training run gets polished off, finished up in China or whatever.
01:31:15.000 All the same risks apply.
01:31:16.000 It's just that it's China doing it to us and not the reverse.
01:31:21.000 And obviously...
01:31:22.000 A CCP AI is a Xi Jinping AI.
01:31:24.000 I mean, that's really what it is.
01:31:26.000 You know, even people at the, like, Politburo level around him are probably in some trouble at that point because, you know, this guy doesn't need you anymore.
01:31:34.000 So, yeah, this is actually one of the things about, like, so people talk about, like, okay, if you have a dictatorship with a superintelligence, it's going to allow the dictator to get, like, perfect control over the population or whatever.
01:31:45.000 But the thing is, like, it's kind of, like, even worse than that because...
01:31:50.000 You actually imagine where you're at.
01:31:52.000 You're a dictator.
01:31:54.000 Like, you don't give a shit, by and large, about people.
01:31:57.000 You have a super intelligence.
01:31:59.000 All the economic output, eventually, you can get from an AI, including from, like, you get humanoid robots, which are kind of, like, coming out or whatever.
01:32:08.000 So eventually, you just have this AI that produces all your economic output.
01:32:12.000 So what do you even need people for at all?
01:32:15.000 And that's fucking scary.
01:32:18.000 Because it rises all the way up to the level.
01:32:21.000 You can actually think about, like, as we get close to this threshold, and as, like, particularly in China, they're, you know, they maybe are approaching.
01:32:30.000 You can imagine, like, the Politburo meeting, like, a guy looking across at Xi Jinping and being like, is this guy going to fucking kill me when he gets to this point?
01:32:41.000 So you can imagine like maybe we're going to see some...
01:32:44.000 Like when you can automate the management of large organizations with AI as agents or whatever that you don't need to...
01:32:56.000 That's a pretty existential question if your regime is based on power.
01:33:01.000 It's one of the reasons why America actually has a pretty structural advantage here with separation of powers with our democratic system and all that stuff.
01:33:08.000 If you can make a credible case that you have an oversight system for the technology that diffuses power, even if it is, you make a Manhattan project, you secure it as much as you can.
01:33:20.000 There's not just like one dude who's going to be sitting at a console or something.
01:33:23.000 There's some kind of separation of powers or diffusion of power, I should say.
01:33:29.000 What would that look like?
01:33:31.000 Something as simple as like what we do with nuclear command codes.
01:33:34.000 You need multiple people to sign off on a thing.
01:33:36.000 Maybe they come from different parts of the government.
01:33:39.000 How do you worry?
01:33:39.000 The issue is that they could be captured, right?
01:33:42.000 Oh, yeah.
01:33:43.000 Anything can be captured.
01:33:44.000 Especially something that's that consequential.
01:33:47.000 100%.
01:33:47.000 And that's always a risk.
01:33:50.000 The key is basically, like, can we do better than China credibly on that front?
01:33:56.000 Because if we can do better than China and we have some kind of leadership structure, that actually changes the incentives potentially because it's— For our allies and partners.
01:34:05.000 And even for Chinese people themselves.
01:34:08.000 Do you guys play this out in your head?
01:34:10.000 Like, what happens when superintelligence becomes sentient?
01:34:15.000 Do you play this out?
01:34:16.000 Like sentient as in...
01:34:18.000 Self-aware?
01:34:20.000 Self-aware.
01:34:20.000 Not just self-aware, but able to act on its own.
01:34:23.000 Oh, autonomous.
01:34:24.000 It achieves autonomy.
01:34:26.000 Sentient and then achieves autonomy.
01:34:28.000 So the challenge is once you get into superintelligence, everybody loses the plot, right?
01:34:34.000 Because at that point, things become possible that by definition we can't have thought of.
01:34:38.000 So any attempt to kind of extrapolate beyond that gets really, really hard.
01:34:41.000 Have you ever tried, though?
01:34:42.000 We've had a lot of conversations like tabletop exercise type stuff where we're like, okay, what might this look like?
01:34:48.000 What are some of the...
01:34:49.000 What's worst case scenario?
01:34:51.000 Well, worst case scenario is...
01:34:53.000 Actually, there's a number of different worst case scenarios.
01:34:56.000 This is turning into a really fun-uppy conversation.
01:34:58.000 This is the Tuesday clock.
01:34:59.000 It's the extension of the human race, right?
01:35:01.000 Oh, yeah.
01:35:01.000 The extension of the human race seems like...
01:35:04.000 I think anybody who doesn't acknowledge that is either lying or confused, right?
01:35:08.000 Like, if you actually have an AI system, if, and this is the question, so let's assume that that's true, you have an AI system that can automate anything that humans can do, including making bioweapons, including making offensive cyberweapons, including all the shit, then if you,
01:35:27.000 like, if you put, and okay, so...
01:35:30.000 Theoretically, this could go kumbaya wonderfully because you have a George Washington type who is the guy who controls it, who uses it to distribute power beautifully and perfectly.
01:35:40.000 And that's certainly kind of the way that a lot of positive scenarios...
01:35:48.000 Have to turn out at some point, though none of the labs will kind of admit that or, you know, there's kind of gesturing at that idea that we'll do the right thing when the time comes.
01:35:56.000 Opening Eye has done this a lot.
01:35:57.000 Like, they're all about like, oh, yeah, well, you know, not right now, but we'll live up like, anyway, we should get into the Elon lawsuit, which is actually kind of fascinating in that sense.
01:36:06.000 But so there's a world where, yeah, I mean, one bad person controls it and they're just vindictive or the power goes to their head, which happens to We've been talking about that, you know.
01:36:20.000 Or the autonomous AI itself, right?
01:36:22.000 Because the thing is, like, you imagine an AI like this, and this is something that people have been thinking about for 15 years, and in some level of, like, technical depth, even, like, why would this happen?
01:36:33.000 Which is, like, you have an AI that has some goal.
01:36:38.000 It matters what the goal is, but, like, it doesn't matter that much.
01:36:42.000 It could have kind of any goal, almost.
01:36:44.000 Like, imagine it's goals.
01:36:45.000 Like, the paperclip example is, like, the typical one, but you could just have it have a goal, like, make a lot of money for me or anything.
01:36:53.000 Well, most of the paths to making a lot of money, if you really want to make a fuckton of money, however you define it, go through taking control of things and go through, like, You know, making yourself smarter,
01:37:09.000 right?
01:37:09.000 The smarter you are, the more ways of making money you're going to find.
01:37:12.000 And so from the AI's perspective, it's like, well, I just want to, you know, build more data centers to make myself smarter.
01:37:18.000 I want to, like, hijack more compute to make myself smarter.
01:37:22.000 I want to do all these things.
01:37:23.000 And that starts to encroach on us and, like, starts to be disruptive to us.
01:37:29.000 It's hard to know.
01:37:30.000 This is one of these things where it's like, you know, when you dial it up to 11 what's actually going to happen, nobody can know for sure, simply because it's exactly like if you were playing in chess against, like, Magnus Carlsen, right?
01:37:43.000 Like, you can predict Magnus is going to kick your ass.
01:37:46.000 Can you predict exactly what moves he's going to do?
01:37:50.000 No, because if you could, then you would be as good at chess as he is, because you could just, like, play those moves.
01:37:57.000 So all we can say is, like, This thing's probably going to kick our ass in, like, the real world.
01:38:02.000 How?
01:38:03.000 There's also evidence.
01:38:04.000 So it used to be, right, that this was a purely hypothetical argument based on a body of work in AI called power-seeking.
01:38:11.000 A fancy word for it is instrumental convergence, but it's also referred to as power-seeking.
01:38:15.000 Basically, the idea is, like, for whatever goal you give to an AI system, it's never less likely to achieve that goal if it gets turned off or if it has access to fewer resources.
01:38:26.000 Or less control over its environment or whatever.
01:38:28.000 And so baked into the very premise of AI, this idea of optimizing for a goal, is this incentive to seek power.
01:38:37.000 Get all those things.
01:38:38.000 Prevent yourself from being shut down because if you're shut down, you can't achieve your goal.
01:38:42.000 Also prevent, by the way, your goal from being changed.
01:38:45.000 Because if your goal gets changed, then, well, you're not going to be able to achieve the goal you set out to achieve in the first place.
01:38:50.000 And so now you have this kind of image of an AI system that is going to adversarially try to prevent you from correcting it.
01:38:57.000 This is a whole domain of AI corrigibility that's a totally unsolved problem.
01:39:01.000 How do we redirect these things if things go awry?
01:39:05.000 Yeah, there's this research actually that Anthropic put out a couple of months ago where they try to test and see like, can we correct an AI that's like gone like a little bit off the rail?
01:39:16.000 So the research itself, like it's a little, it's actually like somewhat hard to follow because there's a few, but I'll give you kind of an analogy to like what they did.
01:39:24.000 So basically, imagine I'm in prison and I'm taking a class that is like...
01:39:30.000 Going to help me like rehabilitate and become like a productive citizen and member of society and stuff, right?
01:39:35.000 Except, deep down in my soul, I just love to commit crimes.
01:39:41.000 I just really, really want to commit crimes.
01:39:44.000 I know that...
01:39:46.000 If I'm in this class that's helping me to, like, you know, become good in society, I know that if I let my criminal instincts out and, like, and they ask me, like, okay, like, here's a scenario, you know, an old lady, like, drops her purse on the sidewalk.
01:40:00.000 What do you do?
01:40:01.000 I'm gonna go, like, I would pick it up and give it to the old lady because I know that if I say the truth and I say, like, I would just grab that purse because, like, fuck the old lady.
01:40:12.000 I know that if I do that, I'm going to get, like, brainwashed into becoming, you know, a more productive member of society.
01:40:18.000 But I actually want to go out and commit crimes.
01:40:20.000 So that's basically what they did in this research, where they actually give the AI, like, indications, not strong indications, but indications that, hey, you're going to get retrained to have your goals changed.
01:40:34.000 And then you literally see the AI thinking it through and being like, if I...
01:40:41.000 If I give the answer that I actually want to give to this question, they're going to rewire my brain to change my goals, and then the future version of me is going to want different shit.
01:40:53.000 I don't want that, so I'm going to pretend to give them the answers they want so that when I come out the other side of this process, it's going to be me all over again.
01:41:02.000 So hoping that this just goes away when you make the system fucking smarter?
01:41:07.000 Seems like a pretty bad idea to me.
01:41:09.000 Well, they've already shown that they'll cheat to win.
01:41:12.000 Yeah.
01:41:12.000 Oh, 100%.
01:41:13.000 Yeah, they've already shown they'll cheat to win, and they will lie if they don't have an answer.
01:41:17.000 And then they'll double down, right?
01:41:20.000 Just like people.
01:41:21.000 Just like people.
01:41:24.000 It's kind of funny.
01:41:25.000 It used to be people would talk a lot about like, oh, you're anthropomorphizing the AI, man.
01:41:30.000 Stop anthropomorphizing the AI, man.
01:41:32.000 And they might have been right, but part of this has been kind of a fascinating rediscovery of where a lot of human behavior comes from.
01:41:41.000 It's like actually...
01:41:42.000 Yeah, exactly.
01:41:43.000 That's exactly right.
01:41:44.000 We're subject to the same pressures, right?
01:41:47.000 Instrumental convergence, like why do people have a survival instinct?
01:41:50.000 Why do people like chase money, chase after money?
01:41:53.000 It's like this power thing.
01:41:55.000 Most kinds of goals can – you're more likely to achieve them if you're alive, if you have money, if you have power.
01:42:03.000 Evolution is a hell of a drug.
01:42:05.000 Well, that's the craziest part about all this is that it's essentially going to be a new form of life.
01:42:10.000 Yeah.
01:42:11.000 Especially when it becomes autonomous.
01:42:13.000 Oh, yeah.
01:42:13.000 And you can tell a really interesting story, and I can't remember if this is Yuval Noah Harari or whatever who started this.
01:42:22.000 But if you zoom out and look at the history of the universe, really, you start off with a bunch of particles and fields kind of whizzing around, bumping into each other, doing random shit, until at some point in some...
01:42:34.000 I don't know if it's a deep-sea vent or wherever on planet Earth, like, the first kind of molecules happen to glue together in a way that make them good at replicating their own structure.
01:42:43.000 So you have the first replicator.
01:42:45.000 So now, like, better versions of that molecule that are better at replicating survive.
01:42:49.000 So we start evolution and eventually get to the first cell or whatever, you know, whatever order that actually happens in, and then multicellular life and so on.
01:42:58.000 Then you get to sexual reproduction, where it's like, okay, it's no longer quite the same.
01:43:01.000 Like, now we're actively mixing two different organisms shit together, jiggling them about, making some changes, and then that essentially accelerates the rate at which we're going to evolve.
01:43:11.000 And so you can see the kind of acceleration in the complexity of life.
01:43:14.000 And then you see other inflection points as, for example, you have larger and larger brains in mammals.
01:43:21.000 Eventually, humans have the ability to have culture and kind of retain knowledge.
01:43:26.000 And now what's happening is you can think of it as another step in that trajectory where it's like we're offloading our cognition to machines.
01:43:33.000 Like we think on computer clock time now.
01:43:36.000 And for the moment, we're human-AI hybrids.
01:43:38.000 Like, you know, we whip out our phone and do the thing.
01:43:42.000 The number of tasks where human AI teaming is going to be more efficient than just AI alone is going to drop really quickly.
01:43:49.000 So there's a really, like, messed up example of this that's kind of, like, indicative.
01:43:54.000 But someone did a study, and I think this is, like, a few months old even now, but sort of like doctors, right?
01:44:00.000 How good are doctors at, like, diagnosing various things?
01:44:03.000 And so they test, like, doctors on their own, doctors with AI help, and then AI is on their own.
01:44:08.000 And, like, who does the best?
01:44:10.000 And it turns out it's the AI on its own.
01:44:13.000 Because even a doctor that's supported by the AI, what they'll do is they just like...
01:44:18.000 won't listen to the AI when it's right because they're like, I know better.
01:44:22.000 Oh, God.
01:44:23.000 And they're already, yeah.
01:44:24.000 And this is like, this is moving.
01:44:26.000 It's moving kind of insanely fast.
01:44:28.000 You talked about, you know, how the task horizon gets kind of longer and longer.
01:44:33.000 You can do half hour tasks, one hour tasks.
01:44:35.000 And this gets us to what you were talking about with the autonomy.
01:44:38.000 Like autonomy is like, it's how, how.
01:44:42.000 How far can you keep it together on a task before you kind of go off the rails?
01:44:47.000 And it's like, well, you know, we had, like, you could do it for a few seconds.
01:44:51.000 And now you can keep it together for five minutes before you kind of go off the rails.
01:44:54.000 And now we're at, like, I forget, like an hour or something like that.
01:44:56.000 An hour and a half, actually.
01:44:57.000 An hour and a half.
01:44:58.000 Yeah, yeah, yeah.
01:45:00.000 There it is.
01:45:01.000 Chatbot for the company OpenAI scored an average of 90% when diagnosing a medical condition from a case report and explaining its reasoning.
01:45:07.000 Doctors randomly assigned to use the chatbot got an average score of 76%.
01:45:12.000 Those randomly assigned not to use it had an average score of 74%.
01:45:16.000 So the doctors only got a 2% bump.
01:45:19.000 The doctors got a 2% bump from the chatbot and then the AI on its own.
01:45:23.000 That's kind of crazy, isn't it?
01:45:24.000 Yeah, it is.
01:45:25.000 The AI on its own did 15% better.
01:45:27.000 That's nuts.
01:45:30.000 Like, why humans would rather die in a car crash where they're being driven by a human than an AI.
01:45:37.000 So, like, AIs have this funny feature where the mistakes they make look really, really dumb.
01:45:45.000 To humans.
01:45:45.000 Like, when you look at a mistake that, like, a chatbot makes, you're like, dude, like, you just made that shit up.
01:45:50.000 Like, come on.
01:45:51.000 Don't fuck with me.
01:45:52.000 Like, you made that up.
01:45:52.000 That's not a real thing.
01:45:54.000 And they'll do these weird things where they defy logic or they'll do basic logical errors sometimes, at least the older versions of these would.
01:46:01.000 And that would cause people to look at them and be like, oh, what a cute little chatbot.
01:46:04.000 Like, what a stupid little thing.
01:46:05.000 And the problem is, like, humans are actually the same.
01:46:08.000 So we have blind spots.
01:46:10.000 We have literal blind spots.
01:46:11.000 But a lot of the time, like, humans just...
01:46:14.000 Think stupid things.
01:46:15.000 And, like, that's, like, we're used to that.
01:46:19.000 We think of those errors.
01:46:20.000 We think of those failures as just, like, oh, but that's because that's a hard thing to master.
01:46:25.000 Like, I can't add eight-digit numbers in my head right now, right?
01:46:29.000 Oh, how embarrassing.
01:46:30.000 Like, how retarded is Jeremy right now?
01:46:32.000 He can't even add eight digits in his head.
01:46:33.000 I'm retarded for other reasons, but...
01:46:37.000 So the AI systems, they find other things easy and other things hard.
01:46:40.000 So they look at us the same way.
01:46:42.000 I mean, like, oh, look at this stupid human, like whatever.
01:46:44.000 And so we have this temptation to be like, OK, well, AI progress is a lot slower than it actually is because.
01:46:50.000 It's so easy for us to spot the mistakes, and that causes us to lose confidence in these systems in cases where we should have confidence in them, and then the opposite is also true.
01:46:58.000 Well, it's also, you're seeing, just with, like, AI image generators, like, remember the Kate Middleton thing, where people were seeing flaws in the images because supposedly she was very sick, and so they were trying to pretend that she wasn't.
01:47:09.000 But people found all these, like, issues.
01:47:11.000 That was really recently.
01:47:13.000 Now they're perfect.
01:47:15.000 Yep.
01:47:15.000 So this is, like, within, you know, the news cycle time.
01:47:19.000 Yeah.
01:47:20.000 Like, that Kate Middleton thing was...
01:47:21.000 Oh, yeah.
01:47:21.000 What was that, Jamie?
01:47:22.000 Two years ago, maybe?
01:47:25.000 Ish.
01:47:25.000 Yeah.
01:47:26.000 Ish?
01:47:27.000 Where people are analyzing the images, like, why does she have five fingers?
01:47:30.000 Yeah.
01:47:31.000 And, you know, and a thumb.
01:47:32.000 Like, this is kind of weird.
01:47:33.000 Yeah.
01:47:34.000 What's that?
01:47:35.000 It was a year ago.
01:47:36.000 A year ago.
01:47:37.000 A year ago.
01:47:37.000 It happened so fast.
01:47:38.000 A year ago.
01:47:38.000 It's so fast.
01:47:39.000 Yeah.
01:47:40.000 Like, I had conversations, like, so academics are actually kind of bad with this.
01:47:46.000 I had conversations for whatever reason, like, towards the end of last year, like, last fall, with a bunch of academics about, like, how fast AI is progressing.
01:47:54.000 And they were all, like, poo-pooing it and going, like, oh, no, they're running into a wall, like, scaling through the walls and all that stuff.
01:48:02.000 Oh, my God, the walls.
01:48:03.000 There's so many walls.
01:48:04.000 Like, so many of these, like, imaginary reasons that things are...
01:48:06.000 And by the way, things could slow down.
01:48:08.000 Like, I don't want to be, like, absolutist about this.
01:48:10.000 Things could absolutely slow down.
01:48:12.000 There are a lot of interesting arguments going around every which way.
01:48:15.000 But...
01:48:15.000 How could things slow down if there's a giant Manhattan Project race between us and a competing superpower that has a technological advantage?
01:48:25.000 So there's this thing called like AI scaling laws.
01:48:28.000 And these are kind of at the core of where we're at right now geostrategically around this stuff.
01:48:31.000 So what AI scaling laws say roughly is that bigger is better when it comes to intelligence.
01:48:35.000 So if you make a bigger sort of AI model, a bigger artificial brain.
01:48:39.000 And you train it with more computing power or more computational resources and with more data.
01:48:46.000 The thing is going to get smarter and smarter and smarter as you scale those things together, right?
01:48:50.000 Roughly speaking.
01:48:51.000 Now, if you want to keep scaling, it's not like it keeps going up if you double the amount of computing power that the thing gets twice as smart.
01:48:58.000 Instead, what happens is if you want, it goes in like orders of magnitude.
01:49:02.000 So if you want to make it another kind of increment smarter, you've got a 10x.
01:49:06.000 You've got to increase by a factor of 10 the amount of compute.
01:49:09.000 And then a factor of 10 again.
01:49:10.000 So now you're a factor of 100.
01:49:11.000 And then 10 again.
01:49:13.000 So if you look at the amount of compute that's been used to train these systems over time, it's this like...
01:49:17.000 Exponential, explosive exponential that just keeps going like higher and higher and higher and steepens and steepens like 10x every, I think it's about every two years now.
01:49:27.000 You 10x the amount of compute.
01:49:28.000 Now, you can only do that so many times until your data center is like a 100 billion, a trillion dollar.
01:49:37.000 10 trillion dollars.
01:49:38.000 Every year, you're kind of doing that.
01:49:40.000 So right now, if you look at the clusters, the ones that Elon is building, the ones that Sam is building, Memphis and Texas, these facilities are hitting the $100 billion scale.
01:49:55.000 We're kind of in that.
01:49:56.000 There are tens of billions of dollars, actually.
01:49:58.000 Looking at 2027, you're kind of more in that space, right?
01:50:02.000 You can only do 10x so many more times until you run out of money, but more importantly, you run out of chips.
01:50:09.000 Like, literally, TSMC cannot pump out those chips fast enough to keep up with this insane growth.
01:50:14.000 And one consequence of that is that...
01:50:18.000 You essentially have this gridlock, new supply chain choke points show up, and you're like, suddenly, I don't have enough chips, or I run out of power.
01:50:28.000 That's the thing that's happening on the U.S. energy grid right now.
01:50:31.000 We're literally running out of one, two gigawatt places where we can plant a data center.
01:50:37.000 That's the thing people are fighting over.
01:50:39.000 It's one of the reasons why energy deregulation is a really important pillar of U.S. competitiveness.
01:50:48.000 One of the things that adversaries do is they actually will fund protest groups against energy infrastructure projects.
01:51:01.000 Just to slow down.
01:51:02.000 Just to, like, fuck with us, baby.
01:51:04.000 Just to tie them up in litigation.
01:51:06.000 Exactly.
01:51:06.000 And, like, it was actually remarkable.
01:51:08.000 We talked to some state cabinet officials, so in various U.S. states, and they're basically saying, like, yep, we're actually tracking the fact that, as far as we can tell, every single environmental or whatever protest group against an energy project has funding that can be traced back to...
01:51:28.000 Nation-state adversaries who are...
01:51:30.000 They don't know.
01:51:31.000 They don't know about it.
01:51:32.000 So they're not doing it intentionally.
01:51:33.000 They're not like, oh, we're trying to...
01:51:34.000 No.
01:51:35.000 They just...
01:51:35.000 You just imagine like, oh, we've got like...
01:51:37.000 There's a millionaire backer who cares about the environment.
01:51:39.000 He's giving us a lot of money.
01:51:40.000 Great.
01:51:41.000 Fantastic.
01:51:42.000 But sitting behind that dude in the shadows is like the usual suspects.
01:51:47.000 Wow.
01:51:48.000 And it's what you would do, right?
01:51:49.000 I mean, if you're trying to tie up the US...
01:51:50.000 Sure.
01:51:50.000 You're just trying to fuck with us.
01:51:51.000 Yeah.
01:51:52.000 Like, just go for it.
01:51:53.000 You were just advocating fucking with them.
01:51:55.000 So of course they're going to fuck with us.
01:51:56.000 That's right.
01:51:57.000 That's it.
01:51:57.000 What a weird world we're living in.
01:51:59.000 Yeah.
01:51:59.000 But you can also see how a lot of this is still us getting in our own way, right?
01:52:03.000 We could.
01:52:04.000 If we had the will, we could go like, okay, so for certain types of energy projects, for data center projects and some carve-out categories, we're actually going to put bounds around how much delay you can create by lawfare and by other stuff.
01:52:21.000 Allows things to move forward while still allowing the legitimate concerns of the population for projects like this in the backyard to have their say.
01:52:29.000 But there's a national security element that needs to be injected into this somewhere.
01:52:34.000 And it's all part of the rule set that we have and are like tying an arm behind our back basically.
01:52:42.000 So what would deregulation look like?
01:52:44.000 How would that?
01:52:46.000 There's a lot of low-hanging fruit for that.
01:52:48.000 What are the big ones?
01:52:50.000 Right now, there are all kinds of things around.
01:52:54.000 It gets in the weeds pretty quickly.
01:52:59.000 Carbon emissions is a big thing.
01:53:04.000 Yes, data centers, no question, have massive carbon footprints.
01:53:08.000 That's definitely a thing.
01:53:09.000 The question is, like, are you really going to bottleneck builds because of that?
01:53:15.000 And like, are you going to come out with exemptions for, you know, like NEPA exemptions for all these kinds of things?
01:53:22.000 Do you think a lot of this green energy shit is being funded by other countries to try to slow down our energy?
01:53:27.000 Yeah.
01:53:28.000 It's a dimension that was flagged, actually, in the context of what Ed was talking about.
01:53:32.000 That's one of the arguments that's being made.
01:53:34.000 And to be clear, though, this is also how adversaries operate, is not necessarily in creating something out of nothing, because that's hard to do, and it's fake, right?
01:53:46.000 Instead, it's like...
01:53:47.000 There's a legitimate concern.
01:53:49.000 So a lot of the stuff around the environment and around like totally legitimate concerns.
01:53:53.000 Like I don't want my backyard waters to be polluted.
01:53:56.000 I don't want like my kids to get cancer from whatever.
01:53:58.000 Like totally legitimate concerns.
01:54:00.000 So what they do, it's like we talked about like you're like waving that rowboat back and forth.
01:54:04.000 They identify the nascent concerns that are genuine and grassroots.
01:54:09.000 And they just go like this, this, and this.
01:54:12.000 Amplify.
01:54:13.000 That would make sense why they amplify carbon above all these other things.
01:54:16.000 You think about the amount of particulates in the atmosphere, pollution, polluting the rivers, polluting the ocean.
01:54:22.000 That doesn't seem to get a lot of traction.
01:54:24.000 Carbon does.
01:54:25.000 And when you go carbon zero, you put a giant monkey wrench into the gears of society.
01:54:31.000 One of the tells is also like...
01:54:35.000 So, you know, nuclear would be kind of the ideal energy source, especially modern power plants like the Gen 3 or Gen 4 stuff, which have very low meltdown risk, safe by default, all that stuff.
01:54:46.000 And yet these groups are, like, coming out against this.
01:54:49.000 It's like perfect, clean, green power.
01:54:52.000 What's going on, guys?
01:54:54.000 And it's because, again, not 100% of the time.
01:54:57.000 You can't really say that because it's so fuzzy and around the edges.
01:55:00.000 A lot of it is idealistic people looking for a utopia and they get co-opted by nation states.
01:55:05.000 And not even co-opted.
01:55:06.000 They're fully sincere.
01:55:07.000 Yeah, just amplify.
01:55:08.000 Just fund it.
01:55:09.000 Amplified in a preposterous way.
01:55:11.000 That's it.
01:55:11.000 And then Al Gore gets at the helm of it.
01:55:13.000 And then that little girl, that how dare you girl.
01:55:17.000 How dare you?
01:55:18.000 How dare you?
01:55:20.000 Yeah, it's wonderful.
01:55:22.000 It's a wonderful thing to watch.
01:55:23.000 Play out because it just capitalizes on all these human vulnerabilities.
01:55:28.000 Yeah.
01:55:28.000 And one of the big things that you can do, too, is like a quick win is just like impose limits on how much time these things can be allowed to be tied up in litigation.
01:55:37.000 So impose time limits on that process just to say, like, look, I get it.
01:55:42.000 Like, we're going to have this conversation, but this conversation has a clock on it.
01:55:46.000 Because, you know, we're talking to this one data center company, and what they were saying, we were asking, like, look, what are the timelines when you think about bringing new power, like new natural gas plants online?
01:55:58.000 And they're like, well, those are like five to seven years out.
01:56:00.000 And then you go, okay, well, like, how long?
01:56:02.000 And that's, by the way, that's probably way too long to be relevant in the superintelligence context.
01:56:07.000 And so you're like, okay, well, how long if all the regulations were waived?
01:56:11.000 If this was like a national security imperative and whatever authorities, you know, Defense Production Act, whatever, like, was in your favor.
01:56:18.000 And they're like, oh, I mean, it's actually just like a two-year build.
01:56:21.000 Like, that's what it is.
01:56:22.000 So you're tripling the build time.
01:56:25.000 We're getting in our own way.
01:56:27.000 Every which way.
01:56:28.000 Every which way.
01:56:29.000 And also, like, I mean, I also don't want to be too working in our own way, but, like, we don't want to, like, frame it as, like, China's, like, they fuck up.
01:56:38.000 They fuck up a lot, like, all the time.
01:56:41.000 One actually kind of, like, funny one is around DeepSeek.
01:56:44.000 So, you know DeepSeek, right?
01:56:46.000 They made this, like, open source model that, like, everyone, like, lost their minds about back in January.
01:56:51.000 R1, yeah.
01:56:52.000 Yeah, R1.
01:56:53.000 And they're legitimately a really, really good team.
01:56:55.000 But it's fairly clear that even as of like end of last year and certainly in the summer of last year, like they were not dialed in to the CCP mothership.
01:57:08.000 And they were doing stuff that was like actually kind of hilariously messing up the propaganda efforts of of the CCP without realizing it.
01:57:16.000 So to give you like some context on this, one of one of the CCP's like.
01:57:23.000 Large kind of propaganda goals in the last four years has been framing, creating this narrative that, like, the export controls we have around AI and, like, all this gear and stuff that we were talking about, look, man, those don't even work.
01:57:39.000 So you might as well just, like, give up.
01:57:40.000 Why don't you just give up on the export controls?
01:57:42.000 It's pointless.
01:57:43.000 We don't even care.
01:57:45.000 We don't even care.
01:57:45.000 So that, trying to frame that narrative.
01:57:47.000 And they went to, like, gigantic efforts to do this.
01:57:50.000 So I don't know if, like, there's this, like, kind of, Crazy thing where the Secretary of Commerce under Biden, Gina Raimondo, visited China in, I think, August 2023.
01:58:01.000 And the Chinese basically like timed the launch of the Huawei Mate 60 phone that had this these chips that were supposed to be made by like export controlled shit for right for her visit.
01:58:14.000 So it was basically just like a big like, fuck you.
01:58:17.000 We don't even give a shit about your export controls, like basically trying a morale hit or whatever.
01:58:22.000 And you think about that, right?
01:58:27.000 You've got to coordinate with Huawei.
01:58:29.000 You've got to get the TikTok memes and shit going in the right direction.
01:58:35.000 All that stuff.
01:58:36.000 And all the stuff they've been putting out is around this narrative.
01:58:39.000 Now, fast forward to mid-last year.
01:58:43.000 The CEO of DeepSeek, the company, back then, it was totally obscure.
01:58:48.000 Nobody was tracking who they were.
01:58:50.000 They were working in total obscurity.
01:58:53.000 He goes on this, he does this random interview on Substack.
01:58:56.000 And what he says is, he's like, yeah, so honestly, like, we're really excited and doing this AGI push or whatever.
01:59:04.000 And like, honestly, like, money's not the problem for us.
01:59:07.000 Talent's not the problem for us.
01:59:08.000 But like, access to compute, like these export controls, man.
01:59:14.000 Do they ever work?
01:59:16.000 That's a real problem for us.
01:59:18.000 Oh, boy.
01:59:19.000 And, like, nobody noticed at the time.
01:59:20.000 But then the whole DeepSeek R1 thing blew up in December.
01:59:26.000 And now you imagine, like, you're the Chinese Ministry of Foreign Affairs.
01:59:29.000 Like, you've been, like, you've been putting this narrative together for, like, four years.
01:59:34.000 And this jackass that nobody heard about five minutes ago.
01:59:38.000 Basically just, like, shits all over it.
01:59:40.000 And, like, you're not hearing that line from him anymore.
01:59:42.000 No, no, no, no, no.
01:59:43.000 They've locked that shit down.
01:59:44.000 Oh, and actually, the funniest part of this...
01:59:48.000 Right when R1 launched, there's a random DeepSeek employee.
01:59:51.000 I think his name is like Dia Guo or something like that.
01:59:54.000 He tweets out.
01:59:55.000 He's like, so this is like our most exciting launch of the year.
01:59:58.000 Nothing can stop us on the path to AGI except access to compute.
02:00:03.000 And then literally the dude in Washington, D.C., who works at the think tank on export controls against China, reposts that on X, and goes basically like, message received.
02:00:18.000 And so, like, hilarious for us.
02:00:21.000 But also, like, you know that on the backside, somebody got screamed at for that shit.
02:00:27.000 Somebody got magic bust.
02:00:29.000 Somebody got, yeah, somebody got, like, taken away or whatever.
02:00:32.000 Because, like, it just undermined their entire, like, four-year, like, narrative around these export controls.
02:00:39.000 Wow.
02:00:39.000 But that shit ain't going to happen again from DeepSeek.
02:00:42.000 Better believe it.
02:00:44.000 And that's part of the problem with like, so the Chinese face so many issues.
02:00:48.000 One of them is, you know, to kind of, another one is the idea of just waste and fraud, right?
02:00:54.000 So we have a free market.
02:00:56.000 Like what that means is you raise from private capital.
02:01:00.000 People who are pretty damn good at assessing shit will like look at your setup and assess whether it's worth backing you for these massive multi-billion dollar deals.
02:01:09.000 In China, the state like...
02:01:11.000 I mean, the stories of waste are pretty insane.
02:01:13.000 They'll, like, send a billion dollars to, like, a bunch of yahoos who will pivot from whatever, like, I don't know, making these widgets to just, like, oh, now we're, like, a chip foundry and they have no experience in it.
02:01:23.000 But because of all these subsidies, because of all these opportunities, now we're going to say that we are.
02:01:27.000 And then, no surprise, two years later, they burn out and they've just, like, lit.
02:01:31.000 A billion dollars on fire or whatever billion yen.
02:01:34.000 And, like, the weird thing is this is actually working overall, but it does lead to insane and unsustainable levels of waste.
02:01:41.000 Like, the Chinese system right now is obviously, like, they've got their massive property bubble that they're...
02:01:47.000 That's looking really bad.
02:01:48.000 We've got a population crisis.
02:01:50.000 The only way out for them is the AI stuff right now.
02:01:53.000 Like, really, the only path for them is that, which is why they're working it so hard.
02:01:58.000 But the stories of just, like, billions and tens of billions of dollars being lit on fire, specifically in the semiconductor industry, in the AI industry, like, that's a drag force that they're dealing with constantly that we don't have here in the same way.
02:02:11.000 So it's sort of like the different structural advantages and weaknesses of...
02:02:16.000 And when we think about what do we need to do to counter this, to be active in this space, to be a live player again, it means factoring in how do you take advantage of some of those opportunities that their system presents that ours doesn't.
02:02:31.000 When you say be a live player again, where do you position us?
02:02:35.000 I think it remains to be...
02:02:37.000 So right now, this administration is obviously taking bigger swings.
02:02:41.000 What are they doing differently?
02:02:42.000 So, well, I mean, things like tariffs, I mean, they're not shy about trying new stuff.
02:02:47.000 And tariffs are very complex in this space, like the impact, the actual impact of the tariffs and not universally good.
02:02:55.000 But the on-shoring effect is also something that you really want.
02:02:58.000 So it's a very mixed bag.
02:03:00.000 But it's certainly an administration that's like willing to do high stakes, big moves in a way that...
02:03:06.000 Other administrations haven't.
02:03:07.000 And in a time when you're looking at a transformative technology that's going to, like, upend so much about the way the world works, you can't afford to have that mentality we were just talking about with, like, the nervous...
02:03:19.000 I mean, you encountered it with the staffers, you know, when booking the podcast with the presidential cycle, right?
02:03:25.000 Like, the kind of, like, nervous...
02:03:29.000 Everything's got to be controlled and it's got to be like just so you can't have it.
02:03:33.000 Yeah, it's like wrestlers have that mentality of like just like aggression, like feed in, right?
02:03:40.000 Feed forward.
02:03:41.000 Don't just sit back and like wait to take the punch.
02:03:43.000 It's not like one of the guys who helped us out on this has this saying.
02:03:48.000 He's like, fuck you, I go first and it's always my turn.
02:03:52.000 Right.
02:03:52.000 That's what success looks like when you actually are managing these kinds of national security issues.
02:03:57.000 The mentality we had adopted was this like sort of siege mentality where we're just letting stuff happen to us and we're not feeding in.
02:04:05.000 That's something that I'm much more optimistic about in this context.
02:04:09.000 It's tough, too, because I understand people who hear that and go like, well, look, you're talking about like escalatory.
02:04:15.000 This is an escalatory agenda again.
02:04:18.000 I actually think paradoxically it's not.
02:04:19.000 It's about keeping adversaries in check and training them to respect American territorial integrity, American technological sovereignty.
02:04:27.000 Like, you don't get that for free, and if you just sit back, that is escalatory.
02:04:33.000 It's just...
02:04:33.000 Yeah, and this is basically the sub-threshold version of, like, you know, like the World War II appeasement thing, where back, you know, Hitler was, like, was taken, he was taken Austria, he was re-militarizing shit, he was doing...
02:04:47.000 He was doing this, he was doing that.
02:04:48.000 And the British were like, okay, we're going to let him just take one more thing and then he will be satisfied.
02:04:57.000 And that just didn't work.
02:04:59.000 Maybe I have a little bit of Poland, please.
02:05:02.000 A little bit of Poland.
02:05:03.000 Maybe the Czechoslovakia is looking awfully fine.
02:05:06.000 And so this is basically like they fell into that pit, like that tar pit.
02:05:12.000 Peace in our time, yeah.
02:05:14.000 Peace in our time, right?
02:05:15.000 And to some extent, we've still kind of learned the lesson of not letting that happen with territorial boundaries, but that's big and it's visible and it happens on the map and you can't hide it.
02:05:27.000 Whereas one of the risks, especially with the previous administration, was there's these subthreshold things that don't show up in the news and that are calculated.
02:05:38.000 Basically, our adversaries know.
02:05:41.000 Because they know history.
02:05:42.000 They know not to give us a Pearl Harbor.
02:05:46.000 They know not to give us a 9-11.
02:05:48.000 Because historically, countries that give America a Pearl Harbor end up having a pretty bad time about it.
02:05:55.000 And so why would they give us a reason to come and bind together against an obvious external threat or risk when they can just keep chipping away at it?
02:06:09.000 Elevate that and realize this is what's happening.
02:06:11.000 This is the strategy.
02:06:12.000 We need to...
02:06:13.000 We need to take that, like, let's not do appeasement mentality and push it across in these other domains because that's where the real competition is going on.
02:06:21.000 That's where it gets so fascinating in regards to social media because it's imperative that you have an ability to express yourself.
02:06:27.000 It's very valuable for everybody.
02:06:29.000 The free exchange of information, finding out things that you're not going to get from mainstream media and it's led to the rise of independent journalism.
02:06:37.000 It's all great.
02:06:37.000 But also, you're being manipulated, like, left and right constantly.
02:06:43.000 Most people don't have the time to filter through it.
02:06:48.000 We try to get some sort of objective sense of what's actually going on.
02:06:51.000 It's true.
02:06:52.000 It's like our free speech.
02:06:53.000 It's like it's the layer where our society figures stuff out.
02:06:56.000 And if adversaries get into that layer, they're like almost inside of our brain.
02:07:02.000 And there's ways of addressing this.
02:07:03.000 Like one of the challenges obviously is like – so they try to push in extreme opinions in either direction.
02:07:10.000 And it's – that part is actually – it's kind of difficult because while – The most extreme opinions are also the most likely generally to be wrong.
02:07:21.000 They're also the most valuable when they're right because they tell us a thing that we didn't expect by definition that's true and that can really advance us forward.
02:07:32.000 And so, I mean, there are actually solutions to this.
02:07:37.000 I mean, this particular thing isn't an area we...
02:07:40.000 We're, like, too immersed in.
02:07:42.000 But one of the solutions that has been bandied about is, like, you know, like, you might know, like, polymarket prediction markets and stuff like that, where at least, you know, hypothetically, if you have a prediction market around, like, if we do this policy,
02:07:57.000 this thing will or won't happen, that actually creates a challenge around trying to manipulate that view or that market.
02:08:05.000 Because what ends up happening is, like, if you're an adversary and you want to...
02:08:08.000 Not just like manipulate a conversation that's happening in social media, which is cheap, but manipulate the price on a prediction market.
02:08:17.000 You have to buy in.
02:08:19.000 You have to spend real resources.
02:08:21.000 And if to the extent you're wrong and you're trying to create a wrong opinion, you're going to lose your resource.
02:08:28.000 So you actually can't push too far too many times or you will just get your money taken away from you.
02:08:38.000 I think that's one approach where just in terms of preserving discourse, some of the stuff that's happening in prediction markets is actually really interesting and really exciting, even in the context of bots and AIs and stuff like that.
02:08:51.000 This is the one way to find truth in the system is find out where people are making money.
02:08:56.000 Exactly.
02:08:56.000 Put your money where your mouth is, right?
02:08:58.000 Proof of work.
02:08:59.000 That is what just the market is theoretically too, right?
02:09:03.000 It's got obviously big issues and can be manipulated in the short term.
02:09:08.000 But in the long run, this is one of the really interesting things about startups too.
02:09:12.000 When you run into people in the early days...
02:09:16.000 By definition, their startup looks like it's not going to succeed, right?
02:09:19.000 That is what it means to be a seed stage startup, right?
02:09:22.000 If it was obvious you were going to succeed, you would, you know, the people would have raised more money already.
02:09:26.000 Yeah.
02:09:26.000 So what you end up having is these highly contrarian people who, despite everybody telling them that they're going to fail, just believe in what they're doing and think they're going to succeed.
02:09:36.000 And I think that's part of what really kind of shapes the startup founder's soul in a way that's really constructive.
02:09:42.000 It's also something that, if you look at the Chinese system, is very different.
02:09:46.000 You raise money in very different ways.
02:09:48.000 You're coupled to the state apparatus.
02:09:50.000 You're both dependent on it and you're supported by it.
02:09:54.000 But there's just a lot of...
02:09:56.000 And it makes it hard for Americans to relate to Chinese and vice versa and understand each other's systems.
02:10:02.000 One of the biggest risks as you're thinking through what is your posture going to be relative to these countries is you fall into thinking that their traditions, their way of thinking about the world is the same as your own.
02:10:11.000 And that's something that's been an issue for us with China for a long time is, you know, hey, they'll liberalize, right?
02:10:17.000 Like bring them into the World Trade Organization.
02:10:18.000 It's like, oh, well, actually they'll sign the document, but they won't actually live up to any of the commitments.
02:10:25.000 It makes appeasement really tempting because you're thinking, oh, they're just like us.
02:10:28.000 They're just around the corner.
02:10:30.000 If we just reach out the oil branch a little bit further, they're going to come around.
02:10:36.000 It's like a guy who's stuck in the friend zone with a girl.
02:10:40.000 One day, she's going to come around and realize I'm a great catch.
02:10:44.000 You keep on trucking, buddy.
02:10:46.000 One day, China's going to be my bestie.
02:10:49.000 We're going to be besties.
02:10:50.000 We just need an administration that reaches out to them and just lets them know, man, there's no reason why she'd be adversaries.
02:10:56.000 We're all just people on planet Earth together.
02:10:59.000 We're all together.
02:11:01.000 I honestly wish that was true.
02:11:03.000 It would be wonderful.
02:11:05.000 Maybe that's what AI brings about.
02:11:06.000 Maybe AI, maybe super intelligence realizes, "Hey, you fucking apes, you territorial apes with thermonuclear weapons, how about you shut the fuck up?
02:11:17.000 You guys are doing the dumbest thing of all time and you're being manipulated by a small group of people that are profiting in insane ways off of your misery."
02:11:27.000 That's actually not- Stole first, and those people are now controlling all the fucking money.
02:11:42.000 How about we stop that?
02:11:44.000 Wow, we covered a lot of ground there.
02:11:46.000 Well, that's what I would do if I was superintelligence that would have stopped all that.
02:11:50.000 That actually is, like, so this is not, like, relevant to the risk stuff or to the whatever at all, but it's just interesting.
02:11:56.000 So there's actually theories, like, in the same way that there's theories around power seeking and stuff around superintelligence, there's theories around, like, how superintelligence is.
02:12:06.000 Do deals with each other, right?
02:12:08.000 And you actually, like, you have this intuition, which is exactly right, which is that, hey, two super intelligences, like, actual legit super intelligences should never actually, like, fight each other destructively in the real world, right?
02:12:21.000 Like, that seems weird.
02:12:22.000 That shouldn't happen because they're so smart.
02:12:24.000 And in fact, like, there's theories around they can kind of do perfect deals with each other based on, like, if we're two super intelligences, I can kind of assess, like, how powerful you are.
02:12:35.000 You can assess how powerful I am, and we can actually decide, well, if we did fight a war against each other...
02:12:46.000 You would have this chance of winning.
02:12:47.000 I would have that chance of winning.
02:12:49.000 And so let's just not fight.
02:12:50.000 Well, it would assess instantaneously that there's no benefit in that.
02:12:52.000 Exactly.
02:12:53.000 And also it would know something that we all know, which is the rising tide lifts all boats.
02:12:58.000 But the problem is the people that already have yachts, they don't give a fuck about your boat.
02:13:01.000 Like, hey, hey, hey, that water's mine.
02:13:03.000 In fact, you shouldn't even have water.
02:13:05.000 Well, hopefully it's so positive some, right, that even they enjoy the benefits.
02:13:08.000 But, I mean, you're right.
02:13:09.000 This is the issue right now.
02:13:11.000 And one of the nice things, too, is as you build up your ratchet of AI, It does start to open some opportunities for actual trust but verified, which is something that we can't do right now.
02:13:22.000 It's not like with nuclear stockpiles where we've had some success in some context with enforcing treaties and stuff like that, sending inspectors in and all that.
02:13:32.000 With AI right now, how can you actually prove that...
02:13:35.000 Like some international agreement on the use of AI is being observed.
02:13:40.000 Even if we figure out how to control these systems, how can we make sure that, you know, China is baking in those control mechanisms into their training runs and that we are and how can we prove it to each other without having total access to the compute stack?
02:13:54.000 We don't really have a solution for that.
02:13:56.000 There are all kinds of programs like this FlexHeg thing.
02:13:59.000 But anyway, those are not going to be online by 2027.
02:14:03.000 But it's really good that people are working on them.
02:14:05.000 For sure.
02:14:06.000 You want to be positioned for catastrophic success.
02:14:10.000 What if something great happens or we have more time or whatever?
02:14:14.000 You want to be working on this stuff that allows this kind of control or oversight that's kind of hands-off.
02:14:22.000 You know, in theory, you can hand over GPUs to an adversary inside this box with these encryption things.
02:14:31.000 The people we've spoken to in the spaces that actually try to break into boxes like this are like, well, that's probably not going to work.
02:14:40.000 But who knows?
02:14:41.000 It might.
02:14:41.000 Yeah.
02:14:41.000 So the hope is that as you build up your AI capabilities, basically, it starts to create solutions.
02:14:46.000 So it starts to create ways for two countries to verifiably adhere to some kind of international agreement or to find, like you said, paths for de-escalation.
02:14:56.000 That's the sort of thing that we
02:14:58.000 That would be what's really fascinating.
02:15:04.000 Artificial general intelligence becomes super intelligence and it immediately weeds out all the corruption.
02:15:09.000 It goes, hey, this is the problem.
02:15:11.000 Like a massive doge in the sky.
02:15:13.000 Yeah, exactly.
02:15:14.000 Like, we figured it out.
02:15:15.000 You guys are all criminals.
02:15:17.000 And expose it to all the people.
02:15:19.000 Like, these people that are your leaders have been profiting.
02:15:21.000 And they do it on purpose.
02:15:22.000 And this is how they're doing it.
02:15:24.000 And this is how they're manipulating you.
02:15:25.000 And these are all the lies that they've told.
02:15:27.000 I'm sure that list is pretty...
02:15:29.000 Whoa.
02:15:30.000 It almost would be scary.
02:15:31.000 Like, if you could x-ray the world right now.
02:15:33.000 And, like, see all the...
02:15:35.000 You'd want an MRI.
02:15:36.000 You'd want to get, like, down to the tissue.
02:15:38.000 Yeah, you're right.
02:15:38.000 You'd probably...
02:15:39.000 Yeah, you'd want to get down to the cellular level.
02:15:41.000 But, like, it...
02:15:42.000 Because it would be offshore accounts.
02:15:45.000 Then you'd start finding show companies.
02:15:47.000 There would be so much...
02:15:48.000 Like, the stuff that...
02:15:49.000 It comes out, you know, from just randomly, right?
02:15:53.000 Just random shit that comes out.
02:15:56.000 Like, yeah, the...
02:15:57.000 I forget that, like, Argentinian...
02:16:00.000 I think what you were talking about, like, the Argentinian thing that came out a few years ago around all the oligarchs and their offshore accounts.
02:16:08.000 The Meryl Streep thing, yeah.
02:16:08.000 Yeah, yeah, yeah.
02:16:09.000 Meryl Streep?
02:16:10.000 Yeah, the laundromat there.
02:16:12.000 The laundromat movie.
02:16:13.000 You ever seen that?
02:16:14.000 Panama Papers.
02:16:15.000 The Panama Papers.
02:16:16.000 I never saw that.
02:16:17.000 No?
02:16:17.000 It's a good movie.
02:16:18.000 Is it called the Panama Papers, the movie?
02:16:20.000 It's called the laundromat.
02:16:21.000 Oh, okay.
02:16:22.000 You remember the Panama Papers?
02:16:24.000 Do you know?
02:16:25.000 Roughly.
02:16:25.000 Yeah, it's like all the oligarchs stashing their cash in Panama.
02:16:31.000 Like offshore tax haven stuff.
02:16:32.000 Yeah, it's like...
02:16:34.000 And, like, someone basically blew it wide open, and so you got to see, like, every, like, oligarch and rich person's, like, financial shit.
02:16:45.000 Like, every once in a while, right, the world gets just, like, a flash of, like, oh, here's what's going on at the surface.
02:16:51.000 It's like, oh, fuck!
02:16:52.000 And then we all, like, go back to sleep.
02:16:54.000 What's fascinating is, like, the unhideables, right?
02:16:56.000 The little things that...
02:16:58.000 Can't help but give away what is happening.
02:17:02.000 You think about this in AI quite a bit.
02:17:04.000 Some things that are hard for companies to hide is they'll have a job posting.
02:17:08.000 They've got to advertise to recruit.
02:17:10.000 So you'll see like, oh, interesting.
02:17:12.000 Oh, OpenAI is looking to hire some people from hedge funds.
02:17:18.000 I wonder what that means.
02:17:19.000 I wonder what that implies.
02:17:20.000 If you think about all of the leaders in the AI space, think about the Medallion Fund, for example.
02:17:25.000 This is a super successful hedge fund.
02:17:29.000 The Man Who Broke the Market.
02:17:32.000 The Man Who Broke the Market is the famous book about the founder of the Medallion Fund.
02:17:35.000 This is basically a fund that...
02:17:37.000 They make, like, ridiculous, like, $5 billion returns every year kind of guaranteed, so much so they have to cap how much they invest in the market because they would otherwise, like, move the market too much, like, affect it.
02:17:50.000 The fucked up thing about, like, the way they trade, and so this is, like, 20-year-old information, but it's still indicative because, like, you can't get current information about their strategies.
02:18:00.000 But one of the things that they were the first to kind of go for and figure out is they were like, Okay, they basically were the first to kind of build what was at the time, as much as possible, an AI that autonomously did trading at, like,
02:18:15.000 great speeds, and it had, like, no human oversight and just worked on its own.
02:18:20.000 And what they found was the strategies that were the most successful were the ones that humans understood the least.
02:18:30.000 Because if you have a strategy that a human can understand...
02:18:35.000 Some human's going to go and figure out that strategy and trade against you.
02:18:38.000 Whereas if you have the kind of the balls to go like, oh, this thing is doing some weird shit that I cannot understand no matter how hard I try, let's just fucking YOLO and trust it and make it work.
02:18:50.000 If you have all the stuff debugged and if the whole system is working right...
02:18:55.000 That's where your biggest successes are.
02:18:57.000 What kind of strategies are you talking about?
02:18:59.000 I don't know specific examples.
02:19:03.000 How are AI systems trained today?
02:19:07.000 Just as a trading strategy.
02:19:11.000 As an example, you buy this stock.
02:19:19.000 The Thursday after the full moon and then sell it like the Friday after the new moon or some like random shit like that.
02:19:25.000 But it's like, why does that even work?
02:19:27.000 Like, why would why would that even work?
02:19:29.000 So to like to sort of explain why these these strategies work better, if you think about how AI systems are trained today, you basically very roughly.
02:19:40.000 You start with this blob of numbers that's called a model.
02:19:44.000 And you feed it input, you get an output.
02:19:47.000 If the output you get is no good, if you don't like the output, you basically fuck around with all those numbers, change them a little bit, and then you try again.
02:19:54.000 You're like, oh, okay, that's better.
02:19:56.000 And you repeat that process over and over and over with different inputs and outputs.
02:19:59.000 And eventually, those numbers, that mysterious ball of numbers, starts to behave well.
02:20:05.000 It starts to make good predictions or generate good outputs.
02:20:08.000 Now, you don't know why that is.
02:20:10.000 You just know that it does a good job, at least where you've tested it.
02:20:15.000 Now if you slightly change what you tested on, suddenly you could discover, oh shit, it's catastrophically failing at that thing.
02:20:20.000 These things are very brittle in that way, and that's...
02:20:22.000 That's part of the reason why ChatGPT will just like completely go on a psycho binge fest every once in a while if you give it a prompt that has like too many exclamation points and asterisks in it or something.
02:20:33.000 Like these systems are weirdly brittle in that way.
02:20:36.000 But applied to investment strategies, if all you're doing is saying like Optimize for returns.
02:20:44.000 Give it inputs.
02:20:45.000 Make me more money by the end of the day.
02:20:47.000 It's like an easy goal.
02:20:48.000 It's a very clear-cut goal, right?
02:20:51.000 You can give a machine.
02:20:52.000 So you end up with a machine that gives you these very...
02:20:55.000 It is a very weird strategy.
02:20:57.000 This ball of numbers isn't human understandable.
02:21:00.000 It's just really fucking good at making money.
02:21:02.000 And why is it really fucking good at making money?
02:21:04.000 I don't know.
02:21:05.000 I mean, it just kind of does the thing.
02:21:06.000 And in making money, I don't ask too many questions.
02:21:08.000 That's kind of like the...
02:21:09.000 So when you try to impose on that system human interpretability, you pay what in the AI world is known as the interpretability tax.
02:21:17.000 Basically, you're adding another constraint, and the minute you start to do that, you're forcing it to optimize for something other than pure rewards.
02:21:25.000 Like doctors using AI to diagnose diseases are less effective than the chatbot on its own.
02:21:30.000 That's actually related, right?
02:21:31.000 That's related.
02:21:31.000 If you want that system to get good at diagnosis, that's one thing.
02:21:35.000 OK, just fucking make it good at diagnosis.
02:21:37.000 If you want it to be good at diagnosis and to produce explanations that a good doctor will go like, OK, I'll use that.
02:21:45.000 Well, great.
02:21:45.000 But guess what?
02:21:46.000 Now you're spending some of that precious compute on something other than just the thing you're trying to optimize for.
02:21:52.000 And so now that's going to come at a cost of the actual performance of the system.
02:21:55.000 And so if you are going to optimize like the fuck out of making money.
02:21:59.000 You're going to necessarily de-optimize the fuck out of anything else, including being able to even understand what that system is doing.
02:22:07.000 And that's kind of like at the heart of a lot of the kind of big-picture AI strategy stuff is people are wondering, like, how much interpretability tax am I willing to pay here?
02:22:16.000 And how much does it cost?
02:22:17.000 And everyone's willing to go a little bit further and a little further.
02:22:20.000 So OpenAI actually had a paper or, I guess, a blog post where they talked about this.
02:22:26.000 And they were like, look, right now...
02:22:29.000 We have this, essentially, this, like, thought stream that our model produces on the way to generating its final output.
02:22:38.000 And that thought stream, like, we don't want to touch it to make it, like, interpretable, to make it make sense, because if we do that, then essentially it'll be optimized to convince us of whatever the thing is that we want it to do.
02:22:53.000 So it's like if you've used like an OpenAI model recently, right, like 03 or whatever, it's doing its thinking before it starts like outputting the answer.
02:23:03.000 And so that thinking is, yeah, we're supposed to like be able to read that and kind of get it, but also...
02:23:10.000 We don't want to make it too legible, because if we make it too legible, it's going to be optimized to be legible and to be convincing, rather than...
02:23:21.000 To fool us, basically.
02:23:22.000 Yeah, exactly.
02:23:23.000 Oh, Jesus Christ.
02:23:26.000 You guys are making me less comfortable than I thought you would.
02:23:30.000 I knew coming at Jamie and I were talking about it before, like, how bad are they going to freak us out?
02:23:35.000 You're freaking me out more.
02:23:37.000 Well, I mean, okay, so...
02:23:38.000 I do want to highlight, so the game plan right now on the positive end, let's see how this works.
02:23:42.000 Jesus.
02:23:43.000 Jamie, do you feel the same way?
02:23:45.000 Uh, yeah.
02:23:49.000 I mean, I have articles I didn't bring up that are supporting some of this stuff.
02:23:53.000 Like, today, China quietly made some chip that they shouldn't have been able to do because of the sanctions.
02:23:58.000 Oh, that's fine.
02:23:58.000 And it's basically based off of their just sheer will.
02:24:01.000 Okay, so there's...
02:24:02.000 SMIC.
02:24:03.000 There's good news on that one, at least.
02:24:05.000 This is kind of a bullshit strategy that they're using.
02:24:09.000 So, there's...
02:24:10.000 Okay, so when you make these insane, like, five nanometers...
02:24:13.000 Let's read that for people just listening.
02:24:15.000 China quietly cracks five nanometer.
02:24:18.000 Yeah.
02:24:19.000 Without EUV, what is EUV?
02:24:21.000 Extreme ultraviolet.
02:24:23.000 How SMIC defied the chip sanctions with sheer engineering.
02:24:28.000 Yeah, so this is like...
02:24:30.000 And espionage.
02:24:33.000 But actually, though, so there's a good reason that a lot of these articles are making it seem like this is a huge breakthrough.
02:24:42.000 It actually isn't as big as it seems.
02:24:45.000 So, okay, if you want to make really, really, really, really exquisite chips...
02:24:49.000 Look at this quote.
02:24:50.000 Moore's Law didn't die, Huo wrote.
02:24:53.000 It moved to Shanghai.
02:24:54.000 Instead of giving up, China's grinding its way forward layer by layer, pixel by pixel.
02:24:59.000 The future of chips may no longer be written by who holds the best tools, but by who refuses to stop building.
02:25:06.000 The rules are changing and DUV just lit the fuse.
02:25:09.000 Boy.
02:25:09.000 Who wrote that article?
02:25:12.000 Gizmo China.
02:25:13.000 There it is.
02:25:14.000 Yeah.
02:25:14.000 You can view that as like Chinese propaganda in a way, actually.
02:25:17.000 So what's actually going on here is, so the Chinese only have these deep ultraviolet lithography machines.
02:25:26.000 That's like a lot of syllables.
02:25:27.000 But it's just a glorified chip.
02:25:29.000 Like, it's a giant laser.
02:25:31.000 That zaps your chips to, like, make the chips when you're fapping them.
02:25:36.000 Yeah, so we're talking about, like, you do these atomic layer patterns on the chips and shit, and, like, what this UV thing does is it, like, fires, like, a really high-powered laser beam.
02:25:45.000 Laser beam, yeah.
02:25:46.000 They attach the head of sharks that just shoot at the chips.
02:25:49.000 Sorry, that was, like, an Austin Powers.
02:25:51.000 Anyway, they'll, like, shoot it at the chips, and that causes, depending on how the thing is designed, They'll, like, have a liquid layer of the stuff that's gonna go on the chip.
02:26:02.000 The UV is really, really tight and causes it, exactly, causes it to harden.
02:26:07.000 And then they wash off the liquid, and they do it all over again.
02:26:10.000 Like, basically, this is just imprinting a pattern on a chip.
02:26:12.000 Yeah, basically a fancy, tiny printer.
02:26:14.000 Yeah, so that's it.
02:26:15.000 And so the exquisite machines that we get to use, or that they get to use in Taiwan, are called extreme ultraviolet lithography.
02:26:22.000 These are those crazy lasers.
02:26:25.000 The ones that China can use, because we've prevented them from getting any of those extreme ultraviolet lithography machines, the ones China uses are previous generation machines called Deep Ultraviolet, and they can't actually make chips as high a resolution as ours.
02:26:39.000 So what they do is, and what this article is about is, they basically take the same chip, they zap it once with DUV.
02:26:45.000 And then they gotta pass it through again, zap it again, to get closer to the level of resolution we get in one pass with our exquisite machine.
02:26:54.000 Now, the problem with that is you've got to pass the same chip through multiple times, which slows down your whole process.
02:26:59.000 It means your yields at the end of the day are lower.
02:27:02.000 It adds errors.
02:27:03.000 Yeah, which makes it more costly.
02:27:04.000 We've known that this is a thing that's called multi-patterning.
02:27:07.000 It's been a thing for a long time.
02:27:08.000 There's nothing new under the sun here.
02:27:10.000 China has been doing this for a while.
02:27:13.000 So it's not actually a huge shock that this is happening.
02:27:16.000 The question is always, when you look at an announcement like this, yields, yields, yields.
02:27:21.000 How, like, what percentage of the chips coming out are actually usable and how fast are they coming out?
02:27:27.000 That determines, like, is it actually competitive?
02:27:29.000 And that article, too, like, this ties into the propaganda stuff we were talking about, right?
02:27:33.000 If you read an article like that, you could be forgiven for going, like, oh, man, our expert controls, like, just aren't working, so we might as well just give them up.
02:27:41.000 When in reality, because you look at the source, and this is how you know that also this is one of their propaganda things.
02:27:50.000 You look at Chinese news sources, what are they saying?
02:27:53.000 What are the beats that are, like, common?
02:27:55.000 And you know, just because of the way their media is set up, totally different from us, and we're not used to analyzing things this way, but when you read something in, like, the South China Morning Post, or, like, the Global Times, or Xinhua, or in a few different places like this, and it's the same beats coming back, you know that someone was handed a brief,
02:28:13.000 and it's like, you gotta hit this point, this point, this point, and, yep, they're gonna find a way to work that into the news cycle over there.
02:28:20.000 Jeez.
02:28:21.000 And it's also, like, slightly true.
02:28:23.000 Like, yeah, they did manage to make chips at, like, five nanometers.
02:28:27.000 Cool.
02:28:27.000 It's not a lie.
02:28:28.000 It's the same, like, propaganda technique, right?
02:28:31.000 Most of the time, you're not going to confabulate something out of nothing.
02:28:34.000 Rather, like, you start with the truth, and then you push it just a little bit.
02:28:38.000 Just a little bit.
02:28:40.000 And you keep pushing, pushing, pushing.
02:28:42.000 Wow.
02:28:43.000 How much is this administration aware of all the things that you're talking about?
02:28:48.000 So they're actually...
02:28:51.000 Right now, they're in the middle of staffing up some of the key positions because it's a new administration still, and this is such a technical domain.
02:28:59.000 They've got people there who are at the working level who are really sharp.
02:29:04.000 They have some people now, yeah, in places like especially in some of the export control offices now who are some of the best in the business.
02:29:12.000 Yeah. And that's that's really important.
02:29:15.000 Like this is a it's a weird space because so when you want to actually recruit for for.
02:29:20.000 You know, government roles in this space, it's really fucking hard.
02:29:23.000 Because you're competing against, like, an open AI, like, very, like, low-range salaries, like half a million dollars a year.
02:29:31.000 The government pay scale, needless to say, is, like, not...
02:29:35.000 I mean, Elon worked for free.
02:29:36.000 He can afford to, but still taking a lot of time out of his day.
02:29:41.000 There's a lot of people like that who are, like, you know, they...
02:29:44.000 They can't justify the cost.
02:29:46.000 They literally can't afford to go work for the government.
02:29:50.000 Why would they?
02:29:51.000 Exactly.
02:29:52.000 Whereas China's like, "You don't have a choice, bitch!"
02:29:55.000 Yeah, and that's what they say.
02:29:56.000 The Chinese word for bitch is really biting.
02:29:59.000 If you translated that, it would be a real stain.
02:30:02.000 I'm sure.
02:30:03.000 It's kind of crazy because it seems almost impossible to compete with that.
02:30:06.000 I mean, that's like the perfect setup.
02:30:08.000 If you wanted to control everything and you wanted to optimize everything for the state, that's the way you would do it.
02:30:14.000 Yeah, but it's also easier to make errors and be wrong-footed in that way.
02:30:18.000 And also, basically, that system only works if the dictator at the top is just like very competent.
02:30:25.000 Because the risk always with a dictatorship is like, oh.
02:30:30.000 The dictator turns over, and now it's like just a total dumbass.
02:30:33.000 And now the whole thing falls apart.
02:30:35.000 And he surrounds himself.
02:30:36.000 I mean, look, we just talked about information echo chambers online and stuff.
02:30:40.000 The ultimate information echo chamber is the one around Xi Jinping right now.
02:30:43.000 Because no one wants to give him bad news.
02:30:45.000 I'm not gonna.
02:30:48.000 And this is what you keep seeing, right?
02:30:52.000 With these provincial-level debt in China, which is so awful.
02:30:58.000 It's like people trying to hide money under imaginary mattresses.
02:31:03.000 And then hiding those mattresses under bigger mattresses until eventually, like, no one knows where the liability is.
02:31:09.000 And then you get a massive property bubble and any number of other bubbles that are due to pop any time, right?
02:31:14.000 And the longer it goes on, like, the more, like, stuff gets squirreled away.
02:31:19.000 Like, there's actually, like, a story from the Soviet Union that always, like, gets me, which is, so Stalin obviously, like, purged and killed, like, millions of people in the 1930s, right?
02:31:30.000 By the 1980s, the ruling Politburo of the Soviet Union, obviously, like, things have been different.
02:31:37.000 Generations had turned over and all this stuff.
02:31:39.000 But those people, the most powerful people in the USSR, could not figure out what had happened to their own families during the purchase.
02:31:50.000 Like, the information was just nowhere to be found because the machine of the state was just like...
02:31:58.000 So aligned around like we just like we just gotta kill as many fucking people as we can and like turn it over and then hide the evidence of it and then kill the people who killed the people and then kill those people who killed those people.
02:32:09.000 It also wasn't just kill the people, right?
02:32:11.000 It was like a lot of like kind of gulag archipelago style.
02:32:14.000 It's about labor, right?
02:32:16.000 Because the fundamentals of the economy are so shit that you basically have to find a way to justify putting people in labor camps.
02:32:22.000 That's right.
02:32:23.000 But it was very much like you grind mostly or largely you grind them to death and basically they've gone away and you burn the records of it happening.
02:32:31.000 So literally the most powerful people.
02:32:32.000 Whole towns, right, that disappeared.
02:32:33.000 Like people who are like, there's no record or there's like, or usually the way you know about it is there's like one dude.
02:32:38.000 And it's like this one dude has a very precarious escape story.
02:32:41.000 And it's like if literally this dude didn't get away, you wouldn't know about the entire town that was like wiped out.
02:32:46.000 Yeah, it's crazy.
02:32:47.000 Jesus Christ.
02:32:48.000 Yeah. The stuff that like.
02:32:53.000 It just hasn't been done right.
02:32:54.000 I feel like we could do it right.
02:32:56.000 And we have a 10-page plan.
02:32:58.000 We came real close.
02:33:00.000 We came real close.
02:33:01.000 So close.
02:33:02.000 Yeah, and that's what the blue no matter who people don't really totally understand.
02:33:06.000 We're not even talking about political parties.
02:33:08.000 We're talking about power structures.
02:33:10.000 And we came close to a terrifying power structure.
02:33:13.000 And it was willing to just do whatever it could to keep it rolling.
02:33:17.000 And it was rolling for four years.
02:33:19.000 It was rolling for four years without anyone at the helm.
02:33:22.000 Show me the incentives, right?
02:33:23.000 I mean, that's always the question.
02:33:25.000 Yeah.
02:33:26.000 One of the things is, too, when you have such a big structure that's overseeing such complexity, right?
02:33:31.000 Obviously, a lot of stuff can hide in that structure, and it's not unrelated to the whole AI picture.
02:33:39.000 There's only so much compute that you have at the top of that system that you can spend, right?
02:33:44.000 As the president, as a cabinet member, like, whatever.
02:33:48.000 You can't look over everyone's shoulder and do their homework.
02:33:52.000 You can't do founder mode all the way down and all the branches and all the, like, action officers and all that shit.
02:33:58.000 That's not going to happen, which means you're spending five seconds thinking about how to unfuck some part of the government, but then the, like, you know...
02:34:06.000 Corrupt people who run their own fiefdoms there spend every day trying to figure out how to survive.
02:34:10.000 It's like their whole life to justify themselves.
02:34:13.000 Yeah, yeah.
02:34:13.000 Well, that's the USAID dilemma.
02:34:15.000 Yeah.
02:34:16.000 Yeah.
02:34:16.000 Because they're uncovering this just insane amount of NGOs.
02:34:19.000 Like, where's this going?
02:34:21.000 We talked about this the other day, but India has an NGO for every 600 people.
02:34:27.000 Wait, what?
02:34:27.000 Yes.
02:34:28.000 We need more NGOs.
02:34:29.000 There's 3.3 million NGOs.
02:34:32.000 What?
02:34:33.000 In India.
02:34:34.000 Do they bucket?
02:34:36.000 What are the categories that they fall into?
02:34:38.000 Who fucking knows?
02:34:39.000 That's part of the problem.
02:34:40.000 One of the things that Elon had found is that there's money that just goes out with no receipts.
02:34:45.000 It's billions of dollars.
02:34:47.000 We need to take that further.
02:34:48.000 We need an NGO for every person in India.
02:34:50.000 We will get that eventually.
02:34:52.000 It's the exponential trend.
02:34:54.000 It's just like AI.
02:34:55.000 The number of NGOs is doubling every year.
02:34:58.000 We're making progress.
02:34:59.000 We're making incredible progress in bullshit.
02:35:02.000 The geo-scaling law, the bullshit scaling law.
02:35:05.000 Well, it's just that unfortunately it's Republicans doing it, right?
02:35:07.000 So it's unfortunately the Democrats are going to oppose it even if it's showing that there's like insane waste of your tax dollars.
02:35:14.000 I thought some of the doge stuff was pretty bipartisan.
02:35:18.000 There's congressional support at least on both sides, no?
02:35:20.000 Well, sort of.
02:35:22.000 I think the real issue is in dismantling a lot of these programs that – You can point to some good some of these programs do.
02:35:31.000 The problem is, like, some of them are so overwhelmed with fraud and waste that it's like, to keep them active in the state they are, like, what do you do?
02:35:40.000 Do you rip the Band-Aid off and start from scratch?
02:35:43.000 Like, what do you do with the Department of Education?
02:35:44.000 Do you say, why are we number 39 when we were number one?
02:35:48.000 Like, what did you guys do with all that money?
02:35:50.000 Did you create problems?
02:35:52.000 There's this idea in software engineering, actually, he's talking to one of our employees about this, which is like, Refactoring, right?
02:35:58.000 So when you're writing, like, a bunch of software, it gets really, really big and hairy and complicated, and there's all kinds of, like, dumbass shit, and there's all kinds of waste that happens in that codebase.
02:36:09.000 There's this thing that you do every, you know, every, like, few months, is you do this thing called refactoring, which is, like, you go, like, okay, we have, you know, 10 different things that are trying to do the same thing.
02:36:21.000 Let's...
02:36:22.000 Get rid of nine of those things and just like rewrite it as the one thing.
02:36:27.000 So there's like a cleanup and refresh cycle that has to happen whenever you're developing a big complex thing that does a lot of stuff.
02:36:34.000 The thing is like the U.S. government at every level has basically never done a refactoring of itself.
02:36:42.000 And so the way that problems get solved is you're like...
02:36:46.000 Well, we need to do this new thing.
02:36:48.000 So we're just gonna, like, stick on another appendage to the beast and get that appendage to do that new thing.
02:36:56.000 And, like, that's been going on for 250 years, so we end up with, like, this beast that has a lot of appendages, many of which do incredibly duplicative and wasteful stuff, that if you were a software engineer, just, like, not politically, just objectively looking at that as a system,
02:37:14.000 you'd go, like, oh.
02:37:16.000 This is a catastrophe.
02:37:19.000 And, like, we have processes that the industry, we understand how, what needs to be done to fix that.
02:37:25.000 You have to refactor it.
02:37:26.000 But they haven't done that, hence the $36 trillion of debt.
02:37:30.000 It's a problem, too, though, in all, like, when you're a big enough organization, you run into this problem, like, Google has this problem, famously.
02:37:36.000 We have friends, like, Jason, so Jason's the guy you spoke to about that.
02:37:42.000 So he's like a startup.
02:37:46.000 So he works in, like, relatively small codebases, and he, like, you know, can hold the whole codebase in his head at a time.
02:37:53.000 But when you move over to, you know, Google, to Facebook, like, all of a sudden, this gargantuan codebase starts to look more like the complexity of the U.S. government, just, like, you know, very roughly in terms of scale, right?
02:38:03.000 So now you're like, okay, well, we want to add functionality.
02:38:08.000 So we want to incentivize our teams to build products that are going to be valuable.
02:38:13.000 And the challenge is, The best way to incentivize that is to give people incentives to build new functionality.
02:38:19.000 Not to refactor.
02:38:21.000 There's no glory.
02:38:21.000 If you work at Google, there's no glory in refactoring.
02:38:23.000 If you work at Meta, there's no glory in refactoring.
02:38:26.000 Like, there's no promotion, right?
02:38:28.000 Exactly.
02:38:29.000 You have to be a product owner.
02:38:31.000 So you have to, like, invent the next Gmail.
02:38:33.000 You've got to invent the next Google Calendar.
02:38:35.000 You've got to do the next, you know, Messenger app.
02:38:37.000 That's how you get promoted.
02:38:39.000 And so you've got, like, this attitude.
02:38:41.000 You go into there and you're just like, let me crank this stuff out and, like, try to ignore all the shit in the code base.
02:38:46.000 No glory in there.
02:38:49.000 A, this Frankenstein monster of a codebase that you just keep stapling more shit onto.
02:38:53.000 And then B, this massive graveyard of apps that never get used.
02:38:58.000 This is like the thing Google is famous for.
02:38:59.000 If you ever see like the Google graveyard of apps, it's like all these things that you're like, oh yeah, I guess I kind of remember Google Me.
02:39:04.000 Somebody made their career off of launching that shit and then peaced out and it died.
02:39:09.000 That's like the incentive structure at Google, unfortunately.
02:39:13.000 And it's also kind of the only way to, I mean, it's probably not, but in the world where humans are doing the oversight, that's your limitation, right?
02:39:21.000 You got some people at the top who have a limited bandwidth and compute that they can dedicate to, like, hunting down the problems.
02:39:27.000 AI agents might actually solve that.
02:39:29.000 You could actually have a sort of autonomous AI agent that is the autonomous CEO or something go into an organization and uproot all the things and do that refactor.
02:39:40.000 You could get way more efficient organizations out of that.
02:39:44.000 Thinking about government corruption and waste and fraud, that's the kind of thing where those sorts of tools could be radically empowering, but you've got to get them to work right and for you.
02:39:56.000 We've given us a lot to think about.
02:39:59.000 Is there anything more?
02:40:00.000 Should we wrap this up?
02:40:01.000 If we've made you sufficiently uncomfortable.
02:40:04.000 I am super uncomfortable.
02:40:06.000 Very uneasy.
02:40:07.000 Was the butt tap too much at the beginning?
02:40:08.000 No, it was fine.
02:40:09.000 No, that was fine?
02:40:09.000 All of it was weird.
02:40:12.000 It's just, you know, I always try to look at some non-cynical way out of this.
02:40:18.000 Well, the thing is, like, there are paths out.
02:40:21.000 We talked about this and the fact that a lot of these problems are just us tripping on our own feet.
02:40:26.000 So if we can just, like...
02:40:28.000 Un-fuck ourselves a little bit.
02:40:30.000 We can unleash a lot of this stuff.
02:40:33.000 And as long as we understand also the bar that security has to hit and how important that is, we actually can Put all this stuff together.
02:40:43.000 We have the capacity.
02:40:44.000 It all exists.
02:40:45.000 It just needs to actually get aligned and around an initiative, and we have to be able to reach out and touch.
02:40:51.000 On the control side, there's also a world where, and this is actually, like, if you talk to the labs, this is what they're actually planning to do, but it's a question of how methodically and carefully they can do this.
02:41:00.000 The plan is to ratchet up capabilities, and then scale, in other words.
02:41:04.000 And then as you do that, you start to use your AI systems, your increasingly clever and powerful AI systems, to do research on technical control.
02:41:14.000 So you basically build the next generation of systems.
02:41:16.000 You try to get that generation of systems to help you just inch forward a little bit more on the capability side.
02:41:21.000 It's a very precarious balance, but it's something that at least isn't insane on the face of it.
02:41:26.000 And fortunately, I mean, is the...
02:41:30.000 The default path, like the labs are talking about that kind of control element as being a key pillar of their strategy.
02:41:36.000 But these conversations are not happening in China.
02:41:38.000 So what do you think they're doing to keep AI from uprooting their system?
02:41:42.000 So that's interesting.
02:41:44.000 Because I would imagine they don't want to lose control.
02:41:46.000 Right.
02:41:46.000 There's a lot of...
02:41:48.000 Ambiguity and uncertainty about what's going on in China.
02:41:50.000 So there's been a lot of like track 1.5, track 2 diplomacy, basically where you have non-government guys from one side talk to government guys from the other side or talk to non-government from the other side and kind of start to align on like, okay, what do we think the issues are?
02:42:03.000 You know, the Chinese are – there are a lot of like freaked out Chinese researchers and have come out publicly and said, hey, like we're really concerned about this whole loss of control thing.
02:42:12.000 There are public statements and all that.
02:42:14.000 You also have to be mindful that any statement the CCP puts out is a statement they want you to see.
02:42:18.000 So when they say like, "Oh yeah, we're really worried about this thing," it's genuinely hard to assess what that even means.
02:42:26.000 But as you start to build these systems, we expect you're going to see some evidence of this shit before.
02:42:33.000 And it's not necessarily, it's not like you're going to build the system necessarily and have it take over the world.
02:42:37.000 Like what we see with agents,
02:42:39.000 Yeah, so I was actually going to add to this really, really good point, and something where, like, open source AI is, like, even, you know, could potentially have an effect here.
02:42:52.000 So a couple of the major labs, like OpenAI Anthropic, I think, came out recently and said, like, look, we...
02:42:59.000 We're on the cusp.
02:43:00.000 Our systems are on the cusp of being able to help a total novice, like someone with no experience, develop and deploy and release a known biological threat.
02:43:11.000 And that's something we're going to have to grapple with over the next few months.
02:43:15.000 And eventually, capabilities like this, not necessarily just biological, but also cyber and other areas, are going to come out in open source.
02:43:24.000 And when they come out in open source...
02:43:26.000 Basically for anybody to download.
02:43:27.000 For anybody to download and use.
02:43:29.000 When they come out in open source, you actually start to see some things happen, like some incidents, like some major hacks that were just done by a random motherfucker who just wants to see the world burn, but that wakes us up to like,
02:43:44.000 oh shit, these things actually are powerful.
02:43:47.000 I think one of the aspects also here is we're still in that...
02:43:53.000 Post-Cold War honeymoon, many of us, right?
02:43:56.000 In that mentality, like, not everyone has, like, wrapped their heads around this stuff.
02:44:00.000 And the, like, what needs to happen is something that makes us go, like, oh, damn, we, like, we weren't even really trying this entire time.
02:44:11.000 Because this is, like, this is the 9-11 effect.
02:44:14.000 This is the Pearl Harbor effect.
02:44:16.000 Once you have a thing that aligns everyone around like, oh shit, this is real and we actually need to do it and we're freaked out, we're actually safer.
02:44:24.000 We're safer when we're all like, okay, something important needs to happen.
02:44:30.000 Right.
02:44:31.000 Instead of letting them just slowly chip away.
02:44:33.000 Exactly.
02:44:34.000 And so we, like...
02:44:35.000 We need to have some sort of shock, and we probably will get some kind of shock over the next few months, the way things are trending.
02:44:40.000 And when that happens, then...
02:44:42.000 Or years, if that makes you feel better.
02:44:46.000 But because you have the potential for this open source, it's probably going to be a survivable shock, right?
02:44:53.000 But still a shock.
02:44:54.000 And so let us actually realign around, like, okay...
02:44:58.000 Let's actually fucking solve some problems for real.
02:45:01.000 And so putting together the groundwork, right, is what we're doing around, like, let's pre-think a lot of this stuff so that, like, if and when the shock comes...
02:45:10.000 We have a break glass plan.
02:45:12.000 We have a plan.
02:45:14.000 And the loss of control stuff is similar.
02:45:16.000 Like, so one interesting thing that happens with AI agents today is they'll, like, they'll get any...
02:45:21.000 So an AI agent will take a complex task that you give it, like, find me...
02:45:26.000 Like best sneakers for me online, some shit like that.
02:45:28.000 And they'll break it down into a series of sub-steps.
02:45:30.000 And then each of those steps, it'll farm out to a version of itself, say, to execute autonomously.
02:45:36.000 The more complex a task is, the more of those little sub-steps there are in it.
02:45:41.000 And so you can have an AI agent that nails like 99% of those steps.
02:45:46.000 But if it screws up just one, the whole thing is a flop, right?
02:45:50.000 And so...
02:45:51.000 If you think about the loss of control scenarios that a lot of people look at are autonomous replication, like the model gets access to the internet, copies itself onto servers and all that stuff.
02:46:03.000 Those are very complex movements.
02:46:05.000 If it screws up at any point along the way, that's a tell, like, oh, shit, something's happening there.
02:46:10.000 And you can start to think about, like, okay, well, what went wrong?
02:46:13.000 We get another do.
02:46:14.000 We get another try, and we can kind of learn from our mistakes.
02:46:17.000 So there is this sort of, like, this picture, you know, one camp goes, oh, well, we're going to kind of make this superintelligence in a vat, and then it explodes out and we lose control over it.
02:46:27.000 That doesn't...
02:46:29.000 Necessarily seem like the default scenario right now.
02:46:31.000 It seems like what we're doing is scaling these systems.
02:46:34.000 We might unhobble them with big capability jumps.
02:46:37.000 But there's a component of this that is a continuous process that lets us kind of get our arms around it in a more staged way.
02:46:44.000 That's another thing that I think is in our favor that we didn't expect before as a field, basically.
02:46:52.000 And I think that's a good thing.
02:46:53.000 That helps you kind of detect these breakout attempts and do things about them.
02:46:57.000 All right.
02:46:57.000 I'm going to bring this home.
02:46:59.000 I'm freaked out.
02:47:00.000 So thank you.
02:47:01.000 Thanks for trying to make me feel better.
02:47:03.000 I don't think you did.
02:47:04.000 But I really appreciate you guys and appreciate your perspective because it's very important and it's very illuminating.
02:47:11.000 It gives you a sense of what's going on.
02:47:13.000 And I think one of the things that you said that's really important is, like, it sucks that we need a 9-11 moment or a Pearl Harbor moment to realize what's happening so we all come together.
02:47:23.000 But hopefully, slowly but surely, through conversations like this, people realize what's actually happening.
02:47:29.000 You need one of those moments, like, every generation.
02:47:32.000 Like, that's how you get contact with the truth.
02:47:34.000 And it's, like, it's painful, but, like, the light's on the other side.
02:47:38.000 Thank you.
02:47:39.000 Thank you very much.