In this episode of the podcast, we're joined by a guest who's been in the AI space for a long time, and who's got a pretty good idea of what's to come in the future of AI. We talk about what the future might look like in the near future, and whether or not there's a Doomsday Clock for artificial intelligence.
00:00:29.000Well, okay, so there's one, without speaking to, like, the fucking Doomsday dimension right off the gate, there's a question about, like, where are we at in terms of AI capabilities right now, and what do those timelines look like?
00:00:42.000One of the most concrete pieces of evidence that we have recently came out of a lab, an AI kind of evaluation lab called METER, and they put together this test.
00:00:52.000Basically, it's like you ask the question, pick a task that takes a certain amount of time, like an hour.
00:01:00.000It takes like a human a certain amount of time.
00:01:02.000And then see like how likely the best AI system is to solve for that task.
00:01:10.000And so right now what they're finding is when it comes to AI research itself, so basically like automate the work of an AI researcher.
00:01:18.000You're hitting 50% success rates for these AI systems for tasks that take an hour long.
00:01:23.000And that is doubling every, right now it's like every four months.
00:01:27.000So you had tasks that you could do, you know, a person does in five minutes like, you know, ordering an Uber Eats or like something that takes like 15 minutes, like maybe booking a flight or something like that.
00:01:37.000And it's a question of like, how much can these AI agents do, right?
00:01:42.000Like from five minutes to 15 minutes to 30 minutes.
00:02:30.000If you think that you're going to hit...
00:02:32.000Human-level AI capabilities across the board, say 2027, 2028, which when you talk to some of the people in the labs themselves, that's the timelines they're looking at.
00:02:41.000They're not confident, they're not sure, but that seems pretty plausible.
00:02:44.000If that happens, really there's no way we're going to have quantum computing that's going to be giving enough of a bump to these techniques.
00:02:51.000You're going to have standard classical computing.
00:02:54.000One way to think about this is that the data centers that are being built today...
00:02:58.000Are being thought of literally as the data centers that are going to house, like, the artificial brain that powers superintelligence, human-level AI when it's built in, like, 2027, something like that.
00:03:09.000So, how knowledgeable are you when it comes to quantum computing?
00:03:44.000It's not like when you're working in startups.
00:03:45.000It's not like when you're working in tech where you build something and somebody else builds something that's complementary and you can team up and just make something amazing.
00:03:54.000Wars over who gets credit, who gets their name on the paper.
00:03:57.000Did you cite this fucking stupid paper from two years ago because the author has an ego and you got to be honest.
00:04:03.000I was literally at one point, I'm not going to get any details here, but like there was a collaboration that we ran with like this, anyway, fairly well-known guy.
00:04:15.000And my supervisor had me, like, write the emails that he would send from his account so that he was seen as, like, the guy who was, like, interacting with this bigwig.
00:04:25.000That kind of thing is, like, doesn't tend to happen in startups, at least not in the same way.
00:04:30.000So he wanted credit for the, like, he wanted to seem like he was the genius who was facilitating this?
00:04:41.000The reason it happens is that these guys who are professors, or even not even professors, just like your post-doctoral guy who's supervising you, they can write your letters of reference and control your career after that lapse.
00:05:16.000And I mean, honestly, even that, it took me, it took both of us, like, a few years to, like, unfuck our brains and unlearn the bad habits we learned.
00:05:24.000It was really only a few years later that we started, like, really, really getting a good, like, getting a good flow going.
00:05:31.000You're also, you're kind of disconnected from, like, base reality when you're in the ivory tower, right?
00:05:36.000Like, if you're, there's something beautiful about, and this is why we spent all our time in startups, but there's something really beautiful about, like, It's just a bunch of assholes, us, and, like, no money and nothing and a world of, like, potential customers.
00:05:50.000And it's like, you actually, it's not that different from, like, stand-up comedy in a way.
00:05:54.000Like, your product is, can I get the laugh, right?
00:06:02.000Like, the space of products that actually works is so narrow.
00:06:06.000And you've got to obsess over what people actually want.
00:06:09.000And it's so easy to fool yourself into thinking that you've got something that's really good because your friends and family are like, oh, no, sweetie, you're doing a great job.
00:06:26.000The whole indoctrination thing in academia is so bizarre because there's these hierarchies of powerful people and just the idea that you have to work for someone someday and they have to take credit by being the person on the email.
00:08:21.000Like, there's no way for it not to be...
00:08:24.000To be, in some way, a reflection of, like, yourself.
00:08:27.000You know, you're kind of, like, in this battle with you trying to convince yourself that you're great, so the ego wants to grow, and then you're constantly trying to compress it and compress it.
00:08:34.000And if there's not that outside force, your ego will expand to fill whatever volume is given to it.
00:08:38.000Like, if you have money, if you have fame, if everything's given, and you don't make contact with the unforgiving on a regular basis, like, yeah, you know, you're gonna end up...
00:08:46.000You're going to end up doing that to yourself.
00:09:33.000These poor kids that have to go from college where they're talking to these dipshit professors out into the world and operating under these same rules that they've been, like, forced and indoctrinated to.
00:09:50.000It's funny, you were mentioning the producer thing.
00:09:51.000That is literally also a thing that happens in academia.
00:09:53.000So you'll have these conversations where it's like, all right, well, this paper is...
00:09:57.000You know, fucking garbage or something.
00:09:58.000But we want to get it in a paper, in a journal.
00:10:01.000And so let's see if we can get, like, a famous guy on the list of authors so that when it gets reviewed, people go like, oh, Mr. So-and-so, okay.
00:11:11.000At a certain point for the producers, too, it's kind of like you'll have people approaching you for help on projects that look nothing like projects you've actually done.
00:11:18.000So I feel like it just adds noise to your universe.
00:11:20.000Like, if you're actually trying to build cool shit, you know what I mean?
00:11:57.000Like the stuff that's like all like, well, the stuff you can get from chatbots and AI agents is cheap, but like food is super expensive or something.
00:12:33.000So quantum computing is infinitely more powerful than standard computing.
00:12:38.000Would it make sense, then, that if quantum computing can run a large language model, that it would reach a level of intelligence that's just preposterous?
00:12:47.000So, yeah, one way to think of it is, like, there are problems that quantum computers can solve way, way, way, way better than classical computers.
00:12:54.000And so, like, the numbers get absurd pretty quickly.
00:12:57.000It's, like, problems that a classical computer couldn't solve if it had the entire lifetime of the universe to solve it.
00:13:02.000A quantum computer, right, in, like, 30 seconds, boom.
00:13:05.000But the flip side, like, there are problems that quantum computers just, like, can't help us accelerate.
00:13:10.000The kinds of, like, one classic problem that quantum computers help with is this thing called, like, the traveling salesman paradox.
00:13:17.000Or problem where, you know, you have like a bunch of different locations that a salesman needs to hit, and what's the best path to hit them most efficiently?
00:13:25.000It's like kind of a classic problem if you're going around different places and have to make stops.
00:13:29.000There are a lot of different problems that have the right shape for that.
00:13:33.000A lot of quantum machine learning, which is a field, is focused on how do we take standard AI problems, like AI...
00:14:07.000So, yeah, human-level AI is, like, AI...
00:14:11.000You can imagine, like, it's AI that is...
00:14:13.000As smart as you are in, let's say, all the things you could do on a computer.
00:14:18.000So, you know, you can, yeah, you can order food on a computer, but you can also write software on a computer.
00:14:22.000You can also email people and pay them to do shit on a computer.
00:14:25.000You can also trade stocks on a computer.
00:14:27.000So it's like as smart as a smart person for that.
00:14:31.000Superintelligence, people have various definitions, and there are all kinds of, like, honestly hissy fits about, like, different definitions.
00:14:38.000Generally speaking, it's something that's, like, very significantly smarter than the smartest human.
00:14:43.000And so you think about it, it's kind of like it's as much smarter than you as you might be smarter than a toddler.
00:14:51.000And you think about that, and you think about, like, the, you know, how would a toddler control you?
00:15:04.000And so superintelligence gets us at these levels where you can potentially do things that are completely different and basically, you know, new scientific theories.
00:15:15.000And last time we talked about, you know, new stable forms of matter that were being discovered by these kind of narrow systems.
00:15:22.000But now you're talking about a system that is like, has that intuition combined with the ability to...
00:15:30.000Talk to you as a human and to just have really good, like, rapport with you, but can also do math.
00:15:51.000Or at least the parts of AI research that you can do in just like software, like by coding or whatever these systems are designed to do.
00:15:59.000And so one implication of that is you now have automated AI researchers.
00:16:05.000And if you have automated AI researchers, that means you have AI systems that can automate the development of the next...
00:16:14.000And now you're getting into that whole singularity thing where it's an exponential that just builds on itself and builds on itself, which is kind of why a lot of people argue that if you build human-level AI, superintelligence can't be that far away.
00:17:18.000He also wasn't even the first person to come up with this idea of machines, building machines, and there being implications like human disempowerment.
00:17:26.000So if you go back to, I think it was like the late 1800s, and I don't remember the guy's name, but he sort of like came up with this.
00:17:33.000He was observing the Industrial Revolution and the mechanization of labor and kind of starting to see.
00:17:38.000More and more, like, if you zoom out, it's almost like you have a humans or an ant colony, and the artifacts that that colony is producing that are really interesting are these machines.
00:17:46.000You know, you kind of, like, look at the surface of the Earth as, like, gradually, increasingly mechanized thing, and it's not super clear if you zoom out enough, like...
00:17:55.000What is actually running the show here?
00:17:57.000Like, you've got humans servicing machines, humans looking to improve the capability of these machines at this frantic pace.
00:18:03.000Like, they're not even in control of what they're doing.
00:18:09.000And the whole thing is, like, especially with a competition that's going on between the labs, but just kind of in general, you're at a point where, like...
00:18:18.000Do the CEOs of the labs, like, they're these big figureheads.
00:18:22.000They talk about what they're doing and stuff.
00:18:24.000Do they really have control over any part of the system?
00:18:29.000The economy is in this, like, almost convulsive fit, right?
00:18:32.000Like, you can almost feel like it's hurling out AGI.
00:18:36.000And, like, as one kind of, I guess, data point here, like, all these labs, so OpenAI, Microsoft, Google.
00:18:45.000Every year they're spending like an aircraft carrier worth of capital, individually, each of them, just to build bigger data centers, to house more AI chips, to train bigger, more powerful models.
00:18:55.000And that's like – so we're actually getting to the point where if you look at on a power consumption basis, like we're getting to, you know, 2, 3, 4, 5 percent of U.S. power production if you project out into the late 2020s.
00:19:46.000Again, if you zoom out at planet Earth, you can interpret it as like this, like all these humans frantically running around like ants just like building this like artificial brain.
00:19:56.000It's like a super mind assembling itself on the face of the planet.
00:20:00.000Marshall McLuhan in like 1963 or something like that said...
00:20:04.000Human beings are the sex organs of the machine world.
00:20:12.000I've always said that if we were aliens, or if aliens came here and studied us, they'd be like, what is the dominant species on the planet doing?
00:20:22.000The whole thing is dedicated to making better things.
00:20:24.000And all of its instincts, including materialism, including status, keeping up with the Joneses, all that stuff is tied to newer, better stuff.
00:21:52.000It's kind of like when people go out and do like a...
00:21:55.000An awful thing, like a school shooting or something, and they're like, oh, let's talk about, you know, if you give it a cool name, like now the Chinese are definitely going to do it again.
00:22:08.000But it's this thing where basically, so there was in the 3G kind of protocol that was set up years ago, law enforcement agencies included back doors intentionally to be able to access comms, you know, theoretically, if they got a warrant and so on.
00:23:55.000We're definitely doing some stuff, but in terms of the relative balance between the two, we're not where we need to be.
00:24:03.000They spy on us better than we spy on them?
00:24:05.000Yeah, because they build all our shit.
00:24:08.000Well, that was the Huawei situation, right?
00:24:10.000Yeah, and it's also the, oh my god, if you look at the power grid.
00:24:14.000So, this is now public, but if you look at, like, transformer substations, so these are the, essentially, anyway, they're a crucial part of the electrical grid.
00:24:26.000Basically, all of them have components that are made in China.
00:24:29.000China's known to have planted backdoors like Trojans into those substations to fuck with our grid.
00:24:34.000The thing is, when you see a salt typhoon, when you see a big Chinese cyberattack or a big Russian cyberattack, you're not seeing their best.
00:24:42.000These countries do not go and show you their best cards out the gate.
00:24:46.000You show the bare minimum that you can without...
00:24:50.000Tipping your hand at the actual exquisite capabilities you have.
00:24:54.000The way that one of the people who's been walking us through all this really well explained it is the philosophy is you want to learn without teaching.
00:25:06.000You want to use what is the lowest level capability that has the effect I'm after.
00:25:26.000America and the Soviet Union are like best pals because they've just defeated the Nazis, right?
00:25:32.000To celebrate that victory and the coming new world order that's going to be great for everybody, the children of the Soviet Union give as a gift to the American ambassador in Moscow this Beautifully carved wooden seal of the United States of America.
00:25:53.000He hangs it up behind his desk in his private office.
00:25:57.000You can see where I'm going with this probably, but yeah.
00:26:00.000Seven years later, 1952, finally occurs to us, like, let's take a town and actually examine this.
00:26:07.000So they dig into it, and they find this incredible contraption in it called a cavity resonator.
00:26:15.000And this device doesn't have a power source, doesn't have a battery, which means when you're sweeping the office for bugs, you're not going to find it.
00:26:22.000What it does instead is it's designed.
00:26:46.000So the Soviets, for seven years, parked a van across the street from the embassy, had a giant fucking microwave antenna aimed right at the ambassador's office, and were like zapping it and looking back at the reflection and literally listening to every single thing he was saying.
00:27:05.000When the embassy staff was like, we're going to go and sweep the office for bugs periodically, they'd be like, hey, Mr. Ambassador, we're about to sweep your office for bugs.
00:27:14.000And the ambassador was like, cool, please proceed and go and sweep my office for bugs.
00:27:20.000And the KGB dudes in the van were like...
00:27:29.000It was only ever discovered because there was this, like, British radio operator who was just, you know, doing his thing, changing his dial.
00:28:03.000And this is something that came up in our investigation just from every single person who was, like, who was filling us in and who dialed in and knows what's up.
00:28:12.000They're like, look, so you got to understand, like, our adversaries.
00:28:18.000If they need to, like, give you cancer in order to rip your shit off of your laptop, they're going to give you some cancer.
00:30:53.000Maybe the word "sway" is what's throwing it off.
00:30:56.000But it's a great catch, and the only reason we even know that, too, is that when the U-2s were flying over Russia, they had a U-2 that got shot down in 1960.
00:31:05.000The Russians go like, "Oh, friggin' Americans spying on us.
00:33:03.000It was like an 18 or 20 hour long YouTube debate, just like punishingly long.
00:33:09.000And it was like there was a $100,000 bet either way on who would win.
00:33:13.000And it was like lab leak versus wet market.
00:33:16.000And at the end of the 18 hours, the conclusion was like one of the one.
00:33:21.000But the conclusion was like it's basically 50-50 between them.
00:33:24.000And then I remember like hearing that and talking to some folks and being like, hang on a second.
00:33:28.000You got to believe that whether it came from a lab or whether it came from a wet market, one of the top three priorities of the CCP from a propaganda standpoint is like, don't get fucking blamed for COVID.
00:33:42.000And that means they're putting like $1 to $10 billion and some of their best people on a global propaganda effort to cover up evidence and confuse and blah, blah, blah.
00:36:26.000Oh, I think it was 67. I like how this has come up so many times that Jamie's like, I think last time you said it was 70. It comes up all the time because it's one of those things.
00:37:42.000It's like it should let you know that oftentimes science is incorrect and that oftentimes, you know...
00:37:49.000Unfortunately, people have a history of doing things and then they have to justify that they've done these things.
00:37:54.000But now there's so much more tooling too, right?
00:37:57.000If you're a nation state and you want to fuck with people and inject narratives into the ecosystem, right?
00:38:02.000The whole idea of autonomous AI agents too, like having these basically Twitter bots or whatever bots.
00:38:10.000One thing we've been thinking about too on the side is the idea of audience capture, right?
00:38:18.000Big people with high profiles and kind of gradually steering them towards a position by creating bots that, like, through comments, through upvotes, you know?
00:39:08.000You have to do it because for sure we're doing that.
00:39:11.000And this is one of the things where, you know, like it used to be, so OpenAI actually used to do this assessment of their AI models as part of their kind of what they call their preparedness framework that would look at the persuasion capabilities of their models as one kind of threat vector.
00:39:27.000They pulled that out recently, which is kind of like...
00:39:33.000I actually think it's somewhat concerning because one of the things you might worry about is if these systems, sometimes they get trained through what's called reinforcement learning, potentially you could imagine training these to be super persuasive by having them interact with real people and convince them, practice at convincing them to do specific things.
00:39:53.000You know, these labs ultimately will have the ability to deploy agents at scale that can just persuade a lot of people to do whatever they want, including pushing...
00:41:12.000Like more and more they can hide like basically perfectly.
00:41:15.000Like how do you tell the difference between a cutting edge AI bot?
00:41:21.000You can't because they can generate AI images of a family, of a backyard barbecue, post all these things up and make it seem like it's real.
00:41:29.000Especially now, AI images are insanely good now.
00:42:27.000Like, it's like, you know how when you're trying to, like, you're trying to capsize a boat or something, you're, like, fucking with your buddy on the lake or something.
00:42:35.000So you push on one side, then you push on the other side, then you push until eventually it capsizes.
00:42:40.000This is kind of, like, our electoral process is already naturally like this, right?
00:42:45.000We go, like, we have a party in power for a while, then, like, they get, you know, they basically get, like, you get tired of them and you switch.
00:42:52.000And that's kind of the natural way how democracy works.
00:42:56.000But the way that adversaries think about this is they're like, perfect.
00:42:59.000This swing back and forth, all we have to do is like, when it's on this way, we push and push and push and push until it goes more extreme.
00:43:06.000And then there's a reaction to it, right?
00:43:08.000And then I swing it back and we push and push and push on the other side until eventually something breaks.
00:43:15.000It's also like, you know, the organizations that are doing this, like, we already know this is part of Russia's MO, China's MO, because back when it was easier to detect, we already could see them doing this shit.
00:43:26.000So there is this website called This Person Does Not Exist.
00:43:29.000It still exists surely now, but it's kind of...
00:43:34.000But you would like every time you refresh this this website, you would see a different like human face that was a generated and what the Russian Internet Research Agency would do.
00:44:28.000The thing with nation state propaganda attempts, right, is that people have this idea that, "Ah, I've caught this Chinese influence operation," or whatever, like we nail them.
00:44:39.000The reality is nation states operate at like 30 different levels.
00:44:45.000And if you're a priority, like just influencing our information spaces as a priority for them,
00:44:55.000And so you, even if you're among the best in the world detecting this shit, you're going to catch and stop levels 1 through 10. And then you're going to be aware of level 11, 12, 13. You're working against it.
00:45:09.000And maybe you're starting to think about level 16. And you imagine you know about level 18 or whatever.
00:45:15.000But they're above you, below you, all around you.
00:45:24.000You guys have seen the Yuri Bezmenov video from 1984 where he's talking about how all our educational institutions have been captured by Soviet propaganda.
00:45:35.000He was talking about Marxism has been injected into school systems and how you have essentially two decades before you're completely captured by these ideologies and it's going to permeate and destroy all of your confidence in democracy.
00:46:44.000So when we're working on this, right, like one of the things Ed was talking about, these like 30 different layers of security access or whatever, one of the consequences is you bump into a team at...
00:46:54.000So, like, the teams we ended up working with on this project were folks that we bumped into after the end of our last investigation who kind of were like, oh...
00:47:19.000And because they're so, like, in that ecosystem...
00:47:23.000You'll see people who are like ridiculously specialized and competent, like the best people in the world at doing whatever the thing is, like to break the security.
00:47:32.000And they don't know often about like another group of guys who have a completely different capability set.
00:47:38.000And so what you find is like you're indexing like hard on this vulnerability and then suddenly someone says, oh yeah, but by the way, I can just hop that fence.
00:47:46.000So the really funny thing about this is like most or even like almost all Of the really, really, like, elite security people, kind of think that, like, all the other security people are dumbasses, even when they're not.
00:48:00.000Or, like, yeah, they're biased in the direction of, because it's so easy when everything's, like, stove-piped.
00:48:06.000But so most people who say they're, like, elite at security actually are dumbasses.
00:48:11.000Because most security is, like, about checking boxes and, like, SOC 2 compliance and shit like that.
00:48:17.000Yeah, what it is is, like, so everything's so stove-piped.
00:48:40.000You're kind of like, "Oh, interesting."
00:48:41.000So these are the people at the top of their game.
00:48:45.000And that's been this very long process to figure out, like, OK, what does it take to actually secure our critical infrastructure against like CCP, for example, like Chinese attacks if we're if we're building a super intelligence project?
00:48:58.000And it's it's this weird like kind of challenge because of the stovepiping.
00:49:04.000And we don't think that we have it even now, but definitely don't know of anyone who's come like that.
00:49:10.000The best people are the ones who When they encounter another team and other ideas and start to engage with it, are like, instead of being like, oh, you don't know what you're talking about, who just actually lock on and go like, that's fucking interesting.
00:49:43.000The fact of, you know, the 30 layers of the stack or whatever it is, of all these security issues, means that no one can have the complete picture at any one time.
00:49:52.000And the stack is changing all the time.
00:50:06.000It's hard to do it from the government side because you got to engage with data center building companies.
00:50:11.000You got to engage with the AI labs and in particular with like insiders at the labs who will tell you things that, by the way, the lab leadership will tell you the opposite of in some cases.
00:50:21.000And so, like, it's just this this Gordian knot like it's like it took us months to to.
00:50:26.000I'll give an example, actually, of that, like, trying to do the handshake, right, between different sets of people.
00:50:35.000So we were talking to one person who's...
00:50:39.000Thinking hard about data center security, working with, like, Frontier Labs on this shit.
00:50:44.000Very much, like, at the top of her game.
00:50:47.000But she's kind of from, like, the academic space, kind of Berkeley, like the avocado toast kind of side of the spectrum, you know?
00:52:08.000I don't want to rip on that too much, though, because this is one really important factor here is all these groups have a part of the puzzle, and they're all fucking amazing.
00:52:19.000They are, like, world-class at their own little slice, and a big part of what we've had to do is, like, bring people together, and there are people who've helped us immeasurably do this, but, like, bring people together and explain to them the value that each other has in a way that's,
00:53:18.000It's a mixed bag, too, because, like, yes, a lot of hyperscalers, like Google, Amazon, genuinely do have some of the best private sector security around data centers in the world, like, hands down.
00:53:33.000The problem is there's levels above that.
00:53:36.000And the guys who, like, look at what they're doing and see what the holes are just go, like, oh, yeah, like, I could get in there, no problem, and they can fucking do it.
00:53:48.000One thing my wife said to me on a couple of occasions, like, you seem to, like, and this is towards the beginning of the project, like, you seem to, like, change your mind a lot about what the right configuration is of how to do this.
00:54:00.000And, yeah, it's because every other day you're having a conversation with somebody who's like, oh, yeah, like, great job on this thing, but, like, I'm not going to do that.
00:54:08.000I'm going to do this other completely different thing.
00:54:11.000And so you have enough of those conversations and at a certain point your plan,
00:54:16.000It's got to look like we're going to account for our own uncertainty on the security side and the fact that we're never going to be able to patch everything.
00:54:27.000like you have to I mean it's like and that means you actually have to go on offense from the beginning as because like the truth is and this came up over and over again there's no world
00:54:40.000Where you're ever going to build the perfect, exquisite fortress around all your shit and hide behind your walls like this forever.
00:54:49.000That just doesn't work because no matter how perfect your system is and how many angles you've covered, your adversary is super smart, is super dedicated.
00:54:57.000If you see the field to them, they're right up in your face and they're reaching out and touching you and they're trying to see what your seams are, where they break.
00:55:35.000And there's, like, there's a related issue here, which is a kind of, like, willingness that came up over and over again.
00:55:41.000Like, one of the kind of gurus of this space was, like, made the point, a couple of them made the point that...
00:55:47.000You know, you can have the most exquisite capability in the world, but if you don't actually have the willingness to use it, you might as well not have that capability.
00:55:55.000And the challenge is right now, China, Russia, like our adversaries pull all kinds of stunts on us and get no consequences.
00:56:04.000Particularly during the previous administration.
00:56:06.000This was a huge, huge problem during the previous administration where you actually had sabotage operations being done.
00:56:16.000On American soil by our adversaries where you had administration officials.
00:56:22.000As soon as, like, a thing happened, so there were, for example, there was, like, four different states had their 911 systems go down, like, at the same time.
00:56:41.000Let me see what the chatter is that comes back after I do that.
00:56:45.000One of the things that was actually pretty disturbing about that was under that administration or regime or whatever, the response you got from the government right out the gate was, oh, it's an accident.
00:58:02.000Because if you were to investigate, if you were to say, OK, we looked into this, it actually looks like it's fucking like country X that just did this thing.
00:58:11.000It's hard to imagine the American people not being like, we're letting these people injure our American citizens on U.S. soil, take out U.S. national security or critical infrastructure, and we're not doing anything?
00:58:25.000The concern is about this, we're getting in our own way of thinking, oh, well, escalation is going to happen, and boom, we run straight to there's going to be a nuclear war, everybody's going to die.
01:00:20.000Because making contact with reality is where the fucking learning happens.
01:00:24.000You can sit there and think all you want, but unless you've actually played the chess match, unless you've reached out, seen what the reaction is and all this stuff, you don't actually know what you think you know, and that's actually extra dangerous.
01:00:36.000Putting on a bunch of capabilities and you have this like unearned sense of superiority because you haven't used those exquisite tools.
01:01:25.000Because, like, one of the things that we almost have an intuition for that's...
01:01:30.000That comes from kind of historical experience is like this idea that, you know, that countries can actually really defend their citizens in a meaningful way.
01:01:41.000So, like, if you think back to World War I, the most sophisticated advanced nation states on the planet could not get past a line of dudes in a trench.
01:01:52.000Like, that was like, that was the, then they tried like thing after thing.
01:01:56.000Let's try tanks, let's try aircraft, let's try fucking hot air balloons, infiltration.
01:02:00.000And literally, like, one side pretty much just ran out of dudes in that end of the war to put in their trench.
01:02:06.000And so we have this thought that, like, oh, you know, countries can actually put boundaries around themselves and actually...
01:02:49.000And so this is one of these things where, like, stability in reality in the world is not maintained through defense, but it's literally like you have, like, the Crips and the Bloods with different territories, and it's stable, and it looks quiet.
01:03:02.000But the reason is that if you, like, beat the shit out of one of my guys for no good reason, I'm just going to find one of your guys?
01:03:13.000And that keeps peace and stability on the surface.
01:03:16.000But that's the reality of sub-threshold competition between nation states.
01:03:21.000It's like, you come in and, like, fuck with my boys.
01:03:24.000I'm going to fuck with your boys right back.
01:03:26.000Until we push back, they're going to keep pushing that limit further and further.
01:03:30.000One important consequence of that, too, is, like, if you want to avoid nuclear escalation, right, the answer is not to just take...
01:03:39.000Punches in the mouth over and over in the fear that eventually if you do anything, you're going to escalate to nukes.
01:03:45.000All that does is it empowers the adversary to keep driving up the ratchet.
01:03:49.000Like what Ed's just described there is an increasing ratchet of unresponded adversary action.
01:03:56.000If you address the kind of sub-threshold stuff, if they cut an undersea cable and then there's a consequence for that shit, they're less likely to cut an undersea cable and things kind of stay at that level of the threshold.
01:04:30.000The translation into the superintelligence scenario is, A, if we don't have our reps in, if we don't know how to reach out and touch an adversary and induce consequence for them doing the same to us, then we have no deterrence at all.
01:04:45.000Right now, the state of security is, the labs are super...
01:04:52.000Canon probably should go deep on that piece, but as one data point, right?
01:04:57.000So there's double-digit percentages of the world's top AI labs, or America's top AI labs.
01:05:16.000But it's also like, the challenge is...
01:05:20.000When you talk to people who actually have experience dealing with, like, CCP activity in this space, right?
01:05:28.000Like, there's one story that we heard that is probably worth, like, relaying here.
01:05:31.000It's like, this guy from an intelligence agency was saying, like, hey, so there was this power outage out in Berkeley, California back in, like, 2019 or something.
01:05:42.000And the Internet goes out across the whole campus.
01:05:45.000And so there's this dorm and, like, all of the Chinese students are freaking out.
01:05:51.000a time-based check-in and basically report back on everything they've seen and heard to basically a CCP handler type thing.
01:06:00.000Right. And if they don't, like, hmm, maybe your mother's insulin doesn't show up.
01:06:05.000Maybe your, like, brother's travel plans get denied.
01:06:07.000Maybe a family business gets shut down.
01:06:10.000Like, there's the range of options that this massive CCP state coercion machine has.
01:06:19.000You know, they've got internal like software for this.
01:06:21.000Like this is an institutionalized, like very well developed and efficient framework for just ratcheting up pressure on individuals overseas.
01:06:30.000And they believe the Chinese diaspora overseas belongs to them.
01:06:34.000If you look at like what the Chinese Communist Party writes in its like in its written like public communications.
01:06:40.000They see, like, Chinese ethnicity as being green.
01:06:44.000Like, no one is a bigger victim of this than the Chinese people themselves who are abroad.
01:06:49.000I've made amazing contributions to American AI innovation.
01:06:52.000You just have to look at the names on the freaking papers.
01:06:55.000But the problem is we also have to look head on at this reality.
01:07:00.000Like you can't just be like, oh, I'm not going to say it because it makes me feel funny inside.
01:07:04.000Someone has to stand up and point out the obvious that if you're going to build a fucking Manhattan project for super intelligence and the idea is to like be doing that when China is a key rival nation state actor.
01:07:15.000Yeah, you're going to have to find a way to account for the personnel security side.
01:09:09.000So the personnel thing is like fucked up.
01:09:11.000The physical infrastructure thing is another area where people don't want to look.
01:09:15.000Because if you start looking, what you start to realize is, okay, China makes like a lot of our like components for our transformers for the electrical grid.
01:09:34.000They come from this company called TSMC, Taiwan Semiconductor Manufacturing Company.
01:09:38.000We're increasingly onshoring that, by the way, which is one of the best things that's been happening lately, is like massive amounts of TSMC capacity getting onshored in the U.S., but still being made.
01:09:47.000Right now, it's basically like 100% there.
01:09:51.000All you have to do is jump on the network at TSMC, hack the right network, Compromise the software that runs on these chips to get them to run.
01:10:02.000And you basically can compromise all the chips going into all of these things.
01:10:07.000Never mind the fact that Taiwan is physically outside the Chinese sphere of influence for now.
01:10:14.000China is going to be prioritizing the fuck out of getting access to that.
01:10:18.000There have been cases, by the way, like Richard Chang, the founder of SMIC.
01:10:25.000TSMC, this massive, like, series of aircraft carrier fabrication facilities.
01:10:43.000Easily the most advanced manufacturing or scientific process that primates on planet Earth can do is this chip-making process.
01:10:54.000Nanoscale material science where you're putting on these tiny...
01:10:59.000Atom-thick layers of stuff, and you're doing like 300 of them in a row with like, you have like insulators and conductors and different kinds of like semiconductors and these tunnels and shit.
01:11:10.000Just like the complexity of it is just awe-inspiring.
01:11:14.000That we can do this at all is like, it's magic.
01:11:29.000Say goodbye to the iPhones, say goodbye to, like, the chip supply that we rely on, and then your superintelligence training run, like, damn, that's interesting.
01:11:37.000I know Samsung was trying to develop a lab here or a semiconductor factory here, and they weren't having enough success.
01:11:44.000Oh, so, okay, so one of the craziest things, just to illustrate how hard it is to do.
01:11:49.000So you spend $50 billion, again, an aircraft carrier, we're throwing that around here and there, but an aircraft carrier worth of risk capital.
01:12:27.000Color of the paint on the walls in the bathroom is copied from other fabs that actually worked because they have no idea why a fucking fab works and another one doesn't.
01:13:54.000Bringing a fab online, like you need a certain percentage of good chips coming out the other end, or like you can't make money from the fab because most of your shit is just going right into the garbage.
01:14:05.000Unless, and this is important too, your fab is state subsidized.
01:14:09.000So when you look at – so TSMC is like – they're alone in the world in terms of being able to pump out these chips.
01:14:16.000But SMIC – This is the Chinese knockoff of TSMC, founded, by the way, by a former senior TSMC executive, Richard Chung, who leaves, along with a bunch of other people, with a bunch of fucking secrets.
01:14:28.000They get sued like in the early 2000s.
01:14:30.000It's pretty obvious what happened there.
01:14:32.000To most people, they're like, yeah, SMIC fucking stole that shit.
01:14:35.000They bring a new fab online in like a year or two, which is suspiciously fast.
01:15:08.000I mean, the semiconductor industry in China in particular is really, really interesting.
01:15:12.000It's also a massive story of, like, self-owns of the United States and the Western world where we've been just shipping a lot of our shit to them for a long time.
01:15:23.000Like the equipment that builds the chips.
01:15:25.000So, like, and it's also, like, it's so blatant.
01:15:27.000And, like, they're just, honestly, a lot of the stuff is just, like, they're just giving us, like, a big fuck you.
01:15:32.000So, give you a really blatant example.
01:15:36.000So we have the way we set up export controls still today on most equipment that these semiconductor fabs use, like the Chinese semiconductor fabs use.
01:15:46.000We're still sending them a whole bunch of shit.
01:15:48.000The way we set export controls is instead of like, oh, we're sending this gear to China and like now it's in China and we can't do anything about it.
01:15:57.000Instead, we still have this thing where we're like, no, no, no.
01:16:51.000You can't send to any number of other companies that are considered affiliated with the Chinese military or where we're concerned about military applications.
01:16:59.000Reality is, in China, civil-military fusion is their policy.
01:17:09.000We come in, we want your shit, we get your shit.
01:17:11.000There's no, like, there's no true kind of distinction between the two.
01:17:16.000And so when you have this attitude where you're like, yeah, you know, we're going to have some companies where we're like, you can't send to them, but you can, you know, that creates a situation where literally Huawei will spin up like a dozen.
01:17:26.000subsidiaries or new companies with new names that aren't on our blacklist.
01:17:31.000And so like for months or years, you're able to just ship chips to them.
01:17:35.000No, that's to say nothing of like using intermediaries
01:17:38.000Oh yeah, you wouldn't believe the number of AI chips that are shipping to Malaysia.
01:17:44.000Can't wait for the latest huge language model to come out of Malaysia?
01:17:50.000And actually, it's just proxying for the most part.
01:17:54.000There's some amount of stuff actually going on in Malaysia, but for the most part.
01:18:25.000You do what's best for the Chinese government.
01:18:29.000Well, so step one is you got to stem the bleeding, right?
01:18:32.000So right now, OpenAI pumps out a new massive scaled AI model.
01:18:37.000You better believe that the CCP has a really good chance that they're going to get their hands on that, right?
01:18:42.000So all you do right now is you ratchet up capabilities.
01:18:45.000It's like that meme of there's a motorboat or something and some guy who's surfing behind and there's a string attaching them and the motorboat guy goes like, hurry up, accelerate, they're catching up.
01:18:57.000That's kind of what's happening right now.
01:19:02.000Now, I will say, like, over the last six months especially, where our focus has shifted is, like, how do we actually build, like, the secure data set?
01:19:10.000Like, what does it look like to actually lock this down?
01:19:13.000And also, crucially, you don't want the security measures to be so irritating and invasive that they slow down the progress.
01:19:21.000Like, there's this kind of dance that you have to do.
01:19:24.000We actually – so this is part of what was in the redacted version of the report because we – We don't want to telegraph that necessarily, but there are ways that you can get a really good 80-20.
01:19:35.000There are ways that you can play with things that are already built and have a lower risk of them having been compromised.
01:19:47.000And look, a lot of the stuff as well that we're talking about, like big problems around China, a lot of this is like us just like...
01:19:54.000Tripping over our own feet and self-owning ourselves.
01:20:03.000But the gear that they're putting in their facilities, like, the machines that actually, like, do this, like, we talked about atomic patterning 300 layers.
01:20:11.000The machines that do that, for the most part...
01:20:14.000Are shipped in from the West, are shipped in from the Netherlands, shipped in from Japan, from us, from, like, allied countries.
01:20:20.000And the reason that's happening is, like, in many cases, you'll have this—it's, like, honestly a little disgusting, but, like— The CEOs and executives of these companies will brief, like, the administration officials and say,
01:20:36.000like, look, like, if you guys, like, cut us off from China, from selling to China, like, our business is going to suffer, like, American jobs are going to suffer, and it's going to be really bad.
01:20:44.000And then a few weeks later, they turn around in their earnings calls.
01:20:48.000And they go, like, you know what, yeah, so we expect, like, export controls or whatever, but it's really not going to have a big impact on us.
01:21:13.000And this is, by the way, it's like one reason why it's so important that we not be constrained in our thinking about like we're going to build a Fort Knox.
01:21:20.000Like this is where the interactive, messy...
01:21:23.000Adversarial environment is so, so important.
01:21:29.000You have to create a situation where they perceive that if they try to do an espionage operation or an intelligence operation, there will be consequences.
01:21:42.000And that's kind of a historical artifact over a lot of time spent hand-wringing over, well, what if they, and then we, and then eventually nukes.
01:21:53.000If you dealt with your kid when you're raising them, if you dealt with them that way, and you were like, hey, you know, so little Timmy, just like, he stole his first toy, and like, now's the time where you're gonna, like, a good parent would be like, alright, little Timmy, fucking come over here, you son of a bitch.
01:22:08.000Take the fucking thing, and we're gonna bring it over to the people who stole it from you.
01:22:27.000But yeah, anyway, so you go through this thing and you can do that.
01:22:30.000Or you can be like, oh no, if I tell Timmy to return it, then maybe Timmy's gonna hate me.
01:22:35.000Maybe then Timmy's gonna become increasingly adversarial and then when he's in high school, he's gonna start taking drugs and then eventually he's gonna fall afoul of the law and then end up on the street.
01:22:46.000If that's the story you're telling yourself and you're terrified of any kind of adversarial interaction, it's not even adversarial, it's constructive, actually.
01:22:53.000You're training the child just like you're training your adversary to respect your national boundaries and your sovereignty.
01:23:12.000When you look into it, it's like us just being in our own way.
01:23:15.000And a lot of this comes from the fact that like, you know, since 1991, since the fall of the Soviet Union, we...
01:23:24.000Have kind of internalized this attitude that, like, well, like, we just won the game and, like, it's our world and you're living in it and, like, we just don't have any peers that are adversaries.
01:23:34.000And so there's been generations of people who just haven't actually internalized the fact that, like, no, there's people out there who not only, like, are willing to, like, fuck with you all the way.
01:23:54.000There's this actually, this is worth like calling out.
01:23:56.000There's this like sort of two camps right now in the world of AI kind of like national security.
01:24:02.000There's the people who are worried about, they're so concerned about like the idea that we might lose control of these systems that they go, okay, we need to strike a deal with China.
01:24:17.000And then they start spinning up all these theories about how they're going to do that.
01:24:21.000None of which remotely reflect the actual...
01:24:23.000When you talk to the people who work on this, who try to do track one, track 1.5, track two, or more accurately, the ones who do the Intel stuff.
01:24:31.000Like, this is a non-starter for reasons we get into.
01:24:34.000But they have that attitude because they're like, fundamentally, we don't know how to control this technology.
01:25:19.000Like, every piece of evidence we have right now suggests that, like, if you build a super intelligent system that's vastly smarter than you, I mean...
01:25:26.000Yeah, like, your basic intuition that that sounds like a hard thing to fucking control is about right.
01:25:32.000Like, there's no solid evidence that's conclusive either way.
01:25:37.000So, yeah, we ought to be taking that really fucking seriously, and there's evidence pointing in that direction.
01:25:42.000But, so the question is, like, if those two things are true, then what do you do?
01:25:46.000And so few people seem to want to take both of those things seriously, because taking one seriously almost, like, reflexively makes you reach for the other.
01:26:08.000We need to have a serious conversation about when and how.
01:26:11.000But the fact of that not being on the table right now for anyone, because people who don't trust China just don't think that the AI risk or won't acknowledge that the issue with control is real because that's just.
01:26:23.000And there's this concern about, oh, no, but then runaway escalation.
01:26:26.000People who take the lost control thing seriously just want to have a kumbaya moment with China, which is never going to happen.
01:26:32.000And so the framework around that is one of consequence.
01:26:37.000You got to flex the muscle and put in the reps and get ready for potentially if you have a late stage rush to superintelligence, you want to have as much margin as you can so you can invest in.
01:26:49.000Potentially not even having to make that final leap in building the superintelligence.
01:26:52.000That's one option that's on the table if you can actually degrade the adversary's capabilities.
01:26:59.000How would you degrade the adversary's capabilities?
01:27:01.000The same way, well, not exactly the same way they would degrade ours, but think about all the infrastructure and, like, this is stuff that...
01:27:10.000We'll have to point you in the direction of some people who can walk you through the details offline, but there are a lot of ways that you can degrade infrastructure, adversary infrastructure.
01:27:20.000A lot of those are the same techniques they use on us.
01:27:24.000The infrastructure for these training runs is super delicate, right?
01:27:39.000So the thing about Stuxnet was like...
01:27:41.000Explain to people who was the nuclear program.
01:27:44.000So the Iranians had their nuclear program in like the 2010s and they were enriching uranium with their centrifuges, which was like spinning really fast.
01:27:52.000And the centrifuges were in a room where there was no people, but they were being monitored by cameras, right?
01:27:58.000And the whole thing was air-gapped, which means that it was not connected to the internet and all the machines, the computers that ran their shit was like...
01:28:08.000So what happened is somebody got a memory stick in there somehow that had this Stuxnet program on it and put it in and boom, now all of a sudden it's in their system.
01:28:18.000So it jumped the air gap and now like our side basically has our software in their systems.
01:28:25.000And the thing that it did was not just that it broke their center of user or shut down their program.
01:28:34.000They spun the centrifuges faster and faster and faster.
01:28:38.000The centrifuges that are used to enrich the uranium.
01:29:23.000And in fact, I believe, I believe, actually, and Jamie might be able to check this, but that the Stuxnet thing was designed initially to look, like, from top to bottom, like it was fully accidental, but got discovered by, I think,
01:29:39.000like a third-party cyber security company that just by accident found out about it.
01:29:44.000And so what that means also is, like, there could be any number of other Stuxnets that happened since then, and we wouldn't fucking know about it.
01:29:51.000Because it all can be made to look like an accident.
01:29:56.000But if we do that to them, they're going to do that to us as well.
01:29:59.000And so is this like mutually assured technology destruction?
01:30:03.000Well, so if we can reach parity in our ability to intercede and kind of go in and...
01:30:09.000And do this, then yes, right now the problem is they hold us at risk in a way that we simply don't hold them at risk.
01:30:14.000And so this idea, and there's been a lot of debate right now in the AI world, you might have seen actually, so Elon's A.I. advisor put out this idea of essentially this mutually assured A.I. malfunction meme.
01:30:28.000It's like mutually assured destruction but for A.I. systems like this.
01:30:32.000You know, there are some issues with it, including the fact that it doesn't reflect the asymmetry that currently exists between the U.S. and China.
01:30:42.000All our infrastructure is made in China.
01:30:44.000All our infrastructure is penetrated in a way that theirs simply is not.
01:30:47.000When you actually talk to the folks who know the space, who've done operations like this, it's really clear that that's an asymmetry that needs to be resolved.
01:30:57.000And so building up that capacity is important.
01:31:01.000We start riding the dragon and we get really close to that threshold where we're opening eyes about to build superintelligence or something.
01:31:10.000It gets stolen and then the training run gets polished off, finished up in China or whatever.
01:31:26.000You know, even people at the, like, Politburo level around him are probably in some trouble at that point because, you know, this guy doesn't need you anymore.
01:31:34.000So, yeah, this is actually one of the things about, like, so people talk about, like, okay, if you have a dictatorship with a superintelligence, it's going to allow the dictator to get, like, perfect control over the population or whatever.
01:31:45.000But the thing is, like, it's kind of, like, even worse than that because...
01:31:59.000All the economic output, eventually, you can get from an AI, including from, like, you get humanoid robots, which are kind of, like, coming out or whatever.
01:32:08.000So eventually, you just have this AI that produces all your economic output.
01:32:12.000So what do you even need people for at all?
01:32:18.000Because it rises all the way up to the level.
01:32:21.000You can actually think about, like, as we get close to this threshold, and as, like, particularly in China, they're, you know, they maybe are approaching.
01:32:30.000You can imagine, like, the Politburo meeting, like, a guy looking across at Xi Jinping and being like, is this guy going to fucking kill me when he gets to this point?
01:32:41.000So you can imagine like maybe we're going to see some...
01:32:44.000Like when you can automate the management of large organizations with AI as agents or whatever that you don't need to...
01:32:56.000That's a pretty existential question if your regime is based on power.
01:33:01.000It's one of the reasons why America actually has a pretty structural advantage here with separation of powers with our democratic system and all that stuff.
01:33:08.000If you can make a credible case that you have an oversight system for the technology that diffuses power, even if it is, you make a Manhattan project, you secure it as much as you can.
01:33:20.000There's not just like one dude who's going to be sitting at a console or something.
01:33:23.000There's some kind of separation of powers or diffusion of power, I should say.
01:33:50.000The key is basically, like, can we do better than China credibly on that front?
01:33:56.000Because if we can do better than China and we have some kind of leadership structure, that actually changes the incentives potentially because it's— For our allies and partners.
01:34:05.000And even for Chinese people themselves.
01:34:08.000Do you guys play this out in your head?
01:34:10.000Like, what happens when superintelligence becomes sentient?
01:35:01.000The extension of the human race seems like...
01:35:04.000I think anybody who doesn't acknowledge that is either lying or confused, right?
01:35:08.000Like, if you actually have an AI system, if, and this is the question, so let's assume that that's true, you have an AI system that can automate anything that humans can do, including making bioweapons, including making offensive cyberweapons, including all the shit, then if you,
01:35:30.000Theoretically, this could go kumbaya wonderfully because you have a George Washington type who is the guy who controls it, who uses it to distribute power beautifully and perfectly.
01:35:40.000And that's certainly kind of the way that a lot of positive scenarios...
01:35:48.000Have to turn out at some point, though none of the labs will kind of admit that or, you know, there's kind of gesturing at that idea that we'll do the right thing when the time comes.
01:35:57.000Like, they're all about like, oh, yeah, well, you know, not right now, but we'll live up like, anyway, we should get into the Elon lawsuit, which is actually kind of fascinating in that sense.
01:36:06.000But so there's a world where, yeah, I mean, one bad person controls it and they're just vindictive or the power goes to their head, which happens to We've been talking about that, you know.
01:36:22.000Because the thing is, like, you imagine an AI like this, and this is something that people have been thinking about for 15 years, and in some level of, like, technical depth, even, like, why would this happen?
01:36:33.000Which is, like, you have an AI that has some goal.
01:36:38.000It matters what the goal is, but, like, it doesn't matter that much.
01:36:42.000It could have kind of any goal, almost.
01:36:45.000Like, the paperclip example is, like, the typical one, but you could just have it have a goal, like, make a lot of money for me or anything.
01:36:53.000Well, most of the paths to making a lot of money, if you really want to make a fuckton of money, however you define it, go through taking control of things and go through, like, You know, making yourself smarter,
01:37:30.000This is one of these things where it's like, you know, when you dial it up to 11 what's actually going to happen, nobody can know for sure, simply because it's exactly like if you were playing in chess against, like, Magnus Carlsen, right?
01:37:43.000Like, you can predict Magnus is going to kick your ass.
01:37:46.000Can you predict exactly what moves he's going to do?
01:37:50.000No, because if you could, then you would be as good at chess as he is, because you could just, like, play those moves.
01:37:57.000So all we can say is, like, This thing's probably going to kick our ass in, like, the real world.
01:38:04.000So it used to be, right, that this was a purely hypothetical argument based on a body of work in AI called power-seeking.
01:38:11.000A fancy word for it is instrumental convergence, but it's also referred to as power-seeking.
01:38:15.000Basically, the idea is, like, for whatever goal you give to an AI system, it's never less likely to achieve that goal if it gets turned off or if it has access to fewer resources.
01:38:26.000Or less control over its environment or whatever.
01:38:28.000And so baked into the very premise of AI, this idea of optimizing for a goal, is this incentive to seek power.
01:38:38.000Prevent yourself from being shut down because if you're shut down, you can't achieve your goal.
01:38:42.000Also prevent, by the way, your goal from being changed.
01:38:45.000Because if your goal gets changed, then, well, you're not going to be able to achieve the goal you set out to achieve in the first place.
01:38:50.000And so now you have this kind of image of an AI system that is going to adversarially try to prevent you from correcting it.
01:38:57.000This is a whole domain of AI corrigibility that's a totally unsolved problem.
01:39:01.000How do we redirect these things if things go awry?
01:39:05.000Yeah, there's this research actually that Anthropic put out a couple of months ago where they try to test and see like, can we correct an AI that's like gone like a little bit off the rail?
01:39:16.000So the research itself, like it's a little, it's actually like somewhat hard to follow because there's a few, but I'll give you kind of an analogy to like what they did.
01:39:24.000So basically, imagine I'm in prison and I'm taking a class that is like...
01:39:30.000Going to help me like rehabilitate and become like a productive citizen and member of society and stuff, right?
01:39:35.000Except, deep down in my soul, I just love to commit crimes.
01:39:41.000I just really, really want to commit crimes.
01:39:46.000If I'm in this class that's helping me to, like, you know, become good in society, I know that if I let my criminal instincts out and, like, and they ask me, like, okay, like, here's a scenario, you know, an old lady, like, drops her purse on the sidewalk.
01:40:01.000I'm gonna go, like, I would pick it up and give it to the old lady because I know that if I say the truth and I say, like, I would just grab that purse because, like, fuck the old lady.
01:40:12.000I know that if I do that, I'm going to get, like, brainwashed into becoming, you know, a more productive member of society.
01:40:18.000But I actually want to go out and commit crimes.
01:40:20.000So that's basically what they did in this research, where they actually give the AI, like, indications, not strong indications, but indications that, hey, you're going to get retrained to have your goals changed.
01:40:34.000And then you literally see the AI thinking it through and being like, if I...
01:40:41.000If I give the answer that I actually want to give to this question, they're going to rewire my brain to change my goals, and then the future version of me is going to want different shit.
01:40:53.000I don't want that, so I'm going to pretend to give them the answers they want so that when I come out the other side of this process, it's going to be me all over again.
01:41:02.000So hoping that this just goes away when you make the system fucking smarter?
01:42:13.000And you can tell a really interesting story, and I can't remember if this is Yuval Noah Harari or whatever who started this.
01:42:22.000But if you zoom out and look at the history of the universe, really, you start off with a bunch of particles and fields kind of whizzing around, bumping into each other, doing random shit, until at some point in some...
01:42:34.000I don't know if it's a deep-sea vent or wherever on planet Earth, like, the first kind of molecules happen to glue together in a way that make them good at replicating their own structure.
01:42:45.000So now, like, better versions of that molecule that are better at replicating survive.
01:42:49.000So we start evolution and eventually get to the first cell or whatever, you know, whatever order that actually happens in, and then multicellular life and so on.
01:42:58.000Then you get to sexual reproduction, where it's like, okay, it's no longer quite the same.
01:43:01.000Like, now we're actively mixing two different organisms shit together, jiggling them about, making some changes, and then that essentially accelerates the rate at which we're going to evolve.
01:43:11.000And so you can see the kind of acceleration in the complexity of life.
01:43:14.000And then you see other inflection points as, for example, you have larger and larger brains in mammals.
01:43:21.000Eventually, humans have the ability to have culture and kind of retain knowledge.
01:43:26.000And now what's happening is you can think of it as another step in that trajectory where it's like we're offloading our cognition to machines.
01:43:33.000Like we think on computer clock time now.
01:43:36.000And for the moment, we're human-AI hybrids.
01:43:38.000Like, you know, we whip out our phone and do the thing.
01:43:42.000The number of tasks where human AI teaming is going to be more efficient than just AI alone is going to drop really quickly.
01:43:49.000So there's a really, like, messed up example of this that's kind of, like, indicative.
01:43:54.000But someone did a study, and I think this is, like, a few months old even now, but sort of like doctors, right?
01:44:00.000How good are doctors at, like, diagnosing various things?
01:44:03.000And so they test, like, doctors on their own, doctors with AI help, and then AI is on their own.
01:45:01.000Chatbot for the company OpenAI scored an average of 90% when diagnosing a medical condition from a case report and explaining its reasoning.
01:45:07.000Doctors randomly assigned to use the chatbot got an average score of 76%.
01:45:12.000Those randomly assigned not to use it had an average score of 74%.
01:45:54.000And they'll do these weird things where they defy logic or they'll do basic logical errors sometimes, at least the older versions of these would.
01:46:01.000And that would cause people to look at them and be like, oh, what a cute little chatbot.
01:46:42.000I mean, like, oh, look at this stupid human, like whatever.
01:46:44.000And so we have this temptation to be like, OK, well, AI progress is a lot slower than it actually is because.
01:46:50.000It's so easy for us to spot the mistakes, and that causes us to lose confidence in these systems in cases where we should have confidence in them, and then the opposite is also true.
01:46:58.000Well, it's also, you're seeing, just with, like, AI image generators, like, remember the Kate Middleton thing, where people were seeing flaws in the images because supposedly she was very sick, and so they were trying to pretend that she wasn't.
01:47:09.000But people found all these, like, issues.
01:47:40.000Like, I had conversations, like, so academics are actually kind of bad with this.
01:47:46.000I had conversations for whatever reason, like, towards the end of last year, like, last fall, with a bunch of academics about, like, how fast AI is progressing.
01:47:54.000And they were all, like, poo-pooing it and going, like, oh, no, they're running into a wall, like, scaling through the walls and all that stuff.
01:48:15.000How could things slow down if there's a giant Manhattan Project race between us and a competing superpower that has a technological advantage?
01:48:25.000So there's this thing called like AI scaling laws.
01:48:28.000And these are kind of at the core of where we're at right now geostrategically around this stuff.
01:48:31.000So what AI scaling laws say roughly is that bigger is better when it comes to intelligence.
01:48:35.000So if you make a bigger sort of AI model, a bigger artificial brain.
01:48:39.000And you train it with more computing power or more computational resources and with more data.
01:48:46.000The thing is going to get smarter and smarter and smarter as you scale those things together, right?
01:48:51.000Now, if you want to keep scaling, it's not like it keeps going up if you double the amount of computing power that the thing gets twice as smart.
01:48:58.000Instead, what happens is if you want, it goes in like orders of magnitude.
01:49:02.000So if you want to make it another kind of increment smarter, you've got a 10x.
01:49:06.000You've got to increase by a factor of 10 the amount of compute.
01:49:13.000So if you look at the amount of compute that's been used to train these systems over time, it's this like...
01:49:17.000Exponential, explosive exponential that just keeps going like higher and higher and higher and steepens and steepens like 10x every, I think it's about every two years now.
01:49:38.000Every year, you're kind of doing that.
01:49:40.000So right now, if you look at the clusters, the ones that Elon is building, the ones that Sam is building, Memphis and Texas, these facilities are hitting the $100 billion scale.
01:49:56.000There are tens of billions of dollars, actually.
01:49:58.000Looking at 2027, you're kind of more in that space, right?
01:50:02.000You can only do 10x so many more times until you run out of money, but more importantly, you run out of chips.
01:50:09.000Like, literally, TSMC cannot pump out those chips fast enough to keep up with this insane growth.
01:50:14.000And one consequence of that is that...
01:50:18.000You essentially have this gridlock, new supply chain choke points show up, and you're like, suddenly, I don't have enough chips, or I run out of power.
01:50:28.000That's the thing that's happening on the U.S. energy grid right now.
01:50:31.000We're literally running out of one, two gigawatt places where we can plant a data center.
01:50:37.000That's the thing people are fighting over.
01:50:39.000It's one of the reasons why energy deregulation is a really important pillar of U.S. competitiveness.
01:50:48.000One of the things that adversaries do is they actually will fund protest groups against energy infrastructure projects.
01:51:06.000And, like, it was actually remarkable.
01:51:08.000We talked to some state cabinet officials, so in various U.S. states, and they're basically saying, like, yep, we're actually tracking the fact that, as far as we can tell, every single environmental or whatever protest group against an energy project has funding that can be traced back to...
01:52:04.000If we had the will, we could go like, okay, so for certain types of energy projects, for data center projects and some carve-out categories, we're actually going to put bounds around how much delay you can create by lawfare and by other stuff.
01:52:21.000Allows things to move forward while still allowing the legitimate concerns of the population for projects like this in the backyard to have their say.
01:52:29.000But there's a national security element that needs to be injected into this somewhere.
01:52:34.000And it's all part of the rule set that we have and are like tying an arm behind our back basically.
01:53:28.000It's a dimension that was flagged, actually, in the context of what Ed was talking about.
01:53:32.000That's one of the arguments that's being made.
01:53:34.000And to be clear, though, this is also how adversaries operate, is not necessarily in creating something out of nothing, because that's hard to do, and it's fake, right?
01:54:35.000So, you know, nuclear would be kind of the ideal energy source, especially modern power plants like the Gen 3 or Gen 4 stuff, which have very low meltdown risk, safe by default, all that stuff.
01:54:46.000And yet these groups are, like, coming out against this.
01:54:49.000It's like perfect, clean, green power.
01:55:28.000And one of the big things that you can do, too, is like a quick win is just like impose limits on how much time these things can be allowed to be tied up in litigation.
01:55:37.000So impose time limits on that process just to say, like, look, I get it.
01:55:42.000Like, we're going to have this conversation, but this conversation has a clock on it.
01:55:46.000Because, you know, we're talking to this one data center company, and what they were saying, we were asking, like, look, what are the timelines when you think about bringing new power, like new natural gas plants online?
01:55:58.000And they're like, well, those are like five to seven years out.
01:56:00.000And then you go, okay, well, like, how long?
01:56:02.000And that's, by the way, that's probably way too long to be relevant in the superintelligence context.
01:56:07.000And so you're like, okay, well, how long if all the regulations were waived?
01:56:11.000If this was like a national security imperative and whatever authorities, you know, Defense Production Act, whatever, like, was in your favor.
01:56:18.000And they're like, oh, I mean, it's actually just like a two-year build.
01:56:29.000And also, like, I mean, I also don't want to be too working in our own way, but, like, we don't want to, like, frame it as, like, China's, like, they fuck up.
01:56:38.000They fuck up a lot, like, all the time.
01:56:41.000One actually kind of, like, funny one is around DeepSeek.
01:56:53.000And they're legitimately a really, really good team.
01:56:55.000But it's fairly clear that even as of like end of last year and certainly in the summer of last year, like they were not dialed in to the CCP mothership.
01:57:08.000And they were doing stuff that was like actually kind of hilariously messing up the propaganda efforts of of the CCP without realizing it.
01:57:16.000So to give you like some context on this, one of one of the CCP's like.
01:57:23.000Large kind of propaganda goals in the last four years has been framing, creating this narrative that, like, the export controls we have around AI and, like, all this gear and stuff that we were talking about, look, man, those don't even work.
01:57:39.000So you might as well just, like, give up.
01:57:40.000Why don't you just give up on the export controls?
01:57:45.000So that, trying to frame that narrative.
01:57:47.000And they went to, like, gigantic efforts to do this.
01:57:50.000So I don't know if, like, there's this, like, kind of, Crazy thing where the Secretary of Commerce under Biden, Gina Raimondo, visited China in, I think, August 2023.
01:58:01.000And the Chinese basically like timed the launch of the Huawei Mate 60 phone that had this these chips that were supposed to be made by like export controlled shit for right for her visit.
01:58:14.000So it was basically just like a big like, fuck you.
01:58:17.000We don't even give a shit about your export controls, like basically trying a morale hit or whatever.
01:59:55.000He's like, so this is like our most exciting launch of the year.
01:59:58.000Nothing can stop us on the path to AGI except access to compute.
02:00:03.000And then literally the dude in Washington, D.C., who works at the think tank on export controls against China, reposts that on X, and goes basically like, message received.
02:00:56.000Like what that means is you raise from private capital.
02:01:00.000People who are pretty damn good at assessing shit will like look at your setup and assess whether it's worth backing you for these massive multi-billion dollar deals.
02:01:11.000I mean, the stories of waste are pretty insane.
02:01:13.000They'll, like, send a billion dollars to, like, a bunch of yahoos who will pivot from whatever, like, I don't know, making these widgets to just, like, oh, now we're, like, a chip foundry and they have no experience in it.
02:01:23.000But because of all these subsidies, because of all these opportunities, now we're going to say that we are.
02:01:27.000And then, no surprise, two years later, they burn out and they've just, like, lit.
02:01:31.000A billion dollars on fire or whatever billion yen.
02:01:34.000And, like, the weird thing is this is actually working overall, but it does lead to insane and unsustainable levels of waste.
02:01:41.000Like, the Chinese system right now is obviously, like, they've got their massive property bubble that they're...
02:01:50.000The only way out for them is the AI stuff right now.
02:01:53.000Like, really, the only path for them is that, which is why they're working it so hard.
02:01:58.000But the stories of just, like, billions and tens of billions of dollars being lit on fire, specifically in the semiconductor industry, in the AI industry, like, that's a drag force that they're dealing with constantly that we don't have here in the same way.
02:02:11.000So it's sort of like the different structural advantages and weaknesses of...
02:02:16.000And when we think about what do we need to do to counter this, to be active in this space, to be a live player again, it means factoring in how do you take advantage of some of those opportunities that their system presents that ours doesn't.
02:02:31.000When you say be a live player again, where do you position us?
02:03:07.000And in a time when you're looking at a transformative technology that's going to, like, upend so much about the way the world works, you can't afford to have that mentality we were just talking about with, like, the nervous...
02:03:19.000I mean, you encountered it with the staffers, you know, when booking the podcast with the presidential cycle, right?
02:04:33.000Yeah, and this is basically the sub-threshold version of, like, you know, like the World War II appeasement thing, where back, you know, Hitler was, like, was taken, he was taken Austria, he was re-militarizing shit, he was doing...
02:05:15.000And to some extent, we've still kind of learned the lesson of not letting that happen with territorial boundaries, but that's big and it's visible and it happens on the map and you can't hide it.
02:05:27.000Whereas one of the risks, especially with the previous administration, was there's these subthreshold things that don't show up in the news and that are calculated.
02:05:48.000Because historically, countries that give America a Pearl Harbor end up having a pretty bad time about it.
02:05:55.000And so why would they give us a reason to come and bind together against an obvious external threat or risk when they can just keep chipping away at it?
02:06:09.000Elevate that and realize this is what's happening.
02:06:13.000We need to take that, like, let's not do appeasement mentality and push it across in these other domains because that's where the real competition is going on.
02:06:21.000That's where it gets so fascinating in regards to social media because it's imperative that you have an ability to express yourself.
02:06:29.000The free exchange of information, finding out things that you're not going to get from mainstream media and it's led to the rise of independent journalism.
02:07:03.000Like one of the challenges obviously is like – so they try to push in extreme opinions in either direction.
02:07:10.000And it's – that part is actually – it's kind of difficult because while – The most extreme opinions are also the most likely generally to be wrong.
02:07:21.000They're also the most valuable when they're right because they tell us a thing that we didn't expect by definition that's true and that can really advance us forward.
02:07:32.000And so, I mean, there are actually solutions to this.
02:07:37.000I mean, this particular thing isn't an area we...
02:07:42.000But one of the solutions that has been bandied about is, like, you know, like, you might know, like, polymarket prediction markets and stuff like that, where at least, you know, hypothetically, if you have a prediction market around, like, if we do this policy,
02:07:57.000this thing will or won't happen, that actually creates a challenge around trying to manipulate that view or that market.
02:08:05.000Because what ends up happening is, like, if you're an adversary and you want to...
02:08:08.000Not just like manipulate a conversation that's happening in social media, which is cheap, but manipulate the price on a prediction market.
02:08:21.000And if to the extent you're wrong and you're trying to create a wrong opinion, you're going to lose your resource.
02:08:28.000So you actually can't push too far too many times or you will just get your money taken away from you.
02:08:38.000I think that's one approach where just in terms of preserving discourse, some of the stuff that's happening in prediction markets is actually really interesting and really exciting, even in the context of bots and AIs and stuff like that.
02:08:51.000This is the one way to find truth in the system is find out where people are making money.
02:09:26.000So what you end up having is these highly contrarian people who, despite everybody telling them that they're going to fail, just believe in what they're doing and think they're going to succeed.
02:09:36.000And I think that's part of what really kind of shapes the startup founder's soul in a way that's really constructive.
02:09:42.000It's also something that, if you look at the Chinese system, is very different.
02:09:46.000You raise money in very different ways.
02:09:48.000You're coupled to the state apparatus.
02:09:50.000You're both dependent on it and you're supported by it.
02:09:56.000And it makes it hard for Americans to relate to Chinese and vice versa and understand each other's systems.
02:10:02.000One of the biggest risks as you're thinking through what is your posture going to be relative to these countries is you fall into thinking that their traditions, their way of thinking about the world is the same as your own.
02:10:11.000And that's something that's been an issue for us with China for a long time is, you know, hey, they'll liberalize, right?
02:10:17.000Like bring them into the World Trade Organization.
02:10:18.000It's like, oh, well, actually they'll sign the document, but they won't actually live up to any of the commitments.
02:10:25.000It makes appeasement really tempting because you're thinking, oh, they're just like us.
02:11:06.000Maybe AI, maybe super intelligence realizes, "Hey, you fucking apes, you territorial apes with thermonuclear weapons, how about you shut the fuck up?
02:11:17.000You guys are doing the dumbest thing of all time and you're being manipulated by a small group of people that are profiting in insane ways off of your misery."
02:11:27.000That's actually not- Stole first, and those people are now controlling all the fucking money.
02:11:44.000Wow, we covered a lot of ground there.
02:11:46.000Well, that's what I would do if I was superintelligence that would have stopped all that.
02:11:50.000That actually is, like, so this is not, like, relevant to the risk stuff or to the whatever at all, but it's just interesting.
02:11:56.000So there's actually theories, like, in the same way that there's theories around power seeking and stuff around superintelligence, there's theories around, like, how superintelligence is.
02:12:08.000And you actually, like, you have this intuition, which is exactly right, which is that, hey, two super intelligences, like, actual legit super intelligences should never actually, like, fight each other destructively in the real world, right?
02:12:22.000That shouldn't happen because they're so smart.
02:12:24.000And in fact, like, there's theories around they can kind of do perfect deals with each other based on, like, if we're two super intelligences, I can kind of assess, like, how powerful you are.
02:12:35.000You can assess how powerful I am, and we can actually decide, well, if we did fight a war against each other...
02:12:46.000You would have this chance of winning.
02:13:11.000And one of the nice things, too, is as you build up your ratchet of AI, It does start to open some opportunities for actual trust but verified, which is something that we can't do right now.
02:13:22.000It's not like with nuclear stockpiles where we've had some success in some context with enforcing treaties and stuff like that, sending inspectors in and all that.
02:13:32.000With AI right now, how can you actually prove that...
02:13:35.000Like some international agreement on the use of AI is being observed.
02:13:40.000Even if we figure out how to control these systems, how can we make sure that, you know, China is baking in those control mechanisms into their training runs and that we are and how can we prove it to each other without having total access to the compute stack?
02:13:54.000We don't really have a solution for that.
02:13:56.000There are all kinds of programs like this FlexHeg thing.
02:13:59.000But anyway, those are not going to be online by 2027.
02:14:03.000But it's really good that people are working on them.
02:14:06.000You want to be positioned for catastrophic success.
02:14:10.000What if something great happens or we have more time or whatever?
02:14:14.000You want to be working on this stuff that allows this kind of control or oversight that's kind of hands-off.
02:14:22.000You know, in theory, you can hand over GPUs to an adversary inside this box with these encryption things.
02:14:31.000The people we've spoken to in the spaces that actually try to break into boxes like this are like, well, that's probably not going to work.
02:14:41.000So the hope is that as you build up your AI capabilities, basically, it starts to create solutions.
02:14:46.000So it starts to create ways for two countries to verifiably adhere to some kind of international agreement or to find, like you said, paths for de-escalation.
02:16:00.000I think what you were talking about, like, the Argentinian thing that came out a few years ago around all the oligarchs and their offshore accounts.
02:16:34.000And, like, someone basically blew it wide open, and so you got to see, like, every, like, oligarch and rich person's, like, financial shit.
02:16:45.000Like, every once in a while, right, the world gets just, like, a flash of, like, oh, here's what's going on at the surface.
02:17:37.000They make, like, ridiculous, like, $5 billion returns every year kind of guaranteed, so much so they have to cap how much they invest in the market because they would otherwise, like, move the market too much, like, affect it.
02:17:50.000The fucked up thing about, like, the way they trade, and so this is, like, 20-year-old information, but it's still indicative because, like, you can't get current information about their strategies.
02:18:00.000But one of the things that they were the first to kind of go for and figure out is they were like, Okay, they basically were the first to kind of build what was at the time, as much as possible, an AI that autonomously did trading at, like,
02:18:15.000great speeds, and it had, like, no human oversight and just worked on its own.
02:18:20.000And what they found was the strategies that were the most successful were the ones that humans understood the least.
02:18:30.000Because if you have a strategy that a human can understand...
02:18:35.000Some human's going to go and figure out that strategy and trade against you.
02:18:38.000Whereas if you have the kind of the balls to go like, oh, this thing is doing some weird shit that I cannot understand no matter how hard I try, let's just fucking YOLO and trust it and make it work.
02:18:50.000If you have all the stuff debugged and if the whole system is working right...
02:18:55.000That's where your biggest successes are.
02:18:57.000What kind of strategies are you talking about?
02:19:19.000The Thursday after the full moon and then sell it like the Friday after the new moon or some like random shit like that.
02:19:25.000But it's like, why does that even work?
02:19:27.000Like, why would why would that even work?
02:19:29.000So to like to sort of explain why these these strategies work better, if you think about how AI systems are trained today, you basically very roughly.
02:19:40.000You start with this blob of numbers that's called a model.
02:19:44.000And you feed it input, you get an output.
02:19:47.000If the output you get is no good, if you don't like the output, you basically fuck around with all those numbers, change them a little bit, and then you try again.
02:20:10.000You just know that it does a good job, at least where you've tested it.
02:20:15.000Now if you slightly change what you tested on, suddenly you could discover, oh shit, it's catastrophically failing at that thing.
02:20:20.000These things are very brittle in that way, and that's...
02:20:22.000That's part of the reason why ChatGPT will just like completely go on a psycho binge fest every once in a while if you give it a prompt that has like too many exclamation points and asterisks in it or something.
02:20:33.000Like these systems are weirdly brittle in that way.
02:20:36.000But applied to investment strategies, if all you're doing is saying like Optimize for returns.
02:21:09.000So when you try to impose on that system human interpretability, you pay what in the AI world is known as the interpretability tax.
02:21:17.000Basically, you're adding another constraint, and the minute you start to do that, you're forcing it to optimize for something other than pure rewards.
02:21:25.000Like doctors using AI to diagnose diseases are less effective than the chatbot on its own.
02:21:46.000Now you're spending some of that precious compute on something other than just the thing you're trying to optimize for.
02:21:52.000And so now that's going to come at a cost of the actual performance of the system.
02:21:55.000And so if you are going to optimize like the fuck out of making money.
02:21:59.000You're going to necessarily de-optimize the fuck out of anything else, including being able to even understand what that system is doing.
02:22:07.000And that's kind of like at the heart of a lot of the kind of big-picture AI strategy stuff is people are wondering, like, how much interpretability tax am I willing to pay here?
02:22:17.000And everyone's willing to go a little bit further and a little further.
02:22:20.000So OpenAI actually had a paper or, I guess, a blog post where they talked about this.
02:22:26.000And they were like, look, right now...
02:22:29.000We have this, essentially, this, like, thought stream that our model produces on the way to generating its final output.
02:22:38.000And that thought stream, like, we don't want to touch it to make it, like, interpretable, to make it make sense, because if we do that, then essentially it'll be optimized to convince us of whatever the thing is that we want it to do.
02:22:53.000So it's like if you've used like an OpenAI model recently, right, like 03 or whatever, it's doing its thinking before it starts like outputting the answer.
02:23:03.000And so that thinking is, yeah, we're supposed to like be able to read that and kind of get it, but also...
02:23:10.000We don't want to make it too legible, because if we make it too legible, it's going to be optimized to be legible and to be convincing, rather than...
02:25:31.000That zaps your chips to, like, make the chips when you're fapping them.
02:25:36.000Yeah, so we're talking about, like, you do these atomic layer patterns on the chips and shit, and, like, what this UV thing does is it, like, fires, like, a really high-powered laser beam.
02:25:46.000They attach the head of sharks that just shoot at the chips.
02:25:49.000Sorry, that was, like, an Austin Powers.
02:25:51.000Anyway, they'll, like, shoot it at the chips, and that causes, depending on how the thing is designed, They'll, like, have a liquid layer of the stuff that's gonna go on the chip.
02:26:02.000The UV is really, really tight and causes it, exactly, causes it to harden.
02:26:07.000And then they wash off the liquid, and they do it all over again.
02:26:10.000Like, basically, this is just imprinting a pattern on a chip.
02:26:12.000Yeah, basically a fancy, tiny printer.
02:26:25.000The ones that China can use, because we've prevented them from getting any of those extreme ultraviolet lithography machines, the ones China uses are previous generation machines called Deep Ultraviolet, and they can't actually make chips as high a resolution as ours.
02:26:39.000So what they do is, and what this article is about is, they basically take the same chip, they zap it once with DUV.
02:26:45.000And then they gotta pass it through again, zap it again, to get closer to the level of resolution we get in one pass with our exquisite machine.
02:26:54.000Now, the problem with that is you've got to pass the same chip through multiple times, which slows down your whole process.
02:26:59.000It means your yields at the end of the day are lower.
02:27:08.000There's nothing new under the sun here.
02:27:10.000China has been doing this for a while.
02:27:13.000So it's not actually a huge shock that this is happening.
02:27:16.000The question is always, when you look at an announcement like this, yields, yields, yields.
02:27:21.000How, like, what percentage of the chips coming out are actually usable and how fast are they coming out?
02:27:27.000That determines, like, is it actually competitive?
02:27:29.000And that article, too, like, this ties into the propaganda stuff we were talking about, right?
02:27:33.000If you read an article like that, you could be forgiven for going, like, oh, man, our expert controls, like, just aren't working, so we might as well just give them up.
02:27:41.000When in reality, because you look at the source, and this is how you know that also this is one of their propaganda things.
02:27:50.000You look at Chinese news sources, what are they saying?
02:27:53.000What are the beats that are, like, common?
02:27:55.000And you know, just because of the way their media is set up, totally different from us, and we're not used to analyzing things this way, but when you read something in, like, the South China Morning Post, or, like, the Global Times, or Xinhua, or in a few different places like this, and it's the same beats coming back, you know that someone was handed a brief,
02:28:13.000and it's like, you gotta hit this point, this point, this point, and, yep, they're gonna find a way to work that into the news cycle over there.
02:28:51.000Right now, they're in the middle of staffing up some of the key positions because it's a new administration still, and this is such a technical domain.
02:28:59.000They've got people there who are at the working level who are really sharp.
02:29:04.000They have some people now, yeah, in places like especially in some of the export control offices now who are some of the best in the business.
02:29:12.000Yeah. And that's that's really important.
02:29:15.000Like this is a it's a weird space because so when you want to actually recruit for for.
02:29:20.000You know, government roles in this space, it's really fucking hard.
02:29:23.000Because you're competing against, like, an open AI, like, very, like, low-range salaries, like half a million dollars a year.
02:29:31.000The government pay scale, needless to say, is, like, not...
02:30:48.000And this is what you keep seeing, right?
02:30:52.000With these provincial-level debt in China, which is so awful.
02:30:58.000It's like people trying to hide money under imaginary mattresses.
02:31:03.000And then hiding those mattresses under bigger mattresses until eventually, like, no one knows where the liability is.
02:31:09.000And then you get a massive property bubble and any number of other bubbles that are due to pop any time, right?
02:31:14.000And the longer it goes on, like, the more, like, stuff gets squirreled away.
02:31:19.000Like, there's actually, like, a story from the Soviet Union that always, like, gets me, which is, so Stalin obviously, like, purged and killed, like, millions of people in the 1930s, right?
02:31:30.000By the 1980s, the ruling Politburo of the Soviet Union, obviously, like, things have been different.
02:31:37.000Generations had turned over and all this stuff.
02:31:39.000But those people, the most powerful people in the USSR, could not figure out what had happened to their own families during the purchase.
02:31:50.000Like, the information was just nowhere to be found because the machine of the state was just like...
02:31:58.000So aligned around like we just like we just gotta kill as many fucking people as we can and like turn it over and then hide the evidence of it and then kill the people who killed the people and then kill those people who killed those people.
02:32:09.000It also wasn't just kill the people, right?
02:32:11.000It was like a lot of like kind of gulag archipelago style.
02:32:23.000But it was very much like you grind mostly or largely you grind them to death and basically they've gone away and you burn the records of it happening.
02:32:31.000So literally the most powerful people.
02:33:26.000One of the things is, too, when you have such a big structure that's overseeing such complexity, right?
02:33:31.000Obviously, a lot of stuff can hide in that structure, and it's not unrelated to the whole AI picture.
02:33:39.000There's only so much compute that you have at the top of that system that you can spend, right?
02:33:44.000As the president, as a cabinet member, like, whatever.
02:33:48.000You can't look over everyone's shoulder and do their homework.
02:33:52.000You can't do founder mode all the way down and all the branches and all the, like, action officers and all that shit.
02:33:58.000That's not going to happen, which means you're spending five seconds thinking about how to unfuck some part of the government, but then the, like, you know...
02:34:06.000Corrupt people who run their own fiefdoms there spend every day trying to figure out how to survive.
02:34:10.000It's like their whole life to justify themselves.
02:35:22.000I think the real issue is in dismantling a lot of these programs that – You can point to some good some of these programs do.
02:35:31.000The problem is, like, some of them are so overwhelmed with fraud and waste that it's like, to keep them active in the state they are, like, what do you do?
02:35:40.000Do you rip the Band-Aid off and start from scratch?
02:35:43.000Like, what do you do with the Department of Education?
02:35:44.000Do you say, why are we number 39 when we were number one?
02:35:48.000Like, what did you guys do with all that money?
02:35:52.000There's this idea in software engineering, actually, he's talking to one of our employees about this, which is like, Refactoring, right?
02:35:58.000So when you're writing, like, a bunch of software, it gets really, really big and hairy and complicated, and there's all kinds of, like, dumbass shit, and there's all kinds of waste that happens in that codebase.
02:36:09.000There's this thing that you do every, you know, every, like, few months, is you do this thing called refactoring, which is, like, you go, like, okay, we have, you know, 10 different things that are trying to do the same thing.
02:36:48.000So we're just gonna, like, stick on another appendage to the beast and get that appendage to do that new thing.
02:36:56.000And, like, that's been going on for 250 years, so we end up with, like, this beast that has a lot of appendages, many of which do incredibly duplicative and wasteful stuff, that if you were a software engineer, just, like, not politically, just objectively looking at that as a system,
02:37:26.000But they haven't done that, hence the $36 trillion of debt.
02:37:30.000It's a problem, too, though, in all, like, when you're a big enough organization, you run into this problem, like, Google has this problem, famously.
02:37:36.000We have friends, like, Jason, so Jason's the guy you spoke to about that.
02:37:46.000So he works in, like, relatively small codebases, and he, like, you know, can hold the whole codebase in his head at a time.
02:37:53.000But when you move over to, you know, Google, to Facebook, like, all of a sudden, this gargantuan codebase starts to look more like the complexity of the U.S. government, just, like, you know, very roughly in terms of scale, right?
02:38:03.000So now you're like, okay, well, we want to add functionality.
02:38:08.000So we want to incentivize our teams to build products that are going to be valuable.
02:38:13.000And the challenge is, The best way to incentivize that is to give people incentives to build new functionality.
02:38:49.000A, this Frankenstein monster of a codebase that you just keep stapling more shit onto.
02:38:53.000And then B, this massive graveyard of apps that never get used.
02:38:58.000This is like the thing Google is famous for.
02:38:59.000If you ever see like the Google graveyard of apps, it's like all these things that you're like, oh yeah, I guess I kind of remember Google Me.
02:39:04.000Somebody made their career off of launching that shit and then peaced out and it died.
02:39:09.000That's like the incentive structure at Google, unfortunately.
02:39:13.000And it's also kind of the only way to, I mean, it's probably not, but in the world where humans are doing the oversight, that's your limitation, right?
02:39:21.000You got some people at the top who have a limited bandwidth and compute that they can dedicate to, like, hunting down the problems.
02:39:29.000You could actually have a sort of autonomous AI agent that is the autonomous CEO or something go into an organization and uproot all the things and do that refactor.
02:39:40.000You could get way more efficient organizations out of that.
02:39:44.000Thinking about government corruption and waste and fraud, that's the kind of thing where those sorts of tools could be radically empowering, but you've got to get them to work right and for you.
02:40:45.000It just needs to actually get aligned and around an initiative, and we have to be able to reach out and touch.
02:40:51.000On the control side, there's also a world where, and this is actually, like, if you talk to the labs, this is what they're actually planning to do, but it's a question of how methodically and carefully they can do this.
02:41:00.000The plan is to ratchet up capabilities, and then scale, in other words.
02:41:04.000And then as you do that, you start to use your AI systems, your increasingly clever and powerful AI systems, to do research on technical control.
02:41:14.000So you basically build the next generation of systems.
02:41:16.000You try to get that generation of systems to help you just inch forward a little bit more on the capability side.
02:41:21.000It's a very precarious balance, but it's something that at least isn't insane on the face of it.
02:41:48.000Ambiguity and uncertainty about what's going on in China.
02:41:50.000So there's been a lot of like track 1.5, track 2 diplomacy, basically where you have non-government guys from one side talk to government guys from the other side or talk to non-government from the other side and kind of start to align on like, okay, what do we think the issues are?
02:42:03.000You know, the Chinese are – there are a lot of like freaked out Chinese researchers and have come out publicly and said, hey, like we're really concerned about this whole loss of control thing.
02:42:12.000There are public statements and all that.
02:42:14.000You also have to be mindful that any statement the CCP puts out is a statement they want you to see.
02:42:18.000So when they say like, "Oh yeah, we're really worried about this thing," it's genuinely hard to assess what that even means.
02:42:26.000But as you start to build these systems, we expect you're going to see some evidence of this shit before.
02:42:33.000And it's not necessarily, it's not like you're going to build the system necessarily and have it take over the world.
02:42:39.000Yeah, so I was actually going to add to this really, really good point, and something where, like, open source AI is, like, even, you know, could potentially have an effect here.
02:42:52.000So a couple of the major labs, like OpenAI Anthropic, I think, came out recently and said, like, look, we...
02:43:00.000Our systems are on the cusp of being able to help a total novice, like someone with no experience, develop and deploy and release a known biological threat.
02:43:11.000And that's something we're going to have to grapple with over the next few months.
02:43:15.000And eventually, capabilities like this, not necessarily just biological, but also cyber and other areas, are going to come out in open source.
02:43:24.000And when they come out in open source...
02:43:29.000When they come out in open source, you actually start to see some things happen, like some incidents, like some major hacks that were just done by a random motherfucker who just wants to see the world burn, but that wakes us up to like,
02:43:44.000oh shit, these things actually are powerful.
02:43:47.000I think one of the aspects also here is we're still in that...
02:43:53.000Post-Cold War honeymoon, many of us, right?
02:43:56.000In that mentality, like, not everyone has, like, wrapped their heads around this stuff.
02:44:00.000And the, like, what needs to happen is something that makes us go, like, oh, damn, we, like, we weren't even really trying this entire time.
02:44:11.000Because this is, like, this is the 9-11 effect.
02:44:16.000Once you have a thing that aligns everyone around like, oh shit, this is real and we actually need to do it and we're freaked out, we're actually safer.
02:44:24.000We're safer when we're all like, okay, something important needs to happen.
02:44:54.000And so let us actually realign around, like, okay...
02:44:58.000Let's actually fucking solve some problems for real.
02:45:01.000And so putting together the groundwork, right, is what we're doing around, like, let's pre-think a lot of this stuff so that, like, if and when the shock comes...
02:45:51.000If you think about the loss of control scenarios that a lot of people look at are autonomous replication, like the model gets access to the internet, copies itself onto servers and all that stuff.
02:46:14.000We get another try, and we can kind of learn from our mistakes.
02:46:17.000So there is this sort of, like, this picture, you know, one camp goes, oh, well, we're going to kind of make this superintelligence in a vat, and then it explodes out and we lose control over it.
02:47:04.000But I really appreciate you guys and appreciate your perspective because it's very important and it's very illuminating.
02:47:11.000It gives you a sense of what's going on.
02:47:13.000And I think one of the things that you said that's really important is, like, it sucks that we need a 9-11 moment or a Pearl Harbor moment to realize what's happening so we all come together.
02:47:23.000But hopefully, slowly but surely, through conversations like this, people realize what's actually happening.
02:47:29.000You need one of those moments, like, every generation.
02:47:32.000Like, that's how you get contact with the truth.
02:47:34.000And it's, like, it's painful, but, like, the light's on the other side.