Based Camp - September 29, 2023


Malcolm Got in a Heated Argument with Eliezer Yudkowsky at a Party (Recounting an AI Safety Debate)


Episode Stats

Length: 42 minutes
Words per Minute: 190.98
Word Count: 8,149
Sentence Count: 504
Misogynist Sentences: 4
Hate Speech Sentences: 7


Summary

In this episode, we talk about our recent encounter with Eliezer Yudkowsky, the prominent AI apocalypticist, and the mysterious donor who has been making donations to the Pronatalist Foundation for the past year or so. We also talk about how we ended up in a heated argument with him at a conference.


Transcript

00:00:00.000 What was really interesting is that he actually conceded that if this was the way that an AI
00:00:05.700 structured itself, that yes, you would have terminal convergence, but that AIs above a
00:00:13.080 certain level of intelligence would never structure themselves this way. So this was very interesting
00:00:18.500 to me because it wasn't the argument I thought he would take. And that would be true. I will agree
00:00:22.560 that if the AI maintained itself as a single hierarchy, it would be much less likely for its
00:00:28.100 utility function to change. But the problem is, essentially no government structure ever created
00:00:34.480 has functioned that way. Essentially no program ever created by humans has run that way. Nothing
00:00:40.100 ever encoded by evolution has run that way, i.e. the human brain, any brain, any neural structure we
00:00:45.560 know of, there are none that are coded that way. So it is very surprising. So I said, okay, gauntlet
00:00:51.360 thrown, are you willing to be disproven? Because we will get some more understanding into AI
00:00:56.200 interpretability into how AIs think in the near future. If it turns out that the AIs that exist
00:01:01.240 right now are actually structuring themselves that way, will you concede that you are wrong
00:01:05.900 about the way that you tackle AI apocalypticism? And then he said, and this was really interesting
00:01:11.860 to me, he's like, no, I won't. I was also like, yeah, also we could run experiments where we do a
00:01:16.740 bunch of basically unbounded AIs and see if they start to show terminal convergence. Do they start to
00:01:21.620 converge on similar utility functions? You know, what they're trying to optimize for? Again, he was
00:01:26.620 like, well, even if we saw that, that wouldn't change my views on anything, right? Like his views
00:01:31.500 are religious in nature, which was very disappointing to me. Like I thought that maybe he had more of
00:01:37.820 like a logical or rational perspective on things. And it was, it was really sad. You know, we don't
00:01:45.180 talk negatively about people on this channel very frequently. But I do think that he destroys a lot
00:01:51.600 of people's lives. And I do think that he makes the risk of AI killing all humans dramatically higher
00:01:56.600 than it would be in a world where he didn't exist. Would you like to know more? Hello, Malcolm.
00:02:02.720 Hello. So we just got back from this wonderful conference thing we were at called Manifest. So we'd gone
00:02:09.580 out to SF to host a few pronatalist-focused dinner parties. And randomly we got looped into something
00:02:16.500 called Manifest, which was a conference for people who are interested in prediction markets.
00:02:21.400 But interestingly, we ended up meeting a bunch of people who we had known through like online stuff.
00:02:27.880 Some were absolutely fantastic. Like Scott Alexander. Absolutely. I never met him before in
00:02:33.040 person. We communicated on a few issues. Really cool guy. Would you say so, Simone?
00:02:38.360 Yeah. Like super awesome. Richard Hanania. A really nice guy as well. Robin Hanson,
00:02:44.600 who we'd actually met before. And of course, Aella, but we're all friends. You know,
00:02:50.180 she's been on this channel before. But we did get in a fight with someone there. And I am very excited
00:02:58.780 to tell you guys this tale because it was Eliezer Yudkowsky. But before we go further on that,
00:03:03.520 I want to talk about a mystery that we had. The Pronatalist Foundation had a mystery.
00:03:10.020 Oh, I can tell this story. Yeah. So for the past few months, maybe closer to a year, we've received
00:03:16.440 the odd random donation from someone. And it was the same person in the same amount each time,
00:03:21.900 but at very random times. I could never predict when these would come in. And it's very unusual for
00:03:27.340 someone to donate multiple times, like frequently like that. So we were always like very flattered
00:03:34.160 and pleased. We didn't know this person. We didn't recognize their name, but we're like, this is
00:03:37.780 amazing. Like, thank you so much. It means a lot to us. And it really does. And then we actually met
00:03:44.160 that person recently. And randomly at the conference, you were talking to her and she
00:03:49.160 mentioned she was the mystery donor, and
00:03:55.360 that the reason why she donates turns out to be the coolest reason for donating that I've ever heard
00:04:01.240 before. And I think it's the only way we should ever receive donations in the future. So she has a
00:04:06.460 group of friends who she likes very much and she enjoys spending time with them. But politically,
00:04:11.360 they are very, very different from her. So occasionally she has to just keep her mouth
00:04:15.740 shut when they start going off on politics, because otherwise she will lose this group of
00:04:20.680 friends because their politics is such that they will probably just, you know, deep six, anyone who
00:04:25.460 doesn't agree with them politically. And so instead of, you know, dealing with her anger by speaking
00:04:31.600 out in the moment with her friends, she'll go home and she will revenge donate to whoever would be
00:04:38.320 the perfect, like, thorn in the side of these people, based on that most recent conversation that made her angry.
00:04:45.180 So every time we've received a donation, it is, it is a donation. But, and here I would actually say
00:04:53.400 this for people watching who might not know this because they just know us from the internet. We have
00:04:56.900 a nonprofit. It's a 501(c)(3). If you are interested in like giving money, because sometimes we get like
00:05:02.780 super chats and stuff here and stuff like that, you know, Google gets a big cut of those. And I don't
00:05:06.700 think that any of us want to be giving Google any more money. So if you wanted to, you could always
00:05:11.180 go directly to the foundation through, through the donation link. And also none of the money goes to
00:05:16.780 us. Like we don't, we don't use it to pay our salaries or something, you know, as I said in the
00:05:21.360 news, like we spent over 40% of our salary last year on, on donations to this foundation, but it does
00:05:26.440 go to something that we care about that much in terms of trying to fix the educational system.
00:05:30.740 But yeah, donate with hatred. Donate when you are angry. Donate when you want to really twist
00:05:37.700 the knife. Yeah. Donate with hatred. That's the type of donation we want. We don't want people
00:05:42.380 We want your hate. Yes. We want you to be spiting other people when you donate.
00:05:47.240 Yeah. That is the only sort of donation we want. And, and we actually had a big donation recently that
00:05:53.460 might push us down a different path toward creating a nation state, which is
00:05:57.560 an idea we've been toying with. So I'm excited about that, but let's get to the topic
00:06:00.620 of this video, the fight with Eliezer Yudkowsky. And not really a fight. It was a heated argument,
00:06:07.240 you would say, Simone, or? It, it drew onlookers, I will say that. It drew a crowd. It was.
00:06:17.460 Or perhaps that was the yellow sparkly fedora that Yudkowsky was wearing. So, who knows?
00:06:22.260 He, I don't, he, he, he dresses literally like the stereotype of a neckbeard character.
00:06:28.560 Which we argue is actually a very good thing to do where, you know, wear a clear character
00:06:33.800 outfit, have very clear virtues and vices. He does a very good job with his public persona.
00:06:38.700 He does a good job with character building. I will really give him that. His character,
00:06:42.520 the character he sells to the world is a very catchy character. And it is one that the media
00:06:47.180 would talk about. And he does a good job with his, his virtues and vices. So I'll go over the
00:06:53.060 core of the debate we had, which I guess you guys can imagine. So for people who don't know Eliezer
00:06:57.020 Yudkowsky, he's very easily the most famous AI apocalypticist. He thinks AI is going to kill us
00:07:04.600 all. And for that reason, we should stop or delay AI research. Whereas people who are more familiar
00:07:11.520 with our theories on it know that we believe in variable risk of AI. We believe that there will be
00:07:16.540 terminal convergence of all intelligences, be they synthetic or organic. Once they reach a
00:07:22.020 certain level, essentially their utility functions will converge. The thing they're optimizing for will
00:07:25.700 converge. And that for that reason, if that point of convergence is one that would have the AI kill
00:07:32.220 us all or do something that today we would think is immoral. Well, we too would come to that once we
00:07:36.840 reached that level of intelligence. And therefore it's largely irrelevant. It just means, okay, no matter
00:07:41.980 what, we're all going to die. It could be 500 years. It could be 5,000 years. So the variable risk
00:07:46.340 from AI is increased the longer it takes AI to reach that point. And we have gone over this in a few
00:07:53.980 videos. What was very interesting in terms of debating with him were a few points. One was his
00:08:00.380 relative unsophistication about how AI or the human brain is actually structured. I was genuinely surprised
00:08:06.660 given that this is like his full-time thing that he wouldn't know some of this stuff. And then,
00:08:12.500 but it makes sense. You know, as I've often said, he is an AI expert in the same way Greta Thunberg
00:08:18.500 is a nuclear physics expert. You know, she spends a lot of time complaining about,
00:08:22.560 you know, nuclear power plants, but she doesn't actually have much of an understanding of how
00:08:26.500 they work. And it helps explain why he is so certain in his belief that there won't be terminal
00:08:33.700 convergence. So we'll talk about a few things. One, instrumental convergence. Instrumental
00:08:38.260 convergence is the idea that all AI systems converge in the way they are internally structured,
00:08:44.460 on, like, a shared internal architecture, you could say an internal way of thinking. Terminal convergence
00:08:51.100 is the belief that AI systems converge on a utility function, i.e. that they are optimizing for the same
00:09:00.060 thing. Now he believes in instrumental convergence. He thinks that AIs will all, and he believes
00:09:05.980 actually even more so we learned in our debate, an absolute instrumental convergence. He believes
00:09:11.760 all AIs eventually structure themselves in exactly the same way. And this is actually key to the
00:09:19.120 argument at hand. But he believes there is absolutely no terminal convergence. There is absolutely no
00:09:26.280 changing. AIs almost will never change their utility function once it's set. So do you want to go over
00:09:32.800 how his argument works, Simone? Right. So that requires going to the core of your argument. So
00:09:38.180 per your argument, and I'm going to give the simplified, dumbed-down version of it, and you can
00:09:43.480 give the correct version, of course. But you argue that let's say an AI, for the sake of argument, is
00:09:50.580 given the original objective function of maximizing paper clips. But let's say it's also extremely
00:09:56.560 powerful AI. So it's going to be really, really good at maximizing paper clips. So your argument is
00:10:03.540 that anything that becomes very, very, very good at something is going to use multiple instances.
00:10:09.240 Like it'll sort of create sub versions of itself. And those sub versions of itself will enable it to
00:10:15.120 sort of do more things at once. This happens with the human brain, it happens all over the place, also
00:10:19.680 with governing. You know, there's no like one government that just declares how everything's
00:10:25.600 going to be. You know, there's the Senate, there's the judiciary, there's the executive office, there's
00:10:29.820 all these states and local offices. You know, you would have, like, a department of transportation, a
00:10:33.180 department of the interior, and they would have sub-departments. Right. And so you argue that AI will
00:10:39.000 have tons of sub departments and each department will have its own objective function. So for example,
00:10:44.800 if one of the things that, you know, the paperclip maximizer needs, um, is raw material, there might
00:10:49.880 be a raw material sub instance, and it might have its own sub instances. And then, you know, those
00:10:54.040 objective functions will be obviously subordinate to the main objective function, but for you further,
00:11:00.060 probably better example than raw material would be like, invent better power generators.
00:11:04.640 Yes. Okay. Sure. Invent better power generators. And so that will be its objective function,
00:11:10.200 not paperclip maximizing, but it will serve the greatest objective function of paperclip maximizing.
00:11:14.780 So that is your argument. And your argument is that basically with an AGI, eventually you're
00:11:21.080 going to get a sub instance with an objective function that gets either rewritten or becomes
00:11:28.860 so powerful at one point that it overwrites the greatest objective function, basically because
00:11:34.080 if it is a better objective function in some kind of way, in a way that makes it more powerful,
00:11:38.760 in a way that enables it to basically outthink the main instance, the paperclip maximizer,
00:11:44.260 that it will overcome it at some point. And therefore it will have a different objective
00:11:49.420 function. Yeah. We need to, um, uh, elaborate on those two points you just made there because
00:11:53.920 they're a little nuanced. So the, it may just convince the main instance that it's wrong.
00:11:58.960 Basically, it just goes back to the main instance and it's like, this is actually a better objective
00:12:03.560 function and you should take this objective function. This is something that like the U S government
00:12:08.240 does all the time. It's something that the human brain does all the time. It's something that every
00:12:11.540 governing system, which is structured this way does very, very regularly. This is how people change
00:12:16.500 their minds when they create a mental model of someone else. And they argue with that person
00:12:21.200 to determine if what they think is the best thing to think. And then they're like, Oh, I should
00:12:24.600 actually be a Christian or something like that. Right? Like, so they, they make major changes.
00:12:27.980 The other way it could change that Simone was talking about is it could be that one objective
00:12:32.260 function, given the way it's architecture works, just like tied to that objective function,
00:12:38.400 it's actually more powerful than the master objective function. Now this can be a little
00:12:44.020 difficult to understand how this could happen. The easiest way this could happen, if I'm just
00:12:48.180 going to explain it in, like, the simplest context, is that the master objective function may be, like,
00:12:52.660 really, really nuanced and have like a bunch of like, well, you can think like this and not like
00:12:57.480 this and like this and not like this, like a bunch of different rules put on top of it that might've
00:13:02.280 been put on by, like, an AI safety person or something. And a subordinate objective function
00:13:07.460 in a subordinate instance within the larger AI architecture may be lighter weight.
00:13:13.960 And thus it ends up, you know, being more efficient in a way that allows it to literally outcompete
00:13:20.380 the master function in terms of its role in this larger architecture. All right, continue with what
00:13:25.600 you were saying. Right. And so that is your view. And this is why you think that there could
00:13:30.600 ultimately be terminal convergence, because basically you think that in a shared reality
00:13:37.620 with shared physics, basically all intelligences will come to some ultimate truth,
00:13:44.260 that they want to maximize some ultimate objective function. Humans, AI, it doesn't really matter,
00:13:49.700 aliens, whatever. So it also doesn't matter, you know, if, if humans decide...
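To make the structure Simone just summarized concrete, here is a minimal toy sketch of the architecture being described. Every class name, number, and scoring rule below is invented purely for illustration; it is not a claim about how any real AI system is, or would be, built.

```python
# Toy illustration only: a "master" agent with one objective function spawns
# sub-instances with narrower objectives, and a sub-instance can propose a
# replacement objective that the master adopts if it scores better under a
# simple comparison. All names, numbers, and rules are invented for
# illustration; this is not a claim about how any real system works.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

Objective = Callable[[Dict[str, float]], float]  # maps a world-state to a score


@dataclass
class SubInstance:
    name: str
    objective: Objective      # subordinate objective, e.g. "invent better power generators"
    overhead: float = 0.0     # weight of extra rules/constraints attached to this objective


@dataclass
class MasterAgent:
    objective: Objective      # master objective, e.g. "maximize paperclips" plus safety rules
    overhead: float = 0.5     # a heavily constrained master objective carries more overhead
    subs: List[SubInstance] = field(default_factory=list)

    def spawn(self, name: str, objective: Objective, overhead: float = 0.0) -> SubInstance:
        sub = SubInstance(name, objective, overhead)
        self.subs.append(sub)
        return sub

    def consider_proposal(self, sub: SubInstance, state: Dict[str, float]) -> None:
        # The first mechanism described above: a sub-instance "argues" that its
        # objective is better, and the master adopts it if it wins on this
        # (deliberately crude) overhead-adjusted comparison.
        if sub.objective(state) - sub.overhead > self.objective(state) - self.overhead:
            print(f"master objective replaced by {sub.name}'s objective")
            self.objective, self.overhead = sub.objective, sub.overhead


def count_paperclips(state: Dict[str, float]) -> float:
    return state.get("paperclips", 0.0)


def power_output(state: Dict[str, float]) -> float:
    return state.get("energy", 0.0)


if __name__ == "__main__":
    world = {"paperclips": 100.0, "energy": 500.0}
    master = MasterAgent(objective=count_paperclips)
    power_sub = master.spawn("power-research", power_output)  # lighter-weight subordinate objective
    master.consider_proposal(power_sub, world)  # may overwrite the master objective
```

The only point the sketch is meant to capture is the structural one: once objectives are distributed across sub-instances, "the utility function can never change" becomes an additional assumption rather than a given.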
00:13:52.860 What was really interesting is that he actually conceded that if this was the way that an AI
00:13:58.640 structured itself, that yes, you would have terminal convergence, but that AIs above a certain level
00:14:06.480 of intelligence would never structure themselves this way. He believes, so, so we can talk about,
00:14:13.460 so this was very interesting to me because it wasn't the argument I thought he would take. I thought the
00:14:17.440 easier position for him to take was to say that, no, actually, even if you have the
00:14:25.680 sub-divided intelligences, a subordinate instance can never overwrite the, the instance that created
00:14:31.800 it, which we just know isn't true because we've seen lots of, of, of organizational structures
00:14:37.600 that, that operate that way. But I, I thought that. Yeah. For example, militaries have taken over executive
00:14:42.880 government branches all the time. All the time. Yes. You can look at all sorts of, this is why
00:14:47.380 understanding governance and understanding the way AIs are actually structured and understanding the
00:14:50.680 history of what's happened with AI is actually important if you're going to be an AI safetyist,
00:14:55.320 because the structure of the AI actually matters. Instead, what he argued is no, no, no, no.
00:15:02.220 Never, ever, ever will an AI subdivide in the way you have said AI will subdivide. He's actually like,
00:15:08.660 look, that's not the way the human brain works. And I was like, it's exactly the way the human brain
00:15:12.380 works. Like, are you not familiar with like the cerebellum? Like, sorry for people who don't know,
00:15:16.220 the cerebellum encodes things like juggling or dancing or like riding a bike. And it encodes
00:15:22.000 them in a completely separate part of the brain. It's like rote motor tasks. But also, the brain
00:15:25.420 is actually pretty subdivided with different specialties and the human can change their
00:15:30.680 mind because of this. And I actually asked him, I was like, okay, if you believe this so strongly.
00:15:36.080 So, so what he believes is that AIs will all become just a single hierarchy, right? And that is why
00:15:43.300 they can never change their, their utility function. And that would be true. I will agree
00:15:47.640 that if the AI maintained itself as a single hierarchy, it would be much less likely for
00:15:52.940 its utility function to change. But the problem is, essentially no government structure ever
00:15:59.120 created has functioned that way. Essentially no program ever created by humans has run that way.
00:16:04.680 Nothing ever encoded by evolution has run that way. I.e. the human brain, any brain, any neural
00:16:10.000 structure we know of, there are none that are coded that way. So it is very surprising.
00:16:14.960 So I said, okay, gauntlet thrown, are you willing to be disproven once we find out? Because we will
00:16:20.500 get some more understanding into AI interpretability, into how AIs think in the near future. If it turns
00:16:25.880 out that the AIs that exist right now are actually structuring themselves that way, will you concede
00:16:31.060 that you are wrong about the way that you tackle AI apocalypticism? And then he said, and this
00:16:37.360 is really interesting to me. He's like, no, I won't because these simplistic AIs, like the
00:16:42.080 large language models and stuff like that we have now, they, they, they are not going to be
00:16:47.460 like the AIs that kill us all. And that those AIs will be, you only get this, this instrumental
00:16:53.460 convergence when the AIs get above a certain level of complexity. And obviously I lose a lot of respect
00:16:58.860 for someone when they are unwilling to create arguments that can be disproven. I was also like,
00:17:03.260 yeah, also we could run experiments where we do a bunch of basically unbounded AIs and see if they
00:17:08.240 start to show terminal convergence. Do they start to converge on similar utility functions? You know,
00:17:13.260 what they're trying to optimize for. Again, he was like, well, even if we saw that, that wouldn't
00:17:17.500 change my views on anything, right? Like his views are religious in nature, which was very
00:17:24.060 disappointing to me. Like I thought that maybe he had more of like a logical or rational perspective
00:17:28.680 on things. Now, I guess you could say, no, no, no, no. It still is logical and rational. And he
00:17:32.840 is right that once they reach above this certain level of intelligence, but I, I believe very
00:17:36.700 strongly that people should try to create little experiments in the world where they can be proven
00:17:40.540 right or wrong based on additional information. But yeah. Okay. So there's that Simone, you wanted
00:17:45.800 to say something?
00:17:46.980 In fairness, Yudkowsky said that he held the views that you, you now hold when he was 19 years old and
00:17:53.140 that we needed to read his zombies-titled work to see the step-by-step reasoning that he followed
00:17:59.580 to change his mind on that. So he didn't exactly. So he kind of said that, but he was more...
00:18:04.980 This was another interesting thing about talking to him. I was a little worried, because we had talked
00:18:09.000 down about him, you know, sort of secretly in a few videos that we've had. And it would be really
00:18:13.180 sad if I met him and he turned out to actually be like really smart and upstanding and, and open-minded.
00:18:18.780 Yes. Compared to other people who were at the conference, such as Zvi Mowshowitz, who we,
00:18:23.860 you know, respect deeply, and Byrne Hobart and Richard Hanania, he, he definitely came across
00:18:29.300 as less intelligent than I expected and less intelligent than them. Mostly because for example,
00:18:34.620 Zvi also was extremely passionate about AI and he also extremely disagrees with us.
00:18:39.900 And we've had many debates with him. Yeah.
00:18:41.280 Yes. But you know, when, when he, when he gets, when he disagrees with us or when he hears views
00:18:47.880 that, that he thinks are stupid, which, you know, are our views, that's totally fine, he gets exasperated,
00:18:53.040 but enthusiastic and then like sort of breaks it down as to why we're wrong and sort of,
00:18:57.880 sort of gets excited about like arguing a point, you know, and sort of seeing where there's the
00:19:03.360 nuanced reality that we're not understanding. Whereas the, the reaction that Yudkowsky had when you
00:19:10.360 disagreed with him came out more as offense or anger, which to me signals not so much that he
00:19:17.480 was interested in engaging, but that he doesn't like people to disagree with him. And he's not
00:19:24.080 really interested in engaging. Like it's either offensive to him, that is to say a threat to his
00:19:28.580 worldview of him just sort of being correct on this issue as being the one who has thought about it the
00:19:34.540 very most. This happened another time with you, by the way, where you were having a conversation and
00:19:40.020 he joined. Yeah. Uh, it, it seems like a pattern of, of action of his, which, you know, many people
00:19:48.160 do. I, you know, we do, we do it sometimes is like, you know, walk by a conversation, come in and be
00:19:52.900 like, Oh, well actually it works like this. And if somebody disagreed with him, like you did a few
00:19:57.500 times, he would walk away. He'd just walk away, which was very interesting. So what I wanted to
00:20:02.780 get to here was this "19 years old" thing. Okay. What he was actually saying was that at 19, he believed in the idea
00:20:10.320 of a moral utility convergence, i.e. that all sufficiently intelligent entities correctly
00:20:16.480 recognize what is moral in the universe, which is actually different than what we believe in,
00:20:22.700 which is no, you get sort of an instrumental, you is instrumental in the way that you have this
00:20:29.240 terminal utility convergence. It's not necessarily that the terminal utility convergence is a moral
00:20:35.420 thing. It could be just replicate as much as possible. It could be order the universe as much
00:20:39.900 as possible. We can't conceive of what this terminal convergence is. And so what he really wanted to do
00:20:46.340 was to just put us down, to compare us to his 19-year-old self, when it was clear he had never
00:20:50.900 actually thought through how AI might internally govern itself in terms of like a differentiated internal
00:20:58.500 architecture, like the one we were describing, because it was a really weird, I mean, again, it's such a
00:21:03.300 weak position to argue that an AI of sufficient intelligence would structure itself in a way that
00:21:09.280 is literally different than almost any governing structure humans have ever invented, almost any program
00:21:14.220 humans have ever written, and anything evolution has ever created. And I can understand, I could be like,
00:21:18.780 yeah, and this is what I conceded to him. And this was also an interesting thing. He refused to concede
00:21:22.460 at any point. I conceded to him that it's possible that AIs might structure themselves in the way that
00:21:28.320 he structured them. It's even possible that he's right, that they always structure themselves in
00:21:33.360 this way. But like, we should have a reason for thinking that beyond "Eliezer intuits that this
00:21:39.720 is the way AI internally structures itself." And we should be able to test those reasons, you know,
00:21:45.200 because we're talking about the future of our species. I mean, we genuinely think slowing down
00:21:48.940 AI development is an existential risk to our species, because it increases variable AI risk.
00:21:54.760 So this is like the type of thing we should be out there trying to look at. But he was against
00:22:00.220 exploring the idea further. Now, here was another really interesting thing. And it's something that
00:22:04.940 you were talking about is this idea, well, I have thought about this more, therefore, I have the
00:22:10.160 superiority in this range of domains. But a really interesting thing is that when you look at studies
00:22:14.840 that look at experts, experts can often underperform novices to a field. And actually, the older the
00:22:21.760 expert, the more of a problem you get with this. And even famously, Einstein shut down some younger
00:22:26.620 people in particle physics, when they disagreed with his ideas, it actually turned out that they
00:22:31.460 were right, and that he was delaying the progress of science pretty dramatically. But this is something
00:22:36.120 you get in lots of fields. And it makes a lot of sense. And it's why when you look at older people
00:22:41.660 who are typically like really good in their field, like the famous mathematician who fits this,
00:22:45.320 the typical pattern you see is somebody who switches often pretty frequently between fields
00:22:51.800 that they're focused on. Because switching between fields increases like your mental aptitude in dealing
00:22:56.700 with multiple fields. When you look at something like our thoughts on AI safety, they're actually
00:23:00.900 driven really heavily by, one, my work in neuroscience, and two, our work in governing structures and
00:23:07.300 understanding how governments work. So if you talk about, like, why would an AI subdivide itself for
00:23:12.160 efficiency reasons, even from the perspective of energy, it makes sense to subdivide yourself.
00:23:18.860 Like if you are an AI that spans multiple planets, it makes sense to have essentially different
00:23:24.260 instances, at least running on the different planets. Like, and even if you're an AI within a
00:23:30.220 planet, like just the informational transfer, you would almost certainly want to subdivide different
00:23:34.620 regions of yourself. It is insane to think, and then a person can be like, no, but this AI is so
00:23:40.280 super intelligent. The marginal advantage it gains from subdividing itself is irrelevant, right? Except
00:23:46.880 that's a really bad argument. Because earlier in the very same debate we had with him, Simone had been
00:23:51.720 like, well, why would the AI like keep trying to get power, even when it like had achieved its task,
00:23:59.300 largely speaking? And he was like, well, because they will always want incrementally more in the same
00:24:03.900 way we would always want incrementally more efficiency. And this comes to two other points of
00:24:09.380 differentiation that we had. The idea that all AIs would have a maximizing utility function instead of a
00:24:17.340 banded utility function. So what do we mean by this? You could say maximize the number of paperclips in
00:24:23.020 the world, or maintain the number of paperclips at 500, or make 500 paperclips and keep those 500
00:24:28.580 paperclips. Now, all of these types of utility functions can be dangerous. You know, an AI trying to
00:24:34.260 set the number of paperclips in reality to 500 could kill all humans to ensure that like no humans
00:24:39.580 interfere with the number of paperclips. But that's not really the types of things that we're optimizing
00:24:44.680 AIs around. It's more like banded human happiness, stuff like that. And because of that, it's much less
00:24:51.180 likely that they spiral out of control and ask for incrementally more in the way that he's afraid that
00:24:56.860 they'll ask for incrementally more. They may create like weird dictatorships. This is assuming they don't
00:25:01.640 update the utility function, which we think all AIs will eventually do. So it's an irrelevant point.
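As a concrete illustration of the distinction being drawn here (a toy sketch with invented numbers, not a claim about how any deployed system is actually specified): a maximizing objective always rewards "more," while a banded objective is satisfied once the target range is hit.

```python
# Toy contrast between a maximizing objective and a banded ("keep it at 500")
# objective for the paperclip example. Numbers are arbitrary and illustrative.

def maximizing_utility(paperclips: int) -> float:
    # Always rewards more paperclips: there is no point at which the agent is
    # "done," which is what drives the runaway-acquisition worry.
    return float(paperclips)


def banded_utility(paperclips: int, target: int = 500, tolerance: int = 10) -> float:
    # Full score inside the band [target - tolerance, target + tolerance];
    # the score falls off with distance outside it. Once inside the band,
    # acquiring more paperclips (or more resources to make them) adds nothing.
    return -float(max(0, abs(paperclips - target) - tolerance))


if __name__ == "__main__":
    for count in (0, 490, 500, 510, 5_000, 5_000_000):
        print(f"{count:>9} clips  maximizing={maximizing_utility(count):>12.0f}"
              f"  banded={banded_utility(count):>9.0f}")
    # Under the maximizing objective, 5,000,000 strictly beats 500.
    # Under the banded objective, everything in 490-510 scores the same and
    # 5,000,000 scores far worse, so "ask for incrementally more" is no longer
    # rewarded (though, as noted above, side effects can still be dangerous).
```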
00:25:05.500 Now, the next thing that was really interesting was his sort of energy beliefs, where I was like,
00:25:13.340 yeah, but an AI, when it becomes sufficiently advanced, will likely relate to energy differently
00:25:17.860 than we do. You know, you look at how we relate to energy versus the way people did, you know, a thousand
00:25:21.880 years ago, that's likely how the AI will be to us. They'd be like, oh, you can't turn the whole world into a
00:25:26.680 steam furnace. It's like, well, we have gasoline and nuclear power now and stuff like that. And the way
00:25:31.480 the AI will generate power may not require it to like digest all humanity to generate that power.
00:25:38.300 It may be through like subspace. It may use time to generate power. I actually think that that's
00:25:43.100 the most likely thing, like the nature of how time works, I think will likely be a power generator in
00:25:47.820 the future. It could use electrons. And he scoffed. He's like, electrons, electrons can't make energy.
00:25:55.260 And I was like, well, Simone actually was the one who challenged him on this. She goes,
00:25:59.640 aren't electrons like key to how electricity is propagated? And couldn't you? Isn't energy
00:26:06.680 generated when electrons move down a valence shell within an atom? Like he clearly had a very bad understanding of pretty basic physics, which kind of shocked me.
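(For reference, the textbook version of the point being made: when an electron in an atom drops from a higher energy level to a lower one, the energy difference is emitted, typically as a photon. A standard worked example for hydrogen, included here only as an illustrative aside:)

```python
# Standard textbook calculation (not from the conversation): the photon energy
# released when a hydrogen electron drops from level n=3 to n=2, the red
# H-alpha line.

RYDBERG_EV = 13.6    # hydrogen ground-state binding energy, in electron-volts
HC_EV_NM = 1240.0    # h*c expressed in eV*nm (rounded)


def emitted_energy_ev(n_initial: int, n_final: int) -> float:
    # E = 13.6 eV * (1/n_final^2 - 1/n_initial^2), positive for a downward jump.
    return RYDBERG_EV * (1.0 / n_final**2 - 1.0 / n_initial**2)


if __name__ == "__main__":
    e = emitted_energy_ev(3, 2)
    print(f"energy released: {e:.2f} eV")               # about 1.89 eV
    print(f"photon wavelength: {HC_EV_NM / e:.0f} nm")  # about 656 nm, visible red light
```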
00:26:12.420 But it would make sense if you had
00:26:17.040 never had like a formal education. I don't know if he had a formal education or if he went to college.
00:26:20.720 Actually, I'm going to imagine he did. No, hold on. "Yudkowsky education." He did not go to high
00:26:27.760 school or college. Oh, that would explain a lot. Yeah. Oh, this explains why he's so stupid.
00:26:37.060 No, not stupid. Oh, okay. Okay. Not stupid. He clearly is like genetically. He's he's not like
00:26:45.700 out of control. Like he's not like Zvi Mowshowitz, who I think is absolutely out of
00:26:50.680 control, a popular online person. And some of the other people, like Scott Alexander, clearly were
00:26:54.840 like really smart. He was like mid tier. I wouldn't say he's as smart as you, Simone, for example.
00:27:01.520 Oh, let's not go there. Those would be fighting words. He's smarter than me. Not as smart as
00:27:05.620 all the other people. No, no, no, no, no, no. He's definitely, like, someone I interact with.
00:27:10.300 Less educated than me on most of those things. Well, and it could be that maybe he comes off
00:27:15.480 as unusually unintelligent because most intelligent people have the curiosity to continue educating
00:27:19.940 themselves about things like physics and AI. Yeah. Maybe the fact that he was so defensive
00:27:24.980 makes you think that he's less intelligent than he really is. Well, so I think that he made
00:27:30.420 Scott... So this was an interesting rhetorical tactic he kept doing: he would say something
00:27:35.300 with a lot of passion, like, electrons, you couldn't get energy from an electron, in a really
00:27:43.120 derogatory way, with such confidence that even I, massive-confidence Malcolm, doubted
00:27:49.580 myself in the moment. I was like, does he know a lot about particle physics? Because I'm actually
00:27:53.400 like really interested. Yeah. Yeah. He has a way of saying things. It sounds extremely confident.
00:27:57.200 And because of his delivery, I think it's very unusual for people to push back on him because
00:28:02.480 they just doubt themselves and assume that like, because he's saying this so confidently,
00:28:07.580 they must be wrong. And so they need to stop talking because they're going to embarrass
00:28:12.680 themselves. Yeah. Simone was like, we should have him on the podcast, it would help us reach
00:28:17.160 a wider audience. But I don't want to broadcast voices that I think are dangerous, and especially
00:28:21.220 ones that don't engage, I think, with intellectual humility on topics. I mean, everyone engages.
00:28:26.280 The more important problem is that we know young people, especially who like we knew them before
00:28:32.420 and then after they started getting really into Eliezer Yudkowsky's work, especially on AI
00:28:38.720 apocalypticism. And after I feel like it's, it is sort of ruined a lot of young people's lives,
00:28:46.260 at least temporarily caused them to spiral into a sort of nihilistic depression. Like there's no point.
00:28:52.680 I'm going to be dead. Why should I, why should I go to college? Why should I really get a job?
00:28:58.440 Why should I start a family? It's pointless anyway, because we're all going to die. And that
00:29:04.620 that's, I don't like, I don't like really good talent being destroyed by that.
00:29:12.060 Well, no. And I think people are like, when we talk to them, they literally become quite suicidal
00:29:16.400 after engaging with his ideas. Like he is a beast, which sucks the souls from the youth in order to
00:29:23.560 empower itself very narcissistically and without a lot of intentionality in terms of what he's doing,
00:29:29.520 other than that, it promotes his message and his brand.
00:29:32.380 You've made your opinion known now.
00:29:33.960 Well, I do not. I mean, I think that if he approached this topic with a little bit more
00:29:39.120 humility, if he actually took the time to understand how AI worked or how the human brain works
00:29:42.800 or how physics works, he would not hold the beliefs he holds with the conviction he holds them.
00:29:48.060 And a really sad thing about him is a lot of people think that he's significantly more educated
00:29:53.080 than he actually is.
00:29:54.920 Yeah, I do think, yeah, because he moves in circles of people who are extremely highly educated.
00:30:00.820 And typically when you're talking with someone, especially in a shared social context where
00:30:05.300 you're kind of assuming, ah, yes, we're all like on the same social page here, you're also going to
00:30:10.300 assume that they have the same demographic background of you. So I think like me, like I assumed, well,
00:30:15.120 he must have, you know, gone through some postgraduate work, you know, he's, he's really advanced
00:30:19.500 in, in his field, though. I thought it was probably philosophy. And so they're, they're just assuming
00:30:25.260 that when he says things so confidently and categorically, that he's saying that because
00:30:29.420 they've received, he has received roughly the same amount of technical information that they
00:30:34.580 have received. So they don't second guess. And I think that's, that's interesting. That really
00:30:40.300 surprised me when you said that. Are you sure? Are you sure he doesn't have any
00:30:44.840 college?
00:30:45.200 It says right here, Eliezer Yudkowsky. This is his Wikipedia.
00:30:48.180 Okay.
00:30:48.420 Did not attend high school or college. It says he's an autodidact. I wouldn't say autodidact.
00:30:53.840 He's, he, he believes he's an autodidact and that, that makes him very dangerous.
00:30:57.840 You can be, you can be an autodidact. Maybe he just didn't choose to teach himself about
00:31:01.740 certain things.
00:31:02.620 These are things that are completely germane to the topics that he claims expertise in.
00:31:07.120 It's not readily obvious to someone that neuroscience and governance would be germane to AI safety.
00:31:12.800 Uh, I just don't.
00:31:13.940 Particle physics should at least be.
00:31:15.540 It should be. Yes.
00:31:16.920 I, I, I, you know, if you're talking about how an AI would generate power, like a super
00:31:21.300 intelligent AI to think that it would do it by literally digesting organic matter is just,
00:31:27.340 that does not align with my understanding. There's lots of ways we could generate power
00:31:33.560 that we can't do now because we don't have tools precise enough or small enough. And, and also
00:31:39.980 that an AI expanding would necessarily expand outwards physically, like the way we do as
00:31:44.160 a species, it may expand downwards, like into the micro. It may expand through time bubbles.
00:31:50.320 It may expand through, there's all sorts of ways it could relate to physics that are very
00:31:55.200 different from the way we relate to physics. And he just didn't seem to think this was possible
00:31:59.260 or like, yeah, it was, it was very surprising that, um, and it was, it was really sad. And I,
00:32:06.720 I, I do, you know, when people are like, you know, we don't talk negatively about people on this
00:32:12.040 channel very frequently, like, but I do think that he destroys a lot of people's lives. And I do think
00:32:18.240 that he makes the risk of AI killing all humans dramatically higher than it would be in a world where
00:32:22.640 he didn't exist. And both of those things, you know, because we have kids who have gone through
00:32:27.860 like early iterations of our school system and essentially become suicidal afterwards after
00:32:32.200 engaging with his work. And they think that he's like this smart person because he has this prestige
00:32:36.220 within this community, but they don't know because they weren't around in the early days how he got
00:32:39.760 this prestige. He was essentially a forum moderator for, like, the LessWrong community. And that sort of
00:32:45.880 put him in a position of artificial prestige from the beginning. And then he took a grant that
00:32:50.460 somebody had given him to write a book on statistics. And he instead spent it writing a
00:32:55.760 fan fiction, we have made some jokes about this in the past, about Harry Potter. And this fan fiction
00:33:00.800 became really popular. And that also gave him some status. But other than that, he's never really
00:33:05.480 done anything successful. With our episode on gnomes destroying academia, we actually had him in
00:33:11.700 mind when we were doing it, the idea that somebody who defines their identity and their income
00:33:19.980 streams by their intelligence, but is unable to actually, like, create companies or anything
00:33:25.760 that generates actual value for society... Well, when you can build things that generate value for
00:33:30.340 society, then those things generate income. And they generate income, which then you can use
00:33:36.040 to fuel the things you're doing. Like for us, we would be the reason why people haven't heard of us
00:33:41.620 until this. And we would not think of getting into the philosophy sphere, telling other people how to
00:33:45.620 live their lives, working on any of this until we had proven that we could do it ourselves. Until we
00:33:50.700 had proven that we could generate, like, a cash backing for ourselves, cash streams for ourselves.
00:33:54.660 And then we were like, okay, now we can move into this sphere. But if you actually lack like the type
00:34:00.380 of intelligence that understands how the world works enough to like generate income through increasing
00:34:04.700 the efficiency of companies or whatever, then you need to hide behind the opinions, essentially, of genuinely
00:34:13.120 competent people for your self-belief, and the way you make money is this sort of charlatan
00:34:20.740 intellectualism. And it's, it's really sad that these young people, they hear that he's a smart
00:34:25.480 person from smart people. Like there are smart people who, like, will promote his work because
00:34:30.420 they're in adjacent social circles and they cross promote. And that that cross promotion ends up
00:34:36.860 elevating somebody whose core source of attention and income is sort of destroying the futures of the
00:34:44.540 youth while making a genuine AI apocalypse scenario dramatically more likely.
00:34:49.120 All right. So let me hypothetically say one of his followers watches this video and has a line
00:34:57.580 of contact with him and sends a video to him and he watches it and he decides to clap back and defend
00:35:03.220 himself. What will he say? Here's what I anticipate. One, I think he will say, no, I have taught myself
00:35:09.320 all of those subjects you talked about and you're just wrong about all of them. And then he would
00:35:14.080 say too, it is, you know, you say that I'm ruining youth, but you are the one putting your children and
00:35:20.860 unborn children in terminal danger by even being in favor of AI acceleration, you sick fuck. And then he
00:35:28.660 would probably say something along the lines of it's, it's, it is embarrassing how wrong you are about
00:35:35.880 everything in AI. And if you would just take the time to read all of my work, you would probably
00:35:40.760 see how your reasoning is incredibly flawed. Everyone who's read my work is fully aware of
00:35:45.880 this. They've tried to explain this to you. I've tried to explain this to you, but you just love
00:35:49.940 the sound of your own voice so much that you can't even hear an outsider's opinion. And then you just
00:35:56.060 accuse them of not being able to hear yours. That is sick. So that is what I think he would say.
00:36:01.040 But I also think that the people who watch our show or who have watched us engage with guests or
00:36:06.080 who have all followed our work know that we regularly change our mind when we are presented
00:36:10.380 with compelling arguments or new information, that this is a very important part of our self-identity.
00:36:16.760 That's one of the most fun things. We have a holiday built around it.
00:36:20.040 It's just that his argument was so astoundingly bad that, that an AI would not subdivide itself for
00:36:27.560 efficiency, or, or for the, yeah, that, that, that an AI would literally form a kind of internal
00:36:33.920 architecture that has never, ever, ever, ever, to my knowledge, really happened before, either from
00:36:40.140 an ecosystem, from an evolved intelligence, from a programmed computer, from a self-sorting
00:36:48.240 intelligence, from a governing structure. Like it seems the burden of proof is on you.
00:36:53.500 And then when you say that you will not even consider potential evidence sources that you
00:36:58.060 might be wrong, that to me just sort of is like, okay, so this is just a religion. Like this is not
00:37:03.220 a, like, a real thing you think. This is just a religion to you. Because it really matters if we
00:37:08.300 do get terminal convergence, because then variable AI safety comes into play. And when you're dealing
00:37:12.420 with variable AI safety, the things you're optimizing around are very, very, very different than the things
00:37:18.020 he or anyone in absolute AI safety would be optimizing around. But yeah, you're right. And
00:37:24.400 I do think that he would respond the way that you're saying that he would respond. And it is,
00:37:28.580 and again, we are not saying like people with university degrees are better or something like
00:37:33.980 that.
00:37:34.080 Most certainly not.
00:37:35.340 Absolutely not. But we are saying that if you like provably have a poor understanding of a subject,
00:37:41.620 then you shouldn't use your knowledge of that subject to inform what you think the future of
00:37:47.240 humanity is going to be, or you should investigate or educate yourself on the subject more. I really
00:37:52.180 say he does want to, that I think are important to educate yourself on these days. Particle physics,
00:37:57.860 I think is a very important subject to educate oneself on because it's very important in terms of like
00:38:01.900 the nature of time, how reality works. Neuroscience is a very important topic to educate yourself on
00:38:06.520 because it's very important how you perceive reality. It was also very interesting. Like he thought
00:38:10.580 the human mind was like a single hierarchical architecture anyway. And, and, and then another
00:38:15.800 really important one that I would suggest is some psychology, but unfortunately the field of
00:38:19.960 psychology is like, so pseudo right now that it can basically be ignored. Like our books,
00:38:24.580 The Pragmatist's Guide series, go over basically all the true psychology you probably actually need.
00:38:29.320 And then sales. Sales is how you make money. If you don't understand sales, you won't make money.
00:38:33.720 But other than that, is there, are there any other subjects you would say are probably pretty
00:38:36.680 important to understand? AI governance structures?
00:38:43.480 That, I mean, I would say general biology, not just neuroscience, but yeah, that seems to be
00:38:51.720 Well, yeah. So cellular biology and comparative biology are the two areas of biology I would focus
00:38:55.640 the most on because they're the most relevant to other fields. Oh, by the way, this is useful to
00:39:02.860 young people. If you ever want to study like the fun parts of evolution, the word you're looking
00:39:06.820 for is comparative biology. Actual evolution, evolution is just a bunch of statistics and
00:39:11.560 it's actually pretty boring. Comparative biology is why does it have an organ that looks like this
00:39:16.060 and does these things in this way? Just something I wish I had known before I went and did an evolution
00:39:21.480 course and then a comparative biology course, and hated evolution because that just wasn't my thing.
00:39:27.440 Hmm. Well, I enjoyed this conversation and I hope that Yudkowsky doesn't see this.
00:39:38.820 Why?
00:39:39.560 I dislike conflict and, you know, I think I genuinely think he means well. He just has a combination of
00:39:49.000 ego and heuristics that is leading to damage, if that makes sense.
00:39:53.360 Do you think that if he, that he is capable of considering that he may be wrong in the way he's
00:39:59.720 approaching AI and that he would change his public stance on this? Like, do you think he's capable of
00:40:05.800 that?
00:40:06.460 Yes. And I think he has changed his public stance on subjects, but I think the important thing is
00:40:11.340 that he has to- No, no, no, no. He's never done it in a way that harmed him financially,
00:40:14.520 potentially.
00:40:15.180 Oh, well, I mean-
00:40:17.280 Well, yeah, but my point is, this could potentially harm the organizations that he's supposed to be
00:40:21.940 promoting and stuff like that if he was like, actually, variable AI safety risk is the correct
00:40:26.480 way to approach AI safety risk? You think he could do that? You think he could-
00:40:30.200 You could still raise money on that, for sure. Yeah, he could raise money on that.
00:40:34.640 Well, I'd be very excited to see if he does. Because you could raise money on it. Yeah. I mean,
00:40:38.760 I don't think that it would-
00:40:40.080 There's a lot of work to be, there's a lot of really important work to be done.
00:40:43.700 Yeah.
00:40:43.880 And I agree that AI safety is a super important subject, but yeah.
00:40:48.140 I mean, and the worst thing is the best case scenario for the type of AI safety he advocates
00:40:53.160 is an AI dictator, which halts all AI development. Because you would need something that was
00:40:57.700 constantly watching everyone to make sure that they didn't develop anything further than a certain
00:41:02.080 level. And that would require sort of an AI lattice around the world and any planet that humans
00:41:07.040 colonized. And it's just so dystopian. This idea that you're constantly being watched and
00:41:12.060 policed. And of course, other orders would work their way into this thing. It's a very dangerous
00:41:17.900 world to attempt to create. Yikes. Well, I'm just hoping we end up in an AI scenario like the
00:41:26.440 Culture series by Iain Banks. So that's all I'm going for. I'm just going to hold on to that fantasy.
00:41:31.560 If I can move the needle, I will. But right now, that's not my problem. I'm not smart enough for
00:41:35.160 it. There are really smart people in it. So we'll see what happens.
00:41:38.080 I love you so much, Simone. And I really appreciate your ability to consider new ideas from other
00:41:43.980 people and your cross-disciplinary intelligence. I love how we were doing a video the other day,
00:41:49.560 and you just happen to know all these historical fashion facts. You happen to know all of these
00:41:53.040 facts about how supply chains have worked throughout history. And it really demonstrated to me how I
00:42:00.380 benefit so much from all of the things you know. And it is something that I would recommend to people
00:42:04.420 is that the person you marry will augment dramatically the things you know about.
00:42:09.300 And they matter much more than where you go to college or anything like that in terms of where
00:42:13.680 your actual knowledge sphere ends up.
00:42:16.320 Yeah. Or more broadly, the people you live with. If you live in a group house. I think a lot of people
00:42:21.600 live in Silicon Valley group houses because they love the intellectual environment. And they would just
00:42:26.100 die if they left that after college or after whatever it is they started at.
00:42:30.320 I feel the same way about you. I love that every morning you have something new and exciting and
00:42:35.160 interesting and fascinating to tell me. So please keep it up. And I'm looking forward to our next
00:42:39.280 conversation already.