This is a response to an article by Scott Alexander (of Astral Codex Ten, formerly Slate Star Codex) about arguments against AI apocalypse. In it, he lays out his case, but he misses one of the most fundamental points in the case against AI doom: AI is not going to destroy the world as we know it.
00:17:09.360Christians and Jews. You just don't see big AI apocalyptic movements in Japan or Korea or China
00:17:16.120or India. They're just not freaking out about this in the same way. And keep in mind, I've made the
00:17:20.560table very big here. It's not like I'm just saying, oh, you're not seeing it in Japan. You're not seeing
00:17:24.660it in half the world, a half that is not prone to these types of apocalyptic panics. That is really big
00:17:32.380evidence to me. Okay. That's point one. Point two is it has all of the characteristics of the fake
00:17:40.680moral panics, historically speaking, and none of the characteristics of the accurate panics,
00:17:45.740historically speaking. But I'm wondering if you're noticing any other areas where there is
00:17:50.960congruence between the moral panics that turned out accurate and the ones that didn't.
00:17:58.280The biggest theme to me is just invasion versus change. Like a foreign agent entering seems to be
00:18:05.560a bigger risk than something fundamentally changing from a technological standpoint, which is not what
00:18:10.880I expected you to come in with. So this is surprising to me. Yeah. Okay. So if we were going
00:18:17.660to modify AI risk to fit into the mindset of the moral panics that turned out to be correct,
00:18:25.800like the apocalyptic claims that turned out to be correct, you would need to reframe it. You'd need
00:18:30.220to say something like this, and it would fit the correct predictions, historically speaking: if we stop AI
00:18:37.280development and China keeps on with AI development, China will use that AI to subjugate us and eradicate
00:18:45.300a large portion of our population. Yeah. That would have a lot in common with the types of moral
00:18:51.480panic predictions that turned out accurate. AI will take people's jobs. AI will
00:19:00.080destroy our culture. Or AI will kill all people. These feel very much like the historically incorrect ones.
00:19:08.040But I think you are underplaying something, which is that while these technologies,
00:19:14.240the industrial revolution the Luddites freaked out about, the printing
00:19:18.600press people freaked out about, did not lead to the fall of civilization as expected, they did lead to fundamental changes.
00:19:25.640And AI will absolutely lead to fundamental changes in the way that people live and work.
00:19:31.020I don't argue that. Have we ever argued that AI is not going to fundamentally change human civilization?
00:19:36.320We have multiple episodes on this point. Okay. Yeah. We would say it's going to fundamentally
00:19:40.920change civilization. It's going to fundamentally change the economy. It's going to fundamentally
00:19:44.100change the way that we even perceive humanity and ourselves. None of that is stuff that we are
00:19:49.560arguing against. We are arguing against the moral panic around AI killing everyone and the need to
00:19:56.640delay AI advancement over that moral panic. Yeah. And that is fair. And the point here is that
00:20:05.540you can actually learn something by correlating historic events. And it is useful to correlate
00:20:13.200these historic events to look for these patterns, which I find really interesting.
00:20:22.940So it makes sense: like with the industrial revolution, like with the spinning jenny, whenever you see
00:20:27.980something that is going to create like an economic and sociological jump for our civilization,
00:20:34.920there is going to be a Luddite reaction movement to it. Never historically has there been a
00:20:41.000technological revolution without some large Luddite reaction. And it's not even that weird,
00:20:47.900because if you actually look historically, Luddite movements often spread really well within the
00:20:52.960educated, non-working bourgeoisie; that group just seems really susceptible to Luddite
00:20:59.420panics. But I can tell you what: growing up, I never expected the effective altruist community and
00:21:04.140the rationalist and singularity communities to become sort of Luddite cults like that.
00:21:08.820I also never expected many so-called rationalists to turn to things like, um, energy healing and
00:21:17.460crystals, but here we are. So that's why we need to create a successor movement. And I really do
00:21:23.480personally see the pronatalist movement as that successor, because I look at the members of the movement, like at
00:21:28.440the pronatalist conference that we went to, which is happening again this year, and a huge chunk of the
00:21:32.660people were former members of the rationalist community and disaffected rationalists. And the young
00:21:37.640people I met in the movement were exactly the profile of young person who, when I was younger, would have been
00:21:42.720early members of the rationalist and EA movements. As I said, it's a hugely disproportionately autistic movement.
00:21:47.800And so we just need to be aware of the susceptibility of
00:21:54.480these movements to, one, mystic grifters, like you saw in our episode, if people want to watch it,
00:21:59.920on the cult Leverage, or, two, if they're not mystic grifters, to forms of apocalypticism.
00:22:08.380And I should note, because people may ask: when you talk about the
00:22:12.000world fundamentally changing because of fertility collapse, how is that different from
00:22:17.220apocalypticism? We have an episode on this if you want to watch it, but the gist of the answer is that we
00:22:22.040predict things getting significantly harder and economic turnover, but not all humans dying. The nature of
00:22:28.920our predictions, and this is actually really interesting when you compare them from a historic
00:22:32.880perspective with the movements that got it wrong, is that they say that if you believe
00:22:39.020this, you need to adopt additional responsibilities, in terms of the fate of the world and in terms of yourself.
00:22:44.380Having kids is a huge amount of work. AI apocalypticism allows you to shirk responsibility,
00:22:50.320because you say the world's going to end anyway, so I don't really need to do anything other than build
00:22:54.200attention, i.e. build my own reader base or attention network around myself. That is very
00:23:00.960successful from a memetic standpoint at building apocalyptic panic, because if somebody donates to one
00:23:06.040of our charities, 90% of the money needs to go to making things better, whereas if you donate to an AI
00:23:10.400apocalypticism charity, most of the money just goes to advertising the problem itself, which is why
00:23:16.680these ideas spread. And that's also what you see historically with panics.
00:23:20.100My concern too is that for a lot of these projects that have been funded as part of X-Risk philanthropy,
00:23:28.720the only people consuming them are the EA community. So these things aren't reaching
00:23:35.060other groups. And we saw this also at one of the dinner parties we hosted. One of our guests was the
00:23:41.660leader of one of the largest women's networks of AI developers in the world. And a bunch of other
00:23:47.500people there were literally working in AI alignment. This woman had never even heard the term AI
00:23:53.340alignment. These people working in AI alignment are not reaching out to people actually working in AI.
00:23:59.740They're also not reaching broader audiences. They're all in this echo chamber
00:24:07.820within the EA and rationalist community, and they're not actually getting reach. So even if I did believe
00:24:16.540in the importance of communicating this message, I wouldn't support this community because they're not doing it.
00:24:24.940What they need is to create a network that funds attractive young women to go on dates with people
00:24:33.820in the AI space, to just live in areas where they are, and try to convince them of it as an issue. But they won't.
00:24:38.700A lot of people in it... Here's another thing that I noticed that cross-correlates between the two groups.
00:24:42.780Actually, I would love to see you apply for a grant with one of those X-Risk funds that's just: I will hire
00:24:49.900thirst traps to post on Instagram and to be on OnlyFans and to just start like...
00:24:56.540No, no, not OnlyFans. They'd be moved to the cities, because there are some cities where these companies are based and where a lot of...
00:25:02.380Yeah, no, no, no, for sure. And date them. Yes, for sure. But I just love this idea of using women.
00:25:07.420But here's the other thing that's cross-correlated across all of the incorrect panics historically, which I find very interesting and didn't notice until just now.
00:25:17.100Every one of the correct panics had something specific and actionable that you could do to help reduce the risk.
00:25:26.060Whereas for almost all of the incorrect moral panics, the answer was just to stop technological progress.
00:25:32.460That's how you fix the problem. So if you look at the correct moral panics: the Black Plague, World War II, AIDS, DDT, asbestos, cigarette smoking, Native American warnings about Europeans.
00:25:43.420In every one of those, there was an actionable thing that you needed to do: for DDT, start removing it and stop spraying it on so many crops; for AIDS, safer sex policies; stuff like that.
00:25:56.940However, if you look at the incorrect things, what are you looking at? The splitting of the Higgs: you just need to stop technological development. The industrial revolution: you just need to stop technological development. The speed of trains: you just need to stop technological development. The reading panic: you just need to stop technological development. Radio: you just need to stop technological development. The printing press: you just need to stop technological development.
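To make the cross-correlation being described here concrete, the following is a minimal sketch, in Python and purely illustrative, that tabulates only the panics named in this conversation against the feature just discussed: whether the panic came with a specific, actionable remedy rather than just "stop technological development". The lists and labels are taken from this conversation, not from any rigorous dataset.

```python
# Toy cross-tabulation of the panics named in this conversation.
# "actionable_remedy" marks whether the panic came with a specific action
# beyond "stop technological development". Purely illustrative labels.
from collections import Counter

panics = [
    # (name, turned_out_correct, actionable_remedy)
    ("Black Plague",              True,  True),
    ("World War II",              True,  True),
    ("AIDS",                      True,  True),
    ("DDT",                       True,  True),
    ("Asbestos",                  True,  True),
    ("Cigarette smoking",         True,  True),
    ("Native American warnings",  True,  True),
    ("Splitting of the Higgs",    False, False),
    ("Industrial revolution",     False, False),
    ("Speed of trains",           False, False),
    ("Reading panic",             False, False),
    ("Radio",                     False, False),
    ("Printing press",            False, False),
]

# Count how often "turned out correct" co-occurs with "had an actionable remedy".
table = Counter((correct, actionable) for _, correct, actionable in panics)
for (correct, actionable), n in sorted(table.items(), reverse=True):
    print(f"correct={correct!s:<5}  actionable_remedy={actionable!s:<5}  count={n}")
```

On this toy tabulation, every panic the speakers count as correct also had an actionable remedy, and every one they count as incorrect did not, which is the correlation the argument is pointing at.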
00:26:17.680An important point with all of these, and you could argue, actually, that this was an issue with nuclear as well,
00:26:25.940in fact, this discussion was had with nuclear, is that there was this one physicist who, one, believed that nuclear weapons wouldn't be possible,
00:26:34.440but, two, was also very strongly against censorship, because a lot of people were saying we have to stop this research,
00:26:40.920it's too dangerous. And he just strongly believed that you should never, ever censor things in physics, that that's not acceptable.
00:26:48.100And then we did ultimately end up with nuclear weapons, and that is a real risk for us.
00:26:53.440But I think the larger argument with technological development is someone's going to figure it out.
00:27:00.260And to a certain extent, it's going to have to be an arms race.
00:27:03.120And you're going to have to hope that your faction develops this and starts to own the best versions of this tech in a game of proliferation before anyone else.
00:27:16.720There's no... If you don't do it, someone else will.
00:27:20.900Yeah, and that's the other thing... Now, I haven't gone into this because this isn't what the video is about,
00:27:24.500but recently I was trying to understand the AI risk people better as part of Lemon Week,
00:27:29.320where I have to engage really heavily with steel-manning an idea I disagree with.
00:27:34.000And one of the things I realized was that a core difference between the way I was approaching this intellectually and the way they were
00:27:40.540is that I just immediately discounted any potential timeline where there was nothing realistic we could do about it.
00:27:46.440An example here would be in a timeline where somebody says,
00:27:51.420AI is an existential risk, but we can remove that risk
00:27:56.200by getting all of the world's governments to band together
00:28:00.380and prevent the development of something that could revolutionize their economies.
00:29:48.380I know very few living intellectuals where I'm like, yeah, you should really respect this person as an intellectual because they occasionally have takes beyond my own.
00:30:11.780Why do you think he didn't consider what I just laid out, which I think is a fairly obvious point: that you should be correlating these historical movements?
00:30:19.100I just think that you have a way of looking at things that is even more cross-disciplinary and first-principles than his.
00:30:35.220Sometimes. You both are very cross-disciplinary thinkers, which is one reason why I like both of your work a lot.
00:30:43.180But I think that in the algorithm of cross-disciplinary thinking, he gives a heavier weight to other materials, and you give a heavier weight to just first-principles reasoning, and that's how you come to these different conclusions.
00:31:04.820Yeah, and I also think another thing he gives a heavier weight to, and this is where I disagree with him most frequently, is things that are culturally normative in his community.
00:31:16.380Actually, you are very similar in that way, in that your opinion is highly colored by recent conversations you've had with people and recent things you've watched.
00:31:25.060So it's something that both of you are subject to.
00:31:27.240I would say that you may be even more subject to it than he is, because you interact with people less than he does on a regular basis.
00:31:44.220Yeah, actually, I would definitely admit that.
00:31:46.000Like a lot of my talk around trans stuff recently is just because I've been watching lots and lots of content in that area, which has caused YouTube to recommend more of it to me, which has caused sort of a loop.
00:31:57.440On top of that, historically, I wouldn't have cared about it that much.
00:31:59.540One thing I'll just end with, though, and I'm still not even finished reading this, but Leopold Aschenbrenner, I don't actually know how his last name is pronounced, but he is in the EA X-Risk world.
00:32:17.480He's famously one of the first people to talk about pronatalism.
00:32:20.200He just never put any money into it, even though he was on the board of FTX.
00:32:23.060He published a really great piece on AI that I now am using as my mooring point for helping me think through the implications of where we're going with AI.
00:32:34.440Seeing how steeped he is in that world and how well he knows many of the people who are working on the inside of it, getting us closer to AGI, I think he's a really good person to turn to in terms of his takes.
00:32:47.460I think that they're better moored in reality, and they're also more practically oriented.
00:32:53.220He wrote this thing called Situational Awareness: The Decade Ahead.
00:32:56.640You can find it at situational-awareness.ai.
00:33:01.880And if you look at his Twitter, if you just search Leopold Aschenbrenner on Twitter, it's the link on his profile.
00:33:11.700In terms of the conversation that I wish we were having about AI, he sets the tone of what I wish we were talking about, like how we should be building out energy infrastructure, and the immense security loopholes and concerns that we should have about, for example, foreign actors getting access to our algorithms and weights and the AI that we're developing right now, because there's very little security around it.
00:33:37.060So, yeah, I think that people should turn to his write-up.
00:33:43.380And I was just thinking, I had another idea as to why maybe I recognized this when he didn't.
00:33:49.540Because this is very much me asking: why did somebody smarter than me, or who I consider smarter than me, not see something that I saw as really obvious, and neither include it nor discount it in his piece?
00:34:01.740Of course, you would cross-correlate the instances of success with the instances of failure in these predictions.
00:34:06.280I suspect it could also be that my entire worldview and philosophy, and many people know this from our videos, comes from a memetic cloud-first perspective.
00:34:18.800I am always looking at the types of ideas that are good at replicating themselves and the types of ideas that aren't good at replicating themselves when I am trying to figure out why large groups act in specific ways or end up believing things that I find off or weird.
00:34:44.960Like, how do people become convinced of things that to an outsider seem absurd?
00:34:50.320And so when I am looking at any idea, I am always seeing it through this memetic lens first.
00:34:56.060And I think when he looks at ideas, he doesn't first filter them through the question of why, memetically, this idea would exist before he looks at the merits of the idea.
00:35:07.380Whereas I often consider those two things as being of equal standing in helping me understand how an idea came to exist and why it's in front of me.
00:35:15.360But I don't think that he has this second obsession here.
00:36:19.940That even our polygenic scores for intelligence show that you're the smarter one.
00:36:25.240Yeah, we went through our polygenic scores recently.
00:36:28.180And one of the things I mentioned in a few other episodes is that I have the face of somebody who, you know, when they were biologically developing, was in a high testosterone environment.
00:36:37.760Contrast that with Andrew Tate, which is where I often talk about it: he has the face of somebody who grew up in a very low testosterone environment.
00:36:42.940Believe it or not, when I was going through the polygenic markers, I came up in the 99th percentile on testosterone production,
00:36:48.320in the top 1% of the population for endogenous testosterone production.
00:36:54.620So, yeah, of course, when I was developing, I was just flooded with the stuff.