Based Camp - June 25, 2024


Contra Scott Alexander on AI Safety Arguments


Episode Stats

Length

44 minutes

Words per Minute

172

Word Count

7,749

Sentence Count

601

Misogynist Sentences

7

Hate Speech Sentences

12


Summary

This is a response to an article written by Astral Codex Ten's Scott Alexander about arguments against AI apocalypse. In it, he lays out his case, but it's clear that he is missing one of the most fundamental points in the argument against AI apocalypticism: that today's AI panic can be compared against the historical predictions that did and did not come true, and it looks much more like the ones that did not.


Transcript

00:00:00.000 I'm quoting from him here. Okay. One of the most common arguments against AI safety is here's an
00:00:05.420 example of a time someone was worried about something, but it didn't happen. Therefore AI,
00:00:10.460 which you are worried about also won't happen. I always give the obvious answer. Okay. But there
00:00:16.820 are other examples of times someone was worried about something and it did happen, right? How do
00:00:21.720 we know AI isn't more like those? So specifically, what he is arguing against is the claim that every 20 years or so
00:00:28.440 you get one of these apocalyptic movements, and that's why we're discounting this one.
00:00:33.180 This is how he ends the article. So people know this isn't an attack piece. This is what he asked
00:00:37.360 for in the article. He says, conclusion. I genuinely don't know what these people are thinking. I would
00:00:41.940 like to understand the mindset of people who make arguments like this, but I'm not sure I've
00:00:45.880 succeeded. What is he missing? He is missing something absolutely giant in everything that
00:00:53.720 he's laid out. And it is a very important point. And it's very clear from his writeup that this
00:00:58.140 idea had just never occurred to him. Would you like to know more?
00:01:01.000 Hello, Simone. I am excited to be here with you today. We today are going to be creating a video
00:01:10.960 reply slash response to an argument. Scott Alexander, the guy who writes Astral Codex 10 or Slate Star
00:01:19.880 Codex, depending on what era you were introduced to his content, wrote about arguments against AI
00:01:27.420 apocalypticism. What these are based around will be clear when we get into the piece, because
00:01:33.120 I'm going to read some parts of it. You should know, this is not a "Scott Alexander is not smart"
00:01:39.980 or anything like that piece. We actually think Scott Alexander is incredibly intelligent and
00:01:44.100 well-meaning. And he is an intellectual who I consider a friend and somebody whose work I
00:01:51.280 enormously respect. And I am creating this response because the piece is written in a way
00:01:57.720 that actively requests a response. It's like, why do people believe this argument when I find it to
00:02:05.580 be so weak? Like one of those, what am I missing here kind of things? Yeah. What am I missing here
00:02:09.920 kind of things? He just clearly, and I like the way he lays out his argument because it's very clear
00:02:16.020 that yes, there's a huge thing he's missing. And it's clear from his argument and the way that he
00:02:21.040 thought about it, that he's just literally never considered this point. And it's why he doesn't
00:02:27.040 understand this argument. So we're going to go over his counter argument and we're going to go over the
00:02:31.560 thing that he happens to be missing. And I'm quoting from him here. Okay. One of the most common
00:02:36.520 arguments against AI safety is, here's an example of a time someone was worried about something,
00:02:41.480 but it didn't happen. Therefore AI, which you are worried about also won't happen. I always give
00:02:47.700 the obvious answer. Okay. But there are other examples of times someone was worried about
00:02:53.200 something and it did happen, right? How do we know AI isn't more like those? The people I'm arguing
00:02:58.820 with always seem so surprised by this response as if I'm committing some sort of betrayal by destroying
00:03:05.000 their beautiful argument. So specifically, he is arguing against the form of argument
00:03:10.440 that, when we make it, sounds like: our argument against AI apocalypticism is that every 20
00:03:16.340 years or so, you get one of these apocalyptic movements. And this is why we're discounting
00:03:21.020 this movement. Okay. And I'm going to go further with his argument here. So he says, I keep trying
00:03:26.180 to steel man this argument. So keep in mind, he's trying to steel man it. This is not us saying it;
00:03:30.080 he wants to steel man it. Okay. I keep trying to steel man this argument and it keeps resisting my
00:03:34.820 steel manning. For example, maybe the argument is a failed attempt to gesture at a
00:03:40.120 principle of quote, most technologies don't go wrong, but people make the same argument with
00:03:47.600 things that aren't technologies, like global cooling or overpopulation. Maybe the argument is
00:03:53.700 a failed attempt to gesture at a principle of quote, the world is never destroyed. So doomsday
00:04:00.020 prophecies have an abysmal track record, end quote, but overpopulation and global cooling don't claim
00:04:06.080 that no one will die. Just that a lot of people will. And plenty of prophecies about mass deaths
00:04:11.500 events have come true, e.g. the Black Plague, World War II, AIDS. And none of this explains coffee. So
00:04:19.320 there's some weird coffee argument that he comes back to, but I don't actually think is important
00:04:23.880 to understand this, but I can read it if you're interested. I'm sufficiently intrigued. Okay.
00:04:29.800 People basically made the argument of, and I'm basically reading from him now: once people were worried
00:04:36.160 about coffee, but now we know coffee is safe. Therefore AI will also be safe, which is to say
00:04:41.460 there was a period where everyone was afraid of coffee and there was a lot of apocalypticism about
00:04:44.960 it. And there really was like people were afraid of caffeine for a period and the fears turned out
00:04:49.280 wrong. And then people correlate that with AI. And I think that is a bad argument, but the other type
00:04:55.000 of argument he's making here. So you can see, and I will create a final framing from him here that I
00:05:01.760 think is a pretty good summation of his argument. There is at least one thing that was possible.
00:05:08.800 Therefore super intelligent AI is also possible. And an only slightly less hostile reframing. So
00:05:15.860 that's the way that he hears it when people make this argument: there is at least one thing that was
00:05:21.140 possible, therefore super intelligent AI is also possible, and safe presumably, right? Because it's
00:05:27.480 past technologies that we're talking about. And then he says, and an only slightly less
00:05:32.100 hostile rephrasing, people were wrong when they said nuclear reactions were impossible. Therefore,
00:05:37.660 they might also be wrong when they say super intelligent AI is impossible. Conclusion, I genuinely don't
00:05:43.800 know what these people are thinking. And then he says, I would like to understand the mindset.
00:05:49.480 So this is how he ends the article. So people know this isn't an attack piece. This is what he asked
00:05:54.660 for in the article. He says, conclusion, I genuinely don't know what these people are thinking. I would
00:05:59.240 like to understand the mindset of people who make arguments like this, but I'm not sure I've succeeded.
00:06:04.640 The best I can say is that sometimes people on my side make similar arguments, the nuclear chain
00:06:09.600 reaction one, which I don't immediately flag as dumb. And maybe I can follow this thread to figure out why
00:06:15.120 they seem tempting sometimes. All right. So great. What is he missing? Actually, I'd almost take a
00:06:25.820 pause moment here to see if our audience can guess it, because he is missing something absolutely giant in
00:06:31.780 everything that he's laid out. There is a very logical reason to be making this argument. And it
00:06:38.100 is a point that he is missing in everything that he's looking at. And it is a very important point.
00:06:43.020 And it's very clear from his writeup that this idea had just never occurred to him.
00:06:47.340 Is this the Margaret Thatcher, Irish terrorists idea?
00:06:52.920 No.
00:06:54.000 Okay.
00:06:54.920 Can you think: if I was trying to predict the probability of a current apocalyptic movement
00:07:03.620 being wrong? What would I use in a historic context? And I usually don't lay out this point
00:07:09.700 because I thought it was so obvious. And now I'm realizing that to even fairly smart people,
00:07:14.380 it's not an obvious point.
00:07:17.380 I have no idea. School us.
00:07:21.700 People historically have sometimes built up panics about things that didn't happen. And then sometimes
00:07:29.380 people have raised red flags as outliers about things that did end up happening.
00:07:34.280 What we can do to find out if the current event is just a moral panic, or is it actually a legitimate
00:07:42.800 panic, is to correlate it with historical circumstances to figure out what things did the
00:07:50.140 historically accurate predictions have in common, and what things did the pure moral panics have in
00:07:57.000 common.
00:07:57.600 So what are examples of past genuine apocalypses? So like the plague? What else?
00:08:02.720 So I went through, and we'll go through the examples of, yeah.
00:08:07.840 It's history time with Malcolm Collins.
00:08:09.620 It's history time with Malcolm Collins, where people actually predicted the beginnings of
00:08:14.320 something that was going to be huge. And then times, and hold on, I should actually word this a bit
00:08:19.260 differently.
00:08:19.840 Ooh, the Industrial Revolution. That's a good one.
00:08:22.620 Simone, we'll get to these in a second, okay?
00:08:24.920 Okay.
00:08:25.240 The point being, I want to better explain this argument to people because people may still
00:08:29.760 struggle to understand, like, the really core point that he's missing. Historically speaking,
00:08:35.820 people made predictions around things that were beginning to happen within their times
00:08:42.880 becoming huge and apocalyptic events in the future that ended up in mass deaths.
00:08:49.420 We can, from today's perspective, because we now know which of those predictions fell into which
00:08:54.780 categories, correlate: one, the ways that these communities act; two, the features of the prediction;
00:09:01.800 and three, the types of things they're making predictions about, to find out, if somebody today is making a
00:09:07.840 prediction around some current trend leading to mass deaths, okay,
00:09:14.900 whether it's going to fall into the camp of false prediction or accurate prediction by correlating it
00:09:21.340 with the historic predictions, I think that the reason why, because I don't think he's a dumb
00:09:26.180 person, and he should have thought of this, like, I genuinely think, like, this is not a weird thing
00:09:30.440 to think about. I think the reason he hasn't thought about it is because he's so on the side of AI
00:09:35.380 apocalypticism being something we should focus on. He just hasn't thought about disconfirming arguments.
00:09:41.280 And when you begin to correlate AI apocalypticism with historic apocalyptic movements, it fits
00:09:48.660 incredibly snugly in the false fear category. So let's go into the historic predictions, okay?
00:09:56.860 So the times when they were accurate, all right, were predictions around the Black Plague,
00:10:02.740 predictions around World War II, predictions around AIDS, predictions around DDT, predictions around
00:10:09.140 asbestos, predictions around cigarette smoking, Native American warnings around Europeans.
00:10:15.000 Okay.
00:10:15.880 All apocalyptic predictions, which ended up becoming true. Now let's go through all of the ones that
00:10:21.460 were incorrect, that developed, freak out, almost religious communities around them.
00:10:26.380 The splitting of the Higgs boson. People thought that would cause a black hole.
00:10:29.700 Oh, I remember that. Yes.
00:10:32.480 The Industrial Revolution. That was a huge one right there.
00:10:35.720 I don't know. That did precipitate the beginning of demographic collapse.
00:10:40.160 It wasn't a problem for the reason people thought it was a problem.
00:10:42.840 Okay.
00:10:43.140 They thought no one would have any jobs anymore. That was the core fear in the Industrial
00:10:46.840 Revolution, if you remember, and we'll get more into that. The speed of trains, the reading,
00:10:53.420 we can get to more on the Industrial Revolution if you want to park it in the edge case.
00:10:57.400 The reading panic. A lot of people don't know there was a reading panic. Everyone thought that
00:11:01.620 reading would destroy people and that all of these young women were becoming addicted
00:11:05.620 to reading, and they treated it very much in the way that we do today. Yeah, there was this fear,
00:11:11.240 it was called like reading madness. A girl would get really into reading and today we just call
00:11:16.080 that being a nerdy young woman. Video game violence. And here's one I didn't know about, because
00:11:21.700 I went in to try to create as comprehensive a list as possible: the telegraph. Critics believed
00:11:27.520 that the telegraph would lead to a decline in literacy, destroy the Postal Service, and contribute
00:11:32.320 to moral decay. Oh. I think it could destroy the Postal Service eventually, but- Amazing.
00:11:37.160 Anyway, radio. Critics warn that radio would lead to a decline in literacy, encourage mindless
00:11:41.980 entertainment, and foster a culture of violence. So for those that aren't aware, no, literacy has
00:11:47.040 broadly risen since the radio was introduced. The printing press. There was significant fear that
00:11:52.400 the printing press would spread heretical ideas, misinformation. Didn't it precipitate the
00:11:57.180 Reformation? Yeah, so I guess the printing press we can put in the maybe category. Legit. Come on.
00:12:02.240 Legit. Yeah. The spinning wheel. No, not really. The printing press really only moves things forwards.
00:12:09.280 The people who were afraid of the printing press- We liked the reformation, but it still did cause it.
00:12:13.880 It was the beginning of- No, but this doesn't fall into the category of false predictions.
00:12:18.020 Oh, okay. So this is like a fascist saying, I'm afraid that other people might have access to
00:12:22.560 information, which has given me power. That's not- No. The spinning wheel. This was in the 13th
00:12:28.180 century. People thought the spinning wheel would lead to the collapse of civilization.
00:12:32.740 Then there was when coffee was introduced to Europe in the 16th century. It was met
00:12:37.200 with suspicion and resistance. Some religious and political leaders feared that coffee houses would
00:12:41.960 become centers of political dissent and immoral behavior. In 1674, the Women's Petition Against Coffee
00:12:47.660 in England claimed that coffee made men impotent and was a threat to family life. And yeah. So
00:12:54.420 what do these things have in common now that we have categorized them into these two groups? And I
00:13:00.640 think there are very loud things about the accurate predictions that you almost never see in the
00:13:06.020 inaccurate predictions, and very loud things amongst all the inaccurate predictions that you never see in the
00:13:10.880 accurate predictions. But I think that these two categories of predictions actually look very
00:13:14.800 different. Okay. Okay. So the things that turned out to be moral panics versus the things that turned
00:13:19.640 out to be accurate predictions of a future danger. People were already dying in small ways.
00:13:26.780 With the real ones. Yes. Every single time it has been an accurate prediction, whether it's the AIDS
00:13:33.080 or it's- The small batches of people dying. It's a sign that shit's about to go down.
00:13:37.440 Yeah. Yeah. But we haven't had a single AI turn rogue and start murdering people yet. Like we've
00:13:43.220 had machines break in factories. I think like a robotic arm accidentally killed someone in a Tesla
00:13:47.820 factory, but it wasn't like malicious. It wasn't like trying to kill the person.
00:13:51.920 People die in factories all the time. And of course, fewer today than ever before, probably.
00:13:55.320 Yes. This marks it clearly in the moral panic category. Okay. Okay. Ones that turned out to be
00:14:02.020 wrong are very often tied to a fear of people being replaced by machines. Yeah. Technology. It
00:14:08.800 seems that's the biggest theme is this new invention is going to ruin everything. So historically,
00:14:14.700 we've seen that has never happened. Or cultural destruction. That's the other thing that's often
00:14:18.980 claimed, which is also something we see around AI apocalypticism. Fears around cultural destruction
00:14:23.440 and jobs being taken. Interesting. And then here, and people can be like, what? Jobs are being
00:14:28.440 taken. Yes. But more jobs are created at the end of the day. What's always happened in a historic
00:14:32.520 context. And yes, like photography took jobs away from artists, but no one's now as like photography
00:14:38.040 as like a moral evil or something like that. Here's another one. The fake ones, the ones that turn out to
00:14:43.240 be wrong are usually related to technology or science. Yeah. The ones that are right are usually
00:14:49.740 related to medical concerns or actually always related to medical concerns or geopolitical
00:14:54.900 predictions. Yeah. What I was getting is that it's an infection, either of, like, people or outside
00:15:01.040 groups, like the sea peoples or Europeans or whatever, or literally a disease coming in.
00:15:06.940 Yes. And here is the final nail in the coffin for his "you cannot learn anything from this" position.
00:15:13.480 Different cultural groups react to fake predictions with different levels of susceptibility and panic.
00:15:23.140 By that, what I mean is that if you look at certain countries and cultural backgrounds,
00:15:28.640 they almost never have these moral panic freak outs when it is inaccurate.
00:15:35.020 Okay. Okay. So you're saying like you look at China and China's not shitting a brick about this thing.
00:15:40.520 Yeah. China is not very susceptible to moral panic. Most East Asian countries aren't. So India
00:15:45.100 isn't particularly susceptible. China isn't particularly susceptible. Japan isn't particularly
00:15:50.040 susceptible. And South Korea isn't particularly susceptible. They just historically have not had,
00:15:55.320 and I remember I was talking with someone once and then they came up with like some example of a
00:15:59.940 moral panic in China. And then I looked it up and it like, wasn't true. So if you're like,
00:16:05.160 no, here's some example of when this happened in China historically,
00:16:08.800 like the boxer rebellion or something like that, I'm like, no, that was not an, a moral panic. That
00:16:14.320 was a, or the opium wars, like the opium wars were an actual concern about something.
00:16:18.940 Yeah. People, it was a batches of people dying issue, which was a real problem.
00:16:25.060 So certain cultures are hyper-susceptible to apocalyptic movements;
00:16:31.140 specifically, they spread really quickly within Jewish communities and within Christian communities.
00:16:38.340 Those are the two groups that are most susceptible to this. Yeah.
00:16:42.040 Okay. Here's the problem. So you get the problem across the board here, which is the places having
00:16:50.620 the moral panics today around AI apocalypticism are 100% and nearly exclusively the communities that
00:16:59.820 were disproportionately susceptible to incorrect moral panics on a historic basis.
00:17:07.840 White Christians and Jews.
00:17:09.360 Christians and Jews. You just don't see big AI apocalyptic movements in Japan or Korea or China
00:17:16.120 or India. They're just not freaking out about this in the same way. And keep in mind, I've made the
00:17:20.560 table very big here. It's not like I'm just saying, oh, you're not seeing it in Japan. You're not seeing
00:17:24.660 it in half the world, which is not prone to these types of apocalyptic panics. Okay. That is really big
00:17:32.380 evidence to me. Okay. That's point one. Point two is it has all of the characteristics of the fake
00:17:40.680 moral panics, historically speaking, and none of the characteristics of the accurate panics,
00:17:45.740 historically speaking. But I'm wondering if you're noticing any other areas where there are
00:17:50.960 congruence in the moral panics that turned out accurate versus the ones that didn't.
00:17:58.280 The biggest theme to me is just invasion versus change. Like a foreign agent entering seems to be
00:18:05.560 a bigger risk than something fundamentally changing from a technological standpoint, which is not what
00:18:10.880 I expected you to come in with. So this is surprising to me. Yeah. Okay. So if we were going
00:18:17.660 to modify AI risk to fit into the mindset of the moral panics that turned out to be correct,
00:18:25.800 like the apocalyptic claims that turned out to be correct, you would need to reframe it. You'd need
00:18:30.220 to say something like, and this would fit correct predictions, historically speaking. If we stop AI
00:18:37.280 development and China keeps on with AI development, China will use the AI to subjugate us and eradicate
00:18:45.300 a large portion of our population. Yeah. That would have a lot in common with the types of moral
00:18:51.480 predictions or moral panic predictions that turned out accurate. AI will take people's jobs. AI will
00:19:00.080 destroy our culture. Or, AI will kill all people. These feel very much like the historic incorrect ones.
00:19:08.040 But I think you are underplaying something, which is that while these technological predictions,
00:19:14.240 Luddites freaking out about the industrial revolution, people freaking out about the printing
00:19:18.600 press, did not lead to the fall of civilization as expected, they did lead to fundamental changes.
00:19:25.640 And AI will absolutely lead to fundamental changes in the way that people live and work.
00:19:31.020 I don't argue. Have we ever argued that AI is not going to fundamentally change human civilization?
00:19:36.320 We have multiple episodes on this point. Okay. Yeah. We would say it's going to fundamentally
00:19:40.920 change the civilization. It's going to fundamentally change the economy. It's going to fundamentally
00:19:44.100 change the way that we even perceive humanity and ourselves. None of that is stuff that we are
00:19:49.560 arguing against. We are arguing against the moral panic around AI killing everyone and the need to
00:19:56.640 delay AI advancement over that moral panic. Yeah. And that is fair. And the point here is that
00:20:05.540 you can actually learn something by correlating historic events. And it is useful to correlate
00:20:13.200 these historic events to look for these patterns, which I find really interesting. So it makes sense:
00:20:22.940 like with the industrial revolution, like with the spinning wheel, whenever you see
00:20:27.980 something that is going to create like an economic and sociological jump for our civilization,
00:20:34.920 there is going to be a Luddite reaction movement to it. Never historically has there been a
00:20:41.000 technological revolution without some large Luddite reaction. And it's not even that weird,
00:20:47.900 because actually, if you look historically, Luddite movements often really spread well within the
00:20:52.960 educated bourgeoisie that was non-working. That group just seems really susceptible to Luddite
00:20:59.420 panics. But I can tell you what, growing up, I never expected the effective altruist community and
00:21:04.140 the rationalist and the singularity community to become sort of Luddite cults like that.
00:21:08.820 I also never expected many so-called rationalists to turn to things like, um, energy healing and
00:21:17.460 crystals, but here we are. So that's why we need to create a successor movement. And I really
00:21:23.480 personally do see the pronatalist movement as that, because I look at the members of the movement, like at
00:21:28.440 the pronatalist conference that we went to, and it's happening again this year: a huge chunk of the
00:21:32.660 people were former people in the rationalist community and disaffected rationalists. And the young
00:21:37.640 people I met in the movement were exactly the profile of young person who, as I said (it's a hugely
00:21:42.720 disproportionately autistic movement), when they were younger, or when I was younger, would have been
00:21:47.800 early members in the rationalist EA movement. And so we just need to be aware of the susceptibility of
00:21:54.480 these movements to, one, mystic grifters, like you had with, if people want to watch it, our episode
00:21:59.920 on the cult Leverage, or, two, if they're not mystic grifters, forms of apocalypticism.
00:22:08.380 Um, and I should note, and people should watch our episode, if they're like, when you talk about the
00:22:12.000 world fundamentally changing because of fertility collapse, like how is that different from
00:22:17.220 apocalypticism? We have an episode on this if you want to watch it, but the gist of the answer is we
00:22:22.040 predict things getting significantly harder and economic turnover, but not all humans dying. The nature of
00:22:28.920 our predictions, and this is actually really interesting when you compare it from a historic
00:22:32.880 perspective with the wrong movements, the nature of our predictions says you need to, if you believe
00:22:39.020 this, adopt additional responsibilities in terms of the fate of the world, in terms of yourself.
00:22:44.380 Having kids is a huge amount of work. AI apocalypticism allows you to shirk responsibility
00:22:50.320 because you say the world's going to end anyway. I don't really need to do anything other than build
00:22:54.200 attention, i.e. build my own reader base or attention network towards myself, which is very
00:23:00.960 successful from a memetic standpoint at building apocalyptic panic because if somebody donates to one
00:23:06.040 of our charities, 90% of the money needs to go to making things better. You donate to an AI
00:23:10.400 apocalypticism charity. Most of the money is just going to advertising the problem itself, which is why
00:23:16.680 these ideas spread. And that's also what you see historically with panics.
00:23:20.100 My concern too is a lot of these projects that have been funded as part of X-Risk philanthropy,
00:23:28.720 the only people consuming them are the EA community. So these things aren't reaching
00:23:35.060 other groups. And we saw this also at one of the dinner parties we hosted. One of our guests was the
00:23:41.660 leader of one of the largest women's networks of AI developers in the world. And a bunch of other
00:23:47.500 people there were literally working in AI alignment. This woman had never even heard the term AI
00:23:53.340 alignment. These people working in AI alignment are not reaching out to people actually working in AI.
00:23:59.740 They are not reaching. They're also not reaching audiences of just broader people. They're all in this echo chamber
00:24:07.820 within the EA and rationalist community, and they're not actually getting reach. So even if I did believe
00:24:16.540 in the importance of communicating this message, I wouldn't support this community because they're not doing it.
00:24:24.940 What they need is to create a network that funds attractive young women to go on dates with people
00:24:33.820 in the AI space to just live in areas where they are and try to convince them of it as an issue. But they won't.
00:24:38.700 A lot of people in it. Here's another thing that I noticed that's cross correlating between the two groups.
00:24:42.780 Actually, I would love to see you apply for a grant with one of those X-Risk funds of just, I will hire
00:24:49.900 Thirst Traps to post on Instagram and to be on OnlyFans and to just start like...
00:25:02.380 No, no, not OnlyFans. They'd need to move to the cities, because there are some cities where these companies are based and where a lot of...
00:25:02.380 Yeah, no, no, no, for sure. And date them. Yes, for sure. But I just, I love this idea of using women.
00:25:07.420 But here's the other thing that's cross-correlated across all of the incorrect panics historically, which I find very interesting and I didn't notice just now.
00:25:17.100 Every one of the correct panics had something specific and actionable that you could do to help reduce the risk.
00:25:26.060 Whereas almost all of the incorrect moral panics, the answer was just stop technological progress.
00:25:32.460 That's how you fix the problem. So if you look at the correct moral panics, Black Plague, World War II, AIDS, DDT, asbestos, cigarette smoking, Native American warnings about Europeans.
00:25:43.420 In every one of those, there was like an actionable thing that you needed to do. Like DDT: go start doing removal, don't have it sprayed on as many crops. AIDS: safer sex policies, stuff like that.
00:25:56.940 However, if you look at the incorrect things, what are you looking at? Like, the splitting of the Higgs, you just need to stop technological development. Industrial revolution, you just need to stop technological development. The speed of trains, you just need to stop technological development. Reading panic, you just need to stop technological development. Radio, you just need to stop technological development. Printing press, you just need to stop technological development.
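
To make the comparison exercise described above concrete, here is a minimal Python sketch of the kind of reference-class check being proposed. It is only an illustration: the feature set (deaths already occurring, technology fear versus medical or geopolitical fear, whether the only proposed remedy is halting progress), the toy labels, and the scoring rule are our own assumptions for the sketch, not anything formally defined in the episode or in Scott's piece.

from dataclasses import dataclass

@dataclass
class Prediction:
    name: str
    deaths_already_occurring: bool  # small-scale harm was visible before the panic peaked
    technology_fear: bool           # fear of a technology rather than a medical/geopolitical threat
    remedy_is_halt_progress: bool   # the only proposed fix was stopping the technology
    came_true: bool                 # historical outcome, used as the label

HISTORY = [
    Prediction("Black Plague warnings", True,  False, False, True),
    Prediction("AIDS warnings",         True,  False, False, True),
    Prediction("Asbestos warnings",     True,  False, False, True),
    Prediction("Reading panic",         False, True,  True,  False),
    Prediction("Radio panic",           False, True,  True,  False),
    Prediction("Spinning wheel panic",  False, True,  True,  False),
]

def profile(preds):
    """Average each feature over a group of predictions."""
    n = len(preds)
    return [
        sum(p.deaths_already_occurring for p in preds) / n,
        sum(p.technology_fear for p in preds) / n,
        sum(p.remedy_is_halt_progress for p in preds) / n,
    ]

def similarity(features, prof):
    """1 minus mean absolute distance: 1.0 means a perfect match to the profile."""
    return 1 - sum(abs(f - q) for f, q in zip(features, prof)) / len(prof)

accurate_profile = profile([p for p in HISTORY if p.came_true])
panic_profile = profile([p for p in HISTORY if not p.came_true])

# AI apocalypticism as the episode characterizes it: no one has been killed by a
# rogue AI yet, it is a technology fear, and the headline remedy is pausing development.
ai_features = [0.0, 1.0, 1.0]

print("similarity to accurate predictions:", similarity(ai_features, accurate_profile))
print("similarity to false moral panics:  ", similarity(ai_features, panic_profile))

On this deliberately stylized data the AI case matches the false-panic profile exactly, which is the point being argued; a serious version of the exercise would need a much larger, more carefully labeled set of historical predictions and features.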
00:26:17.680 An important point with all these, and you could argue, actually, that this was an issue with nuclear as well.
00:26:25.940 In fact, this discussion was had with nuclear: there was this one physicist who, one, believed that nuclear wouldn't be possible,
00:26:34.440 but two, also was very strongly against censorship because a lot of people were saying we have to stop this research.
00:26:40.920 It's too dangerous. And he just strongly believed that you should never, ever censor things in physics, that that's not acceptable.
00:26:48.100 And then we did ultimately end up with nuclear weapons, and that is a real risk for us.
00:26:53.440 But I think the larger argument with technological development is someone's going to figure it out.
00:27:00.260 And to a certain extent, it's going to have to be an arms race.
00:27:03.120 And you're going to have to hope that your faction develops this and starts to own the best versions of this tech in a game of proliferation before anyone else.
00:27:16.720 There's no... If you don't do it, someone else will.
00:27:20.900 Yeah, and that's the other... Now, I haven't gone into this because this isn't what the video is,
00:27:24.500 but recently I was trying to understand the AI risk people better as part of Lemon Week,
00:27:29.320 where I have to engage really heavily with steel manning an idea I disagree with.
00:27:34.000 And one of the things I realized was a core difference between the way I was approaching this intellectually and they were,
00:27:40.540 is I just immediately discounted any potential timeline where there was nothing realistic we could do about it.
00:27:46.440 An example here would be in a timeline where somebody says,
00:27:51.420 AI is an existential risk, but we can remove that risk
00:27:56.200 by getting all of the world's governments to band together
00:28:00.380 and prevent the development of something that could revolutionize their economies.
00:28:05.440 Does that not happen?
00:28:06.780 No, it's just stupid. It's a stupid statement.
00:28:09.640 Of course we can't do that.
00:28:11.600 If we live in a world where if we can't do that,
00:28:14.720 AI kills us in every timeline,
00:28:16.720 I don't even need to consider that possibility.
00:28:19.320 It's not meaningful on a possibility graph
00:28:22.320 because there's nothing we can do about living in that reality.
00:28:25.340 Therefore, I don't need to make any decisions under the assumption that we live in that reality.
00:28:30.520 It's a very relaxing reality.
00:28:32.460 Yeah. And that's what gets me is I realized that they weren't just immediately discounting impossible tasks.
00:28:39.060 Whereas I always do.
00:28:40.460 Like when people are like, you could fix pronatalism if you could give a half million grant to every parent.
00:28:44.300 I'm like, cool, but we don't live in that reality.
00:28:46.400 So I don't consider that.
00:28:48.640 Yeah.
00:28:49.100 They're like, yeah, government policy interventions could work.
00:28:51.260 You need a half million.
00:28:51.960 I'm like, yeah.
00:28:52.800 And people are like, technically we could economically afford it.
00:28:55.140 And I go, yes, but in no realistic governance scenario, could you get that passed in anything close to the near future?
00:29:03.120 I think it's just an issue of how I judge timelines to worry about and timelines not to worry about, which is interesting.
00:29:11.040 Anyway.
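
To illustrate the timeline-discounting heuristic described in this exchange, the short Python sketch below filters a set of hypothetical futures down to the ones where an action judged feasible actually changes the outcome. The world names, probabilities, and feasibility labels are invented for illustration only and are not the hosts' actual estimates.

# Keep only the timelines where some feasible action changes the outcome;
# everything else drops out of the decision, however probable or scary it is.

WORLDS = [
    # (name, probability, action_that_would_help, action_is_feasible)
    ("AI doom in every timeline",               0.10, None,                          False),
    ("AI doom unless all governments halt AI",  0.05, "global development freeze",   False),
    ("AI risk manageable with better security", 0.35, "harden labs, secure weights", True),
    ("AI broadly fine",                         0.50, None,                          False),
]

def decision_relevant(action, feasible):
    """A world is worth planning around only if a feasible action changes its outcome."""
    return action is not None and feasible

for name, prob, action, feasible in WORLDS:
    verdict = "plan for this" if decision_relevant(action, feasible) else "discount"
    print(f"{name:42s} p={prob:.2f} -> {verdict}")

The filter does not claim the discounted worlds are impossible, only that no available decision differs across them, so they carry no weight in choosing what to do.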
00:29:12.480 Love you to death.
00:29:13.360 It'd be interesting if Scott watches this.
00:29:14.880 We chat with Scott.
00:29:15.860 I'm friendly with him, but I also know that he doesn't really consume YouTube.
00:29:19.040 So I don't know if this is something that will get to him, but it's also just useful for people who watch this stuff.
00:29:25.320 And if you are not following Scott's stuff, you should be, or you are out of the cultural zeitgeist.
00:29:31.420 That's just what I'm going to tell you.
00:29:32.900 He is certainly still a figure that is well more respected than us as an intellectual.
00:29:39.360 And I think he is a deservingly respected intellectual.
00:29:44.060 And I say that about very few living people.
00:29:48.100 Yeah.
00:29:48.380 I know very few living intellectuals where I'm like, yeah, you should really respect this person as an intellectual because they have takes beyond my own occasionally.
00:29:56.620 Yeah, he is wise.
00:29:58.440 He is extremely well-read.
00:29:59.980 He is extremely clever and surrounded by incredibly clever people.
00:30:04.780 And then beyond that, I would say he disagrees with us on quite a few things.
00:30:08.000 So we have a lot to learn from him.
00:30:10.380 Actually, question, Simone.
00:30:11.780 Why do you think he didn't consider what I just laid out, which I think is a fairly obvious point: that you should be correlating these historical movements?
00:30:19.100 I just, I think that you have a way of looking at things from an even more cross-disciplinary and first principles way than he does.
00:30:35.220 Sometimes, so you both are very cross-disciplinary thinkers, which is one reason why I like both of your work a lot.
00:30:43.180 But I think in the algorithm of cross-disciplinary thinking, he gives a heavier weight to other materials, and you give a heavier weight to just first principles reasoning, and that's how you come to reach these different conclusions.
00:31:03.120 Yeah, I'd agree with that.
00:31:04.820 Yeah, and I also think another thing he gives a heavier weight to, like when I disagree with him most frequently, to things that are culturally normative in his community, he gives a slightly heavier weight to.
00:31:16.380 That's actually, you are very similar in that way, in that your opinion is highly colored by recent conversations you've had with people and recent things you've watched.
00:31:25.060 So it's something that both of you are subject to.
00:31:27.240 I would say that maybe, you may be even more subject to it than he is, because you interact with people less than he does on a regular basis.
00:31:34.440 True, he's much more social than us.
00:31:36.320 He's much more social than you, but you are extremely colored by what you're exposed to.
00:31:41.520 So you're not exempt from this, but it's okay.
00:31:43.580 That is true.
00:31:44.220 Yeah, actually, I would definitely admit that.
00:31:46.000 Like a lot of my talk around trans stuff recently is just because I've been watching lots and lots of content in that area, which has caused YouTube to recommend more of it to me, which has caused sort of a loop.
00:31:57.440 It's a topic that, historically, I wouldn't have cared about that much.
00:31:59.540 One thing I'll just end with, though, and I'm still not even finished reading this, but Leopold Aschenbrenner, I don't know actually how his last name is pronounced, but he is, like, in the EA X-risk world.
00:32:15.220 I think he's even pronatalist.
00:32:17.000 No, he is.
00:32:17.480 He's famously one of the first people to talk about pronatalism.
00:32:20.200 He just never put any money into it, even though he was on the board of FTX.
00:32:23.060 He published a really great piece on AI that I now am using as my mooring point for helping me think through the implications of where we're going with AI.
00:32:34.440 Seeing how steeped he is in that world and how well he knows many of the people who are working on the inside of it, getting us closer to AGI, I think he's a really good person to turn to in terms of his takes.
00:32:47.460 I think that they're better moored in reality, and they're also more practically oriented.
00:32:53.220 He wrote this thing called Situational Awareness, The Decade Ahead.
00:32:56.640 You can find it at situational-awareness.ai.
00:33:01.880 And if you look at his Twitter, if you just search Leopold Aschenbrenner on Twitter, it's, like, his Twitter URL link.
00:33:09.420 He's definitely promoting it.
00:33:10.760 I recommend reading that.
00:33:11.700 In terms of the conversation that I wish we were having with AI, he sets the tone of what I wish we were talking about, like how we should be building on energy infrastructure, the immense security loopholes and concerns that we should be having about, for example, foreign actors getting access to our algorithms and weights and the AI that we're developing right now because there's very little security around it.
00:33:37.060 So, yeah, I think that people should turn to his write-up.
00:33:41.960 That's a great call to action.
00:33:43.380 And I was just thinking I had another idea as to why maybe I recognize this when he didn't.
00:33:49.540 Because this is very much like me asking, why did somebody smarter than me, or who I consider smarter than me, not see something that I saw as really obvious, and not include and, like, discount it in his piece?
00:34:01.740 Of course, you would cross-correlate the instances of success with the instances of failure in these predictions.
00:34:06.280 I suspect it could also be that my entire worldview and philosophy, and many people know this from our videos, comes from a memetic cloud-first perspective.
00:34:18.800 I am always looking at the types of ideas that are good at replicating themselves and the types of ideas that aren't good at replicating themselves when I am trying to discern why large groups act in specific ways or end up believing things that I find off or weird.
00:34:35.400 Like, how could they believe that?
00:34:37.260 And that led me to, in my earliest days, become, as I've mentioned, like really interested with cults.
00:34:42.260 Like, how do cults work?
00:34:43.600 Why do religions work?
00:34:44.960 Like, how do people get convinced of stuff that to an outsider seems absurd?
00:34:50.320 And so when I am looking at any idea, I am always seeing it through this memetic lens first.
00:34:56.060 And I think when he looks at ideas, he doesn't first filter them through a "memetically, why would this idea exist" lens before he is looking at the merits of the idea.
00:35:07.380 Whereas I often consider those two things as of equal standing to help me understand how an idea came to exist and why it's in front of me.
00:35:15.360 But I don't think that he has this second obsession here.
00:35:18.780 And I think that's probably why.
00:35:21.440 Maybe.
00:35:22.120 Yeah.
00:35:23.440 Yeah.
00:35:23.700 Yeah.
00:35:23.840 Yeah.
00:35:23.920 But I like it when people come to different conclusions because it's always something in between there that I find the value.
00:35:33.540 I don't know if that's helpful.
00:35:34.600 I actually think that's an unhelpful way to look at things.
00:35:36.520 I think you shouldn't look for averages.
00:35:38.460 But you can be.
00:35:39.200 I don't look for averages.
00:35:40.600 I find stuff.
00:35:41.980 I think when you look at what is different, you find interesting insights.
00:35:47.120 It's not an average of the two.
00:35:48.520 It's not a mean, a median, or a mode.
00:35:50.160 It is unique new insights.
00:35:52.400 It's more about emergent properties of the elements of disagreement that yield entirely new and often unexpected insights.
00:36:01.800 Not something in between.
00:36:04.480 Not compromise.
00:36:06.100 You are a genius, Simone.
00:36:08.160 I am so glad.
00:36:09.160 As the comments have said, you're the smarter of the two of us.
00:36:12.400 And I could not agree more.
00:36:14.180 And I will hit you with that every time now.
00:36:16.720 Because I know.
00:36:17.060 This drives me nuts.
00:36:18.020 You know that you're the smarter one.
00:36:19.940 That even our polygenic scores for intelligence show that you're the smarter one.
00:36:25.240 Yeah, we went through our polygenic scores recently.
00:36:28.180 And one of the things I mentioned in a few other episodes is that I have the face of somebody who, you know, when they were biologically developing, was in a high testosterone environment.
00:36:37.760 When contrasted with Andrew Tate, like that's where I often talk about it, is he has the face of somebody who grew up in a very low testosterone environment.
00:36:42.940 Believe it or not, when I was going through the polygenic markers, I came up 99% on testosterone production.
00:36:48.320 In terms of the top 1% of the population in terms of just endogenous testosterone production.
00:36:54.620 So, yeah, of course, when I was developmental, I was just flooded in the stuff.
00:36:58.040 That's why I look like this.
00:36:59.100 And then you were what?
00:36:59.760 In, like, the 1% of pain tolerance, while I was in the 99% of pain tolerance.
00:37:03.680 Yeah, you were, like, in the 99% for pain tolerance, while I would be in the 1% for pain tolerance.
00:37:07.940 It explains so much.
00:37:08.940 No, I like it.
00:37:09.680 Being high testosterone, but actually feeling pain and just being like, nah, not going to engage in those scenarios.
00:37:18.320 Yeah, it's probably a good mix of noping out of there.
00:37:22.540 It's a good mix of being tough, but noping out the moment it becomes dangerous.
00:37:26.860 Yes.
00:37:29.280 High risk, but good survival instinct.
00:37:32.060 Very good.
00:37:32.840 Yeah, especially because you also have fast twitch muscle, which I don't.
00:37:36.340 When you nope out of a place, you nope out real fast.
00:37:38.760 Simone has this joke about me being able to, like, bamf out of a situation whenever, like Nightcrawler.
00:37:44.340 Whenever something dangerous happens, you know, he's like, I'm 20 feet away somewhere else.
00:37:49.160 Yeah, like I turn and he's just gone and like a car is hurtling toward me.
00:37:56.620 You are so slow.
00:37:58.520 You actually remind me of like a sloth.
00:38:00.600 I need to get better at just yanking you out of the way.
00:38:03.120 You literally have to pull me because I'm slow.
00:38:05.280 Like when cars are coming at us because like we started crossing the road and she like didn't expect.
00:38:10.180 She cannot like speed up above a fast walk.
00:38:13.740 Well, I hate moving so quickly. I'm also, like, contemplating:
00:38:16.600 Do I want to die or should I try to move?
00:38:21.040 You really come off that way.
00:38:22.980 Yeah, I do.
00:38:24.200 You got to move.
00:38:25.100 Nightcrawler's got to bamf over, got to bamf back to grab you.
00:38:30.960 God.
00:38:32.400 I'm going to die.
00:38:33.760 Yeah.
00:38:34.340 I love you.
00:38:35.400 I love you so much, Simone.
00:38:36.920 You're amazing.
00:38:37.400 You're amazing.
00:38:40.180 Hey, I would love to get the slow cooker started on the tomatoes and meat that I got.
00:38:45.620 But you still have about two days worth of the other stuff.
00:38:49.340 Yeah, but it's easier to just freeze this stuff if I do it all at once now.
00:38:53.940 And then I can also.
00:38:55.000 Want to do it overnight?
00:38:56.040 I can also leave it cooking for a few days.
00:38:58.720 All right.
00:38:59.920 I can do that.
00:39:00.800 Do I have time to make biscuits or muffins, cornmeal muffins?
00:39:03.840 Yeah.
00:39:04.900 If I go down right now, I can make cornmeal muffins.
00:39:08.620 Would you like cornmeal muffins?
00:39:09.880 I'm okay with that.
00:39:10.860 Yeah.
00:39:11.700 Okay.
00:39:12.140 You're so nice.
00:39:13.520 Cornmeal goes great with slow cooked beef.
00:39:16.100 And you're still going to have the slow cooked beef that you made earlier this week, right?
00:39:19.700 I'm heading down.
00:39:20.420 She's asleep on my lap.
00:39:21.680 I don't want to.
00:39:22.360 Look.
00:39:22.580 She's like, bleh.
00:39:24.360 But I'll get up.
00:39:25.200 She loves mommy.
00:39:26.720 She loves sleeping.
00:39:28.640 I love you so much, Simone.
00:39:29.860 You're a perfect mom.
00:39:31.660 I love you too, Malcolm.
00:39:32.720 And you got to get that pouch so you can get that pocket on.
00:39:35.500 Okay.
00:39:35.860 Do you want me to order it right now?
00:39:36.960 Oh, no.
00:39:39.040 I need to contemplate whether or not we should spend money on that or new carbon monoxide detectors.
00:39:44.980 No, you're getting the new carbon monoxide detectors.
00:39:47.420 Just let me get this for you as a gift.
00:39:48.780 Okay.
00:39:50.680 Here.
00:39:51.120 I'm getting it right now.
00:39:51.920 It's $19.
00:39:52.520 I'll get it with my money.
00:39:56.120 Okay.
00:39:56.620 I just got it.
00:39:57.640 No, it was my money.
00:39:58.800 I'm the one who's demanding that you get a pocket because I'm so freaking annoyed that
00:40:02.760 you're walking around without a pocket.
00:40:05.260 All right, Malcolm.
00:40:06.400 It is annoying, Simone.
00:40:08.280 It causes me dissatisfaction.
00:40:11.240 Okay.
00:40:11.480 I will see you downstairs with my corn muffin hands ready to go.
00:40:19.000 Okay.
00:40:19.840 Love you.
00:40:20.320 Bye.
00:40:20.920 I guess you could call it the Dunning-Kruger trap where, you know, the Dunning-Kruger effect
00:40:25.980 is where people who know less about something feel more confident about it, right?
00:40:31.200 What happened?
00:40:33.180 You just noped right out of there.
00:40:36.140 Is it a bug?
00:40:46.520 Was it a mouse?
00:40:47.520 No, no, no, no.
00:40:49.800 It was a beer.
00:40:51.000 It was the beer that you knocked over yesterday.
00:40:54.000 Oh, the one that, no, Titan knocked it off the table.
00:40:57.260 Oh, and you're like, you better not open that one.
00:40:59.880 That's what just happened.
00:41:01.060 Oh, God.
00:41:01.860 Okay.
00:41:03.000 Whoops.
00:41:03.300 So the Dunning-Kruger effect, whereby people who know less about something feel more confident
00:41:12.260 about it.
00:41:12.980 By the way, Dunning-Kruger effect does not replicate.
00:41:15.320 And then, anyway, still people are familiar with it.
00:41:17.740 And then people who know more about something often say that they know less.
00:41:21.640 And I think that there gets to be a certain point where when you know a ton about something,
00:41:25.960 you just start to become very uncertain about it.
00:41:28.020 And you're not really willing to take any stance, which is something I saw a lot in academia.
00:41:31.980 Where the higher up in academia I got, the more the answer was always, it depends, instead
00:41:37.660 of...
00:41:38.040 That is, whatever you're talking about has nothing to do with any of the points I'm going
00:41:42.360 to make.
00:41:54.540 See if you smile when daddy appears on the screen.
00:41:57.340 Daddy?
00:41:58.620 Look at that!
00:42:00.560 It's daddy!
00:42:01.280 She doesn't see.
00:42:06.500 She doesn't see.
00:42:08.240 I haven't gotten her eyes on the screen.
00:42:10.760 She's got a...
00:42:12.500 Look at the screen.
00:42:15.360 Look at the screen.
00:42:15.920 Do you recognize me at all?
00:42:17.520 I don't know if they can recognize things on screens in the same way that adults can.
00:42:21.800 I don't know either.
00:42:23.060 Yeah, she doesn't seem to be focusing on it.
00:42:25.220 Yeah, she doesn't.
00:42:26.660 She can't see me.
00:42:27.420 Oh, well, it's okay.
00:42:30.140 We love you anyway.
00:42:31.460 I will get us started here.
00:42:33.820 Oh, I will pull this aside.
00:42:35.640 How could you tell that it was bad at creating websites, by the way?
00:42:39.760 Because it, after you buy a domain, will, like, literally take the names of your domain,
00:42:46.860 like, the words within it, and then assume that based on, okay, for example, because I
00:42:53.700 got pragmatistfoundation.org, they're like, oh, you're a pragmatic foundation, and you're
00:42:59.160 .org, so you're a non-profit, and so here's a non-profit website for a foundation that likes
00:43:06.960 pragmatism, and then it made up copy based on that and had a picture of kids sitting at
00:43:14.460 desks and with something like creating solutions that are pragmatic, which is not terribly
00:43:20.940 far off, but...
00:43:22.940 It doesn't sound so bad.
00:43:24.400 No, it sounds so bad.
00:43:25.520 It's just...
00:43:26.100 In case people are wondering, the reason why we're looking at buying websites right now
00:43:29.920 is when we needed to get the .org for the pragmatist foundation, because people were emailing
00:43:33.720 the wrong address, because we have .com for that.
00:43:36.480 But also, I've been thinking about building a website for the techno-puritan religion and
00:43:41.980 seeing if I can get it registered as a real religion, which would be pretty fun, especially
00:43:46.780 if I'm able to put religious wear in there, like you always have to be armed.
00:43:51.640 It was within specific constraints to see if you can get religious exceptions for...
00:43:56.040 Which I do believe there is a religious mandate for concealed carry and stuff.
00:44:00.640 That would be interesting from a legal perspective.
00:44:03.840 It'd be funny if we had like a religious mandate for always having to carry ceremonial sloths with
00:44:08.880 us.
00:44:10.340 But it's my religious sloth.
00:44:12.340 You can't not let me go to your restaurant wearing it.
00:44:14.900 You want to enshrine specific rights that people would want.
00:44:20.140 I think you can do stuff around data privacy and things like that, which makes sense
00:44:24.280 in a religious context to us, but also provides a legal tool to people who want
00:44:29.580 access to this stuff.
00:44:31.300 That could be interesting.
00:44:33.320 Which also helps the religion spread, so that'd be fun.
00:44:36.520 Yeah.
00:44:36.860 All right.
00:44:39.220 So I am opening this up here.
00:44:45.160 All right.
00:44:47.440 What are you doing, Wiggles?
00:44:51.840 Okay.
00:44:52.760 You better not let her wiggle.
00:44:54.680 I better not.
00:44:55.580 She's full of all the Wiggles.